Updates from: 02/22/2022 02:07:13
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Replaces values within a string in a case-sensitive manner. The function behaves
#### Replace characters using a regular expression

**Example 1:** Using **oldValue** and **replacementValue** to replace the entire source string with another string.
-Let’s say your HR system has an attribute `BusinessTitle`. As part of recent job title changes, your company wants to update anyone with the business title “Product Developer” to “Software Engineer”.
+Let's say your HR system has an attribute `BusinessTitle`. As part of recent job title changes, your company wants to update anyone with the business title "Product Developer" to "Software Engineer".
Then in this case, you can use the following expression in your attribute mapping.

`Replace([BusinessTitle],"Product Developer", , , "Software Engineer", , )`

* **source**: `[BusinessTitle]`
-* **oldValue**: “Product Developer”
-* **replacementValue**: “Software Engineer”
+* **oldValue**: "Product Developer"
+* **replacementValue**: "Software Engineer"
* **Expression output**: Software Engineer

**Example 2:** Using **oldValue** and **template** to insert the source string into another *templatized* string. The parameter **oldValue** is a misnomer in this scenario. It is actually the value that will get replaced.
-Let’s say you want to always generate login id in the format `<username>@contoso.com`. There is a source attribute called **UserID** and you want that value to be used for the `<username>` portion of the login id.
+Let's say you want to always generate login id in the format `<username>@contoso.com`. There is a source attribute called **UserID** and you want that value to be used for the `<username>` portion of the login id.
Then in this case, you can use the following expression in your attribute mapping.

`Replace([UserID],"<username>", , , , , "<username>@contoso.com")`
-* **source:** `[UserID]` = “jsmith”
-* **oldValue:** “`<username>`”
-* **template:** “`<username>@contoso.com`”
-* **Expression output:** “jsmith@contoso.com”
+* **source:** `[UserID]` = "jsmith"
+* **oldValue:** "`<username>`"
+* **template:** "`<username>@contoso.com`"
+* **Expression output:** "jsmith@contoso.com"
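The template substitution above can be sketched in Python. This is a rough approximation for illustration only; the helper name and shape are invented here, not part of the Azure AD expression language.

```python
# Illustrative sketch of Example 2's template-style Replace.
def replace_with_template(source: str, old_value: str, template: str) -> str:
    # oldValue is the placeholder inside the template that the source
    # attribute's value is substituted into.
    return template.replace(old_value, source)

print(replace_with_template("jsmith", "<username>", "<username>@contoso.com"))
# → jsmith@contoso.com
```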
**Example 3:** Using **regexPattern** and **replacementValue** to extract a portion of the source string and replace it with an empty string or a custom value built using regex patterns or regex group names.
-Let’s say you have a source attribute `telephoneNumber` that has components `country code` and `phone number` separated by a space character. E.g. `+91 9998887777`
+Let's say you have a source attribute `telephoneNumber` that has components `country code` and `phone number` separated by a space character. E.g. `+91 9998887777`
Then in this case, you can use the following expression in your attribute mapping to extract the 10-digit phone number.

`Replace([telephoneNumber], , "\\+(?<isdCode>\\d* )(?<phoneNumber>\\d{10})", , "${phoneNumber}", , )`
-* **source:** `[telephoneNumber]` = “+91 9998887777”
-* **regexPattern:** “`\\+(?<isdCode>\\d* )(?<phoneNumber>\\d{10})`”
-* **replacementValue:** “`${phoneNumber}`”
+* **source:** `[telephoneNumber]` = "+91 9998887777"
+* **regexPattern:** "`\\+(?<isdCode>\\d* )(?<phoneNumber>\\d{10})`"
+* **replacementValue:** "`${phoneNumber}`"
* **Expression output:** 9998887777

You can also use this pattern to remove characters and collapse a string.
For example, the expression below removes parentheses, dashes, and space characters.
`Replace([mobile], , "[()\\s-]+", , "", , )`
-* **source:** `[mobile] = “+1 (999) 888-7777”`
-* **regexPattern:** “`[()\\s-]+`”
-* **replacementValue:** “” (empty string)
+* **source:** `[mobile] = "+1 (999) 888-7777"`
+* **regexPattern:** "`[()\\s-]+`"
+* **replacementValue:** "" (empty string)
* **Expression output:** 19998887777

**Example 4:** Using **regexPattern**, **regexGroupName** and **replacementValue** to extract a portion of the source string and replace it with another literal value or empty string.
-Let’s say your source system has an attribute AddressLineData with two components street number and street name. As part of a recent move, let’s say the street number of the address changed and you want to update only the street number portion of the address line.
-Then in this case, you can use the following expression in your attribute mapping to extract the 10 digit phone number.
+Let's say your source system has an attribute AddressLineData with two components street number and street name. As part of a recent move, let's say the street number of the address changed and you want to update only the street number portion of the address line.
+Then in this case, you can use the following expression in your attribute mapping to extract the street number.
`Replace([AddressLineData], ,"(?<streetNumber>^\\d*)","streetNumber", "888", , )`
-* **source:** `[AddressLineData]` = “545 Tremont Street”
-* **regexPattern:** “`(?<streetNumber>^\\d*)`”
-* **regexGroupName:** “streetNumber”
-* **replacementValue:** “888”
+* **source:** `[AddressLineData]` = "545 Tremont Street"
+* **regexPattern:** "`(?<streetNumber>^\\d*)`"
+* **regexGroupName:** "streetNumber"
+* **replacementValue:** "888"
* **Expression output:** 888 Tremont Street

Here is another example where the domain suffix from a UPN is replaced with an empty string to generate login id without domain suffix.

`Replace([userPrincipalName], , "(?<Suffix>@(.)*)", "Suffix", "", , )`
-* **source:** `[userPrincipalName]` = “jsmith@contoso.com”
-* **regexPattern:** “`(?<Suffix>@(.)*)`”
-* **regexGroupName:** “Suffix”
-* **replacementValue:** “” (empty string)
+* **source:** `[userPrincipalName]` = "jsmith@contoso.com"
+* **regexPattern:** "`(?<Suffix>@(.)*)`"
+* **regexGroupName:** "Suffix"
+* **replacementValue:** "" (empty string)
* **Expression output:** jsmith
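Both named-group replacements in Example 4 can be approximated with Python's `re` module. This sketch is illustrative only: the provisioning engine uses .NET-style regex, where named groups are written `(?<name>...)` rather than Python's `(?P<name>...)`, and the expression syntax doubles backslashes while a Python raw string needs only one.

```python
import re

# Street-number replacement: ^ anchors the match, so only the leading
# digits are replaced.
print(re.sub(r"(?P<streetNumber>^\d*)", "888", "545 Tremont Street"))
# → 888 Tremont Street

# Domain-suffix removal: replacing the matched group with an empty string
# leaves only the local part of the UPN.
print(re.sub(r"(?P<Suffix>@(.)*)", "", "jsmith@contoso.com"))
# → jsmith
```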
-**Example 5:** Using **regexPattern**, **regexGroupName** and **replacementAttributeName** to handle scenarios when the source attribute is empty or doesn’t have a value.
+**Example 5:** Using **regexPattern**, **regexGroupName** and **replacementAttributeName** to handle scenarios when the source attribute is empty or doesn't have a value.
-Let’s say your source system has an attribute telephoneNumber. If telephoneNumber is empty, you want to extract the 10 digits of the mobile number attribute.
+Let's say your source system has an attribute telephoneNumber. If telephoneNumber is empty, you want to extract the 10 digits of the mobile number attribute.
Then in this case, you can use the following expression in your attribute mapping.

`Replace([telephoneNumber], , "\\+(?<isdCode>\\d* )(?<phoneNumber>\\d{10})", "phoneNumber" , , [mobile], )`
-* **source:** `[telephoneNumber]` = “” (empty string)
-* **regexPattern:** “`\\+(?<isdCode>\\d* )(?<phoneNumber>\\d{10})`”
-* **regexGroupName:** “phoneNumber”
-* **replacementAttributeName:** `[mobile]` = “+91 8887779999”
+* **source:** `[telephoneNumber]` = "" (empty string)
+* **regexPattern:** "`\\+(?<isdCode>\\d* )(?<phoneNumber>\\d{10})`"
+* **regexGroupName:** "phoneNumber"
+* **replacementAttributeName:** `[mobile]` = "+91 8887779999"
* **Expression output:** 8887779999

**Example 6:** You need to find characters that match a regular expression value and remove them.
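The regex behavior in Examples 3, 5, and 6 can be sketched in Python as a rough approximation. Note the translation assumptions: `(?P<name>...)` is Python's spelling of the `(?<name>...)` named group, and the doubled backslashes of the expression syntax become single backslashes in a raw string.

```python
import re

# Named-group extraction from Examples 3 and 5.
pattern = r"\+(?P<isdCode>\d* )(?P<phoneNumber>\d{10})"
m = re.match(pattern, "+91 9998887777")
print(m.group("phoneNumber"))                        # → 9998887777

# Removing characters that match a character class, as in the collapse
# pattern (the leading + is not in the class, so it survives here).
print(re.sub(r"[()\s-]+", "", "+1 (999) 888-7777"))  # → +19998887777
```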
`Switch(source, defaultValue, key1, value1, key2, value2, …)`
**Description:** When **source** value matches a **key**, returns **value** for that **key**. If **source** value doesn't match any keys, returns **defaultValue**. **Key** and **value** parameters must always come in pairs. The function always expects an even number of parameters. The function should not be used for referential attributes such as manager.
+> [!NOTE]
+> The Switch function performs a case-sensitive string comparison of the **source** and **key** values. If you'd like to perform a case-insensitive comparison, normalize the **source** string before comparison using a nested ToLower function and ensure that all **key** strings use lowercase.
+> Example: `Switch(ToLower([statusFlag]), "0", "true", "1", "false", "0")`. In this example, the **source** attribute `statusFlag` may have values ("True" / "true" / "TRUE"). However, the Switch function will always convert it to the lowercase string "true" before comparing it with the **key** parameters.
+ **Parameters:**
+
+ | Name | Required/ Repeating | Type | Notes |
+ |------|---------------------|------|-------|
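The Switch semantics described above can be modeled with a small Python helper. The helper is hypothetical and only illustrates the paired key/value lookup with a default; it is not an Azure AD API.

```python
# Illustrative model of Switch(source, defaultValue, key1, value1, ...).
def switch(source, default, *pairs):
    if len(pairs) % 2 != 0:
        raise ValueError("key and value parameters must come in pairs")
    # Build a case-sensitive key -> value mapping from the pairs.
    mapping = dict(zip(pairs[::2], pairs[1::2]))
    return mapping.get(source, default)

# Mirrors Switch(ToLower([statusFlag]), "0", "true", "1", "false", "0"):
# the source is lowercased before the case-sensitive comparison.
print(switch("TRUE".lower(), "0", "true", "1", "false", "0"))  # → 1
```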
active-directory Access Reviews Downloadable Review History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-downloadable-review-history.md
Title: Create and manage downloadable access review history report (Preview) - Azure Active Directory
+ Title: Create and manage downloadable access review history report - Azure Active Directory
description: Using Azure Active Directory access reviews, you can download a review history for access reviews in your organization. documentationcenter: ''
na Previously updated : 03/25/2021 Last updated : 02/18/2022
-# Create and manage downloadable access review history report (Preview) in Azure AD access reviews
+# Create and manage downloadable access review history report in Azure AD access reviews
With Azure Active Directory (Azure AD) Access Reviews, you can create a downloadable review history to help your organization gain more insight. The report pulls the decisions that were taken by reviewers when a report is created. These reports can be constructed to include specific access reviews, for a specific time frame, and can be filtered to include different review types and review results.
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD i
This scenario looks at the classic legacy application using HTTP authorization headers to control access to protected content.
-Being legacy, the application lacks any form of modern protocols to support a direct integration with Azure AD. Modernizing the app is also costly, requires careful planning, and introduces risk of potential downtime.
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
-One option would be to consider [Azure AD Application Proxy](../app-proxy/application-proxy.md), to gate remote access to the application.
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and headers-based SSO, significantly improving the overall security posture of the application.
-Another approach is to use an F5 BIG-IP Application Delivery Controller (ADC), as it too provides the protocol transitioning required to bridge legacy applications to the modern ID control plane.
-
-Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application for both remote and local access.
+> [!NOTE]
+> Organizations can also gain remote access to this type of application with [Azure AD Application Proxy](../app-proxy/application-proxy.md).
## Scenario architecture
You can now access the Easy Button functionality that provides quick configurati
![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
-5. Review the list of configuration steps and select Next
+5. Review the list of configuration steps and select **Next**
![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
Consider the **Azure Service Account Details** be the BIG-IP client application
2. Enable **Single Sign-On (SSO) & HTTP Headers**
-3. Enter the **Tenant Id**, **Client ID**, and **Client Secret** you noted down during tenant registration
+3. Enter the **Tenant Id**, **Client ID**, and **Client Secret** you noted when registering the Easy Button client in your tenant.
4. Confirm the BIG-IP can successfully connect to your tenant, and then select **Next**
If you don’t see a BIG-IP error page, then the issue is probably more related
2. The **View Variables** link in this location may also help root cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes
-For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There’s also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
+For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There’s also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD i
For this scenario, we have an application using **Kerberos authentication**, also known as **Integrated Windows Authentication (IWA)**, to gate access to protected content.
-Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. Modernizing the app would be ideal, but is costly, requires careful planning, and introduces risk of potential downtime.
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
-One option would be to consider using [Azure AD Application Proxy](../app-proxy/application-proxy.md), as it provides the protocol transitioning required to bridge the legacy application to the modern identity control plane. Or for our scenario, we'll achieve this using F5's BIG-IP Application Delivery Controller (ADC).
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and headers-based SSO, significantly improving the overall security posture of the application.
-Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application for remote and local access.
+> [!NOTE]
+> Organizations can also gain remote access to this type of application with [Azure AD Application Proxy](../app-proxy/application-proxy.md).
## Scenario architecture
Consider the **Azure Service Account Details** be the BIG-IP client application
2. Enable **Single Sign-On (SSO) & HTTP Headers**
-3. Enter the **Tenant Id, Client ID,** and **Client Secret** you noted down during tenant registration
+3. Enter the **Tenant Id, Client ID,** and **Client Secret** you noted when registering the Easy Button client in your tenant.
![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-kerberos-easy-button/azure-configuration-properties.png)
-Before you select **Next**, confirm that BIG-IP can successfully connect to your tenant.
+Before you select **Next**, confirm the BIG-IP can successfully connect to your tenant.
### Service Provider
If you don’t see a BIG-IP error page, then the issue is probably more related
2. Select the link for your active session. The **View Variables** link in this location may also help determine root cause KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers.
-See [BIG-IP APM variable assign examples]( https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference]( https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+See [BIG-IP APM variable assign examples]( https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference]( https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
Consider the **Azure Service Account Details** be the BIG-IP client application
2. Enable **Single Sign-On (SSO) & HTTP Headers**
-3. Enter the **Tenant Id**, **Client ID**, and **Client Secret** you noted down during tenant registration
+3. Enter the **Tenant Id**, **Client ID**, and **Client Secret** you noted when registering the Easy Button client in your tenant.
5. Confirm the BIG-IP can successfully connect to your tenant, and then select **Next**
If you don’t see a BIG-IP error page, then the issue is probably more related
```
ldapsearch -xLLL -H 'ldap://192.168.0.58' -b "CN=partners,dc=contoso,dc=lds" -s sub -D "CN=f5-apm,CN=partners,DC=contoso,DC=lds" -w 'P@55w0rd!' "(cn=testuser)"
```
-For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There’s also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
+For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There’s also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
Some of these are global settings so can be re-used for publishing more applicat
2. Enable **Single Sign-On (SSO) & HTTP Headers**
-3. Enter the **Tenant Id, Client ID**, and **Client Secret** you noted down from your registered application
+3. Enter the **Tenant Id, Client ID**, and **Client Secret** you noted when registering the Easy Button client in your tenant.
-4. Before you select **Next**, confirm that BIG-IP can successfully connect to your tenant.
+4. Before you select **Next**, confirm the BIG-IP can successfully connect to your tenant.
![ Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-oracle/configuration-general-and-service-account-properties.png)
active-directory F5 Big Ip Oracle Jde Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-oracle-jde-easy-button.md
Some of these are global settings so can be re-used for publishing more applicat
3. Enter the **Tenant Id, Client ID**, and **Client Secret** you noted down from your registered application
-4. Before you select **Next**, confirm that BIG-IP can successfully connect to your tenant.
+4. Before you select **Next**, confirm the BIG-IP can successfully connect to your tenant.
![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-oracle-jde/configuration-general-and-service-account-properties.png)
active-directory Cappm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cappm-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Clarity | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Clarity'
description: Learn how to configure single sign-on between Azure Active Directory and Clarity.
Previously updated : 05/25/2021 Last updated : 02/11/2022
-# Tutorial: Azure Active Directory integration with Clarity
+# Tutorial: Azure AD SSO integration with Clarity
In this tutorial, you'll learn how to integrate Clarity with Azure Active Directory (Azure AD). When you integrate Clarity with Azure AD, you can:
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
+4. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://ca.ondemand.saml.20.post.<COMPANY_NAME>`
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://fedsso.ondemand.ca.com/affwebservices/public/saml2assertionconsumer`

> [!NOTE]
- > This value is not real. Update this value with the actual Identifier. Contact [Clarity Client support team](mailto:catechnicalsupport@ca.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > This value is not real. Update this value with the actual Identifier. Contact [Clarity Client support team](mailto:technical.support@broadcom.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Clarity SSO
-To configure single sign-on on **Clarity** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Clarity support team](mailto:catechnicalsupport@ca.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Clarity** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Clarity support team](mailto:technical.support@broadcom.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Clarity test user
-In this section, you create a user called B.Simon in Clarity. Work with [Clarity support team](mailto:catechnicalsupport@ca.com) to add the users in the Clarity platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in Clarity. Work with [Clarity support team](mailto:technical.support@broadcom.com) to add the users in the Clarity platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Igrafx Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/igrafx-platform-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with iGrafx Platform'
+description: Learn how to configure single sign-on between Azure Active Directory and iGrafx Platform.
+ Last updated : 02/18/2022
+# Tutorial: Azure AD SSO integration with iGrafx Platform
+
+In this tutorial, you'll learn how to integrate iGrafx Platform with Azure Active Directory (Azure AD). When you integrate iGrafx Platform with Azure AD, you can:
+
+* Control in Azure AD who has access to iGrafx Platform.
+* Enable your users to be automatically signed-in to iGrafx Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* iGrafx Platform single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* iGrafx Platform supports **SP** initiated SSO.
+
+* iGrafx Platform supports **Just In Time** user provisioning.
+
+## Add iGrafx Platform from the gallery
+
+To configure the integration of iGrafx Platform into Azure AD, you need to add iGrafx Platform from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **iGrafx Platform** in the search box.
+1. Select **iGrafx Platform** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for iGrafx Platform
+
+Configure and test Azure AD SSO with iGrafx Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in iGrafx Platform.
+
+To configure and test Azure AD SSO with iGrafx Platform, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure iGrafx Platform SSO](#configure-igrafx-platform-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create iGrafx Platform test user](#create-igrafx-platform-test-user)** - to have a counterpart of B.Simon in iGrafx Platform that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **iGrafx Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ |--|
+ | `https://<CustomerName>.igrafxcloud.com/saml/metadata` |
+ | `https://<CustomerName>.igrafxdemo.com/saml/metadata` |
+ | `https://<CustomerName>.igrafxtraining.com/saml/metadata` |
+ | `https://<CustomerName>.igrafx.com/saml/metadata` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://<CustomerName>.igrafxcloud.com/` |
+ | `https://<CustomerName>.igrafxdemo.com/` |
+ | `https://<CustomerName>.igrafxtraining.com/` |
+ | `https://<CustomerName>.igrafx.com/` |
+
+ c. In the **Sign on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign on URL** |
+ |-|
+ | `https://<CustomerName>.igrafxcloud.com/` |
+ | `https://<CustomerName>.igrafxdemo.com/` |
+ | `https://<CustomerName>.igrafxtraining.com/` |
+ | `https://<CustomerName>.igrafx.com/` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [iGrafx Platform Client support team](mailto:support@igrafx.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to iGrafx Platform.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **iGrafx Platform**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure iGrafx Platform SSO
+
+To configure single sign-on on **iGrafx Platform** side, you need to send the **App Federation Metadata Url** to [iGrafx Platform support team](mailto:support@igrafx.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create iGrafx Platform test user
+
+In this section, a user called B.Simon is created in iGrafx Platform. iGrafx Platform supports just-in-time provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in iGrafx Platform, a new one is created when you attempt to access iGrafx Platform.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to iGrafx Platform Sign-on URL where you can initiate the login flow.
+
+* Go to iGrafx Platform Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the iGrafx Platform tile in My Apps, this will redirect to iGrafx Platform Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure iGrafx Platform you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Peakon Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/peakon-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Peakon | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Peakon'
description: Learn how to configure single sign-on between Azure Active Directory and Peakon.
Previously updated : 03/19/2019 Last updated : 02/15/2022
-# Tutorial: Azure Active Directory integration with Peakon
+# Tutorial: Azure AD SSO integration with Peakon
-In this tutorial, you learn how to integrate Peakon with Azure Active Directory (Azure AD).
-Integrating Peakon with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Peakon with Azure Active Directory (Azure AD). When you integrate Peakon with Azure AD, you can:
-* You can control in Azure AD who has access to Peakon.
-* You can enable your users to be automatically signed-in to Peakon (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Peakon.
+* Enable your users to be automatically signed-in to Peakon with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Peakon, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Peakon single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Peakon single sign-on (SSO) enabled subscription.
> [!NOTE] > This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
To configure Azure AD integration with Peakon, you need the following items:
In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Peakon supports **SP** and **IDP** initiated SSO
+* Peakon supports **SP** and **IDP** initiated SSO.
* Peakon supports [**automated** user provisioning and deprovisioning](peakon-provisioning-tutorial.md) (recommended).
-## Adding Peakon from the gallery
+## Add Peakon from the gallery
To configure the integration of Peakon into Azure AD, you need to add Peakon from the gallery to your list of managed SaaS apps.
-**To add Peakon from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Peakon**, select **Peakon** from result panel then click **Add** button to add the application.
-
- ![Peakon in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Peakon based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Peakon needs to be established.
-
-To configure and test Azure AD single sign-on with Peakon, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Peakon Single Sign-On](#configure-peakon-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Peakon test user](#create-peakon-test-user)** - to have a counterpart of Britta Simon in Peakon that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Peakon** in the search box.
+1. Select **Peakon** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-### Configure Azure AD single sign-on
+## Configure and test Azure AD SSO for Peakon
-In this section, you enable Azure AD single sign-on in the Azure portal.
+Configure and test Azure AD SSO with Peakon using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Peakon.
-To configure Azure AD single sign-on with Peakon, perform the following steps:
+To configure and test Azure AD SSO with Peakon, perform the following steps:
-1. In the [Azure portal](https://portal.azure.com/), on the **Peakon** application integration page, select **Single sign-on**.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Peakon SSO](#configure-peakon-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Peakon test user](#create-peakon-test-user)** - to have a counterpart of B.Simon in Peakon that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Configure single sign-on link](common/select-sso.png)
+## Configure Azure AD SSO
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Single sign-on select mode](common/select-saml-option.png)
+1. In the Azure portal, on the **Peakon** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Peakon Domain and URLs single sign-on information](common/idp-intiated.png)
- a. In the **Identifier** text box, type a URL using the following pattern: `https://app.peakon.com/saml/<companyid>/metadata`
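The Identifier pattern above is a simple template over your Peakon company ID. As a minimal sketch — the company ID `12345` is a placeholder, not a real account — it can be assembled like this:

```python
# Build the Peakon SAML metadata URL (the Identifier value) from a company ID.
# "12345" is a hypothetical placeholder; substitute your own Peakon company ID.
def peakon_metadata_url(company_id: str) -> str:
    return f"https://app.peakon.com/saml/{company_id}/metadata"

print(peakon_metadata_url("12345"))
# https://app.peakon.com/saml/12345/metadata
```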
To configure Azure AD single sign-on with Peakon, perform the following steps:
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot that shows the "Set additional U R Ls" selected with the "Sing on U R L" text box highlighted.](common/metadata-upload-additional-signon.png)
-
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://app.peakon.com/login` > [!NOTE]
To configure Azure AD single sign-on with Peakon, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- b. Azure AD Identifier
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Peakon.
- c. Logout URL
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Peakon**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure Peakon Single Sign-On
+## Configure Peakon SSO
1. In a different web browser window, sign in to Peakon as an Administrator. 2. In the menu bar on the left side of the page, click **Configuration**, then navigate to **Integrations**.
- ![The Config](./media/peakon-tutorial/tutorial_peakon_config.png)
+ ![Screenshot shows the Configuration](./media/peakon-tutorial/menu.png)
3. On **Integrations** page, click on **Single Sign-On**.
- ![The Single](./media/peakon-tutorial/tutorial_peakon_single.png)
+ ![Screenshot shows the Single](./media/peakon-tutorial/profile.png)
4. Under **Single Sign-On** section, click on **Enable**.
- ![The enable](./media/peakon-tutorial/tutorial_peakon_enable.png)
+ ![Screenshot shows to enable Single Sign-On](./media/peakon-tutorial/enable.png)
5. On the **Single sign-on for employees using SAML** section, perform the following steps:
- ![The saml](./media/peakon-tutorial/tutorial_peakon_saml.png)
+ ![Screenshot shows SAML Single sign-on](./media/peakon-tutorial/settings.png)
a. In the **SSO Login URL** textbox, paste the value of **Login URL**, which you have copied from the Azure portal.
To configure Azure AD single sign-on with Peakon, perform the following steps:
e. Click the **icon** to copy the **Reply URL (ACS)** and paste in **Reply URL** textbox in **Basic SAML Configuration** section on Azure portal.
- f. Click **Save**
-
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Peakon.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Peakon**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Peakon**.
-
- ![The Peakon link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+ f. Click **Save**.
### Create Peakon test user
In the case of Peakon, provisioning is a manual task.
2. In the menu bar on the left side of the page, click **Configuration**, then navigate to **Employees**.
- ![The employee](./media/peakon-tutorial/tutorial_peakon_employee.png)
+ ![Screenshot shows the employee](./media/peakon-tutorial/employee.png)
3. On the top right side of the page, click **Add employee**.
- ![The add employee](./media/peakon-tutorial/tutorial_peakon_addemployee.png)
+ ![Screenshot shows to add employee](./media/peakon-tutorial/add-employee.png)
4. On the **New employee** dialog page, perform the following steps:
- ![The new employee](./media/peakon-tutorial/tutorial_peakon_create.png)
+ ![Screenshot shows the new employee](./media/peakon-tutorial/create.png)
	1. In the **Name** textbox, type the first name as **Britta** and the last name as **Simon**.
In the case of Peakon, provisioning is a manual task.
1. Click **Create employee**.
-### Test single sign-on
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Peakon Sign on URL where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to Peakon Sign-on URL directly and initiate the login flow from there.
-When you click the Peakon tile in the Access Panel, you should be automatically signed in to the Peakon for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Peakon for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Peakon tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Peakon for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Peakon you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Screensteps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/screensteps-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with ScreenSteps | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ScreenSteps'
description: Learn how to configure single sign-on between Azure Active Directory and ScreenSteps.
Previously updated : 03/07/2019 Last updated : 02/15/2022
-# Tutorial: Azure Active Directory integration with ScreenSteps
+# Tutorial: Azure AD SSO integration with ScreenSteps
-In this tutorial, you learn how to integrate ScreenSteps with Azure Active Directory (Azure AD).
-Integrating ScreenSteps with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate ScreenSteps with Azure Active Directory (Azure AD). When you integrate ScreenSteps with Azure AD, you can:
-* You can control in Azure AD who has access to ScreenSteps.
-* You can enable your users to be automatically signed-in to ScreenSteps (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to ScreenSteps.
+* Enable your users to be automatically signed-in to ScreenSteps with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with ScreenSteps, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* ScreenSteps single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ScreenSteps single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ScreenSteps supports **SP** initiated SSO
-
-## Adding ScreenSteps from the gallery
-
-To configure the integration of ScreenSteps into Azure AD, you need to add ScreenSteps from the gallery to your list of managed SaaS apps.
-
-**To add ScreenSteps from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **ScreenSteps**, select **ScreenSteps** from result panel then click **Add** button to add the application.
+* ScreenSteps supports **SP** initiated SSO.
- ![ScreenSteps in the results list](common/search-new-app.png)
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Configure and test Azure AD single sign-on
+## Add ScreenSteps from the gallery
-In this section, you configure and test Azure AD single sign-on with ScreenSteps based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in ScreenSteps needs to be established.
-
-To configure and test Azure AD single sign-on with ScreenSteps, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure ScreenSteps Single Sign-On](#configure-screensteps-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create ScreenSteps test user](#create-screensteps-test-user)** - to have a counterpart of Britta Simon in ScreenSteps that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+To configure the integration of ScreenSteps into Azure AD, you need to add ScreenSteps from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **ScreenSteps** in the search box.
+1. Select **ScreenSteps** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure Azure AD single sign-on with ScreenSteps, perform the following steps:
+## Configure and test Azure AD SSO for ScreenSteps
-1. In the [Azure portal](https://portal.azure.com/), on the **ScreenSteps** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with ScreenSteps using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ScreenSteps.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with ScreenSteps, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ScreenSteps SSO](#configure-screensteps-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ScreenSteps test user](#create-screensteps-test-user)** - to have a counterpart of B.Simon in ScreenSteps that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **ScreenSteps** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
- ![ScreenSteps Domain and URLs single sign-on information](common/sp-signonurl.png)
+4. On the **Basic SAML Configuration** section, perform the following step:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://<tenantname>.ScreenSteps.com`
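The Sign-on URL above is a per-tenant template. A minimal sketch, assuming a hypothetical tenant name of `contoso` (DNS hostnames are case-insensitive, so lowercase is used here):

```python
# Build the ScreenSteps Sign-on URL from a tenant name.
# "contoso" is a hypothetical placeholder; substitute your ScreenSteps tenant name.
def screensteps_signon_url(tenant_name: str) -> str:
    return f"https://{tenant_name}.screensteps.com"

print(screensteps_signon_url("contoso"))
# https://contoso.screensteps.com
```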
To configure Azure AD single sign-on with ScreenSteps, perform the following ste
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
- b. Azure AD Identifier
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- c. Logout URL
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ScreenSteps.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ScreenSteps**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure ScreenSteps Single Sign-On
+## Configure ScreenSteps SSO
1. In a different web browser window, log into your ScreenSteps company site as an administrator. 1. Click **Account Settings**.
- ![Account management](./media/screensteps-tutorial/ic778523.png "Account management")
+ ![Screenshot that shows Account management](./media/screensteps-tutorial/account.png "Account management")
1. Click **Single Sign-on**.
- ![Screenshot that shows "Single Sign-on" selected.](./media/screensteps-tutorial/ic778524.png "Remote authentication")
+ ![Screenshot that shows "Single Sign-on" selected.](./media/screensteps-tutorial/groups.png "Remote authentication")
1. Click **Create Single Sign-on Endpoint**.
- ![Remote authentication](./media/screensteps-tutorial/ic778525.png "Remote authentication")
+ ![Screenshot that shows Remote authentication](./media/screensteps-tutorial/title.png "Remote authentication")
1. In the **Create Single Sign-on Endpoint** section, perform the following steps:
- ![Create an authentication endpoint](./media/screensteps-tutorial/ic778526.png "Create an authentication endpoint")
+ ![Screenshot that shows Create an authentication endpoint](./media/screensteps-tutorial/settings.png "Create an authentication endpoint")
a. In the **Title** textbox, type a title.
To configure Azure AD single sign-on with ScreenSteps, perform the following ste
1. **Edit** the new endpoint.
- ![Edit endpoint](./media/screensteps-tutorial/ic778528.png "Edit endpoint")
+ ![Screenshot that shows to edit endpoint](./media/screensteps-tutorial/certificate.png "Edit endpoint")
1. In the **Edit Single Sign-on Endpoint** section, perform the following steps:
- ![Remote authentication endpoint](./media/screensteps-tutorial/ic778527.png "Remote authentication endpoint")
+ ![Screenshot that shows Remote authentication endpoint](./media/screensteps-tutorial/authentication.png "Remote authentication endpoint")
a. Click **Upload new SAML Certificate file**, and then upload the certificate, which you have downloaded from Azure portal.
To configure Azure AD single sign-on with ScreenSteps, perform the following ste
h. Click the **Make default for account** button to use this endpoint for all users who log into ScreenSteps. Alternatively you can click the **Add to Site** button to use this endpoint for specific sites in **ScreenSteps**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to ScreenSteps.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **ScreenSteps**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **ScreenSteps**.
-
- ![The ScreenSteps link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
- ### Create ScreenSteps test user In this section, you create a user called Britta Simon in ScreenSteps. Work with [ScreenSteps Client support team](https://www.screensteps.com/contact) to add the users in the ScreenSteps platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with following options.
-When you click the ScreenSteps tile in the Access Panel, you should be automatically signed in to the ScreenSteps for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal. This will redirect to ScreenSteps Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to ScreenSteps Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ScreenSteps tile in My Apps, this will redirect to the ScreenSteps Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure ScreenSteps, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Trelica Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trelica-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Trelica | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Trelica'
description: Learn how to configure single sign-on between Azure Active Directory and Trelica.
Previously updated : 05/06/2020 Last updated : 02/16/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Trelica
+# Tutorial: Azure AD SSO integration with Trelica
-In this tutorial, you learn how to integrate Trelica with Azure Active Directory (Azure AD).
-
-With this integration, you can:
+In this tutorial, you learn how to integrate Trelica with Azure Active Directory (Azure AD). When you integrate Trelica with Azure AD, you can:
* Control in Azure AD who has access to Trelica.
* Enable your users to be automatically signed in to Trelica with their Azure AD accounts.
* Manage your accounts in one central location: the Azure portal.
-To learn more about software as a service (SaaS) app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Trelica supports IDP-initiated SSO.
* Trelica supports just-in-time user provisioning.
-* After you configure Trelica, you can enforce session control. This control protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from conditional access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
-## Adding Trelica from the gallery
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Trelica from the gallery
To configure the integration of Trelica into Azure AD, you need to add Trelica from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) by using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal by using either a work or school account, or a personal Microsoft account.
1. On the leftmost navigation pane, select the **Azure Active Directory** service.
1. Go to **Enterprise Applications**, and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, enter **Trelica** in the search box.
1. Select **Trelica** from the search results, and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Trelica
+## Configure and test Azure AD SSO for Trelica
Configure and test Azure AD SSO with Trelica by using a test user called **B.Simon**. For SSO to work, you must establish a linked relationship between an Azure AD user and the related user in Trelica.
-To configure and test Azure AD SSO with Trelica, complete the following building blocks:
+To configure and test Azure AD SSO with Trelica, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Trelica, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal:
-1. In the [Azure portal](https://portal.azure.com/), on the **Trelica** application integration page, go to the **Manage** section. Select **Single sign-on**.
+1. In the Azure portal, on the **Trelica** application integration page, go to the **Manage** section. Select **Single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up Single Sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.

   ![The Set up Single Sign-On with SAML page, with the pencil icon for Basic SAML Configuration highlighted](common/edit-urls.png)
-1. On the **Set up Single Sign-on with SAML** page, enter the following values:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- 1. In the **Identifier** box, enter the URL **https://app.trelica.com**.
+ 1. In the **Identifier** box, type the URL:
+ `https://app.trelica.com`.
- 1. In the **Reply URL** box, enter a URL having the pattern
+ 1. In the **Reply URL** box, type a URL using the following pattern:
   `https://app.trelica.com/Id/Saml2/<CUSTOM_IDENTIFIER>/Acs`.

   > [!NOTE]
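As a quick illustration (not part of the tutorial), the Reply URL pattern above can be filled in programmatically. The helper name and the example identifier below are made up; the real `<CUSTOM_IDENTIFIER>` comes from Trelica:

```python
def trelica_reply_url(custom_identifier: str) -> str:
    # Hypothetical helper: fills the documented Reply URL pattern
    # https://app.trelica.com/Id/Saml2/<CUSTOM_IDENTIFIER>/Acs
    # with the tenant-specific identifier that Trelica assigns.
    return f"https://app.trelica.com/Id/Saml2/{custom_identifier}/Acs"

# "example-id" is a placeholder, not a real identifier.
print(trelica_reply_url("example-id"))
# https://app.trelica.com/Id/Saml2/example-id/Acs
```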
In this section, you enable B.Simon to use Azure single sign-on by granting acce
1. In the Azure portal, select **Enterprise Applications** > **All applications**.
1. In the applications list, select **Trelica**.
1. In the app's overview page, go to the **Manage** section, and select **Users and groups**.
-
-   ![The Manage section, with Users and groups highlighted](common/users-groups-blade.png)
1. Select **Add user**. In the **Add Assignment** dialog box, select **Users and groups**.
-
-   ![The Users and groups window, with Add user highlighted](common/add-assign-user.png)
1. In the **Users and groups** dialog box, select **B.Simon** from the users list. Then choose the **Select** button at the bottom of the screen.
1. If you expect any role value in the SAML assertion, in the **Select Role** dialog box, select the appropriate role for the user from the list. Then choose the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog box, select **Assign**.
Trelica supports just-in-time user provisioning, which is enabled by default. Th
## Test SSO
-In this section, you test your Azure AD single sign-on configuration by using the My Apps portal.
-
-When you select the Trelica tile in the My Apps portal, you're automatically signed in to the Trelica for which you set up SSO. For more information about the My Apps portal, see [Introduction to the My Apps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
-
-- [Tutorials for integrating SaaS apps with Azure Active Directory](./tutorial-list.md)
-
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Trelica for which you set up the SSO.
-- [Try Trelica with Azure AD](https://aad.portal.azure.com/)
+* You can use Microsoft My Apps. When you click the Trelica tile in My Apps, you should be automatically signed in to the Trelica for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Trelica with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Trelica, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Trunarrative Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trunarrative-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with TruNarrative | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with TruNarrative'
description: Learn how to configure single sign-on between Azure Active Directory and TruNarrative.
Previously updated : 02/28/2020 Last updated : 02/16/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with TruNarrative
+# Tutorial: Azure AD SSO integration with TruNarrative
In this tutorial, you'll learn how to integrate TruNarrative with Azure Active Directory (Azure AD). When you integrate TruNarrative with Azure AD, you can:
In this tutorial, you'll learn how to integrate TruNarrative with Azure Active D
* Enable your users to be automatically signed-in to TruNarrative with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* TruNarrative supports **SP** initiated SSO
-
-* Once you configure TruNarrative you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* TruNarrative supports **SP** initiated SSO.
-
-## Adding TruNarrative from the gallery
+## Add TruNarrative from the gallery
To configure the integration of TruNarrative into Azure AD, you need to add TruNarrative from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **TruNarrative** in the search box.
1. Select **TruNarrative** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD single sign-on for TruNarrative
+## Configure and test Azure AD SSO for TruNarrative
Configure and test Azure AD SSO with TruNarrative using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TruNarrative.
-To configure and test Azure AD SSO with TruNarrative, complete the following building blocks:
+To configure and test Azure AD SSO with TruNarrative, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure TruNarrative SSO](#configure-trunarrative-sso)** - to configure the single sign-on settings on application side.
- * **[Create TruNarrative test user](#create-trunarrative-test-user)** - to have a counterpart of B.Simon in TruNarrative that is linked to the Azure AD representation of user.
+ 1. **[Create TruNarrative test user](#create-trunarrative-test-user)** - to have a counterpart of B.Simon in TruNarrative that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **TruNarrative** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **TruNarrative** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign-on URL** text box, type a URL using the following pattern:
+ a. In the **Identifier** box, type a URL using the following pattern:
`https://<SUBDOMAIN>.trunarrative.cloud`
- b. In the **Identifier** box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.trunarrative.cloud`
-
- c. In the **Reply URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
`https://<SUBDOMAIN>.trunarrative.cloud/IdP/sso.aspx`
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.trunarrative.cloud`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL, Identifier and Reply URL. Contact [TruNarrative Client support team](mailto:helpdesk@trunarrative.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [TruNarrative Client support team](mailto:helpdesk@trunarrative.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
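Because these placeholder values are easy to mistype, a small sanity check can help before pasting them into **Basic SAML Configuration**. The following sketch is not part of the tutorial: the subdomain `contoso` is invented, and `<SUBDOMAIN>` is approximated as a run of word characters or hyphens.

```python
import re

# Documented URL patterns for the three Basic SAML Configuration fields.
PATTERNS = {
    "Identifier": r"https://[\w-]+\.trunarrative\.cloud",
    "Reply URL": r"https://[\w-]+\.trunarrative\.cloud/IdP/sso\.aspx",
    "Sign-on URL": r"https://[\w-]+\.trunarrative\.cloud",
}

def matches_pattern(field: str, value: str) -> bool:
    # True when the supplied value fits the documented pattern exactly.
    return re.fullmatch(PATTERNS[field], value) is not None

# "contoso" is a made-up subdomain used only for illustration.
print(matches_pattern("Reply URL", "https://contoso.trunarrative.cloud/IdP/sso.aspx"))
# True
```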
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **TruNarrative**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
To configure single sign-on on **TruNarrative** side, you need to send the downl
### Create TruNarrative test user
-In this section, you create a user called B.Simon in TruNarrative. Work with TruNarrative support team to add the users in the TruNarrative platform. Users must be created and activated before you use single sign-on.
-
+In this section, you create a user called B.Simon in TruNarrative. Work with [TruNarrative support team](mailto:helpdesk@trunarrative.com) to add the users in the TruNarrative platform. Users must be created and activated before you use single sign-on.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the TruNarrative tile in the Access Panel, you should be automatically signed in to the TruNarrative for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in Azure portal. This will redirect to TruNarrative Sign-on URL where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Go to TruNarrative Sign-on URL directly and initiate the login flow from there.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the TruNarrative tile in My Apps, this will redirect to the TruNarrative Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [Try TruNarrative with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+Once you configure TruNarrative, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Verme Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/verme-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Verme | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Verme'
description: Learn how to configure single sign-on between Azure Active Directory and Verme.
Previously updated : 06/22/2020 Last updated : 02/16/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Verme
+# Tutorial: Azure AD SSO integration with Verme
In this tutorial, you'll learn how to integrate Verme with Azure Active Directory (Azure AD). When you integrate Verme with Azure AD, you can:
In this tutorial, you'll learn how to integrate Verme with Azure Active Director
* Enable your users to be automatically signed in to Verme with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Verme supports **SP and IDP** initiated SSO.
-* Once you configure Verme you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
-
-## Adding Verme from the gallery
+## Add Verme from the gallery
To configure the integration of Verme into Azure AD, you need to add Verme from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Verme** in the search box.
1. Select **Verme** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD single sign-on for Verme
+## Configure and test Azure AD SSO for Verme
Configure and test Azure AD SSO with Verme using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Verme.
-To configure and test Azure AD SSO with Verme, complete the following building blocks:
+To configure and test Azure AD SSO with Verme, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Verme, complete the following building b
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Verme** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Verme** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
a. In the **Sign-on URL** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.verme.ru/saml/begin?idp=<IDP_NAME>`
- b. In the **Relay State** text box, type the URL:
+ b. In the **Relay State** text box, type the value:
   `verme_ms_login`

   > [!NOTE]
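To make the SP-initiated pattern concrete, here is an illustrative sketch (not part of the tutorial) that assembles the Verme sign-on URL from the documented pattern. The subdomain and IdP name are placeholders, not real values:

```python
from urllib.parse import urlencode

def verme_signon_url(subdomain: str, idp_name: str) -> str:
    # Fills the documented Sign-on URL pattern
    # https://<SUBDOMAIN>.verme.ru/saml/begin?idp=<IDP_NAME>
    # with a URL-encoded query string.
    return f"https://{subdomain}.verme.ru/saml/begin?{urlencode({'idp': idp_name})}"

# Both arguments are made-up example values.
print(verme_signon_url("contoso", "azuread"))
# https://contoso.verme.ru/saml/begin?idp=azuread
```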
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Verme**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called Britta Simon in Verme. Work with [Ver
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Verme tile in the Access Panel, you should be automatically signed in to the Verme for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### SP initiated:
-## Additional resources
+* Click on **Test this application** in Azure portal. This will redirect to Verme Sign on URL where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to Verme Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+#### IDP initiated:
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Verme for which you set up the SSO.
-- [Try Verme with Azure AD](https://aad.portal.azure.com/)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Verme tile in My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow, and if configured in IDP mode, you should be automatically signed in to the Verme for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Verme with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Verme, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Vocoli Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vocoli-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Vocoli | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Vocoli'
description: Learn how to configure single sign-on between Azure Active Directory and Vocoli.
Previously updated : 08/16/2019 Last updated : 02/16/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Vocoli
+# Tutorial: Azure AD SSO integration with Vocoli
In this tutorial, you'll learn how to integrate Vocoli with Azure Active Directory (Azure AD). When you integrate Vocoli with Azure AD, you can:
In this tutorial, you'll learn how to integrate Vocoli with Azure Active Directo
* Enable your users to be automatically signed-in to Vocoli with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Vocoli supports **IDP** initiated SSO
+* Vocoli supports **IDP** initiated SSO.
-## Adding Vocoli from the gallery
+## Add Vocoli from the gallery
To configure the integration of Vocoli into Azure AD, you need to add Vocoli from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Vocoli** in the search box.
1. Select **Vocoli** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Vocoli
+## Configure and test Azure AD SSO for Vocoli
Configure and test Azure AD SSO with Vocoli using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Vocoli.
-To configure and test Azure AD SSO with Vocoli, complete the following building blocks:
+To configure and test Azure AD SSO with Vocoli, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Vocoli SSO](#configure-vocoli-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Vocoli test user](#create-vocoli-test-user)** - to have a counterpart of B.Simon in Vocoli that is linked to the Azure AD representation of user.
+ 1. **[Create Vocoli test user](#create-vocoli-test-user)** - to have a counterpart of B.Simon in Vocoli that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Vocoli** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Vocoli** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Vocoli**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called B.Simon in Vocoli. Work with [Vocoli s
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Vocoli tile in the Access Panel, you should be automatically signed in to the Vocoli for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the Vocoli application for which you set up SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Vocoli tile in My Apps, you should be automatically signed in to the Vocoli application for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Vocoli with Azure AD](https://aad.portal.azure.com/)
+Once you configure Vocoli, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Zengine Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zengine-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Zengine | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Zengine'
description: Learn how to configure single sign-on between Azure Active Directory and Zengine.
Previously updated : 08/24/2020 Last updated : 02/15/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Zengine
+# Tutorial: Azure AD SSO integration with Zengine
In this tutorial, you'll learn how to integrate Zengine with Azure Active Directory (Azure AD). When you integrate Zengine with Azure AD, you can:
In this tutorial, you'll learn how to integrate Zengine with Azure Active Direct
* Enable your users to be automatically signed-in to Zengine with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Zengine supports **SP** initiated SSO
-
-* Once you configure Zengine you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* Zengine supports **SP** initiated SSO.
-## Adding Zengine from the gallery
+## Add Zengine from the gallery
To configure the integration of Zengine into Azure AD, you need to add Zengine from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Zengine** in the search box.
1. Select **Zengine** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.

## Configure and test Azure AD SSO for Zengine

Configure and test Azure AD SSO with Zengine using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zengine.
-To configure and test Azure AD SSO with Zengine, complete the following building blocks:
+To configure and test Azure AD SSO with Zengine, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
    1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Zengine, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Zengine** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Zengine** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://auth.zenginehq.com/saml2/v1/sls/<ENVIRONMENT_NAME>`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://auth.zenginehq.com/saml2/v1/metadata/<ENVIRONMENT_NAME>`
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://auth.zenginehq.com/saml2/v1/sls/<ENVIRONMENT_NAME>`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Zengine Client support team](mailto:support@wizehive.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Zengine Client support team](mailto:support@wizehive.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
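The `<ENVIRONMENT_NAME>` placeholder fills in the same way for both the Identifier and the Sign on URL patterns shown above. A minimal sketch of the substitution, using a made-up environment name (`contoso` is not a real value; use the one the Zengine support team provides):

```shell
# Sketch only: "contoso" is a hypothetical environment name, not a real tenant.
ENVIRONMENT_NAME="contoso"
IDENTIFIER="https://auth.zenginehq.com/saml2/v1/metadata/${ENVIRONMENT_NAME}"
SIGN_ON_URL="https://auth.zenginehq.com/saml2/v1/sls/${ENVIRONMENT_NAME}"
echo "$IDENTIFIER"   # https://auth.zenginehq.com/saml2/v1/metadata/contoso
echo "$SIGN_ON_URL"  # https://auth.zenginehq.com/saml2/v1/sls/contoso
```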
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zengine.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Zengine**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.

- ![The "Users and groups" link](common/users-groups-blade.png)

1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.

- ![The Add User link](common/add-assign-user.png)

1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called Britta Simon in Zengine. Work with [Z
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Zengine tile in the Access Panel, you should be automatically signed in to the Zengine for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal. This will redirect to the Zengine Sign-on URL where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Go to the Zengine Sign-on URL directly and initiate the login flow from there.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the Zengine tile in My Apps, this will redirect to the Zengine Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [Try Zengine with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+Once you configure Zengine, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Zest Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zest-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Zest'
+description: Learn how to configure single sign-on between Azure Active Directory and Zest.
+ Last updated : 03/02/2022
+# Tutorial: Azure AD SSO integration with Zest
+
+In this tutorial, you'll learn how to integrate Zest with Azure Active Directory (Azure AD). When you integrate Zest with Azure AD, you can:
+
+* Control in Azure AD who has access to Zest.
+* Enable your users to be automatically signed-in to Zest with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Zest single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Zest supports **IDP** initiated SSO.
+
+## Add Zest from the gallery
+
+To configure the integration of Zest into Azure AD, you need to add Zest from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zest** in the search box.
+1. Select **Zest** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Zest
+
+Configure and test Azure AD SSO with Zest using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zest.
+
+To configure and test Azure AD SSO with Zest, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Zest SSO](#configure-zest-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Zest test user](#create-zest-test-user)** - to have a counterpart of B.Simon in Zest that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Zest** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ |--|
+ | `http://my.zestbenefits.com/idp/identity/AuthServices`|
+ | `http://my.zestbenefits.com/idp/identity/AuthServices?<SSOPortalId>` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ |--|
+ |`https://my.zestbenefits.com/idp/identity/AuthServices/Acs` |
+ |`https://<CustomDomain>/idp/identity/AuthServices/Acs` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Zest Client support team](mailto:help@zestbenefits.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Zest** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
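The Identifier and Reply URL patterns above are built by substituting your own values for the `<SSOPortalId>` and `<CustomDomain>` placeholders. A minimal sketch with made-up values (neither `12345` nor `benefits.contoso.com` is real; use the values the Zest support team provides):

```shell
# Sketch only: both values below are hypothetical placeholders.
SSO_PORTAL_ID="12345"
CUSTOM_DOMAIN="benefits.contoso.com"
IDENTIFIER="http://my.zestbenefits.com/idp/identity/AuthServices?${SSO_PORTAL_ID}"
REPLY_URL="https://${CUSTOM_DOMAIN}/idp/identity/AuthServices/Acs"
echo "$IDENTIFIER"  # http://my.zestbenefits.com/idp/identity/AuthServices?12345
echo "$REPLY_URL"   # https://benefits.contoso.com/idp/identity/AuthServices/Acs
```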
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zest.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Zest**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Zest SSO
+
+To configure single sign-on on the **Zest** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Zest support team](mailto:help@zestbenefits.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Zest test user
+
+In this section, you create a user called Britta Simon in Zest. Work with [Zest support team](mailto:help@zestbenefits.com) to add the users in the Zest platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. You should be automatically signed in to the Zest application for which you set up SSO.
+
+* You can use Microsoft My Apps. When you click the Zest tile in My Apps, you should be automatically signed in to the Zest application for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Zest, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Howto Deploy Java Liberty App With Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app-with-postgresql.md
Last updated 11/19/2021 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes

# Deploy a Java application with Azure Database for PostgreSQL server to Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
Last updated 02/01/2021 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes

# Deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
app-service App Service Web Restore Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-restore-snapshots.md
description: Learn how to restore your app from a snapshot. Recover from unexpec
ms.assetid: 4164f9b5-f735-41c6-a2bb-71f15cdda417 Previously updated : 09/02/2021 Last updated : 02/17/2022

# Restore an app in Azure from a snapshot
-This article shows you how to restore an app in [Azure App Service](../app-service/overview.md) from a snapshot. You can restore your app to a previous state, based on one of your app's snapshots. You do not need to enable snapshots, the platform automatically saves a snapshot of all apps for data recovery purposes.
+This article shows you how to restore an app in [Azure App Service](../app-service/overview.md) from a snapshot. You can restore your app to a previous state, based on one of your app's snapshots. You do not need to enable snapshot backups; the platform automatically saves an hourly snapshot of each app's content and configuration for data recovery purposes. Hourly snapshots for the last 30 days are available. The retention period and snapshot frequency are not configurable.
-Snapshots are incremental shadow copies of your App Service app. When your app is in Premium tier or higher, App Service takes periodic snapshots of both the app's content and its configuration. They offer several advantages over [standard backups](manage-backup.md):
+Restoring from snapshots is available to apps running in one of the **Standard** or **Premium** tiers. For information about scaling up your app, see [Scale up an app in Azure](manage-scale-up.md).
+
+> [!NOTE]
+> Snapshot restore is not available for:
+>
+> - App Service environments (**Isolated** tier)
+> - Azure Functions in the [**Consumption**](../azure-functions/consumption-plan.md) or [**Elastic Premium**](../azure-functions/functions-premium-plan.md) pricing plans.
+>
+> Snapshot restore is available in preview for Azure Functions in [dedicated (App Service)](../azure-functions/dedicated-plan.md) **Standard** or **Premium** tiers.
+
+## Snapshots vs Backups
+
+Snapshots are incremental shadow copies and offer several advantages over [standard backups](manage-backup.md):
- No file copy errors due to file locks.
-- Higher maximum snapshot size (30GB).
-- No configuration required for supported pricing tiers.
-- Snapshots can be restored to a new App Service app in any Azure region.
+- Higher snapshot size (maximum 30 GB).
+- Enabled by default in supported pricing tiers and no configuration required.
+- Restore to a new or existing App Service app or slot in any Azure region.
+
+## What snapshot restore includes
+
+The following table shows which content is restored when you restore a snapshot:
+
+|Settings| Restored?|
+|-|-|
+| **Windows apps**: All app content under `%HOME%` directory<br/>**Linux apps**: All app content under `/home` directory<br/>**Custom containers (Windows and Linux)**: Content in [persistent storage](configure-custom-container.md?pivots=container-linux#use-persistent-shared-storage)| Yes |
+| Content of the [run-from-ZIP package](deploy-run-package.md)| No |
+| Content from any [custom mounted Azure storage](configure-connect-to-azure-storage.md?pivots=container-windows)| No |
-Restoring from snapshots is available to apps running in **Premium** tier or higher. For information about scaling
-up your app, see [Scale up an app in Azure](manage-scale-up.md).
+> [!NOTE]
+> Maximum supported size for snapshot restore is 30 GB. Snapshot restore fails if your storage size is greater than 30 GB. To reduce your storage size, consider moving files like logs, images, audio, and video to [Azure Storage](../storage/index.yml), for example.
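As a quick arithmetic check of that limit, a sketch (the 45 GB figure below is an invented example, not a measurement of any real app):

```shell
# Sketch only: compare a hypothetical content size against the 30 GB limit.
LIMIT_BYTES=$((30 * 1024 * 1024 * 1024))
CONTENT_BYTES=$((45 * 1024 * 1024 * 1024))  # pretend the app's content is 45 GB
if [ "$CONTENT_BYTES" -gt "$LIMIT_BYTES" ]; then
  STATUS="exceeds-limit"   # snapshot restore would fail at this size
else
  STATUS="within-limit"
fi
echo "$STATUS"  # exceeds-limit
```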
-## Limitations
+The following table shows which app configuration is restored:
-- Maximum supported size for snapshot restore is 30GB. Snapshot restore fails if your storage size is greater than 30GB. To reduce your storage size, consider moving files like logs, images, audios, and videos to [Azure Storage](../storage/index.yml), for example.
-- Any connected database that [standard backup](manage-backup.md#what-gets-backed-up) supports or [mounted Azure storage](configure-connect-to-azure-storage.md?pivots=container-windows) is *not* included in the snapshot. Consider using the native backup capabilities of the connected Azure service (for example, [SQL Database](../azure-sql/database/automated-backups-overview.md) and [Azure Files](../storage/files/storage-snapshots-files.md)).
-- App Service stops the target app or target slot while restoring a snapshot. To minimize downtime for the production app, restore the snapshot to a [staging slot](deploy-staging-slots.md) first, then swap into production.
-- Snapshots for the last 30 days are available. The retention period and snapshot frequency are not configurable.
-- App Services running on an App Service environment do not support snapshots.
+|Settings| Restored?|
+|-|-|
+|[Native log settings](troubleshoot-diagnostic-logs.md), including the Azure Storage account and container settings | Yes |
+|Application Insights configuration | Yes |
+|[Health check](monitor-instances-health-check.md) | Yes |
+| Network features, such as [private endpoints](networking/private-endpoint.md), [hybrid connections](app-service-hybrid-connections.md), and [virtual network integration](overview-vnet-integration.md) | No|
+|[Authentication](overview-authentication-authorization.md)| No|
+|[Managed identities](overview-managed-identity.md)| No |
+|[Custom domains](app-service-web-tutorial-custom-domain.md)| No |
+|[TLS/SSL](configure-ssl-bindings.md)| No |
+|[Scale out](../azure-monitor/autoscale/autoscale-get-started.md?toc=/azure/app-service/toc.json)| No |
+|[Diagnostics with Azure Monitor](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor)| No |
+|[Alerts and Metrics](../azure-monitor/alerts/alerts-classic-portal.md)| No |
+|[Backup](manage-backup.md)| No |
+|Associated [deployment slots](deploy-staging-slots.md)| No |
+|Any connected database that [standard backup](manage-backup.md#what-gets-backed-up) supports| No |
-## Restore an app from a snapshot
+## Restore from a snapshot
-1. On the **Settings** page of your app in the [Azure portal](https://portal.azure.com), click **Backups** to display the **Backups** page. Then click **Restore** under the **Snapshot(Preview)** section.
+> [!NOTE]
+> App Service stops the target app or target slot while restoring a snapshot. To minimize downtime for the production app, restore the snapshot to a [deployment slot](deploy-staging-slots.md) first, then [swap](deploy-staging-slots.md#swap-two-slots) into production.
+
+# [Azure portal](#tab/portal)
+
+1. On the **Settings** page of your app in the [Azure portal](https://portal.azure.com), click **Backups** to display the **Backups** page. Then click **Restore** under the **Snapshot** section.
- ![Screenshot that shows how to restore an app from a snapshot.](./media/app-service-web-restore-snapshots/1.png)
+ :::image type="content" source="./media/app-service-web-restore-snapshots/1.png" alt-text="Screenshot that shows how to restore an app from a snapshot.":::
2. In the **Restore** page, select the snapshot to restore.
- ![Screenshot that shows how to select the snapshot to restore. ](./media/app-service-web-restore-snapshots/2.png)
+ <!-- ![Screenshot that shows how to select the snapshot to restore. ](./media/app-service-web-restore-snapshots/2.png) -->
-3. Specify the destination for the app restore in **Restore destination**.
+3. Specify the destination for the app restore in **Restore destination**. To restore to a [deployment slot](deploy-staging-slots.md), select **Existing app**.
- ![Screenshot that shows how to specify the restoration destination.](./media/app-service-web-restore-snapshots/3.png)
+ <!-- ![Screenshot that shows how to specify the restoration destination.](./media/app-service-web-restore-snapshots/3.png) -->
- > [!WARNING]
- > As a best practice we recommend restoring to a new slot then performing a swap. If you choose **Overwrite**, all existing data in your app's current file system is erased and overwritten. Before you click **OK**, make sure that it is what you want to do.
- >
+ > [!NOTE]
+ > It's recommended that you restore to a deployment slot and then perform a swap into production. If you choose **Overwrite**, all existing data in your app's current file system is erased and overwritten. Before you click **OK**, make sure that it is what you want to do.
>
- > [!Note]
- > Due to current technical limitations, you can only restore to apps in the same scale unit. This limitation will be removed in a future release.
- >
- >
-
- You can select **Existing App** to restore to a slot. Before you use this option, you should have already created a slot in your app.
- 4. You can choose to restore your site configuration.
- ![Screenshot that shows how to restore site configuration.](./media/app-service-web-restore-snapshots/4.png)
+ :::image type="content" source="./media/app-service-web-restore-snapshots/4.png" alt-text="Screenshot that shows how to restore site configuration.":::
5. Click **OK**.
+# [Azure CLI](#tab/cli)
+
+1. List the restorable snapshots for your app and copy the timestamp of the one you want to restore.
+
+ ```azurecli-interactive
+ az webapp config snapshot list --name <app-name> --resource-group <group-name>
+ ```
+
+2. To restore the snapshot by overwriting the app's content and configuration:
+
+ ```azurecli-interactive
+ az webapp config snapshot restore --name <app-name> --resource-group <group-name> --time <snapshot-timestamp>
+ ```
+
+ To restore the snapshot to a different app:
+
+ ```azurecli-interactive
+ az webapp config snapshot restore --name <target-app-name> --resource-group <target-group-name> --source-name <source-app-name> --source-resource-group <source-group-name> --time <source-snapshot-timestamp>
+ ```
+
+ To restore app content only and not the app configuration, use the `--restore-content-only` parameter. For more information, see [az webapp config snapshot restore](/cli/webapp/config/snapshot#az_webapp_config_snapshot_restore).
+
+--
azure-arc Create Complete Managed Instance Indirectly Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-indirectly-connected.md
Last updated 12/09/2021

# Quickstart: Deploy Azure Arc-enabled data services - indirectly connected mode - Azure CLI
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
Last updated 01/27/2022

# Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
azure-arc Rotate User Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-user-tls-certificate.md
DNS:mi1-svc, DNS:mi1-svc.test.svc.cluster.local, DNS:mi1-svc.test.svc
Use the following command by providing Kubernetes secret that you created previously to rotate the certificate:
-```console
+```azurecli
az sql mi-arc update -n <managed instance name> --k8s-namespace <arc> --use-k8s --service-cert-secret <your-cert-secret>
```

For example:
-```console
+```azurecli
az sql mi-arc update -n mysqlmi --k8s-namespace <arc> --use-k8s --service-cert-secret mymi-cert-secret
```

Use the following command to rotate the certificate with the PEM formatted certificate public and private keys. The command generates a default service certificate name.
-```console
+```azurecli
az sql mi-arc update -n <managed instance name> --k8s-namespace arc --use-k8s --cert-public-key-file <path-to-my-cert-public-key> --cert-private-key-file <path-to-my-cert-private-key> --k8s-namespace <your-k8s-namespace>
```

For example:
-```console
+```azurecli
az sql mi-arc update -n mysqlmi --k8s-namespace arc --use-k8s --cert-public-key-file ./mi1-1-cert --cert-private-key-file ./mi1-1-pvt
```

You can also provide a Kubernetes service cert secret name for `--service-cert-secret` parameter. In this case, it's taken as an updated secret name. The command checks if the secret exists. If not, the command creates a secret name and then rotates the secret in the managed instance.
-```console
+```azurecli
az sql mi-arc update -n <managed instance name> --k8s-namespace <arc> --use-k8s --cert-public-key-file <path-to-my-cert-public-key> --cert-private-key-file <path-to-my-cert-private-key> --service-cert-secret <path-to-mymi-cert-secret>
```

For example:
-```console
+```azurecli
az sql mi-arc update -n mysqlmi --k8s-namespace arc --use-k8s --cert-public-key-file ./mi1-1-cert --cert-private-key-file ./mi1-1-pvt --service-cert-secret mi1-12-1-cert-secret
```
azure-arc Troubleshooting Get Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshooting-get-logs.md
az arcdata dc debug copy-logs --target-folder <desired folder> --exclude-dumps -
For example:
-```console
-#az arcdata dc debug copy-logs --target-folder C:\temp\logs --exclude-dumps --skip-compress --resource-kind postgresql-12 --resource-name pg1 --use-k8s --k8s-namespace
+```azurecli
+az arcdata dc debug copy-logs --target-folder C:\temp\logs --exclude-dumps --skip-compress --resource-kind postgresql-12 --resource-name pg1 --use-k8s --k8s-namespace
``` The following folder hierarchy is an example. It's organized by pod name, then container, and then by directory hierarchy within the container.
azure-arc Upload Metrics And Logs To Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-metrics-and-logs-to-azure-monitor.md
az role assignment create --assignee <appId> --role 'Monitoring Metrics Publishe
# [PowerShell](#tab/powershell)
-```powershell
+```azurecli
az role assignment create --assignee <appId> --role 'Monitoring Metrics Publisher' --scope subscriptions/<SubscriptionID>/resourceGroups/<resourcegroup> ```
azure-arc Upload Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-metrics.md
The Arc data services extension managed identity is used for uploading metrics.
### (1) Retrieve managed identity of the Arc data controller extension # [PowerShell](#tab/powershell)
-```powershell
+```azurecli
$Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group <resource group> --cluster-name <connectedclustername> --cluster-type connectedClusters --name <name of extension> | convertFrom-json).identity.principalId #Example $Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group myresourcegroup --cluster-name myconnectedcluster --cluster-type connectedClusters --name ads-extension | convertFrom-json).identity.principalId ``` # [macOS & Linux](#tab/linux)
-```console
+```azurecli
export MSI_OBJECT_ID=`az k8s-extension show --resource-group <resource group> --cluster-name <connectedclustername> --cluster-type connectedClusters --name <name of extension> | jq '.identity.principalId' | tr -d \"` #Example export MSI_OBJECT_ID=`az k8s-extension show --resource-group myresourcegroup --cluster-name myconnectedcluster --cluster-type connectedClusters --name ads-extension | jq '.identity.principalId' | tr -d \"`
N/A
Run the following command to assign the **Monitoring Metrics Publisher** role: # [PowerShell](#tab/powershell)
-```powershell
+```azurecli
az role assignment create --assignee $Env:MSI_OBJECT_ID --role 'Monitoring Metrics Publisher' --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME" ``` # [macOS & Linux](#tab/linux)
-```console
+```azurecli
az role assignment create --assignee ${MSI_OBJECT_ID} --role 'Monitoring Metrics Publisher' --scope "/subscriptions/${subscription}/resourceGroups/${resourceGroup}" ```
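As a sketch of the parsing step used in the macOS & Linux tab to retrieve the managed identity (using a saved JSON file in place of the live `az k8s-extension show` call; the sample value is made up), the `jq` pipeline pulls out the bare principal ID:

```shell
# Sample of the JSON shape returned by az k8s-extension show (value is hypothetical).
cat > extension.json <<'EOF'
{ "identity": { "principalId": "11111111-2222-3333-4444-555555555555" } }
EOF

# Same extraction as above: select .identity.principalId and strip the quotes jq emits.
MSI_OBJECT_ID=$(jq '.identity.principalId' extension.json | tr -d \")
echo "$MSI_OBJECT_ID"
```

Using `jq -r` would emit the raw string directly and make the `tr -d \"` step unnecessary.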
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md
Azure Arc-enabled Kubernetes provides its agents with out-of-the-box auto-upgra
The following command connects a cluster to Azure Arc with auto-upgrade **enabled**:
-```console
+```azurecli
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest ```
With auto-upgrade enabled, the agent polls Azure hourly for availability of a ne
To opt out of auto-upgrade, specify the `--disable-auto-upgrade` parameter while connecting the cluster to Azure Arc. The following command connects a cluster to Azure Arc with auto-upgrade **disabled**:
-```console
+```azurecli
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest --disable-auto-upgrade ```
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest --dis
After you connect a cluster to Azure Arc, you can toggle the auto-upgrade capability with the `az connectedk8s update` command, as shown below:
-```console
+```azurecli
az connectedk8s update --name AzureArcTest1 --resource-group AzureArcTest --auto-upgrade false ```
az connectedk8s update --name AzureArcTest1 --resource-group AzureArcTest --auto
If you have disabled auto-upgrade for agents, you can manually initiate upgrades for these agents using the `az connectedk8s upgrade` command as shown below:
-```console
+```azurecli
az connectedk8s upgrade -g AzureArcTest1 -n AzureArcTest --agent-version 1.1.0 ```
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
The server application needs the `Microsoft.Authorization/*/read` permissions to
Enable Azure role-based access control (RBAC) on your Azure Arc-enabled Kubernetes cluster by running the following command:
-```console
+```azurecli
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features azure-rbac --app-id "${SERVER_APP_ID}" --app-secret "${SERVER_APP_SECRET}" ```
Copy the following JSON object into a file called *custom-role.json*. Replace th
1. Create the role definition by running the following command from the folder where you saved *custom-role.json*:
- ```bash
+ ```azurecli
az role definition create --role-definition @custom-role.json ``` 1. Create a role assignment by using this custom role definition:
- ```bash
+ ```azurecli
az role assignment create --role "Arc Deployment Viewer" --assignee <AZURE-AD-ENTITY-ID> --scope $ARM_ID/namespaces/<namespace-name> ```
There are two ways to get the *kubeconfig* file that you need to access the clus
Run the following command to start the proxy process:
-```console
+```azurecli
az connectedk8s proxy -n <clusterName> -g <resourceGroupName> ```
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
A conceptual overview of this feature is available in [Cluster connect - Azure A
- Install the `connectedk8s` Azure CLI extension of version >= 1.2.0:
- ```console
+ ```azurecli
az extension add --name connectedk8s ``` If you've already installed the `connectedk8s` extension, update the extension to the latest version:
- ```console
+ ```azurecli
az extension update --name connectedk8s ```
A conceptual overview of this feature is available in [Cluster connect - Azure A
- Replace the placeholders and run the below command to set the environment variables used in this document:
- ```console
+ ```azurecli
CLUSTER_NAME=<cluster-name> RESOURCE_GROUP=<resource-group-name> ARM_ID_CLUSTER=$(az connectedk8s show -n $CLUSTER_NAME -g $RESOURCE_GROUP --query id -o tsv)
A conceptual overview of this feature is available in [Cluster connect - Azure A
You can enable Cluster Connect on any Azure Arc-enabled Kubernetes cluster by running the following command on a machine where the `kubeconfig` file points to the cluster in question:
-```console
+```azurecli
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $RESOURCE_GROUP ```
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
- For Azure AD user account:
- ```console
+ ```azurecli
AAD_ENTITY_OBJECT_ID=$(az ad signed-in-user show --query objectId -o tsv) ``` - For Azure AD application:
- ```console
+ ```azurecli
AAD_ENTITY_OBJECT_ID=$(az ad sp show --id <id> --query objectId -o tsv) ```
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
- If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Azure AD entity. Example:
- ```console
+ ```azurecli
az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER ```
az connectedk8s enable-features --features cluster-connect -n $CLUSTER_NAME -g $
- If using the Azure Active Directory authentication option, after signing in to the Azure CLI with the Azure AD entity of interest, get the Cluster Connect `kubeconfig` needed to communicate with the cluster from anywhere (even from outside the firewall surrounding the cluster):
- ```console
+ ```azurecli
az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP ``` - If using the service account authentication option, get the Cluster Connect `kubeconfig` needed to communicate with the cluster from anywhere:
- ```console
+ ```azurecli
az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP --token $TOKEN ```
azure-arc Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/faq.md
The system assigned managed identity associated with your Azure Arc-enabled Kube
To check when the certificate is about to expire for any given cluster, run the following command:
-```console
+```azurecli
az connectedk8s show -n <name> -g <resource-group> ```
If the value of `managedIdentityCertificateExpirationTime` indicates a timestamp
1. Delete the Azure Arc-enabled Kubernetes resource and agents on the cluster.
- ```console
+ ```azurecli
az connectedk8s delete -n <name> -g <resource-group> ``` 1. Recreate the Azure Arc-enabled Kubernetes resource by deploying agents on the cluster.
- ```console
+ ```azurecli
az connectedk8s connect -n <name> -g <resource-group> ```
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* Install the **connectedk8s** Azure CLI extension of version >= 1.2.0:
- ```
+ ```azurecli
az extension add --name connectedk8s ```
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html) * If you want to connect an OpenShift cluster to Azure Arc, execute the following command one time on your cluster before running `az connectedk8s connect`:
- ```
+ ```bash
oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa ```
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html) * If you want to connect an OpenShift cluster to Azure Arc, you need to execute the following command just once on your cluster before running `New-AzConnectedKubernetes`:
- ```
+ ```bash
oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa ```
az group create --name AzureArcTest --location EastUS --output table
Output:
-```
+```output
Location    Name
----------  ------------
eastus      AzureArcTest
New-AzResourceGroup -Name AzureArcTest -Location EastUS
Output:
-```
+```output
ResourceGroupName : AzureArcTest
Location          : eastus
ProvisioningState : Succeeded
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest
Output:
-```
+```output
Helm release deployment succeeded {
New-AzConnectedKubernetes -ClusterName AzureArcTest1 -ResourceGroupName AzureArc
Output:
-```
+```output
Location    Name           Type
----------  -------------  --------------------------------------
eastus      AzureArcTest1  microsoft.kubernetes/connectedclusters
az connectedk8s list --resource-group AzureArcTest --output table
Output:
-```
+```output
Name           Location    ResourceGroup
-------------  ----------  -------------
AzureArcTest1  eastus      AzureArcTest
Get-AzConnectedKubernetes -ResourceGroupName AzureArcTest
Output:
-```
+```output
Location    Name           Type
----------  -------------  --------------------------------------
eastus      AzureArcTest1  microsoft.kubernetes/connectedclusters
Azure Arc-enabled Kubernetes deploys a few agents into the `azure-arc` namespace
1. View these deployments and pods using:
- ```
+ ```bash
kubectl get deployments,pods -n azure-arc ```
Azure Arc-enabled Kubernetes deploys a few agents into the `azure-arc` namespace
Output:
- ```
+ ```output
    NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/cluster-metadata-operator    1/1     1            1           13d
    deployment.apps/clusterconnect-agent         1/1     1            1           13d
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Connecting clusters to Azure requires both access to an Azure subscription and `
If you are using Helm version 3.7.0 or later, the following error occurs when you run `az connectedk8s connect` to connect the cluster to Azure Arc:
-```console
-$ az connectedk8s connect -n AzureArcTest -g AzureArcTest
+```azurecli
+az connectedk8s connect -n AzureArcTest -g AzureArcTest
+```
+```output
Unable to pull helm chart from the registry 'mcr.microsoft.com/azurearck8s/batch1/stable/azure-arc-k8sagents:1.4.0': Error: unknown command "chart" for "helm" Run 'helm --help' for usage. ```
In this case, you'll need to install a prior version of [Helm 3](https://helm.sh
If the provided kubeconfig file does not have sufficient permissions to install the Azure Arc agents, the Azure CLI command will return an error. ```azurecli
-$ az connectedk8s connect --resource-group AzureArc --name AzureArcCluster
+az connectedk8s connect --resource-group AzureArc --name AzureArcCluster
+```
+
+```output
Ensure that you have the latest helm version installed before proceeding to avoid unexpected errors. This operation might take a while...
If `az connectedk8s connect` is timing out and failing when connecting an OpenSh
Connecting a Kubernetes cluster to Azure Arc-enabled Kubernetes requires installation of Azure Arc agents on the cluster. If the cluster is running over a slow internet connection, the container image pull for agents may take longer than the Azure CLI timeouts. ```azurecli
-$ az connectedk8s connect --resource-group AzureArc --name AzureArcCluster
+az connectedk8s connect --resource-group AzureArc --name AzureArcCluster
+```
+
+```output
Ensure that you have the latest helm version installed before proceeding to avoid unexpected errors. This operation might take a while... ```
This operation might take a while...
Helm version `v3.3.0-rc.1` has an [issue](https://github.com/helm/helm/pull/8527) where helm install/upgrade (used by the `connectedk8s` CLI extension) runs all hooks, leading to the following error:
-```console
-$ az connectedk8s connect -n AzureArcTest -g AzureArcTest
+```azurecli
+az connectedk8s connect -n AzureArcTest -g AzureArcTest
+```
+
+```output
Ensure that you have the latest helm version installed before proceeding. This operation might take a while...
To recover from this issue, follow these steps:
To help troubleshoot issues with `sourceControlConfigurations` resource (Flux v1), run these az commands with `--debug` parameter specified:
-```console
+```azurecli
az provider show -n Microsoft.KubernetesConfiguration --debug az k8s-configuration create <parameters> --debug ``` To help troubleshoot issues with `fluxConfigurations` resource (Flux v2), run these az commands with `--debug` parameter specified:
-```console
+```azurecli
az provider show -n Microsoft.KubernetesConfiguration --debug az k8s-configuration flux create <parameters> --debug ```
If you experience an error during installation or if the extension is in a faile
One example:
-```console
+```azurecli
az k8s-extension show -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux -t <connectedClusters or managedClusters>
-flux
+```
-...
+```output
"statuses": [ { "code": "InstallationFailed",
flux
Another example:
-```console
+```azurecli
az k8s-extension show -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux -t <connectedClusters or managedClusters>
+```
+```output
"statuses": [ { "code": "InstallationFailed",
Reason: Bad Request'}]}
For all these cases, possible remediation actions are to force delete the extension, uninstall the Helm release, and delete the `flux-system` namespace from the cluster.
-```console
+```azurecli
az k8s-extension delete --force -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux -t <managedClusters OR connectedClusters>
+```
+
+```console
helm uninstall flux -n flux-system kubectl delete namespaces flux-system ```
Some other aspects to consider:
* For an AKS cluster, ensure that the subscription has the following feature flag enabled: `Microsoft.ContainerService/AKS-ExtensionManager`.
- ```console
+ ```azurecli
az feature register --namespace Microsoft.ContainerService --name AKS-ExtensionManager
- ```
+ ```
* Ensure that the cluster does not have any policies that restrict creation of the `flux-system` namespace or resources in that namespace.
juju config kubernetes-worker allow-privileged=true
Using an older version of the agents, where the Cluster Connect feature was not yet supported, results in the following error:
-```console
-$ az connectedk8s proxy -n AzureArcTest -g AzureArcTest
+```azurecli
+az connectedk8s proxy -n AzureArcTest -g AzureArcTest
+```
+```output
Hybrid connection for the target resource does not exist. Agent might not have started successfully. ```
When this occurs, ensure that you are using `connectedk8s` Azure CLI extension o
If the Cluster Connect feature is disabled on the cluster, then `az connectedk8s proxy` will fail to establish a session with the cluster.
-```console
-$ az connectedk8s proxy -n AzureArcTest -g AzureArcTest
+```azurecli
+az connectedk8s proxy -n AzureArcTest -g AzureArcTest
+```
+```output
Cannot connect to the hybrid connection because no agent is connected in the target arc resource. ```
The above warning is observed when you have used a service principal to log into
1. Fetch the Object ID of the Azure AD application used by Azure Arc service:
- ```console
+ ```azurecli
az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv ``` 1. Use the `<objectId>` value from the above step to enable the custom locations feature on the cluster: - If you are enabling the custom locations feature as part of connecting the cluster to Arc, run the following command:
- ```console
+ ```azurecli
az connectedk8s connect -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId> ``` - If you are enabling the custom locations feature on an existing Azure Arc-enabled Kubernetes cluster, run the following command:
- ```console
+ ```azurecli
az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId> --features cluster-connect custom-locations ```
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
The CI/CD workflow will populate the manifest directory with extra manifests to
1. [Create a new GitOps connection](./tutorial-use-gitops-flux2.md) to your newly imported **arc-cicd-demo-gitops** repository in Azure Repos.
- ```console
+ ```azurecli
az k8s-configuration flux create \ --name cluster-config \ --cluster-name arc-cicd-cluster \
The CI/CD workflow will populate the manifest directory with extra manifests to
1. [Create a new GitOps connection](./tutorial-use-gitops-flux2.md) to your newly forked **arc-cicd-demo-gitops** repository in GitHub.
- ```console
+ ```azurecli
az k8s-configuration flux create \ --name cluster-config \ --cluster-name arc-cicd-cluster \
If you're not going to continue to use this application, delete any resources wi
1. Delete the Azure Arc GitOps configuration connection:
- ```console
+ ```azurecli
az k8s-configuration flux delete \ --name cluster-config \ --cluster-name arc-cicd-cluster \
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
If you generated your own SSH keys, then you already have the private and public
Use the following Azure CLI command if Flux is generating the keys.
-```console
-$ az k8s-configuration show --resource-group <resource group name> --cluster-name <connected cluster name> --name <configuration name> --cluster-type connectedClusters --query 'repositoryPublicKey'
+```azurecli
+az k8s-configuration show --resource-group <resource group name> --cluster-name <connected cluster name> --name <configuration name> --cluster-type connectedClusters --query 'repositoryPublicKey'
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAREDACTED" ```
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
* Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type. * Registration of your subscription with the `AKS-ExtensionManager` feature flag. Use the following command:
- ```console
+ ```azurecli
az feature register --namespace Microsoft.ContainerService --name AKS-ExtensionManager ```
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
* Azure CLI version 2.15 or later. [Install the Azure CLI](/cli/azure/install-azure-cli) or use the following commands to update to the latest version:
- ```console
+ ```azurecli
az version az upgrade ``` * Registration of the following Azure service providers. (It's OK to re-register an existing provider.)
- ```console
+ ```azurecli
az provider register --namespace Microsoft.Kubernetes az provider register --namespace Microsoft.ContainerService az provider register --namespace Microsoft.KubernetesConfiguration
To manage GitOps through the Azure CLI or the Azure portal, you need the followi
Registration is an asynchronous process and should finish within 10 minutes. Use the following code to monitor the registration process:
- ```console
+ ```azurecli
az provider show -n Microsoft.KubernetesConfiguration -o table
+ ```
+ ```output
Namespace                          RegistrationPolicy    RegistrationState
---------------------------------  --------------------  -------------------
Microsoft.KubernetesConfiguration  RegistrationRequired  Registered
The GitOps agents require TCP on port 443 (`https://:443`) to function. The agen
Install the latest `k8s-configuration` and `k8s-extension` CLI extension packages:
-```console
+```azurecli
az extension add -n k8s-configuration az extension add -n k8s-extension ``` To update these packages, use the following commands:
-```console
+```azurecli
az extension update -n k8s-configuration az extension update -n k8s-extension ``` To see the list of az CLI extensions installed and their versions, use the following command:
-```console
+```azurecli
az extension list -o table
+```
+```output
Experimental    ExtensionType    Name            Path                                                  Preview    Version
--------------  ---------------  --------------  ----------------------------------------------------  ---------  ---------
False           whl              connectedk8s    C:\Users\somename\.azure\cliextensions\connectedk8s   False      1.2.0
In the following example:
If the `microsoft.flux` extension isn't already installed in the cluster, it will be installed.
-```console
+```azurecli
az k8s-configuration flux create -g flux-demo-rg -c flux-demo-arc -n gitops-demo --namespace gitops-demo -t connectedClusters --scope cluster -u https://github.com/fluxcd/flux2-kustomize-helm-example --branch main --kustomization name=infra path=./infrastructure prune=true --kustomization name=apps path=./apps/staging prune=true dependsOn=["infra"]
+```
+```output
Command group 'k8s-configuration flux' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus Warning! https url is being used without https auth params, ensure the repository url provided is not a private repo 'Microsoft.Flux' extension not found on the cluster, installing it now. This may take a few minutes...
Creating the flux configuration 'gitops-demo' in the cluster. This may take a fe
Show the configuration after allowing time for the reconciliations to finish.
-```console
+```azurecli
az k8s-configuration flux show -g flux-demo-rg -c flux-demo-arc -n gitops-demo -t connectedClusters
+```
+```output
Command group 'k8s-configuration flux' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus { "complianceState": "Compliant",
statefulset.apps/redis-master 1/1 95m
You can delete the Flux configuration by using the following command. This action deletes both the `fluxConfigurations` resource in Azure and the Flux configuration objects in the cluster. Because the Flux configuration was originally created with the `prune=true` parameter for the kustomization, all of the objects created in the cluster based on manifests in the Git repository will be removed when the Flux configuration is removed.
-```console
+```azurecli
az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-arc -n gitops-demo -t connectedClusters --yes ```
If the Flux extension was created automatically when the Flux configuration was
For an Azure Arc-enabled Kubernetes cluster, use this command:
-```console
+```azurecli
az k8s-extension delete -g flux-demo-rg -c flux-demo-arc -n flux -t connectedClusters --yes ```
The `source`, `helm`, `kustomize`, and `notification` Flux controllers are insta
Here's an example for including the [Flux image-reflector and image-automation controllers](https://fluxcd.io/docs/components/image/). If the Flux extension was created automatically when a Flux configuration was first created, the extension name will be `flux`.
-```console
+```azurecli
az k8s-extension create -g <cluster_resource_group> -c <cluster_name> -t <connectedClusters or managedClusters> --name flux --extension-type microsoft.flux --config image-automation-controller.enabled=true image-reflector-controller.enabled=true ```
For a description of all parameters that Flux supports, see the [official Flux d
You can see the full list of parameters that the `k8s-configuration flux` CLI command supports by using the `-h` parameter:
-```console
+```azurecli
az k8s-configuration flux -h
+```
+```output
Group az k8s-configuration flux : Commands to manage Flux v2 Kubernetes configurations. This command group is in preview and under development. Reference and support levels:
Commands:
Here are the parameters for the `k8s-configuration flux create` CLI command:
-```console
+```azurecli
az k8s-configuration flux create -h
+```
+```output
This command is from the following extension: k8s-configuration Command
kubectl create secret generic -n flux-config my-custom-secret --from-file=identi
For both cases, when you create the Flux configuration, use `--local-auth-ref my-custom-secret` in place of the other authentication parameters:
-```console
+```azurecli
az k8s-configuration flux create -g <cluster_resource_group> -c <cluster_name> -n <config_name> -t connectedClusters --scope cluster --namespace flux-config -u <git-repo-url> --kustomization name=kustomization1 --local-auth-ref my-custom-secret ``` Learn more about using a local Kubernetes secret with these authentication methods:
By using `az k8s-configuration flux create`, you can create one or more kustomiz
You can also use `az k8s-configuration flux kustomization` to create, update, list, show, and delete kustomizations in a Flux configuration:
-```console
+```azurecli
az k8s-configuration flux kustomization -h
+```
+```output
Group az k8s-configuration flux kustomization : Commands to manage Kustomizations associated with Flux v2 Kubernetes configurations.
Commands:
Here are the kustomization creation options:
-```console
+```azurecli
az k8s-configuration flux kustomization create -h
+```
+```output
This command is from the following extension: k8s-configuration Command
If you've been using Flux v1 in Azure Arc-enabled Kubernetes or AKS clusters and
Use these az CLI commands to find and then delete existing `sourceControlConfigurations` in a cluster:
-```console
+```azurecli
az k8s-configuration list --cluster-name <Arc or AKS cluster name> --cluster-type <connectedClusters OR managedClusters> --resource-group <resource group name> az k8s-configuration delete --name <configuration name> --cluster-name <Arc or AKS cluster name> --cluster-type <connectedClusters OR managedClusters> --resource-group <resource group name> ```
azure-arc Use Gitops With Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/use-gitops-with-helm.md
You can learn more about the HelmRelease in the official [Helm Operator document
Using the Azure CLI extension for `k8s-configuration`, link your connected cluster to the example Git repository. Give this configuration the name `azure-arc-sample` and deploy the Flux operator in the `arc-k8s-demo` namespace.
-```console
+```azurecli
az k8s-configuration create --name azure-arc-sample --cluster-name AzureArcTest1 --resource-group AzureArcTest --operator-instance-name flux --operator-namespace arc-k8s-demo --operator-params='--git-readonly --git-path=releases' --enable-helm-operator --helm-operator-chart-version='1.2.0' --helm-operator-params='--set helm.versions=v3' --repository-url https://github.com/Azure/arc-helm-demo.git --scope namespace --cluster-type connectedClusters ```
To customize the creation of the configuration, [learn about additional paramete
Using the Azure CLI, verify that the configuration was successfully created.
-```console
+```azurecli
az k8s-configuration show --name azure-arc-sample --cluster-name AzureArcTest1 --resource-group AzureArcTest --cluster-type connectedClusters ```
azure-cache-for-redis Cache Event Grid Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-event-grid-quickstart-cli.md
Last updated 1/5/2021 -+ # Quickstart: Route Azure Cache for Redis events to web endpoint with Azure CLI
azure-monitor Container Insights Enable Existing Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-existing-clusters.md
Title: Monitor an Azure Kubernetes Service (AKS) cluster deployed | Microsoft Do
description: Learn how to enable monitoring of an Azure Kubernetes Service (AKS) cluster with Container insights already deployed in your subscription. Last updated 09/12/2019-+ # Enable monitoring of Azure Kubernetes Service (AKS) cluster already deployed
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md
The Azure Diagnostics extension agents (also called WAD and LAD for Windows and
Connect the storage account to your Log Analytics workspace as a storage data source using [the Azure portal](../agents/diagnostics-extension-logs.md#collect-logs-from-azure-storage) or by calling the [Storage Insights API](/rest/api/loganalytics/storage-insights/create-or-update). Supported data types:
-* Syslog
-* Windows events
+* [Syslog](../agents/data-sources-syslog.md)
+* [Windows events](../agents/data-sources-windows-events.md)
* Service Fabric
-* ETW Events
-* IIS Logs
+* [ETW Events](../agents/data-sources-event-tracing-windows.md)
+* [IIS Logs](../agents/data-sources-iis-logs.md)
## Using Private links Customer-managed storage accounts are used to ingest Custom logs or IIS logs when private links are used to connect to Azure Monitor resources. The ingestion process of these data types first uploads logs to an intermediary Azure Storage account, and only then ingests them to a workspace.
azure-resource-manager Rollback On Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/rollback-on-error.md
Title: Roll back on error to successful deployment
description: Specify that a failed deployment should roll back to a successful deployment. Last updated 02/02/2021 -+ # Rollback on error to successful deployment
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-specs.md
Title: Create & deploy template specs
description: Describes how to create template specs and share them with other users in your organization. Last updated 01/12/2022-+ # Azure Resource Manager template specs
azure-signalr Signalr Quickstart Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-csharp.md
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
```csharp using System; using System.IO;
+ using System.Linq;
using System.Net.Http; using System.Threading.Tasks; using Microsoft.AspNetCore.Http;
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
using Microsoft.Azure.WebJobs.Extensions.Http; using Microsoft.Azure.WebJobs.Extensions.SignalRService; using Newtonsoft.Json;-
+
namespace CSharp { public static class Function { private static HttpClient httpClient = new HttpClient();-
+ private static string Etag = string.Empty;
+ private static string StarCount = "0";
+
[FunctionName("index")]
- public static IActionResult Index([HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req, ExecutionContext context)
+ public static IActionResult GetHomePage([HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req, ExecutionContext context)
{ var path = Path.Combine(context.FunctionAppDirectory, "content", "index.html"); return new ContentResult
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
ContentType = "text/html", }; }-
+
[FunctionName("negotiate")]
- public static SignalRConnectionInfo Negotiate(
+ public static SignalRConnectionInfo Negotiate(
[HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req,
- [SignalRConnectionInfo(HubName = "serverlessSample")] SignalRConnectionInfo connectionInfo)
+ [SignalRConnectionInfo(HubName = "serverless")] SignalRConnectionInfo connectionInfo)
{ return connectionInfo; }-
+
[FunctionName("broadcast")] public static async Task Broadcast([TimerTrigger("*/5 * * * * *")] TimerInfo myTimer,
- [SignalR(HubName = "serverlessSample")] IAsyncCollector<SignalRMessage> signalRMessages)
+ [SignalR(HubName = "serverless")] IAsyncCollector<SignalRMessage> signalRMessages)
{ var request = new HttpRequestMessage(HttpMethod.Get, "https://api.github.com/repos/azure/azure-signalr"); request.Headers.UserAgent.ParseAdd("Serverless");
+ request.Headers.Add("If-None-Match", Etag);
var response = await httpClient.SendAsync(request);
- var result = JsonConvert.DeserializeObject<GitResult>(await response.Content.ReadAsStringAsync());
+ if (response.Headers.Contains("Etag"))
+ {
+ Etag = response.Headers.GetValues("Etag").First();
+ }
+ if (response.StatusCode == System.Net.HttpStatusCode.OK)
+ {
+ var result = JsonConvert.DeserializeObject<GitResult>(await response.Content.ReadAsStringAsync());
+ StarCount = result.StarCount;
+ }
+
await signalRMessages.AddAsync( new SignalRMessage { Target = "newMessage",
- Arguments = new[] { $"Current star count of https://github.com/Azure/azure-signalr is: {result.StarCount}" }
+ Arguments = new[] { $"Current star count of https://github.com/Azure/azure-signalr is: {StarCount}" }
}); }-
+
private class GitResult { [JsonRequired]
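Across the C#, Java, JavaScript, and Python samples in these diffs, the change is the same conditional-request pattern: remember the last `ETag`, send it back as `If-None-Match`, and only re-parse the body when the server answers `200 OK` (a `304 Not Modified` has no body, so the cached value is reused). A minimal language-neutral sketch of that pattern follows; `fetch` here is a hypothetical stand-in for the real HTTP call, not part of any sample above:

```python
# Sketch of the ETag caching pattern used in the broadcast samples.
# `fetch(headers)` stands in for the real HTTP request and is assumed to
# return (status_code, response_headers, parsed_body).

class StarCountCache:
    def __init__(self):
        self.etag = ""
        self.star_count = 0

    def refresh(self, fetch):
        status, headers, body = fetch({"If-None-Match": self.etag})
        # Remember the latest ETag so the next poll can be conditional.
        if "ETag" in headers:
            self.etag = headers["ETag"]
        # 304 Not Modified carries no body; keep the cached count.
        if status == 200:
            self.star_count = body["stargazers_count"]
        return self.star_count

cache = StarCountCache()
# First poll: server returns fresh data and an ETag.
cache.refresh(lambda h: (200, {"ETag": 'W/"abc"'}, {"stargazers_count": 42}))
# Second poll: ETag matches, server answers 304, cached value is reused.
count = cache.refresh(lambda h: (304, {"ETag": 'W/"abc"'}, None))
print(count)  # -> 42
```

This is why the samples hoist `Etag`/`StarCount` into static or global state: the function instance may be reused between timer invocations, so the cached values survive across polls.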
azure-signalr Signalr Quickstart Azure Functions Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-java.md
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
```java package com.signalr;
-
+
import com.google.gson.Gson; import com.microsoft.azure.functions.ExecutionContext; import com.microsoft.azure.functions.HttpMethod;
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
import java.util.Optional; public class Function {
+ private static String Etag = "";
+ private static String StarCount = "0";
+
@FunctionName("index") public HttpResponseMessage run( @HttpTrigger(
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
String text = IOUtils.toString(inputStream, StandardCharsets.UTF_8.name()); return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "text/html").body(text).build(); }
-
+
@FunctionName("negotiate") public SignalRConnectionInfo negotiate( @HttpTrigger(
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
@SignalROutput(name = "$return", hubName = "serverless") public SignalRMessage broadcast( @TimerTrigger(name = "timeTrigger", schedule = "*/5 * * * * *") String timerInfo) throws IOException, InterruptedException {
-
HttpClient client = HttpClient.newHttpClient();
- HttpRequest req = HttpRequest.newBuilder().uri(URI.create("https://api.github.com/repos/azure/azure-signalr")).header("User-Agent", "serverless").build();
+ HttpRequest req = HttpRequest.newBuilder().uri(URI.create("https://api.github.com/repos/azure/azure-signalr")).header("User-Agent", "serverless").header("If-None-Match", Etag).build();
HttpResponse<String> res = client.send(req, BodyHandlers.ofString());
- Gson gson = new Gson();
- GitResult result = gson.fromJson(res.body(), GitResult.class);
- return new SignalRMessage("newMessage", "Current star count of https://github.com/Azure/azure-signalr is:".concat(result.stargazers_count));
+ if (res.headers().firstValue("Etag").isPresent())
+ {
+ Etag = res.headers().firstValue("Etag").get();
+ }
+ if (res.statusCode() == 200)
+ {
+ Gson gson = new Gson();
+ GitResult result = gson.fromJson(res.body(), GitResult.class);
+ StarCount = result.stargazers_count;
+ }
+
+ return new SignalRMessage("newMessage", "Current star count of https://github.com/Azure/azure-signalr is:".concat(StarCount));
} class GitResult {
azure-signalr Signalr Quickstart Azure Functions Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-javascript.md
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
```javascript var https = require('https');
+ var etag = '';
+ var star = 0;
+
module.exports = function (context) { var req = https.request("https://api.github.com/repos/azure/azure-signalr", { method: 'GET',
- headers: {'User-Agent': 'serverless'}
+ headers: {'User-Agent': 'serverless', 'If-None-Match': etag}
}, res => {
+ if (res.headers['etag']) {
+ etag = res.headers['etag']
+ }
+
var body = ""; res.on('data', data => { body += data; }); res.on("end", () => {
- var jbody = JSON.parse(body);
+ if (res.statusCode === 200) {
+ var jbody = JSON.parse(body);
+ star = jbody['stargazers_count'];
+ }
+
context.bindings.signalRMessages = [{ "target": "newMessage",
- "arguments": [ `Current star count of https://github.com/Azure/azure-signalr is: ${jbody['stargazers_count']}` ]
+ "arguments": [ `Current star count of https://github.com/Azure/azure-signalr is: ${star}` ]
}] context.done(); });
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
context.done(); }); req.end();
- }
+ }
``` 3. The client interface of this sample is a web page. Since we read HTML content from `content/index.html` in the `index` function, create a new file `index.html` in the `content` directory under your project root folder and copy the following content.
azure-signalr Signalr Quickstart Azure Functions Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-python.md
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
import azure.functions as func
+ etag = ''
+ star_count = 0
def main(myTimer: func.TimerRequest, signalRMessages: func.Out[str]) -> None:
- headers = {'User-Agent': 'serverless'}
+ global etag
+ global star_count
+ headers = {'User-Agent': 'serverless', 'If-None-Match': etag}
 res = requests.get('https://api.github.com/repos/azure/azure-signalr', headers=headers)
- jres = res.json()
+ if res.headers.get('ETag'):
+     etag = res.headers.get('ETag')
+ if res.status_code == 200:
+     jres = res.json()
+     star_count = jres['stargazers_count']
+
 signalRMessages.set(json.dumps({ 'target': 'newMessage',
- 'arguments': [ 'Current star count of https://github.com/Azure/azure-signalr is: ' + str(jres['stargazers_count']) ]
+ 'arguments': [ 'Current star count of https://github.com/Azure/azure-signalr is: ' + str(star_count) ]
})) ```
azure-sql Active Geo Replication Configure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-geo-replication-configure-portal.md
description: Learn how to configure geo-replication for an SQL database using th
-+
azure-sql Authentication Azure Ad Only Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-only-authentication.md
Last updated 02/14/2022-+ # Azure AD-only authentication with Azure SQL
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/automated-backups-overview.md
description: Azure SQL Database and Azure SQL Managed Instance automatically cre
-+
azure-sql Elastic Query Getting Started Vertical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/elastic-query-getting-started-vertical.md
INSERT INTO [dbo].[CustomerInformation] ([CustomerID], [CustomerName], [Company]
``` The "master_key_password" is a strong password of your choosing used to encrypt the connection credentials.
- The "username" and "password" should be the username and password used to log in into the Customers database.
+ The "username" and "password" should be the username and password used to log in to the Customers database (create a new user in the Customers database if one does not already exist).
Authentication using Azure Active Directory with elastic queries is not currently supported. ### External data sources
azure-sql Failover Group Add Single Database Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/failover-group-add-single-database-tutorial.md
azure-sql Firewall Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/firewall-configure.md
azure-sql Geo Distributed Application Configure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/geo-distributed-application-configure-tutorial.md
azure-sql Serverless Tier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/serverless-tier-overview.md
azure-sql Transparent Data Encryption Byok Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/transparent-data-encryption-byok-key-rotation.md
azure-sql Transparent Data Encryption Byok Remove Tde Protector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/transparent-data-encryption-byok-remove-tde-protector.md
azure-sql Instance Pools Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/instance-pools-configure.md
description: This article describes how to create and manage Azure SQL Managed I
-+
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/resource-limits.md
Storage for database backups is allocated to support the [point-in-time restore
In the General Purpose service tier, every database file gets dedicated IOPS and throughput that depend on the file size. Larger files get more IOPS and throughput. IO characteristics of database files are shown in the following table:
-| **File size** | **>=0 and <=128 GiB** | **>128 and <= 512 GiB** | **>0.5 and <=1 TiB** | **>1 and <=2 TiB** | **>2 and <=4 TiB** | **>4 and <=8 TiB** | **>8 and <=16 TiB** |
-|:--|:--|:--|:--|:--|:--|:--|:--|
-| IOPS per file | 500 | 2300 | 5000 | 7500 | 7500 | 12,500 | |
-| Throughput per file | 100 MiB/s | 150 MiB/s | 200 MiB/s | 250 MiB/s| 250 MiB/s | 480 MiB/s | |
+| **File size** | **>=0 and <=128 GiB** | **>128 and <= 512 GiB** | **>0.5 and <=1 TiB** | **>1 and <=2 TiB** | **>2 and <=4 TiB** | **>4 and <=8 TiB** |
+|:--|:--|:--|:--|:--|:--|:--|
+| IOPS per file | 500 | 2300 | 5000 | 7500 | 7500 | 12,500 |
+| Throughput per file | 100 MiB/s | 150 MiB/s | 200 MiB/s | 250 MiB/s| 250 MiB/s | 250 MiB/s |
If you notice high IO latency on some database file or you see that IOPS/throughput is reaching the limit, you might improve performance by [increasing the file size](https://techcommunity.microsoft.com/t5/Azure-SQL-Database/Increase-data-file-size-to-improve-HammerDB-workload-performance/ba-p/823337).
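The corrected table above maps each database file's size to its dedicated IOPS and throughput. The tiering can be made explicit with a small lookup; this is a sketch with tier boundaries transcribed from the table (sizes in GiB, throughput in MiB/s), and the function name is illustrative only:

```python
# Sketch of the General Purpose per-file IO limits table above.
# Each tier is (upper bound in GiB, IOPS, throughput in MiB/s); sizes
# above the last bound fall outside the documented range.
TIERS = [
    (128,  500,   100),
    (512,  2300,  150),
    (1024, 5000,  200),
    (2048, 7500,  250),
    (4096, 7500,  250),
    (8192, 12500, 250),
]

def file_io_limits(size_gib):
    for upper, iops, mibps in TIERS:
        if size_gib <= upper:
            return iops, mibps
    raise ValueError("file size exceeds the documented 8 TiB tier")

print(file_io_limits(100))   # -> (500, 100)
print(file_io_limits(3000))  # -> (7500, 250)
```

This also makes the advice in the paragraph above concrete: growing a 100 GiB file past 128 GiB, for example, moves it from 500 to 2300 IOPS.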
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-build-chat.md
Copy the fetched **ConnectionString** and it will be used later in this tutorial
# [C#](#tab/csharp)
-* [ASP.NET Core 3.1 or above](/aspnet/core)
+* [ASP.NET Core 6](/aspnet/core)
# [JavaScript](#tab/javascript)
Copy the fetched **ConnectionString** and it will be used later in this tutorial
# [Java](#tab/java)
-- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above
-- [Apache Maven](https://maven.apache.org/download.cgi)
+* [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above
+* [Apache Maven](https://maven.apache.org/download.cgi)
In this tutorial, we'll build a real-time chat web application. In a real web ap
# [C#](#tab/csharp)
-We'll use [ASP.NET Core](/aspnet/core) to host the web pages and handle incoming requests.
+We'll use [ASP.NET Core 6](/aspnet/core) to host the web pages and handle incoming requests.
First let's create an empty ASP.NET Core app.
First let's create an empty ASP.NET Core app.
dotnet new web dotnet add package Microsoft.Azure.WebPubSub.AspNetCore --version 1.0.0-beta.3 ```
+2. Replace the default app.MapGet() in Program.cs with the following code snippet.
-2. Then add `app.UseStaticFiles();` before `app.UseRouting();` in `Startup.cs` to support static files. Remove the default `endpoints.MapGet` inside `app.UseEndpoints`.
-
- ```csharp
- public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
+ ``` csharp
+ if (app.Environment.IsDevelopment())
{
- if (env.IsDevelopment())
- {
- app.UseDeveloperExceptionPage();
- }
+ app.UseDeveloperExceptionPage();
+ }
- app.UseStaticFiles();
- app.UseRouting();
+ app.UseStaticFiles();
+ app.UseRouting();
- app.UseEndpoints(endpoints =>
- {
- });
- }
+ app.UseEndpoints(endpoints =>
+ {
+ });
``` 3. Also create an HTML file and save it as `wwwroot/index.html`; we'll use it for the UI of the chat app later. ```html <html>
- <body>
- <h1>Azure Web PubSub Chat</h1>
- </body>
-
+ <body>
+ <h1>Azure Web PubSub Chat</h1>
+ </body>
</html> ```-
+
You can test the server by running `dotnet run --urls http://localhost:8080` and accessing http://localhost:8080/index.html in a browser. You may remember that in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md) the subscriber uses an API in the Web PubSub SDK to generate an access token from the connection string and uses it to connect to the service. This is usually not safe in a real-world application, because the connection string has high privilege to do any operation against the service, so you don't want to share it with any client. Let's change this access token generation process to a REST API on the server side, so a client can call this API to request an access token every time it needs to connect, without needing to hold the connection string.
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
dotnet add package Microsoft.Extensions.Azure ```
-2. Add a `SampleChatHub` class to handle hub events. And DI the service middleware and service client inside `ConfigureServices()`. Don't forget to replace `<connection_string>` with the one of your services.
+2. Add a `SampleChatHub` class to handle hub events. Add DI for the service middleware and service client. Don't forget to replace `<connection_string>` with your service's connection string.
```csharp
- public void ConfigureServices(IServiceCollection services)
+ using Microsoft.Azure.WebPubSub.AspNetCore;
+
+ var builder = WebApplication.CreateBuilder(args);
+
+ builder.Services.AddWebPubSub(
+ o => o.ServiceEndpoint = new ServiceEndpoint("<connection_string>"))
+ .AddWebPubSubServiceClient<SampleChatHub>();
+
+ var app = builder.Build();
+
+ if (app.Environment.IsDevelopment())
{
- services.AddWebPubSub(o => o.ServiceEndpoint = new ServiceEndpoint("<connection_string>"))
- .AddWebPubSubServiceClient<SampleChatHub>();
+ app.UseDeveloperExceptionPage();
}
- public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
- {
- if (env.IsDevelopment())
- {
- app.UseDeveloperExceptionPage();
- }
+ app.UseDefaultFiles();
+ app.UseStaticFiles();
+ app.UseRouting();
- app.UseStaticFiles();
- app.UseRouting();
+ app.UseEndpoints(endpoints =>
+ {
+ });
- app.UseEndpoints(endpoints =>
- {
- });
- }
+ app.Run();
- private sealed class SampleChatHub : WebPubSubHub
+ sealed class SampleChatHub : WebPubSubHub
{ } ```
- `AddWebPubSubServiceClient<THub>()` is used to inject the service client `WebPubSubServiceClient<THub>`, with which we can use in negotiation step to generate client connection token and in hub methods to invoke service REST APIs when hub events are triggered.
+    `AddWebPubSubServiceClient<THub>()` injects the service client `WebPubSubServiceClient<THub>`, which we can use in the negotiation step to generate client connection tokens and in hub methods to invoke service REST APIs when hub events are triggered.
-3. Add a `/negotiate` API to the server inside `app.UseEndpoints` to generate the token.
+4. Add a `/negotiate` API to the server inside `app.UseEndpoints` to generate the token.
```csharp app.UseEndpoints(endpoints =>
- {
- endpoints.MapGet("/negotiate", async context =>
- {
- var id = context.Request.Query["id"];
- if (id.Count != 1)
- {
- context.Response.StatusCode = 400;
- await context.Response.WriteAsync("missing user id");
- return;
- }
- var serviceClient = context.RequestServices.GetRequiredService<WebPubSubServiceClient<SampleChatHub>>();
- await context.Response.WriteAsync(serviceClient.GetClientAccessUri(userId: id).AbsoluteUri);
- });
+ {
+ endpoints.MapGet("/negotiate", async (WebPubSubServiceClient<SampleChatHub> serviceClient, HttpContext context) =>
+ {
+ var id = context.Request.Query["id"];
+ if (id.Count != 1)
+ {
+ context.Response.StatusCode = 400;
+ await context.Response.WriteAsync("missing user id");
+ return;
+ }
+ await context.Response.WriteAsync(serviceClient.GetClientAccessUri(userId: id).AbsoluteUri);
+ });
}); ```
- This token generation code is similar to the one we used in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md), except we pass one more argument (`userId`) when generating the token. User ID can be used to identify the identity of client so when you receive a message you know where the message is coming from.
+    This token generation code is similar to the one we used in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md), except we pass one more argument (`userId`) when generating the token. The user ID identifies the client, so when you receive a message you know where it came from.
- You can test this API by running `dotnet run --urls http://localhost:8080` and accessing `http://localhost:8080/negotiate?id=<user-id>` and it will give you the full url of the Azure Web PubSub with an access token.
+    You can test this API by running `dotnet run --urls http://localhost:8080` and accessing `http://localhost:8080/negotiate?id=<user-id>`; it returns the full URL of the Azure Web PubSub service with an access token.
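The `/negotiate` handler in the diff validates that exactly one `id` query parameter was supplied before issuing a client access URI. A minimal sketch of that validation logic, independent of the web framework; `make_access_uri` is a hypothetical stand-in for the SDK's `GetClientAccessUri` call, not a real Web PubSub API:

```python
# Sketch of the /negotiate validation from the C# sample.
# `make_access_uri` is a hypothetical stand-in for GetClientAccessUri.

def negotiate(query, make_access_uri):
    ids = query.get("id", [])
    # Exactly one user id must be supplied, mirroring `id.Count != 1`.
    if len(ids) != 1:
        return 400, "missing user id"
    return 200, make_access_uri(user_id=ids[0])

status, body = negotiate({"id": ["alice"]},
                         lambda user_id: f"wss://example/client?user={user_id}")
print(status, body)          # -> 200 wss://example/client?user=alice
print(negotiate({}, lambda user_id: ""))  # -> (400, 'missing user id')
```

Rejecting a missing or repeated `id` up front keeps the token endpoint from ever minting an anonymous or ambiguous connection token.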
-4. Then update `index.html` to include the following script to get the token from server and connect to service.
+5. Then update `index.html` to include the following script to get the token from the server and connect to the service.
```html <html>
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
</html> ```
- If you are using Chrome, you can test it by opening the home page, input your user name. Press F12 to open the Developer Tools window, switch to **Console** table and you'll see `connected` being printed in browser console.
+    If you are using Chrome, you can test it by opening the home page and entering your user name. Press F12 to open the Developer Tools window, switch to the **Console** tab, and you'll see `connected` printed in the browser console.
# [JavaScript](#tab/javascript)
Events are delivered to server in the form of Webhook. Webhook is served and exp
Azure Web PubSub follows [CloudEvents](./reference-cloud-events.md) to describe the event data. # [C#](#tab/csharp)+ Here we're using the Web PubSub middleware SDK; there is already an implementation to parse and process the CloudEvents schema, so we don't need to deal with these details. Instead, we can focus on the inner business logic in the hub methods.
-1. Add event handlers inside `UseEndpoints`. Specify the endpoint path for the events, let's say `/eventhandler`.
+1. Add event handlers inside `UseEndpoints`. Specify the endpoint path for the events, let's say `/eventhandler`. The `UseEndpoints` call should look as follows:
```csharp app.UseEndpoints(endpoints => {
+ endpoints.MapGet("/negotiate", async (WebPubSubServiceClient<SampleChatHub> serviceClient, HttpContext context) =>
+ {
+ var id = context.Request.Query["id"];
+ if (id.Count != 1)
+ {
+ context.Response.StatusCode = 400;
+ await context.Response.WriteAsync("missing user id");
+ return;
+ }
+ await context.Response.WriteAsync(serviceClient.GetClientAccessUri(userId: id).AbsoluteUri);
+ });
+ endpoints.MapWebPubSubHub<SampleChatHub>("/eventhandler/{*path}"); }); ``` 2. Go to the `SampleChatHub` we created in the previous step. Add a constructor that takes a `WebPubSubServiceClient<SampleChatHub>` we can use to invoke the service, and override the `OnConnectedAsync()` method to respond when the `connected` event is triggered.+ ```csharp
- private sealed class SampleChatHub : WebPubSubHub
+ sealed class SampleChatHub : WebPubSubHub
{ private readonly WebPubSubServiceClient<SampleChatHub> _serviceClient;
Besides system events like `connected` or `disconnected`, client can also send m
# [C#](#tab/csharp)
-Implement the `OnMessageReceivedAsync()` method in `SampleChatHub`.
+Implement the OnMessageReceivedAsync() method in SampleChatHub.
1. Handle message event. ```csharp
- private sealed class SampleChatHub : WebPubSubHub
+ sealed class SampleChatHub : WebPubSubHub
{ private readonly WebPubSubServiceClient<SampleChatHub> _serviceClient;
Implement the `OnMessageReceivedAsync()` method in `SampleChatHub`.
} ```
- This event handler uses `WebPubSubServiceClient.SendToAllAsync()` to broadcast the received message to all clients. You can see in the end we returned `UserEventResponse`, which contains a message directly to the caller and make the WebHook request success. If you have extra logic to validate and would like to break this call, you can throw an exception here. The middleware will deliver the exception message to service and service will drop current client connection.
+    This event handler uses `WebPubSubServiceClient.SendToAllAsync()` to broadcast the received message to all clients. You can see that at the end we return `UserEventResponse`, which carries a message directly to the caller and makes the webhook request succeed. If you have extra validation logic and would like to break this call, you can throw an exception here. The middleware will deliver the exception message to the service, and the service will drop the current client connection. Do not forget to include the `using Microsoft.Azure.WebPubSub.Common;` statement at the beginning of the `Program.cs` file.
2. Update `index.html` to add the logic to send messages from the user to the server and display received messages in the page.
Implement the `OnMessageReceivedAsync()` method in `SampleChatHub`.
Now run the server using `dotnet run --urls http://localhost:8080` and open multiple browser instances to access http://localhost:8080/index.html, then you can chat with each other.
-The complete code sample of this tutorial can be found [here][code-csharp].
+The complete code sample of this tutorial can be found [here][code-csharp-net6]; the ASP.NET Core 3.1 version is [here][code-csharp].
# [JavaScript](#tab/javascript)
Check other tutorials to further dive into how to use the service.
[code-js]: https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/chatapp/ [code-java]: https://github.com/Azure/azure-webpubsub/tree/main/samples/java/chatapp/ [code-csharp]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp/
+[code-csharp-net6]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp-net6/
backup Backup Azure Afs Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-afs-automation.md
Title: Back up an Azure file share by using PowerShell description: In this article, learn how to back up an Azure Files file share by using the Azure Backup service and PowerShell. Previously updated : 08/20/2019 Last updated : 02/11/2022 +++ # Back up an Azure file share by using PowerShell
Set up PowerShell as follows:
> [!NOTE] > The minimum PowerShell version required for backup of Azure file shares is Az.RecoveryServices 2.6.0. Using the latest version, or at least the minimum version, helps you avoid issues with existing scripts. Install the minimum version by using the following PowerShell command: >
- > ```powershell
+ > ```azurepowershell-interactive
> Install-module -Name Az.RecoveryServices -RequiredVersion 2.6.0 > ``` 2. Find the PowerShell cmdlets for Azure Backup by using this command:
- ```powershell
+ ```azurepowershell-interactive
Get-Command *azrecoveryservices* ```
Set up PowerShell as follows:
6. Associate the subscription that you want to use with the account, because an account can have several subscriptions:
- ```powershell
+ ```azurepowershell-interactive
Select-AzSubscription -SubscriptionName $SubscriptionName ``` 7. If you're using Azure Backup for the first time, use the **Register-AzResourceProvider** cmdlet to register the Azure Recovery Services provider with your subscription:
- ```powershell
+ ```azurepowershell-interactive
Register-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices" ``` 8. Verify that the providers registered successfully:
- ```powershell
+ ```azurepowershell-interactive
Get-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices" ```
Follow these steps to create a Recovery Services vault:
1. If you don't have an existing resource group, create a new one by using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet. In this example, we create a resource group in the West US region:
- ```powershell
+ ```azurepowershell-interactive
New-AzResourceGroup -Name "test-rg" -Location "West US" ``` 1. Use the [New-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/new-azrecoveryservicesvault) cmdlet to create the vault. Specify the same location for the vault that you used for the resource group.
- ```powershell
+ ```azurepowershell-interactive
New-AzRecoveryServicesVault -Name "testvault" -ResourceGroupName "test-rg" -Location "West US" ```
Follow these steps to create a Recovery Services vault:
To view all vaults in the subscription, use [Get-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/get-azrecoveryservicesvault):
-```powershell
+```azurepowershell-interactive
Get-AzRecoveryServicesVault ``` The output is similar to the following. Note that the output provides the associated resource group and location.
-```powershell
+```azurepowershell-interactive
Name : Contoso-vault
ID   : /subscriptions/1234
Type : Microsoft.RecoveryServices/vaults
The vault context is the type of data protected in the vault. Set it by using [S
The following example sets the vault context for **testvault**:
-```powershell
+```azurepowershell-interactive
Get-AzRecoveryServicesVault -Name "testvault" | Set-AzRecoveryServicesVaultContext ```
Get-AzRecoveryServicesVault -Name "testvault" | Set-AzRecoveryServicesVaultConte
We plan to deprecate the vault context setting in accordance with Azure PowerShell guidelines. Instead, you can store or fetch the vault ID, and pass it to relevant commands. If you haven't set the vault context or you want to specify the command to run for a certain vault, pass the vault ID as `-vaultID` to all relevant commands as follows:
-```powershell
+```azurepowershell-interactive
$vaultID = Get-AzRecoveryServicesVault -ResourceGroupName "Contoso-docs-rg" -Name "testvault" | select -ExpandProperty ID
New-AzRecoveryServicesBackupProtectionPolicy -Name "NewAFSPolicy" -WorkloadType "AzureFiles" -RetentionPolicy $retPol -SchedulePolicy $schPol -VaultID $vaultID
```
New-AzRecoveryServicesBackupProtectionPolicy -Name "NewAFSPolicy" -WorkloadType
A backup policy specifies the schedule for backups, and how long backup recovery points should be kept.
-A backup policy is associated with at least one retention policy. A retention policy defines how long a recovery point is kept before it's deleted. You can configure backups with daily, weekly, monthly, or yearly retention.
+A backup policy is associated with at least one retention policy. A retention policy defines how long a recovery point is kept before it's deleted. You can configure backups with daily, weekly, monthly, or yearly retention. With a multiple-backups policy, you can also configure hourly backups and their retention.
+
+**Choose a policy type**:
+
+# [Daily backup policy](#tab/daily-backup-policy)
Here are some cmdlets for backup policies:
Here are some cmdlets for backup policies:
By default, a start time is defined in the schedule policy object. Use the following example to change the start time to the desired start time. The desired start time should be in Universal Coordinated Time (UTC). The example assumes that the desired start time is 01:00 AM UTC for daily backups.
-```powershell
+```azurepowershell-interactive
$schPol = Get-AzRecoveryServicesBackupSchedulePolicyObject -WorkloadType "AzureFiles"
$UtcTime = Get-Date -Date "2019-03-20 01:30:00Z"
$UtcTime = $UtcTime.ToUniversalTime()
$schpol.ScheduleRunTimes[0] = $UtcTime
The following example stores the schedule policy and the retention policy in variables. It then uses those variables as parameters for a new policy (**NewAFSPolicy**). **NewAFSPolicy** takes a daily backup and retains it for 30 days.
-```powershell
+```azurepowershell-interactive
$schPol = Get-AzRecoveryServicesBackupSchedulePolicyObject -WorkloadType "AzureFiles"
$retPol = Get-AzRecoveryServicesBackupRetentionPolicyObject -WorkloadType "AzureFiles"
New-AzRecoveryServicesBackupProtectionPolicy -Name "NewAFSPolicy" -WorkloadType "AzureFiles" -RetentionPolicy $retPol -SchedulePolicy $schPol
New-AzRecoveryServicesBackupProtectionPolicy -Name "NewAFSPolicy" -WorkloadType
The output is similar to the following:
-```powershell
+```azurepowershell-interactive
Name         WorkloadType BackupManagementType BackupTime            DaysOfWeek
----         ------------ -------------------- ----------            ----------
NewAFSPolicy AzureFiles   AzureStorage         10/24/2019 1:30:00 AM
```
+# [Multiple backups policy](#tab/multiple-backups-policy)
+
+To create a backup policy that configures multiple backups a day, follow these steps:
+
+1. Fetch the schedule policy object using the following cmdlet:
+
+ ```azurepowershell-interactive
+ $schPol=Get-AzRecoveryServicesBackupSchedulePolicyObject -WorkloadType AzureFiles -BackupManagementType AzureStorage -ScheduleRunFrequency Hourly
+ $schPol
+
+ ```
+
+ The output is similar to:
+
+ ```azurepowershell
+ ScheduleRunFrequency : Hourly
+ ScheduleRunDays :
+ ScheduleRunTimes :
+ ScheduleInterval : 8
+ ScheduleWindowStartTime : 12/22/2021 8:00:00 AM
+ ScheduleWindowDuration : 16
+ ScheduleRunTimeZone : India Standard Time
+
+ ```
+
+1. Set the different parameters of the schedule as required.
+
+ ```azurepowershell-interactive
+ $schpol.ScheduleInterval=4
+ $schpol.ScheduleWindowDuration=12
+
+ ```
+
+    The previous configuration schedules four backups a day in a window of 8 AM to 8 PM (8 AM + 12 hours). You can also set the time zone as shown in the following cmdlet:
+
+ ```azurepowershell-interactive
+ $timeZone = Get-TimeZone -ListAvailable | Where-Object { $_.Id -match "Russia Time Zone 11" }
+ $schPol.ScheduleRunTimeZone=$timeZone.Id
+ ```
+
+1. Fetch the retention policy object using the following cmdlet:
+
+ ```azurepowershell-interactive
+ $retPol=Get-AzRecoveryServicesBackupRetentionPolicyObject -WorkloadType AzureFiles -BackupManagementType AzureStorage -ScheduleRunFrequency Hourly
+ ```
+
+1. Set the retention values as required.
+
+ ```azurepowershell-interactive
+ $retPol.DailySchedule.DurationCountInDays=15
+ ```
+
+1. Create a new backup policy using [New-AzRecoveryServicesBackupProtectionPolicy](/powershell/module/az.recoveryservices/new-azrecoveryservicesbackupprotectionpolicy).
+
+ You can pass the schedule and retention policy objects set previously as inputs.
+
+ ```azurepowershell-interactive
+ New-AzRecoveryServicesBackupProtectionPolicy -Name "FilesPolicytesting" -WorkloadType AzureFiles -RetentionPolicy $retpol -SchedulePolicy $schpol
+ ```
+
+ The output displays the policy configuration:
+
+ ```azurepowershell
+ Name               WorkloadType BackupManagementType ScheduleFrequency BackupTime WindowStartTime (UTC) Interval WindowDuration (Hours) TimeZone
+ ----               ------------ -------------------- ----------------- ---------- --------------------- -------- ---------------------- --------
+ FilesPolicytesting AzureFiles   AzureStorage         Hourly                       12/22/2021 8:00:00 AM 4        12                     Russia Time Zone 11
+
+ ```
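The hourly schedule above can be sanity-checked offline. The following Python sketch is illustrative only (it is not part of any Azure SDK); it enumerates the snapshot times implied by `ScheduleWindowStartTime`, `ScheduleInterval`, and `ScheduleWindowDuration`, assuming one snapshot at the window start and one per interval through the end of the window, as described in the steps above:

```python
from datetime import datetime, timedelta

def backup_times(window_start, interval_hours, window_duration_hours):
    """Enumerate snapshot times: one at the window start, then one every
    `interval_hours` until the window (start + duration) is exhausted."""
    times = []
    offset = 0
    while offset <= window_duration_hours:
        times.append(window_start + timedelta(hours=offset))
        offset += interval_hours
    return times

# ScheduleInterval = 4 in a 12-hour window starting at 8:00 AM
times = backup_times(datetime(2021, 12, 22, 8, 0), 4, 12)
print([t.strftime("%H:%M") for t in times])  # ['08:00', '12:00', '16:00', '20:00']
```

With an interval of 4 hours and a 12-hour window starting at 8 AM, this yields the four daily snapshots (8 AM to 8 PM) that the policy above configures.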
## Enable backup

After you define the backup policy, you can enable protection for the Azure file share by using the policy.
You fetch the relevant policy object by using [Get-AzRecoveryServicesBackupProte
The following example retrieves policies for the workload type **AzureFiles**:
-```powershell
+```azurepowershell-interactive
Get-AzRecoveryServicesBackupProtectionPolicy -WorkloadType "AzureFiles"
```

The output is similar to the following:
-```powershell
+```azurepowershell-interactive
Name     WorkloadType BackupManagementType BackupTime            DaysOfWeek
----     ------------ -------------------- ----------            ----------
dailyafs AzureFiles   AzureStorage         1/10/2018 12:30:00 AM
```
The following example retrieves the backup policy named **dailyafs**:
-```powershell
+```azurepowershell-interactive
$afsPol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "dailyafs"
```
Enable protection by using [Enable-AzRecoveryServicesBackupProtection](/powershe
The following example enables protection for the Azure file share **testAzureFS** in storage account **testStorageAcct**, with the policy **dailyafs**:
-```powershell
+```azurepowershell-interactive
Enable-AzRecoveryServicesBackupProtection -StorageAccountName "testStorageAcct" -Name "testAzureFS" -Policy $afsPol
```
We recommend that you list items and then retrieve their unique name from the na
>
> Not installing the minimum version might result in a failure of existing scripts. Install the minimum version of the module by using the following command:
>
->```powershell
+>```azurepowershell-interactive
>Install-module -Name Az.RecoveryServices -RequiredVersion 2.6.0
>```
Use [Backup-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices
Run the on-demand backup as follows:
-```powershell
+```azurepowershell-interactive
$afsContainer = Get-AzRecoveryServicesBackupContainer -FriendlyName "testStorageAcct" -ContainerType AzureStorage
$afsBkpItem = Get-AzRecoveryServicesBackupItem -Container $afsContainer -WorkloadType "AzureFiles" -FriendlyName "testAzureFS"
$job = Backup-AzRecoveryServicesBackupItem -Item $afsBkpItem
```
The command returns a job with an ID to be tracked, as shown in the following example:
-```powershell
+```azurepowershell-interactive
WorkloadName Operation Status    StartTime             EndTime               JobID
------------ --------- ------    ---------             -------               -----
testAzureFS  Backup    Completed 11/12/2018 2:42:07 PM 11/12/2018 2:42:11 PM 8bdfe3ab-9bf7-4be6-83d6-37ff1ca13ab6
```
backup Encryption At Rest With Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/encryption-at-rest-with-cmk.md
Title: Encryption of backup data using customer-managed keys
description: Learn how Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK). Last updated 12/02/2021 -+
backup Manage Afs Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-afs-backup-cli.md
Title: Manage Azure file share backups with the Azure CLI description: Learn how to use the Azure CLI to manage and monitor Azure file shares backed up by Azure Backup. Previously updated : 06/10/2021 Last updated : 02/09/2022 # Manage Azure file share backups with the Azure CLI
az backup policy create --resource-group azurefiles --vault-name azurefilesvault
} ```
+**Example to create a backup policy that configures multiple backups a day**
+
+This sample JSON is for the following requirements:
+
+- **Schedule**: Back up *every 4 hours* starting from *8 AM (UTC)* for the *next 12 hours*.
+- **Retention**: Daily - *5 days*, Weekly - *Every Sunday for 12 weeks*, Monthly - *First Sunday of every month for 60 months*, and Yearly - *First Sunday of January for 10 years*.
+
+```json
+{
+ "properties":{
+ "backupManagementType": "AzureStorage",
+ "workloadType": "AzureFileShare",
+ "schedulePolicy": {
+ "schedulePolicyType": "SimpleSchedulePolicy",
+ "scheduleRunFrequency": "Hourly",
+ "hourlySchedule": {
+ "interval": 4,
+ "scheduleWindowStartTime": "2021-09-29T08:00:00.000Z",
+ "scheduleWindowDuration": 12
+ }
+ },
+ "timeZone": "UTC",
+ "retentionPolicy": {
+ "retentionPolicyType": "LongTermRetentionPolicy",
+ "dailySchedule": {
+ "retentionTimes": null,
+ "retentionDuration": {
+ "count": 5,
+ "durationType": "Days"
+ }
+ },
+ "weeklySchedule": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "retentionTimes": null,
+ "retentionDuration": {
+ "count": 12,
+ "durationType": "Weeks"
+ }
+ },
+ "monthlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "retentionScheduleDaily": null,
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": null,
+ "retentionDuration": {
+ "count": 60,
+ "durationType": "Months"
+ }
+ },
+ "yearlySchedule": {
+ "retentionScheduleFormatType": "Weekly",
+ "monthsOfYear": [
+ "January"
+ ],
+ "retentionScheduleDaily": null,
+ "retentionScheduleWeekly": {
+ "daysOfTheWeek": [
+ "Sunday"
+ ],
+ "weeksOfTheMonth": [
+ "First"
+ ]
+ },
+ "retentionTimes": null,
+ "retentionDuration": {
+ "count": 10,
+ "durationType": "Years"
+ }
+ }
+ }
+ }
+}
+
+```
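Before passing a policy JSON like the sample above to the CLI, you can sanity-check its schedule and retention fields offline. This Python sketch is illustrative only; it parses a trimmed fragment (field names copied from the sample JSON above) and derives the number of snapshots the hourly window yields per day:

```python
import json

# Trimmed fragment of the sample policy JSON above (illustrative).
policy = json.loads("""{
  "schedulePolicy": {
    "scheduleRunFrequency": "Hourly",
    "hourlySchedule": {
      "interval": 4,
      "scheduleWindowStartTime": "2021-09-29T08:00:00.000Z",
      "scheduleWindowDuration": 12
    }
  },
  "retentionPolicy": {
    "dailySchedule": {
      "retentionDuration": { "count": 5, "durationType": "Days" }
    }
  }
}""")

hourly = policy["schedulePolicy"]["hourlySchedule"]

# One snapshot at the window start, then one per interval across the window.
snapshots_per_day = hourly["scheduleWindowDuration"] // hourly["interval"] + 1
daily_retention = policy["retentionPolicy"]["dailySchedule"]["retentionDuration"]

print(snapshots_per_day)  # 4
print(daily_retention["count"], daily_retention["durationType"])  # 5 Days
```

This matches the stated requirements: backups every 4 hours across a 12-hour window (four snapshots a day), each retained daily for 5 days.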
+ Once the policy is created successfully, the command output displays the policy JSON that you passed as a parameter. You can modify the schedule and retention sections of the policy as required.
backup Use Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/use-archive-tier-support.md
Title: Use Archive Tier
description: Learn about using Archive Tier Support for Azure Backup. Last updated 10/23/2021-+ zone_pivot_groups: backup-client-powershelltier-clitier-portaltier
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 01/27/2022 Last updated : 02/14/2022
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- February 2022
+ - [Multiple backups per day for Azure Files is now generally available](#multiple-backups-per-day-for-azure-files-is-now-generally-available)
- January 2022 - [Back up Azure Database for PostgreSQL is now generally available](#back-up-azure-database-for-postgresql-is-now-generally-available) - October 2021
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Multiple backups per day for Azure Files is now generally available
+
+Low RPO (Recovery Point Objective) is a key requirement for Azure Files shares that contain frequently updated, business-critical data. To ensure minimal data loss in the event of a disaster or unwanted changes to file share content, you may prefer to take backups more frequently than once a day.
+
+Using Azure Backup, you can create a backup policy or modify an existing one to take multiple snapshots a day. This capability lets you define the duration in which your backup jobs run, so you can align your backup schedule with the working hours when Azure Files content is updated frequently. With this release, you can also configure multiple backups per day by using Azure PowerShell and the Azure CLI.
+
+For more information, see [how to configure multiple backups per day via backup policy](./manage-afs-backup.md#create-a-new-policy).
+ ## Back up Azure Database for PostgreSQL is now generally available Azure Backup and Azure Database services together help you to build an enterprise-class backup solution for Azure PostgreSQL (is now generally available). You can meet your data protection and compliance needs with a customer-controlled backup policy that enables retention of backups for up to 10 years.
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
Last updated 10/14/2021 -+ # Install and run the Spatial Analysis container (Preview)
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
The tables below summarize current availability:
|:|:|:|:|:|:|
|Denmark |Toll-Free | | |Public Preview |Public Preview* |
|Denmark |Local | | |Public Preview |Public Preview* |
+|USA (includes PR) |Toll-Free |GA |GA |Public Preview |Public Preview* |
+|USA (includes PR) |Local | | |Public Preview |Public Preview* |
*** \* Available through Azure Bot Framework and Dynamics only
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
Domains that weren't previously used in O365 must be provisioned.
An example would be using `\*.contoso.com`, which would match the SBC FQDN `sbc.contoso.com`, but wouldn't match with `sbc.test.contoso.com`.
+ >[!NOTE]
+ > The SBC FQDN in Azure Communication Services direct routing must be different from the SBC FQDN in Teams Direct Routing.
>[!IMPORTANT]
>During Public Preview only: if you plan to use a wildcard certificate for the domain that is not registered in Teams, please raise a support ticket, and our team will add it as a trusted domain.
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
Last updated 11/02/2021 -+ # Tutorial: Deploy a background processing application with Azure Container Apps Preview
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
Last updated 01/31/2022 -+ zone_pivot_groups: container-apps
container-registry Container Registry Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-customer-managed-keys.md
Title: Encrypt registry with a customer-managed key
description: Learn about encryption-at-rest of your Azure container registry, and how to encrypt your Premium registry with a customer-managed key stored in Azure Key Vault Last updated 09/13/2021-+ # Encrypt registry using a customer-managed key
container-registry Container Registry Oras Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oras-artifacts.md
Last updated 11/11/2021 -+ # Push and pull supply chain artifacts, using a private container registry in Azure (Preview)
container-registry Tutorial Deploy Connected Registry Nested Iot Edge Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-deploy-connected-registry-nested-iot-edge-cli.md
Last updated 10/21/2021 -+ # Tutorial: Deploy a connected registry to a nested IoT Edge hierarchy
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Last updated 11/02/2021 -+ # What is Azure Cosmos DB analytical store?
cosmos-db How To Create Container Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/how-to-create-container-cassandra.md
Last updated 10/16/2020 ms.devlang: csharp-+ # Create a container in Azure Cosmos DB Cassandra API
cosmos-db How To Create Container Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/how-to-create-container-gremlin.md
Last updated 10/16/2020
ms.devlang: csharp-+ # Create a container in Azure Cosmos DB Gremlin API
cosmos-db Create Mongodb Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-go.md
ms.devlang: golang Last updated 08/26/2021-+ # Quickstart: Connect a Go application to Azure Cosmos DB's API for MongoDB [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
cosmos-db How To Create Container Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-create-container-mongodb.md
Last updated 08/26/2021
ms.devlang: csharp-+ # Create a container in Azure Cosmos DB API for MongoDB
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
Last updated 12/09/2021 --+ # Restore an Azure Cosmos DB account that uses continuous backup mode
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-create-container.md
Last updated 01/03/2022 ms.devlang: csharp-+ # Create a container in Azure Cosmos DB SQL API
cosmos-db How To Manage Indexing Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-manage-indexing-policy.md
Last updated 05/25/2021 -+ # Manage indexing policies in Azure Cosmos DB
cosmos-db How To Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-time-to-live.md
Last updated 12/09/2021 ms.devlang: csharp-+ # Configure time to live in Azure Cosmos DB
cost-management-billing Link Partner Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id.md
Last updated 11/04/2021
-+ # Link a partner ID to your Azure accounts
data-factory Connector Azure Cosmos Db Mongodb Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db-mongodb-api.md
Previously updated : 09/09/2021 Last updated : 02/17/2022 # Copy data to or from Azure Cosmos DB's API for MongoDB using Azure Data Factory or Synapse Analytics
Use the following steps to create a linked service to Azure Cosmos DB's API for
:::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Create a new linked service with Azure Synapse UI.":::
-2. Search for Cosmos and select the Azure Cosmos DB's API for MongoDB connector.
+2. Search for Azure Cosmos DB (MongoDB API) and select the Azure Cosmos DB's API for MongoDB connector.
:::image type="content" source="media/connector-azure-cosmos-db-mongodb-api/azure-cosmos-db-mongodb-api-connector.png" alt-text="Select the Azure Cosmos DB's API for MongoDB connector.":::
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db.md
Previously updated : 02/09/2022 Last updated : 02/17/2022 # Copy and transform data in Azure Cosmos DB (SQL API) by using Azure Data Factory
Use the following steps to create a linked service to Azure Cosmos DB in the Azu
:::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
-2. Search for Cosmos and select the Azure Cosmos DB (SQL API) connector.
+2. Search for Azure Cosmos DB (SQL API) and select the Azure Cosmos DB (SQL API) connector.
:::image type="content" source="media/connector-azure-cosmos-db/azure-cosmos-db-connector.png" alt-text="Select Azure Cosmos DB (SQL API) connector.":::
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md
Previously updated : 12/24/2021 Last updated : 02/17/2022 # Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics
Use the following steps to create an Azure Data Lake Storage Gen2 linked service
:::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
-2. Search for Data Lake and select the Azure Data Lake Storage Gen2 connector.
+2. Search for Azure Data Lake Storage Gen2 and select the Azure Data Lake Storage Gen2 connector.
:::image type="content" source="media/connector-azure-data-lake-storage/azure-data-lake-storage-connector.png" alt-text="Select Azure Data Lake Storage Gen2 connector.":::
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-store.md
Previously updated : 12/13/2021 Last updated : 02/17/2022 # Copy data to or from Azure Data Lake Storage Gen1 using Azure Data Factory or Azure Synapse Analytics
Use the following steps to create a linked service to Azure Data Lake Storage Ge
:::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
-2. Search for and select the Azure Data Lake Storage Gen1 connector.
+2. Search for Azure Data Lake Storage Gen1 and select the Azure Data Lake Storage Gen1 connector.
:::image type="content" source="media/connector-azure-data-lake-store/azure-data-lake-store-connector.png" alt-text="Screenshot of the Azure Data Lake Storage Gen1 connector.":::
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-schedule-trigger.md
Last updated 09/09/2021-+ # Create a trigger that runs a pipeline on a schedule
data-factory How To Create Tumbling Window Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-tumbling-window-trigger.md
-+ Last updated 09/09/2021
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
Title: Troubleshoot pipeline orchestration and triggers in Azure Data Factory
description: Use different methods to troubleshoot pipeline trigger issues in Azure Data Factory. Previously updated : 11/22/2021 Last updated : 02/21/2022
Long queue-related error messages can appear for various reasons.
* If you receive an error message from any source or destination via connectors, which can generate a long queue, go to [Connector Troubleshooting Guide.](connector-troubleshoot-guide.md) * If you receive an error message about Mapping Data Flow, which can generate a long queue, go to [Data Flows Troubleshooting Guide.](data-flow-troubleshoot-guide.md) * If you receive an error message about other activities, such as Databricks, custom activities, or HDI, which can generate a long queue, go to [Activity Troubleshooting Guide.](data-factory-troubleshoot-guide.md)
-* If you receive an error message about running SSIS packages, which can generate a long queue, go to the [Azure-SSIS Package Execution Troubleshooting Guide](ssis-integration-runtime-ssis-activity-faq.yml) and [Integration Runtime Management Troubleshooting Guide.](ssis-integration-runtime-management-troubleshoot.md)
+* If you receive an error message about running SSIS packages, which can generate a long queue, go to the [Azure-SSIS Package Execution Troubleshooting Guide](ssis-integration-runtime-ssis-activity-faq.md) and [Integration Runtime Management Troubleshooting Guide.](ssis-integration-runtime-management-troubleshoot.md)
### Error message - "code":"BadRequest", "message":"null"
data-factory Self Hosted Integration Runtime Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-troubleshoot-guide.md
Previously updated : 10/26/2021 Last updated : 02/16/2022
For failed activities that are running on a self-hosted IR or a shared IR, the s
### Out of memory issue
-#### Symptoms
+- **Symptoms**
-An OutOfMemoryException (OOM) error occurs when you try to run a lookup activity with a linked IR or a self-hosted IR.
+ An OutOfMemoryException (OOM) error occurs when you try to run a lookup activity with a linked IR or a self-hosted IR.
-#### Cause
+- **Cause**
-A new activity can throw an OOM error if the IR machine experiences momentary high memory usage. The issue might be caused by a large volume of concurrent activity, and the error is by design.
+ A new activity can throw an OOM error if the IR machine experiences momentary high memory usage. The issue might be caused by a large volume of concurrent activity, and the error is by design.
-#### Resolution
+- **Resolution**
-Check the resource usage and concurrent activity execution on the IR node. Adjust the internal and trigger time of activity runs to avoid too much execution on a single IR node at the same time.
+ Check the resource usage and concurrent activity execution on the IR node. Adjust the internal and trigger time of activity runs to avoid too much execution on a single IR node at the same time.
### Concurrent jobs limit issue
-#### Symptoms
+- **Symptoms**
-When you try to increase the concurrent jobs limit from UI, the process hangs in *Updating* status.
-
-Example scenario: The maximum concurrent jobs value is currently set to 24, and you want to increase the count so that your jobs can run faster. The minimum value that you can enter is 3, and the maximum value is 32. You increase the value from 24 to 32 and then select the **Update** button. The process gets stuck in *Updating* status, as shown in the following screenshot. You refresh the page, and the value is still displayed as 24. It hasn't been updated to 32 as you had expected.
--
-#### Cause
-
-The limit on the number of concurrent jobs depends on the computer's logic core and memory. Try to adjust the value downward to a value such as 24, and then view the result.
+ When you try to increase the concurrent jobs limit from UI, the process hangs in *Updating* status.
+
+ Example scenario: The maximum concurrent jobs value is currently set to 24, and you want to increase the count so that your jobs can run faster. The minimum value that you can enter is 3, and the maximum value is 32. You increase the value from 24 to 32 and then select the **Update** button. The process gets stuck in *Updating* status, as shown in the following screenshot. You refresh the page, and the value is still displayed as 24. It hasn't been updated to 32 as you had expected.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/updating-status.png" alt-text="Screenshot of the Nodes pane of the integration runtime, displaying the process stuck in &quot;Updating&quot; status.":::
+
+- **Cause**
-> [!TIP]
-> - To learn more about logic core count and to determine your machine's logic core count, see [Four ways to find the number of cores in your CPU on Windows 10](https://www.top-password.com/blog/find-number-of-cores-in-your-cpu-on-windows-10/).
-> - To learn how to calculate the math.log, go to the [Logarithm calculator](https://www.rapidtables.com/calc/math/Log_Calculator.html).
+ The limit on the number of concurrent jobs depends on the computer's logic core and memory. Try to adjust the value downward to a value such as 24, and then view the result.
+
+ > [!TIP]
+ > - To learn more about logic core count and to determine your machine's logic core count, see [Four ways to find the number of cores in your CPU on Windows 10](https://www.top-password.com/blog/find-number-of-cores-in-your-cpu-on-windows-10/).
+ > - To learn how to calculate the math.log, go to the [Logarithm calculator](https://www.rapidtables.com/calc/math/Log_Calculator.html).
### Self-hosted IR high availability (HA) SSL certificate issue
-#### Symptoms
-
-The self-hosted IR work node has reported the following error:
+- **Symptoms**
-"Failed to pull shared states from primary node net.tcp://abc.cloud.corp.Microsoft.com:8060/ExternalService.svc/. Activity ID: XXXXX The X.509 certificate CN=abc.cloud.corp.Microsoft.com, OU=test, O=Microsoft chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline."
+ The self-hosted IR work node has reported the following error:
+
+ "Failed to pull shared states from primary node net.tcp://abc.cloud.corp.Microsoft.com:8060/ExternalService.svc/. Activity ID: XXXXX The X.509 certificate CN=abc.cloud.corp.Microsoft.com, OU=test, O=Microsoft chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline."
-#### Cause
+- **Cause**
-When you handle cases that are related to an SSL/TLS handshake, you might encounter some issues related to certificate chain verification.
+ When you handle cases that are related to an SSL/TLS handshake, you might encounter some issues related to certificate chain verification.
-#### Resolution
+- **Resolution**
-- Here's a quick, intuitive way to troubleshoot an X.509 certificate chain build failure:
-
- 1. Export the certificate, which needs to be verified. To do so, do the following:
+ - Here's a quick, intuitive way to troubleshoot an X.509 certificate chain build failure:
+
+ 1. Export the certificate, which needs to be verified. To do so, do the following:
+
+ a. In Windows, select **Start**, start typing **certificates**, and then select **Manage computer certificates**.
+
+ b. In File Explorer, on the left pane, search for the certificate that you want to check, right-click it, and then select **All tasks** > **Export**.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/export-tasks.png" alt-text="Screenshot of the &quot;All Tasks&quot; > &quot;Export&quot; control for a certificate on the &quot;Manage computer certificates&quot; pane.":::
- a. In Windows, select **Start**, start typing **certificates**, and then select **Manage computer certificates**.
-
- b. In File Explorer, on the left pane, search for the certificate that you want to check, right-click it, and then select **All tasks** > **Export**.
+ 2. Copy the exported certificate to the client machine.
+ 3. On the client side, in a Command Prompt window, run the following command. Be sure to replace *\<certificate path>* and *\<output txt file path>* with the actual paths.
+
+ ```
+ Certutil -verify -urlfetch <certificate path> > <output txt file path>
+ ```
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/export-tasks.png" alt-text="Screenshot of the &quot;All Tasks&quot; > &quot;Export&quot; control for a certificate on the &quot;Manage computer certificates&quot; pane.":::
-
- 2. Copy the exported certificate to the client machine.
- 3. On the client side, in a Command Prompt window, run the following command. Be sure to replace *\<certificate path>* and *\<output txt file path>* with the actual paths.
+ For example:
- ```
- Certutil -verify -urlfetch <certificate path> > <output txt file path>
- ```
-
- For example:
-
- ```
- Certutil -verify -urlfetch c:\users\test\desktop\servercert02.cer > c:\users\test\desktop\Certinfo.txt
- ```
- 4. Check for errors in the output TXT file. You can find the error summary at the end of the TXT file.
-
- For example:
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/error-summary.png" alt-text="Screenshot of an error summary at the end of the TXT file.":::
-
- If you don't see an error at the end of the log file, as shown in the following screenshot, you can consider that the certificate chain has been built successfully on the client machine.
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/log-file.png" alt-text="Screenshot of a log file showing no errors.":::
--- If an AIA (Authority Information Access), CDP (CRL Distribution Point), or OCSP (Online Certificate Status Protocol) file name extension is configured in the certificate file, you can check it in a more intuitive way:
-
- 1. Get this information by checking the certificate details, as shown in the following screenshot:
+ ```
+ Certutil -verify -urlfetch c:\users\test\desktop\servercert02.cer > c:\users\test\desktop\Certinfo.txt
+ ```
+ 4. Check for errors in the output TXT file. You can find the error summary at the end of the TXT file.
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/certificate-detail.png" alt-text="Screenshot of certificate details.":::
+ For example:
   :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/error-summary.png" alt-text="Screenshot of an error summary at the end of the TXT file.":::

   If you don't see an error at the end of the log file, as shown in the following screenshot, you can consider that the certificate chain has been built successfully on the client machine.

   :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/log-file.png" alt-text="Screenshot of a log file showing no errors.":::

- If an AIA (Authority Information Access), CDP (CRL Distribution Point), or OCSP (Online Certificate Status Protocol) file name extension is configured in the certificate file, you can check it in a more intuitive way:

   1. Get this information by checking the certificate details, as shown in the following screenshot:

      :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/certificate-detail.png" alt-text="Screenshot of certificate details.":::

   1. Run the following command. Be sure to replace *\<certificate path>* with the actual path of the certificate.

      ```
      Certutil -URL <certificate path>
      ```

      The URL Retrieval tool opens.

   1. To verify certificates with AIA, CDP, and OCSP file name extensions, select **Retrieve**.

      :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/retrieval-button.png" alt-text="Screenshot of the URL Retrieval Tool and the Retrieve button.":::

      You've built the certificate chain successfully if the certificate status from AIA is *Verified* and the certificate status from CDP or OCSP is *Verified*.

      If you fail when you try to retrieve AIA or CDP, work with your network team to get the client machine ready to connect to the target URL. It's enough if either the HTTP path or the Lightweight Directory Access Protocol (LDAP) path can be verified.
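The "no error at the end of the log file" check above can also be automated. The following is a minimal, illustrative sketch (not part of the IR tooling) that scans Certutil-style verification output for error lines; real Certutil output varies by locale and version, so treat this only as a sketch of the idea:

```python
def chain_built_successfully(log_text: str) -> bool:
    """Return True when no line of the verification log reports an error.

    Illustrative only: real Certutil output differs by locale and version,
    so this is a sketch of the check, not a parser for the actual tool.
    """
    for line in log_text.splitlines():
        if "error" in line.lower():
            return False
    return True


ok_log = "Verified AIA\nVerified CDP\nCertUtil: -URL command completed successfully."
bad_log = "Verified AIA\nERROR: retrieving CDP failed."

print(chain_built_successfully(ok_log))   # True
print(chain_built_successfully(bad_log))  # False
```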
### Self-hosted IR could not load file or assembly
- **Symptoms**

  You get the following error message:

  "Could not load file or assembly 'XXXXXXXXXXXXXXXX, Version=4.0.2.0, Culture=neutral, PublicKeyToken=XXXXXXXXX' or one of its dependencies. The system cannot find the file specified. Activity ID: 92693b45-b4bf-4fc8-89da-2d3dc56f27c3"

  Here is a more specific error message:

  "Could not load file or assembly 'System.ValueTuple, Version=4.0.2.0, Culture=neutral, PublicKeyToken=XXXXXXXXX' or one of its dependencies. The system cannot find the file specified. Activity ID: 92693b45-b4bf-4fc8-89da-2d3dc56f27c3"

- **Cause**

  In Process Monitor, you can view the following result:

  :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/process-monitor.png#lightbox" lightbox="media/self-hosted-integration-runtime-troubleshoot-guide/process-monitor.png" alt-text="Screenshot of the Paths list in Process Monitor.":::

  > [!TIP]
  > In Process Monitor, you can set filters as shown in the following screenshot.
  >
  > The preceding error message says that the DLL System.ValueTuple is not located in the related *Global Assembly Cache* (GAC) folder, in the *C:\Program Files\Microsoft Integration Runtime\4.0\Gateway* folder, or in the *C:\Program Files\Microsoft Integration Runtime\4.0\Shared* folder.
  >
  > Basically, the process loads the DLL first from the *GAC* folder, then from the *Shared* folder, and finally from the *Gateway* folder. Therefore, you can load the DLL from any path that's helpful.

  :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/set-filters.png" alt-text="Screenshot of the &quot;Process Monitor Filter&quot; page, listing the filters for the DLL.":::

- **Resolution**

  You'll find the *System.ValueTuple.dll* file in the *C:\Program Files\Microsoft Integration Runtime\4.0\Gateway\DataScan* folder. To resolve the issue, copy the *System.ValueTuple.dll* file to the *C:\Program Files\Microsoft Integration Runtime\4.0\Gateway* folder.

  You can use the same method to resolve other missing file or assembly issues.

- **More information about this issue**

  The reason you see *System.ValueTuple.dll* under *%windir%\Microsoft.NET\assembly* and *%windir%\assembly* is that this is a .NET behavior.

  In the following error, you can clearly see that the *System.ValueTuple* assembly is missing. This issue arises when the application tries to check the *System.ValueTuple.dll* assembly.

  > "\<LogProperties>\<ErrorInfo>[{"Code":0,"Message":"The type initializer for 'Npgsql.PoolManager' threw an exception.","EventType":0,"Category":5,"Data":{},"MsgId":null,"ExceptionType":"System.TypeInitializationException","Source":"Npgsql","StackTrace":"","InnerEventInfos":[{"Code":0,"Message":"Could not load file or assembly 'System.ValueTuple, Version=4.0.2.0, Culture=neutral, PublicKeyToken=XXXXXXXXX' or one of its dependencies. The system cannot find the file specified.","EventType":0,"Category":5,"Data":{},"MsgId":null,"ExceptionType":"System.IO.FileNotFoundException","Source":"Npgsql","StackTrace":"","InnerEventInfos":[]}]}]\</ErrorInfo>\</LogProperties>"

  For more information about GAC, see [Global Assembly Cache](/dotnet/framework/app-domains/gac).
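The probe order described in the tip (GAC first, then *Shared*, then *Gateway*) is a first-match search over folders. A minimal sketch of that idea, using temporary folders as stand-ins for the real paths:

```python
import os
import tempfile

# Probe order from the tip above: GAC first, then Shared, then Gateway.
def resolve_dll(dll_name, gac, shared, gateway):
    """Return the first folder (in probe order) that contains dll_name, else None."""
    for folder in (gac, shared, gateway):
        if os.path.isfile(os.path.join(folder, dll_name)):
            return folder
    return None


# Demo with temporary folders standing in for the real installation paths.
with tempfile.TemporaryDirectory() as root:
    gac = os.path.join(root, "gac")
    shared = os.path.join(root, "shared")
    gateway = os.path.join(root, "gateway")
    for d in (gac, shared, gateway):
        os.makedirs(d)
    # Simulate the fix: place System.ValueTuple.dll in the Gateway folder.
    open(os.path.join(gateway, "System.ValueTuple.dll"), "w").close()
    print(resolve_dll("System.ValueTuple.dll", gac, shared, gateway) == gateway)  # True
```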
### Self-hosted integration runtime Authentication Key is missing
- **Symptoms**

  The self-hosted integration runtime suddenly goes offline without an Authentication Key, and the Event Log displays the following error message:

  "Authentication Key is not assigned yet"

  :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/key-missing.png" alt-text="Screenshot of the integration runtime event pane showing that the Authentication Key is not yet assigned.":::

- **Cause**

  - The self-hosted IR node or logical self-hosted IR in the Azure portal was deleted.
  - A clean uninstall was performed.

- **Resolution**

  If neither of the preceding causes applies, you can go to the *%programdata%\Microsoft\Data Transfer\DataManagementGateway* folder to see whether the *Configurations* file has been deleted. If it was deleted, follow the instructions in the Netwrix article [Detect who deleted a file from your Windows file servers](https://www.netwrix.com/how_to_detect_who_deleted_file.html).

  :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/configurations-file.png" alt-text="Screenshot of the event log details pane for checking the Configurations file.":::
### Can't use self-hosted IR to bridge two on-premises datastores
- **Symptoms**

  After you create self-hosted IRs for both the source and destination datastores, you want to connect the two IRs to finish a copy activity. If the datastores are configured in different virtual networks, or the datastores can't understand the gateway mechanism, you receive either of the following errors:

  * "The driver of source cannot be found in destination IR"
  * "The source cannot be accessed by the destination IR"

- **Cause**

  The self-hosted IR is designed as a central node of a copy activity, not a client agent that needs to be installed for each datastore.

  In this case, you should create the linked service for each datastore with the same IR, and the IR should be able to access both datastores through the network. It doesn't matter whether the IR is installed at the source datastore or the destination datastore, or on a third machine. If two linked services are created with different IRs but used in the same copy activity, the destination IR is used, and you need to install the drivers for both datastores on the destination IR machine.

- **Resolution**

  Install drivers for both the source and destination datastores on the destination IR, and make sure that it can access the source datastore.

  If the traffic can't pass through the network between the two datastores (for example, they're configured in two virtual networks), you might not finish copying in one activity even with the IR installed. If you can't finish copying in a single activity, you can create two copy activities with two IRs, each in one of the virtual networks:

  * Use one IR to copy from datastore 1 to Azure Blob Storage.
  * Use the other IR to copy from Azure Blob Storage to datastore 2.

  This solution can simulate the requirement to use the IR to create a bridge that connects two disconnected datastores.
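The two-activity workaround above amounts to a staged copy through an intermediate store. A minimal sketch of the idea, with plain dictionaries standing in for the datastores and for Azure Blob Storage:

```python
# Staged copy sketch: two hops through an intermediate store (standing in for
# Azure Blob Storage), mirroring the two copy activities described above.
def copy_all(source: dict, destination: dict) -> None:
    """One copy activity: replicate every record from source into destination."""
    destination.update(source)


datastore_1 = {"row1": "a", "row2": "b"}  # reachable only by the first IR (hypothetical)
blob_staging = {}                         # reachable by both IRs
datastore_2 = {}                          # reachable only by the second IR (hypothetical)

copy_all(datastore_1, blob_staging)  # activity 1: datastore 1 -> Blob Storage
copy_all(blob_staging, datastore_2)  # activity 2: Blob Storage -> datastore 2

print(datastore_2 == datastore_1)  # True
```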
### Credential sync issue causes credential loss from HA
- **Symptoms**

  If the data source credential "XXXXXXXXXX" is deleted from the current integration runtime node with payload, you receive the following error message:

  "When you delete the link service on Azure portal, or the task has the wrong payload, please create new link service with your credential again."

- **Cause**

  Your self-hosted IR is built in HA mode with two nodes, but the nodes aren't in a credentials sync state. This means that the credentials stored in the dispatcher node aren't synced to the other worker nodes. If a failover happens from the dispatcher node to a worker node, and the credentials exist only in the previous dispatcher node, the task fails when it tries to access the credentials, and you receive the preceding error.

- **Resolution**

  The only way to avoid this issue is to make sure that the two nodes are in a credentials sync state. If they aren't in sync, you have to reenter the credentials for the new dispatcher.
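The failure mode above (credentials that live only on the dispatcher are lost on failover) can be sketched in a few lines. The class and function names here are illustrative, not the actual IR implementation:

```python
# Sketch of the HA credential-sync idea: whatever the dispatcher node holds
# must be pushed to every worker node, or a failover loses the credentials.
# IrNode and sync_credentials are hypothetical names for illustration only.
class IrNode:
    def __init__(self, name: str):
        self.name = name
        self.credentials = {}


def sync_credentials(dispatcher: IrNode, workers: list) -> None:
    """Replicate the dispatcher's credential store to every worker node."""
    for worker in workers:
        worker.credentials.update(dispatcher.credentials)


dispatcher = IrNode("node1")
worker = IrNode("node2")
dispatcher.credentials["sql-linked-service"] = "encrypted-secret"

sync_credentials(dispatcher, [worker])
# After a failover, node2 can still serve the credential:
print(worker.credentials.get("sql-linked-service"))  # encrypted-secret
```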
### Can't choose the certificate because the private key is missing
- **Symptoms**

  * You've imported a PFX file to the certificate store.
  * When you selected the certificate through the IR Configuration Manager UI, you received the following error message:

    "Failed to change intranet communication encryption mode. It is likely that certificate '\<*certificate name*>' may not have a private key that is capable of key exchange or the process may not have access rights for the private key. Please see inner exception for detail."

    :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/private-key-missing.png" alt-text="Screenshot of the Integration Runtime Configuration Manager Settings pane, displaying a &quot;private key missing&quot; error message.":::

- **Cause**

  - The user account has a low privilege level and can't access the private key.
  - The certificate was generated as a signature but not as a key exchange.

- **Resolution**

  * To operate the UI, use an account with appropriate privileges for accessing the private key.
  * Import the certificate by running the following command:

    ```
    certutil -importpfx FILENAME.pfx AT_KEYEXCHANGE
    ```
### Self-hosted integration runtime nodes out-of-sync issue

- **Symptoms**

  Self-hosted integration runtime nodes try to sync the credentials across nodes but get stuck in the process, and you encounter the following error message after a while:

  "The Integration Runtime (Self-hosted) node is trying to sync the credentials across nodes. It may take several minutes."

  >[!Note]
  >If this error appears for over 10 minutes, please check the connectivity with the dispatcher node.

- **Cause**

  The worker nodes don't have access to the private keys. This can be confirmed from the following self-hosted integration runtime logs:

  `[14]0460.3404::05/07/21-00:23:32.2107988 [System] A fatal error occurred when attempting to access the TLS server credential private key. The error code returned from the cryptographic module is 0x8009030D. The internal error state is 10001.`

  You have no issue with the sync process when you use service principal authentication in the linked service. However, when you switch the authentication type to account key, the syncing issue starts. This is because the self-hosted integration runtime service runs under a service account (NT SERVICE\DIAHostService), and this account needs to be added to the private key permissions.

- **Resolution**

  To solve this issue, add the self-hosted integration runtime service account (NT SERVICE\DIAHostService) to the private key permissions. You can apply the following steps:

  1. Open your Microsoft Management Console (MMC) Run Command.

     :::image type="content" source="./media/self-hosted-integration-runtime-troubleshoot-guide/management-console-run-command.png" alt-text="Screenshot that shows the MMC Run Command":::

  1. In the MMC pane, apply the following steps:

     :::image type="content" source="./media/self-hosted-integration-runtime-troubleshoot-guide/add-service-account-to-private-key-1.png" alt-text="Screenshot that shows the second step to add self-hosted IR service account to the private key permissions." lightbox="./media/self-hosted-integration-runtime-troubleshoot-guide/add-service-account-to-private-key-1-expanded.png":::

     1. Select **File**.
     1. Choose **Add/Remove Snap-in** in the drop-down menu.
     1. Select **Certificates** in the "Available snap-ins" pane.
     1. Select **Add**.
     1. In the pop-up "Certificates snap-in" pane, choose **Computer account**.
     1. Select **Next**.
     1. In the "Select Computer" pane, choose **Local computer: the computer this console is running on**.
     1. Select **Finish**.
     1. Select **OK** in the "Add or Remove Snap-ins" pane.

  1. In the MMC pane, continue with the following steps:

     :::image type="content" source="./media/self-hosted-integration-runtime-troubleshoot-guide/add-service-account-to-private-key-2.png" alt-text="Screenshot that shows the third step to add self-hosted IR service account to the private key permissions." lightbox="./media/self-hosted-integration-runtime-troubleshoot-guide/add-service-account-to-private-key-2-expanded.png":::

     1. From the left folder list, select **Console Root -> Certificates (Local Computer) -> Personal -> Certificates**.
     1. Right-click the **Microsoft Intune Beta MDM** certificate.
     1. Select **All Tasks** in the drop-down list.
     1. Select **Manage Private Keys**.
     1. Select **Add** under "Group or user names".
     1. Select **NT SERVICE\DIAHostService** to grant it full control access to this certificate, then apply and save.
     1. Select **Check Names** and then select **OK**.
     1. In the "Permissions" pane, select **Apply** and then select **OK**.
### UserErrorJreNotFound error message when you run a copy activity to Azure
- **Symptoms**

  When you try to copy content to Microsoft Azure by using a Java-based tool or program (for example, copying ORC or Parquet format files), you receive an error message that resembles the following:

  > ErrorCode=UserErrorJreNotFound,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Java Runtime Environment is not found. Go to `http://go.microsoft.com/fwlink/?LinkId=808605` to download and install on your Integration Runtime (Self-hosted) node machine. Note 64-bit Integration Runtime requires 64-bit JRE and 32-bit Integration Runtime requires 32-bit JRE.,Source=Microsoft.DataTransfer.Common,''Type=System.DllNotFoundException,Message=Unable to load DLL 'jvm.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E),Source=Microsoft.DataTransfer.Richfile.HiveOrcBridge

- **Cause**

  This issue occurs for either of the following reasons:

  - Java Runtime Environment (JRE) isn't installed correctly on your Integration Runtime server.
  - Your Integration Runtime server lacks a required dependency for JRE.

  By default, Integration Runtime resolves the JRE path by using registry entries. Those entries should be set automatically during JRE installation.

- **Resolution**

  Follow the steps in this section carefully. Serious problems might occur if you modify the registry incorrectly. Before you modify it, [back up the registry for restoration](https://support.microsoft.com/topic/how-to-back-up-and-restore-the-registry-in-windows-855140ad-e318-2a13-2829-d428a2ab0692) in case problems occur.

  To fix this issue, follow these steps to verify the status of the JRE installation:

  1. Make sure that Integration Runtime (Diahost.exe) and JRE are installed on the same platform. Check the following conditions:

     - 64-bit JRE for 64-bit ADF Integration Runtime should be installed in the folder: `C:\Program Files\Java\`

       > [!NOTE]
       > The folder is not `C:\Program Files (x86)\Java\`

     - JRE 7 and JRE 8 are both compatible with this copy activity. JRE 6 and versions earlier than JRE 6 haven't been validated for this use.

  1. Check the registry for the appropriate settings. To do this, follow these steps:

     1. In the **Run** menu, type **Regedit**, and then press Enter.

     1. In the navigation pane, locate the following subkey:

        `HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Runtime Environment`

        In the **Details** pane, there should be a Current Version entry that shows the JRE version (for example, 1.8).

        :::image type="content" source="./media/self-hosted-integration-runtime-troubleshoot-guide/java-runtime-environment-image.png" alt-text="Screenshot showing the Java Runtime Environment.":::

     1. In the navigation pane, locate a subkey that is an exact match for the version (for example, 1.8) under the JRE folder. In the details pane, there should be a **JavaHome** entry. The value of this entry is the JRE installation path.

        :::image type="content" source="./media/self-hosted-integration-runtime-troubleshoot-guide/java-home-entry-image.png" alt-text="Screenshot showing a JavaHome entry.":::

  1. Locate the bin\server folder in the following path:

     `C:\Program Files\Java\jre1.8.0_74`

     :::image type="content" source="./media/self-hosted-integration-runtime-troubleshoot-guide/folder-of-jre.png" alt-text="Screenshot showing the JRE folder.":::

  1. Check whether this folder contains a jvm.dll file. If it does not, check for the file in the `bin\client` folder.

     :::image type="content" source="./media/self-hosted-integration-runtime-troubleshoot-guide/file-location-image.png" alt-text="Screenshot showing a jvm.dll file location.":::

  > [!NOTE]
  > - If any of these configurations are not as described in these steps, use the [JRE windows installer](https://java.com/en/download/manual.jsp) to fix the problems.
  > - If all the configurations in these steps are correct as described, there may be a VC++ runtime library missing in the system. You can fix this problem by installing the VC++ 2010 Redistributable Package.
## Self-hosted IR setup

### Integration runtime registration error
-#### Symptoms
-
-You might occasionally want to run a self-hosted IR in a different account for either of the following reasons:
-- Company policy disallows the service account.
-- Some authentication is required.
-
-After you change the service account on the service pane, you might find that the integration runtime stops working, and you get the following error message:
-
-"The Integration Runtime (Self-hosted) node has encountered an error during registration. Cannot connect to the Integration Runtime (Self-hosted) Host Service."
--
-#### Cause
-
-Many resources are granted only to the service account. When you change the service account to another account, the permissions of all dependent resources remain unchanged.
+- **Symptoms**
-#### Resolution
-
-Go to the integration runtime event log to check the error.
--
-* If the error in the event log is "UnauthorizedAccessException," do the following:
-
- 1. Check the *DIAHostService* logon service account in the Windows service panel.
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/logon-service-account.png" alt-text="Screenshot of the Logon service account properties pane.":::
-
- 1. Check to see whether the logon service account has read/write permissions for the *%programdata%\Microsoft\DataTransfer\DataManagementGateway* folder.
-
- - By default, if the service logon account hasn't been changed, it should have read/write permissions.
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/service-permission.png" alt-text="Screenshot of the service permissions pane.":::
-
- - If you've changed the service logon account, mitigate the issue by doing the following:
-
- a. Perform a clean uninstallation of the current self-hosted IR.
- b. Install the self-hosted IR bits.
- c. Change the service account by doing the following:
-
- i. Go to the self-hosted IR installation folder, and then switch to the *Microsoft Integration Runtime\4.0\Shared* folder.
- ii. Open a Command Prompt window by using elevated privileges. Replace *\<user>* and *\<password>* with your own username and password, and then run the following command:
- `dmgcmd.exe -SwitchServiceAccount "<user>" "<password>"`
- iii. If you want to change to the LocalSystem account, be sure to use the correct format for this account: `dmgcmd.exe -SwitchServiceAccount "NT Authority\System" ""`
- Do *not* use this format: `dmgcmd.exe -SwitchServiceAccount "LocalSystem" ""`
- iv. Optionally, because Local System has higher privileges than Administrator, you can also directly change it in "Services".
- v. You can use a local/domain user for the IR service logon account.
+ You might occasionally want to run a self-hosted IR in a different account for either of the following reasons:
+ - Company policy disallows the service account.
+ - Some authentication is required.
+
+ After you change the service account on the service pane, you might find that the integration runtime stops working, and you get the following error message:
+
+ "The Integration Runtime (Self-hosted) node has encountered an error during registration. Cannot connect to the Integration Runtime (Self-hosted) Host Service."
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/ir-registration-error.png" alt-text="Screenshot of the Integration Runtime Configuration Manager window, showing an IR registration error.":::
+
+- **Cause**
- d. Register the integration runtime.
+ Many resources are granted only to the service account. When you change the service account to another account, the permissions of all dependent resources remain unchanged.
-* If the error is "Service 'Integration Runtime Service' (DIAHostService) failed to start. Verify that you have sufficient privileges to start system services," do the following:
+- **Resolution**
- 1. Check the *DIAHostService* logon service account in the Windows service panel.
+ Go to the integration runtime event log to check the error.
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/logon-service-account.png" alt-text="Screenshot of the &quot;Log On&quot; pane for the service account.":::
-
- 1. Check to see whether the logon service account has **Log on as a service** permission to start the Windows service:
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/logon-as-service.png" alt-text="Screenshot of the &quot;Log on as service&quot; properties pane.":::
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/ir-event-log.png" alt-text="Screenshot of the IR event log, showing that a runtime error has occurred.":::
+
+ * If the error in the event log is "UnauthorizedAccessException," do the following:
+
+ 1. Check the *DIAHostService* logon service account in the Windows service panel.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/logon-service-account.png" alt-text="Screenshot of the Logon service account properties pane.":::
+
+ 1. Check to see whether the logon service account has read/write permissions for the *%programdata%\Microsoft\DataTransfer\DataManagementGateway* folder.
+
+ - By default, if the service logon account hasn't been changed, it should have read/write permissions.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/service-permission.png" alt-text="Screenshot of the service permissions pane.":::
+
+ - If you've changed the service logon account, mitigate the issue by doing the following:
+
+ a. Perform a clean uninstallation of the current self-hosted IR.
+ b. Install the self-hosted IR bits.
+ c. Change the service account by doing the following:
+
+ i. Go to the self-hosted IR installation folder, and then switch to the *Microsoft Integration Runtime\4.0\Shared* folder.
+ ii. Open a Command Prompt window by using elevated privileges. Replace *\<user>* and *\<password>* with your own username and password, and then run the following command:
+ `dmgcmd.exe -SwitchServiceAccount "<user>" "<password>"`
+ iii. If you want to change to the LocalSystem account, be sure to use the correct format for this account: `dmgcmd.exe -SwitchServiceAccount "NT Authority\System" ""`
+ Do *not* use this format: `dmgcmd.exe -SwitchServiceAccount "LocalSystem" ""`
+ iv. Optionally, because Local System has higher privileges than Administrator, you can also directly change it in "Services".
+ v. You can use a local/domain user for the IR service logon account.
+
+ d. Register the integration runtime.
+
+ * If the error is "Service 'Integration Runtime Service' (DIAHostService) failed to start. Verify that you have sufficient privileges to start system services," do the following:
+
+ 1. Check the *DIAHostService* logon service account in the Windows service panel.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/logon-service-account.png" alt-text="Screenshot of the &quot;Log On&quot; pane for the service account.":::
+
+ 1. Check to see whether the logon service account has **Log on as a service** permission to start the Windows service:
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/logon-as-service.png" alt-text="Screenshot of the &quot;Log on as service&quot; properties pane.":::
-#### More information
+- **More information**
-If neither of the preceding two resolution patterns applies in your case, try to collect the following Windows event logs:
-- Applications and Services Logs > Integration Runtime
-- Windows Logs > Application
+ If neither of the preceding two resolution patterns applies in your case, try to collect the following Windows event logs:
+ - Applications and Services Logs > Integration Runtime
+ - Windows Logs > Application
### Can't find the Register button to register a self-hosted IR
-#### Symptoms
+- **Symptoms**
-When you register a self-hosted IR, the **Register** button isn't displayed on the Configuration Manager pane.
--
-#### Cause
-
-As of the release of Integration Runtime 3.0, the **Register** button on existing integration runtime nodes has been removed to enable a cleaner and more secure environment. If a node has been registered to an integration runtime, whether it's online or not, re-register it to another integration runtime by uninstalling the previous node, and then install and register the node.
-
-#### Resolution
-
-1. In Control Panel, uninstall the existing integration runtime.
+ When you register a self-hosted IR, the **Register** button isn't displayed on the Configuration Manager pane.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/no-register-button.png" alt-text="Screenshot of the Configuration Manager pane, displaying a message that the integration runtime node is not registered.":::
- > [!IMPORTANT]
- > In the following process, select **Yes**. Do not keep data during the uninstallation process.
+- **Cause**
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/delete-data.png" alt-text="Screenshot of the &quot;Yes&quot; button for deleting all user data from the integration runtime.":::
+ As of the release of Integration Runtime 3.0, the **Register** button on existing integration runtime nodes has been removed to enable a cleaner and more secure environment. If a node has been registered to an integration runtime, whether it's online or not, re-register it to another integration runtime by uninstalling the previous node, and then install and register the node.
+
+- **Resolution**
-1. If you don't have the integration runtime installer MSI file, go to [download center](https://www.microsoft.com/en-sg/download/details.aspx?id=39717) to download the latest integration runtime.
-1. Install the MSI file, and register the integration runtime.
+ 1. In Control Panel, uninstall the existing integration runtime.
+
+ > [!IMPORTANT]
+ > In the following process, select **Yes**. Do not keep data during the uninstallation process.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/delete-data.png" alt-text="Screenshot of the &quot;Yes&quot; button for deleting all user data from the integration runtime.":::
+
+ 1. If you don't have the integration runtime installer MSI file, go to [download center](https://www.microsoft.com/en-sg/download/details.aspx?id=39717) to download the latest integration runtime.
+ 1. Install the MSI file, and register the integration runtime.
### Unable to register the self-hosted IR because of localhost
-#### Symptoms
-
-You're unable to register the self-hosted IR on a new machine when you use get_LoopbackIpOrName.
+- **Symptoms**
-**Debug:**
-A runtime error has occurred.
-The type initializer for 'Microsoft.DataTransfer.DIAgentHost.DataSourceCache' threw an exception.
-A non-recoverable error occurred during a database lookup.
-
-**Exception detail:**
-System.TypeInitializationException: The type initializer for 'Microsoft.DataTransfer.DIAgentHost.DataSourceCache' threw an exception. > System.Net.Sockets.SocketException: A non-recoverable error occurred during a database lookup at System.Net.Dns.GetAddrInfo(String name).
+ You're unable to register the self-hosted IR on a new machine when you use get_LoopbackIpOrName.
+
+ **Debug:**
+ A runtime error has occurred.
+ The type initializer for 'Microsoft.DataTransfer.DIAgentHost.DataSourceCache' threw an exception.
+ A non-recoverable error occurred during a database lookup.
+
+ **Exception detail:**
+ System.TypeInitializationException: The type initializer for 'Microsoft.DataTransfer.DIAgentHost.DataSourceCache' threw an exception. ---> System.Net.Sockets.SocketException: A non-recoverable error occurred during a database lookup at System.Net.Dns.GetAddrInfo(String name).
-#### Cause
+- **Cause**
-The issue usually occurs when the localhost is being resolved.
+ The issue usually occurs when `localhost` doesn't resolve correctly on the machine.
-#### Resolution
+- **Resolution**
-Use localhost IP address 127.0.0.1 to host the file and resolve the issue.
+ Map `localhost` to the IP address 127.0.0.1 in the hosts file to resolve the issue.
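How `localhost` currently resolves can be confirmed up front. The following is a minimal sketch using glibc's `getent` (a Linux tool; on the Windows machine that hosts the IR you would inspect `C:\Windows\System32\drivers\etc\hosts` or run `ping localhost` instead):

```shell
# Show the first address that "localhost" resolves to on this machine.
# A hosts-file entry of "127.0.0.1  localhost" is what the resolution relies on.
resolved=$(getent hosts localhost | awk '{print $1; exit}')
echo "localhost resolves to: $resolved"
```

If the printed address isn't a loopback address (127.0.0.1 or ::1), correcting the hosts-file entry is the fix described above.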
### Self-hosted setup failed
-#### Symptoms
+- **Symptoms**
-You're unable to uninstall an existing IR, install a new IR, or upgrade an existing IR to a new IR.
+ You're unable to uninstall an existing IR, install a new IR, or upgrade an existing IR to a new IR.
-#### Cause
+- **Cause**
-The integration runtime installation depends on the Windows Installer service. You might experience installation problems for the following reasons:
-- Insufficient available disk space.
-- Lack of permissions.
-- The Windows NT service is locked.
-- CPU utilization is too high.
-- The MSI file is hosted in a slow network location.
-- Some system files or registries were touched unintentionally.
+ The integration runtime installation depends on the Windows Installer service. You might experience installation problems for the following reasons:
+ - Insufficient available disk space.
+ - Lack of permissions.
+ - The Windows NT service is locked.
+ - CPU utilization is too high.
+ - The MSI file is hosted in a slow network location.
+ - Some system files or registries were touched unintentionally.
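The first two causes are quick to rule out before retrying the installation. A rough sketch with POSIX tools (on the IR machine itself you would check the system drive and your account's permissions in Windows):

```shell
# Check free space on the volume the installer will write to
free_kb=$(df -Pk . | awk 'NR==2 {print $4}')
echo "Free space: ${free_kb} KB"

# Check that the current directory is writable by this account
if touch .perm_check 2>/dev/null; then
  writable="yes"
  rm -f .perm_check
else
  writable="no"
fi
echo "Writable: $writable"
```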
### The IR service account failed to fetch certificate access
-#### Symptoms
-
-When you install a self-hosted IR via Microsoft Integration Runtime Configuration Manager, a certificate with a trusted certificate authority (CA) is generated. The certificate couldn't be applied to encrypt communication between two nodes, and the following error message is displayed:
-
-"Failed to change Intranet communication encryption mode: Failed to grant Integration Runtime service account the access of to the certificate '\<*certificate name*>'. Error code 103"
--
-#### Cause
-
-The certificate is using key storage provider (KSP) storage, which is not supported yet. To date, self-hosted IR supports only cryptographic service provider (CSP) storage.
-
-#### Resolution
-
-We recommend that you use CSP certificates in this case.
+- **Symptoms**
-**Solution 1**
-
-To import the certificate, run the following command:
-
-`Certutil.exe -CSP "CSP or KSP" -ImportPFX FILENAME.pfx`
--
-**Solution 2**
-
-To convert the certificate, run the following commands:
-
-`openssl pkcs12 -in .\xxxx.pfx -out .\xxxx_new.pem -password pass: <EnterPassword>`
-`openssl pkcs12 -export -in .\xxxx_new.pem -out xxxx_new.pfx`
+ When you install a self-hosted IR via Microsoft Integration Runtime Configuration Manager, a certificate with a trusted certificate authority (CA) is generated. The certificate couldn't be applied to encrypt communication between two nodes, and the following error message is displayed:
+
+ "Failed to change Intranet communication encryption mode: Failed to grant Integration Runtime service account the access of to the certificate '\<*certificate name*>'. Error code 103"
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/integration-runtime-service-account-certificate-error.png" alt-text="Screenshot displaying the error message &quot;... Failed to grant Integration Runtime service account certificate access&quot;.":::
-Before and after conversion:
+- **Cause**
+ The certificate is using key storage provider (KSP) storage, which is not supported yet. To date, self-hosted IR supports only cryptographic service provider (CSP) storage.
+- **Resolution**
+ We recommend that you use CSP certificates in this case.
+
+ **Solution 1**
+
+ To import the certificate, run the following command:
+
+ `Certutil.exe -CSP "CSP or KSP" -ImportPFX FILENAME.pfx`
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/use-certutil.png" alt-text="Screenshot of the certutil command for importing the certificate.":::
+
+ **Solution 2**
+
+ To convert the certificate, run the following commands:
+
+ `openssl pkcs12 -in .\xxxx.pfx -out .\xxxx_new.pem -password pass: <EnterPassword>`
+ `openssl pkcs12 -export -in .\xxxx_new.pem -out xxxx_new.pfx`
+
+ Before and after conversion:
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/before-certificate-change.png" alt-text="Screenshot of the result before the certificate conversion.":::
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/after-certificate-change.png" alt-text="Screenshot of the result after the certificate conversion.":::
+
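The two commands in Solution 2 can be rehearsed end to end with a throwaway certificate before touching the real PFX. This is a sketch only: the file names and passwords are placeholders, and it assumes `openssl` is installed.

```shell
# Create a throwaway self-signed certificate and key to stand in for the real one
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 1 -nodes -subj "/CN=example" 2>/dev/null
# Bundle them into a PFX, like one exported from the Windows certificate store
openssl pkcs12 -export -in cert.pem -inkey key.pem -out original.pfx \
  -password pass:MyPfxPassword
# Step 1 of Solution 2: unpack the PFX to PEM
openssl pkcs12 -in original.pfx -out converted.pem \
  -password pass:MyPfxPassword -passout pass:MyPemPassword
# Step 2 of Solution 2: repack the PEM into a new PFX
openssl pkcs12 -export -in converted.pem -out converted.pfx \
  -passin pass:MyPemPassword -passout pass:MyPfxPassword
```

The resulting `converted.pfx` can then be imported with the `Certutil.exe` command shown in Solution 1.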
### Self-hosted integration runtime version 5.x

+For the upgrade to version 5.x of the self-hosted integration runtime, we require **.NET Framework Runtime 4.7.2** or later. On the download page, you'll find download links for the latest 4.x version and the latest two 5.x versions.

For Azure Data Factory v2 and Azure Synapse customers:
For Azure Data Factory v1 customers:
- Self-hosted integration runtime 5.x doesn't support Azure Data Factory v1.
- The self-hosted integration runtime will be automatically upgraded to the latest version of 4.x. And the latest version of 4.x won't expire.
- If you try to manually install self-hosted integration runtime 5.x and register the key, you'll be notified that self-hosted integration runtime 5.x doesn't support Azure Data Factory v1.
-
-
+
+
## Self-hosted IR connectivity issues

### Self-hosted integration runtime can't connect to the cloud service
-#### Symptoms
-
-When you attempt to register the self-hosted integration runtime, Configuration Manager displays the following error message:
+- **Symptoms**
-"The Integration Runtime (Self-hosted) node has encountered an error during registration."
--
-#### Cause
+ When you attempt to register the self-hosted integration runtime, Configuration Manager displays the following error message:
+
+ "The Integration Runtime (Self-hosted) node has encountered an error during registration."
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/unable-to-connect-to-cloud-service.png" alt-text="Screenshot of the &quot;The Integration Runtime (Self-hosted) node has encountered an error during registration&quot; message.":::
+
+- **Cause**
-The self-hosted IR can't connect to the service back end. This issue is usually caused by network settings in the firewall.
+ The self-hosted IR can't connect to the service back end. This issue is usually caused by network settings in the firewall.
-#### Resolution
+- **Resolution**
-1. Check to see whether the integration runtime service is running. If it is, go to step 2.
+ 1. Check to see whether the integration runtime service is running. If it is, go to step 2.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/integration-runtime-service-running-status.png" alt-text="Screenshot showing that the self-hosted IR service is running.":::
+
+ 1. If no proxy is configured on the self-hosted IR, which is the default setting, run the following PowerShell command on the machine where the self-hosted integration runtime is installed:
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/integration-runtime-service-running-status.png" alt-text="Screenshot showing that the self-hosted IR service is running.":::
+ ```powershell
+ (New-Object System.Net.WebClient).DownloadString("https://wu2.frontend.clouddatahub.net/")
+ ```
-1. If no proxy is configured on the self-hosted IR, which is the default setting, run the following PowerShell command on the machine where the self-hosted integration runtime is installed:
-
- ```powershell
- (New-Object System.Net.WebClient).DownloadString("https://wu2.frontend.clouddatahub.net/")
- ```
-
- > [!NOTE]
- > The service URL might vary, depending on the location of your data factory or Synapse workspace instance. To find the service URL, use the Manage page of the UI in your data factory or Azure Synapse instance to find **Integration runtimes** and click your self-hosted IR to edit it. There select the **Nodes** tab and click **View Service URLs**.
+ > [!NOTE]
+ > The service URL might vary, depending on the location of your data factory or Synapse workspace instance. To find the service URL, use the Manage page of the UI in your data factory or Azure Synapse instance to find **Integration runtimes** and click your self-hosted IR to edit it. There select the **Nodes** tab and click **View Service URLs**.
+
+ The following is the expected response:
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/powershell-command-response.png" alt-text="Screenshot of the PowerShell command response.":::
+ 1. If you don't receive the response you had expected, use one of the following methods, as appropriate:
+
+ * If you receive a "Remote name could not be resolved" message, there's a Domain Name System (DNS) issue. Contact your network team to fix the issue.
+ * If you receive an "ssl/tls cert is not trusted" message, [check the certificate](https://wu2.frontend.clouddatahub.net/) to see whether it's trusted on the machine, and then install the public certificate by using Certificate Manager. This action should mitigate the issue.
+ * Go to **Windows** > **Event viewer (logs)** > **Applications and Services Logs** > **Integration Runtime**, and check for any failure that's caused by DNS, a firewall rule, or company network settings. If you find such a failure, forcibly close the connection. Because every company has its own customized network settings, contact your network team to troubleshoot these issues.
+
+ 1. If "proxy" has been configured on the self-hosted integration runtime, verify that your proxy server can access the service endpoint. For a sample command, see [PowerShell, web requests, and proxies](https://stackoverflow.com/questions/571429/powershell-web-requests-and-proxies).
+
+ ```powershell
+ $user = $env:username
+ $webproxy = (get-itemproperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet
+ Settings').ProxyServer
+ $pwd = Read-Host "Password?" -assecurestring
+ $proxy = new-object System.Net.WebProxy
+ $proxy.Address = $webproxy
+ $account = new-object System.Net.NetworkCredential($user,[Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($pwd)), "")
+ $proxy.credentials = $account
+ $url = "https://wu2.frontend.clouddatahub.net/"
+ $wc = new-object system.net.WebClient
+ $wc.proxy = $proxy
+ $webpage = $wc.DownloadData($url)
+ $string = [System.Text.Encoding]::ASCII.GetString($webpage)
+ $string
+ ```
+
The following is the expected response:
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/powershell-command-response.png" alt-text="Screenshot of the PowerShell command response.":::
-
-1. If you don't receive the response you had expected, use one of the following methods, as appropriate:
-
- * If you receive a "Remote name could not be resolved" message, there's a Domain Name System (DNS) issue. Contact your network team to fix the issue.
- * If you receive an "ssl/tls cert is not trusted" message, [check the certificate](https://wu2.frontend.clouddatahub.net/) to see whether it's trusted on the machine, and then install the public certificate by using Certificate Manager. This action should mitigate the issue.
- * Go to **Windows** > **Event viewer (logs)** > **Applications and Services Logs** > **Integration Runtime**, and check for any failure that's caused by DNS, a firewall rule, or company network settings. If you find such a failure, forcibly close the connection. Because every company has its own customized network settings, contact your network team to troubleshoot these issues.
-
-1. If "proxy" has been configured on the self-hosted integration runtime, verify that your proxy server can access the service endpoint. For a sample command, see [PowerShell, web requests, and proxies](https://stackoverflow.com/questions/571429/powershell-web-requests-and-proxies).
- ```powershell
- $user = $env:username
- $webproxy = (get-itemproperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet
- Settings').ProxyServer
- $pwd = Read-Host "Password?" -assecurestring
- $proxy = new-object System.Net.WebProxy
- $proxy.Address = $webproxy
- $account = new-object System.Net.NetworkCredential($user,[Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($pwd)), "")
- $proxy.credentials = $account
- $url = "https://wu2.frontend.clouddatahub.net/"
- $wc = new-object system.net.WebClient
- $wc.proxy = $proxy
- $webpage = $wc.DownloadData($url)
- $string = [System.Text.Encoding]::ASCII.GetString($webpage)
- $string
- ```
-
-The following is the expected response:
-
-
-> [!NOTE]
-> Proxy considerations:
-> * Check to see whether the proxy server needs to be put on the Safe Recipients list. If so, make sure [these domains](./data-movement-security-considerations.md#firewall-requirements-for-on-premisesprivate-network) are on the Safe Recipients list.
-> * Check to see whether SSL/TLS certificate "wu2.frontend.clouddatahub.net/" is trusted on the proxy server.
-> * If you're using Active Directory authentication on the proxy, change the service account to the user account that can access the proxy as "Integration Runtime Service."
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/powershell-command-response.png" alt-text="Screenshot of the expected PowerShell command response.":::
+
+ > [!NOTE]
+ > Proxy considerations:
+ > * Check to see whether the proxy server needs to be put on the Safe Recipients list. If so, make sure [these domains](./data-movement-security-considerations.md#firewall-requirements-for-on-premisesprivate-network) are on the Safe Recipients list.
+ > * Check to see whether SSL/TLS certificate "wu2.frontend.clouddatahub.net/" is trusted on the proxy server.
+ > * If you're using Active Directory authentication on the proxy, change the service account to the user account that can access the proxy as "Integration Runtime Service."
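For the "Remote name could not be resolved" case in particular, a quick name lookup confirms whether DNS is the culprit before you involve the network team. A sketch using `getent` (a Linux tool; on the IR machine you would use `nslookup`, and you should substitute the host from your own service URL):

```shell
# Try to resolve the service host and report the outcome
host="wu2.frontend.clouddatahub.net"
if getent hosts "$host" > /dev/null 2>&1; then
  dns_status="resolved"
else
  dns_status="lookup failed"
fi
echo "$host: $dns_status"
```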
### Error message: Self-hosted integration runtime node/logical self-hosted IR is in Inactive/"Running (Limited)" state
-#### Cause
-
-The self-hosted integrated runtime node might have a status of **Inactive**, as shown in the following screenshot:
+- **Cause**
-
-This behavior occurs when nodes can't communicate with each other.
-
-#### Resolution
+ The self-hosted integration runtime node might have a status of **Inactive**, as shown in the following screenshot:
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/inactive-self-hosted-ir-node.png" alt-text="Screenshot of a self-hosted integration runtime node with Inactive status.":::
+
+ This behavior occurs when nodes can't communicate with each other.
-1. Log in to the node-hosted virtual machine (VM). Under **Applications and Services Logs** > **Integration Runtime**, open Event Viewer, and filter the error logs.
+- **Resolution**
-1. Check to see whether an error log contains the following error:
+ 1. Log in to the node-hosted virtual machine (VM). Under **Applications and Services Logs** > **Integration Runtime**, open Event Viewer, and filter the error logs.
- ```
- System.ServiceModel.EndpointNotFoundException: Could not connect to net.tcp://xxxxxxx.bwld.com:8060/ExternalService.svc/WorkerManager. The connection attempt lasted for a time span of 00:00:00.9940994. TCP error code 10061: No connection could be made because the target machine actively refused it 10.2.4.10:8060.
- System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it.
- 10.2.4.10:8060
- at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
- at System.Net.Sockets.Socket.Connect(EndPoint remoteEP)
- at System.ServiceModel.Channels.SocketConnectionInitiator.Connect(Uri uri, TimeSpan timeout)
- ```
-
-1. If you see this error, run the following command in a Command Prompt window:
-
- ```
- telnet 10.2.4.10 8060
- ```
-
-1. If you receive the "Could not open connection to the host" command-line error that's shown in the following screenshot, contact your IT department for help to fix this issue. After you can successfully telnet, contact Microsoft Support if you still have issues with the integration runtime node status.
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/command-line-error.png" alt-text="Screenshot of the &quot;Could not open connection to the host&quot; command-line error.":::
+ 1. Check to see whether an error log contains the following error:
-1. Check to see whether the error log contains the following entry:
-
- ```
- Error log: Cannot connect to worker
- ```
+ ```
+ System.ServiceModel.EndpointNotFoundException: Could not connect to net.tcp://xxxxxxx.bwld.com:8060/ExternalService.svc/WorkerManager. The connection attempt lasted for a time span of 00:00:00.9940994. TCP error code 10061: No connection could be made because the target machine actively refused it 10.2.4.10:8060.
+ System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it.
+ 10.2.4.10:8060
+ at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
+ at System.Net.Sockets.Socket.Connect(EndPoint remoteEP)
+ at System.ServiceModel.Channels.SocketConnectionInitiator.Connect(Uri uri, TimeSpan timeout)
+ ```
+
+ 1. If you see this error, run the following command in a Command Prompt window:
-1. To resolve the issue, try one or both of the following methods:
- - Put all the nodes in the same domain.
- - Add the IP to host mapping in all the hosted VM's host files.
+ ```
+ telnet 10.2.4.10 8060
+ ```
+
+ 1. If you receive the "Could not open connection to the host" command-line error that's shown in the following screenshot, contact your IT department for help to fix this issue. After you can successfully telnet, contact Microsoft Support if you still have issues with the integration runtime node status.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/command-line-error.png" alt-text="Screenshot of the &quot;Could not open connection to the host&quot; command-line error.":::
+
+ 1. Check to see whether the error log contains the following entry:
+
+ ```
+ Error log: Cannot connect to worker
+ ```
+
+ 1. To resolve the issue, try one or both of the following methods:
+ - Put all the nodes in the same domain.
+ - Add the IP to host mapping in all the hosted VM's host files.
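If `telnet` isn't installed on the node, the same port check can be sketched with bash's built-in `/dev/tcp` pseudo-device (the IP and port here are the ones from the sample error log; substitute your own):

```shell
# Attempt a TCP connection to the worker port and report the outcome
if timeout 5 bash -c 'echo > /dev/tcp/10.2.4.10/8060' 2>/dev/null; then
  port_status="open"
else
  port_status="unreachable"
fi
echo "10.2.4.10:8060 is $port_status"
```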
### Connectivity issue between the self-hosted IR and your data factory or Azure Synapse instance or the self-hosted IR and the data source or sink

To troubleshoot the network connectivity issue, you should know how to collect the network trace, understand how to use it, and [analyze the Microsoft Network Monitor (Netmon) trace](#analyze-the-netmon-trace) before applying the Netmon Tools in real cases from the self-hosted IR.
-#### Symptoms
-
-You might occasionally need to troubleshoot certain connectivity issues between the self-hosted IR and your data factory or Azure Synapse instance, as shown in the following screenshot, or between the self-hosted IR and the data source or sink.
--
-In either instance, you might encounter the following errors:
-
-* "Copy failed with error:Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot connect to SQL Server: 'IP address'"
-
-* "One or more errors occurred. An error occurred while sending the request. The underlying connection was closed: An unexpected error occurred on a receive. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. An existing connection was forcibly closed by the remote host Activity ID."
-
-#### Resolution
-
-When you encounter the preceding errors, troubleshoot them by following the instructions in this section.
-
-- Collect a Netmon trace for analysis:
-
- 1. You can set the filter to see a reset from the server to the client side. In the following example screenshot, you can see that the server side is the Data Factory server.
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/data-factory-server.png" alt-text="Screenshot of the Data factory server.":::
+- **Symptoms**
- 1. When you get the reset package, you can find the conversation by following Transmission Control Protocol (TCP).
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/find-conversation.png" alt-text="Screenshot of the TCP conversation.":::
-
- 1. Get the conversation between the client and the Data Factory server below by removing the filter.
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/get-conversation.png" alt-text="Screenshot of conversation details.":::
-
-- An analysis of the Netmon trace you've collected shows that the Time to Live (TTL) total is 64. According to the values mentioned in the [IP Time to Live (TTL) and Hop Limit Basics](https://packetpushers.net/ip-time-to-live-and-hop-limit-basics/) article, extracted in the following list, you can see that it's the Linux System that resets the package and causes the disconnection.
-
- Default TTL and Hop Limit values vary between different operating systems, as listed here:
- - Linux kernel 2.4 (circa 2001): 255 for TCP, User Datagram Protocol (UDP), and Internet Control Message Protocol (ICMP)
- - Linux kernel 4.10 (2015): 64 for TCP, UDP, and ICMP
- - Windows XP (2001): 128 for TCP, UDP, and ICMP
- - Windows 10 (2015): 128 for TCP, UDP, and ICMP
- - Windows Server 2008: 128 for TCP, UDP, and ICMP
- - Windows Server 2019 (2018): 128 for TCP, UDP, and ICMP
- - macOS (2001): 64 for TCP, UDP, and ICMP
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/ttl-61.png" alt-text="Screenshot showing a TTL value of 61.":::
+ You might occasionally need to troubleshoot certain connectivity issues between the self-hosted IR and your data factory or Azure Synapse instance, as shown in the following screenshot, or between the self-hosted IR and the data source or sink.
- In the preceding example, the TTL is shown as 61 instead of 64, because when the network package reaches its destination, it needs to go through various hops, such as routers or network devices. The number of routers or network devices is deducted to produce the final TTL.
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/http-request-error.png" alt-text="Screenshot of a &quot;Processed HTTP request failed&quot; message":::
- In this case, you can see that a reset can be sent from the Linux System with TTL 64.
-
-- To confirm where the reset device might come from, check the fourth hop from self-hosted IR.
-
- *Network package from Linux System A with TTL 64 -> B TTL 64 minus 1 = 63 -> C TTL 63 minus 1 = 62 -> TTL 62 minus 1 = 61 self-hosted IR*
-
-- In an ideal situation, the TTL hops number would be 128, which means that the Windows operating system is running your data factory instance. As shown in the following example, *128 minus 107 = 21 hops*, which means that 21 hops for the package were sent from the data factory instance to the self-hosted IR during the TCP 3 handshake.
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/ttl-107.png" alt-text="Screenshot showing a TTL value of 107.":::
-
- Therefore, you need to engage the network team to check to see what the fourth hop is from the self-hosted IR. If it's the firewall, as with the Linux System, check any logs to see why that device resets the package after the TCP 3 handshake.
+ In either instance, you might encounter the following errors:
+
+ * "Copy failed with error:Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot connect to SQL Server: 'IP address'"
+
+ * "One or more errors occurred. An error occurred while sending the request. The underlying connection was closed: An unexpected error occurred on a receive. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. An existing connection was forcibly closed by the remote host Activity ID."
- If you're unsure where to investigate, try to get the Netmon trace from both the self-hosted IR and the firewall during the problematic time. This approach will help you figure out which device might have reset the package and caused the disconnection. In this case, you also need to engage your network team to move forward.
+- **Resolution**
+ When you encounter the preceding errors, troubleshoot them by following the instructions in this section.
+
+ - Collect a Netmon trace for analysis:
+
+ 1. You can set the filter to see a reset from the server to the client side. In the following example screenshot, you can see that the server side is the Data Factory server.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/data-factory-server.png" alt-text="Screenshot of the Data factory server.":::
+
+ 1. When you get the reset package, you can find the conversation by following Transmission Control Protocol (TCP).
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/find-conversation.png" alt-text="Screenshot of the TCP conversation.":::
+
+ 1. Get the conversation between the client and the Data Factory server below by removing the filter.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/get-conversation.png" alt-text="Screenshot of conversation details.":::
+
+  - An analysis of the Netmon trace you've collected shows that the Time to Live (TTL) total is 64. According to the values mentioned in the [IP Time to Live (TTL) and Hop Limit Basics](https://packetpushers.net/ip-time-to-live-and-hop-limit-basics/) article, extracted in the following list, you can see that it's the Linux System that resets the package and causes the disconnection.
+
+ Default TTL and Hop Limit values vary between different operating systems, as listed here:
+ - Linux kernel 2.4 (circa 2001): 255 for TCP, User Datagram Protocol (UDP), and Internet Control Message Protocol (ICMP)
+ - Linux kernel 4.10 (2015): 64 for TCP, UDP, and ICMP
+ - Windows XP (2001): 128 for TCP, UDP, and ICMP
+ - Windows 10 (2015): 128 for TCP, UDP, and ICMP
+ - Windows Server 2008: 128 for TCP, UDP, and ICMP
+ - Windows Server 2019 (2018): 128 for TCP, UDP, and ICMP
+ - macOS (2001): 64 for TCP, UDP, and ICMP
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/ttl-61.png" alt-text="Screenshot showing a TTL value of 61.":::
+
+ In the preceding example, the TTL is shown as 61 instead of 64, because when the network package reaches its destination, it needs to go through various hops, such as routers or network devices. The number of routers or network devices is deducted to produce the final TTL.
+
+ In this case, you can see that a reset can be sent from the Linux System with TTL 64.
+
+ - To confirm where the reset device might come from, check the fourth hop from self-hosted IR.
+
+ *Network package from Linux System A with TTL 64 -> B TTL 64 minus 1 = 63 -> C TTL 63 minus 1 = 62 -> TTL 62 minus 1 = 61 self-hosted IR*
+
+ - In an ideal situation, the TTL hops number would be 128, which means that the Windows operating system is running your data factory instance. As shown in the following example, *128 minus 107 = 21 hops*, which means that 21 hops for the package were sent from the data factory instance to the self-hosted IR during the TCP 3 handshake.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/ttl-107.png" alt-text="Screenshot showing a TTL value of 107.":::
+
+ Therefore, you need to engage the network team to check to see what the fourth hop is from the self-hosted IR. If it's the firewall, as with the Linux System, check any logs to see why that device resets the package after the TCP 3 handshake.
+
+ If you're unsure where to investigate, try to get the Netmon trace from both the self-hosted IR and the firewall during the problematic time. This approach will help you figure out which device might have reset the package and caused the disconnection. In this case, you also need to engage your network team to move forward.
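The TTL arithmetic above can be checked with a small helper. This is a sketch of the reasoning only, using the default initial TTL values from the list above (64 for modern Linux and macOS, 128 for Windows); the function name is ours:

```python
# Default initial TTL values, taken from the operating-system list above.
DEFAULT_TTL = {"linux": 64, "windows": 128, "macos": 64}

def hops_traversed(initial_ttl: int, observed_ttl: int) -> int:
    """Each router or network device decrements the TTL by one, so the
    number of hops traversed is the initial TTL minus the TTL observed
    in the Netmon trace."""
    if observed_ttl > initial_ttl:
        raise ValueError("observed TTL cannot exceed the initial TTL")
    return initial_ttl - observed_ttl

# The two cases from the trace above:
#   Linux sender:   64 - 61  = 3 decrements before reaching the self-hosted IR
#   Windows sender: 128 - 107 = 21 hops during the TCP 3 handshake
```

Comparing the observed TTL against the nearest default (64 or 128) is how you infer both the sender's operating system and the hop at which to look for the resetting device.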
+
### Analyze the Netmon trace

> [!NOTE]
How to determine whether you're affected:
### Couldn't establish a trust relationship for the SSL/TLS secure channel
-#### Symptoms
-
-The self-hosted IR couldn't connect to the Azure Data Factory or Azure Synapse service.
-
-When you check the self-hosted IR event log or the client notification logs in the CustomLogEvent table, you'll find the following error message:
-
-"The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. The remote certificate is invalid according to the validation procedure."
-
-The simplest way to check the server certificate of the service is to open the service URL in your browser. For example, open the [check server certificate link](https://eu.frontend.clouddatahub.net/) on the machine where the self-hosted IR is installed, and then view the server certificate information.
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/server-certificate.png" alt-text="Screenshot of the check server certificate pane of the Azure Data Factory service.":::
-
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/certificate-path.png" alt-text="Screenshot of the window for checking the server certification path.":::
-
-#### Cause
-
-There are two possible reasons for this issue:
-
-- Reason 1: The root CA of the service's server certificate isn't trusted on the machine where the self-hosted IR is installed.
-- Reason 2: You're using a proxy in your environment, the server certificate of the service is replaced by the proxy, and the replaced server certificate isn't trusted by the machine where the self-hosted IR is installed.
-
-#### Resolution
+- **Symptoms**
-- For reason 1: Make sure that the service's server certificate and its certificate chain are trusted by the machine where the self-hosted IR is installed.
-- For reason 2: Either trust the replaced root CA on the self-hosted IR machine, or configure the proxy not to replace the service's server certificate.
+ The self-hosted IR couldn't connect to the Azure Data Factory or Azure Synapse service.
+
+ When you check the self-hosted IR event log or the client notification logs in the CustomLogEvent table, you'll find the following error message:
+
+ "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. The remote certificate is invalid according to the validation procedure."
+
+ The simplest way to check the server certificate of the service is to open the service URL in your browser. For example, open the [check server certificate link](https://eu.frontend.clouddatahub.net/) on the machine where the self-hosted IR is installed, and then view the server certificate information.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/server-certificate.png" alt-text="Screenshot of the check server certificate pane of the Azure Data Factory service.":::
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/certificate-path.png" alt-text="Screenshot of the window for checking the server certification path.":::
-For more information about trusting certificates on Windows, see [Installing the trusted root certificate](/skype-sdk/sdn/articles/installing-the-trusted-root-certificate).
+- **Cause**
-#### Additional information
-We've rolled out a new SSL certificate, which is signed from DigiCert. Check to see whether the DigiCert Global Root G2 is in the trusted root CA.
+ There are two possible reasons for this issue:
+
+ - Reason 1: The root CA of the service's server certificate isn't trusted on the machine where the self-hosted IR is installed.
+ - Reason 2: You're using a proxy in your environment, the server certificate of the service is replaced by the proxy, and the replaced server certificate isn't trusted by the machine where the self-hosted IR is installed.
- :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/trusted-root-ca-check.png" alt-text="Screenshot showing the DigiCert Global Root G2 folder in the Trusted Root Certification Authorities directory.":::
+- **Resolution**
-If it isn't in the trusted root CA, [download it here](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt ).
+ - For reason 1: Make sure that the service's server certificate and its certificate chain are trusted by the machine where the self-hosted IR is installed.
+ - For reason 2: Either trust the replaced root CA on the self-hosted IR machine, or configure the proxy not to replace the service's server certificate.
+
+ For more information about trusting certificates on Windows, see [Installing the trusted root certificate](/skype-sdk/sdn/articles/installing-the-trusted-root-certificate).
+
+- **Additional information**
+  We've rolled out a new SSL certificate, which is signed by DigiCert. Check to see whether the DigiCert Global Root G2 is in the trusted root CA store.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/trusted-root-ca-check.png" alt-text="Screenshot showing the DigiCert Global Root G2 folder in the Trusted Root Certification Authorities directory.":::
+
+  If it isn't in the trusted root CA store, [download it here](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt).
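The trusted-root check above can also be scripted. A hedged sketch in Python: on Windows, `ssl.create_default_context()` loads the system's Trusted Root Certification Authorities store, so listing the subjects and looking for "DigiCert Global Root G2" approximates the manual check (both helper names are ours):

```python
import ssl

def trusted_root_common_names() -> list[str]:
    """List the commonName of each root CA the default SSL context trusts."""
    ctx = ssl.create_default_context()
    names = []
    for cert in ctx.get_ca_certs():
        # 'subject' is a tuple of RDNs, each a tuple of (key, value) pairs.
        subject = dict(pair for rdn in cert.get("subject", ()) for pair in rdn)
        names.append(subject.get("commonName", ""))
    return names

def has_digicert_global_root_g2(names) -> bool:
    return any(n == "DigiCert Global Root G2" for n in names)
```

If the check returns False on the self-hosted IR machine, install the certificate as described above.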
## Next steps
data-factory Ssis Integration Runtime Ssis Activity Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-integration-runtime-ssis-activity-faq.md
+
+ Title: Troubleshoot package execution in the SSIS integration runtime
+description: "This article provides troubleshooting guidance for SSIS package execution in the SSIS integration runtime"
+ Last updated : 02/21/2022
+# Troubleshoot package execution in the SSIS integration runtime
+
+
+This article includes the most common errors that you might find when you're executing SQL Server Integration Services (SSIS) packages in the SSIS integration runtime. It describes the potential causes and actions to solve the errors.
+
+## General
+
+### Where to find logs for troubleshooting
+
+Use the Azure Data Factory portal to check the output of the SSIS package execution activity. The output includes the execution result, error messages, and operation ID. For details, see [Monitor the pipeline](how-to-invoke-ssis-package-ssis-activity.md#monitor-the-pipeline).
+
+Use the SSIS catalog (SSISDB) to check the detail logs for the execution. For details, see [Monitor Running Packages and Other Operations](/sql/integration-services/performance/monitor-running-packages-and-other-operations).
+
+## Common errors, causes, and solutions
+
+### Error message: "Connection Timeout Expired" or "The service has encountered an error processing your request. Try again. "
+
+Here are potential causes and recommended actions:
+* The data source or destination is overloaded. Check the load on your data source or destination and see whether it has enough capacity. For example, if you used Azure SQL Database, consider scaling up if the database is likely to time out.
+* The network between the SSIS integration runtime and the data source or destination is unstable, especially when the connection is cross-region or between on-premises and Azure. Apply the retry pattern in the SSIS package by following these steps:
+ * Make sure your SSIS packages can rerun on failure without side effects (for example, data loss or data duplication).
+ * Configure **Retry** and **Retry interval** of **Execute SSIS Package** activity on the **General** tab.
+ :::image type="content" source="media/how-to-invoke-ssis-package-ssis-activity/ssis-activity-general.png" alt-text="Set properties on the General tab":::
+ * For an ADO.NET and OLE DB source or destination component, set **ConnectRetryCount** and **ConnectRetryInterval** in Connection Manager in the SSIS package or SSIS activity.
+
+
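The retry pattern above (safe only if your package can rerun without side effects) amounts to the following logic, sketched here in Python with a hypothetical `run_package` callable standing in for the Execute SSIS Package activity:

```python
import time

def run_with_retry(run_package, retries=3, interval_seconds=30, sleep=time.sleep):
    """Re-invoke run_package() up to `retries` extra times, waiting
    `interval_seconds` between attempts, mirroring the activity's
    Retry and Retry interval settings on the General tab."""
    last_error = None
    for attempt in range(retries + 1):
        try:
            return run_package()
        except Exception as exc:  # for example, a connection timeout
            last_error = exc
            if attempt < retries:
                sleep(interval_seconds)
    raise last_error
```

The `sleep` parameter is injectable only to make the sketch testable; the activity itself handles the waiting for you.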
+### Error message: "ADO NET Source has failed to acquire the connection '...'" with "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server wasn't found or wasn't accessible. "
+
+This issue usually means the data source or destination is inaccessible from the SSIS integration runtime. The reasons can vary. Try these actions:
+* Make sure you're passing the data source or destination name/IP correctly.
+* Make sure the firewall is set properly.
+* Make sure your virtual network is configured properly if your data source or destination is on-premises:
+ * You can verify whether the issue is from virtual network configuration by provisioning an Azure VM in the same virtual network. Then check whether the data source or destination can be accessed from the Azure VM.
+ * You can find more details about using a virtual network with an SSIS integration runtime in [Join an Azure-SSIS integration runtime to a virtual network](join-azure-ssis-integration-runtime-virtual-network.md).
++
+### Error message: "ADO NET Source has failed to acquire the connection '...'" with "Couldn't create a managed connection manager. "
+
+The potential cause is that the ADO.NET provider used in the package isn't installed in the SSIS integration runtime. You can install the provider by using a custom setup. You can find more details about custom setup in [Customize setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md).
+
+### Error message: "The connection '...' isn't found "
+
+A known issue in older versions of SQL Server Management Studio (SSMS) can cause this error. If the package contains a custom component (for example, SSIS Azure Feature Pack or partner components) that isn't installed on the machine where SSMS is used to do the deployment, SSMS will remove the component and cause the error. Upgrade [SSMS](/sql/ssms/download-sql-server-management-studio-ssms) to the latest version that has the issue fixed.
+
+### Error message: "SSIS Executor exit code: -1073741819."
+
+* Potential cause & recommended action:
+  * This error may be caused by a limitation of the Excel source and destination when multiple Excel sources or destinations execute in parallel on multiple threads. You can work around this limitation by changing your Excel components to execute in sequence, or by separating them into different packages and triggering them through an "Execute Package Task" with the **ExecuteOutOfProcess** property set to **True**.
+
+### Error message: "There isn't enough space on the disk "
+
+This error means the local disk is used up in the SSIS integration runtime node. Check whether your package or custom setup is consuming a lot of disk space:
+* If the disk is consumed by your package, it will be freed up after the package execution finishes.
+* If the disk is consumed by your custom setup, you'll need to stop the SSIS integration runtime, modify your script, and start the integration runtime again. The whole Azure blob container that you specified for custom setup will be copied to the SSIS integration runtime node, so check whether there's any unnecessary content under that container.
+
+### Error message: "Failed to retrieve resource from master. Microsoft.SqlServer.IntegrationServices.Scale.ScaleoutContract.Common.MasterResponseFailedException: Code:300004. Description:Load file "***" failed. "
+
+* Potential cause & recommended action:
+  * If the SSIS activity is executing the package from the file system (package file or project file), this error occurs when the project, package, or configuration file isn't accessible with the package access credential that you provided in the SSIS activity.
+  * If you're using Azure Files:
+    * The file path should start with \\\\\<storage account name\>.file.core.windows.net\\\<file share path\>
+    * The domain should be "Azure"
+    * The username should be \<storage account name\>
+    * The password should be \<storage access key\>
+  * If you're using an on-premises file, check whether the virtual network, package access credential, and permissions are configured properly so that your Azure-SSIS integration runtime can access your on-premises file share.
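The Azure Files values above follow a fixed pattern, so they can be assembled programmatically. A sketch (the function is illustrative only; `storage_account`, `share_path`, and `storage_access_key` stand in for the `<storage account name>`, `<file share path>`, and `<storage access key>` placeholders above):

```python
def azure_files_package_access(storage_account: str, share_path: str,
                               storage_access_key: str) -> dict:
    """Build the file path and credentials the SSIS activity expects
    when the package or project is stored in Azure Files."""
    return {
        # UNC path: \\<storage account name>.file.core.windows.net\<file share path>
        "file_path": rf"\\{storage_account}.file.core.windows.net\{share_path}",
        "domain": "Azure",
        "username": storage_account,
        "password": storage_access_key,
    }
```

Checking your configured values against this pattern is a quick way to spot a wrong domain or a path that doesn't start with the storage endpoint.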
++
+### Error message: "The file name '...' specified in the connection was not valid "
+
+* Potential cause & recommended action:
+  * An invalid file name is specified.
+  * Make sure you're using the fully qualified domain name (FQDN) instead of the short name in your connection manager.
+
+### Error message: "Cannot open file '...' "
+
+This error occurs when package execution can't find a file in the local disk in the SSIS integration runtime. Try these actions:
+* Don't use the absolute path in the package that's being executed in the SSIS integration runtime. Use the current execution working directory (.) or the temp folder (%TEMP%) instead.
+* If you need to persist some files on SSIS integration runtime nodes, prepare the files as described in [Customize setup](how-to-configure-azure-ssis-ir-custom-setup.md). All the files in the working directory will be cleaned up after the execution is finished.
+* Use Azure Files instead of storing the file in the SSIS integration runtime node. For details, see [Use Azure file shares](/sql/integration-services/lift-shift/ssis-azure-files-file-shares#use-azure-file-shares).
++
+### Error message: "The database 'SSISDB' has reached its size quota "
+
+A potential cause is that the SSISDB database created in Azure SQL Database or in SQL Managed Instance has reached its quota. Try these actions:
+* Consider increasing the DTU of your database. You can find details in [SQL Database limits for a logical server](../azure-sql/database/resource-limits-logical-server.md).
+* Check whether your package would generate many logs. If so, you can configure an elastic job to clean up these logs. For details, see [Clean up SSISDB logs with Azure Elastic Database jobs](how-to-clean-up-ssisdb-logs-with-elastic-jobs.md).
+
+### Error message: "The request limit for the database is ... and has been reached. "
+
+If many packages are running in parallel in the SSIS integration runtime, this error might occur because SSISDB has hit its request limit. Consider increasing the DTU of SSISDB to resolve this issue. You can find details in [SQL Database limits for a logical server](../azure-sql/database/resource-limits-logical-server.md).
+
+### Error message: "SSIS Operation failed with unexpected operation status: ... "
+
+The error is mostly caused by a transient problem, so try to rerun the package execution. Apply the retry pattern in the SSIS package by following these steps:
+
+* Make sure your SSIS packages can rerun on failure without side effects (for example, data loss or data duplication).
+* Configure **Retry** and **Retry interval** of **Execute SSIS Package** activity on the **General** tab.
+ :::image type="content" source="media/how-to-invoke-ssis-package-ssis-activity/ssis-activity-general.png" alt-text="Set properties on the General tab":::
+* For an ADO.NET and OLE DB source or destination component, set **ConnectRetryCount** and **ConnectRetryInterval** in Connection Manager in the SSIS package or SSIS activity.
+
+### Error message: "There is no active worker. "
+
+This error usually means the SSIS integration runtime has an unhealthy status. Check the Azure portal for the status and detailed errors. For more information, see [Azure-SSIS integration runtime](./monitor-integration-runtime.md#azure-ssis-integration-runtime).
+
+### Error message: "Your integration runtime cannot be upgraded and will eventually stop working, since we cannot access the Azure Blob container you provided for custom setup. "
+
+This error occurs when the SSIS integration runtime can't access the storage configured for custom setup. Check whether the shared access signature (SAS) URI that you provided is valid and hasn't expired.
+
+### Error message: "Microsoft OLE DB Provider for Analysis Services. 'Hresult: 0x80004005 Description:' COM error: COM error: mscorlib; Exception has been thrown by the target of an invocation "
+
+One potential cause is that the username or password with Azure AD Multi-Factor Authentication enabled is configured for Azure Analysis Services authentication. This authentication isn't supported in the SSIS integration runtime. Try to use a service principal for Azure Analysis Services authentication:
+
+1. Prepare a service principal as described in [Automation with service principals](../analysis-services/analysis-services-service-principal.md).
+2. In the Connection Manager, configure **Use a specific user name and password:** set **app:*&lt;AppID&gt;*@*&lt;TenantID&gt;*** as the username and clientSecret as the password. Here is an example of a correctly formatted user name:
+
+ `app:12345678-9012-3456-789a-bcdef012345678@9abcdef0-1234-5678-9abc-def0123456789abc`
+
+
+### Error message: "ADONET Source has failed to acquire the connection {GUID} with the following error message: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'" when using a managed identity
+
+Make sure you don't configure the authentication method of Connection Manager as **Active Directory Password Authentication** when the parameter *ConnectUsingManagedIdentity* is **True**. You can configure it as **SQL Authentication** instead, which is ignored if *ConnectUsingManagedIdentity* is set.
+
+### Error message: "0xC020801F at ..., OData Source [...]: Cannot acquire a managed connection from the run-time connection manager "
+
+One potential cause is that Transport Layer Security (TLS), which is required by your OData source, isn't enabled in the SSIS integration runtime. You can enable TLS in the SSIS integration runtime by using a custom setup. For more details, see [Can't connect Project Online Odata from SSIS](/office365/troubleshoot/cant-connect-project-online-odata-from-ssis) and [Customize setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md).
++
+### Error message: "Request staging task with operation guid ... fail since error: Failed to dispatch staging operation with error message: Microsoft.SqlServer.IntegrationServices.AisAgentCore.AisAgentException: Failed to load data proxy. "
+
+Make sure your Azure-SSIS integration runtime is configured with a self-hosted integration runtime as a proxy. For more details, see [Configure Self-Hosted IR as a proxy for Azure-SSIS IR in ADF](self-hosted-integration-runtime-proxy-ssis.md).
+
+### Error message: "Staging task status: Failed. Staging task error: ErrorCode: 2010, ErrorMessage: The Self-hosted Integration Runtime ... is offline "
+
+Make sure your self-hosted integration runtime is installed and started. For more details, see [Create and configure a self-hosted integration runtime](create-self-hosted-integration-runtime.md).
+
+### Error message: "Staging task error: ErrorCode: 2906, ErrorMessage: Package execution failed. Output: {"OperationErrorMessages": "Error: The requested OLE DB provider ... is not registered. If the 64-bit driver is not installed, run the package in 32-bit mode... "
+
+Make sure the corresponding providers used by the OLE DB connectors in your package are properly installed on the self-hosted integration runtime machine. For more details, see [Configure Self-Hosted IR as a proxy for Azure-SSIS IR in ADF](self-hosted-integration-runtime-proxy-ssis.md#prepare-the-self-hosted-ir).
+
+### Error message: "Staging task error: ErrorCode: 2906, ErrorMessage: Package execution failed. Output: {"OperationErrorMessages": "Error: System.IO.FileLoadException: Could not load file or assembly 'Microsoft.WindowsAzure.Storage, Version=..., Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference.'... "
+
+One potential cause is that your self-hosted integration runtime wasn't installed or upgraded properly. We suggest that you download and reinstall the latest self-hosted integration runtime. For more details, see [Create and configure a self-hosted integration runtime](create-self-hosted-integration-runtime.md#installation-best-practices).
+
+
+### Error message: "Staging task failed. TaskStatus: Failed, ErrorCode: 2906, ErrorMessage: Package execution failed. For more details, select the output of your activity run on the same row., Output: {"OperationErrorMessages": "4/14/2021 7:10:35 AM +00:00 : = Failed to start Named pipe proxy... "
+
+Check whether the security policies are correctly assigned to the account that runs the self-hosted IR service. If Windows authentication is used in the Execute SSIS Package activity, or if the execution credential is set in the SSIS catalog (SSISDB), the same security policies must be assigned to the Windows account that's used. For more details, see [Configure Self-Hosted IR as a proxy for Azure-SSIS IR in ADF](self-hosted-integration-runtime-proxy-ssis.md#enable-windows-authentication-for-on-premises-tasks).
++
+### Error message: "A connection is required when requesting metadata. If you are working offline, uncheck Work Offline on the SSIS menu to enable the connection "
+
+* Potential cause & recommended action:
+  * If there's also a warning message, "The component does not support using connection manager with ConnectByProxy value setting true", in the execution log, it means a connection manager is used on a component that doesn't support "ConnectByProxy" yet. The supported components are listed in [Configure Self-Hosted IR as a proxy for Azure-SSIS IR in ADF](self-hosted-integration-runtime-proxy-ssis.md#enable-ssis-packages-to-use-a-proxy).
+  * The execution log can be found in the [SSMS report](/sql/integration-services/performance/monitor-running-packages-and-other-operations#reports) or in the log folder you specified in the SSIS package execution activity.
+  * As an alternative, a virtual network can be used to access on-premises data. For more details, see [Join an Azure-SSIS integration runtime to a virtual network](join-azure-ssis-integration-runtime-virtual-network.md).
+
+
+### Error message: "Staging task status: Failed. Staging task error: ErrorCode: 2906, ErrorMessage: Package execution failed. Output: {"OperationErrorMessages": "SSIS Executor exit code: -1.\n", "LogLocation": "...\\SSISTelemetry\\ExecutionLog\\...", "effectiveIntegrationRuntime": "...", "executionDuration": ..., "durationInQueue": { "integrationRuntimeQueue": ... }} "
+
+Make sure the Visual C++ runtime is installed on the self-hosted integration runtime machine. For more details, see [Configure Self-Hosted IR as a proxy for Azure-SSIS IR in ADF](self-hosted-integration-runtime-proxy-ssis.md#prepare-the-self-hosted-ir).
+
+
+### Multiple Package executions are triggered unexpectedly
+
+* Potential cause & recommended action:
+  * An ADF Stored Procedure activity or Lookup activity is used to trigger the SSIS package execution. The T-SQL command may hit a transient issue and trigger a rerun, which causes multiple package executions.
+  * Use the Execute SSIS Package activity instead, which ensures the package execution won't rerun unless you set a retry count in the activity. For details, see [Run an SSIS package with the Execute SSIS Package activity](how-to-invoke-ssis-package-ssis-activity.md).
+  * Refine your T-SQL command so that it can rerun safely, by checking whether an execution has already been triggered.
+
+
+### Package execution takes too long
+
+Here are potential causes and recommended actions:
+
+* Too many package executions have been scheduled on the SSIS integration runtime. All these executions will be waiting in a queue for their turn.
+ * Determine the maximum by using this formula:
+
+ Max Parallel Execution Count per IR = Node Count * Max Parallel Execution per Node
+ * To learn how to set the node count and maximum parallel execution per node, see [Create an Azure-SSIS integration runtime in Azure Data Factory](create-azure-ssis-integration-runtime.md).
+* The SSIS integration runtime is stopped or has an unhealthy status. To learn how to check the SSIS integration runtime status and errors, see [Azure-SSIS integration runtime](monitor-integration-runtime.md#azure-ssis-integration-runtime).
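The capacity formula above can be sketched as a quick check (a hypothetical helper; plug in the node count and per-node limit you configured for your IR):

```python
def max_parallel_executions(node_count: int, max_per_node: int) -> int:
    """Maximum package executions an Azure-SSIS IR can run at once.

    Executions scheduled beyond this limit wait in the queue for their turn.
    """
    return node_count * max_per_node

# Example: a 4-node IR configured for 8 parallel executions per node
print(max_parallel_executions(4, 8))  # 32
```

If your sustained workload exceeds this number, either scale out the node count or raise the per-node limit when you configure the integration runtime.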
+
+We also recommend that you set a timeout on the **General** tab:
+
+   :::image type="content" source="media/how-to-invoke-ssis-package-ssis-activity/ssis-activity-general.png" alt-text="Set properties on the General tab":::
+
+
+### Poor performance in package execution
+
+Try these actions:
+
+* Make sure the SSIS integration runtime is in the same region as the data source and destination.
+
+* Set the logging level of package execution to **Performance** to collect duration information for each component in the execution. For details, see [Integration Services (SSIS) logging](/sql/integration-services/performance/integration-services-ssis-logging).
+
+* Check IR node performance in the Azure portal:
+ * For information about how to monitor the SSIS integration runtime, see [Azure-SSIS integration runtime](monitor-integration-runtime.md#azure-ssis-integration-runtime).
+ * You can find CPU/memory history for the SSIS integration runtime by viewing the metrics of the data factory in the Azure portal.
+ :::image type="content" source="media/ssis-integration-runtime-ssis-activity-faq/monitor-metrics-ssis-integration-runtime.png" alt-text="Monitor metrics of the SSIS integration runtime":::
databox Data Box Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-ordered.md
Last updated 01/11/2022 -+ #Customer intent: As an IT admin, I need to be able to order Data Box to upload on-premises data from my server onto Azure. # Tutorial: Order Azure Data Box
databox Data Box Disk Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-quickstart-portal.md
Last updated 11/04/2020 -+ #Customer intent: As an IT admin, I need to quickly deploy Data Box Disk so as to import data into Azure.
databox Data Box Heavy Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-quickstart-portal.md
Last updated 11/04/2020 -+ #Customer intent: As an IT admin, I need to quickly deploy Data Box Heavy so as to import data into Azure.
defender-for-iot Concept Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-key-concepts.md
Engines identify security issues via continuous monitoring and five analytics en
- **Anomaly detection**: Detects unusual machine-to-machine (M2M) communications and behaviors. By modeling ICS networks as deterministic sequences of states and transitions, the engine uses a patented technique called Industrial Finite State Modeling (IFSM). The solution requires a shorter learning period than generic mathematical approaches or analytics, which were originally developed for IT rather than OT. It also detects anomalies faster, with minimal false positives. - **Operational incident detection**: Identifies operational issues such as intermittent connectivity that can indicate early signs of equipment failure.
+
+Tools are available to enable and disable sensor engines. Alerts are not triggered from engines that are disabled. See [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md).
+
+You can fine-tune detection instructions by working with Smart IT learning. See [Learning and Smart IT Learning modes](how-to-control-what-traffic-is-monitored.md#learning-and-smart-it-learning-modes).
+
+## Detection engines and alerts
+
+Alerts are triggered when sensor engines detect changes in network traffic and behavior that need your attention. This section describes the kind of alerts that each engine triggers.
+
+| Alert type | Description |
+|-|-|
+| Policy violation alerts | Triggered when the Policy Violation engine detects a deviation from traffic previously learned. For example: <br /> - A new device is detected. <br /> - A new configuration is detected on a device. <br /> - A device not defined as a programming device carries out a programming change. <br /> - A firmware version changed. |
+| Protocol violation alerts | Triggered when the Protocol Violation engine detects packet structures or field values that don't comply with the protocol specification. |
+| Operational alerts | Triggered when the Operational engine detects network operational incidents or a device malfunctioning. For example, a network device was stopped through a Stop PLC command, or an interface on a sensor stopped monitoring traffic. |
+| Malware alerts | Triggered when the Malware engine detects malicious network activity. For example, the engine detects a known attack such as Conficker. |
+| Anomaly alerts | Triggered when the Anomaly engine detects a deviation. For example, a device is performing network scans but is not defined as a scanning device. |
+
+For more alert information, see:
+
+- [Manage the alert event](how-to-manage-the-alert-event.md)
+
+- [Accelerate alert workflows](how-to-accelerate-alert-incident-response.md)
+
+- [Alert types and descriptions](alert-engine-messages.md)
## Network Traffic Analysis for risk and vulnerability assessment
Defender for IoT enables the effective management of multiple deployments and a
The on-premises management console is a web-based administrative platform that lets you monitor and control the activities of global sensor installations. In addition to managing the data received from deployed sensors, the on-premises management console seamlessly integrates data from various business resources: CMDBs, DNS, firewalls, Web APIs, and more. We recommend that you familiarize yourself with the concepts, capabilities, and features available to sensors before working with the on-premises management console.
Using custom, condition-based alert triggering and messaging helps pinpoint spec
For a complete list of supported protocols, see [Supported Protocols](concept-supported-protocols.md#supported-protocols).
-### Secure development environment
-
-The Horizon ODE enables development of custom or proprietary protocols that can't be shared outside an organization, for example because of legal regulations or corporate policies.
-
-Develop dissector plugins without:
-- revealing any proprietary information about how your protocols are defined.
-- sharing any of your sensitive PCAPs.
-- violating compliance regulations.
-Contact <ms-horizon-support@microsoft.com> for information about developing protocol plugins.
-
-### Customization and localization
-
-The SDK supports various customization options, including:
-
- - Text for function codes.
-
- - Full localization text for alerts, events, and protocol parameters.
-
- :::image type="content" source="media/references-horizon-sdk/localization.png" alt-text="View fully localized alerts.":::
-
-## Horizon architecture
-
-The architectural model includes three product layers.
--
-### Defender for IoT platform layer
-
-Enables immediate integration and real-time monitoring of custom dissector plugins in the Defender for IoT platform, without the need to upgrade the Defender for IoT platform version.
-
-### Defender for IoT services layer
-
-Each service is designed as a pipeline, decoupled from a specific protocol, enabling more efficient, independent development.
-
-Services listen for traffic on the pipeline. They interact with the plugin data and the traffic captured by the sensors to index deployed protocols and analyze the traffic payload.
-
-### Custom dissector layer
-
-Enables creation of plugins using the Defender for IoT proprietary SDK (including C++ implementation and JSON configuration) to:
-- Define how to identify the protocol
-- Define how to map the fields you want to extract from the traffic, and extract them
-- Define how to integrate with the Defender for IoT services
- :::image type="content" source="media/references-horizon-sdk/layers.png" alt-text="The built-in layers.":::
-
-Defender for IoT provides basic dissectors for common protocols. You can build your dissectors on top of these protocols.
## What is an Inventory Device

The Defender for IoT Device inventory displays an extensive range of asset attributes that are detected by sensors monitoring the organization's networks and managed endpoints.
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md
# Accelerate alert workflows
-This article describes how to accelerate alert workflows by using alert comments, alert groups, and custom alert rules in Microsoft Defender for IoT. These tools help you:
+This article describes how to accelerate alert workflows by using alert comments, alert groups, and custom alert rules for standard protocols and proprietary protocols in Microsoft Defender for IoT. These tools help you:
- Analyze and manage the large volume of alert events detected in your network.
This article describes how to accelerate alert workflows by using alert comments
Work with alert comments to improve communication between individuals and teams during the investigation of an alert event.

Use alert comments to improve:

- **Workflow steps**: Provide alert mitigation steps.
Use alert comments to improve:
- **Workflow guidance**: Provide recommendations, insights, or warnings about the event.

The list of available options appears in each alert. Users can select one or several messages.
-To add alert comments:
-
-1. On the side menu, select **System Settings**.
+**To add alert comments:**
-2. In the **System Setting** window, select **Alert Comments**.
+1. On the side menu, select **System Settings** > **Network Monitoring** > **Alert Comments**.
-3. In the **Add comments** box, enter the comment text. Use up to 50 characters. Commas are not permissible.
+2. Enter a description and select **Submit**.
-4. Select **Add**.
## Accelerate incident workflows by using alert groups
Alert groups are applied when you create forwarding rules for the following part
- ArcSight The relevant alert group appears in partner output solutions.
Alert groups are predefined. For details about alerts associated with alert grou
## Customize alert rules

Use custom alert rules to more specifically pinpoint activity of interest to you.

You can add custom alert rules based on:

-- A category, for example a protocol, port or file.
+- A category, for example a standard protocol, port, or file.
+
+- Traffic detections based on proprietary protocols developed in a Horizon plugin (Horizon Open Development Environment, ODE).
+- Source and destination addresses
-- A condition based on the category chosen, for example a function associated with a protocol, a file name, port or transport number.
-- A condition based on a date and time reference, for example if a detection was made on a specific day or a certain part of the day.
-If the sensor detects the activity described in the rule, the alert is sent. information that individual sensors detect. For example, define a rule that instructs a sensor to trigger an alert based on a source IP, destination IP, or command (within a protocol). When the sensor detects the traffic defined in the rule, an alert or event is generated.
+- A combination of protocol fields from all protocol layers. For example, in an environment running MODBUS, you may want to generate an alert when the sensor detects a write command to a memory register on a specific IP address and Ethernet destination, or an alert when any access is performed to a specific IP address.
+
+If the sensor detects the activity described in the rule, the alert is sent.
You can also use alert rule actions to instruct Defender for IoT to:
You can also use alert rule actions to instruct Defender for IoT to:
- Assign an alert severity. - Generate an event rather than alert. The detected information will appear in the event timeline. The alert message indicates that a user-defined rule triggered the alert. +
+### Create custom alerts
**To create a custom alert rule:** 1. Select **Custom Alerts** from the side menu of a sensor.
-1. Select the plus sign (**+**) to create a rule.
-1. Define a rule name.
-1. Select a category or protocol from the **Categories** pane.
-1. Define a specific source and destination IP or MAC address, or choose any address.
-1. Define one or several rule conditions. Two categories of conditions can be created:
- - Conditions based on unique values associated with the category selected. Select Add and define the values.
- - Conditions based on the when the activity was detected. In the Detections section, select a time period and day in which the detection must occur in order to send the alert. You can choose to send the alert if the activity is detected anytime, during or after working hours. Use the Define working hours option to instruct Defender for IoT working hours for your organization.
-1. Define rule actions:
- - Indicate if the rule triggers an **Alarm** or **Event**.
- - Assign a severity level to the alert.
- - Indicate if the alert will include a PCAP file.
-1. Select **Save**.
-
-The rule is added to the **Customized Alerts Rules** list, where you can review basic rule parameters, the last time the rule was triggered, and more. You can also enable and disable the rule from the list.
-
+
+1. Select **Create rule** (**+**).
-### See also
+ :::image type="content" source="media/how-to-work-with-alerts-sensor/custom-alerts-rules.png" alt-text="Create custom alert rules":::
+
+1. Define an alert name.
+1. Select protocol to detect.
+1. Define a message to display. Alert messages can contain alphanumeric characters that you enter, as well as detected traffic variables. For example, include the detected source and destination addresses in the alert message. Use { } to add variables to the message.
+1. Select the engine that should detect the activity.
+1. Select the source and destination device pairs for which activity should be detected.
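As a rough illustration of the `{ }` variable substitution in alert messages, a plain template works the same way (the variable names `source` and `destination` here are hypothetical; the actual variables come from the fields your plugin defines):

```python
# Hypothetical alert message template using { } variables,
# e.g. "Unauthorized access from {source} to {destination}".
template = "Unauthorized access from {source} to {destination}"

# Values the sensor would fill in from the detected traffic (made up here)
detected = {"source": "10.1.0.5", "destination": "10.1.0.9"}

print(template.format(**detected))
# Unauthorized access from 10.1.0.5 to 10.1.0.9
```

This is only an analogy for how the placeholders are resolved; the sensor performs the substitution itself when the alert is triggered.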
+
+#### Create rule conditions
+
+Define one or several rule conditions. Two categories of conditions can be created:
+
+**Condition based on unique values**
+
+Create conditions based on unique values associated with the category selected. Rule conditions can comprise one or several sets of fields, operators, and values. Create condition sets by using AND.
+
+**To create a rule condition:**
+
+1. Select a **Variable**. Variables represent fields configured in the plugin.
+
+7. Select an **Operator**:
+
+    - (==) Equal to
+    - (!=) Not equal to
+    - (>) Greater than
+    - (>=) Greater than or equal to
+    - (<) Less than
+    - (<=) Less than or equal to
+    - In Range
+    - Not in Range
+    - Same as (field X same as field Y)
+
+8. Enter a **Value** as a number. If the variable you selected is a MAC address or IP address, the value must be converted from a dotted-decimal address to decimal format. Use an IP address conversion tool, for example <https://www.ipaddressguide.com/ip>.
+
+ :::image type="content" source="media/how-to-work-with-alerts-sensor/custom-rule-conditions.png" alt-text="Custom rule condition":::
+
+9. Select plus (**+**) to create a condition set.
-[View information provided in alerts](how-to-view-information-provided-in-alerts.md)
+When the rule condition or condition set is met, the alert is sent. You will be notified if the condition logic is not valid.
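Instead of an online conversion tool, the dotted-decimal-to-decimal conversion that the **Value** field expects can be done with a short script (a sketch using the standard library; confirm the expected format against your sensor version):

```python
import ipaddress

def ip_to_decimal(dotted: str) -> int:
    """Convert a dotted-decimal IPv4 address to its decimal (integer) form,
    as required when entering an IP address as a rule condition value."""
    return int(ipaddress.IPv4Address(dotted))

print(ip_to_decimal("192.168.1.10"))  # 3232235786
```

The conversion is simply `a*256**3 + b*256**2 + c*256 + d` for an address `a.b.c.d`; `ipaddress.IPv4Address` also validates that the input is a well-formed address.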
+
+**Condition based on when the activity took place**
+
+Create conditions based on when the activity was detected. In the Detected section, select a time period and day in which the detection must occur in order to send the alert. You can choose to send the alert if the activity is detected:
+- any time throughout the day
+- during working hours
+- after working hours
+- a specific time
+
+Use the **Define working hours** option to tell Defender for IoT the working hours for your organization.
+
+#### Define rule actions
+
+The following actions can be defined for the rule:
+
+- Indicate if the rule triggers an **Alarm** or **Event**.
+- Assign a severity level to the alert (Critical, Major, Minor, Warning).
+- Indicate if the alert will include a PCAP file.
+
+The rule is added to the **Customized Alerts Rules** page.
++
+### Managing custom alert rules
+
+Manage the rules you create from the **Custom alert rules** page. For example:
++
+- Review the last time the rule was triggered, the number of times the alert was triggered for the rule in the last week, or the last time the rule was modified.
+- Enable or disable rules.
+- Delete rules.
+
+Select the checkbox next to multiple rules to perform a bulk enable/disable or delete.
+
+### Tracking changes to custom alert rules
+
+Changes made to custom alert rules are tracked in the event timeline, for example if a user changes a severity level, the detected protocol, or any other rule parameter.
+
+**To view changes to the alert rule:**
+
+1. Navigate to the Event timeline page.
++
+### See also
[Manage the alert event](how-to-manage-the-alert-event.md)
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md
Last updated 11/09/2021
-# Activate and set up your on-premises management console
+# Activate and set up your on-premises management console
Activation and setup of the on-premises management console ensures that:
Activation and setup of the on-premises management console ensures that:
1. Navigate to the IP address you received for the on-premises management console during the system installation.
-1. Enter the username and password you received for the on-premises management console during the system installation.
+1. Enter the username and password you received for the on-premises management console during the system installation.
If you forgot your password, select the **Recover Password** option, and see [Password recovery](how-to-manage-the-on-premises-management-console.md#password-recovery) for instructions on how to recover your password.
After you sign in for the first time, you will need to activate the on-premises
The on-premises management console can be associated to one, or more subscriptions. The activation file will be associated with all of the selected subscriptions, and the number of committed devices at the time of download.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png" alt-text="You can select multiple subscriptions to onboard your on-premises management console to.":::
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png" alt-text="You can select multiple subscriptions to onboard your on-premises management console to." lightbox="media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png":::
If you have not already onboarded a subscription, then [Onboard a subscription](how-to-manage-subscriptions.md#onboard-a-subscription).
Zones are logical entities that enable you to divide devices within a site into
You configure zones as a part of the site configuration process. The following table describes the parameters in the **Site Management** window.
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
Administrator users carry out activation when signing in for the first time and
Security analysts and read-only users can't activate a sensor or generate a new password.
-## Sign-in and activation for administrator users
+## Sign in and activation for administrator users
-Administrators who sign in for the first time should verify that they have access to activation and password recovery files that were downloaded during sensor onboarding. If not, they need Azure security administrator, subscription contributor, or subscription owner permissions to generate these files via Defender for IoT in the Azure portal.
+Administrators who sign in for the first time should verify that they have access to the activation and password recovery files for this sensor. These files were downloaded during sensor onboarding. If Administrators don't have these files, they can generate new ones via Defender for IoT in the Azure portal. The following Azure permissions are needed to generate the files:
-### First-time sign-in and activation checklist
+- Azure security administrator
+- Subscription contributor
+- Subscription owner
+
+### First-time sign in and activation checklist
Before signing in to the sensor console, administrator users should have access to: - The sensor IP address that was defined during the installation. -- User sign-in credentials for the sensor. If you downloaded an ISO for the sensor, use the default credentials that you received during the installation. We recommend that you create a new *Administrator* user after activation.
+- User sign in credentials for the sensor. If you downloaded an ISO for the sensor, use the default credentials that you received during the installation. We recommend that you create a new *Administrator* user after activation.
- An initial password. If you purchased a preconfigured sensor from Arrow, you need to generate a password when signing in for the first time.
A locally connected, or cloud-connected activation file was generated and downlo
### About certificates
-Following sensor installation, a local self-signed certificate is generated and used to access the sensor console. After an administrator signs in to the console for the first time, that user is prompted to onboard an SSL/TLS certificate.
+Following sensor installation, a local self-signed certificate is generated. The certificate is used to access the sensor console. After administrators sign in to the console for the first time, they're prompted to onboard an SSL/TLS certificate.
Two levels of security are available:
The console supports the following certificate types:
> [!IMPORTANT] > We recommend that you don't use the default self-signed certificate. The certificate is not secure and should be used for test environments only. The owner of the certificate can't be validated, and the security of your system can't be maintained. Never use this option for production networks.
-See [Manage certificates](how-to-manage-individual-sensors.md#manage-certificates) for more information about working with certificates.
+For more information about working with certificates, see [Manage certificates](how-to-manage-individual-sensors.md#manage-certificates).
### Sign in and activate the sensor
See [Manage certificates](how-to-manage-individual-sensors.md#manage-certificate
1. Go to the sensor console from your browser by using the IP defined during the installation. The sign-in dialog box opens.
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/azure-defender-for-iot-sensor-log-in-screen.png" alt-text="Microsoft Defender for IoT sensor.":::
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Sensor log in screen":::
+ 1. Enter the credentials defined during the sensor installation, or select the **Password recovery** option. If you purchased a preconfigured sensor from Arrow, generate a password first. For more information on password recovery, see [Investigate password failure at initial sign-in](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#investigate-password-failure-at-initial-sign-in).
-1. After you sign in, the **Activation** dialog box opens. Select **Upload** and go to the activation file that you downloaded during the sensor onboarding.
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/activation-upload-screen-with-upload-button.png" alt-text="Select Upload and go to the activation file.":::
+1. Select **Login/Next**. The **Sensor Network Settings** tab opens.
-1. Select the **Sensor Network Configuration** link if you want to change the sensor network configuration before activation. See [Update sensor network configuration before activation](#update-sensor-network-configuration-before-activation).
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-wizard-activate.png" alt-text="log in to sensor":::
-1. Accept the terms and conditions.
+1. Use this tab if you want to change the sensor network configuration before activation. The configuration parameters were defined during the software installation, or when you purchased a preconfigured sensor. The following parameters were defined:
-1. Select **Activate**. The SSL/TLS certificate dialog box opens.
+ - IP address
+ - DNS
+ - Default gateway
+ - Subnet mask
+ - Host name
-1. Define a certificate name.
-1. Upload the CRT and key files.
-1. Enter a passphrase and upload a PEM file if required.
-1. Select **Next**. The validation screen opens. By default, validation between the management console and connected sensors is enabled.
-1. Turn off the **Enable system-wide validation** toggle to disable validation. We recommend that you enable validation.
-1. Select **Save**.
+ You might want to update this information before activating the sensor. For example, you might need to change the preconfigured parameters defined by Arrow. You can also define proxy settings before activating your sensor.
+
+   If you want to work with a proxy, enable the proxy toggle and add the proxy host, port, and username.
-You might need to refresh your screen after uploading the CA-signed certificate.
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-wizard-activate-proxy.png" alt-text="Initial Log in to sensor using a proxy":::
-For information about uploading a new certificate, supported certificate parameters, and working with CLI certificate commands, see [Manage individual sensors](how-to-manage-individual-sensors.md).
+1. Select **Next.** The Activation tab opens.
-#### Update sensor network configuration before activation
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/wizard-upload-activation-file.png" alt-text="First time log in activation file":::
-The sensor network configuration parameters were defined during the software installation, or when you purchased a preconfigured sensor. The following parameters were defined:
+1. Select **Upload** and go to the activation file that you downloaded during the sensor onboarding.
-- IP address-- DNS-- Default gateway-- Subnet mask-- Host name
+1. Approve the terms and conditions.
-You might want to update this information before activating the sensor. For example, you might need to change the preconfigured parameters defined by Arrow. You can also define proxy settings before activating your sensor.
+1. Select **Activate**. The SSL/TLS certificate tab opens. Before defining certificates, see [About certificates](#about-certificates).
-**To update sensor network configuration parameters:**
+ It is **not recommended** to use a locally generated certificate in a production environment.
-1. Select the **Sensor Network Configuration** link form the **Activation** dialog box.
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/wizard-upload-activation-certificates-1.png" alt-text="Initial sensor login certificates":::
- :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/editable-network-configuration-screen-v2.png" alt-text="Sensor Network Configuration.":::
+1. Enable the **Import trusted CA certificate (recommended)** toggle.
+1. Define a certificate name.
+1. Upload the Key, CRT, and PEM files.
+1. Enter a passphrase and upload a PEM file if required.
+1. It's recommended to select **Enable certificate validation** to validate the connections between the management console and connected sensors.
+
+1. Select **Finish**.
+
+You might need to refresh your screen after uploading the CA-signed certificate.
+
+For information about uploading a new certificate, supported certificate parameters, and working with CLI certificate commands, see [Manage individual sensors](how-to-manage-individual-sensors.md).
-2. The parameters defined during installation are displayed. The option to define the proxy is also available. Update any settings as required and select **Save**.
### Activate an expired license (versions under 10.0)
-For users with versions prior to 10.0, your license may expire, and the following alert will be displayed.
+For users with versions prior to 10.0, your license may expire, and the following alert will be displayed.
+   :::image type="content" source="media/how-to-activate-and-set-up-your-on-premises-management-console/activation-popup.png" alt-text="When your license expires you'll need to update your license through the activation file.":::
**To activate your license:**
For users with versions prior to 10.0, your license may expire, and the followin
1. Select **Activate**.
-### Subsequent sign-ins
+### Subsequent sign ins
-After first-time activation, the Microsoft Defender for IoT sensor console opens after sign-in without requiring an activation file. You need only your sign-in credentials.
+After first-time activation, the Microsoft Defender for IoT sensor console opens after sign-in without requiring an activation file or certificate definition. You only need your sign-in credentials.
-After your sign in, the Microsoft Defender for IoT console opens.
+After your sign in, the Microsoft Defender for IoT sensor console opens.
+
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/initial-dashboard.png" alt-text="Screenshot that shows the Defender for IoT initial dashboard." lightbox="media/how-to-activate-and-set-up-your-sensor/initial-dashboard.png":::
## Initial setup and learning (for administrators)
-After your first sign-in, the Microsoft Defender for IoT sensor starts to monitor your network automatically. Network devices will appear in the device map and device inventory sections. Microsoft Defender for IoT will begin to detect and alert you on all security and operational incidents that occur in your network. You can then create reports and queries based on the detected information.
+After your first sign in, the Microsoft Defender for IoT sensor starts to monitor your network automatically. Network devices will appear in the device map and device inventory sections. Microsoft Defender for IoT will begin to detect and alert you on all security and operational incidents that occur in your network. You can then create reports and queries based on the detected information.
-Initially this activity is carried out in the Learning Mode, which instructs your sensor to learn your network's usual activity. For example, the sensor learns devices discovered in your network, protocols detected in the network, and file transfers that occur between specific devices. This activity becomes your network's baseline activity.
+Initially this activity is carried out in the Learning mode, which instructs your sensor to learn your network's usual activity. For example, the sensor learns devices discovered in your network, protocols detected in the network, and file transfers that occur between specific devices. This activity becomes your network's baseline activity.
### Review and update basic system settings
Define the sensor's system settings. For example:
- Define integration with Active Directory and mail server as appropriate.
-### Disable learning mode
+### Disable Learning mode
-After adjusting the system settings, you can let the Microsoft Defender for IoT sensor run in learning mode until you feel that system detections accurately reflect your network activity.
+After adjusting the system settings, you can let the sensor run in Learning mode until you feel that system detections accurately reflect your network activity.
-The learning mode should run for about 2 to 6 weeks, depending on your network size and complexity. After you disable learning mode, any activity that differs from your baseline activity will trigger an alert.
+The learning mode should run for about 2 to 6 weeks, depending on your network size and complexity. After you disable Learning mode, any activity that differs from your baseline activity will trigger an alert.
**To disable learning mode:**
-- Select **System Settings** and turn off the **Learning** option.
+- Select **System Settings**, **Network Monitoring,** **Detection Engines and Network Modeling** and disable the **Learning** toggle.
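Conceptually, Learning mode builds a baseline of observed activity, and disabling it switches the sensor to alerting on deviations. The sketch below illustrates that idea only; it is not Defender for IoT code, and the device and protocol names are hypothetical:

```python
# Illustrative sketch of baseline learning vs. detection (not Defender for IoT code).
class TrafficBaseline:
    def __init__(self):
        self.learning = True        # Learning mode is on by default
        self.baseline = set()       # observed (device, protocol) activity

    def observe(self, event):
        """In Learning mode, events are added to the baseline.
        After Learning mode is disabled, unknown events raise an alert."""
        if self.learning:
            self.baseline.add(event)
            return None
        if event not in self.baseline:
            return f"Alert: unexpected activity {event!r}"
        return None

b = TrafficBaseline()
b.observe(("plc-01", "modbus"))         # learned during Learning mode
b.learning = False                      # Learning mode disabled
print(b.observe(("plc-01", "modbus")))  # matches baseline: None
print(b.observe(("hmi-02", "telnet")))  # deviation: alert string
```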
-## First-time sign-in for security analysts and read-only users
+## First-time sign in for security analysts and read-only users
Before you sign in, verify that you have:
- The sensor IP address.
-- Sign-in credentials that your administrator provided.
+- Sign in credentials that your administrator provided.
+
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/sensor-log-in-1.png" alt-text="Sensor login after initial setup":::
+ ## Console tools: Overview
-You access console tools from the side menu.
+You can access console tools from the side menu. Tools help you:
+- Gain deep, comprehensive visibility into your network
+- Analyze network risks, vulnerabilities, trends and statistics
+- Set up your sensor for maximum performance
+- Create and manage users
-**Navigation**
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/main-page-side-bar.png" alt-text="The main menu of the sensor console on the left side of the screen":::
-| Window | Icon | Description |
-| --|--|--|
-| Dashboard | :::image type="icon" source="media/concept-sensor-console-overview/dashboard-icon-azure.png" border="false"::: | View an intuitive snapshot of the state of the network's security. |
-| Device map | :::image type="icon" source="media/concept-sensor-console-overview/asset-map-icon-azure.png" border="false"::: | View the network devices, device connections, and device properties in a map. Various zooms, highlight, and filter options are available to display your network. |
-| Device inventory | :::image type="icon" source="media/concept-sensor-console-overview/asset-inventory-icon-azure.png" border="false"::: | The device inventory displays a list of device attributes that this sensor detects. Options are available to: <br /> - Sort, or filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details.|
-| Alerts | :::image type="icon" source="media/concept-sensor-console-overview/alerts-icon-azure.png" border="false"::: | Display alerts when policy violations occur, deviations from the baseline behavior occur, or any type of suspicious activity in the network is detected. |
-| Reports | :::image type="icon" source="media/concept-sensor-console-overview/reports-icon-azure.png" border="false"::: | View reports that are based on data-mining queries. |
+### Discover
-**Analysis**
+| Tools| Description |
+| --|--|
+| Overview | View a dashboard with high-level information about your sensor deployment, alerts, traffic, and more. <!-- For more information, see TBD --> |
+| Device map | View the network devices, device connections, Purdue levels, and device properties in a map. Various zoom, highlight, and filter options are available to help you gain the insight you need. For more information, see [Investigate sensor detections in the Device Map](how-to-work-with-the-sensor-device-map.md#investigate-sensor-detections-in-the-device-map). |
+| Device inventory | The Device inventory displays a list of device attributes that this sensor detects. Options are available to: <br /> - Sort, or filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details. For more information, see [Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md#investigate-sensor-detections-in-an-inventory).|
+| Alerts | Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that require your attention. For more information, see [View alerts on your sensor](how-to-view-alerts.md#view-alerts-on-your-sensor).|
-| Window| Icon | Description |
-||||
-| Event timeline | :::image type="icon" source="media/concept-sensor-console-overview/event-timeline-icon-azure.png" border="false"::: | View a timeline with information about alerts, network events (informational), and user operations, such as user sign-ins and user deletions.|
+### Analyze
-**Navigation**
+| Tools| Description |
+|||
+| Event timeline | View a timeline with information about alerts, network events, and user operations. For more information, see [Event timeline](how-to-track-sensor-activity.md#event-timeline).|
+| Data mining | Generate comprehensive and granular information about your network's devices at various layers. For more information, see [Sensor data mining queries](how-to-create-data-mining-queries.md#sensor-data-mining-queries).|
+| Trends and Statistics | View trends and statistics about an extensive range of network traffic and activity. As a small example, display charts and graphs showing top traffic by port, connectivity drops by hours, S7 traffic by control function, number of devices per VLAN, SRTP errors by day, or Modbus traffic by function. For more information, see [Sensor trends and statistics reports](how-to-create-trends-and-statistics-reports.md#sensor-trends-and-statistics-reports). |
+| Risk Assessment | Proactively address vulnerabilities, identify risks such as missing patches or unauthorized applications. Detect changes to device configurations, controller logic, and firmware. Prioritize fixes based on risk scoring and automated threat modeling. For more information, see [Risk assessment reporting](how-to-create-risk-assessment-reports.md#risk-assessment-reporting).|
+| Attack Vector | Display a graphical representation of a vulnerability chain of exploitable devices. These vulnerabilities can give an attacker access to key network devices. The Attack Vector Simulator calculates attack vectors in real time and analyzes all attack vectors for a specific target. For more information, see [Attack vector reporting](how-to-create-attack-vector-reports.md#attack-vector-reporting).|
-| Window | Icon | Description |
-||||
-| Data mining | :::image type="icon" source="media/concept-sensor-console-overview/data-mining-icon-azure.png" border="false"::: | Generate comprehensive and granular information about your network's devices at various layers. |
-| Investigation | :::image type="icon" source="media/concept-sensor-console-overview/trends-and-statistics-icon-azure.jpg" border="false"::: | View trends and statistics in an extensive range of widgets. |
-| Risk Assessment | :::image type="icon" source="media/concept-sensor-console-overview/vulnerabilities-icon-azure.png" border="false"::: | Display the **Vulnerabilities** window. |
+### Manage
-**Admin**
+| Tools| Description |
+|||
+| System settings | Configure the system settings. For example, define DHCP settings, provide mail server details, or create port aliases. |
+| Custom alert rules | Use custom alert rules to more specifically pinpoint activity or traffic of interest to you. For more information, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules). |
+| Users | Define users and roles with various access levels. For more information, see [About Defender for IoT console users](how-to-create-and-manage-users.md#about-defender-for-iot-console-users). |
+| Forwarding | Forward alert information to partners that integrate with Defender for IoT, for example, Microsoft Sentinel, Splunk, ServiceNow. You can also send to email addresses, webhook servers, and more. <br /> See [Forward alert information](how-to-forward-alert-information-to-partners.md) for details. |
-| Window | Icon | Description |
-||||
-| Users | :::image type="icon" source="media/concept-sensor-console-overview/users-icon-azure.png" border="false"::: | Define users and roles with various access levels. |
-| Forwarding | :::image type="icon" source="medi) for details. |
-| System settings | :::image type="icon" source="media/concept-sensor-console-overview/system-settings-icon-azure.png" border="false"::: | Configure the system settings. For example, define DHCP settings, provide mail server details, or create port aliases. |
-| Import settings | :::image type="icon" source="medi) for details. |
**Support**
-| Window| Icon | Description |
-|-|||
-| Support | :::image type="icon" source="media/concept-sensor-console-overview/support-icon-azure.png" border="false"::: | Contact [Microsoft Support](https://support.microsoft.com/) for help. |
+| Tool| Description |
+|-||
+| Support | Contact [Microsoft Support](https://support.microsoft.com/) for help.|
+
+## Review system messages
+
+ System messages provide general information about your sensor that may require your attention, for example if:
+ - your sensor activation file is expired or will expire soon
+ - your sensor isn't detecting traffic
+ - your sensor SSL certificate is expired or will expire soon
+
+ :::image type="content" source="media/how-to-activate-and-set-up-your-sensor/system-messages.png" alt-text="System messages screen on main sensor console page, viewed by selecting the bell icon":::
+
+**To review system messages:**
+1. Sign in to the sensor.
+1. Select the **System Messages** icon (Bell icon).
+ ## See also
-[Threat intelligence research and packages #](how-to-work-with-threat-intelligence-packages.md)
+[Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
[Onboard a sensor](getting-started.md#onboard-a-sensor)
defender-for-iot How To Analyze Programming Details Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-analyze-programming-details-changes.md
+
+ Title: Analyze programming details and changes
+description: Enhance forensics by displaying programming events carried out on your network devices and analyzing code changes. This information helps you discover suspicious programming activity.
Last updated : 01/30/2022
+# Analyze programming details and changes
+
+Enhance forensics by displaying programming events carried out on your network devices and analyzing code changes. This information helps you discover suspicious programming activity, for example:
+
+ - Human error: An engineer is programming the wrong device.
+
+ - Corrupted programming automation: Programming is erroneously carried out because of automation failure.
+
+ - Hacked systems: Unauthorized users logged into a programming device.
+
+You can display a programmed device and scroll through various programming changes carried out on it by other devices.
+
+View code that was added, changed, removed, or reloaded by the programming device. Search for programming changes based on file types, dates, or times of interest.
+
+## When to review programming activity
+
+You may need to review programming activity:
+
+ - After viewing an alert regarding unauthorized programming
+
+ - After a planned update to controllers
+
+ - When a process or machine is not working correctly (to see who carried out the last update and when)
+
+ :::image type="content" source="media/how-to-work-with-maps/differences.png" alt-text="Programming Change Log":::
+
+Other options let you:
+
+ - Mark events of interest with a star.
+
+ - Download a *.txt file with the current code.
+
+## About authorized versus unauthorized programming events
+
+Unauthorized programming events are carried out by devices that have not been learned or manually defined as programming devices. Authorized programming events are carried out by devices that were resolved or manually defined as programming devices.
+
+The Programming Analysis window displays both authorized and unauthorized programming events.
+
+## Accessing programming details and changes
+
+Access the Programming Analysis window from the:
+
+- [Event Timeline](#event-timeline)
+
+- [Unauthorized Programming Alerts](#unauthorized-programming-alerts)
+
+### Event timeline
+
+Use the event timeline to display a timeline of events in which programming changes were detected.
++
+### Unauthorized programming alerts
+
+Alerts are triggered when unauthorized programming devices carry out programming activities.
++
+> [!NOTE]
+> You can also view basic programming information in the Device Properties window and Device Inventory.
+
+## Working in the programming timeline window
+
+This section describes how to view programming files and compare versions. Search for specific files sent to a programmed device. Search for files based on:
+
+ - Date
+
+ - File type
+
+ :::image type="content" source="media/how-to-work-with-maps/timeline-view.png" alt-text="programming timeline window":::
+
+|Programming timeline type | Description |
+|--|--|
+| Programmed Device | Provides details about the device that was programmed, including the hostname and file. |
+| Recent Events | Displays the 50 most recent events detected by the sensor. <br />To highlight an event, hover over it and select the star. :::image type="icon" source="media/how-to-work-with-maps/star.png" border="false"::: |
+| Files | Displays the files detected for the chosen date and the file size on the programmed device. <br /> By default, the maximum number of files available for display per device is 300. <br /> By default, the maximum file size for each file is 15 MB. |
+| File status :::image type="icon" source="media/how-to-work-with-maps/status-v2.png" border="false"::: | File labels indicate the status of the file on the device, including: <br /> **Added**: the file was added to the endpoint on the date or time selected. <br /> **Updated**: The file was updated on the date or time selected. <br /> **Deleted**: This file was removed. <br /> **No label**: The file was not changed. |
+| Programming Device | The device that made the programming change. Multiple devices may have carried out programming changes on one programmed device. The hostname, the date and time of the change, and the signed-in user are displayed. |
+| :::image type="icon" source="media/how-to-work-with-maps/current.png" border="false"::: | Displays the current file installed on the programmed device. |
+| :::image type="icon" source="media/how-to-work-with-maps/download-text.png" border="false"::: | Download a text file of the code displayed. |
+| :::image type="icon" source="media/how-to-work-with-maps/compare.png" border="false"::: | Compare the current file with the file detected on a selected date. |
+
+### Choose a file to review
+
+This section describes how to choose a file to review.
+
+**To choose a file to review:**
+
+1. Select an event from the **Recent Events** pane.
+
+2. Select a file from the File pane. The file appears in the Current pane.
+
+ :::image type="content" source="media/how-to-work-with-maps/choose-file.png" alt-text="Select the file to work with.":::
+
+### Compare files
+
+This section describes how to compare programming files.
+
+**To compare:**
+
+1. Select an event from the Recent Events pane.
+
+2. Select a file from the File pane. The file appears in the Current pane. You can compare this file to other files.
+
+3. Select the compare indicator.
+
+ :::image type="content" source="media/how-to-work-with-maps/compare.png" alt-text="Compare indicator":::
+
+ The window displays all dates the selected file was detected on the programmed device. The file may have been updated on the programmed device by multiple programming devices.
+
+ The number of differences detected appears in the upper right-hand corner of the window. You may need to scroll down to view differences.
+
+ :::image type="content" source="media/how-to-work-with-maps/scroll.png" alt-text="scroll down to your selection":::
+
+ The number is calculated from adjacent lines of changed text. For example, if eight consecutive lines of code were changed (deleted, updated, or added), they're counted as one difference.
+
+ :::image type="content" source="media/how-to-work-with-maps/program-timeline.png" alt-text="Your programming timeline view." lightbox="media/how-to-work-with-maps/program-timeline.png":::
+
+4. Select a date. The file detected on the selected date appears in the window.
+
+5. The file selected from the Recent Events/Files pane always appears on the right.
+
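The difference counting described above — adjacent changed lines grouped into a single difference — can be approximated with a short sketch. This is an illustrative reimplementation, not the sensor's actual logic, and the sample code lines are hypothetical:

```python
# Illustrative sketch: count "differences" as contiguous runs of changed lines,
# the way the Programming Timeline comparison described above groups them.
import difflib

def count_differences(old_lines, new_lines):
    """Count hunks of adjacent changed lines between two file versions."""
    matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
    # Every non-"equal" opcode is one contiguous run of changed lines.
    return sum(1 for op, *_ in matcher.get_opcodes() if op != "equal")

old = ["MOV A, 1", "ADD B", "JMP L1", "NOP"]
new = ["MOV A, 2", "SUB B", "JMP L1", "NOP", "RET"]
print(count_differences(old, new))  # two runs of change -> 2
```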
+## Device programming information: Other locations
+
+In addition to reviewing details in the Programming Timeline, you can access programming information in the Device Properties window and the Device Inventory.
+
+| Device type | Description |
+|--|--|
+| Device properties | The device properties window provides information on the last programming event detected on the device. |
+| The device inventory | The device inventory indicates if the device is a programming device. <br> :::image type="content" source="media/how-to-work-with-maps/inventory-v2.png" alt-text="The inventory of devices"::: |
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
Title: Create and manage users
-description: Create and manage users of sensors and the on-premises management console. Users can be assigned the role of administrator, security analyst, or read-only user.
Previously updated : 11/09/2021
+description: Create and manage users of sensors and the on-premises management console. Users can be assigned the role of Administrator, Security Analyst, or Read-only user.
Last updated : 01/26/2022

# About Defender for IoT console users
-This article describes how to create and manage users of sensors and the on-premises management console. User roles include administrator, security analyst, or read-only user. Each role is associated with a range of permissions to tools for the sensor or on-premises management console. Roles are designed to facilitate granular, secure access to Microsoft Defender for IoT.
+This article describes how to create and manage users of sensors and the on-premises management console. User roles include Administrator, Security Analyst, or Read-only users. Each role is associated with a range of permissions to tools for the sensor or on-premises management console. Roles are designed to facilitate granular, secure access to Microsoft Defender for IoT.
-Features are also available to track user activity and enable Active Directory sign-in.
+Features are also available to track user activity and enable Active Directory sign in.
-By default, each sensor and on-premises management console is installed with a *cyberx and support* user. These users have access to advanced tools for troubleshooting and setup. Administrator users should sign in with these user credentials, create an admin user, and then create extra users for security analysts and read-only users.
+By default, each sensor and on-premises management console is installed with the *cyberx* and *support* users. These users have access to advanced tools for troubleshooting and setup. Administrator users should sign in with these user credentials, create an admin user, and then create extra users for Security Analysts and Read-only users.
## Role-based permissions

The following user roles are available:

- **Read only**: Read-only users perform tasks such as viewing alerts and devices on the device map. These users have access to options displayed under **Navigation**.
-- **Security analyst**: Security analysts have read-only user permissions. They can also perform actions on devices, acknowledge alerts, and use investigation tools. These users have access to options displayed under **Navigation** and **Analysis**.
+- **Security analyst**: Security Analysts have Read-only user permissions. They can also perform actions on devices, acknowledge alerts, and use investigation tools. These users have access to options displayed under **Navigation** and **Analysis**.
-- **Administrator**: Administrators have access to all tools, including defining system configurations, creating and managing users, and more. These users have access to options displayed under **Navigation**, **Analysis**, and **Administration**.
+- **Administrator**: Administrators have access to all tools, including system configurations, creating and managing users, and more. These users have access to options displayed under **Discover**, **Analyze**, and **Manage** sections of the console main screen.
### Role-based permissions to on-premises management console tools
-This section describes permissions available to administrators, security analysts, and read-only users for the on-premises management console.
+This section describes permissions available to Administrators, Security Analysts, and Read-only users for the on-premises management console.
-| Permission | Read only | Security analyst | Administrator |
+| Permission | Read-only | Security Analyst | Administrator |
|--|--|--|--|
| View and filter the enterprise map | ✓ | ✓ | ✓ |
| Build a site | | | ✓ |
This section describes permissions available to administrators, security analyst
| Manage users | | | ✓ |
| Send alert data to partners | | | ✓ |
| Manage certificates | | | ✓ |
-| Session timeout when users are not active | 30 minutes | 30 minutes | 30 minutes |
+| Session timeout when users aren't active | 30 minutes | 30 minutes | 30 minutes |
#### Assign users to access groups
-Administrators can enhance user access control in Defender for IoT by assigning users to specific *access groups*. Access groups are assigned to zones, sites, regions, and business units where a sensor is located. By assigning users to access groups, administrators gain specific control over where users manage and analyze device detections.
+Administrators can enhance user access control in Defender for IoT by assigning users to specific *access groups*. Access groups are assigned to zones, sites, regions, and business units where a sensor is located. By assigning users to access groups, administrators gain specific control over where users manage and analyze device detections.
Working this way accommodates large organizations where user permissions can be complex or determined by a global organizational security policy. For more information, see [Define global access control](how-to-define-global-user-access-control.md). ### Role-based permissions to sensor tools
-This section describes permissions available to sensor administrators, security analysts, and read-only users.
+This section describes permissions available to sensor Administrators, Security Analysts, and Read-only users.
-| Permission | Read only | Security analyst | Administrator |
+| Permission | Read-only | Security Analyst | Administrator |
|--|--|--|--|
| View the dashboard | ✓ | ✓ | ✓ |
| Control map zoom views | | | ✓ |
This section describes how to define users. Cyberx, support, and administrator u
**To define a user**:

1. From the left pane for the sensor or the on-premises management console, select **Users**.
+
+ :::image type="content" source="media/how-to-create-and-manage-users/users-pane.png" alt-text="Users pane for creating users":::
1. In the **Users** window, select **Create User**.
-1. On the **Create User** pane, define the following parameters:
+
+1. In the **Create User** pane, define the following parameters:
- **Username**: Enter a username.
- **Email**: Enter the user's email address.
- **First Name**: Enter the user's first name.
- **Last Name**: Enter the user's last name.
- - **Role**: Define the user's role. See [Role-based permissions](#role-based-permissions).
- - **Access Group**: If you're creating a user for the on-premises management console, define the user's access group. See [Define global access control](how-to-define-global-user-access-control.md).
+ - **Role**: Define the user's role. For more information, see [Role-based permissions](#role-based-permissions).
+ - **Access Group**: If you're creating a user for the on-premises management console, define the user's access group. For more information, see [Define global access control](how-to-define-global-user-access-control.md).
- **Password**: Select the user type as follows:
- - **Local User**: Define a password for the user of a sensor or an on-premises management console. The password must include at least six characters and must include letters and numbers.
- - **Active Directory User**: You can allow users to sign in to the sensor or management console by using Active Directory credentials. Defined Active Directory groups can be associated with specific permission levels. For example, configure a specific Active Directory group and assign all users in the group to the read-only user type.
+ - **Local User**: Define a password for the user of a sensor or an on-premises management console. The password must have at least eight characters and contain lowercase and uppercase alphabetic characters, numbers, and symbols.
+ - **Active Directory User**: You can allow users to sign in to the sensor or management console by using Azure Active Directory credentials. Defined Azure Active Directory groups can be associated with specific permission levels. For example, configure a specific Azure Active Directory group and assign all users in the group to the Read-only user type.
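The local-user password rule above (at least eight characters, with lowercase, uppercase, numbers, and symbols) can be expressed as a small validation sketch. This is illustrative only; the console enforces its own checks, and the function name is hypothetical:

```python
# Illustrative check of the local-user password policy described above.
import string

def meets_sensor_password_policy(pw: str) -> bool:
    """At least 8 characters, with lowercase, uppercase, digits, and symbols."""
    return (
        len(pw) >= 8
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in string.punctuation for c in pw)
    )

print(meets_sensor_password_policy("Secur3!pass"))  # True
print(meets_sensor_password_policy("password"))     # False: no upper/digit/symbol
```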
## User session timeout
-If users are not active at the keyboard or mouse for a specific time, they're signed out of their session and must sign in again.
+If users aren't active at the keyboard or mouse for a specific time, they're signed out of their session and must sign in again.
-When users have not worked with their console mouse or keyboard for 30 minutes, a session sign-out is forced.
+When users haven't worked with their console mouse or keyboard for 30 minutes, a session sign out is forced.
This feature is enabled by default and on upgrade but can be disabled. In addition, session counting times can be updated. Session times are defined in seconds. Definitions are applied per sensor and on-premises management console.
You can track user activity in the event timeline on each sensor. The timeline d
**To view user activity**:
-1. Sign in to the sensor.
+1. Select **Event Timeline** from the sensor side menu.
+
+1. Verify that the **User Operations** filter is set to **Show**.
-1. In the event timeline, enable the **User Operations** option.
+ :::image type="content" source="media/how-to-create-and-manage-users/track-user-activity.png" alt-text="Event timeline showing user that signed in to Defender for IoT":::
- :::image type="content" source="media/how-to-create-azure-for-defender-users-and-roles/User-login-attempts.png" alt-text="View a user's activity.":::
+1. Use the filters or press Ctrl+F to find the information of interest to you.
## Integrate with Active Directory servers
Two types of LDAP-based authentication are supported:
- **Trusted user**: Only the user password is retrieved. Other user details that are retrieved are based on users defined in the sensor.
-### Active Directory and Defender for IoT permissions
-
-You can associate Active Directory groups defined here with specific permission levels. For example, configure a specific Active Directory group and assign Read Only permissions to all users in the group.
-
-**To configure Active Directory**:
+### Azure Active Directory and Defender for IoT permissions
-1. From the left pane, select **System Settings**.
-
- :::image type="content" source="media/how-to-setup-active-directory/ad-system-settings-v2.png" alt-text="View your Active Directory system settings.":::
+You can associate Azure Active Directory groups defined here with specific permission levels. For example, configure a specific Azure Active Directory group and assign Read-only permissions to all users in the group.
-1. On the **System Settings** pane, select **Active Directory**.
+### Azure Active Directory configuration guidelines
- :::image type="content" source="media/how-to-setup-active-directory/ad-configurations-v2.png" alt-text="Edit your Active Directory configurations.":::
+- You must define the LDAP parameters here exactly as they appear in Azure Active Directory.
+- For all the Azure Active Directory parameters, use lowercase only. Use lowercase even when the configurations in Azure Active Directory use uppercase.
+- You can't configure both LDAP and LDAPS for the same domain. You can, however, use both for different domains at the same time.
-1. In the **Edit Active Directory Configuration** dialog box, select **Active Directory Integration Enabled** > **Save**. The **Edit Active Directory Configuration** dialog box expands, and you can now enter the parameters to configure Active Directory.
+**To configure Azure Active Directory**:
- :::image type="content" source="media/how-to-setup-active-directory/ad-integration-enabled-v2.png" alt-text="Enter the parameters to configure Active Directory.":::
+1. From the left pane, select **System Settings**.
+1. Select **Integrations** and then select **Active Directory**.
-> [!NOTE]
-> - You must define the LDAP parameters here exactly as they appear in Active Directory.
-> - For all the Active Directory parameters, use lowercase only. Use lowercase even when the configurations in Active Directory use uppercase.
-> - You can't configure both LDAP and LDAPS for the same domain. You can, however, use both for different domains at the same time.
+1. Enable the **Active Directory Integration Enabled** toggle.
1. Set the Active Directory server parameters, as follows:
You can associate Active Directory groups defined here with specific permission
| Domain controller FQDN | Set the fully qualified domain name (FQDN) exactly as it appears on your LDAP server. For example, enter `host1.subdomain.domain.com`. |
| Domain controller port | Define the port on which your LDAP is configured. |
| Primary domain | Set the domain name (for example, `subdomain.domain.com`) and the connection type according to your LDAP configuration. |
- | Active Directory groups | Enter the group names that are defined in your Active Directory configuration on the LDAP server. |
+ | Azure Active Directory groups | Enter the group names that are defined in your Azure Active Directory configuration on the LDAP server. You can enter a group name that you'll associate with the Admin, Security Analyst, and Read-only permission levels. Use these groups when creating new sensor users.|
| Trusted domains | To add a trusted domain, add the domain name and the connection type of a trusted domain. <br />You can configure trusted domains only for users who were defined under users. |
-#### ActiveDirectory Groups for the On-premises management console
+#### Azure Active Directory groups for the On-premises management console
-If you are creating Active Directory groups for on-premises management console users, you must create an Access Group rule for each Active Directory group. On-premises management console Active Directory credentials will not work if an Access Group rule does not exists for the Active Directory user group. See [Define global access control](how-to-define-global-user-access-control.md).
+If you're creating Azure Active Directory groups for on-premises management console users, you must create an Access Group rule for each Azure Active Directory group. On-premises management console Azure Active Directory credentials won't work if an Access Group rule doesn't exist for the Azure Active Directory user group. For more information, see [Define global access control](how-to-define-global-user-access-control.md).
1. Select **Save**.
If you are creating Active Directory groups for on-premises management console u
## Change a user's password
-The Administrator can change the password for the Security Analyst, and Read Only role. The Administrator role user can't change their own password and must contact a higher-level role. The Security Analyst, and Read Only roles canΓÇÿt reset their, or any other role's passwords. The Security Analyst, and Read Only roles need to contact a user with a higher role level to have their passwords reset. The CyberX role can change the password for all user roles. The Support role can change the password for a Support, Administrator, Security Analyst, and Read Only user's role.
+User passwords can be changed for users created with a local password.
+
+**Administrator users**
+
+The Administrator can change the password for the Security Analyst and Read-only roles. Administrators can't change their own password and must contact a user with a higher-level role.
+
+**Security Analyst and Read-only users**
+
+The Security Analyst and Read-only roles can't reset their own or any other role's passwords. They need to contact a user with a higher role level to have their passwords reset.
+
+**CyberX and Support users**
+
+The CyberX role can change the password for all user roles. The Support role can change the password for the Support, Administrator, Security Analyst, and Read-only roles.
**To reset a user's password on the sensor**:
-1. Log in to the sensor using a user with the role Administrator, Support, or CyberX.
+1. Sign in to the sensor using a user with the role Administrator, Support, or CyberX.
1. Select **Users** from the left-hand panel.
- :::image type="content" source="media/password-recovery-images/sensor-page.png" alt-text="Select the user option from the left side pane.":::
-
-1. Locate the user and select **Edit** from the **Actions** dropdown menu.
+1. Locate the local user whose password needs to be changed.
+
+1. On this row, select the three dots (...) and then select **Edit**.
- :::image type="content" source="media/password-recovery-images/edit.png" alt-text="select edit from the actions dropdown menu.":::
+ :::image type="content" source="media/how-to-create-and-manage-users/change-password.png" alt-text="Change password dialog for local sensor users":::
-1. Enter the new password in the **New Password**, and **Confirm New Password** fields.
+1. Enter and confirm the new password in the **Change Password** section.
> [!NOTE] > Passwords must be at least 16 characters, contain lowercase and uppercase alphabetic characters, numbers and one of the following symbols: #%*+,-./:=?@[]^_{}~
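The password rules in the note above can be checked programmatically before submitting a new password. Here's a minimal sketch of the documented rules — the helper function is illustrative, not part of the product:

```python
# Symbols allowed per the password note above.
SYMBOLS = "#%*+,-./:=?@[]^_{}~"

def meets_sensor_password_rules(password: str) -> bool:
    """Check a candidate password against the documented rules:
    at least 16 characters, with lowercase and uppercase letters,
    a digit, and at least one of the allowed symbols."""
    return (
        len(password) >= 16
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in SYMBOLS for c in password)
    )

print(meets_sensor_password_rules("Example#Password42x"))  # True: 19 chars, all classes present
print(meets_sensor_password_rules("short#Pw1"))            # False: under 16 characters
```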
The Administrator can change the password for the Security Analyst, and Read Onl
**To reset a user's password on the on-premises management console**:
-1. Log in to the on-premises management console using a user with the role Administrator, Support, or CyberX.
+1. Sign in to the on-premises management console using a user with the role Administrator, Support, or CyberX.
1. Select **Users** from the left-hand panel.
- :::image type="content" source="media/password-recovery-images/console-page.png" alt-text="On the left panel select the user's option.":::
-
-1. Locate your user and select the edit icon :::image type="icon" source="media/password-recovery-images/edit-icon.png" border="false":::.
+1. Locate your user and select the edit icon :::image type="icon" source="media/password-recovery-images/edit-icon.png" border="false"::: .
1. Enter the new password in the **New Password**, and **Confirm New Password** fields.
You can recover the password for the on-premises management console, or the sens
**To recover the password for the on-premises management console, or the sensor**:
-1. On the sign in screen of either the on-premises management console, or the sensor select **Password recovery**. The **Password recovery** screen opens.
+1. On the sign in screen of either the on-premises management console, or the sensor, select **Password recovery**. The **Password recovery** screen opens.
- :::image type="content" source="media/how-to-create-and-manage-users/password-recovery.png" alt-text="Select Password recovery from the sign in screen of either the on-premises management console, or the sensor.":::
+ :::image type="content" source="media/how-to-create-and-manage-users/password-recovery.png" alt-text="Select Password recovery from the sign in screen of either the on-premises management console, or the sensor":::
1. Select either **CyberX**, or **Support** from the drop-down menu, and copy the unique identifier code.
You can recover the password for the on-premises management console, or the sens
1. Enter the unique identifier that you received on the **Password recovery** screen and select **Recover**. The `password_recovery.zip` file is downloaded.
- :::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Enter the unique identifier and then select recover.":::
+ :::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Enter the unique identifier and then select recover." lightbox="media/how-to-create-and-manage-users/enter-identifier.png":::
> [!NOTE] > Don't alter the password recovery file. It's a signed file, and will not work if tampered with.
defender-for-iot How To Deploy Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-deploy-certificates.md
Title: Deploy certificates description: Learn how to set up and deploy certificates for Defender for IoT. Previously updated : 11/09/2021 Last updated : 02/06/2022
Defender for IoT uses SSL/TLS certificates to secure communication between the f
Defender for IoT Admin users can upload a certificate to sensor consoles and their on-premises management console from the SSL/TLS Certificates dialog box. ## About certificate generation methods
When using a key length of 4096 bits, the SSL handshake at the start of each con
### Certificate chain file requirements (if .pem is used)
-A .pem file containing the certificates of all the certificate authorities in the chain of trust that led to your certificate.ΓÇ»
+A .pem file containing the certificates of all the certificate authorities in the chain of trust that led to your certificate.
Bag attributes are supported in the certificate chain file.
If the conversion fails:
- Use the conversion commands described in [Convert existing files to supported files](#convert-existing-files-to-supported-files). - Make sure the file parameters are accurate. See, [File type requirements](#file-type-requirements) and [Certificate File Parameter Requirements](#certificate-file-parameter-requirements) for details. -- Consult your certificate lead.
+- Consult your certificate lead.
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
The device inventory displays an extensive range of device attributes that your sensor detects. Use the inventory to gain insight and full visibility into the devices on your network. Options are available to:
This section describes device details available from the inventory and describes
| Groups | The groups that this device participates in. | | Last Activity | The last activity that the device performed. | | Discovered | When this device was first seen in the network. |
-| PLC mode (preview) | The PLC operating mode includes the Key state (physical) and run state (logical). Possible **Key** states include, Run, Program, Remote, Stop, Invalid, Programming Disabled.Possible Run. The possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, Offline. if both states are the same, only oe state is presented. |
+| PLC mode (preview) | The PLC operating mode includes the Key state (physical) and the Run state (logical). Possible **Key** states are Run, Program, Remote, Stop, Invalid, and Programming Disabled. Possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, and Offline. If both states are the same, only one state is presented. |
**To hide and display columns:**
The filters that you save are also saved as Device map groups. This feature prov
1. Select the double arrow >> on the left side of the page. The filters you create appear in the **Saved Views** pane.
- :::image type="content" source="media/how-to-inventory-sensor/save-views.png" alt-text="Screen capture shows the saved Device inventory filter":::
+ :::image type="content" source="media/how-to-inventory-sensor/save-views.png" alt-text="Screenshot that shows the saved Device inventory filter.":::
### View filtered information as a map group
Don't update file names.
**To import:**
-1. Select **System Settings** > **Import Settings**> **Windows Information**.
+1. Select **System Settings** > **Import Settings** > **Windows Information**.
2. Select **Import File**, and then select all the files (Ctrl+A).
You can filter the inventory to display devices that are inactive:
1. Select **Last Activity** in the column field. 1. Choose the time period in the **Filter** field.
- :::image type="content" source="media/how-to-inventory-sensor/save-filter.png" alt-text="Screen capture shows last activity filter in Inventory":::
+ :::image type="content" source="media/how-to-inventory-sensor/save-filter.png" alt-text="Screenshot that shows the last activity filter in Inventory.":::
### Delete inactive devices
You'll be prompted to record a reason for deleting devices. This information, as
1. Select a filter option. 1. Select **Apply**. 1. Select **Delete Inactive Devices**.
-1. In the confirmation dialog box that opens, enter the reason for the deletion and select **Delete**. All devices detected within the range of the filter will be deleted. If you delete a large number of devices, the delete process may take a few minutes.
+1. In the confirmation dialog box that opens, enter the reason for the deletion and select **Delete**. All devices detected within the range of the filter will be deleted. If you delete a large number of devices, the delete process may take a few minutes.
## Export device inventory information
-You can export device inventory information to .csv file.
+You can export device inventory information to a .csv file.
**To export:**
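Once exported, the .csv file can be summarized offline. Here's a minimal Python sketch — the file path and the `Type` column name are hypothetical placeholders; substitute the column names that appear in your own export:

```python
import csv
from collections import Counter

def count_by_column(path: str, column: str) -> Counter:
    """Tally rows of an exported inventory .csv by the values in one column."""
    with open(path, newline="", encoding="utf-8") as f:
        return Counter(row[column] for row in csv.DictReader(f))

# Hypothetical usage -- adjust the path and column to your export:
# counts = count_by_column("device-inventory.csv", "Type")
```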
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
+
+ Title: View and manage alerts in the Defender for IoT portal on Azure
+description: View and manage alerts detected by cloud-connected network sensors in the Defender for IoT portal on Azure.
Last updated : 02/02/2022
+# View and manage alerts on the Defender for IoT portal (Preview)
+
+This article describes Defender for IoT alert capabilities for alerts displayed on the Defender for IoT portal on Azure.
+
+## About alerts
+
+Defender for IoT alerts let you enhance the security and operation of your network by giving you real-time information about:
+
+- Deviations from authorized network activity and device configurations
+- Protocol and operational anomalies
+- Suspected malware traffic
++
+Alerts triggered by Defender for IoT are displayed on the Alerts page in the Azure portal. Use the Alerts page to:
+
+- Learn when an alert was detected.
+- Investigate the alert by reviewing an extensive range of alert information. This may include source and destination details, PCAP information, vendor, firmware, and OS details, and MITRE ATT&CK information.
+- Manage the alert by taking remediation steps on the device or network process, or changing the device status or severity.
+- Integrate alert details with other Microsoft services. For example, with Microsoft Sentinel playbooks and workbooks. See [About the Defender for IoT and Microsoft Sentinel Integration](concept-sentinel-integration.md#about-the-defender-for-iot-and-microsoft-sentinel-integration).
+
+### How is the Alerts page populated?
+
+The Alerts page is populated with alert information detected by sensors that are set up for cloud connection to the Defender for IoT portal on Azure.
+
+Alert details triggered by these sensors and aggregated in the Alerts page:
+
+- Provide comprehensive insight into threats, anomalies, deviations, and misconfigurations across your entire network.
+
+- Help SOC teams better understand how sensors are handling activity across the network.
+
+## Alert types and messages
+
+Reviewing alert types and messages ahead of time helps you plan remediation and integration with playbooks. For a list of the alert messages you may receive, see
+[Alert types and descriptions](alert-engine-messages.md#alert-types-and-descriptions).
++
+## View alerts
+
+This section describes the information available in the Alerts table.
+
+**To view default alert information:**
+
+1. Navigate to the Defender for IoT portal on Azure.
+
+1. Select **Alerts (Preview)**. The following alert information is available by default.
+
+ | Parameter | Description
+ |--|--|
+ | **Severity**| A predefined alert severity assigned by the sensor. The severity can be updated. See [Manage alert status and severity](#manage-alert-status-and-severity) for details.
+ | **Name** | The alert title.
+ | **Site** | The site associated with the sensor. This site name is defined when you register a sensor with Microsoft Defender for IoT on the Azure portal. The name can be viewed in the Sites and Sensors page on the portal. See [View onboarded sensors](how-to-manage-sensors-on-the-cloud.md#view-onboarded-sensors) for information on registered sensors.
+ | **Engine** | The sensor engine that detected the Operational Technology (OT) traffic. To learn more about engines, see [Detection engines](how-to-control-what-traffic-is-monitored.md#detection-engines). For device builders, the term micro-agent will be displayed.
+ | **Detection time** | The first time the alert was detected. The alert traffic may occur several times after the first detection. If the alert Status is **New**, the detection time won't change. If the alert is Closed and the traffic is seen again, a new detection time will be displayed.
+ | **Status** | The alert status: New, Active, Closed
+ | **Source device** | The IP address, MAC, or device name.
+ | **Tactics** | The MITRE ATT&CK stage.
+
+**To view additional information:**
+
+1. Select **Edit columns** from the Alerts page.
+1. In the Edit Columns dialog box, select **Add Column** and choose an item to add. The following items are available:
++
+ | Parameter | Description
+ |--|--|
+ | **Source device address** |The IP address of the source device. |
+ | **Destination device address** | The IP address of the destination device. |
+ | **Destination device** | The IP address, MAC, or destination device name.
+ | **ID** |The unique alert ID.
+ | **Protocol** | The protocol detected in the network traffic for this alert.
+ | **Sensor** | The sensor that detected the alert.
+ | **Zone** | The zone assigned to the sensor that detected the alert.
+ | **Category**| The category associated with the alert, for example scans, operational issues, custom alerts, illegal commands. Filtering the Alerts page by category helps you quickly find information important to you. For a list of categories available, see [Customize the view by category](#customize-the-view-by-category).
+ | **Type**| The internal name of the alert.
+
+### Customize the view
+
+Various Alerts page options help you easily find and view alerts and alert information important to you.
+
+**To filter the view:**
+
+1. Use the **Search**, **Time Range**, and **Filter** options at the top of the Alerts page.
+
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/filters-on-alerts-page.png" alt-text="Filters bar on alerts Cloud page":::
+
+**To group alerts:**
+
+1. Select **Group by** at the top right of the Alerts page.
+1. Group the view by the:
+ - alert severity
+ - alert name
+ - site associated with alert
+ - engine associated with the alert
+
+### Customize the view by category
+
+Use the category filter to quickly find information important to you. Using category filters also gives you information regarding the number of alerts for each category. For example, 50 operational alerts, 13 firmware changes or 23 command failures.
++
+The following categories are available:
+- Abnormal Communication Behavior
+- Abnormal HTTP Communication Behavior
+- Authentication
+- Backup
+- Bandwidth Anomalies
+- Buffer overflow
+- Command Failures
+- Configuration changes
+- Custom Alerts
+- Discovery
+- Firmware change
+- Illegal commands
+- Internet Access
+- Operation Failures
+- Operational issues
+- Programming
+- Remote access
+- Restart/Stop Commands
+- Scan
+- Sensor traffic
+- Suspicion of malicious activity
+- Suspicion of Malware
+- Unauthorized Communication Behavior
+- Unresponsive
+
+### Understand the alert count breakdown
+
+The number of alerts currently detected appears on the top-left section of the Alerts page. You may want more specific information about the alert count breakdown, for example the number of alerts associated with a certain alert severity, protocol or site.
+
+**To view an alert count breakdown:**
+
+1. Select **Group by** and select a group. The number of alerts is displayed for each group.
+
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/group-by-severity.png" alt-text="Alerts page group by filter with severity filter chosen":::
+
+1. Alternatively, use the **Add filter** option to choose a subject of interest and select **Column**. The column dropdown shows the number of alerts associated with each column value.
+
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/alert-count-breakdown.png" alt-text="Alert filters showing protocols with count for each protocol":::
+
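Conceptually, the **Group by** count is just a tally over alert records. Here's a sketch with made-up example records (not a real export format) showing the same breakdown in Python:

```python
from collections import Counter

# Made-up alert records for illustration only -- not a real Defender for IoT export.
alerts = [
    {"name": "Firmware change", "severity": "Major"},
    {"name": "Internet Access", "severity": "Minor"},
    {"name": "Suspicion of Malware", "severity": "Major"},
]

# Tally the alerts per severity, as the "Group by" view does.
by_severity = Counter(a["severity"] for a in alerts)
print(by_severity)  # Counter({'Major': 2, 'Minor': 1})
```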
+## View alert descriptions and other details
+
+View more information about the alert, such as:
+- the alert description
+- links to related MITRE ATT&CK information
+- details about protocols
+- traffic and entities associated with the alert
+- alert remediation steps
+
+**To view details:**
+
+1. Select an alert.
+1. The details pane opens with the alert description, source, and destination information and other details.
+
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/alert-detected.png" alt-text="Alert selected from Alerts cloud page":::
+
+1. To view more details and review remediation steps, select **View full details**. The Alert Details pane provides more information about source device and related entities. Related links in the MITRE Partnership website are also available.
+
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/alert-full-details.png" alt-text="Selected alert with full details":::
+
+If you're integrating with Microsoft Sentinel, the Alert details and entity information are sent to Microsoft Sentinel.
+
+### Alert remediation steps
+
+Defender for IoT provides remediation steps you can carry out for the alert. Remediation steps are designed to help SOC teams better understand OT issues and resolutions.
+
+**To view the alert remediation:**
+
+1. Select an alert from the Alerts page.
+1. Select **Take action** in the dialog box that opens.
+
+ :::image type="content" source="media/how-to-view-manage-cloud-alerts/take-action-cloud-alert.png" alt-text="Remediation action for sample cloud alert":::
+
+## Manage alert status and severity
+
+You can change the alert status and severity for a single alert or for a group of alerts.
+
+**To change the alert status:**
+
+1. Select an alert or group of alerts.
+1. Select **Change status** and select a status (New, Active, Closed).
+
+Changes to status aren't reflected in the on-premises management console or sensor.
+
+**To change the alert severity:**
+
+1. Select an alert or group of alerts.
+1. Select **Change severity** and select a severity.
+
+Changes to severity aren't reflected in the on-premises management console or sensor.
+
+## On-premises alert management
+
+Users working in hybrid deployments may manage alerts both on the Microsoft Defender for IoT portal Alerts page and on on-premises sensors and the on-premises management console.
+
+Users working with alerts in Azure and on-premises should understand how alert management between the portal and the on-premises components operates.
+
+ Parameter | Description
+|--|--|
+| **Alert Exclusion rules**| Alert *Exclusion rules* defined in the on-premises management console impact the alerts detected by managed sensors. As a result, the alerts excluded by these rules won't be displayed in the Alerts page. For more information, see [Create alert exclusion rules](how-to-work-with-alerts-on-premises-management-console.md#create-alert-exclusion-rules).
+| **Managing alerts on-premises** | Alerts **Learned**, **Acknowledged**, or **Muted** in the on-premises management console or in sensors aren't simultaneously updated on the Defender for IoT Alerts page in the Azure portal. This means the alert will stay open in the cloud. However, another alert won't be triggered from the on-premises components for this activity.
+| **Managing alerts in the portal Alerts page** | Changing the status of an alert to **New**, **Active**, or **Closed** on the Alerts page, or changing the alert severity on the Alerts page, doesn't impact the alert status or severity in the on-premises management console or sensors.
+
+## See also
+
+[Gain insight into global, regional, and local threats](how-to-gain-insight-into-global-regional-and-local-threats.md#gain-insight-into-global-regional-and-local-threats)
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
A unique activation file is uploaded to each sensor that you deploy. For more in
### About activation files for locally connected sensors
-Locally connected sensors are associated with an Azure subscription. The activation file for your locally connected sensors contains an expiration date. One month before this date, a warning message appears at the top of the sensor console. The warning remains until after you've updated the activation file.
-
+Locally connected sensors are associated with an Azure subscription. The activation file for your locally connected sensors contains an expiration date. One month before this date, a warning message appears in the System Messages window in the top-right corner of the console. The warning remains until after you've updated the activation file.
You can continue to work with Defender for IoT features even if the activation file has expired.
You might need to upload a new activation file for an onboarded sensor when:
**To add a new activation file:**
-1. Go to the **Sensor Management** page.
+1. Go to the Azure portal for Defender for IoT.
+1. Use the search bar to find the sensor you need.
-2. Select the sensor for which you want to upload a new activation file.
+1. Select the three dots (...) on the row and select **Delete sensor**.
-3. Delete it.
+1. Onboard the sensor again by selecting **Getting Started** > **Set up OT/ICS Security** > **Register this sensor with Microsoft Defender for IoT**.
-4. Onboard the sensor again from the **Onboarding** page in the new mode or with a new Defender for IoT hub.
+1. Go to the **Sites and sensors** page.
-5. Download the activation file from the **Download Activation File** page.
+1. Use the search bar to find the sensor you just added, and select it.
+1. Select the three dots (...) on the row and select **Download activation file**.
6. Save the file.
- :::image type="content" source="media/how-to-manage-individual-sensors/download-activation-file.png" alt-text="Download the activation file from the Defender for IoT hub.":::
- 7. Sign in to the Defender for IoT sensor console.
-8. In the sensor console, select **System Settings** > **Reactivation**.
-
- :::image type="content" source="media/how-to-manage-individual-sensors/reactivation.png" alt-text="Reactivation selection on the System Settings screen.":::
+8. Select **System Settings** > **Sensor management** > **Subscription & Activation Mode**.
9. Select **Upload** and select the file that you saved.
- :::image type="content" source="media/how-to-manage-individual-sensors/upload-the-file.png" alt-text="Upload the file you saved.":::
- 10. Select **Activate**. ### Troubleshoot activation file upload
Sensor Administrators may be required to update certificates that were uploaded
**To update a certificate:**
-1. Select **System Settings**.
+1. Select **System Settings** and then select **Basic**.
-1. Select **SSL/TLS Certificates.**
+1. Select **SSL/TLS Certificate.**
:::image type="content" source="media/how-to-manage-individual-sensors/certificate-upload.png" alt-text="Upload a certificate":::
For more information about first-time certificate upload see,
This section describes how to ensure connection between the sensor and the on-premises management console. You need to do this if you're working in an air-gapped network and want to send device and alert information to the management console from the sensor. This connection also allows the management console to push system settings to the sensor and perform other management tasks on the sensor.
-To connect:
+**To connect:**
1. Sign in to the on-premises management console.
The sensor network configuration was defined during the sensor installation. You
If you create a new IP address, you might be required to sign in again.
-To change the configuration:
+**To change the configuration:**
1. On the side menu, select **System Settings**.
You can configure the sensor's time and region so that all the users see the sam
| Date format | Select one of the following format options:<br />- dd/MM/yyyy HH:mm:ss<br />- MM/dd/yyyy HH:mm:ss<br />- yyyy/MM/dd HH:mm:ss | | Date and time | Displays the current date and local time in the format that you selected.<br />For example, if your actual location is America and New York, but the time zone is set to Europe and Berlin, the time is displayed according to Berlin local time. |
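For reference, the three documented date format options correspond to the following `strftime` patterns. This mapping is an illustration for anyone post-processing timestamps, not product configuration:

```python
from datetime import datetime

# The three documented display options, expressed as strftime patterns.
FORMATS = {
    "dd/MM/yyyy HH:mm:ss": "%d/%m/%Y %H:%M:%S",
    "MM/dd/yyyy HH:mm:ss": "%m/%d/%Y %H:%M:%S",
    "yyyy/MM/dd HH:mm:ss": "%Y/%m/%d %H:%M:%S",
}

ts = datetime(2022, 2, 6, 9, 24, 55)
for label, fmt in FORMATS.items():
    print(label, "->", ts.strftime(fmt))  # e.g. dd/MM/yyyy HH:mm:ss -> 06/02/2022 09:24:55
```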
-To configure the sensor time:
-
-1. On the side menu, select **System Settings**.
+**To configure the sensor time:**
-2. In the **System Settings** window, select **Time & Regional**.
+1. On the side menu, select **System settings** > **Basic** > **Time & Region**.
3. Set the parameters and select **Save**.
When you control a sensor by using the on-premises management console, you can u
Sensor backup files are automatically named through the following format: `<sensor name>-backup-version-<version>-<date>.tar`. An example is `Sensor_1-backup-version-2.6.0.102-2019-06-24_09:24:55.tar`.
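Given the documented naming format, a backup file name can be parsed back into its parts. Here's a sketch — the regular expression is derived from the format shown above, not from the product itself:

```python
import re

# Pattern derived from the documented format:
# <sensor name>-backup-version-<version>-<date>.tar
BACKUP_RE = re.compile(
    r"^(?P<sensor>.+)-backup-version-(?P<version>[\d.]+)-(?P<date>[\d-]+_[\d:]+)\.tar$"
)

name = "Sensor_1-backup-version-2.6.0.102-2019-06-24_09:24:55.tar"
m = BACKUP_RE.match(name)
print(m.group("sensor"))   # Sensor_1
print(m.group("version"))  # 2.6.0.102
print(m.group("date"))     # 2019-06-24_09:24:55
```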
-To configure backup:
+**To configure backup:**
- Sign in to an administrative account and enter `$ sudo cyberx-xsense-system-backup`.
-To restore the latest backup file:
+**To restore the latest backup file:**
- Sign in to an administrative account and enter `$ sudo cyberx-xsense-system-restore`.
-To save the backup to an external SMB server:
+**To save the backup to an external SMB server:**
1. Create a shared folder in the external SMB server.
To access system properties:
[Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
-[Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
+[Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
defender-for-iot How To Manage The Alert Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-alert-event.md
Title: Manage alert events
-description: Manage alert events detected in your network.
Previously updated : 11/09/2021
+description: Manage alerts detected in your network.
Last updated : 02/06/2022
-# Manage alert events
+# Manage alerts
-The following options are available for managing alert events:
+This article describes how to manage alerts.
+
+## About managing alerts
+
+The following options are available for managing alerts:
| Action | Description | |--|--|
- | **Learn** | Authorize the detected event. For more information, see [About learning and unlearning events](#about-learning-and-unlearning-events). |
- | **Acknowledge** | Hide the alert once for the detected event. The alert will be triggered again if the event is detected again. For more information, see [About acknowledging and unacknowledging events](#about-acknowledging-and-unacknowledging-events). |
- | **Mute** | Continuously ignore activity with identical devices and comparable traffic. For more information, see [About muting and unmuting events](#about-muting-and-unmuting-events). |
-
-You can also export alert information.
-## About learning and unlearning events
+| **Remediate** |Remediate a device or network process that caused Defender for IoT to trigger the alert. For more information, see [View remediation steps](#view-remediation-steps).|
+| **Learn** | Authorize the detected traffic. For more information, see [Learn and unlearn alert traffic](#learn-and-unlearn-alert-traffic). |
+| **Mute** | Continuously ignore activity with identical devices and comparable traffic. For more information, see [Mute and unmute alerts](#mute-and-unmute-alerts).
+| **Change status** | Change the alert status to Closed or New. For more information, see [Close the alert](#close-the-alert). |
+| **Forward to partner solutions** | Create Forwarding rules that send alert details to integrated solutions, for example to Microsoft Sentinel, Splunk, or ServiceNow. For more information, see [Forward alert information](how-to-forward-alert-information-to-partners.md#forward-alert-information). |
-Events that indicate deviations of the learned network might reflect valid network changes. When you want to approve these changes, you can instruct Defender for IoT to *learn* the behavior. Examples might include:
+Alerts are managed from the Alerts page on the sensor.
-- New activity was detected on existing device. For example, an authorized device attempted to access a new resource on another device.
+**To access the Alerts page:**
-- A new device is added to the network.
+1. Select **Alerts** from the sensor console's side pane.
+1. Review the alerts details and decide how to manage the alert.
-- Firmware version changes following standard maintenance procedures.
+ :::image type="content" source="media/how-to-manage-the-alert-event/main-alerts-screen.png" alt-text="Main sensor alerts screen":::
-- A new device performed a read/write operation on a destination controller.
+See [View alerts on your sensor](how-to-view-alerts.md#view-alerts-on-your-sensor) for information on:
+- The kind of alert information available
+- Customizing the alert view
+- How long alerts are saved
-- A new device performs a read/write operation on a destination controller and should be defined as a programming device.
+## View remediation steps
-- New legitimate scanning is carried out and the device should be defined as a scanning device.
+Defender for IoT provides remediation steps you can carry out for the alert. Steps may include remediating a device or network process that caused Defender for IoT to trigger the alert.
+Remediation steps help SOC teams better understand Operational Technology (OT) issues and resolutions. Review remediation information before managing the alert event or taking action on the device or the network.
-**To learn an event**:
+**To view alert remediation steps:**
-1. Navigate to the **Alerts** tab.
+1. Select an alert from the Alerts page.
+1. In the side pane, select **Take action.**
+1. Review remediation steps.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/alerts-tab.png" alt-text="Select the Alerts tab from the navigation bar on the left side of the screen.":::
+ :::image type="content" source="media/how-to-manage-the-alert-event/remediation-steps.png" alt-text="Sample remediation steps for alert action":::
-1. Select an alert from the list of alerts.
-1. Select **Learn**.
+Your administrator may have added instructions or comments to help you complete remediation or alert handling. If created, comments appear in the Alert Details section.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/learn.png" alt-text="The Address Detected Scan window.":::
-Activity reflected in alerts is calculated when you generate Data Mining, Risk Assessment, and Attack Vector reports. When you manage these events, the sensor updates the reports accordingly.
+After taking remediation steps, you may want to change the alert status to Close the alert.
-When you select **Learn**, the sensor considers traffic, configurations, or activity valid. The alert will no longer be triggered for the event. It also means the event won't be calculated when the sensor generates risk assessment, attack vector, and other reports.
+## Learn and unlearn alert traffic
-Learned events can be unlearned. When the sensor unlearns events, it will retrigger alerts related to this event.
+Some alerts indicate deviations of the learned network baseline. These alerts might reflect valid network changes, such as:
-**To unlearn an event**
+- New activity was detected on an existing device. For example, an authorized device attempted to access a new resource on another device.
-1. Navigate to the **Alerts** tab.
+- Firmware version changes following standard maintenance procedures.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/alerts-tab.png" alt-text="Select the Alerts tab from the navigation bar on the left side of the screen.":::
+- A new device is added to the network.
-1. From the view drop-down menu, select **Acknowledged**.
+- A new device performed a read/write operation on a destination controller.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/view-acknowledged.png" alt-text="Select acknowledged from the view section drop-down menu." lightbox="media/how-to-work-with-alerts-sensor/view-acknowledged-expanded.png":::
+- A new device performs a read/write operation on a destination controller and should be defined as a programming device.
-1. Select an alert from the list of alerts.
+- New legitimate scanning is carried out and the device should be defined as a scanning device.
-1. Select **Unlearn**.
+When you want to approve these changes, you can instruct Defender for IoT to *learn* the traffic.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/unlearn.png" alt-text="Select Unlearn to unlearn an event.":::
+**To learn the traffic**:
-After an event is unlearned, it will move back to the Main View page.
+1. Navigate to the **Alerts** tab.
-**To see if an alert was learned, or acknowledged**:
+1. Select an alert from the list of alerts.
+1. Select **Take action**.
-1. Navigate to the Event Timeline tab :::image type="icon" source="media/how-to-work-with-alerts-sensor/event-timeline.png" border="false":::
+1. Enable the **Alert Learn** toggle.
-1. Locate the alert in the timeline.
+ :::image type="content" source="media/how-to-manage-the-alert-event/learn-remediation.png" alt-text="Learn option for Policy alert":::
- :::image type="content" source="media/how-to-work-with-alerts-sensor/event-timeline-acknowledged.png" alt-text="You can locate the acknowledged events in the event timeline.":::
+After learning, the traffic, configurations, or activity are considered valid. An alert will no longer be triggered for this activity.
- You will see in the alert's window, if the alert was learned, or acknowledged.
-
-## About acknowledging and unacknowledging events
+In addition,
-In certain situations, you might not want a sensor to learn a detected event, or the option might not be available. Instead, the incident might require mitigation. For example:
+- The alert status is automatically updated to Closed.
-- **Mitigate a network configuration or device**: You receive an alert indicating that a new device was detected on the network. When investigating, you discover that the device is an unauthorized network device. You handle the incident by disconnecting the device from the network.-- **Update a sensor configuration**: You receive an alert indicating that a server initiated an excessive number of remote connections. This alert was triggered because the sensor anomaly thresholds were defined to trigger alerts above a certain number of sessions within one minute. You handle the incident by updating the thresholds.
+- The learn action appears in the **Event Timeline**.
-After you carry out mitigation or investigation, you can instruct the sensor to hide the alert by selecting **Acknowledge**. If the event is detected again, the alert will be retriggered.
+- This traffic won't be included when the sensor generates Risk Assessment, Attack Vector, and other reports.
-To clear the alert:
+### Unlearn alert traffic
- - Select **Acknowledge**.
+Learned traffic can be unlearned. When the sensor unlearns traffic, alerts are retriggered when this traffic combination is detected again.
-To view the alert again:
+**To unlearn an alert**
- - Access the alert and select **Unacknowledge**.
+1. Navigate to the alert you learned.
+
+1. Disable the **Alert Learn** toggle.
-Unacknowledge alerts if further investigation is required.
+The alert status is automatically updated to **New**.
-## About muting and unmuting events
+## Mute and unmute alerts
Under certain circumstances, you might want to instruct your sensor to ignore a specific scenario on your network. For example:
- - The **Anomaly** engine triggers an alert on a spike in bandwidth between two devices, but the spike is valid for these devices.
-
- - The **Protocol Violation** engine triggers an alert on a protocol deviation detected between two devices, but the deviation is valid between the devices.
+ - The Anomaly engine triggers an alert on a spike in bandwidth between two devices, but the spike is valid for these devices.
-In these situations, learning is not available. When learning can't be carried out and you want to suppress the alert and remove the device when calculating risks and attack vectors, you can mute the alert event instead.
+ - The Protocol Violation engine triggers an alert on a protocol deviation detected between two devices, but the deviation is valid between the devices.
+
+ - The Operational engine triggers an alert indicating that the PLC Mode was changed on a device. The new mode may indicate that the PLC isn't secure. After investigation, it's determined that the new mode is acceptable.
-> [!NOTE]
-> You can't mute events in which an internet device is defined as the source or destination.
+In these situations, learning isn't available. When learning can't be carried out and you want to suppress the alert and exclude the device from risk and attack vector calculations, you can mute the alert event instead.
-### What alert activity is muted?
A muted scenario includes the network devices and traffic detected for an event. The alert title describes the traffic that is being muted.
-The device or devices being muted will be displayed as an image in the alert. If two devices are shown, the specific alerted traffic between them will be muted.
+> [!NOTE]
+> You can't mute traffic if an internet device is defined as the source or destination.
-**Example 1**
+**To mute an alert:**
-When an event is muted, it's ignored anytime the primary (source) sends the secondary (destination) an illegal function code as defined by the vendor.
+1. Select an alert from the Alerts page and then select **Take action**.
+1. Enable the **Alert mute** toggle.
+**After an event is muted:**
-**Example 2**
+- The alert status will automatically be changed to **Closed.**
-When an event is muted, it's ignored anytime the source sends an HTTP header with illegal content, regardless of the destination.
+- The mute action will appear in the **Event Timeline**.
+- The sensor will recalculate devices when generating Risk Assessment, Attack Vector, and other reports. For example, if you muted an alert that detected malicious traffic on a device, that device won't be calculated in the Risk Assessment report.
-**After an event is muted:**
+## Close the alert
-- The alert will be accessible in the **Acknowledged** alert view until it is unmuted.
+ Close an alert when you finish remediating, investigating, or otherwise handling the alert. For example:
-- The mute action will appear in the **Event Timeline**.
+- **Mitigate a network configuration or device**: You receive an alert indicating that a new device was detected on the network. When investigating, you discover that the device is unauthorized. You handle the alert by disconnecting the device from the network.
+
+- **Update a sensor configuration**: You receive an alert indicating that a server initiated an excessive number of remote connections. This alert was triggered because the sensor anomaly thresholds were defined to trigger alerts above a certain number of sessions within one minute. You handle the alert by updating the thresholds.
+
+After you carry out remediation or investigation, you can close the alert.
+
+If the traffic is detected again, the alert will be retriggered.
+
+**To close a single alert:**
+
+1. Select an alert. The Alert Details section opens.
+1. Select the dropdown arrow in the Status field and select **Closed**.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/muted-event-notification-screenshot.png" alt-text="Event detected and muted.":::
+ :::image type="content" source="media/how-to-manage-the-alert-event/close-alert.png" alt-text="Option to close an alert from the Alerts page":::
-- The sensor will recalculate devices when generating risk assessment, attack vector, and other reports. For example, if you muted an alert that detected malicious traffic on a device, that device will not be calculated in the risk assessment report.
+**To close multiple alerts:**
-**To mute and unmute an alert:**
+1. Select the alerts you want to close from the Alerts page.
+1. Select **Change Status** from the action items on the top of the page.
+1. Select **Closed** and **Apply.**
-- Select the **Mute** icon on the alert.
+ :::image type="content" source="media/how-to-manage-the-alert-event/multiple-close.png" alt-text="Selecting multiple alerts to close from the Alerts page":::
-**To view muted alerts:**
+Change the alert status to **New** if further investigation is required.
-1. Select the **Acknowledged** option form the **Alerts** main screen.
+To view closed alerts on the Alerts page, verify that the **Status** filter is set to show **Closed** alerts.
-2. Hover over an alert to see if it's muted.
## Export alert information
-Export alert information to a .csv file. You can export information of all alerts detected or export information based on the filtered view. The following information is exported:
+Export alert information to a .csv file. The following information is exported:
- Source address - Destination address
Export alert information to a .csv file. You can export information of all alert
- Acknowledged status - PCAP availability
-To export:
+**To export:**
-1. Select Alerts from the side menu.
+1. Select **Export to CSV** at the top of the Alerts page.
-1. Select Export.
-1. Select Export Extended Alerts to export alert information in separate rows for each alert that covers multiple devices. When Export Extended Alerts is selected, the .csv file will create a duplicate row of the alert event with the unique items in each row. Using this option makes it easier to investigate exported alert events.
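The exported .csv file can be post-processed with standard tooling. The sketch below filters exported alerts to those with a PCAP file available; the column names are assumptions based on the fields listed above, so check the header row of your actual export before relying on them.

```python
import csv
import io

# Sample rows mimicking a Defender for IoT alert export.
# Column names here are assumptions based on the fields listed above;
# verify them against the header row of your actual export.
sample = """Source address,Destination address,Acknowledged status,PCAP availability
10.0.0.5,10.0.0.9,Unhandled,Available
10.0.0.7,10.0.0.9,Acknowledged,Not available
"""

def alerts_with_pcap(csv_text):
    """Return (source, destination) pairs for alerts that have a PCAP file."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        (row["Source address"], row["Destination address"])
        for row in reader
        if row["PCAP availability"] == "Available"
    ]

print(alerts_with_pcap(sample))  # [('10.0.0.5', '10.0.0.9')]
```

The same pattern works on a file object opened from the downloaded export instead of the in-memory sample.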
+## Interaction with Azure Alerts page
+
+Your deployment may have been set up to work with cloud-connected sensors on the Defender for IoT portal on Azure. In cloud-connected environments, alert detections shown on your sensors also appear in the Defender for IoT Alerts page on the Azure portal.
+
+Viewing and managing alerts in the portal provides significant advantages. For example, you can:
+
+- Display an aggregated view of alert activity in all enterprise sensors
+- Learn about related MITRE ATT&CK techniques, tactics and stages
+- View alerts based on the sensor site
+- Integrate alert details with Microsoft Sentinel
+- Change the severity of an alert
+
+ :::image type="content" source="media/how-to-view-alerts/alert-cloud-mitre.png" alt-text="Sample of alert as shown in cloud":::
+
+Users working with alerts on the Defender for IoT portal on Azure should understand how alert management between the portal and the sensor operates.
+
+| Parameter | Description |
+|--|--|
+| **Alert Exclusion rules** | Alert *Exclusion rules* defined in the on-premises management console affect the alerts triggered by managed sensors. As a result, the alerts excluded by these rules also won't be displayed in the Alerts page on the portal. For more information, see [Create alert exclusion rules](how-to-work-with-alerts-on-premises-management-console.md#create-alert-exclusion-rules). |
+| **Managing alerts on your sensor** | If you change the status of an alert, or learn or mute an alert on a sensor, the changes aren't updated in the Defender for IoT Alerts page on the portal. This means the alert will stay open on the portal. However, another alert won't be triggered from the sensor for this activity. |
+| **Managing alerts in the portal Alerts page** | Changing the status or severity of an alert on the Azure portal Alerts page doesn't affect the alert status or severity on on-premises sensors. |
## See also
-[Alert types and descriptions](alert-engine-messages.md)
+- [Detection engines and alerts](concept-key-concepts.md#detection-engines-and-alerts)
+
+- [View alerts on your sensor](how-to-view-alerts.md#view-alerts-on-your-sensor)
+
+- [Alert types and descriptions](alert-engine-messages.md)
+
+- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
-[Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
Title: Set up your network
-description: Learn about solution architecture, network preparation, prerequisites, and other information needed to ensure that you successfully set up your network to work with Azure Defender for IoT appliances.
+description: Learn about solution architecture, network preparation, prerequisites, and other information needed to ensure that you successfully set up your network to work with Microsoft Defender for IoT appliances.
Last updated 12/19/2021
An overview of the industrial network diagram will allow you to define the prope
## Next steps
-[About the Defender for IoT installation](how-to-install-software.md)
+[About the Defender for IoT installation](how-to-install-software.md)
defender-for-iot How To View Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-alerts.md
Title: Filter and manage alerts from the Alerts page
-description: View alerts according to various categories, and uses search features to help you find alerts of interest.
Previously updated : 11/09/2021
+ Title: View alerts details on the sensor Alerts page
+description: View alerts detected by your Defender for IoT sensor.
Last updated : 02/06/2022
-# Filter and manage alerts from the Alerts page
+# View alerts on your sensor
-This article describes how to view alerts triggered by your sensor and manage them with alert tools.
+Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that need your attention.
-You can view alerts based on various categories, such as alerts that have been archived or pinned. You also can search for alerts of interest, such as alerts based on an IP or MAC address.
+This article describes how to view alerts triggered by your sensors.
-You can also view alerts from the sensor dashboard.
+Once an alert is selected, you can view comprehensive details about the alert activity, for example:
-To view alerts:
+- Detected protocols
+- Source and destination IP and MAC addresses
+- Vendor information
+- Device type information
-- Select **Alerts** from the side menu. The Alerts window displays the alerts that your sensor has detected.
+You can also gain contextual information about the alert by viewing the source and destination in the Device map and viewing related events in the Event timeline.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/alerts-screen.png" alt-text="View of the Alerts screen.":::
+To help you quickly pinpoint information of interest, you can view alerts:
-## View alerts by category
+- Based on various categories, such as alert severity, name or status
+- By using filters
+- By using free text search to find alert information of interest to you.
-You can view alerts according to various categories from the **Alerts** main view. Select an alert to review details and manage the event.
+After you review the information in an alert, you can carry out various forensic steps to guide you in managing the alert event. For example:
-| Sort by type | Description |
+- Analyze recent device activity (data-mining report).
+
+- Analyze other events that occurred at the same time (event timeline).
+
+- Analyze comprehensive event traffic (PCAP file).
+
+## View alerts and alert details
+
+This section describes how to view and filter alerts details on your sensor.
+
+**To view alerts in the sensor:**
+
+- Select **Alerts** from the side menu. The page displays the alerts detected by your sensor.
+
+ :::image type="content" source="media/how-to-view-alerts/view-alerts-main-page.png" alt-text="Alerts page on sensor" lightbox="media/how-to-view-alerts/view-alerts-main-page.png":::
+
+The following information is available from the Alerts page:
+
+| Name | Description |
|--|--|
-| **Important Alerts** | Alerts sorted by importance. |
-| **Pinned Alerts** | Alerts that the user pinned for further investigation. Pinned alerts are not archived and are stored for 14 days in the pinned folder. |
-| **Recent Alerts** | Alerts sorted by time. |
-| **Acknowledged Alerts** | Alerts that were acknowledged and unhandled, or that were muted, and unmuted. |
-| **Archived Alerts** | Alerts that the system archived automatically. By default, alerts are archived 14 days after the alert was triggered. Only the administrator user can access them. |
+| **Severity** | The alert severity: Critical, Major, Minor, Warning|
+| **Name** | The alert title |
+| **Engine** | The Defender for IoT detection engine that detected the activity and triggered the alert. If the event was detected by the Device Builder platform, the value will be Micro-agent. |
+| **Detection time** | The first time the alert activity was detected. |
+| **Status** | Indicates if the alert is new or closed. |
+| **Source Device** | The source device IP address |
+| **Destination Device** | The destination device IP address |
+| **ID** | The alert ID. |
+
+**To hide or display information:**
+
+1. Select **Edit Columns** from the Alerts page.
+1. Add and remove columns as required from the Edit columns dialog box.
+
+**How long are alerts saved?**
+
+- New alerts are automatically closed if no identical traffic is detected for 14 days after the initial detection. After 90 days of being closed, the alert is removed from the sensor console.
+
+- If identical traffic is detected after the initial 14 days, the 14-day count for network traffic is reset.
+
+    Changing the status of an alert to *Learn*, *Mute*, or *Close* doesn't affect how long the alert is displayed in the sensor console.
+
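The retention rules above can be sketched as a small date calculation. This is an illustrative model only; the function name and return values are hypothetical, not a Defender for IoT API.

```python
from datetime import date, timedelta

NEW_ALERT_WINDOW = timedelta(days=14)  # auto-closed if no identical traffic in 14 days
REMOVAL_WINDOW = timedelta(days=90)    # removed from the console 90 days after closing

def alert_state(last_detected: date, today: date) -> str:
    """Illustrative sketch of the sensor's alert retention rules."""
    since = today - last_detected
    if since <= NEW_ALERT_WINDOW:
        return "open"                     # still within the 14-day detection window
    if since <= NEW_ALERT_WINDOW + REMOVAL_WINDOW:
        return "auto-closed"              # closed, but still visible in the console
    return "removed from console"         # closed for more than 90 days

print(alert_state(date(2022, 1, 1), date(2022, 1, 10)))  # open
print(alert_state(date(2022, 1, 1), date(2022, 2, 1)))   # auto-closed
```

If identical traffic is detected again, `last_detected` would be reset, restarting the 14-day count as described above.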
+### Filter the view
+
+Use filter, grouping and text search tools to view alerts of interest to you.
+
+**To filter by category:**
+
+1. Select **Add filter**.
+1. Define a filter and select **Apply**.
+
+ :::image type="content" source="media/how-to-view-alerts/alerts-filter.png" alt-text="Alert filter options":::
+
+**About the Groups type**
+
+The **Groups** option refers to the Device groups you created in the Device map and inventory.
+
+**To view alerts based on a pre-defined category:**
-## Search for alerts of interest
+1. Select **Group by** from the Alerts page and choose a category. The page displays the alerts according to the category selected.
-The Alerts main view provides various search features to help you find alerts of interest.
+## View alert descriptions and details
+View more information about the alert, such as the alert description, details about protocols, traffic and entities associated with the alert, alert remediation steps, and more.
-### Text search
+**To view details:**
-Use the Free Search option to search for alerts by text, numbers, or characters.
+1. Select an alert. The details pane opens with the alert description, source/destination information, and other details.
-To search:
+1. To view more details and review remediation steps, select **View full details**. The Alert Details pane provides more information about the traffic and devices. Comments may also have been added by your administrator.
-- Type the required text in the Free Search field and press Enter on your keyboard.
+## Gain contextual insight
-To clear the search:
+Gain contextual insight about alert activity by:
-- Delete the text in the Free Search field and press Enter on your keyboard.
+- Viewing source and destination devices in map view with other connected devices. Select **Map View** to see the map.
-### Device group or device IP address search
+ :::image type="content" source="media/how-to-view-alerts/view-alerts-map.png" alt-text="Map view of source and detected device from alert" lightbox="media/how-to-view-alerts/view-alerts-map.png" :::
+
+- Viewing an Event timeline with recent activity of the device. Select **Event Timeline** and use the filter options to customize the information displayed.
+
+ :::image type="content" source="media/how-to-view-alerts/alert-event-timeline.png" alt-text="Alert timeline for selected alert from Alerts page" lightbox="media/how-to-view-alerts/alert-event-timeline.png" :::
-Search for alerts that reference specific IP addresses or device groups. Device groups are created in the device map.
+### Remediate the alert incident
-To use advanced filters:
+Defender for IoT provides remediation steps you can carry out for the alert. This may include remediating a device or network process that caused Defender for IoT to trigger the alert.
+Remediation steps will help SOC teams better understand OT issues and resolutions. Review this information before managing the alert event or taking action on the device or the network.
-1. Select **Advanced Filters** from the **Alerts** main view.
+**To view alert remediation steps:**
-2. Choose a device group or a device.
+1. Select an alert from the Alerts page.
+1. In the side pane, select **Take action.**
-3. Select **Confirm**.
+ :::image type="content" source="media/how-to-view-alerts/alert-remediation-rename.png" alt-text="Take action section of alert":::
-4. To clear the search, select **Clear All**.
+Your administrator may have added guidance to help you complete the remediation or alert handling. If created, comments will appear in the Alert Details section.
-### Security versus operational alert search
-Switch between viewing operational and security alerts:
+After taking remediation steps, you may want to change the alert status to close the alert.
-- **Security** alerts represent potential malware traffic, network anomalies, policy violations, and protocol violations. -- **Operational** alerts represent network anomalies, policy violations, and protocol violations.
+## Create alert reports
-When none of the options are selected, all the alerts are displayed.
+You can generate the following alert reports:
+- Export information on one, all, or selected alerts to a CSV file
+- Export PDF reports
+- Download a full or filtered PCAP file for a specific alert.
-## Alert page options
+**To export to CSV file:**
-Alert messages provide the following actions:
+1. Select one or several alerts from the Alerts page. To create a CSV file for all alerts, don't select any.
+1. Select **Export to CSV**.
-- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/acknowledge-an-alert-icon.png" border="false"::: to acknowledge an alert.
+**To export a PDF:**
-- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/unacknowledge-an-alert-icon.png" border="false"::: to unacknowledge an alert.
+1. Select one or several alerts from the Alerts page.
+1. Select **Export to PDF**.
-- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/pin-an-alert-icon.png" border="false"::: to pin an alert.
+**To download a PCAP file:**
-- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/unpin-an-alert-icon.png" border="false"::: to unpin an alert.
+1. Select an alert.
+1. Select **View full details**.
+1. Select **Download Full PCAP** or **Download Filtered PCAP**.
-- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/acknowledge-all-alerts-icon.png" border="false"::: to acknowledge all alerts.
+PCAP files provide more detailed information about the network traffic that occurred at the time of the alert event.
-- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/learn-and-acknowledge-all-alerts.png" border="false"::: to learn and acknowledge all alerts.
+## View alerts in the Defender for IoT portal
-- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/export-to-csv.png" border="false"::: to export alert information to a .csv file. Use the **Extended Alert Export** option to export alert information in separate rows for each alert that covers multiple devices.
+If your deployment was set up to work with cloud-connected sensors, alert detections shown on your sensors will also appear in the Defender for IoT Alerts page on the Azure portal.
-## Alert pop-up window options
+Viewing alerts in the portal provides significant advantages. For example, it lets you:
-- Select the :::image type="icon" source="media/how-to-work-with-alerts-sensor/export-to-pdf.png" border="false"::: icon to download an alert report as a PDF file.
+- Display an aggregated view of alert activity in all enterprise sensors
+- Learn about related MITRE ATT&CK techniques, tactics, and stages
+- View alerts based on the site
+- Change the severity of an alert
-- Select the :::image type="icon" source="media/how-to-work-with-alerts-sensor/download-pcap.png" border="false"::: icon to download the PCAP file. The file is viewable with Wireshark, the free network protocol analyzer.
+ :::image type="content" source="media/how-to-view-alerts/alert-cloud-mitre.png" alt-text="Sample of alert as shown in cloud":::
-- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/download-filtered-pcap.png" border="false"::: to download a filtered PCAP file that contains only the relevant alert packets, thereby reducing output file size and allowing a more focused analysis. You can view the file by using Wireshark.
+### Manage alert events
-- Select the :::image type="icon" source="media/how-to-work-with-alerts-sensor/show-alert-in-timeline.png" border="false"::: icon to show the alert in the event timeline.
+You can manage an alert incident by:
-- Select the :::image type="icon" source="media/how-to-work-with-alerts-sensor/pin-an-alert-icon.png" border="false"::: icon to pin the alert.
+- Changing the status of an alert.
-- Select the :::image type="icon" source="media/how-to-work-with-alerts-sensor/unpin-an-alert-icon.png" border="false"::: icon to unpin the alert.
+- Instructing sensors to learn, acknowledge, or mute detected activity.
-- Select :::image type="icon" source="media/how-to-work-with-alerts-sensor/learn-icon.png" border="false"::: to approve the traffic (security analysts and administrators only).
+- Creating alert groups for display in SOC solutions.
-- Select a device to display it in the device map.
+- Forwarding alerts to partner vendors: SIEM systems, MSSP systems, and more.
## Next steps
defender-for-iot How To View Information Provided In Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-information-provided-in-alerts.md
- Title: About alert messages
-description: Select an alert from the Alerts window to review details.
Previously updated : 11/09/2021---
-# About alert messages
-
-Select an alert from the **Alerts** window to review alert details. Details include the following information:
--- Alert metadata--- Information about traffic, devices, and the event--- Links to connected devices in the device map--- Comments defined by security analysts and administrators--- Recommendations for investigating the event-
-## Alert metadata
-
-Alert details include the following alert metadata:
-
- - Alert ID
-
- - Policy engine that triggered the alert
-
- - Date and time that the alert was triggered
--
-## Information about devices, traffic, and the event
-
-The alert message provides information about:
-
- - The detected devices.
-
- - The traffic detected between the devices, such as protocols and function codes.
-
- - Insights into the implications of the event.
-
-You can use this information when deciding how to manage the alert event.
-
-## Links to connected devices in the device map
-
-To learn more about devices connected to the detected devices, you can select a device image in the alert and view connected devices in the map.
---
-The map filters to the device that you selected, and other devices connected to it. The map also displays the **Quick Properties** dialog box for the devices detected in the alerts.
-
-## Comments defined by security analysts and administrators
-
-Alerts might include a list of predefined comments. For example, comments can be instructions for mitigation actions to take, or names of individuals to contact about the event.
-
-When you're managing an alert event, you can choose the comment or comments that best reflect the event status or the steps you've taken to investigate the alert.
-
-Selected comments are saved in the alert message. Working with comments enhances communication between individuals and teams during the investigation of an alert event. As a result, comments can accelerate incident response time.
-
-An administrator or security analyst predefines comments. Selected comments are not forwarded to partner systems defined in the forwarding rules.
-
-After you review the information in an alert, you can carry out various forensic steps to guide you in managing the alert event. For example:
-
-- Analyze recent device activity (data-mining report).
-
-- Analyze other events that occurred at the same time (event timeline).
-
-- Analyze comprehensive event traffic (PCAP file).
-
-## PCAP files
-
-In some cases, you can access a PCAP file from the alert message. This might be useful if you want more detailed information about the associated network traffic.
-
-To download a PCAP file, select :::image type="content" source="media/how-to-work-with-alerts-sensor/download-pcap.png" alt-text="Download icon."::: at the upper right of the **Alert details** dialog box.
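Beyond opening the capture in a dedicated tool, a downloaded file can be checked programmatically. A minimal sketch, assuming the download is a classic libpcap `.pcap` file; `parse_pcap_header` is an illustrative helper, not part of any Defender for IoT tooling:

```python
import struct

def parse_pcap_header(data: bytes) -> dict:
    """Parse the 24-byte global header of a classic libpcap capture file."""
    if len(data) < 24:
        raise ValueError("not enough data for a pcap global header")
    magic = data[:4]
    if magic == b"\xd4\xc3\xb2\xa1":    # little-endian, microsecond timestamps
        endian = "<"
    elif magic == b"\xa1\xb2\xc3\xd4":  # big-endian
        endian = ">"
    else:
        raise ValueError("not a classic pcap file")
    # version_major, version_minor, thiszone, sigfigs, snaplen, linktype
    major, minor, _thiszone, _sigfigs, snaplen, linktype = struct.unpack(
        endian + "HHiIII", data[4:24]
    )
    return {"version": (major, minor), "snaplen": snaplen, "linktype": linktype}
```

The header tells you, for example, the link type (1 is Ethernet) and the snapshot length before you hand the file to a full packet-analysis tool.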
-
-## Recommendations for investigating an event
-
-The **Recommendation** area of an alert displays information that might help you better understand an event. Review this information before managing the alert event or taking action on the device or the network.
-
-## See also
-
-[Accelerate Alert workflows](how-to-accelerate-alert-incident-response.md)
-
-[Manage the alert event](how-to-manage-the-alert-event.md)
defender-for-iot How To Work With Alerts On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console.md
Last updated 11/09/2021
-# Work with alerts on the on-premises management console
+# Work with alerts on the on-premises management console
-You can do the following from the **Alerts** window in the management console:
+You can do the following from the **Alerts** page in the management console:
- Work with alert filters
You can do the following from the **Alerts** window in the management console:
- Accelerate incident workflow with alert groups
+## Interaction with Cloud Alerts page
+
+If your deployment is set up to work with cloud-connected sensors, alert detections shown on all enterprise sensors also appear on the Defender for IoT **Alerts** page in the Azure portal.
++
+Viewing and managing alerts in the portal provides significant advantages. For example, it lets you:
+
+- Display an aggregated view of alert activity in all enterprise sensors.
+- Learn about related MITRE ATT&CK techniques, tactics, and stages.
+- View alerts based on the sensor site.
+- Integrate alert details with Microsoft Sentinel.
+- Change the severity of an alert.
+
+ :::image type="content" source="media/how-to-view-alerts/alert-cloud-mitre.png" alt-text="Sample of alert as shown in cloud":::
+## View alerts in the on-premises management console

The on-premises management console aggregates alerts from all connected sensors. This provides an enterprise view of recent threats in your network and helps you better understand how sensor users are handling them.

### Work with alert filters
The alert presents the following information:
**On-premises management console Alert UUID** **Sensor alert ID**
Working with UUIDs ensures that each alert displayed in the on-premises manageme
- From the alert list, select an alert.
- :::image type="content" source="media/how-to-work-with-alerts-on-premises-management-console/alert-information.png" alt-text="Screenshot of alert information.":::
- **To view the alert in the sensor**: - Select **OPEN SENSOR** from the alert.
In addition to working with exclusion rules, you can suppress alerts by muting t
1. In the **By Device Address** section, define the:
- - Device IP address, MAC address, or subnet address that you want to exclude.
-
- - Traffic direction for the excluded devices, source, and destination.
+ - Device IP address, MAC address, or subnet address that you want to exclude.
+
+ - Traffic direction for the excluded devices, source, and destination.
1. Select **ADD**.
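Conceptually, muting by device address matches each alert's source and destination against the rule's address and traffic direction, as described in the steps above. A minimal sketch of that matching logic (field names such as `src` and `direction` are illustrative, not the product's schema):

```python
import ipaddress

def matches_exclusion(alert: dict, rule: dict) -> bool:
    """Return True when an alert falls under an exclusion rule.

    `alert` carries "src" and "dst" IP strings; `rule` carries an
    "address" (a single IP or a subnet in CIDR form) and a "direction"
    ("source", "destination", or "both").
    """
    network = ipaddress.ip_network(rule["address"], strict=False)
    src_hit = ipaddress.ip_address(alert["src"]) in network
    dst_hit = ipaddress.ip_address(alert["dst"]) in network
    if rule["direction"] == "source":
        return src_hit
    if rule["direction"] == "destination":
        return dst_hit
    return src_hit or dst_hit  # "both"
```

For example, a rule for `10.1.0.0/16` with direction "source" suppresses only alerts whose source device sits inside that subnet.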
Rules that you create by using the API appear in the **Exclusion Rule** window a
## Next steps
-[Work with alerts on your sensor](how-to-work-with-alerts-on-your-sensor.md).
-Review the [Defender for IoT Engine alerts](alert-engine-messages.md).
+Review the [Defender for IoT Engine alerts](alert-engine-messages.md).
defender-for-iot How To Work With Alerts On Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-alerts-on-your-sensor.md
- Title: Understand sensor alerts
-description: Work with alerts to help you enhance the security and operation of your network.
Previously updated : 11/09/2021---
-# About sensor alerts
-
-Alerts help you enhance the security and operation of your network. Alerts provide you with information about:
-
-- Deviations from authorized network activity
-
-- Protocol and operational anomalies
-
-- Suspected malware traffic
-
-Alert management options let users:
-
-- Instruct sensors to learn activity detected as authorized traffic.
-
-- Acknowledge reviewing the alert.
-
-- Instruct sensors to mute events detected with identical devices and comparable traffic.
-
-Additional tools are available that help you enhance and expedite the alert investigation. For example:
-
- - Add instructional comments for alert reviewers.
-
- - Create alert groups for display at SOC solutions.
-
- - Search for specific alerts; review related PCAP files; view the detected devices and other connected devices in the device map or send alert details to partner systems.
-
- - Forward alerts to partner vendors: SIEM systems, MSSP systems, and more.
-
-## Alerts and engines
-
-Alerts are triggered when sensor engines detect changes in network traffic and behavior that need your attention. This article describes the kind of alerts that each engine triggers.
-
-| Alert type | Description |
-|-|-|
-| Policy violation alerts | Triggered when the Policy Violation engine detects a deviation from traffic previously learned. For example: <br /> - A new device is detected. <br /> - A new configuration is detected on a device. <br /> - A device not defined as a programming device carries out a programming change. <br /> - A firmware version changed. |
-| Protocol violation alerts | Triggered when the Protocol Violation engine detects packet structures or field values that don't comply with the protocol specification. |
-| Operational alerts | Triggered when the Operational engine detects network operational incidents or a device malfunctioning. For example, a network device was stopped through a Stop PLC command, or an interface on a sensor stopped monitoring traffic. |
-| Malware alerts | Triggered when the Malware engine detects malicious network activity. For example, the engine detects a known attack such as Conficker. |
-| Anomaly alerts | Triggered when the Anomaly engine detects a deviation. For example, a device is performing network scans but is not defined as a scanning device. |
-
-Tools are available to enable and disable sensor engines. Alerts are not triggered from engines that are disabled. See [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md).
-
-## Alerts and sensor reporting
-
-Activity reflected in alerts is calculated when you're generating Data Mining, Risk Assessment, and Attack Vector reports. When you manage these events, the sensor updates the reports accordingly.
-
-For example:
-
- - Unauthorized connectivity between a device in a defined subnet and devices located outside the subnet (public) will appear in the Data Mining *Internet Activity* report and the Risk Assessment *Internet Connections* section. After these devices are authorized (learned), they're calculated in generating these reports.
-
- - Malware events detected on network devices are reported in Risk Assessment reports. When alerts about malware events are *muted*, affected devices won't be calculated in the Risk Assessment report.
-
-## Next steps
-
-[Learning and Smart IT Learning modes](how-to-control-what-traffic-is-monitored.md#learning-and-smart-it-learning-modes)
-[View information provided in alerts](how-to-view-information-provided-in-alerts.md)
-[Manage the alert event](how-to-manage-the-alert-event.md)
-[Accelerate alert workflows](how-to-accelerate-alert-incident-response.md)
-[Alert types and descriptions](alert-engine-messages.md)
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
Title: Work with the sensor device map
-description: The Device Map provides a graphical representation of network devices detected. Use the map to analyze, and manage device information, network slices and generate reports.
Previously updated : 11/09/2021
+description: The Device map provides a graphical representation of network devices detected. Use the map to analyze and manage device information, view network slices, and generate reports.
Last updated : 02/02/2022
-# Investigate sensor detections in the Device Map
+# Investigate sensor detections in the Device map
-The Device Map provides a graphical representation of network devices detected. Use the map to:
+The Device map provides a graphical representation of network devices detected, as well as the connections between them. Use the map to:
- Retrieve, analyze, and manage device information.
The Device Map provides a graphical representation of network devices detected.
- Generate reports, for example export device details and summaries.
-To access the map:
+**To access the map:**
+
+- Select **Device map** from the console main screen.
- - Select **Device Map** from the console main screen.
## Map search and layout tools
-The following tools are used to working in the map.
+A variety of map tools help you gain insight into devices and connections of interest to you.
+- [Basic search tools](#basic-search-tools)
+- [Group highlight and filters tools](#group-highlight-and-filters-tools)
+- [Map display tools](#map-display-tools)
Your user role determines which tools are available in the Device Map window. See [Create and manage users](how-to-create-and-manage-users.md) for details about user roles.
-| Symbol | Description |
-|||
-| :::image type="icon" source="media/how-to-work-with-maps/search-bar-icon-v2.png" border="false":::| Search by IP address or MAC address for a specific device. Enter the IP address or MAC address in the text box. The map displays the device that you searched for with devices connected to it. |
-| Group Highlight and Filters <br /> :::image type="content" source="media/how-to-work-with-maps/group-highlight-and-filters-v2.png" alt-text="Screenshot of the group highlights and filters."::: | Filter or highlight the map based on default and custom device groups. |
-| :::image type="icon" source="media/how-to-work-with-maps/collapse-view-icon.png" border="false"::: | IT Collapse view, to enable a focused view on OT devices, and group IT devices. |
-| :::image type="icon" source="media/how-to-work-with-maps/device-management-icon.png" border="false"::: | Maintain current device arrangement in the map. For example, if you drag devices to new locations on the map, the devices will remain in these locations when exiting the map. |
-| :::image type="icon" source="media/how-to-work-with-maps/fit-to-screen-icon.png" border="false"::: | Fit to screen |
-| :::image type="icon" source="media/how-to-work-with-maps/layer-icon.png" border="false"::: :::image type="icon" source="media/how-to-work-with-maps/layouts-icon-v2.png" border="false"::: | - View the Purdue layer identified for this device, including automatic, process control, supervisory, and enterprise <br /> - View connections between devices.|
-| :::image type="icon" source="media/how-to-work-with-maps/broadcast-icon.png" border="false"::: | Show or hide between broadcast and multicast. |
-| :::image type="icon" source="media/how-to-work-with-maps/time-icon.png" border="false"::: | Filter the devices on the map according to the time they last communicating with other devices. |
-| :::image type="icon" source="media/how-to-work-with-maps/notifications-icon.png" alt-text="notifications" border="false"::: | View notifications about a device. For example, if a new IP was detected for a device using an existing MAC address |
-| :::image type="icon" source="media/how-to-work-with-maps/export-import.png" alt-text="Export" border="false"::: | Export/Import device information. |
-| :::image type="icon" source="media/how-to-work-with-maps/properties-icon.png" alt-text="properties" border="false"::: | View basic device properties for selected devices. |
-| :::image type="icon" source="media/how-to-work-with-maps/zoom-in-icon-v2.png" alt-text="Zoom In" border="false"::: or :::image type="icon" source="media/how-to-work-with-maps/zoom-out-icon-v2.png" alt-text="Zoom Out" border="false"::: | Zoom in or out of devices in the map. |
-
-## View OT elements only
-
-By default, IT devices are automatically aggregated by subnet, so that the map view is focused on OT and ICS networks. The presentation of the IT network elements is collapsed to a minimum, which reduces the total number of the devices presented on the map and provides a clear picture of the OT and ICS network elements.
-
-Each subnet is presented as a single entity on the device map, including an interactive collapsing and expanding capability to look at the details of an IT subnet and back.
-
-The figure below shows a collapsed IT subnet with 27 IT network elements.
--
-To enable the IT networks collapsing capability:
-
-- In the **System Settings** window, ensure that the Toggle IT Networks Grouping capability is enabled.
-
-To expand an IT subnet:
-
-1. To differentiate between the IT and OT networks, from the System Settings screen, select **Subnets**.
-
- > [!NOTE]
 - > It is recommended to name each subnet with a meaningful name that the user can easily identify, in order to differentiate between IT and OT networks.
+### Basic search tools
- :::image type="content" source="media/how-to-work-with-maps/subnet-list.png" alt-text="Subnets Configuration":::
+The following basic search tools are available:
+- Search by IP or MAC address.
+- Show or hide multicast and broadcast traffic.
+- Last seen: Filter the devices on the map according to the time they last communicated with other devices.
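The "last seen" filter keeps only devices whose most recent communication falls inside a chosen time window. A minimal sketch of that idea (the `last_seen` field name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def filter_by_last_seen(devices, window: timedelta, now=None):
    """Keep devices whose last communication falls inside the window,
    mirroring the map's "last seen" filter conceptually."""
    now = now or datetime.now(timezone.utc)
    return [d for d in devices if now - d["last_seen"] <= window]
```

Passing `timedelta(hours=6)`, for instance, corresponds to showing only devices seen in the last six hours.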
-2. In the **Edit Subnets Configuration** window, clear the **ICS Subnet** checkbox for each subnet that you want to define as an IT subnet. The IT subnets appear collapsed in the device map with the notifications for ICS devices, such as a controller or PLC, in IT networks.
+ :::image type="icon" source="media/how-to-work-with-maps/search-bar-icon-v2.png" border="false":::
- :::image type="content" source="media/how-to-work-with-maps/edit-config.png" alt-text="Edit Subnets Configuration":::
+When you search by IP or MAC address, the map displays the device that you searched for with devices connected to it.
-3. To expand the IT network on the map, in the Devices window, right-click it and select **Expand Network**.
- :::image type="content" source="media/how-to-work-with-maps/expand-network.png" alt-text="Expand your view of your network.":::
+### Group highlight and filters tools
-4. A confirmation box appears, notifying you that the layout change cannot be redone.
+Filter or highlight the map based on default and custom device groups.
-5. Select **OK**. The IT subnet elements appear on the map.
+- Filtering omits the devices that are not in the selected group.
+- Highlighting displays all devices and highlights the devices in the selected group in blue.
- :::image type="content" source="media/how-to-work-with-maps/fixed-map.png" alt-text="OK":::
+ :::image type="content" source="media/how-to-work-with-maps/group-highlight-and-filters-v2.png" alt-text="Screenshot of the group highlights and filters.":::
-To collapse an IT subnet:
+**To highlight or filter devices:**
-1. From the left pane, select **Devices**.
+1. Select **Device map** on the side menu.
-2. In the Devices window, select the collapse icon. The number in red indicates how many expanded IT subnets currently appear on the map.
-
- :::image type="content" source="media/how-to-work-with-maps/devices-notifications.png" alt-text="Device window":::
-
-3. Select the subnet(s) that you want to collapse or select **Collapse All**. The selected subnet appears collapsed on the map.
-
- :::image type="content" source="media/how-to-work-with-maps/close-all-subnets.png" alt-text="Collapse All":::
-
-The collapse icon is updated with the updated number of expanded IT subnets.
-
-## View or highlight device groups
-
-You can customize the map display based on device Groups. For example, groups of devices associated with a specific OT Protocol, VLAN, or subnet. Predefined groups are available and custom groups can be created.
-
-View groups by:
-
- - **Highlighting:** Highlight the devices that belong to a specific group in blue.
-
- - **Filtering:** Display only devices that belong to a specific group.
-
+1. From the Groups pane, select the group you want to highlight or filter.
+1. Toggle the **Highlight** or **Filter** option.
The following predefined groups are available:

| Group name | Description |
|--|--|
| **Subnets** | Devices that belong to a specific subnet. |
| **VLAN** | Devices associated with a specific VLAN ID. |
| **Cross subnet connections** | Devices that communicate from one subnet to another subnet. |
-| **Pinned alerts** | Devices for which the user has pinned an alert. |
-| **Attack vector simulations** | Vulnerable devices detected in attack vector reports. In order to view these devices on the map, select the **Display on Device Map** checkbox when generating the Attack Vector. :::image type="content" source="media/how-to-work-with-maps/add-attack-v2.png" alt-text="Add Attack Vector Simulations":::. |
+| **Attack vector simulations** | Vulnerable devices detected in attack vector reports. To view these devices on the map, select the **Display on Device Map** checkbox when generating the Attack Vector. :::image type="content" source="media/how-to-work-with-maps/add-attack-v3.png" alt-text="Add Attack Vector Simulations":::|
| **Last seen** | Devices grouped by the time frame they were last seen, for example: One hour, six hours, one day, seven days. |
| **Not In Active Directory** | All non-PLC devices that are not communicating with the Active Directory. |
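As a conceptual illustration of the **Cross subnet connections** group, a connection can be classified as cross-subnet when its two endpoints resolve to different known subnets (the subnet list below is illustrative):

```python
import ipaddress

# Illustrative subnet definitions; in practice these come from your
# sensor's subnet configuration.
SUBNETS = [ipaddress.ip_network("10.1.0.0/16"), ipaddress.ip_network("10.2.0.0/16")]

def subnet_of(ip: str):
    """Return the known subnet containing the address, or None."""
    addr = ipaddress.ip_address(ip)
    return next((net for net in SUBNETS if addr in net), None)

def crosses_subnets(src: str, dst: str) -> bool:
    """True when the two endpoints sit in different known subnets."""
    a, b = subnet_of(src), subnet_of(dst)
    return a is not None and b is not None and a != b
```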
-To highlight or filter devices:
-
-1. Select **Device Map** on the side menu.
+For information about creating custom groups, see [Define custom groups](#define-custom-groups).
-2. Select the filter icon. :::image type="content" source="media/how-to-work-with-maps/menu-icon.png" alt-text="Menu":::
+### Map display tools
-3. From the Groups pane, select the group you want to highlight or filter devices.
-
-4. Select **Highlight** or **Filter**. Toggle the same selection to remove the highlight, or filter.
-
-## Define custom groups
-
-In addition to viewing predefined groups, you can define custom groups. The groups appear in the Device Map, Device Inventory, and Data Mining Reports.
-
-> [!NOTE]
-> You can also create groups from the Device Inventory.
-
-To create a group:
-
-1. Select **Devices** on the side menu. The Device Map is displayed.
-
-1. Select :::image type="content" source="media/how-to-work-with-maps/menu-icon.png" alt-text="Group Setting"::: to display the Groups settings.
-
-1. Select :::image type="content" source="media/how-to-work-with-maps/create-group-v2.png" alt-text="groups"::: to create a new custom group.
--
-1. Add the name of the group, use up to 30 characters.
-
-1. Select the relevant devices, as follows:
-
- - Add the devices from this menu by selecting them from the list (select on the arrow button),<br /> Or, <br />
- - Add the devices from this menu by copying them from a selected group (select on the arrow button)
-
-1. Select **Add group** to add existing groups to custom groups.
-
-### Add devices to a custom group
-
-You can add devices to a custom group or create a new custom group and the device.
-
-1. Right-click a device(s) on the map.
-
-1. Select **Add to group**.
-
-1. Enter a group name in the group field and select +. The new group appears. If the group already exists, it will be added to the existing custom group.
-
- :::image type="content" source="media/how-to-work-with-maps/groups-section-v2.png" alt-text="Group name":::
+| Icon | Description |
+|--|--|
+| :::image type="icon" source="media/how-to-work-with-maps/fit-to-screen-icon.png" border="false"::: | Fit to screen. |
+| :::image type="icon" source="media/how-to-work-with-maps/fit-to-selection-icon.png" border="false"::: | Fits a group of selected devices to the center of the screen. |
+| :::image type="icon" source="media/how-to-work-with-maps/collapse-view-icon.png" border="false"::: | IT/OT presentation. Collapse view to enable a focused view on OT devices, and group IT devices. |
+|:::image type="icon" source="media/how-to-work-with-maps/layouts-icon-v2.png" border="false"::: | Layout options, including: <br />**Pin layout**. Drag devices in the map to a new location and use the Pin option to save those locations when you leave the map to use another option. <br />**Layout by connection**. View connections between devices. <br />**Layout by Purdue**. View the devices in the map according to Enterprise, supervisory and process control layers. <br /> |
+| :::image type="icon" source="media/how-to-work-with-maps/zoom-in-icon-v2.png" alt-text="Zoom In" border="false"::: :::image type="icon" source="media/how-to-work-with-maps/zoom-out-icon-v2.png" alt-text="Zoom Out" border="false"::: | Zoom in or out of the map. |
-1. Add devices to a group by repeating steps 1-3.
-## Map zoom views
+### Map zoom views
 Working with map views helps expedite forensics when analyzing large networks.
Three device detail views can be displayed:
- [Device type and connection view](#device-type-and-connection-view)
- - [Detailed view](#detailed-view)
### Bird's-eye view
This view provides an at-a-glance view of devices represented as follows:
- Black dots indicate devices with no alerts
+ :::image type="content" source="media/how-to-work-with-maps/colored-dots-v2.png" alt-text="Bird eye view" lightbox="media/how-to-work-with-maps/colored-dots-v2.png":::
### Device type and connection view
-This view presents devices represented as icons on the map in order to highlight devices with alerts, device types, and connected devices.
+This view presents devices represented as icons on the map.
- Devices with alerts are displayed with a red ring
This view presents devices represented as icons on the map in order to highlight
- Devices displayed as a star were marked as important
-The device type icon is shown with connected devices.
+Overall connections are displayed.
-### Detailed view
+**To view specific connections:**
-The detailed view presents devices and device labels and indicators with the following information:
+1. Select a device in the map.
+1. Review the specific connections to and from that device, which are displayed in blue. You'll also see connections that cross Purdue levels.
+ :::image type="content" source="media/how-to-work-with-maps/connections-purdue-level.png" alt-text="Detailed view" lightbox="media/how-to-work-with-maps/connections-purdue-level.png" :::
-### Control the zoom view
+### View IT subnets
-The map view displayed depends on the map zoom-level. Switching between the map views is done by changing the zoom levels.
+By default, IT devices are automatically aggregated by subnet, so that the map view is focused on OT and ICS networks. The presentation of the IT network elements is collapsed to a minimum, which reduces the total number of the devices presented on the map and provides a clear picture of the OT and ICS network elements.
+Each subnet is presented as a single entity on the Device map. You can expand a subnet to see its details, collapse it again, or hide it.
-### Enable simplified zoom views
+**To expand an IT subnet:**
+1. Right-click the icon on the map that represents the IT network and select **Expand Network**.
+1. A confirmation box appears, notifying you that the layout change cannot be redone.
+1. Select **OK**. The IT subnet elements appear on the map.
-Administrators who want security analysts and RO users to access Bird's-eye and device and type connection views, should enable the simplified view option.
+**To collapse an IT subnet:**
-To enable simplified map views:
+1. From the left pane, select **Devices**.
- - Select **System Settings** and then toggle the **Simplified Map View** option.
+2. Select the expanded subnet. The number in red indicates how many expanded IT subnets currently appear on the map.
+3. Select the subnet(s) that you want to collapse or select **Collapse All**. The selected subnet appears collapsed on the map.
-## Learn more about devices
+The collapse icon updates to show the current number of collapsed IT subnets.
-An extensive range of tools are available to learn more about devices form the Device Map:
+**To disable the option to collapse and expand IT subnets:**
+1. Select **Disable Display IT Network Groups**.
+1. Select **Confirm** in the dialog box that opens.
+This option is available to Administrator users.
-- [Device Labels and Indicators](#device-labels-and-indicators)
+> [!NOTE]
+ > For information on updating default OT IT networks, see [Configure subnets](how-to-control-what-traffic-is-monitored.md#configure-subnets).
-- [Device Quick Views](#device-quick-views)
+## Define custom groups
-- [View and Manage Device Properties](#view-and-manage-device-properties)
+In addition to viewing predefined groups, you can define custom groups. The groups appear in the Device map, Device inventory, and Data Mining Reports.
-- [View Device Types](#view-device-types)
+> [!NOTE]
+> You can also create groups from the Device Inventory.
-- [Backplane](#backplane-properties)
+**To create a group:**
-- [View a Timeline of Events for the Device](#view-a-timeline-of-events-for-the-device)
+1. Select **Create Custom Group** from the Device map.
-- [Analyze Programming Details and Changes](#analyze-programming-details-and-changes)
+1. In the Add custom group dialog box, add the name of the group. Use up to 30 characters.
-### Device labels and indicators
+1. Select an existing group(s) or choose specific device(s).
-The following labels and indicators may appear on devices on the map:
+1. Select **Submit**.
-| Device label | Description |
-|--|--|
-| :::image type="content" source="media/how-to-work-with-maps/host-v2.png" alt-text="IP host name"::: | IP address host name and IP address, or subnet addresses |
-| :::image type="content" source="media/how-to-work-with-maps/amount-alerts-v2.png" alt-text="Number of alerts"::: | Number of alerts associated with the device |
-| :::image type="icon" source="media/how-to-work-with-maps/type-v2.png" border="false"::: | Device type icon, for example storage, PLC or historian. |
-| :::image type="content" source="media/how-to-work-with-maps/grouped-v2.png" alt-text="devices grouped"::: | Number of devices grouped in a subnet in an IT network. In this example 8. |
-| :::image type="content" source="media/how-to-work-with-maps/not-authorized-v2.png" alt-text="device Learning period"::: | A device that was detected after the Learning period and was not authorized as a network device. |
-| Solid line | Logical connection between devices |
-| :::image type="content" source="media/how-to-work-with-maps/new-v2.png" alt-text="New device"::: | New device discovered after Learning is complete. |
+**To add devices to a custom group**:
+
+1. Right-click a device(s) on the map.
-### Device quick views
+1. Select **Add to custom group**.
-Access device properties and connections from the map.
+1. Select an existing group(s) or choose specific device(s).
-To open the quick properties menu:
+1. Select **Submit**.
- - Select the quick properties menu :::image type="content" source="media/how-to-work-with-maps/properties.png" alt-text="quick properties menu":::.
+## Learn more about devices
-#### Quick device properties
+An extensive range of tools is available to learn more about devices from the Device map, including:
-Select a device or multiple devices while the Quick Properties screen is open to see the highlights of those devices:
+- [Device labels and indicators](#device-labels-and-indicators)
+- [Device details](#device-details)
-#### Quick connection properties
+- [Device types](#device-types)
-Select a connection while the Quick Properties screen is open to see the protocols that are utilized in this connection and when they were last seen:
+- [Backplane properties](#backplane-properties)
-## View and manage device properties
+### Device labels and indicators
-You can view device proprieties for each device displayed on the map. For example, the device name, type or OS, or the firmware or vendor.
+The following labels and indicators may appear on devices on the map:
+| Device label | Description |
+|--|--|
+| :::image type="content" source="media/how-to-work-with-maps/host-v2.png" alt-text="IP host name"::: | IP address host name and IP address, or subnet addresses |
+| :::image type="content" source="media/how-to-work-with-maps/amount-alerts-v2.png" alt-text="Number of alerts"::: | Number of alerts associated with the device |
+| :::image type="icon" source="media/how-to-work-with-maps/type-v2.png" border="false"::: | Device type icon, for example storage, PLC or historian. |
+| :::image type="content" source="media/how-to-work-with-maps/grouped-v2.png" alt-text="devices grouped"::: | Number of devices grouped in a subnet in an IT network. In this example 8. |
+| :::image type="content" source="media/how-to-work-with-maps/not-authorized-v2.png" alt-text="device Learning period"::: | A device that was detected after the Learning period and was not authorized as a network device. |
+| Solid line | Logical connection between devices |
+| :::image type="content" source="media/how-to-work-with-maps/new-v2.png" alt-text="New device"::: | New device discovered after Learning is complete. |
-The following information can be updated manually. Information manually entered will override information discovered by Defender for IoT.
+### Device details and contextual information
- - Name
+You can access detailed and contextual information about a device from the map, for example:
+- Device properties, such as the device type, protocols detected, or Purdue level associated with the device.
+- Backplane properties.
+- Contextual information such as open alerts associated with the device.
- - Type
+**To view details:**
+1. Right-click a device on the map.
+1. Select **View properties**.
+1. Navigate to the information you need.
- - OS
+ :::image type="content" source="media/how-to-work-with-maps/device-details-from-map.png" alt-text="Device details shown for device selected in map":::
- - Purdue layer
+#### Device details
- - Description
+This section describes device details.
| Item | Description |
|--|--|
-| Basic Information | The basic information needed. |
| Name | The device name. <br /> By default, the sensor discovers the device name as it is defined in the network. For example, a name defined in the DNS server. <br /> If no such names were defined, the device IP address appears in this field. <br /> You can change a device name manually. Give your devices meaningful names that reflect their functionality. |
-| Type | The device type detected by the sensor. <br /> For more information, see [View device types](#view-device-types). |
+| Authorized status | Indicates if the device is authorized or not. During the Learning period, all the devices discovered in the network are identified as Authorized. When a device is discovered after the Learning period, it appears as Unauthorized by default. You can change this definition manually. For information on this status and manually authorizing and unauthorizing, see [Authorize and unauthorize devices](#authorize-and-unauthorize-devices). |
+| Last seen | The last time the device was detected. |
+| Alert | The number of open alerts associated with the device. |
+| Type | The device type detected by the sensor. |
| Vendor | The device vendor. This is determined by the leading characters of the device MAC address. This field is read-only. |
| Operating System | The device OS detected by the sensor. |
-| Purdue Layer | The Purdue layer identified by the sensor for this device, including: <br /> - Automatic <br /> - Process Control <br /> - Supervisory <br /> - Enterprise |
+| Location | The Purdue layer identified by the sensor for this device, including: <br /> - Automatic <br /> - Process Control <br /> - Supervisory <br /> - Enterprise |
| Description | A free text field. <br /> Add more information about the device. |
| Attributes | Additional information discovered about the device. For example, view the PLC Run and Key state, the secure status of the PLC, or information on when the state changed. <br /> The information is read-only and cannot be updated from the Attributes section. |
-| Settings | You can manually change device settings to prevent false positives: <br /> - **Authorized Device**: During the learning period, all the devices discovered in the network are identified as authorized devices. When a device is discovered after the learning period, it appears as an unauthorized device by default. You can change this definition manually. <br /> - **Known as Scanner**: Enable this option if you know that this device is known as scanner and there is no need to alert you about it. <br /> - **Programming Device**: Enable this option if you know that this device is known as a programming device and is used to make programming changes. Identifying it as a programming device will prevent alerts for programming changes originating from this asset. |
-| Custom Groups | The custom groups in the device map in which this device participates. |
-| State | The security and the authorization status of the device: <br /> - The status is `Secured` when there are no alerts <br /> - When there are alerts about the device, the number of alerts is displayed <br /> - The status `Unauthorized` is displayed for devices that were added to the network after the learning period. You can manually define the device as `Authorized Device` in the settings <br /> - In case the address of this device is defined as a dynamic address, `DHCP` is added to the status. |
--
-| Network | Description |
-|--|--|
-| Interfaces | The device interfaces. A RO field. |
+| Scanner or Programming device | **Scanner**: Enable this option if you know that this device is a scanner and there is no need to alert you about it. <br /> **Programming Device**: Enable this option if you know that this device is a programming device and is used to make programming changes. Identifying it as a programming device will prevent alerts for programming changes originating from this asset. |
+| Network Interfaces | The device interfaces. A read-only field. |
| Protocols | The protocols used by the device. A read-only field. |
| Firmware | If Backplane information is available, firmware information will not be displayed. |
| Address | The device IP address. |
The following information can be updated manually. Information manually entered
| Model | The device model number. |
| Firmware Version | The firmware version number. |
-To view the device information:
-
-1. Select **Devices** on the side menu.
-
-2. Right-click a device and select **View Properties**. The Device Properties window is displayed.
-
-3. Select on the required alert at the bottom of this window to view detailed information about alerts for this device.
-
-### View device types
-
-The Device Type is automatically identified by the sensor during the device discovery process. You can change the type manually.
--
-The following table presents all the types in the system:
-
-| Category | Device Type |
-|--|--|
-| ICS | Engineering Station <br /> PLC <br />Historian <br />HMI <br />IED <br />DCS Controller <br />RTU <br />Industrial Packaging System <br />Industrial Scale <br />Industrial Robot <br />Slot <br />Meter <br />Variable Frequency Drive <br />Robot Controller <br />Servo Drive <br />Pneumatic Device <br />Marquee |
-| IT | Domain Controller <br />DB Server <br />Workstation <br />Server <br />Terminal Station <br />Storage <br />Smart Phone <br />Tablet <br />Backup Server |
-| IoT | IP Camera <br />Printer <br />Punch Clock <br />ATM <br />Smart TV <br />Game console <br />DVR <br />Door Control Panel <br />HVAC <br />Thermostat <br />Fire Alarm <br />Smart Light <br />Smart Switch <br />Fire Detector <br />IP Telephone <br />Alarm System <br />Alarm Siren <br />Motion Detector <br />Elevator <br />Humidity Sensor <br />Barcode Scanner <br />Uninterruptible Power Supply <br />People Counter System <br />Intercom <br />Turnstile |
-| Network | Wireless Access Point <br />Router <br />Switch <br />Firewall <br />VPN Gateway <br />NTP Server <br />Wifi Pineapple <br />Physical Location <br />I/O Adapter <br /> Protocol Converter |
-
-To view the device information:
+#### Contextual information
-1. Select **Devices** on the side menu.
+ View contextual information about the device.
-2. Right-click a device and select **View Properties**. The Device Properties window is displayed.
+**To view:**
+1. Select **Map View** to see device connections to other devices.
+1. Select **Alerts** to see details about alerts associated with the device.
+1. Select **Event Timeline** to review events that occurred around the time of the detection.
-3. Select on the required alert to view detailed information about alerts for this device.
-
-### Backplane properties
+#### Backplane properties
If a PLC contains multiple modules separated into racks and slots, the characteristics might vary between the module cards. For example, if the IP address and the MAC address are the same, the firmware might be different.
-You can use the Backplane option to review multiple controllers/cards and their nested devices as one entity with a variety of definitions. Each slot in the Backplane view represents the underlying devices ΓÇô the devices that were discovered behind it.
+You can use the Backplane option to review multiple controllers/cards and their nested devices as one entity with various definitions. Each slot in the Backplane view represents the underlying devices: the devices that were discovered behind it.
:::image type="content" source="media/how-to-work-with-maps/backplane-image-v2.png" alt-text="Backplane Properties":::
To view the underlying devices behind the slot, select **VIEW ON MAP**. The slot
:::image type="content" source="media/how-to-work-with-maps/map-appearance-v2.png" alt-text="VIEW ON MAP":::
-## View a timeline of events for the device
-
-View a timeline of events associated with a device.
-
-To view the timeline:
-
-1. Right-click a device from the map.
-
-2. Select **Show Events**. The Event Timeline window opens with information about events detected for the selected device.
-
-See [Event Timeline](#event-timeline) for details.
-
-## Analyze programming details and changes
-
-Enhance forensics by displaying programming events carried out on your network devices and analyzing code changes. This information helps you discover suspicious programming activity, for example:
-
- - Human error: An engineer is programming the wrong device.
-
- - Corrupted programming automation: Programming is erroneously carried out because of automation failure.
-
- - Hacked systems: Unauthorized users logged into a programming device.
-
-You can display a programmed device and scroll through various programming changes carried out on it by other devices.
-
-View code that was added, changed, removed, or reloaded by the programming device. Search for programming changes based on file types, dates, or times of interest.
-
-### When to review programming activity
-
-You may need to review programming activity:
-
- - After viewing an alert regarding unauthorized programming
-
- - After a planned update to controllers
-
- - When a process or machine is not working correctly (to see who carried out the last update and when)
--
-Other options let you:
-
- - Mark events of interest with a star.
-
- - Download a *.txt file with the current code.
-
-### About authorized vs unauthorized programming events
-
-Unauthorized programming events are carried out by devices that have not been learned or manually defined as programming devices. Authorized programming events are carried out by devices that were resolved or manually defined as programming devices.
-
-The Programming Analysis window displays both authorized and unauthorized programming events.
-
-### Accessing programming details and changes
-
-Access the Programming Analysis window from the:
--- [Event Timeline](#event-timeline)--- [Unauthorized Programming Alerts](#unauthorized-programming-alerts)-
-### Event timeline
-
-Use the event timeline to display a timeline of events in which programming changes were detected.
--
-### Unauthorized programming alerts
-
-Alerts are triggered when unauthorized programming devices carry out programming activities.
--
-> [!NOTE]
-> You can also view basic programming information in the Device Properties window and Device Inventory.
-
-### Working in the programming timeline window
-
-This section describes how to view programming files and compare versions. Search for specific files sent to a programmed device. Search for files based on:
-
- - Date
-
- - File type
--
-|Programming timeline type | Description |
-|--|--|
-| Programmed Device | Provides details about the device that was programmed, including the hostname and file. |
-| Recent Events | Displays the 50 most recent events detected by the sensor. <br />To highlight an event, hover over it and click the star. :::image type="icon" source="media/how-to-work-with-maps/star.png" border="false"::: <br /> The last 50 events can be viewed. |
-| Files | Displays the files detected for the chosen date and the file size on the programmed device. <br /> By default, the maximum number of files available for display per device is 300. <br /> By default, the maximum file size for each file is 15 MB. |
-| File status :::image type="icon" source="media/how-to-work-with-maps/status-v2.png" border="false"::: | File labels indicate the status of the file on the device, including: <br /> **Added**: the file was added to the endpoint on the date or time selected. <br /> **Updated**: The file was updated on the date or time selected. <br /> **Deleted**: This file was removed. <br /> **No label**: The file was not changed. |
-| Programming Device | The device that made the programming change. Multiple devices may have carried out programming changes on one programmed device. The hostname, date, or time of change and logged in user are displayed. |
-| :::image type="icon" source="media/how-to-work-with-maps/current.png" border="false"::: | Displays the current file installed on the programmed device. |
-| :::image type="icon" source="media/how-to-work-with-maps/download-text.png" border="false"::: | Download a text file of the code displayed. |
-| :::image type="icon" source="media/how-to-work-with-maps/compare.png" border="false"::: | Compare the current file with the file detected on a selected date. |
-
-### Choose a file to review
-
-This section describes how to choose a file to review.
-
-To choose a file to review:
-
-1. Select an event from the **Recent Events** pane
-2. Select a file form the File pane. The file appears in the Current pane.
--
-### Compare files
-
-This section describes how to compare programming files.
-
-To compare:
-
-1. Select an event from the Recent Events pane.
-
-2. Select a file from the File pane. The file appears in the Current pane. You can compare this file to other files.
-
-3. Select the compare indicator.
+## Manage device information from the map
- :::image type="content" source="media/how-to-work-with-maps/compare.png" alt-text="Compare indicator":::
+Under certain circumstances, you may need to update device information provided by Defender for IoT. The following options are available:
- The window displays all dates the selected file was detected on the programmed device. The file may have been updated on the programmed device by multiple programming devices.
+- [Update device properties](#update-device-properties)
+- [Delete devices](#delete-devices)
+- [Merge devices](#merge-devices)
+- [Authorize and unauthorize devices](#authorize-and-unauthorize-devices)
+- [Mark devices as important](#mark-devices-as-important)
- The number of differences detected appears in the upper right-hand corner of the window. You may need to scroll down to view differences.
- :::image type="content" source="media/how-to-work-with-maps/scroll.png" alt-text="scroll down to your selection":::
+### Update device properties
- The number is calculated by adjacent lines of changed text. For example, if eight consecutive lines of code were changed (deleted, updated, or added) this will be calculated as one difference.
+Certain device properties can be updated manually. Information manually entered will override information discovered by Defender for IoT.
- :::image type="content" source="media/how-to-work-with-maps/program-timeline.png" alt-text="Your programing timeline view.":::
+**To update properties:**
+1. Right-click a device from the map.
+1. Select **View properties**.
+1. Select **Edit properties**.
-4. Select a date. The file detected on the selected date appears in the window.
+ :::image type="content" source="media/how-to-work-with-maps/edit-config.png" alt-text="Dialog that allows user to edit the device properties":::
+1. Update any of the following:
-5. The file selected from the Recent Events/Files pane always appears on the right.
+ - Authorized status
+ - Device name
+ - Device type. For a list of types, see [Device types](#device-types).
+ - OS
+ - Purdue layer
+ - Description
+
-### Device programming information: Other locations
+#### Device types
-In addition to reviewing details in the Programming Timeline, you can access programming information in the Device Properties window and the Device Inventory.
+This table lists device types you can manually assign to a device.
-| Device type | Description |
+| Category | Device Type |
|--|--|
-| Device properties | The device properties window provides information on the last programming event detected on the device\. :::image type="content" source="media/how-to-work-with-maps/information-from-device-v2.png" alt-text="Your device's properties"::: |
-| The device inventory | The device inventory indicates if the device is a programming device\. :::image type="content" source="media/how-to-work-with-maps/inventory-v2.png" alt-text="The inventory of devices"::: |
-
-## Manage device information from the map
-
-The sensor does not update or impact devices directly on the network. Changes made here only impact how analyzes the device.
+| ICS | Engineering Station <br /> PLC <br />Historian <br />HMI <br />IED <br />DCS Controller <br />RTU <br />Industrial Packaging System <br />Industrial Scale <br />Industrial Robot <br />Slot <br />Meter <br />Variable Frequency Drive <br />Robot Controller <br />Servo Drive <br />Pneumatic Device <br />Marquee |
+| IT | Domain Controller <br />DB Server <br />Workstation <br />Server <br />Terminal Station <br />Storage <br />Smart Phone <br />Tablet <br />Backup Server |
+| IoT | IP Camera <br />Printer <br />Punch Clock <br />ATM <br />Smart TV <br />Game console <br />DVR <br />Door Control Panel <br />HVAC <br />Thermostat <br />Fire Alarm <br />Smart Light <br />Smart Switch <br />Fire Detector <br />IP Telephone <br />Alarm System <br />Alarm Siren <br />Motion Detector <br />Elevator <br />Humidity Sensor <br />Barcode Scanner <br />Uninterruptible Power Supply <br />People Counter System <br />Intercom <br />Turnstile |
+| Network | Wireless Access Point <br />Router <br />Switch <br />Firewall <br />VPN Gateway <br />NTP Server <br />Wifi Pineapple <br />Physical Location <br />I/O Adapter <br /> Protocol Converter |
### Delete devices
The device will be removed from the Device Map, Device Inventory, and Data Minin
The device must be inactive for at least 10 minutes to delete it.
-To delete a device from the device map:
-
-1. Select **Devices** on the side menu.
+**To delete a device from the device map:**
-2. Right-click a device and select **Delete**.
+1. Right-click a device on the map and select **Delete**.
### Merge devices
When merging, you instruct the sensor to combine the device properties of two de
For example, if you merge two devices, each with an IP address, both IP addresses will appear as separate interfaces in the Device Properties window. You can only merge authorized devices. - The event timeline presents the merge event. :::image type="content" source="media/how-to-work-with-maps/events-time.png" alt-text="The event timeline with merged events.":::
-You cannot undo a device merge. If you mistakenly merged two devices, delete the device and wait for The sensor to rediscover both.
+You cannot undo a device merge. If you mistakenly merged two devices, delete the device and wait for the sensor to rediscover both.
-To merge devices:
+**To merge devices:**
1. Select two devices (shift-click), and then right-click one of them.
To merge devices:
### Authorize and unauthorize devices
-During the Learning period, all the devices discovered in the network are identified as authorized devices. The **Authorized** label does not appear on these devices in the Device Map.
+During the Learning period, all the devices discovered in the network are identified as authorized devices. The **Authorized** label does not appear on these devices in the Device map.
When a device is discovered after the Learning period, it appears as an unauthorized device. In addition to seeing unauthorized devices in the map, you can also see them in the Device Inventory.
New devices detected after the Learning period will appear with a `New` and `Una
If you move a device on the map or manually change the device properties, the `New` label is removed from the device icon.
-#### Unauthorized devices - attack vectors and risk assessment reports
+#### Unauthorized devices - Attack Vectors and Risk Assessment reports
Unauthorized devices are included in Risk Assessment reports and Attack Vectors reports.

- **Attack Vector Reports:** Devices marked as unauthorized are resolved in the Attack Vector as suspected rogue devices that might be a threat to the network.
- :::image type="content" source="media/how-to-work-with-maps/attack-vector-reports.png" alt-text="Vew your attack vector reports":::
+ :::image type="content" source="media/how-to-work-with-maps/attack-vector-reports.png" alt-text="View your attack vector reports.":::
-- **Risk Assessment Reports:** Devices marked as unauthorized are:
+- **Risk Assessment Reports:** Devices marked as unauthorized are identified in Risk Assessment reports.
- - Identified in Risk Assessment Reports
+ :::image type="content" source="media/how-to-work-with-maps/unauthorized-risk-assessment-report.png" alt-text="A Risk Assessment report showing an unauthorized device":::
-To authorize or unauthorize devices manually:
+**To authorize or unauthorize devices manually:**
-1. Right-click the device on the map and select **Unauthorize**
+1. Right-click the device on the map and select **Authorize** or **Unauthorize**.
### Mark devices as important
You can mark significant network devices as important, for example business crit
:::image type="icon" source="media/how-to-work-with-maps/star-one.png" border="false"::: :::image type="icon" source="media/how-to-work-with-maps/star-two.png" border="false"::: :::image type="icon" source="media/how-to-work-with-maps/star-3.png" border="false":::
-### Important devices - attack vectors and risk assessment reports
+**To mark a device as Important:**
+
+1. Right-click the device on the map and select **Mark as important**.
+
+#### Important devices - Attack Vectors and Risk Assessment reports
Important devices are calculated when generating Risk Assessment reports and Attack Vectors reports.

- **Attack Vector reports:** Devices marked as important are resolved in the Attack Vector as Attack Targets.
- **Risk Assessment reports:** Devices marked as important are calculated when providing the security score in the Risk Assessment report.
+
+#### Important devices - Defender for IoT on the Azure portal
-## Generate Activity reports from the map
-
-Generate an activity report for a selected device over the 1, 6, 12 or 24 hours. The following information is available:
-
- - Category: Basic detection information based on traffic scenarios.
-
- - Source and destination devices
-
- - Data: Additional information defected.
-
- - The time and date last seen.
-
-You can save the report as a Microsoft Excel or Word file.
--
-To generate an activity report for a device:
-
-1. Right-click a device from the Map.
-
-2. Select an Activity Report.
-
- :::image type="content" source="media/how-to-work-with-maps/activity-report.png" alt-text="View a report of your activity.":::
-
-## Generate Attack Vector reports from the map
-
-Simulate an Attack Vector report to learn if a device on the map you select is a vulnerable attack target.
-
-Attack Vector reports provide a graphical representation of a vulnerability chain of exploitable devices. These vulnerabilities can give an attacker access to key network devices. The Attack Vector simulator calculates attack vectors in real time and analyzes all attack vectors per a specific target.
-
-To view a device in an Attack Vector reports:
-
-1. Right-click a device from the map.
-
-2. Select **Simulate Attack Vectors**. The Attack Vector dialog box opens with the device you select as the attack target.
-
- :::image type="content" source="media/how-to-work-with-maps/simulation.png" alt-text="Add attack vector simulation":::
-
-3. Add the remaining parameters to the dialog box and select **Add Simulation**.
-
-## Export device information from the map
-
-Export the following device information from the Map.
-
- - Device details (Microsoft Excel)
-
- - A device summary (Microsoft Excel)
-
- - A word file with groups (Microsoft Word)
-
-To export:
-
-1. Select the Export icon from the Map.
+Devices you mark as important on your sensor are also marked as important in the Device inventory on the Defender for IoT portal on Azure.
-1. Select an export option.
## See also
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
The following feature enhancements are available with version 10.5.3 of Microsof
## October 2021
-The following feature enhancements are available with version 10.5.2 of Azure Defender for IoT.
+The following feature enhancements are available with version 10.5.2 of Microsoft Defender for IoT.
- [PLC operating mode detections (Public Preview)](#plc-operating-mode-detections-public-preview)
If the Key state is detected as Program or the Run state is detected as either R
:::image type="content" source="media/release-notes/device-inventory-plc.png" alt-text="Device inventory showing PLC operating mode.":::
-
-- View PLC secure status and last change information per PLC in the Attributes section of the Device Properties screen. If the Key state is detected as Program or the Run state is detected as either Remote or Program the PLC is defined by Defender for IoT as *unsecure*. The Device Properties PLC Secured option will read false. For more information, see [View and manage device properties](how-to-work-with-the-sensor-device-map.md#view-and-manage-device-properties).
+- View PLC secure status and last change information per PLC in the Attributes section of the Device Properties screen. If the Key state is detected as Program, or the Run state is detected as either Remote or Program, the PLC is defined by Defender for IoT as *unsecure*. The Device Properties PLC Secured option will read false.
:::image type="content" source="media/release-notes/attributes-plc.png" alt-text="Attributes screen showing PLC information.":::
New threat intelligence packages can now be automatically pushed to cloud connec
Working with automatic updates helps reduce operational efforts and ensure greater security. Enable automatic updating by onboarding your cloud connected sensor on the Defender for IoT portal with the **Automatic Threat Intelligence Updates** toggle turned on.
-If you would like to take a more conservative approach to updating your threat intelligence data, you can manually push packages from the Azure Defender for IoT portal to cloud connected sensors only when you feel it is required.
+If you would like to take a more conservative approach to updating your threat intelligence data, you can manually push packages from the Microsoft Defender for IoT portal to cloud connected sensors only when you feel it is required.
This gives you the ability to control when a package is installed, without the need to download and then upload it to your sensors. Manually push updates to sensors from the Defender for IoT **Sites and Sensors** page. You can also review the following information about threat intelligence packages:
For Fresh Installations:
#### Password recovery
-Sensor and on-premises management console Administrative users can now recover passwords from the Azure Defender for IoT portal. Previously password recovery required intervention by the support team.
+Sensor and on-premises management console Administrative users can now recover passwords from the Microsoft Defender for IoT portal. Previously password recovery required intervention by the support team.
### Onboarding

#### On-premises management console - committed devices

Following initial sign-in to the on-premises management console, users are now required to upload an activation file. The file contains the aggregate number of devices to be monitored on the organizational network. This number is referred to as the number of committed devices.
-Committed devices are defined during the onboarding process on the Azure Defender for IoT portal, where the activation file is generated.
+Committed devices are defined during the onboarding process on the Microsoft Defender for IoT portal, where the activation file is generated.
First-time users and users upgrading are required to upload the activation file. After initial activation, the number of devices detected on the network might exceed the number of committed devices. This event might happen, for example, if you connect more sensors to the management console. If there is a discrepancy between the number of detected devices and the number of committed devices, a warning appears in the management console. If this event occurs, you should upload a new activation file.

#### Pricing page options
-Pricing page lets you onboard new subscriptions to Azure Defender for IoT and define committed devices in your network.
+Pricing page lets you onboard new subscriptions to Microsoft Defender for IoT and define committed devices in your network.
Additionally, the Pricing page now lets you manage existing subscriptions associated with a sensor and update device commitment.

#### View and manage onboarded sensors
A new Site and Sensors portal page lets you:
#### Azure Sentinel new connector page
-The Azure Defender for IoT data connector page in Azure Sentinel has been redesigned. The data connector is now based on subscriptions rather than IoT Hubs; allowing customers to better manage their configuration connection to Azure Sentinel.
+The Microsoft Defender for IoT data connector page in Azure Sentinel has been redesigned. The data connector is now based on subscriptions rather than IoT Hubs; allowing customers to better manage their configuration connection to Azure Sentinel.
#### Azure portal permission updates
defender-for-iot Resources Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-frequently-asked-questions.md
For information on how to activate your on-premises management console, see [Act
## How to change the network configuration
-You can see how to [update your sensor network configuration before activation](how-to-activate-and-set-up-your-sensor.md#update-sensor-network-configuration-before-activation).
+You can update your sensor network configuration before or after activation. For more information, see [Activate and set up your sensor](how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor).
You can also [update the sensor network configuration](how-to-manage-individual-sensors.md#update-the-sensor-network-configuration) after activation.
You can follow these links, if [The appliance isn't responding](how-to-install-s
To learn more about how to get started with Defender for IoT, see the following articles: - Read the Defender for IoT [overview](overview.md)-- [Get started with Defender for IoT](getting-started.md)
+- [Get started with Defender for IoT](getting-started.md)
defender-for-iot Resources Manage Proprietary Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-manage-proprietary-protocols.md
The Horizon console Overview window provides basic information about the plugins
| Application | The name of the plugin you uploaded. | |--|--|
-| :::image type="icon" source="media/how-to-manage-proprietary-protocols/toggle-icon.png" border="false"::: | Toggle the plugin on or off. The sensor will not handle protocol traffic defined in the plugin when you toggle off the plugin. |
+| :::image type="icon" source="media/how-to-manage-proprietary-protocols/toggle-icon.png" border="false"::: | Toggle the plugin on or off. The sensor will not handle protocol traffic defined in the plugin when you toggle off the plugin. |
| Time | The time the data was last analyzed. Updated every five seconds. |
| PPS | The number of packets per second. |
| Bandwidth | The average bandwidth detected within the last five seconds. |
| Malforms | Malformed validations are used after the protocol has been positively validated. If there is a failure to process the packets based on the protocol, a failure response is returned.<br/> <br />This column indicates the number of malform errors in the past five seconds. |
| Warnings | Packets match the structure and specification but there is unexpected behavior based on the plugin warning configuration. |
-| Errors | The number of packets that failed basic protocol validations that the packet matches the protocol definitions.  The Number displayed here indicates that n umber of errors detected in the past five seconds. |
+| Errors | The number of packets that failed basic protocol validations that check that the packet matches the protocol definitions. The number displayed here indicates the number of errors detected in the past five seconds. |
| :::image type="icon" source="media/how-to-manage-proprietary-protocols/monitor-icon.png" border="false"::: | Review details about malform and warnings detected for your plugin. | ### Plugin performance details
Horizon dissection information is available for export in the dissection details
Enhance alert management in your enterprise by triggering custom alerts for any protocol based on Horizon framework traffic dissectors.
-These alerts can be used to communicate information:  
+These alerts can be used to communicate information:
- About traffic detections based on protocols and underlying protocols in a proprietary Horizon plugin.
Using custom, conditioned-based alert triggering and messaging helps pinpoint sp
### Working with Horizon alerts
-Alerts generated by Horizon custom alert rules are displayed in the sensor and management console Alerts window and in integrated partner systems when using Forwarding Rules. 
+Alerts generated by Horizon custom alert rules are displayed in the sensor and management console Alerts window and in integrated partner systems when using Forwarding Rules.
Alerts generated by Horizon can be acknowledged or muted. The learn option is not available for custom alerts because the alert events cannot be learned into the policy baseline.
When you create multiple rules, alerts are triggered when any rule condition or
## See also
-[View information provided in alerts](how-to-view-information-provided-in-alerts.md)
+[Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules)
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
An Azure subscription is required for this tutorial.
If you don't already have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-If you already have a subscription that is onboarded for Azure Defender for IoT for OT environments, you will need to create a new subscription. To learn how to onboard a subscription, see [Onboard a subscription](how-to-manage-subscriptions.md#onboard-a-subscription).
+If you already have a subscription that is onboarded for Microsoft Defender for IoT for OT environments, you will need to create a new subscription. To learn how to onboard a subscription, see [Onboard a subscription](how-to-manage-subscriptions.md#onboard-a-subscription).
-There is a minimum security level needed to access different parts of Azure Defender for IoT. You must have a level of Security Owner, or a Subscription contributor of the subscription to onboard a subscription, and commit to a pricing. Security Reader level permissions to access the Defender for IoT user interface.
+There is a minimum security level needed to access different parts of Microsoft Defender for IoT. You must have Security Owner or Subscription contributor permissions on the subscription to onboard it and commit to a pricing plan, and Security Reader permissions to access the Defender for IoT user interface.
-The following table describes user access permissions to Azure Defender for IoT portal tools:
+The following table describes user access permissions to Microsoft Defender for IoT portal tools:
| Permission | Security reader | Security admin | Subscription contributor | Subscription owner |
|--|--|--|--|--|
sudo apt purge -y microsoft-eiot-sensor
## Next steps
-[Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md#manage-your-iot-devices-with-the-device-inventory-for-organizations)
+[Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md#manage-your-iot-devices-with-the-device-inventory-for-organizations)
devtest-labs Automate Add Lab User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/automate-add-lab-user.md
Title: Automate adding a lab user
description: This article shows you how to automate adding a user to a lab in Azure DevTest Labs using Azure Resource Manager templates, PowerShell, and CLI. Last updated 06/26/2020 -+ # Automate adding a lab user to a lab in Azure DevTest Labs
devtest-labs Devtest Lab Guidance Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-guidance-get-started.md
DevTest Labs is well-suited for transient activities like workshops, hands-on la
Lab owners can manage costs by deleting labs and VMs when they're no longer needed.

-- Set [expiration dates](devtest-lab-set-lab-policy.md#set-expiration-date) on VMs.
+- Set [expiration dates](devtest-lab-use-resource-manager-template.md#set-vm-expiration-date) on VMs.
- [Delete labs](devtest-lab-delete-lab-vm.md#delete-a-lab) and all related resources.
- [Delete all lab VMs by running a single PowerShell script](./devtest-lab-faq.yml#how-do-i-automate-the-process-of-deleting-all-the-vms-in-my-lab-).
digital-twins How To Route With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-route-with-managed-identity.md
Last updated 6/15/2021 -+ # Optional fields. Don't forget to remove # if you need a field. #
dns Dns Reverse Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-for-azure-services.md
na
Last updated 04/29/2021 -+ # Configure reverse DNS for services hosted in Azure
expressroute About Public Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-public-peering.md
Last updated 12/16/2019 --+ # Create and manage ExpressRoute public peering
firewall Integrate With Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md
Last updated 02/10/2022 -+ # Scale SNAT ports with Azure Virtual Network NAT
firewall Snat Private Range https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/snat-private-range.md
Last updated 04/14/2021 -+ # Azure Firewall SNAT private IP address ranges
hdinsight Hdinsight Autoscale Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-autoscale-clusters.md
It can take 10 to 20 minutes for the overall scaling operation to complete. When
During the cluster scaling down process, Autoscale decommissions the nodes to meet the target size. In case of load-based autoscaling, if tasks are running on those nodes, Autoscale waits until the tasks are completed for Spark and Hadoop clusters. Since each worker node also serves a role in HDFS, the temporary data is shifted to the remaining worker nodes. Make sure there's enough space on the remaining nodes to host all temporary data.
-In case of schedule-based Autoscale scale-down, graceful decommission is not supported. This can cause job failures during a scale down operation, and it is recommended to plan schedules based on the anticipated job schedule patterns to include sufficient time for the ongoing jobs to conclude. You can set the schedules looking at historical spread of completion times so as to avoid job failures.
+> [!Note]
+> In case of schedule-based Autoscale scale-down, graceful decommission is not supported. This can cause job failures during a scale down operation, and it is recommended to plan schedules based on the anticipated job schedule patterns to include sufficient time for the ongoing jobs to conclude. You can set the schedules looking at historical spread of completion times so as to avoid job failures.
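The recommendation above—setting scale-down schedules from the historical spread of job completion times—can be sketched as follows. The `pick_scale_down_hour` helper is hypothetical (not an HDInsight API); it picks an hour later than roughly 95% of observed completions, plus headroom:

```python
# Hypothetical helper (not an HDInsight API): choose a schedule-based
# scale-down hour from the historical spread of job completion times,
# leaving headroom so ongoing jobs can conclude before nodes are removed.
import statistics

def pick_scale_down_hour(completion_hours, headroom_hours=1.0):
    """Return an hour-of-day later than ~95% of observed job completions."""
    p95 = statistics.quantiles(completion_hours, n=20)[-1]  # ~95th percentile
    return min(p95 + headroom_hours, 24.0)

# Observed completion times (hour of day) for a recurring nightly job.
history = [17.2, 17.8, 18.1, 18.4, 18.9, 19.3, 19.5, 20.1]
print(pick_scale_down_hour(history))
```

A high percentile rather than the mean is the safer choice here, because a single long-running job that overlaps the scale-down window is exactly the failure mode the note warns about.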
### Configure schedule-based Autoscale based on usage pattern
hdinsight Hdinsight Hadoop Create Linux Clusters Arm Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-arm-templates.md
Title: Create Apache Hadoop clusters using templates - Azure HDInsight
description: Learn how to create clusters for HDInsight by using Resource Manager templates -+ Last updated 04/07/2020
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
Title: Use Azure Monitor logs to monitor Azure HDInsight clusters
description: Learn how to use Azure Monitor logs to monitor jobs running in an HDInsight cluster. -+ Last updated 09/21/2021
healthcare-apis Iot Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-fhir-portal-quickstart.md
Preserve this connection string to be used at a later step.
Azure offers an extensive suite of IoT products to connect and manage your IoT devices. You can build your own solution based on PaaS using Azure IoT Hub, or start with a managed IoT apps platform with Azure IoT Central. For this tutorial, we'll use Azure IoT Central, which has industry-focused solution templates to help you get started.
-Deploy the [Continuous patient monitoring application template](../../iot-central/healthcare/tutorial-continuous-patient-monitoring.md#create-continuous-patient-monitoring-application). This template includes two simulated devices producing real-time data to help you get started: **Smart Vitals Patch** and **Smart Knee Brace**.
+Deploy the [Continuous patient monitoring application template](../../iot-central/healthcare/tutorial-continuous-patient-monitoring.md#create-application). This template includes two simulated devices producing real-time data to help you get started: **Smart Vitals Patch** and **Smart Knee Brace**.
> [!NOTE]
> Whenever your real devices are ready, you can use the same IoT Central application to [onboard your devices](../../iot-central/core/howto-set-up-template.md) and replace device simulators. Your device data will automatically start flowing to FHIR as well.
iot-central Howto Query With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-query-with-rest-api.md
The `urn:modelDefinition:fupmoiu28b:ymju9efv9` value in the `FROM` clause is a *
:::image type="content" source="media/howto-query-with-rest-api/show-device-template-id.png" alt-text="Screenshot that shows how to find the device template ID in the page URL.":::
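A query like this can be submitted to the IoT Central REST query API. The following is a minimal sketch, assuming an app subdomain of `myapp`, a `2022-07-31` API version, and an API token created in the **Administration** section; adjust for your application:

```python
# Hypothetical sketch: submit an IoT Central query via the REST API.
# The subdomain, token, and api-version are assumptions for illustration.
import json
import urllib.request

def build_query_request(app_subdomain: str, api_token: str, query: str) -> urllib.request.Request:
    """Build a POST request for the IoT Central /api/query endpoint."""
    url = f"https://{app_subdomain}.azureiotcentral.com/api/query?api-version=2022-07-31"
    body = json.dumps({"query": query}).encode("utf-8")
    headers = {
        "Authorization": api_token,  # an API token from the Administration section
        "Content-Type": "application/json",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

req = build_query_request(
    "myapp",
    "SharedAccessSignature sr=...",
    "SELECT $id, $ts, temperature FROM urn:modelDefinition:fupmoiu28b:ymju9efv9",
)
# urllib.request.urlopen(req) would send the query; the JSON response body
# contains the matching telemetry rows (not executed here).
```

The query string itself is the same one shown in the article; only the transport details around it are sketched.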
-The response to this request looks like the following example:
+The response includes telemetry from multiple devices that share the same device template. The response to this request looks like the following example:
```json
{
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
Title: Azure IoT Central application management guide
-description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to manage your IoT Central application. Application management includes users, organization, and security.
+ Title: Azure IoT Central application administration guide
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to administer your IoT Central application. Application administration includes users, organization, and security.
Last updated 01/04/2022
# This article applies to administrators.
-# IoT Central application management guide
+# IoT Central application administration guide
An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is for administrators who manage IoT Central applications.
IoT Central application administration includes the following tasks:
## Create applications
-You use an *application template* to create an application. An application templates consist of:
+You use an *application template* to create an application. An application template consists of:
- Sample dashboards - Sample device templates
To learn more, see [Create a retail application](../retail/tutorial-in-store-ana
## Users and roles
-IoT Central uses a role-based access control system to manage user permissions within an application. IoT Central has three built-in roles for administrators, solution builders, and operators. An administrator can create custom roles with specific sets of permissions. An administrator is responsible for adding users to an application and assigning them to roles.
+IoT Central uses a role-based access control system to manage user permissions within an application. An administrator is responsible for adding users to an application and assigning them to roles. IoT Central has three built-in roles for app administrators, app builders, and app operators. An administrator can create custom roles with specific sets of permissions.
To learn more, see [Manage users and roles in your IoT Central application](howto-manage-users-roles.md). ## Organizations
-To manage which users see which devices in your IoT Central application, use an _organization_ hierarchy. You define an organization in your application.
-The user's role in application determines their permissions over the devices they can see.
+To manage which users see which devices in your IoT Central application, use an _organization_ hierarchy. When you define an organization in your application, there are three new built-in roles: _organization administrators_, _organization operators_, and _organization viewers_. The user's role in the application determines their permissions over the devices they can see.
To learn more, see [Create an IoT Central organization](howto-create-organizations.md). ## Application security
-Devices that connect to your IoT Central application typically use X.509 certificates or shared access signatures (SAS) as credentials. The administrator manages the group certificates or keys that these device credentials are derived from.
+Devices that connect to your IoT Central application typically use X.509 certificates or shared access signatures (SAS) as credentials. An administrator manages the group certificates or keys that these device credentials are derived from. To learn more, see:
-To learn more, see [X.509 group enrollment](concepts-get-connected.md#x509-group-enrollment), [SAS group enrollment](concepts-get-connected.md#sas-group-enrollment), and [How to roll X.509 device certificates](how-to-connect-devices-x509.md).
+- [X.509 group enrollment](concepts-get-connected.md#x509-group-enrollment)
+- [SAS group enrollment](concepts-get-connected.md#sas-group-enrollment)
+- [How to roll X.509 device certificates](how-to-connect-devices-x509.md).
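For SAS group enrollment, a per-device credential is derived from the group key that the administrator manages. A minimal sketch, assuming the DPS-style derivation (HMAC-SHA256 of the device ID, keyed with the base64-decoded group key):

```python
# Sketch of SAS group-enrollment key derivation (an assumption based on the
# DPS symmetric-key pattern): each device key is the HMAC-SHA256 of the
# device ID, keyed with the base64-decoded group enrollment key.
import base64
import hashlib
import hmac

def derive_device_key(group_key_b64: str, device_id: str) -> str:
    group_key = base64.b64decode(group_key_b64)
    digest = hmac.new(group_key, device_id.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

# Example with a made-up group key; real keys come from the enrollment group.
group_key = base64.b64encode(b"example-group-master-key").decode("ascii")
device_key = derive_device_key(group_key, "device-001")
print(device_key)
```

Because each device key is a deterministic function of the group key and the device ID, rotating the group key invalidates every derived device credential at once.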
-The administrator can also create and manage the API tokens that a client application uses to authenticate with your IoT Central application. Client applications use the REST API to interact with IoT Central.
+An administrator can also create and manage the API tokens that a client application uses to authenticate with your IoT Central application. Client applications use the REST API to interact with IoT Central. To learn more, see:
-For data exports, the administrator can configure [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to secure the connections to the [export destinations](howto-export-data.md). To learn more, see:
+- [Get an API token](howto-authorize-rest-api.md#get-an-api-token)
-- [Configure a managed identity (Azure portal)](howto-manage-iot-central-from-portal.md#configure-a-managed-identity)
-- [Configure a managed identity (REST API)](howto-manage-iot-central-with-rest-api.md)
-- [Configure a managed identity (Azure CLI)](howto-manage-iot-central-from-cli.md#configure-a-managed-identity)
+For data exports, an administrator can configure [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to secure the connections to the [export destinations](howto-export-data.md). To learn more, see:
+
+- [Configure a managed identity](howto-manage-iot-central-from-portal.md#configure-a-managed-identity)
## Configure an application
-The administrator can configure the behavior and appearance of an IoT Central application. To learn more, see:
+An administrator can configure the behavior and appearance of an IoT Central application. To learn more, see:
- [Change application name and URL](howto-administer.md#change-application-name-and-url)
-- [Customize the UI](howto-customize-ui.md)
+- [Customize application UI](howto-customize-ui.md)
- [Move an application to a different pricing plan](howto-faq.yml#how-do-i-move-from-a-free-to-a-standard-pricing-plan-)
-- [Configure file uploads](howto-configure-file-uploads.md)
+
+## Configure device file upload
+
+An administrator can configure file uploads in an IoT Central application, which lets connected devices upload media and other files to an Azure Storage container. To learn more, see:
+
+- [Upload files from your devices to the cloud](howto-configure-file-uploads.md)
## Export an application
To learn more, see [Create and use a custom application template](howto-create-i
## Migrate to a new version
-An administrator can migrate an application to a newer version. Currently, a newly created application is a V3 application. An administrator may need to migrate a V2 application to a V3 application.
+An administrator can migrate an application to a newer version. Currently, all newly created applications are V3 applications. Depending on when it was created, an existing application may be a V2 application. An administrator is responsible for migrating a V2 application to a V3 application.
To learn more, see [Migrate your V2 IoT Central application to V3](howto-migrate.md).
To learn more, see [Monitor application health](howto-manage-iot-central-from-po
## Monitor connected IoT Edge devices
-To learn how to monitor your IoT Edge fleet remotely using Azure Monitor and built-in metrics integration, see [Collect and transport metrics](../../iot-edge/how-to-collect-and-transport-metrics.md).
+To learn how to monitor your IoT Edge fleet remotely by using Azure Monitor and built-in metrics integration, see [Collect and transport metrics](../../iot-edge/how-to-collect-and-transport-metrics.md).
## Tools

Many of the tools you use as an administrator are available in the **Administration** section of each IoT Central application. You can also use the following tools to complete some administrative tasks:

-- [Azure command line](howto-manage-iot-central-from-cli.md)
+- [Azure Command-Line Interface (CLI) or PowerShell](howto-manage-iot-central-from-cli.md)
- [Azure portal](howto-manage-iot-central-from-portal.md)

## Next steps
iot-central Overview Iot Central Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-tour.md
# Take a tour of the Azure IoT Central UI
-This article introduces you to Azure IoT Central UI. You can use the UI to create, manage, and use an IoT Central application and its connected devices.
+This article introduces you to Azure IoT Central UI. You can use the UI to create, administer, and use an IoT Central application and its connected devices.
## IoT Central homepage
The [IoT Central homepage](https://apps.azureiotcentral.com/) is the place
### Create an application
-In the **Build** section you can browse the list of industry-relevant IoT Central templates, or start from scratch using a Custom app template.
+In the **Build** section you can browse the list of industry-relevant IoT Central templates, or start from scratch using a Custom application template.
:::image type="content" source="media/overview-iot-central-tour/iot-central-build.png" alt-text="IoT Central build page":::
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central.md
This article provides an overview of the features of Azure IoT Central.
## Create an IoT Central application
-[Quickly deploy a new IoT Central application](quick-deploy-iot-central.md) and then customize it to your specific requirements. Application templates in Azure IoT Central are a tool to help you kickstart your IoT solution development. You can use app templates for everything from getting a feel for what is possible, to fully customizing your application to resell to your customers.
+[Quickly deploy a new IoT Central application](quick-deploy-iot-central.md) and then customize it to your specific requirements. Application templates in Azure IoT Central are a tool to help you kickstart your IoT solution development. You can use application templates for everything from getting a feel for what is possible, to fully customizing your application to resell to your customers.
Start with a generic _application template_ or with one of the industry-focused application templates:
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-deploy-iot-central.md
To connect a device to your IoT Central application, you need some connection
To register your device:
-1. In IoT Central, navigate to the **Devices** page and select **Create a device**:
+1. In IoT Central, navigate to the **Devices** page and select **Add a device**:
:::image type="content" source="media/quick-deploy-iot-central/create-device.png" alt-text="Screenshot that shows create a device in IoT Central.":::
iot-central Tutorial Smart Meter App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-smart-meter-app.md
-# Tutorial: Deploy and walk through the smart meter monitoring app template
+# Tutorial: Deploy and walk through the smart meter monitoring application template
-The smart meters not only enable automated billing, but also advanced metering use cases such as real-time readings and bi-directional communication. The smart meter app template enables utilities and partners to monitor smart meters status and data, define alarms and notifications. It provides sample commands, such as disconnect meter and update software. The meter data can be set up to egress to other business applications and to develop custom solutions.
+Smart meters not only enable automated billing, but also advanced metering use cases such as real-time readings and bi-directional communication. The _smart meter monitoring_ application template enables utilities and partners to monitor smart meter status and data, and define alarms and notifications. It provides sample commands, such as disconnect meter and update software. The meter data can be set up to egress to other business applications and to develop custom solutions.
App's key functionalities:
- Built-in visualization and dashboards
- Extensibility for custom solution development
-Use the IoT Central *smart meter monitoring* application template and the guidance in this article to develop an end-to-end smart meter monitoring solution.
- :::image type="content" source="media/tutorial-iot-central-smart-meter/smart-meter-app-architecture.png" alt-text="smart meter architecture.":::
This architecture consists of the following components. Some solutions may not require every component listed here.
A smart meter is one of the most important devices among all the energy assets.
### IoT Central platform
-When you build an IoT solution, Azure IoT Central simplifies the build process and helps to reduce the burden and costs of IoT management, operations, and development. With IoT Central, you can easily connect, monitor, and manage your Internet of Things (IoT) assets at scale. After you connect your smart meters to IoT Central, the app template uses built-in features such as device models, commands, and dashboards. The app template also uses the IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
+When you build an IoT solution, Azure IoT Central simplifies the build process and helps to reduce the burden and costs of IoT management, operations, and development. With IoT Central, you can easily connect, monitor, and manage your Internet of Things (IoT) assets at scale. After you connect your smart meters to IoT Central, the application template uses built-in features such as device models, commands, and dashboards. The application template also uses the IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
### Extensibility options to build with IoT Central
iot-central Tutorial Solar Panel App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-solar-panel-app.md
-# Tutorial: Deploy and walk through the solar panel monitoring app template
+# Tutorial: Deploy and walk through the solar panel monitoring application template
The solar panel monitoring app enables utilities and partners to monitor solar panels, including their energy generation and connection status, in near real time. It can send notifications based on defined threshold criteria. It provides sample commands, such as update firmware and other properties. The solar panel data can be set up to egress to other business applications and to develop custom solutions.
-App's key functionalities:
+Key application functionality:
- Solar panel sample device model - Solar Panel info and live status
App's key functionalities:
- Built-in visualization and dashboards
- Extensibility for custom solution development
-Use the IoT Central *solar panel monitoring* application template and the guidance in this article to develop an end-to-end solar panel monitoring solution.
-
- :::image type="content" source="media/tutorial-iot-central-solar-panel/solar-panel-app-architecture.png" alt-text="solar panel architecture.":::
This architecture consists of the following components. Some applications may not require every component listed here.
Solar panels are one of the significant sources of renewable energy. Typically,
### IoT Central platform
-When you build an IoT solution, Azure IoT Central simplifies the build process and helps to reduce the burden and costs of IoT management, operations, and development. With IoT Central, you can easily connect, monitor, and manage your Internet of Things (IoT) assets at scale. After you connect your solar panels to IoT Central, the app template uses built-in features such as device models, commands, and dashboards. The app template also uses the IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
+When you build an IoT solution, Azure IoT Central simplifies the build process and helps to reduce the burden and costs of IoT management, operations, and development. With IoT Central, you can easily connect, monitor, and manage your Internet of Things (IoT) assets at scale. After you connect your solar panels to IoT Central, the application template uses built-in features such as device models, commands, and dashboards. The application template also uses the IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
### Extensibility options to build with IoT Central
-The IoT Central platform provides two extensibility options: Continuous Data Export (CDE) and APIs. The customers and partners can choose between these options based to customize their solutions for specific needs. For example, one of our partners configured CDE with Azure Data Lake Storage (ADLS). They're using ADLS for long-term data retention and other cold path storage scenarios, such batch processing, auditing, and reporting purposes.
-
+The IoT Central platform provides two extensibility options: Continuous Data Export (CDE) and APIs. Customers and partners can choose between these options to customize their solutions for specific needs. For example, one of our partners configured CDE with Azure Data Lake Storage (ADLS). They're using ADLS for long-term data retention and other cold path storage scenarios, such as batch processing, auditing, and reporting.
In this tutorial, you learn how to:

> [!div class="checklist"]
> * Create a solar panel app for free
> * Walk through the application
> * Clean up resources

## Prerequisites

* There are no specific prerequisites required to deploy this app.
* You can use the free pricing plan or use an Azure subscription.

## Create a solar panel monitoring application

1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Energy** tab:

   :::image type="content" source="media/tutorial-iot-central-solar-panel/solar-panel-build.png" alt-text="Smart meter template":::
The following sections walk you through the key features of the application:
After you deploy the application template, you'll want to explore the app a bit more. Notice that it comes with a sample solar panel device, device model, and dashboard. Adatum is a fictitious energy company that monitors and manages solar panels. On the solar panel monitoring dashboard, you see solar panel properties, data, and sample commands. This dashboard allows you or your support team to perform the following activities proactively, before any problems require additional support:

* Review the latest panel info and its installed [location](../core/howto-use-location-data.md) on the map.
* Check the panel status and connection status.
* Review the energy generation and temperature trends to catch any anomalous patterns.
Select the sample device, **SP0123456789**. From the **Update Properties** tab,
:::image type="content" source="media/tutorial-iot-central-solar-panel/solar-panel-device-properties.png" alt-text="Screenshot of Solar Panel Monitoring Template Update Properties tab.":::

### Device template

To see the solar panel device model, select the **Device templates** tab. The model has predefined interfaces for data, properties, commands, and views.

:::image type="content" source="media/tutorial-iot-central-solar-panel/solar-panel-device-templates.png" alt-text="Screenshot of Solar Panel Monitoring Template Device templates.":::

## Clean up resources

If you decide not to continue using this application, delete your application with the following steps:
iot-central Tutorial Connected Waste Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-connected-waste-management.md
# Tutorial: Deploy and walk through the connected waste management application template
-Connected Waste Management app is an IoT Central app template to help you kickstart your IoT solution development to enable smart cities to remotely monitor to maximize efficient waste collection.
-
-Use the IoT Central *connected waste management* application template and the guidance in this article to develop an end-to-end connected waste management solution.
+The _connected waste management_ application template helps you kickstart your IoT solution development, enabling smart cities to remotely monitor waste bins and maximize the efficiency of waste collection.
:::image type="content" source="media/tutorial-connectedwastemanagement/concepts-connected-waste-management-architecture-1.png" alt-text="Connected waste management architecture.":::
-### Devices and connectivity
+### Devices and connectivity (1,2)
Devices such as waste bins that are used in open environments may connect through low-power wide area networks (LPWAN) or through a third-party network operator. For these types of devices, use the [Azure IoT Central Device Bridge](../core/howto-build-iotc-device-bridge.md) to send your device data to your IoT Central application. You can also use an IP capable device gateway that connects directly to your IoT Central application.
Azure IoT Central is an IoT App platform that helps you quickly build and deploy
When you connect your smart waste devices to IoT Central, the application provides device command and control, monitoring and alerting, a user interface with built-in RBAC, configurable dashboards, and extensibility options.
-### Extensibility and integrations
+### Extensibility and integrations (3)
You can extend your IoT application in IoT Central and optionally:
* Automate workflows in other systems by triggering actions using Power Automate or webhooks from the IoT Central application.
* Programmatically access your IoT application in IoT Central through IoT Central APIs.
-### Business applications
+### Business applications (4)
You can use IoT data to power various business applications within a waste utility. For example, in a connected waste management solution you can optimize the dispatch of trash collection trucks. The optimization can be done based on IoT sensor data from connected waste bins. In your [IoT Central connected waste management application](./tutorial-connected-waste-management.md) you can configure rules and actions, and set them to create alerts in [Connected Field Service](/dynamics365/field-service/connected-field-service). Configure Power Automate in IoT Central rules to automate workflows across applications and services. Additionally, based on service activities in Connected Field Service, information can be sent back to Azure IoT Central.
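Conceptually, a rule like the one described here pairs a telemetry threshold with an action. A hypothetical sketch (not IoT Central's actual rule engine; the `Rule` type and `fillLevel` telemetry name are illustrative):

```python
# Hypothetical sketch (not IoT Central's actual rule engine): a threshold
# rule over waste-bin telemetry whose result would drive an action such as
# creating a Connected Field Service alert or a Power Automate workflow.
from dataclasses import dataclass

@dataclass
class Rule:
    telemetry: str
    operator: str  # "gt" (greater than) or "lt" (less than)
    threshold: float

def evaluate(rule: Rule, message: dict) -> bool:
    """Return True when the telemetry message satisfies the rule condition."""
    value = message.get(rule.telemetry)
    if value is None:
        return False
    return value > rule.threshold if rule.operator == "gt" else value < rule.threshold

fill_rule = Rule(telemetry="fillLevel", operator="gt", threshold=80.0)
print(evaluate(fill_rule, {"fillLevel": 91.5}))  # condition met: dispatch a truck
print(evaluate(fill_rule, {"fillLevel": 35.0}))
```

In IoT Central itself, the threshold and action are configured in the UI; this sketch only illustrates the condition-then-action shape of such a rule.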
You can easily configure the following integration processes with IoT Central an
In this tutorial, you learn how to: > [!div class="checklist"]-
-> * Use the Azure IoT Central *Connected waste management* template to create your app.
-> * Explore and customize the dashboard.
+> * Use the Azure IoT Central *connected waste management* application template to create your app.
+> * Explore and customize the dashboard.
> * Explore the connected waste bin device template. > * Explore simulated devices. > * Explore and configure rules.
iot-central Tutorial Water Consumption Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-consumption-monitoring.md
Traditional water consumption tracking relies on water operators manually reading water consumption meters at the meter sites. More cities are replacing traditional meters with advanced smart meters that enable remote monitoring of consumption and remote control of valves to manage water flow. Water consumption monitoring, coupled with digital feedback messages to citizens, can increase awareness and reduce water consumption.
-The water consumption monitoring app is an IoT Central app template to help you kickstart your IoT solution development to enable water utilities and cities to remotely monitor and control water flow to reduce consumption.
-
-Use the IoT Central *water consumption monitoring* application template and the guidance in this article to develop an end-to-end water consumption monitoring solution.
+The _water consumption monitoring_ application template helps you kickstart your IoT solution development to enable water utilities and cities to remotely monitor and control water flow to reduce consumption.
![Water consumption monitoring architecture](./media/tutorial-waterconsumptionmonitoring/concepts-waterconsumptionmonitoring-architecture1.png)
-### Devices and connectivity
+### Devices and connectivity (1,2)
Water management solutions use smart water devices such as flow meters, water quality monitors, smart valves, and leak detectors.
When you build an IoT solution, Azure IoT Central simplifies the build process a
When you connect your smart water devices to IoT Central, the application provides device command and control, monitoring and alerting, a user interface with built-in RBAC, configurable dashboards, and extensibility options.
-### Extensibility and integrations
+### Extensibility and integrations (3)
You can extend your IoT application in IoT Central and optionally:
You can extend your IoT application in IoT Central and optionally:
* Automate workflows in other systems by triggering actions using Power Automate or webhooks from your IoT Central application. * Programmatically access your IoT application in IoT Central through the IoT Central APIs.
-### Business applications
+### Business applications (4)
You can use IoT data to power various business applications within a water utility. In your [IoT Central water consumption monitoring application](tutorial-water-consumption-monitoring.md) you can configure rules and actions, and set them to create alerts in [Connected Field Service](/dynamics365/field-service/connected-field-service). Configure Power Automate in IoT Central rules to automate workflows across applications and services. Additionally, based on service activities in Connected Field Service, information can be sent back to Azure IoT Central. In this tutorial, you learn how to: > [!div class="checklist"]- > * Use the Azure IoT Central water consumption monitoring template to create your water consumption monitoring application. > * Explore and customize the dashboard. > * Explore device templates.
iot-central Tutorial Water Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-quality-monitoring.md
Traditional water quality monitoring relies on manual sampling techniques and field laboratory analysis, which is time-consuming and costly. By remotely monitoring water quality in real time, water quality issues can be managed before citizens are affected. Moreover, with advanced analytics, water utilities and environmental agencies can act on early warnings of potential water quality issues and plan water treatment in advance.
-The water quality monitoring app is an IoT Central app template to help you kickstart your IoT solution development and enable water utilities to digitally monitor water quality in smart cities.
-
-Use the IoT Central *water quality monitoring* application template and the guidance in this article to develop an end-to-end water quality monitoring solution.
+The _water quality monitoring_ application template helps you kickstart your IoT solution development and enable water utilities to digitally monitor water quality in smart cities.
![Water quality monitoring architecture](./media/tutorial-waterqualitymonitoring/concepts-water-quality-monitoring-architecture1.png)
-### Devices and connectivity
+### Devices and connectivity (1,2)
Water management solutions use smart water devices such as flow meters, water quality monitors, smart valves, and leak detectors.
When you build an IoT solution, Azure IoT Central simplifies the build process a
When you connect your smart water devices to IoT Central, the application provides device command and control, monitoring and alerting, a user interface with built-in RBAC, configurable dashboards, and extensibility options.
-### Extensibility and integrations
+### Extensibility and integrations (3,4)
You can extend your IoT application in IoT Central and optionally:
You can extend your IoT application in IoT Central and optionally:
* Automate workflows in other systems by triggering actions using Power Automate or webhooks from your IoT Central application. * Programmatically access your IoT application in IoT Central through the IoT Central APIs.
-### Business applications
+### Business applications (5)
You can use IoT data to power various business applications within a water utility. In your [IoT Central water consumption monitoring application](tutorial-water-consumption-monitoring.md) you can configure rules and actions, and set them to create alerts in [Connected Field Service](/dynamics365/field-service/connected-field-service). Configure Power Automate in IoT Central rules to automate workflows across applications and services. Additionally, based on service activities in Connected Field Service, information can be sent back to Azure IoT Central. In this tutorial, you learn to: > [!div class="checklist"]- > * Use the **Water quality monitoring** template to create a water quality monitoring application. > * Explore and customize a dashboard. > * Explore a water quality monitoring device template.
iot-central Tutorial Continuous Patient Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/healthcare/tutorial-continuous-patient-monitoring.md
-# Tutorial: Deploy and walkthrough the continuous patient monitoring app template
+# Tutorial: Deploy and walk through the continuous patient monitoring application template
-In the healthcare IoT space, Continuous Patient Monitoring is one of the key enablers of reducing the risk of readmissions, managing chronic diseases more effectively, and improving patient outcomes. Continuous Patient Monitoring can be split into two major categories:
+In the healthcare IoT space, continuous patient monitoring is one of the key enablers of reducing the risk of readmissions, managing chronic diseases more effectively, and improving patient outcomes. Continuous patient monitoring can be split into two major categories:
1. **In-patient monitoring**: Using medical wearables and other devices in the hospital, care teams can monitor patient vital signs and medical conditions without having to send a nurse to check up on a patient multiple times a day. Care teams can understand the moment that a patient needs critical attention through notifications and prioritize their time effectively. 1. **Remote patient monitoring**: By using medical wearables and patient-reported outcomes (PROs) to monitor patients outside of the hospital, the risk of readmission can be lowered. Data from chronic disease patients and rehabilitation patients can be collected to ensure that patients are adhering to care plans and that alerts of patient deterioration can be surfaced to care teams before they become critical.
-This application template can be used to build solutions for both categories of Continuous Patient Monitoring. The benefits include:
+The application template enables you to:
- Seamlessly connect different kinds of medical wearables to an IoT Central instance. - Monitor and manage the devices to ensure they remain healthy.
This application template can be used to build solutions for both categories of
:::image type="content" source="media/cpm-architecture.png" alt-text="Continuous patient monitoring architecture":::
-## Bluetooth Low Energy (BLE) medical devices
+### Bluetooth Low Energy (BLE) medical devices (1)
Many medical wearables used in healthcare IoT solutions are BLE devices. These devices can't communicate directly with the cloud and need to use a gateway to exchange data with your cloud solution. This architecture uses a mobile phone application as the gateway.
-## Mobile phone gateway
+### Mobile phone gateway (2)
-The mobile phone application's primary function is to collect BLE data from medical devices and communicate it to IoT Central. The app also guides patients through device setup and lets them view their personal health data. Other solutions could use a tablet gateway or a static gateway in a hospital room. An open-source sample mobile application is available for Android and iOS to use as a starting point for your application development. To learn more, see the [Continuous Patient Monitoring sample mobile app on GitHub](https://github.com/iot-for-all/iotc-cpm-sample).
+The mobile phone application's primary function is to collect BLE data from medical devices and communicate it to IoT Central. The app also guides patients through device setup and lets them view their personal health data. Other solutions could use a tablet gateway or a static gateway in a hospital room. An open-source sample mobile application is available for Android and iOS to use as a starting point for your application development. To learn more, see the [Continuous patient monitoring sample mobile app on GitHub](https://github.com/iot-for-all/iotc-cpm-sample).
-## Export to Azure API for FHIR&reg;
+### Export to Azure API for FHIR&reg; (3)
Azure IoT Central is HIPAA-compliant and HITRUST&reg; certified. You can also send patient health data to other services using the [Azure API for FHIR](../../healthcare-apis/fhir/overview.md). Azure API for FHIR is a standards-based API for clinical health data. The [Azure IoT connector for FHIR](../../healthcare-apis/fhir/iot-fhir-portal-quickstart.md) lets you use the Azure API for FHIR as a continuous data export destination from IoT Central.
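As a rough illustration of the kind of clinical resource the FHIR service stores, here is a minimal sketch of a heart-rate Observation. The actual device-to-FHIR mapping is handled by the Azure IoT connector for FHIR; the field values below are illustrative only.

```python
def heart_rate_observation(patient_ref: str, bpm: int) -> dict:
    # Minimal FHIR Observation for a heart-rate reading. LOINC 8867-4 is
    # the standard heart-rate code; patient_ref and bpm are placeholders.
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "8867-4",
                             "display": "Heart rate"}]},
        "subject": {"reference": patient_ref},
        "valueQuantity": {"value": bpm, "unit": "beats/minute"},
    }

obs = heart_rate_observation("Patient/example", 72)
```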
-## Machine learning
+### Machine learning (4)
Use machine learning models with your FHIR data to generate insights and support decision making by your care team. To learn more, see the [Azure machine learning documentation](../../machine-learning/index.yml).
-## Provider dashboard
+### Provider dashboard (5)
Use the Azure API for FHIR data to build a patient insights dashboard or integrate it directly into an electronic medical record used by care teams. Care teams can use the dashboard to assist patients and identify early warning signs of deterioration. To learn more, see the [Build a Power BI provider dashboard](tutorial-health-data-triage.md) tutorial.
In this tutorial, you learn how to:
- There are no specific prerequisites required to deploy this app. - You can use the free pricing plan or use an Azure subscription.
-## Create Continuous Patient Monitoring application
+## Create application
1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Healthcare** tab: :::image type="content" source="media/app-manager-health.png" alt-text="Application template":::
-1. Select **Create app** under **Continuous Patient Monitoring**.
+1. Select **Create app** under **Continuous patient monitoring**.
To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md).
The following sections walk you through the key features of the application:
### Dashboards
-After deploying the app template, you'll first land on the **Lamna in-patient monitoring dashboard**. Lamna Healthcare is a fictitious hospital system that contains two hospitals: Woodgrove Hospital and Burkville Hospital. On the Woodgrove Hospital operator dashboard, you can:
+After deploying the application template, you'll first land on the **Lamna in-patient monitoring dashboard**. Lamna Healthcare is a fictitious hospital system that contains two hospitals: Woodgrove Hospital and Burkville Hospital. On the Woodgrove Hospital operator dashboard, you can:
* See device telemetry and properties such as the **battery level** of your device or its **connectivity** status.
iot-central Tutorial Health Data Triage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/healthcare/tutorial-health-data-triage.md
In this tutorial, you learn how to:
## Set up a continuous data export to Azure Event Hubs
-You will first need to set up a continuous data export from your Azure IoT Central app template to the Azure Event Hub in your subscription. You can do so by following the steps in this Azure IoT Central tutorial for [Exporting to Event Hubs](../core/howto-export-data.md). You will only need to export for the telemetry for the purposes of this tutorial.
+You will first need to set up a continuous data export from your Azure IoT Central application template to the Azure Event Hub in your subscription. You can do so by following the steps in this Azure IoT Central tutorial for [Exporting to Event Hubs](../core/howto-export-data.md). You only need to export telemetry for the purposes of this tutorial.
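Once the export is running, a small consumer can read the telemetry stream. The message shape assumed by `parse_telemetry` below is an assumption to verify against a real exported message, and the commented receive loop relies on the `azure-eventhub` package.

```python
import json

def parse_telemetry(event_body: str) -> dict:
    # The deviceId/telemetry shape is an assumption about the export
    # format; inspect one real exported message to confirm the fields.
    msg = json.loads(event_body)
    return {"deviceId": msg.get("deviceId"),
            "telemetry": msg.get("telemetry", {})}

# Receiving loop sketch (requires the azure-eventhub package):
# from azure.eventhub import EventHubConsumerClient
# client = EventHubConsumerClient.from_connection_string(
#     CONN_STR, consumer_group="$Default", eventhub_name=HUB_NAME)
# with client:
#     client.receive(on_event=lambda ctx, e: print(parse_telemetry(e.body_as_str())))

sample = json.dumps({"deviceId": "patient-device-1",
                     "telemetry": {"heartRate": 72}})
parsed = parse_telemetry(sample)
```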
## Create a Power BI streaming dataset
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
Last updated 12/20/2021
For many retailers, environmental conditions within their stores are a key differentiator from their competitors. Retailers want to maintain pleasant conditions within their stores for the benefit of their customers.
-You can use the IoT Central in-store analytics condition monitoring application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights to help the retailer reduce operating costs and create a great experience for their customers.
+You can use the IoT Central _in-store analytics condition monitoring_ application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights to help the retailer reduce operating costs and create a great experience for their customers.
Use the application template to:
Use the application template to:
The application template comes with a set of device templates and uses a set of simulated devices to populate the dashboard.
-Use the IoT Central *in-store analytics* application template and the guidance in this article to develop an end-to-end in-store analytics solution.
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/store-analytics-architecture-frame.png" alt-text="Azure IoT Central Store Analytics.":::
-1. Set of IoT sensors sending telemetry data to a gateway device.
-1. Gateway devices sending telemetry and aggregated insights to IoT Central.
-1. Continuous data export to the desired Azure service for manipulation.
-1. Data can be structured in the desired format and sent to a storage service.
-1. Business applications can query data and generate insights that power retail operations.
-
-## Condition monitoring sensors
+### Condition monitoring sensors (1)
An IoT solution starts with a set of sensors capturing meaningful signals from within a retail store environment. These sensors are shown on the far left of the architecture diagram above.
-## Gateway devices
+### Gateway devices (2)
Many IoT sensors can feed raw signals directly to the cloud or to a gateway device located near them. The gateway device performs data aggregation at the edge before sending summary insights to an IoT Central application. The gateway devices are also responsible for relaying command and control operations to the sensor devices when applicable.
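The aggregation step the gateway performs can be sketched as a simple windowed summary, assuming the gateway batches raw readings before forwarding insights to IoT Central.

```python
from statistics import mean

def aggregate(samples: list[float]) -> dict:
    # Summarizes a window of raw readings into the kind of insight a
    # gateway might forward instead of every individual sample.
    return {"min": min(samples), "max": max(samples),
            "avg": mean(samples), "count": len(samples)}

window = [21.0, 21.4, 22.1, 21.8]  # e.g. in-store temperature readings
summary = aggregate(window)
```

Forwarding only the summary reduces bandwidth while still letting the IoT Central application track trends and raise alerts.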
-## IoT Central application
+### IoT Central application
The Azure IoT Central application ingests data from different kinds of IoT sensors and gateway devices within the retail store environment and generates a set of meaningful insights. Azure IoT Central also provides a tailored experience to the store operator enabling them to remotely monitor and manage the infrastructure devices.
-## Data transform
+### Data transform (3)
-The Azure IoT Central application within a solution can be configured to export raw or aggregated insights to a set of Azure PaaS (Platform-as-a Service) services that can perform data manipulation and enrich these insights before landing them in a business application.
+The Azure IoT Central application within a solution can be configured to export raw or aggregated insights to a set of Azure PaaS (Platform-as-a Service) services that can perform data manipulation and enrich these insights before landing them in a business application.
-## Business application
+### Business application (4)
The IoT data can be used to power different kinds of business applications deployed within a retail environment. A retail store manager or staff member can use these applications to visualize business insights and take meaningful actions in real time. To learn how to build a real-time Power BI dashboard for your retail team, follow the [tutorial](./tutorial-in-store-analytics-create-app.md). In this tutorial, you learn how to: > [!div class="checklist"]
->
> - Use the Azure IoT Central **In-store analytics - checkout** template to create a retail store application > - Customize the application settings > - Create and customize IoT device templates
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
The benefits of a connected logistics solution include:
- Geo-fencing, route optimization, fleet management, and vehicle analytics. - Forecasting for predictable departure and arrival of shipments.
-Use the application template and guidance in this article to develop an end-to-end *connected logistics solution*.
- :::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-architecture.png" alt-text="Connected logistics dashboard." border="false":::
-1. IoT tags send telemetry data to a gateway device.
-2. Gateway devices send telemetry and aggregated insights to IoT Central.
-3. IoT Central routes data to an Azure service for manipulation.
-4. Services such as Azure Stream Analytics or Azure Functions can reformat data streams and send the data to storage accounts.
-5. End-user business applications can power business workflows.
-
-*IoT tags* provide physical, ambient, and environmental sensor capabilities such as temperature, humidity, shock, tilt, and light. IoT tags typically connect to gateway device through Zigbee (802.15.4). Tags are less expensive sensors and can be discarded at the end of a typical logistics journey to avoid challenges with reverse logistics.
+*IoT tags (1)* provide physical, ambient, and environmental sensor capabilities such as temperature, humidity, shock, tilt, and light. IoT tags typically connect to a gateway device through Zigbee (802.15.4). Tags are less expensive sensors and can be discarded at the end of a typical logistics journey to avoid challenges with reverse logistics.
-*Gateways* enable upstream Azure IoT cloud connectivity using cellular or Wi-Fi channels. Bluetooth, NFC, and 802.15.4 Wireless Sensor Network modes are used for downstream communication with IoT tags. Gateways provide end-to-end secure cloud connectivity, IoT tag pairing, sensor data aggregation, data retention, and the ability to configure alarm thresholds.
+*Gateways (2)* enable upstream Azure IoT cloud connectivity using cellular or Wi-Fi channels. Bluetooth, NFC, and 802.15.4 Wireless Sensor Network modes are used for downstream communication with IoT tags. Gateways provide end-to-end secure cloud connectivity, IoT tag pairing, sensor data aggregation, data retention, and the ability to configure alarm thresholds.
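The alarm-threshold capability mentioned above can be sketched as a simple check over configured limits; the telemetry names and limit values below are hypothetical.

```python
def check_thresholds(reading: dict, limits: dict) -> list[str]:
    # Returns the names of telemetry values outside their configured
    # (low, high) limits. Reading shape and limits are illustrative.
    alarms = []
    for name, (low, high) in limits.items():
        value = reading.get(name)
        if value is not None and not (low <= value <= high):
            alarms.append(name)
    return alarms

# Hypothetical cold-chain limits for a shipment.
limits = {"temperature": (2.0, 8.0), "humidity": (30.0, 70.0)}
alarms = check_thresholds({"temperature": 9.3, "humidity": 55.0}, limits)
```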
Azure IoT Central is a solution development platform that simplifies IoT device connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. You can build end-to-end enterprise solutions to achieve a digital feedback loop in logistics.
-The IoT Central platform provides rich extensibility options through data export and APIs. Business insights based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application.
+The IoT Central platform provides rich extensibility options through _data export and APIs (3)_. Business insights based on telemetry data processing or raw telemetry are typically exported to a preferred _line-of-business application (4,5)_.
This tutorial shows you how to get started with the IoT Central *connected logistics* application template. You'll learn how to deploy and use the template.
Create the application using following steps:
* **Billing Info**: The directory, Azure subscription, and region details are required to provision the resources. * **Create**: Select create at the bottom of the page to deploy your application.
- :::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-app-create.png" alt-text="Connected logistics app template":::
+ :::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-app-create.png" alt-text="Connected logistics application template":::
:::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-app-create-billinginfo.png" alt-text="Connected logistics billing info":::
iot-central Tutorial Iot Central Digital Distribution Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md
The benefits of a digital distribution center include:
- Efficient order tracking. - Reduced costs, improved productivity, and optimized usage.
-Use the IoT Central *digital distribution center* application template and the guidance in this article to develop an end-to-end digital distribution center solution.
- :::image type="content" source="media/tutorial-iot-central-ddc/digital-distribution-center-architecture.png" alt-text="digital distribution center.":::
+### Video cameras (1)
-1. Set of IoT sensors sending telemetry data to a gateway device.
-2. Gateway devices sending telemetry and aggregated insights to IoT Central.
-3. Data is routed to the desired Azure service for manipulation.
-4. Azure services like ASA or Azure Functions can be used to reformat data streams and send to the desired storage accounts.
-5. Processed data is stored in hot storage for near real-time actions or cold storage for more insight enhancements that is based on ML or batch analysis.
-6. Logic Apps can be used to power various business workflows in end-user business applications.
+Video cameras are the primary sensors in this digitally connected enterprise-scale ecosystem. Advancements in machine learning and artificial intelligence allow video to be turned into structured data and processed at the edge before being sent to the cloud. You can use IP cameras to capture images, compress them on the camera, and then send the compressed data to edge compute for the video analytics pipeline, or use GigE Vision cameras to capture images on the sensor and send them directly to Azure IoT Edge, which compresses them before processing in the video analytics pipeline.
-### Video cameras
-
-Video cameras are the primary sensors in this digitally connected enterprise-scale ecosystem. Advancements in machine learning and artificial intelligence that allow video to be turned into structured data and process it at edge before sending to cloud. We can use IP cameras to capture images, compress them on the camera, and then send the compressed data over edge compute for video analytics pipeline or use GigE vision cameras to capture images on the sensor and then send these images directly to the Azure IoT Edge, which then compresses before processing in video analytics pipeline.
-
-### Azure IoT Edge Gateway
+### Azure IoT Edge gateway (2)
The "cameras-as-sensors" and edge workloads are managed locally by Azure IoT Edge, and the camera stream is processed by the analytics pipeline. The video analytics processing pipeline at Azure IoT Edge brings many benefits, including decreased response time and lower bandwidth consumption, which result in low latency for rapid data processing. Only the most essential metadata, insights, or actions are sent to the cloud for further action or investigation.
-### Device Management with IoT Central
-
+### Device management with IoT Central
+ Azure IoT Central is a solution development platform that simplifies IoT device and Azure IoT Edge gateway connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers and partners can build end-to-end enterprise solutions to achieve a digital feedback loop in distribution centers.
-### Business Insights and actions using data egress
+### Business insights and actions using data egress (5,6)
The IoT Central platform provides rich extensibility options through Continuous Data Export (CDE) and APIs. Business insights based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application. This can be achieved through webhooks, Service Bus, Event Hubs, or Blob storage to build, train, and deploy machine learning models and further enrich insights. In this tutorial, you learn how to: > [!div class="checklist"]- > * Create a digital distribution center application. > * Walk through the application.
iot-central Tutorial Iot Central Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
The benefits of smart inventory management include:
This application template focuses on device connectivity, and the configuration and management of RFID and Bluetooth low energy (BLE) reader devices.
-Use the IoT Central *smart inventory management* application template and the guidance in this article to develop an end-to-end smart inventory management solution.
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-architecture.png" alt-text="smart inventory management.":::
-1. Set of IoT sensors sending telemetry data to a gateway device.
-2. Gateway devices sending telemetry and aggregated insights to IoT Central.
-3. Data is routed to the desired Azure service for manipulation.
-4. Azure services like ASA or Azure Functions can be used to reformat data streams and send to the desired storage accounts.
-5. Processed data is stored in hot storage for near real-time actions or cold storage for additional insight enhancements that is based on ML or batch analysis.
-6. Logic Apps can be used to power various business workflows in end-user business applications.
-
-### Details
-
-Following section outlines each part of the conceptual architecture
-Telemetry ingestion from Radio-frequency identification (RFID), Bluetooth low energy (BLE) tags
-
-### RFID tags
+### RFID tags (1)
RFID tags transmit data about an item through radio waves. RFID tags typically don't have a battery unless specified. Tags receive energy from the radio waves generated by the reader and transmit a signal back toward the RFID reader.
-### BLE tags
+### BLE tags (1)
A BLE beacon broadcasts packets of data at regular intervals. Beacon data is detected by BLE readers or installed services on smartphones and then transmitted to the cloud.
-### RFID and BLE readers
+### RFID and BLE readers (1)
An RFID reader converts the radio waves to a more usable form of data. Information collected from the tags is then stored on a local edge server or sent to the cloud using JSON-RPC 2.0 over MQTT. BLE readers, also known as access points (APs), are similar to RFID readers. They detect nearby Bluetooth signals and relay the messages to a local Azure IoT Edge device or to the cloud using JSON-RPC 2.0 over MQTT. Many readers can read both RFID and beacon signals, and provide additional sensor capabilities related to temperature, humidity, accelerometer, and gyroscope.
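The JSON-RPC 2.0 over MQTT pattern can be sketched as follows. Only the JSON-RPC envelope (`jsonrpc`/`method`/`params`/`id`) is standard; the method name, parameters, and MQTT topic are hypothetical.

```python
import json
from itertools import count

_request_ids = count(1)

def tag_read_request(tag_id: str, rssi: int) -> str:
    # Only the JSON-RPC 2.0 envelope is standard; the "tag_read" method
    # and its params are illustrative, not a documented reader protocol.
    return json.dumps({"jsonrpc": "2.0",
                       "method": "tag_read",
                       "params": {"tagId": tag_id, "rssi": rssi},
                       "id": next(_request_ids)})

payload = tag_read_request("EPC-0001", -52)
# A reader would then publish this over MQTT, e.g. with paho-mqtt:
# client.publish("readers/dock-3/events", payload)
```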
-### Azure IoT Edge gateway
+### Azure IoT Edge gateway (2)
The Azure IoT Edge server provides a place to preprocess data locally before sending it on to the cloud. We can also deploy cloud workloads, such as artificial intelligence, Azure and third-party services, and business logic, using standard containers.
-### Device management with IoT Central
+### Device management with IoT Central
Azure IoT Central is a solution development platform that simplifies IoT device connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers and partners can build end-to-end enterprise solutions to achieve a digital feedback loop in inventory management.
-### Business insights and actions using data egress
+### Business insights and actions using data egress (3)
The IoT Central platform provides rich extensibility options through Continuous Data Export (CDE) and APIs. Business insights based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application. This can be achieved using webhooks, Service Bus, Event Hubs, or Blob storage to build, train, and deploy machine learning models and further enrich insights. In this tutorial, you learn how to: > [!div class="checklist"]- > * create smart inventory management application
-> * walk through the application
+> * walk through the application
## Prerequisites
The following sections walk you through the key features of the application:
### Dashboard
-After successfully deploying the app template, your default dashboard is a smart inventory management operator focused portal. Northwind Trader is a fictitious smart inventory provider managing warehouse with Bluetooth low energy (BLE) and retail store with Radio-frequency identification (RFID). In this dashboard, you'll see two different gateways providing telemetry about inventory along with associated commands, jobs, and actions that you can perform.
+After successfully deploying the application template, your default dashboard is a smart inventory management operator-focused portal. Northwind Trader is a fictitious smart inventory provider managing a warehouse with Bluetooth low energy (BLE) and a retail store with radio-frequency identification (RFID). In this dashboard, you'll see two different gateways providing telemetry about inventory, along with associated commands, jobs, and actions that you can perform.
This dashboard is pre-configured to showcase the critical smart inventory management device operations activity. The dashboard is logically divided between two different gateway device management operations:
iot-central Tutorial Micro Fulfillment Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-micro-fulfillment-center.md
Last updated 12/21/2021
In the increasingly competitive retail landscape, retailers constantly face pressure to close the gap between demand and fulfillment. A new trend that has emerged to address the growing consumer demand is to house inventory near the end customers and the stores they visit.
-The IoT Central micro-fulfillment center application template enables you to monitor and manage all aspects of your fully automated fulfillment centers. The template includes a set of simulated condition monitoring sensors and robotic carriers to accelerate the solution development process. These sensor devices capture meaningful signals that can be converted into business insights allowing retailers to reduce their operating costs and create experiences for their customers.
+The IoT Central _micro-fulfillment center_ application template enables you to monitor and manage all aspects of your fully automated fulfillment centers. The template includes a set of simulated condition monitoring sensors and robotic carriers to accelerate the solution development process. These sensor devices capture meaningful signals that can be converted into business insights allowing retailers to reduce their operating costs and create experiences for their customers.
The application template enables you to:
The application template enables you to:
- Transform the environmental conditions within your fulfillment center into insights that the retail warehouse team can use.
- Export the aggregated insights into existing or new business applications for the benefit of the retail staff members.
-Use the IoT Central *micro-fulfillment center* application template and the guidance in this article to develop an end-to-end micro-fulfillment center solution.
- ![Azure IoT Central Store Analytics](./media/tutorial-micro-fulfillment-center-app/micro-fulfillment-center-architecture-frame.png)
-1. Set of IoT sensors sending telemetry data to a gateway device
-2. Gateway devices sending telemetry and aggregated insights to IoT Central
-3. Continuous data export to the desired Azure service for manipulation
-4. Data can be structured in the desired format and sent to a storage service
-5. Business applications can query data and generate insights that power retail operations
-
-### Robotic carriers
+### Robotic carriers (1)
A micro-fulfillment center solution will likely have a large set of robotic carriers generating different kinds of telemetry signals. These signals can be ingested by a gateway device, aggregated, and then sent to IoT Central as reflected by the left side of the architecture diagram.
-### Condition monitoring sensors
+### Condition monitoring sensors (1)
An IoT solution starts with a set of sensors capturing meaningful signals from within your fulfillment center. It's reflected by different kinds of sensors on the far left of the architecture diagram above.
-### Gateway devices
+### Gateway devices (2)
Many IoT sensors can feed raw signals directly to the cloud or to a gateway device located near them. The gateway device performs data aggregation at the edge before sending summary insights to an IoT Central application. The gateway devices are also responsible for relaying command and control operations to the sensor devices when applicable.
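The edge aggregation step described above can be sketched as follows; the reading fields and summary shape are assumptions for illustration, not the template's actual gateway logic:

```python
from statistics import mean

# Hypothetical raw readings a gateway might collect from nearby sensors.
readings = [
    {"sensorId": "rfid-01", "temperature": 20.9},
    {"sensorId": "rfid-02", "temperature": 21.4},
    {"sensorId": "rfid-03", "temperature": 21.1},
]

def summarize(batch):
    """Aggregate raw sensor readings into a single summary insight."""
    return {
        "sensorCount": len(batch),
        "avgTemperature": round(mean(r["temperature"] for r in batch), 2),
    }

print(summarize(readings))  # {'sensorCount': 3, 'avgTemperature': 21.13}
```

Only the compact summary travels to the cloud, which is what keeps the gateway pattern cheaper than streaming every raw reading to IoT Central.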
The Azure IoT Central application ingests data from different kinds of IoT senso
Azure IoT Central also provides a tailored experience to the store operator enabling them to remotely monitor and manage the infrastructure devices.
-### Data transform
+### Data transform (3,4)
-The Azure IoT Central application within a solution can be configured to export raw or aggregated insights to a set of Azure PaaS (Platform-as-a-Service) services that can perform data manipulation and enrich these insights before landing them in a business application.
+The Azure IoT Central application within a solution can be configured to export raw or aggregated insights to a set of Azure PaaS (Platform-as-a-Service) services that can perform data manipulation and enrich these insights before landing them in a business application.
-### Business application
+### Business application (5)
The IoT data can be used to power different kinds of business applications deployed within a retail environment. A fulfillment center manager or employee can use these applications to visualize business insights and take meaningful actions in real time. To learn how to build a real-time Power BI dashboard for your retail team, follow the [tutorial](./tutorial-in-store-analytics-create-app.md).

In this tutorial, you learn:
> [!div class="checklist"]
> * How to deploy the application template
> * How to use the application template
Create the application using following steps:
The following sections walk you through the key features of the application:
-After successfully deploying the app template, you see the **Northwind Traders micro-fulfillment center dashboard**. Northwind Traders is a fictitious retailer that has a micro-fulfillment center being managed in this Azure IoT Central application. On this dashboard, you see information and telemetry about the devices in this template, along with a set of commands, jobs, and actions that you can take. The dashboard is logically split into two sections. On the left, you can monitor the environmental conditions within the fulfillment structure, and on the right, you can monitor the health of a robotic carrier within the facility.
+After successfully deploying the application template, you see the **Northwind Traders micro-fulfillment center dashboard**. Northwind Traders is a fictitious retailer that has a micro-fulfillment center being managed in this Azure IoT Central application. On this dashboard, you see information and telemetry about the devices in this template, along with a set of commands, jobs, and actions that you can take. The dashboard is logically split into two sections. On the left, you can monitor the environmental conditions within the fulfillment structure, and on the right, you can monitor the health of a robotic carrier within the facility.
From the dashboard, you can:
iot-hub Iot Hub Dev Guide Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-azure-ad-rbac.md
Last updated 10/20/2021-+ # Control access to IoT Hub by using Azure Active Directory
iot-hub Iot Hub Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ip-filtering.md
Last updated 03/22/2021 -+ # Use IP filters
iot-hub Iot Hub Python Python Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-python-schedule-jobs.md
At the end of this tutorial, you have two Python apps:
**scheduleJobService.py**, which calls a direct method in the simulated device app and updates the device twin's desired properties using a job.
-> [!NOTE]
-> The **Azure IoT SDK for Python** does not directly support **Jobs** functionality. Instead this tutorial offers an alternate solution utilizing asynchronous threads and timers. For further updates, see the **Service Client SDK** feature list on the [Azure IoT SDK for Python](https://github.com/Azure/azure-iot-sdk-python) page.
->
- [!INCLUDE [iot-hub-include-python-sdk-note](../../includes/iot-hub-include-python-sdk-note.md)]

## Prerequisites
In this section, you create a Python console app that initiates a remote **lockD
3. Add the following `import` statements and variables at the start of the **scheduleJobService.py** file. Replace the `{IoTHubConnectionString}` placeholder with the IoT hub connection string you copied previously in [Get the IoT hub connection string](#get-the-iot-hub-connection-string). Replace the `{deviceId}` placeholder with the device ID you registered in [Register a new device in the IoT hub](#register-a-new-device-in-the-iot-hub):

   ```python
+ import os
import sys
+ import datetime
import time
import threading
import uuid
+ import msrest
- from azure.iot.hub import IoTHubRegistryManager
- from azure.iot.hub.models import Twin, TwinProperties, CloudToDeviceMethod, CloudToDeviceMethodResult, QuerySpecification, QueryResult
+ from azure.iot.hub import IoTHubJobManager, IoTHubRegistryManager
+ from azure.iot.hub.models import JobProperties, JobRequest, Twin, TwinProperties, CloudToDeviceMethod
CONNECTION_STRING = "{IoTHubConnectionString}"
DEVICE_ID = "{deviceId}"
In this section, you create a Python console app that initiates a remote **lockD
UPDATE_PATCH = {"building":43,"floor":3}
TIMEOUT = 60
WAIT_COUNT = 5
- ```
-4. Add the following function that is used to query for devices:
+ # Create IoTHubJobManager
+ iothub_job_manager = IoTHubJobManager.from_connection_string(CONNECTION_STRING)
- ```python
- def query_condition(iothub_registry_manager, device_id):
-
- query_spec = QuerySpecification(query="SELECT * FROM devices WHERE deviceId = '{}'".format(device_id))
- query_result = iothub_registry_manager.query_iot_hub(query_spec, None, 1)
-
- return len(query_result.items)
```
-5. Add the following methods to run the jobs that call the direct method and device twin:
+4. Add the following methods to run the jobs that call the direct method and device twin:
```python
- def device_method_job(job_id, device_id, wait_time, execution_time):
+ def device_method_job(job_id, device_id, execution_time):
print ( "" )
print ( "Scheduling job: " + str(job_id) )
- time.sleep(wait_time)
-
- iothub_registry_manager = IoTHubRegistryManager(CONNECTION_STRING)
-
- if query_condition(iothub_registry_manager, device_id):
- deviceMethod = CloudToDeviceMethod(method_name=METHOD_NAME, payload=METHOD_PAYLOAD)
+ job_request = JobRequest()
+ job_request.job_id = job_id
+ job_request.type = "scheduleDeviceMethod"
+ job_request.start_time = datetime.datetime.utcnow().isoformat()
+ job_request.cloud_to_device_method = CloudToDeviceMethod(method_name=METHOD_NAME, payload=METHOD_PAYLOAD)
+ job_request.max_execution_time_in_seconds = execution_time
+ job_request.query_condition = "DeviceId in ['{}']".format(device_id)
- response = iothub_registry_manager.invoke_device_method(DEVICE_ID, deviceMethod)
+ new_job_response = iothub_job_manager.create_scheduled_job(job_id, job_request)
- print ( "" )
- print ( "Direct method " + METHOD_NAME + " called." )
-
- def device_twin_job(job_id, device_id, wait_time, execution_time):
+ def device_twin_job(job_id, device_id, execution_time):
print ( "" )
print ( "Scheduling job " + str(job_id) )
- time.sleep(wait_time)
-
- iothub_registry_manager = IoTHubRegistryManager(CONNECTION_STRING)
- if query_condition(iothub_registry_manager, device_id):
+ job_request = JobRequest()
+ job_request.job_id = job_id
+ job_request.type = "scheduleUpdateTwin"
+ job_request.start_time = datetime.datetime.utcnow().isoformat()
+ job_request.update_twin = Twin(etag="*", properties=TwinProperties(desired=UPDATE_PATCH))
+ job_request.max_execution_time_in_seconds = execution_time
+ job_request.query_condition = "DeviceId in ['{}']".format(device_id)
- twin = iothub_registry_manager.get_twin(DEVICE_ID)
- twin_patch = Twin(properties= TwinProperties(desired=UPDATE_PATCH))
- twin = iothub_registry_manager.update_twin(DEVICE_ID, twin_patch, twin.etag)
+ new_job_response = iothub_job_manager.create_scheduled_job(job_id, job_request)
- print ( "" )
- print ( "Device twin updated." )
- ```
+ ```
-6. Add the following code to schedule the jobs and update job status. Also include the `main` routine:
+5. Add the following code to schedule the jobs and update job status. Also include the `main` routine:
```python
def iothub_jobs_sample_run():
    try:
- method_thr_id = uuid.uuid4()
- method_thr = threading.Thread(target=device_method_job, args=(method_thr_id, DEVICE_ID, 20, TIMEOUT), kwargs={})
- method_thr.start()
+ method_job_id = uuid.uuid4()
+ device_method_job(method_job_id, DEVICE_ID, TIMEOUT)
print ( "" )
- print ( "Direct method called with Job Id: " + str(method_thr_id) )
+ print ( "Direct method called with Job Id: " + str(method_job_id) )
- twin_thr_id = uuid.uuid4()
- twin_thr = threading.Thread(target=device_twin_job, args=(twin_thr_id, DEVICE_ID, 10, TIMEOUT), kwargs={})
- twin_thr.start()
+ twin_job_id = uuid.uuid4()
+ device_twin_job(twin_job_id, DEVICE_ID, TIMEOUT)
print ( "" )
- print ( "Device twin called with Job Id: " + str(twin_thr_id) )
+ print ( "Device twin called with Job Id: " + str(twin_job_id) )
while True:
    print ( "" )
- if method_thr.is_alive():
- print ( "...job " + str(method_thr_id) + " still running." )
- else:
- print ( "...job " + str(method_thr_id) + " complete." )
+ method_job_status = iothub_job_manager.get_scheduled_job(method_job_id)
+ print ( "...job " + str(method_job_id) + " " + method_job_status.status )
- if twin_thr.is_alive():
- print ( "...job " + str(twin_thr_id) + " still running." )
- else:
- print ( "...job " + str(twin_thr_id) + " complete." )
+ twin_job_status = iothub_job_manager.get_scheduled_job(twin_job_id)
+ print ( "...job " + str(twin_job_id) + " " + twin_job_status.status )
print ( "Job status posted, press Ctrl-C to exit" )
+ time.sleep(WAIT_COUNT)
- status_counter = 0
- while status_counter <= WAIT_COUNT:
- time.sleep(1)
- status_counter += 1
-
+ except msrest.exceptions.HttpOperationError as ex:
+ print ( "" )
+ print ( "Http error {}".format(ex.response.text) )
+ return
except Exception as ex:
    print ( "" )
- print ( "Unexpected error {0}" % ex )
+ print ( "Unexpected error {}".format(ex) )
return
except KeyboardInterrupt:
    print ( "" )
You are now ready to run the applications.
```

2. At another command prompt in your working directory, run the following command to trigger the jobs to lock the door and update the twin:
-
+ ```cmd/sh
+ python scheduleJobService.py
+ ```
You are now ready to run the applications.
In this tutorial, you used a job to schedule a direct method to a device and the update of the device twin's properties.
-To continue getting started with IoT Hub and device management patterns such as end-to-end image-based update in [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image](../iot-hub-device-update/device-update-raspberry-pi.md).
+To continue getting started with IoT Hub and device management patterns such as end-to-end image-based update in [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image](../iot-hub-device-update/device-update-raspberry-pi.md).
key-vault Byok Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/byok-specification.md
Last updated 02/04/2021 --+ # Bring your own key specification
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
Last updated 2/17/2022 --+ # Backend pool management
logic-apps Azure Arc Enabled Logic Apps Create Deploy Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/azure-arc-enabled-logic-apps-create-deploy-workflows.md
ms.suite: integration
Last updated 11/02/2021-+ #Customer intent: As a developer, I want to learn how to create and deploy automated Logic Apps workflows that can run anywhere that Kubernetes can run.
machine-learning How To Attach Arc Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-arc-kubernetes.md
Last updated 11/23/2021 -+ # Configure Kubernetes clusters for machine learning (preview)
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Last updated 11/03/2021 -+ #Customer intent: As a data scientist, I want to figure out why my online endpoint deployment failed so that I can fix it.
private-link Create Private Endpoint Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-cli.md
Last updated 11/07/2020 -+ #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint by using the Azure CLI. # Quickstart: Create a private endpoint by using the Azure CLI
remote-rendering Late Stage Reprojection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/late-stage-reprojection.md
ApiHandle<RenderingSession> session = ...;
session->GetGraphicsBinding()->SetPoseMode(PoseMode::Local); // set local pose mode ```
-The mode can be changed anytime the graphics binding object is available.
+In general, the mode can be changed anytime the graphics binding object is available. There is an important distinction for `GraphicsBindingSimD3D11`: the pose mode can only be changed to `PoseMode.Remote` if it has been initialized with proxy textures. If this isn't the case, `PoseMode.Local` is forced until the graphics binding is reinitialized. See the two overloads of `GraphicsBindingSimD3D11.InitSimulation`, which take either native pointers to [ID3D11Texture2D](/windows/win32/api/d3d11/nn-d3d11-id3d11texture2d) objects (proxy path) or the `width` and `height` of the desired user viewport (non-proxy path).
+
+### Desktop Unity runtime considerations
+
+Due to the technical background of `GraphicsBindingSimD3D11` and the fact of how offscreen rendering works in Unity, the ARR Unity runtime requires the user to specify the desired pose mode on startup of `RemoteManagerUnity` as follows:
+
+```cs
+public static void InitRemoteManager(Camera camera)
+{
+ RemoteUnityClientInit clientInit = new RemoteUnityClientInit(camera, PoseMode.Remote);
+ RemoteManagerUnity.InitializeManager(clientInit);
+}
+```
+
+If `PoseMode.Remote` is specified, the graphics binding will be initialized with offscreen proxy textures and all rendering will be redirected from the Unity scene's main camera to a proxy camera. This code path is only recommended for usage if runtime pose mode changes are required.
+
+> [!WARNING]
+> The proxy camera redirection might be incompatible with other Unity extensions, which expect scene rendering to take place with the main camera. The proxy camera can be retrieved via the `RemoteManagerUnity.ProxyCamera` property if it needs to be queried or registered elsewhere.
+
+If `PoseMode.Local` is used instead, the graphics binding won't be initialized with offscreen proxy textures, and a fast path that renders with the Unity scene's main camera is used. If your use case requires pose mode changes at runtime, specify `PoseMode.Remote` on `RemoteManagerUnity` initialization; otherwise, it is strongly recommended to use only local pose mode and thus the non-proxy rendering path.
## Next steps
role-based-access-control Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/elevate-access-global-admin.md
Last updated 09/10/2021 --+ # Elevate access to manage all Azure subscriptions and management groups
sentinel Automate Responses With Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-responses-with-playbooks.md
Title: Automate threat response with playbooks in Microsoft Sentinel | Microsoft
description: This article explains automation in Microsoft Sentinel, and shows how to use playbooks to automate threat prevention and response. Previously updated : 11/09/2021 Last updated : 02/21/2022
Azure Logic Apps communicates with other systems and services using connectors.
### Permissions required
- To give your SecOps team the ability to use Logic Apps for Security Orchestration, Automation, and Response (SOAR) operations - that is, to create and run playbooks - in Microsoft Sentinel, you can assign Azure roles, either to specific members of your security operations team or to the whole team. The following describes the different available roles, and the tasks for which they should be assigned:
+ To give your SecOps team the ability to use Logic Apps to create and run playbooks in Microsoft Sentinel, assign Azure roles to your security operations team or to specific users on the team. The following describes the different available roles, and the tasks for which they should be assigned:
#### Azure roles for Logic Apps
Azure Logic Apps communicates with other systems and services using connectors.
- [Build the Azure Logic App](tutorial-respond-threats-playbook.md).
-- [Test your Logic App](#run-a-playbook-manually-on-an-alert).
+- [Test your Logic App](#run-a-playbook-manually).
-- Attach the playbook to an [automation rule](#incident-creation-automated-response) or an [analytics rule](#alert-creation-automated-response), or [run manually when required](#run-a-playbook-manually-on-an-alert).
+- Attach the playbook to an [automation rule](#incident-creation-automated-response) or an [analytics rule](#alert-creation-automated-response), or [run manually when required](#run-a-playbook-manually).
### Use cases for playbooks
Two examples:
Playbooks can be run either **manually** or **automatically**.
-Running them manually means that when you get an alert, you can choose to run a playbook on-demand as a response to the selected alert. Currently this feature is supported only for alerts, not for incidents.
+Running them manually means that when you get an alert, you can choose to run a playbook on-demand as a response to the selected alert. Currently this feature is generally available for alerts, and in preview for incidents.
Running them automatically means to set them as an automated response in an analytics rule (for alerts), or as an action in an automation rule (for incidents). [Learn more about automation rules](automate-incident-handling-with-automation-rules.md).
For playbooks that are triggered by incident creation and receive incidents as t
- From the **Automation rules** tab in the **Automation** blade, create a new automation rule and specify the appropriate conditions and desired actions. This automation rule will be applied to any analytics rule that fulfills the specified conditions.

> [!NOTE]
- > **Microsoft Sentinel automation rules require permissions to run playbooks.**
+ > **Microsoft Sentinel requires permissions to run incident-trigger playbooks.**
>
- > To run a playbook from an automation rule, Microsoft Sentinel uses a service account specifically authorized to do so. The use of this account (as opposed to your user account) increases the security level of the service and enables the automation rules API to support CI/CD use cases.
+ > To run a playbook based on the incident trigger, whether manually or from an automation rule, Microsoft Sentinel uses a service account specifically authorized to do so. The use of this account (as opposed to your user account) increases the security level of the service and enables the automation rules API to support CI/CD use cases.
>
- > This account must be granted explicit permissions (taking the form of the **Microsoft Sentinel Automation Contributor** role) on the resource group where the playbook resides. At that point, any automation rule will be able to run any playbook in that resource group.
+ > This account must be granted explicit permissions (taking the form of the **Microsoft Sentinel Automation Contributor** role) on the resource group where the playbook resides. At that point, you will be able to run any playbook in that resource group, either manually or from any automation rule.
>
> When you add the **run playbook** action to an automation rule, a drop-down list of playbooks will appear for your selection. Playbooks to which Microsoft Sentinel does not have permissions will show as unavailable ("grayed out"). You can grant permission to Microsoft Sentinel on the spot by selecting the **Manage playbook permissions** link.
>
For playbooks that are triggered by incident creation and receive incidents as t
See the [complete instructions for creating automation rules](tutorial-respond-threats-playbook.md#respond-to-incidents).
-### Run a playbook manually on an alert
+### Run a playbook manually
-Manual triggering is available from the Microsoft Sentinel portal in the following blades:
+While full automation is the best solution for many incident-handling, investigation, and mitigation tasks, there may often be cases where you would prefer your analysts have more human input and control over the situation. Also, you may want your SOC engineers to be able to test the playbooks they write before fully deploying them in automation rules.
-- In **Incidents** view, choose a specific incident, open its **Alerts** tab, and choose an alert.
+For these and other reasons, Microsoft Sentinel allows you to **run playbooks manually** on-demand for both incidents (now in Preview) and alerts.
-- In **Investigation**, choose a specific alert.
+- **To run a playbook on a specific incident,** select the incident from the grid in the **Incidents** blade. Select **Actions** from the incident details pane, and choose **Run playbook (Preview)** from the context menu. This opens the **Run playbook on incident** panel.
-1. Click on **View playbooks** for the chosen alert. You will get a list of all playbooks that start with an **When a Microsoft Sentinel Alert is triggered** and that you have access to.
+- **To run a playbook on an alert,** select an incident, enter the incident details, and from the **Alerts** tab, choose an alert and select **View playbooks**. This opens the **Alert playbooks** panel.
-1. Click on **Run** on the line of a specific playbook to trigger it.
+In either of these panels, you'll see two tabs: **Playbooks** and **Runs**.
-1. Select the **Runs** tab to view a list of all the times any playbook has been run on this alert. It might take a few seconds for any just-completed run to appear in this list.
+- In the **Playbooks** tab, you'll see a list of all the playbooks that you have access to and that use the appropriate trigger - the **Microsoft Sentinel Incident** trigger for incident playbooks and the **Microsoft Sentinel Alert** trigger for alert playbooks. Each playbook in the list has a **Run** button which you select to run the playbook immediately.
+If you want to run an incident-trigger playbook that you don't see in the list, [see the note about Microsoft Sentinel permissions above](#incident-creation-automated-response).
-1. Clicking on a specific run will open the full run log in Logic Apps.
-
-### Run a playbook manually on an incident
-
-Not supported yet.
+- In the **Runs** tab, you'll see a list of all the times any playbook has been run on the incident or alert you selected. It might take a few seconds for any just-completed run to appear in this list. Selecting a specific run will open the full run log in Logic Apps.
## Manage your playbooks
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
The following data sources are free with Microsoft Sentinel:
- Azure Activity Logs.
- Office 365 Audit Logs, including all SharePoint activity, Exchange admin activity, and Teams.
- Security alerts, including alerts from Microsoft Defender for Cloud, Microsoft 365 Defender, Microsoft Defender for Office 365, Microsoft Defender for Identity, and Microsoft Defender for Endpoint.
-- Microsoft Defender for Cloud and Microsoft Defender for Cloud Apps alerts. However, raw logs for some Microsoft 365 Defender, Defender for Cloud Apps, Azure Active Directory (Azure AD), and Azure Information Protection (AIP) data types are paid.
+- Microsoft Defender for Cloud and Microsoft Defender for Cloud Apps alerts.
+> [!NOTE]
+> Although alerts are free, the raw logs for some Microsoft 365 Defender, Defender for Cloud Apps, Azure Active Directory (Azure AD), and Azure Information Protection (AIP) data types are paid.
+>
The following table lists the free data sources you can enable in Microsoft Sentinel. Some of the data connectors, such as Microsoft 365 Defender and Defender for Cloud Apps, include both free and paid data types.

| Microsoft Sentinel Data Connector | Data type | Free or paid |
For more information about free and paid data sources and connectors, see [Conne
> Data connectors listed as Public Preview do not generate cost. Data connectors generate cost only once they become Generally Available (GA).
>
-## Estimate Microsoft Sentinel costs
-
-If you're not yet using Microsoft Sentinel, you can use the [Microsoft Sentinel pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=azure-sentinel) to estimate the potential cost of using Microsoft Sentinel. Enter *Microsoft Sentinel* in the Search box and select the resulting Microsoft Sentinel tile. The pricing calculator helps you estimate your likely costs based on your expected data ingestion and retention.
-
-For example, you can enter the GB of daily data you expect to ingest in Microsoft Sentinel, and the region for your workspace. The calculator provides the aggregate monthly cost across these components:
-- Log Analytics data ingestion
-- Microsoft Sentinel data analysis
-- Log Analytics data retention

## Manage and monitor Microsoft Sentinel costs
sentinel Bring Your Own Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/bring-your-own-ml.md
Last updated 11/09/2021 -+ # Bring your own Machine Learning (ML) into Microsoft Sentinel
sentinel Collaborate In Microsoft Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/collaborate-in-microsoft-teams.md
Microsoft Sentinel supports a direct integration with [Microsoft Teams](/microso
> [!IMPORTANT]
-> Integration with Microsoft Teams is is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Integration with Microsoft Teams is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Overview
sentinel Design Your Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/design-your-workspace-architecture.md
However, this recommendation for separate workspaces for non-SOC data comes from
- If the data egress cost is enough of a concern to make maintaining separate workspaces worthwhile, use a separate Microsoft Sentinel workspace for each region where you need to reduce the data egress cost.
- <a name="note5"></a>[Decision tree note #5](#decision-tree): We recommend that you have as few workspaces as possible. Use the [Azure pricing calculator](billing.md#estimate-microsoft-sentinel-costs) to estimate the cost and determine which regions you actually need, and combine workspaces for regions with low egress costs. Bandwidth costs may be only a small part of your Azure bill when compared with separate Microsoft Sentinel and Log Analytics ingestion costs.
+ <a name="note5"></a>[Decision tree note #5](#decision-tree): We recommend that you have as few workspaces as possible. Use the [Azure pricing calculator](billing.md#estimate-costs-before-using-microsoft-sentinel) to estimate the cost and determine which regions you actually need, and combine workspaces for regions with low egress costs. Bandwidth costs may be only a small part of your Azure bill when compared with separate Microsoft Sentinel and Log Analytics ingestion costs.
For example, your cost might be estimated as follows:
sentinel Investigate Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-cases.md
To use the investigation graph:
![Use timeline in map to investigate alerts](media/tutorial-investigate-cases/use-timeline.png)
+## Comment on incidents
+
+As a security operations analyst, when investigating an incident you'll want to thoroughly document the steps you take, both to ensure accurate reporting to management and to enable seamless cooperation and collaboration among coworkers. Microsoft Sentinel gives you a rich commenting environment to help you accomplish this.
+
+Another important thing that you can do with comments is enrich your incidents automatically. When you run a playbook on an incident that fetches relevant information from external sources (say, checking a file for malware at VirusTotal), you can have the playbook place the external source's response - along with any other information you define - in the incident's comments.
+
+Comments are simple to use. You access them through the **Comments** tab on the incident details page.
++
+### Frequently asked questions
+
+There are several considerations to take into account when using incident comments. The following list of questions points to these considerations.
+
+#### What kinds of input are supported?
+
+- **Text:** Comments in Microsoft Sentinel support text inputs in plain text, basic HTML, and Markdown. You can also paste copied text, HTML, and Markdown into the comment window.
+
+- **Images:** You can insert links to images in comments and the images will be displayed inline, but the images must already be hosted in a publicly accessible location such as Dropbox, OneDrive, or Google Drive. Images can't be uploaded directly to comments.
+
+#### Is there a size limit on comments?
+
+- **Per comment:** A single comment can contain up to **30,000 characters**.
+
+- **Per incident:** A single incident can contain up to **100 comments**.
+
+ > [!NOTE]
+ > The size limit of a single incident record in the *SecurityIncident* table in Log Analytics is 64 KB. If this limit is exceeded, comments (starting with the earliest) will be truncated, which may affect the comments that will appear in [advanced search](#search-for-incidents) results.
+ >
+ > The actual incident records in the incidents database will not be affected.
+
+#### Who can edit or delete comments?
+
+- **Editing:** Only the author of a comment has permission to edit it.
+
+- **Deleting:** Only users with the [Microsoft Sentinel Contributor](roles.md) role have permission to delete comments. Even the comment's author must have this role in order to delete it.
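Since incident comments are recorded in the *SecurityIncident* table mentioned above, you can also search them with KQL. The following is a minimal sketch, assuming the table is populated in your workspace; the search term `VirusTotal` is an illustrative placeholder, and the exact shape of the `Comments` column should be verified in your own data:

```kusto
// Illustrative sketch: find recent incidents whose comments mention a given string.
// Comments is a dynamic (array) column on the SecurityIncident table.
SecurityIncident
| where TimeGenerated > ago(7d)
| mv-expand Comment = Comments
| where tostring(Comment) contains "VirusTotal"
| project IncidentNumber, Title
```

Keep the 64-KB incident record limit described in the note above in mind: comments truncated from the record won't surface in queries like this one.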
++ ## Closing an incident
By default, incident searches run across the **Incident ID**, **Title**, **Tags*
Using advanced search options changes the search behavior as follows:
-|Search behavior |Description |
+| Search behavior | Description |
|||
-|**Search button color** |The color of the search button changes, depending on the types of parameters currently being used in the search. <br><br>- As long as only the default parameters are selected, the button is grey. <br>- As soon as different parameters are selected, such as advanced search parameters, the button turns blue. |
-|**Auto-refresh** | Using advanced search parameters prevents you from selecting to automatically refresh your results. |
-|**Entity parameters** |All entity parameters are supported for advanced searches. When searching in any entity parameter, the search runs in all entity parameters. |
-|**Search strings** | Searching for a string of words includes all of the words in the search query. Search strings are case sensitive. |
-|**Cross workspace support** | Advanced searches are not supported for cross-workspace views. |
+| **Search button color** | The color of the search button changes, depending on the types of parameters currently being used in the search. <ul><li>As long as only the default parameters are selected, the button is grey. <li>As soon as different parameters are selected, such as advanced search parameters, the button turns blue. |
+| **Auto-refresh** | Using advanced search parameters prevents you from selecting to automatically refresh your results. |
+| **Entity parameters** | All entity parameters are supported for advanced searches. When searching in any entity parameter, the search runs in all entity parameters. |
+| **Search strings** | Searching for a string of words includes all of the words in the search query. Search strings are case sensitive. |
+| **Cross workspace support** | Advanced searches are not supported for cross-workspace views. |
| **Number of search results displayed** | When you're using advanced search parameters, only 50 results are shown at a time. |
-| | |
+| | |
> [!TIP] > If you're unable to find the incident you're looking for, remove search parameters to expand your search. If your search results in too many items, add more filters to narrow down your results.
sentinel Sap Deploy Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-deploy-solution.md
description: Learn how to deploy the Microsoft Sentinel solution for SAP environ
-+ Last updated 11/09/2021
This procedure describes how to ensure that your SAP system has the correct prer
1. Download and install one of the following SAP change requests from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR):
- - **SAP version 750 or later**: Install the SAP change request *NPLK900198*
- - **SAP version 740**: Install the SAP change request *NPLK900200*
+ - **SAP version 750 or later**: Install the SAP change request *NPLK900202*
+ - **SAP version 740**: Install the SAP change request *NPLK900201*
When you're performing this step, be sure to use binary mode to transfer the files to the SAP system, and use the **STMS_IMPORT** SAP transaction code.
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-deploy-alternate.md
Title: Microsoft Sentinel SAP data connector expert configuration options, on-premises deployment, and SAPControl log sources | Microsoft Docs description: Learn how to deploy the Microsoft Sentinel data connector for SAP environments using expert configuration options and an on-premises machine. Also learn more about SAPControl log sources.--++ Previously updated : 11/09/2021 Last updated : 02/22/2022 # Expert configuration options, on-premises deployment, and SAPControl log sources
To ingest SAP Control Web Service logs into Microsoft Sentinel, configure the fo
|**javaseverity** |Enter the lowest, inclusive, severity level for which you want to ingest Web Service logs into Microsoft Sentinel. Values include: <br><br>- **0** = All logs <br>- **1** = Warning <br>- **2** = Error |
+### Configuring User Master data collection
+
+To ingest tables directly from your SAP system with details about your users and role authorizations, configure your **systemconfig.ini** file with a `True`/`False` statement for each table.
+
+For example:
+
+```python
+[ABAP Table Selector]
+USR01_FULL = True
+USR02_FULL = True
+USR02_INCREMENTAL = True
+UST04_FULL = True
+AGR_USERS_FULL = True
+AGR_USERS_INCREMENTAL = True
+USR21_FULL = True
+AGR_1251_FULL = True
+ADR6_FULL = True
+AGR_TCODES_FULL = True
+DEVACCESS_FULL = True
+AGR_DEFINE_FULL = True
+AGR_DEFINE_INCREMENTAL = True
+AGR_PROF_FULL = True
+PAHI_FULL = True
+```
+
+For more information, see [Tables retrieved directly from SAP systems](sap-solution-log-reference.md#tables-retrieved-directly-from-sap-systems).
+ ## Next steps After you have your SAP data connector installed, you can add the SAP-related security content.
sentinel Sap Solution Deploy Snc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-deploy-snc.md
description: Learn how to deploy the Microsoft Sentinel data connector for SAP e
-+ Last updated 11/09/2021
sentinel Sap Solution Detailed Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-detailed-requirements.md
For example, in Ubuntu, you can mount a disk to the `/var/lib/docker` directory
The following SAP log change requests are required for the SAP solution, depending on your SAP Basis version: -- **SAP Basis versions 7.50 and higher**, install NPLK900198-- **For lower versions**, install NPLK900200
+- **SAP Basis versions 7.50 and higher**, install NPLK900202
+- **For lower versions**, install NPLK900201
- **To create an SAP role with the required authorizations**, for any supported SAP Basis version, install NPLK900163. For more information, see [Configure your SAP system](sap-deploy-solution.md#configure-your-sap-system) and [Required ABAP authorizations](#required-abap-authorizations). > [!NOTE]
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-log-reference.md
Title: Microsoft Sentinel SAP solution - Available logs reference | Microsoft Docs
-description: Learn about the SAP logs available from the Microsoft Sentinel SAP solution.
--
+ Title: Microsoft Sentinel SAP solution - data reference | Microsoft Docs
+description: Learn about the SAP logs, tables, and functions available from the Microsoft Sentinel SAP solution.
++ Previously updated : 11/09/2021 Last updated : 02/22/2022
-# Microsoft Sentinel SAP solution logs reference (public preview)
+# Microsoft Sentinel SAP solution data reference (public preview)
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
This article describes the SAP logs available from the Microsoft Sentinel SAP da
This article is intended for advanced SAP users.
-## ABAP Application log
+## Logs produced by the data connector agent
+
+The following sections describe the logs that are produced by the SAP data connector agent and ingested into Microsoft Sentinel.
+
+### ABAP Application log
- **Name in Microsoft Sentinel**: `ABAPAppLog_CL`
This article is intended for advanced SAP users.
Available by using RFC with a custom service based on standard services of XBP interface. This log is generated per client.
-### ABAPAppLog_CL log schema
+#### ABAPAppLog_CL log schema
| Field | Description | | | |
This article is intended for advanced SAP users.
-## ABAP Change Documents log
+### ABAP Change Documents log
- **Name in Microsoft Sentinel**: `ABAPChangeDocsLog_CL`
This article is intended for advanced SAP users.
Available by using RFC with a custom service based on standard services. This log is generated per client.
-### ABAPChangeDocsLog_CL log schema
+#### ABAPChangeDocsLog_CL log schema
| Field | Description |
This article is intended for advanced SAP users.
| Version | Version | | | |
-## ABAP CR log
+### ABAP CR log
- **Name in Microsoft Sentinel**: `ABAPCRLog_CL`
This article is intended for advanced SAP users.
>
-### ABAPCRLog_CL log schema
+#### ABAPCRLog_CL log schema
| Field | Description | | | |
This article is intended for advanced SAP users.
| ViewName | View name | | | |
-## ABAP DB table data log
+### ABAP DB table data log
To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
Available by using RFC with a custom service. This log is generated with data across all clients.
-### ABAPTableDataLog_CL log schema
+#### ABAPTableDataLog_CL log schema
| Field | Description | | - | - |
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| VersionNumber | Version number | | | |
-## ABAP Gateway log
+### ABAP Gateway log
To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
- **Log purpose**: Monitors Gateway activities. Available by the SAP Control Web Service. This log is generated with data across all clients.
-### ABAPOS_GW_CL log schema
+#### ABAPOS_GW_CL log schema
| Field | Description | | | - |
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| SystemNumber | System number | | | |
-## ABAP ICM log
+### ABAP ICM log
To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
Available by the SAP Control Web Service. This log is generated with data across all clients.
-### ABAPOS_ICM_CL log schema
+#### ABAPOS_ICM_CL log schema
| Field | Description | | | - |
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| SystemNumber | System number | | | |
-## ABAP Job log
+### ABAP Job log
- **Name in Microsoft Sentinel**: `ABAPJobLog_CL`
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
Available by using RFC with a custom service based on standard services of XBP interfaces. This log is generated with data across all clients.
-### ABAPJobLog_CL log schema
+#### ABAPJobLog_CL log schema
| Field | Description |
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| WorkProcessNumber | Work process Number | | | |
-## ABAP Security Audit log
+### ABAP Security Audit log
- **Name in Microsoft Sentinel**: `ABAPAuditLog_CL`
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
Available by using RFC XAL/SAL interfaces. SAL is available starting from version Basis 7.50. This log is generated with data across all clients.
-### ABAPAuditLog_CL log schema
+#### ABAPAuditLog_CL log schema
| Field | Description | | -- | - |
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| Variable4 | Message variable 4 | | | |
-## ABAP Spool log
+### ABAP Spool log
- **Name in Microsoft Sentinel**: `ABAPSpoolLog_CL`
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
Available by using RFC with a custom service based on standard tables. This log is generated with data across all clients.
-### ABAPSpoolLog_CL log schema
+#### ABAPSpoolLog_CL log schema
| Field | Description | | -- | |
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| ValueAuthCheck | Value auth check | | | |
-## APAB Spool Output log
+### APAB Spool Output log
- **Name in Microsoft Sentinel**: `ABAPSpoolOutputLog_CL`
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
Available by using RFC with a custom service based on standard tables. This log is generated with data across all clients.
-### ABAPSpoolOutputLog_CL log schema
+#### ABAPSpoolOutputLog_CL log schema
| Field | Description | | - | -- |
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| | |
-## ABAP SysLog
+### ABAP SysLog
To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
Available by the SAP Control Web Service. This log is generated with data across all clients.
-### ABAPOS_Syslog_CL log schema
+#### ABAPOS_Syslog_CL log schema
| Field | Description |
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| | |
-## ABAP Workflow log
+### ABAP Workflow log
- **Name in Microsoft Sentinel**: `ABAPWorkflowLog_CL`
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
Available by using RFC with a custom service based on standard tables and standard services. This log is generated per client.
-### ABAPWorkflowLog_CL log schema
+#### ABAPWorkflowLog_CL log schema
| Field | Description |
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
-## ABAP WorkProcess log
+### ABAP WorkProcess log
To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
Available by the SAP Control Web Service. This log is generated with data across all clients.
-### ABAPOS_WP_CL log schema
+#### ABAPOS_WP_CL log schema
| Field | Description |
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| | |
-## HANA DB Audit Trail
+### HANA DB Audit Trail
To have this log sent to Microsoft Sentinel, you must [deploy a Microsoft Management Agent](connect-syslog.md) to gather Syslog data from the machine running HANA DB. + - **Name in Microsoft Sentinel**: `Syslog` - **Related SAP documentation**: [General](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.03/en-US/48fd6586304c4f859bf92d64d0cd8b08.html) | [Audit Trail](https://help.sap.com/viewer/b3ee5778bc2e4a089d3299b82ec762a7/2.0.03/en-US/0a57444d217649bf94a19c0b68b470cc.html)
To have this log sent to Microsoft Sentinel, you must [deploy a Microsoft Manage
Available by the Sentinel Linux Agent for Syslog. This log is generated with data across all clients.
-### Syslog log schema
+#### Syslog log schema
| Field | Description | | - | |
To have this log sent to Microsoft Sentinel, you must [deploy a Microsoft Manage
| SyslogMessage | Message, an unparsed audit trail message | | | |
-## JAVA files
+### JAVA files
To have this log sent to Microsoft Sentinel, you must [add it manually to the **systemconfig.ini** file](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel).
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
Available by the SAP Control Web Service. This log is generated with data across all clients.
-### JavaFilesLogsCL log schema
+#### JavaFilesLogsCL log schema
| Field | Description |
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| User | User | | | |
+## Tables retrieved directly from SAP systems
+
+This section lists the data tables that are retrieved directly from the SAP system and ingested into Microsoft Sentinel exactly as they are.
+
+To have the data from these tables ingested into Microsoft Sentinel, configure the relevant settings in the **systemconfig.ini** file. For more information, see [Configuring User Master data collection](sap-solution-deploy-alternate.md#configuring-user-master-data-collection).
+
+The data retrieved from these tables provides a clear view of the authorization structure, group membership, and user profiles. It also allows you to track the process of authorization grants and revokes, and identify and govern the risks associated with those processes.
+
+The tables listed below are required to enable functions that identify privileged users and map users to roles, groups, and authorizations.
+
+| Table name | Table description |
+| - | -- |
+| USR01 | User master record (runtime data) |
+| USR02 | Logon data (kernel-side use) |
+| UST04 | User masters<br>Maps users to profiles |
+| AGR_USERS | Assignment of roles to users |
+| AGR_1251 | Authorization data for the activity group |
+| USGRP_USER | Assignment of users to user groups |
+| USR21 | User name/Address key assignment |
+| ADR6 | Email addresses (business address services) |
+| USRSTAMP | Time stamp for all changes to the user |
+| ADCP | Person/Address assignment (business address services) |
+| USR05 | User master parameter ID |
+| AGR_PROF | Profile name for role |
+| AGR_FLAGS | Role attributes |
+| DEVACCESS | Table for development user |
+| AGR_DEFINE | Role definition |
+| AGR_AGRS | Roles in composite roles |
+| PAHI | History of the system, database, and SAP parameters |
+|||
++
+## Functions available from the SAP solution
+
+This section describes the [functions](/azure/azure-monitor/logs/functions) that are available in your workspace after you've deployed the Continuous Threat Monitoring for SAP solution. Find these functions on the Microsoft Sentinel **Logs** page, listed under **Workspace functions**, and use them in your KQL queries.
+
+### SAPUsersAssignments
+
+The **SAPUsersAssignments** function gathers data from multiple SAP data sources and creates a user-centric view of the user master data and the roles and profiles currently assigned.
+
+This function summarizes the user assignments to roles and profiles, and returns the following data:
++
+| Field | Description | Data Source/Notes |
+| - | - | - |
+| User | SAP user ID| SAL only |
+| Email | SMTP address| USR21 (SMTP_ADDR) |
+| UserType | User type| USR02 (USTYP) |
+| Timezone | Time zone| USR02 (TZONE) |
+| LockedStatus | Lock status| USR02 (UFLAG) |
+| LastSeenDate | Last seen date| USR02 (TRDAT) |
+| LastSeenTime | Last seen time| USR02 (LTIME) |
+| UserGroupAuth | User group in user master maintenance| USR02 (CLASS) |
+| Profiles |Set of profiles (default maximum set size = 50)|`["Profile 1", "Profile 2",...,"profile 50"]` |
+| DirectRoles | Set of directly assigned roles (default max set size = 50) |`["Role 1", "Role 2",...,"Role 50"]` |
+| ChildRoles | Set of indirectly assigned roles (default max set size = 50) |`["Role 1", "Role 2",...,"Role 50"]` |
+| Client | Client ID | |
+| SystemID | System ID | As defined in the connector |
+||||
+
+### SAPUsersGetPrivileged
+
+The **SAPUsersGetPrivileged** function returns a list of privileged users per client and system ID.
+
+Users are considered privileged when they are listed in the *SAP - Privileged Users* watchlist, have been assigned a profile listed in the *SAP - Sensitive Profiles* watchlist, or have been added to a role listed in the *SAP - Sensitive Roles* watchlist.
+
+**Parameters:**
+ - TimeAgo
+ - Optional
+ - Default value: 7 days
+ - The function seeks User master data only from the time defined by the `TimeAgo` value until the time defined by the `now()` value.
+
+The **SAPUsersGetPrivileged** Microsoft Sentinel Function returns the following data:
+
+|Field| Description|
+|-|-|
+|User|SAP user ID |
+|Client| Client ID |
+|SystemID| System ID|
+| | |
+
+### SAPUsersAuthorizations
+
+The **SAPUsersAuthorizations** Microsoft Sentinel function brings together data from several tables to produce a user-centric view of each user's current roles and authorization assignments. Only users with active role and authorization assignments are returned.
+
+**Parameters:**
+ - TimeAgo
+ - Optional
+ - Default value: 7 days
+ - Determines that the function seeks User master data from the time defined by the `TimeAgo` value until the time defined by the `now()` value.
+
+The **SAPUsersAuthorizations** function returns the following data:
+
+|Field| Description |Notes|
+|-|-|-|
+|User| SAP user ID||
+|Roles| Set of roles (default max set size = 50)| `["Role 1", "Role 2",...,"Role 50"]`|
+|AuthorizationsDetails| Set of authorizations (default max set size = 100) |`{ {AuthorizationsDetails1},`<br>`{AuthorizationsDetails2},`<br>`...,`<br>`{AuthorizationsDetails100} }`|
+|Client| Client ID ||
+|SystemID| System ID||
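Because `AuthorizationsDetails` is returned as a set, a common pattern is to expand it so each authorization lands on its own row. This is a hedged sketch; the positional `TimeAgo` argument is an assumption to verify against your deployed function:

```kusto
// Illustrative sketch: one row per user per authorization detail.
SAPUsersAuthorizations(7d)
| mv-expand AuthorizationsDetails
| project User, Roles, AuthorizationsDetails, SystemID, Client
```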
+ ## Next steps
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-security-content.md
Title: Microsoft Sentinel SAP solution - security content reference | Microsoft Docs description: Learn about the built-in security content provided by the Microsoft Sentinel SAP solution.--++ Previously updated : 11/09/2021 Last updated : 02/22/2022 # Microsoft Sentinel SAP solution: security content reference (public preview)
The following tables list the built-in [analytics rules](sap-deploy-solution.md#
||||| |**SAP - High - Login from unexpected network** | Identifies a sign-in from an unexpected network. <br><br>Maintain networks in the [SAP - Networks](#networks) watchlist. | Sign in to the backend system from an IP address that is not assigned to one of the networks. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access | |**SAP - High - SPNego Attack** | Identifies SPNego Replay Attack. | **Data sources**: SAPcon - Audit Log | Impact, Lateral Movement |
-|**SAP - Medium - Brute force attacks** | Identifies brute force attacks on the SAP system, according to failed sign-in attempts for the backend system. | Attempt to sign in from the same IP address to several systems/clients within the scheduled time interval. <br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
+|**SAP - High - Dialog logon attempt from a privileged user** | Identifies dialog sign-in attempts, with the **AUM** type, by privileged users in an SAP system. For more information, see the [SAPUsersGetPrivileged](sap-solution-log-reference.md#sapusersgetprivileged) function. | Attempt to sign in from the same IP address to several systems or clients within the scheduled time interval. <br><br>**Data sources**: SAPcon - Audit Log | Impact, Lateral Movement |
+|**SAP - Medium - Brute force attacks** | Identifies brute force attacks on the SAP system using RFC logons. | Attempt to sign in from the same IP address to several systems/clients within the scheduled time interval using RFC. <br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
|**SAP - Medium - Multiple Logons from the same IP** | Identifies the sign-in of several users from same IP address within a scheduled time interval. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Sign in using several users through the same IP address. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access | |**SAP - Medium - Multiple Logons by User** | Identifies sign-ins of the same user from several terminals within scheduled time interval. <br><br>Available only via the Audit SAL method, for SAP versions 7.5 and higher. | Sign in using the same user, using different IP addresses. <br><br>**Data sources**: SAPcon - Audit Log | PreAttack, Credential Access, Initial Access, Collection <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | |**SAP - Informational - Lifecycle - SAP Notes were implemented in system** | Identifies SAP Note implementation in the system. | Implement an SAP Note using SNOTE/TCI. <br><br>**Data sources**: SAPcon - Change Requests | - |
sentinel Tutorial Respond Threats Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-respond-threats-playbook.md
description: Use this tutorial to help you use playbooks together with automatio
Previously updated : 11/09/2021 Last updated : 02/21/2022
Every playbook must start with a trigger. The trigger defines the action that wi
Now you can define what happens when you call the playbook. You can add actions, logical conditions, loops, or switch case conditions, all by selecting **New step**. This selection opens a new frame in the designer, where you can choose a system or an application to interact with or a condition to set. Enter the name of the system or application in the search bar at the top of the frame, and then choose from the available results.
-In every one of these steps, clicking on any field displays a panel with two menus: **Dynamic content** and **Expression**. From the **Dynamic content** menu, you can add references to the attributes of the alert or incident that was passed to the playbook, including the values and attributes of all the entities involved. From the **Expression** menu, you can choose from a large library of functions to add additional logic to your steps.
+In every one of these steps, clicking on any field displays a panel with two menus: **Dynamic content** and **Expression**. From the **Dynamic content** menu, you can add references to the attributes of the alert or incident that was passed to the playbook, including the values and attributes of all the [mapped entities](map-data-fields-to-entities.md) and [custom details](surface-custom-details-in-alerts.md) contained in the alert or incident. From the **Expression** menu, you can choose from a large library of functions to add additional logic to your steps.
:::image type="content" source="./media/tutorial-respond-threats-playbook/logic-app.png" alt-text="Logical app designer":::
To create an automation rule:
1. If you add a **Run playbook** action, you will be prompted to choose from the drop-down list of available playbooks. Only playbooks that start with the **incident trigger** can be run from automation rules, so only they will appear in the list.<a name="permissions-to-run-playbooks"></a>
+ <a name="explicit-permissions"></a>
+ > [!IMPORTANT]
- > Microsoft Sentinel must be granted explicit permissions in order to run playbooks from automation rules. If a playbook appears "grayed out" in the drop-down list, it means Sentinel does not have permission to that playbook's resource group. Click the **Manage playbook permissions** link to assign permissions.
+ > **Microsoft Sentinel must be granted explicit permissions in order to run playbooks based on the incident trigger**, whether manually or from automation rules. If a playbook appears "grayed out" in the drop-down list, it means Sentinel does not have permission to that playbook's resource group. Click the **Manage playbook permissions** link to assign permissions.
> In the **Manage permissions** panel that opens up, mark the check boxes of the resource groups containing the playbooks you want to run, and click **Apply**. > :::image type="content" source="./media/tutorial-respond-threats-playbook/manage-permissions.png" alt-text="Manage permissions"::: > - You yourself must have **owner** permissions on any resource group to which you want to grant Microsoft Sentinel permissions, and you must have the **Logic App Contributor** role on any resource group containing playbooks you want to run.
You use a playbook to respond to an **alert** by creating an **analytics rule**,
## Run a playbook on demand
-You can also run a playbook on demand.
+You can also manually run a playbook on demand, on both incidents (in Preview) and alerts. This can be useful in situations where you want more human input into and control over orchestration and response processes.
+
+### Run a playbook manually on an alert
+
+1. In the **Incidents** page, select an incident.
+
+1. Select **View full details** at the bottom of the incident details pane.
+
+1. In the incident details page, select the **Alerts** tab, choose the alert you want to run the playbook on, and select the **View playbooks** link at the end of the line of that alert.
+
+1. The **Alert playbooks** pane will open. You'll see a list of all playbooks configured with the **Microsoft Sentinel Alert** Logic Apps trigger that you have access to.
+
+1. Select **Run** on the line of a specific playbook to run it immediately.
+
+You can see the run history for playbooks on an alert by selecting the **Runs** tab on the **Alert playbooks** pane. It might take a few seconds for any just-completed run to appear in the list. Selecting a specific run will open the full run log in Logic Apps.
+
+### Run a playbook manually on an incident
+
+1. In the **Incidents** page, select an incident.
+
+1. From the incident details pane that appears on the right, select **Actions > Run playbook (Preview)**.
+ (Selecting the three dots at the end of the incident's line on the grid, or right-clicking the incident, displays the same list as the **Actions** button.)
- > [!NOTE]
- > Only playbooks using the **alert trigger** can be run on-demand.
+1. The **Run playbook on incident** panel opens on the right. You'll see a list of all playbooks configured with the **Microsoft Sentinel Incident** Logic Apps trigger that you have access to.
-To run a playbook on-demand:
+ > [!NOTE]
+ > If you don't see the playbook you want to run in the list, it means Microsoft Sentinel doesn't have permissions to run playbooks in that resource group ([see the note above](#explicit-permissions)). To grant those permissions, select **Settings** from the main menu, choose the **Settings** tab, expand the **Playbook permissions** expander, and select **Configure permissions**. In the **Manage permissions** panel that opens up, mark the check boxes of the resource groups containing the playbooks you want to run, and select **Apply**.
-1. In the **Incidents** page, select an incident and click on **View full details**.
+1. Select **Run** on the line of a specific playbook to run it immediately.
-2. In the **Alerts** tab, click on the alert you want to run the playbook on, and scroll all the way to the right and click **View playbooks** and select a playbook to **run** from the list of available playbooks on the subscription.
+You can see the run history for playbooks on an incident by selecting the **Runs** tab on the **Run playbook on incident** panel. It might take a few seconds for any just-completed run to appear in the list. Selecting a specific run will open the full run log in Logic Apps.
## Next steps
sentinel Use Playbook Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-playbook-templates.md
You can repeat this process to create multiple playbooks on the same template.
1. Save the playbook. You'll now be able to see it in the **Active Playbooks** tab.
-1. To run this playbook, [set an automated response](automate-responses-with-playbooks.md#set-an-automated-response) or [run manually](automate-responses-with-playbooks.md#run-a-playbook-manually-on-an-alert) (alert trigger only).
+1. To run this playbook, [set an automated response](automate-responses-with-playbooks.md#set-an-automated-response) or [run manually](automate-responses-with-playbooks.md#run-a-playbook-manually).
1. Most of the templates can be used as is, but we recommend making any adjustments required to fit the new playbook to your SOC needs.
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
> > You can also contribute! Join us in the [Microsoft Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
+## February 2022
+
+- [Manually run playbooks based on the incident trigger (Public preview)](#manually-run-playbooks-based-on-the-incident-trigger-public-preview)
+
+### Manually run playbooks based on the incident trigger (Public preview)
+
+While full automation is the best solution for many incident-handling, investigation, and mitigation tasks, there are often cases where you would prefer that your analysts have more human input into, and control over, the situation. You may also want your SOC engineers to be able to test the playbooks they write before fully deploying them in automation rules.
+
+For these and other reasons, Microsoft Sentinel now allows you to [**run playbooks manually on-demand for incidents**](automate-responses-with-playbooks.md#run-a-playbook-manually) as well as alerts.
+
+Learn more about [running incident-trigger playbooks manually](tutorial-respond-threats-playbook.md#run-a-playbook-manually-on-an-incident).
+
## January 2022
- [Support for MITRE ATT&CK techniques (Public preview)](#support-for-mitre-attck-techniques-public-preview)
service-bus-messaging Service Bus Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-managed-service-identity.md
Title: Managed identities for Azure resources with Service Bus
description: This article describes how to use managed identities to access Azure Service Bus entities (queues, topics, and subscriptions). Last updated 01/06/2022 # Authenticate a managed identity with Azure Active Directory to access Azure Service Bus resources
service-bus-messaging Service Bus Performance Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-performance-improvements.md
Service Bus enables clients to send and receive messages via one of three protoc
AMQP is the most efficient, because it maintains the connection to Service Bus. It also implements [batching](#batching-store-access) and [prefetching](#prefetching). Unless explicitly mentioned, all content in this article assumes the use of AMQP or SBMP.

> [!IMPORTANT]
-> The SBMP is only available for .NET Framework. AMQP is the default for .NET Standard.
+> The SBMP protocol is only available for .NET Framework. AMQP is the default for .NET Standard.
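The batching mentioned above can be pictured as accumulating sends client-side and flushing them in one store access, so N messages cost roughly N/batch_size round trips instead of N. The sketch below is a minimal illustration of that idea in plain Python; the class and parameter names are invented for this example and are not the Service Bus SDK's actual API.

```python
class BatchingSender:
    """Accumulate messages and send them in batches to reduce round trips.

    Illustrative only: `transport` stands in for one network call to the broker.
    """

    def __init__(self, transport, batch_size=10):
        self.transport = transport      # callable that takes a list of messages
        self.batch_size = batch_size
        self.pending = []
        self.round_trips = 0            # how many store accesses we actually made

    def send(self, message):
        self.pending.append(message)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.transport(self.pending)  # one store access for the whole batch
            self.round_trips += 1
            self.pending = []


delivered = []
sender = BatchingSender(transport=delivered.extend, batch_size=10)
for i in range(25):
    sender.send(f"msg-{i}")
sender.flush()  # don't forget the final partial batch
print(len(delivered), sender.round_trips)  # → 25 3
```

Twenty-five sends cost only three round trips here; protocols that keep the connection open, like AMQP, make this kind of amortization possible.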
## Choosing the appropriate Service Bus .NET SDK
service-fabric How To Managed Cluster Enable Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-enable-disk-encryption.md
Title: Enable Disk Encryption for Service Fabric managed cluster nodes description: Learn how to enable disk encryption for Azure Service Fabric managed cluster nodes in Windows using an ARM template. Previously updated : 2/14/2022 Last updated : 2/14/2022 # Enable disk encryption for Service Fabric managed cluster nodes
spatial-anchors Anchor Relationships Way Finding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/concepts/anchor-relationships-way-finding.md
Title: Anchor relationships and way-finding description: Learn about the conceptual model behind anchor relationships. Learn to connect anchors within a space and to use the Nearby API to fulfill a way-finding scenario. Last updated 11/20/2020
spatial-anchors Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/concepts/authentication.md
Title: Authentication and authorization description: Learn about the various ways an app or service can authenticate to Azure Spatial Anchors, and the levels of control that you have to gate access to Spatial Anchors. Last updated 11/20/2020
spatial-anchors Coarse Reloc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/concepts/coarse-reloc.md
Title: Coarse relocalization description: Learn how and when to use coarse relocalization. Coarse relocalization helps you find anchors that are near you. Last updated 01/28/2021
spatial-anchors Guidelines Effective Anchor Experiences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/concepts/guidelines-effective-anchor-experiences.md
Title: Guidelines for effective anchor experiences description: Guidelines and considerations to create and locate anchors effectively by using Azure Spatial Anchors. Last updated 11/20/2020
spatial-anchors Logging Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/concepts/logging-diagnostics.md
Title: Logging and diagnostics description: In-depth explanation of how to generate and retrieve logging and diagnostics in Azure Spatial Anchors. Last updated 11/20/2020
spatial-anchors Create Locate Anchors Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/create-locate-anchors-overview.md
Title: Create & locate anchors overview description: In-depth explanation of how to create and locate anchors using Azure Spatial Anchors. Last updated 11/20/2020
spatial-anchors Create Asa Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/create-asa-account.md
Title: Create ASA Account description: Create an Azure Spatial Anchors account Last updated 11/22/2021
spatial-anchors Create Locate Anchors Cpp Ndk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/create-locate-anchors-cpp-ndk.md
Title: Create & locate anchors in C++/NDK description: In-depth explanation of how to create and locate anchors using Azure Spatial Anchors in C++/NDK. Last updated 11/20/2020
spatial-anchors Create Locate Anchors Cpp Winrt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/create-locate-anchors-cpp-winrt.md
Title: Create & locate anchors in C++/WinRT description: In-depth explanation of how to create and locate anchors using Azure Spatial Anchors in C++/WinRT. Last updated 11/20/2020
spatial-anchors Create Locate Anchors Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/create-locate-anchors-java.md
Title: Create & locate anchors in Java description: In-depth explanation of how to create and locate anchors using Azure Spatial Anchors in Java. Last updated 11/20/2020
spatial-anchors Create Locate Anchors Objc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/create-locate-anchors-objc.md
Title: Create & locate anchors in Objective-C description: In-depth explanation of how to create and locate anchors using Azure Spatial Anchors in Objective-C. Last updated 11/20/2020
spatial-anchors Create Locate Anchors Swift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/create-locate-anchors-swift.md
Title: Create & locate anchors in Swift description: In-depth explanation of how to create and locate anchors using Azure Spatial Anchors in Swift. Last updated 11/20/2020
spatial-anchors Create Locate Anchors Unity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/create-locate-anchors-unity.md
Title: Create & locate anchors in Unity description: In-depth explanation of how to create and locate anchors using Azure Spatial Anchors in Unity. Last updated 11/20/2020
spatial-anchors Set Up Coarse Reloc Cpp Ndk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/set-up-coarse-reloc-cpp-ndk.md
Title: Coarse relocalization in C++/NDK description: In-depth explanation of how to create and locate anchors using coarse relocalization in C++/NDK. Last updated 11/20/2020
spatial-anchors Set Up Coarse Reloc Cpp Winrt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/set-up-coarse-reloc-cpp-winrt.md
Title: Coarse relocalization in C++/WinRT description: In-depth explanation of how to create and locate anchors using coarse relocalization in C++/WinRT. Last updated 11/20/2020
spatial-anchors Set Up Coarse Reloc Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/set-up-coarse-reloc-java.md
Title: Coarse relocalization in Java description: In-depth explanation of how to create and locate anchors using coarse relocalization in Java. Last updated 11/20/2020
spatial-anchors Set Up Coarse Reloc Objc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/set-up-coarse-reloc-objc.md
Title: Coarse relocalization in Objective-C description: In-depth explanation of how to create and locate anchors using coarse relocalization in Objective-C. Last updated 11/20/2020
spatial-anchors Set Up Coarse Reloc Swift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/set-up-coarse-reloc-swift.md
Title: Coarse relocalization in Swift description: In-depth explanation of how to create and locate anchors using coarse relocalization in Swift. Last updated 11/20/2020
spatial-anchors Set Up Coarse Reloc Unity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/set-up-coarse-reloc-unity.md
Title: Coarse relocalization in Unity description: In-depth explanation of how to create and locate anchors using coarse relocalization in C#. Last updated 11/20/2020
spatial-anchors Setup Cpp Winrt Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/setup-cpp-winrt-project.md
Title: Install Azure Spatial Anchors for a C++/WinRT HoloLens application description: Configure a C++/WinRT HoloLens project to use Azure Spatial Anchors Last updated 11/20/2020
spatial-anchors Setup Unity Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/setup-unity-project.md
Title: Install Azure Spatial Anchors for Unity description: Configure a Unity project to use Azure Spatial Anchors Last updated 11/12/2021
spatial-anchors Use Coarse Relocalization Without Anchor Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/use-coarse-relocalization-without-anchor-session.md
Title: Use coarse relocalization without an anchor session description: Use coarse relocalization without an anchor session Last updated 01/28/2021
spatial-anchors Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/overview.md
Title: Azure Spatial Anchors overview description: Learn how Azure Spatial Anchors helps you develop cross-platform mixed reality experiences. Last updated 11/20/2020
spatial-anchors Get Started Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-android.md
Title: 'Quickstart: Create an Android app' description: In this quickstart, you learn how to build an Android app using Spatial Anchors. Last updated 11/20/2020
spatial-anchors Get Started Hololens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-hololens.md
Title: 'Quickstart: Create a HoloLens app with DirectX' description: In this quickstart, you learn how to build a HoloLens app using Spatial Anchors. Last updated 11/20/2020
spatial-anchors Get Started Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-ios.md
Title: 'Quickstart: Create an iOS app' description: Learn to create an iOS app using Azure Spatial Anchors programmatically in either Swift or Objective-C. Last updated 11/20/2020
spatial-anchors Get Started Unity Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-android.md
Title: 'Quickstart: Create a Unity Android app' description: In this quickstart, you learn how to build an Android app with Unity using Spatial Anchors. Last updated 03/18/2021
spatial-anchors Get Started Unity Hololens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-hololens.md
Title: 'Quickstart: Create a HoloLens app with Unity' description: In this quickstart, you learn how to build a HoloLens app with Unity using Spatial Anchors. Last updated 03/18/2021
spatial-anchors Get Started Unity Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-ios.md
Title: 'Quickstart: Create a Unity iOS app' description: In this quickstart, you learn how to build an iOS app with Unity using Spatial Anchors. Last updated 03/18/2021
spatial-anchors Get Started Xamarin Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-xamarin-android.md
Title: 'Quickstart: Create a Xamarin Android app' description: In this quickstart, you learn how to build an Android app with Xamarin using Spatial Anchors. Last updated 11/20/2020
spatial-anchors Get Started Xamarin Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-xamarin-ios.md
Title: 'Quickstart: Create a Xamarin iOS app' description: In this quickstart, you learn how to build an iOS app with Xamarin using Spatial Anchors. Last updated 11/20/2020
spatial-anchors Spatial Anchor Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/spatial-anchor-support.md
For support from the Spatial Anchors team and the user community, see [Azure Spa
### Azure Spatial Anchors samples If you are unable to run the samples, please file an issue in the [ASA samples repository](https://github.com/Azure/azure-spatial-anchors-samples/issues) by clicking _New issue_ then _Get started_
-## Provide content feedback
+## Provide content article feedback
At the bottom of each content article, there is an opportunity to open a GitHub issue and provide feedback on the Azure Spatial Anchor documentation content.
spatial-anchors Tutorial New Unity Hololens App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-new-unity-hololens-app.md
Title: 'Tutorial: Create a new HoloLens Unity app' description: In this tutorial, you learn how to create a new HoloLens Unity app using Azure Spatial Anchors. Last updated 2/3/2021
spatial-anchors Tutorial Share Anchors Across Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-share-anchors-across-devices.md
Title: 'Tutorial: Share anchors across sessions and devices' description: In this tutorial, you learn how to share Azure Spatial Anchor identifiers between Android/iOS devices in Unity with a back-end service. Last updated 11/20/2020
spatial-anchors Tutorial Use Cosmos Db To Store Anchors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-use-cosmos-db-to-store-anchors.md
Title: 'Tutorial: Share anchors with Azure Cosmos DB' description: In this tutorial, you learn how to share Azure Spatial Anchors identifiers across Android/iOS devices in Unity with a back-end service and Azure Cosmos DB. Last updated 11/20/2020
spatial-anchors Unity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/unity-overview.md
Title: Azure Spatial Anchors Unity overview description: Learn how Azure Spatial Anchors can be used within Unity Apps. Review quickstarts for Unity for HoloLens, Unity for Android, and Unity for iOS. Last updated 11/12/2021
spring-cloud How To Start Stop Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-start-stop-delete.md
Last updated 10/31/2019 # Start, stop, and delete an application in Azure Spring Cloud
spring-cloud How To Write Log To Custom Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-write-log-to-custom-persistent-storage.md
Last updated 11/17/2021 # How to use Logback to write logs to custom persistent storage
spring-cloud Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-custom-domain.md
Last updated 03/19/2020 # Tutorial: Map an existing custom domain to Azure Spring Cloud
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
Title: Introduction to Azure Storage - Cloud storage on Azure | Microsoft Docs
-description: The core Azure Storage platform is Microsoft's cloud storage solution. Azure Storage provides storage for data objects that is highly available, secure, durable, massively scalable, and redundant.
+ Title: Introduction to Azure Storage - Cloud storage on Azure
+description: The Azure Storage platform is Microsoft's cloud storage solution. Azure Storage provides highly available, secure, durable, massively scalable, and redundant storage for data objects in the cloud. Learn about the services available in Azure Storage and how you can use them in your applications, services, or enterprise solutions.
Previously updated : 04/08/2020 Last updated : 02/20/2022
-# Introduction to the core Azure Storage services
+# Introduction to Azure Storage
-The Azure Storage platform is Microsoft's cloud storage solution for modern data storage scenarios. Core storage services offer a massively scalable object store for data objects, disk storage for Azure virtual machines (VMs), a file system service for the cloud, a messaging store for reliable messaging, and a NoSQL store. The services are:
+The Azure Storage platform is Microsoft's cloud storage solution for modern data storage scenarios. Azure Storage offers highly available, massively scalable, durable, and secure storage for a variety of data objects in the cloud. Azure Storage data objects are accessible from anywhere in the world over HTTP or HTTPS via a REST API. Azure Storage also offers client libraries for developers building applications or services with .NET, Java, Python, JavaScript, C++, and Go. Developers and IT professionals can use Azure PowerShell and Azure CLI to write scripts for data management or configuration tasks. The Azure portal and Azure Storage Explorer provide user-interface tools for interacting with Azure Storage.
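Because every object is reachable over HTTP or HTTPS, a raw REST request needs only a URL and an Authorization header. As a minimal sketch, the Python snippet below shows how a Shared Key authorization header is assembled: the Base64-encoded account key signs a canonicalized string with HMAC-SHA256. The account name, key, and string-to-sign here are placeholders for illustration, not real credentials, and the real string-to-sign format is defined by the Shared Key specification.

```python
import base64
import hashlib
import hmac


def shared_key_header(account: str, account_key_b64: str, string_to_sign: str) -> str:
    """Build a SharedKey Authorization header value for an Azure Storage REST request."""
    key = base64.b64decode(account_key_b64)  # account keys are distributed Base64-encoded
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("utf-8")
    return f"SharedKey {account}:{signature}"


# Placeholder key for illustration only -- never embed a real account key in code.
demo_key = base64.b64encode(b"not-a-real-storage-account-key").decode("utf-8")

# Abbreviated placeholder; the real string-to-sign is a newline-joined list of
# request headers plus the canonicalized resource, per the Shared Key spec.
to_sign = "GET\nx-ms-date:Mon, 21 Feb 2022 00:00:00 GMT\n/mystorageaccount/mycontainer"

print(shared_key_header("mystorageaccount", demo_key, to_sign))
```

In practice the client libraries listed above compute this header for you; the sketch only illustrates that access boils down to plain HTTPS plus an HMAC signature.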
-- **Durable and highly available.** Redundancy ensures that your data is safe in the event of transient hardware failures. You can also opt to replicate data across datacenters or geographical regions for additional protection from local catastrophe or natural disaster. Data replicated in this way remains highly available in the event of an unexpected outage.
+## Benefits of Azure Storage
+
+Azure Storage services offer the following benefits for application developers and IT professionals:
+
+- **Durable and highly available.** Redundancy ensures that your data is safe in the event of transient hardware failures. You can also opt to replicate data across data centers or geographical regions for additional protection from local catastrophe or natural disaster. Data replicated in this way remains highly available in the event of an unexpected outage.
- **Secure.** All data written to an Azure storage account is encrypted by the service. Azure Storage provides you with fine-grained control over who has access to your data.
- **Scalable.** Azure Storage is designed to be massively scalable to meet the data storage and performance needs of today's applications.
- **Managed.** Azure handles hardware maintenance, updates, and critical issues for you.
- **Accessible.** Data in Azure Storage is accessible from anywhere in the world over HTTP or HTTPS. Microsoft provides client libraries for Azure Storage in a variety of languages, including .NET, Java, Node.js, Python, PHP, Ruby, Go, and others, as well as a mature REST API. Azure Storage supports scripting in Azure PowerShell or Azure CLI. And the Azure portal and Azure Storage Explorer offer easy visual solutions for working with your data.
-## Review options for storage in Azure
-
-Azure provides a variety of storage tools and services, including Azure Storage. To determine which Azure technology is best suited for your scenario, see [Review your storage options](/azure/cloud-adoption-framework/ready/considerations/storage-options) in the Azure Cloud Adoption Framework.
-
-## About the core Azure Storage services
+## Azure Storage data services
The Azure Storage platform includes the following data
Each service is accessed through a storage account. To get started, see [Create a storage account](storage-account-create.md).
-## Example scenarios
+## Review options for storing data in Azure
+
+Azure provides a variety of storage tools and services, including Azure Storage. To determine which Azure technology is best suited for your scenario, see [Review your storage options](/azure/cloud-adoption-framework/ready/considerations/storage-options) in the Azure Cloud Adoption Framework.
+
+## Sample scenarios for Azure Storage services
The following table compares Files, Blobs, Disks, Queues, and Tables, and shows example scenarios for each.
Every request to Azure Storage must be authorized. Azure Storage supports the fo
## Encryption
-There are two basic kinds of encryption available for the core storage services. For more information about security and encryption, see the [Azure Storage security guide](../blobs/security-recommendations.md).
+There are two basic kinds of encryption available for Azure Storage. For more information about security and encryption, see the [Azure Storage security guide](../blobs/security-recommendations.md).
### Encryption at rest
When making decisions about how your data is stored and accessed, you should als
## Storage APIs, libraries, and tools
-You can access resources in a storage account by any language that can make HTTP/HTTPS requests. Additionally, the core Azure Storage services offer programming libraries for several popular languages. These libraries simplify many aspects of working with Azure Storage by handling details such as synchronous and asynchronous invocation, batching of operations, exception management, automatic retries, operational behavior, and so forth. Libraries are currently available for the following languages and platforms, with others in the pipeline:
+You can access resources in a storage account from any language that can make HTTP/HTTPS requests. Additionally, Azure Storage offers programming libraries for several popular languages. These libraries simplify many aspects of working with Azure Storage by handling details such as synchronous and asynchronous invocation, batching of operations, exception management, automatic retries, operational behavior, and so forth. Libraries are currently available for the following languages and platforms, with others in the pipeline:
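To picture one of those conveniences, here is a minimal, hand-rolled sketch of retry-with-exponential-backoff, the kind of transient-failure handling the client libraries provide automatically. The function and variable names are invented for this illustration and are not the SDKs' actual retry-policy API.

```python
import time


def with_retries(operation, max_attempts=4, base_delay=0.5):
    """Call operation(), retrying on failure with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...


# Simulated flaky call: fails twice with a transient error, then succeeds.
calls = {"n": 0}


def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "200 OK"


print(with_retries(flaky_request, base_delay=0.01))  # → 200 OK after two retries
```

Real SDK retry policies also distinguish retryable status codes from permanent errors and add jitter, but the shape of the logic is the same.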
### Azure Storage data API and library references
You can access resources in a storage account by any language that can make HTTP
## Next steps
-To get up and running with core Azure Storage services, see [Create a storage account](storage-account-create.md).
+To get up and running with Azure Storage, see [Create a storage account](storage-account-create.md).
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Last updated 02/16/2022
# Configure Azure Storage firewalls and virtual networks
storage Migration Tools Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/migration-tools-comparison.md
description: Basic functionality and comparison between tools used for migration
Previously updated : 02/18/2022 Last updated : 02/21/2022 # Comparison matrix
-The following comparison matrix shows basic functionality of different tools that can be used for migration of unstructured data.
+The following comparison matrix shows basic functionality of different tools that can be used for migration of unstructured data.
## Supported Azure services
-| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) |
-| |--|--|||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Intelligent Data Management](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) |
-| **Support provided by** | Microsoft | [Datadobi](https://support.datadobi.com/s/)<sub>1</sub> | [Data Dynamics](https://www.datdynsupport.com/)<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> |
-| **Azure Files support (all tiers)** | Yes | Yes | Yes | Yes |
-| **Azure NetApp Files support** | No | Yes | Yes | Yes |
-| **Azure Blob Hot / Cool support** | No | Yes (via NFS preview) | Yes | Yes |
-| **Azure Blob Archive tier support** | No | No | No | Yes (as migration destination) |
-| **Azure Data Lake Storage support** | No | No | No | No |
-| **Supported Sources** | Windows Server 2012 R2 and up | NAS & cloud file systems | Any NAS and S3 | NAS, Blob, S3 |
+| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) |
+| |--|--||||
+| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) |
+| **Support provided by** | Microsoft | [Datadobi](https://support.datadobi.com/s/)<sub>1</sub> | [Data Dynamics](https://www.datdynsupport.com/)<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> | [Atempo](https://www.atempo.com/support-en/contacting-support/)<sub>1</sub>|
+| **Azure Files support (all tiers)** | Yes | Yes | Yes | Yes | Yes |
+| **Azure NetApp Files support** | No | Yes | Yes | Yes | Yes |
+| **Azure Blob Hot / Cool support** | No | Yes (via NFS) | Yes | Yes | Yes |
+| **Azure Blob Archive tier support** | No | No | No | Yes | Yes |
+| **Azure Data Lake Storage support** | No | No | Yes | Yes | No |
+| **Supported Sources** | Windows Server 2012 R2 and up | NAS & cloud file systems | Any NAS and S3 | Any NAS, Cloud File Storage, or S3 | Any NAS, S3, PFS, and Swift |
## Supported protocols (source / destination)
-| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) |
-| |--|--|||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Intelligent Data Management](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) |
-| **SMB 2.1** | Yes | Yes | Yes | Yes |
-| **SMB 3.0** | Yes | Yes | Yes | Yes |
-| **SMB 3.1** | Yes | Yes | Yes | Yes |
-| **NFS v3** | No | Yes | Yes | Yes |
-| **NFS v4.1** | No | Yes | No | Yes |
-| **Blob REST API** | No | No | Yes | Yes |
-| **S3** | No | Yes | Yes | Yes |
+| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) |
+| |--|--||||
+| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) |
+| **SMB 2.1** | Yes | Yes | Yes | Yes | Yes |
+| **SMB 3.0** | Yes | Yes | Yes | Yes | Yes |
+| **SMB 3.1** | Yes | Yes | Yes | Yes | Yes |
+| **NFS v3** | No | Yes | Yes | Yes | Yes |
+| **NFS v4.1** | No | Yes | No | Yes | Yes |
+| **Blob REST API** | No | No | Yes | Yes | Yes |
+| **S3** | No | Yes | Yes | Yes | Yes |
## Extended features
-| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) |
-| |--|--|||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Intelligent Data Management](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) |
-| **UID / SID remapping** | No | Yes | Yes | No |
-| **Protocol ACL remapping** | No | No | No | No |
-| **DFS Support** | Yes | Yes | Yes | Yes |
-| **Throttling support** | Yes | Yes | Yes | Yes |
-| **File pattern exclusions** | No | Yes | Yes | Yes (using copy functionality) |
-| **Support for selective file attributes** | Yes | Yes | Yes | Yes (for extended attributes) |
-| **Delete propagations** | Yes | Yes | Yes | Yes |
-| **Follow NTFS junctions** | No | Yes | No | Yes |
-| **Override SMB Owner and Group Owner** | Yes | Yes | Yes | No |
-| **Chain of custody reporting** | No | Yes | No | Yes |
-| **Support for alternate data streams** | No | Yes | Yes | No |
-| **Scheduling for migration** | No | Yes | Yes | Yes |
-| **Preserving ACL** | Yes | Yes | Yes | Yes |
-| **DACL support** | Yes | Yes | Yes | Yes |
-| **SACL support** | Yes | Yes | Yes | No |
-| **Preserving access time** | Yes | Yes | Yes | Yes |
-| **Preserving modified time** | Yes | Yes | Yes | Yes |
-| **Preserving creation time** | Yes | Yes | Yes | Yes |
-| **Azure Data Box support** | Yes | Yes | No | No |
-| **Migration of snapshots** | No | Manual | Yes | No |
-| **Symbolic link support** | No | Yes | No | Yes |
-| **Hard link support** | No | Migrated as separate files | Yes | Yes |
-| **Support for open / locked files** | Yes | Yes | Yes | Yes |
-| **Incremental migration** | Yes | Yes | Yes | Yes |
-| **Switchover support** | No | Yes | Yes | No (manual only) |
-| **[Other features](#other-features)** | [Link](#azure-file-sync)| [Link](#datadobi-dobimigrate) | [Link](#data-dynamics-data-mobility-and-migration) | [Link](#komprise-intelligent-data-management) |
+| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) |
+| |--|--||||
+| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Atempo](https://www.atempo.com/support-en/contacting-support/) |
+| **UID / SID remapping** | No | Yes | Yes | No | No |
+| **Protocol ACL remapping** | No | No | No | No | No |
+| **DFS Support** | Yes | Yes | Yes | Yes | No |
+| **Throttling support** | Yes | Yes | Yes | Yes | Yes |
+| **File pattern exclusions** | No | Yes | Yes | Yes | Yes |
+| **Support for selective file attributes** | Yes | Yes | Yes | Yes | Yes |
+| **Delete propagations** | Yes | Yes | Yes | Yes | Yes |
+| **Follow NTFS junctions** | No | Yes | No | Yes | Yes |
+| **Override SMB Owner and Group Owner** | Yes | Yes | Yes | No | Yes |
+| **Chain of custody reporting** | No | Yes | Yes | Yes | Yes |
+| **Support for alternate data streams** | No | Yes | Yes | No | Yes |
+| **Scheduling for migration** | No | Yes | Yes | Yes | Yes |
+| **Preserving ACL** | Yes | Yes | Yes | Yes | Yes |
+| **DACL support** | Yes | Yes | Yes | Yes | Yes |
+| **SACL support** | Yes | Yes | Yes | No | Yes |
+| **Preserving access time** | Yes | Yes | Yes | Yes | Yes |
+| **Preserving modified time** | Yes | Yes | Yes | Yes | Yes |
+| **Preserving creation time** | Yes | Yes | Yes | Yes | Yes |
+| **Azure Data Box support** | Yes | Yes | Yes | No | Yes |
+| **Migration of snapshots** | No | Manual | Yes | No | No |
+| **Symbolic link support** | No | Yes | No | Yes | Yes |
+| **Hard link support** | No | Migrated as separate files | Yes | Yes | Yes |
+| **Support for open / locked files** | Yes | Yes | Yes | Yes | Yes |
+| **Incremental migration** | Yes | Yes | Yes | Yes | Yes |
+| **Switchover support** | No | Yes | Yes | No (manual only) | Yes |
+| **[Other features](#other-features)** | [Link](#azure-file-sync)| [Link](#datadobi-dobimigrate) | [Link](#data-dynamics-data-mobility-and-migration) | [Link](#komprise-elastic-data-migration) | [Link](#atempo-miria) |
## Assessment and reporting
-| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) |
-| |--|--|||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Intelligent Data Management](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) |
-| **Capacity** | No | Yes | Yes | Yes |
-| **# of files / folders** | No | Yes | Yes | Yes |
-| **Age distribution over time** | No | Yes | Yes | Yes |
-| **Access time** | No | Yes | Yes | Yes |
-| **Modified time** | No | Yes | Yes | Yes |
-| **Creation time** | No | Yes | Yes | Yes |
-| **Per file / object report status** | Partial | Yes | Yes | Yes |
+| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) |
+| |--|--||||
+| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Atempo](https://www.atempo.com/support-en/contacting-support/) |
+| **Capacity** | No | Yes | Yes | Yes | Yes |
+| **# of files / folders** | No | Yes | Yes | Yes | Yes |
+| **Age distribution over time** | No | Yes | Yes | Yes | Yes |
+| **Access time** | No | Yes | Yes | Yes | Yes |
+| **Modified time** | No | Yes | Yes | Yes | Yes |
+| **Creation time** | No | Yes | Yes | Yes | Yes |
+| **Per file / object report status** | Partial | Yes | Yes | Yes | Yes |
## Licensing
-| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) |
-| |--|--|||
-| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Intelligent Data Management](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) |
-| **BYOL** | N / A | Yes | Yes | Yes |
-| **Azure Commitment** | Yes | Yes | Yes | Yes |
+| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) |
+| |--|--||||
+| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview) | [Atempo](https://www.atempo.com/support-en/contacting-support/) |
+| **BYOL** | N / A | Yes | Yes | Yes | Yes |
+| **Azure Commitment** | Yes | Yes | Yes | Yes | No |
## Other features
The following comparison matrix shows basic functionality of different tools that can be used for migration.
- Ability to override NFSv3 permissions and write new mode bits to target
- Ability to convert NFSv3 POSIX draft ACLs to NFSv4 ACLs
- SMB 1 (CIFS)
+- Browser-based access
+- REST API support for configuration, and migration management
- Support 24 x 7 x 365

### Data Dynamics Data Mobility and Migration

- Hash validation
-### Komprise Intelligent Data Management
+### Komprise Elastic Data Migration
- Project/directory based migrations
-- Automatic retry of failures
+- Migration pre-checks with analytics
+- Migration planning with analytics
- Assessment / reporting: File types, file size, project based
- Assessment / reporting: Custom metadata-based searches
-- Full data lifecycle management solution for archival, replication, analytics
-- Access time-based analytics on Blob, S3 data
-- Tagging
+- Simple UI migration management
+- API access to automate large migration jobs
+- Automatic retry of failures
+- Access time-based analytics for Azure Blob Storage, and S3
+- Tagging support
- Support 24 x 7 x 365
+- In-product support via chat built-in
- Hash validation
-*List was last verified on March, 31st 2021.*
+### Atempo Miria
+
+- Custom metadata management
+- Automation through API
+- Petabyte-scale data movements
+- Hash validation
+
+> [!NOTE]
+> List was last verified on February 21, 2022.
-<sub>1</sub> Support provided by ISV, not Microsoft
## See also - [Storage migration overview](../../../common/storage-migration-overview.md)
The following comparison matrix shows basic functionality of different tools that can be used for migration.
- [Migrate to Azure file shares](../../../files/storage-files-migration-overview.md) - [Migrate to Data Lake Storage with WANdisco LiveData Platform for Azure](../../../blobs/migrate-gen2-wandisco-live-data-platform.md) - [Copy or move data to Azure Storage with AzCopy](../../../common/storage-use-azcopy-v10.md)-- [Migrate large datasets to Azure Blob Storage with AzReplicate (sample application)](/samples/azure/azreplicate/azreplicate/)
+- [Migrate large datasets to Azure Blob Storage with AzReplicate (sample application)](/samples/azure/azreplicate/azreplicate/)
+
+> [!IMPORTANT]
+> <sub>1</sub> Support provided by ISV, not Microsoft
+
synapse-analytics Quickstart Create Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace-cli.md
Last updated 02/04/2022 -+ # Quickstart: Create an Azure synapse workspace with Azure CLI
virtual-desktop Azure Stack Hci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci.md
Last updated 11/02/2021 -+ # Set up Azure Virtual Desktop for Azure Stack HCI (preview)
virtual-desktop Create Host Pools Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-azure-marketplace.md
Title: Azure Virtual Desktop host pool Azure portal - Azure
description: How to create an Azure Virtual Desktop host pool by using the Azure portal. -+ Last updated 08/06/2021
virtual-desktop Create Host Pools Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-powershell.md
Last updated 07/23/2021 -++ # Create an Azure Virtual Desktop host pool with PowerShell or the Azure CLI
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
Last updated 10/20/2021 --+ # Automatic VM guest patching for Azure VMs
virtual-machines Capacity Reservation Associate Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set.md
Last updated 08/09/2021 -+ # Associate a virtual machine scale set to a Capacity Reservation group (preview)
virtual-machines Capacity Reservation Associate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-vm.md
Last updated 01/03/2022 -+ # Associate a VM to a Capacity Reservation group (preview)
virtual-machines Capacity Reservation Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-create.md
Last updated 08/09/2021 -+ # Create a Capacity Reservation (preview)
virtual-machines Capacity Reservation Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-modify.md
Last updated 08/09/2021 -+ # Modify a Capacity Reservation (preview)
virtual-machines Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/delete.md
Last updated 01/20/2022 --+ # Delete a VM and attached resources
virtual-machines Disks Pools Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-pools-manage.md
Last updated 11/02/2021 -+ # Manage an Azure disk pool (preview)
virtual-machines Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks.md
Last updated 07/23/2020 -+ # Ephemeral OS disks for Azure VMs
virtual-machines Key Vault Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-linux.md
Last updated 12/02/2019 --+ # Key Vault virtual machine extension for Linux
virtual-machines Key Vault Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-windows.md
Last updated 12/02/2019 --+ # Key Vault virtual machine extension for Windows
virtual-machines How To Enable Write Accelerator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/how-to-enable-write-accelerator.md
Last updated 12/10/2021 -+ # Enable Write Accelerator
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command-managed.md
Last updated 10/27/2021 --+ # Preview: Run scripts in your Linux VM by using managed Run Commands
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-portal.md
Last updated 12/07/2021-+ # Deploy a VM with trusted launch enabled
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command-managed.md
Last updated 10/28/2021 --+ # Preview: Run scripts in your Windows VM by using managed Run Commands
virtual-network Tutorial Create Nat Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-create-nat-gateway-cli.md
Previously updated : 03/10/2021 Last updated : 02/04/2022 # Tutorial: Create a NAT gateway using the Azure CLI
-This tutorial shows you how to use Azure Virtual Network NAT service. You'll create a NAT gateway to provide outbound connectivity for a virtual machine in Azure.
+This tutorial shows you how to use Azure Virtual Network NAT service. You'll create a NAT gateway to provide outbound connectivity for a virtual machine in Azure.
In this tutorial, you learn how to: > [!div class="checklist"]
+>
> * Create a virtual network. > * Create a virtual machine. > * Create a NAT gateway and associate with the virtual network.
In this tutorial, you learn how to:
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] -- This quickstart requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.+
+## Set parameter values to create resources
+
+Set the parameter values for use in creating the required resources. The Bash $RANDOM variable is used to create unique object names.
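The parameter commands themselves live in a docs include and aren't shown in this diff; a minimal sketch of what such a parameter block might look like (the variable names and the `-$RANDOM` suffix convention are illustrative assumptions, the `eastus2` location comes from the removed inline commands):

```shell
# Sketch only: variable names are assumptions, not taken from the article.
# In Bash, $RANDOM expands to a pseudo-random integer between 0 and 32767,
# which keeps resource names unique across repeated runs.
resourceGroup="myResourceGroupNAT-$RANDOM"
location="eastus2"

echo "Resource group: $resourceGroup"
echo "Location:       $location"
```

Later steps can then refer to `$resourceGroup` instead of a hard-coded name, which is what the cleanup command at the end of the tutorial does.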
+
## Create a resource group

Create a resource group with [az group create](/cli/azure/group#az_group_create). An Azure resource group is a logical container into which Azure resources are deployed and managed.
-The following example creates a resource group named **myResourceGroupNAT** in the **eastus2** location:
-
-```azurecli-interactive
- az group create \
- --name myResourceGroupNAT \
- --location eastus2
-```
## Create the NAT gateway
In this section we create the NAT gateway and supporting resources.
### Create public IP address
-To access the Internet, you need one or more public IP addresses for the NAT gateway. Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a public IP address resource named **myPublicIP** in **myResourceGroupNAT**.
+To access the Internet, you need one or more public IP addresses for the NAT gateway. Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a public IP address resource.
-```azurecli-interactive
- az network public-ip create \
- --resource-group myResourceGroupNAT \
- --name myPublicIP \
- --sku standard \
- --allocation static
-```
### Create NAT gateway resource
-Create a global Azure NAT gateway with [az network nat gateway create](/cli/azure/network/nat#az_network_nat_gateway_create). The result of this command will create a gateway resource named **myNATgateway** that uses the public IP address **myPublicIP**. The idle timeout is set to 10 minutes.
+Create a global Azure NAT gateway with [az network nat gateway create](/cli/azure/network/nat#az_network_nat_gateway_create). The result of this command will create a gateway resource that uses the public IP address defined in the previous step. The idle timeout is set to 10 minutes.
-```azurecli-interactive
- az network nat gateway create \
- --resource-group myResourceGroupNAT \
- --name myNATgateway \
- --public-ip-addresses myPublicIP \
- --idle-timeout 10
- ```
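The parameterized replacement for the removed inline command isn't visible in this diff; a sketch of what it presumably looks like once the hard-coded names become parameters (the variable names here are assumptions). The command string is composed and echoed rather than executed, since this is only an illustration:

```shell
# Sketch only: the inline command moved into a docs include, so the exact
# parameter names below ($resourceGroup, $natGateway, $publicIp) are assumptions.
resourceGroup="myResourceGroupNAT"
publicIp="myPublicIP"
natGateway="myNATgateway"

# Compose the command instead of running it; the flags mirror the removed
# inline example, including the 10-minute idle timeout.
cmd="az network nat gateway create --resource-group $resourceGroup \
  --name $natGateway --public-ip-addresses $publicIp --idle-timeout 10"
echo "$cmd"
```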
### Create virtual network
-Create a virtual network named **myVnet** with a subnet named **mySubnet** [az network vnet create](/cli/azure/network/vnet#az_network_vnet_create) in the **myResourceGroup** resource group. The IP address space for the virtual network is **10.1.0.0/16**. The subnet within the virtual network is **10.1.0.0/24**.
+Create a virtual network with a subnet with [az network vnet create](/cli/azure/network/vnet#az_network_vnet_create). The IP address space for the virtual network is **10.1.0.0/16**. The subnet within the virtual network is **10.1.0.0/24**.
-```azurecli-interactive
- az network vnet create \
- --resource-group myResourceGroupNAT \
- --location eastus2 \
- --name myVnet \
- --address-prefix 10.1.0.0/16 \
- --subnet-name mySubnet \
- --subnet-prefix 10.1.0.0/24
-```
-### Create bastion host
+### Create bastion host subnet
-Create an Azure Bastion host named **myBastionHost** to access the virtual machine.
+Create an Azure Bastion host to access the virtual machine.
Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_create) to create an Azure Bastion subnet.
-```azurecli-interactive
-az network vnet subnet create \
- --resource-group myResourceGroupNAT \
- --name AzureBastionSubnet \
- --vnet-name myVNet \
- --address-prefixes 10.1.1.0/24
-```
-Create a public IP address for the bastion host with [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create).
+### Create public IP address for the bastion host
-```azurecli-interactive
-az network public-ip create \
- --resource-group myResourceGroupNAT \
- --name myBastionIP \
- --sku Standard
-```
+Create a public IP address for the bastion host with [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create).
-Use [az network bastion create](/cli/azure/network/bastion#az_network_bastion_create) to create the bastion host.
-```azurecli-interactive
-az network bastion create \
- --resource-group myResourceGroupNAT \
- --name myBastionHost \
- --public-ip-address myBastionIP \
- --vnet-name myVNet \
- --location eastus2
-```
+### Create the bastion host
+
+Use [az network bastion create](/cli/azure/network/bastion#az_network_bastion_create) to create the bastion host.
+ ### Configure NAT service for source subnet
-We'll configure the source subnet **mySubnet** in virtual network **myVnet** to use a specific NAT gateway resource **myNATgateway** with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_update). This command will activate the NAT service on the specified subnet.
+Configure the source subnet in the virtual network to use a specific NAT gateway resource with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_update). This command will activate the NAT service on the specified subnet.
-```azurecli-interactive
- az network vnet subnet update \
- --resource-group myResourceGroupNAT \
- --vnet-name myVnet \
- --name mySubnet \
- --nat-gateway myNATgateway
-```
All outbound traffic to Internet destinations is now using the NAT gateway. It's not necessary to configure a UDR.
+## Create virtual machine
-## Virtual machine
-
-In this section, you'll create a virtual machine to test the NAT gateway to verify the public IP address of the outbound connection.
+Create a virtual machine to test the NAT gateway to verify the public IP address of the outbound connection.
Create the virtual machine with [az vm create](/cli/azure/vm#az_vm_create).
-```azurecli-interactive
-az vm create \
- --name myVM \
- --resource-group myResourceGroupNAT \
- --admin-username azureuser \
- --image win2019datacenter \
- --public-ip-address "" \
- --subnet mySubnet \
- --vnet-name myVNet
-```
Wait for the virtual machine creation to complete before moving on to the next section.

## Test NAT gateway

In this section, we'll test the NAT gateway. We'll first discover the public IP of the NAT gateway. We'll then connect to the test virtual machine and verify the outbound connection through the NAT gateway.
-
+ 1. Sign in to the [Azure portal](https://portal.azure.com) 1. Find the public IP address for the NAT gateway on the **Overview** screen. Select **All services** in the left-hand menu, select **All resources**, and then select **myPublicIP**.
-2. Make note of the public IP address:
+1. Make note of the public IP address:
:::image type="content" source="./media/tutorial-create-nat-gateway-portal/find-public-ip.png" alt-text="Discover public IP address of NAT gateway" border="true":::
-3. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myVM** that is located in the **myResourceGroupNAT** resource group.
+1. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myVM** that is located in the **myResourceGroupNAT** resource group.
-4. On the **Overview** page, select **Connect**, then **Bastion**.
+1. On the **Overview** page, select **Connect**, then **Bastion**.
-5. Select the blue **Use Bastion** button.
+1. Select the blue **Use Bastion** button.
-6. Enter the username and password entered during VM creation.
+1. Enter the username and password entered during VM creation.
-7. Open **Internet Explorer** on **myTestVM**.
+1. Open **Internet Explorer** on **myTestVM**.
-8. Enter **https://whatsmyip.com** in the address bar.
+1. Enter **https://whatsmyip.com** in the address bar.
-9. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
+1. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
:::image type="content" source="./media/tutorial-create-nat-gateway-portal/my-ip.png" alt-text="Internet Explorer showing external outbound IP" border="true":::

## Clean up resources

If you're not going to continue to use this application, delete
-the virtual network, virtual machine, and NAT gateway with the following steps:
+the virtual network, virtual machine, and NAT gateway with the following CLI command:
-```azurecli-interactive
+```azurecli-interactive
az group delete \
- --name myResourceGroupNAT
+ --name $resourceGroup
```

## Next steps
virtual-network Virtual Network Cli Sample Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-filter-network-traffic.md
ms.devlang: azurecli
Previously updated : 03/20/2018 Last updated : 02/03/2022 - # Filter inbound and outbound VM network traffic using an Azure CLI script sample This script sample creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP, HTTPS, and SSH, while outbound traffic to the internet from the back-end subnet is not permitted. After running the script, you will have one virtual machine with two NICs. Each NIC is connected to a different subnet.
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/bash), or from a local Azure CLI installation. If you use the CLI locally, this script requires that you are running version 2.0.28 or later. To find the installed version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli). If you are running the CLI locally, you also need to run `az login` to create a connection with Azure.
- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-network/filter-network-traffic/filter-network-traffic.sh "Filter VM network traffic")]
+
+### Run the script
+
-## Clean up deployment
+## Clean up deployment
-Run the following command to remove the resource group, VM, and all related resources:
```azurecli
-az group delete --name MyResourceGroup --yes
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the following table links to command-specific documentation:
virtual-network Virtual Network Cli Sample Ipv6 Dual Stack Standard Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-ipv6-dual-stack-standard-load-balancer.md
Previously updated : 07/15/2019 Last updated : 02/03/2022
This article shows you how to deploy a dual stack (IPv4 + IPv6) application in Azure that includes a dual stack virtual network with a dual stack subnet, a Standard Load Balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, dual network security group rules, and dual public IPs.
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/bash), or from a local Azure CLI installation. If you use the CLI locally, this script requires that you are running version 2.0.28 or later. To find the installed version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli). If you are running the CLI locally, you also need to run `az login` to create a connection with Azure.
- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-## Prerequisites
-To use the IPv6 for Azure virtual network feature, you must configure your subscription only once as follows:
-
-```azurecli
-az feature register --name AllowIPv6VirtualNetwork --namespace Microsoft.Network
-az feature register --name AllowIPv6CAOnStandardLB --namespace Microsoft.Network
-```
-
-It takes up to 30 minutes for feature registration to complete. You can check your registration status by running the following Azure CLI command:
-
-```azurecli
-az feature show --name AllowIPv6VirtualNetwork --namespace Microsoft.Network
-az feature show --name AllowIPv6CAOnStandardLB --namespace Microsoft.Network
-```
-
-After the registration is complete, run the following command:
-
-```azurecli
-az provider register --namespace Microsoft.Network
-```
## Sample script
-```azurecli
-# Create a resource group
-az group create \
-  --name DsResourceGroup01 \
-  --location eastus
-
-# Create an IPV4 IP address
-az network public-ip create \
-  --name dsPublicIP_v4 \
-  --resource-group DsResourceGroup01 \
-  --location eastus \
-  --sku STANDARD \
-  --allocation-method static \
-  --version IPv4
-
-# Create an IPV6 IP address
-az network public-ip create \
-  --name dsPublicIP_v6 \
-  --resource-group DsResourceGroup01 \
-  --location eastus \
-  --sku STANDARD \
-  --allocation-method static \
-  --version IPv6
-
-# Create public IP addresses for remote access to VMs
-az network public-ip create \
-  --name dsVM0_remote_access \
-  --resource-group DsResourceGroup01 \
-  --location eastus \
-  --sku Standard \
-  --allocation-method static \
-  --version IPv4
-
-az network public-ip create \
-  --name dsVM1_remote_access \
-  --resource-group DsResourceGroup01 \
-  --location eastus \
-  --sku Standard \
-  --allocation-method static \
-  --version IPv4
-
-# Create load balancer
-
-az network lb create \
-  --name dsLB \
-  --resource-group DsResourceGroup01 \
-  --sku Standard \
-  --location eastus \
-  --frontend-ip-name dsLbFrontEnd_v4 \
-  --public-ip-address dsPublicIP_v4 \
-  --backend-pool-name dsLbBackEndPool_v4
-
-# Create IPv6 front-end
-az network lb frontend-ip create \
-  --lb-name dsLB \
-  --name dsLbFrontEnd_v6 \
-  --resource-group DsResourceGroup01 \
-  --public-ip-address dsPublicIP_v6
-
-# Configure IPv6 back-end address pool
-az network lb address-pool create \
-  --lb-name dsLB \
-  --name dsLbBackEndPool_v6 \
-  --resource-group DsResourceGroup01
-
-# Create a load balancer rule
-
-az network lb rule create \
-  --lb-name dsLB \
-  --name dsLBrule_v4 \
-  --resource-group DsResourceGroup01 \
-  --frontend-ip-name dsLbFrontEnd_v4 \
-  --protocol Tcp \
-  --frontend-port 80 \
-  --backend-port 80 \
-  --backend-pool-name dsLbBackEndPool_v4
-
-
-az network lb rule create \
-  --lb-name dsLB \
-  --name dsLBrule_v6 \
-  --resource-group DsResourceGroup01 \
-  --frontend-ip-name dsLbFrontEnd_v6 \
-  --protocol Tcp \
-  --frontend-port 80 \
-  --backend-port 80 \
-  --backend-pool-name dsLbBackEndPool_v6
-
-# Create an availability set
-az vm availability-set create \
-  --name dsAVset \
-  --resource-group DsResourceGroup01 \
-  --location eastus \
-  --platform-fault-domain-count 2 \
-  --platform-update-domain-count 2
-
-# Create network security group
-
-az network nsg create \
-  --name dsNSG1 \
-  --resource-group DsResourceGroup01 \
-  --location eastus
-
-# Create inbound rule for port 3389
-az network nsg rule create \
-  --name allowRdpIn \
-  --nsg-name dsNSG1 \
-  --resource-group DsResourceGroup01 \
-  --priority 100 \
-  --description "Allow Remote Desktop In" \
-  --access Allow \
-  --protocol "*" \
-  --direction Inbound \
-  --source-address-prefixes "*" \
-  --source-port-ranges "*" \
-  --destination-address-prefixes "*" \
-  --destination-port-ranges 3389
-
-# Create inbound rule for port 80
-az network nsg rule create \
-  --name allowHTTPIn \
-  --nsg-name dsNSG1 \
-  --resource-group DsResourceGroup01 \
-  --priority 200 \
-  --description "Allow HTTP In" \
-  --access Allow \
-  --protocol "*" \
-  --direction Inbound \
-  --source-address-prefixes "*" \
-  --source-port-ranges 80 \
-  --destination-address-prefixes "*" \
-  --destination-port-ranges 80
-
-# Create outbound rule
-
-az network nsg rule create \
-  --name allowAllOut \
-  --nsg-name dsNSG1 \
-  --resource-group DsResourceGroup01 \
-  --priority 300 \
-  --description "Allow All Out" \
-  --access Allow \
-  --protocol "*" \
-  --direction Outbound \
-  --source-address-prefixes "*" \
-  --source-port-ranges "*" \
-  --destination-address-prefixes "*" \
-  --destination-port-ranges "*"
-
-# Create the virtual network
-az network vnet create \
-  --name dsVNET \
-  --resource-group DsResourceGroup01 \
-  --location eastus \
-  --address-prefixes "10.0.0.0/16" "fd00:db8:deca::/48"
-
-# Create a single dual stack subnet
-
-az network vnet subnet create \
-  --name dsSubNET \
-  --resource-group DsResourceGroup01 \
-  --vnet-name dsVNET \
-  --address-prefixes "10.0.0.0/24" "fd00:db8:deca:deed::/64" \
-  --network-security-group dsNSG1
-
-# Create NICs
-az network nic create \
-  --name dsNIC0 \
-  --resource-group DsResourceGroup01 \
-  --network-security-group dsNSG1 \
-  --vnet-name dsVNET \
-  --subnet dsSubNet \
-  --private-ip-address-version IPv4 \
-  --lb-address-pools dsLbBackEndPool_v4 \
-  --lb-name dsLB \
-  --public-ip-address dsVM0_remote_access
-
-az network nic create \
-  --name dsNIC1 \
-  --resource-group DsResourceGroup01 \
-  --network-security-group dsNSG1 \
-  --vnet-name dsVNET \
-  --subnet dsSubNet \
-  --private-ip-address-version IPv4 \
-  --lb-address-pools dsLbBackEndPool_v4 \
-  --lb-name dsLB \
-  --public-ip-address dsVM1_remote_access
-
-# Create IPV6 configurations for each NIC
-
-az network nic ip-config create \
-  --name dsIp6Config_NIC0 \
-  --nic-name dsNIC0 \
-  --resource-group DsResourceGroup01 \
-  --vnet-name dsVNET \
-  --subnet dsSubNet \
-  --private-ip-address-version IPv6 \
-  --lb-address-pools dsLbBackEndPool_v6 \
-  --lb-name dsLB
-
-az network nic ip-config create \
-  --name dsIp6Config_NIC1 \
-  --nic-name dsNIC1 \
-  --resource-group DsResourceGroup01 \
-  --vnet-name dsVNET \
-  --subnet dsSubNet \
-  --private-ip-address-version IPv6 \
-  --lb-address-pools dsLbBackEndPool_v6 \
-  --lb-name dsLB
-
-# Create virtual machine dsVM0
-
-az vm create \
-  --name dsVM0 \
-  --resource-group DsResourceGroup01 \
-  --nics dsNIC0 \
-  --size Standard_A2 \
-  --availability-set dsAVset \
-  --image MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest
-
-# Create virtual machine dsVM1
-
-az vm create \
name dsVM1 \resource-group DsResourceGroup01 \nics dsNIC1 \size Standard_A2 \availability-set dsAVset \image MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest
-```
+### Run the script
-## View IPv6 dual stack virtual network in Azure portal
-You can view the IPv6 dual stack virtual network in Azure portal as follows:
-1. In the portal's search bar, enter *dsVnet*.
-2. When **dsVnet** appears in the search results, select it. This launches the **Overview** page of the dual stack virtual network named *dsVnet*. The dual stack virtual network shows the two NICs with both IPv4 and IPv6 configurations located in the dual stack subnet named *dsSubnet*.
-> [!NOTE]
-> The IPv6 for Azure virtual network is available in the Azure portal in read-only for this preview release.
+> [!TIP]
+> You can view the IPv6 dual stack virtual network in Azure portal on the virtual network page.
+> The dual stack virtual network shows the two NICs with both IPv4 and IPv6 configurations in the dual stack subnet.
## Clean up deployment
-Run the following command to remove the resource group, VM, and all related resources:
```azurecli
-az group delete --name <resourcegroupname> --yes
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the table links to command-specific documentation.
| [az network nsg create](/cli/azure/network/nsg#az_network_nsg_create) | Creates a network security group (NSG), which is a security boundary between the internet and the virtual machine. |
| [az network nsg rule create](/cli/azure/network/nsg/rule#az_network_nsg_rule_create) | Creates an NSG rule to allow inbound traffic. In this sample, port 22 is opened for SSH traffic. |
| [az network nic create](/cli/azure/network/nic#az_network_nic_create) | Creates a virtual network card and attaches it to the virtual network, subnet, and NSG. |
-| [az vm availability-set create](/cli/azure/network/lb/rule#az_network_lb_rule_create) | Creates an availability set. Availability sets ensure application uptime by spreading the virtual machines across physical resources such that if failure occurs, the entire set is not effected. |
+| [az vm availability-set create](/cli/azure/vm/availability-set#az_vm_availability_set_create) | Creates an availability set. Availability sets ensure application uptime by spreading the virtual machines across physical resources such that if failure occurs, the entire set isn't affected. |
| [az vm create](/cli/azure/vm#az_vm_create) | Creates the virtual machine and connects it to the network card, virtual network, subnet, and NSG. This command also specifies the virtual machine image to be used and administrative credentials. |
| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. |
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
+Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
virtual-network Virtual Network Cli Sample Ipv6 Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-ipv6-dual-stack.md
Previously updated : 04/23/2019 Last updated : 02/03/2022
This article shows you how to deploy a dual stack (IPv4 + IPv6) application in Azure that includes a dual stack virtual network with a dual stack subnet, a load balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, dual network security group rules, and dual public IPs.
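A quick sanity check on the address planning in this sample: the dual stack subnet's IPv4 and IPv6 prefixes must sit inside the virtual network's address space. A minimal sketch with Python's `ipaddress` module, using the prefixes from the sample script (illustrative only, not part of the deployment):

```python
import ipaddress

# Address spaces from the sample: the vNet carries one IPv4 and one IPv6 prefix
vnet_v4 = ipaddress.ip_network("10.0.0.0/16")
vnet_v6 = ipaddress.ip_network("fd00:db8:deca::/48")

# The single dual stack subnet carves one prefix out of each family
subnet_v4 = ipaddress.ip_network("10.0.0.0/24")
subnet_v6 = ipaddress.ip_network("fd00:db8:deca:deed::/64")

# Both subnet prefixes must be contained in the matching vNet prefix
assert subnet_v4.subnet_of(vnet_v4)
assert subnet_v6.subnet_of(vnet_v6)
print("subnet prefixes fit the vNet address space")
```

The same containment check applies to any additional subnets you carve out of the dual stack vNet.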
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/bash), or from a local Azure CLI installation. If you use the CLI locally, this script requires that you are running version 2.0.28 or later. To find the installed version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli). If you are running the CLI locally, you also need to run `az login` to create a connection with Azure.
- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-## Prerequisites
-To use the IPv6 for Azure virtual network feature, you must configure your subscription only once as follows:
-
-```azurecli
-az feature register --name AllowIPv6VirtualNetwork --namespace Microsoft.Network
-az feature register --name AllowIPv6CAOnStandardLB --namespace Microsoft.Network
-```
-
-It takes up to 30 minutes for feature registration to complete. You can check your registration status by running the following Azure CLI command:
-
-```azurecli
-az feature show --name AllowIPv6VirtualNetwork --namespace Microsoft.Network
-az feature show --name AllowIPv6CAOnStandardLB --namespace Microsoft.Network
-```
-
-After the registration is complete, run the following command:
-
-```azurecli
-az provider register --namespace Microsoft.Network
-```
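The `az feature show` check described above returns JSON; the field that signals completion is `properties.state`, which reads `Registered` once the feature is enabled. A hedged sketch of parsing that output in Python (the inline JSON below is an assumed, trimmed shape of the real response, which carries more fields):

```python
import json

# Assumed (trimmed) shape of `az feature show` output; the real response has more fields.
sample_output = '''
{
  "name": "Microsoft.Network/AllowIPv6VirtualNetwork",
  "properties": {"state": "Registered"}
}
'''

def is_registered(az_feature_show_json: str) -> bool:
    """Return True once the feature's registration state is 'Registered'."""
    feature = json.loads(az_feature_show_json)
    return feature.get("properties", {}).get("state") == "Registered"

print(is_registered(sample_output))  # → True for the sample above
```

A script could poll the CLI with this predicate until both features report `Registered`, then run `az provider register`.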
## Sample script
-```azurecli
-# Create a resource group
-az group create --name DsResourceGroup01 --location eastus
-
-# Create an IPV4 IP address
-az network public-ip create --name dsPublicIP_v4 --resource-group DsResourceGroup01 --location eastus --sku BASIC --allocation-method dynamic --version IPv4
-
-# Create an IPV6 IP address
-az network public-ip create --name dsPublicIP_v6 --resource-group DsResourceGroup01 --location eastus --sku BASIC --allocation-method dynamic --version IPv6
-
-# Create public IP addresses for remote access to VMs
-az network public-ip create --name dsVM0_remote_access --resource-group DsResourceGroup01 --location eastus --sku BASIC --allocation-method dynamic --version IPv4
-az network public-ip create --name dsVM1_remote_access --resource-group DsResourceGroup01 --location eastus --sku BASIC --allocation-method dynamic --version IPv4
-
-# Create load balancer
-az network lb create --name dsLB --resource-group DsResourceGroup01 --sku Basic --location eastus --frontend-ip-name dsLbFrontEnd_v4 --public-ip-address dsPublicIP_v4 --backend-pool-name dsLbBackEndPool_v4
-
-# Create IPv6 front-end
-az network lb frontend-ip create --lb-name dsLB --name dsLbFrontEnd_v6 --resource-group DsResourceGroup01 --public-ip-address dsPublicIP_v6
-
-# Configure IPv6 back-end address pool
-az network lb address-pool create --lb-name dsLB --name dsLbBackEndPool_v6 --resource-group DsResourceGroup01
-# Create a load balancer rule
+### Run the script
-az network lb rule create --lb-name dsLB --name dsLBrule_v4 --resource-group DsResourceGroup01 --frontend-ip-name dsLbFrontEnd_v4 --protocol Tcp --frontend-port 80 --backend-port 80 --backend-pool-name dsLbBackEndPool_v4
-az network lb rule create --lb-name dsLB --name dsLBrule_v6 --resource-group DsResourceGroup01 --frontend-ip-name dsLbFrontEnd_v6 --protocol Tcp --frontend-port 80 --backend-port 80 --backend-pool-name dsLbBackEndPool_v6
-
-# Create an availability set
-az vm availability-set create --name dsAVset --resource-group DsResourceGroup01 --location eastus --platform-fault-domain-count 2 --platform-update-domain-count 2
-
-# Create network security group
-az network nsg create --name dsNSG1 --resource-group DsResourceGroup01 --location eastus
-
-# Create inbound rule for port 3389
-az network nsg rule create --name allowRdpIn --nsg-name dsNSG1 --resource-group DsResourceGroup01 --priority 100 --description "Allow Remote Desktop In" --access Allow --protocol "*" --direction Inbound --source-address-prefixes "*" --source-port-ranges 3389 --destination-address-prefixes "*" --destination-port-ranges 3389
-
-# Create outbound rule
-az network nsg rule create --name allowAllOut --nsg-name dsNSG1 --resource-group DsResourceGroup01 --priority 100 --description "Allow All Out" --access Allow --protocol "*" --direction Outbound --source-address-prefixes "*" --source-port-ranges "*" --destination-address-prefixes "*" --destination-port-ranges "*"
-
-# Create the virtual network
-az network vnet create --name dsVNET --resource-group DsResourceGroup01 --location eastus --address-prefixes "10.0.0.0/16" "fd00:db8:deca::/48"
-
-# Create a single dual stack subnet
-az network vnet subnet create --name dsSubNET --resource-group DsResourceGroup01 --vnet-name dsVNET --address-prefixes "10.0.0.0/24" "fd00:db8:deca:deed::/64" --network-security-group dsNSG1
-
-# Create NICs
-az network nic create --name dsNIC0 --resource-group DsResourceGroup01 --network-security-group dsNSG1 --vnet-name dsVNET --subnet dsSubNet --private-ip-address-version IPv4 --lb-address-pools dsLbBackEndPool_v4 --lb-name dsLB --public-ip-address dsVM0_remote_access
-az network nic create --name dsNIC1 --resource-group DsResourceGroup01 --network-security-group dsNSG1 --vnet-name dsVNET --subnet dsSubNet --private-ip-address-version IPv4 --lb-address-pools dsLbBackEndPool_v4 --lb-name dsLB --public-ip-address dsVM1_remote_access
-
-# Create IPV6 configurations for each NIC
-az network nic ip-config create --name dsIp6Config_NIC0 --nic-name dsNIC0 --resource-group DsResourceGroup01 --vnet-name dsVNET --subnet dsSubNet --private-ip-address-version IPv6 --lb-address-pools dsLbBackEndPool_v6 --lb-name dsLB
-az network nic ip-config create --name dsIp6Config_NIC1 --nic-name dsNIC1 --resource-group DsResourceGroup01 --vnet-name dsVNET --subnet dsSubNet --private-ip-address-version IPv6 --lb-address-pools dsLbBackEndPool_v6 --lb-name dsLB
-
-# Create virtual machines
-az vm create --name dsVM0 --resource-group DsResourceGroup01 --nics dsNIC0 --size Standard_A2 --availability-set dsAVset --image MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest
-az vm create --name dsVM1 --resource-group DsResourceGroup01 --nics dsNIC1 --size Standard_A2 --availability-set dsAVset --image MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest
-```
+> [!TIP]
+> You can view the IPv6 dual stack virtual network in Azure portal on the virtual network page.
+> The dual stack virtual network shows the two NICs with both IPv4 and IPv6 configurations in the dual stack subnet.
## Clean up deployment
-Run the following command to remove the resource group, VM, and all related resources:
```azurecli
-az group delete --name <resourcegroupname> --yes
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual machine, availability set, load balancer, and all related resources. Each command in the table links to command-specific documentation.
| [az network nsg create](/cli/azure/network/nsg#az_network_nsg_create) | Creates a network security group (NSG), which is a security boundary between the internet and the virtual machine. |
| [az network nsg rule create](/cli/azure/network/nsg/rule#az_network_nsg_rule_create) | Creates an NSG rule to allow inbound traffic. In this sample, port 22 is opened for SSH traffic. |
| [az network nic create](/cli/azure/network/nic#az_network_nic_create) | Creates a virtual network card and attaches it to the virtual network, subnet, and NSG. |
-| [az vm availability-set create](/cli/azure/network/lb/rule#az_network_lb_rule_create) | Creates an availability set. Availability sets ensure application uptime by spreading the virtual machines across physical resources such that if failure occurs, the entire set is not effected. |
+| [az vm availability-set create](/cli/azure/vm/availability-set#az_vm_availability_set_create) | Creates an availability set. Availability sets ensure application uptime by spreading the virtual machines across physical resources such that if failure occurs, the entire set isn't affected. |
| [az vm create](/cli/azure/vm#az_vm_create) | Creates the virtual machine and connects it to the network card, virtual network, subnet, and NSG. This command also specifies the virtual machine image to be used and administrative credentials. |
| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. |
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
+Additional Azure Networking CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
virtual-network Virtual Network Cli Sample Multi Tier Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-multi-tier-application.md
ms.devlang: azurecli
Previously updated : 03/20/2018 Last updated : 02/03/2022
# Create a virtual network for multi-tier applications using an Azure CLI script sample
This script sample creates a virtual network with front-end and back-end subnets. Traffic to the front-end subnet is limited to HTTP and SSH, while traffic to the back-end subnet is limited to MySQL, port 3306. After running the script, you have two virtual machines, one in each subnet, that you can deploy web server and MySQL software to.
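The per-subnet traffic limits described above come from priority-ordered NSG rules: the lowest-numbered rule that matches a flow decides it, and unmatched inbound traffic is denied by default. A simplified model of that evaluation (the rule sets below are hypothetical, mirroring the description rather than the script's actual rules):

```python
# Each rule: (priority, allowed_ports, access). The lowest priority number wins.
def evaluate(rules, port):
    """Return the access of the lowest-priority rule matching the port."""
    for priority, ports, access in sorted(rules, key=lambda r: r[0]):
        if port in ports:
            return access
    return "Deny"  # NSGs deny unmatched inbound traffic by default

# Front-end subnet: only SSH (22) and HTTP (80) are allowed in.
front_end = [(100, {22}, "Allow"), (200, {80}, "Allow")]
# Back-end subnet: only MySQL (3306) is allowed in.
back_end = [(100, {3306}, "Allow")]

print(evaluate(front_end, 80))    # Allow
print(evaluate(front_end, 3306))  # Deny
print(evaluate(back_end, 3306))   # Allow
```

Real NSG rules also match on direction, protocol, and source/destination prefixes; this sketch keeps only the priority-and-port logic.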
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/bash), or from a local Azure CLI installation. If you use the CLI locally, this script requires that you are running version 2.0.28 or later. To find the installed version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli). If you are running the CLI locally, you also need to run `az login` to create a connection with Azure.
- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
## Sample script
+
+### Run the script
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-network/virtual-network-multi-tier-application/virtual-network-multi-tier-application.sh "Virtual network for multi-tier application")]
-## Clean up deployment
+## Clean up deployment
-Run the following command to remove the resource group, VM, and all related resources:
```azurecli
-az group delete --name MyResourceGroup --yes
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the following table links to command-specific documentation:
virtual-network Virtual Network Cli Sample Peer Two Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-peer-two-virtual-networks.md
ms.devlang: azurecli
Previously updated : 03/20/2018 Last updated : 02/03/2022
This script sample creates and connects two virtual networks in the same region through the Azure network. After running the script, you have a peering between two virtual networks.
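One constraint worth keeping in mind when peering: the two virtual networks' address spaces must not overlap. A quick pre-check with Python's `ipaddress` module (the two prefixes below are illustrative, not taken from the script):

```python
import ipaddress

# Hypothetical address spaces for the two vNets to be peered
vnet1 = ipaddress.ip_network("10.0.0.0/16")
vnet2 = ipaddress.ip_network("10.1.0.0/16")

# Peering requires non-overlapping address spaces
assert not vnet1.overlaps(vnet2)
print("address spaces do not overlap; peering is possible")
```

Running the same check against, say, `10.0.0.0/16` and `10.0.1.0/24` would report an overlap, and the peering would need re-addressing first.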
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/bash), or from a local Azure CLI installation. If you use the CLI locally, this script requires that you are running version 2.0.28 or later. To find the installed version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli). If you are running the CLI locally, you also need to run `az login` to create a connection with Azure.
- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-network/peer-two-virtual-networks/peer-two-virtual-networks.sh "Peer two networks")]
+
+### Run the script
+
-## Clean up deployment
+## Clean up deployment
-Run the following command to remove the resource group, VM, and all related resources:
```azurecli
-az group delete --name myResourceGroup --yes
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual machine, and all related resources. Each command in the following table links to command-specific documentation:
virtual-network Virtual Network Cli Sample Route Traffic Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-route-traffic-through-nva.md
editor: ''
tags:
ms.assetid:
ms.devlang: azurecli
Previously updated : 03/20/2018 Last updated : 02/03/2022
# Route traffic through a network virtual appliance - Azure CLI script sample
This script sample creates a virtual network with front-end and back-end subnets. It also creates a VM with IP forwarding enabled to route traffic between the two subnets. After running the script, you can deploy network software, such as a firewall application, to the VM.
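User-defined routes steer traffic to the appliance by next hop, and the route with the longest matching prefix wins. A simplified longest-prefix-match sketch (the route entries and the NVA address are hypothetical, for illustration only):

```python
import ipaddress

# Hypothetical route table: (prefix, next_hop). The NVA forwards between subnets.
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "Internet"),
    (ipaddress.ip_network("10.0.1.0/24"), "NVA 10.0.2.4"),  # front-end subnet via the NVA
]

def next_hop(dest: str) -> str:
    """Pick the matching route with the longest prefix."""
    matches = [(net, hop) for net, hop in routes if ipaddress.ip_address(dest) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.1.5"))  # routed via the NVA
print(next_hop("8.8.8.8"))   # falls through to the default route
```

This is why the script must enable IP forwarding on the NVA's NIC: the appliance receives packets whose destination is not its own address and must pass them on.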
-You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/bash), or from a local Azure CLI installation. If you use the CLI locally, this script requires that you are running version 2.0.28 or later. To find the installed version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli). If you are running the CLI locally, you also need to run `az login` to create a connection with Azure.
- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/virtual-network/route-traffic-through-nva/route-traffic-through-nva.sh "Route traffic through a network virtual appliance")]
+
+### Run the script
+
-## Clean up deployment
+## Clean up deployment
-Run the following command to remove the resource group, VM, and all related resources:
```azurecli
-az group delete --name MyResourceGroup --yes
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create a resource group, virtual network, and network security groups. Each command in the following table links to command-specific documentation:
web-application-firewall Waf Front Door Configure Ip Restriction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-configure-ip-restriction.md
Last updated 12/22/2020
# Configure an IP restriction rule with a Web Application Firewall for Azure Front Door