Updates from: 07/21/2022 01:07:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Display Control Time Based One Time Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/display-control-time-based-one-time-password.md
Previously updated : 12/09/2021 Last updated : 07/20/2022
The following screenshot illustrates a TOTP verification page.
## Next steps
-- Learn how to validate a TOTP code in [Define an Azure AD MFA technical profile](multi-factor-auth-technical-profile.md).
+- Learn more about multifactor authentication in [Enable multifactor authentication in Azure Active Directory B2C](multi-factor-authentication.md?pivots=b2c-custom-policy)
+
+- Learn how to validate a TOTP code in [Define an Azure AD MFA technical profile](multi-factor-auth-technical-profile.md).
+
+- Explore a sample [Azure AD B2C MFA with TOTP using any Authenticator app custom policy in GitHub](https://github.com/azure-ad-b2c/samples/tree/master/policies/totp).
active-directory-b2c Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multi-factor-authentication.md
Previously updated : 06/27/2022 Last updated : 07/20/2022
To enable multifactor authentication, get the custom policy starter pack from GitHub:
- [Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository from `https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack`, and then update the XML files in the **SocialAndLocalAccountsWithMFA** starter pack with your Azure AD B2C tenant name. The **SocialAndLocalAccountsWithMFA** starter pack enables social and local sign-in options, and multifactor authentication options, except for the Authenticator app - TOTP option.
- To support the **Authenticator app - TOTP** MFA option, download the custom policy files from `https://github.com/azure-ad-b2c/samples/tree/master/policies/totp`, and then update the XML files with your Azure AD B2C tenant name. Make sure to include the `TrustFrameworkExtensions.xml`, `TrustFrameworkLocalization.xml`, and `TrustFrameworkBase.xml` XML files from the **SocialAndLocalAccounts** starter pack.
-- Update your [page layout] to version `2.1.9`. For more information, see [Select a page layout](contentdefinitions.md#select-a-page-layout).
+- Update your [page layout] to version `2.1.14`. For more information, see [Select a page layout](contentdefinitions.md#select-a-page-layout).
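For context, the page layout version is set in the `DataUri` element of the corresponding content definition in your policy XML. A minimal sketch follows; the `api.selfasserted` Id matches the standard starter pack, but your content definition Ids may differ:

```xml
<ContentDefinition Id="api.selfasserted">
  <!-- The trailing version segment selects the page layout version. -->
  <DataUri>urn:com:microsoft:aad:b2c:elements:contract:selfasserted:2.1.14</DataUri>
</ContentDefinition>
```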
::: zone-end
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-user-flows.md
As you upload the files, Azure adds the prefix `B2C_1A_` to each.
In this article, you learned how to:
> [!div class="checklist"]
-> * Create a sig- up and sign in user flow
+> * Create a sign-up and sign-in user flow
> * Create a profile editing user flow
> * Create a password reset user flow
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Another MFA-related error message is the one described previously: "Your credentials didn't work."
![Screenshot of the message that says your credentials didn't work.](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)
+If you've configured a legacy per-user **Enabled/Enforced Azure AD Multi-Factor Authentication** setting and you see the error above, you can resolve the problem by removing the per-user MFA setting through these commands:
+
+```powershell
+# Get StrongAuthenticationRequirements configured on a user
+(Get-MsolUser -UserPrincipalName username@contoso.com).StrongAuthenticationRequirements
+
+# Clear StrongAuthenticationRequirements from a user
+$mfa = @()
+Set-MsolUser -UserPrincipalName username@contoso.com -StrongAuthenticationRequirements $mfa
+
+# Verify StrongAuthenticationRequirements are cleared from the user
+(Get-MsolUser -UserPrincipalName username@contoso.com).StrongAuthenticationRequirements
+```
+ If you haven't deployed Windows Hello for Business and if that isn't an option for now, you can configure a Conditional Access policy that excludes the Azure Windows VM Sign-In app from the list of cloud apps that require MFA. To learn more about Windows Hello for Business, see [Windows Hello for Business overview](/windows/security/identity-protection/hello-for-business/hello-identity-verification).
> [!NOTE]
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/5-secure-access-b2b.md
Some organizations use a list of known 'bad actor' domains provided by their
You can control both inbound and outbound access using Cross Tenant Access Settings. In addition, you can trust MFA, Compliant device, and hybrid Azure Active Directory joined device (HAADJ) claims from all or a subset of external Azure AD tenants. When you configure an organization-specific policy, it applies to the entire Azure AD tenant and covers all users from that tenant, regardless of the user's domain suffix.
+You can enable collaboration across Microsoft clouds such as Microsoft Azure China 21Vianet or Microsoft Azure Government with additional configuration. Determine if any of your collaboration partners reside in a different Microsoft cloud. If so, you should [enable collaboration with these partners using Cross Tenant Access Settings](/azure/active-directory/external-identities/cross-cloud-settings).
+ If you wish to allow inbound access to only specific tenants (allowlist), you can set the default policy to block access and then create organization policies to granularly allow access on a per user, group, and application basis. If you wish to block access to specific tenants (blocklist), you can set the default policy as allow and then create organization policies that block access to those specific tenants.
See the following articles on securing external access to resources. We recommen
8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
-9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
active-directory Road To The Cloud Implement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-implement.md
Client workstations are traditionally joined to AD and managed via group policy
[Windows Autopilot](/mem/autopilot/windows-autopilot) is highly recommended to establish streamlined onboarding and device provisioning, which can enforce these directives.
-For more information, see [Get started with cloud native Windows endpoints - Microsoft Endpoint Manager](/mem/cloud-native-windows-endpoints)
+For more information, see [Learn more about cloud-native endpoints](/mem/cloud-native-endpoints-overview)
## Applications
active-directory Aws Single Sign On Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-provisioning-tutorial.md
Title: 'Tutorial: Configure AWS Single Sign-On for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to automatically provision and de-provision user accounts from Azure AD to AWS Single Sign-On.
+ Title: 'Tutorial: Configure AWS IAM Identity Center (successor to AWS Single Sign-On) for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to AWS IAM Identity Center.
documentationcenter: ''
Last updated 02/23/2021
-# Tutorial: Configure AWS Single Sign-On for automatic user provisioning
+# Tutorial: Configure AWS IAM Identity Center (successor to AWS Single Sign-On) for automatic user provisioning
-This tutorial describes the steps you need to perform in both AWS Single Sign-On and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [AWS Single Sign-On](https://console.aws.amazon.com/singlesignon) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both AWS IAM Identity Center (successor to AWS Single Sign-On) and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [AWS IAM Identity Center](https://console.aws.amazon.com/singlesignon) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
> [!div class="checklist"]
-> * Create users in AWS Single Sign-On
-> * Remove users in AWS Single Sign-On when they no longer require access
-> * Keep user attributes synchronized between Azure AD and AWS Single Sign-On
-> * Provision groups and group memberships in AWS Single Sign-On
-> * [Single Sign-On]() to AWS Single Sign-On
+> * Create users in AWS IAM Identity Center
+> * Remove users in AWS IAM Identity Center when they no longer require access
+> * Keep user attributes synchronized between Azure AD and AWS IAM Identity Center
+> * Provision groups and group memberships in AWS IAM Identity Center
+> * [Single Sign-On](aws-single-sign-on-tutorial.md) to AWS IAM Identity Center
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A SAML connection from your Azure AD account to AWS SSO, as described in Tutorial
+* A SAML connection from your Azure AD account to AWS IAM Identity Center, as described in Tutorial
## Step 1. Plan your provisioning deployment
1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and AWS Single Sign-On](../app-provisioning/customize-application-attributes.md).
+3. Determine what data to [map between Azure AD and AWS IAM Identity Center](../app-provisioning/customize-application-attributes.md).
-## Step 2. Configure AWS Single Sign-On to support provisioning with Azure AD
+## Step 2. Configure AWS IAM Identity Center to support provisioning with Azure AD
-1. Open the [AWS SSO Console](https://console.aws.amazon.com/singlesignon).
+1. Open the [AWS IAM Identity Center console](https://console.aws.amazon.com/singlesignon).
2. Choose **Settings** in the left navigation pane.
-3. Navigate to **Settings** -> **Identity source** -> **Provisioning** -> choose **Enable automatic provisioning**.
+3. In **Settings**, select **Enable** in the **Automatic provisioning** section.
-4. In the Inbound automatic provisioning dialog box, copy and save the **SCIM endpoint** and **Access Token**. These values will be entered in the **Tenant URL** and **Secret Token** field in the Provisioning tab of your AWS Single Sign-On application in the Azure portal.
+ ![Screenshot of enabling automatic provisioning.](media/aws-single-sign-on-provisioning-tutorial/automatic-provisioning.png)
+4. In the Inbound automatic provisioning dialog box, copy and save the **SCIM endpoint** and **Access Token** (visible after you select **Show Token**). These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your AWS IAM Identity Center application in the Azure portal.
+ ![Screenshot of extracting provisioning configurations.](media/aws-single-sign-on-provisioning-tutorial/inbound-provisioning.png)
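Before you enter these values in the portal, you can sanity-check them with a direct SCIM request. The sketch below builds such a request with Python's standard library; the endpoint and token values are placeholders, and the `/Users` query is standard SCIM 2.0 rather than anything specific to this tutorial:

```python
import urllib.request

def build_scim_probe(tenant_url: str, secret_token: str) -> urllib.request.Request:
    """Build a GET /Users request against a SCIM 2.0 endpoint."""
    url = tenant_url.rstrip("/") + "/Users?count=1"
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {secret_token}",
            "Accept": "application/scim+json",
        },
    )

# Placeholder values; substitute the SCIM endpoint and Access Token you copied.
req = build_scim_probe(
    "https://scim.us-east-1.amazonaws.com/abc123/scim/v2/", "example-token"
)
print(req.full_url)  # https://scim.us-east-1.amazonaws.com/abc123/scim/v2/Users?count=1
```

Opening the request with `urllib.request.urlopen(req)` should return a SCIM `ListResponse` when the endpoint and token are valid; a 401 response indicates a bad token.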
-## Step 3. Add AWS Single Sign-On from the Azure AD application gallery
+## Step 3. Add AWS IAM Identity Center from the Azure AD application gallery
-Add AWS Single Sign-On from the Azure AD application gallery to start managing provisioning to AWS Single Sign-On. If you have previously setup AWS Single Sign-On for SSO, you can use the same application. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add AWS IAM Identity Center from the Azure AD application gallery to start managing provisioning to AWS IAM Identity Center. If you have previously set up AWS IAM Identity Center for SSO, you can use the same application. Learn more about [adding an application from the gallery](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
The Azure AD provisioning service allows you to scope who will be provisioned ba
* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
-## Step 5. Configure automatic user provisioning to AWS Single Sign-On
+## Step 5. Configure automatic user provisioning to AWS IAM Identity Center
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in AWS IAM Identity Center based on user and/or group assignments in Azure AD.
-### To configure automatic user provisioning for AWS Single Sign-On in Azure AD:
+### To configure automatic user provisioning for AWS IAM Identity Center in Azure AD:
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
 ![Enterprise applications blade](common/enterprise-applications.png)
-2. In the applications list, select **AWS Single Sign-On**.
+2. In the applications list, select **AWS IAM Identity Center**.
- ![The AWS Single Sign-On link in the Applications list](common/all-applications.png)
+ ![Screenshot of the AWS IAM Identity Center link in the Applications list.](common/all-applications.png)
3. Select the **Provisioning** tab.
![Provisioning tab automatic](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input your AWS Single Sign-On **Tenant URL** and **Secret Token** retrieved earlier in Step 2. Click **Test Connection** to ensure Azure AD can connect to AWS Single Sign-On.
+5. Under the **Admin Credentials** section, input your AWS IAM Identity Center **Tenant URL** and **Secret Token** retrieved earlier in Step 2. Click **Test Connection** to ensure Azure AD can connect to AWS IAM Identity Center.
![Token](common/provisioning-testconnection-tenanturltoken.png)
7. Select **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to AWS Single Sign-On**.
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to AWS IAM Identity Center**.
-9. Review the user attributes that are synchronized from Azure AD to AWS Single Sign-On in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in AWS Single Sign-On for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the AWS Single Sign-On API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to AWS IAM Identity Center in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in AWS IAM Identity Center for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the AWS IAM Identity Center API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for Filtering|
|---|---|---|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to AWS Single Sign-On**.
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to AWS IAM Identity Center**.
-11. Review the group attributes that are synchronized from Azure AD to AWS Single Sign-On in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in AWS Single Sign-On for update operations. Select the **Save** button to commit any changes.
+11. Review the group attributes that are synchronized from Azure AD to AWS IAM Identity Center in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in AWS IAM Identity Center for update operations. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for Filtering|
|---|---|---|
12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-13. To enable the Azure AD provisioning service for AWS Single Sign-On, change the **Provisioning Status** to **On** in the **Settings** section.
+13. To enable the Azure AD provisioning service for AWS IAM Identity Center, change the **Provisioning Status** to **On** in the **Settings** section.
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-14. Define the users and/or groups that you would like to provision to AWS Single Sign-On by choosing the desired values in **Scope** in the **Settings** section.
+14. Define the users and/or groups that you would like to provision to AWS IAM Identity Center by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
There are two ways to resolve this
2. Remove the duplicate attributes. For example, having two different attributes from Azure AD both mapped to "phoneNumber___" on the AWS side would result in the error if both attributes have values in Azure AD. Having only one attribute mapped to a "phoneNumber____ " attribute would resolve the error.
### Invalid characters
-Currently AWS SSO is not allowing some other characters that Azure AD supports like tab (\t), new line (\n), return carriage (\r), and characters such as " <|>|;|:% ".
+Currently, AWS IAM Identity Center doesn't allow some characters that Azure AD supports, like tab (\t), new line (\n), carriage return (\r), and characters such as " <|>|;|:% ".
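As an illustration of working around this limitation, a provisioning pipeline could strip the rejected characters from attribute values before they are synchronized. This helper is only a sketch; the character set is an assumption drawn from the list above:

```python
import re

# Characters rejected per the list above: tab, newline, carriage return,
# and < > ; : % (assumed set; verify against current AWS documentation).
_INVALID = re.compile(r"[\t\n\r<>;:%]")

def sanitize_attribute(value: str) -> str:
    """Remove characters the provisioning endpoint rejects."""
    return _INVALID.sub("", value)

print(sanitize_attribute("Sales;\tEMEA"))  # SalesEMEA
```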
-You can also check the AWS SSO troubleshooting tips [here](https://docs.aws.amazon.com/singlesignon/latest/userguide/azure-ad-idp.html#azure-ad-troubleshooting) for more troubleshooting tips
+You can also check the [AWS IAM Identity Center troubleshooting tips](https://docs.aws.amazon.com/singlesignon/latest/userguide/azure-ad-idp.html#azure-ad-troubleshooting) for more guidance.
## Additional resources
active-directory Aws Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with AWS IAM Identity Center (successor to AWS Single Sign-On)'
-description: Learn how to configure single sign-on between Azure Active Directory and AWS IAM Identity Center (successor to AWS Single Sign-On).
+ Title: 'Tutorial: Azure AD SSO integration with AWS Single Sign-On'
+description: Learn how to configure single sign-on between Azure Active Directory and AWS Single Sign-On.
-# Tutorial: Azure AD SSO integration with AWS IAM Identity Center
+# Tutorial: Azure AD SSO integration with AWS Single Sign-On
-In this tutorial, you'll learn how to integrate AWS IAM Identity Center (successor to AWS Single Sign-On) with Azure Active Directory (Azure AD). When you integrate AWS IAM Identity Center with Azure AD, you can:
+In this tutorial, you'll learn how to integrate AWS Single Sign-On with Azure Active Directory (Azure AD). When you integrate AWS Single Sign-On with Azure AD, you can:
-* Control in Azure AD who has access to AWS IAM Identity Center.
-* Enable your users to be automatically signed-in to AWS IAM Identity Center with their Azure AD accounts.
+* Control in Azure AD who has access to AWS Single Sign-On.
+* Enable your users to be automatically signed-in to AWS Single Sign-On with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
## Prerequisites
In this tutorial, you'll learn how to integrate AWS IAM Identity Center (success
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* AWS IAM Identity Center enabled subscription.
+* AWS Single Sign-On enabled subscription.
## Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* AWS IAM Identity Center supports **SP and IDP** initiated SSO.
+* AWS Single Sign-On supports **SP and IDP** initiated SSO.
-* AWS IAM Identity Center supports [**Automated user provisioning**](./aws-single-sign-on-provisioning-tutorial.md).
+* AWS Single Sign-On supports [**Automated user provisioning**](./aws-single-sign-on-provisioning-tutorial.md).
-## Add AWS IAM Identity Center from the gallery
+## Add AWS Single Sign-On from the gallery
-To configure the integration of AWS IAM Identity Center into Azure AD, you need to add AWS IAM Identity Center from the gallery to your list of managed SaaS apps.
+To configure the integration of AWS Single Sign-On into Azure AD, you need to add AWS Single Sign-On from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **AWS IAM Identity Center** in the search box.
-1. Select **AWS IAM Identity Center** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **AWS Single Sign-On** in the search box.
+1. Select **AWS Single Sign-On** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for AWS IAM Identity Center
+## Configure and test Azure AD SSO for AWS Single Sign-On
-Configure and test Azure AD SSO with AWS IAM Identity Center using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS IAM Identity Center.
+Configure and test Azure AD SSO with AWS Single Sign-On using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS Single Sign-On.
-To configure and test Azure AD SSO with AWS IAM Identity Center, perform the following steps:
+To configure and test Azure AD SSO with AWS Single Sign-On, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure AWS IAM Identity Center SSO](#configure-aws-iam-identity-center-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create AWS IAM Identity Center test user](#create-aws-iam-identity-center-test-user)** - to have a counterpart of B.Simon in AWS IAM Identity Center that is linked to the Azure AD representation of user.
+1. **[Configure AWS Single Sign-On SSO](#configure-aws-single-sign-on-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create AWS Single Sign-On test user](#create-aws-single-sign-on-test-user)** - to have a counterpart of B.Simon in AWS Single Sign-On that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **AWS IAM Identity Center** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **AWS Single Sign-On** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
a. Click **Upload metadata file**.
- b. Click on **folder logo** to select metadata file which is explained to download in **[Configure AWS IAM Identity Center SSO](#configure-aws-iam-identity-center-sso)** section and click **Add**.
+ b. Click on the **folder logo** to select the metadata file that you download in the **[Configure AWS Single Sign-On SSO](#configure-aws-single-sign-on-sso)** section, and then click **Add**.
![image2](common/browse-upload-metadata.png)
`https://portal.sso.<REGION>.amazonaws.com/saml/assertion/<ID>`
> [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [AWS IAM Identity Center Client support team](mailto:aws-sso-partners@amazon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [AWS Single Sign-On Client support team](mailto:aws-sso-partners@amazon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. AWS IAM Identity Center application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. The AWS Single Sign-On application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/edit-attribute.png)
> [!NOTE]
- > If ABAC is enabled in AWS IAM Identity Center, the additional attributes may be passed as session tags directly into AWS accounts.
+ > If ABAC is enabled in AWS Single Sign-On, the additional attributes may be passed as session tags directly into AWS accounts.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
 ![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up AWS IAM Identity Center** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up AWS Single Sign-On** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AWS IAM Identity Center.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AWS Single Sign-On.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **AWS IAM Identity Center**.
+1. In the applications list, select **AWS Single Sign-On**.
1. On the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure AWS IAM Identity Center SSO
+## Configure AWS Single Sign-On SSO
-1. To automate the configuration within AWS IAM Identity Center, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+1. To automate the configuration within AWS Single Sign-On, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
![My apps extension](common/install-myappssecure-extension.png)
-2. After adding extension to the browser, click on **Set up AWS IAM Identity Center** will direct you to the AWS IAM Identity Center application. From there, provide the admin credentials to sign into AWS IAM Identity Center. The browser extension will automatically configure the application for you and automate steps 3-10.
+2. After adding the extension to the browser, clicking **Set up AWS Single Sign-On** directs you to the AWS Single Sign-On application. From there, provide the admin credentials to sign in to AWS Single Sign-On. The browser extension will automatically configure the application for you and automate steps 3-10.
![Setup configuration](common/setup-sso.png)
-3. If you want to setup AWS IAM Identity Center manually, in a different web browser window, sign in to your AWS IAM Identity Center company site as an administrator.
+3. If you want to set up AWS Single Sign-On manually, in a different web browser window, sign in to your AWS Single Sign-On company site as an administrator.
-1. Go to the **Services -> Security, Identity, & Compliance -> AWS IAM Identity Center**.
+1. Go to the **Services -> Security, Identity, & Compliance -> AWS Single Sign-On**.
2. In the left navigation pane, choose **Settings**.
-3. On the **Settings** page, find **Identity source**, click on **Actions** pull-down menu, and select Change **identity source**.
+3. On the **Settings** page, find **Identity source** and click on **Change**.
![Screenshot for Identity source change service](./media/aws-single-sign-on-tutorial/settings.png)
-4. On the Change identity source page, choose **External identity provider**.
+4. On the **Change identity source** page, choose **External identity provider**.
![Screenshot for selecting external identity provider section](./media/aws-single-sign-on-tutorial/external-identity-provider.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot for download and upload metadata section](./media/aws-single-sign-on-tutorial/upload-metadata.png)
- a. In the **Service provider metadata** section, find **AWS SSO SAML metadata**, select **Download metadata file** to download the metadata file and save it on your computer and use this metadata file to upload on Azure portal.
+ a. In the **Service provider metadata** section, find **AWS SSO SAML metadata** and select **Download metadata file** to download the metadata file and save it on your computer. You will upload this metadata file in the Azure portal.
- b. Copy **AWS access portal sign-in URL** value, paste this value into the **Sign on URL** text box in the **Basic SAML Configuration section** in the Azure portal.
+ b. Copy the **AWS SSO Sign-in URL** value and paste it into the **Sign on URL** text box in the **Basic SAML Configuration** section in the Azure portal.
- c. In the **Identity provider metadata** section, select **Choose file** to upload the metadata file which you have downloaded from the Azure portal.
+ c. In the **Identity provider metadata** section, choose **Browse** to upload the metadata file that you downloaded from the Azure portal.
d. Choose **Next: Review**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
9. Click **Change identity source**.
-### Create AWS IAM Identity Center test user
+### Create AWS Single Sign-On test user
-1. Open the **AWS IAM Identity Center console**.
+1. Open the **AWS SSO console**.
2. In the left navigation pane, choose **Users**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
b. In the **Email address** field, enter the `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
- c. In the **Confirm email address** field, re-enter the email address from the previous step.
+ c. In the **Confirm email address** field, reenter the email address from the previous step.
d. In the First name field, enter `Jane`.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
f. In the Display name field, enter `Jane Doe`.
- g. Choose **Next**, and then **Next** again.
+ g. Choose **Next: Groups**.
> [!NOTE]
- > Make sure the username entered in AWS IAM Identity Center matches the user's Azure AD sign-in name. This will you help avoid any authentication problems.
+ > Make sure the username entered in AWS SSO matches the user's Azure AD sign-in name. This will help you avoid any authentication problems.
5. Choose **Add user**.
6. Next, you will assign the user to your AWS account. To do so, in the left navigation pane of the
-AWS IAM Identity Center console, choose **AWS accounts**.
+AWS SSO console, choose **AWS accounts**.
7. On the AWS Accounts page, select the AWS organization tab, check the box next to the AWS account you want to assign to the user. Then choose **Assign users**.
8. On the Assign Users page, find and check the box next to the user B.Simon. Then choose **Next: permission set**.
> [!NOTE]
> Permission sets define the level of access that users and groups have to an AWS account. To learn more
-about permission sets, see the **AWS IAM Identity Center Multi Account Permissions** page.
+about permission sets, see the AWS SSO **Permission Sets** page.
10. Choose **Finish**.
> [!NOTE]
-> AWS IAM Identity Center also supports automatic user provisioning, you can find more details [here](./aws-single-sign-on-provisioning-tutorial.md) on how to configure automatic user provisioning.
+> AWS Single Sign-On also supports automatic user provisioning. You can find more details [here](./aws-single-sign-on-provisioning-tutorial.md) on how to configure automatic user provisioning.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to AWS IAM Identity Center sign-in URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to AWS Single Sign-On sign-in URL where you can initiate the login flow.
-* Go to AWS IAM Identity Center sign-in URL directly and initiate the login flow from there.
+* Go to AWS Single Sign-On sign-in URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the AWS IAM Identity Center for which you set up the SSO.
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the AWS Single Sign-On for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the AWS IAM Identity Center tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the AWS IAM Identity Center for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the AWS Single Sign-On tile in My Apps, if configured in SP mode, you will be redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you will be automatically signed in to the AWS Single Sign-On for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure AWS IAM Identity Center you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure AWS Single Sign-On you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory G Suite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|emails.[type eq "work"].address|String|
|organizations.[type eq "work"].department|String|
|organizations.[type eq "work"].title|String|
- |phoneNumbers.[type eq "work"].value|String|
- |phoneNumbers.[type eq "mobile"].value|String|
- |phoneNumbers.[type eq "work_fax"].value|String|
|addresses.[type eq "home"].country|String|
|addresses.[type eq "home"].formatted|String|
|addresses.[type eq "home"].locality|String|
aks Api Server Authorized Ip Ranges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-authorized-ip-ranges.md
Title: API server authorized IP ranges in Azure Kubernetes Service (AKS)
description: Learn how to secure your cluster using an IP address range for access to the API server in Azure Kubernetes Service (AKS) Previously updated : 06/20/2022 Last updated : 07/20/2022 #Customer intent: As a cluster operator, I want to increase the security of my cluster by limiting access to the API server to only the IP addresses that I specify.
az aks update \
    --api-server-authorized-ip-ranges ""
```
+> [!IMPORTANT]
+> When running this command using the PowerShell in Azure Cloud Shell or from your local computer,
> the double-quote string value for the *--api-server-authorized-ip-ranges* argument needs to be [enclosed
+> in single quotes](/powershell/module/microsoft.powershell.core/about/about_quoting_rules#including-quote-characters-in-a-string).
+> Otherwise, an error message is returned indicating an expected argument is missing.
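The quoting difference described in the note can be sketched as follows (a hedged sketch; the resource group and cluster names are placeholders):

```azurecli
# Bash (for example, the default Azure Cloud Shell experience):
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --api-server-authorized-ip-ranges ""

# PowerShell: wrap the empty double quotes in single quotes so the CLI
# still receives an (empty) argument value:
az aks update `
    --resource-group myResourceGroup `
    --name myAKSCluster `
    --api-server-authorized-ip-ranges '""'
```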
+ ## Find existing authorized IP ranges

To find IP ranges that have been authorized, use [az aks show][az-aks-show] and specify the cluster's name and resource group. For example:
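A minimal sketch of such a query (resource names and the JMESPath query string are assumptions, not copied from this commit):

```azurecli
az aks show \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --query apiServerAccessProfile.authorizedIpRanges
```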
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/troubleshooting.md
If your node is in a failed state, you can mitigate by manually updating the VM
## Azure Files and AKS Troubleshooting
+### Azure Files CSI storage driver fails to mount a volume with a secret not in default namespace
+
+If you have configured an Azure Files CSI driver persistent volume or storage class with a storage
+access secret in a namespace other than *default*, the pod does not search in its own namespace
+and returns an error when trying to mount the volume.
+
+This issue has been fixed in the 2022041 release. To mitigate this issue, you have two options:
+
+1. Upgrade the agent node image to the latest release.
+1. Specify the *secretNamespace* setting when configuring the persistent volume configuration.
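For the second option, a minimal sketch of a storage class that pins the secret location; all names are placeholders, and the `secretName`/`secretNamespace` parameters are assumed per the Azure Files CSI driver's storage class parameters:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-csi-customsecret      # placeholder name
provisioner: file.csi.azure.com
parameters:
  secretName: my-azurefile-secret       # placeholder: secret holding the storage account key
  secretNamespace: my-secret-namespace  # the non-default namespace where the secret lives
reclaimPolicy: Delete
volumeBindingMode: Immediate
```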
+ ### What are the default mountOptions when using Azure Files?

Recommended settings:
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
The certificate that has been uploaded to Application Gateway HTTP settings must
**Solution:** If you receive this error message, there's a mismatch between the certificate that has been uploaded to Application Gateway and the one that was uploaded to the backend server.
-Follow steps 1-11 in the preceding method to upload the correct trusted root certificate to Application Gateway.
+Follow steps 1-10 in the preceding method to upload the correct trusted root certificate to Application Gateway.
For more information about how to extract and upload Trusted Root Certificates in Application Gateway, see [Export trusted root certificate (for v2 SKU)](./certificates-for-backend-authentication.md#export-trusted-root-certificate-for-v2-sku).
application-gateway Configuration Http Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-http-settings.md
This setting combined with HTTPS in the listener supports [end-to-end TLS](ssl-o
This setting specifies the port where the back-end servers listen to traffic from the application gateway. You can configure ports ranging from 1 to 65535.
+## Trusted root certificate
+
+If you select HTTPS as the back-end protocol, the Application Gateway requires a trusted root certificate to trust the back-end pool for end-to-end SSL. By default, the **Use well known CA certificate** option is set to **No**. If you plan to use a self-signed certificate, or a certificate signed by an internal Certificate Authority, then you must provide the Application Gateway the matching public certificate that the back-end pool will be using. This certificate must be uploaded directly to the Application Gateway in .CER format.
+
+If you plan to use a certificate on the back-end pool that is signed by a trusted public Certificate Authority, then you can set the **Use well known CA certificate** option to **Yes** and skip uploading a public certificate.
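Uploading the .CER file can be sketched with the Azure CLI (a hedged sketch; resource and file names are placeholders):

```azurecli
az network application-gateway root-cert create \
    --resource-group myResourceGroup \
    --gateway-name myAppGateway \
    --name myTrustedRootCert \
    --cert-file ./trusted-root.cer
```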
+ ## Request timeout

This setting is the number of seconds that the application gateway waits to receive a response from the back-end server.
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
The following table compares the features available with each SKU.
| WebSocket support | &#x2713; | &#x2713; |
| HTTP/2 support | &#x2713; | &#x2713; |
| Connection draining | &#x2713; | &#x2713; |
+| Proxy NTLM authentication | &#x2713; | |
> [!NOTE]
> The autoscaling v2 SKU now supports [default health probes](application-gateway-probe-overview.md#default-health-probe) to automatically monitor the health of all resources in its back-end pool and highlight those backend members that are considered unhealthy. The default health probe is automatically configured for backends that don't have any custom probe configuration. To learn more, see [health probes in application gateway](application-gateway-probe-overview.md).
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
The Private link configuration defines the infrastructure used by Application Ga
- **Frontend IP Configuration**: The frontend IP address that private link should forward traffic to on Application Gateway.
- **Private IP address settings**: specify at least one IP address
1. Select **Add**.
+1. Within your **Application Gateways** properties blade, obtain and make a note of the **Resource ID**; you will require this if setting up a Private Endpoint within a different Azure AD tenant.
**Configure Private Endpoint**
A private endpoint is a network interface that uses a private IP address from th
> [!Note]
> If the public or private IP configuration resource is missing when trying to select a _Target sub-resource_ on the _Resource_ tab of private endpoint creation, please ensure a listener is actively utilizing the respective frontend IP configuration. Frontend IP configurations without an associated listener will not be shown as a _Target sub-resource_.
+> [!Note]
+> If you are setting up the **Private Endpoint** from within another tenant, you will need to use the Azure Application Gateway Resource ID, along with sub-resource as either _appGwPublicFrontendIp_ or _appGwPrivateFrontendIp_, depending upon your Azure Application Gateway Private Link Frontend IP Configuration.
+ # [Azure PowerShell](#tab/powershell)

To configure Private link on an existing Application Gateway via Azure PowerShell, the following commands can be referenced:
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
az network application-gateway create \
--public-ip-address myAGPublicIPAddress \ --vnet-name myVNet \ --subnet myAGSubnet \
- --servers "$address1" "$address2"
+ --servers "$address1" "$address2" \
+ --priority 100
```

It can take up to 30 minutes for Azure to create the application gateway. After it's created, you can view the following settings in the **Settings** section of the **Application gateway** page:
availability-zones Migrate Api Mgt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-api-mgt.md
There are no downtime requirements for any of the migration options.
* Changes can take from 15 to 45 minutes to apply. The API Management gateway can continue to handle API requests during this time.
-* Migrating to availability zones or changing the availability zone configuration will trigger a public IP address change.
+* Migrating to availability zones or changing the availability zone configuration will trigger a public [IP address change](../api-management/api-management-howto-ip-addresses.md#changes-to-the-ip-addresses).
* If you've configured autoscaling for your API Management instance in the primary location, you might need to adjust your autoscale settings after enabling zone redundancy. The number of API Management units in autoscale rules and limits must be a multiple of the number of zones.
Learn more about:
> [Regions and Availability Zones in Azure](az-overview.md) > [!div class="nextstepaction"]
-> [Azure Services that support Availability Zones](az-region.md)
+> [Azure Services that support Availability Zones](az-region.md)
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
When you use App Configuration in client applications, ensure that you consider
To address these concerns, we recommend that you use a proxy service between your client applications and your App Configuration store. The proxy service can securely authenticate with your App Configuration store without a security issue of leaking authentication information. You can build a proxy service by using one of the App Configuration provider libraries, so you can take advantage of built-in caching and refresh capabilities for optimizing the volume of requests sent to App Configuration. For more information about using App Configuration providers, see articles in Quickstarts and Tutorials. The proxy service serves the configuration from its cache to your client applications, and you avoid the two potential issues that are discussed in this section.
+## Multitenant applications in App Configuration
+
+A multitenant application is built on an architecture where a shared instance of your application serves multiple customers or tenants. For example, you may have an email service that offers your users separate accounts and customized experiences. Your application usually manages different configurations for each tenant. Here are some architectural considerations for [using App Configuration in a multitenant application](/azure/architecture/guide/multitenant/service/app-configuration).
+ ## Configuration as Code

Configuration as code is a practice of managing configuration files under your source control system, for example, a git repository. It gives you benefits like traceability and approval process for any configuration changes. If you adopt configuration as code, App Configuration has tools to assist you in [managing your configuration data in files](./concept-config-file.md) and deploying them as part of your build, release, or CI/CD process. This way, your applications can access the latest data from your App Configuration store(s).
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Provider name | Distribution name | Version |
| | -- | - |
| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.7.18+](https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html), [4.9.17+](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.0+](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html) |
-| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5_vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 |
-| Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.19](https://ubuntu.com/kubernetes/docs/1.19/components) |
+| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5_vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 |
+| Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.24](https://ubuntu.com/kubernetes/docs/1.24/components) |
| SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.2.4](https://github.com/rancher/rke/releases/tag/v1.2.4); Kubernetes versions: [1.19.6](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.6), [1.18.14](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.14), [1.17.16](https://github.com/kubernetes/kubernetes/releases/tag/v1.17.16) |
| Nutanix | [Karbon](https://www.nutanix.com/products/karbon) | Version 2.2.1 |
| Platform9 | [Platform9 Managed Kubernetes (PMK)](https://platform9.com/managed-kubernetes/) | PMK Version [5.3.0](https://platform9.com/docs/kubernetes/release-notes#platform9-managed-kubernetes-version-53-release-notes); Kubernetes versions: v1.20.5, v1.19.6, v1.18.10 |
| Cisco | [Intersight Kubernetes Service (IKS)](https://www.cisco.com/c/en/us/products/cloud-systems-management/cloud-operations/intersight-kubernetes-service.html) Distribution | Upstream K8s version: 1.19.5 |
-| Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.21.3 |
+| Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.22.10 <br> Upstream K8s Version: 1.21.3 |
| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version 3.5.1 <br> MKE Version 3.4.7 |
-| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |
+| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |
The Azure Arc team also ran the conformance tests and validated Azure Arc-enabled Kubernetes scenarios on the following public cloud providers:
azure-monitor Azure Monitor Agent Troubleshoot Windows Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-arc.md
description: Guidance for troubleshooting issues on Windows Arc-enabled server w
Previously updated : 5/9/2022 Last updated : 7/19/2022
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
```azurecli
azcmagent show
```
- If you see `Agent Status: Disconnected`, [file a ticket](#file-a-ticket) with **Summary** as 'Arc agent or extensions service not working' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+ You should see the following output:
+ ```
+ Resource Name : <server name>
+ [...]
+ Dependent Service Status
+ Agent Service (himds) : running
+ GC Service (gcarcservice) : running
+ Extension Service (extensionservice) : running
+ ```
+ If instead you see `Agent Status: Disconnected` or any other status, [file a ticket](#file-a-ticket) with **Summary** as 'Arc agent or extensions service not working' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
3. Wait for 10-15 minutes as the extension may be in a transitioning status. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up.
4. If not, check if you see any errors in extension logs located at `C:\ProgramData\GuestConfig\extension_logs\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine.
5. If none of the above works, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
- `streamDeclarations`: Defines the columns of the incoming data. This must match the structure of the log file.
- `filePatterns`: Specifies the location and file pattern of the log files to collect. This defines a separate pattern for Windows and Linux agents.
- - `transformKql`: Specifies a [transformation](../logs/../essentials//data-collection-transformations.md) to apply to the incoming data before it's sent to the workspace. Data collection rules for Azure Monitor agent don't yet support transformations, so this value should currently be `source`.
+ - `transformKql`: Specifies a [transformation](../logs/../essentials//data-collection-transformations.md) to apply to the incoming data before it's sent to the workspace.
4. Click **Save**.
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md
In the past, users used the [legacy Log Analytics Alert API](./api-alerts.md) to
- Manage all log rules in one API.
- Single template for creation of alert rules (previously needed three separate templates).
- Single API for all Azure resources log alerting.
-- Support for stateful and 1-minute log alert previews for legacy rules.
+- Support for stateful (preview) and 1-minute log alerts.
- [PowerShell cmdlets](./alerts-manage-alerts-previous-version.md#manage-log-alerts-using-powershell) and [Azure CLI](./alerts-log.md#manage-log-alerts-using-cli) support for switched rules.
- Alignment of severities with all other alert types and newer rules.
- Ability to create [cross workspace log alert](../logs/cross-workspace-query.md) that span several external resources like Log Analytics workspaces or Application Insights resources for switched rules.
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
You can use `opentelemetry-api` to create [tracers](https://opentelemetry.io/doc
1. Add spans in your code:

   ```java
+ import io.opentelemetry.api.GlobalOpenTelemetry;
+ import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.api.trace.Span;
+ Tracer tracer = GlobalOpenTelemetry.getTracer("myApp");
Span span = tracer.spanBuilder("mySpan").startSpan();
   ```
+> [!TIP]
+> The tracer name ideally describes the source of the telemetry, in this case your application,
+> but currently Application Insights Java is not reporting this name to the backend.
+
+> [!TIP]
+> Tracers are thread-safe, so it's generally best to store them into static fields in order to
+> avoid the performance overhead of creating lots of new tracer objects.
### Add span events

You can use `opentelemetry-api` to create span events, which populate the traces table in Application Insights. The string passed in to `addEvent()` is saved to the _message_ field within the trace.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|ForwardingRuleCount|Yes|Forwarding Rule Count|Count|Maximum|This metric indicates the number of forwarding rules present in each DNS forwarding ruleset.|No Dimensions|
-|VirtualNetworkLinkCount|Yes|Virtual Network Link Count|Count|Maximum|This metric indicates the number of associated virtual network links to a DNS forwarding ruleset.|No Dimensions|
+|ForwardingRuleCount|No|Forwarding Rule Count|Count|Maximum|This metric indicates the number of forwarding rules present in each DNS forwarding ruleset.|No Dimensions|
+|VirtualNetworkLinkCount|No|Virtual Network Link Count|Count|Maximum|This metric indicates the number of associated virtual network links to a DNS forwarding ruleset.|No Dimensions|
## Microsoft.Network/dnsResolvers

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|InboundEndpointCount|Yes|Inbound Endpoint Count|Count|Maximum|This metric indicates the number of inbound endpoints created for a DNS Resolver.|No Dimensions|
-|OutboundEndpointCount|Yes|Outbound Endpoint Count|Count|Maximum|This metric indicates the number of outbound endpoints created for a DNS Resolver.|No Dimensions|
+|InboundEndpointCount|No|Inbound Endpoint Count|Count|Maximum|This metric indicates the number of inbound endpoints created for a DNS Resolver.|No Dimensions|
+|OutboundEndpointCount|No|Outbound Endpoint Count|Count|Maximum|This metric indicates the number of outbound endpoints created for a DNS Resolver.|No Dimensions|
## Microsoft.Network/dnszones
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Automation/automationAccounts | [listKeys](/rest/api/automation/keys/listbyautomationaccount) |
| Microsoft.Batch/batchAccounts | [listkeys](/rest/api/batchmanagement/batchaccount/getkeys) |
| Microsoft.BatchAI/workspaces/experiments/jobs | listoutputfiles |
-| Microsoft.Blockchain/blockchainMembers | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/blockchainmembers/listapikeys) |
-| Microsoft.Blockchain/blockchainMembers/transactionNodes | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/transactionnodes/listapikeys) |
| Microsoft.BotService/botServices/channels | [listChannelWithKeys](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/botservice/resource-manager/Microsoft.BotService/stable/2020-06-02/botservice.json#L553) |
| Microsoft.Cache/redis | [listKeys](/rest/api/redis/redis/list-keys) |
| Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/cognitiveservices/accountmanagement/accounts/listkeys) |
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) |
| Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) |
| Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-05-01/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-05-01/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-05-01/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) |
| Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) |
| Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.Automation/automationAccounts | [listKeys](/rest/api/automation/keys/listbyautomationaccount) |
| Microsoft.Batch/batchAccounts | [listKeys](/rest/api/batchmanagement/batchaccount/getkeys) |
| Microsoft.BatchAI/workspaces/experiments/jobs | listoutputfiles |
-| Microsoft.Blockchain/blockchainMembers | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/blockchainmembers/listapikeys) |
-| Microsoft.Blockchain/blockchainMembers/transactionNodes | [listApiKeys](/rest/api/blockchain/2019-06-01-preview/transactionnodes/listapikeys) |
| Microsoft.BotService/botServices/channels | [listChannelWithKeys](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/botservice/resource-manager/Microsoft.BotService/stable/2020-06-02/botservice.json#L553) |
| Microsoft.Cache/redis | [listKeys](/rest/api/redis/redis/list-keys) |
| Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/cognitiveservices/accountmanagement/accounts/listkeys) |
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2022-05-01/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2022-05-01/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2022-05-01/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
azure-vmware Enable Hcx Access Over Internet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-hcx-access-over-internet.md
Title: Enable HCX access over the internet description: This article describes how to access HCX over a public IP address using Azure VMware solution. Previously updated : 06/27/2022 Last updated : 7/19/2022 # Enable HCX access over the internet
-This article describes how to access the HCX over a Public IP address using Azure VMware Solution. It also explains how to pair HCX sites, and create service mesh from on-premises to Azure VMware Solutions private cloud using Public IP. The service mesh allows you to migrate a workload from an on-premises datacenter to Azure VMware Solutions private cloud over the public internet.
+In this article, you'll learn how to access HCX over a Public IP address using Azure VMware Solution. You'll also learn how to pair HCX sites and create a service mesh from on-premises to an Azure VMware Solution private cloud using a Public IP. The service mesh allows you to migrate a workload from an on-premises datacenter to an Azure VMware Solution private cloud over the public internet. This solution is useful when the customer isn't using ExpressRoute or VPN connectivity with the Azure cloud.
> [!IMPORTANT]
-> This solution is useful where the customer is not using Express Route or VPN connectivity with the Azure cloud. The on-premises HCX appliance should be reachable from the internet to establish HCX communication from on-premises to Azure VMware Solution private cloud.
+> The on-premises HCX appliance should be reachable from the internet to establish HCX communication from on-premises to Azure VMware Solution private cloud.
## Configure Public IP block
Before you create a Public IP segment, get your credentials for NSX-T Manager fr
1. Sign in to NSX-T Manager using credentials provided by the Azure VMware Solution portal. 1. Under the **Manage** section, select **Identity**. 1. Copy the NSX-T Manager admin user password.
-1. Browse the NSX-T Manger and paste the admin password in the password field, and select **Login**.
-1. Under the **Networking** section select **Connectivity** and **Segments**, then select **ADD SEGMENT**.
-1. Provide Segment name, select Tier-1 router as connected gateway, and provide public segment under subnets.
+1. Browse to NSX-T Manager, paste the admin password in the password field, and select **Login**.
+1. Under the **Networking** section, select **Connectivity** and **Segments**, and then select **ADD SEGMENT**.
+1. Provide a segment name, select the Tier-1 router as the connected gateway, and provide the public segment under subnets.
1. Select **Save**. ## Assign public IP to HCX manager HCX Manager of the destination Azure VMware Solution SDDC should be reachable from the internet to do site pairing with the source site. HCX Manager can be exposed by way of a DNAT rule and a static null route. Because HCX Manager is in the provider space, not within the NSX-T environment, the null route is necessary to allow HCX Manager to route back to the client by way of the DNAT rule. ### Add static null route to the T1 router
-1. Sign in to NSX-T manager, and select **Networking**.
+1. Sign in to NSX-T manager and select **Networking**.
1. Under the **Connectivity** section, select **Tier-1 Gateways**. 1. Edit the existing T1 gateway. 1. Expand **STATIC ROUTES**.
HCX manager of destination Azure VMware Solution SDDC should be reachable from t
### Add NAT rule to T1 gateway
-1. Sign in to NSX-T Manager, and select **Networking**.
+1. Sign in to NSX-T Manager and select **Networking**.
1. Select **NAT**. 1. Select the T1 Gateway. 1. Select **ADD NAT RULE**.
HCX manager of destination Azure VMware Solution SDDC should be reachable from t
1. Select **NSX Networks** as network type under **Network**. 1. Select the **Public-IP-Segment** created on NSX-T. 1. Enter **Name**.
-1. Under IP pools, enter **IP Ranges** for HCX uplink, **Prefix Length** and **Gateway** of public IP segment.
+1. Under IP pools, enter the **IP Ranges** for HCX uplink, **Prefix Length**, and **Gateway** of the public IP segment.
1. Scroll down and select the **HCX Uplink** checkbox under **HCX Traffic Type** as this profile will be used for HCX uplink. 1. To create the Network Profile, select **Create**. ### Pair site Site pairing is required to create service mesh between source and destination sites.
-1. Sign in to **Source** site HCX Manager.
+1. Sign in to the **Source** site HCX Manager.
1. Select **Site Pairing** and select **ADD SITE PAIRING**. 1. Enter the remote HCX URL and sign-in credentials, and then select **Connect**.
Service Mesh will deploy HCX WAN Optimizer, HCX Network Extension and HCX-IX app
1. Select the Network Profile of source site. 1. Select the Network Profile of Destination that you created in the Network Profile section. 1. Select **Continue**.
-1. Review the Transport Zone information, and select **Continue**.
+1. Review the Transport Zone information, and then select **Continue**.
1. Review the Topological view, and select **Continue**. 1. Enter the Service Mesh name and select **FINISH**. ### Extend network The HCX Network Extension service provides layer 2 connectivity between sites. The extension service also allows you to keep the same IP and MAC addresses during virtual machine migrations. 1. Sign in to **source** HCX Manager.
-1. Under the **Network Extension** section, select the site for which you want to extend the network, and select **EXTEND NETWORKS**.
+1. Under the **Network Extension** section, select the site for which you want to extend the network, and then select **EXTEND NETWORKS**.
1. Select the network that you want to extend to the destination site, and select **Next**. 1. Enter the subnet details of the network that you're extending. 1. Select the destination first-hop route (T1), and select **Submit**.
backup Guidance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/guidance-best-practices.md
To help you protect your backup data and meet the security needs of your busines
- In terms of the scope of the access, - _User2_ can access only the Resources of Subscription1, and _User3_ can access only the Resources of Subscription2.
- - _User4_ is a Backup Operator. It has the permission to enable backup, trigger on-demand backup, trigger
- - Restores, along with the capabilities of a Backup Reader. However, in this scenario, its scope is limited only to Subscription2.
+    - _User4_ is a Backup Operator. It has the permission to enable backup, trigger on-demand backup, and trigger restores, along with the capabilities of a Backup Reader. However, in this scenario, its scope is limited only to Subscription2.
- _User1_ is a Backup Contributor. It has the permission to create vaults, create/modify/delete backup policies, and stop backups, along with the capabilities of a Backup Operator. However, in this scenario, its scope is limited only to _Subscription1_. - Storage accounts used by Recovery Services vaults are isolated and can't be accessed by users for any malicious purposes. The access is only allowed through Azure Backup management operations, such as restore.
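The role-to-scope pattern described above can be sketched as a minimal model. This is a hypothetical illustration only: the role names match Azure Backup's built-in roles, but the assignment table and permission logic are simplified assumptions, not the Azure RBAC implementation.

```python
# Simplified, illustrative model of the assignments described above.
# User2/User3 are assumed to hold the Backup Reader role; only Backup
# Operator and Backup Contributor may trigger restores, and every check
# is limited to the user's assigned subscription scope.
ASSIGNMENTS = {
    "User1": ("Backup Contributor", "Subscription1"),
    "User2": ("Backup Reader", "Subscription1"),
    "User3": ("Backup Reader", "Subscription2"),
    "User4": ("Backup Operator", "Subscription2"),
}

RESTORE_ROLES = {"Backup Operator", "Backup Contributor"}

def can_trigger_restore(user: str, subscription: str) -> bool:
    """Return True only when the user's role allows restores within its scope."""
    role, scope = ASSIGNMENTS[user]
    return role in RESTORE_ROLES and scope == subscription
```

For example, User4 can trigger a restore in Subscription2 but not in Subscription1, because its scope is limited to Subscription2.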
Watch the following video to learn how to leverage Azure Monitor to configure va
Read the following articles as starting points for using Azure Backup: * [Azure Backup overview](backup-overview.md)
-* [Frequently Asked Questions](backup-azure-backup-faq.yml)
+* [Frequently Asked Questions](backup-azure-backup-faq.yml)
backup Sql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sql-support-matrix.md
Title: Azure Backup support matrix for SQL Server Backup in Azure VMs description: Provides a summary of support settings and limitations when backing up SQL Server in Azure VMs with the Azure Backup service. Previously updated : 02/07/2022 Last updated : 07/20/2022
You can use Azure Backup to back up SQL Server databases in Azure VMs hosted on
| **Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server manually installed) VMs are supported. **Supported regions** | Azure Backup for SQL Server databases is available in all regions, except France South (FRS), UK North (UKN), UK South 2 (UKS2), UG IOWA (UGI), and Germany (Black Forest).
-**Supported operating systems** | Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012, Windows Server 2008 R2 SP1 <br/><br/> Linux isn't currently supported.
+**Supported operating systems** | Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012 (all versions), Windows Server 2008 R2 SP1 <br/><br/> Linux isn't currently supported.
**Supported SQL Server versions** | SQL Server 2019, SQL Server 2017 as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202017), SQL Server 2016 and SPs as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202016%20service%20pack), SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008 <br/><br/> Enterprise, Standard, Web, Developer, Express.<br><br>Express Local DB versions aren't supported. **Supported .NET versions** | .NET Framework 4.5.2 or later installed on the VM **Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server that is manually installed) VMs are supported. Support for standalone instances is always on [availability groups](backup-sql-server-on-availability-groups.md).
center-sap-solutions Deploy S4hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/deploy-s4hana.md
In this how-to guide, you'll learn how to deploy S/4HANA infrastructure in *Azur
## Prerequisites - An Azure subscription.
+- Register the **Microsoft.Workloads** Resource Provider on the subscription in which you are deploying the SAP system.
- An Azure account with **Contributor** role access to the subscriptions and resource groups in which you'll create the Virtual Instance for SAP solutions (VIS) resource. - The ACSS application **Azure SAP Workloads Management** also needs Contributor role access to the resource groups for the SAP system. There are two options to grant access: - If your Azure account has **Owner** or **User Access Admin** role access, you can automatically grant access to the application when deploying or registering the SAP system.
center-sap-solutions Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/register-existing-system.md
In this how-to guide, you'll learn how to register an existing SAP system with *
- Check that you're trying to register a [supported SAP system configuration](#supported-systems) - Check that your Azure account has **Contributor** role access on the subscription or resource groups where you have the SAP system resources.
+- Register the **Microsoft.Workloads** Resource Provider in the subscription where you have the SAP system.
- Make sure each virtual machine (VM) in the SAP system is currently running on Azure. These VMs include: - The ABAP SAP Central Services (ASCS) Server instance - The Application Server instance or instances
cognitive-services Export Import Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/export-import-refresh.md
To automate the import process, use the [import functionality of the authoring A
4. We recommend having a backup of your project/question answer pairs prior to running each refresh so that you can always roll-back if needed. 5. Select a url-based source to refresh > Select **Refresh URL**.
+6. Only one URL can be refreshed at a time.
### Refresh a URL programmatically
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
Learn more:
[Included CA Certificate List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT)
+>[!IMPORTANT]
+>Azure Communication Services direct routing supports only TLS 1.2 (or a later version). Make sure that the cipher suites you're using on an SBC are supported by Azure Front Door. Microsoft 365 and Azure Front Door have slight differences in cipher suite support. For details, see [What are the current cipher suites supported by Azure Front Door?](/azure/frontdoor/concept-end-to-end-tls#supported-cipher-suites).
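The TLS floor described above can be enforced on the client side. The following is a minimal sketch using Python's standard `ssl` module; the SBC hostname and port in the commented-out connection code are placeholders, not values from this article.

```python
import ssl

# Illustrative: create a client context that refuses anything below TLS 1.2,
# matching the direct routing requirement above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# With a reachable host (placeholder name/port) you would then do:
# import socket
# with socket.create_connection(("sbc.example.com", 5061)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="sbc.example.com") as tls:
#         print(tls.version())  # negotiated protocol, e.g. TLSv1.2 or TLSv1.3
```

Any handshake that would negotiate TLS 1.1 or lower fails with this context, which is the behavior the requirement above calls for.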
+ SBC pairing works on the Communication Services resource level. This means you can pair many SBCs to a single Communication Services resource, but you can't pair a single SBC to more than one Communication Services resource. Unique SBC FQDNs are required for pairing to different resources.
SBC pairing works on the Communication Services resource level. It means you can
The connection points for Communication Services direct routing are the following three FQDNs: -- **sip.pstnhub.microsoft.com ΓÇö Global FQDN ΓÇö must be tried first. When the SBC sends a request to resolve this name, the Microsoft Azure DNS servers return an IP address that points to the primary Azure datacenter assigned to the SBC. The assignment is based on performance metrics of the datacenters and geographical proximity to the SBC. The IP address returned corresponds to the primary FQDN.-- **sip2.pstnhub.microsoft.com ΓÇö Secondary FQDN ΓÇö geographically maps to the second priority region.-- **sip3.pstnhub.microsoft.com ΓÇö Tertiary FQDN ΓÇö geographically maps to the third priority region.
+- **sip.pstnhub.microsoft.com**: Global FQDN; must be tried first. When the SBC sends a request to resolve this name, the Microsoft Azure DNS servers return an IP address that points to the primary Azure datacenter assigned to the SBC. The assignment is based on performance metrics of the datacenters and geographical proximity to the SBC. The IP address returned corresponds to the primary FQDN.
+- **sip2.pstnhub.microsoft.com**: Secondary FQDN; geographically maps to the second priority region.
+- **sip3.pstnhub.microsoft.com**: Tertiary FQDN; geographically maps to the third priority region.
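The primary/secondary/tertiary resolution order above can be sketched as follows. This is a hypothetical illustration, not Azure or SBC vendor code; a real SBC implements this failover inside its SIP stack.

```python
import socket

# Direct routing connection points, in priority order (from the list above).
FQDNS = [
    "sip.pstnhub.microsoft.com",   # primary (global) FQDN
    "sip2.pstnhub.microsoft.com",  # secondary FQDN
    "sip3.pstnhub.microsoft.com",  # tertiary FQDN
]

def pick_signaling_endpoint(fqdns=FQDNS, resolve=socket.gethostbyname):
    """Return the first (FQDN, IP) pair that resolves, mirroring the failover order."""
    for name in fqdns:
        try:
            return name, resolve(name)
        except OSError:
            continue  # try the next-priority FQDN
    raise RuntimeError("no direct routing FQDN could be resolved")
```

The `resolve` parameter is injectable so the failover logic can be exercised without network access.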
These three FQDNs in order are required to:
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
from azure.mgmt.confidentialledger.models import ConfidentialLedger
# import the data plane sdk from azure.confidentialledger import ConfidentialLedgerClient
-from azure.confidentialledger.identity_service import ConfidentialLedgerIdentityServiceClient
+from azure.confidentialledger.certificate import ConfidentialLedgerCertificateClient
``` Next, we'll use the [DefaultAzureCredential Class](/python/api/azure-identity/azure.identity.defaultazurecredential) to authenticate the app.
Now that we have a ledger, we'll interact with it using the data plane client li
First, we will generate and save a confidential ledger certificate. ```python
-identity_client = ConfidentialLedgerIdentityServiceClient(identity_url)
+identity_client = ConfidentialLedgerCertificateClient(identity_url)
network_identity = identity_client.get_ledger_identity( ledger_id=ledger_name )
ledger_client = ConfidentialLedgerClient(
) ```
-We are prepared to write to the ledger. We will do so using the `append_to_ledger` function.
+We are prepared to write to the ledger. We will do so using the `create_ledger_entry` function.
```python
-append_result = ledger_client.append_to_ledger(entry_contents="Hello world!")
+append_result = ledger_client.create_ledger_entry(entry_contents="Hello world!")
print(append_result.transaction_id) ``` The print function will return the transaction ID of your write to the ledger, which can be used to retrieve the message you wrote to the ledger. ```python
-entry = ledger_client.get_ledger_entry(transaction_id=append_result.transaction_id)
-print(entry.contents)
+latest_entry = ledger_client.get_current_ledger_entry()
+print(f"Current entry (transaction id = {latest_entry['transactionId']}) in collection {latest_entry['collectionId']}: {latest_entry['contents']}")
``` The print function will return "Hello world!", as that is the message in the ledger that corresponds to the transaction ID.
from azure.mgmt.confidentialledger.models import ConfidentialLedger
# import data plane sdk from azure.confidentialledger import ConfidentialLedgerClient
-from azure.confidentialledger.identity_service import ConfidentialLedgerIdentityServiceClient
+from azure.confidentialledger.certificate import ConfidentialLedgerCertificateClient
from azure.confidentialledger import TransactionState # Set variables
ledger_properties = ConfidentialLedger(**properties)
# Create a ledger
-foo = confidential_ledger_mgmt.ledger.begin_create(rg, ledger_name, ledger_properties)
-
-# wait until ledger is created
-foo.wait()
+confidential_ledger_mgmt.ledger.begin_create(rg, ledger_name, ledger_properties)
# Get the details of the ledger you just created
print (f"- ID: {myledger.id}")
# # Create a CL client
-identity_client = ConfidentialLedgerIdentityServiceClient(identity_url)
+identity_client = ConfidentialLedgerCertificateClient(identity_url)
network_identity = identity_client.get_ledger_identity( ledger_id=ledger_name )
ledger_client = ConfidentialLedgerClient(
) # Write to the ledger
-append_result = ledger_client.append_to_ledger(entry_contents="Hello world!")
+append_result = ledger_client.create_ledger_entry(entry_contents="Hello world!")
print(append_result.transaction_id)
-# Wait until transaction is committed on the ledger
-while True:
- commit_result = ledger_client.get_transaction_status(append_result.transaction_id)
- print(commit_result.state)
- if (commit_result.state == TransactionState.COMMITTED):
- break
- time.sleep(1)
- # Read from the ledger
-entry = ledger_client.get_ledger_entry(transaction_id=append_result.transaction_id)
-print(entry.contents)
+current_entry = ledger_client.get_current_ledger_entry()
+print(f"Current entry (transaction id = {current_entry['transactionId']}) in collection {current_entry['collectionId']}: {current_entry['contents']}")
``` ## Clean up resources
cosmos-db How To Create Container Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-create-container-mongodb.md
This article explains the different ways to create a collection in Azure Cosmos
## <a id="dotnet-mongodb"></a>Create using .NET SDK ```csharp
-// Create a collection with a partition key by using Mongo Shell:
-db.runCommand( { shardCollection: "myDatabase.myCollection", key: { myShardKey: "hashed" } } )
+var bson = new BsonDocument
+{
+ { "customAction", "CreateCollection" },
+    { "collection", "<CollectionName>" }, // update CollectionName
+    { "shardKey", "<ShardKeyName>" }, // update ShardKey
+    { "offerThroughput", 400 } // update Throughput
+};
+var shellCommand = new BsonDocumentCommand<BsonDocument>(bson);
+// Create a collection with a partition key by using Mongo Driver:
+db.RunCommand(shellCommand);
```
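For comparison, the same custom action can be issued from Python. The sketch below only builds the Cosmos DB custom-action document shown above; the connection string, database, and collection names in the commented-out code are placeholders, and running the command requires a live Azure Cosmos DB for MongoDB account.

```python
# Sketch only: builds the Cosmos DB "CreateCollection" custom-action document
# used in the C# example above. Names are placeholders.
def build_create_collection_command(collection, shard_key, throughput=400):
    return {
        "customAction": "CreateCollection",  # must be the first key (command name)
        "collection": collection,
        "shardKey": shard_key,
        "offerThroughput": throughput,
    }

# With a live connection you would run it through pymongo's Database.command:
# from pymongo import MongoClient
# db = MongoClient("<connection-string>")["<DatabaseName>"]
# db.command(build_create_collection_command("<CollectionName>", "<ShardKeyName>"))
```

Because Python 3.7+ dicts preserve insertion order, `customAction` stays first in the document, which is how the command name is identified.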
-> [!Note]
-> MongoDB wire protocol does not understand the concept of [Request Units](../request-units.md). To create a new collection with provisioned throughput on it, use the Azure portal or Cosmos DB SDKs for SQL API.
- If you encounter timeout exception when creating a collection, do a read operation to validate if the collection was created successfully. The read operation throws an exception until the collection create operation is successful. For the list of status codes supported by the create operation see the [HTTP Status Codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) article. ## <a id="cli-mongodb"></a>Create using Azure CLI
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
This page is updated monthly, so revisit it regularly.
<tr><td>Assert error handling</td><td>Assert error handling are now supported in data flows for data quality and data validation<br><a href="data-flow-assert.md">Learn more</a></td></tr> <tr><td rowspan=2><b>Data Movement</b></td><td>Parameterization natively supported in additional 4 connectors</td><td>We added native UI support of parameterization for the following linked <tr><td>SAP Change Data Capture (CDC) capabilities in the new SAP ODP connector (Public Preview)</td><td>SAP Change Data Capture (CDC) capabilities are now supported in the new SAP ODP connector.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-the-public-preview-of-the-sap-cdc-solution-in-azure/ba-p/3420904">Learn more</a></td></tr>
-<tr><td><b>Orchestration</b></td><td>ΓÇÿturnOffAsync' property is available in Web activity</td><td>Web activity supports an async request-reply pattern that invokes HTTP GET on the Location field in the response header of an HTTP 202 Response. It helps web activity automatically poll the monitoring end-point till the job runs. ΓÇÿturnOffAsync' property is supported to disable this behavior in cases where polling isn't needed<br><a href="control-flow-web-activity.md#type-properties">Learn more</a></td></tr>
-<tr><td><b>Monitoring</b></td><td> Rerun pipeline with new parameters</td><td>You can now rerun pipelines with new parameter values in Azure Data Factory.<br><a href="monitor-visually.md#rerun-pipelines-and-activities">Learn more</a></td></tr>
<tr><td><b>Integration Runtime</b></td><td>Time-To-Live in managed VNET (Public Preview)</td><td>Time-To-Live can be set to the provisioned computes in managed VNET.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879">Learn more</a></td></tr>
-<tr><td><b>Monitoring</b></td><td> Rerun pipeline with new parameters</td><td>You can now rerun pipelines with new parameter values in Azure Data Factory.<br><a href="monitor-visually.md#rerun-pipelines-and-activities">Learn more</a></td></tr>
+<tr><td><b>Monitoring</b></td><td> Rerun pipeline with new parameters</td><td>You can now rerun pipelines with new parameter values in Azure Data Factory.<br><a href="monitor-visually.md#rerun-pipelines-and-activities">Learn more</a></td></tr>
+<tr><td><b>Orchestration</b></td><td>'turnOffAsync' property is available in Web activity</td><td>Web activity supports an async request-reply pattern that invokes HTTP GET on the Location field in the response header of an HTTP 202 response. It helps the Web activity automatically poll the monitoring endpoint until the job runs. The 'turnOffAsync' property is supported to disable this behavior in cases where polling isn't needed.<br><a href="control-flow-web-activity.md#type-properties">Learn more</a></td></tr>
</table>
+
## May 2022 <br>
databox Data Box Deploy Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-picked-up.md
Previously updated : 06/16/2022 Last updated : 07/20/2022 zone_pivot_groups: data-box-shipping
databox Data Box File Acls Preservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-file-acls-preservation.md
Previously updated : 07/06/2022 Last updated : 07/13/2022
Azure Data Box lets you preserve access control lists (ACLs), timestamps, and fi
ACLs, timestamps, and file attributes are the metadata that is transferred when the data from Data Box is uploaded to Azure Files. In this article, ACLs, timestamps, and file attributes are referred to collectively as *metadata*.
-The metadata can be copied with Windows and Linux data copy tools. Metadata isn't preserved when transferring data to blob storage.
+The metadata can be copied with Windows and Linux data copy tools. Metadata isn't preserved when transferring data to blob storage. Metadata is also not transferred when copying data over NFS.
The subsequent sections of the article discuss in detail as to how the timestamps, file attributes, and ACLs are transferred when the data from Data Box is uploaded to Azure Files.
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| **High volume of operations in a key vault**<br>(KV_OperationVolumeAnomaly) | An anomalous number of key vault operations were performed by a user, service principal, and/or a specific key vault. This anomalous activity pattern may be legitimate, but it could be an indication that a threat actor has gained access to the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium | | **Suspicious policy change and secret query in a key vault**<br>(KV_PutGetAnomaly) | A user or service principal has performed an anomalous Vault Put policy change operation followed by one or more Secret Get operations. This pattern is not normally performed by the specified user or service principal. This may be legitimate activity, but it could be an indication that a threat actor has updated the key vault policy to access previously inaccessible secrets. We recommend further investigations. | Credential Access | Medium | | **Suspicious secret listing and query in a key vault**<br>(KV_ListGetAnomaly) | A user or service principal has performed an anomalous Secret List operation followed by one or more Secret Get operations. This pattern is not normally performed by the specified user or service principal and is typically associated with secret dumping. This may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault and is trying to discover secrets that can be used to move laterally through your network and/or gain access to sensitive resources. We recommend further investigations. | Credential Access | Medium |
-| **Unusual access denied - User accessing high volume of key vaults denied**<br>(KV_DeniedAccountVolumeAnomaly) | A user or service principal has attempted access to anomalously high volume of key vaults in the last 24 hours. This anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access of key vault and the secrets contained within it. We recommend further investigations. | Discovery | Low |
+| **Unusual access denied - User accessing high volume of key vaults denied**<br>(KV_AccountVolumeAccessDeniedAnomaly) | A user or service principal has attempted access to anomalously high volume of key vaults in the last 24 hours. This anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access of key vault and the secrets contained within it. We recommend further investigations. | Discovery | Low |
| **Unusual access denied - Unusual user accessing key vault denied**<br>(KV_UserAccessDeniedAnomaly) | A key vault access was attempted by a user that does not normally access it, this anomalous access pattern may be legitimate activity. Though this attempt was unsuccessful, it could be an indication of a possible attempt to gain access of key vault and the secrets contained within it. | Initial Access, Discovery | Low | | **Unusual application accessed a key vault**<br>(KV_AppAnomaly) | A key vault has been accessed by a service principal that does not normally access it. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault in an attempt to access the secrets contained within it. We recommend further investigations. | Credential Access | Medium | | **Unusual operation pattern in a key vault**<br>(KV_OperationPatternAnomaly) | An anomalous pattern of key vault operations was performed by a user, service principal, and/or a specific key vault. This anomalous activity pattern may be legitimate, but it could be an indication that a threat actor has gained access to the key vault and the secrets contained within it. We recommend further investigations. | Credential Access | Medium |
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
Title: Create custom security policies in Microsoft Defender for Cloud description: Azure custom policy definitions monitored by Microsoft Defender for Cloud. -- Previously updated : 12/23/2021 Last updated : 07/20/2022 zone_pivot_groups: manage-asc-initiatives
zone_pivot_groups: manage-asc-initiatives
To help secure your systems and environment, Microsoft Defender for Cloud generates security recommendations. These recommendations are based on industry best practices, which are incorporated into the generic, default security policy supplied to all customers. They can also come from Defender for Cloud's knowledge of industry and regulatory standards.
-With this feature, you can add your own *custom* initiatives. Although custom initiatives are not included in the secure score, you'll receive recommendations if your environment doesn't follow the policies you create. Any custom initiatives you create are shown in the list of all recommendations and you can filter by initiative to see the recommendations for your initiative. They are also shown with the built-in initiatives in the regulatory compliance dashboard, as described in the tutorial [Improve your regulatory compliance](regulatory-compliance-dashboard.md).
+With this feature, you can add your own *custom* initiatives. Although custom initiatives aren't included in the secure score, you'll receive recommendations if your environment doesn't follow the policies you create. Any custom initiatives you create are shown in the list of all recommendations and you can filter by initiative to see the recommendations for your initiative. They're also shown with the built-in initiatives in the regulatory compliance dashboard, as described in the tutorial [Improve your regulatory compliance](regulatory-compliance-dashboard.md).
As discussed in [the Azure Policy documentation](../governance/policy/concepts/definition-structure.md#definition-location), when you specify a location for your custom initiative, it must be a management group or a subscription. > [!TIP] > For an overview of the key concepts on this page, see [What are security policies, initiatives, and recommendations?](security-policy-concept.md).
+You can view your custom initiatives organized by controls, similar to the controls in compliance standards. To learn how to create policy groups within a custom initiative and organize them, follow the guidance in [policy definition groups](../governance/policy/concepts/initiative-definition-structure.md).
+ ::: zone pivot="azure-portal" ## To add a custom initiative to your subscription
1. Select the policies to include and select **Add**. 1. Enter any desired parameters. 1. Select **Save**.
- 1. In the Add custom initiatives page, click refresh. Your new initiative will be available.
+ 1. In the Add custom initiatives page, select refresh. Your new initiative will be available.
1. Select **Add** and assign it to your subscription. ![Create or add a policy.](media/custom-security-policies/create-or-add-custom-policy.png)
* You'll begin to receive recommendations if your environment doesn't follow the policies you've defined.
-1. To see the resulting recommendations for your policy, click **Recommendations** from the sidebar to open the recommendations page. The recommendations will appear with a "Custom" label and be available within approximately one hour.
+1. To see the resulting recommendations for your policy, select **Recommendations** from the sidebar to open the recommendations page. The recommendations will appear with a "Custom" label and be available within approximately one hour.
[![Custom recommendations.](media/custom-security-policies/custom-policy-recommendations.png)](media/custom-security-policies/custom-policy-recommendations-in-context.png#lightbox)
## Configure a security policy in Azure Policy using the REST API
-As part of the native integration with Azure Policy, Microsoft Defender for Cloud enables you to take advantage Azure PolicyΓÇÖs REST API to create policy assignments. The following instructions walk you through creation of policy assignments, as well as customization of existing assignments.
+As part of the native integration with Azure Policy, Microsoft Defender for Cloud enables you to take advantage of Azure Policy's REST API to create policy assignments. The following instructions walk you through creating policy assignments and customizing existing assignments.
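As a concrete illustration, the following Python sketch builds the `PUT` request for the Azure Policy assignments REST API. The endpoint shape follows the Azure Policy REST reference; the `api-version`, scope, assignment name, and definition ID are illustrative assumptions, not values from this article.

```python
# Build the PUT request for the Azure Policy assignments REST API.
# The api-version, scope, and IDs below are illustrative assumptions.
ARM = "https://management.azure.com"
API_VERSION = "2021-06-01"  # assumed; check the current REST reference

def policy_assignment_request(scope, name, definition_id, display_name):
    """Return the (url, body) pair for a policy assignment PUT call."""
    url = (f"{ARM}{scope}/providers/Microsoft.Authorization"
           f"/policyAssignments/{name}?api-version={API_VERSION}")
    body = {"properties": {"displayName": display_name,
                           "policyDefinitionId": definition_id}}
    return url, body

url, body = policy_assignment_request(
    scope="/subscriptions/00000000-0000-0000-0000-000000000000",  # placeholder
    name="my-custom-assignment",
    definition_id="/providers/Microsoft.Authorization/policyDefinitions/example-definition",
    display_name="My custom initiative assignment",
)
# Send with an ARM bearer token, for example:
# requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
```

The sketch only constructs the request; sending it requires an Azure Resource Manager bearer token for an identity with permission to assign policies at that scope.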
Important concepts in Azure Policy:
defender-for-cloud Episode Fifteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-fifteen.md
Last updated 07/14/2022
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Defender for Servers integration with Microsoft Defender for Endpoint](episode-sixteen.md)
defender-for-cloud Episode Sixteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-sixteen.md
+
+ Title: Defender for Servers integration with Microsoft Defender for Endpoint
+
+description: Learn about the integration between Defender for Servers and Microsoft Defender for Endpoint
+ Last updated : 07/20/2022++
+# Defender for Servers integration with Microsoft Defender for Endpoint
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Erel Hansav joins Yuri Diogenes to talk about the latest updates regarding the Defender for Servers integration with Microsoft Defender for Endpoint. Erel explains the architecture of this integration for the different versions of Windows Servers, how this integration takes place in the backend, the deployment options for Windows and Linux and the deployment at scale using Azure Policy.
++
+<iframe src="https://aka.ms/docs/player?id=aaf5dbcd-9a29-40c2-b355-8c832b27baa5" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [01:14](/shows/mdc-in-the-field/servers-med-integration#time=00m00s) - Introduction
+
+- [05:54](/shows/mdc-in-the-field/servers-med-integration#time=02m13s) - Understanding Microsoft Defender for Endpoint's integration with Defender for Servers
+
+- [06:51](/shows/mdc-in-the-field/servers-med-integration#time=15m30s) - Onboarding flow
+
+- [10:13](/shows/mdc-in-the-field/servers-med-integration#time=20m05s) - Options to deploy at scale
+
+## Recommended resources
+
+[Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md)
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
description: Learn about deploying Microsoft Defender for Endpoint from Microsof
Previously updated : 06/19/2022 Last updated : 07/20/2022 # Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint
With Microsoft Defender for Servers, you can deploy [Microsoft Defender for Endp
> > At Ignite 2020, we launched the [Microsoft Defender for Cloud XDR suite](https://www.microsoft.com/security/business/threat-protection), and this EDR component was renamed **Microsoft Defender for Endpoint (MDE)**.
+You can learn about Defender for Cloud's integration with Microsoft Defender for Endpoint by watching this video from the Defender for Cloud in the Field video series: [Defender for Servers integration with Microsoft Defender for Endpoint](episode-sixteen.md).
+ ## Availability | Aspect | Details |
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/overview-page.md
Title: Microsoft Defender for Cloud's main dashboard or 'overview' page description: Learn about the features of the Defender for Cloud overview page Previously updated : 11/09/2021 Last updated : 07/20/2022 # Microsoft Defender for Cloud's overview page -
-When you open Microsoft Defender for Cloud, the first page to appear is the overview page.
-
-This interactive dashboard provides a unified view into the security posture of your hybrid cloud workloads. Additionally, it shows security alerts, coverage information, and more.
+Microsoft Defender for Cloud's overview page is an interactive dashboard that provides a unified view into the security posture of your hybrid cloud workloads. Additionally, it shows security alerts, coverage information, and more.
You can select any element on the page to get more detailed information.
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
Title: Microsoft Defender for Cloud troubleshooting guide description: This guide is for IT professionals, security analysts, and cloud admins who need to troubleshoot Microsoft Defender for Cloud related issues.++ Previously updated : 12/26/2021 Last updated : 07/17/2022 # Microsoft Defender for Cloud Troubleshooting Guide This guide is for information technology (IT) professionals, information security analysts, and cloud administrators whose organizations need to troubleshoot Defender for Cloud related issues.
-Defender for Cloud uses the Log Analytics agent to collect and store data. See [Microsoft Defender for Cloud Platform Migration](./enable-data-collection.md) to learn more. The information in this article represents Defender for Cloud functionality after transition to the Log Analytics agent.
- > [!TIP]
-> A dedicated area of the Defender for Cloud pages in the Azure portal provides a collated, ever-growing set of self-help materials for solving common challenges with Defender for Cloud.
->
-> When you're facing an issue, or are seeking advice from our support team, **Diagnose and solve problems** is good place to look for solutions:
+> When you're facing an issue or need advice from our support team, the **Diagnose and solve problems** section of the Azure portal is a good place to look for solutions:
> > :::image type="content" source="media/release-notes/solve-problems.png" alt-text="Defender for Cloud's 'Diagnose and solve problems' page":::
-## Troubleshooting guide
+## Use the Audit Log to investigate issues
+
+The first place to look for troubleshooting information is the [audit log](../azure-monitor/essentials/platform-logs-overview.md) records for the failed component. In the audit logs, you can see details including:
+
+- Which operations were performed
+- Who initiated the operation
+- When the operation occurred
+- The status of the operation
+
+The audit log contains all write operations (PUT, POST, DELETE) performed on your resources, but not read operations (GET).
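That write/read split can be illustrated with a small Python sketch; the record shape below is a simplified assumption, not the full Azure activity log schema.

```python
# Keep only the write operations (PUT, POST, DELETE) that the audit log
# records; read operations (GET) never appear there.
WRITE_METHODS = {"PUT", "POST", "DELETE"}

def write_operations(records):
    """Filter simplified activity-log records down to write operations."""
    return [r for r in records if r.get("httpMethod") in WRITE_METHODS]

records = [
    {"operation": "Microsoft.Security/policies/write", "httpMethod": "PUT",
     "caller": "admin@contoso.com", "status": "Succeeded"},
    {"operation": "Microsoft.Security/policies/read", "httpMethod": "GET",
     "caller": "reader@contoso.com", "status": "Succeeded"},
]
print(len(write_operations(records)))  # 1: only the PUT record survives
```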
+
+## Troubleshooting the native multicloud connector
+
+Defender for Cloud uses connectors to collect monitoring data from AWS accounts and GCP projects. If you're experiencing issues with the connector or you don't see data from AWS or GCP, we recommend that you review these troubleshooting tips:
+
+Common connector issues:
+
+- Make sure that the subscription associated with the connector is selected in the **subscriptions filter**, located in the **Directories + subscriptions** section of the Azure portal.
+- Standards should be assigned on the security connector. To check, go to the **Environment settings** in the Defender for Cloud left menu, select the connector, and select **Settings**. There should be standards assigned. You can select the three dots to check if you have permissions to assign standards.
+- Connector resource should be present in Azure Resource Graph (ARG). Use the following ARG query to check: `resources | where ['type'] =~ "microsoft.security/securityconnectors"`
+- Make sure that sending Kubernetes audit logs is enabled on the AWS or GCP connector so that you can get [threat detection alerts for the control plane](alerts-reference.md#alerts-k8scluster).
+- Make sure that Azure Arc and the Azure Policy Arc extension were installed successfully.
+- Make sure that the agent is installed on your Elastic Kubernetes Service (EKS) clusters. You can install the agent with the **Azure Policy add-on for Kubernetes should be installed and enabled on your clusters** recommendation or the **Azure policy extension for Kubernetes should be installed and enabled on your clusters** recommendation. Download the script provided in the recommendation and run it on your cluster. The recommendation should disappear within an hour of when the script is run.
+- If you're experiencing issues with deleting the AWS or GCP connector, check whether you have a resource lock (in this case, the Azure activity log might show an error that hints at the presence of a lock).
+- Check that workloads exist in the AWS account or GCP project.
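The Azure Resource Graph check in the tips above can also be run programmatically. The following Python sketch wraps the query in a Resource Graph REST request; the `api-version` and subscription ID are assumptions, and the request is only constructed, not sent.

```python
# Wrap the ARG query from the tip above in a Resource Graph REST request.
# The api-version and the subscription ID are illustrative assumptions.
QUERY = 'resources | where [\'type\'] =~ "microsoft.security/securityconnectors"'

def arg_request(subscription_ids):
    """Return the (url, body) pair for a Resource Graph POST call."""
    url = ("https://management.azure.com/providers/Microsoft.ResourceGraph"
           "/resources?api-version=2021-03-01")  # assumed api-version
    body = {"subscriptions": subscription_ids, "query": QUERY}
    return url, body

url, body = arg_request(["00000000-0000-0000-0000-000000000000"])
# POST with an ARM bearer token; the response's "data" rows should list
# your security connector resources if they exist.
```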
+
+AWS connector issues:
+
+- Make sure that the CloudFormation template deployment completed successfully.
+- You need to wait at least 12 hours after the AWS root account was created.
+- Make sure that EKS clusters are successfully connected to Arc-enabled Kubernetes.
+- If you don't see AWS data in Defender for Cloud, make sure that the AWS resources required to send data to Defender for Cloud exist in the AWS account.
-This guide explains how to troubleshoot Defender for Cloud related issues.
+GCP connector issues:
+
+- Make sure that the GCP Cloud Shell script completed successfully.
+- Make sure that GKE clusters are successfully connected to Arc-enabled Kubernetes.
+- Make sure that Azure Arc endpoints are in the firewall allowlist. The GCP connector makes API calls to these endpoints to fetch the necessary onboarding files.
+- If the onboarding of GCP projects failed, make sure you have the `compute.regions.list` permission and the Azure AD permission to create the service principal used as part of the onboarding process. Make sure that the GCP resources `WorkloadIdentityPoolId`, `WorkloadIdentityProviderId`, and `ServiceAccountEmail` are created in the GCP project.
+
+## Troubleshooting the Log Analytics agent
+
+Defender for Cloud uses the Log Analytics agent to [collect and store data](./enable-data-collection.md). The information in this article represents Defender for Cloud functionality after the transition to the Log Analytics agent.
Alert types:
Depending on the alert types, customers can gather the necessary information to
- Security logs in the Virtual Machine (VM) event viewer in Windows - AuditD in Linux-- The Azure activity logs, and the enable diagnostic logs on the attack resource.-
-Customers can share feedback for the alert description and relevance. Navigate to the alert itself, select the **Was This Useful** button, select the reason, and then enter a comment to explain which explains the feedback. We consistently monitor this feedback channel to improve our alerts.
+- The Azure activity logs, and the diagnostic logs enabled on the attacked resource.
-## Audit log
+Customers can share feedback for the alert description and relevance. Navigate to the alert itself, select the **Was This Useful** button, select the reason, and then enter a comment to explain the feedback. We consistently monitor this feedback channel to improve our alerts.
-Most of the troubleshooting done in Defender for Cloud takes place by first looking at the [Audit Log](../azure-monitor/essentials/platform-logs-overview.md) records for the failed component. Through audit logs, you can determine:
+### Check the Log Analytics agent processes and versions
-- Which operations were taken place-- Who initiated the operation-- When the operation occurred-- The status of the operation-- The values of other properties that might help you research the operation
+Just like Azure Monitor, Defender for Cloud uses the Log Analytics agent to collect security data from your Azure virtual machines. After data collection is enabled and the agent is correctly installed on the target machine, the `HealthService.exe` process should be running.
-The audit log contains all write operations (PUT, POST, DELETE) performed on your resources, however it does not include read operations (GET).
-Open the services management console (services.msc) to make sure that the Log Analytics agent service is running, as shown below:
-## Log Analytics agent
-Defender for Cloud uses the Log Analytics agent ΓÇô this is the same agent used by the Azure Monitor service ΓÇô to collect security data from your Azure virtual machines. After data collection is enabled and the agent is correctly installed in the target machine, the process below should be in execution:
-To see which version of the agent you have, open **Task Manager**, in the **Processes** tab locate the **Log Analytics agent Service**, right-click on it and select **Properties**. In the **Details** tab, look at the file version as shown below:
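These manual checks can also be scripted. The following Windows-oriented Python sketch assumes the standard `HealthService` service name and parses `sc query` output; it's shown here against a canned sample so it runs anywhere.

```python
import re
import subprocess

def service_state(name="HealthService", output=None):
    """Return the STATE value from `sc query <name>` output."""
    if output is None:  # on Windows, query the live service
        output = subprocess.run(["sc", "query", name],
                                capture_output=True, text=True).stdout
    match = re.search(r"STATE\s*:\s*\d+\s+(\w+)", output)
    return match.group(1) if match else "UNKNOWN"

# Canned `sc query` output for demonstration (hypothetical sample).
sample = """SERVICE_NAME: HealthService
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 4  RUNNING"""
print(service_state(output=sample))  # RUNNING
```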
-- HealthService.exe
-If you open the services management console (services.msc), you will also see the Log Analytics agent service running as shown below:
-
-![Services.](./media/troubleshooting-guide/troubleshooting-guide-fig5.png)
-
-To see which version of the agent you have, open **Task Manager**, in the **Processes** tab locate the **Log Analytics agent Service**, right-click on it and click **Properties**. In the **Details** tab, look the file version as shown below:
-
-![File.](./media/troubleshooting-guide/troubleshooting-guide-fig6.png)
-
-## Log Analytics agent installation scenarios
+### Log Analytics agent installation scenarios
There are two installation scenarios that can produce different results when installing the Log Analytics agent on your computer. The supported scenarios are: -- **Agent installed automatically by Defender for Cloud**: in this scenario you will be able to view the alerts in both locations, Defender for Cloud and Log search. You will receive email notifications to the email address that was configured in the security policy for the subscription the resource belongs to.
+- **Agent installed automatically by Defender for Cloud**: You can view the alerts in Defender for Cloud and Log search. You'll receive email notifications to the email address that was configured in the security policy for the subscription the resource belongs to.
-- **Agent manually installed on a VM located in Azure**: in this scenario, if you are using agents downloaded and installed manually prior to February 2017, you can view the alerts in the Defender for Cloud portal only if you filter on the subscription the workspace belongs to. If you filter on the subscription the resource belongs to, you won't see any alerts. You'll receive email notifications to the email address that was configured in the security policy for the subscription the workspace belongs to.
+- **Agent manually installed on a VM located in Azure**: In this scenario, if you're using agents downloaded and installed manually before February 2017, you can view the alerts in the Defender for Cloud portal only if you filter on the subscription the workspace belongs to. If you filter on the subscription the resource belongs to, you won't see any alerts. You'll receive email notifications to the email address that was configured in the security policy for the subscription the workspace belongs to.
> [!NOTE] > To avoid the behavior explained in the second scenario, make sure you download the latest version of the agent. - <a name="mon-network-req"></a>
-## Troubleshooting monitoring agent network requirements
+### Monitoring agent network connectivity issues
-For agents to connect to and register with Defender for Cloud, they must have access to network resources, including the port numbers and domain URLs.
+For agents to connect to and register with Defender for Cloud, they must have access to the DNS addresses and network ports for Azure network resources.
-- For proxy servers, you need to ensure that the appropriate proxy server resources are configured in agent settings. Read this article for more information on [how to change the proxy settings](../azure-monitor/agents/agent-windows.md).-- For firewalls that restrict access to the Internet, you need to configure your firewall to permit access to Log Analytics. No action is needed in agent settings.
+- When you use proxy servers, you need to make sure that the appropriate proxy server resources are configured correctly in the [agent settings](../azure-monitor/agents/agent-windows.md).
+- You need to configure your network firewalls to permit access to Log Analytics.
-The following table shows resources needed for communication.
+The Azure network resources are:
| Agent Resource | Ports | Bypass HTTPS inspection |
|---|---|---|
| *.blob.core.windows.net | 443 | Yes |
| *.azure-automation.net | 443 | Yes |
-If you encounter onboarding issues with the agent, make sure to read the article [How to troubleshoot Operations Management Suite onboarding issues](https://support.microsoft.com/help/3126513/how-to-troubleshoot-operations-management-suite-onboarding-issues).
+If you're having trouble onboarding the Log Analytics agent, make sure to read [how to troubleshoot Operations Management Suite onboarding issues](https://support.microsoft.com/help/3126513/how-to-troubleshoot-operations-management-suite-onboarding-issues).
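A quick way to check reachability of the endpoints in the table above is a plain TCP probe. The sketch below only verifies that port 443 accepts a connection (not proxy or TLS-inspection behavior), and the concrete hostnames are representative stand-ins for the wildcard domains.

```python
import socket

AGENT_ENDPOINTS = [
    "blob.core.windows.net",  # stand-ins for the *.blob.core.windows.net
    "azure-automation.net",   # and *.azure-automation.net wildcard rows
]

def can_connect(host, port=443, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe each endpoint from the monitored machine, for example:
# for host in AGENT_ENDPOINTS:
#     print(host, can_connect(host))
```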
+
+## Antimalware protection isn't working properly
-## Troubleshooting endpoint protection not working properly
+The guest agent is the parent process of everything the [Microsoft Antimalware](../security/fundamentals/antimalware.md) extension does. When the guest agent process fails, the Microsoft Antimalware protection that runs as a child process of the guest agent may also fail.
-The guest agent is the parent process of everything the [Microsoft Antimalware](../security/fundamentals/antimalware.md) extension does. When the guest agent process fails, the Microsoft Antimalware that runs as a child process of the guest agent may also fail. In scenarios like that is recommended to verify the following options:
+Here are some other troubleshooting tips:
-- If the target VM is a custom image and the creator of the VM never installed guest agent.-- If the target is a Linux VM instead of a Windows VM then installing the Windows version of the antimalware extension on a Linux VM will fail. The Linux guest agent has specific requirements in terms of OS version and required packages, and if those requirements are not met the VM agent will not work there either.-- If the VM was created with an old version of guest agent. If it was, you should be aware that some old agents could not auto-update itself to the newer version and this could lead to this problem. Always use the latest version of guest agent if creating your own images.-- Some third-party administration software may disable the guest agent, or block access to certain file locations. If you have third-party installed on your VM, make sure that the agent is on the exclusion list.-- Certain firewall settings or Network Security Group (NSG) may block network traffic to and from guest agent.-- Certain Access Control List (ACL) may prevent disk access.-- Lack of disk space can block the guest agent from functioning properly.
+- If the target VM was created from a custom image, make sure that the creator of the VM installed the guest agent.
+- If the target is a Linux VM, then installing the Windows version of the antimalware extension will fail. The Linux guest agent has specific OS and package requirements.
+- If the VM was created with an old version of guest agent, the old agents might not have the ability to auto-update to the newer version. Always use the latest version of guest agent when you create your own images.
+- Some third-party administration software may disable the guest agent, or block access to certain file locations. If third-party administration software is installed on your VM, make sure that the antimalware agent is on the exclusion list.
+- Make sure that firewall settings and Network Security Group (NSG) aren't blocking network traffic to and from guest agent.
+- Make sure that there are no Access Control Lists (ACLs) that prevent disk access.
+- The guest agent requires sufficient disk space in order to function properly.
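The disk-space condition in the last bullet is easy to check from a script. Here's a minimal sketch; the 1 GiB threshold is an illustrative assumption, not a documented requirement.

```python
import shutil

def has_free_space(path="/", min_bytes=1 * 1024**3):
    """Return True if the drive holding `path` has at least min_bytes free.

    The 1 GiB default is an illustrative threshold, not a documented one.
    """
    return shutil.disk_usage(path).free >= min_bytes

print(has_free_space())
```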
-By default the Microsoft Antimalware User Interface is disabled, read [Enabling Microsoft Antimalware User Interface on Azure Resource Manager VMs Post Deployment](/archive/blogs/azuresecurity/enabling-microsoft-antimalware-user-interface-post-deployment) for more information on how to enable it if you need.
+By default the Microsoft Antimalware user interface is disabled, but you can [enable the Microsoft Antimalware user interface](/archive/blogs/azuresecurity/enabling-microsoft-antimalware-user-interface-post-deployment) on Azure Resource Manager VMs.
## Troubleshooting problems loading the dashboard
-If you experience issues loading the workload protection dashboard, ensure that the user that registers the subscription to Defender for Cloud (i.e. the first user one who opened Defender for Cloud with the subscription) and the user who would like to turn on data collection should be *Owner* or *Contributor* on the subscription. From that moment on also users with *Reader* on the subscription can see the dashboard/alerts/recommendation/policy.
+If you experience issues loading the workload protection dashboard, make sure that the user who first enabled Defender for Cloud on the subscription and the user who wants to turn on data collection have the *Owner* or *Contributor* role on the subscription. Users with the *Reader* role on the subscription can then see the dashboard, alerts, recommendations, and policy.
## Contacting Microsoft Support
-Some issues can be identified using the guidelines provided in this article, others you can also find documented at the Defender for Cloud public [Microsoft Q&A page](/answers/topics/azure-security-center.html). However if you need further troubleshooting, you can open a new support request using **Azure portal** as shown below:
+You can also find troubleshooting information for Defender for Cloud at the [Defender for Cloud Q&A page](/answers/topics/azure-security-center.html). If you need further troubleshooting, you can open a new support request using the **Azure portal**, as shown below:
![Microsoft Support.](./media/troubleshooting-guide/troubleshooting-guide-fig2.png) ## See also
-In this page, you learned how to configure security policies in Microsoft Defender for Cloud. To learn more about Microsoft Defender for Cloud, see the following:
+In this page, you learned about troubleshooting steps for Defender for Cloud. To learn more about Microsoft Defender for Cloud:
-- [Managing and responding to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md) ΓÇö Learn how to manage and respond to security alerts-- [Alerts Validation in Microsoft Defender for Cloud](alert-validation.md)-- [Microsoft Defender for Cloud FAQ](faq-general.yml) ΓÇö Find frequently asked questions about using the service
+- Learn how to [manage and respond to security alerts](managing-and-responding-alerts.md) in Microsoft Defender for Cloud
+- [Alert validation](alert-validation.md) in Microsoft Defender for Cloud
+- Review [frequently asked questions](faq-general.yml) about using Microsoft Defender for Cloud
defender-for-iot Configure Windows Endpoint Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-windows-endpoint-monitoring.md
Configure a firewall rule that opens outgoing traffic from the sensor to the sca
## Configure WMI domain scanning
-Before you can configure a WEM scan from your sensor, you need to configure WMI domain scanning on the Windows machine you'll be scanning. <!--where are these procedures being performed?-->
+Before you can configure a WEM scan from your sensor, you need to configure WMI domain scanning on the Windows machine you'll be scanning.
This procedure describes how to configure WMI scanning using a Group Policy Object (GPO), updating your firewall settings, defining permissions for your WMI namespace, and defining a local group.
If you'll be using a non-admin account to run your WEM scans, this procedure is
Learn more about active monitoring options. For more information, see: - [Configure active monitoring for OT networks](configure-active-monitoring.md)-- [Configure DNS servers for reverse lookup resolution for OT monitoring](configure-reverse-dns-lookup.md)
+- [Configure DNS servers for reverse lookup resolution for OT monitoring](configure-reverse-dns-lookup.md)
defender-for-iot Detect Windows Endpoints Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/detect-windows-endpoints-script.md
+
+ Title: Detect Windows workstations and servers with a local script
+description: Learn about how to detect Windows workstations and servers on your network using a local script.
Last updated : 07/12/2022+++
+# Detect Windows workstations and servers with a local script
+
+In addition to detecting OT devices on your network, use Defender for IoT to discover Microsoft Windows workstations and servers. Like other detected devices, Windows workstations and servers are displayed in the Device inventory. The **Device inventory** pages on the sensor and on-premises management console show enriched data about Windows devices, including data about the Windows operating system and installed applications, patch-level data, open ports, and more.
+
+This article describes how to configure Defender for IoT to detect Windows workstations and servers with local surveying, performed by distributing and running a script on each device. While you can use active scanning and scheduled WMI scans to obtain this data, working with local scripts bypasses the risks of running WMI polling on an endpoint. Running a local script is also useful for regulated networks that have waterfalls and one-way elements.
+
+For more information, see [Configure Windows Endpoint Monitoring](configure-windows-endpoint-monitoring.md).
+
+## Supported operating systems
+
+The script described in this article is supported for the following Windows operating systems:
+
+- Windows XP
+- Windows 2000
+- Windows NT
+- Windows 7
+- Windows 10
+- Windows Server 2003/2008/2012/2016
+
+## Prerequisites
+
+Before you start, make sure that you have:
+
+- Administrator permissions on any devices where you intend to run the script
+- A Defender for IoT OT sensor already monitoring the network where the device is connected
+
+If an OT network sensor has already learned the device, running the script will retrieve its information and enrichment data.
+
+## Run the script
+
+This procedure describes how to obtain, deploy, and run the script on the Windows workstation and servers that you want to monitor in Defender for IoT.
+
+The script you run to detect enriched Windows data runs as a utility, not as an installed program. Running the script doesn't affect the endpoint.
+
+1. To acquire the script, [contact customer support](https://support.microsoft.com).
+
+1. Deploy the script once, or on an ongoing basis through automation, using standard automated deployment methods and tools.
+
+1. Copy the script to a local drive and unzip it. The following files appear:
+
+ - `start.bat`
+ - `settings.json`
+ - `data.bin`
+ - `run.bat`
+
+1. Run the `run.bat` file.
+
+    After the script runs to probe the registry, a CX-snapshot file appears with the registry information. The filename indicates the system name, date, and time of the snapshot, with the following syntax: `CX-snapshot_SystemName_Month_Year_Time`.
+
+Files generated by the script:
+
+- Remain on the local drive until you delete them.
+- Must remain in the same location. Do not separate the generated files.
+- Are overwritten if you run the script again.
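When automating collection of the generated files, the snapshot filename syntax above can be parsed to group files by system. A small sketch, assuming the `CX-snapshot_SystemName_Month_Year_Time` syntax; the sample filename and `.json` extension are hypothetical.

```python
import re

# Matches the snapshot filename syntax CX-snapshot_SystemName_Month_Year_Time
# (the trailing "time" part, including any extension, is captured loosely).
PATTERN = re.compile(
    r"CX-snapshot_(?P<system>[^_]+)_(?P<month>[^_]+)_(?P<year>\d{4})_(?P<time>.+)")

def snapshot_system(filename):
    """Return the system name encoded in a snapshot filename, or None."""
    match = PATTERN.match(filename)
    return match.group("system") if match else None

print(snapshot_system("CX-snapshot_PLC-HMI-01_July_2022_1130.json"))  # PLC-HMI-01
```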
+
+## Import device details
+
+After you run the script as described [earlier](#run-the-script), import the generated data to your sensor to view the device details in the **Device inventory**.
+
+**To import device details to your sensor**:
+
+1. Use standard, automated methods and tools to move the generated files from each Windows endpoint to a location accessible from your OT sensors.
+
+ Do not update filenames or separate the files from each other.
+
+1. On your OT sensor console, select **System Settings** > **Import Settings** > **Windows Information**.
+
+1. Select **Import File**, and then select all the files (Ctrl+A).
+
+1. Select **Close**. The device registry information is imported and a successful confirmation message is shown.
+
+ If there's a problem uploading one of the files, you'll be informed which file upload failed.
+
+## Next steps
+
+For more information, see [View detected devices on-premises](how-to-investigate-sensor-detections-in-a-device-inventory.md).
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
After activating a sensor, you'll need to apply new activation files as follows:
|Location |Activation process |
|---------|---------|
|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>However, you'll also need to apply a new activation file when [updating your sensor software](update-ot-software.md#download-and-apply-a-new-activation-file) from a legacy version to version 22.2.x. |
-| **Locally-managed** | Apply a new activation file to locally-managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. |
+| **Locally managed** | Apply a new activation file to locally managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. |
For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md) and [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md).
You can access console tools from the side menu. Tools help you:
| Tool | Description |
| --|--|
| Overview | View a dashboard with high-level information about your sensor deployment, alerts, traffic, and more. |
| Device map | View the network devices, device connections, Purdue levels, and device properties in a map. Various zoom, highlight, and filter options are available to help you gain the insight you need. For more information, see [Investigate sensor detections in the Device Map](how-to-work-with-the-sensor-device-map.md#investigate-sensor-detections-in-the-device-map). |
-| Device inventory | The **Device inventory** displays a list of device attributes that this sensor detects. Options are available to: <br /> - Sort, or filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details. For more information, see [View your device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md).|
+| Device inventory | The Device inventory displays a list of device attributes that this sensor detects. Options are available to: <br /> - Sort, or filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details. For more information, see [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md).|
| Alerts | Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that requires your attention. For more information, see [View alerts on your sensor](how-to-view-alerts.md#view-alerts-on-your-sensor).|

### Analyze
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Syslog and other default forwarding actions are delivered with your system. More
:::image type="content" source="media/how-to-work-with-alerts-sensor/alert-information-screen.png" alt-text="Alert information.":::
-Defender for IoT administrators has permission to use forwarding rules.
+Defender for IoT administrators have permission to use forwarding rules.
## About forwarded alert information
Alerts provide information about an extensive range of security and operational
- Suspicious traffic detected
-Relevant information is sent to partner systems when forwarding rules are created.
+- Disconnected sensors
+
+- Remote backup failures
+
+Relevant information is sent to partner systems when forwarding rules are created in the sensor console or the [on-premises management console](how-to-work-with-alerts-on-premises-management-console.md#create-forwarding-rules).
## About Forwarding rules and certificates
defender-for-iot How To Investigate All Enterprise Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md
Title: Learn about devices discovered by all sensors
-description: Use the device inventory in the on-premises management console to get a comprehensive view of device information from connected sensors. Use import, export, and filtering tools to manage this information.
Previously updated : 06/12/2022
+ Title: Manage your OT device inventory from an on-premises management console
+description: Learn how to view and manage OT devices (assets) from the Device inventory page on an on-premises management console.
--
-# Investigate all sensor detections in the device inventory
-
-You can view device information from connected sensors by using the *device inventory* in the on-premises management console. This feature gives you a comprehensive view of all network information. Use import, export, and filtering tools to manage this information. The status information about the connected sensor versions also appears.
Last updated : 07/12/2022
-For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
+
-## View the device inventory from an on-premises management console
+# Manage your OT device inventory from an on-premises management console
+Use the **Device inventory** page from an on-premises management console to manage all OT and IT devices detected by sensors connected to that console. Identify newly detected devices, devices that might need troubleshooting, and more.
-The following table describes the table columns in the device inventory.
+For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device).
-| Parameter | Description |
-|--|--|
-| **Unacknowledged Alerts** | The number of unhandled alerts associated with this device. |
-| **Business Unit** | The business unit that contains this device. |
-| **Region** | The region that contains this device. |
-| **Site** | The site that contains this device. |
-| **Zone** | The zone that contains this device. |
-| **Appliance** | The Microsoft Defender for IoT sensor that protects this device. |
-| **Name** | The name of this device as Defender for IoT discovered it. |
-| **Type** | The type of device, such as PLC or HMI. |
-| **Vendor** | The name of the device's vendor, as defined in the MAC address. |
-| **Operating System** | The OS of the device. |
-| **Firmware** | The device's firmware. |
-| **IP Address** | The IP address of the device. |
-| **VLAN** | The VLAN of the device. |
-| **MAC Address** | The MAC address of the device. |
-| **Protocols** | The protocols that the device uses. |
-| **Unacknowledged Alerts** | The number of unhandled alerts associated with this device. |
-| **Is Authorized** | The authorization status of the device:<br />- **True**: The device has been authorized.<br />- **False**: The device hasn't been authorized. |
-| **Is Known as Scanner** | Whether this device performs scanning-like activities in the network. |
-| **Is Programming Device** | Whether this is a programming device:<br />- **True**: The device performs programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations.<br />- **False**: The device isn't a programming device. |
-| **Groups** | Groups in which this device participates. |
-| **Last Activity** | The last activity that the device performed. |
-| **Discovered** | When this device was first seen in the network. |
-| **PLC mode (preview)** | The PLC operating mode includes the key state (physical) and the run state (logical). Possible **Key** states are Run, Program, Remote, Stop, Invalid, and Programming Disabled. Possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, and Offline. If both states are the same, only one state is presented. |
-## Integrate data into the enterprise device inventory
+> [!TIP]
+> Alternatively, view your device inventory from [the Azure portal](how-to-manage-device-inventory-for-organizations.md), or from an [OT sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md).
+>
-Data integration capabilities let you enhance the data in the device inventory with information from other resources. These sources include CMDBs, DNS, firewalls, and Web APIs.
+## View the device inventory
-You can use this information to learn. For example:
+To view detected devices on the **Device Inventory** page, sign in to your on-premises management console, and then select **Device Inventory**.
-- Device purchase dates and end-of-warranty dates
+For example:
-- Users responsible for each device
-- Opened tickets for devices
+Use any of the following options to modify or filter the devices shown:
-- The last date when the firmware was upgraded
+|Option |Steps |
+|||
+| **Sort devices** | To sort the grid by a specific column, select the **Sort** :::image type="icon" source="media/how-to-work-with-asset-inventory-information/alphabetical-order-icon.png" border="false"::: button in the column you want to sort by. Use the arrow buttons that appear to sort ascending or descending. |
+|**Filter devices shown** | 1. In the column that you want to filter, select the **Filter** button :::image type="icon" source="media/how-to-work-with-asset-inventory-information/filter-a-column-icon.png" border="false":::.<br>2. In the **Filter** box, define your filter value. <br><br>Filters aren't saved when you refresh the **Device Inventory** page. |
+| **Save a filter** | To save the current set of filters, select the **Save As** button that appears in the filter row.|
+| **Load a saved filter** | Saved filters are listed on the left, in the **Groups** pane. <br><br>1. Select the **Options** :::image type="icon" source="media/how-to-work-with-asset-inventory-information/options-menu.png" border="false"::: button in the toolbar to display the **Groups** pane. <br>2. In the **Device Inventory Filters** list, select the saved filter you want to load. |
-- Devices allowed access to the internet
+For more information, see [Device inventory column reference](#device-inventory-column-reference).
-- Devices running active antivirus applications
+## Export the device inventory to CSV
-- Users signed in to devices
+Export your device inventory to a CSV file to manage or share data outside of the OT sensor.
+To export device inventory data, select the **Import/Export file** :::image type="icon" source="media/how-to-work-with-asset-inventory-information/menu-icon-device-inventory.png" border="false"::: button, and then select one of the following:
-You can integrate data by either:
+- **Export Device Inventory View**: Exports only the devices currently displayed, with the current filter applied
+- **Export All Device Inventory**: Exports the entire device inventory, with no filtering
-- Adding it manually
+Save the exported file locally.
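Once saved locally, the exported CSV can be processed with any standard tooling. For example, here's a minimal Python sketch that lists unauthorized devices from an export; the column names (`Name`, `Is Authorized`) mirror the inventory grid but are assumptions, so adjust them to match your actual export:

```python
import csv

def unauthorized_devices(csv_path):
    """Return rows from an exported device inventory CSV where
    'Is Authorized' is False. Column names mirror the inventory grid
    but may differ in your export -- adjust to match your file."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return [
            row for row in csv.DictReader(f)
            if row.get("Is Authorized", "").strip().lower() == "false"
        ]
```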
-- Running customized scripts that Defender for IoT provides
+## Enhance device inventory data
+Enhance the data in your device inventory with information from other sources, such as CMDBs, DNS, firewalls, and Web APIs. Use enhanced data to learn things such as:
-You can work with Defender for IoT technical support to set up your system to receive Web API queries.
+- Device purchase dates and end-of-warranty dates
+- Users responsible for each device
+- Opened tickets for devices
+- The last date when the firmware was upgraded
+- Devices allowed access to the internet
+- Devices running active antivirus applications
+- Users signed in to devices
-To add data manually:
+Enhancement data is shown as extra columns in the on-premises management console **Device inventory** page.
-1. On the side menu, select **Device Inventory** and then select :::image type="icon" source="media/how-to-work-with-asset-inventory-information/menu-icon.png" border="false":::.
+Enhance data by adding it manually or by running customized scripts that Defender for IoT provides. You can also work with Defender for IoT support to set up your system to receive Web API queries.
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/asset-inventory-settings-v2.png" alt-text="Edit your device's inventory settings.":::
+For example, the following image shows how you might use enhanced data in the device inventory:
-2. In the **Device Inventory Settings** dialog box, select **ADD CUSTOM COLUMN**.
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/add-custom-column.png" alt-text="Add a custom column to your inventory.":::
+# [Add data manually](#tab/manually)
-3. In the **Add Custom Column** dialog box, add the new column name (up to 250 characters UTF), select **Manual**, and select **SAVE**. The new item appears in the **Device Inventory Settings** dialog box.
+To enhance your data manually:
-4. In the upper-right corner of the **Device Inventory** window, select :::image type="icon" source="media/how-to-work-with-asset-inventory-information/menu-icon-device-inventory.png" border="false"::: and select **Export All Device Inventory**. The CSV file is generated.
+1. Sign in to your on-premises management console, and select **Device inventory**.
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/sample-exported-csv-file.png" alt-text="The exported CSV file.":::
+1. On the top-right, select the **Settings** :::image type="icon" source="media/how-to-work-with-asset-inventory-information/menu-icon.png" border="false"::: button to open the **Device Inventory Settings** dialog.
-5. Manually add the information to the new column and save the file.
+1. In the **Device Inventory Settings** dialog box, select **ADD CUSTOM COLUMN**.
-6. In the upper-right corner of the **Device Inventory** window, select :::image type="icon" source="media/how-to-work-with-asset-inventory-information/menu-icon-device-inventory.png" border="false":::, select **Import Manual Input Columns**, and browse to the CSV file. The new data appears in the **Device Inventory** table.
+1. In the **Add Custom Column** dialog box, enter the new column name (up to 250 UTF-8 characters).
-To integrate data from other entities:
+1. Select **Manual** > **SAVE**. The new item appears in the **Device Inventory Settings** dialog box.
-1. In the upper-right corner of the **Device Inventory** window, select :::image type="icon" source="media/how-to-work-with-asset-inventory-information/menu-icon-device-inventory.png" border="false"::: and select **Export All Device Inventory**.
+1. In the upper-right corner of the **Device Inventory** window, select the **Import/Export file** :::image type="icon" source="media/how-to-work-with-asset-inventory-information/menu-icon-device-inventory.png" border="false"::: button > **Export All Device Inventory**.
-2. In the **Device Inventory Settings** dialog box, select **ADD CUSTOM COLUMN**.
+ A CSV file is generated with the data displayed.
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/add-custom-column.png" alt-text="Add a custom column to your inventory.":::
+1. Download and open the CSV file for editing, and manually add your information to the new column.
-3. In the **Add Custom Column** dialog box, add the new column name (up to 250 characters UTF), and then select **Automatic**. The **UPLOAD SCRIPT** and **TEST SCRIPT** options appear.
+1. Back in the **Device inventory** page, at the top-right, select the **Import/Export file** :::image type="icon" source="media/how-to-work-with-asset-inventory-information/menu-icon-device-inventory.png" border="false"::: button again > **Import Manual Input Columns**. Browse to and select your edited CSV file.
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/add-custom-column-automatic.png" alt-text="Automatically add custom columns.":::
+The new data appears in the **Device Inventory** grid.
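The CSV round trip above (export, edit, re-import) can also be scripted. The following is a minimal Python sketch for filling a manual custom column; the `Name` column used for matching is an assumption based on the inventory grid, so adjust it to your export:

```python
import csv

def add_custom_column(src_path, dst_path, column, values_by_name):
    """Fill a manual custom column in an exported inventory CSV before
    re-importing it. `values_by_name` maps a device Name to the new
    column's value; unmatched devices get an empty cell."""
    with open(src_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        fieldnames = list(reader.fieldnames or [])
        rows = list(reader)
    if column not in fieldnames:
        fieldnames.append(column)
    for row in rows:
        row[column] = values_by_name.get(row.get("Name", ""), "")
    with open(dst_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```

The resulting file can then be uploaded through **Import Manual Input Columns** as described above.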
-4. Upload and test the script that you received from [Microsoft Support](https://support.serviceshub.microsoft.com/supportforbusiness/create?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
+# [Add data using automation](#tab/automation)
-## Retrieve information from the device inventory
+To enhance your data using automation scripts:
-You can retrieve an extensive range of device information detected by managed sensors and integrate that information with partner systems. For example, you can retrieve sensor, zone, site ID, IP address, MAC address, firmware, protocol, and vendor information. Filter information that you retrieve based on:
+1. Contact [Microsoft Support](https://support.serviceshub.microsoft.com/supportforbusiness/create?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099) to obtain the relevant scripts.
-- Authorized and unauthorized devices.
+1. Sign in to your on-premises management console, and select **Device inventory**.
-- Devices associated with specific sites.
+1. On the top-right, select the **Settings** :::image type="icon" source="media/how-to-work-with-asset-inventory-information/menu-icon.png" border="false"::: button to open the **Device Inventory Settings** dialog.
-- Devices associated with specific zones.
+1. In the **Device Inventory Settings** dialog box, select **ADD CUSTOM COLUMN**.
-- Devices associated with specific sensors.
+1. In the **Add Custom Column** dialog box, enter the new column name (up to 250 UTF-8 characters).
-Work with Defender for IoT API commands to retrieve and integrate this information. For more information, see [Defender for IoT API sensor and management console APIs](references-work-with-defender-for-iot-apis.md).
+1. Select **Automatic**. When the **UPLOAD SCRIPT** and **TEST SCRIPT** buttons appear, upload and then test the script you received from [Microsoft Support](https://support.serviceshub.microsoft.com/supportforbusiness/create?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
-## Filter the device inventory
+The new data appears in the **Device Inventory** grid.
-You can filter the device inventory to show columns of interest. For example, you can view PLC device information.
+
+## Retrieve device inventory data via API
-The filter is cleared when you leave the window.
+You can retrieve an extensive range of device information detected by managed sensors and integrate that information with partner systems.
-To use the same filter multiple times, you can save a filter or a combination of filters that you need. You can open a left pane and view the filters that you've saved:
+For example:
+1. Retrieve sensor, zone, site ID, IP address, MAC address, firmware, protocol, and vendor information.
-To filter the device inventory:
+1. Filter that information based on any of the following values:
-1. In the column that you want to filter, select :::image type="icon" source="media/how-to-work-with-asset-inventory-information/filter-a-column-icon.png" border="false":::.
+ - Authorized and unauthorized devices.
-2. In the **Filter** dialog box, select the filter type:
+ - Devices associated with specific sites.
- - **Equals**: The exact value according to which you want to filter the column. For example, if you filter the protocol column according to **Equals** and `value=ICMP`, the column will present devices that use the ICMP protocol only.
+ - Devices associated with specific zones.
- - **Contains**: The value that's contained among other values in the column. For example, if you filter the protocol column according to **Contains** and `value=ICMP`, the column will present devices that use the ICMP protocol as a part of the list of protocols that the device uses.
+ - Devices associated with specific sensors.
-3. To organize the column information according to alphabetical order, select :::image type="icon" source="media/how-to-work-with-asset-inventory-information/alphabetical-order-icon.png" border="false":::. Arrange the order by selecting the :::image type="icon" source="media/how-to-work-with-asset-inventory-information/alphabetical-a-z-order-icon.png" border="false"::: and :::image type="icon" source="media/how-to-work-with-asset-inventory-information/alphabetical-z-a-order-icon.png" border="false"::: arrows.
+For more information, see [Defender for IoT sensor and management console APIs](references-work-with-defender-for-iot-apis.md).
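As a sketch of the retrieval flow, the following Python helper builds the URL and headers for a device inventory API call. The `/api/v1/devices` path, the `authorized` query parameter, and the `Authorization` header follow the Defender for IoT API reference, but confirm them against the API version on your appliance before relying on this:

```python
from urllib.parse import urlencode

def devices_request(console_host, api_token, authorized=None):
    """Build the URL and headers for a Defender for IoT device inventory
    API call. Endpoint path and parameter names are taken from the API
    reference; verify them against your appliance's API version."""
    url = f"https://{console_host}/api/v1/devices"
    if authorized is not None:
        # The API expects a lowercase boolean string, e.g. authorized=true.
        url += "?" + urlencode({"authorized": str(authorized).lower()})
    return url, {"Authorization": api_token}
```

Pass the returned URL and headers to any HTTP client, then filter the JSON response by site, zone, or sensor fields as needed.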
-4. To save a new filter, define the filter and select **Save As**.
+## Device inventory column reference
-5. To change the filter definitions, change the definitions and select **Save Changes**.
+The following table describes the device properties shown in the **Device inventory** page on an on-premises management console.
+| Name | Description |
+|--|--|
+| **Unacknowledged Alerts** | The number of unhandled alerts associated with this device. |
+| **Business Unit** | The business unit that contains this device. |
+| **Region** | The region that contains this device. |
+| **Site** | The site that contains this device. |
+| **Zone** | The zone that contains this device. |
+| **Appliance** | The Microsoft Defender for IoT sensor that protects this device. |
+| **Name** | The name of this device as Defender for IoT discovered it. |
+| **Type** | The type of device, such as PLC or HMI. |
+| **Vendor** | The name of the device's vendor, as defined in the MAC address. |
+| **Operating System** | The OS of the device. |
+| **Firmware** | The device's firmware. |
+| **IP Address** | The IP address of the device. |
+| **VLAN** | The VLAN of the device. |
+| **MAC Address** | The MAC address of the device. |
+| **Protocols** | The protocols that the device uses. |
+| **Is Authorized** | The authorization status of the device:<br />- **True**: The device has been authorized.<br />- **False**: The device hasn't been authorized. |
+| **Is Known as Scanner** | Whether this device performs scanning-like activities in the network. |
+| **Is Programming Device** | Whether the device is a programming device:<br />- **True**: The device performs programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations.<br />- **False**: The device isn't a programming device. |
+| **Groups** | Groups in which this device participates. |
+| **Last Activity** | The last activity that the device performed. |
+| **Discovered** | When this device was first seen in the network. |
+| **PLC mode (preview)** | The PLC operating mode includes the key state (physical) and the run state (logical). Possible **Key** states are Run, Program, Remote, Stop, Invalid, and Programming Disabled. Possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, and Offline. If both states are the same, only one state is presented. |
## Next steps
-[Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
+For more information, see:
+
+- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
+- [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md)
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
Title: View your device inventory from a sensor console
-description: The device inventory displays an extensive range of device attributes that a sensor detects.
Previously updated : 06/09/2022
+ Title: Manage your OT device inventory from a sensor console
+description: Learn how to view and manage OT devices (assets) from the Device inventory page on a sensor console.
Last updated : 07/12/2022
-# View your device inventory from a sensor console
+# Manage your OT device inventory from a sensor console
-The device inventory displays an extensive range of device attributes that your sensor detects. Use the inventory to gain insight and full visibility of the devices on your network.
+Use the **Device inventory** page from a sensor console to manage all OT and IT devices detected by that console. Identify newly detected devices, devices that might need troubleshooting, and more.
+For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device).
-Options are available to:
+> [!TIP]
+> Alternatively, view your device inventory from [the Azure portal](how-to-manage-device-inventory-for-organizations.md), or from an [on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md).
+>
+## View the device inventory
+This procedure describes how to view detected devices in the **Device inventory** page in an OT sensor console.
+1. Sign in to your OT sensor console, and then select **Device inventory**.
+ :::image type="content" source="media/how-to-work-with-asset-inventory-information/sensor-device-inventory.png" alt-text="Screenshot of the sensor console's Device inventory page." lightbox="media/how-to-work-with-asset-inventory-information/sensor-device-inventory.png":::
-For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
+ Use any of the following options to modify or filter the devices shown:
-## View device attributes in the inventory
+ |Option |Steps |
+ |||
+ | **Sort devices** | Select a column header to sort the devices by that column. |
+ |**Filter devices shown** | Select **Add filter** to filter the devices shown. <br><br>In the **Add filter** box, define your filter by column name, operator, and filter value. Select **Apply** to apply your filter.<br><br>You can apply multiple filters at the same time. Search results and filters aren't saved when you refresh the **Device inventory** page. |
+ | **Save a filter** | To save the current set of filters:<br><br>1. Select **+Save Filter**. <br>2. In the **Create New Device Inventory Filter** pane on the right, enter a name for your filter, and then select **Submit**. <br><br>Saved filters are also saved as **Device map** groups, and provide extra granularity when [viewing network devices](how-to-work-with-the-sensor-device-map.md) on the **Device map** page. |
+ | **Load a saved filter** | If you have predefined filters saved, load them by selecting the **show side pane** :::image type="icon" source="media/how-to-inventory-sensor/show-side-pane.png" border="false"::: button, and then select the filter you want to load. |
+ |**Modify columns shown** | Select **Edit Columns** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/edit-columns-icon.png" border="false":::. In the **Edit columns** pane:<br><br> - Select **Add Column** to add new columns to the grid.<br> - Drag and drop fields to change the column order.<br>- To remove a column, select the **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/trashcan-icon.png" border="false"::: icon to the right.<br>- To reset the columns to their default settings, select **Reset** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/reset-icon.png" border="false":::. <br><br>Select **Save** to save any changes made. |
-This section describes device details available from the inventory, how to work with inventory filters, and how to view contextual information about each device.
+1. Select a device row to view more details about that device. Initial details are shown in a pane on the right, where you can also select **View full details** to drill down more.
-**To view the device inventory:**
+ For example:
-In the console left pane, select **Device inventory**.
+ :::image type="content" source="media/how-to-inventory-sensor/sensor-inventory-view-details.png" alt-text="Screenshot of the Device inventory page on an OT sensor console." lightbox="media/how-to-inventory-sensor/sensor-inventory-view-details.png":::
-The following columns are available for each device.
+For more information, see [Device inventory column reference](#device-inventory-column-reference).
-| Name | Description |
-|--|--|
-| **Description** | A description of the device |
-| **Discovered** | When this device was first seen on the network. |
-| **Firmware version** | The device's firmware, if detected. |
-| **FQDN** | The device's FQDN value |
-| **FQDN lookup time** | The device's FQDN lookup time |
-| **Groups** | The groups that this device participates in. |
-| **IP Address** | The IP address of the device. |
-| **Is Authorized** | The authorization status defined by the user:<br />- **True**: The device has been authorized.<br />- **False**: The device hasn't been authorized. |
-| **Is Known as Scanner** | Defined as a network scanning device by the user. |
-| **Is Programming device** | Defined as an authorized programming device by the user. <br />- **True**: The device performs programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations. <br />- **False**: The device isn't a programming device. |
-| **Last Activity** | The last activity that the device performed. |
-| **MAC Address** | The MAC address of the device. |
-| **Name** | The name of the device as the sensor discovered it, or as entered by the user. |
-| **Operating System** | The OS of the device, if detected. |
-| **PLC mode** (preview) | The PLC operating mode includes the key state (physical) and the run state (logical). Possible **Key** states are Run, Program, Remote, Stop, Invalid, and Programming Disabled. Possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, and Offline. If both states are the same, only one state is presented. |
-| **Protocols** | The protocols that the device uses. |
-| **Type** | The type of device as determined by the sensor, or as entered by the user. |
-| **Unacknowledged Alerts** | The number of unacknowledged alerts associated with this device. |
-| **Vendor** | The name of the device's vendor, as defined in the MAC address. |
-| **VLAN** | The VLAN of the device. For more information, see [Define VLAN names](how-to-manage-the-on-premises-management-console.md#define-vlan-names). |
-
-**To hide and display columns:**
-
-1. Select **Edit Columns** and select a column you need or delete a column.
-1. Select **Save**.
-
-**To view additional details:**
-
-1. Select a device from the inventory and then select **View full details** in the dialog box that opens.
-1. Navigate to additional information such as firmware details, view contextual information such as alerts related to the device, or a timeline of events associated with the device.
-
-## Filter the inventory
-
-Customize the inventory to view devices important to you. An option is also available to save inventory filters for quick access to device information you need.
-
-**To create filters:**
-
-1. Select **Add filter** from the Device inventory page.
-1. Select a category from the **Column** field.
-1. Select an **Operator**.
- - **Equals**: The exact value according to which you want to filter the column. For example, if you filter the protocol column according to **Equals** and `value=ICMP`, the column will present devices that use the ICMP protocol only.
-
- - **Contains**: The value that's contained among other values in the column. For example, if you filter the protocol column according to **Contains** and `value=ICMP`, the column will present devices that use the ICMP protocol as a part of the list of protocols that the device uses.
-
-1. Select a filter value.
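
The two operator behaviors can be modeled as follows — an illustrative Python sketch of the matching semantics described above, not the sensor's actual implementation; the device records and column values are hypothetical:

```python
# Illustrative model of the inventory filter operators, assuming devices are
# simple records whose "Protocols" column holds a list of protocol names.

def filter_devices(devices, column, operator, value):
    """Return devices whose column matches value per the chosen operator."""
    matches = []
    for device in devices:
        cell = device.get(column, [])
        if operator == "Equals":
            # Exact match: the device uses this protocol only.
            if cell == [value]:
                matches.append(device)
        elif operator == "Contains":
            # Partial match: the value appears among the device's protocols.
            if value in cell:
                matches.append(device)
    return matches

devices = [
    {"name": "plc-01", "Protocols": ["ICMP"]},
    {"name": "hmi-02", "Protocols": ["ICMP", "Modbus"]},
]

equals_result = [d["name"] for d in filter_devices(devices, "Protocols", "Equals", "ICMP")]
contains_result = [d["name"] for d in filter_devices(devices, "Protocols", "Contains", "ICMP")]
```

With `value=ICMP`, **Equals** returns only `plc-01`, while **Contains** also returns `hmi-02`, matching the behavior described above.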
-
-### Save device inventory filters
-
-You can save a filter or a combination of filters that you need and view them in the device inventory when needed. Create broader filters based on a certain device type, or more narrow filters based on a specific protocol.
-
-The filters that you save are also saved as Device map groups. This feature provides an additional level of granularity in viewing network devices on the map.
-
-**To save and view filters:**
-
-1. Use the **Add filter** option to filter the table.
-1. Select **Save Filter**.
-1. Add a filter name in the dialog box that opens and select **Submit**.
-1. Select the double arrow >> on the left side of the page.
-The filters you create appear in the **Saved Views** pane.
-
- :::image type="content" source="media/how-to-inventory-sensor/save-views.png" alt-text="Screenshot that shows the saved Device inventory filter.":::
--
-### View filtered information as a map group
-
-You can display devices from saved filters in the Device map.
-
-**To view devices in the map:**
-
-1. After creating and saving an Inventory filter, navigate to the Device map.
-1. In the map page, open the Groups pane on the left.
-1. Scroll down to the **Asset Inventory Filters** group. The groups you saved from the Inventory appear.
--
-### Update device properties
-
-Certain device properties can be updated manually. Information manually entered will override information discovered by Defender for IoT.
-
-**To update properties:**
-
-1. Select a device from the inventory.
-1. Select **View full details**.
-1. Select **Edit properties.**
-1. Update any of the following:
-
- - Authorized status
- - Device name
- - Device type
- - OS
- - Purdue layer
- - Description
-1. Select **Save**.
-
-## Learn Windows registry details
-
-In addition to learning OT devices, you can discover Microsoft Windows workstations and servers. These devices are also displayed in the Device Inventory. After you learn devices, you can enrich the Device Inventory with detailed Windows information, such as:
-
-- Windows version installed
-
-- Applications installed
-
-- Patch-level information
-
-- Open ports
-
-- More robust information on OS versions
-
-Two options are available for retrieving this information:
-
-- Active polling with scheduled WMI scans. For more information, see [Configure Windows Endpoint monitoring](configure-windows-endpoint-monitoring.md).
-
-- Local surveying by distributing and running a script on the device. Working with local scripts bypasses the risks of running WMI polling on an endpoint. It's also useful for regulated networks with waterfalls and one-way elements.
+## Edit device details
-This section describes how to locally survey the Windows endpoint registry with a script. This information will be used for generating alerts, notifications, data mining reports, risk assessments, and attack vector reports.
+As you manage your network devices, you may need to update their details. For example, you might want to modify a device's security value as assets change, personalize the inventory to better identify specific devices, or correct a device that was classified incorrectly.
-You can survey the following Windows operating systems:
+**To edit device details**:
-- Windows XP
+1. Select one or more devices in the grid, and then select **View full details** in the pane on the right.
-- Windows 2000
+1. In the device details page, select **Edit Properties**.
-- Windows NT
+1. In the **Edit** pane on the right, modify the device fields as needed, and then select **Save** when you're done.
-- Windows 7
+Editable fields include:
-- Windows 10
+- Authorized status
+- Device name
+- Device type
+- OS
+- Purdue layer
+- Description
-- Windows 11
+For more information, see [Device inventory column reference](#device-inventory-column-reference).
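
As a sketch of the override behavior — manually entered values replace discovered ones, and only the fields listed above are editable — consider the following Python model. The record shape and helper are hypothetical, not a Defender for IoT API:

```python
# Illustrative model of manual device edits: a user-entered value overrides
# the sensor-discovered value, and only the documented fields are editable.
# Field names mirror the list above; the data shape is hypothetical.

EDITABLE_FIELDS = {
    "Authorized status", "Device name", "Device type",
    "OS", "Purdue layer", "Description",
}

def apply_manual_edits(device, edits):
    """Return a copy of `device` with allowed manual edits applied."""
    updated = dict(device)
    for field, value in edits.items():
        if field not in EDITABLE_FIELDS:
            raise ValueError(f"{field} can't be edited manually")
        updated[field] = value  # manual entry overrides the discovered value
    return updated

device = {"Device name": "plc-01", "OS": "Unknown", "Vendor": "Contoso"}
device = apply_manual_edits(device, {"OS": "VxWorks"})
```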
-- Windows Server 2003/2008/2012/2016/2019
+## Export the device inventory to CSV
-### Before you begin
+Export your device inventory to a CSV file to manage or share data outside of the OT sensor.
-To work with the script, you need to meet the following requirements:
+To export device inventory data, on the **Device inventory** page, select **Export** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/export-button.png" border="false":::.
-- Administrator permissions are required to run the script on the device.
+The device inventory is exported with any filters currently applied, and you can save the file locally.
-- The sensor should have already learned the Windows device. This means that if the device already exists, the script will retrieve its information.
+## Delete a device
-- A sensor is monitoring the network that the Windows PC is connected to.
+If you have devices no longer in use, delete them from the device inventory so that they're no longer connected to Defender for IoT.
-### Acquire the script
+Devices might be inactive because of misconfigured SPAN ports, changes in network coverage, or because the device was unplugged from the network.
-To receive the script, [contact customer support](mailto:support.microsoft.com).
+Delete inactive devices to maintain a correct representation of current network activity, to better understand your committed devices when managing your Defender for IoT plans, and to reduce clutter on your screen.
-### Deploy the script
+Devices you delete from the Inventory are removed from the map and won't be calculated when generating Defender for IoT reports, for example Data Mining, Risk Assessment, and Attack Vector reports.
-You can deploy the script once or schedule ongoing queries using standard automated deployment methods and tools.
+> [!NOTE]
+> Devices must be inactive for 7 days or more in order for you to be able to delete them.
+>
-### About the script
+**To delete inactive devices**:
-- The script is run as a utility and not an installed program. Running the script doesn't affect the endpoint.
+1. On the **Device inventory** page, filter the grid by the **Last Activity** field. In the **Filter** field, select one of the following time periods:
-- The files that the script generates remain on the local drive until you delete them.
+ - 7 days or more
+ - 14 days or more
+ - 30 days or more
+ - 90 days or more
-- The files that the script generates are located next to each other. Don't separate them.
+1. Select **Delete Inactive Devices**. In the prompt displayed, enter the reason you're deleting the devices, and then select **Delete**.
-- If you run the script again in the same location, these files are overwritten.
+ All devices detected within the range of the selected filter are deleted. If there are a large number of devices to delete, the process may take a few minutes.
-**To run the script:**
+## Device inventory column reference
-1. Copy the script to a local drive and unzip it. The following files appear:
+The following table describes the device properties shown in the **Device inventory** page on a sensor console.
- - start.bat
-
- - settings.json
-
- - data.bin
-
- - run.bat
-
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/files-in-file-explorer.png" alt-text="View of the files in File Explorer.":::
-
-2. Run the `run.bat` file.
-
-3. After the registry is probed, the CX-snapshot file appears with the registry information.
-
-4. The file name indicates the system name and the date and time of the snapshot. An example file name is `CX-snapshot_SystemName_Month_Year_Time`.
-
-### Import device details
-
-Information learned on each endpoint should be imported to the sensor.
-
-Files generated from the queries can be placed in one folder that you can access from the sensors. Use standard, automated methods and tools to move the files from each Windows endpoint to the location where you'll be importing them to the sensor.
-
-Don't update file names.
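
The collection step can be automated with any standard tooling. As a minimal sketch, the following Python gathers the generated snapshot files from several endpoint folders into one import folder without renaming them; the paths and the `CX-snap` name prefix are assumptions based on the example file name above:

```python
# A minimal sketch of gathering generated snapshot files into one import
# folder, preserving the original file names as required above. The folder
# paths and the "CX-snap" prefix are assumptions -- adapt to your environment.

import shutil
from pathlib import Path

def collect_snapshots(endpoint_dirs, import_dir):
    """Copy snapshot files into import_dir without renaming them."""
    import_dir = Path(import_dir)
    import_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for folder in endpoint_dirs:
        for snapshot in Path(folder).glob("CX-snap*"):
            # Keep the file name unchanged; the import step relies on it.
            shutil.copy2(snapshot, import_dir / snapshot.name)
            copied.append(snapshot.name)
    return copied
```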
-
-**To import:**
-
-1. Select **System Settings** > **Import Settings** > **Windows Information**.
-
-2. Select **Import File**, and then select all the files (Ctrl+A).
-
-3. Select **Close**. The device registry information is imported. If there's a problem uploading one of the files, you'll be informed which file upload failed.
-
- :::image type="content" source="media/how-to-work-with-asset-inventory-information/add-new-file.png" alt-text="Upload of added files was successful.":::
-
-## View and delete inactive devices from the inventory
-
-You may want to view devices in your network that have been inactive and delete them.
-Devices may become inactive because of:
-- Misconfigured SPAN ports
-- Changes in network coverage
-- Unplugging from the network
-
-Deleting inactive devices helps:
-
-- Defender for IoT creates a more accurate representation of current network activity
-- Better evaluate committed devices when managing subscriptions
-- Reduce clutter on your screen
-
-For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)
-
-### View inactive devices
-
-You can filter the inventory to display devices that are inactive:
-- for seven days or more
-- for 14 days or more
-- 30 days or more
-- 90 days or more
-
-**To filter:**
-
-1. Select **Add filter**.
-1. Select **Last Activity** in the column field.
-1. Choose the time period in the **Filter** field.
-
- :::image type="content" source="media/how-to-inventory-sensor/save-filter.png" alt-text="Screenshot that shows the last activity filter in Inventory.":::
-
-### Delete inactive devices
-
-Devices you delete from the Inventory are removed from the map and won't be calculated when generating Defender for IoT reports, for example, Data Mining, Risk Assessment, and Attack Vector reports.
-
-You'll be prompted to record a reason for deleting devices. This information, as well as the date/time and number of devices deleted, appears in the Event timeline.
-
-**To delete inactive devices:**
-
-1. Select the **Last Seen** filter icon in the Inventory.
-1. Select a filter option.
-1. Select **Apply**.
-1. Select **Delete Inactive Devices**.
-1. In the confirmation dialog box that opens, enter the reason for the deletion and select **Delete**. All devices detected within the range of the filter will be deleted. If you delete a large number of devices, the delete process may take a few minutes.
-
-## Export device inventory information
-
-You can export device inventory information to a .csv file.
-
-**To export:**
--- Select **Export file** from the Device Inventory page. The report is generated and downloaded.
+| Name | Description |
+|--|--|
+| **Description** | A description of the device. |
+| **Discovered** | When this device was first seen in the network. |
+| **Firmware version** | The device's firmware, if detected. |
+| **FQDN** | The device's FQDN value. |
+| **FQDN lookup time** | The device's FQDN lookup time. |
+| **Groups** | The groups that this device participates in. |
+| **IP Address** | The IP address of the device. |
+| **Is Authorized** | The authorization status defined by the user:<br />- **True**: The device has been authorized.<br />- **False**: The device hasn't been authorized. |
+| **Is Known as Scanner** | Defined as a network scanning device by the user. |
+| **Is Programming device** | Defined as an authorized programming device by the user. <br />- **True**: The device performs programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations. <br />- **False**: The device isn't a programming device. |
+| **Last Activity** | The last activity that the device performed. |
+| **MAC Address** | The MAC address of the device. |
+| **Name** | The name of the device as the sensor discovered it, or as entered by the user. |
+| **Operating System** | The OS of the device, if detected. |
+| **PLC mode** (preview) | The PLC operating mode includes the Key state (physical) and the Run state (logical). Possible **Key** states are Run, Program, Remote, Stop, Invalid, and Programming Disabled. Possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, and Offline. If both states are the same, only one state is presented. |
+| **Protocols** | The protocols that the device uses. |
+| **Type** | The type of device as determined by the sensor, or as entered by the user. |
+| **Unacknowledged Alerts** | The number of unacknowledged alerts associated with this device. |
+| **Vendor** | The name of the device's vendor, as defined in the MAC address. |
+| **VLAN** | The VLAN of the device. For more information, see [Define VLAN names](how-to-manage-the-on-premises-management-console.md#define-vlan-names). |
## Next steps

For more information, see:

-- [Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
-
-- [Manage your IoT devices with the device inventory](../device-builders/how-to-manage-device-inventory-on-the-cloud.md#manage-your-iot-devices-with-the-device-inventory)
+- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
+- [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md)
defender-for-iot How To Manage Device Inventory For Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md
Title: View your device inventory from the Azure portal
-description: Learn how to manage your IoT devices with the device inventory for organizations.
Previously updated : 03/09/2022
+ Title: Manage your device inventory from the Azure portal
+description: Learn how to view and manage OT and IoT devices (assets) from the Device inventory page in the Azure portal.
Last updated : 06/27/2022
-# View your device inventory from the Azure portal
+# Manage your device inventory from the Azure portal
+
+Use the **Device inventory** page in the Azure portal to manage all network devices detected by cloud-connected sensors, including OT, IoT, and IT. Identify new devices detected, devices that might need troubleshooting, and more.
+
+For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device).
> [!NOTE]
> The **Device inventory** page in Defender for IoT on the Azure portal is in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-The device inventory can be used to view device systems, and network information. The search, filter, edit columns, and export tools can be used to manage this information.
--
-Some of the benefits of the device inventory include:
-
-- Identify all IT, IoT, and OT devices from different inputs. For example, to identify new devices detected in the last day or which devices aren't communicating and might require troubleshooting.
-
-- Group, and filter devices by site, type, or vendor.
-
-- Gain visibility into each device, and investigate the different threats, and alerts for each one.
-
-- Export the entire device inventory to a CSV file for your reports.
+> Alternatively, view the device inventory from a [specific sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md), or from an [on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md).
## View the device inventory
-1. Open the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Defender for IoT** > **Device inventory**.
-
- :::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/device-inventory.png" alt-text="Select device inventory from the left side menu under Defender for IoT.":::
--
-## Customize the device inventory table
+This procedure describes how to view detected devices in the **Device inventory** page in the Azure portal.
-In the device inventory table, you can add or remove columns. You can also change the column order by dragging and dropping a field.
+1. In Defender for IoT in the Azure portal, select **Device inventory**.
-**To customize the device inventory table**:
+ :::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/device-inventory.png" alt-text="Screenshot of the Device inventory page in the Azure portal." lightbox="media/how-to-manage-device-inventory-on-the-cloud/device-inventory.png":::
-1. Select the :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/edit-columns-icon.png" border="false"::: button.
+ Use any of the following options to modify or filter the devices shown:
-1. In the Edit columns tab, select the drop-down menu to change the value of a column.
+ |Option |Steps |
+ |||
+ | **Sort devices** | Select a column header to sort the devices by that column. Select it again to change the sort direction. |
+ |**Filter devices shown** | Either use the **Search** box to search for specific device details, or select **Add filter** to filter the devices shown. <br><br>In the **Add filter** box, define your filter by column name, operator, and value. Select **Apply** to apply your filter.<br><br>You can apply multiple filters at the same time. Search results and filters aren't saved when you refresh the **Device inventory** page.|
+ |**Modify columns shown** | Select **Edit columns** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/edit-columns-icon.png" border="false":::. In the **Edit columns** pane:<br><br> - Select the **+ Add Column** button to add new columns to the grid.<br> - Drag and drop fields to change the columns order.<br>- To remove a column, select the **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/trashcan-icon.png" border="false"::: icon to the right.<br>- To reset the columns to their default settings, select **Reset** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/reset-icon.png" border="false":::. <br><br>Select **Save** to save any changes made. |
+ | **Group devices** | From the **Group by** menu above the grid, select either **Type** or **Class** to group the devices shown. Inside each group, devices retain the same column sorting. To remove the grouping, select **No grouping**. |
- :::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/device-drop-down-menu.png" alt-text="Select the drop-down menu to change the value of a given column.":::
+1. Select a device row to view more details about that device. Initial details are shown in a pane on the right, where you can also select **View full details** to drill down more.
-1. Add a column by selecting the :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/add-column-icon.png" border="false"::: button.
+ For example:
-1. Reorder the columns by dragging a column parameter to a new location.
+ :::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/device-information-window.png" alt-text="Screenshot of a device details pane and the View full details button in the Azure portal." lightbox="media/how-to-manage-device-inventory-on-the-cloud/device-information-window.png":::
-1. Delete a column by selecting the :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/trashcan-icon.png" border="false"::: button.
+For more information, see [Device inventory column reference](#device-inventory-column-reference).
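
The **Group by** option above can be pictured as bucketing inventory rows by a column value. The following illustrative Python sketch, with hypothetical rows, models the grouping behavior — it isn't the portal's implementation:

```python
# Illustrative model of grouping inventory rows by a column such as "Type".
# Row shapes and values are hypothetical examples.

from collections import defaultdict

def group_devices(rows, column):
    """Bucket rows by the value in `column`; unknowns go to 'Unknown'."""
    groups = defaultdict(list)
    for row in rows:
        groups[row.get(column, "Unknown")].append(row)
    return dict(groups)

rows = [
    {"Name": "plc-01", "Type": "Industrial"},
    {"Name": "cam-07", "Type": "Communication"},
    {"Name": "plc-02", "Type": "Industrial"},
]
by_type = group_devices(rows, "Type")
```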
- :::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/delete-a-column.png" alt-text="Select the trash can icon to delete a column.":::
+### Identify devices that aren't connecting successfully
-1. Select **Save** to save any changes made.
+If you suspect that certain devices aren't actively communicating with Azure, we recommend that you verify whether those devices have communicated with Azure recently at all. For example:
-If you want to reset the device inventory to the default settings, in the Edit columns window, select the :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/reset-icon.png" border="false"::: button.
+1. In the **Device inventory** page, make sure that the **Last activity** column is shown.
-## Filter the device inventory
+ Select **Edit columns** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/edit-columns-icon.png" border="false"::: > **Add column** > **Last Activity** > **Save**.
-You can search, and filter the device inventory to define what information the table displays.
+1. Select the **Last activity** column to sort the grid by that column.
-**To filter the device inventory**:
+1. Filter the grid to show active devices during a specific time period:
-1. Select **Add filter**
+ 1. Select **Add filter**.
+ 1. In the **Column** field, select **Last activity**.
+ 1. Select a predefined time range, or define a custom range to filter for.
+ 1. Select **Apply**.
- :::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/add-filter.png" alt-text="Select the add filter button to specify what you want to appear in the device inventory.":::
-
-1. In the Add filter window, select the column drop-down menu to choose which column to filter.
-
- :::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/add-filter-window.png" alt-text="Select which column you want to filter in the device inventory.":::
-
-1. Enter a value in the filter field to filter by.
-
-1. Select the **Apply button**.
-
-Multiple filters can be applied at one time. The filters aren't saved when you leave the Device inventory page.
-
-## View device information
-
-To view a specific device's information, select the device, and the device information window appears.
-
+1. Search for the devices you're verifying in the filtered list of devices.

## Edit device details
## Edit device details
-As you manage your devices, you may need to update their details, such as to modify security value as assets change, to personalize an inventory so that you can better identify specific devices, or if a device was classified incorrectly.
-
-You can edit device details for each device, one at a time, or select multiple devices to edit details together.
-
-**To edit details for a single device**:
-
-1. In the **Device inventory** page, select the device you want to edit, and then select **Edit** :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/edit-device-details.png" border="false"::: in the toolbar at the top of the page.
-
- The **Edit** pane opens at the right.
+As you manage your network devices, you may need to update their details. For example, you might want to modify a device's security value as assets change, personalize the inventory to better identify specific devices, or correct a device that was classified incorrectly.
-1. Modify any of the field values as needed. For more information, see [Reference of editable fields](#reference-of-editable-fields).
+**To edit device details**:
-1. Select **Save** when you're finished editing the device details.
+1. Select one or more devices in the grid, and then select **Edit** :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/edit-device-details.png" border="false":::.
-**To edit details for multiple devices simultaneously**:
+1. If you've selected multiple devices, select **Add field type** and add the fields you want to edit, for all selected devices.
-1. In the **Device inventory** page, select the devices you want to edit, and then select **Edit** :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/edit-device-details.png" border="false"::: in the toolbar at the top of the page.
-
- The **Edit** pane opens at the right.
-
-1. Select **Add field type**, and then select one or more fields to edit.
-
-1. Update your field definitions as needed, and then select **Save**. For more information, see [Reference of editable fields](#reference-of-editable-fields).
+1. Modify the device fields as needed, and then select **Save** when you're done.
Your updates are saved for all selected devices.
-### Reference of editable fields
+For more information, see [Device inventory column reference](#device-inventory-column-reference).
-The following device fields are supported for editing in the Device inventory page:
+### Reference of editable fields
-**General information**:
+The following device fields are supported for editing in the **Device inventory** page:
|Name |Description |
|||
+| **General information** | |
|**Name** | Mandatory. Supported for editing only when editing a single device. |
|**Authorized Device** |Toggle on or off as needed as device security changes. |
|**Description** | Enter a meaningful description for the device. |
|**Hardware Vendor** | Select the device's hardware vendor from the dropdown menu. |
|**Firmware** | Define the device's firmware name and version. You can either select the **delete** button to delete an existing firmware definition, or select **+ Add** to add a new one. |
|**Tags** | Enter meaningful tags for the device. Select the **delete** button to delete an existing tag, or select **+ Add** to add a new one. |
-**Settings**:
-
-|Name |Description |
-|||
+| **Settings** | |
|**Importance** | Select **Low**, **Normal**, or **High** to modify the device's importance. |
|**Programming device** | Toggle the **Programming Device** option on or off as needed for your device. |
-## Export the device inventory to CSV
-
-You can export a maximum of 30,000 devices at a time from your device inventory to a CSV file. If you have filters applied to the table, only the devices shown are exported to the CSV file.
+For more information, see [Device inventory column reference](#device-inventory-column-reference).
-Select the :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/export-button.png" border="false"::: button to export your current device inventory to a CSV file.
-
-## How to identify devices that haven't recently communicated with the Azure cloud
-
-If you are under the impression that certain devices aren't actively communicating, there's a way to check, and see which devices haven't communicated in a specified time period.
-
-**To identify all devices that have not communicated recently**:
-
-1. Open the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Defender for IoT** > **Device inventory**.
-
-1. Select **Edit columns** > **Add column** > **Last Activity** > **Save**.
+## Export the device inventory to CSV
-1. On the main Device inventory page, select **Last activity** to sort the page by last activity.
+Export your device inventory to a CSV file to manage or share data outside of the Azure portal. You can export a maximum of 30,000 devices at a time.
- :::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/last-activity.png" alt-text="Screenshot of the device inventory organized by last activity." lightbox="media/how-to-manage-device-inventory-on-the-cloud/last-activity.png":::
+**To export device inventory data**:
-1. Select **Add filter** to add a filter on the last activity column.
+On the **Device inventory** page, select **Export** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/export-button.png" border="false":::.
- :::image type="content" source="media/how-to-manage-device-inventory-on-the-cloud/last-activity-filter.png" alt-text="Screenshot of the add filter screen where you can select the time period to see the last activity.":::
+The device inventory is exported with any filters currently applied, and you can save the file locally.
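
Once exported, the CSV can be post-processed with ordinary tools. For example, the following Python sketch lists devices with no activity in the last 30 days. The `Name` and `Last activity` column headers and the `MM/DD/YYYY HH:MM:SS AM/PM` date format are assumptions based on the **Last activity** column described in this article — verify them against your own export:

```python
# A sketch of post-processing the exported inventory CSV: list devices whose
# last activity is older than a cutoff. Column headers and the date format
# are assumptions -- check them against an actual export.

import csv
from datetime import datetime, timedelta

def inactive_devices(csv_path, days=30, now=None):
    """Return names of devices with no activity in the last `days` days."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    stale = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            last = datetime.strptime(row["Last activity"],
                                     "%m/%d/%Y %I:%M:%S %p")
            if last < cutoff:
                stale.append(row["Name"])
    return stale
```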
-1. Enter a time period, or a custom date range, and select **Apply**.
## Delete a device

If you have devices no longer in use, delete them from the device inventory so that they're no longer connected to Defender for IoT.
+Devices might be inactive because of misconfigured SPAN ports, changes in network coverage, or because the device was unplugged from the network.
+
+Delete inactive devices to maintain a correct representation of current network activity, to better understand your committed devices when managing your Defender for IoT plans, and to reduce clutter on your screen.
**To delete a device**:

In the **Device inventory** page, select the device you want to delete, and then select **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/delete-device.png" border="false"::: in the toolbar at the top of the page.
-If your device has had activity in the past 14 days, it isn't considered inactive, and the **Delete** button will be grayed-out.
At the prompt, select **Yes** to confirm that you want to delete the device from Defender for IoT.

## Device inventory column reference
-The following table describes the device properties shown in the device inventory table.
+The following table describes the device properties shown in the **Device inventory** page on the Azure portal.
| Parameter | Description |
|--|--|
| **Application** | The application that exists on the device. |
-| **Class** | The class of the device. <br>Default: `IoT`|
+|**Authorized Device** |Editable. Determines whether or not the device is *authorized*. This value may change as device security changes. |
+|**Business Function** | Editable. Describes the device's business function. |
+| **Class** | Editable. The class of the device. <br>Default: `IoT`|
| **Data source** | The source of the data, such as a micro agent, OT sensor, or Microsoft Defender for Endpoint. <br>Default: `MicroAgent`|
-| **Description** | The description of the device. |
-| **Firmware vendor** | The vendor of the device's firmware. |
-| **Firmware version** | The version of the firmware. |
+| **Description** | Editable. The description of the device. |
+| **Firmware vendor** | Editable. The vendor of the device's firmware. |
+| **Firmware version** |Editable. The version of the firmware. |
| **First seen** | The date and time the device was first seen. Presented in format MM/DD/YYYY HH:MM:SS AM/PM. |
-| **Importance** | The level of importance of the device. |
+|**Hardware Model** | Editable. Determines the device's hardware model. |
+|**Hardware Vendor** |Editable. Determines the device's hardware vendor. |
+| **Importance** | Editable. The level of importance of the device. |
| **IPv4 Address** | The IPv4 address of the device. |
| **IPv6 Address** | The IPv6 address of the device. |
| **Last activity** | The date and time the device last sent an event to the cloud. Presented in format MM/DD/YYYY HH:MM:SS AM/PM. |
| **Last update time** | The date and time the device last sent a system information event to the cloud. Presented in format MM/DD/YYYY HH:MM:SS AM/PM. |
-| **Location** | The physical location of the device. |
+| **Location** | Editable. The physical location of the device. |
| **MAC Address** | The MAC address of the device. |
| **Model** | The device's model. |
-| **Name** | The name of the device as the sensor discovered it, or as entered by the user. |
-| **OS architecture** | The architecture of the operating system. |
-| **OS distribution** | The distribution of the operating system, such as Android, Linux, and Haiku. |
-| **OS platform** | The OS of the device, if detected. |
-| **OS version** | The version of the operating system, such as Windows 10 and Ubuntu 20.04.1. |
+| **Name** | Mandatory, and editable. The name of the device as the sensor discovered it, or as entered by the user. |
+| **OS architecture** | Editable. The architecture of the operating system. |
+| **OS distribution** | Editable. The distribution of the operating system, such as Android, Linux, and Haiku. |
+| **OS platform** | Editable. The OS of the device, if detected. |
+| **OS version** | Editable. The version of the operating system, such as Windows 10 and Ubuntu 20.04.1. |
| **PLC mode** | The PLC operating mode that includes the Key state (physical, or logical), and the Run state (logical). Possible Key states include, `Run`, `Program`, `Remote`, `Stop`, `Invalid`, and `Programming Disabled`. Possible Run states are `Run`, `Program`, `Stop`, `Paused`, `Exception`, `Halted`, `Trapped`, `Idle`, or `Offline`. If both states are the same, then only one state is presented. |
| **PLC secured** | Determines if the PLC mode is in a secure state. A possible secure state is `Run`. A possible unsecured state can be either `Program`, or `Remote`. |
+|**Programming device** | Editable. Determines whether the device is a *Programming Device*. |
| **Programming time** | The last time the device was programmed. |
| **Protocols** | The protocols that the device uses. |
-| **Purdue level** | The Purdue level in which the device exists. |
+| **Purdue level** | Editable. The Purdue level in which the device exists. |
| **Scanner** | Whether the device performs scanning-like activities in the network. |
| **Sensor** | The sensor the device is connected to. |
| **Site** | The site that contains this device. <br><br>All Enterprise IoT sensors are automatically added to the **Enterprise network** site.|
| **Slots** | The number of slots the device has. |
-| **Subtype** | The subtype of the device, such as speaker and smart tv. <br>**Default**: `Managed Device` |
-| **Tags** | Tagging data for each device. |
-| **Type** | The type of device, such as communication, and industrial. <br>**Default**: `Miscellaneous` |
+| **Subtype** | Editable. The subtype of the device, such as speaker and smart tv. <br>**Default**: `Managed Device` |
+| **Tags** | Editable. Tagging data for each device. |
+| **Type** | Editable. The type of device, such as communication, and industrial. <br>**Default**: `Miscellaneous` |
| **Underlying devices** | Any relevant underlying devices for the device |
| **Underlying device region** | The region for an underlying device |
| **Vendor** | The name of the device's vendor, as defined in the MAC address. |
| **VLAN** | The VLAN of the device. |
| **Zone** | The zone that contains this device. |
-
## Next steps
-For more information, see [Welcome to Microsoft Defender for IoT for device builders](overview.md).
+For more information, see:
+
+- [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md)
+- [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md)
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md
Perform the update in the following order. Make sure each step is complete befor
1. Find the domain associated with the primary appliance and copy it to your clipboard.
- 1. Remove the primary <!--original text said secondary, I think it's a mistake--> domain from the list of trusted hosts. Run:
+ 1. Remove the primary domain from the list of trusted hosts. Run:
   ```bash
   sudo cyberx-management-trusted-hosts-remove -d [Primary domain]
   ```
defender-for-iot How To Set Up Snmp Mib Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-snmp-mib-monitoring.md
Monitoring sensor health is possible through the Simple Network Management Proto
Supported SNMP versions are SNMP version 2 and version 3. The SNMP protocol utilizes UDP as its transport protocol with port 161.
+## Download the SNMP MIB file
+
+Download the SNMP MIB file from Defender for IoT in the Azure portal. Select **Sites and sensors > More actions > Download SNMP MIB file**.
+
## Sensor OIDs

| Management console and sensor | OID | Format | Description |
defender-for-iot How To View Information Per Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-information-per-zone.md
To view the device inventory associated with a specific zone:
:::image type="content" source="media/how-to-work-with-asset-inventory-information/default-business-unit.png" alt-text="The device inventory screen will appear.":::
-For more information, see [Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md).
+For more information, see:
+
+- [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md)
+- [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md)
+- [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
## View additional zone information
defender-for-iot How To Work With Alerts On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console.md
Export alert information to a .csv file. You can export information of all alert
Only trigger the forwarding rule if the traffic detected was running over specific protocols. Select the required protocols from the drop-down list or choose them all.

1. Select which engines the rule should apply to.
-
-
+
Select the required engines, or choose them all. Alerts from selected engines will be sent.
-1. Select the checkbox if you want the forwarding to rule to report system notifications.
-
-1. Select the checkbox if you want the forwarding to rule to report alert notifications.
+1. Select which notifications you want to forward:
+
+ - **Report system notifications:** disconnected sensors, remote backup failures.
+
+ - **Report alert notifications:** date and time of alert, alert title, alert severity, source and destination name and IP, suspicious traffic and engine that detected the event.
1. Select **Add** to add an action to apply. Fill in any parameters needed for the selected action.
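The two notification checkboxes above act as independent filters on what a forwarding rule sends on. A minimal sketch of that decision logic (the event names and helper function are illustrative assumptions for demonstration, not part of the Defender for IoT API):

```python
# Illustrative sketch of a forwarding rule's notification filter.
# Event names and this helper are assumptions, not the Defender for IoT API.
SYSTEM_EVENTS = {"sensor_disconnected", "remote_backup_failure"}

def should_forward(event_kind: str, report_system: bool, report_alerts: bool) -> bool:
    """Return True if a rule with the given checkboxes forwards this event."""
    if event_kind in SYSTEM_EVENTS:
        return report_system
    return report_alerts

# A rule with only "Report alert notifications" checked drops system events:
print(should_forward("sensor_disconnected", report_system=False, report_alerts=True))
print(should_forward("high_severity_alert", report_system=False, report_alerts=True))
```

With both checkboxes cleared, the rule forwards nothing; with both set, it forwards every notification kind.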
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
The following predefined groups are available:
For information about creating custom groups, see [Define custom groups](#define-custom-groups).
+### View filtered information as a map group
+
+You can display devices from saved filters in the Device map. For more information, see [View the device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md#view-the-device-inventory).
+
+**To view devices in the map:**
+
+1. After creating and saving an Inventory filter, navigate to the Device map.
+1. In the map page, open the Groups pane on the left.
+1. Scroll down to the **Asset Inventory Filters** group. The groups you saved from the Inventory appear.
+
### Map display tools

| Icon | Description |
defender-for-iot References Work With Defender For Iot Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-apis.md
The following APIs can be used with the ServiceNow integration via ServiceNow's
## Next steps

-- [Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
-
-- [Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
+- [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md)
+- [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
First, you'll use the AdtSampleApp solution from the sample project to build the
:::image type="content" source="media/tutorial-end-to-end/building-scenario-a.png" alt-text="Diagram of an excerpt from the full building scenario diagram highlighting the Azure Digital Twins instance section.":::
-Open a local **console window** and navigate into the *digital-twins-samples-main\AdtE2ESample\SampleClientApp* folder. Run the *SampleClientApp* project with this dotnet command:
+Open a local **console window** and navigate into the *digital-twins-samples-main\AdtSampleApp\SampleClientApp* folder. Run the *SampleClientApp* project with this dotnet command:
```cmd/sh
dotnet run
```
The next step is setting up an [Azure Functions app](../azure-functions/function
In this section, you'll publish the pre-written function app, and ensure the function app can access Azure Digital Twins by assigning it an Azure Active Directory (Azure AD) identity.
-The function app is part of the sample project you downloaded, located in the *digital-twins-samples-main\AdtE2ESample\SampleFunctionsApp* folder.
+The function app is part of the sample project you downloaded, located in the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp* folder.
### Publish the app
To publish the function app to Azure, you'll need to create a storage account, t
1. Next, you'll zip up the functions and publish them to your new Azure function app.
- 1. Open a console window on your machine, and navigate into the *digital-twins-samples-main\AdtE2ESample\SampleFunctionsApp* folder inside your downloaded sample project.
+ 1. Open a console window on your machine, and navigate into the *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp* folder inside your downloaded sample project.
1. In the console, run the following command to publish the project locally:
hdinsight Apache Hbase Advisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-advisor.md
Title: Optimize for cluster advisor recommendations description: Optimize Apache HBase for cluster advisor recommendations in Azure HDInsight.--++ Previously updated : 01/03/2021 Last updated : 07/20/2022 #Customer intent: The azure advisories help to tune the cluster/query. This doc gives a much deeper understanding of the various advisories including the recommended configuration tunings. # Apache HBase advisories in Azure HDInsight
hdinsight Apache Kafka Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-bicep.md
Title: 'Quickstart: Apache Kafka using Bicep - HDInsight' description: In this quickstart, you learn how to create an Apache Kafka cluster on Azure HDInsight using Bicep. You also learn about Kafka topics, subscribers, and consumers.--++ Previously updated : 05/02/2022 Last updated : 07/20/2022 #Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
hdinsight Kafka Troubleshoot Full Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/kafka-troubleshoot-full-disk.md
Title: Broker fails to start due to a full disk in Azure HDInsight
description: Troubleshooting steps for Apache Kafka broker process that can't start due to disk full error. -- Previously updated : 10/11/2021++ Last updated : 07/20/2022 # Scenario: Brokers are unhealthy or can't restart due to disk space full issue
healthcare-apis Using Rest Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-rest-client.md
The line starting with `@name` contains a variable that captures the HTTP respon
```

### Get access token
-@name getAADToken
+# @name getAADToken
POST https://login.microsoftonline.com/{{tenantid}}/oauth2/token
Content-Type: application/x-www-form-urlencoded
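The same token request can be built and issued from any HTTP client. The sketch below shows one way to assemble it in Python, assuming the client credentials grant; the helper name and placeholder values are illustrative, and the body fields are the standard Azure AD v1 token-endpoint parameters rather than values copied from this file:

```python
from urllib.parse import urlencode

def build_token_request(tenant_id: str, client_id: str, client_secret: str, resource: str):
    """Build the token URL and x-www-form-urlencoded body (illustrative helper)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    # Standard fields for the Azure AD v1 endpoint's client credentials grant.
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,
    })
    return url, body

url, body = build_token_request("my-tenant-id", "my-client-id", "my-secret",
                                "https://example.azurehealthcareapis.com")
print(url)
```

POSTing `body` to `url` with the `application/x-www-form-urlencoded` content type returns a JSON payload whose `access_token` field corresponds to the `{{bearerToken}}` variable captured above.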
healthcare-apis How To Create Mappings Copies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-create-mappings-copies.md
Title: Create copies of MedTech service mappings templates - Azure Health Data Services
-description: This article helps users create copies of their MedTech service Device and FHIR destination mappings templates.
+ Title: Create copies of the MedTech service device and FHIR destination mappings - Azure Health Data Services
+description: This article helps users create copies of their MedTech service device and FHIR destination mappings.
Previously updated : 03/21/2022 Last updated : 07/19/2022
-# How to create copies of Device and FHIR destination mappings
+# How to create copies of device and FHIR destination mappings
-This article provides steps for creating copies of your MedTech service's Device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings that can be used outside of the Azure portal. These copies can be used for editing, troubleshooting, and archiving.
+This article provides steps for creating copies of your MedTech service's device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings that can be used outside of Azure. These copies can be used for editing, troubleshooting, and archiving.
> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
> [!NOTE]
-> When opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include [copies of your Device and FHIR destination mappings](./how-to-create-mappings-copies.md) to assist in the troubleshooting process.
+> When opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include copies of your device and FHIR destination mappings to assist in the troubleshooting process.
## Copy creation process

1. Select **"MedTech service"** on the left side of the Azure Health Data Services workspace.
- :::image type="content" source="media/iot-troubleshoot/iot-connector-blade.png" alt-text="Select MedTech service." lightbox="media/iot-troubleshoot/iot-connector-blade.png":::
+ :::image type="content" source="media/iot-troubleshoot/iot-connector-blade.png" alt-text="Screenshot of select MedTech service." lightbox="media/iot-troubleshoot/iot-connector-blade.png":::
-2. Select the name of the **MedTech service** that you'll be copying the Device and FHIR destination mappings from.
+2. Select the name of the **MedTech service** that you'll be copying the device and FHIR destination mappings from.
- :::image type="content" source="media/iot-troubleshoot/map-files-select-connector-with-box.png" alt-text="Select the MedTech service that you will be making mappings copies from" lightbox="media/iot-troubleshoot/map-files-select-connector-with-box.png":::
+   :::image type="content" source="media/iot-troubleshoot/map-files-select-connector-with-box.png" alt-text="Screenshot of select the MedTech service that you will be making mappings copies from." lightbox="media/iot-troubleshoot/map-files-select-connector-with-box.png":::
> [!NOTE]
- > This process may also be used for copying and saving the contents of the **"Destination"** FHIR destination mappings.
+ > This process may also be used for copying and saving the contents of the **"Destination"** FHIR destination mapping.
3. Select the contents of the JSON and do a copy operation (for example: Press **Ctrl + C**).
- :::image type="content" source="media/iot-troubleshoot/map-files-select-device-json-with-box.png" alt-text="Select and copy contents of mappings" lightbox="media/iot-troubleshoot/map-files-select-device-json-with-box.png":::
+ :::image type="content" source="media/iot-troubleshoot/map-files-select-device-json-with-box.png" alt-text="Screenshot of select and copy contents of the mapping." lightbox="media/iot-troubleshoot/map-files-select-device-json-with-box.png":::
-4. Do a paste operation (for example: Press **Ctrl + V**) into a new file within an editor like Microsoft Visual Studio Code or Notepad. Ensure to save the file with the **.json** extension.
+4. Do a paste operation (for example: Press **Ctrl + V**) into a new file within an editor like Microsoft Visual Studio Code or Notepad. Make sure to save the file with the **.json** extension.
## Next steps
-In this article, you learned how to make file copies of the MedTech service Device and FHIR destination mappings templates. To learn how to troubleshoot Destination and FHIR destination mappings, see
+In this article, you learned how to make copies of the MedTech service device and FHIR destination mappings. To learn how to troubleshoot device and FHIR destination mappings, see
>[!div class="nextstepaction"]
->[Troubleshoot MedTech service Device and FHIR destination mappings](iot-troubleshoot-mappings.md)
+>[Troubleshoot the MedTech service device and FHIR destination mappings](iot-troubleshoot-mappings.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
load-balancer Tutorial Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-cli.md
In this tutorial, you learn how to:
> * Create a gateway load balancer.
> * Chain a load balancer frontend to gateway load balancer.
-> [!IMPORTANT]
-> Azure Gateway Load Balancer is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]

- This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
load-balancer Tutorial Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-portal.md
In this tutorial, you learn how to:
> * Create a gateway load balancer.
> * Chain a load balancer frontend to gateway load balancer.
-> [!IMPORTANT]
-> Gateway Azure Load Balancer is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
load-balancer Tutorial Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-powershell.md
In this tutorial, you learn how to:
> * Create a gateway load balancer.
> * Chain a load balancer frontend to gateway load balancer.
-> [!IMPORTANT]
-> Gateway Azure Load Balancer is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
The following table identifies the authentication types that are available on th
| [Client Certificate](#client-certificate-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook |
| [Active Directory OAuth](#azure-active-directory-oauth-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
| [Raw](#raw-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Managed identity](#managed-identity-authentication) | **Consumption logic app**: <br><br>- **Built-in**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <p><p>- **Managed connector** (preview): <p><p> **Single-authentication**: Azure AD Identity Protection, Azure Automation, Azure Container Instance, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Key Vault, Azure Resource Manager, Microsoft Sentinel, HTTP with Azure AD <p><p> **Multi-authentication**: Azure Blob Storage, Azure Event Hubs, Azure Service Bus, SQL Server <p><p>___________________________________________________________________________________________<p><p>**Standard logic app**: <p><p>- **Built-in**: HTTP, HTTP Webhook <p><p>- **Managed connector** (preview): <p> **Single-authentication**: Azure AD Identity Protection, Azure Automation, Azure Container Instance, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Key Vault, Azure Resource Manager, Microsoft Sentinel, HTTP with Azure AD <p><p> **Multi-authentication**: Azure Blob Storage, Azure Event Hubs, Azure Service Bus, SQL Server |
+| [Managed identity](#managed-identity-authentication) | **Consumption logic app**: <br><br>- **Built-in**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <p><p>- **Managed connector**: Azure AD, Azure AD Identity Protection, Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure VM, HTTP with Azure AD, SQL Server <p><p>___________________________________________________________________________________________<p><p>**Standard logic app**: <p><p>- **Built-in**: HTTP, HTTP Webhook <p><p>- **Managed connector**: Azure AD, Azure AD Identity Protection, Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure VM, HTTP with Azure AD, SQL Server |
|||

<a name="secure-inbound-requests"></a>
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
ws.update(v1_legacy_mode=False)
# [Azure CLI extension v1](#tab/azurecliextensionv1)
-The Azure CLI [extension v1 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml(v1)/workspace#az-ml(v1)-workspace-update) command. To enable the parameter for a workspace, add the parameter `--v1-legacy-mode true`.
+The Azure CLI [extension v1 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml(v1)/workspace#az-ml(v1)-workspace-update) command. To disable the parameter for a workspace, add the parameter `--v1-legacy-mode False`.
> [!IMPORTANT]
> The `v1-legacy-mode` parameter is only available in version 1.41.0 or newer of the Azure CLI extension for machine learning v1 (`azure-cli-ml`). Use the `az version` command to view version information.
+```azurecli
+az ml workspace update -g <myresourcegroup> -w <myworkspace> --v1-legacy-mode False
+```
+
The return value of the `az ml workspace update` command may not show the updated value. To view the current state of the parameter, use the following command:

```azurecli
az ml workspace show -g <myresourcegroup> -w <myworkspace> --query v1LegacyMode
```

## Next steps

* [Use a private endpoint with Azure Machine Learning workspace](how-to-configure-private-link.md).
-* [Create private link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md).
+* [Create private link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md).
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
From the `cli/jobs/pipelines-with-components/basics` directory of the [`azureml-
- Interface: inputs and outputs. For example, a model training component will take training data and number of epochs as input, and generate a trained model file as output. Once the interface is defined, different teams can develop and test the component independently.
- Command, code & environment: the command, code and environment to run the component. Command is the shell command to execute the component. Code usually refers to a source code directory. Environment could be an AzureML environment(curated or customer created), docker image or conda environment.

-- **component_src**: This is the source code directory for a specific component. It contains the source code that will be executed in the component. You can use your preferred lanuage(Python, R...). The code must be executed by a shell command. The source code can take a few inputs from shell command line to control how this step is going to be executed. For example, a training step may take training data, learning rate, number of epochs to control the training process. The argument of a shell command is used to pass inputs and outputs to the code.
+- **component_src**: This is the source code directory for a specific component. It contains the source code that will be executed in the component. You can use your preferred language(Python, R...). The code must be executed by a shell command. The source code can take a few inputs from shell command line to control how this step is going to be executed. For example, a training step may take training data, learning rate, number of epochs to control the training process. The argument of a shell command is used to pass inputs and outputs to the code.
Now let's create a pipeline using the `3b_pipeline_with_data` example. We'll explain the detailed meaning of each file in following sections.
In the *3b_pipeline_with_data* example, we've created a three steps pipeline.
### Read and write data in pipeline
-One common scenario is to read and write data in your pipeline. In AuzreML, we use the same schema to [read and write data](how-to-read-write-data-v2.md) for all type of jobs (pipeline job, command job, and sweep job). Below are pipeline job examples of using data for common scenarios.
+One common scenario is to read and write data in your pipeline. In AzureML, we use the same schema to [read and write data](how-to-read-write-data-v2.md) for all type of jobs (pipeline job, command job, and sweep job). Below are pipeline job examples of using data for common scenarios.
- [local data](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components/basics/4a_local_data_input) - [web file with public URL](https://github.com/Azure/azureml-examples/blob/sdk-preview/cli/jobs/pipelines-with-components/basics/4c_web_url_input/pipeline.yml)
The most common used schema of the component YAML is described in below table. S
|key|description| |||
-|name|**Required**. Name of the component. Must be unique across the AzureML workspace. Must start with lowercase letter. Allow lowercase letters, numbers and understore(_). Maximum length is 255 characters.|
+|name|**Required**. Name of the component. Must be unique across the AzureML workspace. Must start with lowercase letter. Allow lowercase letters, numbers and underscore(_). Maximum length is 255 characters.|
|display_name|Display name of the component in the studio UI. Can be non-unique within the workspace.|
|command|**Required** the command to execute|
|code|Local path to the source code directory to be uploaded and used for the component.|
|environment|**Required**. The environment that will be used to execute the component.|
|inputs|Dictionary of component inputs. The key is a name for the input within the context of the component and the value is the component input definition. Inputs can be referenced in the command using the ${{ inputs.<input_name> }} expression.|
|outputs|Dictionary of component outputs. The key is a name for the output within the context of the component and the value is the component output definition. Outputs can be referenced in the command using the ${{ outputs.<output_name> }} expression.|
-|is_deterministic|Whether to reuse previous job's result if the component inputs not change. Default value is `true`, also known as resue by default. The common scenario to set it as `false` is to force reload data from a cloud storage or URL.|
+|is_deterministic|Whether to reuse the previous job's result if the component inputs did not change. Default value is `true`, also known as reuse by default. The common scenario when set as `false` is to force reload data from a cloud storage or URL.|
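The `name` constraints in the table can be expressed as a small validator. This is a sketch of the documented rules only; the helper is illustrative, not the AzureML SDK's own validation:

```python
import re

# Sketch of the documented component `name` rules: starts with a lowercase
# letter; only lowercase letters, digits, and underscore; at most 255
# characters. Illustrative helper, not the AzureML validator itself.
_NAME_RE = re.compile(r"[a-z][a-z0-9_]*")

def is_valid_component_name(name: str) -> bool:
    return len(name) <= 255 and _NAME_RE.fullmatch(name) is not None

print(is_valid_component_name("train_model_2"))   # True
print(is_valid_component_name("TrainModel"))      # False: must start lowercase
print(is_valid_component_name("train-model"))     # False: hyphen not allowed
```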
For the example in *3b_pipeline_with_data/componentA.yml*, componentA has one data input and one data output, which can be connected to other steps in the parent pipeline. All the files under the `code` section in the component YAML will be uploaded to AzureML when submitting the pipeline job. In this example, files under `./componentA_src` will be uploaded (line 16 in *componentA.yml*). You can see the uploaded source code in Studio UI: double select the ComponentA step and navigate to the Snapshot tab, as shown in the screenshot below. We can see it's a hello-world script that does some simple printing, and writes the current datetime to the `componentA_output` path. The component takes input and output through command-line arguments, and they're handled in *hello.py* using `argparse`.
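A minimal sketch of such an entry script, in the style described above (the argument names and output file name are illustrative assumptions, not copied from the sample's *hello.py*):

```python
# Sketch of a component entry script: inputs and outputs arrive as
# command-line arguments handled with argparse, and the script writes the
# current datetime under the output path. Argument names are illustrative
# assumptions, not copied from the sample.
import argparse
from datetime import datetime
from pathlib import Path

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--component_input", type=str, required=True)
    parser.add_argument("--component_output", type=str, required=True)
    args = parser.parse_args(argv)

    print(f"hello world, reading from {args.component_input}")
    out_dir = Path(args.component_output)
    out_dir.mkdir(parents=True, exist_ok=True)
    # Write the current datetime to the output path, as the sample script does.
    (out_dir / "datetime.txt").write_text(datetime.now().isoformat())

if __name__ == "__main__":
    main()
```

At runtime, AzureML fills in the input and output arguments with mounted storage paths when it executes the component's shell command.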
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md
TypeError: register() takes 3 positional arguments but 4 were given
```
-You have **Flask 2** installed in your python environment but are running a server (< 7.0.0) that does not support Flask 2. To resolve, please upgrade to the latest version of server.
+You have **Flask 2** installed in your Python environment but are running a server (< 0.7.0) that does not support Flask 2. To resolve, please upgrade to the latest version of the server.
### 2. I encountered an ``ImportError`` or ``ModuleNotFoundError`` on modules ``opencensus``, ``jinja2``, ``MarkupSafe``, or ``click`` during startup like the following:
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
In this section, you learn how to secure the training environment in Azure Machine Learning.
To secure the training environment, use the following steps:

1. Create an Azure Machine Learning [compute instance and compute cluster in the virtual network](how-to-secure-training-vnet.md#compute-cluster) to run the training job.
-1. If your compute cluster or compute instance does not use a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
+1. If your compute cluster or compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
> [!TIP]
> Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure Batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure Batch service and Azure Machine Learning service without a public IP.
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-distributed-gpu.md
run = Experiment(ws, 'experiment_name').submit(run_config)
[PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/) is a lightweight open-source library that provides a high-level interface for PyTorch. Lightning abstracts away many of the lower-level distributed training configurations required for vanilla PyTorch. Lightning allows you to run your training scripts in single-GPU, single-node multi-GPU, and multi-node multi-GPU settings. Behind the scenes, it launches multiple processes for you, similar to `torch.distributed.launch`.
-For single-node training (including single-node multi-GPU), you can run your code on Azure ML without needing to specify a `distributed_job_config`. For multi-node training, Lightning requires the following environment variables to be set on each node of your training cluster:
+For single-node training (including single-node multi-GPU), you can run your code on Azure ML without needing to specify a `distributed_job_config`.
+To run an experiment using multiple nodes with multiple GPUs, there are two options:
-- MASTER_ADDR
-- MASTER_PORT
-- NODE_RANK
+- Using PyTorch configuration (recommended): Define `PyTorchConfiguration` and specify `communication_backend="Nccl"`, `node_count`, and `process_count` (note that this is the total number of processes, that is, `num_nodes * process_count_per_node`). In the Lightning Trainer module, specify both `num_nodes` and `gpus` to be consistent with `PyTorchConfiguration`. For example, `num_nodes = node_count` and `gpus = process_count_per_node`.
-To run multi-node Lightning training on Azure ML, follow the [per-node-launch](#per-node-launch) guidance, but note that currently, the `ddp` strategy works only when you run an experiment using multiple nodes, with one GPU per node.
+- Using MPI Configuration:
-To run an experiment using multiple nodes with multiple GPUs:
-
-- Define `MpiConfiguration` and specify `node_count`. Don't specify `process_count` because Lightning internally handles launching the worker processes for each node.
-- For PyTorch jobs, Azure ML handles setting the MASTER_ADDR, MASTER_PORT, and NODE_RANK environment variables that Lightning requires:
+ - Define `MpiConfiguration` and specify both `node_count` and `process_count_per_node`. In Lightning Trainer, specify both `num_nodes` and `gpus` to be respectively the same as `node_count` and `process_count_per_node` from `MpiConfiguration`.
+ - For multi-node training with MPI, Lightning requires the following environment variables to be set on each node of your training cluster:
+ - MASTER_ADDR
+ - MASTER_PORT
+ - NODE_RANK
+ - LOCAL_RANK
+
+ Manually set the environment variables that Lightning requires in the main training script:
```python
import os
+ from argparse import ArgumentParser
- def set_environment_variables_for_nccl_backend(single_node=False, master_port=6105):
- if not single_node:
- master_node_params = os.environ["AZ_BATCH_MASTER_NODE"].split(":")
- os.environ["MASTER_ADDR"] = master_node_params[0]
-
- # Do not overwrite master port with that defined in AZ_BATCH_MASTER_NODE
- if "MASTER_PORT" not in os.environ:
- os.environ["MASTER_PORT"] = str(master_port)
+ def set_environment_variables_for_mpi(num_nodes, gpus_per_node, master_port=54965):
+ if num_nodes > 1:
+ os.environ["MASTER_ADDR"], os.environ["MASTER_PORT"] = os.environ["AZ_BATCH_MASTER_NODE"].split(":")
    else:
        os.environ["MASTER_ADDR"] = os.environ["AZ_BATCHAI_MPI_MASTER_NODE"]
- os.environ["MASTER_PORT"] = "54965"
+ os.environ["MASTER_PORT"] = str(master_port)
- os.environ["NCCL_SOCKET_IFNAME"] = "^docker0,lo"
try:
- os.environ["NODE_RANK"] = os.environ["OMPI_COMM_WORLD_RANK"]
+ os.environ["NODE_RANK"] = str(int(os.environ.get("OMPI_COMM_WORLD_RANK")) // gpus_per_node)
        # additional variables
        os.environ["MASTER_ADDRESS"] = os.environ["MASTER_ADDR"]
        os.environ["LOCAL_RANK"] = os.environ["OMPI_COMM_WORLD_LOCAL_RANK"]
    except:
        # fails when used with pytorch configuration instead of mpi
        pass
+
+ if __name__ == "__main__":
+ parser = ArgumentParser()
+ parser.add_argument("--num_nodes", type=int, required=True)
+ parser.add_argument("--gpus_per_node", type=int, required=True)
+ args = parser.parse_args()
+ set_environment_variables_for_mpi(args.num_nodes, args.gpus_per_node)
+
+ trainer = Trainer(
+ num_nodes=args.num_nodes,
+ gpus=args.gpus_per_node
+ )
```

-- Lightning handles computing the world size from the Trainer flags `--gpus` and `--num_nodes` and manages rank and local rank internally:
+ Lightning handles computing the world size from the Trainer flags `--gpus` and `--num_nodes`.
```python
from azureml.core import ScriptRunConfig, Experiment
from azureml.core.runconfig import MpiConfiguration

nnodes = 2
- args = ['--max_epochs', 50, '--gpus', 2, '--accelerator', 'ddp_spawn', '--num_nodes', nnodes]
- distr_config = MpiConfiguration(node_count=nnodes)
+ gpus_per_node = 4
+ args = ['--max_epochs', 50, '--gpus_per_node', gpus_per_node, '--accelerator', 'ddp', '--num_nodes', nnodes]
+ distr_config = MpiConfiguration(node_count=nnodes, process_count_per_node=gpus_per_node)
run_config = ScriptRunConfig(
    source_directory='./src',
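```

The recommended `PyTorchConfiguration` option described above can be sketched in the same style. This is a hedged sketch under stated assumptions: the script name, compute target name, and argument list are hypothetical, and it assumes the `azureml.core.runconfig.PyTorchConfiguration` class with the `communication_backend`, `node_count`, and `process_count` parameters mentioned earlier.

```python
from azureml.core import ScriptRunConfig, Experiment
from azureml.core.runconfig import PyTorchConfiguration

nnodes = 2
gpus_per_node = 4

# process_count is the total process count: num_nodes * process_count_per_node.
distr_config = PyTorchConfiguration(
    communication_backend="Nccl",
    node_count=nnodes,
    process_count=nnodes * gpus_per_node,
)

# Keep the Lightning Trainer flags consistent with the configuration:
# num_nodes = node_count and gpus = process count per node.
args = ['--max_epochs', 50, '--gpus', gpus_per_node, '--num_nodes', nnodes]

run_config = ScriptRunConfig(
    source_directory='./src',
    script='train.py',             # hypothetical script name
    arguments=args,
    compute_target='gpu-cluster',  # hypothetical compute target name
    distributed_job_config=distr_config,
)

# ws: an existing Workspace object, as in the MPI example above.
run = Experiment(ws, 'experiment_name').submit(run_config)
```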
marketplace Azure App Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-offer-setup.md
Previously updated : 06/29/2022 Last updated : 07/20/2022

# Create an Azure application offer

As a commercial marketplace publisher, you can create an Azure application offer so potential customers can buy your solution. This article explains the process to create an Azure application offer for the Microsoft commercial marketplace.
+## Before you begin
+
+Before you can publish an Azure application offer, you must have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
If you haven't already done so, read [Plan an Azure application offer for the commercial marketplace](plan-azure-application-offer.md). It will provide the resources and help you gather the information and assets you'll need when you create your offer.

## Create a new offer
marketplace Azure Container Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-offer-setup.md
Previously updated : 06/29/2022 Last updated : 07/20/2022

# Create an Azure Container offer
Before you start, create a commercial marketplace account in [Partner Center](./
## Before you begin
+Before you can publish an Azure Container offer, you must have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
+ Review [Plan an Azure Container offer](marketplace-containers.md). It will explain the technical requirements for this offer and list the information and assets you'll need when you create it.

## Create a new offer
marketplace Azure Vm Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-offer-setup.md
Previously updated : 06/29/2022 Last updated : 07/20/2022

# Create a virtual machine offer on Azure Marketplace
Before you start, [create a commercial marketplace account in Partner Center](cr
## Before you begin
+Before you can publish a virtual machine offer, you must have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
+ If you haven't done so yet, review [Plan a virtual machine offer](marketplace-virtual-machines.md). It will explain the technical requirements for your virtual machine and list the information and assets you'll need when you create your offer.

## Create a new offer
marketplace Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-account.md
Previously updated : 06/30/2022 Last updated : 07/20/2022

# Create a commercial marketplace account in Partner Center
Last updated 06/30/2022
- All Partner Center users
-To publish your offers to [Microsoft AppSource](https://appsource.microsoft.com/) or [Azure Marketplace](https://azuremarketplace.microsoft.com/), you need to create an account in the commercial marketplace program in Partner Center. This article covers how to create a commercial marketplace account to the commercial marketplace program.
+To publish your offers to [Microsoft AppSource](https://appsource.microsoft.com/) or [Azure Marketplace](https://azuremarketplace.microsoft.com/), you need to have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. This article covers how to create a commercial marketplace account and enroll it in the commercial marketplace program. To verify that your account is enrolled in the commercial marketplace program, see [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
>[!NOTE] >If you had an account in the Cloud Partner Portal (CPP), we moved it to Partner Center. You don't need to create a new account. For more information, see [Publishers who moved from the Cloud Partner Portal](#publishers-who-moved-from-the-cloud-partner-portal).
marketplace Create Consulting Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-consulting-service-offer.md
Previously updated : 06/29/2022 Last updated : 07/20/2022

# Create a consulting service offer
This article explains how to create a consulting service offer for the commercia
## Before you begin
-To publish a consulting service offer, you must meet certain eligibility requirements to demonstrate expertise in your field. If you havenΓÇÖt already done so, read [Plan a consulting service offer](./plan-consulting-service-offer.md). It describes the prerequisites and assets youΓÇÖll need on hand to create a consulting service offer in Partner Center.
+To publish a consulting service offer, you must:
+
+- Have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
+- Meet certain eligibility requirements to demonstrate expertise in your field.
+
+If you haven't already done so, read [Plan a consulting service offer](./plan-consulting-service-offer.md). It describes the prerequisites and assets you'll need to create a consulting service offer in Partner Center.
## Create a consulting service offer
marketplace Create Managed Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-managed-service-offer.md
Previously updated : 03/28/2022 Last updated : 07/20/2022

# Create a Managed Service offer for the commercial marketplace

This article explains how to create a Managed Service offer for the Microsoft commercial marketplace using Partner Center.
-To publish a Managed Service offer, you must have earned a Gold or Silver Microsoft Competency in Cloud Platform. If you havenΓÇÖt already done so, read [Plan a Managed Service offer for the commercial marketplace](./plan-managed-service-offer.md). It will help you prepare the assets you need when you create the offer in Partner Center.
+## Before you begin
+
+To publish a Managed Service offer, you must meet the following prerequisites:
+
+- Have earned a Gold or Silver Microsoft Competency in Cloud Platform. If you haven't already done so, read [Plan a Managed Service offer for the commercial marketplace](./plan-managed-service-offer.md). It will help you prepare the assets you need when you create the offer in Partner Center.
+- Have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
## Create a new offer
marketplace Create New Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-new-saas-offer.md
As a commercial marketplace publisher, you can create a software as a service (S
## Before you begin
+Before you can publish a SaaS offer, you must have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
+ If you haven't already done so, read [Plan a SaaS offer](plan-saas-offer.md). It will explain the technical requirements for your SaaS app, and the information and assets you'll need when you create your offer. Unless you plan to publish a simple listing (**Contact me** listing option) in the commercial marketplace, your SaaS application must meet technical requirements around authentication.

> [!IMPORTANT]
You can light up [unified discovery and delivery](plan-SaaS-offer.md) of your Sa
### Link published Microsoft 365 App consumption clients
-1. If you do not have published Office add-in, Teams app, or SharePoint Framework solutions that works with your SaaS offer, select **No**.
-1. If you have published Office add-in, Teams app, or SharePoint Framework solutions that works with your SaaS offer, select **Yes**, then select **+Add another AppSource link** to add new links.
+1. If you do not have published Office add-in, Teams app, or SharePoint Framework solutions that work with your SaaS offer, select **No**.
+1. If you have published Office add-in, Teams app, or SharePoint Framework solutions that work with your SaaS offer, select **Yes**, then select **+Add another AppSource link** to add new links.
1. Provide a valid AppSource link.
1. Continue adding links by selecting **+Add another AppSource link** and providing valid AppSource links.
1. The order in which the linked products are shown on the listing page of the SaaS offer is indicated by the Rank value. You can change it by selecting, holding, and moving the = icon up and down the list.
marketplace Dynamics 365 Business Central Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-offer-setup.md
Previously updated : 06/29/2022 Last updated : 07/20/2022

# Create a Dynamics 365 Business Central offer
Before you start, create a commercial marketplace account in [Partner Center](cr
## Before you begin
+Before you can publish a Dynamics 365 Business Central offer, you must have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
+ Review [Plan a Dynamics 365 offer](marketplace-dynamics-365.md). It explains the technical requirements for this offer and lists the information and assets you'll need when you create it.

## Create a new offer
marketplace Dynamics 365 Customer Engage Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-offer-setup.md
Last updated 07/18/2022
# Create a Dynamics 365 apps on Dataverse and Power Apps offer
-This article describes how to create a _Dynamics 365 apps on Dataverse and Power Apps_ offer. Before you start, create a commercial marketplace account in [Partner Center](./create-account.md) and ensure it is enrolled in the commercial marketplace program.
+This article describes how to create a _Dynamics 365 apps on Dataverse and Power Apps_ offer.
## Before you begin
+Before you can publish a Dynamics 365 apps on Dataverse and Power Apps offer, you must have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
+ Review [Plan a Microsoft Dynamics 365 offer](marketplace-dynamics-365.md). It will explain the technical requirements for this offer and list the information and assets you'll need when you create it.

## Create a new offer
marketplace Dynamics 365 Operations Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-offer-setup.md
Previously updated : 06/29/2022 Last updated : 07/20/2022

# Create a Dynamics 365 Operations Apps offer
Before you start, create a commercial marketplace account in [Partner Center](./
## Before you begin
+Before you can publish a Dynamics 365 Operations Apps offer, you must have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
+ Review [Plan a Dynamics 365 offer](marketplace-dynamics-365.md). It will explain the technical requirements for this offer and list the information and assets you'll need when you create it.

## Create a new offer
marketplace Iot Edge Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-offer-setup.md
Previously updated : 06/29/2022 Last updated : 07/20/2022

# Create an IoT Edge Module offer
Before you start, create a commercial marketplace account in [Partner Center](./
## Before you begin
+Before you can publish an IoT Edge Module offer, you must have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
+ Review [Plan an IoT Edge Module offer](marketplace-iot-edge.md). It will explain the technical requirements for this offer and list the information and assets you'll need when you create it.

## Create a new offer
marketplace Marketplace Criteria Content Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-criteria-content-validation.md
Previously updated : 12/06/2021 Last updated : 07/29/2022

# Azure Marketplace listing guidelines
-This article explains the requirements and guidelines for listing new offers and services on Azure Marketplace. All offers must meet the [listing requirements](#listing-requirements-for-all-offers) listed below. Use the links on the right to navigate to additional requirements and checklists for specific listing types.
+This article explains the requirements and guidelines for listing new offers and services on Azure Marketplace. All offers must meet the listing requirements described in this article. Use the links on the right to navigate to additional requirements and checklists for specific offer types.
-## Listing requirements for all offers
+## Listing requirements for most offers
| No. | Listing element | Base requirement | Optimal requirement |
|: |: |: |: |
This article explains the requirements and guidelines for listing new offers and
| 6 | Videos | <ul><li>No video is required but, if provided, it must play back without any errors.</li><li>If provided, it may not refer to competitor companies *unless* it is demonstrating a migration solution. |<ul><li>Ideally, the length is 3 min. or more.</li><li>The solution offer is easily understood through video content.</li><li>Provides demo of solution capabilities. |
| 7 | List status (listing options) | <ul><li>Must be labeled as one of the following types: <ul><li>*Contact Me*</li><li>*Trial*/*Get Trial Now*/*Start Trial*/*Test Drive*</li><li>*Buy Now*/*Get It Now*</li></ul></ul> | Customer can readily understand what the next steps are: <ol><li>Try the Trial.</li><li>Buy Now.</li><li>Contact via email or phone number to arrange for Proof of Concept (POC), Assessment, or Briefing.</li></ol> |
| 8 | Solution pricing | Must have solution pricing tab/details, and pricing must be in the local currency of the partner solution offering. | Multiple billing options should be available with tier pricing to give customer options. |
-| 9 | Learn more | Links at the bottom (under the description, not Azure Marketplace links on the left) lead to more information about the solution and are publicly available and displaying correctly. | Links to specific items (for example, spec pages on the partner site) and not just the partner home page. |
-| 10 | Solution support and help | Link to at least one of the following: <ul><li>Telephone numbers</li><li>Email support</li><li>Chat agents</li><li>Community forums |<ul><li>All support methods are listed.</li><li>Paid support is offered free during the *Trial* or *Test Drive* period. |
+| 9 | Learn more | Links at the bottom (under the description, not Azure Marketplace links on the left) lead to more information about the solution and are publicly available and displaying correctly. | Links to specific items (for example, spec pages on the partner site) and not just the partner home page. |
+| 10 | Solution support and help | Provide at least one of the following: <br>- Telephone numbers [1]<br>- Email support [2] |- All support methods are listed.<br>- Paid support is offered free during the *Trial* or *Test Drive* period. |
| 11 | Legal | Policies or terms are available via a public URL. | |
-## Trial offer requirements
+[1] Doesn't apply to Consulting service and Power BI app offers
-| No. | Listing element | Base requirement | Optimal requirement |
+[2] Doesn't apply to Consulting service offers
+
+## Listing requirements for trial offers
+
+| No. | Listing element | Base requirement | Optimal requirement |
|: |: |: |: |
-| | List status (Listing option) | The link must lead to a customer-led *Trial* experience. | Other listing options (for example, *Buy Now*) are also available. |
+| | List status (Listing option) | The link must lead to a customer-led *Trial* experience. | Other listing options (for example, *Buy Now*) are also available. |
-## SaaS application requirements
+## Listing requirements for SaaS applications
-| No. | Listing element | Base requirement | Optimal requirement |
+| No. | Listing element | Base requirement | Optimal requirement |
|: |: |: |: |
| 1 | Offer title |<ul><li>Must consist only of lowercase letters, alphanumeric characters, dashes, or underscores. The title can't be modified after it's published.</li><li>Describes solution offering.</li><li>Matches online promotion of solution on partner's website. | Contains key search words. |
| 2 | Technical information: Configuration |<ul><li>For software as a service (SaaS) apps, choose whether you want only to list your app or to enable customers to purchase your app through Azure.</li><li>Select the text that you want on your offer's acquisition button: *Free*, *Free Trial*, or *Contact Me*.</li><li>In the pop-up window, select only one applicable product if your app utilizes the technology: Cortana Intelligence, Power BI Solution Templates, or Power Apps. | |
This article explains the requirements and guidelines for listing new offers and
| 10 | Contacts: Solution support and help | <ul><li>Engineering contact name: The name of the engineering contact for your app. This contact will receive technical communications from Microsoft.</li><li>Engineering contact email: The email address of the engineering contact for your app.</li><li>Engineering contacts phone: The phone number of the engineering contact. [ISO phone number notations](https://en.wikipedia.org/wiki/E.123) are supported.</li><li>Support contact name: The name of the support contact for your app. This contact will receive support-related communications from Microsoft.</li><li>Support contact email: The email address of the support contact for your app.</li><li>Support contact phone: The phone number of the support contact. [ISO phone number notations](https://en.wikipedia.org/wiki/E.123) are supported.</li><li>Support URL: The URL of your support page. | <ul><li>All support methods are listed.</li><li>Paid support offered free during the *Trial* or *Test Drive* period. |
| 11 | Legal |<ul><li>Privacy policy URL: The URL for your app's privacy policy in the Privacy policy URL field in the CPP.</li><li>Terms of use: The terms of use of your app. Customers are required to accept these terms before they can try your app. | Policies or terms are available via a public URL site. |
-## Container offer requirements
+## Listing requirements for Container offers
| No. | Listing element | Base requirement | Optimal requirement |
|: |: |: |: |
This article explains the requirements and guidelines for listing new offers and
| 3 | Marketplace artifacts | Logos are displayed correctly. |<ul><li>Logos: Small (48 x 48 px, optional), Medium (90 x 90 px, optional), and Large (from 216 x 216 to 350 x 350 px, required).</li><li>Screenshot (max. 5): Requires a .PNG image with a resolution of 1280 x 720 pixels.|
| 4 | Lead management |<ul><li>Lead management: Select the system where your leads will be stored.</li><li>See [get customer leads](./partner-center-portal/commercial-marketplace-get-customer-leads.md) to connect your CRM system. | |
-## Consulting offer requirements
+## Listing requirements for Consulting service offers
| No. | Listing element | Base requirement | Optimal requirement |
|: |: |: |: |
marketplace Commercial Marketplace Get Customer Leads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-get-customer-leads.md
Previously updated : 06/29/2022 Last updated : 07/13/2022

# Customer leads from your commercial marketplace offer
Here are some recommendations for driving leads through your sales cycle:
- **Follow-up**: Don't forget to follow up within 24 hours. You will get the lead in your CRM of choice immediately after the customer deploys a test drive; email them while they are still warm. Request scheduling a phone call to better understand if your product is a good solution for their problem. Expect the typical transaction to require numerous follow-up calls.
- **Nurture**: Nurture your leads to set yourself on the way to a higher profit margin. Check in, but don't bombard them. We recommend you email leads at least a few times before you close them out; don't give up after the first attempt. Remember, these customers directly engaged with your product and spent time in a free trial; they are great prospects.
-After the technical setup is in place, incorporate these leads into your current sales and marketing strategy and operational processes. We're interested in better understanding your overall sales process and want to work closely with you to provide high-quality leads and enough data to make you successful. We welcome your feedback on how we can optimize and enhance the leads we send you with additional data to help make these customers successful. Let us know if you're interested in [providing
-feedback](mailto:AzureMarketOnboard@microsoft.com) and suggestions to enable your sales team to be more successful with commercial marketplace leads.
+After the technical setup is in place, incorporate these leads into your current sales and marketing strategy and operational processes.
+
+We're interested in better understanding your overall sales process and want to work closely with you to provide high-quality leads and enough data to make you successful. We welcome your feedback on how we can optimize and enhance the leads we send you with additional data to help make these customers successful. Please share your feedback via the smile icon in the top-right corner of Partner Center. [Sign in](https://partner.microsoft.com/dashboard/home) to Partner Center.
## Next steps
marketplace Power Bi App Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-app-offer-setup.md
Previously updated : 06/29/2022 Last updated : 07/20/2022

# Create a Power BI app offer

This article describes how to create a Power BI app offer. All offers go through our certification process, which checks your solution for standard requirements, compatibility, and proper practices.
-Before you start, create a commercial marketplace account in [Partner Center](./create-account.md) and ensure it is enrolled in the commercial marketplace program.
- ## Before you begin
+Before you can publish a Power BI app offer, you must have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
+ Review [Plan a Power BI offer](marketplace-power-bi.md). It will explain the technical requirements for this offer and list the information and assets you'll need when you create it.

## Create a new offer
marketplace Power Bi Visual Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/power-bi-visual-offer-setup.md
Previously updated : 07/18/2022 Last updated : 07/20/2022 # Create a Power BI visual offer This article describes how to use Partner Center to submit a Power BI visual offer to [Microsoft AppSource](https://appsource.microsoft.com) for others to discover and use.
-Before you start, create a commercial marketplace account in [Partner Center](./create-account.md) and ensure it is enrolled in the commercial marketplace program.
- ## Before you begin
+Before you can publish a Power BI visual offer, you must have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
+ Review [Plan a Power BI visual offer](marketplace-power-bi-visual.md). It will explain the technical requirements for this offer and list the information and assets you'll need when you create it. ## Create a new offer
marketplace Publisher Guide By Offer Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/publisher-guide-by-offer-type.md
You can configure a single offer type in different ways to enable different publ
Be sure to review the online store and offer type eligibility requirements and the technical publishing requirements before creating your offer.
+To publish your offers to [Microsoft AppSource](https://appsource.microsoft.com/) or [Azure Marketplace](https://azuremarketplace.microsoft.com/), you need to have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program. See [Create a commercial marketplace account in Partner Center](create-account.md) and [Verify your account information when you enroll in a new Partner Center program](/partner-center/verification-responses#checking-your-verification-status).
+ ## List of offer types The following table shows the commercial marketplace offer types in Partner Center.
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
To enable public network access for the Azure Migrate project, sign in to the Az
| **Pricing** | For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/). **Virtual network requirements** | The ExpressRoute/VPN gateway endpoint should reside in the selected virtual network or a virtual network connected to it. You might need about 15 IP addresses in the virtual network.
+**PowerShell support** | PowerShell isn't supported. We recommend using the Azure portal or REST APIs to work with Azure Migrate Private Link support.
## Next steps
migrate Prepare For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-migration.md
There are some changes needed on VMs before you migrate them to Azure.
- For some operating systems, Azure Migrate makes changes automatically during the replication/migration process. - For other operating systems, you need to configure settings manually.-- It's important to configure settings manually before you begin migration. If you migrate the VM before you make the change, the VM might not boot up in Azure.
+- It's important to configure settings manually before you begin migration. Some of the changes may affect VM boot, or prevent connectivity to the VM from being established. If you migrate the VM before making the change, the VM might not boot up in Azure.
Review the tables to identify the changes you need to make.
Azure Migrate completes these actions automatically for these versions
- Debian 10, 9, 8, 7 - Oracle Linux 8, 7.7-CI, 7.7, 6
-For other versions, prepare machines as summarized in the table.
+For other versions, prepare machines as summarized in the table.
+> [!NOTE]
+> Some changes may affect VM boot, or prevent connectivity to the VM from being established.
**Action** | **Details** | **Linux version**
mysql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-monitoring.md
Azure Database for MySQL Flexible Server provides monitoring of servers through
In this article, you will learn about the various metrics available for your flexible server that give insight into the behavior of your server.
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+ ## Available metrics Azure Database for MySQL Flexible Server provides various metrics to understand how your workload is performing. Based on this data, you can assess the impact on your server and application. For example, in flexible server, you can monitor **Host CPU percent**, **Active Connections**, **IO percent**, and **Host Memory Percent** to identify when there's a performance impact. From there, you may need to optimize your workload, scale vertically by changing compute tiers, or scale horizontally by using read replicas.
These metrics are available for Azure Database for MySQL:
|Metric display name|Metric|Unit|Description| |||||
-|Host CPU percent|cpu_percent|Percent|The percentage of CPU utilization on the server, including CPU utilization from both customer workload and Azure MySQL processes|
-|Host Network In |network_bytes_ingress|Bytes|Incoming network traffic on the server, including traffic from both customer database and Azure MySQL features like replication, monitoring, logs etc.|
-|Host Network out|network_bytes_egress|Bytes|Outgoing network traffic on the server, including traffic from both customer database and Azure MySQL features like replication, monitoring, logs etc.|
-|Replication Lag|replication_lag|Seconds|The time since the last replayed transaction. This metric is available for replica servers only.|
-|Active Connections|active_connection|Count|The number of active connections to the server.|
+|Host CPU percent|cpu_percent|Percent|Host CPU percent is the total CPU utilization for processing all tasks on your server over the selected period. This metric includes the workload of your Azure Database for MySQL Flexible Server and other Azure MySQL processes. A high CPU percent can help you determine whether your database server has more workload than it can handle. This metric is equivalent to total CPU utilization, similar to CPU utilization on any virtual machine.|
+|Host Network In |network_bytes_ingress|Bytes|Total sum of incoming network traffic on the server for the selected period. This metric includes traffic to your database and to Azure MySQL features like monitoring, logs, and so on.|
+|Host Network out|network_bytes_egress|Bytes|Total sum of outgoing network traffic on the server for the selected period. This metric includes traffic from your database and from Azure MySQL features like monitoring, logs, and so on.|
+|Replication Lag|replication_lag|Seconds|Replication lag is the number of seconds the replica is behind in replaying the transactions received from the source server. This metric is calculated from `Seconds_Behind_Master` in the output of the `SHOW SLAVE STATUS` command, and is available for replica servers only. For more information, see [Monitor replication latency](../single-server/how-to-troubleshoot-replication-latency.md).|
+|Active Connections|active_connection|Count|The number of active connections to the server. Active connections are the total number of [threads connected](https://dev.mysql.com/doc/refman/8.0/en/server-status-variables.html#statvar_Threads_connected) to your server, which also includes threads from [azure_superuser](../single-server/how-to-create-users.md).|
|Backup Storage Used|backup_storage_used|Bytes|The amount of backup storage used.|
-|IO percent|io_consumption_percent|Percent|The percentage of IO in use.|
-|Host Memory Percent|memory_percent|Percent|The percentage of memory in use on the server, including memory utilization from both customer workload and Azure MySQL processes|
+|IO percent|io_consumption_percent|Percent|The percentage of IO in use over the selected period. IO percent covers both read and write IOPS.|
+|Host Memory Percent|memory_percent|Percent|The total percentage of memory in use on the server, including memory utilization from both the database workload and other Azure MySQL processes. This metric shows total memory consumption of the underlying host, similar to memory consumption on any virtual machine.|
|Storage Limit|storage_limit|Bytes|The maximum storage for this server.| |Storage Percent|storage_percent|Percent|The percentage of storage used out of the server's maximum.| |Storage Used|storage_used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
-|Total connections|total_connections|Count|The number of total connections to the server|
-|Aborted Connections|aborted_connections|Count|The number of failed attempts to connect to the MySQL, for example, failed connection due to bad credentials.|
-|Queries|queries|Count|The number of queries per second|
+|Total connections|total_connections|Count|The number of client connections to your Azure Database for MySQL Flexible Server. Total connections is the sum of connections from clients using the TCP/IP protocol over the selected period.|
+|Aborted Connections|aborted_connections|Count|Total number of failed attempts to connect to your MySQL server, for example, a failed connection due to bad credentials. For more information on aborted connections, see the MySQL [communication errors documentation](https://dev.mysql.com/doc/refman/5.7/en/communication-errors.html).|
+|Queries|queries|Count|Total number of queries executed per minute on your server, from both your database workload and Azure MySQL processes.|
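Once these metrics are flowing, you can chart them with Azure Monitor or query them from Log Analytics. The following is a hedged sketch, assuming you've routed platform metrics to a Log Analytics workspace via diagnostic settings (the metric name `cpu_percent` matches the table above; the resource provider string is an assumption for flexible server):

```kusto
// Sketch: hourly average host CPU percent over the last day for a
// MySQL flexible server, assuming metrics are exported to Log Analytics.
AzureMetrics
| where ResourceProvider == "MICROSOFT.DBFORMYSQL"
| where MetricName == "cpu_percent"
| where TimeGenerated > ago(1d)
| summarize AvgCpuPercent = avg(Average) by bin(TimeGenerated, 1h), Resource
| order by TimeGenerated asc
```

The same pattern applies to the other metrics in the table (for example, `io_consumption_percent` or `active_connection`) by swapping the `MetricName` filter.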
## Next steps - See [How to set up alerts](./how-to-alert-on-metric.md) for guidance on creating an alert on a metric.
remote-rendering Get Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/get-information.md
There will be at most one error (either `error` or `internal error`) and there w
## Example *result* file
-The following example describes a conversion that successfully generated an arrAsset.
+The following example describes a conversion that successfully generated an arrAsset.
However, since there was a missing texture, the resulting arrAsset may not be as intended. ```JSON [
+ {"conversionId":"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"},
{"warning":"4004","title":"Missing texture","details":{"texture":"buggy_baseColor.png","material":"buggy_col"}}, {"result":"succeeded with warnings"} ] ```
+> [!NOTE]
+> The `conversionId` is an internal ID that doesn't correlate with the ID that was used to create the conversion.
+ ## Information about a converted model: The info file The arrAsset file produced by the conversion service is solely intended for consumption by the rendering service. There may be times, however, when you want to access information about a model without starting a rendering session. To support this workflow, the conversion service places a JSON file beside the arrAsset file in the output container. For example, if a file `buggy.gltf` is converted, the output container will contain a file called `buggy.info.json` beside the converted asset `buggy.arrAsset`. It contains information about the source model, the converted model, and about the conversion itself.
remote-rendering Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/resources/troubleshoot.md
Azure Remote Rendering hooks into the Unity render pipeline to do the frame comp
![Unity render pipeline](./media/troubleshoot-unity-pipeline.png)
+To fix this, make sure the provided _HybridRenderingPipeline_ asset is used:
+![Screenshot of the Unity asset browser and Project Settings dialog. The HybridRenderingPipeline asset is highlighted in the asset browser. An arrow points from the asset to the UniversalRenderPipelineAsset field in project settings.](./../tutorials/unity/view-remote-models/media/hybrid-rendering-pipeline.png)
+
+as described in more detail in the [Unity tutorial to set up the project](./../tutorials/unity/view-remote-models/view-remote-models.md#adjust-the-project-settings).
+ ## Checkerboard pattern is rendered after model loading If the rendered image looks like this:
route-server Quickstart Configure Route Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-portal.md
Previously updated : 09/08/2021 Last updated : 07/19/2022
sentinel Connect Microsoft 365 Defender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-365-defender.md
# Connect data from Microsoft 365 Defender to Microsoft Sentinel + Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all Microsoft 365 Defender incidents and alerts into Microsoft Sentinel, and keeps the incidents synchronized between both portals. Microsoft 365 Defender incidents include all their alerts, entities, and other relevant information, and they group together, and are enriched by, alerts from Microsoft 365 Defender's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, and **Microsoft Defender for Cloud Apps**, as well as alerts from other services such as **Microsoft Purview Data Loss Prevention (DLP)**. The connector also lets you stream **advanced hunting** events from *all* of the above components into Microsoft Sentinel, allowing you to copy those Defender components' advanced hunting queries into Microsoft Sentinel, enrich Sentinel alerts with the Defender components' raw event data to provide additional insights, and store the logs with increased retention in Log Analytics.
For more information about incident integration and advanced hunting event colle
- Your user must have read and write permissions on your Microsoft Sentinel workspace.
+### Prerequisites for Active Directory sync via MDI
+
+- Your tenant must be onboarded to Microsoft Defender for Identity.
+
+- You must have the MDI sensor installed.
+ ## Connect to Microsoft 365 Defender
-1. In Microsoft Sentinel, select **Data connectors**, select **Microsoft 365 Defender (Preview)** from the gallery and select **Open connector page**.
+In Microsoft Sentinel, select **Data connectors**, select **Microsoft 365 Defender (Preview)** from the gallery and select **Open connector page**.
-1. Under **Configuration** in the **Connect incidents & alerts** section, select the **Connect incidents & alerts** button.
+The **Configuration** section has three parts:
-1. To avoid duplication of incidents, it is recommended to mark the check box labeled **Turn off all Microsoft incident creation rules for these products.**
+1. [**Connect incidents and alerts**](#connect-incidents-and-alerts) enables the basic integration between Microsoft 365 Defender and Microsoft Sentinel, synchronizing incidents and their alerts between the two platforms.
- > [!NOTE]
- > When you enable the Microsoft 365 Defender connector, all of the Microsoft 365 Defender components' connectors (the ones mentioned at the beginning of this article) are automatically connected in the background. In order to disconnect one of the components' connectors, you must first disconnect the Microsoft 365 Defender connector.
+1. [**Connect entities**](#connect-entities) enables the integration of on-premises Active Directory user identities into Microsoft Sentinel through Microsoft Defender for Identity.
-1. To query Microsoft 365 Defender incident data, use the following statement in the query window:
- ```kusto
- SecurityIncident
- | where ProviderName == "Microsoft 365 Defender"
- ```
+1. [**Connect events**](#connect-events) enables the collection of raw advanced hunting events from Defender components.
+
+These are explained in greater detail below. See [Microsoft 365 Defender integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md) for more information.
+
+### Connect incidents and alerts
+
+Select the **Connect incidents & alerts** button to connect Microsoft 365 Defender incidents to your Microsoft Sentinel incidents queue.
+
+If you see a check box labeled **Turn off all Microsoft incident creation rules for these products. Recommended**, mark it to avoid duplication of incidents.
+
+> [!NOTE]
+> When you enable the Microsoft 365 Defender connector, all of the Microsoft 365 Defender components' connectors (the ones mentioned at the beginning of this article) are automatically connected in the background. In order to disconnect one of the components' connectors, you must first disconnect the Microsoft 365 Defender connector.
+
+To query Microsoft 365 Defender incident data, use the following statement in the query window:
+
+```kusto
+SecurityIncident
+| where ProviderName == "Microsoft 365 Defender"
+```
+
+### Connect entities
+
+Use Microsoft Defender for Identity to sync user entities from your on-premises Active Directory to Microsoft Sentinel.
+
+Verify that you've satisfied the [prerequisites](#prerequisites-for-active-directory-sync-via-mdi) for syncing on-premises Active Directory users through Microsoft Defender for Identity (MDI).
+
+1. Select the **Go to the UEBA configuration page** link.
+
+1. In the **Entity behavior configuration** page, if you haven't yet enabled UEBA, then at the top of the page, move the toggle to **On**.
+
+1. Mark the **Active Directory (Preview)** check box and select **Apply**.
+
+ :::image type="content" source="media/connect-microsoft-365-defender/ueba-configuration-page.png" alt-text="Screenshot of UEBA configuration page for connecting user entities to Sentinel.":::
+
+### Connect events
1. If you want to collect advanced hunting events from Microsoft Defender for Endpoint or Microsoft Defender for Office 365, the following types of events can be collected from their corresponding advanced hunting tables.
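Once advanced hunting event collection is enabled, the raw events land in Log Analytics tables with the same names as their advanced hunting counterparts. A hedged sketch, assuming the `DeviceEvents` table from Microsoft Defender for Endpoint is among those you've chosen to collect:

```kusto
// Sketch: count Defender for Endpoint device events by action type
// over the last day, assuming the DeviceEvents table is being collected.
DeviceEvents
| where TimeGenerated > ago(1d)
| summarize EventCount = count() by ActionType
| order by EventCount desc
```

Queries written for the Microsoft 365 Defender advanced hunting portal can generally be reused here, since the table schemas are carried over.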
sentinel Enable Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enable-entity-behavior-analytics.md
# Enable User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel -
-> [!IMPORTANT]
->
-> The UEBA and Entity Pages features are now in **General Availability** in ***all*** Microsoft Sentinel geographies and regions.
- [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] ## Prerequisites
To enable or disable this feature (these prerequisites are not required to use t
## How to enable User and Entity Behavior Analytics
-1. From the Microsoft Sentinel navigation menu, select **Entity behavior**.
+1. Go to the **Entity behavior configuration** page. There are three ways to get to this page:
+
+ - Select **Entity behavior** from the Microsoft Sentinel navigation menu, then select **Entity behavior settings** from the top menu bar.
+
+ - Select **Settings** from the Microsoft Sentinel navigation menu, select the **Settings** tab, then under the **Entity behavior analytics** expander, select **Set UEBA**.
-1. From the top menu bar, select **Entity behavior settings**.
-If you haven't yet enabled UEBA, you will be taken to the **Settings** page. Select **Configure UEBA**.
+ - From the Microsoft 365 Defender data connector page, select the **Go to the UEBA configuration page** link.
1. On the **Entity behavior configuration** page, switch the toggle to **On**.
+ :::image type="content" source="media/enable-entity-behavior-analytics/ueba-configuration.png" alt-text="Screenshot of UEBA configuration settings.":::
+
+1. Mark the check boxes next to the Active Directory source types from which you want to synchronize user entities with Microsoft Sentinel.
+
+ - **Active Directory** on-premises (Preview)
+ - **Azure Active Directory**
+
+ To sync user entities from on-premises Active Directory, your Azure tenant must be onboarded to Microsoft Defender for Identity (either standalone or as part of Microsoft 365 Defender) and you must have the MDI sensor installed on your Active Directory domain controller. See [Microsoft Defender for Identity prerequisites](/defender-for-identity/prerequisites) for more information.
+ 1. Mark the check boxes next to the data sources on which you want to enable UEBA. > [!NOTE]
If you haven't yet enabled UEBA, you will be taken to the **Settings** page. Sel
> > Once you have enabled UEBA, you will have the option, when connecting new data sources, to enable them for UEBA directly from the data connector pane if they are UEBA-capable.
-1. Select **Apply**. You will be returned to the **Entity behavior** page.
+1. Select **Apply**. If you accessed this page through the **Entity behavior** page, you will be returned there.
## Next steps
sentinel Identify Threats With Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/identify-threats-with-entity-behavior-analytics.md
# Identify advanced threats with User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel - > [!IMPORTANT] >
-> - The UEBA and Entity Pages features are now in **General Availability** in ***all*** Microsoft Sentinel geographies and regions.
->
> - The **IP address entity** is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
Microsoft Sentinel presents artifacts that help your security analysts get a cle
- across time and frequency horizons (compared to the user's own history). - as compared to peers' behavior. - as compared to the organization's behavior.- :::image type="content" source="media/identify-threats-with-entity-behavior-analytics/context.png" alt-text="Entity context":::
+The user entity information that Microsoft Sentinel uses to build its user profiles comes from your Azure Active Directory (and/or your on-premises Active Directory, now in Preview). When you enable UEBA, it synchronizes your Azure Active Directory with Microsoft Sentinel, storing the information in an internal database visible through the *IdentityInfo* table in Log Analytics.
+
+Now in preview, you can also sync your on-premises Active Directory user entity information, using Microsoft Defender for Identity.
+
+See [Enable User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel](enable-entity-behavior-analytics.md) to learn how to enable UEBA and synchronize user identities.
### Scoring
sentinel Ueba Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ueba-reference.md
# Microsoft Sentinel UEBA reference - This reference article lists the input data sources for the User and Entity Behavior Analytics service in Microsoft Sentinel. It also describes the enrichments that UEBA adds to entities, providing needed context to alerts and incidents. ## UEBA data sources
The following table describes the enrichments featured in the **UsersInsights**
| **Is local admin**<br>*(IsLocalAdmin)* | The account has local administrator privileges. | True, False | | **Is new account**<br>*(IsNewAccount)* | The account was created within the past 30 days. | True, False | | **On premises SID**<br>*(OnPremisesSID)* | The on-premises SID of the user related to the action. | S-1-5-21-1112946627-1321165628-2437342228-1103 |
-|
#### DevicesInsights field
The following table describes the enrichments featured in the **DevicesInsights*
| **Threat intel indicator type**<br>*(ThreatIntelIndicatorType)* | The type of the threat indicator resolved from the IP address used in the action. | Botnet, C2, CryptoMining, Darknet, Ddos, MaliciousUrl, Malware, Phishing, Proxy, PUA, Watchlist | | **User agent**<br>*(UserAgent)* | The user agent used in the action. | Microsoft Azure Graph Client Library 1.0,<br>Swagger-Codegen/1.4.0.0/csharp,<br>EvoSTS | | **User agent family**<br>*(UserAgentFamily)* | The user agent family used in the action. | Chrome, Edge, Firefox |
-|
#### ActivityInsights field
The following tables describe the enrichments featured in the **ActivityInsights
| **Action uncommonly performed among peers**<br>*(ActionUncommonlyPerformedAmongPeers)* | 180 | The action is not commonly performed among user's peers. | True, False | | **First time action performed in tenant**<br>*(FirstTimeActionPerformedInTenant)* | 180 | The action was performed for the first time by anyone in the organization. | True, False | | **Action uncommonly performed in tenant**<br>*(ActionUncommonlyPerformedInTenant)* | 180 | The action is not commonly performed in the organization. | True, False |
-|
##### App used
The following tables describe the enrichments featured in the **ActivityInsights
| **App uncommonly used among peers**<br>*(AppUncommonlyUsedAmongPeers)* | 180 | The app is not commonly used among user's peers. | True, False | | **First time app observed in tenant**<br>*(FirstTimeAppObservedInTenant)* | 180 | The app was observed for the first time in the organization. | True, False | | **App uncommonly used in tenant**<br>*(AppUncommonlyUsedInTenant)* | 180 | The app is not commonly used in the organization. | True, False |
-|
##### Browser used
The following tables describe the enrichments featured in the **ActivityInsights
| **Browser uncommonly used among peers**<br>*(BrowserUncommonlyUsedAmongPeers)* | 30 | The browser is not commonly used among user's peers. | True, False | | **First time browser observed in tenant**<br>*(FirstTimeBrowserObservedInTenant)* | 30 | The browser was observed for the first time in the organization. | True, False | | **Browser uncommonly used in tenant**<br>*(BrowserUncommonlyUsedInTenant)* | 30 | The browser is not commonly used in the organization. | True, False |
-|
##### Country connected from
The following tables describe the enrichments featured in the **ActivityInsights
| **Country uncommonly connected from among peers**<br>*(CountryUncommonlyConnectedFromAmongPeers)* | 90 | The geo location, as resolved from the IP address, is not commonly connected from among user's peers. | True, False | | **First time connection from country observed in tenant**<br>*(FirstTimeConnectionFromCountryObservedInTenant)* | 90 | The country was connected from for the first time by anyone in the organization. | True, False | | **Country uncommonly connected from in tenant**<br>*(CountryUncommonlyConnectedFromInTenant)* | 90 | The geo location, as resolved from the IP address, is not commonly connected from in the organization. | True, False |
-|
##### Device used to connect
The following tables describe the enrichments featured in the **ActivityInsights
| **Device uncommonly used among peers**<br>*(DeviceUncommonlyUsedAmongPeers)* | 180 | The device is not commonly used among user's peers. | True, False | | **First time device observed in tenant**<br>*(FirstTimeDeviceObservedInTenant)* | 30 | The device was observed for the first time in the organization. | True, False | | **Device uncommonly used in tenant**<br>*(DeviceUncommonlyUsedInTenant)* | 180 | The device is not commonly used in the organization. | True, False |
-|
##### Other device-related
The following tables describe the enrichments featured in the **ActivityInsights
| | | | | | **First time user logged on to device**<br>*(FirstTimeUserLoggedOnToDevice)* | 180 | The destination device was connected to for the first time by the user. | True, False | | **Device family uncommonly used in tenant**<br>*(DeviceFamilyUncommonlyUsedInTenant)* | 30 | The device family is not commonly used in the organization. | True, False |
-|
##### Internet Service Provider used to connect
The following tables describe the enrichments featured in the **ActivityInsights
| **ISP uncommonly used among peers**<br>*(ISPUncommonlyUsedAmongPeers)* | 30 | The ISP is not commonly used among user's peers. | True, False | | **First time connection via ISP in tenant**<br>*(FirstTimeConnectionViaISPInTenant)* | 30 | The ISP was observed for the first time in the organization. | True, False | | **ISP uncommonly used in tenant**<br>*(ISPUncommonlyUsedInTenant)* | 30 | The ISP is not commonly used in the organization. | True, False |
-|
##### Resource accessed
The following tables describe the enrichments featured in the **ActivityInsights
| **Resource uncommonly accessed among peers**<br>*(ResourceUncommonlyAccessedAmongPeers)* | 180 | The resource is not commonly accessed among user's peers. | True, False | | **First time resource accessed in tenant**<br>*(FirstTimeResourceAccessedInTenant)* | 180 | The resource was accessed for the first time by anyone in the organization. | True, False | | **Resource uncommonly accessed in tenant**<br>*(ResourceUncommonlyAccessedInTenant)* | 180 | The resource is not commonly accessed in the organization. | True, False |
-|
##### Miscellaneous
The following tables describe the enrichments featured in the **ActivityInsights
| **Unusual number of devices added**<br>*(UnusualNumberOfDevicesAdded)* | 5 | A user added an unusual number of devices. | True, False | | **Unusual number of devices deleted**<br>*(UnusualNumberOfDevicesDeleted)* | 5 | A user deleted an unusual number of devices. | True, False | | **Unusual number of users added to group**<br>*(UnusualNumberOfUsersAddedToGroup)* | 5 | A user added an unusual number of users to a group. | True, False |
-|
- ### IdentityInfo table
The following table describes the user identity data included in the **IdentityI
| **AccountUPN** | string | The user principal name of the user account. | | **AdditionalMailAddresses** | dynamic | The additional email addresses of the user. | | **AssignedRoles** | dynamic | The Azure AD roles the user account is assigned to. |
+| **BlastRadius** | string | A calculation based on the position of the user in the org tree and the user's Azure Active Directory roles and permissions. <br>Possible values: *Low, Medium, High* |
+| **ChangeSource** | string | The source of the latest change to the entity. <br>Possible values:<br>- *AzureActiveDirectory*<br>- *ActiveDirectory*<br>- *UEBA*<br>- *Watchlist*<br>- *FullSync* |
| **City** | string | The city of the user account. | | **Country** | string | The country of the user account. | | **DeletedDateTime** | datetime | The date and time the user was deleted. |
The following table describes the user identity data included in the **IdentityI
| **Manager** | string | The manager alias of the user account. | | **OnPremisesDistinguishedName** | string | The Azure AD distinguished name (DN). A distinguished name is a sequence of relative distinguished names (RDN), connected by commas. | | **Phone** | string | The phone number of the user account. |
-| **SourceSystem** | string | The system where the user data originated. |
+| **SourceSystem** | string | The system where the user is managed. <br>Possible values:<br>- *AzureActiveDirectory*<br>- *ActiveDirectory*<br>- *Hybrid* |
| **State** | string | The geographical state of the user account. | | **StreetAddress** | string | The office street address of the user account. | | **Surname** | string | The surname of the user account. | | **TenantId** | string | The tenant ID of the user. | | **TimeGenerated** | datetime | The time when the event was generated (UTC). | | **Type** | string | The name of the table. |
-| **UserState** | string | The current state of the user account in Azure AD (Active/Disabled/Dormant/Lockout). |
+| **UserAccountControl** | dynamic | Security attributes of the user account in the AD domain. <br> Possible values (may contain more than one):<br>- *AccountDisabled*<br>- *HomedirRequired*<br>- *AccountLocked*<br>- *PasswordNotRequired*<br>- *CannotChangePassword*<br>- *EncryptedTextPasswordAllowed*<br>- *TemporaryDuplicateAccount*<br>- *NormalAccount*<br>- *InterdomainTrustAccount*<br>- *WorkstationTrustAccount*<br>- *ServerTrustAccount*<br>- *PasswordNeverExpires*<br>- *MnsLogonAccount*<br>- *SmartcardRequired*<br>- *TrustedForDelegation*<br>- *DelegationNotAllowed*<br>- *UseDesKeyOnly*<br>- *DontRequirePreauthentication*<br>- *PasswordExpired*<br>- *TrustedToAuthenticationForDelegation*<br>- *PartialSecretsAccount*<br>- *UseAesKeys* |
+| **UserState** | string | The current state of the user account in Azure AD.<br>Possible values:<br>- *Active*<br>- *Disabled*<br>- *Dormant*<br>- *Lockout* |
| **UserStateChangedOn** | datetime | The date of the last time the account state was changed (UTC). | | **UserType** | string | The user type. | - ## Next steps This document described the Microsoft Sentinel entity behavior analytics table schema. - Learn more about [entity behavior analytics](identify-threats-with-entity-behavior-analytics.md).
+- [Enable UEBA in Microsoft Sentinel](enable-entity-behavior-analytics.md).
- [Put UEBA to use](investigate-with-ueba.md) in your investigations.
service-fabric Service Fabric Reliable Actors Enumerate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-enumerate.md
Last updated 07/11/2022
# Enumerate Service Fabric Reliable Actors
-The Reliable Actors service allows a client to enumerate metadata about the actors that the service is hosting. Because the actor service is a partitioned stateful service, enumeration is performed per partition. Because each partition might contain many actors, the enumeration is returned as a set of paged results. The pages are looped over until all pages are read. The following example shows how to create a list of all active actors in one partition of an actor service:
+The Reliable Actors service allows a client to enumerate metadata about the actors that the service is hosting. Because the actor service is a partitioned stateful service, enumeration is performed per partition. Because each partition might contain many actors, the enumeration is returned as a set of paged results. The pages are looped over until all pages are read. The following example shows how to create a list of all [active](service-fabric-reliable-actors-lifecycle.md) actors in one partition of an actor service:
```csharp
IActorService actorServiceProxy = ActorServiceProxy.Create(
    new Uri("fabric:/MyApp/MyService"), partitionKey);

ContinuationToken continuationToken = null;
List<ActorInformation> activeActors = new List<ActorInformation>();

do
{
    // Each call returns one page of actor metadata for this partition.
    PagedResult<ActorInformation> page = await actorServiceProxy.GetActorsAsync(continuationToken, cancellationToken);
    activeActors.AddRange(page.Items.Where(actor => actor.IsActive));
    continuationToken = page.ContinuationToken;
}
while (continuationToken != null);
```
service-health Service Health Portal Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/service-health-portal-update.md
Title: Azure Service Health portal update
-description: We're' updating the Azure Service Health portal experience to let users engage with service events and manage actions to maintain the business continuity of impacted applications.
+description: We're updating the Azure Service Health portal experience to let users engage with service events and manage actions to maintain the business continuity of impacted applications.
Last updated 06/10/2022 # Azure Service Health portal update
-We're' updating the Azure Service Health portal experience. The new experience lets users engage with service events and manage actions to maintain the business continuity of impacted applications.
+We're updating the Azure Service Health portal experience. The new experience lets users engage with service events and manage actions to maintain the business continuity of impacted applications.
## Highlights of the new experience
You can use the scope column in the details view to filter on scope (Tenant vs S
You can now see events at both Tenant and Subscription level scope in the Health History blade if you have Tenant level administrator access. The scope column in the details view indicates if the incident is a Tenant or Subscription level incident. You can also filter on scope (Tenant vs Subscription).
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
Data stored in the cloud grows at an exponential pace. To manage costs for your
Azure storage capacity limits are set at the account level, rather than according to access tier. You can choose to maximize your capacity usage in one tier, or to distribute capacity across two or more tiers.
+> [!NOTE]
+> Setting the access tier is only allowed on block blobs. Access tiers aren't supported for append blobs or page blobs.
+ ## Online access tiers When your data is stored in an online access tier (either Hot or Cool), users can access it immediately. The Hot tier is the best choice for data that is in active use, while the Cool tier is ideal for data that is accessed less frequently, but that still must be available for reading and writing.
storage Storage Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-introduction.md
# What is Azure Files?
-Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) or [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System). Azure Files file shares can be mounted concurrently by cloud or on-premises deployments. SMB Azure file shares are accessible from Windows, Linux, and macOS clients. NFS Azure Files shares are accessible from Linux or macOS clients. Additionally, SMB Azure file shares can be cached on Windows Servers with [Azure File Sync](../file-sync/file-sync-introduction.md) for fast access near where the data is being used.
+Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview), [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System), and [Azure Files REST API](/rest/api/storageservices/file-service-rest-api). Azure file shares can be mounted concurrently by cloud or on-premises deployments. SMB Azure file shares are accessible from Windows, Linux, and macOS clients. NFS Azure file shares are accessible from Linux or macOS clients. Additionally, SMB Azure file shares can be cached on Windows servers with [Azure File Sync](../file-sync/file-sync-introduction.md) for fast access near where the data is being used.
Here are some videos on the common use cases of Azure Files: * [Replace your file server with a serverless Azure file share](https://sec.ch9.ms/ch9/3358/0addac01-3606-4e30-ad7b-f195f3ab3358/ITOpsTalkAzureFiles_high.mp4)
Here are some videos on the common use cases of Azure Files:
Azure file shares can be used to: * **Replace or supplement on-premises file servers**:
- Azure Files can be used to completely replace or supplement traditional on-premises file servers or NAS devices. Popular operating systems such as Windows, macOS, and Linux can directly mount Azure file shares wherever they are in the world. SMB Azure file shares can also be replicated with Azure File Sync to Windows Servers, either on-premises or in the cloud, for performance and distributed caching of the data where it's being used. With the recent release of [Azure Files AD Authentication](storage-files-active-directory-overview.md), SMB Azure file shares can continue to work with AD hosted on-premises for access control.
+ Azure Files can be used to replace or supplement traditional on-premises file servers or network-attached storage (NAS) devices. Popular operating systems such as Windows, macOS, and Linux can directly mount Azure file shares wherever they are in the world. SMB Azure file shares can also be replicated with Azure File Sync to Windows servers, either on-premises or in the cloud, for performance and distributed caching of the data. With [Azure Files AD Authentication](storage-files-active-directory-overview.md), SMB Azure file shares can work with Active Directory Domain Services (AD DS) hosted on-premises for access control.
* **"Lift and shift" applications**: Azure Files makes it easy to "lift and shift" applications to the cloud that expect a file share to store file application or user data. Azure Files enables both the "classic" lift and shift scenario, where both the application and its data are moved to Azure, and the "hybrid" lift and shift scenario, where the application data is moved to Azure Files, and the application continues to run on-premises. * **Simplify cloud development**:
- Azure Files can also be used in numerous ways to simplify new cloud development projects. For example:
+ Azure Files can also be used to simplify new cloud development projects. For example:
* **Shared application settings**:
- A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many application instances. Application instances can load their configuration through the File REST API, and humans can access them as needed by mounting the SMB share locally.
+ A common pattern for distributed applications is to have configuration files in a centralized location where they can be accessed from many application instances. Application instances can load their configuration through the [Azure Files REST API](/rest/api/storageservices/file-service-rest-api), and humans can access them by mounting the share locally.
* **Diagnostic share**: An Azure file share is a convenient place for cloud applications to write their logs, metrics, and crash dumps. Logs can be written by the application instances via the File REST API, and developers can access them by mounting the file share on their local machine. This enables great flexibility, as developers can embrace cloud development without having to abandon any existing tooling they know and love.
Azure file shares can be used to:
Azure file shares can be used as persistent volumes for stateful containers. Containers deliver "build once, run anywhere" capabilities that enable developers to accelerate innovation. For the containers that access raw data at every start, a shared file system is required to allow these containers to access the file system no matter which instance they run on. ## Key benefits
-* **Shared access**. Azure file shares support the industry standard SMB and NFS protocols, meaning you can seamlessly replace your on-premises file shares with Azure file shares without worrying about application compatibility. Being able to share a file system across multiple machines, applications/instances is a significant advantage with Azure Files for applications that need shareability.
+* **Easy to use**. When an Azure file share is mounted on your computer, you don't need to do anything special to access the data: just navigate to the path where the file share is mounted and open/modify a file.
+* **Shared access**. Azure file shares support the industry standard SMB and NFS protocols, meaning you can seamlessly replace your on-premises file shares with Azure file shares without worrying about application compatibility. Being able to share a file system across multiple machines, applications, and application instances is a significant advantage for applications that need shareability.
* **Fully managed**. Azure file shares can be created without the need to manage hardware or an OS. This means you don't have to deal with patching the server OS with critical security upgrades or replacing faulty hard disks.
-* **Scripting and tooling**. PowerShell cmdlets and Azure CLI can be used to create, mount, and manage Azure file shares as part of the administration of Azure applications. You can create and manage Azure file shares using Azure portal and Azure Storage Explorer.
+* **Scripting and tooling**. PowerShell cmdlets and Azure CLI can be used to create, mount, and manage Azure file shares as part of the administration of Azure applications. You can create and manage Azure file shares using Azure portal and Azure Storage Explorer.
* **Resiliency**. Azure Files has been built from the ground up to be always available. Replacing on-premises file shares with Azure Files means you no longer have to wake up to deal with local power outages or network issues.
-* **Familiar programmability**. Applications running in Azure can access data in the share via file [system I/O APIs](/dotnet/api/system.io.file). Developers can therefore leverage their existing code and skills to migrate existing applications. In addition to System IO APIs, you can use [Azure Storage Client Libraries](/previous-versions/azure/dn261237(v=azure.100)) or the [Azure Storage REST API](/rest/api/storageservices/file-service-rest-api).
+* **Familiar programmability**. Applications running in Azure can access data in the share via file [system I/O APIs](/dotnet/api/system.io.file). Developers can therefore leverage their existing code and skills to migrate existing applications. In addition to System IO APIs, you can use [Azure Storage Client Libraries](/previous-versions/azure/dn261237(v=azure.100)) or the [Azure Files REST API](/rest/api/storageservices/file-service-rest-api).
## Case studies
-* Organizations across the world are leveraging Azure Files and Azure File Sync to optimize file access and storage. [Checkout their case studies here](azure-files-case-study.md).
+* Organizations across the world are leveraging Azure Files and Azure File Sync to optimize file access and storage. [Check out their case studies here](azure-files-case-study.md).
## Next Steps * [Plan for an Azure Files deployment](storage-files-planning.md)
storage Storage How To Use Files Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-windows.md
description: Learn to use Azure file shares with Windows and Windows Server. Use
Previously updated : 05/31/2022 Last updated : 07/19/2022
In order to use an Azure file share via the public endpoint outside of the Azure
Ensure port 445 is open: The SMB protocol requires TCP port 445 to be open; connections will fail if port 445 is blocked. You can check if your firewall is blocking port 445 with the `Test-NetConnection` cmdlet. To learn about ways to work around a blocked 445 port, see the [Cause 1: Port 445 is blocked](storage-troubleshoot-windows-file-connection-problems.md#cause-1-port-445-is-blocked) section of our Windows troubleshooting guide. ## Using an Azure file share with Windows
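Outside PowerShell, the port 445 reachability check described above can be sketched cross-platform in Python (a minimal sketch; the endpoint in the comment is a placeholder, and `Test-NetConnection` remains the recommended tool on Windows):

```python
import socket

def port_open(host: str, port: int = 445, timeout: float = 3.0) -> bool:
    """Rough stand-in for Test-NetConnection: True if a TCP connection
    to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder storage account endpoint):
# port_open("storageaccountname.file.core.windows.net")
```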
-To use an Azure file share with Windows, you must either mount it, which means assigning it a drive letter or mount point path, or access it via its [UNC path](/windows/win32/fileio/naming-a-file).
+To use an Azure file share with Windows, you must either mount it, which means assigning it a drive letter or mount point path, or [access it via its UNC path](#access-an-azure-file-share-via-its-unc-path).
This article uses the storage account key to access the file share. A storage account key is an administrator key for a storage account, including administrator permissions to all files and folders within the file share you're accessing, and for all file shares and other storage resources (blobs, queues, tables, etc.) contained within your storage account. If this is not sufficient for your workload, [Azure File Sync](../file-sync/file-sync-planning.md) may be used, or you may use [identity-based authentication over SMB](storage-files-active-directory-overview.md).
-A common pattern for lifting and shifting line-of-business (LOB) applications that expect an SMB file share to Azure is to use an Azure file share as an alternative for running a dedicated Windows file server in an Azure VM. One important consideration for successfully migrating a line-of-business application to use an Azure file share is that many line-of-business applications run under the context of a dedicated service account with limited system permissions rather than the VM's administrative account. Therefore, you must ensure that you mount/save the credentials for the Azure file share from the context of the service account rather than your administrative account.
+A common pattern for lifting and shifting line-of-business (LOB) applications that expect an SMB file share to Azure is to use an Azure file share as an alternative for running a dedicated Windows file server in an Azure VM. One important consideration for successfully migrating an LOB application to use an Azure file share is that many applications run under the context of a dedicated service account with limited system permissions rather than the VM's administrative account. Therefore, you must ensure that you mount/save the credentials for the Azure file share from the context of the service account rather than your administrative account.
### Mount the Azure file share
You have now mounted your Azure file share.
1. When you're ready to dismount the Azure file share, right-click on the entry for the share under the **Network locations** in File Explorer and select **Disconnect**.
+### Access an Azure file share via its UNC path
+You don't need to mount the Azure file share to a particular drive letter to use it. You can directly access your Azure file share using the [UNC path](/windows/win32/fileio/naming-a-file) by entering the following into File Explorer. Be sure to replace *storageaccountname* with your storage account name and *myfileshare* with your file share name:
+
+\\storageaccountname.file.core.windows.net\myfileshare
+
+You'll be asked to sign in with your network credentials. Sign in with the Azure subscription under which you've created the storage account and file share.
+
+For the Azure Government cloud, simply change the server name to:
+
+\\storageaccountname.file.core.usgovcloudapi.net\myfileshare
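As a small illustrative sketch (the helper name is hypothetical), the UNC path follows the same pattern in both clouds, differing only in the endpoint suffix:

```python
def unc_path(account: str, share: str,
             suffix: str = "file.core.windows.net") -> str:
    """Build the UNC path for an Azure file share.
    For Azure Government, pass suffix='file.core.usgovcloudapi.net'."""
    return rf"\\{account}.{suffix}\{share}"

# unc_path("storageaccountname", "myfileshare")
# -> \\storageaccountname.file.core.windows.net\myfileshare
```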
+ ### Accessing share snapshots from Windows If you've taken a share snapshot, either manually or automatically through a script or service like Azure Backup, you can view previous versions of a share, a directory, or a particular file from a file share on Windows. You can take a share snapshot using the [Azure portal](storage-files-quick-create-use-windows.md#create-a-share-snapshot), [Azure PowerShell](/powershell/module/az.storage/new-azrmstorageshare?view=azps-8.0.0), or [Azure CLI](/cli/azure/storage/share?view=azure-cli-latest#az-storage-share-snapshot).
synapse-analytics Apache Spark Secure Credentials With Tokenlibrary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md
Synapse provides an integrated linked services experience when connecting to Azu
When the linked service authentication method is set to **Account Key**, the linked service will authenticate using the provided storage account key, request a SAS key, and automatically apply it to the storage request using the **LinkedServiceBasedSASProvider**.
+Synapse allows users to set the linked service for a particular storage account. This makes it possible to read/write data from **multiple storage accounts** in a single Spark application/query. Once we set **spark.storage.synapse.{source_full_storage_account_name}.linkedServiceName** for each storage account that will be used, Synapse figures out which linked service to use for a particular read/write operation. However, if your Spark job only deals with a single storage account, you can simply omit the storage account name and use **spark.storage.synapse.linkedServiceName**.
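As a plain-Python sketch of the key naming described above (no Spark session needed; the helper function is hypothetical), the per-account key is just the base key with the full storage account name spliced in:

```python
def linked_service_key(account: str = "") -> str:
    """Config key mapping a storage account to its linked service.
    With no account name, fall back to the single-account form."""
    if account:
        return f"spark.storage.synapse.{account}.linkedServiceName"
    return "spark.storage.synapse.linkedServiceName"

# linked_service_key("teststorage.dfs.core.windows.net")
```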
+ ::: zone pivot = "programming-language-scala" ```scala val sc = spark.sparkContext
-spark.conf.set("spark.storage.synapse.linkedServiceName", "<LINKED SERVICE NAME>")
-spark.conf.set("fs.azure.account.auth.type", "SAS")
-spark.conf.set("fs.azure.sas.token.provider.type", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedSASProvider")
+val source_full_storage_account_name = "teststorage.dfs.core.windows.net"
+spark.conf.set(s"spark.storage.synapse.${source_full_storage_account_name}.linkedServiceName", "<LINKED SERVICE NAME>")
+spark.conf.set(s"fs.azure.account.auth.type.${source_full_storage_account_name}", "SAS")
+spark.conf.set(s"fs.azure.sas.token.provider.type.${source_full_storage_account_name}", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedSASProvider")
val df = spark.read.csv("abfss://<CONTAINER>@<ACCOUNT>.dfs.core.windows.net/<FILE PATH>")
display(df.limit(10))
```python %%pyspark # Python code
-spark.conf.set("spark.storage.synapse.linkedServiceName", "<lINKED SERVICE NAME>")
-spark.conf.set("fs.azure.account.auth.type", "SAS")
-spark.conf.set("fs.azure.sas.token.provider.type", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedSASProvider")
+source_full_storage_account_name = "teststorage.dfs.core.windows.net"
+spark.conf.set(f"spark.storage.synapse.{source_full_storage_account_name}.linkedServiceName", "<LINKED SERVICE NAME>")
+spark.conf.set(f"fs.azure.account.auth.type.{source_full_storage_account_name}", "SAS")
+spark.conf.set(f"fs.azure.sas.token.provider.type.{source_full_storage_account_name}", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedSASProvider")
df = spark.read.csv('abfss://<CONTAINER>@<ACCOUNT>.dfs.core.windows.net/<DIRECTORY PATH>')
When the linked service authentication method is set to **Managed Identity** or
```scala val sc = spark.sparkContext
-spark.conf.set("spark.storage.synapse.linkedServiceName", "<LINKED SERVICE NAME>")
-spark.conf.set("fs.azure.account.oauth.provider.type", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedTokenProvider")
+val source_full_storage_account_name = "teststorage.dfs.core.windows.net"
+spark.conf.set(s"spark.storage.synapse.${source_full_storage_account_name}.linkedServiceName", "<LINKED SERVICE NAME>")
+spark.conf.set(s"fs.azure.account.oauth.provider.type.${source_full_storage_account_name}", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedTokenProvider")
val df = spark.read.csv("abfss://<CONTAINER>@<ACCOUNT>.dfs.core.windows.net/<FILE PATH>") display(df.limit(10))
display(df.limit(10))
```python %%pyspark # Python code
-spark.conf.set("spark.storage.synapse.linkedServiceName", "<lINKED SERVICE NAME>")
-spark.conf.set("fs.azure.account.oauth.provider.type", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedTokenProvider")
+source_full_storage_account_name = "teststorage.dfs.core.windows.net"
+spark.conf.set(f"spark.storage.synapse.{source_full_storage_account_name}.linkedServiceName", "<LINKED SERVICE NAME>")
+spark.conf.set(f"fs.azure.account.oauth.provider.type.{source_full_storage_account_name}", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedTokenProvider")
df = spark.read.csv('abfss://<CONTAINER>@<ACCOUNT>.dfs.core.windows.net/<DIRECTORY PATH>')
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
Previously updated : 07/12/2022 Last updated : 07/19/2022
The following table lists the runtime name, Apache Spark version, and release da
## Runtime release stages For the complete runtime for Apache Spark lifecycle and support policies, refer to [Synapse runtime for Apache Spark lifecycle and supportability](./runtime-for-apache-spark-lifecycle-and-supportability.md).+
+## Runtime patching
+
+Azure Synapse runtime for Apache Spark patches are rolled out monthly and contain bug, feature, and security fixes for the Apache Spark core engine, language environments, connectors, and libraries.
+
+The patch policy differs based on the [runtime lifecycle stage](./runtime-for-apache-spark-lifecycle-and-supportability.md):
+1. Generally Available (GA) runtimes receive no major version upgrades (for example, 3.x -> 4.x). Minor versions (3.x -> 3.y) will be upgraded as long as there are no deprecation or regression impacts.
+2. Preview runtimes receive no major version upgrades unless strictly necessary. Minor versions (3.x -> 3.y) will be upgraded to add the latest features to a runtime.
+3. Long Term Support (LTS) runtimes will be patched with security fixes only.
+4. End of life announced (EOLA) runtimes will not receive bug or feature fixes. Security fixes will be backported based on risk assessment.
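The four rules above can be condensed into a small lookup (a sketch only; the stage and fix-type names are informal labels taken from this article, not an API):

```python
# Fix categories each runtime stage receives, per the patch policy above.
PATCH_POLICY = {
    "GA": {"bug", "feature", "security"},      # minor version upgrades only
    "Preview": {"bug", "feature", "security"},
    "LTS": {"security"},
    "EOLA": {"security"},                      # backported by risk assessment
}

def receives_fix(stage: str, fix_type: str) -> bool:
    """True if a runtime in the given stage receives this kind of fix."""
    return fix_type in PATCH_POLICY.get(stage, set())
```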
synapse-analytics Runtime For Apache Spark Lifecycle And Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/runtime-for-apache-spark-lifecycle-and-supportability.md
Previously updated : 07/12/2022 Last updated : 07/19/2022
Apache Spark pools in Azure Synapse use runtimes to tie together essential compo
## Release cadence
-The Apache Spark project releases minor versions about __every 6 months__. Once released, the Azure Synapse team aims to provide a __preview Runtime within 90 days__.
+The Apache Spark project usually releases minor versions about __every 6 months__. Once released, the Azure Synapse team aims to provide a __preview runtime within approximately 90 days__, if possible.
## Runtime lifecycle
-The following diagram captures expected lifecycle paths for a Synapse runtime for Apache Spark.
+The following chart captures a typical lifecycle path for a Synapse runtime for Apache Spark.
![Chart of the typical lifecycle for a Synapse runtime for Apache Spark](./media/runtime-for-apache-spark-lifecycle/runtime-for-apache-spark-lifecycle.png)
-| Runtime release stage | Expected Lifecycle* | Notes |
+| Runtime release stage | Typical Lifecycle* | Notes |
| -- | -- | -- |
-| Preview | 3 months* | Should be used to evaluate new features and validation of workload migration to newer versions. <br/> Must not be used for production workloads. <br/> A Preview runtime may not be elected to move into a GA stage at Microsoft discretion; moving directly to EOLA stage. |
-| Generally Available (GA) | 12 months* | Generally available (GA) runtimes are open to all customers and are ready for production use. <br/> A GA runtime may not be elected to move into an LTS stage at Microsoft discretion. |
-| Long Term Support (LTS) | 12 months* | Long term support (LTS) runtimes are open to all customers and are ready for production use, yet customers are encouraged to expedite validation and workload migration to latest GA runtimes. |
-| End of Life announced (EOLA) | 12 months* for GA or LTS runtimes.<br/>1 month* for Preview runtimes. | At the end of a lifecycle period, given a runtime is chosen to retire; an End-of-Life announcement will be made to all customers per [Azure retirements policy](https://docs.microsoft.com/lifecycle/faq/azure). This additional period serves as the exit ramp for customers to migrate workloads to a GA runtime. |
-| End of Life (EOL) | - | At this stage, the Runtime is retired and no longer supported. |
+| Preview | 3 months* | Microsoft Azure Preview terms apply. See here for details: [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/?cdn=disable) |
+| Generally Available (GA) | 12 months* | Generally available (GA) runtimes are open to all eligible customers and are ready for production use. <br/> A GA runtime may not be elected to move into an LTS stage at Microsoft discretion. |
+| Long Term Support (LTS) | 12 months* | Long term support (LTS) runtimes are open to all eligible customers and are ready for production use, but customers are encouraged to expedite validation and workload migration to latest GA runtimes. |
+| End of Life announced (EOLA) | 12 months* for GA or LTS runtimes.<br/>1 month* for Preview runtimes. | Prior to the end of a given runtime's lifecycle, we aim to provide 12 months' notice to customers as an exit ramp to migrate workloads to a GA runtime. |
+| End of Life (EOL) | - | At this stage, the runtime is retired and no longer supported. |
-\* *Expected duration of a runtime in the referred stage. Provided as an example for a given Runtime. Stage durations are subject to change at Microsoft discretion, as noted throughout this document.*
+\* *Expected duration of a runtime in each stage. These timelines are provided as an example for a given runtime, and may vary depending on various factors. Lifecycle timelines are subject to change at Microsoft discretion.*
> [!IMPORTANT] >
-> * The expected timelines are provided as best effort based on current Apache Spark releases. If the Apache Spark project changes lifecycle of a specific version affecting a Synapse runtime, changes to the stage dates will be noted on the [release notes](./apache-spark-version-support.md).
-> * Both GA and LTS runtimes may be moved into EOL stage faster based on outstanding security risks and usage rates criteria at Microsoft discretion. Proper notification will be performed based on current Azure service policies, please refer to [Lifecycle FAQ - Microsoft Azure](https://docs.microsoft.com/lifecycle/faq/azure) for information.
+> * The above timelines are provided as examples based on current Apache Spark releases. If the Apache Spark project changes the lifecycle of a specific version affecting a Synapse runtime, changes to the stage dates will be noted on the [release notes](./apache-spark-version-support.md).
+> * Both GA and LTS runtimes may be moved into EOL stage faster based on outstanding security risks and usage rates criteria at Microsoft discretion.
+> * Please refer to [Lifecycle FAQ - Microsoft Azure](https://docs.microsoft.com/lifecycle/faq/azure) for information about Azure lifecycle policies.
> ## Release stages and support ### Preview runtimes
-Azure Synapse Analytics provides previews to give you a chance to evaluate and share feedback on features before they become generally available (GA). While a runtime is available in preview, new dependencies and component versions may be introduced. __Support SLAs are not applicable for preview runtimes, therefore no production grade workloads should be considered on a Preview runtime.__
+Azure Synapse Analytics provides previews to give you a chance to evaluate and share feedback on features before they become generally available (GA).
-At the end of the Preview lifecycle for the runtime, Microsoft will assess if the runtime will move into a Generally Availability (GA) based on customer usage, security and stability criteria; as described in the next section.
+At the end of the Preview lifecycle for the runtime, Microsoft will assess whether the runtime will move into General Availability (GA) based on customer usage, security, and stability criteria.
-If not eligible for GA stage, the Preview runtime will move into the retirement cycle composed by the end of life announcement (EOLA), a 1 month period, then moving into the EOL stage.
+If not eligible for GA stage, the Preview runtime will move into the retirement cycle.
### Generally available runtimes
-Generally available (GA) runtimes are open to all customers and __are ready for production use__. Once a runtime is generally available, only security fixes will be backported. In addition, new components or features will be introduced if they do not change underlying dependencies or component versions.
+Once a runtime is Generally Available, only security fixes will be backported. In addition, new components or features will be introduced if they do not change underlying dependencies or component versions.
-At the end of the GA lifecycle for the runtime, Microsoft will assess if the runtime will have an extended lifecycle (LTS) based on customer usage, security and stability criteria; as described in the next section.
+At the end of the GA lifecycle for the runtime, Microsoft will assess whether the runtime will have an extended lifecycle (LTS) based on customer usage, security, and stability criteria.
-If not eligible for LTS stage, the GA runtime will move into the retirement cycle composed by the end of life announcement (EOLA), a 12 month period, then moving into the EOL stage.
+If not eligible for LTS stage, the GA runtime will move into the retirement cycle.
### Long term support runtimes
-Long term support (LTS) runtimes are open to all customers and are ready for production use, yet customers are encouraged to expedite validation and migration of code base and workloads to the latest GA runtimes. Customers should preferably not onboard new workloads using an LTS runtime. Security fixes and stability improvements may be backported. Yet, no new components or features will be introduced into the runtime at this stage.
+For runtimes covered by Long term support (LTS), customers are encouraged to expedite validation and migration of code bases and workloads to the latest GA runtimes. We recommend that customers don't onboard new workloads using an LTS runtime. Security fixes and stability improvements may be backported, but no new components or features will be introduced into the runtime at this stage.
### End of life announcement
-At the end of the runtime lifecycle at any stage, an end of life announcement (EOLA) is performed. Proper notification will be performed based on current Azure service policies, please refer to [Lifecycle FAQ - Microsoft Azure](https://docs.microsoft.com/lifecycle/faq/azure) for information.
+Prior to the end of the runtime lifecycle at any stage, an end of life announcement (EOLA) is performed.
-Support SLAs are applicable for EOLA runtimes yet all customers must migrate to a GA stage runtime. During the retirement period, no security fixes and stability improvements will be backported.
+Support SLAs remain applicable for EOL-announced runtimes, but all customers must migrate to a GA-stage runtime no later than the EOL date.
-Existing pools will work as expected, yet __no new Synapse Spark pools of such a version can be created, as the runtime version will not be listed on Azure Synapse Studio, Synapse API and Azure Portal.__
+Existing pools will work as expected, but __no new Synapse Spark pools of such a version can be created, as the runtime version will not be listed on Azure Synapse Studio, Synapse API, or Azure Portal.__
-Based on outstanding security issues and runtime usage, Microsoft reserves its right to expedite moving a runtime into the final EOL stage.
+Based on outstanding security issues, runtime usage, or other factors, Microsoft may, at its discretion, expedite moving a runtime into the final EOL stage at any time.
### End of life and retirement
-After the period of end of life announcement (EOLA), runtimes are considered retired.
+As of the applicable EOL date, runtimes are considered retired.
* Existing Spark pool definitions and metadata will remain in the workspace for a defined period, yet __no pipelines, jobs, or notebooks will be able to execute__.
-* For runtimes coming from GA or LTS stages, Spark Pools definitions will be deleted in 60 days.
-* For runtimes coming from a Preview stage, Spark Pools definitions will be deleted in 15 days.
+* Spark pool definitions will be deleted from the Synapse workspace within 90 days of the applicable EOL date.
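The retirement arithmetic above can be sketched in a few lines; this is an illustrative calculation only (the function name and constant are mine, derived from the 90-day window stated above):

```python
from datetime import date, timedelta

# Per the lifecycle text above: Spark pool definitions are deleted
# within 90 days of the applicable EOL date.
POOL_DELETION_GRACE_DAYS = 90

def pool_deletion_date(eol_date: date) -> date:
    """Return the latest date by which a retired runtime's Spark pool
    definitions are removed from the workspace."""
    return eol_date + timedelta(days=POOL_DELETION_GRACE_DAYS)
```

For example, a runtime with an EOL date of 2024-01-31 would have its pool definitions removed by 2024-04-30.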
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Some general system constraints might affect your workload:
| Property | Limitation |
|---|---|
-| Maximum number of Azure Synapse workspaces per subscription | [See limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#synapse-workspace-limits). |
+| Maximum number of Azure Synapse workspaces per subscription | [See limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-synapse-limits-for-workspaces). |
| Maximum number of databases per serverless pool | 20 (not including databases synchronized from Apache Spark pool). |
| Maximum number of databases synchronized from Apache Spark pool | Not limited. |
| Maximum number of database objects per database | The sum of the number of all objects in a database can't exceed 2,147,483,647. See [Limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects). |
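A capacity-planning check against the documented limits can be sketched as follows; the constants come from the table above, while the function name and shape are illustrative:

```python
# Limits from the table above: 20 databases per serverless pool, and at most
# 2,147,483,647 objects per database (the 32-bit signed integer maximum).
MAX_DATABASES_PER_POOL = 20
MAX_OBJECTS_PER_DATABASE = 2_147_483_647  # 2**31 - 1

def within_limits(database_count: int, objects_per_database: list[int]) -> bool:
    """Check a planned design against the published serverless pool limits.

    objects_per_database holds the total object count for each database.
    """
    return (database_count <= MAX_DATABASES_PER_POOL
            and all(n <= MAX_OBJECTS_PER_DATABASE for n in objects_per_database))
```

Databases synchronized from an Apache Spark pool are excluded from the 20-database count, so they wouldn't be passed into `database_count` here.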
virtual-machines Disks Shared Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared-enable.md
description: Configure an Azure managed disk with shared disks so that you can s
Previously updated : 06/09/2022 Last updated : 07/19/2022
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared.md
description: Learn about sharing Azure managed disks across multiple Linux VMs.
Previously updated : 06/09/2022 Last updated : 07/19/2022
Shared disks support several operating systems. See the [Windows](#windows) or [
When you share a disk, your billing could be impacted in two different ways, depending on the type of disk.
-For shared premium SSDs, in addition to cost of the disk's tier, there's an extra charge that increases with each VM the SSD is mounted to. See [managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) for details.
+For shared premium SSD disks, in addition to the cost of the disk's tier, there's an extra charge that increases with each VM the SSD is mounted to. See [managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) for details.
Ultra disks don't have an extra charge for each VM that they're mounted to. They're billed on the total IOPS and MBps that the disk is configured for. Normally, an ultra disk has two performance throttles that determine its total IOPS/MBps. However, when configured as a shared ultra disk, two more performance throttles are exposed, for a total of four. These two additional throttles allow for increased performance at an extra expense and each meter has a default value, which raises the performance and cost of the disk.
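The two billing models above can be contrasted with a small sketch. The rates used below are HYPOTHETICAL placeholders, not Azure prices; only the structure (per-VM mount charge for premium SSD versus IOPS/MBps-based billing for ultra disks) reflects the text:

```python
# Illustrative cost model only -- all rates are hypothetical placeholders.
# See the managed disks pricing page for real numbers.

def shared_premium_ssd_monthly(tier_cost: float, mount_charge: float, vm_count: int) -> float:
    """Premium SSD: tier cost plus an extra charge for each VM it's mounted to."""
    return tier_cost + mount_charge * vm_count

def shared_ultra_monthly(iops: int, mbps: int, iops_rate: float, mbps_rate: float) -> float:
    """Ultra disk: no per-VM charge; billed on total configured IOPS and MBps."""
    return iops * iops_rate + mbps * mbps_rate
```

The point of the contrast: adding a VM to a shared premium SSD raises its cost, while adding a VM to a shared ultra disk doesn't by itself; only raising the configured IOPS/MBps (for example, via the two extra shared-disk throttles) does.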
virtual-machines Expose Sap Process Orchestration On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-process-orchestration-on-azure.md
+
+ Title: Exposing SAP legacy middleware with Azure PaaS securely
+description: Learn about securely exposing SAP Process Orchestration on Azure.
+Last updated : 07/19/2022
+# Exposing SAP legacy middleware with Azure PaaS securely
+
+Enabling internal systems and external partners to interact with SAP backends is a common requirement. Existing SAP landscapes often rely on the legacy middleware [SAP Process Orchestration](https://help.sap.com/docs/SAP_NETWEAVER_750/bbd7c67c5eb14835843976b790024ec6/8e995afa7a8d467f95a473afafafa07e.html) (PO) or [Process Integration](https://help.sap.com/docs/SAP_NETWEAVER_750/bbd7c67c5eb14835843976b790024ec6/8e995afa7a8d467f95a473afafafa07e.html) (PI) for their integration and transformation needs. For simplicity, this article uses the term "SAP Process Orchestration" to refer to both offerings.
+
+This article describes configuration options on Azure with emphasis on Internet-facing implementations.
+
+> [!NOTE]
+> SAP mentions [SAP Integration Suite](https://discovery-center.cloud.sap/serviceCatalog/integration-suite?region=all) - specifically [SAP Cloud Integration](https://help.sap.com/docs/CLOUD_INTEGRATION/368c481cd6954bdfa5d0435479fd4eaf/9af2f05c7eb04457aee5906fd8553e00.html) - running on [Business Technology Platform](https://www.sap.com/products/business-technology-platform.html) (BTP) as the successor to SAP PO/PI. Both the BTP platform and the services are available on Azure. For more information, see [SAP Discovery Center](https://discovery-center.cloud.sap/serviceCatalog/integration-suite?region=all&tab=service_plan&provider=azure). See SAP OSS note [1648480](https://launchpad.support.sap.com/#/notes/1648480) for more information about the maintenance support timeline for the legacy component.
+
+## Overview
+
+Existing implementations based on SAP middleware often relied on SAP's proprietary dispatching technology called [SAP Web Dispatcher](https://help.sap.com/docs/ABAP_PLATFORM_NEW/683d6a1797a34730a6e005d1e8de6f22/488fe37933114e6fe10000000a421937.html). It operates on layer 7 of the [OSI model](https://en.wikipedia.org/wiki/OSI_model), acts as a reverse proxy, and addresses load-balancing needs for downstream SAP application workloads like SAP ERP, SAP Gateway, or SAP Process Orchestration.
+
+Dispatching approaches range from traditional reverse proxies like Apache, to Platform-as-a-Service (PaaS) options like the [Azure Load Balancer](../../../load-balancer/load-balancer-overview.md), or the opinionated SAP Web Dispatcher. The overall concepts described in this article apply to the options mentioned. Have a look at SAP's [wiki](https://wiki.scn.sap.com/wiki/display/SI/Can+I+use+a+different+load+balancer+instead+of+SAP+Web+Dispatcher) for their guidance on using non-SAP load balancers.
+
+> [!NOTE]
+> All described setups in this article assume a hub-spoke networking topology, where shared services are deployed into the hub. Given the criticality of SAP, even more isolation may be desirable.
+
+## Primary Azure services used
+
+[Azure Application Gateway](../../../application-gateway/how-application-gateway-works.md) handles public [internet-based](../../../application-gateway/configuration-front-end-ip.md) and/or [internal private](../../../application-gateway/configuration-front-end-ip.md) HTTP routing and provides capabilities such as [encrypted tunneling across Azure subscriptions](../../../application-gateway/private-link.md), [security](../../../application-gateway/features.md), and [auto-scaling](../../../application-gateway/application-gateway-autoscaling-zone-redundant.md). Workloads in other virtual networks (VNets) or even Azure subscriptions that need to communicate with SAP through the Application Gateway can be connected via [private links](../../../application-gateway/private-link-configure.md). Azure Application Gateway focuses on exposing web applications and hence offers a Web Application Firewall (WAF).
+
+[Azure Firewall](../../../firewall/overview.md) handles public internet-based and/or internal private routing for traffic types on layers 4-7 of the OSI model. It offers filtering and threat intelligence fed directly from Microsoft Cyber Security.
+
+[Azure API Management](../../../api-management/api-management-key-concepts.md) handles public internet-based and/or internal private routing specifically for APIs. It offers request throttling, usage quota and limits, governance features like policies, and API keys to slice and dice services per client.
+
+[VPN Gateway](../../../vpn-gateway/vpn-gateway-about-vpngateways.md) and [Azure ExpressRoute](../../../expressroute/expressroute-introduction.md) serve as entry points to on-premises networks. Both components are abbreviated on the diagrams as VPN and XR.
+
+## Setup considerations
+
+Integration architecture needs differ depending on the interface used. SAP-proprietary technologies like the [Intermediate Document framework](https://help.sap.com/docs/SAP_DATA_SERVICES/e54136ab6a4a43e6a370265bf0a2d744/577710e16d6d1014b3fc9283b0e91070.html) (ALE/iDoc), [Business Application Programming Interface](https://help.sap.com/docs/SAP_ERP/c5a8d544836649a1af6eaef358d08e3f/4dc89000ebfc5a9ee10000000a42189b.html) (BAPI), [transactional Remote Function Calls](https://help.sap.com/docs/SAP_NETWEAVER_700/108f625f6c53101491e88dc4cf51a6cc/4899b963ee2b73e7e10000000a42189b.html) (tRFC), or plain [RFC](https://help.sap.com/docs/SAP_ERP/be79bfef64c049f88262cf6cb5de1c1f/0502cbfa1c2f184eaa6ba151d1aaf4fe.html) require a specific runtime environment and operate on layers 4-7 of the OSI model, unlike modern APIs that typically rely on HTTP-based communication (layer 7 of the OSI model). Because of that, the interfaces can't be treated the same way.
+
+This article focuses on modern APIs and http (that includes integration scenarios like [AS2](https://wikipedia.org/wiki/AS2)). [FTP](https://wikipedia.org/wiki/File_Transfer_Protocol) will serve as an example to handle `non-http` integration needs. For more information about the different Microsoft load balancing solutions, see [this article](/azure/architecture/guide/technology-choices/load-balancing-overview).
+
+> [!NOTE]
+> SAP publishes dedicated [connectors](https://support.sap.com/en/product/connectors.html) for their proprietary interfaces. Check SAP's documentation for [Java](https://support.sap.com/en/product/connectors/jco.html) and [.NET](https://support.sap.com/en/product/connectors/msnet.html), for example. They're supported by [Microsoft Gateways](/azure/data-factory/connector-sap-table?tabs=data-factory#prerequisites) too. Be aware that iDocs can also be posted via [HTTP](https://blogs.sap.com/2012/01/14/post-idoc-to-sap-erp-over-http-from-any-application/).
+
+Security concerns require the use of [Firewalls](../../../firewall/features.md) for lower-level protocols and [Web Application Firewalls](../../../web-application-firewall/overview.md) (WAF) to address HTTP-based traffic with [Transport Layer Security](https://wikipedia.org/wiki/Transport_Layer_Security) (TLS). To be effective, TLS sessions need to be terminated at the WAF level. To support zero-trust approaches, it's advisable to [re-encrypt](../../../application-gateway/ssl-overview.md) traffic afterwards to ensure end-to-end encryption.
+
+Integration protocols such as AS2 may raise alerts by standard WAF rules. We recommend using our [Application Gateway WAF triage workbook](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20WAF/Workbook%20-%20AppGw%20WAF%20Triage%20Workbook) to identify and better understand why a rule is triggered, so you can remediate effectively and securely. The standard rules are provided by the Open Web Application Security Project (OWASP). See the [SAP on Azure webcast](https://www.youtube.com/watch?v=kAnWTqKlGGo) for a detailed video session on this topic with emphasis on SAP Fiori exposure.
+
+In addition, security can be further enhanced with [mutual TLS](../../../application-gateway/mutual-authentication-overview.md) (mTLS) - also referred to as mutual authentication. Unlike normal TLS, it also verifies the client identity.
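The difference between plain TLS and mutual TLS can be illustrated at the socket layer with Python's standard `ssl` module. This is a conceptual sketch only, not Application Gateway's configuration: certificate and CA file paths are omitted, and a real deployment would load them with `load_cert_chain()` and `load_verify_locations()`.

```python
import ssl

def mtls_server_context() -> ssl.SSLContext:
    """Sketch of a server-side TLS context that demands a client certificate,
    which is what distinguishes mutual TLS from normal TLS."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy protocol versions
    ctx.verify_mode = ssl.CERT_REQUIRED           # require and verify a client cert
    return ctx
```

With `verify_mode = CERT_REQUIRED`, the TLS handshake fails unless the client presents a certificate that chains to a trusted CA, so the client identity is verified in addition to the server's.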
+
+> [!NOTE]
+> VM pools require a load balancer. For better readability, it isn't shown explicitly in the diagrams below.
+
+> [!NOTE]
+> If the SAP-specific balancing features provided by the SAP Web Dispatcher aren't required, it can be replaced by an Azure Load Balancer, giving the benefit of a managed PaaS offering compared to an Infrastructure-as-a-Service setup.
+
+## Scenario 1.A: Inbound http connectivity focused
+
+The SAP Web Dispatcher **doesn't** offer a Web Application Firewall. Because of that, Azure Application Gateway is recommended for a more secure setup. The Web Dispatcher and "Process Orchestration" remain responsible for protecting the SAP backend from request overload with [sizing guidance](https://help.sap.com/docs/ABAP_PLATFORM_NEW/683d6a1797a34730a6e005d1e8de6f22/489ab14248c673e8e10000000a42189b.html) and [concurrent request limits](https://help.sap.com/docs/ABAP_PLATFORM/683d6a1797a34730a6e005d1e8de6f22/3a450194bf9c4797afb6e21b4b22ad2a.html). There's **no** throttling capability available in the SAP workloads.
+
+Unintentional access can be avoided through [Access Control Lists](https://help.sap.com/docs/ABAP_PLATFORM_NEW/683d6a1797a34730a6e005d1e8de6f22/0c39b84c3afe4d2d9f9f887a32914ecd.html) on the SAP Web Dispatcher.
+
+One of the scenarios for SAP Process Orchestration communication is inbound flow. Traffic may originate from on-premises, external apps or users, or an internal system. The following example focuses on HTTPS.
++
+## Scenario 1.B: Outbound http connectivity focused
+
+For the reverse communication direction, "Process Orchestration" can use VNet routing to reach workloads on-premises or Internet-based targets via the Internet breakout. Azure Application Gateway acts as a reverse proxy in such scenarios. For `non-http` communication, consider adding Azure Firewall. For more information, see [Scenario 4](#scenario-4-file-based) and [Comparing gateway setups](#comparing-gateway-setups).
+
+The outbound scenario below shows two possible methods. One uses HTTPS via the Azure Application Gateway to call a web service (for example, a SOAP adapter); the other uses SFTP (FTP over SSH) via the Azure Firewall to transfer files to a business partner's S/FTP server.
++
+## Scenario 2: API Management focused
+
+Compared to scenario 1, the introduction of [Azure API Management (APIM) in internal mode](../../../api-management/api-management-using-with-internal-vnet.md) (private IP only and VNet integration) adds built-in capabilities like:
+
+- [Throttling](../../../api-management/api-management-sample-flexible-throttling.md)
+- [API governance](/azure/architecture/example-scenario/devops/automated-api-deployments-apiops)
+- Additional security options like [modern authentication flows](../../../api-management/api-management-howto-protect-backend-with-aad.md)
+- [Azure Active Directory](../../../active-directory/develop/active-directory-v2-protocols.md) integration
+- The opportunity to add the SAP APIs to a central company-wide API solution
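API Management delivers throttling as a managed policy, so nothing below is APIM's actual implementation; this is only a conceptual sketch of what per-client request throttling means, using a sliding-window limiter with illustrative names:

```python
import time
from collections import deque
from typing import Optional

class SlidingWindowThrottle:
    """Conceptual per-client rate limiter: at most `limit` requests per
    client key within any `window_seconds`-long window."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.calls: dict[str, deque] = {}

    def allow(self, client_key: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.calls.setdefault(client_key, deque())
        # Drop timestamps that fell out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # throttled: over the per-client limit
        q.append(now)
        return True
```

In APIM the same effect is achieved declaratively (for example via its rate-limit policies keyed on a subscription or client attribute) rather than in application code.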
++
+When a web application firewall isn't required, Azure API Management can be deployed in external mode (using a public IP). That simplifies the setup, while keeping the throttling and API governance capabilities. [Basic protection](/azure/cloud-services/cloud-services-configuration-and-management-faq#what-are-the-features-and-capabilities-that-azure-basic-ips-ids-and-ddos-provides-) is implemented for all Azure PaaS offerings.
++
+## Scenario 3: Global reach
+
+Azure Application Gateway is a region-bound service. Compared to the above scenarios, [Azure Front Door](../../../frontdoor/front-door-overview.md) ensures cross-region global routing, including a web application firewall. See [this comparison](/azure/architecture/guide/technology-choices/load-balancing-overview) for more details about the differences.
+
+> [!NOTE]
+> The SAP Web Dispatcher, Process Orchestration, and backend are condensed into a single image for better readability.
++
+## Scenario 4: File-based
+
+`Non-http` protocols like FTP can't be addressed with Azure API Management, Application Gateway, or Front Door as shown in the previous scenarios. Instead, the managed Azure Firewall, or an equivalent Network Virtual Appliance (NVA), takes over the role of securing inbound requests.
+
+Files need to be stored before they can be processed by SAP. It's recommended to use [SFTP](../../../storage/blobs/secure-file-transfer-protocol-support.md). Azure Blob Storage supports SFTP natively.
+
+> [!NOTE]
+> At the time of writing this article [the feature](../../../storage/blobs/secure-file-transfer-protocol-support.md) is still in preview.
++
+There are alternative SFTP options available on the Azure Marketplace if necessary.
+
+The following variation shows integration targets both external and on-premises. Different flavors of secure FTP illustrate the communication path.
++
+For more information, see the [Azure Files docs](../../../storage/files/files-nfs-protocol.md) for insights into NFS file shares as alternative to Blob Storage.
+
+## Scenario 5: SAP RISE specific
+
+SAP RISE deployments are technically identical to the scenarios described before with the exception that the target SAP workload is managed by SAP itself. The concepts described can be applied here as well.
+
+The following diagrams describe two example setups. For more information, see our [SAP RISE reference guide](../../../virtual-machines/workloads/sap/sap-rise-integration.md#virtual-network-peering-with-sap-riseecs).
+
+> [!IMPORTANT]
+> Contact SAP to ensure communications ports for your scenario are allowed and opened in Network Security Groups.
+
+### Scenario 5.A: Http inbound
+
+In the first setup, the integration layer, including "SAP Process Orchestration", and the complete inbound path are governed by the customer. Only the final SAP target runs in the RISE subscription. Communication to the RISE-hosted workload is configured through virtual network peering, typically over the hub. A potential integration could be iDocs posted to the SAP ERP web service `/sap/bc/idoc_xml` by an external party.
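As a sketch of the iDoc-over-HTTP pattern mentioned above, the following builds (but doesn't send) such a request with Python's standard library. The endpoint path comes from the text; the host name and credentials are hypothetical placeholders, and a real call would additionally need network access to the SAP system:

```python
import base64
import urllib.request

def build_idoc_request(host: str, idoc_xml: bytes, user: str, password: str) -> urllib.request.Request:
    """Construct an HTTP POST of an iDoc XML payload to the SAP ERP
    web service path /sap/bc/idoc_xml. Host/credentials are placeholders."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url=f"https://{host}/sap/bc/idoc_xml",
        data=idoc_xml,
        method="POST",
        headers={
            "Content-Type": "application/xml",
            "Authorization": f"Basic {token}",
        },
    )
```

Sending the request (for example with `urllib.request.urlopen`) would traverse the inbound chain described above: WAF/Application Gateway first, then Web Dispatcher, then the SAP backend.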
++
+The second example shows a setup where SAP RISE runs the whole integration chain except for the API Management layer.
++
+### Scenario 5.B: File outbound
+
+In this scenario, the SAP-managed "Process Orchestration" instance writes files to a customer-managed file share on Azure or to a workload sitting on-premises. The breakout needs to be handled by the customer.
+
+> [!NOTE]
+> At the time of writing this article the [Azure Blob Storage SFTP feature](../../../storage/blobs/secure-file-transfer-protocol-support.md) is still in preview.
++
+## Comparing gateway setups
+
+> [!NOTE]
+> Performance and cost metrics assume production grade tiers. For more information, see the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) and Azure docs for [Azure Firewall](../../../firewall/firewall-performance.md), [Azure Application Gateway (incl. Web Application Firewall - WAF)](../../../application-gateway/high-traffic-support.md), and [Azure API Management](../../../api-management/api-management-capacity.md).
++
+Depending on the integration protocols required, you may need multiple components. For more details about the benefits of the various combinations of chaining Azure Application Gateway with Azure Firewall, see [this article](/azure/architecture/example-scenario/gateway/firewall-application-gateway#application-gateway-before-firewall).
+
+## Integration rule of thumb
+
+Which of the integration flavors described in this article fits your requirements best needs to be evaluated on a case-by-case basis. Consider enabling the following capabilities:
+
+- [Request throttling](../../../api-management/api-management-sample-flexible-throttling.md) using API Management
+
+- [Concurrent request limits](https://help.sap.com/docs/ABAP_PLATFORM/683d6a1797a34730a6e005d1e8de6f22/3a450194bf9c4797afb6e21b4b22ad2a.html) on the SAP Web Dispatcher
+
+- [Mutual TLS](../../../application-gateway/mutual-authentication-overview.md) to verify client and receiver
+
+- Web Application Firewall and [re-encrypt after TLS-termination](../../../application-gateway/ssl-overview.md)
+
+- A [Firewall](../../../firewall/features.md) for `non-http` integrations
+
+- [High-availability](../../../virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios.md) and [disaster recovery](/azure/cloud-adoption-framework/scenarios/sap/eslz-business-continuity-and-disaster-recovery) for the VM-based SAP integration workloads
+
+- Modern [authentication mechanisms like OAuth2](/azure/api-management/sap-api?#production-considerations) where applicable
+
+- A managed key store like [Azure Key Vault](../../../key-vault/general/overview.md) for all involved credentials, certificates, and keys
+
+## Alternatives to SAP Process Orchestration with Azure Integration Services
+
+The integration scenarios covered by SAP Process Orchestration can be addressed natively with the [Azure Integration Services portfolio](https://azure.microsoft.com/product-categories/integration/). Have a look at the [Azure Logic Apps connectors](../../../logic-apps/logic-apps-using-sap-connector.md) for your desired SAP interfaces to get started. The connector guide also contains more details for [AS2](../../../logic-apps/logic-apps-enterprise-integration-as2.md), [EDIFACT](../../../logic-apps/logic-apps-enterprise-integration-edifact.md), and others. See [this blog series](https://blogs.sap.com/2018/09/25/your-sap-on-azure-part-9-easy-integration-using-azure-logic-apps/) for a concrete example of iDoc processing with AS2 via Logic Apps.
+
+## Next steps
+
+[Protect APIs with Application Gateway and API Management](/azure/architecture/reference-architectures/apis/protect-apis)
+
+[Integrate API Management in an internal virtual network with Application Gateway](/azure/api-management/api-management-howto-integrate-internal-vnet-appgateway)
+
+[Deploy the Application Gateway WAF triage workbook to better understand SAP related WAF alerts](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20WAF/Workbook%20-%20AppGw%20WAF%20Triage%20Workbook)
+
+[Understand Azure Application Gateway and Web Application Firewall for SAP](https://blogs.sap.com/2020/12/03/sap-on-azure-application-gateway-web-application-firewall-waf-v2-setup-for-internet-facing-sap-fiori-apps/)
+
+[Understand implication of combining Azure Firewall and Azure Application Gateway](/azure/architecture/example-scenario/gateway/firewall-application-gateway#application-gateway-before-firewall)
+
+[Work with SAP OData APIs in Azure API Management](/azure/api-management/sap-api)