Updates from: 01/29/2022 02:46:39
Category Microsoft Docs article Related commit history on GitHub Change details
compliance Add Your Organization Brand To Encrypted Messages https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/add-your-organization-brand-to-encrypted-messages.md
description: Learn how Office 365 global administrators can apply your organization's brand to encrypted messages.
# Add your organization's brand to your Microsoft 365 for business Message Encryption encrypted messages

You can apply your company branding to customize the look of your organization's email messages and the encryption portal. You'll need global administrator permissions for your work or school account before you can get started. Once you have these permissions, use the Get-OMEConfiguration and Set-OMEConfiguration Windows PowerShell cmdlets to customize these parts of encrypted email messages:

- Introductory text
- Disclaimer text
- URL for your organization's privacy statement
- Text in the OME portal
- Logo that appears in the email message and OME portal, or whether to use a logo at all
- Background color in the email message and OME portal

You can also revert to the default look and feel at any time.
If you'd like more control, use Office 365 Advanced Message Encryption to create multiple templates for encrypted emails originating from your organization. Use these templates to control parts of the end-user experience. For example, specify whether recipients can use Google, Yahoo, and Microsoft accounts to sign in to the encryption portal. Use templates to fulfill several use cases, such as:

- Individual departments, such as Finance, Sales, and so on
- Different products
- Different geographical regions or countries
- Whether you want to allow emails to be revoked
- Whether you want emails sent to external recipients to expire after a specified number of days

Once you've created the templates, you can apply them to encrypted emails by using Exchange mail flow rules. If you have Office 365 Advanced Message Encryption, you can revoke any email that you've branded by using these templates.
You can modify several features within a branding template by using the following cmdlets:
- [Set-OMEConfiguration](/powershell/module/exchange/set-omeconfiguration) - Modify the default branding template or a custom branding template that you created.
- [New-OMEConfiguration](/powershell/module/exchange/new-omeconfiguration) - Create a new branding template, Advanced Message Encryption only.
- [Remove-OMEConfiguration](/powershell/module/exchange/remove-omeconfiguration) - Remove a custom branding template, Advanced Message Encryption only. You can't delete the default branding template.
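A typical session connects to Exchange Online first and then reads or updates a template. A minimal sketch; the account name and "Branding Template 1" are placeholders, and the default template is named "OME Configuration":

```powershell
# Connect to Exchange Online (requires the ExchangeOnlineManagement module).
# The account name below is a placeholder.
Connect-ExchangeOnline -UserPrincipalName admin@contoso.onmicrosoft.com

# List all branding templates and their current settings.
Get-OMEConfiguration

# Inspect a single template by name.
Get-OMEConfiguration -Identity "Branding Template 1"
```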
## Modify an OME branding template

Use Windows PowerShell to modify one branding template at a time. If you have Advanced Message Encryption, you can also create, modify, and remove custom templates.
![Customizable email parts.](../media/ome-template-breakout.png)
|**To customize this feature of the encryption experience**|**Use these commands**|
|:--|:--|
|Background color|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -BackgroundColor "<#RRGGBB hexadecimal color code or name value>"` <p> **Example:** <p> `Set-OMEConfiguration -Identity "Branding Template 1" -BackgroundColor "#ffffff"` <p> For more information about background colors, see the [Background colors](#background-color-reference) section later in this article.|
|Logo|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -Image <Byte[]>` <p> **Example:** <p> `Set-OMEConfiguration -Identity "Branding Template 1" -Image ([System.IO.File]::ReadAllBytes('C:\Temp\contosologo.png'))` <p> Supported file formats: .png, .jpg, .bmp, or .tiff <p> Optimal size of logo file: less than 40 KB <p> Optimal size of logo image: 170x70 pixels. If your image exceeds these dimensions, the service resizes your logo for display in the portal. The service doesn't modify the graphic file itself. For best results, use the optimal size.|
|Text next to the sender's name and email address|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -IntroductionText "<String up to 1024 characters>"` <p> **Example:** <p> `Set-OMEConfiguration -Identity "Branding Template 1" -IntroductionText "has sent you a secure message."`|
|Text that appears on the "Read Message" button|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -ReadButtonText "<String up to 1024 characters>"` <p> **Example:** <p> `Set-OMEConfiguration -Identity "OME Configuration" -ReadButtonText "Read Secure Message."`|
|Text that appears below the "Read Message" button|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -EmailText "<String up to 1024 characters>"` <p> **Example:** <p> `Set-OMEConfiguration -Identity "OME Configuration" -EmailText "Encrypted message from ContosoPharma secure messaging system."`|
|URL for the Privacy Statement link|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -PrivacyStatementURL "<URL>"` <p> **Example:** <p> `Set-OMEConfiguration -Identity "Branding Template 1" -PrivacyStatementURL "https://contoso.com/privacystatement.html"`|
|Disclaimer statement in the email that contains the encrypted message|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -DisclaimerText "<Disclaimer statement. String of up to 1024 characters.>"` <p> **Example:** <p> `Set-OMEConfiguration -Identity "Branding Template 1" -DisclaimerText "This message is confidential for the use of the addressee only."`|
|Text that appears at the top of the encrypted mail viewing portal|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -PortalText "<Text for your portal. String of up to 128 characters.>"` <p> **Example:** <p> `Set-OMEConfiguration -Identity "OME Configuration" -PortalText "ContosoPharma secure email portal."`|
|To enable or disable authentication with a one-time passcode for this custom template|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -OTPEnabled <$true|$false>` <p> **Examples:** <p> To enable one-time passcodes for this custom template <p> `Set-OMEConfiguration -Identity "Branding Template 1" -OTPEnabled $true` <p> To disable one-time passcodes for this custom template <p> `Set-OMEConfiguration -Identity "Branding Template 1" -OTPEnabled $false`|
|To enable or disable authentication with Microsoft, Google, or Yahoo identities for this custom template|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -SocialIdSignIn <$true|$false>` <p> **Examples:** <p> To enable social IDs for this custom template <p> `Set-OMEConfiguration -Identity "Branding Template 1" -SocialIdSignIn $true` <p> To disable social IDs for this custom template <p> `Set-OMEConfiguration -Identity "Branding Template 1" -SocialIdSignIn $false`|
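The settings in the table can also be combined into a single call. A sketch that brands the default template in one pass; the color, logo path, text strings, and URL are all placeholder values:

```powershell
# Apply several branding settings to one template in a single call.
# All values below are placeholders; substitute your own branding.
Set-OMEConfiguration -Identity "OME Configuration" `
    -BackgroundColor "#0078d4" `
    -Image ([System.IO.File]::ReadAllBytes('C:\Temp\contosologo.png')) `
    -IntroductionText "has sent you a secure message." `
    -EmailText "Encrypted message from ContosoPharma secure messaging system." `
    -PrivacyStatementURL "https://contoso.com/privacystatement.html"
```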
## Create an OME branding template (Advanced Message Encryption)
To create a new custom branding template:
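Based on the cmdlet list above, creating a template is a single **New-OMEConfiguration** call (Advanced Message Encryption only); the template name below is a placeholder:

```powershell
# Create a new custom branding template (Advanced Message Encryption only).
# The template name is a placeholder; choose any name that isn't already in use.
New-OMEConfiguration -Identity "Branding Template 1"

# Confirm the template was created.
Get-OMEConfiguration -Identity "Branding Template 1"
```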
## Return the default branding template to its original values

To remove all modifications from the default template, including brand customizations, complete these steps:
1. Using a work or school account that has global administrator permissions in your organization, start a Windows PowerShell session and connect to Exchange Online. For instructions, see [Connect to Exchange Online PowerShell](/powershell/exchange/connect-to-exchange-online-powershell).
2. Use the **Set-OMEConfiguration** cmdlet as described in [Set-OMEConfiguration](/powershell/module/exchange/Set-OMEConfiguration). To remove your organization's branded customizations from the DisclaimerText, EmailText, and PortalText values, set the value to an empty string, `""`. For all image values, such as Logo, set the value to `$null`.
The following table describes the encryption customization option defaults.
|To revert this feature of the encryption experience back to the default text and image|Use these commands|
|:--|:--|
|Default text that comes with encrypted email messages. The default text appears above the instructions for viewing encrypted messages|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -EmailText "<empty string>"` <p> **Example:** <p> `Set-OMEConfiguration -Identity "OME Configuration" -EmailText ""`|
|Disclaimer statement in the email that contains the encrypted message|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -DisclaimerText "<empty string>"` <p> **Example:** <p> `Set-OMEConfiguration -Identity "OME Configuration" -DisclaimerText ""`|
|Text that appears at the top of the encrypted mail viewing portal|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -PortalText "<empty string>"` <p> **Example reverting back to default:** <p> `Set-OMEConfiguration -Identity "OME Configuration" -PortalText ""`|
|Logo|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -Image $null` <p> **Example reverting back to default:** <p> `Set-OMEConfiguration -Identity "OME Configuration" -Image $null`|
|Background color|`Set-OMEConfiguration -Identity "<OMEConfigurationName>" -BackgroundColor $null` <p> **Example reverting back to default:** <p> `Set-OMEConfiguration -Identity "OME Configuration" -BackgroundColor $null`|
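Run together, the commands in the table reset the default template in one pass. A sketch, using the default template's name:

```powershell
# Revert the default "OME Configuration" template to its original values.
# Empty strings clear text customizations; $null clears the image and color.
Set-OMEConfiguration -Identity "OME Configuration" `
    -EmailText "" `
    -DisclaimerText "" `
    -PortalText "" `
    -Image $null `
    -BackgroundColor $null
```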
## Remove a custom branding template (Advanced Message Encryption)

You can only remove or delete branding templates that you've made. You can't remove the default branding template. To remove a custom branding template:
1. Using a work or school account that has global administrator permissions in your organization, start a Windows PowerShell session and connect to Exchange Online. For instructions, see [Connect to Exchange Online PowerShell](/powershell/exchange/connect-to-exchange-online-powershell).
2. Use the **Remove-OMEConfiguration** cmdlet as follows:
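Per the [Remove-OMEConfiguration](/powershell/module/exchange/remove-omeconfiguration) reference, the cmdlet takes the template name as `-Identity`; the name below is a placeholder:

```powershell
# Remove a custom branding template by name (Advanced Message Encryption only).
# You can't remove the default "OME Configuration" template.
Remove-OMEConfiguration -Identity "Branding Template 1"
```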
After you've either modified the default template or created new branding templates, you can create Exchange mail flow rules to apply your custom branding based on certain conditions. Such a rule will apply custom branding in the following scenarios:

- If the email was manually encrypted by the end user using Outlook or Outlook on the web, formerly Outlook Web App
- If the email was automatically encrypted by an Exchange mail flow rule or Data Loss Prevention policy

For information on how to create an Exchange mail flow rule that applies encryption, see [Define mail flow rules to encrypt email messages in Office 365](define-mail-flow-rules-to-encrypt-email.md).
7. From **Do the following**, select **Modify the message security** \> **Apply custom branding to OME messages**. Next, from the drop-down, select a branding template.
8. (Optional) You can configure the mail flow rule to apply encryption and custom branding. From **Do the following**, select **Modify the message security**, and then choose **Apply Office 365 Message Encryption and rights protection**. Select an RMS template from the list, choose **Save**, and then choose **OK**.
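A comparable rule can be created in PowerShell. This is a sketch assuming the `ApplyRightsProtectionCustomizationTemplate` parameter of **New-TransportRule** applies a branding template; the rule and template names are placeholders, so verify the parameter in the New-TransportRule reference for your tenant before relying on it:

```powershell
# Sketch: apply a custom OME branding template to mail sent outside the
# organization. Names are placeholders, and the
# ApplyRightsProtectionCustomizationTemplate parameter is an assumption here.
New-TransportRule -Name "Apply custom OME branding" `
    -SentToScope NotInOrganization `
    -ApplyRightsProtectionCustomizationTemplate "Branding Template 1"
```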
The list of templates includes default templates and options and any custom templates you create. If the list is empty, ensure that you have set up Office 365 Message Encryption with the new capabilities. For instructions, see [Set up new Office 365 Message Encryption capabilities](set-up-new-message-encryption-capabilities.md). For information about the default templates, see [Configuring and managing templates for Azure Information Protection](/information-protection/deploy-use/configure-policy-templates). For information about the **Do Not Forward** option, see [Do Not Forward option for emails](/information-protection/deploy-use/configure-usage-rights#do-not-forward-option-for-emails). For information about the **encrypt only** option, see [Encrypt Only option for emails](/information-protection/deploy-use/configure-usage-rights#encrypt-only-option-for-emails). Choose **add action** if you want to specify another action.

## Background color reference
The color names that you can use for the background color are limited. Instead of a color name, you can use a hex code value (`#RRGGBB`). You can use a hex code value that corresponds to a color name, or you can use a custom hex code value. Be sure to enclose the hex code value in quotation marks (for example, `"#f0f8ff"`).
The available background color names and their corresponding hex code values are described in the following table.
compliance Apply Sensitivity Label Automatically https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/apply-sensitivity-label-automatically.md
Finally, you can use simulation mode to provide an approximation of the time needed to run your auto-labeling policy.
![Choose locations page for auto-labeling configuration.](../media/locations-auto-labeling-wizard.png)

To specify individual OneDrive accounts, see [Get a list of all user OneDrive URLs in your organization](/onedrive/list-onedrive-urls).
> [!NOTE]
> When [OneDrive accounts are deleted](/onedrive/retention-and-deletion#the-onedrive-deletion-process) (for example, an employee leaves the organization) the location gets marked as a SharePoint site to support continued access during the OneDrive retention period.
>
> At this stage of deletion, files in the OneDrive account won't be included in the **All** setting for the **OneDrive accounts** location but will be included in the **All** setting for the **SharePoint sites** location. Any files from these deleted OneDrive accounts display SharePoint as their source location in the simulation results and auditing data.
7. For the **Set up common or advanced rules** page: Keep the default of **Common rules** to define rules that identify content to label across all your selected locations. If you need different rules per location, select **Advanced rules**. Then select **Next**.
compliance Communication Compliance Configure https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/communication-compliance-configure.md
If you don't have an existing Office 365 Enterprise E5 plan and want to try communication compliance, you can sign up for a trial.
## Recommended actions (preview)
Recommended actions can help your organization get started with communication compliance capabilities and get the most out of your existing policies. Included on the **Policies** page, recommended actions provide insights and summarize sensitive information types and inappropriate content activities in communications in your organization. Insights are supported by [data classification](data-classification-overview.md) and the application of sensitivity labels, retention labels, and sensitive information type classification. These insights don't include any personally identifiable information (PII) for users in your organization.
![Communication compliance recommended actions.](../media/communication-compliance-recommended-actions.png)
Choose from these solution role group options when configuring and managing communication compliance:
| Role | Role permissions |
|:--|:--|
| **Communication Compliance** | Use this role group to manage communication compliance for your organization in a single group. By adding all user accounts for designated administrators, analysts, investigators, and viewers, you can configure communication compliance permissions in a single group. This role group contains all the communication compliance permission roles. This configuration is the easiest way to quickly get started with communication compliance and is a good fit for organizations that don't need separate permissions defined for separate groups of users. Users that create policies as a communication compliance administrator must have their mailbox hosted on Exchange Online.|
| **Communication Compliance Admin** | Use this role group to initially configure communication compliance and later to segregate communication compliance administrators into a defined group. Users assigned to this role group can create, read, update, and delete communication compliance policies, global settings, and role group assignments. Users assigned to this role group can't view message alerts. Users that create policies as a communication compliance administrator must have their mailbox hosted on Exchange Online.|
| **Communication Compliance Analyst** | Use this group to assign permissions to users that will act as communication compliance analysts. Users assigned to this role group can view policies where they're assigned as Reviewers, view message metadata (not message content), escalate to additional reviewers, or send notifications to users. Analysts can't resolve pending alerts. |
| **Communication Compliance Investigator** | Use this group to assign permissions to users that will act as communication compliance investigators. Users assigned to this role group can view message metadata and content, escalate to additional reviewers, escalate to an Advanced eDiscovery case, send notifications to users, and resolve the alert. |
Use the following chart to help you configure groups in your organization for communication compliance:
|Supervised users <br> Excluded users | Distribution groups <br> Microsoft 365 Groups | Dynamic distribution groups <br> Nested distribution groups <br> Mail-enabled security groups <br> Microsoft 365 groups with dynamic membership |
| Reviewers | None | Distribution groups <br> Dynamic distribution groups <br> Nested distribution groups <br> Mail-enabled security groups |
When you assign a *distribution group* in the policy, the policy monitors all emails and Teams chats from each user in the distribution group. When you assign a *Microsoft 365 group* in the policy, the policy monitors all emails and Teams chats sent to the Microsoft 365 group, not the individual emails and chats received by each group member. Using distribution groups in communication compliance policies is recommended so that individual emails and Teams chats from each user are automatically monitored.
If you're an organization with an Exchange on-premises deployment or an external email provider and you want to monitor Microsoft Teams chats for your users, you must create a distribution group for the users with on-premises or external mailboxes to monitor. Later in these steps, you'll assign this distribution group as the **Supervised users and groups** selection in the policy wizard. For more information about the requirements and limitations for enabling cloud-based storage and Teams support for on-premises users, see [Search for Teams chat data for on-premises users](search-cloud-based-mailboxes-for-on-premises-users.md).
compliance Compliance Easy Trials https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/compliance-easy-trials.md
Wondering what you can experience in your free trial? The compliance solutions that you can try are described below.
- **Audit**
  Advanced Audit helps organizations conduct forensic and compliance investigations by increasing the audit log retention required to conduct an investigation, providing access to crucial events that help determine the scope of compromise, and providing faster access to the Office 365 Management Activity API. [Learn more about Audit](advanced-audit.md)
- **Communication Compliance**
  Communication Compliance helps you overcome modern compliance challenges associated with internal and external communications by helping you automatically capture inappropriate messages, investigate possible policy violations, and take steps to remediate. Learn more about [Communication Compliance](communication-compliance.md)
- **Compliance Manager**
  Compliance Manager can help you throughout your compliance journey, from taking inventory of your data protection risks to managing the complexities of implementing controls, staying current with regulations and certifications, and reporting to auditors. [Learn more about Compliance Manager](compliance-manager.md)
- **eDiscovery**
  Take advantage of an end-to-end workflow for preserving, collecting, analyzing, and exporting content that's responsive to your organization's internal and external investigations. Legal teams can also manage the entire legal hold notification process by communicating with custodians involved in a case. [Learn more about eDiscovery](ediscovery.md)
- **Information Governance**
  Automate your retention policy coverage using Adaptive Policy Scopes. This feature allows you to dynamically target retention policies to specific users, groups, or sites. These policies automatically update when changes occur in your organization. In addition, retention policies using adaptive scopes are not subject to location limits. [Learn more about Adaptive Policy Scopes](create-retention-policies.md).
- **Information Protection**
  Implement Microsoft Information Protection with [sensitivity labels](sensitivity-labels.md) and [data loss prevention policies](dlp-learn-about-dlp.md) to help you discover, classify, and protect your sensitive content wherever it lives or travels.

  The Information Protection trial provides you with default labels, auto-labeling for documents and emails, and data loss prevention to protect credit card numbers shared in Teams and by devices. The default policies we create for you get you up and running quickly, but you can fully customize them as you want.
  When the trial ends, you'll receive an email that informs you of the following:
  - All files and emails labeled during your trial stay labeled. You can manually remove the labels.
  - You'll be downgraded to your previous Microsoft E3 license package that doesn't support auto-labeling and data loss prevention. Your existing policies will stay turned on unless you turn them off.
  - Any auto-labeling policies can't be edited after the trial ends, but they can be deleted.
  - If you edit DLP policies that include either the Teams or Devices locations after the trial ends, those locations will be removed from the policy.

  For more information about each of these preconfigured features and how they will impact users, see [Learn about the free trial for Microsoft Information Protection](mip-easy-trials.md).

  For more information about the full range of features for Microsoft Information Protection, see [Microsoft Information Protection in Microsoft 365](information-protection.md).
- **Insider Risk Management**
- Leverage artificial intelligence to help you quickly identify, triage, and remediate internal risks. Using logs from Microsoft 365 and Azure services, you can define policies that monitor for risk signals, then take remediation actions such as promoting user education or initiating an investigation. [Learn more about Insider Risk Management](insider-risk-management-solution-overview.md)
+ Leverage artificial intelligence to help you quickly identify, triage, and remediate internal risks. Using logs from Microsoft 365 and Azure services, you can define policies that monitor for risk signals, then take remediation actions such as promoting user education or initiating an investigation. [Learn more about Insider Risk Management](insider-risk-management-solution-overview.md)
<!-- - **privacy management**
Wondering what you can experience in your free trial? The compliance solutions t
- **Records Management**
- Use integrated Records Management features to:
- - Classify content as a record to prevent users from editing, as required by regulations, laws, or organizational policy
- - Apply retention labels to content automatically when it matches criteria you specify, using auto-apply label policies
- - Use adaptive scope policies to dynamically target your retention label policies to locations, with no limit on how many locations are included
- - Get full content lifecycle support, including the ability to perform disposition review on contents before they are permanently deleted at the end
- For more information on the full range of feature for Microsoft Records Management, please see [Learn more about Records Management](records-management.md)
+ Use integrated Records Management features to:
+ - Classify content as a record to prevent users from editing, as required by regulations, laws, or organizational policy
+ - Apply retention labels to content automatically when it matches criteria you specify, using auto-apply label policies
+ - Use adaptive scope policies to dynamically target your retention label policies to locations, with no limit on how many locations are included
+ - Get full content lifecycle support, including the ability to perform disposition review on contents before they are permanently deleted at the end
+
+ For more information on the full range of features for Microsoft Records Management, [learn more about Records Management](records-management.md).
compliance Configure Irm To Use An On Premises Ad Rms Server https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/configure-irm-to-use-an-on-premises-ad-rms-server.md
When you import the TPD, it's stored and protected in Exchange Online.
4. In the **Actions** pane, click **Export Trusted Publishing Domain**.
-5. In the **Publishing domain file** box, click **Save As** to save the file to a specific location on the local computer. Type a file name, making sure to specify the `.xml` file name extension, and then click **Save**.
+5. In the **Publishing domain file** box, click **Save As** to save the file to a specific location on the local computer. Type a file name, making sure to specify the `.xml` file name extension, and then click **Save**.
6. In the **Password** and **Confirm Password** boxes, type a strong password that will be used to encrypt the trusted publishing domain file. You will have to specify this password when you import the TPD to your cloud-based email organization.
After the TPD is exported to an XML file, you have to import it to Exchange Online.
To import the TPD, run the following command in Windows PowerShell:

```powershell
-Import-RMSTrustedPublishingDomain -FileData $([byte[]](Get-Content -Encoding byte -Path <path to exported TPD file> -ReadCount 0)) -Name "<name of TPD>" -ExtranetLicensingUrl <URL> -IntranetLicensingUrl <URL>
+Import-RMSTrustedPublishingDomain -FileData ([System.IO.File]::ReadAllBytes('<path to exported TPD file>')) -Name "<name of TPD>" -ExtranetLicensingUrl <URL> -IntranetLicensingUrl <URL>
```
-You can obtain the values for the _ExtranetLicensingUrl_ and _IntranetLicensingUrl_ parameters in the Active Directory Rights Management Services console. Select the AD RMS cluster in the console tree. The licensing URLs are displayed in the results pane. These URLs are used by email clients when content has to be decrypted and when Exchange Online needs to determine which TPD to use.
+You can obtain the values for the _ExtranetLicensingUrl_ and _IntranetLicensingUrl_ parameters in the Active Directory Rights Management Services console. Select the AD RMS cluster in the console tree. The licensing URLs are displayed in the results pane. These URLs are used by email clients when content has to be decrypted and when Exchange Online needs to determine which TPD to use.
When you run this command, you'll be prompted for a password. Enter the password that you specified when you exported the TPD from your AD RMS server. For example, the following command imports the TPD named Exported TPD using the XML file that you exported from your AD RMS server and saved to the desktop of the Administrator account. The Name parameter is used to specify a name for the TPD.

```powershell
-Import-RMSTrustedPublishingDomain -FileData $([byte[]](Get-Content -Encoding byte -Path C:\Users\Administrator\Desktop\ExportTPD.xml -ReadCount 0)) -Name "Exported TPD" -ExtranetLicensingUrl https://corp.contoso.com/_wmcs/licensing -IntranetLicensingUrl https://rmsserver/_wmcs/licensing
+Import-RMSTrustedPublishingDomain -FileData ([System.IO.File]::ReadAllBytes('C:\Users\Administrator\Desktop\ExportTPD.xml')) -Name "Exported TPD" -ExtranetLicensingUrl https://corp.contoso.com/_wmcs/licensing -IntranetLicensingUrl https://rmsserver/_wmcs/licensing
```

For detailed syntax and parameter information, see [Import-RMSTrustedPublishingDomain](/powershell/module/exchange/import-rmstrustedpublishingdomain).
-#### How do you know this step worked?
+#### How do you know that you successfully imported the TPD?
To verify that you have successfully imported the TPD, run the **Get-RMSTrustedPublishingDomain** cmdlet to retrieve TPDs in your Exchange Online organization. For details, see the examples in [Get-RMSTrustedPublishingDomain](/powershell/module/exchange/get-rmstrustedpublishingdomain).
To return a list of all templates contained in the default TPD, run the following command:

```powershell
Get-RMSTemplate -Type All | fl
```
-If the value of the _Type_ parameter is `Archived`, the template isn't visible to users. Only distributed templates in the default TPD are available in Outlook on the web.
+If the value of the _Type_ parameter is `Archived`, the template isn't visible to users. Only distributed templates in the default TPD are available in Outlook on the web.
To distribute a template, run the following command:
Set-RMSTemplate -Identity "Company Confidential" -Type Distributed
For detailed syntax and parameter information, see [Get-RMSTemplate](/powershell/module/exchange/get-rmstemplate) and [Set-RMSTemplate](/powershell/module/exchange/set-rmstemplate).
-**The Do Not Forward template**
+#### The Do Not Forward template
When you import the default TPD from your on-premises organization into Exchange Online, one AD RMS rights policy template named **Do Not Forward** is imported. By default, this template is distributed when you import the default TPD. You can't use the **Set-RMSTemplate** cmdlet to modify the **Do Not Forward** template. When the **Do Not Forward** template is applied to a message, only the recipients addressed in the message can read the message. Additionally, recipients can't do the following:

- Forward the message to another person.
- Copy content from the message.
- Print the message.

> [!IMPORTANT]
When the **Do Not Forward** template is applied to a message, only the recipient
You can create additional AD RMS rights policy templates on the AD RMS server in your on-premises organization to meet your IRM protection requirements. If you create additional AD RMS rights policy templates, you have to export the TPD from the on-premises AD RMS server again and refresh the TPD in the cloud-based email organization.
-#### How do you know this step worked?
+#### How do you know that you successfully distributed the AD RMS rights policy template?
To verify that you have successfully distributed an AD RMS rights policy template, run the **Get-RMSTemplate** cmdlet to check the template's properties. For details, see the examples in [Get-RMSTemplate](/powershell/module/exchange/get-rmstemplate).
Set-IRMConfiguration -InternalLicensingEnabled $true
For detailed syntax and parameter information, see [Set-IRMConfiguration](/powershell/module/exchange/set-irmconfiguration).
-#### How do you know this step worked?
+#### How do you know that you successfully enabled IRM?
To verify that you have successfully enabled IRM, run the [Get-IRMConfiguration](/powershell/module/exchange/get-irmconfiguration) cmdlet to check IRM configuration in the Exchange Online organization.
compliance Create A Custom Sensitive Information Type In Scc Powershell https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/create-a-custom-sensitive-information-type-in-scc-powershell.md
audience: Admin-
+ms.article: article
ms.localizationpriority: medium-+ - M365-security-compliance
+search.appverid:
- MOE150 - MET150 description: "Learn how to create and import a custom sensitive information type for policies in the Compliance center."
description: "Learn how to create and import a custom sensitive information type
# Create a custom sensitive information type using PowerShell
-This topic shows you how to use PowerShell to create an XML *rule package* file that defines your own custom [sensitive information types](sensitive-information-type-entity-definitions.md). You need to know how to create a regular expression. As an example, this topic creates a custom sensitive information type that identifies an employee ID. You can use this example XML as a starting point for your own XML file. If you are new to sensitive information types, see [Learn about sensitive information types](sensitive-information-type-learn-about.md).
+This article shows you how to create an XML *rule package* file that defines custom [sensitive information types](sensitive-information-type-entity-definitions.md). This article describes a custom sensitive information type that identifies an employee ID. You can use the sample XML in this article as a starting point for your own XML file.
-After you've created a well-formed XML file, you can upload it to Microsoft 365 by using Microsoft 365 PowerShell. Then you're ready to use your custom sensitive information type in your policies and test that it's detecting the sensitive information as you intended.
+For more information about sensitive information types, see [Learn about sensitive information types](sensitive-information-type-learn-about.md).
+
+After you've created a well-formed XML file, you can upload it to Microsoft 365 using PowerShell. Then, you're ready to use your custom sensitive information type in policies. You can test its effectiveness in detecting the sensitive information as you intended.
> [!NOTE]
-> If you don't need the fine grained control that PowerShell provides, you can create custom sensitive information types in the Compliance center. For more information, see [Create a custom sensitive information type](create-a-custom-sensitive-information-type.md).
+> If you don't need the fine-grained control that PowerShell provides, you can create custom sensitive information types in the Microsoft 365 compliance center. For more information, see [Create a custom sensitive information type](create-a-custom-sensitive-information-type.md).
## Important disclaimer
-Due to the variances in customer environments and content match requirements, Microsoft Support cannot assist in providing custom content-matching definitions; e.g., defining custom classifications or regular expression (also known as RegEx) patterns. For custom content-matching development, testing, and debugging, Microsoft 365 customers will need to rely upon internal IT resources, or use an external consulting resource such as Microsoft Consulting Services (MCS). Support engineers can provide limited support for the feature, but cannot provide assurances that any custom content-matching development will fulfill the customer's requirements or obligations. As an example of the type of support that can be provided, sample regular expression patterns may be provided for testing purposes. Or, support can assist with troubleshooting an existing RegEx pattern which is not triggering as expected with a single specific content example.
+Microsoft Support can't help you create content-matching definitions.
+
+For custom content-matching development, testing, and debugging, you'll need to use your own internal IT resources, or use consulting services, such as Microsoft Consulting Services (MCS). Microsoft Support engineers can provide limited support for this feature, but they can't guarantee that custom content-matching suggestions will fully meet your needs.
-See [Potential validation issues to be aware of](#potential-validation-issues-to-be-aware-of) in this topic.
+MCS can provide regular expressions for testing purposes. They can also provide assistance in troubleshooting an existing RegEx pattern that's not working as expected with a single specific content example.
+
+See [Potential validation issues to be aware of](#potential-validation-issues-to-be-aware-of) in this article.
For more information about the Boost.RegEx (formerly known as RegEx++) engine that's used for processing the text, see [Boost.Regex 5.1.3](https://www.boost.org/doc/libs/1_68_0/libs/regex/doc/html/). > [!NOTE]
-> If you use an ampersand character (&) as part of a keyword in your custom sensitive information type, please note that there is a known issue. You should add an additional term with spaces around the character to make sure that the character is properly identified, for example, L & P _not_ L&P.
+> If you use an ampersand character (&) as part of a keyword in your custom sensitive information type, you need to add an additional term with spaces around the character. For example, use `L & P` _not_ `L&P`.
## Sample XML of a rule package
-Here's the sample XML of the rule package that we'll create in this topic. Elements and attributes are explained in the sections below.
-
+Here's the sample XML of the rule package that we'll create in this article. Elements and attributes are explained in the sections below.
+
```xml
<?xml version="1.0" encoding="UTF-16"?>
<RulePackage xmlns="http://schemas.microsoft.com/office/2011/mce">
Here's the sample XML of the rule package that we'll create in this topic. Eleme
## What are your key requirements? [Rule, Entity, Pattern elements]
-Before you get started, it's helpful to understand the basic structure of the XML schema for a rule, and how you can use this structure to define your custom sensitive information type so that it will identify the right content.
-
-A rule defines one or more entities (sensitive information types), and each entity defines one or more patterns. A pattern is what a policy looks for when it evaluates content such as email and documents.
+It's important that you understand the basic structure of the XML schema for a rule. Your understanding of the structure will help your custom sensitive information type to identify the right content.
+
+A rule defines one or more entities (also known as sensitive information types). Each entity defines one or more patterns. A pattern is what a policy looks for when it evaluates content (for example, email and documents).
+
+In XML markup, "rules" mean the patterns that define the sensitive information type. Don't associate references to rules in this article with "conditions" or "actions" that are common in other Microsoft features.
-In this topic, the XML markup uses rule to mean the patterns that define an entity, also known as a sensitive information type. So in this topic, when you see rule, think entity or sensitive information type, not conditions and actions.
-
### Simplest scenario: entity with one pattern
-Here's the simplest scenario. You want your policy to identify content that contains your organization's employee ID, which is formatted as a nine-digit number. So the pattern refers to a regular expression contained in the rule that identifies nine-digit numbers. Any content containing a nine-digit number satisfies the pattern.
-
+Here's a simple scenario: You want your policy to identify content that contains nine-digit employee IDs that are used in your organization. A pattern refers to the regular expression in the rule that identifies nine-digit numbers. Any content that contains a nine-digit number satisfies the pattern.
+ ![Diagram of entity with one pattern.](../media/4cc82dcf-068f-43ff-99b2-bac3892e9819.png)
-
-However, while simple, this pattern may identify many false positives by matching content that contains any nine-digit number that is not necessarily an employee ID.
-
+
+But, this pattern might identify **any** nine-digit number, including longer numbers or other types of nine-digit numbers that aren't employee IDs. This type of unwanted match is known as a *false positive*.
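To see why, here's a quick local sketch using Python's `re` module (an illustration for testing candidate patterns on your own machine; it isn't part of the rule package, and the service itself uses the Boost.Regex engine):

```python
import re

# The same idea as the rule's regex: a nine-digit number
# surrounded by white space.
nine_digits = re.compile(r"\s(\d{9})\s")

id_text = "Record for employee 123456789 was updated."
noise_text = "The sensor logged 987654321 events last night."

# Both texts satisfy the pattern, but only the first contains an
# actual employee ID; the second is a false positive.
print(nine_digits.findall(id_text))
print(nine_digits.findall(noise_text))
```

Trying candidate regular expressions against sample text like this, before you build the rule package, helps you gauge how many false positives to expect.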
+ ### More common scenario: entity with multiple patterns
-For this reason, it's more common to define an entity by using more than one pattern, where the patterns identify supporting evidence (such as a keyword or date) in addition to the entity (such as a nine-digit number).
-
-For example, to increase the likelihood of identifying content that contains an employee ID, you can define another pattern that also identifies a hire date, and define yet another pattern that identifies both a hire date and a keyword (such as "employee ID"), in addition to the nine-digit number.
-
+Because of the potential for false positives, you typically use more than one pattern to define an entity. Multiple patterns provide supporting evidence for the target entity. For example, additional keywords, dates, or other text can help identify the original entity (for example, the nine-digit employee number).
+
+For example, to increase the likelihood of identifying content that contains an employee ID, you can define other patterns to look for:
+
+- A pattern that identifies a hire date.
+- A pattern that identifies both a hire date and the "employee ID" keyword.
+ ![Diagram of entity with multiple patterns.](../media/c8dc2c9d-00c6-4ebc-889a-53b41a90024a.png)
-
-Note a couple of important aspects of this structure:
-
-- Patterns that require more evidence have a higher confidence level. This is useful because when you later use this sensitive information type in a policy, you can use more restrictive actions (such as block content) with only the higher-confidence matches, and you can use less restrictive actions (such as send notification) with the lower-confidence matches. -- The supporting IdMatch and Match elements reference regexes and keywords that are actually children of the Rule element, not the Pattern. These supporting elements are referenced by the Pattern but included in the Rule. This means that a single definition of a supporting element, like a regular expression or a keyword list, can be referenced by multiple entities and patterns.
+There are important points to consider for multiple pattern matches:
+
+- Patterns that require more evidence have a higher confidence level. Based on the confidence level, you can take the following actions:
+ - Use more restrictive actions (such as block content) with higher-confidence matches.
+ - Use less restrictive actions (such as send notifications) with lower-confidence matches.
-## What entity do you need to identify? [Entity element, id attribute]
+- The supporting `IdMatch` and `Match` elements reference RegExes and keywords that are actually children of the `Rule` element, not the `Pattern`. These supporting elements are referenced by the `Pattern`, but are included in the `Rule`. This behavior means that a single definition of a supporting element, such as a regular expression or a keyword list, can be referenced by multiple entities and patterns.
+
+## What entity do you need to identify? [Entity element, ID attribute]
An entity is a sensitive information type, such as a credit card number, that has a well-defined pattern. Each entity has a unique GUID as its ID.
-
+ ### Name the entity and generate its GUID
-1. In your XML editor of choice, add the Rules and Entity elements.
-2. Add a comment that contains the name of your custom entity; in this example, Employee ID. Later, you'll add the entity name to the localized strings section, and that name is what appears in the UI when you create a policy.
-3. Generate a GUID for your entity. There are several ways to generate GUIDs, but you can do it easily in PowerShell by typing **[guid]::NewGuid()**. Later, you'll also add the entity GUID to the localized strings section.
-
+1. In your XML editor of choice, add the `Rules` and `Entity` elements.
+2. Add a comment that contains the name of your custom entity, such as Employee ID. Later, you'll add the entity name to the localized strings section, and that name appears in the admin center when you create a policy.
+3. Generate a unique GUID for your entity. For example, in Windows PowerShell, you can run the command `[guid]::NewGuid()`. Later, you'll also add the GUID to the localized strings section of the entity.
+ ![XML markup showing Rules and Entity elements.](../media/c46c0209-0947-44e0-ac3a-8fd5209a81aa.png)
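If PowerShell isn't handy, any GUID generator produces a valid value for the entity's ID. For example, this Python sketch (an aside, not part of the procedure above) generates a GUID in the same format as `[guid]::NewGuid()`:

```python
import uuid

# Generate a random version-4 GUID, the same format that
# [guid]::NewGuid() produces in PowerShell.
entity_guid = str(uuid.uuid4())
print(entity_guid)
```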
-
+ ## What pattern do you want to match? [Pattern element, IdMatch element, Regex element]
-The pattern contains the list of what the sensitive information type is looking for. This can include regexes, keywords, and built-in functions (which perform tasks like running regexes to find dates or addresses). Sensitive information types can have multiple patterns with unique confidences.
-
-What all of the below patterns have in common is that they all reference the same regular expression, which looks for a nine-digit number (\d{9}) surrounded by white space (\s) … (\s). This regular expression is referenced by the IdMatch element and is the common requirement for all patterns that look for the Employee ID entity. IdMatch is the identifier that the pattern is to trying to match, such as Employee ID or credit card number or social security number. A Pattern element must have exactly one IdMatch element.
-
+The pattern contains the list of what the sensitive information type is looking for. The pattern can include RegExes, keywords, and built-in functions. Functions do tasks like running RegExes to find dates or addresses. Sensitive information types can have multiple patterns with unique confidences.
+
+In the following diagram, all of the patterns reference the same regular expression. This RegEx looks for a nine-digit number `(\d{9})` surrounded by white space `(\s) ... (\s)`. This regular expression is referenced by the `IdMatch` element, and is the common requirement for all patterns that look for the Employee ID entity. `IdMatch` is the identifier that the pattern is trying to match. A `Pattern` element must have exactly one `IdMatch` element.
+ ![XML markup showing multiple Pattern elements referencing single Regex element.](../media/8f3f497b-3b8b-4bad-9c6a-d9abf0520854.png)
-
-When satisfied, a pattern returns a count and confidence level, which you can use in the conditions in your policy. When you add a condition for detecting a sensitive information type to a policy, you can edit the count and confidence level as shown here. Confidence level (also called match accuracy) is explained later in this topic.
-
+
+A satisfied pattern match returns a count and confidence level, which you can use in the conditions in your policy. When you add a condition for detecting a sensitive information type to a policy, you can edit the count and confidence level as shown in the following diagram. Confidence level (also called match accuracy) is explained later in this article.
+ ![Instance count and match accuracy options.](../media/sit-confidence-level.png)
-
-When you create your regular expression, keep in mind that there are potential issues to be aware of. For example, if you write and upload a regex that identifies too much content, this can impact performance. To learn more about these potential issues, see the later section [Potential validation issues to be aware of](#potential-validation-issues-to-be-aware-of).
-
+
+Regular expressions are powerful, so there are issues that you need to know about. For example, a RegEx that identifies too much content can affect performance. To learn more about these issues, see the [Potential validation issues to be aware of](#potential-validation-issues-to-be-aware-of) section later in this article.
+ ## Do you want to require additional evidence? [Match element, minCount attribute]
-In addition to the IdMatch, a pattern can use the Match element to require additional supporting evidence, such as a keyword, regex, date, or address.
-
-A Pattern can include multiple Match elements; they can be included directly in the Pattern element or combined by using the Any element. Match elements are joined by an implicit AND operator; all Match elements must be satisfied for the pattern to be matched. You can use the Any element to introduce AND or OR operators (more on that in a later section).
-
-You can use the optional minCount attribute to specify how many instances of a match need to be found for each of the Match elements. For example, you can specify that a pattern is satisfied only when at least two keywords from a keyword list are found.
-
+In addition to `IdMatch`, a pattern can use the `Match` element to require additional supporting evidence, such as a keyword, RegEx, date, or address.
+
+A `Pattern` might include multiple `Match` elements:
+
+- Directly in the `Pattern` element.
+- Combined by using the `Any` element.
+
+`Match` elements are joined by an implicit AND operator. In other words, all `Match` elements must be satisfied for the pattern to be matched.
+
+You can use the `Any` element to introduce AND or OR operators. The `Any` element is described later in this article.
+
+You can use the optional `minCount` attribute to specify how many instances of a match need to be found for each `Match` element. For example, you can specify that a pattern is satisfied only when at least two keywords from a keyword list are found.
+ ![XML markup showing Match element with minOccurs attribute.](../media/607f6b5e-2c7d-43a5-a131-a649f122e15a.png)
-
+ ### Keywords [Keyword, Group, and Term elements, matchStyle and caseSensitive attributes]
-When you identify sensitive information, like an employee ID, you often want to require keywords as corroborative evidence. For example, in addition to matching a nine-digit number, you may want to look for words like "card", "badge", or "ID". To do this, you use the Keyword element. The Keyword element has an ID attribute that can be referenced by multiple Match elements in multiple patterns or entities.
-
-Keywords are included as a list of Term elements in a Group element. The Group element has a matchStyle attribute with two possible values:
-
-- **matchStyle="word"** Word match identifies whole words surrounded by white space or other delimiters. You should always use word unless you need to match parts of words or match words in Asian languages.
-
-- **matchStyle="string"** String match identifies strings no matter what they're surrounded by. For example, "id" will match "bid" and "idea". Use string only when you need to match Asian words or if your keyword may be included as part of other strings.
-
-Finally, you can use the caseSensitive attribute of the Term element to specify that the content must match the keyword exactly, including lower- and upper-case letters.
-
+As described earlier, identifying sensitive information often requires additional keywords as corroborative evidence. For example, in addition to matching a nine-digit number, you can look for words like "card", "badge", or "ID" using the Keyword element. The `Keyword` element has an `ID` attribute that can be referenced by multiple `Match` elements in multiple patterns or entities.
+
+Keywords are included as a list of `Term` elements in a `Group` element. The `Group` element has a `matchStyle` attribute with two possible values:
+
+- **matchStyle="word"**: A word match identifies whole words surrounded by white space or other delimiters. You should always use **word** unless you need to match parts of words or words in Asian languages.
+
+- **matchStyle="string"**: A string match identifies strings no matter what they're surrounded by. For example, "ID" will match "bid" and "idea". Use `string` only when you need to match Asian words or if your keyword might be included in other strings.
+
+Finally, you can use the `caseSensitive` attribute of the `Term` element to specify that the content must match the keyword exactly, including lower-case and upper-case letters.
+ ![XML markup showing Match elements referencing keywords.](../media/e729ba27-dec6-46f4-9242-584c6c12fd85.png)
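The difference between the two match styles can be sketched with ordinary regular expressions (a Python illustration of the concept, not the service's matching engine):

```python
import re

text = "She made a bid on the idea."

# matchStyle="word": whole words only, delimited by word boundaries.
word_matches = re.findall(r"\bid\b", text, re.IGNORECASE)

# matchStyle="string": matches anywhere, even inside other words.
string_matches = re.findall(r"id", text, re.IGNORECASE)

print(word_matches)    # no whole-word "id" in the sentence
print(string_matches)  # "id" found inside "bid" and "idea"
```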
-
+ ### Regular expressions [Regex element]
-In this example, the employee ID entity already uses the IdMatch element to reference a regex for the pattern: a nine-digit number surrounded by whitespace. In addition, a pattern can use a Match element to reference an additional Regex element to identify corroborative evidence, such as a five- or nine-digit number in the format of a US zip code.
-
+In this example, the employee `ID` entity already uses the `IdMatch` element to reference a regular expression for the pattern: a nine-digit number surrounded by whitespace. In addition, a pattern can use a `Match` element to reference an additional `Regex` element to identify corroborative evidence, such as a five-digit or nine-digit number in the format of a US postal code.
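To make the corroboration idea concrete, here's a local Python sketch (the ZIP-code regex is a simplified, hypothetical stand-in for what you'd put in a `Regex` element):

```python
import re

id_regex = re.compile(r"\s(\d{9})\s")            # the IdMatch pattern
zip_regex = re.compile(r"\b\d{5}(?:-\d{4})?\b")  # simplified US ZIP code

with_zip = "Employee 123456789 relocated to Redmond, WA 98052."
without_zip = "Employee 123456789 called yesterday."

def evidence(text):
    """Return which pieces of evidence the text satisfies."""
    return bool(id_regex.search(text)), bool(zip_regex.search(text))

print(evidence(with_zip))     # ID plus corroborating ZIP code
print(evidence(without_zip))  # ID only: a lower-confidence match
```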
+ ### Additional patterns such as dates or addresses [built-in functions]
-In addition to the built-in sensitive information types, sensitive information types can also use built-in functions that can identify corroborative evidence such as a US date, EU date, expiration date, or US address. Microsoft 365 does not support uploading your own custom functions, but when you create a custom sensitive information type, your entity can reference the built-in functions.
-
-For example, an employee ID badge has a hire date on it, so this custom entity can use the built-in function `Func_us_date` to identify a date in the format commonly used in the US.
-
+Sensitive information types can also use built-in functions to identify corroborating evidence. For example, a US date, EU date, expiration date, or US address. Microsoft 365 doesn't support uploading your own custom functions. But when you create a custom sensitive information type, your entity can reference built-in functions.
+
+For example, an employee ID badge has a hire date on it, so this custom entity can use the built-in `Func_us_date` function to identify a date in the format that's commonly used in the US.
+ For more information, see [What the DLP functions look for](what-the-dlp-functions-look-for.md).
-
+ ![XML markup showing Match element referencing built-in function.](../media/dac6eae3-9c52-4537-b984-f9f127cc9c33.png)
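A minimal sketch of a pattern that references the built-in function; the `Regex_employee_id` id is an illustrative assumption:

```xml
<Pattern confidenceLevel="85">
  <IdMatch idRef="Regex_employee_id" />
  <!-- built-in function that identifies dates in common US formats,
       here used as corroborative evidence for the hire date -->
  <Match idRef="Func_us_date" />
</Pattern>
```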
-
+ ## Different combinations of evidence [Any element, minMatches and maxMatches attributes]
-In a Pattern element, all IdMatch and Match elements are joined by an implicit AND operator: all of the matches must be satisfied before the pattern can be satisfied. However, you can create more flexible matching logic by using the Any element to group Match elements. For example, you can use the Any element to match all, none, or an exact subset of its children Match elements.
-
-The Any element has optional minMatches and maxMatches attributes that you can use to define how many of the children Match elements must be satisfied before the pattern is matched. Note that these attributes define the number of Match elements that must be satisfied, not the number of instances of evidence found for the matches. To define a minimum number of instances for a specific match, such as two keywords from a list, use the minCount attribute for a Match element (see above).
-
+In a `Pattern` element, all `IdMatch` and `Match` elements are joined by an implicit AND operator. In other words, all of the matches must be satisfied before the pattern can be satisfied.
+
+You can create more flexible matching logic by using the `Any` element to group `Match` elements. For example, you can use the `Any` element to match all, none, or an exact subset of its child `Match` elements.
+
+The `Any` element has optional `minMatches` and `maxMatches` attributes that you can use to define how many of the child `Match` elements must be satisfied before the pattern is matched. These attributes define the *number* of `Match` elements, not the number of instances of evidence found for the matches. To define a minimum number of instances for a specific match, such as two keywords from a list, use the `minCount` attribute for a `Match` element (see above).
+ ### Match at least one child Match element
-If you want to require that only a minimum number of Match elements must be met, you can use the minMatches attribute. In effect, these Match elements are joined by an implicit OR operator. This Any element is satisfied if a US-formatted date or a keyword from either list is found.
+To require only a minimum number of `Match` elements, you can use the `minMatches` attribute. In effect, these `Match` elements are joined by an implicit OR operator. This `Any` element is satisfied if a US-formatted date or a keyword from either list is found.
```xml <Any minMatches="1" >
If you want to require that only a minimum number of Match elements must be met,
<Match idRef="Keyword_badge" /> </Any> ```
-
+ ### Match an exact subset of any children Match elements
-If you want to require that an exact number of Match elements must be met, you can set minMatches and maxMatches to the same value. This Any element is satisfied only if exactly one date or keyword is found; any more than that, and the pattern won't be matched.
+To require an exact number of `Match` elements, set `minMatches` and `maxMatches` to the same value. This `Any` element is satisfied only if exactly one date or keyword is found. If there are any more matches, the pattern isn't matched.
```xml <Any minMatches="1" maxMatches="1" >
If you want to require that an exact number of Match elements must be met, you c
<Match idRef="Keyword_badge" /> </Any> ```
-
+ ### Match none of the child Match elements To require the absence of specific evidence for a pattern to be satisfied, set both minMatches and maxMatches to 0. This can be useful if you have a keyword list or other evidence that's likely to indicate a false positive.
-
+ For example, the employee ID entity looks for the keyword "card" because it might refer to an "ID card". However, if card appears only in the phrase "credit card", "card" in this content is unlikely to mean "ID card". So you can add "credit card" as a keyword to a list of terms that you want to exclude from satisfying the pattern.
-
+ ```xml <Any minMatches="0" maxMatches="0" > <Match idRef="Keyword_false_positives_local" />
If you want to match a number of unique terms, use the *uniqueResults* parameter
</Pattern> ```
-In this example, a pattern is defined for salary revision using at least three unique matches.
-
+In this example, a pattern is defined for salary revision using at least three unique matches.
+ ## How close to the entity must the other evidence be? [patternsProximity attribute] Your sensitive information type is looking for a pattern that represents an employee ID, and as part of that pattern it's also looking for corroborative evidence like a keyword such as "ID". It makes sense that the closer together this evidence is, the more likely the pattern is to be an actual employee ID. You can determine how close other evidence in the pattern must be to the entity by using the required patternsProximity attribute of the Entity element.
-
+ ![XML markup showing patternsProximity attribute.](../media/e97eb7dc-b897-4e11-9325-91c742d9839b.png)
-
+ For each pattern in the entity, the patternsProximity attribute value defines the distance (in Unicode characters) from the IdMatch location for all other Matches specified for that Pattern. The proximity window is anchored by the IdMatch location, with the window extending to the left and right of the IdMatch.
-
+ ![Diagram of proximity window.](../media/b593dfd1-5eef-4d79-8726-a28923f7c31e.png)
-
+ The example below illustrates how the proximity window affects pattern matching where the IdMatch element for the employee ID custom entity requires at least one corroborating match of a keyword or date. Only ID1 matches because for ID2 and ID3, either no corroborating evidence or only partial corroborating evidence is found within the proximity window.
-
+ ![Diagram of corroborative evidence and proximity window.](../media/dc68e38e-dfa1-45b8-b204-89c8ba121f96.png)
-
-Note that for email, the message body and each attachment are treated as separate items. This means that the proximity window does not extend beyond the end of each of these items. For each item (attachment or body), both the idMatch and corroborative evidence needs to reside in that item.
-
+
+Note that for email, the message body and each attachment are treated as separate items. This means that the proximity window doesn't extend beyond the end of each of these items. For each item (attachment or body), both the idMatch and the corroborative evidence need to reside in that item.
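A sketch of an entity using the required patternsProximity attribute; the ". . ." placeholder follows the template convention used later in this article, and the 300-character window is an illustrative assumption:

```xml
<!-- corroborative evidence must occur within 300 Unicode characters,
     on either side, of the IdMatch location -->
<Entity id=". . ." patternsProximity="300" recommendedConfidence="75">
  <Pattern confidenceLevel="75">
    <IdMatch idRef="Regex_employee_id" />
    <Match idRef="Keyword_badge" />
  </Pattern>
</Entity>
```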
+ ## What are the right confidence levels for different patterns? [confidenceLevel attribute, recommendedConfidence attribute] The more evidence that a pattern requires, the more confidence you have that an actual entity (such as employee ID) has been identified when the pattern is matched. For example, you have more confidence in a pattern that requires a nine-digit ID number, hire date, and keyword in close proximity, than you do in a pattern that requires only a nine-digit ID number.
-
+ The Pattern element has a required confidenceLevel attribute. You can think of the value of confidenceLevel (an integer between 1 and 100) as a unique ID for each pattern in an entity: the patterns in an entity must have different confidence levels that you assign. The precise value of the integer doesn't matter; simply pick numbers that make sense to your compliance team. After you upload your custom sensitive information type and then create a policy, you can reference these confidence levels in the conditions of the rules that you create.
-
+ ![XML markup showing Pattern elements with different values for confidenceLevel attribute.](../media/sit-xml-markedup-2.png)
-
-In addition to confidenceLevel for each Pattern, the Entity has a recommendedConfidence attribute. The recommended confidence attribute can be thought of as the default confidence level for the rule. When you create a rule in a policy, if you don't specify a confidence level for the rule to use, that rule will match based on the recommended confidence level for the entity. Please note that the recommendedConfidence attribute is mandatory for each Entity ID in the Rule Package, if missing you won't be able to save policies that use the Sensitive Information Type.
-
+
+In addition to confidenceLevel for each Pattern, the Entity has a recommendedConfidence attribute. The recommended confidence can be thought of as the default confidence level for the rule. When you create a rule in a policy, if you don't specify a confidence level for the rule to use, that rule will match based on the recommended confidence level for the entity. Note that the recommendedConfidence attribute is mandatory for each Entity ID in the rule package; if it's missing, you won't be able to save policies that use the sensitive information type.
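A sketch of how the two attributes fit together; the specific confidence values and the referenced ids are illustrative assumptions:

```xml
<Entity id=". . ." patternsProximity="300" recommendedConfidence="75">
  <!-- weaker evidence: ID number alone, lower confidence -->
  <Pattern confidenceLevel="65">
    <IdMatch idRef="Regex_employee_id" />
  </Pattern>
  <!-- stronger evidence: ID number plus keyword and date, higher confidence -->
  <Pattern confidenceLevel="85">
    <IdMatch idRef="Regex_employee_id" />
    <Match idRef="Keyword_badge" />
    <Match idRef="Func_us_date" />
  </Pattern>
</Entity>
```

A rule that doesn't specify a confidence level would match at the entity's recommendedConfidence of 75 in this sketch.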
+ ## Do you want to support other languages in the UI of the Compliance center? [LocalizedStrings element] If your compliance team uses the Microsoft 365 Compliance center to create policies in different locales and in different languages, you can provide localized versions of the name and description of your custom sensitive information type. When your compliance team uses Microsoft 365 in a language that you support, they'll see the localized name in the UI.
-
+ ![Instance count and match accuracy configuration.](../media/11d0b51e-7c3f-4cc6-96d8-b29bcdae1aeb.png)
-
+ The Rules element must contain a LocalizedStrings element, which contains a Resource element that references the GUID of your custom entity. In turn, each Resource element contains one or more Name and Description elements that each use the langcode attribute to provide a localized string for a specific language.
-
+ ![XML markup showing contents of LocalizedStrings element.](../media/a96fc34a-b93d-498f-8b92-285b16a7bbe6.png)
-
+ Note that you use localized strings only for how your custom sensitive information type appears in the UI of the Compliance center. You can't use localized strings to provide different localized versions of a keyword list or regular expression.
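A sketch of the LocalizedStrings markup; the ". . ." placeholder stands in for your entity's GUID, and the German strings are illustrative assumptions:

```xml
<LocalizedStrings>
  <!-- idRef references the GUID of your custom entity -->
  <Resource idRef=". . .">
    <Name default="true" langcode="en-us">Employee ID</Name>
    <Description default="true" langcode="en-us">Detects employee ID badges</Description>
    <Name langcode="de-de">Mitarbeiter-ID</Name>
    <Description langcode="de-de">Erkennt Mitarbeiterausweise</Description>
  </Resource>
</LocalizedStrings>
```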
-
+ ## Other rule package markup [RulePack GUID] Finally, the beginning of each RulePackage contains some general information that you need to fill in. You can use the following markup as a template and replace the ". . ." placeholders with your own info.
-
+ Most importantly, you'll need to generate a GUID for the RulePack. Above, you generated a GUID for the entity; this is a second GUID for the RulePack. There are several ways to generate GUIDs, but you can do it easily in PowerShell by typing `[guid]::NewGuid()`.
-
+ The Version element is also important. When you upload your rule package for the first time, Microsoft 365 notes the version number. Later, if you update the rule package and upload a new version, make sure to update the version number or Microsoft 365 won't deploy the rule package.
-
+ ```xml <?xml version="1.0" encoding="utf-16"?> <RulePackage xmlns="http://schemas.microsoft.com/office/2011/mce"> <RulePack id=". . ."> <Version major="1" minor="0" build="0" revision="0" />
- <Publisher id=". . ." />
+ <Publisher id=". . ." />
<Details defaultLangCode=". . ."> <LocalizedDetails langcode=" . . . "> <PublisherName>. . .</PublisherName>
The Version element is also important. When you upload your rule package for the
</LocalizedDetails> </Details> </RulePack>
-
+ <Rules> . . . </Rules> </RulePackage>- ``` When complete, your RulePack element should look like this.
-
+ ![XML markup showing RulePack element.](../media/fd0f31a7-c3ee-43cd-a71b-6a3813b21155.png) ## Validators
-Microsoft 365 exposes function processors for commonly used SITs as validators. Here's a list of them.
-
-### List of validators currently available
--- Func_credit_card-- Func_ssn-- Func_unformatted_ssn-- Func_randomized_formatted_ssn-- Func_randomized_unformatted_ssn-- Func_aba_routing-- Func_south_africa_identification_number-- Func_brazil_cpf-- Func_iban-- Func_brazil_cnpj-- Func_swedish_national_identifier-- Func_india_aadhaar-- Func_uk_nhs_number-- Func_Turkish_National_Id-- Func_australian_tax_file_number-- Func_usa_uk_passport-- Func_canadian_sin-- Func_formatted_itin-- Func_unformatted_itin-- Func_dea_number_v2-- Func_dea_number-- Func_japanese_my_number_personal-- Func_japanese_my_number_corporate-
-This gives you the ability to define your own regex and validate them. To use validators, define your own regex and while defining the regex use the validator property to add the function processor of your choice. Once defined, you can use this regex in an SIT.
+Microsoft 365 exposes function processors for commonly used SITs as validators. Here's a list of them.
+
+### List of currently available validators
+
+- `Func_credit_card`
+- `Func_ssn`
+- `Func_unformatted_ssn`
+- `Func_randomized_formatted_ssn`
+- `Func_randomized_unformatted_ssn`
+- `Func_aba_routing`
+- `Func_south_africa_identification_number`
+- `Func_brazil_cpf`
+- `Func_iban`
+- `Func_brazil_cnpj`
+- `Func_swedish_national_identifier`
+- `Func_india_aadhaar`
+- `Func_uk_nhs_number`
+- `Func_Turkish_National_Id`
+- `Func_australian_tax_file_number`
+- `Func_usa_uk_passport`
+- `Func_canadian_sin`
+- `Func_formatted_itin`
+- `Func_unformatted_itin`
+- `Func_dea_number_v2`
+- `Func_dea_number`
+- `Func_japanese_my_number_personal`
+- `Func_japanese_my_number_corporate`
+
+This gives you the ability to define your own regular expressions and validate them. To use validators, define your own regular expression and use the `Validator` property to add the function processor of your choice. Once defined, you can use this regular expression in an SIT.
In the example below, a regular expression, Regex_credit_card_AdditionalDelimiters, is defined for credit cards. It's then validated against the built-in credit card checksum by using Func_credit_card as a validator.
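A hedged sketch of that arrangement; the validator id and the regular expression itself are illustrative assumptions, not the exact markup shipped with the product:

```xml
<!-- route the custom regex through the built-in credit card checksum -->
<Validators id="func_credit_card_validator">
  <Validator type="Func_credit_card"></Validator>
</Validators>
<Regex id="Regex_credit_card_AdditionalDelimiters"
       validators="func_credit_card_validator">\b([0-9]{4}[ -][0-9]{4}[ -][0-9]{4}[ -][0-9]{4})\b</Regex>
```

A sixteen-digit string that matches the regex but fails the Luhn checksum would then not count as a match.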
Microsoft 365 provides two generic validators:
### Checksum validator
-In this example, a checksum validator for employee ID is defined to validate the regex for EmployeeID.
+In this example, a checksum validator for employee ID is defined to validate the RegEx for EmployeeID.
```xml <Validators id="EmployeeIDChecksumValidator">
In this example, a checksum validator for employee ID is defined to validate the
### Date Validator
+In this example, a date validator is defined for a regular expression, part of which is a date.
+In this example, a date validator is defined for a RegEx part of which is date.
```xml
<Validators id="date_validator_1">
    <Validator type="DateSimple">
        <Param name="Pattern">DDMMYYYY</Param> <!-- supported patterns: DDMMYYYY, MMDDYYYY, YYYYDDMM, YYYYMMDD, DDMMYY, MMDDYY, YYDDMM, YYMMDD -->
    </Validator>
</Validators>
<Regex id="date_regex_1" validators="date_validator_1">\d{8}</Regex>
```
-
+ ## Changes for Exchange Online Previously, you might have used Exchange Online PowerShell to import your custom sensitive information types for DLP. Now your custom sensitive information types can be used in both the <a href="https://go.microsoft.com/fwlink/p/?linkid=2059104" target="_blank">Exchange admin center</a> and the Compliance center. As part of this improvement, you should use Compliance center PowerShell to import your custom sensitive information types; you can't import them from Exchange PowerShell anymore. Your custom sensitive information types will continue to work just like before; however, it may take up to one hour for changes made to custom sensitive information types in the Compliance center to appear in the Exchange admin center.
-
-Note that in the Compliance center, you use the **[New-DlpSensitiveInformationTypeRulePackage](/powershell/module/exchange/new-dlpsensitiveinformationtyperulepackage)** cmdlet to upload a rule package. (Previously, in the Exchange admin center, you used the **ClassificationRuleCollection**` cmdlet.)
-
+
+Note that in the Compliance center, you use the **[New-DlpSensitiveInformationTypeRulePackage](/powershell/module/exchange/new-dlpsensitiveinformationtyperulepackage)** cmdlet to upload a rule package. (Previously, in the Exchange admin center, you used the **ClassificationRuleCollection** cmdlet.)
+ ## Upload your rule package To upload your rule package, do the following steps:
-
+ 1. Save it as an .xml file with Unicode encoding.
-
+ 2. [Connect to Compliance center PowerShell](/powershell/exchange/exchange-online-powershell)
-
+ 3. Use the following syntax: ```powershell
- New-DlpSensitiveInformationTypeRulePackage -FileData (Get-Content -Path "PathToUnicodeXMLFile" -Encoding Byte -ReadCount 0)
+ New-DlpSensitiveInformationTypeRulePackage -FileData ([System.IO.File]::ReadAllBytes('PathToUnicodeXMLFile'))
``` This example uploads the Unicode XML file named MyNewRulePack.xml from C:\My Documents. ```powershell
- New-DlpSensitiveInformationTypeRulePackage -FileData (Get-Content -Path "C:\My Documents\MyNewRulePack.xml" -Encoding Byte -ReadCount 0)
+ New-DlpSensitiveInformationTypeRulePackage -FileData ([System.IO.File]::ReadAllBytes('C:\My Documents\MyNewRulePack.xml'))
``` For detailed syntax and parameter information, see [New-DlpSensitiveInformationTypeRulePackage](/powershell/module/exchange/new-dlpsensitiveinformationtyperulepackage).
To upload your rule package, do the following steps:
```powershell Get-DlpSensitiveInformationTypeRulePackage
- ```
+ ```
- Run the [Get-DlpSensitiveInformationType](/powershell/module/exchange/get-dlpsensitiveinformationtype) cmdlet to verify the sensitive information type is listed: ```powershell Get-DlpSensitiveInformationType
- ```
+ ```
For custom sensitive information types, the Publisher property value will be something other than Microsoft Corporation.
To upload your rule package, do the following steps:
```powershell Get-DlpSensitiveInformationType -Identity "<Name>" ```
-
+ ## Potential validation issues to be aware of When you upload your rule package XML file, the system validates the XML and checks for known bad patterns and obvious performance issues. Here are some known issues that the validation checks for. A regular expression:
-
+ - Lookbehind assertions in the regular expression should be of fixed length only. Variable length assertions will result in errors.
- For example, this regex expression will not pass validation `"(?<=^|\s|_)"` because the first option in this is `^` which has a zero length while the next two options `\s` and `_` have a length of one. An alternate way to write this regular expression is `"(?:^|(?<=\s|_))"`.
-
-- Cannot begin or end with alternator "|", which matches everything because it's considered an empty match.
-
- For example, "|a" or "b|" will not pass validation.
-
-- Cannot begin or end with a ".{0,m}" pattern, which has no functional purpose and only impairs performance.
-
- For example, ".{0,50}ASDF" or "ASDF.{0,50}" will not pass validation.
-
-- Cannot have ".{0,m}" or ".{1,m}" in groups, and cannot have ".\*" or ".+" in groups.
-
- For example, "(.{0,50000})" will not pass validation.
-
-- Cannot have any character with "{0,m}" or "{1,m}" repeaters in groups.
-
- For example, "(a\*)" will not pass validation.
-
-- Cannot begin or end with ".{1,m}"; instead, use just "."
-
- For example, ".{1,m}asdf" will not pass validation; instead, use just ".asdf".
-
-- Cannot have an unbounded repeater (such as "\*" or "+") on a group.
-
- For example, "(xx)\*" and "(xx)+" will not pass validation.
-
+ For example, `"(?<=^|\s|_)"` will not pass validation. The first pattern (`^`) is zero length, while the next two patterns (`\s` and `_`) have a length of one. An alternate way to write this regular expression is `"(?:^|(?<=\s|_))"`.
+
+- Cannot begin or end with alternator `|`, which matches everything because it's considered an empty match.
+
+ For example, `|a` or `b|` will not pass validation.
+
+- Cannot begin or end with a `.{0,m}` pattern, which has no functional purpose and only impairs performance.
+
+ For example, `.{0,50}ASDF` or `ASDF.{0,50}` will not pass validation.
+
+- Cannot have `.{0,m}` or `.{1,m}` in groups, and cannot have `.\*` or `.+` in groups.
+
+ For example, `(.{0,50000})` will not pass validation.
+
+- Cannot have any character with `{0,m}` or `{1,m}` repeaters in groups.
+
+ For example, `(a\*)` will not pass validation.
+
+- Cannot begin or end with `.{1,m}`; instead, use `.`.
+
+ For example, `.{1,m}asdf` will not pass validation. Instead, use `.asdf`.
+
+- Cannot have an unbounded repeater (such as `*` or `+`) on a group.
+
+ For example, `(xx)\*` and `(xx)+` will not pass validation.
+ - Keywords have a maximum length of 50 characters. If you have a keyword within a group that exceeds this limit, a suggested solution is to create the group of terms as a [Keyword Dictionary](./create-a-keyword-dictionary.md) and reference the GUID of the keyword dictionary within the XML structure as part of the entity for Match or idMatch in the file. - Each custom sensitive information type can have a maximum of 2,048 keywords in total.
When you upload your rule package XML file, the system validates the XML and che
- When using the PowerShell cmdlet, there is a maximum return size for the deserialized data of approximately 1 megabyte. This limit affects the size of your rule pack XML file. As a suggested limit, keep the uploaded file to a maximum of 770 kilobytes for consistent, error-free processing. -- The XML structure does not require formatting characters such as spaces, tabs, or carriage return/linefeed entries. Take note of this when optimizing for space on uploads. Tools such as Microsoft Visual Code provide join line features to compact the XML file.
-
+- The XML structure doesn't require formatting characters such as spaces, tabs, or carriage return/linefeed entries. Take note of this when optimizing for space on uploads. Tools such as Visual Studio Code provide join-line features to compact the XML file.
+ If a custom sensitive information type contains an issue that may affect performance, it won't be uploaded and you may see one of these error messages:
-
-- **Generic quantifiers which match more content than expected (e.g., '+', '\*')**
-
-- **Lookaround assertions**
-
-- **Complex grouping in conjunction with general quantifiers**
-
+
+- `Generic quantifiers which match more content than expected (e.g., '+', '*')`
+
+- `Lookaround assertions`
+
+- `Complex grouping in conjunction with general quantifiers`
+ ## Recrawl your content to identify the sensitive information Microsoft 365 uses the search crawler to identify and classify sensitive information in site content. Content in SharePoint Online and OneDrive for Business sites is recrawled automatically whenever it's updated. But to identify your new custom type of sensitive information in all existing content, that content must be recrawled.
-
-In Microsoft 365, you can't manually request a recrawl of an entire tenant, but you can do this for a site collection, list, or library; see [Manually request crawling and re-indexing of a site, a library or a list](/sharepoint/crawl-site-content).
-
+
+In Microsoft 365, you can't manually request a recrawl of an entire organization, but you can manually request a recrawl for a site collection, list, or library. For more information, see [Manually request crawling and reindexing of a site, a library or a list](/sharepoint/crawl-site-content).
+ ## Reference: Rule package XML schema definition You can copy this markup, save it as an XSD file, and use it to validate your rule package XML file.
-
+ ```xml <?xml version="1.0" encoding="utf-8"?> <xs:schema xmlns:mce="http://schemas.microsoft.com/office/2011/mce"
You can copy this markup, save it as an XSD file, and use it to validate your ru
## More information - [Learn about data loss prevention](dlp-learn-about-dlp.md)- - [Sensitive information type entity definitions](sensitive-information-type-entity-definitions.md)- - [What the DLP functions look for](what-the-dlp-functions-look-for.md)
compliance Create A Keyword Dictionary https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/create-a-keyword-dictionary.md
audience: Admin Previously updated : Last updated : ms.localizationpriority: high-+ - M365-security-compliance
+search.appverid:
- MOE150 - MET150
$rawFile = $env:TEMP + "\rule.xml"
$kd = Get-DlpKeywordDictionary $ruleCollections = Get-DlpSensitiveInformationTypeRulePackage
-Set-Content -path $rawFile -Encoding Byte -Value $ruleCollections.SerializedClassificationRuleCollection
+[System.IO.File]::WriteAllBytes((Resolve-Path $rawFile), $ruleCollections.SerializedClassificationRuleCollection)
$UnicodeEncoding = New-Object System.Text.UnicodeEncoding $FileContent = [System.IO.File]::ReadAllText((Resolve-Path $rawFile), $unicodeEncoding)
Remove-Item $rawFile
## Basic steps to creating a keyword dictionary The keywords for your dictionary could come from various sources, most commonly from a file (such as a .csv or .txt list) imported in the service or by PowerShell cmdlet, from a list you enter directly in the PowerShell cmdlet, or from an existing dictionary. When you create a keyword dictionary, you follow the same core steps:
-
+1. Use the <a href="https://go.microsoft.com/fwlink/p/?linkid=2077149" target="_blank">Microsoft 365 compliance center</a> or connect to **Security &amp; Compliance Center PowerShell**.
-
+ 2. **Define or load your keywords from your intended source**. The wizard and the cmdlet both accept a comma-separated list of keywords to create a custom keyword dictionary, so this step will vary slightly depending on where your keywords come from. Once loaded, they're encoded and converted to a byte array before they're imported.
-
+ 3. **Create your dictionary**. Choose a name and description and create your dictionary. ## Create a keyword dictionary using the Security & Compliance Center
Use the following steps to create and import keywords for a custom dictionary:
11. Select **Add**, then select **Next**. 12. Review and finalize your sensitive info type selections, then select **Finish**.
-
+ ## Create a keyword dictionary from a file using PowerShell
-Often when you need to create a large dictionary, it's to use keywords from a file or a list exported from some other source. In this case, you'll create a keyword dictionary containing a list of inappropriate language to screen in external email. You must first [Connect to Security &amp; Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
-
+Often when you need to create a large dictionary, it's because you want to use keywords from a file or a list exported from some other source. In this case, you'll create a keyword dictionary containing a list of inappropriate language to screen in external email. You must first [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
+ 1. Copy the keywords into a text file and make sure that each keyword is on a separate line.
-
+ 2. Save the text file with Unicode encoding. In Notepad \> **Save As** \> **Encoding** \> **Unicode**.
-
+ 3. Read the file into a variable by running this cmdlet:
-
+ ```powershell
- $fileData = Get-Content <filename> -Encoding Byte -ReadCount 0
+ $fileData = [System.IO.File]::ReadAllBytes('<filename>')
``` 4. Create the dictionary by running this cmdlet:
-
+ ```powershell New-DlpKeywordDictionary -Name <name> -Description <description> -FileData $fileData ```
-
+ ## Using keyword dictionaries in custom sensitive information types and DLP policies Keyword dictionaries can be used as part of the match requirements for a custom sensitive information type, or as a sensitive information type themselves. Both require you to create a [custom sensitive information type](create-a-custom-sensitive-information-type-in-scc-powershell.md). Follow the instructions in the linked article to create a sensitive information type. Once you have the XML, you'll need the GUID identifier for the dictionary to use it.
-
+ ```xml <Entity id="9e5382d0-1b6a-42fd-820e-44e0d3b15b6e" patternsProximity="300" recommendedConfidence="75"> <Pattern confidenceLevel="75">
Keyword dictionaries can be used as part of the match requirements for a custom
</Entity> ```
-To get the identity of your dictionary, run this command and copy the **Identity** property value:
-
+To get the identity of your dictionary, run this command and copy the **Identity** property value:
+ ```powershell Get-DlpKeywordDictionary -Name "Diseases" ``` The output of the command looks like this:
-
+ `RunspaceId : 138e55e7-ea1e-4f7a-b824-79f2c4252255` `Identity : 8d2d44b0-91f4-41f2-94e0-21c1c5b5fc9f` `Name : Diseases`
The output of the command looks like this:
`IsValid : True` `ObjectState : Unchanged` - Paste the identity into your custom sensitive information type's XML and upload it. Now your dictionary will appear in your list of sensitive information types and you can use it right in your policy, specifying how many keywords are required to match.
-
+ ```xml <Entity id="d333c6c2-5f4c-4131-9433-db3ef72a89e8" patternsProximity="300" recommendedConfidence="85"> <Pattern confidenceLevel="85">
Paste the identity into your custom sensitive information type's XML and upload
``` > [!NOTE]
-> Microsoft 365 Information Protection supports double byte character set languages for:
+> Microsoft 365 Information Protection supports double-byte character set languages for:
+>
> - Chinese (simplified) > - Chinese (traditional) > - Korean > - Japanese >
->This support is available for sensitive information types. See, [Information protection support for double byte character sets release notes (preview)](mip-dbcs-relnotes.md) for more information.
+> This support is available for sensitive information types. See [Information protection support for double byte character sets release notes (preview)](mip-dbcs-relnotes.md) for more information.
> [!TIP]
-> To detect patterns containing Chinese/Japanese characters and single byte characters or to detect patterns containing Chinese/Japanese and English, define two variants of the keyword or regex.
+> To detect patterns containing Chinese/Japanese characters and single byte characters or to detect patterns containing Chinese/Japanese and English, define two variants of the keyword or regex.
+>
> - For example, to detect a keyword like "机密的document", use two variants of the keyword; one with a space between the Japanese and English text and another without a space between the Japanese and English text. So, the keywords to be added in the SIT should be "机密的 document" and "机密的document". Similarly, to detect a phrase "東京オリンピック2020", two variants should be used; "東京オリンピック 2020" and "東京オリンピック2020".
-> Along with Chinese/Japanese/double byte characters, if the list of keywords/phrases also contain non Chinese/Japanese words also (like English only), it is recommended to create two dictionaries/keyword lists. One for keywords containing Chinese/Japanese/double byte characters and another one for English only.
-> - For example, if you want to create a keyword dictionary/list with three phrases "Highly confidential", "機密性が高い" and "机密的document", the it you should create two keyword lists.
+>
+> Along with Chinese/Japanese/double-byte characters, if the list of keywords/phrases also contains non-Chinese/Japanese words (such as English-only words), we recommend creating two dictionaries/keyword lists: one for keywords containing Chinese/Japanese/double-byte characters and another for English only.
+>
+> - For example, if you want to create a keyword dictionary/list with three phrases "Highly confidential", "機密性が高い" and "机密的document", you should create two keyword lists:
> 1. Highly confidential
> 2. 機密性が高い, 机密的document and 机密的 document
->
+>
> While creating a regex using a double byte hyphen or a double byte period, make sure to escape those characters just as you would escape a single byte hyphen or period in a regex. Here is a sample regex for reference:
-> - (?<!\d)([4][0-9]{3}[\-?\-\t]*[0-9]{4}
+>
> - `(?<!\d)([4][0-9]{3}[\-?\-\t]*[0-9]{4})`
> We recommend using a string match instead of a word match in a keyword list.
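A quick way to sanity-check a pattern like this before adding it to a sensitive information type is to test it in a local PowerShell session. This sketch uses the sample pattern from the tip above (with the capture group closed); the test strings are made-up sample data, not real numbers:

```powershell
# Local sanity check of the sample pattern; sample strings are fabricated.
$pattern = '(?<!\d)([4][0-9]{3}[\-?\-\t]*[0-9]{4})'
'4111-2345' -match $pattern    # matches: '4' plus three digits, hyphen, four digits
'94111-2345' -match $pattern   # no match: the lookbehind rejects a preceding digit
```

Testing both a matching and a non-matching string helps confirm that the lookbehind behaves as intended before you commit the pattern to a SIT.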
compliance Create Test Tune Dlp Policy https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/create-test-tune-dlp-policy.md
Here's a list of Microsoft Information Protection (MIP) roles that are in previe
- Information Protection Investigator
- Information Protection Reader
-Here's a list of MIP role groups that are in preview. To learn more about the, see [Role groups in the Security & Compliance Center](../security/office-365-security/permissions-in-the-security-and-compliance-center.md#role-groups-in-the-security--compliance-center)
+Here's a list of MIP role groups that are in preview. To learn more, see [Role groups in the Security & Compliance Center](../security/office-365-security/permissions-in-the-security-and-compliance-center.md#role-groups-in-the-security--compliance-center).
- Information Protection
- Information Protection Admins
At the first **Policy Settings** step, just accept the defaults for now. You can
![Options to customize the type of content to protect.](../media/DLP-create-test-tune-default-customization-settings.png)
-After clicking Next,** you'll be presented with an more **Policy Settings** page with more customization options. For a policy that you are just testing, here's where you can start to make some adjustments.
+After clicking **Next**, you'll be presented with another **Policy Settings** page with more customization options. For a policy that you're just testing, here's where you can start to make some adjustments.
- I've turned off policy tips for now, which is a reasonable step to take if you're just testing things out and don't want to display anything to users yet. Policy tips display warnings to users that they're about to violate a DLP policy. For example, an Outlook user will see a warning that the file they've attached contains credit card numbers and will cause their email to be rejected. The goal of policy tips is to stop the non-compliant behavior before it happens.
- I've also decreased the number of instances from 10 to 1, so that this policy will detect any sharing of Australian PII data, not just bulk sharing of the data.
When you're happy that your DLP policy is accurately and effectively detecting s
If you're waiting to see when the policy will take effect, [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell) and run the [Get-DlpCompliancePolicy cmdlet](/powershell/module/exchange/get-dlpcompliancepolicy) to see the DistributionStatus.
-![Running cmdlet in PowerShell.](../media/DLP-create-test-tune-PowerShell.png)
-
+ ```powershell
+ Get-DlpCompliancePolicy "Testing -Australia PII" -DistributionDetail | Select distributionstatus
+ ```
After turning on the DLP policy, you should run some final tests of your own to make sure that the expected policy actions are occurring. If you're trying to test things like credit card data, there are websites online with information on how to generate sample credit card or other personal information that will pass checksums and trigger your policies. Policies that allow user overrides will present that option to the user as part of the policy tip.
compliance Customize A Built In Sensitive Information Type https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/customize-a-built-in-sensitive-information-type.md
description: Learn how to create a custom sensitive information type that will a
# Customize a built-in sensitive information type
-When looking for sensitive information in content, you need to describe that information in what's called a *rule* . Data loss prevention (DLP) includes rules for the most-common sensitive information types that you can use right away. To use these rules, you have to include them in a policy. You might find that you want to adjust these built-in rules to meet your organization's specific needs, and you can do that by creating a custom sensitive information type. This topic shows you how to customize the XML file that contains the existing rule collection to detect a wider range of potential credit-card information.
+When looking for sensitive information in content, you need to describe that information in what's called a *rule*. Data loss prevention (DLP) includes rules for the most common sensitive information types that you can use right away. To use these rules, you have to include them in a policy. You might find that you want to adjust these built-in rules to meet your organization's specific needs, and you can do that by creating a custom sensitive information type. This topic shows you how to customize the XML file that contains the existing rule collection to detect a wider range of potential credit-card information.
You can take this example and apply it to other built-in sensitive information types. For a list of default sensitive information types and XML definitions, see [Sensitive information type entity definitions](sensitive-information-type-entity-definitions.md).
To export the XML, you need to [connect to the Security and Compliance Center vi
$ruleCollections = Get-DlpSensitiveInformationTypeRulePackage
```
-3. Make a formatted XML file with all that data by typing the following. (`Set-content` is the part of the cmdlet that writes the XML to the file.)
+3. Make a formatted XML file with all that data by typing the following.
```powershell
- Set-Content -path C:\custompath\exportedRules.xml -Encoding Byte -Value $ruleCollections.SerializedClassificationRuleCollection
+ [System.IO.File]::WriteAllBytes('C:\custompath\exportedRules.xml', $ruleCollections.SerializedClassificationRuleCollection)
```

> [!IMPORTANT]
Now, you have something that looks similar to the following XML. Because rule pa
## Remove the corroborative evidence requirement from a sensitive information type
-Now that you have a new sensitive information type that you're able to upload to the Security &amp; Compliance Center, the next step is to make the rule more specific. Modify the rule so that it only looks for a 16-digit number that passes the checksum but doesn't require additional (corroborative) evidence, like keywords. To do this, you need to remove the part of the XML that looks for corroborative evidence. Corroborative evidence is very helpful in reducing false positives. In this case there are usually certain keywords or an expiration date near the credit card number. If you remove that evidence, you should also adjust how confident you are that you found a credit card number by lowering the `confidenceLevel`, which is 85 in the example.
+Now that you have a new sensitive information type that you're able to upload to the Security &amp; Compliance Center, the next step is to make the rule more specific. Modify the rule so that it only looks for a 16-digit number that passes the checksum but doesn't require additional (corroborative) evidence, like keywords. To do this, you need to remove the part of the XML that looks for corroborative evidence. Corroborative evidence is very helpful in reducing false positives. In this case there are usually certain keywords or an expiration date near the credit card number. If you remove that evidence, you should also adjust how confident you are that you found a credit card number by lowering the `confidenceLevel`, which is 85 in the example.
```xml
<Entity id="db80b3da-0056-436e-b0ca-1f4cf7080d1f" patternsProximity="300"
Now that you have a new sensitive information type that you're able to upload to
## Look for keywords that are specific to your organization
-You might want to require corroborative evidence but want different or additional keywords, and perhaps you want to change where to look for that evidence. You can adjust the `patternsProximity` to expand or shrink the window for corroborative evidence around the 16-digit number. To add your own keywords, you need to define a keyword list and reference it within your rule. The following XML adds the keywords "company card" and "Contoso card" so that any message that contains those phrases within 150 characters of a credit card number will be identified as a credit card number.
+You might want to require corroborative evidence but want different or additional keywords, and perhaps you want to change where to look for that evidence. You can adjust the `patternsProximity` to expand or shrink the window for corroborative evidence around the 16-digit number. To add your own keywords, you need to define a keyword list and reference it within your rule. The following XML adds the keywords "company card" and "Contoso card" so that any message that contains those phrases within 150 characters of a credit card number will be identified as a credit card number.
```xml
<Rules>
To upload your rule, you need to do the following.
3. In PowerShell, type the following.

   ```powershell
- New-DlpSensitiveInformationTypeRulePackage -FileData (Get-Content -Path "C:\custompath\MyNewRulePack.xml" -Encoding Byte)
+ New-DlpSensitiveInformationTypeRulePackage -FileData ([System.IO.File]::ReadAllBytes('C:\custompath\MyNewRulePack.xml'))
```

> [!IMPORTANT]
- > Make sure that you use the file location where your rule pack is actually stored. `C:\custompath\` is a placeholder.
+ > Make sure that you use the file location where your rule pack is actually stored. `C:\custompath\` is a placeholder.
4. To confirm, type Y, and then press **Enter**.
To start using the new rule to detect sensitive information, you need to add the
These are the definitions for the terms you encountered during this procedure.
-|**Term**|**Definition**|
-|:--|:--|
+<br>
+
+****
+
+|Term|Definition|
+|||
|Entity|Entities are what we call sensitive information types, such as credit card numbers. Each entity has a unique GUID as its ID. If you copy a GUID and search for it in the XML, you'll find the XML rule definition and all the localized translations of that XML rule. You can also find this definition by locating the GUID for the translation and then searching for that GUID.|
-|Functions|The XML file references `Func_credit_card`, which is a function in compiled code. Functions are used to run complex regexes and verify that checksums match for our built-in rules.) Because this happens in the code, some of the variables don't appear in the XML file.|
+|Functions|The XML file references `Func_credit_card`, which is a function in compiled code. Functions are used to run complex regexes and verify that checksums match for our built-in rules. Because this happens in the code, some of the variables don't appear in the XML file.|
|IdMatch|This is the identifier that the pattern is trying to match, for example, a credit card number.|
-|Keyword lists|The XML file also references `keyword_cc_verification` and `keyword_cc_name`, which are lists of keywords from which we are looking for matches within the `patternsProximity` for the entity. These aren't currently displayed in the XML.|
+|Keyword lists|The XML file also references `keyword_cc_verification` and `keyword_cc_name`, which are lists of keywords from which we are looking for matches within the `patternsProximity` for the entity. These aren't currently displayed in the XML.|
|Pattern|The pattern contains the list of what the sensitive type is looking for. This includes keywords, regexes, and internal functions, which perform tasks like verifying checksums. Sensitive information types can have multiple patterns with unique confidences. This is useful when creating a sensitive information type that returns a high confidence if corroborative evidence is found and a lower confidence if little or no corroborative evidence is found.|
|Pattern confidenceLevel|This is the level of confidence that the DLP engine found a match. This level of confidence is associated with a match for the pattern if the pattern's requirements are met. This is the confidence measure you should consider when using Exchange mail flow rules (also known as transport rules).|
-|patternsProximity|When we find what looks like a credit card number pattern, `patternsProximity` is the proximity around that number where we'll look for corroborative evidence.|
-|recommendedConfidence|This is the confidence level we recommend for this rule. The recommended confidence applies to entities and affinities. For entities, this number is never evaluated against the `confidenceLevel` for the pattern. It's merely a suggestion to help you choose a confidence level if you want to apply one. For affinities, the `confidenceLevel` of the pattern must be higher than the `recommendedConfidence` number for a mail flow rule action to be invoked. The `recommendedConfidence` is the default confidence level used in mail flow rules that invokes an action. If you want, you can manually change the mail flow rule to be invoked based off the pattern's confidence level, instead.|
+|patternsProximity|When we find what looks like a credit card number pattern, `patternsProximity` is the proximity around that number where we'll look for corroborative evidence.|
+|recommendedConfidence|This is the confidence level we recommend for this rule. The recommended confidence applies to entities and affinities. For entities, this number is never evaluated against the `confidenceLevel` for the pattern. It's merely a suggestion to help you choose a confidence level if you want to apply one. For affinities, the `confidenceLevel` of the pattern must be higher than the `recommendedConfidence` number for a mail flow rule action to be invoked. The `recommendedConfidence` is the default confidence level used in mail flow rules that invokes an action. If you want, you can manually change the mail flow rule to be invoked based off the pattern's confidence level, instead.|
+|
## For more information

- [Sensitive information type entity definitions](sensitive-information-type-entity-definitions.md)
- [Create a custom sensitive information type](create-a-custom-sensitive-information-type.md)
+- [Learn about data loss prevention](dlp-learn-about-dlp.md)
compliance Document Fingerprinting https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/document-fingerprinting.md
description: "Information workers in your organization handle many kinds of sens
# Document Fingerprinting

Information workers in your organization handle many kinds of sensitive information during a typical day. In the Security &amp; Compliance Center, Document Fingerprinting makes it easier for you to protect this information by identifying standard forms that are used throughout your organization. This topic describes the concepts behind Document Fingerprinting and how to create one by using PowerShell.
-
+
## Basic scenario for Document Fingerprinting

Document Fingerprinting is a Data Loss Prevention (DLP) feature that converts a standard form into a sensitive information type, which you can use in the rules of your DLP policies. For example, you can create a document fingerprint based on a blank patent template and then create a DLP policy that detects and blocks all outgoing patent templates with sensitive content filled in. Optionally, you can set up [policy tips](use-notifications-and-policy-tips.md) to notify senders that they might be sending sensitive information, and the sender should verify that the recipients are qualified to receive the patents. This process works with any text-based forms used in your organization. Additional examples of forms that you can upload include:
-
+
- Government forms
+- Health Insurance Portability and Accountability Act (HIPAA) compliance forms
- Employee information forms for Human Resources departments
- Custom forms created specifically for your organization
You've probably already guessed that documents don't have actual fingerprints, b
> For now, DLP can use document fingerprinting as a detection method in Exchange Online only.

The following example shows what happens if you create a document fingerprint based on a patent template, but you can use any form as a basis for creating a document fingerprint.
-
+
### Example of a patent document matching a document fingerprint of a patent template

![Diagram of document fingerprinting.](../media/Document-Fingerprinting-diagram.png)
-
-The patent template contains the blank fields "Patent title," "Inventors," and "Description" and descriptions for each of those fieldsΓÇöthat's the word pattern. When you upload the original patent template, it's in one of the supported file types and in plain text. DLP converts this word pattern into a document fingerprint, which is a small Unicode XML file containing a unique hash value representing the original text, and the fingerprint is saved as a data classification in Active Directory. (As a security measure, the original document itself isn't stored on the service; only the hash value is stored, and the original document can't be reconstructed from the hash value.) The patent fingerprint then becomes a sensitive information type that you can associate with a DLP policy. After you associate the fingerprint with a DLP policy, DLP detects any outbound emails containing documents that match the patent fingerprint and deals with them according to your organization's policy.
+
+The patent template contains the blank fields "Patent title," "Inventors," and "Description" and descriptions for each of those fields; that's the word pattern. When you upload the original patent template, it's in one of the supported file types and in plain text. DLP converts this word pattern into a document fingerprint, which is a small Unicode XML file containing a unique hash value representing the original text, and the fingerprint is saved as a data classification in Active Directory. (As a security measure, the original document itself isn't stored on the service; only the hash value is stored, and the original document can't be reconstructed from the hash value.) The patent fingerprint then becomes a sensitive information type that you can associate with a DLP policy. After you associate the fingerprint with a DLP policy, DLP detects any outbound emails containing documents that match the patent fingerprint and deals with them according to your organization's policy.
For example, you might want to set up a DLP policy that prevents regular employees from sending outgoing messages containing patents. DLP will use the patent fingerprint to detect patents and block those emails. Alternatively, you might want to allow your legal department to send patents to other organizations because it has a business need for doing so. You can allow specific departments to send sensitive information by creating exceptions for those departments in your DLP policy, or you can allow them to override a policy tip with a business justification.
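One way to express that kind of departmental exception is a mail flow rule. The sketch below is hypothetical: the rule name, classification name, and group address are placeholders, and the cmdlet parameters shown are standard Exchange Online mail flow rule parameters:

```powershell
# Hypothetical example: reject messages matching the patent fingerprint
# classification unless the sender is a member of the Legal group.
New-TransportRule -Name "Block patent templates" `
  -MessageContainsDataClassification @{Name="Contoso Patent Template"} `
  -ExceptIfFromMemberOf "legal@contoso.com" `
  -RejectMessageReasonText "Patent documents can't be sent outside the organization."
```

The `-ExceptIfFromMemberOf` exception is what carves out the legal department while the rule still applies to everyone else.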
-
+
### Supported file types

Document Fingerprinting supports the same file types that are supported in mail flow rules (also known as transport rules). For a list of supported file types, see [Supported file types for mail flow rule content inspection](/exchange/security-and-compliance/mail-flow-rules/inspect-message-attachments#supported-file-types-for-mail-flow-rule-content-inspection). One quick note about file types: neither mail flow rules nor Document Fingerprinting supports the .dotx file type, which can be confusing because that's a template file in Word. When you see the word "template" in this and other Document Fingerprinting topics, it refers to a document that you have established as a standard form, not the template file type.
-
+
#### Limitations of document fingerprinting

Document Fingerprinting won't detect sensitive information in the following cases:
-
+
- Password protected files
- Files that contain only images
- Documents that don't contain all the text from the original form used to create the document fingerprint
Document Fingerprinting won't detect sensitive information in the following case
## Use PowerShell to create a classification rule package based on document fingerprinting
-Note that you can currently create a document fingerprint only by using PowerShell in the Security &amp; Compliance Center. To connect, see [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
+Currently, you can create a document fingerprint only in [Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
DLP uses classification rule packages to detect sensitive content. To create a classification rule package based on a document fingerprint, use the **New-DlpFingerprint** and **New-DlpSensitiveInformationType** cmdlets. Because the results of **New-DlpFingerprint** aren't stored outside the data classification rule, you always run **New-DlpFingerprint** and **New-DlpSensitiveInformationType** or **Set-DlpSensitiveInformationType** in the same PowerShell session. The following example creates a new document fingerprint based on the file C:\My Documents\Contoso Employee Template.docx. You store the new fingerprint as a variable so you can use it with the **New-DlpSensitiveInformationType** cmdlet in the same PowerShell session.
-
+ ```powershell
-$Employee_Template = Get-Content "C:\My Documents\Contoso Employee Template.docx" -Encoding byte -ReadCount 0
+$Employee_Template = ([System.IO.File]::ReadAllBytes('C:\My Documents\Contoso Employee Template.docx'))
$Employee_Fingerprint = New-DlpFingerprint -FileData $Employee_Template -Description "Contoso Employee Template"
```

Now, let's create a new data classification rule named "Contoso Customer Confidential" that uses the document fingerprint of the file C:\My Documents\Contoso Customer Information Form.docx.
-
+ ```powershell
-$Customer_Form = Get-Content "C:\My Documents\Contoso Customer Information Form.docx" -Encoding byte -ReadCount 0
+$Customer_Form = ([System.IO.File]::ReadAllBytes('C:\My Documents\Contoso Customer Information Form.docx'))
$Customer_Fingerprint = New-DlpFingerprint -FileData $Customer_Form -Description "Contoso Customer Information Form"
-New-DlpSensitiveInformationType -Name "Contoso Customer Confidential" -Fingerprints $Customer_Fingerprint -Description "Message contains Contoso customer information."
+New-DlpSensitiveInformationType -Name "Contoso Customer Confidential" -Fingerprints $Customer_Fingerprint -Description "Message contains Contoso customer information."
```
-You can now use the **Get-DlpSensitiveInformationType** cmdlet to find all DLP data classification rule packages, and in this example, "Contoso Customer Confidential" is part of the data classification rule packages list.
-
+You can now use the **Get-DlpSensitiveInformationType** cmdlet to find all DLP data classification rule packages, and in this example, "Contoso Customer Confidential" is part of the data classification rule packages list.
+ Finally, add the "Contoso Customer Confidential" data classification rule package to a DLP policy in the Security &amp; Compliance Center. This example adds a rule to an existing DLP policy named "ConfidentialPolicy". ```powershell
New-DlpComplianceRule -Name "ContosoConfidentialRule" -Policy "ConfidentialPolic
```

You can also use the data classification rule package in mail flow rules in Exchange Online, as shown in the following example. To run this command, you first need to [Connect to Exchange Online PowerShell](/powershell/exchange/connect-to-exchange-online-powershell). Also note that it takes time for the rule package to sync from the Security &amp; Compliance Center to the Exchange admin center.
-
```powershell
New-TransportRule -Name "Notify: External Recipient Contoso confidential" -NotifySender NotifyOnly -Mode Enforce -SentToScope NotInOrganization -MessageContainsDataClassification @{Name="Contoso Customer Confidential"}
```

DLP now detects documents that match the Contoso Customer Form.docx document fingerprint.
-
For syntax and parameter information, see:

- [New-DlpFingerprint](/powershell/module/exchange/New-DlpFingerprint)
compliance Endpoint Dlp Learn About https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/endpoint-dlp-learn-about.md
Onboarding and offboarding are handled via scripts you download from the Device
Use the procedures in [Getting started with Microsoft 365 Endpoint DLP](endpoint-dlp-getting-started.md) to onboard devices.
-If you have onboarded devices through [Microsoft Defender for Endpoint](/windows/security/threat-protection/), those devices will automatically show up in the list of devices.
+If you have onboarded devices through [Microsoft Defender for Endpoint](/windows/security/threat-protection/), those devices will automatically show up in the list of devices. You can **Turn on device monitoring** to use endpoint DLP.
> [!div class="mx-imgBorder"] > ![managed devices list.](../media/endpoint-dlp-learn-about-2-device-list.png)
compliance Endpoint Dlp Using https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/endpoint-dlp-using.md
To find the full path of Mac apps:
2. Choose the **Open Files and Ports** tab.
-3. The app name is located at the end of the full path.
-
+3. For macOS apps, you need the full path name, including the name of the app.
#### Protect sensitive data from cloud synchronization apps
For macOS devices, you must add the full file path. To find the full path of Mac
2. Choose the **Open Files and Ports** tab.
-3. The app name is located at the end of the full path.
+3. For macOS apps, you need the full path name, including the name of the app.
#### Service domains
compliance How Smtp Dane Works https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/how-smtp-dane-works.md
In the example TLSA record, the Selector Field is set to '1' so the Certific
|1 |SHA-256 |The data in the TLSA record is a SHA-256 hash of either the certificate or the SPKI. |
|2 |SHA-512 |The data in the TLSA record is a SHA-512 hash of either the certificate or the SPKI. |
-In the example of TLSA record, the Matching Type Field is set to ΓÇÿ1ΓÇÖ so the Certificate Association Data is a SHA-256 hash of the Subject Public Key Info from the destination server certificate
+In the example TLSA record, the Matching Type Field is set to '1' so the Certificate Association Data is a SHA-256 hash of the Subject Public Key Info from the destination server certificate.
**Certificate Association Data**: Specifies the certificate data that is used for matching against the destination server certificate. This data depends on the Selector Field value and the Matching Type Value.
-In the example of TLSA record, the Certificate Association data is set to ‘abc123…xyz789’. Since the Selector Field value in the example is set to '1’, it would reference the destination server certificate’s public key and the algorithm that is identified to be used with it. And since the Matching Type field value in the example is set to ‘1’, it would reference the SHA-256 hash of the Subject Public Key Info from the destination server certificate.
+In the example TLSA record, the Certificate Association data is set to 'abc123…xyz789'. Since the Selector Field value in the example is set to '1', it would reference the destination server certificate's public key and the algorithm that is identified to be used with it. And since the Matching Type field value in the example is set to '1', it would reference the SHA-256 hash of the Subject Public Key Info from the destination server certificate.
## How can Exchange Online customers use SMTP DANE Outbound?
compliance Import Epic Data https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/import-epic-data.md
The following table describes the parameters to use with this script and their r
|Parameter |Description|
|:-|:-|
|tenantId|This is the Id for your Microsoft 365 organization that you obtained in Step 1. You can also obtain the tenant Id for your organization on the **Overview** blade in the Azure AD admin center. This is used to identify your organization.|
-|appId|This is the Azure AD application Id for the app that you created in Azure AD in Step 1. This is used by Azure AD for authentication when the script attempts to accesses your Microsoft 365 organization.|
+|appId|This is the Azure AD application Id for the app that you created in Azure AD in Step 1. This is used by Azure AD for authentication when the script attempts to access your Microsoft 365 organization.|
|appSecret|This is the Azure AD application secret for the app that you created in Azure AD in Step 1. This is also used for authentication.|
|jobId|This is the job ID for the Epic connector that you created in Step 3. This is used to associate the Epic EHR audit records that are uploaded to the Microsoft cloud with the Epic connector.|
|filePath|This is the file path for the text file (stored on the same system as the script) that you created in Step 2. Try to avoid spaces in the file path; otherwise use single quotation marks.|
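Putting the parameters together, an invocation might look like the following sketch. The script file name is a placeholder (use the name of the script you downloaded), and every value shown is a placeholder for the values you collected in the earlier steps:

```powershell
# Hypothetical invocation; the script name and all parameter values are placeholders.
.\RunEpicConnectorScript.ps1 -tenantId "<tenant-id>" -appId "<app-id>" `
  -appSecret "<app-secret>" -jobId "<job-id>" `
  -filePath "C:\EpicAudit\EpicAuditRecords.txt"
```

Keeping the text file path free of spaces, as the table notes, avoids having to quote the `-filePath` value.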
If you haven't run the script in Step 4, a link to download the script is dis
To make sure the latest audit records from your Epic EHR system are available to tools like the insider risk management solution, we recommend that you schedule the script to run automatically on a daily basis. This also requires that you update the Epic audit record data in the same text file on a similar (if not the same) schedule so that it contains the latest information about patient records access activities by your employees. The goal is to upload the most current audit records so that the Epic connector can make it available to the insider risk management solution.
-You can user the Task Scheduler app in Windows to automatically run the script every day.
+You can use the Task Scheduler app in Windows to automatically run the script every day.
1. On your local computer, click the Windows **Start** button and then type **Task Scheduler**.
You can use the Task Scheduler app in Windows to automatically run the script e
6. Select the **Triggers** tab, click **New**, and then do the following things:
- 1. Under **Settings**, select the **Daily** option, and then choose a date and time to run the script for the first time. The script will every day at the same specified time.
+ 1. Under **Settings**, select the **Daily** option, and then choose a date and time to run the script for the first time. The script will run every day at the same specified time.
2. Under **Advanced settings**, make sure the **Enabled** checkbox is selected.
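If you prefer the command line to the Task Scheduler UI, the same daily trigger can be created with `schtasks`. In this sketch the task name, start time, and script path are placeholders to adjust for your environment:

```powershell
# Hypothetical task name and script path; run from an elevated prompt.
schtasks /Create /TN "Epic connector upload" /SC DAILY /ST 07:00 `
  /TR "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\Scripts\RunEpicConnectorScript.ps1"
```

Scheduling the run shortly after your nightly export of the audit text file helps ensure the connector always uploads the latest records.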
compliance Import Healthcare Data https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/import-healthcare-data.md
The following table describes the parameters to use with this script and their r
|Parameter |Description|
|:-|:-|
|tenantId|This is the Id for your Microsoft 365 organization that you obtained in Step 1. You can also obtain the tenant Id for your organization on the **Overview** blade in the Azure AD admin center. This is used to identify your organization.|
-|appId|This is the Azure AD application Id for the app that you created in Azure AD in Step 1. This is used by Azure AD for authentication when the script attempts to accesses your Microsoft 365 organization.|
+|appId|This is the Azure AD application Id for the app that you created in Azure AD in Step 1. This is used by Azure AD for authentication when the script attempts to access your Microsoft 365 organization.|
|appSecret|This is the Azure AD application secret for the app that you created in Azure AD in Step 1. This is also used for authentication.|
|jobId|This is the job ID for the Healthcare connector that you created in Step 3. This is used to associate the healthcare EHR auditing data that is uploaded to the Microsoft cloud with the Healthcare connector.|
|filePath|This is the file path for the text file (stored on the same system as the script) that you created in Step 2. Try to avoid spaces in the file path; otherwise, use single quotation marks.|
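The tenantId, appId, and appSecret parameters in the table above correspond to the standard Azure AD (OAuth 2.0) client-credentials flow that the script uses to authenticate. As a rough illustration only, here is a Python sketch of how those three values map onto a v2.0 token request; the `scope` value below is an assumption for illustration and is not taken from the connector script itself:

```python
# Sketch: map the connector script's tenantId/appId/appSecret parameters
# onto an Azure AD v2.0 client-credentials token request.
# The scope value is an assumption, not from the actual connector script.

def build_token_request(tenant_id: str, app_id: str, app_secret: str):
    """Return (url, form_body) for an Azure AD client-credentials call."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = {
        "grant_type": "client_credentials",
        "client_id": app_id,          # the appId parameter
        "client_secret": app_secret,  # the appSecret parameter
        "scope": "https://graph.microsoft.com/.default",  # assumed scope
    }
    return url, body

url, body = build_token_request("<tenant-id>", "<app-id>", "<app-secret>")
print(url)
```

The actual connector script handles this for you; the sketch only shows why each of the three identity parameters is required.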
If you haven't run the script in Step 4, a link to download the script is dis
To make sure the latest auditing data from your healthcare EHR system are available to tools like the insider risk management solution, we recommend that you schedule the script to run automatically on a daily basis. This also requires that you update the EHR auditing data in the same text file on a similar (if not the same) schedule so that it contains the latest information about patient records access activities by your employees. The goal is to upload the most current auditing data so that the Healthcare connector can make it available to the insider risk management solution.
-You can user the Task Scheduler app in Windows to automatically run the script every day.
+You can use the Task Scheduler app in Windows to automatically run the script every day.
1. On your local computer, click the Windows **Start** button and then type **Task Scheduler**.
You can user the Task Scheduler app in Windows to automatically run the script e
6. Select the **Triggers** tab, click **New**, and then do the following things:
- 1. Under **Settings**, select the **Daily** option, and then choose a date and time to run the script for the first time. The script will every day at the same specified time.
+ 1. Under **Settings**, select the **Daily** option, and then choose a date and time to run the script for the first time. The script will run every day at the same specified time.
2. Under **Advanced settings**, make sure the **Enabled** checkbox is selected.
compliance Import Hr Data US Government https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/import-hr-data-US-government.md
The next step is to create an HR connector in the Microsoft 365 compliance cente
## Step 4: Run the sample script to upload your HR data
-The last step in setting up an HR connector is to run a sample script that will upload the HR data in the CSV file (that you created in Step 2) to the Microsoft cloud. Specifically, the script uploads the data to the HR connector. After you run the script, the HR connector that you created in Step 3 imports the HR data to your Microsoft 365 organization where it can accessed by other compliance tools, such as the Insider risk management solution. After you run the script, consider scheduling a task to run it automatically on a daily basis so the most current employee termination data is uploaded to the Microsoft cloud. See [Schedule the script to run automatically](#optional-step-6-schedule-the-script-to-run-automatically).
+The last step in setting up an HR connector is to run a sample script that will upload the HR data in the CSV file (that you created in Step 2) to the Microsoft cloud. Specifically, the script uploads the data to the HR connector. After you run the script, the HR connector that you created in Step 3 imports the HR data to your Microsoft 365 organization where it can be accessed by other compliance tools, such as the Insider risk management solution. After you run the script, consider scheduling a task to run it automatically on a daily basis so the most current employee termination data is uploaded to the Microsoft cloud. See [Schedule the script to run automatically](#optional-step-6-schedule-the-script-to-run-automatically).
1. Go to the window that you left open from the previous step to access the GitHub site with the sample script. Alternatively, open the bookmarked site or use the URL that you copied.
The last step in setting up an HR connector is to run a sample script that will
| Parameter | Description |
|:--|:--|
|`tenantId`|The Id for your Microsoft 365 organization that you obtained in Step 1. You can also obtain the tenant Id for your organization on the **Overview** blade in the Azure AD admin center. This is used to identify your organization.|
- |`appId` |The Azure AD application Id for the app that you created in Azure AD in Step 1. This is used by Azure AD for authentication when the script attempts to accesses your Microsoft 365 organization. |
+ |`appId` |The Azure AD application Id for the app that you created in Azure AD in Step 1. This is used by Azure AD for authentication when the script attempts to access your Microsoft 365 organization. |
|`appSecret`|The Azure AD application secret for the app that you created in Azure AD in Step 1. This is also used for authentication.|
|`jobId`|The job ID for the HR connector that you created in Step 3. This is used to associate the HR data that is uploaded to the Microsoft cloud with the HR connector.|
|`csvFilePath`|The file path for the CSV file (stored on the same system as the script) that you created in Step 2. Try to avoid spaces in the file path; otherwise, use single quotation marks.|
If you haven't run the script in Step 4, a link to download the script is dis
To make sure the latest HR data from your organization is available to tools like the insider risk management solution, we recommend that you schedule the script to run automatically on a recurring basis, such as once a day. This also requires that you update the HR data in the CSV file on a similar (if not the same) schedule so that it contains the latest information about employees who leave your organization. The goal is to upload the most current HR data so that the HR connector can make it available to the insider risk management solution.
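The daily CSV refresh described above can be automated with a small export job on the HR side. The Python sketch below overwrites the CSV that the upload script consumes; the column names (`EmailAddress`, `TerminationDate`, `LastWorkingDate`) are hypothetical placeholders, since the real header row must match the column mappings you configured for your HR connector:

```python
import csv

# Hypothetical column names -- the real header row must match the column
# mappings configured for your HR connector.
FIELDS = ["EmailAddress", "TerminationDate", "LastWorkingDate"]

def write_hr_csv(path, rows):
    """Overwrite the CSV consumed by the upload script with fresh rows."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

write_hr_csv("hr_data.csv", [
    {"EmailAddress": "sarad@contoso.com",
     "TerminationDate": "2021-12-01",
     "LastWorkingDate": "2021-12-15"},
])
```

If this export job runs shortly before the scheduled upload script, each upload carries the latest termination data.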
-You can user the Task Scheduler app in Windows to automatically run the script every day.
+You can use the Task Scheduler app in Windows to automatically run the script every day.
1. On your local computer, click the Windows **Start** button and then type **Task Scheduler**.
You can user the Task Scheduler app in Windows to automatically run the script e
6. Select the **Triggers** tab, click **New**, and then do the following things:
- 1. Under **Settings**, select the **Daily** option, and then choose a date and time to run the script for the first time. The script will every day at the same specified time.
+ 1. Under **Settings**, select the **Daily** option, and then choose a date and time to run the script for the first time. The script will run every day at the same specified time.
1. Under **Advanced settings**, make sure the **Enabled** checkbox is selected.
compliance Import Hr Data https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/import-hr-data.md
You can also click **Edit** to change the Azure App ID or the column header name
## Step 4: Run the sample script to upload your HR data
-The last step in setting up an HR connector is to run a sample script that will upload the HR data in the CSV file (that you created in Step 1) to the Microsoft cloud. Specifically, the script uploads the data to the HR connector. After you run the script, the HR connector that you created in Step 3 imports the HR data to your Microsoft 365 organization where it can accessed by other compliance tools, such as the Insider risk management solution. After you run the script, consider scheduling a task to run it automatically on a daily basis so the most current employee termination data is uploaded to the Microsoft cloud. See [Schedule the script to run automatically](#optional-step-6-schedule-the-script-to-run-automatically).
+The last step in setting up an HR connector is to run a sample script that will upload the HR data in the CSV file (that you created in Step 1) to the Microsoft cloud. Specifically, the script uploads the data to the HR connector. After you run the script, the HR connector that you created in Step 3 imports the HR data to your Microsoft 365 organization where it can be accessed by other compliance tools, such as the Insider risk management solution. After you run the script, consider scheduling a task to run it automatically on a daily basis so the most current employee termination data is uploaded to the Microsoft cloud. See [Schedule the script to run automatically](#optional-step-6-schedule-the-script-to-run-automatically).
1. Go to the window that you left open from the previous step to access the GitHub site with the sample script. Alternatively, open the bookmarked site or use the URL that you copied. You can also access the script [here](https://github.com/microsoft/m365-compliance-connector-sample-scripts/blob/main/sample_script.ps1).
The last step in setting up an HR connector is to run a sample script that will
| Parameter | Description |
|:--|:--|
|`tenantId`|This is the Id for your Microsoft 365 organization that you obtained in Step 2. You can also obtain the tenant Id for your organization on the **Overview** blade in the Azure AD admin center. This is used to identify your organization.|
- |`appId` |This is the Azure AD application Id for the app that you created in Azure AD in Step 2. This is used by Azure AD for authentication when the script attempts to accesses your Microsoft 365 organization. |
+ |`appId` |This is the Azure AD application Id for the app that you created in Azure AD in Step 2. This is used by Azure AD for authentication when the script attempts to access your Microsoft 365 organization. |
|`appSecret`|This is the Azure AD application secret for the app that you created in Azure AD in Step 2. This is also used for authentication.|
|`jobId`|This is the job ID for the HR connector that you created in Step 3. This is used to associate the HR data that is uploaded to the Microsoft cloud with the HR connector.|
|`filePath`|This is the file path for the file (stored on the same system as the script) that you created in Step 1. Try to avoid spaces in the file path; otherwise, use single quotation marks.|
If you haven't run the script in Step 4, a link to download the script is dis
To make sure the latest HR data from your organization is available to tools like the insider risk management solution, we recommend that you schedule the script to run automatically on a recurring basis, such as once a day. This also requires that you update the HR data in the CSV file on a similar (if not the same) schedule so that it contains the latest information about employees who leave your organization. The goal is to upload the most current HR data so that the HR connector can make it available to the insider risk management solution.
-You can user the Task Scheduler app in Windows to automatically run the script every day.
+You can use the Task Scheduler app in Windows to automatically run the script every day.
1. On your local computer, click the Windows **Start** button and then type **Task Scheduler**.
You can user the Task Scheduler app in Windows to automatically run the script e
6. Select the **Triggers** tab, click **New**, and then do the following things:
- 1. Under **Settings**, select the **Daily** option, and then choose a date and time to run the script for the first time. The script will every day at the same specified time.
+ 1. Under **Settings**, select the **Daily** option, and then choose a date and time to run the script for the first time. The script will run every day at the same specified time.
1. Under **Advanced settings**, make sure the **Enabled** checkbox is selected.
compliance Import Physical Badging Data https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/import-physical-badging-data.md
Here's an example of a JSON file that conforms to the required schema:
```json
[
 {
- "UserId":"sarad@contoso.com"
+ "UserId":"sarad@contoso.com",
"AssetId":"Mid-Sec-7", "AssetName":"Main Building 1st Floor Mid Section", "EventTime":"2019-07-04T01:57:49",
- "AccessStatus":"Failed",
+ "AccessStatus":"Failed"
 },
 {
"UserId":"pilarp@contoso.com",
"AssetId":"Mid-Sec-7",
"AssetName":"Main Building 1st Floor Mid Section",
"EventTime":"2019-07-04T02:57:49",
- "AccessStatus":"Success",
+ "AccessStatus":"Success"
 }
]
```
The next step is to create a physical badging connector in the Microsoft 365 com
The next step in setting up a physical badging connector is to run a script that will push the physical badging data in the JSON file (that you created in Step 2) to the API endpoint you created in Step 1. We provide a sample script for your reference and you can choose to use it or create your own script to post the JSON file to the API endpoint.
-After you run the script, the JSON file containing the physical badging data is pushed to your Microsoft 365 organization where it can accessed by the insider risk management solution. We recommend you post physical badging data daily. You can do this by automating the process to generate the JSON file every day from your physical badging system and then scheduling the script to push the data.
+After you run the script, the JSON file containing the physical badging data is pushed to your Microsoft 365 organization where it can be accessed by the insider risk management solution. We recommend you post physical badging data daily. You can do this by automating the process to generate the JSON file every day from your physical badging system and then scheduling the script to push the data.
> [!NOTE] > The maximum number of records in the JSON file that can be processed by the API is 50,000 records.
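Before pushing a batch, it can help to check the JSON file against the schema shown above and the 50,000-record limit from the note. A minimal Python sketch; the required field names mirror the sample schema, while any validation beyond what the article states (for example, parsing EventTime as an ISO 8601 timestamp) is an assumption:

```python
import json
from datetime import datetime

REQUIRED_FIELDS = {"UserId", "AssetId", "AssetName", "EventTime", "AccessStatus"}
MAX_RECORDS = 50_000  # API limit stated in the note above

def validate_badging_records(json_text: str) -> list:
    """Parse and sanity-check a physical badging JSON payload."""
    records = json.loads(json_text)
    if len(records) > MAX_RECORDS:
        raise ValueError(f"batch has {len(records)} records; max is {MAX_RECORDS}")
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {i} missing fields: {sorted(missing)}")
        # EventTime should parse as an ISO 8601 timestamp, as in the sample.
        datetime.fromisoformat(rec["EventTime"])
    return records

sample = ('[{"UserId":"sarad@contoso.com","AssetId":"Mid-Sec-7",'
          '"AssetName":"Main Building 1st Floor Mid Section",'
          '"EventTime":"2019-07-04T01:57:49","AccessStatus":"Failed"}]')
print(len(validate_badging_records(sample)))  # → 1
```

Running a check like this before the scheduled push catches malformed batches locally instead of at the API endpoint.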
-1. Go to [this GitHub site](https://github.com/microsoft/m365-hrconnector-sample-scripts/blob/master/upload_termination_records.ps1) to access the sample script.
+1. Go to [this GitHub site](https://github.com/microsoft/m365-physical-badging-connector-sample-scripts/blob/master/push_physical_badging_records.ps1) to access the sample script.
2. Click the **Raw** button to display the script in text view.
After you run the script, the JSON file containing the physical badging data is
|Parameter|Description|
|||
|tenantId|This is the Id for your Microsoft 365 organization that you obtained in Step 1. You can also obtain the tenantId for your organization on the **Overview** blade in the Azure AD admin center. This is used to identify your organization.|
- |appId|This is the Azure AD application Id for the app that you created in Azure AD in Step 1. This is used by Azure AD for authentication when the script attempts to accesses your Microsoft 365 organization.|
+ |appId|This is the Azure AD application Id for the app that you created in Azure AD in Step 1. This is used by Azure AD for authentication when the script attempts to access your Microsoft 365 organization.|
|appSecret|This is the Azure AD application secret for the app that you created in Azure AD in Step 1. This is also used for authentication.|
|jobId|This is the Job Id for the physical badging connector that you created in Step 3. This is used to associate the physical badging data that is pushed to the Microsoft cloud with the physical badging connector.|
|JsonFilePath|This is the file path on the local computer (the one you're using to run the script) for the JSON file that you created in Step 2. This file must follow the sample schema described in Step 3.|
After you run the script, the JSON file containing the physical badging data is
Here's an example of the syntax for the physical badging connector script using actual values for each parameter:

```powershell
- .\PhysicalBadging.ps1 -tenantId d5723623-11cf-4e2e-b5a5-01d1506273g9 -appId 29ee526e-f9a7-4e98-a682-67f41bfd643e -appSecret MNubVGbcQDkGCnn -jobId b8be4a7d-e338-43eb-a69e-c513cd458eba -csvFilePath 'C:\Users\contosoadmin\Desktop\Data\physical_badging_data.json'
+ .\PhysicalBadging.ps1 -tenantId d5723623-11cf-4e2e-b5a5-01d1506273g9 -appId 29ee526e-f9a7-4e98-a682-67f41bfd643e -appSecret MNubVGbcQDkGCnn -jobId b8be4a7d-e338-43eb-a69e-c513cd458eba -jsonFilePath 'C:\Users\contosoadmin\Desktop\Data\physical_badging_data.json'
```

If the upload is successful, the script displays the **Upload Successful** message.
If you've haven't run the script in Step 4, a link to download the script is dis
To make sure the latest physical badging data from your organization is available to tools like the insider risk management solution, we recommend that you schedule the script to run automatically on a recurring basis, such as once a day. This also requires that you update the physical badging data in the JSON file on a similar (if not the same) schedule so that it contains the latest physical access information. The goal is to upload the most current physical badging data so that the physical badging connector can make it available to the insider risk management solution.
-You can user the Task Scheduler app in Windows to automatically run the script every day.
+You can use the Task Scheduler app in Windows to automatically run the script every day.
1. On your local computer, click the Windows **Start** button and then type **Task Scheduler**.
You can user the Task Scheduler app in Windows to automatically run the script e
6. Select the **Triggers** tab, click **New**, and then do the following things:
- 1. Under **Settings**, select the **Daily** option, and then choose a date and time to run the script for the first time. The script will every day at the same specified time.
+ 1. Under **Settings**, select the **Daily** option, and then choose a date and time to run the script for the first time. The script will run every day at the same specified time.
2. Under **Advanced settings**, make sure the **Enabled** checkbox is selected.
You can user the Task Scheduler app in Windows to automatically run the script e
2. In the **Program/script** box, click **Browse**, and go to the following location and select it so the path is displayed in the box: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe.
- 3. In the **Add arguments (optional)** box, paste the same script command that you ran in Step 4. For example, .\PhysicalBadging.ps1-tenantId "d5723623-11cf-4e2e-b5a5-01d1506273g9" -appId "c12823b7-b55a-4989-faba-02de41bb97c3" -appSecret "MNubVGbcQDkGCnn" -jobId "e081f4f4-3831-48d6-7bb3-fcfab1581458" -jsonFilePath "C:\Users\contosoadmin\Desktop\Data\physical_badging_data.csv"
+ 3. In the **Add arguments (optional)** box, paste the same script command that you ran in Step 4. For example, .\PhysicalBadging.ps1 -tenantId "d5723623-11cf-4e2e-b5a5-01d1506273g9" -appId "c12823b7-b55a-4989-faba-02de41bb97c3" -appSecret "MNubVGbcQDkGCnn" -jobId "e081f4f4-3831-48d6-7bb3-fcfab1581458" -jsonFilePath "C:\Users\contosoadmin\Desktop\Data\physical_badging_data.json"
4. In the **Start in (optional)** box, paste the folder location of the script that you ran in Step 4. For example, C:\Users\contosoadmin\Desktop\Scripts.
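The three action fields above (Program/script, Add arguments, Start in) can also be assembled programmatically when you manage many connectors. A Python sketch that composes them from a parameter map; the parameter values are placeholders, not real credentials, and the script name is the one used in this article's examples:

```python
# Sketch: compose the three Task Scheduler "action" fields described above.
# All parameter values are placeholders, not real credentials.

def build_task_action(script_name, params, script_dir):
    """Return (program, arguments, start_in) for the scheduled task."""
    program = r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
    args = ".\\" + script_name + " " + " ".join(
        f'-{name} "{value}"' for name, value in params.items())
    return program, args, script_dir

program, args, start_in = build_task_action(
    "PhysicalBadging.ps1",
    {"tenantId": "<tenant-id>", "appId": "<app-id>",
     "appSecret": "<app-secret>", "jobId": "<job-id>",
     "jsonFilePath": r"C:\Data\physical_badging_data.json"},
    r"C:\Users\contosoadmin\Desktop\Scripts")
print(args)
```

Each returned value maps one-for-one onto the boxes in the Task Scheduler dialog.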
compliance Legacy Information For Message Encryption https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/legacy-information-for-message-encryption.md
- seo-marvel-apr2020 - admindeeplinkMAC - admindeeplinkEXCHANGE
-description: Understand how to transition legacy files to Office 365 Message Encryption (OME) for your organization.
+description: Understand how to transition legacy files to Office 365 Message Encryption (OME) for your organization.
# Legacy information for Office 365 Message Encryption

If you haven't yet moved your organization to the new OME capabilities, but you have already deployed OME, then the information in this article applies to your organization. Microsoft recommends that you make a plan to move to the new OME capabilities as soon as it is reasonable for your organization. For instructions, see [Set up new Office 365 Message Encryption capabilities built on top of Azure Information Protection](set-up-new-message-encryption-capabilities.md). If you want to find out more about how the new capabilities work first, see [Office 365 Message Encryption](ome.md). The rest of this article refers to OME behavior before the release of the new OME capabilities.
-
+ With Office 365 Message Encryption, your organization can send and receive encrypted email messages between people inside and outside your organization. Office 365 Message Encryption works with Outlook.com, Yahoo, Gmail, and other email services. Email message encryption helps ensure that only intended recipients can view message content.
-
+ Here are some examples:
-
-- A bank employee sends credit card statements to customers
+- A bank employee sends credit card statements to customers
- An insurance company representative provides policy details to customers
- A mortgage broker requests financial information from a customer for a loan application
- A health care provider sends health care information to patients
- An attorney sends confidential information to a customer or another attorney

## How Office 365 Message Encryption works without the new capabilities

Office 365 Message Encryption is an online service that's built on Microsoft Azure Rights Management (Azure RMS). With Azure RMS, administrators can define mail flow rules to determine the conditions for encryption. For example, a rule can require the encryption of all messages addressed to a specific recipient.
-
+ When someone sends an email message in Exchange Online that matches an encryption rule, the message is sent with an HTML attachment. The recipient opens the HTML attachment and follows instructions to view the encrypted message on the Office 365 Message Encryption portal. The recipient can choose to view the message by signing in with a Microsoft account or a work or school account associated with Office 365, or by using a one-time pass code. Both options help ensure that only the intended recipient can view the encrypted message. This process is very different for the new OME capabilities.
-
+ The following diagram summarizes the passage of an email message through the encryption and decryption process.
-
+ ![Diagram showing the path of an encrypted email.](../media/O365-Office365MessageEncryption-Concept.png)
-
+ For more information, see [Service information for legacy Office 365 Message Encryption prior to the release of the new OME capabilities](legacy-information-for-message-encryption.md#LegacyServiceInfo).
-
+ ## Defining mail flow rules for Office 365 Message Encryption that don't use the new OME capabilities
+
+ To enable Office 365 Message Encryption without the new capabilities, Exchange Online and Exchange Online Protection administrators define Exchange mail flow rules. These rules determine under what conditions email messages should be encrypted, as well as conditions for removing message encryption. When an encryption action is set for a rule, the service performs the action on any messages that match the rule conditions before sending the messages. Mail flow rules are flexible, letting you combine conditions so you can meet specific security requirements in a single rule. For example, you can create a rule to encrypt all messages that contain specified keywords and are addressed to external recipients. Office 365 Message Encryption also encrypts replies from recipients of encrypted email, and you can create a rule that decrypts those replies as a convenience for your email users. That way, users in your organization won't have to sign in to the encryption portal to view replies.
-
+ For more information about how to create Exchange mail flow rules, see [Define Rules for Office 365 Message Encryption](define-mail-flow-rules-to-encrypt-email.md).
-
+ ### Use the EAC to create a mail flow rule for encrypting email messages without the new OME capabilities
+
+ 1. In a web browser, using a work or school account that has been granted global administrator permissions, [sign in to Office 365](https://support.office.com/article/b9582171-fd1f-4284-9846-bdd72bb28426#ID0EAABAAA=Web_browser).
For detailed syntax and parameter information, see [New-TransportRule](/powershe
## Sending, viewing, and replying to messages encrypted without the new capabilities

With Office 365 Message Encryption, email messages are encrypted automatically, based on administrator-defined rules. An email that bears an encrypted message arrives in the recipient's Inbox with an attached HTML file.
-
+ Recipients follow instructions in the message to open the attachment and authenticate by using a Microsoft account or a work or school account associated with Office 365. If recipients don't have either account, they're directed to create a Microsoft account that will let them sign in to view the encrypted message. Alternatively, recipients can choose to get a one-time pass code to view the message. After signing in or using a one-time pass code, recipients can view the decrypted message and send an encrypted reply.
-
+ ## Customize encrypted messages with Office 365 Message Encryption
+
+ As an Exchange Online and Exchange Online Protection administrator, you can customize your encrypted messages. For example, you can add your company's brand and logo, specify an introduction, and add disclaimer text in encrypted messages and in the portal where recipients view your encrypted messages. Using Windows PowerShell cmdlets, you can customize the following aspects of the viewing experience for recipients of encrypted email messages:
+
+ - Introductory text of the email that contains the encrypted message
+ - Disclaimer text of the email that contains the encrypted message
+ - Portal text that will appear in the message viewing portal
+ - Logo that will appear in the email message and viewing portal
+
+ You can also revert back to the default look and feel at any time.
-
+ The following example shows a custom logo for ContosoPharma in the email attachment:
+
+ > [!div class="mx-imgBorder"]
+ > ![Sample of the view encrypted message page.](../media/TA-OME-3attachment2.jpg)
-
-**To customize encryption email messages and the encryption portal with your organization's brand**
-
+
+### To customize encryption email messages and the encryption portal with your organization's brand
+ 1. Connect to Exchange Online using Remote PowerShell, as described in [Connect to Exchange Online Using Remote PowerShell](/powershell/exchange/connect-to-exchange-online-powershell).
+ 2. Use the Set-OMEConfiguration cmdlet as described here: [Set-OMEConfiguration](/powershell/module/exchange/set-omeconfiguration), or use the following table for guidance.
+
+ **Encryption customization options**
- | To customize this feature of the encryption experience | Use these Windows PowerShell commands |
- |:--|:--|
- |Default text that accompanies encrypted email messages <br/> The default text appears above the instructions for viewing encrypted messages <br/> | `Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -EmailText "<string of up to 1024 characters>"` <br/> **Example:** `Set-OMEConfiguration -Identity "OME Configuration" -EmailText "Encrypted message from ContosoPharma secure messaging system"` <br/> |
- |Disclaimer statement in the email that contains the encrypted message <br/> | `Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> DisclaimerText "<your disclaimer statement, string of up to 1024 characters>"` <br/> **Example:** `Set-OMEConfiguration -Identity "OME Configuration" -DisclaimerText "This message is confidential for the use of the addressee only"` <br/> |
- |Text that appears at the top of the encrypted mail viewing portal <br/> | `Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -PortalText "<text for your portal, string of up to 128 characters>"` <br/> **Example:** `Set-OMEConfiguration -Identity "OME Configuration" -PortalText "ContosoPharma secure email portal"` <br/> |
- |Logo <br/> | `Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -Image <Byte[]>` <br/> **Example:** `Set-OMEConfiguration -Identity "OME configuration" -Image (Get-Content "C:\Temp\contosologo.png" -Encoding byte)` <br/> Supported file formats: .png, .jpg, .bmp, or .tiff <br/> Optimal size of logo file: less than 40 KB <br/> Optimal size of logo image: 170x70 pixels <br/> |
+ |To customize this feature of the encryption experience|Use these Windows PowerShell commands|
+ |||
+ |Default text that accompanies encrypted email messages <p> The default text appears above the instructions for viewing encrypted messages|`Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -EmailText "<string of up to 1024 characters>"` <p> **Example:** `Set-OMEConfiguration -Identity "OME Configuration" -EmailText "Encrypted message from ContosoPharma secure messaging system"`|
+ |Disclaimer statement in the email that contains the encrypted message|`Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -DisclaimerText "<your disclaimer statement, string of up to 1024 characters>"` <p> **Example:** `Set-OMEConfiguration -Identity "OME Configuration" -DisclaimerText "This message is confidential for the use of the addressee only"`|
+ |Text that appears at the top of the encrypted mail viewing portal|`Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -PortalText "<text for your portal, string of up to 128 characters>"` <p> **Example:** `Set-OMEConfiguration -Identity "OME Configuration" -PortalText "ContosoPharma secure email portal"`|
+ |Logo|`Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -Image <Byte[]>` <p> **Example:** `Set-OMEConfiguration -Identity "OME configuration" -Image ([System.IO.File]::ReadAllBytes('C:\Temp\contosologo.png'))` <p> Supported file formats: .png, .jpg, .bmp, or .tiff <p> Optimal size of logo file: less than 40 KB <p> Optimal size of logo image: 170x70 pixels|
+
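The table above gives hard constraints for the logo image (supported formats, a file under 40 KB, optimal dimensions 170x70 pixels). As an illustration only, the Python sketch below pre-checks a logo file against the documented format and size limits before you pass its bytes to the `-Image` parameter; checking the pixel dimensions would need an image library and is omitted:

```python
import os

# Limits taken from the customization table above.
SUPPORTED_EXTENSIONS = {".png", ".jpg", ".bmp", ".tiff"}
MAX_BYTES = 40 * 1024  # "less than 40 KB"

def check_logo(path: str) -> bytes:
    """Validate a logo file against the documented OME constraints and
    return its bytes (the value ultimately passed to -Image)."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in SUPPORTED_EXTENSIONS:
        raise ValueError(f"unsupported format: {ext}")
    size = os.path.getsize(path)
    if size >= MAX_BYTES:
        raise ValueError(f"logo is {size} bytes; keep it under {MAX_BYTES}")
    with open(path, "rb") as f:
        return f.read()
```

Running a check like this avoids a round trip to Set-OMEConfiguration with an image the portal can't render well.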
+### To remove brand customizations from encryption email messages and the encryption portal
-**To remove brand customizations from encryption email messages and the encryption portal**
-
1. Connect to Exchange Online using Remote PowerShell, as described in [Connect to Exchange Online Using Remote PowerShell](/powershell/exchange/connect-to-exchange-online-powershell).
-2. Use the Set-OMEConfiguration cmdlet as described here: [Set-OMEConfiguration](/powershell/module/exchange/set-omeconfiguration). To remove your organization's branded customizations from the DisclaimerText, EmailText, and PortalText values, set the value to an empty string, `""`. For all image values, such as Logo, set the value to `"$null"`.
+2. Use the Set-OMEConfiguration cmdlet as described here: [Set-OMEConfiguration](/powershell/module/exchange/set-omeconfiguration). To remove your organization's branded customizations from the DisclaimerText, EmailText, and PortalText values, set the value to an empty string, `""`. For all image values, such as Logo, set the value to `"$null"`.
**Encryption customization options**
- | To revert this feature of the encryption experience back to the default text and image | Use these Windows PowerShell commands |
- |:--|:--|
- |Default text that accompanies encrypted email messages <br/> The default text appears above the instructions for viewing encrypted messages <br/> | `Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -EmailText "<empty string>"` <br/> **Example:** `Set-OMEConfiguration -Identity "OME Configuration" -EmailText ""` <br/> |
- |Disclaimer statement in the email that contains the encrypted message <br/> | `Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> DisclaimerText "<empty string>"` <br/> **Example:** `Set-OMEConfiguration -Identity "OME Configuration" -DisclaimerText ""` <br/> |
- |Text that appears at the top of the encrypted mail viewing portal <br/> | `Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -PortalText "<empty string>"` <br/> **Example reverting back to default:** `Set-OMEConfiguration -Identity "OME Configuration" -PortalText ""` <br/> |
- |Logo <br/> | `Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -Image <"$null">` <br/> **Example reverting back to default:** `Set-OMEConfiguration -Identity "OME configuration" -Image $null` <br/> |
+ |To revert this feature of the encryption experience back to the default text and image|Use these Windows PowerShell commands|
+ |:--|:--|
+ |Default text that accompanies encrypted email messages <p> The default text appears above the instructions for viewing encrypted messages|`Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -EmailText "<empty string>"` <p> **Example:** `Set-OMEConfiguration -Identity "OME Configuration" -EmailText ""`|
+ |Disclaimer statement in the email that contains the encrypted message|`Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -DisclaimerText "<empty string>"` <p> **Example:** `Set-OMEConfiguration -Identity "OME Configuration" -DisclaimerText ""`|
+ |Text that appears at the top of the encrypted mail viewing portal|`Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -PortalText "<empty string>"` <p> **Example reverting back to default:** `Set-OMEConfiguration -Identity "OME Configuration" -PortalText ""`|
+ |Logo|`Set-OMEConfiguration -Identity <OMEConfigurationIdParameter> -Image <"$null">` <p> **Example reverting back to default:** `Set-OMEConfiguration -Identity "OME configuration" -Image $null`|
## Service information for legacy Office 365 Message Encryption prior to the release of the new OME capabilities
<a name="LegacyServiceInfo"> </a>

The following table provides technical details for the Office 365 Message Encryption service prior to the release of the new OME capabilities.
-
-| Service details | Description |
-|:--|:--|
-|Client device requirements <br/> |Encrypted messages can be viewed on any client device, as long as the HTML attachment can be opened in a modern browser that supports Form Post. <br/> |
-|Encryption algorithm and Federal Information Processing Standards (FIPS) compliance <br/> |Office 365 Message Encryption uses the same encryption keys as Windows Azure Information Rights Management (IRM) and supports Cryptographic Mode 2 (2K key for RSA and 256 bits key for SHA-1 systems). For more information about the underlying IRM cryptographic modes, see [AD RMS Cryptographic Modes](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/hh867439(v=ws.10)). <br/> |
-|Supported message types <br/> |Office 365 Message Encryption is only supported for items that have a message class ID of **IPM.Note**. For more information, see [Item types and message classes](/office/vba/outlook/Concepts/Forms/item-types-and-message-classes). <br/> |
-|Message size limits <br/> |Office 365 Message Encryption can encrypt messages of up to 25 megabytes. For more details about message size limits, see [Exchange Online Limits](/office365/servicedescriptions/exchange-online-service-description/exchange-online-limits). <br/> |
-|Exchange Online email retention policies <br/> |Exchange Online doesn't store the encrypted messages. <br/> |
-|Language support for Office 365 Message Encryption <br/> | Office 365 Message encryption supports Microsoft 365 languages, as follows: <br/> Incoming email messages and attached HTML files are localized based on the sender's language settings. <br/> The viewing portal is localized based on the recipient's browser settings. <br/> The body (content) of the encrypted message isn't localized. <br/> |
-|Privacy information for OME Portal and OME Viewer App <br/> |The [Office 365 Messaging Encryption Portal privacy statement](https://privacy.microsoft.com/privacystatement) provides detailed information about what Microsoft does and doesn't do with your private information. <br/> |
+
+|Service details|Description|
+|:--|:--|
+|Client device requirements|Encrypted messages can be viewed on any client device, as long as the HTML attachment can be opened in a modern browser that supports Form Post.|
+|Encryption algorithm and Federal Information Processing Standards (FIPS) compliance|Office 365 Message Encryption uses the same encryption keys as Windows Azure Information Rights Management (IRM) and supports Cryptographic Mode 2 (2K key for RSA and 256 bits key for SHA-1 systems). For more information about the underlying IRM cryptographic modes, see [AD RMS Cryptographic Modes](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/hh867439(v=ws.10)).|
+|Supported message types|Office 365 Message Encryption is only supported for items that have a message class ID of **IPM.Note**. For more information, see [Item types and message classes](/office/vba/outlook/Concepts/Forms/item-types-and-message-classes).|
+|Message size limits|Office 365 Message Encryption can encrypt messages of up to 25 megabytes. For more details about message size limits, see [Exchange Online Limits](/office365/servicedescriptions/exchange-online-service-description/exchange-online-limits).|
+|Exchange Online email retention policies|Exchange Online doesn't store the encrypted messages.|
+|Language support for Office 365 Message Encryption|Office 365 Message encryption supports Microsoft 365 languages, as follows: <p> Incoming email messages and attached HTML files are localized based on the sender's language settings. <p> The viewing portal is localized based on the recipient's browser settings. <p> The body (content) of the encrypted message isn't localized.|
+|Privacy information for OME Portal and OME Viewer App|The [Office 365 Messaging Encryption Portal privacy statement](https://privacy.microsoft.com/privacystatement) provides detailed information about what Microsoft does and doesn't do with your private information.|
## Frequently Asked Questions about legacy OME
<a name="LegacyServiceInfo"> </a>

Got questions about Office 365 Message Encryption? Here are some answers. If you can't find what you need, check the [Microsoft Tech Community forums for Office 365](https://techcommunity.microsoft.com/t5/Office-365/ct-p/Office365).
-
+ **Q. My users send encrypted email messages to recipients outside our organization. Is there anything that external recipients have to do in order to read and reply to email messages that are encrypted with Office 365 Message Encryption?**
-
+ Recipients outside your organization who receive Microsoft 365 encrypted messages can view them in one of two ways:
-
+ - By signing in with a Microsoft account or a work or school account associated with Office 365.
+ - By using a one-time pass code.
+
+ **Q. Are Microsoft 365 encrypted messages stored in the cloud or on Microsoft servers?**
-
+ No, the encrypted messages are kept on the recipient's email system, and when the recipient opens the message, it is temporarily posted for viewing on Microsoft servers. The messages are not stored there.
-
+ **Q. Can I customize encrypted email messages with my brand?**
-
+ Yes. You can use Windows PowerShell cmdlets to customize the default text that appears at the top of encrypted email messages, the disclaimer text, and the logo that you want to use for the email message and the encryption portal. This feature is now available in OMEv2. For details, see [Add branding to encrypted messages](add-your-organization-brand-to-encrypted-messages.md).
-
+ **Q. Does the service require a license for every user in my organization?**
-
+ A license is required for every user in the organization who sends encrypted email.
-
+ **Q. Do external recipients require subscriptions?**
-
+ No, external recipients do not require a subscription to read or reply to encrypted messages.
-
+ **Q. How is Office 365 Message Encryption different from Rights Management Services (RMS)?**
-
+ RMS provides Information Rights Protection capabilities for an organization's internal emails by providing built-in templates, such as: Do not forward and Company Confidential. Office 365 Message Encryption supports email message encryption for messages that are sent to external recipients as well as internal recipients.
-
+ **Q. How is Office 365 Message Encryption different from S/MIME?**
-
+ S/MIME is essentially a client-side encryption technology, and requires complicated certificate management and publishing infrastructure. Office 365 Message Encryption uses mail flow rules (also known as transport rules) and does not depend on certificate publishing.
-
+ **Q. Can I read the encrypted messages over mobile devices?**
-
+ Yes, you can view messages on Android and iOS by downloading the OME Viewer apps from the Google Play store and the Apple App store. Open the HTML attachment in the OME Viewer app and then follow the instructions to open your encrypted message. For other mobile devices, you can open the HTML attachment as long as your mail client supports Form Post.
-
+ **Q. Are replies and forwarded messages encrypted?**
-
+ Yes. Responses continue to be encrypted throughout the duration of the thread.
-
+ **Q. Does Office 365 Message Encryption provide localization?**
-
+ Incoming email and HTML content is localized based on the sender's email settings. The viewing portal is localized based on the recipient's browser settings. However, the actual body (content) of the encrypted message isn't localized.
-
+ **Q. What encryption method is used for Office 365 Message Encryption?**
-
+ Office 365 Message Encryption uses Rights Management Services (RMS) as its encryption infrastructure. The encryption method used depends on where you obtain the RMS keys used to encrypt and decrypt messages.
-
+ - If you use Microsoft Azure RMS to obtain the keys, Cryptographic Mode 2 is used. Cryptographic Mode 2 is an updated and enhanced AD RMS cryptographic implementation. It supports RSA 2048 for signature and encryption, and supports SHA-256 for signature.
+ - If you use Active Directory (AD) RMS to obtain the keys, either Cryptographic Mode 1 or Cryptographic Mode 2 is used. The method used depends on your on-premises AD RMS deployment. Cryptographic Mode 1 is the original AD RMS cryptographic implementation. It supports RSA 1024 for signature and encryption, and supports SHA-1 for signature. This mode continues to be supported by all current versions of RMS.
+
+ For more information, see [AD RMS Cryptographic Modes](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/hh867439(v=ws.10)).
-
+ **Q. Why do some encrypted messages say they come from Office365@messaging.microsoft.com?**
-
+ When an encrypted reply is sent from the encryption portal or through the OME Viewer app, the sending email address is set to Office365@messaging.microsoft.com because the encrypted message is sent through a Microsoft endpoint. This helps to prevent encrypted messages from being marked as spam. The displayed name on the email and the address within the encryption portal aren't changed because of this labeling. Also, this labeling only applies to messages sent through the portal, not through any other email client.
-
+ **Q. I am an Exchange Hosted Encryption (EHE) subscriber. Where can I learn more about the upgrade to Office 365 Message Encryption?**
-
+ All EHE customers have been upgraded to Office 365 Message Encryption. For more information, visit the [Exchange Hosted Encryption Upgrade Center](../security/office-365-security/exchange-online-protection-overview.md).
-
+ **Q. Do I need to open any URLs, IP addresses, or ports in my organization's firewall to support Office 365 Message Encryption?**
-
+ Yes. You have to add URLs for Exchange Online to the allow list for your organization to enable authentication for messages encrypted by Office 365 Message Encryption. For a list of Exchange Online URLs, see [Microsoft 365 URLs and IP address ranges](../enterprise/urls-and-ip-address-ranges.md).
-
+ **Q. How many recipients can I send a Microsoft 365 encrypted message to?**
-
+ The recipient limit is 500 recipients per message, or, when combined after distribution list expansion, 11,980 characters in the message's **To** field, whichever comes first.
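These two limits can be sketched as a simple pre-send check. This is a hypothetical helper for illustration, not part of any Microsoft 365 API, and the `"; "` separator used to estimate the **To** field length is an assumption.

```python
# Hypothetical pre-send check for the documented limits: at most 500
# recipients, or at most 11,980 characters in the To field, whichever
# comes first.
MAX_RECIPIENTS = 500
MAX_TO_FIELD_CHARS = 11980

def within_message_limits(recipients):
    """Return True if the expanded recipient list fits both limits."""
    if len(recipients) > MAX_RECIPIENTS:
        return False
    # Assume the To field is the addresses joined by "; " separators.
    to_field = "; ".join(recipients)
    return len(to_field) <= MAX_TO_FIELD_CHARS

print(within_message_limits(["alice@contoso.com", "bob@contoso.com"]))  # True
```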
-
+ **Q. Is it possible to revoke a message sent to a particular recipient?**
-
+ No. You can't revoke a message to a particular person after it's sent.
-
+ **Q. Can I view a report of encrypted messages that have been received and read?**
-
+ There isn't a report that shows if an encrypted message has been viewed, but there are Microsoft 365 reports available that you can leverage to determine the number of messages that matched a specific mail flow rule (also known as a transport rule), for instance.
-
+ **Q. What does Microsoft do with the information I provide through the OME Portal and the OME Viewer App?**
-
+ The [Office 365 Messaging Encryption Portal privacy statement](https://privacy.microsoft.com/privacystatement) provides detailed information about what Microsoft does and doesn't do with your private information.
+
+ **Q. What do I do if I don't receive the one-time pass code after I requested it?**
+
+ First, check the junk or spam folder in your email client. DKIM and DMARC settings for your organization may cause these emails to end up filtered as spam.
-Next, check quarantine in the Security & Compliance Center. Often, messages containing a one-time pass code, especially the first ones your organization receives, end up in quarantine.
+Next, check quarantine in the Security & Compliance Center. Often, messages containing a one-time pass code, especially the first ones your organization receives, end up in quarantine.
compliance Named Entities Use https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/named-entities-use.md
Named entity SITs and enhanced policies are not supported for:
- On-premises repositories
+- Power BI
## Create and edit enhanced policies
compliance Search The Audit Log In Security And Compliance https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/search-the-audit-log-in-security-and-compliance.md
No. The auditing service pipeline is near real time, and therefore can't support
**Does auditing data flow across geographies?**
-No. We currently have auditing pipeline deployments in the NA (North America), EMEA (Europe, Middle East, and Africa) and APAC (Asia Pacific) regions. However, we may flow the data across these regions for load-balancing and only during live-site issues. When we do perform these activities, the data in transit is encrypted.
+In general, no. We currently have auditing pipeline deployments in the NA (North America), EMEA (Europe, Middle East, and Africa) and APAC (Asia Pacific) regions. However, we may need to transfer data across these regions for load-balancing during live-site issues. When we do perform these activities, the data in transit is encrypted. For multi-geo organizations, the audit data collected from all regions of the organization will be stored only in the organization's home region.
**Is auditing data encrypted?**
compliance Sit Common Scenarios https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/sit-common-scenarios.md
+
+ Title: "Common usage scenarios for sensitive information types"
+f1.keywords:
+- NOCSH
+++
+audience: Admin
++ Last updated :
+localization_priority: Normal
+
+- M365-security-compliance
+search.appverid:
+- MOE150
+- MET150
+description: How to implement common sensitive information types use case scenarios
+++
+# Common usage scenarios for sensitive information types
+
+This article describes how to implement some common sensitive information type (SIT) use case scenarios. You can use these procedures as examples and adapt them to your specific needs.
+
+## Protect credit card numbers
+
+Contoso Bank needs to classify the credit card numbers that they issue as sensitive. Their credit cards start with a set of six-digit patterns. They would like to customize the out of the box credit card definition to only detect the credit card numbers starting with their six-digit patterns.
+
+**Suggested solution**
+
+1. Create a copy of the credit card SIT. Use the steps to [copy and modify a sensitive information type](create-a-custom-sensitive-information-type.md#copy-and-modify-a-sensitive-information-type) to copy the credit card SIT.
+1. Edit the high confidence pattern. Follow the steps in [edit or delete the sensitive information type pattern](sit-get-started-exact-data-match-create-rule-package.md#edit-or-delete-the-sensitive-information-type-pattern).
+1. Add a 'starts with' check and add the list of BIN digits (formatted and unformatted). For example, to ensure that only credit cards starting with 411111 and 433512 are considered valid, add the following to the list: 4111 11, 4111-11, 411111, 4335 12, 4335-12, 433512.
+1. Repeat steps 2 and 3 for the low confidence pattern.
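As an illustration of what the customized pattern checks for, here is a minimal Python sketch of the 'starts with' and Luhn logic. This is not how the built-in SIT is implemented internally; the BIN values come from the scenario above.

```python
import re

# Illustrative sketch only: Contoso's BINs in formatted and unformatted form.
ALLOWED_BINS = ("4111 11", "4111-11", "411111", "4335 12", "4335-12", "433512")

def luhn_ok(number):
    """Standard Luhn checksum over the digits of the candidate."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def is_contoso_card(candidate):
    # 'Starts with' one of the listed BINs, and passes the Luhn check.
    return candidate.startswith(ALLOWED_BINS) and luhn_ok(candidate)

print(is_contoso_card("4111 1111 1111 1111"))  # True: Contoso BIN, valid Luhn
print(is_contoso_card("5500 0000 0000 0004"))  # False: not a Contoso BIN
```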
+
+## Test numbers similar to Social Security numbers
+
+Contoso has identified a few nine-digit test numbers that trigger false positive matches in the Social Security Number (SSN) data loss prevention (DLP) policy. They would like to exclude these numbers from the list of valid matches for SSN.
+
+**Suggested solution**
+
+1. Create a copy of the SSN SIT. Use the steps to [copy and modify a sensitive information type](create-a-custom-sensitive-information-type.md#copy-and-modify-a-sensitive-information-type) to copy the SSN SIT.
+1. Edit the high confidence pattern. Follow the steps in [edit or delete the sensitive information type pattern](sit-get-started-exact-data-match-create-rule-package.md#edit-or-delete-the-sensitive-information-type-pattern).
+1. Add the numbers to be excluded in the 'exclude specific values' additional check. For example, to exclude 239-23-532 and 23923532, adding just 23923532 will suffice.
+1. Repeat steps 2 and 3 for the other confidence patterns as well.
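A minimal sketch of why adding only the unformatted value suffices, assuming the matcher normalizes candidates by stripping non-digits before comparison (an assumption for illustration, not the service's documented internals):

```python
import re

# Illustrative sketch: candidate SSNs are normalized before comparison,
# so excluding the unformatted value 23923532 also excludes 239-23-532.
EXCLUDED_VALUES = {"23923532"}

def is_reportable_ssn(candidate):
    normalized = re.sub(r"\D", "", candidate)  # strip dashes, spaces, etc.
    return normalized not in EXCLUDED_VALUES

print(is_reportable_ssn("239-23-532"))   # False: excluded test number
print(is_reportable_ssn("239-23-5321"))  # True: not on the exclusion list
```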
+
+## Phone numbers in signature trigger match
+
+Australia based Contoso finds that phone numbers in email signatures are triggering a match for their Australia company number DLP policy.
+
+**Suggested solution**
+
+Add a 'not' group in supporting elements using a keyword list that contains keywords commonly used in email signatures, like "Phone", "Mobile", "email", "Thanks and regards", and so on. Keep the proximity of this keyword list to a smaller value, like 50 characters, for better accuracy. For more information, see [Get started with custom sensitive information types](create-a-custom-sensitive-information-type.md).
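The 'not' group with a proximity window can be sketched like this. The keyword list and the number pattern are illustrative assumptions for this example, not the exact built-in definitions:

```python
import re

# Illustrative sketch of the "not" supporting-element idea: suppress a
# match when a signature keyword occurs within PROXIMITY characters of it.
PROXIMITY = 50
SIGNATURE_KEYWORDS = ("phone", "mobile", "email", "thanks and regards")

def looks_like_signature(text, match):
    """Return True if a signature keyword appears near the matched number."""
    start = max(0, match.start() - PROXIMITY)
    end = match.end() + PROXIMITY
    window = text[start:end].lower()
    return any(kw in window for kw in SIGNATURE_KEYWORDS)

text = "Thanks and regards, Jo Bloggs Phone: 004 085 616"
match = re.search(r"\b\d{3} \d{3} \d{3}\b", text)
print(looks_like_signature(text, match))  # True: treat as signature, suppress
```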
+
+## Unable to trigger ABA routing policy
+
+The DLP policy is unable to trigger the ABA routing number policy in large Excel files because the required keyword isn't found within 300 characters.
+
+**Suggested solution**
+
+Create a copy of the built-in SIT and edit it to change the proximity of the keyword list from "300 characters" to "Anywhere in the document".
+
+> [!TIP]
+> You may edit the keyword list to include/exclude keywords that are relevant to your organization.
+
+## Unable to detect credit card numbers with unusual delimiters
+
+Contoso Bank has noticed that some of their employees share credit card numbers with '/' as a delimiter, for example 4111/1111/1111/1111, which isn't detected by the out-of-the-box credit card definition. Contoso would like to define their own regex and validate it using LuhnCheck.
+
+**Suggested solution**
+
+1. Create a copy of the Credit card SIT using the steps in [Customize a built-in sensitive information type](customize-a-built-in-sensitive-information-type.md).
+1. Add a new pattern.
+1. In the primary element, select regular expression.
+1. Define a regular expression that includes '/' as part of the pattern, then choose a validator and select luhncheck or func_credit_card to ensure the regex matches also pass the LuhnCheck.
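A rough Python equivalent of the resulting pattern: a regex that accepts '/' delimiters, combined with a Luhn validation step standing in for the luhncheck/func_credit_card validator attached in the SIT wizard. This is an illustrative sketch, not the service's implementation:

```python
import re

# Custom pattern from the scenario: four groups of four digits with '/'.
CARD_RE = re.compile(r"\b\d{4}/\d{4}/\d{4}/\d{4}\b")

def luhn_ok(number):
    """Standard Luhn checksum over the digits of the candidate."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_cards(text):
    # Keep only regex matches that also pass the Luhn check.
    return [m.group() for m in CARD_RE.finditer(text) if luhn_ok(m.group())]

print(find_cards("Card 4111/1111/1111/1111 expires soon."))
# ['4111/1111/1111/1111']
```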
+
+## Ignore a disclaimer notice
+
+Many organizations add legal disclaimers, disclosure statements, signatures, or other information to the top or bottom of email messages that enter or leave their organizations, and in some cases even within the organization. Employees themselves also add signatures that include motivational quotes, social messages, and so on. A disclaimer or signature can contain terms that are present in the lexicon of a CC and may generate a lot of false positives.
+
+For example, a typical disclaimer might contain words like sensitive or confidential, and a policy looking for sensitive info will detect it as an incident, leading to a lot of false positives. Providing an option to ignore the disclaimer can therefore reduce false positives and increase the efficiency of the compliance team.
+
+### Example of disclaimer
+
+Consider the following disclaimer:
+
+IMPORTANT NOTICE: This e-mail message is intended to be received only by persons entitled to receive the confidential information it may contain. E-mail messages to clients of Contoso may contain information that is confidential and legally privileged. Please do not read, copy, forward, or store this message unless you are an intended recipient of it. If you have received this message in error, please forward it to the sender and delete it completely from your computer system.
+
+If the SIT has been configured to detect the keyword confidential, then the pattern will invoke a match every time the disclaimer is used in an email, leading to a lot of false positives.
+
+### Ignore disclaimer using prefix and suffix in SIT
+
+One way to ignore the instances of keywords in the disclaimer is by excluding the instances of keywords which are preceded by a prefix and followed by a suffix.
+
+Consider this disclaimer:
+
+IMPORTANT NOTICE: This e-mail message is intended to be received only by persons *entitled to receive the* confidential **information it may contain**. E-mail messages to clients of Contoso may contain information that is confidential and legally privileged. Please do not read, copy, forward, or store this message unless you are an intended recipient of it. If you have received this message in error, please forward it to the sender and delete it completely from your computer system.
+
+We have two instances of the keyword "confidential", and if we configure the SIT to ignore instances of this keyword preceded by the prefixes (italicized in the example) and followed by the suffixes (bolded in the example), we can ignore the disclaimer in most cases.
+
+To ignore the disclaimer using prefix and suffix:
+
+1. Add additional checks to the current SIT to exclude the prefix and suffix text around the keyword instances that you want to ignore in the disclaimer.
+1. Choose to exclude the prefix and in the **Prefixes** text box enter **contain information that is**.
+1. Choose to exclude the suffix and in the **Suffixes** text box enter **and legally privileged**.
+1. Repeat this process for other instances of the keywords in the disclaimer, as shown in the following graphic.
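The exclusion logic described in the steps above can be sketched as follows; the prefix and suffix strings are the ones from this example disclaimer, and the helper is an illustration, not the SIT engine itself:

```python
import re

# Illustrative sketch of prefix/suffix exclusion: an occurrence of the
# keyword is ignored when it is immediately preceded by an excluded
# prefix or immediately followed by an excluded suffix.
KEYWORD = "confidential"
EXCLUDED_PREFIXES = ("entitled to receive the", "contain information that is")
EXCLUDED_SUFFIXES = ("information it may contain", "and legally privileged")

def keyword_hits(text):
    hits = []
    for m in re.finditer(KEYWORD, text, re.IGNORECASE):
        before = text[:m.start()].rstrip().lower()
        after = text[m.end():].lstrip().lower()
        if before.endswith(EXCLUDED_PREFIXES):
            continue  # excluded by prefix
        if after.startswith(EXCLUDED_SUFFIXES):
            continue  # excluded by suffix
        hits.append(m.start())
    return hits

disclaimer = ("persons entitled to receive the confidential information "
              "it may contain")
print(keyword_hits(disclaimer))                   # []: excluded by prefix
print(keyword_hits("This file is confidential."))  # [13]: a real hit
```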
+
+### Ignore disclaimer by excluding secondary elements
+
+Another way to ignore the instances in the disclaimer is to exclude secondary elements: add the instances that need to be excluded as a list of supporting elements.
+
+Consider this disclaimer:
+
+IMPORTANT NOTICE: This e-mail message is intended to be received only by persons entitled to receive the confidential information it may contain. E-mail messages to clients of Contoso may contain information that is confidential and legally privileged. Please do not read, copy, forward, or store this message unless you are an intended recipient of it. If you have received this message in error, please forward it to the sender and delete it completely from your computer system.
+
+We have two instances of the keyword "confidential" in this example. If we configure the SIT to ignore instances of this keyword in the disclaimer (underlined as red), we can ignore disclaimers in most cases.
+
+To ignore the disclaimer using secondary elements:
+
+1. Select the **Not any of these** group in the supporting elements.
+1. Add the instances of the disclaimer that you want to ignore as a keyword list/dictionary.
+1. Add each keyword phrase that you want to ignore on a new line. Remember that each entry can't be more than 50 characters long.
+1. Set the proximity of this element to be within 50-60 characters of the primary element.
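Because each keyword-list entry is limited to 50 characters, a long disclaimer has to be split into shorter phrases. This hypothetical helper (not part of the compliance center) shows one way to chunk a disclaimer at word boundaries:

```python
# Hypothetical helper: split a disclaimer into keyword-list entries of at
# most 50 characters each (the per-entry limit mentioned in the steps
# above), breaking at word boundaries.
MAX_LEN = 50

def to_keyword_entries(phrase):
    entries, current = [], ""
    for word in phrase.split():
        candidate = (current + " " + word).strip()
        if len(candidate) <= MAX_LEN:
            current = candidate
        else:
            entries.append(current)
            current = word
    if current:
        entries.append(current)
    return entries

phrase = ("E-mail messages to clients of Contoso may contain "
          "information that is confidential and legally privileged.")
for entry in to_keyword_entries(phrase):
    print(entry)
```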
compliance Sit Get Started Exact Data Match Create Rule Package https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/sit-get-started-exact-data-match-create-rule-package.md
If you are not familiar with EDM based SITS or their implementation, you should
You can use this wizard to create your sensitive information type (SIT) files to help simplify the process.
-An EDM Sensitive Information type is composed of one or more patterns. Each pattern describes a combination of evidence (fields from the schema) that will be used to identify sensitive content in a document or email.
+An EDM Sensitive Information type is composed of one or more patterns. Each pattern describes a combination of evidence (fields from the schema) that will be used to identify sensitive content in a document or email.
## Pre-requisites
Perform the steps in these articles:
- Whether you will be creating an EDM sensitive information type using the wizard or the rule package XML file via PowerShell, you must have Global admin or Compliance admin permissions to create, test, and deploy a custom sensitive information type through the UI. See [About admin roles in Office 365](/office365/admin/add-users/about-admin-roles). - Identify one of the built in SITs to use as the Primary elements sensitive information type.
- - If none of the built-in sensitive info types will match the data in the column you selected you will have to create a custom sensitive info type that does.
- - If you selected the Ignored Delimiters option for the primary element column in your schema, make sure the custom SIT you create will match data with and without the selected delimiters.
- - If you use a built in SIT, make sure it will detect exactly the strings you want to select, and not include any surrounding characters or exclude any valid part of the string as stored in your sensitive information table.
-See [Sensitive information type entity definitions](sensitive-information-type-entity-definitions.md#sensitive-information-type-entity-definitions) and [Get started with custom sensitive information types](create-a-custom-sensitive-information-type.md#get-started-with-custom-sensitive-information-types).
-
+ - If none of the built-in sensitive info types will match the data in the column you selected you will have to create a custom sensitive info type that does.
+ - If you selected the Ignored Delimiters option for the primary element column in your schema, make sure the custom SIT you create will match data with and without the selected delimiters.
+ - If you use a built in SIT, make sure it will detect exactly the strings you want to select, and not include any surrounding characters or exclude any valid part of the string as stored in your sensitive information table.
+
+See [Sensitive information type entity definitions](sensitive-information-type-entity-definitions.md#sensitive-information-type-entity-definitions) and [Get started with custom sensitive information types](create-a-custom-sensitive-information-type.md#get-started-with-custom-sensitive-information-types).
+ ### Use the exact data match schema and sensitive information type pattern wizard
+
+ 1. In the Microsoft 365 Compliance center for your tenant, go to **Data classification** > **Exact data matches**.
See [Sensitive information type entity definitions](sensitive-information-type-e
4. Choose **Next** and choose **Create pattern**.
-5. Choose the **Confidence level** and **Primary element**. To learn more about confidence levels, see [Learn about sensitive information types](sensitive-information-type-learn-about.md#learn-about-sensitive-information-types).
+5. Choose the **Confidence level** and **Primary element**. To learn more about confidence levels, see [Learn about sensitive information types](sensitive-information-type-learn-about.md#learn-about-sensitive-information-types).
-6. Choose the **Primary element's sensitive info type** to associate it with to define what text in the document will be compared with all the values in the primary element field. See [Sensitive Information Type Entity Definitions](sensitive-information-type-entity-definitions.md) to learn more about the available sensitive information types.
+6. Choose the **Primary element's sensitive info type** to associate it with to define what text in the document will be compared with all the values in the primary element field. See [Sensitive Information Type Entity Definitions](sensitive-information-type-entity-definitions.md) to learn more about the available sensitive information types.
-> [!IMPORTANT]
-> Select a sensitive information type that closely matches the format of the content you want to find. Selecting a sensitive information type that matches unnecessary content, like one that matches all text strings, or all numbers can cause excessive load in the system which could result in sensitive information being missed. See the Best Practices section in the Introduction to Exact Data Matching article in this documentation for recommendations in selecting a sensitive information type to use here.
+ > [!IMPORTANT]
+ > Select a sensitive information type that closely matches the format of the content you want to find. Selecting a sensitive information type that matches unnecessary content, like one that matches all text strings, or all numbers can cause excessive load in the system which could result in sensitive information being missed. See the Best Practices section in the Introduction to Exact Data Matching article in this documentation for recommendations in selecting a sensitive information type to use here.
7. Choose your **Supporting elements** and match options.
-7. Choose **Done** and **Next**.
+8. Choose **Done** and **Next**.
-8. Choose your desired **Confidence level and character proximity**. This will be the default value for the whole EDM sensitive info type.
+9. Choose your desired **Confidence level and character proximity**. This will be the default value for the whole EDM sensitive info type.
-9. Choose **Create pattern** if you want to create additional patterns for your EDM sensitive info type.
+10. Choose **Create pattern** if you want to create additional patterns for your EDM sensitive info type.
-10. Choose **Next** and fill in a **Name** and **Description for admins**.
+11. Choose **Next** and fill in a **Name** and **Description for admins**.
-11. Review and choose **Submit**.
+12. Review and choose **Submit**.
### Edit or delete the sensitive information type pattern
This procedure shows you how to create a file in XML format called a rule packag
> [!NOTE]
> If the SIT that you map to can detect multi-word corroborative evidence, the secondary elements you define in a manually created rule package can be mapped to the SIT. For example, the name `John Smith` would not match as a secondary element because we'd compare `John` and `Smith` found in the content separately to the term `John Smith` uploaded in one of the fields, if that corroborative evidence field wasn't mapped to a SIT that can detect that pattern.
-
-> [!NOTE]
-> There's a limit of 10 rule packages in a Microsoft 365 tenant. Since a rule package can contain an arbitrary number of sensitive information types, you can avoid creating a new rule package each time you want to define a new sensitive information type using this method, instead export an existing rule package and add your sensitive information types to the XML before re-uploading it.
-
+>
+> There's a limit of 10 rule packages in a Microsoft 365 tenant. Since a rule package can contain an arbitrary number of sensitive information types, you can avoid creating a new rule package each time you want to define a new sensitive information type using this method. Instead, export an existing rule package and add your sensitive information types to the XML before re-uploading it.
1. Create a rule package in XML format (with Unicode encoding), similar to the following example. (You can copy, modify, and use our example.)
-When you set up your rule package, make sure to correctly reference your .csv, .tsv, or pipe (|) delimited sensitive information source table file and **edm.xml** schema file. You can copy, modify, and use our example. In this sample xml the following fields need to be customized to create your EDM sensitive type:
+ When you set up your rule package, make sure to correctly reference your .csv, .tsv, or pipe (|) delimited sensitive information source table file and **edm.xml** schema file. You can copy, modify, and use our example. In this sample xml the following fields need to be customized to create your EDM sensitive type:
-- **RulePack id & ExactMatch id**: Use [New-GUID](/powershell/module/microsoft.powershell.utility/new-guid) to generate a GUID.
+ - **RulePack id & ExactMatch id**: Use [New-GUID](/powershell/module/microsoft.powershell.utility/new-guid) to generate a GUID.
-- **Datastore**: This field specifies EDM lookup data store to be used. You provide the data source name of the configured EDM Schema.
+ - **Datastore**: This field specifies EDM lookup data store to be used. You provide the data source name of the configured EDM Schema.
-- **idMatch**: This field points to the primary element for EDM.-- **Matches**: Specifies the field to be used in exact lookup. You provide a searchable field name in EDM Schema for the DataStore.-- **Classification**: This field specifies the sensitive information type match that triggers EDM lookup. You can use the name or GUID of an existing built-in or custom sensitive information type.
-
-> [!NOTE]
-> Be aware that any string that matches the SIT provided will be hashed and compared to every entry in the sensitive information source table. To avoid performance issues if you choose a custom SIT for the classification element, don't use one that will match a large percentage of content. For example one that matches "any number" or "any five-letter word". You can differentiate it by adding supporting keywords or including formatting in the definition of the custom classification SIT.
+ - **idMatch**: This field points to the primary element for EDM.
+ - **Matches**: Specifies the field to be used in exact lookup. You provide a searchable field name in EDM Schema for the DataStore.
+ - **Classification**: This field specifies the sensitive information type match that triggers EDM lookup. You can use the name or GUID of an existing built-in or custom sensitive information type.
-- **Match**: This field points to additional evidence found in proximity of idMatch.-- **Matches**: You provide any field name in EDM Schema for DataStore.-- **Resource idRef:** This section specifies the name and description for sensitive type in multiple locales
- - You provide GUID for ExactMatch ID.
- - **Name** & **description**: customize as required.
+ > [!NOTE]
+ > Be aware that any string that matches the SIT provided will be hashed and compared to every entry in the sensitive information source table. To avoid performance issues, if you choose a custom SIT for the classification element, don't use one that will match a large percentage of content, such as one that matches "any number" or "any five-letter word". You can differentiate it by adding supporting keywords or including formatting in the definition of the custom classification SIT.
+
+ - **Match**: This field points to additional evidence found in proximity of idMatch.
+ - **Matches**: You provide any field name in EDM Schema for DataStore.
+ - **Resource idRef**: This section specifies the name and description for the sensitive type in multiple locales.
+ - You provide GUID for ExactMatch ID.
+ - **Name** & **description**: customize as required.
```xml <RulePackage xmlns="http://schemas.microsoft.com/office/2018/edm">
When you set up your rule package, make sure to correctly reference your .csv, .
</RulePackage> ```
-2. Upload the rule package by running the following PowerShell cmdlets, one at a time:
+2. Upload the rule package by running the following PowerShell command:
- ```powershell
- $rulepack=Get-Content .\\rulepack.xml -Encoding Byte -ReadCount 0
- New-DlpSensitiveInformationTypeRulePackage -FileData $rulepack
- ```
+ ```powershell
+ New-DlpSensitiveInformationTypeRulePackage -FileData ([System.IO.File]::ReadAllBytes('.\\rulepack.xml'))
+ ```
> [!NOTE] > The syntax of the rule package file is the same as for other sensitive information types. For complete details on the syntax of the rule package file, additional configuration options, and instructions on modifying and deleting sensitive information types using PowerShell, see [Create a custom sensitive information type using PowerShell](create-a-custom-sensitive-information-type-in-scc-powershell.md#create-a-custom-sensitive-information-type-using-powershell). ## Next step -- [Test an exact data match sensitive information type](sit-get-started-exact-data-match-test.md#test-an-exact-data-match-sensitive-information-type)
+- [Test an exact data match sensitive information type](sit-get-started-exact-data-match-test.md#test-an-exact-data-match-sensitive-information-type)
compliance Sit Get Started Exact Data Match Create Schema https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/sit-get-started-exact-data-match-create-schema.md
A single EDM schema can be used in multiple sensitive information types that use
## Working with specific types of data
-For performance reasons, it is critical that you use patterns that will minimize the number of unnecessary matches. For example, you might use a sensitive information type based on the regular expression.
+For performance reasons, it is critical that you use patterns that will minimize the number of unnecessary matches. For example, you might use a sensitive information type based on the regular expression.
`\b\w*\b` This would match every individual word or number in any document or email, overloading the service with matches and causing it to miss true matches. Using more precise patterns can avoid this situation. Here are some recommendations for identifying the right configuration for some common types of data.
-**Email addresses**: Email addresses can be easy to identify, but because they are so common in content they may cause significant load in the system if used as a primary field. Use them only as secondary evidence. If they must be used as primary evidence, try to define a custom sensitive information type that uses logic to exclude their use as `From` or `To` fields in emails, and to exclude those with your company's email address to reduce the number of unnecessary strings that need to be matched.
+**Email addresses**: Email addresses can be easy to identify, but because they are so common in content they may cause significant load in the system if used as a primary field. Use them only as secondary evidence. If they must be used as primary evidence, try to define a custom sensitive information type that uses logic to exclude their use as `From` or `To` fields in emails, and to exclude those with your company's email address to reduce the number of unnecessary strings that need to be matched.
**Phone numbers**: Phone numbers can come in many different formats, including or excluding country prefixes, area codes, and separators. To reduce the false negatives while keeping load to a minimum, use them only as secondary elements, exclude all likely separators, like parenthesis and dashes and only include in your sensitive data table the part that will be always present in the phone number.
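To make the phone-number advice concrete, here's an illustrative sketch (Python for illustration only; `phone_core` is a hypothetical helper, not part of EDM) of keeping only the part of the number that is always present:

```python
import re

def phone_core(number: str) -> str:
    # Hypothetical helper: strip every non-digit (separators, parentheses),
    # then keep the trailing digits that are always present, dropping
    # optional country and area-code prefixes.
    digits = re.sub(r"\D", "", number)
    return digits[-7:]

# The same subscriber number written three different ways maps to one value.
print(phone_core("+1 (425) 555-0100"))  # 5550100
print(phone_core("425.555.0100"))       # 5550100
print(phone_core("555 0100"))           # 5550100
```

Storing only that invariant core in the sensitive data table means the same number matches regardless of how the sender formatted it.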
-**Person's names**: Don't use people's names as primary elements if using a sensitive information type based on a regular expression as the classification element for this EDM type, because they are difficult to distinguish from common words.
+**Person's names**: Don't use people's names as primary elements if using a sensitive information type based on a regular expression as the classification element for this EDM type, because they are difficult to distinguish from common words.
If you must use a primary element that is hard to identify with a specific pattern, like a project code name that could generate lots of matches to be processed, make sure you include keywords in the sensitive information type you use as the classification element for your EDM type. For example, if using project code names that may be regular words, you can use the word `project` as required additional evidence in close proximity to the project name regular expression-based pattern in the sensitive type used as the classification element for your EDM type. Or you might consider using a sensitive type based on a regular dictionary as the classification element for your EDM SIT.
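The "required keyword in close proximity" idea can be sketched as follows (Python for illustration; the function, pattern, and window size are hypothetical assumptions, not how the classification engine is implemented):

```python
import re

def has_nearby_keyword(text: str, pattern: str, keyword: str = "project",
                       window: int = 60) -> bool:
    # Accept a candidate match only if the keyword appears within
    # `window` characters of it, mimicking proximity-based evidence.
    for match in re.finditer(pattern, text):
        lo = max(0, match.start() - window)
        hi = match.end() + window
        if keyword in text[lo:hi].lower():
            return True
    return False

print(has_nearby_keyword("Budget for project Falcon7 approved", r"[A-Z][a-z]+\d"))   # True
print(has_nearby_keyword("Falcon7 is mentioned with no context", r"[A-Z][a-z]+\d"))  # False
```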
If a field you need to use as a primary element follows a simple pattern that mi
</Entity> <Regex id="30 AccountNrs">\d{5}</Regex> ```
-
In some cases, you might have to identify certain account or record identification numbers that for historical reasons don't follow a standardized pattern. For example, `Medical Record Numbers` can be composed of many different permutations of letters and numbers within the same organization. Even though it might be hard at first to identify a pattern, closer inspection often lets you narrow down a pattern that describes all valid values without causing an excessive number of invalid matches. For example, it might be detected that "all MRNs are at least seven characters in length, have at least two numerical digits in them, and if they have any letters in them, they start with one". Creating a regular expression based on such criteria should allow you to minimize unnecessary matches while capturing all the desired values, and further analysis might allow increased precision by defining separate patterns that describe different formats.
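Criteria like those in the MRN example can be checked mechanically before you commit to a regular expression; this Python sketch encodes the quoted rules (the function name and the rules themselves are illustrative assumptions from the example, not a product feature):

```python
def looks_like_mrn(value: str) -> bool:
    # Example criteria from above: at least seven alphanumeric characters,
    # at least two numerical digits, and if letters appear at all,
    # the value starts with one.
    if len(value) < 7 or not value.isalnum():
        return False
    if sum(ch.isdigit() for ch in value) < 2:
        return False
    if any(ch.isalpha() for ch in value) and not value[0].isalpha():
        return False
    return True

print(looks_like_mrn("AB12345"))  # True: starts with letters, five digits
print(looks_like_mrn("1234AB5"))  # False: has letters but starts with a digit
print(looks_like_mrn("1234567"))  # True: all digits, length seven
```

Running a checker like this over a sample of known-valid values is a quick way to confirm the criteria before translating them into a regular expression.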
You can use this wizard to help simplify the schema file creation process.
2. Choose **Create EDM schema** to open the schema wizard configuration flyout.
-![EDM schema creation wizard configuration flyout.](../media/edm-schema-wizard-1.png)
+ ![EDM schema creation wizard configuration flyout.](../media/edm-schema-wizard-1.png)
3. Fill in an appropriate **Name** and **Description**.
-4. Choose **Ignore delimiters and punctuation for all schema fields** if you want that behavior for the entire schema. To learn more about configuring EDM to ignore case or delimiters, see [Using the caseInsensitive and ignoredDelimiters fields](#using-the-caseinsensitive-and-ignoreddelimiters-fields) for more details on this feature.
+4. Choose **Ignore delimiters and punctuation for all schema fields** if you want that behavior for the entire schema. To learn more about configuring EDM to ignore case or delimiters, see [Using the caseInsensitive and ignoredDelimiters fields](#using-the-caseinsensitive-and-ignoreddelimiters-fields) for more details on this feature.
5. Fill in your desired values for your **Schema field #1** and add more fields as needed. Each schema field must be identical to the column headers in your sensitive information source file.
You can use this wizard to help simplify the schema file creation process.
1. **Choose delimiters and punctuation to ignore for this field** 1. **Enter custom delimiters and punctuation for this field**
-> [!IMPORTANT]
-> At least one, but no more than five of your schema fields must be designated as searchable.
+ > [!IMPORTANT]
+ > At least one, but no more than five of your schema fields must be designated as searchable.
-6. Choose **Save**. Your schema will now be listed and available for use.
+7. Choose **Save**. Your schema will now be listed and available for use.
-> [!IMPORTANT]
-> If you want to remove a schema, and it is already associated with an EDM sensitive info type, you must first delete the EDM sensitive info type, then you can delete the schema. Deleting a schema that has a data store associated with it also deletes the data store within 24 hours.
+ > [!IMPORTANT]
+ > If you want to remove a schema, and it is already associated with an EDM sensitive info type, you must first delete the EDM sensitive info type, then you can delete the schema. Deleting a schema that has a data store associated with it also deletes the data store within 24 hours.
## Export of the EDM schema file in XML format If you created the EDM schema in the EDM schema wizard, you must export the EDM schema file in XML format. You'll need it in the [Hash and upload the sensitive information source table for exact data match sensitive information types](sit-get-started-exact-data-match-hash-upload.md#hash-and-upload-the-sensitive-information-source-table-for-exact-data-match-sensitive-information-types) phase.
-1. Connect to the Security & Compliance Center PowerShell using the procedures in [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
+1. [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
-2. To export the EDM schema file, use this cmdlet:
+2. To export the EDM schema file, use this syntax:
+
+ ```powershell
+ $Schema = Get-DlpEdmSchema -Identity "[your EDM Schema name]"
+ Set-Content -Path ".\Schemafile.xml" -Value $Schema.EdmSchemaXML
+ ```
-```powershell
-$Schema = Get-DlpEdmSchema -Identity "[your EDM Schema name]"
-Set-Content -Path ".\Schemafile.xml" -Value $Schema.EdmSchemaXML
-```
3. Save this file for later use. ## Create exact data match schema manually and upload
-In the schema file, configure an entry for each column in the sensitive information source table, using the syntax:
+In the schema file, configure an entry for each column in the sensitive information source table, using the syntax:
```xml <Field name="FieldName" searchable="true/false" caseInsensitive="true/false" ignoredDelimiters="delimiter characters" /> ``` ### Using the caseInsensitive and ignoredDelimiters fields The following schema XML sample makes use of the *caseInsensitive* and the *ignoredDelimiters* fields. When you include the *caseInsensitive* field set to the value of `true` in your schema definition, EDM will not exclude an item based on case differences. For example, EDM will see the values **FOO-1234** and **fOo-1234** as being identical for the `PatientID` field.
-When you include the *ignoredDelimiters* field with supported characters, EDM will ignore those characters. So EDM will see the values **FOO-1234** and **FOO#1234** as being identical for the `PatientID` field.
+When you include the *ignoredDelimiters* field with supported characters, EDM will ignore those characters. So EDM will see the values **FOO-1234** and **FOO#1234** as being identical for the `PatientID` field.
-In this example, where both `caseInsensitive` and `ignoredDelimiters` are used, EDM would see **FOO-1234** and **fOo#1234** as identical and classify the item as a patient record sensitive information type.
+In this example, where both `caseInsensitive` and `ignoredDelimiters` are used, EDM would see **FOO-1234** and **fOo#1234** as identical and classify the item as a patient record sensitive information type.
Both these parameters are used on a per field basis.
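Conceptually, the two flags act like the following normalization before comparison (an illustrative Python sketch under assumed semantics, not the service's actual implementation):

```python
def normalize(value: str, case_insensitive: bool = True,
              ignored_delimiters: str = "-#") -> str:
    # Strip each ignored delimiter, then fold case if requested,
    # so variants of the same identifier compare as equal.
    for delim in ignored_delimiters:
        value = value.replace(delim, "")
    return value.lower() if case_insensitive else value

# With both flags set, these PatientID variants are treated as identical.
print(normalize("FOO-1234") == normalize("fOo#1234"))  # True
```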
The `ignoredDelimiters` flag doesn't support:
> [!IMPORTANT] > When defining your EDM sensitive information type, *ignoredDelimiters* will not affect how the Classification sensitive information type associated with the primary element in an EDM pattern identifies content in an item. So if you configure *ignoredDelimiters* for a searchable field, you need to make sure the sensitive information type used for a primary element based on that field will match strings both with and without those characters present.
-> [!IMPORTANT]
+>
> The number of columns in your sensitive information source table and the number of fields in your schema must match; order doesn't matter.
-1. Define the schema in XML format (similar to our example below). Name this schema file **edm.xml**, and configure it such that for each column in the sensitive information source table, there is a line that uses the syntax:
+1. Define the schema in XML format (similar to our example below). Name this schema file **edm.xml**, and configure it such that for each column in the sensitive information source table, there is a line that uses the syntax:
`\<Field name="" searchable=""/\>`.
- - Use column names for *Field name* values.
- - Use *searchable="true"* for the fields that you want to be searchable and primary fields up to a maximum of 5 fields. At least one field must be searchable.
+ - Use column names for *Field name* values.
+ - Use *searchable="true"* for the fields that you want to be searchable and primary fields up to a maximum of 5 fields. At least one field must be searchable.
- As an example, the following XML file defines the schema for a patient records database, with five fields specified as searchable: *PatientID*, *MRN*, *SSN*, *Phone*, and *DOB*.
+ As an example, the following XML file defines the schema for a patient records database, with five fields specified as searchable: *PatientID*, *MRN*, *SSN*, *Phone*, and *DOB*.
(You can copy, modify, and use our example.)
The `ignoredDelimiters` flag doesn't support:
</EdmSchema> ```
-Once you have created the EDM schema file in XML format, you have to upload it to the cloud service.
+ Once you have created the EDM schema file in XML format, you have to upload it to the cloud service.
-2. Connect to the Security & Compliance Center PowerShell using the procedures in [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
+2. [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
-3. To upload the database schema, run the following cmdlets, one at a time:
+3. To upload the database schema, run the following command:
```powershell
- $edmSchemaXml=Get-Content .\\edm.xml -Encoding Byte -ReadCount 0
- New-DlpEdmSchema -FileData $edmSchemaXml -Confirm:$true
+ New-DlpEdmSchema -FileData ([System.IO.File]::ReadAllBytes('.\\edm.xml')) -Confirm:$true
``` You will be prompted to confirm, as follows:
Once you have created the EDM schema file in XML format, you have to upload it t
> > \[Y\] Yes \[A\] Yes to All \[N\] No \[L\] No to All \[?\] Help (default is "Y"):
-> [!TIP]
-> If you want your changes to occur without confirmation, in Step 2, use this cmdlet instead: New-DlpEdmSchema -FileData $edmSchemaXml
+ > [!TIP]
+ > If you want your changes to occur without confirmation, don't use `-Confirm:$true` in Step 3.
> [!NOTE] > It can take from 10 to 60 minutes to update the EDMSchema with additions. The update must complete before you execute steps that use the additions.
compliance Sit Modify A Custom Sensitive Information Type In Powershell https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/sit-modify-a-custom-sensitive-information-type-in-powershell.md
audience: Admin
ms.localizationpriority: medium-+ - M365-security-compliance
+search.appverid:
- MOE150 - MET150 description: "Learn how to modify a custom sensitive information using PowerShell."
To connect to Compliance Center PowerShell, see [Connect to Compliance Center Po
$rulepak = Get-DlpSensitiveInformationTypeRulePackage -Identity "Employee ID Custom Rule Pack" ```
-3. Use the [Set-Content](/powershell/module/microsoft.powershell.management/set-content) cmdlet to export the custom rule package to an XML file:
+3. Use the following syntax to export the custom rule package to an XML file:
```powershell
- Set-Content -Path "XMLFileAndPath" -Encoding Byte -Value $rulepak.SerializedClassificationRuleCollection
+ [System.IO.File]::WriteAllBytes('XMLFileAndPath', $rulepak.SerializedClassificationRuleCollection)
``` This example export the rule package to the file named ExportedRulePackage.xml in the C:\My Documents folder. ```powershell
- Set-Content -Path "C:\My Documents\ExportedRulePackage.xml" -Encoding Byte -Value $rulepak.SerializedClassificationRuleCollection
+ [System.IO.File]::WriteAllBytes('C:\My Documents\ExportedRulePackage.xml', $rulepak.SerializedClassificationRuleCollection)
``` #### Step 2: Modify the sensitive information type in the exported XML file
Sensitive information types in the XML file and other elements in the file are d
To import the updated XML back into the existing rule package, use the [Set-DlpSensitiveInformationTypeRulePackage](/powershell/module/exchange/set-dlpsensitiveinformationtyperulepackage) cmdlet: ```powershell
-Set-DlpSensitiveInformationTypeRulePackage -FileData ([Byte[]]$(Get-Content -Path "C:\My Documents\External Sensitive Info Type Rule Collection.xml" -Encoding Byte -ReadCount 0))
+Set-DlpSensitiveInformationTypeRulePackage -FileData ([System.IO.File]::ReadAllBytes('C:\My Documents\External Sensitive Info Type Rule Collection.xml'))
``` For detailed syntax and parameter information, see [Set-DlpSensitiveInformationTypeRulePackage](/powershell/module/exchange/set-dlpsensitiveinformationtyperulepackage). - ## More information - [Learn about data loss prevention](dlp-learn-about-dlp.md)- - [Sensitive information type entity definitions](sensitive-information-type-entity-definitions.md)- - [What the DLP functions look for](what-the-dlp-functions-look-for.md)
compliance Sit Modify Edm Schema Configurable Match https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/sit-modify-edm-schema-configurable-match.md
audience: Admin Previously updated : Last updated : ms.localizationpriority: high-+ - M365-security-compliance
+search.appverid:
- MOE150 - MET150 description: Learn how to modify an edm schema to use configurable match.
# Modify Exact Data Match schema to use configurable match
-Exact Data Match (EDM) based classification enables you to create custom sensitive information types that refer to exact values in a database of sensitive information. When you need to allow for variants of an exact string, you can use *configurable match* to tell Microsoft 365 to ignore case and some delimiters.
+Exact Data Match (EDM) based classification enables you to create custom sensitive information types that refer to exact values in a database of sensitive information. When you need to allow for variants of an exact string, you can use *configurable match* to tell Microsoft 365 to ignore case and some delimiters.
> [!IMPORTANT] > Use this procedure to modify an existing EDM schema and data file.
Exact Data Match (EDM) based classification enables you to create custom sensiti
- [GCC-High](https://go.microsoft.com/fwlink/?linkid=2137521) - This is specifically for high security government cloud subscribers - [DoD](https://go.microsoft.com/fwlink/?linkid=2137807) - this is specifically for United States Department of Defense cloud customers
-3. Authorize the EDM Upload Agent, open Command Prompt window (as an administrator) and run the following command:
+3. To authorize the EDM Upload Agent, open a Command Prompt window (as an administrator) and run the following command:
- `EdmUploadAgent.exe /Authorize`
+ ```dos
+ EdmUploadAgent.exe /Authorize
+ ```
4. If you don't have a current copy of the existing schema, download one by running this command:
- `EdmUploadAgent.exe /SaveSchema /DataStoreName <dataStoreName> [/OutputDir [Output dir location]]`
+ ```dos
+ EdmUploadAgent.exe /SaveSchema /DataStoreName <dataStoreName> [/OutputDir [Output dir location]]
+ ```
-5. Customize the schema so each column uses `caseInsensitive` and/or `ignoredDelimiters`. The default value for `caseInsensitive` is `false` and for `ignoredDelimiters`, it is an empty string.
+5. Customize the schema so each column uses `caseInsensitive` and/or `ignoredDelimiters`. The default value for `caseInsensitive` is `false` and for `ignoredDelimiters`, it is an empty string.
> [!NOTE] > The underlying custom sensitive information type or built-in sensitive information type used to detect the general regex pattern must support detection of the input variations listed with ignoredDelimiters. For example, the built-in U.S. social security number (SSN) sensitive information type can detect variations in the data that include dashes, spaces, or lack of spaces between the grouped numbers that make up the SSN. As a result, the only delimiters that are relevant to include in EDM's ignoredDelimiters for SSN data are: dash and space.
-
+ Here is a sample schema that simulates case insensitive match by creating the extra columns needed to recognize case variations in the sensitive data.
-
+ ```xml <EdmSchema xmlns="http://schemas.microsoft.com/office/2018/edm"> <DataStore name="PatientRecords" description="Schema for patient records policy" version="1">
Exact Data Match (EDM) based classification enables you to create custom sensiti
</DataStore> </EdmSchema> ```
-
+ In the above example, the variations of the original `PolicyNumber` column will no longer be needed if both `caseInsensitive` and `ignoredDelimiters` are added.
-
+ To update this schema so that EDM uses configurable match use the `caseInsensitive` and `ignoredDelimiters` flags. Here's how that looks:
-
+ ```xml <EdmSchema xmlns="http://schemas.microsoft.com/office/2018/edm"> <DataStore name="PatientRecords" description="Schema for patient records policy" version="1">
Exact Data Match (EDM) based classification enables you to create custom sensiti
</DataStore> </EdmSchema> ```
-
+ The `ignoredDelimiters` flag supports any non-alphanumeric character; here are some examples: - \. - \-
Exact Data Match (EDM) based classification enables you to create custom sensiti
- \\ - \~ - \;
-
+ The `ignoredDelimiters` flag doesn't support: - characters 0-9 - A-Z - a-z - \"
- - \,
+ - \,
-6. Connect to the Security & Compliance center using the procedures in [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
+6. [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
> [!NOTE] > If your organization has set up [Customer Key for Microsoft 365 at the tenant level (public preview)](customer-key-tenant-level.md#overview-of-customer-key-for-microsoft-365-at-the-tenant-level-public-preview), Exact data match will make use of its encryption functionality automatically. This is available only to E5 licensed tenants in the Commercial cloud.
-7. Update your schema by running these cmdlets one at a time:
+7. Update your schema by running the following command:
- `$edmSchemaXml=Get-Content .\\edm.xml -Encoding Byte -ReadCount 0`
-
- `Set-DlpEdmSchema -FileData $edmSchemaXml -Confirm:$true`
+ ```powershell
+ Set-DlpEdmSchema -FileData ([System.IO.File]::ReadAllBytes('.\\edm.xml')) -Confirm:$true
+ ```
-8. If necessary, update the data file to match the new schema version
+8. If necessary, update the data file to match the new schema version.
> [!TIP] > Optionally, you can run a validation against your csv file before uploading by running: >
- >`EdmUploadAgent.exe /ValidateData /DataFile [data file] [schema file]`
+ > `EdmUploadAgent.exe /ValidateData /DataFile [data file] [schema file]`
>
- >For more information on all the EdmUploadAgent.exe >supported parameters run
+ > For more information on all the EdmUploadAgent.exe supported parameters, run
> > `EdmUploadAgent.exe /?` 9. Open a Command Prompt window (as an administrator) and run the following command to hash and upload your sensitive data:
- `EdmUploadAgent.exe /UploadData /DataStoreName [DS Name] /DataFile [data file] /HashLocation [hash file location] /Salt [custom salt] /Schema [Schema file]`
-
+ ```dos
+ EdmUploadAgent.exe /UploadData /DataStoreName [DS Name] /DataFile [data file] /HashLocation [hash file location] /Salt [custom salt] /Schema [Schema file]
+ ```
## Related articles
compliance Sit Modify Keyword Dictionary https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/sit-modify-keyword-dictionary.md
audience: Admin Previously updated : Last updated : ms.localizationpriority: medium-+ - M365-security-compliance
+search.appverid:
- MOE150 - MET150
You might need to modify keywords in one of your keyword dictionaries, or modify
Keyword dictionaries can be used as `Primary elements` or `Supporting elements` in sensitive information type (SIT) patterns. You can edit a keyword dictionary while creating a SIT or in an existing SIT. For example, to edit an existing keyword dictionary: 1. Open the pattern that has the keyword dictionary you want to update.
-2. Find the keyword dictionary you want to update and choose edit.
-3. Make your edits, using one keyword per line.
+2. Find the keyword dictionary you want to update and choose edit.
+3. Make your edits, using one keyword per line.
-![screenshot edit keywords.](../media/edit-keyword-dictionary.png)
+ ![Screenshot of editing keywords in a keyword dictionary.](../media/edit-keyword-dictionary.png)
4. Choose `Done`.
-## Modify a keyword dictionary using PowerShell
+## Modify a keyword dictionary using PowerShell
-For example, we'll modify some terms in PowerShell, save the terms locally where you can modify them in an editor, and then update the previous terms in place.
+In this example, we'll modify some terms in PowerShell, save the terms locally so that you can edit them in an editor, and then update the dictionary in place.
First, retrieve the dictionary object:
-
+ ```powershell $dict = Get-DlpKeywordDictionary -Name "Diseases" ```
-Printing `$dict` will show the various variables. The keywords themselves are stored in an object on the backend, but `$dict.KeywordDictionary` contains a string representation of them, which you'll use to modify the dictionary.
+Printing `$dict` will show the various properties. The keywords themselves are stored in an object on the backend, but `$dict.KeywordDictionary` contains a string representation of them, which you'll use to modify the dictionary.
+
+Before you modify the dictionary, you need to turn the string of terms back into an array using the `.split(',')` method. Then you'll clean up the unwanted spaces between the keywords with the `.trim()` method, leaving just the keywords to work with.
-Before you modify the dictionary, you need to turn the string of terms back into an array using the `.split(',')` method. Then you'll clean up the unwanted spaces between the keywords with the `.trim()` method, leaving just the keywords to work with.
-
```powershell $terms = $dict.KeywordDictionary.split(',').trim() ``` Now you'll remove some terms from the dictionary. Because the example dictionary has only a few keywords, you could as easily skip to exporting the dictionary and editing it in Notepad, but dictionaries generally contain a large amount of text, so you'll first learn this way to edit them easily in PowerShell.
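For comparison, the split-and-trim step behaves like this (shown in Python purely for illustration; the sample string is made up):

```python
raw = "aarskog's syndrome, abandonment , abasia"
# Split the comma-delimited dictionary string, then strip the stray
# spaces around each keyword, mirroring .split(',').trim() in PowerShell.
terms = [term.strip() for term in raw.split(",")]
print(terms)  # ["aarskog's syndrome", 'abandonment', 'abasia']
```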
-
+ In the last step, you saved the keywords to an array. There are several ways to [remove items from an array](/previous-versions/windows/it-pro/windows-powershell-1.0/ee692802(v=technet.10)), but as a straightforward approach, you'll create an array of the terms you want to remove from the dictionary, and then copy only the dictionary terms to it that aren't in the list of terms to remove.
-
-Run the command `$terms` to show the current list of terms. The output of the command looks like this:
-
-`aarskog's syndrome`
-`abandonment`
-`abasia`
-`abderhalden-kaufmann-lignac`
-`abdominalgia`
-`abduction contracture`
-`abetalipoproteinemia`
-`abiotrophy`
-`ablatio`
-`ablation`
-`ablepharia`
-`abocclusion`
-`abolition`
-`aborter`
-`abortion`
-`abortus`
-`aboulomania`
-`abrami's disease`
+
+Run the command `$terms` to show the current list of terms. The output of the command looks like this:
+
+```powershell
+aarskog's syndrome
+abandonment
+abasia
+abderhalden-kaufmann-lignac
+abdominalgia
+abduction contracture
+abetalipoproteinemia
+abiotrophy
+ablatio
+ablation
+ablepharia
+abocclusion
+abolition
+aborter
+abortion
+abortus
+aboulomania
+abrami's disease
+```
Run this command to specify the terms that you want to remove:
-
+ ```powershell
-$termsToRemove = @('abandonment', 'ablatio')
+$termsToRemove = @('abandonment','ablatio')
```

Run this command to actually remove the terms from the list:
-
+ ```powershell
-$updatedTerms = $terms | Where-Object{ $_ -notin $termsToRemove }
+$updatedTerms = $terms | Where-Object {$_ -notin $termsToRemove}
```
-Run the command `$updatedTerms` to show the updated list of terms. The output of the command looks like this (the specified terms have been removed):
-
-`aarskog's syndrome`
-`abasia`
-`abderhalden-kaufmann-lignac`
-`abdominalgia`
-`abduction contracture`
-`abetalipo proteinemia`
-`abiotrophy`
-`ablation`
-`ablepharia`
-`abocclusion`
-`abolition`
-`aborter`
-`abortion`
-`abortus`
-`aboulomania`
-`abrami's disease`
+Run the command `$updatedTerms` to show the updated list of terms. The output of the command looks like this (the specified terms have been removed):
+
+```powershell
+aarskog's syndrome
+abasia
+abderhalden-kaufmann-lignac
+abdominalgia
+abduction contracture
+abetalipoproteinemia
+abiotrophy
+ablation
+ablepharia
+abocclusion
+abolition
+aborter
+abortion
+abortus
+aboulomania
+abrami's disease
```

Now save the dictionary locally and add a few more terms. You could add the terms right here in PowerShell, but you'll still need to export the file locally to ensure it's saved with Unicode encoding and contains the BOM.
-
+ Save the dictionary locally by running the following:
-
+ ```powershell
+# Export with Unicode (UTF-16) encoding so the file is saved with its BOM
+Set-Content -Path "C:\myPath\terms.txt" -Value $updatedTerms -Encoding Unicode
+```
+
+Now open the file, add your other terms, and save with Unicode encoding (UTF-16). Now you'll upload the updated terms and update the dictionary in place.
-
+ ```powershell
-PS> Set-DlpKeywordDictionary -Identity "Diseases" -FileData (Get-Content -Path "C:myPath\terms.txt" -Encoding Byte -ReadCount 0)
+Set-DlpKeywordDictionary -Identity "Diseases" -FileData ([System.IO.File]::ReadAllBytes('C:\myPath\terms.txt'))
```
-Now the dictionary has been updated in place. The `Identity` field takes the name of the dictionary. If you wanted to also change the name of your dictionary using the `set-` cmdlet, you would just need to add the `-Name` parameter to what's above with your new dictionary name.
+Now the dictionary has been updated in place. The `Identity` field takes the name of the dictionary. If you wanted to also change the name of your dictionary using the `Set-` cmdlet, you would just need to add the `-Name` parameter to what's above with your new dictionary name.
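As a sketch of that rename (the new name "Medical Conditions" is hypothetical, used only for illustration), the call could look like the following, reusing the same terms file uploaded above:

```powershell
# Sketch only: rename the existing "Diseases" dictionary while re-uploading
# its terms. "Medical Conditions" is a hypothetical replacement name.
Set-DlpKeywordDictionary -Identity "Diseases" -Name "Medical Conditions" -FileData ([System.IO.File]::ReadAllBytes('C:\myPath\terms.txt'))
```

After the rename, `-Identity` in later calls should use the new name.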
+
+## See also
-See Also
- [Create a keyword dictionary](create-a-keyword-dictionary.md)
- [Create a custom sensitive information type](create-a-custom-sensitive-information-type.md)
compliance Sit Use Exact Data Manage Schema https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/compliance/sit-use-exact-data-manage-schema.md
If you want to make changes to your EDM schema, for example the **edm.xml** file
> [!TIP]
> You can change your EDM schema and sensitive information table source file to take advantage of **configurable match**. When configured, EDM will ignore case differences and some delimiters when it evaluates an item. This makes defining your xml schema and your sensitive data files easier. To learn more, see [Using the caseInsensitive and ignoredDelimiters fields](sit-get-started-exact-data-match-create-schema.md#using-the-caseinsensitive-and-ignoreddelimiters-fields).
-1. Edit your **edm.xml** file (this is the file discussed in the [Create the schema for exact data match based sensitive information types](sit-get-started-exact-data-match-create-schema.md#create-the-schema-for-exact-data-match-based-sensitive-information-types).
+1. Edit your **edm.xml** file (this is the file discussed in [Create the schema for exact data match based sensitive information types](sit-get-started-exact-data-match-create-schema.md#create-the-schema-for-exact-data-match-based-sensitive-information-types)).
-2. Connect to the Security & Compliance center using the procedures in [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
+2. [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
-3. To update your database schema, run the following cmdlets, one at a time:
+3. To update your database schema, run the following command:
```powershell
- $edmSchemaXml=Get-Content .\\edm.xml -Encoding Byte -ReadCount 0
- Set-DlpEdmSchema -FileData $edmSchemaXml -Confirm:$true
+ Set-DlpEdmSchema -FileData ([System.IO.File]::ReadAllBytes('.\\edm.xml')) -Confirm:$true
```

You will be prompted to confirm, as follows:
> \[Y\] Yes \[A\] Yes to All \[N\] No \[L\] No to All \[?\] Help (default is "Y"):

> [!TIP]
- > If you want your changes to occur without confirmation, in Step 3, use this cmdlet instead: Set-DlpEdmSchema -FileData $edmSchemaXml
+ > If you want your changes to occur without confirmation, don't use `-Confirm:$true` in Step 3.
> [!NOTE]
- > It can take between 10-60 minutes to update the EDMSchema with additions. The update must complete before you execute steps that use the additions.-->
+ > It can take between 10-60 minutes to update the EDMSchema with additions. The update must complete before you execute steps that use the additions.
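Because the update isn't instantaneous, one way to check what's currently in place before continuing is to list the tenant's EDM schemas (a sketch; `Get-DlpEdmSchema` is the read counterpart of `Set-DlpEdmSchema`, and the property shown is assumed for illustration):

```powershell
# Sketch: list the EDM schemas currently registered for the tenant,
# to confirm the updated schema is present before running dependent steps.
Get-DlpEdmSchema | Format-List Identity
```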
## Removing the schema for EDM-based classification manually

If you want to remove the schema you're using for EDM-based classification, follow these steps:
-1. Connect to the Security & Compliance center using the procedures in [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
+1. [Connect to Security & Compliance Center PowerShell](/powershell/exchange/connect-to-scc-powershell).
-2. Run the following PowerShell cmdlets, substituting the data store name of "patient records" with the one you want to remove (using the patientrecords store as an example):
+2. Run the following command, substituting the data store name of "patient records" with the one you want to remove (using the patientrecords store as an example):
```powershell
Remove-DlpEdmSchema -Identity 'patientrecords'
```
> \[Y\] Yes \[A\] Yes to All \[N\] No \[L\] No to All \[?\] Help (default is "Y"):

> [!TIP]
- > If you want your changes to occur without confirmation, in Step 2, use this cmdlet instead: Remove-DlpEdmSchema -Identity patientrecords -Confirm:$false
+ > If you don't want to be prompted for confirmation, add `-Confirm:$false` to the command in Step 2.
### Edit or delete the EDM schema with the wizard
-1. Open **Compliance center** > **Data classification** > **Exact data matches**.
+1. Open **Compliance center** \> **Data classification** \> **Exact data matches**.
2. Choose **EDM schemas**.
4. Choose **Edit EDM schema** or **Delete EDM schema** from the flyout. > [!IMPORTANT]
-> If you want to remove a schema, and it is already associated with an EDM sensitive info type, you must first delete the EDM sensitive info type, then you can delete the schema.
+> If you want to remove a schema, and it is already associated with an EDM sensitive info type, you must first delete the EDM sensitive info type, then you can delete the schema.
contentunderstanding Learn About Document Understanding Models Through The Sample Model https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/contentunderstanding/learn-about-document-understanding-models-through-the-sample-model.md
Title: Learn about document understanding models through the sample model in Microsoft SharePoint Syntex
+ Title: Import a sample document understanding model for Microsoft SharePoint Syntex
ms.localizationpriority: medium
description: Learn about document understanding models through the sample model.
-# Learn about document understanding models through the sample model in Microsoft SharePoint Syntex
+# Import a sample document understanding model for Microsoft SharePoint Syntex
SharePoint Syntex provides a sample model that you can examine to gain a better understanding of how to create your own models. The sample model also lets you examine model components, such as its classifier, extractors, and explanations. You can also use the sample files to train the model.
enterprise Microsoft 365 Vpn Implement Split Tunnel https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/enterprise/microsoft-365-vpn-implement-split-tunnel.md
Title: "Implementing VPN split tunneling for Office 365"
+ Title: "Implementing VPN split tunneling for Microsoft 365"
- Previously updated : 9/22/2020
+ Last updated : 1/28/2022
audience: Admin-
+ms.article: conceptual
ms.localizationpriority: medium search.appverid:
- remotework f1.keywords: - NOCSH
-description: "How to implement VPN split tunneling for Office 365"
+description: "How to implement VPN split tunneling for Microsoft 365"
-# Implementing VPN split tunneling for Office 365
+# Implementing VPN split tunneling for Microsoft 365
>[!NOTE]
->This topic is part of a set of topics that address Office 365 optimization for remote users.
->- For an overview of using VPN split tunneling to optimize Office 365 connectivity for remote users, see [Overview: VPN split tunneling for Office 365](microsoft-365-vpn-split-tunnel.md).
->- For information about optimizing Office 365 worldwide tenant performance for users in China, see [Office 365 performance optimization for China users](microsoft-365-networking-china.md).
+>This article is part of a set of articles that address Microsoft 365 optimization for remote users.
-For many years, enterprises have been using VPNs to support remote experiences for their users. Whilst core workloads remained on-premises, a VPN from the remote client routed through a datacenter on the corporate network was the primary method for remote users to access corporate resources. To safeguard these connections, enterprises build layers of network security solutions along the VPN paths. This security was built to protect internal infrastructure and to safeguard mobile browsing of external web sites by rerouting traffic into the VPN and then out through the on-premises Internet perimeter. VPNs, network perimeters, and associated security infrastructure were often purpose-built and scaled for a defined volume of traffic, typically with most connectivity being initiated from within the corporate network, and most of it staying within the internal network boundaries.
+>- For an overview of using VPN split tunneling to optimize Microsoft 365 connectivity for remote users, see [Overview: VPN split tunneling for Microsoft 365](microsoft-365-vpn-split-tunnel.md).
+>- For information about optimizing Microsoft 365 worldwide tenant performance for users in China, see [Microsoft 365 performance optimization for China users](microsoft-365-networking-china.md).
-For quite some time, VPN models where all connections from the remote user device are routed back into the on-premises network (known as _forced tunneling_) were largely sustainable as long as the concurrent scale of remote users was modest and the traffic volumes traversing VPN were low. Some customers continued to use VPN force tunneling as the status quo even after their applications moved from inside the corporate perimeter to public SaaS clouds, Office 365 being a prime example.
+For many years, enterprises have been using VPNs to support remote experiences for their users. While core workloads remained on-premises, a VPN from the remote client routed through a datacenter on the corporate network was the primary method for remote users to access corporate resources. To safeguard these connections, enterprises build layers of network security solutions along the VPN paths. This security was built to protect internal infrastructure and to safeguard mobile browsing of external web sites by rerouting traffic into the VPN and then out through the on-premises Internet perimeter. VPNs, network perimeters, and associated security infrastructure were often purpose-built and scaled for a defined volume of traffic, typically with most connectivity being initiated from within the corporate network, and most of it staying within the internal network boundaries.
-The use of forced tunneled VPNs for connecting to distributed and performance-sensitive cloud applications is suboptimal, but the negative effect of that may have been accepted by some enterprises so as to maintain the status quo from a security perspective. An example diagram of this scenario can be seen below:
+For quite some time, VPN models where all connections from the remote user device are routed back into the on-premises network (known as _forced tunneling_) were largely sustainable as long as the concurrent scale of remote users was modest and the traffic volumes traversing VPN were low. Some customers continued to use VPN force tunneling as the status quo even after their applications moved from inside the corporate perimeter to public SaaS clouds.
+
+The use of forced tunneled VPNs for connecting to distributed and performance-sensitive cloud applications is suboptimal, but the negative effects have been accepted by some enterprises so as to maintain the security status quo. An example diagram of this scenario can be seen below:
![Split Tunnel VPN configuration.](../media/vpn-split-tunneling/enterprise-network-traditional.png)
-This problem has been growing for many years, with many customers reporting a significant shift of network traffic patterns. Traffic that used to stay on premises now connects to external cloud endpoints. Numerous Microsoft customers report that previously, around 80% of their network traffic was to some internal source (represented by the dotted line in the above diagram). In 2020 that number is now around 20% or lower as they have shifted major workloads to the cloud, these trends are not uncommon with other enterprises. Over time, as the cloud journey progresses, the above model becomes increasingly cumbersome and unsustainable, preventing an organization from being agile as they move into a cloud first world.
+This problem has been growing for many years, with many customers reporting a significant shift of network traffic patterns. Traffic that used to stay on premises now connects to external cloud endpoints. Many Microsoft customers report that previously, around 80% of their network traffic was to some internal source (represented by the dotted line in the above diagram). In 2020 that number is now around 20% or lower as they have shifted major workloads to the cloud, these trends aren't uncommon with other enterprises. Over time, as the cloud journey progresses, the above model becomes increasingly cumbersome and unsustainable, preventing an organization from being agile as they move into a cloud-first world.
-The worldwide COVID-19 crisis has escalated this problem to require immediate remediation. The need to ensure employee safety has generated unprecedented demands on enterprise IT to support work-from-home productivity at a massive scale. Microsoft Office 365 is well positioned to help customers fulfill that demand, but high concurrency of users working from home generates a large volume of Office 365 traffic which, if routed through forced tunnel VPN and on-premises network perimeters, causes rapid saturation and runs VPN infrastructure out of capacity. In this new reality, using VPN to access Office 365 is no longer just a performance impediment, but a hard wall that not only impacts Office 365 but critical business operations that still have to rely on the VPN to operate.
+The worldwide COVID-19 crisis has escalated this problem to require immediate remediation. The need to ensure employee safety has generated unprecedented demands on enterprise IT to support work-from-home productivity at a massive scale. Microsoft 365 is well positioned to help customers fulfill that demand, but high concurrency of users working from home generates a large volume of Microsoft 365 traffic which, if routed through forced tunnel VPN and on-premises network perimeters, causes rapid saturation and runs VPN infrastructure out of capacity. In this new reality, using VPN to access Microsoft 365 is no longer just a performance impediment, but a hard wall that not only impacts Microsoft 365 but critical business operations that still have to rely on the VPN to operate.
-Microsoft has been working closely with customers and the wider industry for many years to provide effective, modern solutions to these problems from within our own services, and to align with industry best practice. [Connectivity principles](./microsoft-365-network-connectivity-principles.md) for the Office 365 service have been designed to work efficiently for remote users whilst still allowing an organization to maintain security and control over their connectivity. These solutions can also be implemented quickly with limited work yet achieve a significant positive impact on the problems outlined above.
+Microsoft has been working closely with customers and the wider industry for many years to provide effective, modern solutions to these problems from within our own services, and to align with industry best practice. [Connectivity principles](./microsoft-365-network-connectivity-principles.md) for the Microsoft 365 service have been designed to work efficiently for remote users while still allowing an organization to maintain security and control over their connectivity. These solutions can also be implemented quickly with limited work yet achieve a significant positive effect on the problems outlined above.
-Microsoft's recommended strategy for optimizing remote worker's connectivity is focused on rapidly alleviating the problems with the traditional approach and also providing high performance with a few simple steps. These steps adjust the legacy VPN approach for a few defined endpoints that bypass bottlenecked VPN servers. An equivalent or even superior security model can be applied at different layers to remove the need to secure all traffic at the egress of the corporate network. In most cases this can be effectively achieved within hours and is then scalable to other workloads as requirements demand and time allows.
+Microsoft's recommended strategy for optimizing remote worker's connectivity is focused on rapidly mitigating problems and providing high performance with a few simple steps. These steps adjust the legacy VPN approach for a few defined endpoints that bypass bottlenecked VPN servers. An equivalent or even superior security model can be applied at different layers to remove the need to secure all traffic at the egress of the corporate network. In most cases, this can be effectively achieved within hours and is then scalable to other workloads as requirements demand and time allows.
## Common VPN scenarios
-In the list below you'll see the most common VPN scenarios seen in enterprise environments. Most customers traditionally operate model 1 (VPN Forced Tunnel). This section will help you to quickly and securely transition to **model 2**, which is achievable with relatively little effort, and has enormous benefits to network performance and user experience.
+In the list below, you'll see the most common VPN scenarios seen in enterprise environments. Most customers traditionally operate model 1 (VPN Forced Tunnel). This section will help you to quickly and securely transition to **model 2**, which is achievable with relatively little effort, and has enormous benefits to network performance and user experience.
| Model | Description |
| --- | --- |
| [1. VPN Forced Tunnel](#1-vpn-forced-tunnel) | 100% of traffic goes into VPN tunnel, including on-premises, Internet, and all O365/M365 |
| [2. VPN Forced Tunnel with few exceptions](#2-vpn-forced-tunnel-with-a-small-number-of-trusted-exceptions) | VPN tunnel is used by default (default route points to VPN), with a few, most important exempt scenarios that are allowed to go direct |
-| [3. VPN Forced Tunnel with broad exceptions](#3-vpn-forced-tunnel-with-broad-exceptions) | VPN tunnel is used by default (default route points to VPN), with broad exceptions that are allowed to go direct (such as all Office 365, All Salesforce, All Zoom) |
+| [3. VPN Forced Tunnel with broad exceptions](#3-vpn-forced-tunnel-with-broad-exceptions) | VPN tunnel is used by default (default route points to VPN), with broad exceptions that are allowed to go direct (such as all Microsoft 365, All Salesforce, All Zoom) |
| [4. VPN Selective Tunnel](#4-vpn-selective-tunnel) | VPN tunnel is used only for corpnet-based services. Default route (Internet and all Internet-based services) goes direct. |
-| [5. No VPN](#5-no-vpn) | A variation of #2, where instead of legacy VPN, all corpnet services are published through modern security approaches (like Zscaler ZPA, Azure Active Directory (Azure AD) Proxy/MCAS, etc.) |
+| [5. No VPN](#5-no-vpn) | A variation of #2. Instead of legacy VPN, all corpnet services are published through modern security approaches (like Zscaler ZPA, Azure Active Directory (Azure AD) Proxy/MCAS, etc.) |
### 1. VPN Forced Tunnel
-This is the most common starting scenario for most enterprise customers. A forced VPN is used, which means 100% of traffic is directed into the corporate network regardless of the fact the endpoint resides within the corporate network or not. Any external (Internet) bound traffic such as Office 365 or Internet browsing is then hair-pinned back out of the on-premises security equipment such as proxies. In the current climate with nearly 100% of users working remotely, this model therefore puts high load on the VPN infrastructure and is likely to significantly hinder performance of all corporate traffic and thus the enterprise to operate efficiently at a time of crisis.
+The most common starting scenario for most enterprise customers. A forced VPN is used, which means 100% of traffic is directed into the corporate network whether the endpoint resides within the corporate network or not. Any external (Internet) bound traffic such as Microsoft 365 or Internet browsing is then hair-pinned back out of the on-premises security equipment such as proxies. In the current climate with nearly 100% of users working remotely, this model puts a high load on the VPN infrastructure and is likely to significantly hinder the performance of all corporate traffic, and thus the enterprise's ability to operate efficiently at a time of crisis.
![VPN Forced Tunnel model 1.](../media/vpn-split-tunneling/vpn-model-1.png) ### 2. VPN Forced Tunnel with a small number of trusted exceptions
-This model is significantly more efficient for an enterprise to operate under as it allows a few controlled and defined endpoints that are very high load and latency sensitive to bypass the VPN tunnel and go direct to the Office 365 service in this example. This significantly improves the performance for the offloaded services, and also decreases the load on the VPN infrastructure, thus allowing elements that still require it to operate with lower contention for resources. It is this model that this article concentrates on assisting with the transition to as it allows for simple, defined actions to be taken quickly with numerous positive outcomes.
+Significantly more efficient for an enterprise to operate under. This model allows a few controlled and defined endpoints that are high load and latency sensitive to bypass the VPN tunnel and go direct to the Microsoft 365 service. This significantly improves the performance of the offloaded services, and also decreases the load on the VPN infrastructure, allowing elements that still require it to operate with lower contention for resources. This article concentrates on assisting with the transition to this model, as it allows for simple, defined actions to be taken quickly with numerous positive outcomes.
![Split Tunnel VPN model 2.](../media/vpn-split-tunneling/vpn-model-2.png) ### 3. VPN Forced Tunnel with broad exceptions
-The third model broadens the scope of model two as rather than just sending a small group of defined endpoints direct, it instead sends all traffic directly to trusted services such Office 365 and SalesForce. This further reduces the load on the corporate VPN infrastructure and improves the performance of the services defined. As this model is likely to take more time to assess the feasibility of and implement, it is likely a step that can be taken iteratively at a later date once model two is successfully in place.
+Broadens the scope of model 2. Rather than just sending a small group of defined endpoints direct, it instead sends all traffic directly to trusted services such as Microsoft 365 and Salesforce. This further reduces the load on the corporate VPN infrastructure and improves the performance of the services defined. As this model is likely to take more time to assess the feasibility of and implement, it's likely a step that can be taken iteratively at a later date once model 2 is successfully in place.
![Split Tunnel VPN model 3.](../media/vpn-split-tunneling/vpn-model-3.png) ### 4. VPN selective Tunnel
-This model reverses the third model in that only traffic identified as having a corporate IP address is sent down the VPN tunnel and thus the Internet path is the default route for everything else. This model requires an organization to be well on the path to [Zero Trust](https://www.microsoft.com/security/zero-trust?rtc=1) in able to safely implement this model. It should be noted that this model or some variation thereof will likely become the necessary default over time as more and more services move away from the corporate network and into the cloud. Microsoft uses this model internally; you can find more information on Microsoft's implementation of VPN split tunneling at [Running on VPN: How Microsoft is keeping its remote workforce connected](https://www.microsoft.com/itshowcase/blog/running-on-vpn-how-microsoft-is-keeping-its-remote-workforce-connected/?elevate-lv).
+Reverses the third model in that only traffic identified as having a corporate IP address is sent down the VPN tunnel, and thus the Internet path is the default route for everything else. This model requires an organization to be well on the path to [Zero Trust](https://www.microsoft.com/security/zero-trust?rtc=1) to be able to safely implement this model. It should be noted that this model or some variation thereof will likely become the necessary default over time as more services move away from the corporate network and into the cloud.
+
+Microsoft uses this model internally. You can find more information on Microsoft's implementation of VPN split tunneling at [Running on VPN: How Microsoft is keeping its remote workforce connected](https://www.microsoft.com/itshowcase/blog/running-on-vpn-how-microsoft-is-keeping-its-remote-workforce-connected/?elevate-lv).
![Split Tunnel VPN model 4.](../media/vpn-split-tunneling/vpn-model-4.png) ### 5. No VPN
-A more advanced version of model number two, whereby any internal services are published through a modern security approach or SDWAN solution such as Azure AD Proxy, Defender for Cloud Apps, Zscaler ZPA, etc.
+A more advanced version of model number 2, whereby any internal services are published through a modern security approach or SDWAN solution such as Azure AD Proxy, Defender for Cloud Apps, Zscaler ZPA, etc.
![Split Tunnel VPN model 5.](../media/vpn-split-tunneling/vpn-model-5.png) ## Implement VPN split tunneling
-In this section, you'll find the simple steps required to migrate your VPN client architecture from a _VPN forced tunnel_ to a _VPN forced tunnel with a small number of trusted exceptions_, [VPN split tunnel model #2](#2-vpn-forced-tunnel-with-a-small-number-of-trusted-exceptions) in [Common VPN scenarios](#common-vpn-scenarios).
+In this section, you'll find the simple steps required to migrate your VPN client architecture from a _VPN forced tunnel_ to a _VPN forced tunnel with a few trusted exceptions_, [VPN split tunnel model #2](#2-vpn-forced-tunnel-with-a-small-number-of-trusted-exceptions) in [Common VPN scenarios](#common-vpn-scenarios).
The diagram below illustrates how the recommended VPN split tunnel solution works:
### 1. Identify the endpoints to optimize
-In the [Office 365 URLs and IP address ranges](urls-and-ip-address-ranges.md) topic, Microsoft clearly identifies the key endpoints you need to optimize and categorizes them as **Optimize**. There are currently just four URLS and 20 IP subnets that need to be optimized. This small group of endpoints accounts for around 70% - 80% of the volume of traffic to the Office 365 service including the latency sensitive endpoints such as those for Teams media. Essentially this is the traffic that we need to take special care of and is also the traffic that will put incredible pressure on traditional network paths and VPN infrastructure.
+In the [Microsoft 365 URLs and IP address ranges](urls-and-ip-address-ranges.md) article, Microsoft clearly identifies the key endpoints you need to optimize and categorizes them as **Optimize**. There are currently just four URLs and 20 IP subnets that need to be optimized. This small group of endpoints accounts for around 70% - 80% of the volume of traffic to the Microsoft 365 service, including latency-sensitive endpoints such as those for Teams media. Essentially, this is the traffic that we need to take special care of, and is also the traffic that will put incredible pressure on traditional network paths and VPN infrastructure.
URLs in this category have the following characteristics:
- Low rate of change and are expected to remain small in number (currently 20 IP subnets)
- Are bandwidth and/or latency sensitive
- Are able to have required security elements provided in the service rather than inline on the network
-- Account for around 70-80% of the volume of traffic to the Office 365 service
+- Account for around 70-80% of the volume of traffic to the Microsoft 365 service
-For more information about Office 365 endpoints and how they are categorized and managed, see [Managing Office 365 endpoints](managing-office-365-endpoints.md).
+For more information about Microsoft 365 endpoints and how they are categorized and managed, see [Managing Microsoft 365 endpoints](managing-office-365-endpoints.md).
#### Optimize URLs
The current Optimize URLs can be found in the table below. Under most circumstan
| https://\<tenant\>-my.sharepoint.com | TCP 443 | This is the primary URL for OneDrive for Business and has high bandwidth usage and possibly high connection count from the OneDrive for Business Sync tool. |
| Teams Media IPs (no URL) | UDP 3478, 3479, 3480, and 3481 | Relay Discovery allocation and real-time traffic (3478), Audio (3479), Video (3480), and Video Screen Sharing (3481). These are the endpoints used for Skype for Business and Microsoft Teams Media traffic (calls, meetings, etc.). Most endpoints are provided when the Microsoft Teams client establishes a call (and are contained within the required IPs listed for the service). Use of the UDP protocol is required for optimal media quality. |
-In the above examples, **tenant** should be replaced with your Office 365 tenant name. For example, **contoso.onmicrosoft.com** would use _contoso.sharepoint.com_ and _constoso-my.sharepoint.com_.
+In the above examples, **tenant** should be replaced with your Microsoft 365 tenant name. For example, **contoso.onmicrosoft.com** would use _contoso.sharepoint.com_ and _contoso-my.sharepoint.com_.
#### Optimize IP address ranges
+At the time of writing the IP address ranges that these endpoints correspond to are as follows. It's **very strongly** advised you use a [script such as this](https://github.com/microsoft/Office365NetworkTools/tree/master/Scripts/Display%20URL-IPs-Ports%20per%20Category) example, the [Microsoft 365 IP and URL web service](microsoft-365-ip-web-service.md) or the [URL/IP page](urls-and-ip-address-ranges.md) to check for any updates when applying the configuration, and put a policy in place to do so regularly.
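As a quick illustration, the current Optimize subnets can be pulled programmatically from the web service mentioned above (a sketch; the `endpoints.office.com` web service and its `category`/`ips` fields are described in the linked IP and URL web service article):

```powershell
# Sketch: list the current Optimize-category IP subnets from the web service
$clientRequestId = [Guid]::NewGuid().Guid
$uri = "https://endpoints.office.com/endpoints/worldwide?clientrequestid=$clientRequestId"
$endpoints = Invoke-RestMethod -Uri $uri

# Keep only Optimize entries that carry IP data, flatten, and de-duplicate
$endpoints |
    Where-Object { $_.category -eq 'Optimize' -and $_.ips } |
    ForEach-Object { $_.ips } |
    Sort-Object -Unique
```

Running this periodically (or from a scheduled task) is one way to implement the update policy recommended above.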
```
104.146.128.0/17
```
Once you have added the routes, you can confirm that the route table is correct
![Route print output.](../media/vpn-split-tunneling/vpn-route-print.png)
+To add routes for _all_ current IP address ranges in the Optimize category, you can use the following script variation to query the [Microsoft 365 IP and URL web service](microsoft-365-ip-web-service.md) for the current set of Optimize IP subnets and add them to the route table.
#### Example: Add all Optimize subnets into the route table
foreach ($prefix in $destPrefix) {New-NetRoute -DestinationPrefix $prefix -InterfaceIndex $intIndex -NextHop $gateway}
``` -->
+The VPN client should be configured so that traffic to the **Optimize** IPs is routed in this way. This allows the traffic to utilize local Microsoft resources such as the Microsoft 365 Service Front Doors (for example, the [Azure Front Door](https://azure.microsoft.com/blog/azure-front-door-service-is-now-generally-available/)) that deliver Microsoft 365 services and connectivity endpoints as close to your users as possible. This allows us to deliver high performance levels to users wherever they are in the world, and it takes full advantage of [Microsoft's world class global network](https://azure.microsoft.com/blog/how-microsoft-builds-its-fast-and-reliable-global-network/), which is likely within a few milliseconds of your users' direct egress.
## Configuring and securing Teams media traffic
In certain scenarios, often unrelated to Teams client configuration, media traff
>[!IMPORTANT]
>To ensure Teams media traffic is routed via the desired method in all VPN scenarios, please ensure users are running Microsoft Teams client version **1.3.00.13565** or greater. This version includes improvements in how the client detects available network paths.
+Signaling traffic is performed over HTTPS and isn't as latency sensitive as the media traffic and is marked as **Allow** in the URL/IP data and thus can safely be routed through the VPN client if desired.
+
+>[!NOTE]
+>Microsoft Edge **96 and above** also supports VPN split tunneling for peer-to-peer traffic. This means customers can gain the benefit of VPN split tunneling for Teams web clients on Edge, for instance. Customers who want to set it up for websites running on Edge can achieve it by taking the additional step of enabling the Edge [WebRtcRespectOsRoutingTableEnabled](/deployedge/microsoft-edge-policies#webrtcrespectosroutingtableenabled) policy.
### Security
+One common argument for avoiding split tunnels is that it's less secure to do so; that is, any traffic that doesn't go through the VPN tunnel won't benefit from whatever encryption scheme is applied to the VPN tunnel, and is therefore less secure.
The main counter-argument to this is that media traffic is already encrypted via _Secure Real-Time Transport Protocol (SRTP)_, a profile of Real-Time Transport Protocol (RTP) that provides confidentiality, authentication, and replay attack protection to RTP traffic. SRTP itself relies on a randomly generated session key, which is exchanged via the TLS secured signaling channel. This is covered in great detail within [this security guide](/skypeforbusiness/optimizing-your-network/security-guide-for-skype-for-business-online), but the primary section of interest is media encryption. Media traffic is encrypted using SRTP, which uses a session key generated by a secure random number generator and exchanged using the signaling TLS channel. In addition, media flowing in both directions between the Mediation Server and its internal next hop is also encrypted using SRTP.
+Skype for Business Online generates username/passwords for secure access to media relays over _Traversal Using Relays around NAT (TURN)_. Media relays exchange the username/password over a TLS-secured SIP channel. It's worth noting that even though a VPN tunnel may be used to connect the client to the corporate network, the traffic still needs to flow in its SRTP form when it leaves the corporate network to reach the service.
Information on how Teams mitigates common security concerns such as voice or _Session Traversal Utilities for NAT (STUN)_ amplification attacks can be found in [5.1 Security Considerations for Implementers](/openspecs/office_protocols/ms-ice2/69525351-8c68-4864-b8a6-04bfbc87785c). You can also read about modern security controls in remote work scenarios at [Alternative ways for security professionals and IT to achieve modern security controls in today's unique remote work scenarios (Microsoft Security Team blog)](https://www.microsoft.com/security/blog/2020/03/26/alternative-security-professionals-it-achieve-modern-security-controls-todays-unique-remote-work-scenarios/).
+### Testing
+Once the policy is in place, you should confirm it's working as expected. There are multiple ways of testing that the path is correctly set to use the local Internet connection:
- Run the [Microsoft 365 connectivity test](https://aka.ms/netonboard), which will run connectivity tests for you, including trace routes as above. We're also adding VPN tests into this tooling, which should provide additional insights.
You should then see a path via the local ISP to this endpoint that should resolve to an IP in the Teams ranges we have configured for split tunneling.
+- Take a network capture using a tool such as Wireshark. Filter on UDP during a call and you should see traffic flowing to an IP in the Teams **Optimize** range. If the VPN tunnel is being used for this traffic, then the media traffic won't be visible in the trace.
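On Windows clients, a quick spot-check of name resolution and the path taken can also be done from PowerShell (a sketch; replace the contoso placeholder with your own tenant name):

```powershell
# Sketch: confirm the tenant SharePoint endpoint resolves to a Microsoft IP
Resolve-DnsName -Name "contoso.sharepoint.com" -Type A |
    Select-Object Name, IPAddress

# -TraceRoute shows whether the path egresses locally or via the VPN tunnel
Test-NetConnection -ComputerName "contoso.sharepoint.com" -TraceRoute
```

If the trace shows hops through your VPN concentrator or corporate egress for an Optimize endpoint, the split tunnel route is not being applied.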
### Additional support logs

If you need further data to troubleshoot, or are requesting assistance from Microsoft support, obtaining the following information should allow you to expedite finding a solution. Microsoft support's **TSS Windows CMD-based universal TroubleShooting Script toolset** can help you to collect the relevant logs in a simple manner. The tool and instructions on use can be found at <https://aka.ms/TssTools>.
+## How to Optimize Stream & Live Events
+
+Microsoft 365 Live Events traffic (this includes attendees to Teams-produced live events and those produced with an external encoder via Teams, Stream, or Yammer) and on-demand Stream traffic is currently categorized as **Default** versus **Optimize** in the [URL/IP list for the service](urls-and-ip-address-ranges.md). These endpoints are categorized as **Default** because they're hosted on CDNs that may also be used by other services. Customers generally prefer to proxy this type of traffic and apply any security elements normally done on endpoints such as these.
+
+Many customers have asked for URL/IP data needed to connect their users to Stream/Live Events directly from their local internet connection, rather than route the high-volume and latency-sensitive traffic via the VPN infrastructure. Typically, this isn't possible without both dedicated namespaces and accurate IP information for the endpoints, which isn't provided for Microsoft 365 endpoints categorized as **Default**.
+
+Use the following steps to enable direct connectivity for the Stream/Live Events service from clients using a forced tunnel VPN. This solution is intended to provide customers with an option to avoid routing Live Events traffic over VPN while there is high network traffic due to work-from-home scenarios. If possible, it's advised to access the service through an inspecting proxy.
+
+>[!NOTE]
+>Using this solution, there may be service elements that do not resolve to the IP addresses provided and thus traverse the VPN, but the bulk of high-volume traffic like streaming data should. There may be other elements outside the scope of Live Events/Stream which get caught by this offload, but these should be limited as they must meet both the FQDN _and_ the IP match before going direct.
+
+>[!IMPORTANT]
+>Customers are advised to weigh the risk of sending more traffic that bypasses the VPN over the performance gain for Live Events.
+
+To implement the forced tunnel exception for Teams Live Events and Stream, the following steps should be applied:
+
+### 1. Configure external DNS resolution
+
+Clients need external, recursive DNS resolution to be available so that the following host names can be resolved to IP addresses.
+
+- \*.azureedge.net
+- \*.media.azure.net
+- \*.bmc.cdn.office.net
+
+**\*.azureedge.net** is used for Stream events ([Configure encoders for live streaming in Microsoft Stream - Microsoft Stream | Microsoft Docs](/stream/live-encoder-setup)).
+
+**\*.media.azure.net** and **\*.bmc.cdn.office.net** are used for Teams-produced Live Events (Quick Start events, RTMP-In supported events [Roadmap ID 84960]) scheduled from the Teams client.
+
+Some of these endpoints are shared with other elements outside of Stream/Live Events. It isn't advised to use only these FQDNs to configure VPN offload, even if that is technically possible in your VPN solution (for example, if it works at the FQDN rather than the IP).
+
+FQDNs aren't required in the VPN configuration; they're purely for use in PAC files in combination with the IPs to send the relevant traffic direct.
+
+### 2. Implement PAC file changes (where required)
+
+For organizations that utilize a PAC file to route traffic through a proxy while on VPN, this is normally achieved using FQDNs. However, with Stream/Live Events, the host names provided contain wildcards such as **\*.azureedge.net**, which also encompasses other elements for which it isn't possible to provide full IP listings. Thus, if the request is sent direct based on DNS wildcard match alone, traffic to these endpoints will be blocked as there is no route via the direct path for it in [Step 3](#3-configure-routing-on-the-vpn-to-enable-direct-egress).
+
+To solve this, we can provide the following IPs and use them in combination with the host names in [Step 1](#1-configure-external-dns-resolution) in an example PAC file. The PAC file checks if the URL matches those used for Stream/Live Events and then if it does, it then also checks to see if the IP returned from a DNS lookup matches those provided for the service. If _both_ match, then the traffic is routed direct. If either element (FQDN/IP) doesn't match, then the traffic is sent to the proxy. As a result, the configuration ensures that anything which resolves to an IP outside of the scope of both the IP and defined namespaces will traverse the proxy via the VPN as normal.
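The double match described above can be distilled into a short PAC fragment. This is a simplified sketch of what the generated file contains (the proxy address and the subnet shown are placeholders, not service data; `shExpMatch`, `dnsResolveEx`, and `isInNetEx` are standard PAC helper functions):

```javascript
// Simplified sketch of the FQDN + IP double match for Stream/Live Events
function FindProxyForURL(url, host) {
    var direct = "DIRECT";
    var proxyServer = "PROXY 10.10.10.10:8080"; // placeholder proxy

    // 1) Does the host match a Stream/Live Events namespace?
    if (shExpMatch(host, "*.azureedge.net")
        || shExpMatch(host, "*.bmc.cdn.office.net")
        || shExpMatch(host, "*.media.azure.net")) {
        // 2) Does the resolved IP fall inside the published CDN ranges?
        var resolved_ip = dnsResolveEx(host);
        if (isInNetEx(resolved_ip, "152.199.19.0/24")) { // example subnet only
            return direct; // both FQDN and IP match: send direct
        }
    }
    // Anything else, including an FQDN match whose IP doesn't match,
    // goes to the proxy over the VPN as normal
    return proxyServer;
}
```

The full generated file produced by the script below follows this same structure, with the complete, current IP lists substituted in.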
+
+#### Gathering the current lists of CDN Endpoints
+
+Live Events uses multiple CDN providers to stream to customers in order to provide the best coverage, quality, and resiliency. Currently, both Azure CDN from Microsoft and Azure CDN from Verizon are used. Over time this could change due to situations such as regional availability. Use this article as a source for keeping the IP ranges up to date.
+
+For Azure CDN from Microsoft, you can download the list from [Download Azure IP Ranges and Service Tags - Public Cloud from Official Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=56519). You will need to look specifically for the service tag *AzureFrontdoor.Frontend* in the JSON; *addressPrefixes* will show the IPv4/IPv6 subnets. Over time the IPs can change, but the service tag list is always updated before they are put in use.
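If you've saved the service tags download locally, the relevant prefixes can be extracted with a few lines of PowerShell (a sketch; the file name shown is an assumption based on how the download is typically named):

```powershell
# Sketch: extract AzureFrontDoor.Frontend address prefixes from the
# downloaded service tags JSON (file name is an assumption)
$tags = Get-Content -Path .\ServiceTags_Public.json -Raw | ConvertFrom-Json
$frontDoor = $tags.values |
    Where-Object { $_.name -eq 'AzureFrontDoor.Frontend' } |
    Select-Object -First 1

# IPv4 and IPv6 subnets for the Front Door frontend
$frontDoor.properties.addressPrefixes
```

The Get-TLEPacFile.ps1 script later in this article performs the same extraction automatically from the live download link.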
+
+For Azure CDN from Verizon (Edgecast) you can find an exhaustive list using [https://docs.microsoft.com/rest/api/cdn/edge-nodes/list](/rest/api/cdn/edge-nodes/list) (click **Try It** ) - you will need to look specifically for the **Premium\_Verizon** section. Note that this API shows all Edgecast IPs (origin and Anycast). Currently there isn't a mechanism for the API to distinguish between origin and Anycast.
+
+To implement this in a PAC file, you can use the following example, which sends the Microsoft 365 Optimize traffic direct via FQDN (the recommended best practice), and the critical Stream/Live Events traffic direct via a combination of the FQDN and the returned IP address. The placeholder name _Contoso_ would need to be edited to your specific tenant's name, where _contoso_ is from contoso.onmicrosoft.com.
+
+##### Example PAC file
+
+Here is an example of how to generate the PAC files:
+
+1. Save the script below to your local hard disk as _Get-TLEPacFile.ps1_.
+1. Go to the [Verizon URL](/rest/api/cdn/edge-nodes/list#code-try-0) and download the resulting JSON (copy and paste it into a file such as cdnedgenodes.json).
+1. Put the file into the same folder as the script.
+1. In a PowerShell window, run the following command. Change out the tenant name for something else if you want the SPO URLs. This is Type 2, so **Optimize** and **Allow** (Type 1 is Optimize only).
+
+ ```powershell
+ .\Get-TLEPacFile.ps1 -Instance Worldwide -Type 2 -TenantName <contoso> -CdnEdgeNodesFilePath .\cdnedgenodes.json -FilePath TLE.pac
+ ```
+
+5. The TLE.pac file will contain all the namespaces and IPs (IPv4/IPv6).
+
+###### Get-TLEPacFile.ps1
+
+```powershell
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License.
+
+<#PSScriptInfo
+
+.VERSION 1.0.4
+
+.AUTHOR Microsoft Corporation
+
+.GUID 7f692977-e76c-4582-97d5-9989850a2529
+
+.COMPANYNAME Microsoft
+
+.COPYRIGHT
+Copyright (c) Microsoft Corporation. All rights reserved.
+Licensed under the MIT License.
+
+.TAGS PAC Microsoft Microsoft365 365
+
+.LICENSEURI
+
+.PROJECTURI http://aka.ms/ipurlws
+
+.ICONURI
+
+.EXTERNALMODULEDEPENDENCIES
+
+.REQUIREDSCRIPTS
+
+.EXTERNALSCRIPTDEPENDENCIES
+
+.RELEASENOTES
+
+#>
+
+<#
+
+.SYNOPSIS
+
+Create a PAC file for Microsoft 365 prioritized connectivity
+
+.DESCRIPTION
+
+This script will access updated information to create a PAC file to prioritize Microsoft 365 Urls for
+better access to the service. This script will allow you to create different types of files depending
+on how traffic needs to be prioritized.
+
+.PARAMETER Instance
+
+The service instance inside Microsoft 365.
+
+.PARAMETER ClientRequestId
+
+The client request id to connect to the web service to query up to date Urls.
+
+.PARAMETER DirectProxySettings
+
+The direct proxy settings for priority traffic.
+
+.PARAMETER DefaultProxySettings
+
+The default proxy settings for non priority traffic.
+
+.PARAMETER Type
+
+The type of prioritization to give. Valid values are 1 and 2, which are 2 different modes of operation.
+Type 1 will send Optimize traffic to the direct route. Type 2 will send Optimize and Allow traffic to
+the direct route.
+
+.PARAMETER Lowercase
+
+Flag this to include lowercase transformation into the PAC file for the host name matching.
+
+.PARAMETER TenantName
+
+The tenant name to replace wildcard Urls in the webservice.
+
+.PARAMETER ServiceAreas
+
+The service areas to filter endpoints by in the webservice.
+
+.PARAMETER FilePath
+
+The file to print the content to.
+
+.EXAMPLE
+
+Get-TLEPacFile.ps1 -ClientRequestId b10c5ed1-bad1-445f-b386-b919946339a7 -DefaultProxySettings "PROXY 4.4.4.4:70" -FilePath type1.pac
+
+.EXAMPLE
+
+Get-TLEPacFile.ps1 -ClientRequestId b10c5ed1-bad1-445f-b386-b919946339a7 -Instance China -Type 2 -DefaultProxySettings "PROXY 4.4.4.4:70" -FilePath type2.pac
+
+.EXAMPLE
+
+Get-TLEPacFile.ps1 -ClientRequestId b10c5ed1-bad1-445f-b386-b919946339a7 -Instance WorldWide -Lowercase -TenantName tenantName -ServiceAreas Sharepoint
+
+#>
+
+#Requires -Version 2
+
+[CmdletBinding(SupportsShouldProcess=$True)]
+Param (
+ [Parameter(Mandatory = $false)]
+ [ValidateSet('Worldwide', 'Germany', 'China', 'USGovDoD', 'USGovGCCHigh')]
+ [String] $Instance = "Worldwide",
+
+ [Parameter(Mandatory = $false)]
+ [ValidateNotNullOrEmpty()]
+ [guid] $ClientRequestId = [Guid]::NewGuid().Guid,
+
+ [Parameter(Mandatory = $false)]
+ [ValidateNotNullOrEmpty()]
+ [String] $DirectProxySettings = 'DIRECT',
+
+ [Parameter(Mandatory = $false)]
+ [ValidateNotNullOrEmpty()]
+ [String] $DefaultProxySettings = 'PROXY 10.10.10.10:8080',
+
+ [Parameter(Mandatory = $false)]
+ [ValidateRange(1, 2)]
+ [int] $Type = 1,
+
+ [Parameter(Mandatory = $false)]
+ [switch] $Lowercase = $false,
+
+ [Parameter(Mandatory = $false)]
+ [ValidateNotNullOrEmpty()]
+ [string] $TenantName,
+
+ [Parameter(Mandatory = $false)]
+ [ValidateSet('Exchange', 'SharePoint', 'Common', 'Skype')]
+ [string[]] $ServiceAreas,
+
+ [Parameter(Mandatory = $false)]
+ [ValidateNotNullOrEmpty()]
+ [string] $FilePath,
+
+ [Parameter(Mandatory = $false)]
+ [ValidateNotNullOrEmpty()]
+ [string] $CdnEdgeNodesFilePath
+)
+
+##################################################################################################################
+### Global constants
+##################################################################################################################
+
+$baseServiceUrl = "https://endpoints.office.com/endpoints/$Instance/?ClientRequestId={$ClientRequestId}"
+$directProxyVarName = "direct"
+$defaultProxyVarName = "proxyServer"
+$bl = "`r`n"
+
+##################################################################################################################
+### Functions to create PAC files
+##################################################################################################################
+
+function Get-PacClauses
+{
+ param(
+ [Parameter(Mandatory = $false)]
+ [string[]] $Urls,
+
+ [Parameter(Mandatory = $true)]
+ [ValidateNotNullOrEmpty()]
+ [String] $ReturnVarName
+ )
+
+ if (!$Urls)
+ {
+ return ""
+ }
+
+ $clauses = (($Urls | ForEach-Object { "shExpMatch(host, `"$_`")" }) -Join "$bl || ")
+
+@"
+ if($clauses)
+ {
+ return $ReturnVarName;
+ }
+"@
+}
+
+function Get-PacString
+{
+ param(
+ [Parameter(Mandatory = $true)]
+ [ValidateNotNullOrEmpty()]
+ [array[]] $MapVarUrls
+ )
+
+@"
+// This PAC file will provide proxy config to Microsoft 365 services
+// using data from the public web service for all endpoints
+function FindProxyForURL(url, host)
+{
+ var $directProxyVarName = "$DirectProxySettings";
+ var $defaultProxyVarName = "$DefaultProxySettings";
+
+$( if ($Lowercase) { " host = host.toLowerCase();" })
+
+$( ($MapVarUrls | ForEach-Object { Get-PACClauses -ReturnVarName $_.Item1 -Urls $_.Item2 }) -Join "$bl$bl" )
+
+$( if (!$ServiceAreas -or $ServiceAreas.Contains('Skype')) { Get-TLEPacConfiguration })
+
+ return $defaultProxyVarName;
+}
+"@ -replace "($bl){3,}","$bl$bl" # Collapse more than one blank line in the PAC file so it looks better.
+}
+
+##################################################################################################################
+### Functions to get and filter endpoints
+##################################################################################################################
+
+function Get-TLEPacConfiguration {
+ param ()
+ $PreBlock = @"
+ // Don't Proxy Teams Live Events traffic
+
+ if(shExpMatch(host, "*.azureedge.net")
+ || shExpMatch(host, "*.bmc.cdn.office.net")
+ || shExpMatch(host, "*.media.azure.net"))
+ {
+ var resolved_ip = dnsResolveEx(host);
+
+"@
+ $TLESb = New-Object 'System.Text.StringBuilder'
+ $TLESb.Append($PreBlock) | Out-Null
+
+ if (![string]::IsNullOrEmpty($CdnEdgeNodesFilePath) -and (Test-Path -Path $CdnEdgeNodesFilePath)) {
+ $CdnData = Get-Content -Path $CdnEdgeNodesFilePath -Raw -ErrorAction SilentlyContinue | ConvertFrom-Json | Select-Object -ExpandProperty value |
+ Where-Object { $_.name -eq 'Premium_Verizon'} | Select-Object -First 1 -ExpandProperty properties |
+ Select-Object -ExpandProperty ipAddressGroups
+ $CdnData | Select-Object -ExpandProperty ipv4Addresses | ForEach-Object {
+ if ($TLESb.Length -eq $PreBlock.Length) {
+ $TLESb.Append(" if(") | Out-Null
+ }
+ else {
+ $TLESb.AppendLine() | Out-Null
+ $TLESb.Append(" || ") | Out-Null
+ }
+ $TLESb.Append("isInNetEx(resolved_ip, `"$($_.BaseIpAddress)/$($_.prefixLength)`")") | Out-Null
+ }
+ $CdnData | Select-Object -ExpandProperty ipv6Addresses | ForEach-Object {
+ if ($TLESb.Length -eq $PreBlock.Length) {
+ $TLESb.Append(" if(") | Out-Null
+ }
+ else {
+ $TLESb.AppendLine() | Out-Null
+ $TLESb.Append(" || ") | Out-Null
+ }
+ $TLESb.Append("isInNetEx(resolved_ip, `"$($_.BaseIpAddress)/$($_.prefixLength)`")") | Out-Null
+ }
+ }
+ $AzureIPsUrl = Invoke-WebRequest -Uri "https://www.microsoft.com/en-us/download/confirmation.aspx?id=56519" -UseBasicParsing -ErrorAction SilentlyContinue |
+ Select-Object -ExpandProperty Links | Select-Object -ExpandProperty href |
+ Where-Object { $_.EndsWith('.json') -and $_ -match 'ServiceTags' } | Select-Object -First 1
+ if ($AzureIPsUrl) {
+ Invoke-RestMethod -Uri $AzureIPsUrl -ErrorAction SilentlyContinue | Select-Object -ExpandProperty values |
+ Where-Object { $_.name -eq 'AzureFrontDoor.Frontend' } | Select-Object -First 1 -ExpandProperty properties |
+ Select-Object -ExpandProperty addressPrefixes | ForEach-Object {
+ if ($TLESb.Length -eq $PreBlock.Length) {
+ $TLESb.Append(" if(") | Out-Null
+ }
+ else {
+ $TLESb.AppendLine() | Out-Null
+ $TLESb.Append(" || ") | Out-Null
+ }
+ $TLESb.Append("isInNetEx(resolved_ip, `"$_`")") | Out-Null
+ }
+ }
+ if ($TLESb.Length -gt $PreBlock.Length) {
+ $TLESb.AppendLine(")") | Out-Null
+ $TLESb.AppendLine(" {") | Out-Null
+ $TLESb.AppendLine(" return $directProxyVarName;") | Out-Null
+ $TLESb.AppendLine(" }") | Out-Null
+ }
+ else {
+ $TLESb.AppendLine(" // no addresses found for service via script") | Out-Null
+ }
+ $TLESb.AppendLine(" }") | Out-Null
+ return $TLESb.ToString()
+}
+
+function Get-Regex
+{
+ param(
+ [Parameter(Mandatory = $true)]
+ [ValidateNotNullOrEmpty()]
+ [string] $Fqdn
+ )
+
+ return "^" + $Fqdn.Replace(".", "\.").Replace("*", ".*").Replace("?", ".?") + "$"
+}
+
+function Match-RegexList
+{
+ param(
+ [Parameter(Mandatory = $true)]
+ [ValidateNotNullOrEmpty()]
+ [string] $ToMatch,
+
+ [Parameter(Mandatory = $false)]
+ [string[]] $MatchList
+ )
+
+ if (!$MatchList)
+ {
+ return $false
+ }
+ foreach ($regex in $MatchList)
+ {
+ if ($regex -ne $ToMatch -and $ToMatch -match (Get-Regex $regex))
+ {
+ return $true
+ }
+ }
+ return $false
+}
+
+function Get-Endpoints
+{
+ $url = $baseServiceUrl
+ if ($TenantName)
+ {
+ $url += "&TenantName=$TenantName"
+ }
+ if ($ServiceAreas)
+ {
+ $url += "&ServiceAreas=" + ($ServiceAreas -Join ",")
+ }
+ return Invoke-RestMethod -Uri $url
+}
+
+function Get-Urls
+{
+ param(
+ [Parameter(Mandatory = $false)]
+ [psobject[]] $Endpoints
+ )
+
+ if ($Endpoints)
+ {
+ return $Endpoints | Where-Object { $_.urls } | ForEach-Object { $_.urls } | Sort-Object -Unique
+ }
+ return @()
+}
+
+function Get-UrlVarTuple
+{
+ param(
+ [Parameter(Mandatory = $true)]
+ [ValidateNotNullOrEmpty()]
+ [string] $VarName,
+
+ [Parameter(Mandatory = $false)]
+ [string[]] $Urls
+ )
+ return New-Object 'Tuple[string,string[]]'($VarName, $Urls)
+}
+
+function Get-MapVarUrls
+{
+ Write-Verbose "Retrieving all endpoints for instance $Instance from web service."
+ $Endpoints = Get-Endpoints
+
+ if ($Type -eq 1)
+ {
+ $directUrls = Get-Urls ($Endpoints | Where-Object { $_.category -eq "Optimize" })
+ $nonDirectPriorityUrls = Get-Urls ($Endpoints | Where-Object { $_.category -ne "Optimize" }) | Where-Object { Match-RegexList $_ $directUrls }
+ return @(
+ Get-UrlVarTuple -VarName $defaultProxyVarName -Urls $nonDirectPriorityUrls
+ Get-UrlVarTuple -VarName $directProxyVarName -Urls $directUrls
+ )
+ }
+ elseif ($Type -eq 2)
+ {
+ $directUrls = Get-Urls ($Endpoints | Where-Object { $_.category -in @("Optimize", "Allow")})
+ $nonDirectPriorityUrls = Get-Urls ($Endpoints | Where-Object { $_.category -notin @("Optimize", "Allow") }) | Where-Object { Match-RegexList $_ $directUrls }
+ return @(
+ Get-UrlVarTuple -VarName $defaultProxyVarName -Urls $nonDirectPriorityUrls
+ Get-UrlVarTuple -VarName $directProxyVarName -Urls $directUrls
+ )
+ }
+}
+
+##################################################################################################################
+### Main script
+##################################################################################################################
+
+$content = Get-PacString (Get-MapVarUrls)
+
+if ($FilePath)
+{
+ $content | Out-File -FilePath $FilePath -Encoding ascii
+}
+else
+{
+ $content
+}
+```
+
+The script will automatically parse the Azure list based on the [download URL](https://www.microsoft.com/download/details.aspx?id=56519) and keys off of **AzureFrontDoor.Frontend**, so there is no need to get that manually.
+
+Again, it isn't advised to perform VPN offload using just the FQDNs; utilizing **both** the FQDNs and the IP addresses in the function helps scope the use of this offload to a limited set of endpoints including Live Events/Stream. The way the function is structured will result in a DNS lookup being done for the FQDN that matches those listed by the client directly, i.e. DNS resolution of the remaining namespaces remains unchanged.
+
+If you wish to limit the risk of offloading endpoints not related to Live Events and Stream, you can remove the **\*.azureedge.net** domain from the configuration which is where most of this risk lies as this is a shared domain used for all Azure CDN customers. The downside of this is that any event using an external encoder won't be optimized but events produced/organized within Teams will be.
+
+### 3. Configure routing on the VPN to enable direct egress
+
+The final step is to add a direct route for the Live Event IPs described in **Gathering the current lists of CDN Endpoints** into the VPN configuration to ensure the traffic isn't sent via the forced tunnel into the VPN. Detailed information on how to do this for Microsoft 365 Optimize endpoints can be found in the [Implement VPN split tunneling](#implement-vpn-split-tunneling) section and the process is exactly the same for the Stream/Live Events IPs listed in this document.
+
+Note that only the IPs (not FQDNs) from [Gathering the current lists of CDN Endpoints](#gathering-the-current-lists-of-cdn-endpoints) should be used for VPN configuration.
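For example, on a Windows client the gathered CDN subnets could be added to the route table in the same way as the Optimize subnets (a sketch; the interface selection, gateway address, and subnet shown are placeholders for your own environment's values):

```powershell
# Sketch: add direct routes for the gathered CDN subnets (placeholder values)
$intIndex = (Get-NetIPInterface -AddressFamily IPv4 |
    Sort-Object InterfaceMetric | Select-Object -First 1).ifIndex
$gateway = "192.168.1.1"             # your local internet gateway
$cdnPrefixes = @("152.199.19.0/24")  # example only; use the current CDN lists

foreach ($prefix in $cdnPrefixes) {
    New-NetRoute -DestinationPrefix $prefix -InterfaceIndex $intIndex -NextHop $gateway
}
```

As with the Optimize subnets, put a policy in place to refresh these routes as the CDN lists change.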
+
+### Stream & Live Events Optimization FAQ
+
+#### Will this send all my traffic to the service direct?
+
+No, this will send only the latency-sensitive streaming traffic for a Live Event or Stream video direct; any other traffic will continue to use the VPN tunnel if it doesn't resolve to the published IPs.
+
+#### Do I need to use the IPv6 Addresses?
+
+No, the connectivity can be IPv4 only if required.
+
+#### Why are these IPs not published in the Microsoft 365 URL/IP service?
+
+Microsoft has strict controls around the format and type of information that is in the service to ensure customers can reliably use the information to implement secure and optimal routing based on endpoint category.
+
+The **Default** endpoint category has no IP information provided for numerous reasons (Default endpoints may be outside of the control of Microsoft, may change too frequently, or may be in blocks shared with other elements). For this reason, Default endpoints are designed to be sent via FQDN to an inspecting proxy, like normal web traffic.
+
+In this case, the above endpoints are CDNs that may be used by non-Microsoft controlled elements other than Live Events or Stream, and thus sending the traffic direct will also mean anything else which resolves to these IPs will also be sent direct from the client. Due to the unique nature of the current global crisis and to meet the short-term needs of our customers, Microsoft has provided the information above for customers to use as they see fit.
+
+Microsoft is working to reconfigure the Live Events endpoints to allow them to be included in the Allow/Optimize endpoint categories in the future.
+
+#### Do I only need to allow access to these IPs?
+
+No, access to all of the **Required** marked endpoints in [the URL/IP service](urls-and-ip-address-ranges.md) is essential for the service to operate. In addition, any Optional endpoint marked for Stream (ID 41-45) is required.
+
+#### What scenarios will this advice cover?
+
+1. Live events produced within the Teams App
+2. Viewing Stream hosted content
+3. External device (encoder) produced events
+
+#### Does this advice cover presenter traffic?
+
+It doesn't; the advice above is purely for those consuming the service. Presenting from within Teams will see the presenter's traffic flowing to the Optimize marked UDP endpoints listed in URL/IP service row 11, with detailed VPN offload advice outlined in the [Implement VPN split tunneling](#implement-vpn-split-tunneling) section.
+
+#### Does this configuration risk traffic other than Live Events &amp; Stream being sent direct?
+
+Yes, due to shared FQDNs used for some elements of the service, this is unavoidable. This traffic is normally sent via a corporate proxy which can apply inspection. In a VPN split tunnel scenario, using both the FQDNs and IPs will scope this risk down to a minimum, but it will still exist. Customers can remove the **\*.azureedge.net** domain from the offload configuration and reduce this risk to a bare minimum but this will remove the offload of Stream-supported Live Events (Teams-scheduled, external encoder events, Yammer events produced in Teams, Yammer-scheduled external encoder events, and Stream scheduled events or on-demand viewing from Stream). Events scheduled and produced in Teams are unaffected.
+ ## HOWTO guides for common VPN platforms
-This section provides links to detailed guides for implementing split tunneling for Office 365 traffic from the most common partners in this space. We'll add additional guides as they become available.
+This section provides links to detailed guides for implementing split tunneling for Microsoft 365 traffic from the most common partners in this space. We'll add additional guides as they become available.
-- **Windows 10 VPN client**: [Optimizing Office 365 traffic for remote workers with the native Windows 10 VPN client](/windows/security/identity-protection/vpn/vpn-office-365-optimization)
+- **Windows 10 VPN client**: [Optimizing Microsoft 365 traffic for remote workers with the native Windows 10 VPN client](/windows/security/identity-protection/vpn/vpn-office-365-optimization)
- **Cisco Anyconnect**: [Optimize Anyconnect Split Tunnel for Office365](https://www.cisco.com/c/en/us/support/docs/security/anyconnect-secure-mobility-client/215343-optimize-anyconnect-split-tunnel-for-off.html)
-- **Palo Alto GlobalProtect**: [Optimizing Office 365 Traffic via VPN Split Tunnel Exclude Access Route](https://live.paloaltonetworks.com/t5/Prisma-Access-Articles/GlobalProtect-Optimizing-Office-365-Traffic/ta-p/319669)
-- **F5 Networks BIG-IP APM**: [Optimizing Office 365 traffic on Remote Access through VPNs when using BIG-IP APM](https://devcentral.f5.com/s/articles/SSL-VPN-Split-Tunneling-and-Office-365)
-- **Citrix Gateway**: [Optimizing Citrix Gateway VPN split tunnel for Office365](https://docs.citrix.com/en-us/citrix-gateway/13/optimizing-citrix-gateway-vpn-split-tunnel-for-office365.html)
-- **Pulse Secure**: [VPN Tunneling: How to configure split tunneling to exclude Office 365 applications](https://kb.pulsesecure.net/articles/Pulse_Secure_Article/KB44417)
-- **Check Point VPN**: [How to configure Split Tunnel for Office 365 and other SaaS Applications](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk167000)
+- **Palo Alto GlobalProtect**: [Optimizing Microsoft 365 Traffic via VPN Split Tunnel Exclude Access Route](https://live.paloaltonetworks.com/t5/Prisma-Access-Articles/GlobalProtect-Optimizing-Office-365-Traffic/ta-p/319669)
+- **F5 Networks BIG-IP APM**: [Optimizing Microsoft 365 traffic on Remote Access through VPNs when using BIG-IP APM](https://devcentral.f5.com/s/articles/SSL-VPN-Split-Tunneling-and-Office-365)
+- **Citrix Gateway**: [Optimizing Citrix Gateway VPN split tunnel for Office365](https://docs.citrix.com/citrix-gateway/13/optimizing-citrix-gateway-vpn-split-tunnel-for-office365.html)
+- **Pulse Secure**: [VPN Tunneling: How to configure split tunneling to exclude Microsoft 365 applications](https://kb.pulsesecure.net/articles/Pulse_Secure_Article/KB44417)
+- **Check Point VPN**: [How to configure Split Tunnel for Microsoft 365 and other SaaS Applications](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk167000)
-## FAQ
+## VPN Split Tunneling FAQ
The Microsoft Security Team has published [Alternative ways for security professionals and IT to achieve modern security controls in today's unique remote work scenarios](https://www.microsoft.com/security/blog/2020/03/26/alternative-security-professionals-it-achieve-modern-security-controls-todays-unique-remote-work-scenarios/), a blog post that outlines key ways security professionals and IT can achieve modern security controls in today's unique remote work scenarios. In addition, below are some of the common customer questions and answers on this subject.

### How do I stop users accessing other tenants I do not trust where they could exfiltrate data?
-The answer is a [feature called tenant restrictions](/azure/active-directory/manage-apps/tenant-restrictions). Authentication traffic is not high volume nor especially latency sensitive so can be sent through the VPN solution to the on-premises proxy where the feature is applied. An allow list of trusted tenants is maintained here and if the client attempts to obtain a token to a tenant that is not trusted, the proxy simply denies the request. If the tenant is trusted, then a token is accessible if the user has the right credentials and rights.
+The answer is a [feature called tenant restrictions](/azure/active-directory/manage-apps/tenant-restrictions). Authentication traffic isn't high volume or especially latency sensitive, so it can be sent through the VPN solution to the on-premises proxy where the feature is applied. An allow list of trusted tenants is maintained there, and if the client attempts to obtain a token for a tenant that isn't trusted, the proxy simply denies the request. If the tenant is trusted, a token is accessible if the user has the right credentials and rights.
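The allow-list check the proxy applies can be sketched in code. Tenant restrictions work by the proxy inserting the `Restrict-Access-To-Tenants` and `Restrict-Access-Context` headers on traffic to the Azure AD login endpoints; this Python sketch is illustrative only (the tenant values are hypothetical placeholders), not the proxy's implementation:

```python
# Sketch of what a tenant-restrictions-aware proxy does: add the restriction
# headers on requests to the Azure AD login endpoints, leave others untouched.
LOGIN_HOSTS = {"login.microsoftonline.com", "login.microsoft.com", "login.windows.net"}

def add_tenant_restriction_headers(host, headers, allowed_tenants, context_tenant_id):
    """Return a copy of headers with tenant-restriction headers added
    when the request targets an Azure AD login endpoint."""
    if host in LOGIN_HOSTS:
        headers = dict(headers)
        headers["Restrict-Access-To-Tenants"] = ",".join(allowed_tenants)
        headers["Restrict-Access-Context"] = context_tenant_id
    return headers

# Hypothetical tenant names for illustration only.
hdrs = add_tenant_restriction_headers(
    "login.microsoftonline.com", {}, ["contoso.com"], "contoso-directory-id")
```

Azure AD then enforces the restriction server-side, so a client that bypasses the proxy for data traffic still cannot obtain a token for an untrusted tenant.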
So even though a user can make a TCP/UDP connection to the Optimize marked endpoints above, without a valid token to access the tenant in question, they simply cannot log in and access/move any data.

### Does this model allow access to consumer services such as personal OneDrive accounts?
-No, it does not, the Office 365 endpoints are not the same as the consumer services (Onedrive.live.com as an example) so the split tunnel will not allow a user to directly access consumer services. Traffic to consumer endpoints will continue to use the VPN tunnel and existing policies will continue to apply.
+No, it doesn't; the Microsoft 365 endpoints aren't the same as the consumer services (Onedrive.live.com as an example), so the split tunnel won't allow a user to directly access consumer services. Traffic to consumer endpoints will continue to use the VPN tunnel and existing policies will continue to apply.
### How do I apply DLP and protect my sensitive data when the traffic no longer flows through my on-premises solution?
-To help you prevent the accidental disclosure of sensitive information, Office 365 has a rich set of [built-in tools](../compliance/information-protection.md). You can use the built-in [DLP capabilities](../compliance/dlp-learn-about-dlp.md) of Teams and SharePoint to detect inappropriately stored or shared sensitive information. If part of your remote work strategy involves a bring-your-own-device (BYOD) policy, you can use [app-based Conditional Access](/azure/active-directory/conditional-access/app-based-conditional-access) to prevent sensitive data from being downloaded to users' personal devices
+To help you prevent the accidental disclosure of sensitive information, Microsoft 365 has a rich set of [built-in tools](../compliance/information-protection.md). You can use the built-in [DLP capabilities](../compliance/dlp-learn-about-dlp.md) of Teams and SharePoint to detect inappropriately stored or shared sensitive information. If part of your remote work strategy involves a bring-your-own-device (BYOD) policy, you can use [app-based Conditional Access](/azure/active-directory/conditional-access/app-based-conditional-access) to prevent sensitive data from being downloaded to users' personal devices.
### How do I evaluate and maintain control of the user's authentication when they are connecting directly?
-In addition to the tenant restrictions feature noted in Q1, [conditional access policies](/azure/active-directory/conditional-access/overview) can be applied to dynamically assess the risk of an authentication request and react appropriately. Microsoft recommends the [Zero Trust model](https://www.microsoft.com/security/zero-trust?rtc=1) is implemented over time and we can use Azure AD conditional access policies to maintain control in a mobile and cloud first world. Conditional access policies can be used to make a real-time decision on whether an authentication request is successful based on numerous factors such as:
+In addition to the tenant restrictions feature noted in Q1, [conditional access policies](/azure/active-directory/conditional-access/overview) can be applied to dynamically assess the risk of an authentication request and react appropriately. Microsoft recommends implementing the [Zero Trust model](https://www.microsoft.com/security/zero-trust?rtc=1) over time, and Azure AD conditional access policies can be used to maintain control in a mobile and cloud-first world. Conditional access policies can be used to make a real-time decision on whether an authentication request is successful based on numerous factors such as:
- Device: is the device known/trusted/domain joined?
- IP: is the authentication request coming from a known corporate IP address, or from a country we don't trust?
We can then trigger policies such as approve, require MFA, or block authentication.
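The decision flow above can be sketched as follows. This is a purely hypothetical illustration of combining the listed signals into a decision; the real policy engine is configured in Azure AD, not written as code:

```python
# Hypothetical sketch only: combine the signals listed above (device trust,
# corporate IP, blocked country) into a conditional-access-style decision.
def evaluate_sign_in(device_compliant, known_corporate_ip, blocked_country):
    """Return 'allow', 'require_mfa', or 'block' for a sign-in attempt."""
    if blocked_country:
        return "block"          # requests from untrusted countries are denied
    if device_compliant and known_corporate_ip:
        return "allow"          # trusted device on a known corporate IP
    return "require_mfa"        # otherwise, step up with MFA
```

Real policies can weigh many more signals (sign-in risk, app sensitivity, session controls); the point is that the decision is made in real time per request.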
### How do I protect against viruses and malware?
-Again, Office 365 provides protection for the Optimize marked endpoints in various layers in the service itself, [outlined in this document](/office365/Enterprise/office-365-malware-and-ransomware-protection). As noted, it is vastly more efficient to provide these security elements in the service itself rather than try to do it in line with devices that may not fully understand the protocols/traffic. By default, SharePoint Online [automatically scans file uploads](../security/office-365-security/virus-detection-in-spo.md) for known malware
+Again, Microsoft 365 provides protection for the Optimize marked endpoints in various layers in the service itself, [outlined in this document](/office365/Enterprise/office-365-malware-and-ransomware-protection). As noted, it's vastly more efficient to provide these security elements in the service itself rather than trying to do it in line with devices that may not fully understand the protocols/traffic. By default, SharePoint Online [automatically scans file uploads](../security/office-365-security/virus-detection-in-spo.md) for known malware.
-For the Exchange endpoints listed above, [Exchange Online Protection](/office365/servicedescriptions/exchange-online-protection-service-description/exchange-online-protection-service-description) and [Microsoft Defender for Office 365](/office365/servicedescriptions/office-365-advanced-threat-protection-service-description) do an excellent job of providing security of the traffic to the service.
+For the Exchange endpoints listed above, [Exchange Online Protection](/office365/servicedescriptions/exchange-online-protection-service-description/exchange-online-protection-service-description) and [Microsoft Defender for Office 365](/office365/servicedescriptions/office-365-advanced-threat-protection-service-description) do an excellent job of providing security of the traffic to the service.
### Can I send more than just the Optimize traffic direct?

Priority should be given to the **Optimize** marked endpoints as these will give maximum benefit for a low level of work. However, if you wish, the **Allow** marked endpoints are also required for the service to work, and IP addresses are provided for those endpoints that can be used if necessary.
-There are also various vendors who offer cloud-based proxy/security solutions called _secure web gateways_ which provide central security, control, and corporate policy application for general web browsing. These solutions can work well in a cloud first world, if highly available, performant, and provisioned close to your users by allowing secure Internet access to be delivered from a cloud-based location close to the user. This removes the need for a hairpin through the VPN/corporate network for general browsing traffic, whilst still allowing central security control.
+There are also various vendors who offer cloud-based proxy/security solutions called _secure web gateways_ which provide central security, control, and corporate policy application for general web browsing. These solutions can work well in a cloud-first world, if highly available, performant, and provisioned close to your users by allowing secure Internet access to be delivered from a cloud-based location close to the user. This removes the need for a hairpin through the VPN/corporate network for general browsing traffic, while still allowing central security control.
-Even with these solutions in place however, Microsoft still strongly recommends that Optimize marked Office 365 traffic is sent direct to the service.
+Even with these solutions in place, however, Microsoft still strongly recommends that Optimize marked Microsoft 365 traffic is sent direct to the service.
For guidance on allowing direct access to an Azure Virtual Network, see [Remote work using Azure VPN Gateway Point-to-site](/azure/vpn-gateway/work-remotely-support).

### Why is port 80 required? Is traffic sent in the clear?
-Port 80 is only used for things like redirect to a port 443 session, no customer data is sent or is accessible over port 80. [Encryption](../compliance/encryption.md) outlines encryption for data in transit and at rest for Office 365, and [Types of traffic](/microsoftteams/microsoft-teams-online-call-flows#types-of-traffic) outlines how we use SRTP to protect Teams media traffic.
+Port 80 is used only for tasks such as redirecting to a port 443 session; no customer data is sent or accessible over port 80. [Encryption](../compliance/encryption.md) outlines encryption for data in transit and at rest for Microsoft 365, and [Types of traffic](/microsoftteams/microsoft-teams-online-call-flows#types-of-traffic) outlines how we use SRTP to protect Teams media traffic.
-### Does this advice apply to users in China using a worldwide instance of Office 365?
+### Does this advice apply to users in China using a worldwide instance of Microsoft 365?
-**No**, it does not. The one caveat to the above advice is users in the PRC who are connecting to a worldwide instance of Office 365. Due to the common occurrence of cross border network congestion in the region, direct Internet egress performance can be variable. Most customers in the region operate using a VPN to bring the traffic into the corporate network and utilize their authorized MPLS circuit or similar to egress outside the country via an optimized path. This is outlined further in the article [Office 365 performance optimization for China users](microsoft-365-networking-china.md).
+**No**, it does not. The one caveat to the above advice is users in the PRC who are connecting to a worldwide instance of Microsoft 365. Due to the common occurrence of cross border network congestion in the region, direct Internet egress performance can be variable. Most customers in the region operate using a VPN to bring the traffic into the corporate network and utilize their authorized MPLS circuit or similar to egress outside the country via an optimized path. This is outlined further in the article [Microsoft 365 performance optimization for China users](microsoft-365-networking-china.md).
### Does split-tunnel configuration work for Teams running in a browser?
-Yes it does, via supported browsers, which are listed in [Get clients for Microsoft Teams](/microsoftteams/get-clients#web-client).
+Yes, with caveats. Most Teams functionality is supported in the browsers listed in [Get clients for Microsoft Teams](/microsoftteams/get-clients#web-client).
+
+In addition, Microsoft Edge **96 and above** supports VPN split tunneling for peer-to-peer traffic by enabling the Edge [WebRtcRespectOsRoutingTableEnabled](/deployedge/microsoft-edge-policies#webrtcrespectosroutingtableenabled) policy. At this time, other browsers may not support VPN split tunneling for peer-to-peer traffic.
-## Related topics
+## Related articles
-[Overview: VPN split tunneling for Office 365](microsoft-365-vpn-split-tunnel.md)
+[Overview: VPN split tunneling for Microsoft 365](microsoft-365-vpn-split-tunnel.md)
-[Office 365 performance optimization for China users](microsoft-365-networking-china.md)
+[Microsoft 365 performance optimization for China users](microsoft-365-networking-china.md)
[Alternative ways for security professionals and IT to achieve modern security controls in today's unique remote work scenarios (Microsoft Security Team blog)](https://www.microsoft.com/security/blog/2020/03/26/alternative-security-professionals-it-achieve-modern-security-controls-todays-unique-remote-work-scenarios/)
[Running on VPN: How Microsoft is keeping its remote workforce connected](https://www.microsoft.com/itshowcase/blog/running-on-vpn-how-microsoft-is-keeping-its-remote-workforce-connected/?elevate-lv)
-[Office 365 Network Connectivity Principles](microsoft-365-network-connectivity-principles.md)
+[Microsoft 365 Network Connectivity Principles](microsoft-365-network-connectivity-principles.md)
-[Assessing Office 365 network connectivity](assessing-network-connectivity.md)
+[Assessing Microsoft 365 network connectivity](assessing-network-connectivity.md)
-[Office 365 network and performance tuning](network-planning-and-performance.md)
+[Microsoft 365 network and performance tuning](network-planning-and-performance.md)
enterprise Minification And Bundling In Sharepoint Online https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/enterprise/minification-and-bundling-in-sharepoint-online.md
Title: "Minification and bundling in SharePoint Online" - Previously updated : 3/1/2017+ Last updated : 1/18/2022 audience: Admin
For JavaScript and CSS files, you can also use an approach called minification,
You can use third-party software such as Web Essentials to bundle CSS and JavaScript files.

> [!IMPORTANT]
-> Web Essentials is a third-party, open-source, community-based project. The software is an extension to Visual Studio 2012 and Visual Studio 2013 and is not supported by Microsoft. To download Web Essentials, visit the website at [https://vswebessentials.com/download](https://go.microsoft.com/fwlink/p/?LinkId=525629).
+> Web Essentials is a third-party, open-source, community-based project. The software is an extension to Visual Studio 2012 and Visual Studio 2013 and is not supported by Microsoft. To download Web Essentials, visit the website at [https://vswebessentials.com/download](https://go.microsoft.com/fwlink/p/?LinkId=525629).
Web Essentials offers two forms of bundling:

- .bundle: for CSS and JavaScript files
-
- .sprite: for images (only available in Visual Studio 2013)
-
+ You can use Web Essentials if you have an existing feature with some branding elements that are referenced inside a custom master page, such as:

![Screenshot of brand element in custom master page.](../media/3a6eba36-973d-482b-8556-a9394b8ba19f.png)
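Conceptually, a bundle is just the concatenation of several source files into one, which cuts the number of HTTP round trips. This minimal Python sketch (illustrative, not how Web Essentials itself works) shows the idea:

```python
# Illustrative sketch of bundling: concatenate several source files into one
# payload so the browser makes a single request instead of many.
def bundle(files):
    """files: mapping of filename -> source text; returns one bundled string
    with a comment marking where each original file begins."""
    parts = []
    for name, text in files.items():
        parts.append(f"/* {name} */")  # provenance comment for debugging
        parts.append(text)
    return "\n".join(parts)

bundled = bundle({"a.js": "var a = 1;", "b.js": "var b = 2;"})
```

Real bundlers add minification, source maps, and dependency ordering on top of this basic concatenation step.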
- **To create a TE000127218 and CSS bundle in Web Essentials**
+### To create a JavaScript and CSS bundle in Web Essentials
1. In Visual Studio, in Solution Explorer, select the files that you want to include in the bundle.
-
-2. Right-click the selected files and then select **Web Essentials** \> **Create JavaScript bundle file** from the context menu. For example:
-
+2. Right-click the selected files and then select **Web Essentials** \> **Create JavaScript bundle file** from the context menu. For example:
+ ![Screenshot showing Web Essentials menu options.](../media/41aac84c-4538-4f78-b454-46e651f868a3.png)

## Viewing the results of bundling JavaScript and CSS files
-When you create a JavaScript and CSS bundle, Web Essentials creates an XML file called a recipe file that identifies the JavaScript and CSS files as well as some other configuration information:
+When you create a JavaScript and CSS bundle, Web Essentials creates an XML file called a recipe file that identifies the JavaScript and CSS files as well as some other configuration information:
![Screenshot of JavaScript and CSS recipe file.](../media/7ba891f8-52d8-467b-a0f6-b062dd1137a4.png)
After bundling, the JavaScript bundle file is reduced significantly from 815KB t
Similar to how you bundle JavaScript and CSS files, you can combine many small icons and other common images into a larger sprite sheet and then use CSS to reveal the individual images. Instead of downloading each individual image, the user's web browser downloads the sprite sheet once and then caches it on the local computer. This improves page load performance by cutting down on the number of downloads and round trips to the web server.
- **To create an image sprite in Web Essentials**
+### To create an image sprite in Web Essentials
1. In Visual Studio, in Solution Explorer, select the files that you want to include in the bundle.
-
-2. Right-click the selected files and then select **Web Essentials** \> **Create image sprite** from the context menu. For example:
-
+2. Right-click the selected files and then select **Web Essentials** \> **Create image sprite** from the context menu. For example:
+ ![Screenshot showing how to create an image sprite.](../media/de0fe741-4ef7-4e3b-bafa-ef9f4822dac6.png)

3. Choose a location to save the sprite file. The .sprite file is an XML file that describes the settings and files in the sprite. The following figures show an example of a sprite PNG file and its corresponding .sprite XML file.
-
+ ![Screenshot of a sprite file.](../media/0876bb2a-d1b9-4169-8e95-9c290d628d90.png)

![Screenshot of sprite XML file.](../media/d1f94776-280d-4d56-abb5-384f145d9989.png)
-
-
enterprise Office 365 Cdn Quickstart https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/enterprise/office-365-cdn-quickstart.md
Title: "Office 365 Content Delivery Network (CDN) Quickstart" - Previously updated : 06/04/2020+ Last updated : 01/13/2022 audience: ITPro
For more detailed information guidance see [Use the Office 365 Content Delivery
You can use the **Page Diagnostics for SharePoint tool** browser extension to easily list assets in your SharePoint Online pages that can be added to a CDN origin.
-The **Page Diagnostics for SharePoint tool** is a browser extension for the new Microsoft Edge (https://www.microsoft.com/edge) and Chrome browsers that analyzes both SharePoint Online modern portal and classic publishing site pages. The tool provides a report for each analyzed page showing how the page performs against a defined set of performance criteria. To install and learn about the Page Diagnostics for SharePoint tool, visit [Use the Page Diagnostics tool for SharePoint Online](./page-diagnostics-for-spo.md).
+The **Page Diagnostics for SharePoint tool** is a browser extension for the new Microsoft Edge (<https://www.microsoft.com/edge>) and Chrome browsers that analyzes both SharePoint Online modern portal and classic publishing site pages. The tool provides a report for each analyzed page showing how the page performs against a defined set of performance criteria. To install and learn about the Page Diagnostics for SharePoint tool, visit [Use the Page Diagnostics tool for SharePoint Online](./page-diagnostics-for-spo.md).
When you run the Page Diagnostics for SharePoint tool on a SharePoint Online page, you can click the **Diagnostic Tests** tab to see a list of assets not being hosted by the CDN. These assets will be listed under the heading **Content Delivery Network (CDN) check** as shown in the screenshot below.
Output of these cmdlets should look like the following:
[Network planning and performance tuning for Office 365](./network-planning-and-performance.md)
-[SharePoint Performance Series - Office 365 CDN video series](https://www.youtube.com/playlist?list=PLR9nK3mnD-OWMfr1BA9mr5oCw2aJXw4WA)
+[SharePoint Performance Series - Office 365 CDN video series](https://www.youtube.com/playlist?list=PLR9nK3mnD-OWMfr1BA9mr5oCw2aJXw4WA)
enterprise Office 365 Network Mac Perf Onboarding Tool https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/enterprise/office-365-network-mac-perf-onboarding-tool.md
Title: "Microsoft 365 network connectivity test tool"
Previously updated : 12/06/2021 Last updated : 1/18/2022 audience: Admin
description: "Microsoft 365 network connectivity test tool"
# Microsoft 365 network connectivity test tool
-The Microsoft 365 network connectivity test tool is located at <https://connectivity.office.com>. It is an adjunct tool to the network assessment and network insights available in the Microsoft 365 admin center under the **Health | Connectivity** menu.
+The Microsoft 365 network connectivity test tool is located at <https://connectivity.office.com>. It's an adjunct tool to the network assessment and network insights available in the Microsoft 365 admin center under the **Health | Connectivity** menu.
> [!IMPORTANT]
-> It is important to sign in to your Microsoft 365 tenant as all test reports are shared with your administrator and uploaded to the tenant while you are signed in.
+> It's important to sign in to your Microsoft 365 tenant as all test reports are shared with your administrator and uploaded to the tenant while you are signed in.
> [!div class="mx-imgBorder"]
> ![Connectivity test tool.](../media/m365-mac-perf/m365-mac-perf-test-tool-page.png)
The Microsoft 365 network connectivity test tool is located at <https://connecti
Network insights in the Microsoft 365 Admin Center are based on regular in-product measurements for your Microsoft 365 tenant, aggregated each day. In comparison, network insights from the Microsoft 365 network connectivity test are run locally in the tool.
-In-product testing is limited, and running tests local to the user collects more data resulting in deeper insights. Network insights in the Microsoft 365 Admin Center will show that there is a networking problem at a specific office location. The Microsoft 365 connectivity test can help to identify the root cause of that problem and provide a targeted performance improvement action.
+In-product testing is limited, and running tests local to the user collects more data resulting in deeper insights. Network insights in the Microsoft 365 Admin Center will show that there's a networking problem at a specific office location. The Microsoft 365 connectivity test can help to identify the root cause of that problem and provide a targeted performance improvement action.
We recommend that these insights be used together where networking quality status can be assessed for each office location in the Microsoft 365 Admin Center and more specifics can be found after deployment of testing based on the Microsoft 365 connectivity test.
We recommend that these insights be used together where networking quality statu
### Office location identification
-When you click the *Run test* button, we show the running test page and identify the office location. You can type in your location by city, state, and country or choose to have it detected for you. If you detect the office location, the tool requests the latitude and longitude from the web browser and limits the accuracy to 300 meters by 300 meters before use. It is not necessary to identify the location more accurately than the building to measure network performance.
+When you click the *Run test* button, we show the running test page and identify the office location. You can type in your location by city, state, and country or choose to have it detected for you. If you detect the office location, the tool requests the latitude and longitude from the web browser and limits the accuracy to 300 meters by 300 meters before use. It's not necessary to identify the location more accurately than the building to measure network performance.
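The "300 meters by 300 meters" limit described above amounts to snapping the browser-reported coordinates to a coarse grid before use. This Python sketch is illustrative only (the grid constant is an approximation: one degree of latitude is roughly 111 km, so ~0.0027 degrees is about 300 m, and longitude spacing actually varies with latitude):

```python
# Illustrative sketch: coarsen a coordinate to roughly a 300 m x 300 m cell
# before it is used, as the tool describes. GRID_DEG is an approximation.
GRID_DEG = 0.0027  # ~300 m of latitude

def coarsen(lat, lon, grid=GRID_DEG):
    """Snap latitude/longitude to the nearest grid cell center."""
    snap = lambda v: round(v / grid) * grid
    return snap(lat), snap(lon)

coarse = coarsen(51.5074, -0.1278)  # precise point -> ~300 m cell
```

Building-level accuracy like this is all the tool needs to measure network performance, which is why the precise coordinates are never used.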
### JavaScript tests
This section shows test results related to your location.
#### Your location
-The user location is detected from the users web browser. It can also be typed in at the user's choice. It is used to identify network distances to specific parts of the enterprise network perimeter. Only the city from this location detection and the distance to other network points are saved in the report.
+The user location is detected from the user's web browser. It can also be typed in at the user's choice. It's used to identify network distances to specific parts of the enterprise network perimeter. Only the city from this location detection and the distance to other network points are saved in the report.
The user office location is shown on the map view.

#### Network egress location (the location where your network connects to your ISP)
-We identify the network egress IP address on the server side. Location databases are used to look up the approximate location for the network egress. These databases typically have an accuracy of about 90% of IP addresses. If the location looked up from the network egress IP address is not accurate, this would lead to a false result. To validate if this error is occurring for a specific IP address, you can use publicly accessible network IP address location web sites to compare against your actual location.
+We identify the network egress IP address on the server side. Location databases are used to look up the approximate location for the network egress. These databases typically have an accuracy of about 90% of IP addresses. If the location looked up from the network egress IP address isn't accurate, this would lead to a false result. To validate if this error is occurring for a specific IP address, you can use publicly accessible network IP address location web sites to compare against your actual location.
#### Your distance from the network egress location
This test detects if you're using a VPN to connect to Microsoft 365. A passing r
#### VPN Split Tunnel
-Each **Optimize** category route for Exchange Online, SharePoint Online, and Microsoft Teams is tested to see if it is tunneled on the VPN. A split out workload avoids the VPN entirely. A tunneled workload is sent over the VPN. A selective tunneled workload has some routes sent over the VPN and some split out. A passing result will show if all workloads are split out or selective tunneled.
+Each **Optimize** category route for Exchange Online, SharePoint Online, and Microsoft Teams is tested to see if it's tunneled on the VPN. A split out workload avoids the VPN entirely. A tunneled workload is sent over the VPN. A selective tunneled workload has some routes sent over the VPN and some split out. A passing result will show if all workloads are split out or selective tunneled.
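Checking whether a route is split out amounts to testing whether a destination IP falls inside one of the Optimize prefixes excluded from the VPN. A minimal sketch using Python's standard `ipaddress` module (the two prefixes shown are Teams Optimize ranges from the URL/IP service; a real check would load the full, current list):

```python
# Sketch: does a destination IP fall inside an Optimize prefix that should
# be excluded from the VPN? Prefixes here are a subset for illustration.
import ipaddress

OPTIMIZE_PREFIXES = [ipaddress.ip_network("13.107.64.0/18"),
                     ipaddress.ip_network("52.112.0.0/14")]

def is_split_out(ip):
    """True if traffic to this IP bypasses the VPN under the split config."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in OPTIMIZE_PREFIXES)
```

A workload is "selective tunneled" when some of its routes pass this check and others do not.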
#### Customers in your metropolitan area with better performance
This network insight is generated on the basis that all users in a city have acc
#### Time to make a DNS request on your network
-This shows the DNS server configured on the client machine that ran the tests. It might be a DNS Recursive Resolver server however this is uncommon. It is more likely to be a DNS forwarder server, which caches DNS results and forwards any uncached DNS requests to another DNS server.
+This shows the DNS server configured on the client machine that ran the tests. It might be a DNS Recursive Resolver server; however, this is uncommon. It's more likely to be a DNS forwarder server, which caches DNS results and forwards any uncached DNS requests to another DNS server.
This is provided for information only and does not contribute to any network insight.
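Conceptually, the measurement behind this value is just timing a name lookup. A hedged Python sketch (not the tool's implementation; the resolver is injectable so the timing logic can be exercised without network access):

```python
# Sketch: time a DNS lookup in milliseconds. The resolver function is
# injectable for testing; the default uses the OS resolver via getaddrinfo.
import socket
import time

def time_dns_lookup(host, resolver=socket.getaddrinfo):
    """Return the wall-clock time of one name resolution, in milliseconds."""
    start = time.perf_counter()
    resolver(host, 443)
    return (time.perf_counter() - start) * 1000.0
```

Against a caching forwarder, a repeat lookup of the same name will typically return much faster than the first, which is the behavior the paragraph describes.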
This section shows test results related to Exchange Online.
#### Exchange service front door location
-The in-use Exchange service front door is identified in the same way that Outlook does this and we measure the network TCP latency from the user location to it. The TCP latency is shown and the in-use Exchange service front door is compared to the list of best service front doors for the current location. This is shown as a network insight if one of the best Exchange service front door(s) is not in use.
+The in-use Exchange service front door is identified in the same way that Outlook does, and we measure the network TCP latency from the user location to it. The TCP latency is shown, and the in-use Exchange service front door is compared to the list of best service front doors for the current location. This is shown as a network insight if one of the best Exchange service front door(s) isn't in use.
Not using one of the best Exchange service front door(s) could be caused by network backhaul before the corporate network egress, in which case we recommend local and direct network egress. It could also be caused by use of a remote DNS recursive resolver server, in which case we recommend aligning the DNS recursive resolver server with the network egress.
This lists the best Exchange service front door locations by city for your locat
#### Service front door recorded in the client DNS
-This shows the DNS name and IP Address of the Exchange service front door server that you were directed to. It is provided for information only and there is no associated network insight.
+This shows the DNS name and IP Address of the Exchange service front door server that you were directed to. It's provided for information only and there's no associated network insight.
### SharePoint Online
We measure the download speed for a 15Mb file from the SharePoint service front
#### Buffer bloat
-During the 15Mb download we measure the TCP latency to the SharePoint service front door. This is the latency under load and it is compared to the latency when not under load. The increase in latency when under load is often attributable to consumer network device buffers being loaded (or bloated). A network insight is shown for any bloat of 1,000 or more.
+During the 15Mb download we measure the TCP latency to the SharePoint service front door. This is the latency under load, and it's compared to the latency when not under load. The increase in latency when under load is often attributable to consumer network device buffers being loaded (or bloated). A network insight is shown for any bloat of 1,000 or more.
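The bloat check described above amounts to a simple comparison of the two latency measurements. A minimal sketch in Python (illustrative only; the function name is an assumption, and the threshold unit is assumed to be milliseconds, which the text doesn't state explicitly):

```python
BLOAT_THRESHOLD_MS = 1000  # insight is shown for bloat of 1,000 or more, per the text


def buffer_bloat_insight(latency_under_load_ms, latency_idle_ms):
    """Return True when the latency increase under load warrants a network insight."""
    bloat_ms = latency_under_load_ms - latency_idle_ms
    return bloat_ms >= BLOAT_THRESHOLD_MS
```

For example, a latency of 1,500 ms under load against 400 ms idle gives a bloat of 1,100 and would trigger the insight.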
#### Service front door recorded in the client DNS
-This shows the DNS name and IP Address of the SharePoint service front door server that you were directed to. It is provided for information only and there is no associated network insight.
+This shows the DNS name and IP Address of the SharePoint service front door server that you were directed to. It's provided for information only and there's no associated network insight.
### Microsoft Teams
Shows the measured UDP jitter, which should be lower than **30ms**.
We test for HTTP connectivity from the user office location to all of the required Microsoft 365 network endpoints. These are published at [https://aka.ms/o365ip](./urls-and-ip-address-ranges.md). A network insight is shown for any required network endpoints that can't be connected to.
-Connectivity may be blocked by a proxy server, a firewall, or another network security device on the enterprise network perimeter. Connectivity to TCP port 80 is tested with an HTTP request and connectivity to TCP port 443 is tested with an HTTPS request. If there is no response the FQDN is marked as a failure. If there is an HTTP response code 407 the FQDN is marked as a failure. If there is an HTTP response code 403 then we check the Server attribute of the response and if it appears to be a proxy server we mark this as a failure. You can simulate the tests we perform with the Windows command-line tool curl.exe.
+Connectivity may be blocked by a proxy server, a firewall, or another network security device on the enterprise network perimeter. Connectivity to TCP port 80 is tested with an HTTP request, and connectivity to TCP port 443 is tested with an HTTPS request. If there's no response, the FQDN is marked as a failure. If there's an HTTP response code 407, the FQDN is marked as a failure. If there's an HTTP response code 403, we check the Server attribute of the response, and if it appears to be a proxy server, we mark this as a failure. You can simulate the tests we perform with the Windows command-line tool curl.exe.
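The failure rules above can be sketched as a small classifier. This is an illustrative Python sketch only; the function name and the substring-based proxy heuristic are assumptions, not the actual test implementation:

```python
def classify_fqdn_result(status_code, server_header=""):
    """Classify a connectivity probe per the rules in the text.

    status_code: HTTP status from the probe, or None if there was no response.
    server_header: value of the Server response header, if any.
    """
    if status_code is None:
        return "failure"  # no response at all
    if status_code == 407:
        return "failure"  # proxy authentication required
    if status_code == 403 and "proxy" in server_header.lower():
        return "failure"  # 403 whose Server attribute appears to be a proxy
    return "pass"
```

A plain 403 from an origin server passes this sketch; only a 403 whose Server header looks like a proxy is flagged, mirroring the Server-attribute check described above.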
We test the SSL certificate at each required Microsoft 365 network endpoint that is in the optimize or allow category as defined at [https://aka.ms/o365ip](./urls-and-ip-address-ranges.md). If any tests do not find a Microsoft SSL certificate, then the encrypted network connection must have been intercepted by an intermediary network device. A network insight is shown on any intercepted encrypted network endpoints.
Where an SSL certificate is found that isn't provided by Microsoft, we show the
#### Network path
-This section shows the results of an ICMP traceroute to the Exchange Online service front door, the SharePoint Online service front door, and the Microsoft Teams service front door. It is provided for information only and there is no associated network insight. There are three traceroutes provided. A traceroute to _outlook.office365.com_, a traceroute to the customers SharePoint front end or to _microsoft.sharepoint.com_ if one was not provided, and a traceroute to _world.tr.teams.microsoft.com_.
+This section shows the results of an ICMP traceroute to the Exchange Online service front door, the SharePoint Online service front door, and the Microsoft Teams service front door. It's provided for information only and there's no associated network insight. There are three traceroutes provided: a traceroute to _outlook.office365.com_, a traceroute to the customer's SharePoint front end (or to _microsoft.sharepoint.com_ if one wasn't provided), and a traceroute to _world.tr.teams.microsoft.com_.
## Connectivity reports
Here are answers to some of our frequently asked questions.
The advanced test client requires .NET Core 3.1 Desktop Runtime. If you run the advanced test client without it installed, you will be directed to [the .NET Core 3.1 installer page](https://dotnet.microsoft.com/download/dotnet-core/3.1). Be sure to install the Desktop Runtime, and not the SDK or the ASP.NET Core Runtime, which are higher up on the page. Administrator permissions on the machine are required to install .NET Core.
-The advanced test client uses SignalR to communicate to the web page. For this you must ensure that TCP port 443 connectivity to connectivity.service.signalr.net is open. This URL is not published in the https://aka.ms/o365ip because that connectivity is not required for an Microsoft 365 client application user.
+The advanced test client uses SignalR to communicate to the web page. For this, you must ensure that TCP port 443 connectivity to **connectivity.service.signalr.net** is open. This URL isn't published at <https://aka.ms/o365ip> because that connectivity isn't required for a Microsoft 365 client application user.
### What is Microsoft 365 service front door?
-The Microsoft 365 service front door is an entry point on Microsoft's global network where Office clients and services terminate their network connection. For an optimal network connection to Microsoft 365, it is recommended that your network connection is terminated into the closest Microsoft 365 front door in your city or metro.
+The Microsoft 365 service front door is an entry point on Microsoft's global network where Office clients and services terminate their network connection. For an optimal network connection to Microsoft 365, it's recommended that your network connection is terminated into the closest Microsoft 365 front door in your city or metro.
> [!NOTE]
> Microsoft 365 service front door has no direct relationship to the **Azure Front Door Service** product available in the Azure marketplace.
includes Office 365 Worldwide Endpoints https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/includes/office-365-worldwide-endpoints.md
-<!--THIS FILE IS AUTOMATICALLY GENERATED. MANUAL CHANGES WILL BE OVERWRITTEN.-->
+<!--THIS FILE IS AUTOMATICALLY GENERATED. MANUAL CHANGES WILL BE OVERWRITTEN.-->
<!--Please contact the Office 365 Endpoints team with any questions.-->
-<!--Worldwide endpoints version 2021102900-->
-<!--File generated 2021-10-30 08:00:02.3971-->
-
-## Exchange Online
+<!--Worldwide endpoints version 2022012800-->
+<!--File generated 2022-01-28 11:00:01.6894-->
+
+## Exchange Online
ID | Category | ER | Addresses | Ports | | | - | --
ID | Category | ER | Addresses | Ports
9 | Allow<BR>Required | Yes | `*.protection.outlook.com`<BR>`40.92.0.0/15, 40.107.0.0/16, 52.100.0.0/14, 52.238.78.88/32, 104.47.0.0/17, 2a01:111:f403::/48` | **TCP:** 443 10 | Allow<BR>Required | Yes | `*.mail.protection.outlook.com`<BR>`40.92.0.0/15, 40.107.0.0/16, 52.100.0.0/14, 104.47.0.0/17, 2a01:111:f400::/48, 2a01:111:f403::/48` | **TCP:** 25 154 | Default<BR>Required | No | `autodiscover.<tenant>.onmicrosoft.com` | **TCP:** 443, 80-
-## SharePoint Online and OneDrive for Business
+
+## SharePoint Online and OneDrive for Business
ID | Category | ER | Addresses | Ports | -- | | - | -
-31 | Optimize<BR>Required | Yes | `<tenant>.sharepoint.com, <tenant>-my.sharepoint.com`<BR>`13.107.136.0/22, 40.108.128.0/17, 52.104.0.0/14, 104.146.128.0/17, 150.171.40.0/22, 2620:1ec:8f8::/46, 2620:1ec:908::/46, 2a01:111:f402::/48` | **TCP:** 443, 80
+-- | -- | | - | -
+31 | Optimize<BR>Required | Yes | `<tenant>.sharepoint.com, <tenant>-my.sharepoint.com`<BR>`13.107.136.0/22, 40.108.128.0/17, 52.104.0.0/14, 104.146.128.0/17, 150.171.40.0/22, 2603:1061:1300::/40, 2620:1ec:8f8::/46, 2620:1ec:908::/46, 2a01:111:f402::/48` | **TCP:** 443, 80
32 | Default<BR>Optional<BR>**Notes:** OneDrive for Business: supportability, telemetry, APIs, and embedded email links | No | `*.log.optimizely.com, ssw.live.com, storage.live.com` | **TCP:** 443 33 | Default<BR>Optional<BR>**Notes:** SharePoint Hybrid Search - Endpoint to SearchContentService where the hybrid crawler feeds documents | No | `*.search.production.apac.trafficmanager.net, *.search.production.emea.trafficmanager.net, *.search.production.us.trafficmanager.net` | **TCP:** 443 35 | Default<BR>Required | No | `*.wns.windows.com, admin.onedrive.com, officeclient.microsoft.com` | **TCP:** 443, 80 36 | Default<BR>Required | No | `g.live.com, oneclient.sfx.ms` | **TCP:** 443, 80 37 | Default<BR>Required | No | `*.sharepointonline.com, spoprod-a.akamaihd.net` | **TCP:** 443, 80
-39 | Default<BR>Required | No | `*.svc.ms, <tenant>-admin.sharepoint.com, <tenant>-files.sharepoint.com, <tenant>-myfiles.sharepoint.com` | **TCP:** 443, 80
-
-## Skype for Business Online and Microsoft Teams
+39 | Default<BR>Required | No | `*.gr.global.aa-rt.sharepoint.com, *.svc.ms, <tenant>-admin.sharepoint.com, <tenant>-files.sharepoint.com, <tenant>-myfiles.sharepoint.com` | **TCP:** 443, 80
+
+## Skype for Business Online and Microsoft Teams
ID | Category | ER | Addresses | Ports | - | | | -
ID | Category | ER | Addresses | Ports
27 | Default<BR>Required | No | `*.mstea.ms, *.secure.skypeassets.com, mlccdnprod.azureedge.net` | **TCP:** 443 29 | Default<BR>Optional<BR>**Notes:** Yammer third-party integration | No | `*.tenor.com` | **TCP:** 443, 80 127 | Default<BR>Required | No | `*.skype.com` | **TCP:** 443, 80-
-## Microsoft 365 Common and Office Online
+
+## Microsoft 365 Common and Office Online
ID | Category | ER | Addresses | Ports | -- | | -- | -
ID | Category | ER | Addresses | Ports
47 | Default<BR>Required | No | `*.cdn.office.net, contentstorage.osi.office.net` | **TCP:** 443 49 | Default<BR>Required | No | `*.onenote.com` | **TCP:** 443 50 | Default<BR>Optional<BR>**Notes:** OneNote notebooks (wildcards) | No | `*.microsoft.com, *.office.net` | **TCP:** 443
-51 | Default<BR>Required | No | `cdn.onenote.net, edunotebookssite-cdn.onenote.net, site-cdn.onenote.net, res-1.cdn.office.net` | **TCP:** 443
+51 | Default<BR>Required | No | `*cdn.onenote.net` | **TCP:** 443
52 | Default<BR>Optional<BR>**Notes:** OneNote 3rd party supporting services and CDNs | No | `ad.atdmt.com, s.ytimg.com, www.youtube.com` | **TCP:** 443 53 | Default<BR>Required | No | `ajax.aspnetcdn.com, apis.live.net, cdn.optimizely.com, officeapps.live.com, www.onedrive.com` | **TCP:** 443 56 | Allow<BR>Required | Yes | `*.msftidentity.com, *.msidentity.com, account.activedirectory.windowsazure.com, accounts.accesscontrol.windows.net, adminwebservice.microsoftonline.com, api.passwordreset.microsoftonline.com, autologon.microsoftazuread-sso.com, becws.microsoftonline.com, clientconfig.microsoftonline-p.net, companymanager.microsoftonline.com, device.login.microsoftonline.com, graph.microsoft.com, graph.windows.net, login.microsoft.com, login.microsoftonline.com, login.microsoftonline-p.com, login.windows.net, logincert.microsoftonline.com, loginex.microsoftonline.com, login-us.microsoftonline.com, nexus.microsoftonline-p.com, passwordreset.microsoftonline.com, provisioningapi.microsoftonline.com`<BR>`20.190.128.0/18, 40.126.0.0/18, 2603:1006:2000::/48, 2603:1007:200::/48, 2603:1016:1400::/48, 2603:1017::/48, 2603:1026:3000::/48, 2603:1027:1::/48, 2603:1036:3000::/48, 2603:1037:1::/48, 2603:1046:2000::/48, 2603:1047:1::/48, 2603:1056:2000::/48, 2603:1057:2::/48` | **TCP:** 443, 80
ID | Category | ER | Addresses | Ports
65 | Allow<BR>Required | Yes | `account.office.net`<BR>`52.108.0.0/14, 2603:1006:1400::/40, 2603:1016:2400::/40, 2603:1026:2400::/40, 2603:1036:2400::/40, 2603:1046:1400::/40, 2603:1056:1400::/40, 2a01:111:200a:a::/64, 2a01:111:2035:8::/64, 2a01:111:f406:1::/64, 2a01:111:f406:c00::/64, 2a01:111:f406:1004::/64, 2a01:111:f406:1805::/64, 2a01:111:f406:3404::/64, 2a01:111:f406:8000::/64, 2a01:111:f406:8801::/64, 2a01:111:f406:a003::/64` | **TCP:** 443, 80 66 | Default<BR>Required | No | `*.portal.cloudappsecurity.com, suite.office.net` | **TCP:** 443 67 | Default<BR>Optional<BR>**Notes:** Security and Compliance Center eDiscovery export | No | `*.blob.core.windows.net` | **TCP:** 443
-68 | Default<BR>Optional<BR>**Notes:** Portal and shared: 3rd party office integration. (including CDNs) | No | `*.helpshift.com, *.localytics.com, connect.facebook.net, firstpartyapps.oaspapps.com, outlook.uservoice.com, prod.firstpartyapps.oaspapps.com.akadns.net, telemetryservice.firstpartyapps.oaspapps.com, wus-firstpartyapps.oaspapps.com` | **TCP:** 443
+68 | Default<BR>Optional<BR>**Notes:** Portal and shared: 3rd party office integration. (including CDNs) | No | `*.helpshift.com, *.localytics.com, connect.facebook.net, firstpartyapps.oaspapps.com, prod.firstpartyapps.oaspapps.com.akadns.net, telemetryservice.firstpartyapps.oaspapps.com, wus-firstpartyapps.oaspapps.com` | **TCP:** 443
69 | Default<BR>Required | No | `*.aria.microsoft.com, *.events.data.microsoft.com` | **TCP:** 443 70 | Default<BR>Required | No | `*.o365weve.com, amp.azure.net, appsforoffice.microsoft.com, assets.onestore.ms, auth.gfx.ms, c1.microsoft.com, dgps.support.microsoft.com, docs.microsoft.com, msdn.microsoft.com, platform.linkedin.com, prod.msocdn.com, shellprod.msocdn.com, support.content.office.net, support.microsoft.com, technet.microsoft.com, videocontent.osi.office.net, videoplayercdn.osi.office.net` | **TCP:** 443 71 | Default<BR>Required | No | `*.office365.com` | **TCP:** 443 72 | Default<BR>Optional<BR>**Notes:** Azure Rights Management (RMS) with Office 2010 clients | No | `*.cloudapp.net` | **TCP:** 443 73 | Default<BR>Required | No | `*.aadrm.com, *.azurerms.com, *.informationprotection.azure.com, ecn.dev.virtualearth.net, informationprotection.hosting.portal.azure.net` | **TCP:** 443
-75 | Default<BR>Optional<BR>**Notes:** Graph.windows.net, Office 365 Management Pack for Operations Manager, SecureScore, Azure AD Device Registration, Forms, StaffHub, Application Insights, captcha services | No | `*.hockeyapp.net, *.sharepointonline.com, dc.services.visualstudio.com, mem.gfx.ms, staffhub.ms, staffhub.uservoice.com` | **TCP:** 443
+75 | Default<BR>Optional<BR>**Notes:** Graph.windows.net, Office 365 Management Pack for Operations Manager, SecureScore, Azure AD Device Registration, Forms, StaffHub, Application Insights, captcha services | No | `*.sharepointonline.com, dc.services.visualstudio.com, mem.gfx.ms, staffhub.ms` | **TCP:** 443
78 | Default<BR>Optional<BR>**Notes:** Some Office 365 features require endpoints within these domains (including CDNs). Many specific FQDNs within these wildcards have been published recently as we work to either remove or better explain our guidance relating to these wildcards. | No | `*.microsoft.com, *.msocdn.com, *.office.net, *.onmicrosoft.com` | **TCP:** 443, 80 79 | Default<BR>Required | No | `o15.officeredir.microsoft.com, officepreviewredir.microsoft.com, officeredir.microsoft.com, r.office.microsoft.com` | **TCP:** 443, 80 83 | Default<BR>Required | No | `activation.sls.microsoft.com` | **TCP:** 443
ID | Category | ER | Addresses | Ports
102 | Default<BR>Optional<BR>**Notes:** Outlook for Android and iOS: Facebook integration | No | `graph.facebook.com, m.facebook.com` | **TCP:** 443 103 | Default<BR>Optional<BR>**Notes:** Outlook for Android and iOS: Evernote integration | No | `www.evernote.com` | **TCP:** 443 105 | Default<BR>Optional<BR>**Notes:** Outlook for Android and iOS: Outlook Privacy | No | `bit.ly, www.acompli.com` | **TCP:** 443
-106 | Default<BR>Optional<BR>**Notes:** Outlook for Android and iOS: User voice integration | No | `by.uservoice.com` | **TCP:** 443
109 | Default<BR>Optional<BR>**Notes:** Outlook for Android and iOS: Flurry log integration | No | `data.flurry.com` | **TCP:** 443 110 | Default<BR>Optional<BR>**Notes:** Outlook for Android and iOS: Adjust integration | No | `app.adjust.com` | **TCP:** 443 113 | Default<BR>Optional<BR>**Notes:** Outlook for Android and iOS: Play Store integration (Android only) | No | `play.google.com` | **TCP:** 443
managed-desktop Deployment Groups https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/managed-desktop/service-description/deployment-groups.md
You might want to assign certain devices for test purposes only, or designate sp
- **Test**: best for devices that are used for testing or users who can tolerate frequent changes and exposure to new features and also provide early feedback. This group receives changes frequently and experiences in this group have a strong effect. The Test group is exempt from any established service level agreements and user support. It's best to move just a few devices at first and then check the user experience. Microsoft Managed Desktop won't automatically assign devices to this group; it will only have devices you specify.
- **First**: ideal for early adopters, volunteer or designated validators, IT Pros, or representatives of business functions, that is, people who can validate changes and provide you feedback on the experience.
+- **Fast**: ideal for representatives of business functions, people who can validate changes prior to broad deployment.
- **Broad** receives changes last. Most of your organization will typically be in this group. You can also specify devices that must be in this group and should only receive changes last because they're doing business critical functions or belong to users in critical roles.
- **Automatic**: select this option when you want Microsoft Managed Desktop to automatically assign devices to one of the other groups. (We won't automatically assign devices to Test.) If you want to release a device that you've previously specified so it can be automatically assigned again, select this option.
-Microsoft Managed Desktop uses some additional groups to control deployments, but you won't be able to assign devices to them. You can, however, move devices from those groups to one of the groups in this article. For more information about how Windows updates are managed in groups, see [How updates are handled in Microsoft Managed Desktop](updates.md).
+For more information about how Windows updates are managed in groups, see [How updates are handled in Microsoft Managed Desktop](updates.md).
If a device is in a group you've specified, **Group assigned by** will say **Admin**. If Microsoft Managed Desktop has assigned the group, it will say **Auto**. While a device is in the process of moving to a group, it will say **Pending**. The **Group** field always shows the group the device is currently in and only updates when a move is complete.
managed-desktop Assign Deployment Group https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/managed-desktop/working-with-managed-desktop/assign-deployment-group.md
# Assign devices to a deployment group
-Microsoft Managed Desktop will assign devices to the various deployment groups, but you can specify or change group a device is assigned to a device by using the Admin portal. You change the assignment after a device is registered or after a user has enrolled.
+Microsoft Managed Desktop will assign devices to the various deployment groups. You can specify or change the group a device is assigned to using the Admin portal. You change the assignment after a device is registered or after a user has enrolled.
> [!IMPORTANT]
> If you change the assignment, policies that are specific to that group will be applied to the device. The change might install the latest version of Windows 10 (including any new feature or quality updates). It's best to move just a few devices at first and then check the resulting user experience. Be aware that certain updates will restart the device. Double-check that you've selected the right devices to assign. It can take up to 24 hours for the assignment to take effect.
-To assign devices to a deployment group, follow these steps. If you want to move separate devices to different groups, repeat these steps for each group.
+**To assign devices to a deployment group:**
-1. In Microsoft Endpoint Manager, select **Devices** in the left pane. In the **Microsoft Managed Desktop** section, select **Devices**.
-2. Select the devices you want to assign. All selected devices will be assigned to the group you specify.
-3. Select **Device actions** from the menu.
-4. Select **Assign device to group**. A fly-in opens.
-5. Use the drop-down menu to select the group to move devices to, and then select **Save**. The **Group assigned by** will change to **Pending**.
+If you want to move separate devices to different groups, repeat these steps for each group.
-When the assignment is complete, **Group assigned by** will change to **Admin** (indicated that you made the change) and the **Group** column will show the new group assignment.
+1. In Microsoft Endpoint Manager, select **Devices** in the left pane.
+1. In the **Microsoft Managed Desktop** section, select **Devices**.
+1. Select the devices you want to assign. All selected devices will be assigned to the group you specify.
+1. Select **Device actions** from the menu.
+1. Select **Assign device to group**. A fly-in opens.
+1. Use the dropdown menu to select the group to move devices to, and then select **Save**. The **Group assigned by** column will change to **Pending**.
+
+When the assignment is complete, the **Group assigned by** column will change to **Admin** (indicating that you made the change) and the **Group** column will show the new group assignment.
> [!NOTE]
> You can't move devices to other groups if they're in the "error" or "pending" registration state.
>
->If a device hasn't been properly removed, it could show a status of "ready." If you move such a device, it's possible that the move won't complete. If you don't see **Group assigned by** change to **Pending** in Step 5, check that the device is available by searching for it in Intune. For more information, see [See device details in Intune](/mem/intune/remote-actions/device-inventory).
+>If a device hasn't been properly removed, it could show a status of "ready." If you move such a device, it's possible that the move won't complete. If you don't see the **Group assigned by** column change to **Pending** in Step 5, check that the device is available by searching for it in Intune. For more information, see [See device details in Intune](/mem/intune/remote-actions/device-inventory).
managed-desktop Change Device Profile https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/managed-desktop/working-with-managed-desktop/change-device-profile.md
audience: Admin
-# Reassign profiles
+# Change the device profile
-You can change the [Device profiles](../service-description/profiles.md) assigned to a device by using the Admin Portal.
+You can change the [Device profiles](../service-description/profiles.md) assigned to a device using the Admin Portal.
-The device profile you select will be applied to all devices you select in the first step. To move separate devices to different profiles, you'll need to repeat this process for each device profile.
+The selected device profile will be applied to all devices you select in the first step.
-1. In Microsoft Endpoint Manager, select **Devices** in the left pane. In the **Microsoft Managed Desktop** section of the menu, select **Devices**.
-2. Select the check boxes for the devices you want to modify.
-3. Select **Change device profile**; a fly-in opens.
-4. Use the drop-down menu to select the new device profile.
-5. Check that the **Reset device** slider is set the way you want.
-6. Select **Change profile**.
+**To change the device profile:**
+1. In Microsoft Endpoint Manager, select **Devices** in the left pane.
+1. In the **Microsoft Managed Desktop** section, select **Devices**.
+1. Select the checkboxes for the devices you want to modify.
+1. Select **Change device profile**. A fly-in opens.
+1. Use the dropdown menu to select the new device profile.
+1. Check that the **Reset device** slider is set the way you want.
+1. Select **Change profile**.
+To move separate devices to different profiles, you'll need to repeat this process for each device profile.
managed-desktop Remove Devices https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/managed-desktop/working-with-managed-desktop/remove-devices.md
When you remove a device, all of the following occur:
- We remove the device from all "Modern Workplace" device groups. - We remove the device from the **Devices** blade in the Admin portal.
-When you remove a device, you have the option to also remove it from Azure Active Directory (Azure AD) and Microsoft Intune.
-
+When you remove a device, you can also remove it from Azure Active Directory (Azure AD) and Microsoft Intune.
+
> [!CAUTION] > Removing the objects related to a device from Azure AD and Microsoft Intune is permanent. If you remove the objects, you won't be able to view or manage the devices from the Intune and Azure portals. The devices won't be able to access their company's corporate resources. Company data might be deleted from them if the devices try to sign in after they're deleted.
+**To remove a device:**
+ 1. In [Microsoft Endpoint Manager](https://endpoint.microsoft.com/), select **Devices** in the left navigation pane.
-2. Look for the **Microsoft Managed Desktop** section of the menu and select **Devices**.
-3. In the Microsoft Managed Desktop Devices workspace, select the devices you want to delete.
+2. In the **Microsoft Managed Desktop** section, select **Devices**.
+3. In the **Microsoft Managed Desktop Devices workspace**, select the devices you want to delete.
4. Select **Device actions**, and then select **Delete Device**, which opens a fly-in to remove the devices.
-5. In the fly-in, review the selected devices and then select **Remove devices**. If you want to also remove the Azure AD and Intune objects at the same time, select the check box. Device removal can take a few minutes to complete.
+5. In the fly-in, review the selected devices and then select **Remove devices**. If you want to also remove the Azure AD and Intune objects at the same time, select the checkbox. Device removal can take a few minutes to complete.
> [!NOTE]
-> You can't remove devices that are in a **pending** registration state.
+> You can't remove devices that are in a **pending** registration state.
managed-desktop Work With App Control https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/managed-desktop/working-with-managed-desktop/work-with-app-control.md
# Work with app control
-Once app control has been deployed in your environment, both you and Microsoft Managed Desktop Operations have ongoing responsibilities. For example, you might want to add a new app in the environment or add (or remove) a trusted signer. To improve security, all apps should be code-signed before you release them to users. An app's publisher details includes information about the signer.
-
+Once app control has been deployed in your environment, both you and Microsoft Managed Desktop Operations have ongoing responsibilities. For example, you might want to add a new app in the environment, or add (or remove) a trusted signer. To improve security, all apps should be code-signed before you release them to users. An app's publisher details includes information about the signer.
## Add a new app
-To add a new app, follow these steps:
+**To add a new app:**
1. Add the app to [Microsoft Intune](/mem/intune/apps/apps-win32-app-management).
-2. Deploy the app to any device in the Test ring.
-3. Test your app according to your standard business processes.
-4. Check Event Viewer under **Application and Services Logs\Microsoft\Windows\AppLocker**, looking for any **8003** or **8006** events. These events indicate that the app would be blocked. For more information about all App Locker events and their meanings, see [Using Event Viewer with AppLocker](/windows/security/threat-protection/windows-defender-application-control/applocker/using-event-viewer-with-applocker).
-5. If you find any of these events, open a signer request with Microsoft Managed Desktop Operations.
+1. Deploy the app to any device in the Test ring.
+1. Test your app according to your standard business processes.
+1. Check the Event Viewer under **Application and Services Logs\Microsoft\Windows\AppLocker**. Look for any **8003** or **8006** events. These events indicate that the app would be blocked. For more information about all App Locker events and their meanings, see [Using Event Viewer with AppLocker](/windows/security/threat-protection/windows-defender-application-control/applocker/using-event-viewer-with-applocker).
+1. If you find any of these events, open a signer request with Microsoft Managed Desktop Operations.
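The event check in the steps above can be sketched as a filter over exported event records. This is an illustrative Python sketch only; it assumes events have already been exported as `(id, message)` pairs, which is not part of the actual tooling:

```python
BLOCK_EVENT_IDS = {8003, 8006}  # AppLocker events indicating the app would be blocked, per the text


def find_blocking_events(events):
    """Filter exported Event Viewer records down to the AppLocker
    events that indicate the app would be blocked."""
    return [(eid, msg) for eid, msg in events if eid in BLOCK_EVENT_IDS]
```

Any events this filter returns would, per the steps above, justify opening a signer request with Microsoft Managed Desktop Operations.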
## Add (or remove) a trusted signer
-When you open a signer request, you'll need to provide some important publisher details first. Then follow these steps:
+When you open a signer request, you'll need to provide some important publisher details first.
+
+**To add (or remove) a trusted signer:**
1. [Gather publisher details](#gather-publisher-details).
-2. Open a ticket with Microsoft Managed Desktop Operations to request the signer rule and include following details:
- - Application name
- - Application version
- - Description
+1. Open a ticket with Microsoft Managed Desktop Operations to request the signer rule and include the following details:
+
+ - Application name
+ - Application version
+ - Description
- Change type ("add" or "remove")
    - Publisher details (for example: "O=<publisher name>,L=<location>,S=State,C=Country")
+   - Publisher details (for example: "O=<publisher name>,L=<location>,S=State,C=Country")
> [!NOTE]
-> To remove trust for an app, follow the same steps, but set **Change type** to *remove*.
+> To remove trust for an app, follow the same steps, but set the **Change type** to *remove*.
Operations will progressively deploy policies to deployment groups following this schedule:

|Deployment group |Policy type |Timing |
|---|---|---|
|Test | Audit | Day 0 |
|Fast | Enforced | Day 2 |
|Broad | Enforced | Day 3 |
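The day offsets in the schedule above translate directly into calendar dates once a rollout starts. A small sketch (group names and offsets taken from the documented table; `rollout` is a hypothetical helper):

```python
from datetime import date, timedelta

# Deployment schedule as documented: (policy type, day offset from rollout start).
SCHEDULE = {
    "Test": ("Audit", 0),
    "Fast": ("Enforced", 2),
    "Broad": ("Enforced", 3),
}

def rollout(group, start):
    """Return (policy type, calendar date) for a deployment group."""
    policy, offset = SCHEDULE[group]
    return policy, start + timedelta(days=offset)
```

For example, a rollout starting Monday reaches the Broad group in enforced mode on Thursday.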
-You can pause or roll back the deployment at any time during the rollout. To do this, open another service request with Operations.
+You can pause or roll back the deployment at any time during the rollout. To pause or roll back, open another support request with Microsoft Managed Desktop Operations.
> [!NOTE]
> If you pause the release of a signer rule, that rule must be either rolled back or completed before another rollout can start.

## Gather publisher details
-To access the publisher data for an app, follow these steps:
-
-1. Find a Microsoft Managed Desktop device in the Test ring that has an Audit Mode policy applied.
-2. Attempt to install the app on the device.
-3. Open Event Viewer on that device.
-4. In Event Viewer, navigate to **Application and Services Logs\Microsoft\Windows**, and then select **AppLocker**.
-5. Find any **8003** or **8006** event, and then copy information from the event:
- - Application name
- - Application version
- - Description
-   - Publisher details (for example: “O=<publisher name>, L=<location>, S=State, C=Country”)
+**To access the publisher data for an app:**
+
+1. Find a Microsoft Managed Desktop device in the Test ring that has an Audit Mode policy applied.
+1. Attempt to install the app on the device.
+1. Open the Event Viewer on that device.
+1. In the Event Viewer, navigate to **Application and Services Logs\Microsoft\Windows**, and then select **AppLocker**.
+1. Find any **8003** or **8006** event, and then copy information from the event:
+
+ - Application name
+ - Application version
+ - Description
+   - Publisher details (for example: “O=<publisher name>, L=<location>, S=State, C=Country”)
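The publisher details copied from the event must follow the `O=…, L=…, S=…, C=…` layout shown in the example before being pasted into the signer request. A minimal sketch that assembles the string from the four gathered values (the helper name is hypothetical):

```python
def format_publisher_details(org, location, state, country):
    """Assemble the publisher-details string expected in a signer request.

    Fields map to the documented example:
    O=<publisher name>, L=<location>, S=State, C=Country.
    """
    return f"O={org}, L={location}, S={state}, C={country}"

# Example with placeholder values:
details = format_publisher_details("Contoso Ltd", "Redmond", "Washington", "US")
# details == "O=Contoso Ltd, L=Redmond, S=Washington, C=US"
```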
security Configure Server Endpoints https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/security/defender-endpoint/configure-server-endpoints.md
You'll need to complete the following general steps to successfully onboard servers.
-### New functionality in the modern unified solution for Windows Server 2012 R2 and 2016 Preview
+### New Windows Server 2012 R2 and 2016 functionality in the modern unified solution (Preview)
Previous implementation of onboarding Windows Server 2012 R2 and Windows Server 2016 required the use of Microsoft Monitoring Agent (MMA).
security Gov https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/security/defender-endpoint/gov.md
iOS|![No.](images/svg/check-no.svg) In development|![No](images/svg/check-no.svg
> [!NOTE]
> <sup>1</sup> The patch must be deployed prior to device onboarding in order to configure Defender for Endpoint to the correct environment.
>
-> <sup>2</sup> Learn about the [unified modern solution for Windows 2016 and 2012 R2](configure-server-endpoints.md#new-functionality-in-the-modern-unified-solution-for-windows-server-2012-r2-and-2016-preview). If you have previously onboarded your servers using MMA, follow the guidance provided in [Server migration](server-migration.md) to migrate to the new solution.
+> <sup>2</sup> Learn about the [unified modern solution for Windows 2016 and 2012 R2](configure-server-endpoints.md#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution-preview). If you have previously onboarded your servers using MMA, follow the guidance provided in [Server migration](server-migration.md) to migrate to the new solution.
> > <sup>3</sup> When using [Microsoft Monitoring Agent](onboard-downlevel.md#install-and-configure-microsoft-monitoring-agent-mma) you'll need to choose "Azure US Government" under "Azure Cloud" if using the [setup wizard](/azure/log-analytics/log-analytics-windows-agents#install-agent-using-setup-wizard), or if using a [command line](/azure/log-analytics/log-analytics-windows-agents#install-agent-using-command-line) or a [script](/azure/log-analytics/log-analytics-windows-agents#install-agent-using-dsc-in-azure-automation) - set the "OPINSIGHTS_WORKSPACE_AZURE_CLOUD_TYPE" parameter to 1. <br /> The minimum MMA supported version is 10.20.18029 (March 2020).
security Prevent Changes To Security Settings With Tamper Protection https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/security/defender-endpoint/prevent-changes-to-security-settings-with-tamper-protection.md
# Protect security settings with tamper protection

**Applies to:**
+
+- [Microsoft Defender for Endpoint Plan 1](https://go.microsoft.com/fwlink/p/?linkid=2154037)
- [Microsoft Defender for Endpoint Plan 2](https://go.microsoft.com/fwlink/p/?linkid=2154037)

Tamper protection is available for devices that are running one of the following versions of Windows:
security Run Analyzer Windows https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/security/defender-endpoint/run-analyzer-windows.md
ms.technology: m365d
In addition to the above, there is also an option to [collect the analyzer support logs using live response](troubleshoot-collect-support-log.md).

> [!NOTE]
-> On Windows 10/11, Windows Server 2019/2022, or Windows Server 2012R2/2016 with the [modern unified solution](configure-server-endpoints.md#new-functionality-in-the-modern-unified-solution-for-windows-server-2012-r2-and-2016-preview) installed, the client analyzer script calls into an executable file called `MDEClientAnalyzer.exe` to run the connectivity tests to cloud service URLs.
+> On Windows 10/11, Windows Server 2019/2022, or Windows Server 2012 R2/2016 with the [modern unified solution](configure-server-endpoints.md#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution-preview) installed, the client analyzer script calls into an executable file called `MDEClientAnalyzer.exe` to run the connectivity tests to cloud service URLs.
> > On Windows 8.1, Windows Server 2016 or any previous OS edition where Microsoft Monitoring Agent (MMA) is used for onboarding, the client analyzer script calls into an executable file called `MDEClientAnalyzerPreviousVersion.exe` to run connectivity tests for Command and Control (CnC) URLs while also calling into Microsoft Monitoring Agent connectivity tool `TestCloudConnection.exe` for Cyber Data channel URLs.