Category | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
admin | Whats New In Preview | https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/admin/whats-new-in-preview.md | f1.keywords: Previously updated : 05/03/2023 Last updated : 06/05/2024 audience: Admin +- must-keep search.appverid: - MET150 - MOE150 You now have an option to sign up for email notifications about Windows known issues - Each admin account can specify up to two email addresses under their email preferences - Windows versions to be notified about - When a single known issue affects multiple versions of Windows, you'll receive only one email notification, even if you've selected notifications for multiple versions. Duplicate emails won't be sent.-4. Select **Save** when you're finished specifying email addresses and Windows versions. It may take up to 8 hours for these changes to take effect. +4. Select **Save** when you're finished specifying email addresses and Windows versions. It might take up to 8 hours for these changes to take effect. For more information, see [How to check Windows release health](/windows/deployment/update/check-release-health). Use this data to decide which help articles and training resources to share with There are a couple of ways to get the Experience insights dashboard page: -- If you're a member of the Global admin or Global reader roles, when you log in to the Microsoft 365 admin center, you'll see a one-time prompt to go to the Experience insights (preview) dashboard. You can access it at any time by selecting Experience insights (preview) from the admin home page.+- If you're a member of the Global admin or Global reader roles, when you sign in to the Microsoft 365 admin center, you'll see a one-time prompt to go to the Experience insights (preview) dashboard. You can access it at any time by selecting Experience insights (preview) from the admin home page. 
- If you're a member of the Reports reader role or the User Experience success manager roles, once you sign into the admin center, you'll automatically go to the Experience insights (preview) dashboard page. You can switch back to the admin center Dashboard view by selecting that option in the top right. To learn more, see [Simplify deployment of Microsoft 365 with new and updated de To empower IT Admins like you, the Net Promoter Score (NPS) survey insights dashboard released the new Sentiment per Topic feature under the sentiment section. -With this new feature, you will be able to identify the sentiment that is associated with each topic available. The sentiment is calculated for each NPS feedback comment and tied to a specific topic. With this new addition, you can discover what trending topics your users are talking about and understand the feeling they're experiencing regarding that specific topic. +With this new feature, you'll be able to identify the sentiment that is associated with each topic available. The sentiment is calculated for each NPS feedback comment and tied to a specific topic. With this new addition, you can discover what trending topics your users are talking about and understand the feeling they're experiencing regarding that specific topic. -With the new sentiment per topic feature on the NPS survey insights dashboard, you will be able to: +With the new sentiment per topic feature on the NPS survey insights dashboard, you'll be able to: - Identify the sentiment for each topic - Choose between three sentiments: Positive, Negative, Other Here are the topics available: - User Education - Value -To access the sentiment per topic insights, sign in to the M365 Admin Center and go to **Health** > **Product feedback** > **NPS survey insights tab**. +To access the sentiment per topic insights, sign in to the Microsoft 365 Admin Center and go to **Health** > **Product feedback** > **NPS survey insights tab**. 
:::image type="content" source="../media/nps-sentimentpertopic.jpg" alt-text="Screenshot: Sentiment per topic feature in the NPS survey insights dashboard" lightbox="../media/nps-sentimentpertopic.jpg"::: For questions or feedback related to NPS survey insights, contact us at Prosight ### Date filter in the Net Promoter Score (NPS) survey insights dashboard -Based on your feedback, we are introducing a new function in the NPS survey insights dashboard that allows Admins like you to filter the Net Promoter Score (NPS) data and insights per date, so that you can access details based on your date range preference. +Based on your feedback, we're introducing a new function in the NPS survey insights dashboard that allows Admins like you to filter the Net Promoter Score (NPS) data and insights per date, so that you can access details based on your date range preference. -With this change, you will be able to look at the NPS survey insights based on the following date ranges: +With this change, you'll be able to look at the NPS survey insights based on the following date ranges: - Past 30 days - Past 90 days |
bookings | Bookings In Outlook | https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/bookings/bookings-in-outlook.md | Title: "Turn off your Personal Bookings page" + Title: "Turn your Personal Bookings page on or off" Previously updated : 03/25/2023 Last updated : 06/05/2024 audience: Admin +- must-keep -description: "Steps to turn off your Personal Bookings page" +description: "Steps to turn your Personal Bookings page on or off" -# Turn off your Bookings page +# Turn your Personal Bookings page on or off - Bookings is a time management solution that provides a simple and powerful scheduling page with seamless integration with Outlook. It lets people schedule a meeting or appointment with you through a booking page that integrates with the free/busy information from your Outlook calendar. You can create custom meeting types to share with others so they can easily schedule time with you based on your availability and preferences. You both get an email confirmation and attendees can update or cancel scheduled meetings with you from your Personal Bookings page. + Bookings is a time management solution that provides a simple and powerful scheduling page with seamless integration with Outlook. It lets people schedule a meeting or appointment with you through a booking page that integrates with the free/busy information from your Outlook calendar. You can create custom meeting types to share with others so they can easily schedule time with you based on your availability and preferences. You both get an email confirmation and attendees can update or cancel scheduled meetings with you from your Personal Bookings page. 
Personal Bookings has two different views: Bookings with me is an ideal solution for enterprise, small business, and users ### End users -For more information on how your users can work with Bookings with me, see the following topics: +For more information on how your users can work with Bookings with me, see the following articles: - [Set up Bookings with me](https://support.microsoft.com/office/bookings-with-me-setup-and-sharing-ad2e28c4-4abd-45c7-9439-27a789d254a2) - [Bookings with me articles](https://support.microsoft.com/office/bookings-with-me-articles-c69c4703-e812-435c-9fc2-d194e10fd205) Personal Bookings is available in the following subscriptions: - Personal Bookings is available for G1, G3, G5 Personal Bookings is on by default for users with these subscriptions. -Personal Bookings needs the **Microsoft Bookings App (service plan)** assigned to users for them to be able to access Bookings. This service plan can be enabled/disabled by tenant admins. So, if **Microsoft Bookings** is not assigned to them, Bookings access will be denied to users even if they are in one of the previously listed SKUs. +Personal Bookings needs the **Microsoft Bookings App (service plan)** assigned to users for them to be able to access Bookings. This service plan can be enabled/disabled by tenant admins. So, if **Microsoft Bookings** isn't assigned to them, Bookings access will be denied to users even if they are in one of the previously listed SKUs. For more information, see the [Bookings with me Microsoft 365 Roadmap item](https://go.microsoft.com/fwlink/?linkid=328648). Use the **Get-OrganizationConfig** and **Set-OrganizationConfig** commands to fi If the command returns "EwsEnabled:" (empty is default), no further changes are needed, proceed to Step 2. If the command returns "EwsEnabled: **$false**" then run the following command and proceed to Step 2.- + ```PowerShell Set-OrganizationConfig -EwsEnabled: $true ``` -3. 
Check your EwsApplicationAccessPolicy by running the following command: +1. Check your EwsApplicationAccessPolicy by running the following command: ```PowerShell Get-OrganizationConfig | Format-List EwsApplicationAccessPolicy,Ews*List Use the **Get-OrganizationConfig** and **Set-OrganizationConfig** commands to fi **C**. If the value of **EwsApplicationAccessPolicy** is empty, all applications are allowed to access EWS and REST. - - To turn off Personal Bookings for your organization set the **EnforceBlockList** policy and add **MicrosoftOWSPersonalBookings** to the block list by running the following command: + - To turn off Personal Bookings for your organization set the **EnforceBlockList** policy and add **MicrosoftOWSPersonalBookings** to the blocklist by running the following command: ```PowerShell Set-OrganizationConfig -EwsApplicationAccessPolicy EnforceBlockList -EwsBlockList @{Add="MicrosoftOWSPersonalBookings"} ```- + - If you want to revert the value of **EwsApplicationAccessPolicy** to empty to allow all applications to access EWS and REST, run the following command: ```PowerShell Set-OrganizationConfig -EwsApplicationAccessPolicy $null ```- + > [!NOTE] > The EwsApplicationAccessPolicy parameter defines which applications other than Entourage, Outlook, and Outlook for Mac can access EWS. Use the **Get-CASMailbox** and **Set-CASMailbox** commands to check user status **A**. If the command returns "**EwsEnabled: $true**", then proceed to Step 2. -2. Check the individual's **EwsApplicationAccessPolicy** by running the following command: +1. 
Check the individual's **EwsApplicationAccessPolicy** by running the following command: ```PowerShell Get-CASMailbox -Identity adam@contoso.com | Format-List EwsApplicationAccessPolicy,Ews*List Use the **Get-CASMailbox** and **Set-CASMailbox** commands to check user status Set-CASMailbox -Identity adam@contoso.com -EwsApplicationAccessPolicy EnforceBlockList -EWSBlockList @{Add="MicrosoftOWSPersonalBookings"} ``` - ## Frequently asked questions ### What is the difference between Bookings and Bookings with me? Bookings with me integrates with your Outlook calendar and can only be used for 1:1 meetings. Bookings with me is intended for scheduling meeting times with individual users. Bookings is intended for managing scheduling for a group of people. -Also, Bookings with me won't create a new mailbox for each Bookings with me page. -Note that Bookings with me and Personal Bookings are terms used interchangeably. +Also, Bookings with me won't create a new mailbox for each Bookings with me page. Note that Bookings with me and Personal Bookings are terms used interchangeably. ### Who can access my public Bookings page? |
bookings | Define Service Offerings | https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/bookings/define-service-offerings.md | Title: "Define your Bookings service offerings" Previously updated : 06/18/2020 Last updated : 06/05/2024 audience: Admin +- must-keep description: "Instructions for entering service offerings information, including service name, description, location, duration, and pricing. You can also tag the employees who are qualified to provide the service." You can also add customized information and URLs to the email confirmation and r ## Steps -Here are the steps to add a new service. ->[!NOTE] +Here are the steps to add a new service. ++> [!NOTE] > Changes to business-related settings, like enabling or disabling one-time passwords (OTP) or sending meeting invites, may take up to 10 minutes to apply. 1. In Microsoft 365, select the App launcher, and then select **Bookings**. -2. Go to **Your calendar** > **Services** and select **Add new service**. +1. Under **Shared booking pages**, either select the page for which you want to create a new service, or create a new booking page and then select it from the available pages. ++1. On the shared booking page, select **Services**, and then select **Add new service**. The number of services should be limited to 50. -3. On the **Basic details** page, add your selections. +1. On the **Basic details** page, add your selections. **Service name**: enter the name of your service. This is the name that will appear in the drop-down menu on the Calendar page. This name will also appear when anyone manually adds an appointment on the Calendar page, and it will appear as a tile on the Self-service page. **Description**: The description you enter is what will appear when a user selects the information icon on the Self-service page. 
- **Default location**: This location is what will be displayed on confirmation and reminder emails for both staff and customers, and it will be displayed on the calendar event created for the booking. + **Location**: This location is what will be displayed on confirmation and reminder emails for both staff and customers, and it will be displayed on the calendar event created for the booking. **Add online meeting**: This setting enables or disables online meetings for each appointment, either via Teams or Skype, depending on which one you configure as the default client for the staff member. Here are the steps to add a new service. > Teams meetings can be joined via the Teams mobile app, the Teams desktop app, in a Web browser, or via the phone dial-in. We strongly recommend enabling Teams as the default online meeting service for your tenant, for the best experience booking virtual appointments. - Disabled:- - Appointments will not contain a meeting option, and all of the meeting-related fields that appear when **Add online meeting** is enabled will not be shown. + - Appointments won't contain a meeting option, and all of the meeting-related fields that appear when **Add online meeting** is enabled won't be shown. **Duration**: This is how long all meetings will be booked for. The time is blocked beginning from the start time, which is selected during booking. The full appointment time will be blocked on the staff's calendars. **Buffer time**: Enabling this setting allows for the addition of extra time to the staff's calendar every time an appointment is booked. - The time will be blocked on the staff's calendar and impact free/busy information. This means if an appointment ends at 3:00 pm and 10 minutes of buffer time has been added to the end of the meeting, the staff's calendar will show as busy and non-bookable until 3:10pm. 
This can be useful if your staff needs time before a meeting to prepare, such as a doctor reviewing a patient's chart, or a financial advisor preparing relevant account information. It can also be useful after a meeting, such as when someone needs time to travel to another location. + The time will be blocked on the staff's calendar and impact free/busy information. This means if an appointment ends at 3:00 pm and 10 minutes of buffer time has been added to the end of the meeting, the staff's calendar will show as busy and nonbookable until 3:10pm. This can be useful if your staff needs time before a meeting to prepare, such as a doctor reviewing a patient's chart, or a financial advisor preparing relevant account information. It can also be useful after a meeting, such as when someone needs time to travel to another location. **Price not set**: Select the price options that will display on the Self-Service page. If **Price not set** is selected, then no price or reference to cost or pricing will appear. Here are the steps to add a new service. :::image type="content" source="media/bookings-maximum-attendees.jpg" alt-text="Example of setting maximum attendees in Bookings"::: - **Let the customer manage their booking**: This setting determines whether or not the customer can modify or cancel their booking, provided it was booked through the Calendar tab on the Bookings Web app. + **Let customers manage their appointment when it was booked by you or your staff on their behalf**: This setting determines whether or not the customer can modify or cancel their booking, provided it was booked through the Calendar tab on the Bookings Web app. - Enabled: Here are the steps to add a new service. We recommend disabling this setting if you want to limit access to the Self-Service page. 
Additionally, we suggest adding text to your confirmation and reminder emails that tells your customers how to make changes to their booking through other means, such as by calling the office or emailing the help desk. -4. On the **Availability options** page, you can see the options you've selected from your **Booking page** for your scheduling policy and availability for your staff. For more information, see [Set your scheduling policies](set-scheduling-policies.md). + **Language**: Select the default language for the booking from the drop-down list. ++1. On the **Availability options** page, you can see the options you've selected from your **Booking page** for your scheduling policy and availability for your staff. For more information, see [Set your scheduling policies](set-scheduling-policies.md). -5. On the revamped **Assign staff** page, you can smoothly assign and remove assigned staff members from a service. There are two more controls added on this page: +1. On the revamped **Assign staff** page, you can smoothly assign and remove assigned staff members from a service. There are two more controls added on this page: - - **Single staff** When this option is selected, the booking will be scheduled with a single staff member. - - **Multiple staff** This feature allows you to create a service with multiple staff members. The booking will be scheduled with all of the assigned staff members of the service. You can refer to this service as N:1 booking service. + - **Assign any of your selected staff for an appointment**: When this option is selected, the booking will be scheduled with a single staff member. + - **Multiple staff**: This feature allows you to create a service with multiple staff members. The booking will be scheduled with all of the assigned staff members of the service. You can refer to this service as N:1 booking service. > [!NOTE] > For Multiple staff, you can only create a booking when all assigned staff members are available to attend. -6. 
**Custom fields** can be useful when collecting information that is needed every time the specific appointment is booked. Examples include insurance provider prior to a clinic visit, loan type for loan consultations, major of study for academic advising, or applicant ID for candidate interviews. These fields will appear on the Booking page when your customers book appointments with you and your staff. + - **Allow customers to choose a particular staff for booking**: This setting enables customers to view and choose from among specific staff members for the booking. + - **Select staff**: You can choose specific staff members for bookings created using this service. - Customer email, phone number, address, and notes are non-removable fields, but you can make them optional by deselecting **Required** beside each field. +1. **Custom fields** can be useful when collecting information that is needed every time the specific appointment is booked. Examples include insurance provider prior to a clinic visit, loan type for loan consultations, major of study for academic advising, or applicant ID for candidate interviews. These fields will appear on the Booking page when your customers book appointments with you and your staff. -7. On the **Notifications** page, you can send SMS messages, set up reminders, and send notifications. + Customer email, phone number, address, and notes are nonremovable fields, but you can make them optional by deselecting **Required** beside each field. ++1. On the **Notifications** page, you can send SMS messages, set up reminders, and send notifications. > [!NOTE] > Text message notifications in Bookings requires a Teams Premium license. **Enable text message notifications for your customer** If selected, SMS messages are sent to the customer, but only if they opt in. - **Reminders and notifications** are sent out to customers, staff members, or both, at a specified time before the appointment. 
Multiple messages can be created for each appointment, according to your preference. + **Email confirmation, Email reminders, and Email follow-up**: These notifications are sent out to customers, staff members, or both, when the booking is created or changed, at a specified time before the appointment, and at a specified time after the appointment. Multiple reminder and follow-up messages can be created for each appointment, according to your preference. :::image type="content" source="media/bookings-remind-confirm-2.png" alt-text="A confirmation email from Bookings."::: Here are the steps to add a new service. :::image type="content" source="media/bookings-text-notifications.jpg" alt-text="A text notification from Bookings"::: -8. There are two more controls available to ease your Service creation journey: - - **Default scheduling options** is on by default. Turn the toggle off if you want to customize how customers book a particular staff member. +1. There are two more controls available to ease your Service creation journey: + - **Default scheduling policy** is on by default. Turn the toggle off if you want to customize how customers book a particular staff member. - **Publishing options** Choose whether to have this service appear as bookable on the Self-Service page, or to make the service bookable only on the Calendar tab within the Bookings Web app.++1. Select **Save changes** to create the new service. |
bookings | Delete Calendar | https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/bookings/delete-calendar.md | Title: "Delete a Shared Booking page" Previously updated : 05/23/2024 Last updated : 06/05/2024 audience: Admin +- must-keep description: "Use the Microsoft 365 admin center or Windows PowerShell to delete Bookings calendars." This article explains how you can delete an unwanted shared booking page. You ca > All shared booking pages that you created in 2017 or before must be deleted using the PowerShell instructions on this topic. All shared booking pages created in 2018 or after can be deleted in the Microsoft 365 admin center. > [!IMPORTANT]-> Only M365 Admins can delete shared booking pages created by end users. We do not support the capability of deleting shared booking pages at a user level. If a user wishes to do so, the request will have to be routed to the respective M365 admin. +> Only Microsoft 365 Admins can delete shared booking pages created by end users. We do not support the capability of deleting shared booking pages at a user level. If a user wishes to do so, the request will have to be routed to the respective Microsoft 365 admin. The shared booking page is where all relevant information is stored, including: |
business-premium | M365bp Secure Copilot | https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/business-premium/m365bp-secure-copilot.md | + + Title: Secure Copilot in Business Standard and Business Premium +description: "Learn how to secure Microsoft Copilot for Microsoft 365 Business Standard and Microsoft 365 Business Premium." +search.appverid: MET150 ++++audience: Admin ++ Last updated : 4/30/2024+ms.localizationpriority: medium ++f1.keywords: NOCSH ++ - essentials-security + - essentials-privacy + - essentials-compliance + - magic-ai-copilot ++++# Secure Microsoft Copilot for Microsoft 365 in Microsoft 365 Business Standard and Microsoft 365 Business Premium ++This article explains the differences in security and compliance controls between Copilot for Microsoft 365 in Microsoft 365 Business Standard and Microsoft 365 Business Premium. This article doesn't attempt to describe the full capabilities of Copilot for Microsoft 365, or the full security and compliance features in Business Standard and Business Premium. ++The following sections contain scenarios to help you better understand how security features in Business Standard and Business Premium can help protect you when you're using Copilot for Microsoft 365. ++## Enable new levels of employee productivity while safeguarding company data and resources ++How can companies enable new levels of employee productivity with tools like Microsoft Copilot for Microsoft 365 while safeguarding company data and resources? ++- Use the following capabilities in **Business Standard** to make sure that unauthorized employees can't use Copilot for Microsoft 365 to gain access to information or confidential data in files that they don't have access to: + - Sign in without a password using multifactor authentication and help ensure only authorized users have access to data. + - Ensure only enrolled, compliant devices can access Microsoft 365 resources with device-based conditional access. 
+ - Wipe all work content, including content generated by Copilot if a device is lost, stolen, or compromised. + - Revoke work access on noncompliant devices except Windows devices ++- **Business Premium** extends protection in the following scenarios: + - Further prevent external bad actors from getting access to Microsoft 365 resources. + - Protect against employee misuse of Copilot for Microsoft 365 by creating conditions to grant internal access. + - Reduce the ability for employees or external parties from inappropriately saving or leaking data outside the organization. ++ The following capabilities in **Business Premium** lead to results in those scenarios: ++ - Use biometrics to sign in to your Microsoft 365 account using Windows Hello for Business (enabled through Windows 11 Pro, which is available to Business Premium licenses). + - Only grant access to Microsoft 365 resources when specific conditions (identity, device, and location) are met using user-based conditional access. + - Require employees or guests to accept the terms of use policy before getting access to resources. + - Restrict the use of the Microsoft 365 apps and Teams (and Copilot in these apps) on personal devices. + - Prevent saving files to unprotected apps. + - Restrict the ability to copy and forward confidential business information with data loss prevention for emails and files. ++## Keep sensitive or personal data from being exposed ++How can companies ensure that sensitive or personal data isn't exposed when using Copilot for Microsoft 365? ++- Use the following capabilities in **Business Standard** to make sure that unauthorized employees can't use Copilot for Microsoft 365 to gain access to information or confidential data in files that they don't have access to: + - Change default sharing options in SharePoint and OneDrive. + - Prohibit Copilot for Microsoft 365 from including sensitive data that users don't have permissions to view in generated responses. 
+ - Exclude sensitive files that users don't have permissions to view from being processed by Copilot. ++- **Business Premium** further extends the protection of sensitive data by requiring sensitivity labels for Microsoft 365 content. These labels help ensure that only employees with specific permissions can use Copilot for Microsoft 365 to access, generate, or share sensitive data. Matching sensitivity labels are automatically applied to any content generated by Copilot for Microsoft 365. ++ The following capabilities in **Business Premium** lead to those protections: ++ - Protect Microsoft 365 data from being accessed by unauthorized users by implementing manual, default, and mandatory content labeling. + - Copilot for Microsoft 365 automatically inherits and applies sensitivity labels that match any queried material or references. ++## Support regulatory compliance and eDiscovery requests ++How can companies monitor interactions with Copilot for Microsoft 365 and support related regulatory compliance or eDiscovery requests? ++- In **Business Standard**, companies can achieve the following results: + - Monitor, search, and export employee interactions with Copilot for Microsoft 365, and any content generated by Copilot for Microsoft 365. + - Define how long content generated by Copilot for Microsoft 365 should be retained within Microsoft 365. ++ The following capabilities in **Business Standard** lead to these results: ++ - Search for and export Copilot interactions by content and keyword search. + - Maintain a log of all Copilot for Microsoft 365 interactions within the organization. + - Apply retention or deletion policies for Copilot interactions and any generated content. ++- **Business Premium** further extends the support for investigations or other legal processes by asserting a legal hold on material associated with Copilot for Microsoft 365. 
++ In **Business Premium**, use eDiscovery (Standard) to search for Copilot interactions by content, keyword search, create cases, assign managers, apply legal hold, and export the search results to investigate incidents and respond to litigation. ++## Appendix ++The available security and compliance features related to Copilot for Microsoft 365 in Business Standard and Business Premium are summarized in the following tables: ++- **Identity and Access Management (Microsoft Entra ID)**: ++ |Scenario|Business<br/>Standard|Business<br/>Premium| + ||::|::| + |Sign in to Copilot for Microsoft 365 with a single identity|✔|✔| + |Enforce MFA when accessing Microsoft 365 to use Copilot|✔|✔| + |Enable end-user password reset, change, and unlock when accessing Microsoft 365|Cloud users|✔| + |Implement Conditional Access policies based on identity, device, and location when accessing Microsoft 365 to use Copilot||✔| + |Enable near real-time access policies enforcement, evaluate critical events, and immediately revoke access to Microsoft 365||✔| + |Require employees or guests to accept terms of use policy before getting access||✔| ++- **Endpoint Management (Basic Mobility and Security or Intune)**: ++ |Scenario|Business<br/>Standard|Business<br/>Premium| + ||::|::| + |Push/deploy Microsoft 365 apps to devices and grant access to Copilot in those apps||✔| + |Manage Microsoft 365 app updates||✔| + |Restrict the use of Microsoft 365 apps and Teams (and Copilot in those apps) on personal devices||✔| + |Prevent saving files (including files generated by Copilot) to unprotected apps||✔| + |Wipe all work content (including content generated by Copilot) if a device is lost, stolen, or compromised|✔|✔| + |Revoke work access on noncompliant devices|iOS, Android|✔| ++- **Data Security and Compliance (Information Protection)**: ++ |Scenario|Business<br/>Standard|Business<br/>Premium| + ||::|::| + |Search for Copilot generated data and interactions with 
eDiscovery capabilities|Search and export results|+ Case management and legal hold| + |Audit logs for Copilot interactions|Audit (Standard)|Audit (Standard)| + |Apply a manual retention policy for Copilot interactions|✔|✔| + |Data loss prevention (DLP) policies to protect sensitive data generated by Copilot and saved in Microsoft 365 locations from exfiltration||Files and email| + |Manually label and protect Microsoft 365 content used by Copilot||Files and email| + |Inherit sensitivity labels and cite sensitivity labels in output and references in Copilot||✔| + |Prohibit Copilot from including sensitive data that users have no extract permissions for|✔|✔| + |Exclude sensitive files that users have no permission to view from being processed by Copilot|✔|✔| |
business-premium | M365bp Security Overview | https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/business-premium/m365bp-security-overview.md | After you have completed the basic setup process for [Microsoft 365 Business Pre 3. [Protect against malware and other threats](m365bp-protect-against-malware-cyberthreats.md). 4. [Secure managed and unmanaged devices](m365bp-managed-unmanaged-devices.md). 5. [Set up information protection capabilities](m365bp-set-up-compliance.md).+6. [Secure Microsoft Copilot for Microsoft 365](m365bp-secure-copilot.md) > [!TIP] > If you're a Microsoft partner, see [Resources for Microsoft partners working with small and medium-sized businesses](../security/defender-business/mdb-partners.md) and download our security guide and checklist! |
enterprise | Internet Sites In Microsoft Azure Using Sharepoint Server 2013 | https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/enterprise/internet-sites-in-microsoft-azure-using-sharepoint-server-2013.md | - Title: "Internet Sites in Microsoft Azure using SharePoint Server 2013"--- Previously updated : 12/15/2017------ scotvorg-- Ent_O365-- CSH- -description: This article provides resources for designing and implementing SharePoint Server 2013 Internet sites hosted in Azure Infrastructure Services. ---# Internet Sites in Microsoft Azure using SharePoint Server 2013 -- Internet sites that use SharePoint Server 2013 benefit by being hosted in Azure Infrastructure Services. This article provides resources for designing and implementing this solution. --## Using Azure Infrastructure Services for Internet sites --Microsoft Azure provides a compelling option for hosting Internet sites based on SharePoint Server 2013. Advantages include the following: --- Focus on developing a great site instead of building infrastructure.--- Flexibility to scale your solution based on demand by scaling out and in.--- Pay only for the resources that you need and use.--- Take advantage of Microsoft Entra ID for customer accounts.--- Add features that are not currently available in Microsoft 365, such as deep reporting and analytics.--## Resources --The following technical illustrations and articles provide information about how to design and implement Internet sites in Azure by using SharePoint Server 2013. 
-- |Resource|More information| -||| -|**SharePoint Server 2013 Internet sites in Azure** <br/> [![Image of Internet sites in Azure using SharePoint.](../media/MS-AZ-SPInternetSites.jpg)](https://go.microsoft.com/fwlink/p/?LinkId=392552) <br/> [PDF](https://go.microsoft.com/fwlink/p/?LinkId=392552) \| [Visio](https://go.microsoft.com/fwlink/p/?LinkId=392551)|This architecture model outlines key design activities and recommended architecture choices for Internet sites in Azure.| -|**Design sample: Internet Sites in Azure for SharePoint Server 2013** <br/> ![Image of the Design sample: Internet sites in Microsoft Azure for SharePoint 2013.](../media/MS-AZ-InternetSitesDesignSample.jpg) <br/> [PDF](https://go.microsoft.com/fwlink/p/?LinkId=392549) \| [Visio](https://go.microsoft.com/fwlink/p/?LinkId=392548)|Use this design sample as a starting point for your own architecture.| -|**[Microsoft Azure Architectures for SharePoint 2013](microsoft-azure-architectures-for-sharepoint-2013.md)** <br/> |This article describes how to design Azure architectures to host SharePoint solutions.| -| --## See Also --[Microsoft 365 solution and architecture center](../solutions/index.yml) |
enterprise | M365 Dr Workload Spo | https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/enterprise/m365-dr-workload-spo.md | You can schedule OneDrive site moves in advance (described later in this article - You can schedule up to 4,000 moves at a time. - As the moves begin, you can schedule more, with a maximum of 4,000 pending moves in the queue at any given time. - The maximum size of a OneDrive that can be moved is 5 terabytes (5 TB).+- The site must contain fewer than 1 million list items. #### **Moving a OneDrive site** You can schedule SharePoint site moves in advance (described later in this artic - You can schedule up to 4,000 moves at a time. - As the moves begin, you can schedule more, with a maximum of 4,000 pending moves in the queue at any given time. - The maximum size of a SharePoint site that can be moved is 5 terabytes (5 TB).+- The site must contain fewer than 1 million list items. To schedule a SharePoint site _Geography_ move for a later time, include one of the following parameters when you start the move: |
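Assuming the SharePoint Online Management Shell, a scheduled _Geography_ move might look like the following sketch. The tenant URLs, UPN, geo code, and dates are placeholders, and since the excerpt above truncates the list of scheduling parameters, the parameter names shown here are assumptions to verify against the cmdlet reference.

```powershell
# Connect to the tenant admin endpoint (SharePoint Online Management Shell).
Connect-SPOService -Url https://contoso-admin.sharepoint.com

# Validate a SharePoint site Geography move without starting it.
Start-SPOSiteContentMove -SourceSiteUrl https://contoso.sharepoint.com/sites/project `
    -DestinationDataLocation EUR -ValidationOnly

# Schedule the move to begin no earlier than a given time.
Start-SPOSiteContentMove -SourceSiteUrl https://contoso.sharepoint.com/sites/project `
    -DestinationDataLocation EUR `
    -PreferredMoveBeginDate (Get-Date "2024-07-01T02:00:00Z")

# A OneDrive move uses the user's UPN instead of a site URL.
Start-SPOUserAndContentMove -UserPrincipalName adele@contoso.com `
    -DestinationDataLocation EUR `
    -PreferredMoveBeginDate (Get-Date "2024-07-01T02:00:00Z")

# Check progress of a scheduled or running site move.
Get-SPOSiteContentMoveState -SourceSiteUrl https://contoso.sharepoint.com/sites/project
```

A validation-only pass is a cheap way to surface blockers (such as the size and list-item limits listed above) before committing a move to the queue.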
enterprise | Microsoft Azure Architectures For Sharepoint 2013 | https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/enterprise/microsoft-azure-architectures-for-sharepoint-2013.md | - Title: "Microsoft Azure Architectures for SharePoint 2013"--- Previously updated : 12/15/2017------ scotvorg-- Ent_O365-- CSH- -description: Learn which types of SharePoint 2013 solutions can be hosted in Microsoft Azure virtual machines, and how to set up Azure to host one. ---# Microsoft Azure Architectures for SharePoint 2013 --Azure is a good environment for hosting a SharePoint Server 2013 solution. In most cases, we recommend Microsoft 365, but a SharePoint Server farm hosted in Azure can be a good option for specific solutions. This article describes how to architect SharePoint solutions so they are a good fit in the Azure platform. The following two specific solutions are used as examples: --- [SharePoint Server 2013 Disaster Recovery in Microsoft Azure](sharepoint-server-2013-disaster-recovery-in-microsoft-azure.md)--- [Internet Sites in Microsoft Azure using SharePoint Server 2013](internet-sites-in-microsoft-azure-using-sharepoint-server-2013.md)--## Recommended SharePoint solutions for Azure Infrastructure Services --Azure infrastructure services is a compelling option for hosting SharePoint solutions. Some solutions are a better fit for this platform than others. The following table shows recommended solutions. --|Solution|Why this solution is recommended for Azure| -||| -|Development and test environments|It's easy to create and manage these environments.| -|Disaster recovery of on-premises SharePoint farms to Azure|**Hosted secondary datacenter** Use Azure instead of investing in a secondary datacenter in a different region. <br/> **Lower-cost disaster-recovery environments** Maintain and pay for fewer resources than an on-premises disaster recovery environment. 
The number of resources depends on the disaster recovery environment you choose: cold standby, warm standby, or hot standby. <br/> **More elastic platform** In the event of a disaster, easily scale out your recovery SharePoint farm to meet load requirements. Scale in when you no longer need the resources. <br/> See [SharePoint Server 2013 Disaster Recovery in Microsoft Azure](sharepoint-server-2013-disaster-recovery-in-microsoft-azure.md).| -|Internet-facing sites that use features and scale not available in Microsoft 365|**Focus your efforts** Concentrate on building a great site rather than building infrastructure. <br/> **Take advantage of elasticity in Azure** Size the farm for the demand by adding new servers, and pay only for resources you need. Dynamic machine allocation (autoscale) is not supported. <br/> **Use Microsoft Entra ID** Take advantage of Microsoft Entra ID for customer accounts. <br/> **Add SharePoint functionality not available in Microsoft 365** Add deep reporting and web analytics. <br/> See [Internet Sites in Microsoft Azure using SharePoint Server 2013](internet-sites-in-microsoft-azure-using-sharepoint-server-2013.md).| -|App farms to support Microsoft 365 or on-premises environments|**Build, test, and host apps** in Azure to support both on-premises and cloud environments. <br/> **Host this role** in Azure instead of buying new hardware for on-premises environments.| --For intranet and collaboration solutions and workloads, consider the following options: --- Determine if Microsoft 365 meets your business requirements or can be part of the solution. Microsoft 365 provides a rich feature set that is always up to date.--- If Microsoft 365 does not meet all your business requirements, consider a standard implementation of SharePoint 2013 on premises from Microsoft Consulting Services (MCS).
A standard architecture can be a quicker, cheaper, and easier solution for you to support than a customized one.--- If a standard implementation doesn't meet your business requirements, consider a customized on-premises solution.--- If using a cloud platform is important for your business requirements, consider a standard or customized implementation of SharePoint 2013 hosted in Azure infrastructure services. SharePoint solutions are much easier to support in Azure than other non-native Microsoft public cloud platforms.--## Before you design the Azure environment --While this article uses example SharePoint topologies, you can use these design concepts with any SharePoint farm topology. Before you design the Azure environment, use the following topology, architecture, capacity, and performance guidance to design the SharePoint farm: --- [Architecture design for SharePoint 2013 IT pros](/SharePoint/technical-reference/technical-diagrams)--- [Plan for performance and capacity management in SharePoint Server 2013](/SharePoint/administration/performance-planning-in-sharepoint-server-2013)--## Determine the Active Directory domain type --Each SharePoint Server farm relies on Active Directory to provide administrative accounts for farm setup. At this time, there are two options for SharePoint solutions in Azure. These are described in the following table. --|Option|Description| -||| -|Dedicated domain|You can deploy a dedicated and isolated Active Directory domain to Azure to support your SharePoint farm. This is a good choice for public-facing Internet sites.| -|Extend the on-premises domain through a cross-premises connection|When you extend the on-premises domain through a cross-premises connection, users access the SharePoint farm via your intranet as if it were hosted on-premises. You can take advantage of your on-premises Active Directory and DNS implementation. 
<br/> A cross-premises connection is required to build a disaster-recovery environment in Azure that your on-premises farm can fail over to.| --This article includes design concepts for extending the on-premises domain through a cross-premises connection. If your solution uses a dedicated domain, you don't need a cross-premises connection. --## Design the virtual network --First you need a virtual network in Azure, which includes subnets on which you will place your virtual machines. The virtual network needs a private IP address space, portions of which you assign to the subnets. --If you are extending your on-premises network to Azure through a cross-premises connection (required for a disaster recovery environment), you must choose a private address space that is not already in use elsewhere in your organization network, which can include your on-premises environment and other Azure virtual networks. --**Figure 1: On-premises environment with a virtual network in Azure**: --![Microsoft Azure virtual network design for a SharePoint solution. One subnet for the Azure gateway. One subnet for the virtual machines.](../media/OPrrasconWA-AZarch.png) --In this diagram: --- A virtual network in Azure is illustrated side by side with the on-premises environment. The two environments are not yet connected by a cross-premises connection, which can be a site-to-site VPN connection or ExpressRoute.--- At this point, the virtual network just includes the subnets and no other architectural elements. One subnet will host the Azure gateway and other subnets host the tiers of the SharePoint farm, with an additional one for Active Directory and DNS.--## Add cross-premises connectivity --The next deployment step is to create the cross-premises connection (if this applies to your solution). For cross-premises connections, an Azure gateway resides in a separate gateway subnet, which you must create and assign an address space. 
--When you plan for a cross-premises connection, you define and create an Azure gateway and connection to an on-premises gateway device. --**Figure 2: Using an Azure gateway and an on-premises gateway device to provide site-to-site connectivity between the on-premises environment and Azure**: --![On-premises environment connected to an Azure virtual network by a cross-premises connection, which can be a site-to-site VPN connection or ExpressRoute.](../media/AZarch-VPNgtwyconnct.png) --In this diagram: --- Adding to the previous diagram, the on-premises environment is connected to the Azure virtual network by a cross-premises connection, which can be a site-to-site VPN connection or ExpressRoute.--- An Azure gateway is on a gateway subnet.--- The on-premises environment includes a gateway device, such as a router or VPN server.--For additional information to plan for and create a cross-premises virtual network, see [Connect an on-premises network to a Microsoft Azure virtual network](connect-an-on-premises-network-to-a-microsoft-azure-virtual-network.md). --## Add Active Directory Domain Services (AD DS) and DNS --For disaster recovery in Azure, you deploy Windows Server AD and DNS in a hybrid scenario where Windows Server AD is deployed both on-premises and on Azure virtual machines. --**Figure 3: Hybrid Active Directory domain configuration**: --![Two virtual machines deployed to the Azure virtual network and the SharePoint Farm subnet are replica domain controllers and DNS servers.](../media/AZarch-HyADdomainConfig.png) --This diagram builds on the previous diagrams by adding two virtual machines to a Windows Server AD and DNS subnet. These virtual machines are replica domain controllers and DNS servers. They are an extension of the on-premises Windows Server AD environment. --The following table provides configuration recommendations for these virtual machines in Azure. 
Use these as a starting point for designing your own environment, even for a dedicated domain where your Azure environment doesn't communicate with your on-premises environment. --|Item|Configuration| -||| -|Virtual machine size in Azure|A1 or A2 size in the Standard tier| -|Operating system|Windows Server 2012 R2| -|Active Directory role|AD DS domain controller designated as a global catalog server. This configuration reduces egress traffic across the cross-premises connection. <br/> In a multidomain environment with high rates of change (this is not common), configure domain controllers on premises not to sync with the global catalog servers in Azure, to reduce replication traffic.| -|DNS role|Install and configure the DNS Server service on the domain controllers.| -|Data disks|Place the Active Directory database, logs, and SYSVOL on additional Azure data disks. Do not place these on the operating system disk or the temporary disks provided by Azure.| -|IP addresses|Use static IP addresses and configure the virtual network to assign these addresses to the virtual machines in the virtual network after the domain controllers have been configured.| --> [!IMPORTANT] -> Before you deploy Active Directory in Azure, read [Guidelines for Deploying Windows Server Active Directory on Azure Virtual Machines](/windows-server/identity/ad-ds/introduction-to-active-directory-domain-services-ad-ds-virtualization-level-100). These guidelines help you determine if a different architecture or different configuration settings are needed for your solution. - -## Add the SharePoint farm --Place the virtual machines of the SharePoint farm in tiers on the appropriate subnets. 
--**Figure 4: Placement of SharePoint virtual machines**: --![Database servers and SharePoint server roles added to the Azure virtual network within the SharePoint Farm subnet.](../media/AZarch-SPVMsinCloudSer.png) --This diagram builds on the previous diagrams by adding the SharePoint farm server roles in their respective tiers. --- Two database virtual machines running SQL Server create the database tier.--- Two virtual machines running SharePoint Server 2013 for each of the following tiers: front end servers, distributed cache servers, and back end servers.--## Design and fine tune server roles for availability sets and fault domains --A fault domain is a grouping of hardware in which role instances run. Virtual machines within the same fault domain can be updated by the Azure infrastructure at the same time. Or, they can fail at the same time because they share the same rack. To avoid the risk of having two virtual machines on the same fault domain, you can configure your virtual machines as an availability set, which ensures that each virtual machine is in a different fault domain. If three virtual machines are configured as an availability set, Azure guarantees that no more than two of the virtual machines are located in the same fault domain. --When you design the Azure architecture for a SharePoint farm, configure identical server roles to be part of an availability set. This ensures that your virtual machines are spread across multiple fault domains. --**Figure 5: Use Azure Availability Sets to provide high availability for the SharePoint farm tiers**: --![Configuration of availability sets in the Azure infrastructure for a SharePoint 2013 solution.](../media/AZenv-WinAzureAvailSetsHA.png) --This diagram calls out the configuration of availability sets within the Azure infrastructure. 
Each of the following roles has its own availability set: --- Active Directory and DNS--- Database--- Back end--- Distributed cache--- Front end--The SharePoint farm might need to be fine-tuned in the Azure platform. To ensure high availability of all components, ensure that the server roles are all configured identically. --Here is an example that shows a standard Internet Sites architecture that meets specific capacity and performance goals. This example is featured in the following architecture model: [Internet Sites Search Architectures for SharePoint Server 2013](https://go.microsoft.com/fwlink/p/?LinkId=261519). --**Figure 6: Planning example for capacity and performance goals in a three-tier farm**: --![Standard SharePoint 2013 Internet Sites architecture with component allocations that meet specific capacity and performance goals.](../media/AZarch-CapPerfexmpArch.png) --In this diagram: --- A three-tier farm is represented: web servers, application servers, and database servers.--- The three web servers are configured identically with multiple components.--- The two database servers are configured identically.--- The three application servers are not configured identically. These server roles require fine-tuning for availability sets in Azure.--Let's look closer at the application server tier. --**Figure 7: Application server tier before fine-tuning**: --![Example SharePoint Server 2013 application server tier before tuning for Microsoft Azure availability sets.](../media/AZarch-AppServtierBefore.png) --In this diagram: --- Three servers are included in the application tier.--- The first server includes four components.--- The second server includes three components.--- The third server includes two components.--You determine the number of components by the performance and capacity targets for the farm. To adapt this architecture for Azure, we'll replicate the four components across all three servers. 
This increases the number of components beyond what is necessary for performance and capacity. The tradeoff is that this design ensures high availability of all four components in the Azure platform when these three virtual machines are assigned to an availability set. --**Figure 8: Application server tier after fine tuning**: --![Example SharePoint Server 2013 application server tier after tuning for Microsoft Azure availability sets.](../media/AZarch-AppServtierAfter.png) --This diagram shows all three application servers configured identically with the same four components. --When we add availability sets to the tiers of the SharePoint farm, the implementation is complete. --**Figure 9: The completed SharePoint farm in Azure infrastructure services**: --![Example SharePoint 2013 farm in Azure infrastructure services with virtual network, cross-premises connectivity, subnets, VMs, and availability sets.](../media/7256292f-bf11-485b-8917-41ba206153ee.png) --This diagram shows the SharePoint farm implemented in Azure infrastructure services, with availability sets to provide fault domains for the servers in each tier. --## See Also --[Microsoft 365 solution and architecture center](../solutions/index.yml) --[Internet Sites in Microsoft Azure using SharePoint Server 2013](internet-sites-in-microsoft-azure-using-sharepoint-server-2013.md) --[SharePoint Server 2013 Disaster Recovery in Microsoft Azure](sharepoint-server-2013-disaster-recovery-in-microsoft-azure.md) |
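The availability-set guidance in the article above can be sketched with Azure PowerShell (the modern Az module). This is an illustrative sketch, not the article's own procedure: the resource group, names, sizes, and domain counts are hypothetical, and the 2013-era classic deployment model this article was written for used different tooling.

```powershell
# One availability set per identically configured SharePoint role, so that
# the VMs in a role are spread across separate fault and update domains.
$rg  = "SP-Farm-RG"    # hypothetical resource group
$loc = "westus"        # hypothetical region

$feSet = New-AzAvailabilitySet -ResourceGroupName $rg -Name "FrontEndSet" `
    -Location $loc -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5 `
    -Sku Aligned

# Each front-end VM is created with the same availability set ID
# (VM configuration is abbreviated here).
$vmConfig = New-AzVMConfig -VMName "SP-FE-01" -VMSize "Standard_D2s_v3" `
    -AvailabilitySetId $feSet.Id
```

Repeating this pattern for the database, back-end, distributed cache, and AD DS/DNS tiers gives the per-role fault isolation that Figures 5 through 9 describe.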
enterprise | Sharepoint Server 2013 Disaster Recovery In Microsoft Azure | https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/enterprise/sharepoint-server-2013-disaster-recovery-in-microsoft-azure.md | - Title: "SharePoint Server 2013 Disaster Recovery in Microsoft Azure"--- Previously updated : 04/17/2018----- MET150--- scotvorg-- Ent_O365-- CSH--- Ent_Deployment-- seo-marvel-apr2020 -description: This article describes how to use Azure to create a disaster-recovery environment for your on-premises SharePoint farm. ---# SharePoint Server 2013 Disaster Recovery in Microsoft Azure -- Using Azure, you can create a disaster-recovery environment for your on-premises SharePoint farm. This article describes how to design and implement this solution. -- **Watch the SharePoint Server 2013 disaster recovery overview video** -> [!VIDEO https://www.microsoft.com/videoplayer/embed/1b73ec8f-29bd-44eb-aa3a-f7932784bfd9?autoplay=false] -- When disaster strikes your SharePoint on-premises environment, your top priority is to get the system running again quickly. Disaster recovery with SharePoint is quicker and easier when you have a backup environment already running in Microsoft Azure. This video explains the main concepts of a SharePoint warm failover environment and complements the full details available in this article. --Use this article with the following solution model: **SharePoint Disaster Recovery in Microsoft Azure**. --[![SharePoint disaster-recovery process to Azure.](../media/SP-DR-Azure.png)](https://go.microsoft.com/fwlink/p/?LinkId=392555) -- [PDF](https://go.microsoft.com/fwlink/p/?LinkId=392555) | [Visio](https://go.microsoft.com/fwlink/p/?LinkId=392554) --## Use Azure Infrastructure Services for disaster recovery --Many organizations do not have a disaster recovery environment for SharePoint, which can be expensive to build and maintain on-premises. 
Azure Infrastructure Services provides compelling options for disaster recovery environments that are more flexible and less expensive than the on-premises alternatives. --The advantages for using Azure Infrastructure Services include: --- **Fewer costly resources** Maintain and pay for fewer resources than on-premises disaster recovery environments. The number of resources depends on which disaster-recovery environment you choose: cold standby, warm standby, or hot standby.--- **Better resource flexibility** In the event of a disaster, easily scale out your recovery SharePoint farm to meet load requirements. Scale in when you no longer need the resources.--- **Lower datacenter commitment** Use Azure Infrastructure Services instead of investing in a secondary datacenter in a different region.--There are less-complex options for organizations just getting started with disaster recovery and advanced options for organizations with high-resilience requirements. The definitions for cold, warm, and hot standby environments are a little different when the environment is hosted on a cloud platform. The following table describes these environments for building a SharePoint recovery farm in Azure. --**Table: Recovery environments** --|Type of recovery environment|Description| -||| -|Hot|A fully sized farm is provisioned, updated, and running on standby.| -|Warm|The farm is built and virtual machines are running and updated. <br/> Recovery includes attaching content databases, provisioning service applications, and crawling content. <br/> The farm can be a smaller version of the production farm and then scaled out to serve the full user base.| -|Cold|The farm is fully built, but the virtual machines are stopped. <br/> Maintaining the environment includes starting the virtual machines from time to time, patching, updating, and verifying the environment. 
<br/> Start the full environment in the event of a disaster.| --It's important to evaluate your organization's Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs). These requirements determine which environment is the most appropriate investment for your organization. --The guidance in this article describes how to implement a warm standby environment. You can also adapt it to a cold standby environment, although you need to follow additional procedures to support this kind of environment. This article does not describe how to implement a hot standby environment. --For more information about disaster recovery solutions, see [High availability and disaster recovery concepts in SharePoint 2013](/SharePoint/administration/high-availability-and-disaster-recovery-concepts) and [Choose a disaster recovery strategy for SharePoint 2013](/SharePoint/administration/plan-for-disaster-recovery). --## Solution description --The warm standby disaster-recovery solution requires the following environment: --- An on-premises SharePoint production farm--- A recovery SharePoint farm in Azure--- A site-to-site VPN connection between the two environments--The following figure illustrates these three elements. --**Figure: Elements of a warm standby solution in Azure** --![Elements of a SharePoint warm standby solution in Azure.](../media/AZarch-AZWarmStndby.png) --SQL Server log shipping with Distributed File System Replication (DFSR) is used to copy database backups and transaction logs to the recovery farm in Azure: --- DFSR transfers logs from the production environment to the recovery environment. In a WAN scenario, DFSR is more efficient than shipping the logs directly to the secondary server in Azure.--- Logs are replayed to the SQL Server in the recovery environment in Azure.--- You don't attach log-shipped SharePoint content databases in the recovery environment until a recovery exercise is performed.--Perform the following steps to recover the farm: --1. 
Stop log shipping. --2. Stop accepting traffic to the primary farm. --3. Replay the final transaction logs. --4. Attach the content databases to the farm. --5. Restore service applications from the replicated services databases. --6. Update Domain Name System (DNS) records to point to the recovery farm. --7. Start a full crawl. --We recommend that you rehearse these steps regularly and document them to help ensure that your live recovery runs smoothly. Attaching content databases and restoring service applications can take some time and typically involves some manual configuration. --After a recovery is performed, this solution provides the items listed in the following table. --**Table: Solution recovery objectives** --|Item|Description| -||| -|Sites and content|Sites and content are available in the recovery environment.| -|A new instance of search|In this warm standby solution, search is not restored from search databases. Search components in the recovery farm are configured as similarly as possible to the production farm. After the sites and content are restored, a full crawl is started to rebuild the search index. You do not need to wait for the crawl to complete to make the sites and content available.| -|Services|Services that store data in databases are restored from the log-shipped databases. Services that do not store data in databases are simply started. <br/> Not all services with databases need to be restored. The following services do not need to be restored from databases and can simply be started after failover: <br/> Usage and Health Data Collection <br/> State service <br/> Word automation <br/> Any other service that doesn't use a database| --You can work with Microsoft Consulting Services (MCS) or a partner to address more-complex recovery objectives. These are summarized in the following table. 
--**Table: Other items that can be addressed by MCS or a partner** --|Item|Description| -||| -|Synchronizing custom farm solutions|Ideally, the recovery farm configuration is identical to the production farm. You can work with a consultant or partner to evaluate whether custom farm solutions are replicated and whether the process is in place for keeping the two environments synchronized.| -|Connections to data sources on-premises|It might not be practical to replicate connections to back-end data systems, such as Business Data Connectivity (BDC) connections and search content sources.| -|Search restore scenarios|Because enterprise search deployments tend to be fairly unique and complex, restoring search from databases requires a greater investment. You can work with a consultant or partner to identify and implement search restore scenarios that your organization might require.| --The guidance provided in this article assumes that the on-premises farm is already designed and deployed. --## Detailed architecture --Ideally, the recovery farm configuration in Azure is identical to the production farm on-premises, including the following: --- The same representation of server roles--- The same configuration of customizations--- The same configuration of search components--The environment in Azure can be a smaller version of the production farm. If you plan to scale out the recovery farm after failover, it's important that each type of server role be initially represented. --Some configurations might not be practical to replicate in the failover environment. Be sure to test the failover procedures and environment to help ensure that the failover farm provides the expected service level. --This solution doesn't prescribe a specific topology for a SharePoint farm. The focus of this solution is to use Azure for the failover farm and to implement log shipping and DFSR between the two environments. 
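The log-shipping flow described above can be sketched with the SqlServer PowerShell module. This is a minimal sketch under assumptions: the server instances, database name, and share paths are placeholders, and a production deployment would normally drive these backups and restores through scheduled SQL Server Agent log-shipping jobs rather than ad hoc commands.

```powershell
# On-premises: back up the transaction log to the local file share
# that DFSR replicates to Azure.
Backup-SqlDatabase -ServerInstance "SPSQL-OnPrem" -Database "WSS_Content" `
    -BackupAction Log -BackupFile "\\fs1\sqllogs\WSS_Content.trn"

# DFSR copies \\fs1\sqllogs to the Azure file share (\\fs2\sqllogs) out of band.

# In Azure: replay the shipped log with NORECOVERY so the database
# stays in a restoring state, ready to accept the next log.
Restore-SqlDatabase -ServerInstance "SPSQL-Azure" -Database "WSS_Content" `
    -RestoreAction Log -BackupFile "\\fs2\sqllogs\WSS_Content.trn" -NoRecovery
```

During a failover, the final log is restored with recovery instead, which brings the content database online so it can be attached to the recovery farm.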
--### Warm standby environments --In a warm standby environment, all virtual machines in the Azure environment are running. The environment is ready for a failover exercise or event. --The following figure illustrates a disaster recovery solution from an on-premises SharePoint farm to an Azure-based SharePoint farm that is configured as a warm standby environment. --**Figure: Topology and key elements of a production farm and a warm standby recovery farm** --![Topology of a SharePoint farm and a warm standby recovery farm.](../media/AZarch-AZWarmStndby.png) --In this diagram: --- Two environments are illustrated side by side: the on-premises SharePoint farm and the warm standby farm in Azure.--- Each environment includes a file share.--- Each farm includes four tiers. To achieve high availability, each tier includes two servers or virtual machines that are configured identically for a specific role, such as front-end services, distributed cache, back-end services, and databases. It isn't important in this illustration to call out specific components. The two farms are configured identically.--- The fourth tier is the database tier. Log shipping is used to copy logs from the secondary database server in the on-premises environment to the file share in the same environment.--- DFSR copies files from the file share in the on-premises environment to the file share in the Azure environment.--- Log shipping replays the logs from the file share in the Azure environment to the primary replica in the SQL Server AlwaysOn availability group in the recovery environment.--### Cold standby environments --In a cold standby environment, most of the SharePoint farm virtual machines can be shut down. (We recommend occasionally starting the virtual machines, such as every two weeks or once a month, so that each virtual machine can sync with the domain.) 
The following virtual machines in the Azure recovery environment must remain running to help ensure continuous operations of log shipping and DFSR:

- The file share

- The primary database server

- At least one virtual machine running Windows Server Active Directory Domain Services and DNS

The following figure shows an Azure failover environment in which the file share virtual machine and the primary SharePoint database virtual machine are running. All other SharePoint virtual machines are stopped. The virtual machine that is running Windows Server Active Directory and DNS is not shown.

**Figure: Cold standby recovery farm with running virtual machines**

![Elements of a SharePoint cold standby solution in Azure.](../media/AZarch-AZColdStndby.png)

After failover to a cold standby environment, all virtual machines are started, and the method to achieve high availability of the database servers must be configured, such as SQL Server AlwaysOn availability groups.

If multiple storage groups are implemented (databases are spread across more than one SQL Server high availability set), the primary database for each storage group must be running to accept the logs associated with its storage group.

### Skills and experience

Multiple technologies are used in this disaster recovery solution. To help ensure that these technologies interact as expected, each component in the on-premises and Azure environments must be installed and configured correctly.
We recommend that the person or team who sets up this solution have a strong working knowledge of and hands-on skills with the technologies described in the following articles:

- [Distributed File System (DFS) Replication Services](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj127250(v=ws.11))

- [Windows Server Failover Clustering (WSFC) with SQL Server](/sql/sql-server/failover-clusters/windows/windows-server-failover-clustering-wsfc-with-sql-server)

- [AlwaysOn Availability Groups (SQL Server)](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server)

- [Back Up and Restore of SQL Server Databases](/sql/relational-databases/backup-restore/back-up-and-restore-of-sql-server-databases)

- [SharePoint Server 2013 installation and farm deployment](/SharePoint/install/installation-and-configuration-overview)

- [Microsoft Azure](/azure/)

Finally, we recommend scripting skills that you can use to automate tasks associated with these technologies. It's possible to use the available user interfaces to complete all the tasks described in this solution. However, a manual approach can be time consuming and error prone and delivers inconsistent results.

In addition to Windows PowerShell, there are also Windows PowerShell libraries for SQL Server, SharePoint Server, and Azure. Don't forget T-SQL, which can also help reduce the time to configure and maintain your disaster recovery environment.

## Disaster recovery roadmap

![Visual representation of the SharePoint disaster recovery roadmap.](../media/Azure-DRroadmap.png)

This roadmap assumes that you already have a SharePoint Server 2013 farm deployed in production.
**Table: Roadmap for disaster recovery**

|Phase|Description|
|---|---|
|Phase 1|Design the disaster recovery environment.|
|Phase 2|Create the Azure virtual network and VPN connection.|
|Phase 3|Deploy Windows Active Directory and Domain Name Services to the Azure virtual network.|
|Phase 4|Deploy the SharePoint recovery farm in Azure.|
|Phase 5|Set up DFSR between the farms.|
|Phase 6|Set up log shipping to the recovery farm.|
|Phase 7|Validate failover and recovery solutions. This includes the following procedures and technologies: <br/> Stop log shipping. <br/> Restore the backups. <br/> Crawl content. <br/> Recover services. <br/> Manage DNS records.|

## Phase 1: Design the disaster recovery environment

Use the guidance in [Microsoft Azure Architectures for SharePoint 2013](microsoft-azure-architectures-for-sharepoint-2013.md) to design the disaster recovery environment, including the SharePoint recovery farm. You can use the graphics in the [SharePoint Disaster Recovery Solution in Azure](https://go.microsoft.com/fwlink/p/?LinkId=392554) Visio file to start the design process. We recommend that you design the entire environment before beginning any work in the Azure environment.

In addition to the guidance provided in [Microsoft Azure Architectures for SharePoint 2013](microsoft-azure-architectures-for-sharepoint-2013.md) for designing the virtual network, VPN connection, Active Directory, and SharePoint farm, be sure to add a file share role to the Azure environment.

To support log shipping in a disaster recovery solution, a file share virtual machine is added to the subnet where the database roles reside. The file share also serves as the third node of a Node Majority for the SQL Server AlwaysOn availability group. This is the recommended configuration for a standard SharePoint farm that uses SQL Server AlwaysOn availability groups.
> [!NOTE]
> It is important to review the prerequisites for a database to participate in a SQL Server AlwaysOn availability group. For more information, see [Prerequisites, Restrictions, and Recommendations for AlwaysOn Availability Groups](/sql/database-engine/availability-groups/windows/prereqs-restrictions-recommendations-always-on-availability).

**Figure: Placement of a file server used for a disaster recovery solution**

![Shows a file share VM added to the same cloud service that contains the SharePoint database server roles.](../media/AZenv-FSforDFSRandWSFC.png)

In this diagram, a file share virtual machine is added to the same subnet in Azure that contains the database server roles. Do not add the file share virtual machine to an availability set with other server roles, such as the SQL Server roles.

If you are concerned about the high availability of the logs, consider taking a different approach by using [SQL Server backup and restore with Azure Blob Storage Service](/sql/relational-databases/backup-restore/sql-server-backup-and-restore-with-microsoft-azure-blob-storage-service). This feature in Azure saves logs directly to a blob storage URL. This solution does not include guidance about using this feature.

When you design the recovery farm, keep in mind that a successful disaster recovery environment accurately reflects the production farm that you want to recover. The size of the recovery farm is not the most important thing in the recovery farm's design, deployment, and testing. Farm scale varies from organization to organization based on business requirements. It might be possible to use a scaled-down farm for a short outage or until performance and capacity demands require you to scale the farm.

Configure the recovery farm as identically as possible to the production farm so that it meets your service level agreement (SLA) requirements and provides the functionality that you need to support your business.
When you design the disaster recovery environment, also look at your change management process for your production environment. We recommend that you extend the change management process to the recovery environment by updating the recovery environment at the same interval as the production environment. As part of the change management process, we recommend maintaining a detailed inventory of your farm configuration, applications, and users.

## Phase 2: Create the Azure virtual network and VPN connection

[Connect an on-premises network to a Microsoft Azure virtual network](connect-an-on-premises-network-to-a-microsoft-azure-virtual-network.md) shows you how to plan and deploy the virtual network in Azure and how to create the VPN connection. Follow the guidance in the topic to complete the following procedures:

- Plan the private IP address space of the virtual network.

- Plan the routing infrastructure changes for the virtual network.

- Plan firewall rules for traffic to and from the on-premises VPN device.

- Create the cross-premises virtual network in Azure.

- Configure routing between your on-premises network and the virtual network.

## Phase 3: Deploy Active Directory and Domain Name Services to the Azure virtual network

This phase includes deploying both Windows Server Active Directory and DNS to the virtual network in a hybrid scenario as described in [Microsoft Azure Architectures for SharePoint 2013](microsoft-azure-architectures-for-sharepoint-2013.md) and as illustrated in the following figure.

**Figure: Hybrid Active Directory domain configuration**

![Two virtual machines deployed to the Azure virtual network and the SharePoint Farm subnet are replica domain controllers and DNS servers.](../media/AZarch-HyADdomainConfig.png)

In the illustration, two virtual machines are deployed to the same subnet. These virtual machines are each hosting two roles: Active Directory and DNS.
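As a rough sketch of what promoting one of these replica domain controllers involves, the following Windows PowerShell commands install the AD DS role and promote a domain-joined virtual machine to a replica domain controller that also hosts DNS. This is a minimal, hypothetical example, not the full procedure: the `contoso.com` domain name, the credential, and the `F:` data-disk paths are placeholder assumptions for your environment.

```powershell
# Hypothetical sketch: promote an Azure VM to a replica domain controller with DNS.
# Assumes the VM is already joined to the contoso.com domain (placeholder name)
# and that a separate data disk (F:) is attached for the AD DS database and logs.
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools

Install-ADDSDomainController `
    -DomainName "contoso.com" `
    -InstallDns `
    -Credential (Get-Credential) `
    -DatabasePath "F:\NTDS" `
    -LogPath "F:\NTDS" `
    -SysvolPath "F:\SYSVOL"
```

Placing the Active Directory database, logs, and SYSVOL on a data disk rather than the operating system disk is consistent with the Azure domain controller guidelines linked in the next paragraph.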
Before deploying Active Directory in Azure, read [Guidelines for Deploying Windows Server Active Directory on Azure Virtual Machines](/windows-server/identity/ad-ds/introduction-to-active-directory-domain-services-ad-ds-virtualization-level-100). These guidelines help you determine whether you need a different architecture or different configuration settings for your solution.

For detailed guidance on setting up a domain controller in Azure, see [Install a Replica Active Directory Domain Controller in Azure Virtual Networks](/windows-server/identity/ad-ds/introduction-to-active-directory-domain-services-ad-ds-virtualization-level-100).

Before this phase, you didn't deploy virtual machines to the virtual network. The virtual machines for hosting Active Directory and DNS are likely not the largest virtual machines you need for the solution. Before you deploy these virtual machines, first create the largest virtual machine that you plan to use in your virtual network. This helps ensure that your solution lands on a tag in Azure that allows the largest size you need. You do not need to configure this virtual machine at this time. Simply create it, and set it aside. If you do not do this, you might run into a limitation when you try to create larger virtual machines later, which was an issue at the time this article was written.

## Phase 4: Deploy the SharePoint recovery farm in Azure

Deploy the SharePoint farm in your virtual network according to your design plans. It might be helpful to review [Planning for SharePoint 2013 on Azure Infrastructure Services](/previous-versions/azure/dn275958(v=azure.100)) before you deploy SharePoint roles in Azure.

Consider the following practices that we learned by building our proof-of-concept environment:

- Create virtual machines by using the Azure portal or PowerShell.

- Azure and Hyper-V do not support dynamic memory.
Be sure this is factored into your performance and capacity plans.

- Restart virtual machines through the Azure interface, not from the virtual machine logon itself. Using the Azure interface works better and is more predictable.

- If you want to shut down a virtual machine to save costs, use the Azure interface. If you shut down from the virtual machine logon, charges continue to accrue.

- Use a naming convention for the virtual machines.

- Pay attention to which datacenter location the virtual machines are being deployed.

- The automatic scaling feature in Azure is not supported for SharePoint roles.

- Do not configure items in the farm that will be restored, such as site collections.

## Phase 5: Set up DFSR between the farms

To set up file replication by using DFSR, use the DFS Management snap-in. However, before the DFSR setup, sign in to your on-premises file server and your Azure file server and enable the service in Windows.

From the Server Manager Dashboard, complete the following steps:

1. Configure the local server.

2. Start the **Add Roles and Features Wizard**.

3. Open the **File and Storage Services** node.

4. Select **DFS Namespaces** and **DFS Replication**.

5. Click **Next** to finish the wizard steps.

The following table provides links to DFSR reference articles and blog posts.
**Table: Reference articles for DFSR**

|Title|Description|
|---|---|
|[Replication](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc770278(v=ws.11))|DFS Management TechNet topic with links for replication|
|[DFS Replication: Survival Guide](https://go.microsoft.com/fwlink/p/?LinkId=392737)|Wiki with links to DFS information|
|[DFS Replication: Frequently Asked Questions](/previous-versions/windows/it-pro/windows-server-2003/cc773238(v=ws.10))|DFS Replication TechNet topic|
|[Jose Barreto's Blog](/archive/blogs/josebda/)|Blog written by a Principal Program Manager on the File Server team at Microsoft|
|[The Storage Team at Microsoft - File Cabinet Blog](https://go.microsoft.com/fwlink/p/?LinkId=392740)|Blog about file services and storage features in Windows Server|

## Phase 6: Set up log shipping to the recovery farm

Log shipping is the critical component for setting up disaster recovery in this environment. You can use log shipping to automatically send transaction log files for databases from a primary database server instance to a secondary database server instance. To set up log shipping, see [Configure log shipping in SharePoint 2013](/sharepoint/administration/configure-log-shipping).

> [!IMPORTANT]
> Log shipping support in SharePoint Server is limited to certain databases. For more information, see [Supported high availability and disaster recovery options for SharePoint databases (SharePoint 2013)](/SharePoint/administration/supported-high-availability-and-disaster-recovery-options-for-sharepoint-databas).

## Phase 7: Validate failover and recovery

The goal of this final phase is to verify that the disaster recovery solution works as planned. To do this, create a failover event that shuts down the production farm and starts up the recovery farm as a replacement. You can start a failover scenario manually or by using scripts.

The first step is to stop incoming user requests for farm services or content.
You can do this by disabling DNS entries or by shutting down the front-end web servers. After the farm is "down," you can fail over to the recovery farm.

### Stop log shipping

You must stop log shipping before farm recovery. Stop log shipping on the secondary server in Azure first, and then stop it on the primary server on-premises. Use the following script to stop log shipping on the secondary server first and then on the primary server. The database names in the script might be different, depending on your environment.

```sql
-- This script removes log shipping from the server.
-- Commands must be executed on the secondary server first and then on the primary server.

SET NOCOUNT ON
DECLARE @PriDB nvarchar(max)
,@SecDB nvarchar(250)
,@PriSrv nvarchar(250)
,@SecSrv nvarchar(250)

Set @PriDB= ''
SET @PriDB = UPPER(@PriDB)
SET @PriDB = REPLACE(@PriDB, ' ', '')
SET @PriDB = '''' + REPLACE(@PriDB, ',', ''', ''') + ''''

Set @SecDB = @PriDB

Exec ( 'Select ''exec master..sp_delete_log_shipping_secondary_database '' + '''''''' + prm.primary_database + ''''''''
from msdb.dbo.log_shipping_monitor_primary prm INNER JOIN msdb.dbo.log_shipping_primary_secondaries sec ON prm.primary_database=sec.secondary_database
where prm.primary_database in ( ' + @PriDB + ' )')

Exec ( 'Select ''exec master..sp_delete_log_shipping_primary_secondary '' + '''''''' + prm.Primary_Database + '''''', '''''' + sec.Secondary_Server + '''''', '''''' + sec.Secondary_database + ''''''''
from msdb.dbo.log_shipping_monitor_primary prm INNER JOIN msdb.dbo.log_shipping_primary_secondaries sec ON prm.primary_database=sec.secondary_database
where prm.primary_database in ( ' + @PriDB + ' )')

Exec ( 'Select ''exec master..sp_delete_log_shipping_primary_database '' + '''''''' + prm.primary_database + ''''''''
from msdb.dbo.log_shipping_monitor_primary prm INNER JOIN msdb.dbo.log_shipping_primary_secondaries sec ON prm.primary_database=sec.secondary_database
where prm.primary_database in ( ' +
@PriDB + ' )')

Exec ( 'Select ''exec master..sp_delete_log_shipping_secondary_primary '' + '''''''' + prm.primary_server + '''''', '''''' + prm.primary_database + ''''''''
from msdb.dbo.log_shipping_monitor_primary prm INNER JOIN msdb.dbo.log_shipping_primary_secondaries sec ON prm.primary_database=sec.secondary_database
where prm.primary_database in ( ' + @PriDB + ' )')
```

### Restore the backups

Backups must be restored in the order in which they were created. Before you can restore a particular transaction log backup, you must first restore the following previous backups without rolling back uncommitted transactions (that is, by using `WITH NORECOVERY`):

- The full database backup and the last differential backup. Restore these backups, if any exist, taken before the particular transaction log backup. Before the most recent full or differential database backup was created, the database was using the full recovery model or bulk-logged recovery model.

- All transaction log backups. Restore any transaction log backups taken after the full database backup or the differential backup (if you restore one) and before the particular transaction log backup. Log backups must be applied in the sequence in which they were created, without any gaps in the log chain.

To recover the content database on the secondary server so that the sites render, remove all database connections before recovery. To restore the database, run the following SQL statement.

```sql
restore database WSS_Content with recovery
```

> [!IMPORTANT]
> When you use T-SQL explicitly, specify either **WITH NORECOVERY** or **WITH RECOVERY** in every RESTORE statement to eliminate ambiguity. This is very important when writing scripts. After the full and differential backups are restored, the transaction logs can be restored in SQL Server Management Studio.
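To illustrate the restore order described above, the following sketch uses `Invoke-Sqlcmd` (from the SQL Server PowerShell module) to apply the full backup and the transaction log backups with `WITH NORECOVERY`, and then to bring the database online with `WITH RECOVERY`. The server name, backup file paths, and number of log backups are placeholders; substitute the values for your own log chain.

```powershell
# Hypothetical restore sequence; server, database, and file names are placeholders.
# Everything is restored WITH NORECOVERY so the log chain can continue,
# and the final statement brings the database online.
Invoke-Sqlcmd -ServerInstance "AZ-SQL-HA1" -Query @"
RESTORE DATABASE WSS_Content FROM DISK = N'F:\Backups\WSS_Content.bak' WITH NORECOVERY;
RESTORE LOG WSS_Content FROM DISK = N'F:\Backups\WSS_Content_1.trn' WITH NORECOVERY;
RESTORE LOG WSS_Content FROM DISK = N'F:\Backups\WSS_Content_2.trn' WITH NORECOVERY;
RESTORE DATABASE WSS_Content WITH RECOVERY;
"@
```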
Also, because log shipping is already stopped, the content database is in a standby state, so you must change the state to full access.

In SQL Server Management Studio, right-click the **WSS_Content** database, point to **Tasks** > **Restore**, and then click **Transaction Log** (if you have not restored the full backup, this option is not available). For more information, see [Restore a Transaction Log Backup (SQL Server)](/sql/relational-databases/backup-restore/restore-a-transaction-log-backup-sql-server).

### Crawl the content source

You must start a full crawl for each content source to restore the Search service. Note that you lose some analytics information from the on-premises farm, such as search recommendations. Before you start the full crawls, use the Windows PowerShell cmdlet **Restore-SPEnterpriseSearchServiceApplication** and specify the log-shipped and replicated Search Administration database, **Search_Service__DB_\<GUID\>**. This cmdlet restores the search configuration, schema, managed properties, rules, and sources and creates a default set of the other components.

To start a full crawl, complete the following steps:

1. In SharePoint 2013 Central Administration, go to **Application Management** > **Service Applications** > **Manage service applications**, and then click the Search service application that you want to crawl.

2. On the **Search Administration** page, click **Content Sources**, point to the content source that you want, click the arrow, and then click **Start Full Crawl**.

### Recover farm services

The following table shows how to recover services that have log-shipped databases, the services that have databases but are not recommended to restore with log shipping, and the services that do not have databases.

> [!IMPORTANT]
> Restoring an on-premises SharePoint database into the Azure environment will not recover any SharePoint services that you did not already install in Azure manually.
**Table: Service application database reference**

|Restore these services from log-shipped databases|These services have databases, but we recommend that you start these services without restoring their databases|These services do not store data in databases; start these services after failover|
|---|---|---|
|Machine Translation Service <br/> Managed Metadata Service <br/> Secure Store Service <br/> User Profile. (Only the Profile and Social Tagging databases are supported. The Synchronization database is not supported.) <br/> Microsoft SharePoint Foundation Subscription Settings Service|Usage and Health Data Collection <br/> State service <br/> Word automation|Excel Services <br/> PerformancePoint Services <br/> PowerPoint Conversion <br/> Visio Graphics Service <br/> Work Management|

The following example shows how to restore the Managed Metadata service from a database.

This example uses the existing Managed_Metadata_DB database. This database is log shipped, but there is no active service application on the secondary farm, so it needs to be connected after the service application is in place.

First, use `New-SPMetadataServiceApplication`, and specify the `DatabaseName` switch with the name of the restored database.

Next, configure the new Managed Metadata Service Application on the secondary server, as follows:

- Name: Managed Metadata Service

- Database server: The database name from the shipped transaction log

- Database name: Managed_Metadata_DB

- Application pool: SharePoint Service Applications

### Manage DNS records

You must manually create DNS records to point to your SharePoint farm.

In most cases where you have multiple front-end web servers, it makes sense to take advantage of the Network Load Balancing feature in Windows Server 2012 or a hardware load balancer to distribute requests among the front-end web servers in your farm.
Network load balancing can also help reduce risk by distributing requests to the other servers if one of your front-end web servers fails.

Typically, when you set up network load balancing, your cluster is assigned a single IP address. You then create a DNS host record in the DNS provider for your network that points to the cluster. (For this project, we put a DNS server in Azure for resiliency in case of an on-premises datacenter failure.) For instance, in DNS Manager in Active Directory, you can create a DNS record called `sharepoint.contoso.com` that points to the IP address for your load-balanced cluster.

For external access to your SharePoint farm, you can create a host record on an external DNS server with the same URL that clients use on your intranet (for example, `https://sharepoint.contoso.com`) that points to an external IP address in your firewall. (A best practice, using this example, is to set up split DNS so that the internal DNS server is authoritative for `contoso.com` and routes requests directly to the SharePoint farm cluster, rather than routing DNS requests to your external DNS server.) You can then map the external IP address to the internal IP address of your on-premises cluster so that clients find the resources they are looking for.

From here, you might run into a couple of different disaster recovery scenarios:

- **Example scenario: The on-premises SharePoint farm is unavailable because of hardware failure in the on-premises SharePoint farm.** In this case, after you have completed the steps for failover to the Azure SharePoint farm, you can configure network load balancing on the recovery SharePoint farm's front-end web servers, the same way you did with the on-premises farm. You can then redirect the host record in your internal DNS provider to point to the recovery farm's cluster IP address. Note that it can take some time before cached DNS records on clients are refreshed and point to the recovery farm.
- **Example scenario: The on-premises datacenter is lost completely.** This scenario might occur due to a natural disaster, such as a fire or flood. In this case, for an enterprise, you would likely have a secondary datacenter hosted in another region as well as your Azure subnet that has its own directory services and DNS. As in the previous disaster scenario, you can redirect your internal and external DNS records to point to the Azure SharePoint farm. Again, take note that DNS record propagation can take some time.

If you are using host-named site collections, as recommended in [Host-named site collection architecture and deployment (SharePoint 2013)](/SharePoint/administration/host-named-site-collection-architecture-and-deployment), you might have several site collections hosted by the same web application in your SharePoint farm, with unique DNS names (for example, `https://sales.contoso.com` and `https://marketing.contoso.com`). In this case, you can create DNS records for each site collection that point to your cluster IP address. After a request reaches your SharePoint front-end web servers, they handle routing each request to the appropriate site collection.

## Microsoft proof-of-concept environment

We designed and tested a proof-of-concept environment for this solution. The design goal for our test environment was to deploy and recover a SharePoint farm that we might find in a customer environment. We made several assumptions, but we knew that the farm needed to provide all of the out-of-the-box functionality without any customizations. The topology was designed for high availability by using best practice guidance from the field and the product group.

The following table describes the Hyper-V virtual machines that we created and configured for the on-premises test environment.
**Table: Virtual machines for the on-premises test**

|Server name|Role|Configuration|
|---|---|---|
|DC1|Domain controller with Active Directory.|Two processors <br/> From 512 MB through 4 GB of RAM <br/> 1 x 127-GB hard disk|
|RRAS|Server configured with the Routing and Remote Access Service (RRAS) role.|Two processors <br/> 2-8 GB of RAM <br/> 1 x 127-GB hard disk|
|FS1|File server with shares for backups and an endpoint for DFSR.|Four processors <br/> 2-12 GB of RAM <br/> 1 x 127-GB hard disk <br/> 1 x 1-TB hard disk (SAN) <br/> 1 x 750-GB hard disk|
|SP-WFE1, SP-WFE2|Front-end web servers.|Four processors <br/> 16 GB of RAM|
|SP-APP1, SP-APP2, SP-APP3|Application servers.|Four processors <br/> 2-16 GB of RAM|
|SP-SQL-HA1, SP-SQL-HA2|Database servers, configured with SQL Server 2012 AlwaysOn availability groups to provide high availability. This configuration uses SP-SQL-HA1 and SP-SQL-HA2 as the primary and secondary replicas.|Four processors <br/> 2-16 GB of RAM|

The following table describes drive configurations for the Hyper-V virtual machines that we created and configured for the front-end web and application servers for the on-premises test environment.

**Table: Virtual machine drive requirements for the front-end web and application servers for the on-premises test**

|Drive letter|Size (GB)|Directory name|Path|
|---|---|---|---|
|C|80|System drive|\<DriveLetter\>:\\Program Files\\Microsoft SQL Server\\|
|E|80|Log drive (40 GB)|\<DriveLetter\>:\\Program Files\\Microsoft SQL Server\\MSSQL10_50.MSSQLSERVER\\MSSQL\\DATA|
|F|80|Page (36 GB)|\<DriveLetter\>:\\Program Files\\Microsoft SQL Server\\MSSQL\\DATA|

The following table describes drive configurations for the Hyper-V virtual machines created and configured to serve as the on-premises database servers. On the **Database Engine Configuration** page, access the **Data Directories** tab to set and confirm the settings shown in the following table.
**Table: Virtual machine drive requirements for the database server for the on-premises test**

|Drive letter|Size (GB)|Directory name|Path|
|---|---|---|---|
|C|80|Data root directory|\<DriveLetter\>:\\Program Files\\Microsoft SQL Server\\|
|E|500|User database directory|\<DriveLetter\>:\\Program Files\\Microsoft SQL Server\\MSSQL10_50.MSSQLSERVER\\MSSQL\\DATA|
|F|500|User database log directory|\<DriveLetter\>:\\Program Files\\Microsoft SQL Server\\MSSQL10_50.MSSQLSERVER\\MSSQL\\DATA|
|G|500|Temp DB directory|\<DriveLetter\>:\\Program Files\\Microsoft SQL Server\\MSSQL10_50.MSSQLSERVER\\MSSQL\\DATA|
|H|500|Temp DB log directory|\<DriveLetter\>:\\Program Files\\Microsoft SQL Server\\MSSQL10_50.MSSQLSERVER\\MSSQL\\DATA|

### Setting up the test environment

During the different deployment phases, the test team typically worked on the on-premises architecture first and then on the corresponding Azure environment. This reflects the general real-world case where in-house production farms are already running. More important, you should know the current production workload, capacity, and typical performance. In addition to building a disaster recovery model that can meet business requirements, you should size the recovery farm servers to deliver a minimum level of service. In a cold or warm standby environment, a recovery farm is typically smaller than a production farm. After the recovery farm is stable and in production, the farm can be scaled up and out to meet workload requirements.

We deployed our test environment in the following three phases:

- Set up the hybrid infrastructure

- Provision the servers

- Deploy the SharePoint farms

#### Set up the hybrid infrastructure

This phase involved setting up a domain environment for the on-premises farm and for the recovery farm in Azure.
In addition to the normal tasks associated with configuring Active Directory, the test team implemented a routing solution and a VPN connection between the two environments.

#### Provision the servers

In addition to the farm servers, it was necessary to provision servers for the domain controllers and configure a server to handle RRAS as well as the site-to-site VPN. Two file servers were provisioned for the DFSR service, and several client computers were provisioned for testers.

#### Deploy the SharePoint farms

The SharePoint farms were deployed in two stages in order to simplify environment stabilization and troubleshooting, if required. During the first stage, each farm was deployed on the minimum number of servers for each tier of the topology to support the required functionality.

We created the database servers with SQL Server installed before creating the SharePoint 2013 servers. Because this was a new deployment, we created the availability groups before deploying SharePoint. We created three groups based on MCS best practice guidance.

> [!NOTE]
> Create placeholder databases so that you can create availability groups before the SharePoint installation. For more information, see [Configure SQL Server 2012 AlwaysOn Availability Groups for SharePoint 2013](/SharePoint/administration/configure-an-alwayson-availability-group).

We created the farm and joined additional servers in the following order:

1. Provision SP-SQL-HA1 and SP-SQL-HA2.

2. Configure AlwaysOn and create the three availability groups for the farm.

3. Provision SP-APP1 to host Central Administration.

4. Provision SP-WFE1 and SP-WFE2 to host the distributed cache.

We used the _skipRegisterAsDistributedCachehost_ parameter when we ran **psconfig.exe** at the command line. For more information, see [Plan for feeds and the Distributed Cache service in SharePoint Server 2013](/sharepoint/administration/plan-for-feeds-and-the-distributed-cache-service).
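For reference, a psconfig.exe invocation that connects a server to the farm configuration database while skipping distributed cache registration might look like the following. This is a hypothetical sketch: the server, database, and passphrase values are placeholders, and any other psconfig parameters your farm requires are omitted.

```powershell
# Hypothetical example: join a server to an existing farm without registering
# it as a distributed cache host. All values are placeholders.
PSConfig.exe -cmd configdb -connect `
    -server "SP-SQL-HA1" `
    -database "SharePoint_Config" `
    -passphrase "<FarmPassphrase>" `
    -skipRegisterAsDistributedCacheHost
```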
We repeated the following steps in the recovery environment:

1. Provision AZ-SQL-HA1 and AZ-SQL-HA2.

2. Configure AlwaysOn and create the three availability groups for the farm.

3. Provision AZ-APP1 to host Central Administration.

4. Provision AZ-WFE1 and AZ-WFE2 to host the distributed cache.

After we configured the distributed cache and added test users and test content, we started stage two of the deployment. This required scaling out the tiers and configuring the farm servers to support the high-availability topology described in the farm architecture.

The following table describes the virtual machines, subnets, and availability sets we set up for our recovery farm.

**Table: Recovery farm infrastructure**

|Server name|Role|Configuration|Subnet|Availability set|
|---|---|---|---|---|
|spDRAD|Domain controller with Active Directory|Two processors <br/> From 512 MB through 4 GB of RAM <br/> 1 x 127-GB hard disk|sp-ADservers||
|AZ-SP-FS|File server with shares for backups and an endpoint for DFSR|A5 configuration: <br/> Two processors <br/> 14 GB of RAM <br/> 1 x 127-GB hard disk <br/> 1 x 135-GB hard disk <br/> 1 x 127-GB hard disk <br/> 1 x 150-GB hard disk|sp-databaseservers|DATA_SET|
|AZ-WFE1, AZ-WFE2|Front-end web servers|A5 configuration: <br/> Two processors <br/> 14 GB of RAM <br/> 1 x 127-GB hard disk|sp-webservers|WFE_SET|
|AZ-APP1, AZ-APP2, AZ-APP3|Application servers|A5 configuration: <br/> Two processors <br/> 14 GB of RAM <br/> 1 x 127-GB hard disk|sp-applicationservers|APP_SET|
|AZ-SQL-HA1, AZ-SQL-HA2|Database servers and primary and secondary replicas for AlwaysOn availability groups|A5 configuration: <br/> Two processors <br/> 14 GB of RAM|sp-databaseservers|DATA_SET|

### Operations

After the test team stabilized the farm environments and completed functional testing, they started the following operations tasks required to configure the on-premises recovery environment:

- Configure full and differential backups.

- Configure DFSR on
the file servers that transfer transaction logs between the on-premises environment and the Azure environment.--- Configure log shipping on the primary database server.--- Stabilize, validate, and troubleshoot log shipping, as required. This included identifying and documenting any behavior that might cause issues, such as network latency, which would cause log shipping or DFSR file synchronization failures.--### Databases --Our failover tests involved the following databases: --- WSS_Content--- ManagedMetadata--- Profile DB--- Sync DB--- Social DB--- Content Type Hub (a database for a dedicated Content Type Syndication Hub)--## Troubleshooting tips --The section explains the problems we encountered during our testing and their solutions. --### Using the Term Store Management Tool caused the error, "The Managed Metadata Store or Connection is currently not available." --Ensure that the application pool account used by the web application has the Read Access to Term Store permission. --### Custom term sets are not available in the site collection --Check for a missing service application association between your content site collection and your content type hub. In addition, under the **Managed Metadata - \<site collection name\> Connection** properties screen, make sure this option is enabled: **This service application is the default storage location for column specific term sets.** --### The Get-ADForest Windows PowerShell command generates the error, "The term 'Get-ADForest' is not recognized as the name of a cmdlet, function, script file, or operable program." --When setting up user profiles, you need the Active Directory forest name. In the Add Roles and Features Wizard, ensure that you have enabled the Active Directory Module for Windows PowerShell (under the **Remote Server Administration Tools>Role Administration Tools>AD DS and AD LDS Tools** section). 
In addition, run the following commands before using **Get-ADForest** to help ensure that your software dependencies are loaded.

```powershell
Import-Module ServerManager
Import-Module ActiveDirectory
```

### Availability group creation fails at Starting the 'AlwaysOn_health' XEvent session on '\<server name\>'

Ensure that both nodes of your failover cluster have the status "Up", not "Paused" or "Stopped".

### SQL Server log shipping job fails with an access denied error trying to connect to the file share

Ensure that your SQL Server Agent is running under network credentials instead of the default credentials.

### SQL Server log shipping job indicates success, but no files are copied

This happens because the default backup preference for an availability group is **Prefer Secondary**. Ensure that you run the log shipping job from the secondary server for the availability group instead of the primary; otherwise, the job will fail silently.

### Managed Metadata service (or other SharePoint service) fails to start automatically after installation

Services might take several minutes to start, depending on the performance and current load of your SharePoint server. Manually click **Start** for the service and allow adequate time for startup, occasionally refreshing the Services on Server screen to monitor its status. If the service remains stopped, enable SharePoint diagnostic logging, attempt to start the service again, and then check the log for errors. For more information, see [Configure diagnostic logging in SharePoint 2013](/sharepoint/administration/configure-diagnostic-logging).

### After changing DNS to the Azure failover environment, client browsers continue to use the old IP address for the SharePoint site

Your DNS change might not be visible to all clients immediately. On a test client, run the following command from an elevated command prompt, and then attempt to access the site again.
```console
ipconfig /flushdns
```

## Additional resources

[Supported high availability and disaster recovery options for SharePoint databases](/sharepoint/administration/supported-high-availability-and-disaster-recovery-options-for-sharepoint-databas)

[Configure SQL Server 2012 AlwaysOn Availability Groups for SharePoint 2013](/SharePoint/administration/configure-an-alwayson-availability-group)

## See Also

[Microsoft 365 solution and architecture center](../solutions/index.yml) |
loop | Loop Compliance Summary | https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/loop/loop-compliance-summary.md | Where the Loop content was originally created determines its storage location:

|Teams private chat|||✔️in Microsoft Teams Chat files folder|
|Teams private meeting|||✔️in Meetings folder|
|Outlook email message|||✔️in Attachments folder|
-|Word for the web|||✔️in Word Loop files folder|
|OneNote for Windows or for the web|||✔️in OneNote Loop files folder|
+|Whiteboard|||✔️in Whiteboard\Components folder|
+|Word for the web (Preview only)|||✔️in Word Loop files folder|

## Summary table of admin management, governance, lifecycle, and compliance capabilities based on where Loop content is stored |
test-base | Faq | https://github.com/MicrosoftDocs/microsoft-365-docs/commits/public/microsoft-365/test-base/faq.md | f1.keywords: NOCSH

- **App Assure**: If you run into compatibility issues or want to ensure that your organization's applications are compatible from day one, you can reach out to App Assure. With enrollment in the [App Assure](https://learn.microsoft.com/en-us/windows/compatibility/app-assure) service, any app compatibility issues that you find with Windows 11 can be resolved. Microsoft helps you remedy application issues at no cost. Since 2018, App Assure has evaluated almost 800,000 apps, and subscriptions are free for eligible customers with 150+ seats.
- **SUVP**: The [Security Update Validation Program (SUVP)](https://techcommunity.microsoft.com/t5/windows-it-pro-blog/security-update-validation-program-the-early-bird-tests-the-worm/ba-p/2569392) is a quality assurance testing program for Microsoft security updates, which are released on the second Tuesday of each month. The SUVP provides early access to Microsoft security updates, up to three weeks in advance of the official release, for the purpose of validation and interoperability testing.
The program encompasses any Microsoft products for which we fix a vulnerability (for example: Windows, Office, Exchange, or SQL Server) and is limited to trusted customers under NDA who have been nominated by a Microsoft representative. To join the SUVP program, contact [suvp@microsoft.com](mailto:suvp@microsoft.com).
- **Selfhost**: If you're building your own service pipeline to validate Windows or Office updates, this guidance and these services could help you: [Azure DevTest Labs](https://learn.microsoft.com/azure/devtest-labs/), [Security Update Guide](https://www.microsoft.com/en-us/msrc/faqs-security-update-guide), [Office Deployment Tool](https://learn.microsoft.com/en-us/deployoffice/overview-office-deployment-tool).

If you have any questions, reach out to [testbase_support@microsoft.com](mailto:testbase_support@microsoft.com).

- During the end-of-life period, customers can continue to use the service for testing, export data, and make necessary arrangements for the transition.
- Access to the service and data will be retained until May 31, 2024.
- After May 31, 2024, all customer data will be permanently deleted.

## Testing

**Q: How do we submit our packages to the Test Base team?**

**A:** Submit your packages directly to the Test Base environment using our self-serve portal.

To submit your application package, navigate to the [Azure portal](https://www.aka.ms/testbaseportal "Test Base Homepage") and upload a zipped folder containing your application's binaries, dependencies, and test scripts via the self-serve Test Base portal dashboard.

For assistance and more information, see the onboarding user guide or contact our team at <testbasepreview@microsoft.com>.

**Q: What are Out-of-box (OOB) tests?**

**A:** Out-of-box (OOB) tests are standardized, default test runs where application packages are installed, launched and closed 30 times, and then uninstalled.

The packages created for Test Base have the following test scripts: install, launch, close, and optionally the uninstall script.

The Out-of-box (OOB) tests provide you with standardized telemetry on your application to compare across Windows builds.

**Q: Can we submit tests outside of the Out-of-box tests (install, launch, close, uninstall test scripts)?**

**A:** Yes, customers can also upload application packages for **functional tests** via the self-serve portal dashboard. **Functional tests** are tests that enable customers to execute their scripts to run custom functionality on their application.

**Q: How long does KB installation take?**

**A:** The KB installation time can vary; the KB installation happens between the install and launch scripts for OOB tests.

**Q: Do you support functional tests?**

**A:** Yes, Test Base supports functional tests. Functional tests are tests that enable our customers to execute their scripts to run custom functionality on their application.
To submit your application package for functional testing, upload the zipped folder containing your application's binaries, dependencies, and test scripts via our self-serve portal dashboard.

For assistance and more information, see the onboarding user guide or contact our team at <testbasepreview@microsoft.com>.

**Q: How does Test Base handle our test data?**

**A:** Test Base securely collects and manages your test data in the Azure environment.

**Q: Can Test Base support our automated tests?**

**A:** Yes, Test Base supports automated tests; however, we don't support manual tests at this time due to service capabilities.

**Q: What languages and frameworks of automated tests do you support?**

**A:** We support all languages and frameworks. We invoke all scripts through PowerShell.

You also need to provide (upload) the dependent binaries of the required framework.

**Q: How soon does Test Base provide test results?**

**A:** For each test that we run against the pre-release builds, we provide results within 24 hours on your [Azure portal](https://www.aka.ms/testbaseportal "Test Base Homepage") dashboard.

**Q: Can you reboot after installation?**

**A:** Yes, our process supports rebooting after installation. Be sure to select this option from the "Optional settings" drop-down list when setting your **Tasks** on the onboarding portal.

For Out-of-box (OOB) tests, you can specify whether a reboot is needed for the _install script_.

![Reboot picture.](Media/reboot.png)

For functional tests, you can specify whether a reboot is required for each script that is added.

![How to select functional tests.](Media/functionalreboot.png)

**Q: What Windows versions do you support?**

**A:** We currently support Windows 11 clients, Windows 10 clients, Windows Server 2016, Windows Server 2016 Core version, Windows Server 2019, and Windows Server 2019 Core version.
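Because all test scripts are invoked through PowerShell, a functional test is typically a standalone `.ps1` file that reports success or failure through its exit code. The following is a hypothetical sketch, not an official Test Base template; the application path is a placeholder you would replace with your own binary:

```powershell
# Hypothetical functional test sketch: launch the app, confirm it stays
# running, and exit nonzero on failure so the run is reported as failed.
$appPath = "C:\Program Files\Contoso\ContosoApp.exe"   # placeholder path

try {
    $proc = Start-Process -FilePath $appPath -PassThru
    Start-Sleep -Seconds 10
    if ($proc.HasExited -and $proc.ExitCode -ne 0) {
        Write-Error "Application exited early with code $($proc.ExitCode)"
        exit 1
    }
    Stop-Process -Id $proc.Id -Force -ErrorAction SilentlyContinue
    exit 0
}
catch {
    Write-Error $_
    exit 1
}
```

Any dependent binaries the script needs would be included in the uploaded zipped folder alongside it, as described above.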
**Q: What is the difference between Security Update tests and Feature Update tests?**

**A:** For Security Update tests, we test against the **<ins>monthly pre-release security updates</ins>** on Windows, which are focused on keeping our users always secure and protected. For Feature Update tests, we test against the **<ins>bi-annual pre-release feature updates</ins>**, which introduce new features and capabilities on Windows.

**Q: How long can my script run?**

**A:** All customer scripts within the package have a script execution limit of 60 minutes. Script executions beyond 60 minutes fail with a timeout error.

**Q: How do I investigate a time-out failure?**

**A:** Follow these steps:

1. Check the video recording:
   1. Confirm whether any Windows pop-up blocked the script execution.
   2. Confirm whether a command was running in interactive mode and waiting for input.
2. Use a VM snapshot to create a VM, reproduce the timeout, and find the root cause.
3. Fix the code issue and continue testing.
4. If the test run does exceed 60 minutes, split it into multiple scripts that each run under 60 minutes.
   1. Run all testing jobs in one central script, which doesn't have a time limit, and monitor the status from multiple Test Base artifact scripts.

**Q: How can I pause my active packages?**

**A:** To pause your active packages, follow these steps:

1. Go to the 'Manage packages' page by clicking the link in the navigation bar.
2. Select the packages that you want to pause by checking the boxes next to the package names.
3. Click the 'Disable future tests' button at the top of the page.

Note: The selected packages will be disabled for execution on all future OS updates that you have chosen. To resume the tests, re-enable the packages by clicking the 'Enable future tests' button.

## Debugging options

**Q: Do we get access to the Virtual Machines (VMs) in case of failures? What does Test Base share?**

**A:** For the service to be compliant and the pre-release updates to be secure, only Microsoft has access to the VMs. However, customers can view test results and other test metrics on their portal dashboard, including crash and hang signals, reliability metrics, memory and CPU utilization, and so on. We also generate and provide logs of test runs on the dashboard for download and further analysis.

We can also provide memory dumps for crash debugging as needed.

**Q: If there are issues found during the testing, what are the next steps to resolve these issues?**

**A:** The Test Base team performs an initial triage process to determine the root cause of the error, and then, depending on our findings, we route the issue to the customer or to internal teams within Microsoft for debugging.

We always work closely with our customers in joint remediation to resolve any issues.

**Q: Does Microsoft hold the release of the security patch until the issue is resolved? What alternate resolutions are available?**

**A:** The goal of Test Base is to ensure our joint end customers don't face any issues. We work hard with software vendors to address any issues before the release, but if a fix isn't feasible, we have other resolutions such as shims and blocks.

## Security

**Q: Where are my packages and binaries stored and what security precautions do you take to keep my data safe?**

**A:** Packages are uploaded and stored in Microsoft-managed Azure cloud storage. The data is encrypted in transit and at rest. When the system is notified that one of your packages needs to be tested, a dedicated and isolated Microsoft-managed Azure guest VM is provisioned with the OS image you selected. This VM lives within our Microsoft tenant and is provisioned within its own VNet/private subnet to prevent any lateral moves from any other Azure VM in our VM pool. The VM is configured to disallow any inbound traffic to protect the integrity of the guest VM.
In addition to these guardrails, your Test Base account and packages are uploaded as Azure resources and benefit from Azure RBAC. You can use Microsoft Entra ID plus Azure RBAC to control access to your account and packages.

**Q: Who has access to the VM?**

**A:** Only our backend services can access the Microsoft-managed VMs that run your workloads. These VMs are configured to disallow any inbound traffic, including remote connections, to protect the integrity of the VM.

## Miscellaneous

**Q: How will the service work with an on-premises server?**

**A:** We currently don't provide support for on-premises servers. However, if the server exposes an HTTP endpoint, we can connect to it over the internet.

**Q: Who hosts the VMs?**

**A:** Microsoft provisions the VMs for this service, taking that load off the customer.

**Q: Does this service support web, mobile, or desktop applications?**

**A:** Currently, our focus is on desktop applications. We have plans to onboard web applications in the future, but we don't support mobile applications at this time.

**Q: What is the difference between Test Base and SUVP?**

**A:** The biggest difference between Test Base and SUVP is that our partners onboard their applications onto the Test Base Azure environment for validation runs against pre-release updates instead of carrying out the tests themselves.

In addition to pre-release security update testing, we support pre-release feature update testing on our platform. We have many other types of updates and OS testing on our roadmap.

**Q: Is there a cost associated with the service?**

**A:** The cost of the service depends on when you sign up and how much you use it. Here are the details:

- If you signed up before November 15, 2023, you'll receive 100 free hours (valued at $800) of Test Base usage under your subscription. These hours expire 6 months from the date of sign-up. After the free hours are consumed or expired, you'll be charged $8 per hour for your usage.
- If you sign up on or after November 15, 2023, you'll receive 100 free hours (valued at $800) of Test Base usage under your tenant. These hours expire 6 months from the date of sign-up. After the free hours are consumed or expired, you'll be charged $8 per hour for your usage.
- Starting from November 15, 2023, if you're a Windows E3/E5 or Microsoft 365 E3/E5 customer, you'll receive an additional 500 hours (equivalent to $4,000) of Test Base usage under your tenant. These hours don't have an expiration date and can be used anytime. Note: Don't disable the service principal "Test Base for M365 - Billing"; otherwise, you might lose the possibility of getting the additional hours.

**Q: How can I provide feedback about Test Base?**

**A:** To share your feedback about Test Base, select the **Feedback** icon at the bottom left of the portal. Include a screenshot with your submission to help Microsoft better understand your feedback.

You can also submit product suggestions and upvote other ideas at <testbasepreview@microsoft.com>. |