Updates from: 10/08/2024 01:08:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/best-practices.md
description: Recommendations and best practices to consider when working with Az
-+ Previously updated : 02/05/2024 Last updated : 10/07/2024
Test and automate your Azure AD B2C implementation.
| Functional and UI testing | Test the user flows end-to-end. Add synthetic tests every few minutes using Selenium, VS Web Test, etc. | | Pen-testing | Before going live with your solution, perform penetration testing exercises to verify all components are secure, including any third-party dependencies. Verify you've secured your APIs with access tokens and used the right authentication protocol for your application scenario. Learn more about [Penetration testing](../security/fundamentals/pen-testing.md) and the [Microsoft Cloud Unified Penetration Testing Rules of Engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement?rtc=1). | | A/B Testing | Flight your new features with a small, random set of users before rolling out to your entire population. With JavaScript enabled in Azure AD B2C, you can integrate with A/B testing tools like Optimizely, Clarity, and others. |
-| Load testing | Azure AD B2C can scale, but your application can scale only if all of its dependencies can scale. Load-test your APIs and CDN. Learn more about [Resilience through developer best practices](../active-directory/architecture/resilience-b2c-developer-best-practices.md).|
+| Load testing | Azure AD B2C can scale, but your application can scale only if all of its dependencies can scale. We recommend that you load-test your policy in production mode, that is, with the `DeploymentMode` attribute in your custom policy file's `<TrustFrameworkPolicy>` element set to `Production`. This setting ensures that performance during the test matches production-level performance. Load-test your APIs and CDN. Learn more about [Resilience through developer best practices](../active-directory/architecture/resilience-b2c-developer-best-practices.md).|
| Throttling | Azure AD B2C throttles traffic if too many requests are sent from the same source in a short period of time. Use several traffic sources while load testing, and handle the `AADB2C90229` error code gracefully in your applications. | | Automation | Use continuous integration and delivery (CI/CD) pipelines to automate testing and deployments, for example, [Azure DevOps](deploy-custom-policies-devops.md). |
Manage your Azure AD B2C environment.
| Use version control for your custom policies | Consider using GitHub, Azure Repos, or another cloud-based version control system for your Azure AD B2C custom policies. | | Use the Microsoft Graph API to automate the management of your B2C tenants | Microsoft Graph APIs:<br/>Manage [Identity Experience Framework](/graph/api/resources/trustframeworkpolicy?preserve-view=true&view=graph-rest-beta) (custom policies)<br/>[Keys](/graph/api/resources/trustframeworkkeyset?preserve-view=true&view=graph-rest-beta)<br/>[User Flows](/graph/api/resources/identityuserflow?preserve-view=true&view=graph-rest-beta) | | Integrate with Azure DevOps | A [CI/CD pipeline](deploy-custom-policies-devops.md) makes moving code between different environments easy and ensures production readiness always. |
-| Deploy custom policy | Azure AD B2C relies on caching to deliver performance to your end users. When you deploy a custom policy using whatever method, expect a delay of up to **30 minutes** for your users to see the changes. As a result of this behavior, consider the following practices when you deploy your custom policies: <br> - If you're deploying to a development environment, set the `DeploymentMode` attribute to `Development` in your custom policy file's `<TrustFrameworkPolicy>` element. <br> - Deploy your updated policy files to a production environment when traffic in your app is low. <br> - When you deploy to a production environment to update existing policy files, upload the updated files with new name(s), and then update your app reference to the new name(s). You can then remove the old policy files afterwards.<br> - You can set the `DeploymentMode` to `Development` in a production environment to bypass the caching behavior. However, we don't recommend this practice. If you [Collect Azure AD B2C logs with Application Insights](troubleshoot-with-application-insights.md), all claims sent to and from identity providers are collected, which is a security and performance risk. |
+| Deploy custom policy | Azure AD B2C relies on caching to deliver performance to your end users. When you deploy a custom policy, by any method, expect a delay of up to **30 minutes** for your users to see the changes. As a result of this behavior, consider the following practices when you deploy your custom policies: <br> - If you're deploying to a development environment, set the `DeploymentMode` attribute in your custom policy file's `<TrustFrameworkPolicy>` element to `Development`. <br> - Deploy your updated policy files to a production environment when traffic in your app is low. <br> - When you deploy to a production environment to update existing policy files, upload the updated files with new name(s), and then update your app reference to the new name(s). You can remove the old policy files afterwards.<br> - You can set the `DeploymentMode` to `Development` in a production environment to bypass the caching behavior. However, we don't recommend this practice. If you [Collect Azure AD B2C logs with Application Insights](troubleshoot-with-application-insights.md), all claims sent to and from identity providers are collected, which is a security and performance risk. |
| Deploy app registration updates | When you modify your application registration in your Azure AD B2C tenant, such as updating the application's redirect URI, expect a delay of up to **2 hours (3600s)** for the changes to take effect in the production environment. We recommend that you modify your application registration in your production environment when traffic in your app is low.| | Integrate with Azure Monitor | [Audit log events](view-audit-logs.md) are only retained for seven days. [Integrate with Azure Monitor](azure-monitor.md) to retain the logs for long-term use, or integrate with third-party security information and event management (SIEM) tools to gain insights into your environment. | | Setup active alerting and monitoring | [Track user behavior](./analytics-with-application-insights.md) in Azure AD B2C using Application Insights. |
application-gateway How To Multiple Site Hosting Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-multiple-site-hosting-ingress-api.md
status:
Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN. ```bash
-fqdn=$(kubectl get ingress ingress-01 -n test-infra -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'')
+fqdn=$(kubectl get ingress ingress-01 -n test-infra -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
``` Next, specify the server name indicator using the curl command, `contoso.com` for the frontend FQDN should return a response from the backend-v1 service.
backup Backup Azure Private Endpoints Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-concept.md
Title: Private endpoints for Azure Backup - Overview
description: This article explains about the concept of private endpoints for Azure Backup that helps to perform backups while maintaining the security of your resources. Previously updated : 07/30/2024 Last updated : 10/01/2024
The following table lists the scenarios and recommendations:
| Backup of workloads in Azure VM (SQL, SAP HANA), Backup using MARS Agent, DPM server. | Use of private endpoints is recommended to allow backup and restore without needing to add to an allowlist any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks. In that scenario, ensure that VMs that host SQL databases can reach Microsoft Entra IPs or FQDNs. | | Azure VM backup | VM backup doesn't require you to allow access to any IPs or FQDNs. So, it doesn't require private endpoints for backup and restore of disks. <br><br> However, file recovery from a vault containing private endpoints would be restricted to virtual networks that contain a private endpoint for the vault. <br><br> When using ACL'ed unmanaged disks, ensure the storage account containing the disks allows access to trusted Microsoft services if it's ACL'ed. | | Azure Files backup | Azure Files backups are stored in the local storage account. So it doesn't require private endpoints for backup and restore. |
+| **Changed VNet for the private endpoint in the vault and virtual machine** | Stop backup protection, and then configure backup protection in a new vault with private endpoints enabled. |
>[!Note] >Private endpoints are supported with only DPM server 2022, MABS v4, and later.
backup Backup Azure Sap Hana Database Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database-troubleshoot.md
Title: Troubleshoot SAP HANA databases back up errors description: Describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA databases. Previously updated : 09/30/2024 Last updated : 10/01/2024
See the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [What
| **Possible Causes** | Restore as files fails because the *directory* selected for restore doesn't exist on the target server or isn't accessible. | **Recommended action** | Verify that the directory you selected is available on the target server, and ensure that you selected the correct target server at the time of restore. |
+### JobCancelledOnExtensionUpgrade
+
+| **Error message** | The Backup job was canceled because the workload backup extension service restarted for an upgrade. |
+| --- | --- |
+| **Possible cause** | The backup or restore job fails because an automatic extension upgrade starts while the backup/restore operation is in progress. |
+| **Recommended action** | Wait for the extension upgrade to complete. HANA then re-triggers the failed log backups, if any. <br><br> However, Azure Backup doesn't re-trigger failed Full/Differential/Incremental backups; you need to retrigger the operation manually (see the sketch after this table). |
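
If you prefer to script the manual retry, the following minimal sketch uses the Az.RecoveryServices PowerShell module. The vault, resource group, and database item names are placeholders, and the filter shown is hypothetical; adjust it to match your own HANA backup item.

```powershell
# Minimal sketch with placeholder names: re-trigger a failed full backup
# for an SAP HANA database after the extension upgrade completes.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName 'hana-rg' -Name 'hana-vault'

$item = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload `
    -WorkloadType SAPHanaDatabase -VaultId $vault.ID |
    Where-Object { $_.Name -like '*hxe*' }   # hypothetical filter for the database item

# Trigger an on-demand backup; use Differential or Incremental instead of Full as needed.
Backup-AzRecoveryServicesBackupItem -Item $item -BackupType Full -VaultId $vault.ID
```
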
+ ## Restore checks ### Single Container Database (SDC) restore
backup Private Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints-overview.md
Title: Private endpoints overview description: Understand the use of private endpoints for Azure Backup and the scenarios where using private endpoints helps maintain the security of your resources. Previously updated : 07/30/2024 Last updated : 10/01/2024
While private endpoints are enabled for the vault, they're used for backup and r
| Backup of workloads in Azure VM (SQL, SAP HANA), Backup using MARS Agent, DPM server. | Use of private endpoints is recommended to allow backup and restore without needing to add to an allowlist any IPs/FQDNs for Azure Backup or Azure Storage from your virtual networks. In that scenario, ensure that VMs that host SQL databases can reach Microsoft Entra IPs or FQDNs. | | **Azure VM backup** | VM backup doesn't require you to allow access to any IPs or FQDNs. So, it doesn't require private endpoints for backup and restore of disks. <br><br> However, file recovery from a vault containing private endpoints would be restricted to virtual networks that contain a private endpoint for the vault. <br><br> When using ACL'ed unmanaged disks, ensure the storage account containing the disks allows access to **trusted Microsoft services** if it's ACL'ed. | | **Azure Files backup** | Azure Files backups are stored in the local storage account. So it doesn't require private endpoints for backup and restore. |
+| **Changed VNet for the private endpoint in the vault and virtual machine** | Stop backup protection, and then configure backup protection in a new vault with private endpoints enabled. |
>[!NOTE] >Private endpoints are supported with only DPM server 2022, MABS v4, and later.
backup Sap Hana Database With Hana System Replication Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-with-hana-system-replication-backup.md
Title: Back up SAP HANA System Replication databases on Azure VMs using Azure Backup description: In this article, discover how to back up SAP HANA databases with HANA System Replication enabled. Previously updated : 09/30/2024 Last updated : 10/01/2024
When a failover occurs, the users are replicated to the new primary, but *hdbuse
| SDC | Backup Admin | Reads the backup catalog. | | SAP_INTERNAL_HANA_SUPPORT | | Accesses a few private tables. <br><br> Required only for single container database (SDC) and multiple container database (MDC) versions earlier than HANA 2.0 SPS04 Rev 46. It isn't required for HANA 2.0 SPS04 Rev 46 versions and later, because we receive the required information from public tables now after the fix from HANA team. |
+ **Example**:
+
+ ```HDBSQL
+ - hdbsql -t -U SYSTEMKEY CREATE USER USRBKP PASSWORD AzureBackup01 NO FORCE_FIRST_PASSWORD_CHANGE
+ - hdbsql -t -U SYSTEMKEY 'ALTER USER USRBKP DISABLE PASSWORD LIFETIME'
+ - hdbsql -t -U SYSTEMKEY 'ALTER USER USRBKP RESET CONNECT ATTEMPTS'
+ - hdbsql -t -U SYSTEMKEY 'ALTER USER USRBKP ACTIVATE USER NOW'
+ - hdbsql -t -U SYSTEMKEY 'GRANT DATABASE ADMIN TO USRBKP'
+ - hdbsql -t -U SYSTEMKEY 'GRANT CATALOG READ TO USRBKP'
+ ```
+ 1. Add the key to *hdbuserstore* for your custom backup user that enables the HANA backup plug-in to manage all operations (database queries, restore operations, configuring, and running backup).
+ **Example**:
+
+ ```HDBSQL
+ - hdbuserstore set BKPKEY localhost:39013 USRBKP AzureBackup01
+ ```
+ 1. Pass the custom backup user key to the script as a parameter: ```HDBSQL
When a failover occurs, the users are replicated to the new primary, but *hdbuse
You must provide the same HSR ID on both VMs/nodes. This ID must be unique within a vault. It should be an alphanumeric value containing at least one digit, one lowercase letter, and one uppercase character, and it should contain from 6 to 35 characters.
+ **Example**:
+
+ ```HDBSQL
+ - ./script.sh -sk SYSTEMKEY -bk USRBKP -hn HSRlab001 -p 39013
+ ```
+ 1. While you're running the preregistration script on the secondary node, you must specify the SDC/MDC port as input. This is because SQL commands to identify the SDC/MDC setup can't be run on the secondary node. You must provide the port number as a parameter, as shown here: `-p PORT_NUMBER` or `--port_number PORT_NUMBER`.
When a failover occurs, the users are replicated to the new primary, but *hdbuse
- For MDC, use the format `3<instancenumber>13`. - For SDC, use the format `3<instancenumber>15`.
+ **Example**:
+
+ ```HDBSQL
+ - MDC: ./script.sh -sk SYSTEMKEY -bk USRBKP -hn HSRlab001 -p 39013
+ - SDC: ./script.sh -sk SYSTEMKEY -bk USRBKP -hn HSRlab001 -p 39015
+ ```
+ 1. If your HANA setup uses private endpoints, run the preregistration script with the `-sn` or `--skip-network-checks` parameter. After the preregistration script runs successfully, proceed to the next steps. 1. Run the SAP HANA backup configuration script (preregistration script) as the root user in the VMs where HANA is installed. This script sets up the HANA system for backup. For more information about the script actions, see the [What the preregistration script does](tutorial-backup-sap-hana-db.md#what-the-pre-registration-script-does) section.
backup Tutorial Sap Hana Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-restore-cli.md
Title: Tutorial - SAP HANA DB restore on Azure using CLI description: In this tutorial, learn how to restore SAP HANA databases running on an Azure VM from an Azure Backup Recovery Services vault using Azure CLI. Previously updated : 07/30/2024 Last updated : 10/01/2024
arvind@Azure:~$
Ensure that the following prerequisites are met before restoring a database:
-* You can restore the database only to an SAP HANA instance that's in the same region
-* The target instance must be registered with the same vault as the source
+* You can restore the database only to an SAP HANA instance that's in the same region.
+* The target instance must be registered with the same vault as the source or another vault in the same region.
* Azure Backup can't identify two different SAP HANA instances on the same VM. Therefore, restoring data from one instance to another on the same VM isn't possible. ## Restore a database
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/sdk-features.md
Azure Resource Manager for email communication resources is meant for email doma
## API throttling and timeouts
-Your Azure account limits the number of email messages that you can send. For all developers, the limits are 30 mails sent per minute and 100 mails sent per hour.
+The Azure Communication Services email service is designed to support high throughput. The initial rate limits are intended to help customers onboard smoothly and avoid some of the issues that can occur when switching to a new email service.
-This sandbox setup helps developers start building the application. Gradually, you can request to increase the sending volume as soon as the application is ready to go live. Submit a support request to increase your sending limit.
+To learn more about these limits and instructions for requesting an increase, see [Service limits for Azure Communication Services > Email](../../concepts/service-limits.md#email).
## Next steps
-* [Create and manage an email communication resource in Azure Communication Services](../../quickstarts/email/create-email-communication-resource.md)
-* [Connect a verified email domain in Azure Communication Services](../../quickstarts/email/connect-email-communication-resource.md)
+* [Create and manage an email communication resource in Azure Communication Services](../../quickstarts/email/create-email-communication-resource.md).
+* [Connect a verified email domain in Azure Communication Services](../../quickstarts/email/connect-email-communication-resource.md).
-The following topics might be interesting to you:
+## Related articles
* Learn how to send emails with [custom verified domains](../../quickstarts/email/add-custom-verified-domains.md). * Learn how to send emails with [Azure-managed domains](../../quickstarts/email/add-azure-managed-domains.md).
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
For more information on the SMS SDK and service, see the [SMS SDK overview](./sm
You can send a limited number of email messages. If you exceed the following limits for your subscription, your requests are rejected. You can attempt these requests again, after the Retry-After time passes. Take action before reaching the limit by requesting to raise your sending volume limits if needed.
+The Azure Communication Services email service is designed to support high throughput. However, the service imposes initial rate limits to help customers onboard smoothly and avoid some of the issues that can occur when switching to a new email service. We recommend gradually increasing your email volume using Azure Communication Services Email over a period of two to four weeks, while closely monitoring the delivery status of your emails. This gradual increase allows third-party email service providers to adapt to the change in IP for your domain's email traffic, thus protecting your sender reputation and maintaining the reliability of your email delivery.
+
+We approve higher limits for customers based on use case requirements, domain reputation, traffic patterns, and failure rates. To request higher limits, follow the instructions at [Quota increase for email domains](./email/email-quota-increase.md). Note that higher quotas are only available for verified custom domains, not Azure-managed domains.
+ ### Rate Limits [Custom Domains](../quickstarts/email/add-custom-verified-domains.md)
cost-management-billing Enable Tag Inheritance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-tag-inheritance.md
Let's look at another example where a resource tag gets overridden. In the follo
## Usage record updates
-After the tag inheritance setting is enabled, it takes about 8-24 hours for the child resource usage records to get updated with subscription and resource group tags. The usage records are updated for the current month using the existing subscription and resource group tags.
-
-For example, if the tag inheritance setting is enabled on October 20, child resource usage records are updated from October 1 using the tags that existed on October 20.
-
-Similarly, if the tag inheritance setting is disabled, the inherited tags are removed from the usage records for the current month.
+After the tag inheritance setting is updated, it takes about 8-24 hours for the child resource usage records to get updated. Any update to the setting or the tags being inherited takes effect for the current month.
+For example, if the tag inheritance setting is enabled on October 20, child resource usage records are updated from October 1 using the tags that existed on October 20.
+
> [!NOTE] > If there are purchases or resources that donΓÇÖt emit usage at a subscription scope, they will not have the subscription tags applied even if the setting is enabled.
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md
Previously updated : 07/31/2024 Last updated : 10/07/2024 + # Add, update, or delete a payment method
To change your subscription's default credit card to a new one:
1. Enter details for the credit card. :::image type="content" source="./media/change-credit-card/sub-add-new-default.png" alt-text="Screenshot that shows the pane for adding credit card details." lightbox="./media/change-credit-card/sub-add-new-default.png" :::
+ - For customers in India, when you add a new payment method, Azure generates a one-time password for you. When prompted, enter the password to save the new payment method.
1. To make this card your default payment method, select **Make this my default payment method**. This card becomes the active payment instrument for all subscriptions that use the same card as the selected subscription. 1. Select **Next**.
If you have a Microsoft Customer Agreement, your credit card is associated with
1. Enter details for the credit card. :::image type="content" source="./media/change-credit-card/sub-add-new-card-billing-profile.png" alt-text="Screenshot that shows the pane for adding a new credit card as a payment method." lightbox="./media/change-credit-card/sub-add-new-card-billing-profile.png" :::
+ - For customers in India, when you add a new payment method, Azure generates a one-time password for you. When prompted, enter the password to save the new payment method.
1. To make this card your default payment method, select **Make this my default payment method**. This card becomes the active payment instrument for all subscriptions that use the same card as the selected subscription. 1. Select **Next**.
cost-management-billing Limited Time Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/limited-time-linux.md
The offer is available based on the following criteria:
- Enterprise (MS-AZR-0017P or MS-AZR-0148P) - Pay-as-you-go (MS-AZR-0003P or MS-AZR-0023P) - Microsoft Customer Agreement
+ - 21v Customer Agreement
- Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations?source=azlto1) to purchase Azure Reservations. You can't purchase a reservation if you have a custom role that mimics owner role or reservation purchaser role on an Azure subscription. You must use the built-in owner or built-in reservation purchaser role. - For more information about who can purchase a reservation, see [Buy an Azure reservation](prepare-buy-reservation.md?source=azlto2).
dev-box Concept Dev Box Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-architecture.md
The network connection that is associated with a dev box pool determines where t
Developers can create a dev box from a dev box pool by using the developer portal. They might choose from a specific pool based on the VM image, compute resources, or the location where the dev box is hosted.
-Once the dev box is running, dev box users can [remotely connect](#user-connectivity) to it by using a remote desktop client or directly from the browser. Dev box users have full control over the dev boxes they created, and can manage them from the developer portal.
+Once the dev box is running, dev box users can [remotely connect](#user-connectivity) to it by using a Remote Desktop client like Windows App, or directly from the browser. Dev box users have full control over the dev boxes they created, and can manage them from the developer portal.
## Microsoft Dev Box architecture
When you configure dev boxes to use [Microsoft Entra join](/azure/active-directo
### User connectivity
-When a dev box is running, developers can connect to the dev box by using a Remote Desktop client or directly from within the browser.
+When a dev box is running, developers can connect to the dev box by using a Remote Desktop client like Windows App, or directly from within the browser.
Dev box connectivity is provided by Azure Virtual Desktop. No inbound connections direct from the Internet are made to the dev box. Instead, the following connections are made:
dev-box Concept Dev Box Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-deployment-guide.md
Each of these roles has specific responsibilities during the deployment of Micro
- **Developer**: self-serve one or more dev boxes within their assigned projects. - Create and manage a dev box based on project dev box pool from the developer portal
- - Connect to a dev box by using remote desktop or from the browser
+ - Connect to a dev box by using a Remote Desktop client like Windows App
:::image type="content" source="media/overview-what-is-microsoft-dev-box/dev-box-roles.png" alt-text="Diagram that shows roles and responsibilities for Dev Box platform engineers, team leads, and developers." lightbox="media/overview-what-is-microsoft-dev-box/dev-box-roles.png" border="false":::
Microsoft Dev Box uses Microsoft Intune to manage your dev boxes. Use Microsoft
#### Device configuration
-After a dev box is provisioned, you can manage it like any other Windows device in Microsoft Intune. For example, you can create [device configuration profiles](/mem/intune/configuration/device-profiles) to turn different settings on and off in Windows, or push apps and updates to your usersΓÇÖ dev boxes.
+After a dev box is provisioned, you can manage it like any other Windows device in Microsoft Intune. For example, you can create [device configuration profiles](/mem/intune/configuration/device-profiles) to turn different settings on and off in Windows, or push apps and updates to your users' dev boxes.
#### Configure conditional access policies
-You can use Intune to configure conditional access policies to control access to dev boxes. For Dev Box, itΓÇÖs common to configure conditional access policies to restrict who can access dev box, what they can do, and where they can access from. To configure conditional access policies, you can use Microsoft Intune to create dynamic device groups and conditional access policies.
+You can use Intune to configure conditional access policies to control access to dev boxes. For Dev Box, it's common to configure conditional access policies to restrict who can access dev box, what they can do, and where they can access from. To configure conditional access policies, you can use Microsoft Intune to create dynamic device groups and conditional access policies.
Some usage scenarios for conditional access in Microsoft Dev Box include:
Learn how you can [configure conditional access policies for Dev Box](./how-to-c
#### Privilege management
-You can configure Microsoft Intune Endpoint Privilege Management (EPM) for dev boxes so that dev box users don't need local administrative privileges. Microsoft Intune Endpoint Privilege Management allows your organizationΓÇÖs users to run as a standard user (without administrator rights) and complete tasks that require elevated privileges. Tasks that commonly require administrative privileges are application installs (like Microsoft 365 Applications), updating device drivers, and running certain Windows diagnostics.
+You can configure Microsoft Intune Endpoint Privilege Management (EPM) for dev boxes so that dev box users don't need local administrative privileges. Microsoft Intune Endpoint Privilege Management allows your organization's users to run as a standard user (without administrator rights) and complete tasks that require elevated privileges. Tasks that commonly require administrative privileges are application installs (like Microsoft 365 Applications), updating device drivers, and running certain Windows diagnostics.
Learn more about how to [configure Microsoft Intune Endpoint Privilege for Microsoft Dev Box](./how-to-elevate-privilege-dev-box.md).
dev-box How To Configure Multiple Monitors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-multiple-monitors.md
When you connect to your cloud-hosted developer machine in Microsoft Dev Box by
| Remote Desktop Connection (MSTSC) | <sub>:::image type="icon" source="./media/how-to-configure-multiple-monitors/yes.svg" border="false":::</sub> | [Microsoft Remote Desktop Connection](/azure/dev-box/how-to-configure-multiple-monitors?branch=main&tabs=windows-connection#configure-remote-desktop-to-use-multiple-monitors) | | Microsoft Remote Desktop for macOS | <sub>:::image type="icon" source="./media/how-to-configure-multiple-monitors/yes.svg" border="false":::</sub> | [Microsoft Remote Desktop for macOS](/azure/dev-box/how-to-configure-multiple-monitors?branch=main&tabs=macOS#configure-remote-desktop-to-use-multiple-monitors) | ++ ## Prerequisites To complete the steps in this article, you must install the appropriate Remote Desktop client on your local machine.
dev-box How To Create Dev Boxes Developer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-create-dev-boxes-developer-portal.md
You can also create a dev box through the Azure CLI dev center extension. For mo
## Connect to a dev box
-After you create your dev box, you can connect to it through a remote application or via the browser.
+After you create your dev box, you can connect to it through a Remote Desktop application or via the browser.
-A remote desktop client application like Windows App provides the highest performance and best user experience for heavy workloads. Windows App also supports multi-monitor configuration. For more information, see [Get started with Windows App](/windows-app/get-started-connect-devices-desktops-apps?context=/azure/dev-box/context/context&pivots=dev-box).
+The new Windows App remote desktop client is the recommended client for Microsoft Dev Box; it provides an enhanced user experience, including support for multiple monitors. It is also available on multiple platforms, including Windows, macOS, iOS/iPadOS, Android/Chrome OS (preview), and web browsers. For more information, see [Get started with Windows App](https://aka.ms/dev-box/windows-app).
You can use the **browser** for lighter workloads. When you access your dev box via your phone or laptop, you can use the browser. The browser is useful for tasks such as a quick bug fix or a review of a GitHub pull request. For more information, see the [steps for using a browser to connect to a dev box](./quickstart-create-dev-box.md#connect-to-a-dev-box).
dev-box How To Enable Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-enable-single-sign-on.md
If you disable single sign-on for a pool, new dev boxes created from that pool p
When single sign-on is enabled for a pool, your sign-on experience is as follows:
-The first time you connect to a dev box with single sign-on enabled, you first sign into your physical machine. Then you connect to your dev box from the Remote Desktop app or the developer portal. When the dev box starts up, you must enter your credentials to access the dev box.
+The first time you connect to a dev box with single sign-on enabled, you first sign into your physical machine. Then you connect to your dev box from a remote desktop client like Windows App or the developer portal. When the dev box starts up, you must enter your credentials to access the dev box.
-The next time you connect to your dev box, whether through the Remote Desktop app or through the developer portal, you don't have to enter your credentials.
+The next time you connect to your dev box, whether through the Windows App or through the developer portal, you don't have to enter your credentials.
If your connection to your dev box is interrupted because your client machine goes to sleep, you see a message explaining the issue, and you can reconnect by selecting the **Reconnect** button. You don't have to reenter your credentials.
dev-box How To Hibernate Your Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-hibernate-your-dev-box.md
To resume your dev box through the Microsoft Dev Box developer portal:
1. On the dev box you want to resume, on the more options menu, select **Resume**.
-In addition, you can also double select on your dev box in the list of VMs you see in the "Remote Desktop" app. Your dev box automatically starts up and resumes from a hibernating state.
+In addition, you can also double select on your dev box in the list of VMs you see in the Windows App. Your dev box automatically starts up and resumes from a hibernating state.
## Hibernate your dev box using the Azure CLI
dev-box How To Troubleshoot Repair Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-troubleshoot-repair-dev-box.md
Title: Troubleshoot and repair Dev Box RDP connectivity issues
+ Title: Troubleshoot and repair Remote Desktop connectivity issues
description: Having problems connecting to your dev box remotely? Learn how to troubleshoot and resolve connectivity issues to your dev box with developer portal tools.
Last updated 01/10/2024
#CustomerIntent: As a dev box user, I want to be able to troubleshoot and repair connectivity issues with my dev box so that I don't lose development time.
-# Troubleshoot and resolve dev box remote desktop connectivity issues
+# Troubleshoot and resolve dev box Remote Desktop connectivity issues
-In this article, you learn how to troubleshoot and resolve remote desktop connectivity (RDC) issues with your dev box. Because RDC issues to your dev box can be time consuming to resolve manually, use the **Troubleshoot & repair** tool in the developer portal to diagnose and repair some common dev box connectivity issues.
+In this article, you learn how to troubleshoot and resolve Remote Desktop Connectivity (RDC) issues with your dev box. Because RDC issues to your dev box can be time consuming to resolve manually, use the **Troubleshoot & repair** tool in the developer portal to diagnose and repair some common dev box connectivity issues.
:::image type="content" source="media/how-to-troubleshoot-repair-dev-box/dev-box-troubleshoot-repair-tool.png" alt-text="Screenshot showing the Troubleshoot and repair tool in the Microsoft developer portal." lightbox="media/how-to-troubleshoot-repair-dev-box/dev-box-troubleshoot-repair-tool.png":::
dev-box Overview What Is Microsoft Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/overview-what-is-microsoft-dev-box.md
Title: What is Microsoft Dev Box?
-description: Explore Microsoft Dev Box for self-service access to ready-to-code cloud-based workstations and developer productivity that integrates with tools like Visual Studio.
+description: Explore Microsoft Dev Box for self-service ready-to-code cloud-based workstations and developer productivity that integrates with tools like Visual Studio.
adobe-target: true
Microsoft Dev Box gives developers self-service access to ready-to-code cloud workstations called *dev boxes*. You can configure dev boxes with tools, source code, and prebuilt binaries that are specific to a project, so developers can immediately start work. You can create your own customized image, or use a preconfigured image from Azure Marketplace, complete with Visual Studio already installed.
-If you're a developer, you can use multiple dev boxes in your day-to-day workflows. You can access your dev boxes through a remote desktop client, or through a web browser, like any virtual desktop.
+If you're a developer, you can use multiple dev boxes in your day-to-day workflows. You can access your dev boxes through a Remote Desktop client like Windows App, or through a web browser, like any virtual desktop.
The Dev Box service was designed with three organizational roles in mind: platform engineers, development team leads, and developers.
dev-box Quickstart Create Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md
Last updated 08/30/2024
# Quickstart: Create and connect to a dev box by using the Microsoft Dev Box developer portal
-In this quickstart, you get started with Microsoft Dev Box by creating a dev box through the developer portal. After you create the dev box, you can connect to it with a Remote Desktop session through a browser or through a Remote Desktop app.
+In this quickstart, you get started with Microsoft Dev Box by creating a dev box through the developer portal. After you create the dev box, you can connect to it through a browser or through a Remote Desktop client like Windows App.
You can create and manage multiple dev boxes as a dev box user. Create a dev box for each task that you're working on, and create multiple dev boxes within a single project to help streamline your workflow. For example, you might switch to another dev box to fix a bug in a previous version, or if you need to work on a different part of the application.
To create a dev box in the Microsoft Dev Box developer portal:
After you create a dev box, you can connect remotely to the developer virtual machine. You can connect from your desktop, laptop, tablet, or phone. Microsoft Dev Box supports connecting to a dev box in the following ways: - Connect through the browser from within the developer portal-- Connect by using a remote desktop client application
+- Connect by using a Remote Desktop client application
To connect to a dev box by using the browser:
To connect to a dev box by using the browser:
A new tab opens with a Remote Desktop session through which you can use your dev box. Use a work or school account to sign in to your dev box, not a personal Microsoft account.
-> [!TIP]
-> A Remote Desktop client provides best performance and advanced features like multiple monitor support. For more information, see [Connect to a dev box by using a Remote Desktop app](./tutorial-connect-to-dev-box-with-remote-desktop-app.md).
## Clean up resources
dev-box Tutorial Connect To Dev Box With Remote Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md
In this tutorial, you download and use a remote desktop (RDP) client application
Remote desktop apps let you use and control a dev box from almost any device. For your desktop or laptop, you can choose to download the Remote Desktop client for Windows Desktop or Microsoft Remote Desktop for Mac. You can also download a remote desktop app for your mobile device: Microsoft Remote Desktop for iOS or Microsoft Remote Desktop for Android.
-> [!TIP]
-> Many remote desktops apps allow you to [use multiple monitors](tutorial-configure-multiple-monitors.md) when you connect to your dev box.
Alternately, you can access your dev box through the browser from the Microsoft Dev Box developer portal.
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
The following request headers don't get forwarded to the origin when caching is
- `Accept-Language` > [!NOTE]
-> Requests that include authorization header will not be cached.
+> Requests that include an `Authorization` header aren't cached, unless the response contains a `Cache-Control` directive that allows caching. The following `Cache-Control` directives have that effect: `must-revalidate`, `public`, and `s-maxage`.
## Response headers
governance Create Policy Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to/create-policy-definition.md
description: Learn how to create a machine configuration policy.
Last updated 02/01/2024 + # How to create custom machine configuration policy definitions Before you begin, it's a good idea to read the overview page for [machine configuration][01], and
Parameters of the `New-GuestConfigurationPolicy` cmdlet:
- **Description**: Policy description. - **Parameter**: Policy parameters provided in a hash table. - **PolicyVersion**: Policy version.-- **Path**: Destination path where policy definitions are created.
+- **Path**: Destination path where policy definitions are created. Don't specify this parameter as
+ the path to a local copy of the package.
- **Platform**: Target platform (Windows/Linux) for machine configuration policy and content package.-- **Mode**: (case sensitive: `ApplyAndMonitor`, `ApplyAndAutoCorrect`, `Audit`) choose if the policy should audit
- or deploy the configuration. The default is `Audit`.
-- **Tag** adds one or more tag filters to the policy definition-- **Category** sets the category metadata field in the policy definition
+- **Mode**: (case sensitive: `ApplyAndMonitor`, `ApplyAndAutoCorrect`, `Audit`) choose if the
+ policy should audit or deploy the configuration. The default is `Audit`.
+- **Tag**: Adds one or more tag filters to the policy definition.
+- **Category**: Sets the category metadata field in the policy definition.
+- **LocalContentPath**: The path to the local copy of the `.zip` Machine Configuration package
+ file. This parameter is required if you're using a User Assigned Managed Identity to provide
+ access to an Azure Storage blob.
+- **ManagedIdentityResourceId**: The `resourceId` of the User Assigned Managed Identity that has
+ read access to the Azure Storage blob containing the `.zip` Machine Configuration package file.
+ This parameter is required if you're using a User Assigned Managed Identity to provide access to
+ an Azure Storage blob.
+- **ExcludeArcMachines**: Specifies that the Policy definition should exclude Arc machines. This
+ parameter is required if you're using a User Assigned Managed Identity to provide access to an
+ Azure Storage blob.
+
+> [!IMPORTANT]
+> Unlike Azure VMs, Arc-connected machines currently do not support User Assigned Managed
+> Identities. As a result, the `-ExcludeArcMachines` flag is required to ensure the exclusion of
+> those machines from the policy definition. For the Azure VM to download the assigned package and
+> apply the policy, the Guest Configuration Agent must be version `1.29.82.0` or higher for Windows
+> and version `1.26.76.0` or higher for Linux.
For more information about the **Mode** parameter, see the page [How to configure remediation options for machine configuration][02].
-Create a policy definition that audits using a custom configuration package, in a specified path:
+Create a policy definition that **audits** using a custom configuration package, in a specified path:
```powershell $PolicyConfig = @{
$PolicyConfig = @{
New-GuestConfigurationPolicy @PolicyConfig ```
-Create a policy definition that deploys a configuration using a custom configuration package, in a
-specified path:
+Create a policy definition that **enforces** a custom configuration package, in a specified path:
```powershell $PolicyConfig2 = @{
$PolicyConfig2 = @{
New-GuestConfigurationPolicy @PolicyConfig2 ```
+Create a policy definition that **enforces** a custom configuration package using a User-Assigned
+Managed Identity:
+
+```powershell
+$PolicyConfig3 = @{
+ PolicyId = '_My GUID_'
+ ContentUri = $contentUri
+ DisplayName = 'My deployment policy'
+ Description = 'My deployment policy'
+ Path = './policies/deployIfNotExists.json'
+ Platform = 'Windows'
+ PolicyVersion = '1.0.0'
+ Mode = 'ApplyAndAutoCorrect'
+ LocalContentPath = "C:\Local\Path\To\Package" # Required parameter for managed identity
+ ManagedIdentityResourceId = "YourManagedIdentityResourceId" # Required parameter for managed identity
+}
+
+New-GuestConfigurationPolicy @PolicyConfig3 -ExcludeArcMachines
+```
+
+> [!NOTE]
+> You can retrieve the resourceId of a managed identity using the `Get-AzUserAssignedIdentity`
+> PowerShell cmdlet.
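
As a quick, hedged illustration (the resource group and identity names are placeholders), you can look up the value to pass to `ManagedIdentityResourceId` like this:

```powershell
# Placeholder names: look up the resource ID of a user-assigned managed identity.
$identity = Get-AzUserAssignedIdentity -ResourceGroupName 'my-rg' -Name 'my-identity'
$identity.Id   # pass this value to ManagedIdentityResourceId
```
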
+ The `New-GuestConfigurationPolicy` cmdlet returns an object containing the definition display name and path of the policy files. Definition JSON files that create audit policy definitions have the name `auditIfNotExists.json` and files that create policy definitions to apply configurations have the
governance 4 Publish Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to/develop-custom-package/4-publish-package.md
$contentUri = New-AzStorageBlobSASToken @tokenParams
## Next step > [!div class="nextstepaction"]
-> [Sign a custom machine configuration package](./5-sign-package.md)
+> [Provide secure access to a custom machine configuration package](./5-access-package.md)
+ <!-- Reference link definitions --> [01]: ../../overview.md
governance 5 Access Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to/develop-custom-package/5-access-package.md
+
+ Title: How to access custom machine configuration package artifacts
+description: Learn how to provide access to a machine configuration package file in Azure Blob Storage.
Last updated : 08/28/2024++++
+# How to provide secure access to custom machine configuration packages
+
+This article describes how to provide access to machine configuration packages stored in
+Azure Storage by using the resource ID of a user-assigned managed identity or a shared access
+signature (SAS) token.
+
+## Prerequisites
+
+- Azure subscription
+- Azure Storage account with the Machine Configuration package
+
+## Steps to provide access to the package
+
+The following steps prepare your resources for more secure operations. The code snippets for the
+steps include values in angle brackets, like `<storage-account-container-name>`, which you must
+replace with a valid value when following the steps. If you just copy and paste the code, the
+commands may raise errors due to invalid values.
+
+### Using a User Assigned Identity
+
+> [!IMPORTANT]
+> Unlike Azure VMs, Arc-connected machines currently do not support User-Assigned
+> Managed Identities.
+
+You can grant private access to a machine configuration package in an Azure Storage blob by
+assigning a [User-Assigned Identity][01] to a scope of Azure VMs. For this to work, you need to
+grant the managed identity read access to the Azure storage blob. This involves assigning the
+"Storage Blob Data Reader" role to the identity at the scope of the blob container. This setup
+ensures that your Azure VMs can securely read from the specified blob container using the
+user-assigned managed identity. To learn how you can assign a User Assigned Identity at scale, see
+[Use Azure Policy to assign managed identities][02].
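
As an illustrative sketch only (the subscription ID, storage account, container, and identity names below are placeholders), the role assignment can be made with Azure PowerShell:

```powershell
# Placeholder values: grant the user-assigned identity read access to the blob container.
$identity = Get-AzUserAssignedIdentity -ResourceGroupName 'my-rg' -Name 'my-identity'
$scope = '/subscriptions/<subscription-id>/resourceGroups/my-rg' +
    '/providers/Microsoft.Storage/storageAccounts/<storage-account-name>' +
    '/blobServices/default/containers/<storage-account-container-name>'

New-AzRoleAssignment -ObjectId $identity.PrincipalId `
    -RoleDefinitionName 'Storage Blob Data Reader' `
    -Scope $scope
```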
+
+### Using a SAS Token
+
+Optionally, you can add a shared access signature (SAS) token in the URL to ensure secure access to
+the package. The following example generates a blob SAS token with read access and returns the full
+blob URI with the shared access signature token. In this example, the token has a time limit of
+three years.
+
+```powershell
+$startTime = Get-Date
+$endTime = $startTime.AddYears(3)
+
+$tokenParams = @{
+ StartTime = $startTime
+ ExpiryTime = $endTime
+ Container = '<storage-account-container-name>'
+ Blob = '<configuration-blob-name>'
+ Permission = 'r'
+ Context = '<storage-account-context>'
+ FullUri = $true
+}
+
+$contentUri = New-AzStorageBlobSASToken @tokenParams
+```
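
Assuming you then create the policy definition as described earlier in this series, a brief sketch of passing the SAS-protected URI to `New-GuestConfigurationPolicy` looks like the following; the display name, description, and output path are placeholders:

```powershell
# Placeholder values: use the SAS-protected URI when creating the policy definition.
$policyParams = @{
    PolicyId      = (New-Guid).Guid
    ContentUri    = $contentUri
    DisplayName   = 'My audit policy'
    Description   = 'Audit policy that uses a SAS-protected package URI'
    Path          = './policies'
    Platform      = 'Windows'
    Mode          = 'Audit'
    PolicyVersion = '1.0.0'
}
New-GuestConfigurationPolicy @policyParams
```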
+
+## Summary
+
+By using the resource ID of a user-assigned managed identity or SAS token, you can securely provide
+access to Machine Configuration packages stored in Azure storage. The additional parameters ensure
+that the package is retrieved using the managed identity and that Azure Arc machines aren't
+included in the policy scope.
+
+## Next steps
+
+- After creating the policy definition, you can assign it to the appropriate scope, like management
+ group, subscription, or resource group, within your Azure environment.
+- Remember to monitor the policy compliance status and make any necessary adjustments to your
+ Machine Configuration package or policy assignment to meet your organizational requirements.
+
+> [!div class="nextstepaction"]
+> [Sign a custom machine configuration package](./6-sign-package.md)
+
+<!-- Reference link definitions -->
+[01]: /entra/identity/managed-identities-azure-resources/managed-identity-best-practice-recommendations#using-user-assigned-identities-to-reduce-administration
+[02]: /entra/identity/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy
governance 6 Sign Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to/develop-custom-package/6-sign-package.md
+
+ Title: How to sign machine configuration packages
+description: You can optionally sign machine configuration content packages and force the agent to only allow signed content
Last updated : 02/01/2024++++
+# How to sign machine configuration packages
+
+Machine configuration custom policies use a SHA256 hash to validate that the policy package hasn't
+changed. Optionally, customers may also use a certificate to sign packages and force the machine
+configuration extension to only allow signed content.
+
+To enable this scenario, there are two steps you need to complete:
+
+1. Run the cmdlet to sign the content package.
+1. Append a tag to the machines that should require code to be signed.
+
+## Signature validation using a code signing certificate
+
+To use the Signature Validation feature, run the `Protect-GuestConfigurationPackage` cmdlet to sign
+the package before it's published. This cmdlet requires a 'Code Signing' certificate. If you don't
+have a 'Code Signing' certificate, use the following script to create a self-signed certificate for
+testing purposes to follow along with the example.
+
+## Windows signature validation
+
+```powershell
+# How to create a self-signed cert and use it to sign a Machine Configuration
+# custom policy package
+
+# Create Code signing cert
+$codeSigningParams = @{
+ Type = 'CodeSigningCert'
+ DnsName = 'GCEncryptionCertificate'
+ HashAlgorithm = 'SHA256'
+}
+$certificate = New-SelfSignedCertificate @codeSigningParams
+
+# Export the certificates
+$privateKey = @{
+ Cert = $certificate
+ Password = Read-Host "Enter password for private key" -AsSecureString
+ FilePath = '<full-path-to-export-private-key-pfx-file>'
+}
+$publicKey = @{
+ Cert = $certificate
+ FilePath = '<full-path-to-export-public-key-cer-file>'
+ Force = $true
+}
+Export-PfxCertificate @privateKey
+Export-Certificate @publicKey
+
+# Import the certificate
+$importParams = @{
+ FilePath = $privateKey.FilePath
+ Password = $privateKey.Password
+ CertStoreLocation = 'Cert:\LocalMachine\My'
+}
+Import-PfxCertificate @importParams
+
+# Sign the policy package
+$certToSignThePackage = Get-ChildItem -Path Cert:\LocalMachine\My |
+ Where-Object { $_.Subject -eq "CN=GCEncryptionCertificate" }
+$protectParams = @{
+ Path = '<path-to-package-to-sign>'
+ Certificate = $certToSignThePackage
+ Verbose = $true
+}
+Protect-GuestConfigurationPackage @protectParams
+```
+
+## Linux signature validation
+
+```powershell
+# generate gpg key
+gpg --gen-key
+
+$emailAddress = '<email-id-used-to-generate-gpg-key>'
+$publicGpgKeyPath = '<full-path-to-export-public-key-gpg-file>'
+$privateGpgKeyPath = '<full-path-to-export-private-key-gpg-file>'
+
+# export public key
+gpg --output $publicGpgKeyPath --export $emailAddress
+
+# export private key
+gpg --output $privateGpgKeyPath --export-secret-key $emailAddress
+
+# Sign linux policy package
+Import-Module GuestConfiguration
+$protectParams = @{
+ Path = '<path-to-package-to-sign>'
+ PrivateGpgKeyPath = $privateGpgKeyPath
+ PublicGpgKeyPath = $publicGpgKeyPath
+ Verbose = $true
+}
+Protect-GuestConfigurationPackage @protectParams
+```
+
+Parameters of the `Protect-GuestConfigurationPackage` cmdlet:
+
+- **Path**: Full path to the machine configuration package.
+- **Certificate**: Code signing certificate to sign the package. This parameter is only supported
+ when signing content for Windows.
+- **PrivateGpgKeyPath**: Full path to the private key `.gpg` file. This parameter is only supported
+ when signing content for Linux.
+- **PublicGpgKeyPath**: Full path to the public key `.gpg` file. This parameter is only supported
+ when signing content for Linux.
++
+## Certificate requirements
+
+The machine configuration agent expects the certificate public key to be present in "Trusted
+Publishers" on Windows machines and in the path `/usr/local/share/ca-certificates/gc` on Linux
+machines. For the node to verify signed content, install the certificate public key on the machine
+before applying the custom policy.
+
+You can install the certificate public key using normal tools inside the VM or by using Azure
+Policy. An [example template using Azure Policy][01] shows how you can deploy a machine with a
+certificate. The Key Vault access policy must allow the Compute resource provider to access
+certificates during deployments. For detailed steps, see
+[Set up Key Vault for virtual machines in Azure Resource Manager][02].
+
+The following example exports the public key from a signing certificate, so you can import it to
+the machine.
+
+```azurepowershell-interactive
+$Cert = Get-ChildItem -Path Cert:\LocalMachine\My |
+ Where-Object { $_.Subject -eq 'CN=<CN-of-your-signing-certificate>' } |
+ Select-Object -First 1
+
+$Cert | Export-Certificate -FilePath '<path-to-export-public-key-cer-file>' -Force
+```
+
+## Tag requirements
+
+After your content is published, append a tag with name `GuestConfigPolicyCertificateValidation`
+and value `enabled` to all virtual machines where code signing should be required. See the
+[Tag samples][03] for how tags can be delivered at scale using Azure Policy. Once this tag is in
+place, the policy definition generated using the `New-GuestConfigurationPolicy` cmdlet enables the
+requirement through the machine configuration extension.
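
For a small number of machines, the following hedged sketch appends the tag with Azure PowerShell; the resource group and VM names are placeholders:

```powershell
# Placeholder names: merge the required tag onto a VM without overwriting existing tags.
$vm = Get-AzVM -ResourceGroupName 'my-rg' -Name 'my-vm'
Update-AzTag -ResourceId $vm.Id -Operation Merge `
    -Tag @{ GuestConfigPolicyCertificateValidation = 'enabled' }
```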
+
+## Related content
+
+- Use the `GuestConfiguration` module to [create an Azure Policy definition][04] for at-scale
+ management of your environment.
+- [Assign your custom policy definition][05] using Azure portal.
+- Learn how to view [compliance details for machine configuration][06] policy assignments.
+
+<!-- Reference link definitions -->
+[01]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-push-certificate-windows
+[02]: /azure/virtual-machines/windows/key-vault-setup#use-templates-to-set-up-key-vault
+[03]: ../../../policy/samples/built-in-policies.md#tags
+[04]: ../create-policy-definition.md
+[05]: ../../../policy/assign-policy-portal.md
+[06]: ../../../policy/how-to/determine-non-compliance.md
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to/develop-custom-package/overview.md
non-Azure machine.
1. [Create a custom machine configuration package artifact][04] 1. [Test the package artifact][05] 1. [Publish the package artifact][06]
-1. [Sign the package artifact][07]
+1. [Provide access to a package][07]
+1. [Sign the package artifact][08]
<!-- Link reference definitions --> [01]: ../../overview.md
non-Azure machine.
[04]: ./2-create-package.md [05]: ./3-test-package.md [06]: ./4-publish-package.md
-[07]: ./5-sign-package.md
+[07]: ./5-access-package.md
+[08]: ./6-sign-package.md
hdinsight What Are Preview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/what-are-preview-features.md
+
+ Title: What are preview features in Azure HDInsight?
+description: Learn what preview features are and how to identify them in Azure HDInsight.
+ Last updated : 07/10/2024+++
+# What are preview features?
+
+This article describes what preview features are, what limitations apply to them, and how to identify them.
+
+Preview features are features that aren't complete but are made available on a **preview** basis so that customers can get early access and provide feedback.
+
+Preview features come with some disclaimers. Preview features:
+
+* Are subject to separate [Supplemental Terms of Use](https://www.microsoft.com/business-applications/legal/supp-powerplatform-preview/).
+
+* Aren't meant for production use.
+
+* Aren't supported by Microsoft Support for production use. Microsoft Support is, however, eager to get your feedback on the preview functionality, and might provide best-effort support in certain cases.
+
+* May have limited or restricted functionality.
+
+* May not be available in all geographic areas.
+
+## How to identify a preview feature
+
+These features have a **Preview** label in the documentation.
+
+## Next steps
+
+* [Azure HDInsight release notes](./hdinsight-release-notes.md)
+
healthcare-apis Register Confidential Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-confidential-azure-ad-client-app.md
# Register a confidential client application in Microsoft Entra ID for Azure API for FHIR
-In this tutorial, you'll learn how to register a confidential client application in Microsoft Entra ID.
+In this tutorial, you learn how to register a confidential client application in Microsoft Entra ID.
-A client application registration is a Microsoft Entra representation of an application that can be used to authenticate on behalf of a user and request access to [resource applications](register-resource-azure-ad-client-app.md). A confidential client application is an application that can be trusted to hold a secret and present that secret when requesting access tokens. Examples of confidential applications are server-side applications.
+A client application registration is a Microsoft Entra representation of an application that can be used to authenticate on behalf of a user, and request access to [resource applications](register-resource-azure-ad-client-app.md). A confidential client application is an application that can be trusted to hold a secret and present that secret when requesting access tokens. Examples of confidential applications are server-side applications.
-To register a new confidential client application, refer to the steps below.
+To register a new confidential client application, use the following steps.
## Register a new application
To register a new confidential client application, refer to the steps below.
## API permissions
-Permissions for Azure API for FHIR are managed through RBAC. For more details, visit [Configure Azure RBAC for FHIR](configure-azure-rbac.md).
+Permissions for Azure API for FHIR are managed through role-based access control (RBAC). For more details, visit [Configure Azure RBAC for FHIR](configure-azure-rbac.md).
>[!NOTE]
->Use grant_type of client_credentials when trying to obtain an access token for Azure API for FHIR using tools such as Postman. For more details, visit [Testing the FHIR API on Azure API for FHIR](tutorial-web-app-test-postman.md).
+>Use a `grant_type` of `client_credentials` when trying to obtain an access token for Azure API for FHIR using tools such as Postman. For more details, visit [Testing the FHIR API on Azure API for FHIR](tutorial-web-app-test-postman.md).
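+
+For example, the following sketch requests a token from the Microsoft Entra token endpoint by using PowerShell. The tenant ID, client ID, client secret, and scope are placeholders, and the scope shown assumes the default audience for your Azure API for FHIR instance.
+
+```azurepowershell-interactive
+# Request an access token with the client_credentials grant (sketch - replace placeholders)
+$tokenResponse = Invoke-RestMethod -Method Post `
+    -Uri 'https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token' `
+    -Body @{
+        grant_type    = 'client_credentials'
+        client_id     = '<application-client-id>'
+        client_secret = '<application-client-secret>'
+        scope         = 'https://<your-service-name>.azurehealthcareapis.com/.default'
+    }
+
+# The bearer token to send in the Authorization header
+$tokenResponse.access_token
+```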
## Application secret
Permissions for Azure API for FHIR are managed through RBAC. For more details, v
:::image type="content" source="media/add-azure-active-directory/portal-aad-register-new-app-registration-confidential-client-secret.png" alt-text="Confidential client. Application Secret.":::
-1. Enter a **Description** for the client secret. Select the **Expires** drop-down menu to choose an expiration time frame, and then click **Add**.
+1. Enter a **Description** for the client secret. Select the **Expires** drop-down menu to choose an expiration time frame, and then select **Add**.
:::image type="content" source="media/add-azure-active-directory/add-a-client-secret.png" alt-text="Add a client secret.":::
Permissions for Azure API for FHIR are managed through RBAC. For more details, v
## Next steps
-In this article, you were guided through the steps of how to register a confidential client application in the Microsoft Entra ID. You were also guided through the steps of how to add API permissions in Microsoft Entra ID for Azure API for FHIR. Lastly, you were shown how to create an application secret. Furthermore, you can learn how to access your FHIR server using Postman.
+In this article, you learned how to register a confidential client application in Microsoft Entra ID, how to add API permissions in Microsoft Entra ID for Azure API for FHIR, and how to create an application secret.<br>
+You can also learn how to access your FHIR server using Postman.
>[!div class="nextstepaction"] >[Access the FHIR service using Postman](./../fhir/use-postman.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Register Public Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app.md
# Register a public client application in Microsoft Entra ID for Azure API for FHIR
-In this article, you'll learn how to register a public application in Microsoft Entra ID.
+In this article, you learn how to register a public application in Microsoft Entra ID.
Client application registrations are Microsoft Entra representations of applications that can authenticate and ask for API permissions on behalf of a user. Public clients are applications such as mobile applications and single page JavaScript applications that can't keep secrets confidential. The procedure is similar to [registering a confidential client](register-confidential-azure-ad-client-app.md), but since public clients can't be trusted to hold an application secret, there's no need to add one.
-The quickstart provides general information about how to [register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
+This quickstart provides general information about how to [register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
## App registrations in Azure portal
The quickstart provides general information about how to [register an applicatio
1. Give the application a display name.
-2. Provide a reply URL. The reply URL is where authentication codes will be returned to the client application. You can add more reply URLs and edit existing ones later.
+2. Provide a reply URL. The reply URL is where authentication codes are returned to the client application. You can add more reply URLs and edit existing ones later.
![Azure portal. New public App Registration.](media/add-azure-active-directory/portal-aad-register-new-app-registration-pub-client-name.png)
To configure your [desktop](../../active-directory/develop/scenario-desktop-app-
## API permissions
-Permissions for Azure API for FHIR are managed through RBAC. For more details, visit [Configure Azure RBAC for FHIR](configure-azure-rbac.md).
+Permissions for Azure API for FHIR are managed through role-based access control (RBAC). For more details, visit [Configure Azure RBAC for FHIR](configure-azure-rbac.md).
>[!NOTE]
->Use grant_type of client_credentials when trying to otain an access token for Azure API for FHIR using tools such as Postman. For more details, visit [Testing the FHIR API on Azure API for FHIR](tutorial-web-app-test-postman.md).
+>Use a `grant_type` of `client_credentials` when trying to obtain an access token for Azure API for FHIR using tools such as Postman. For more details, visit [Testing the FHIR API on Azure API for FHIR](tutorial-web-app-test-postman.md).
## Validate FHIR server authority
-If the application you registered in this article and your FHIR server are in the same Microsoft Entra tenant, you're good to proceed to the next steps.
+If the application you registered and your FHIR server are in the same Microsoft Entra tenant, you're good to proceed to the next steps.
-If you configure your client application in a different Microsoft Entra tenant from your FHIR server, you'll need to update the **Authority**. In Azure API for FHIR, you do set the Authority under Settings --> Authentication. Set your Authority to ``https://login.microsoftonline.com/\<TENANT-ID>`.
+If you configure your client application in a different Microsoft Entra tenant from your FHIR server, you need to update the **Authority**. In Azure API for FHIR, you set the Authority under **Settings** > **Authentication**. Set your Authority to `https://login.microsoftonline.com/<TENANT-ID>`.
## Next steps
-In this article, you've learned how to register a public client application in Microsoft Entra ID. Next, test access to your FHIR Server using Postman.
+In this article, you learned how to register a public client application in Microsoft Entra ID. Next, test access to your FHIR Server using Postman.
>[!div class="nextstepaction"] >[Access the FHIR service using Postman](./../fhir/use-postman.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Register Resource Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-resource-azure-ad-client-app.md
# Register a resource application in Microsoft Entra ID for Azure API for FHIR
-In this article, you'll learn how to register a resource (or API) application in Microsoft Entra ID. A resource application is a Microsoft Entra representation of the FHIR server API itself and client applications can request access to the resource when authenticating. The resource application is also known as the *audience* in OAuth parlance.
+In this article, you learn how to register a resource (or API) application in Microsoft Entra ID. A resource application is a Microsoft Entra representation of the FHIR&reg; server API itself, and client applications can request access to the resource when authenticating. The resource application is also known as the *audience* in OAuth parlance.
## Azure API for FHIR If you're using the Azure API for FHIR, a resource application is automatically created when you deploy the service. As long as you're using the Azure API for FHIR in the same Microsoft Entra tenant as you're deploying your application, you can skip this how-to-guide and instead deploy your Azure API for FHIR to get started.
-If you're using a different Microsoft Entra tenant (not associated with your subscription), you can import the Azure API for FHIR resource application into your tenant with
-PowerShell:
+If you're using a different Microsoft Entra tenant (not associated with your subscription), you can use PowerShell to import the Azure API for FHIR resource application into your tenant.
```azurepowershell-interactive New-AzADServicePrincipal -ApplicationId 4f6778d8-5aef-43dc-a1ff-b073724b9495 -Role Contributor ```
-or you can use Azure CLI:
+Or you can use Azure CLI.
```azurecli-interactive az ad sp create --id 4f6778d8-5aef-43dc-a1ff-b073724b9495
If you're using the open source FHIR Server for Azure, follow the steps on the [
## Next steps
-In this article, you've learned how to register a resource application in Microsoft Entra ID. Next, register your confidential client application.
+In this article, you learned how to register a resource application in Microsoft Entra ID. Next, register your confidential client application.
>[!div class="nextstepaction"] >[Register Confidential Client Application](register-confidential-azure-ad-client-app.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Register Service Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-service-azure-ad-client-app.md
# Register a service client application in Microsoft Entra ID for Azure API for FHIR
-In this article, you'll learn how to register a service client application in Microsoft Entra ID. Client application registrations are Microsoft Entra representations of applications that can be used to authenticate and obtain tokens. A service client is intended to be used by an application to obtain an access token without interactive authentication of a user. It will have certain application permissions and use an application secret (password) when obtaining access tokens.
+In this article, you learn how to register a service client application in Microsoft Entra ID. Client application registrations are Microsoft Entra representations of applications that can be used to authenticate and obtain tokens. A service client is intended to be used by an application to obtain an access token without interactive authentication of a user. It has certain application permissions and can use an application secret (password) when obtaining access tokens.
Follow these steps to create a new service client.
Follow these steps to create a new service client.
## API permissions
-Permissions for Azure API for FHIR are managed through RBAC. For more details, visit [Configure Azure RBAC for FHIR](configure-azure-rbac.md).
+Permissions for Azure API for FHIR are managed through role-based access control (RBAC). For more details, visit [Configure Azure RBAC for FHIR](configure-azure-rbac.md).
>[!NOTE]
->Use grant_type of client_credentials when trying to otain an access token for Azure API for FHIR using tools such as Postman. For more details, visit [Testing the FHIR API on Azure API for FHIR](tutorial-web-app-test-postman.md).
+>Use a `grant_type` of `client_credentials` when trying to obtain an access token for Azure API for FHIR using tools such as Postman. For more details, visit [Testing the FHIR API on Azure API for FHIR](tutorial-web-app-test-postman.md).
## Application secret
The service client needs a secret (password) to obtain a token.
3. Provide a description and duration of the secret (either one year, two years or never).
-4. Once the secret has been generated, it will only be displayed once in the portal. Make a note of it and store in a secure location.
+4. After the secret is generated, it's displayed only once in the portal. Make a note of it and store it in a secure location.
## Next steps
-In this article, you've learned how to register a service client application in Microsoft Entra ID. Next, test access to your FHIR server using Postman.
+In this article, you learned how to register a service client application in Microsoft Entra ID. Next, test access to your FHIR server using Postman.
>[!div class="nextstepaction"] >[Access the FHIR service using Postman](./../fhir/use-postman.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
# Release notes: Azure API for FHIR
-Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
+Azure API for FHIR&reg; provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
## **August 2024**
Learn more:
The query parameters `_summary=count` and `_count=0` can be added to the `_history` endpoint to get a count of all versioned resources. This count includes soft deleted resources. For more information, see [history management](././../azure-api-for-fhir/purge-history.md). **Improve throughput for export operation**
-The "_isparallel" query parameter can be added to the export operation to enhance its throughput. Its' important to note that using this parameter may result in an increase in Request Units consumption over the life of export. For more information, see [Export operation query parameters](././../azure-api-for-fhir/export-data.md).
+The "_isparallel" query parameter can be added to the export operation to enhance its throughput. It's important to note that using this parameter may result in an increase in Request Units consumption over the life of the export. For more information, see [Export operation query parameters](././../azure-api-for-fhir/export-data.md).
> [!NOTE] > There's a known issue with the $export operation that could result in incomplete exports with status success. Issue occurs when the is_parallel flag was used. Export jobs executed with _isparallel query parameter starting February 13th, 2024 are impacted with this issue.
With this change, exported file names follow the format '{FHIR Resource Name}-{N
**Performance Enhancement**
-Parallel optimization for FHIR queries can be enabled using HTTP header "x-ms-query-latency-over-efficiency" . This value needs to be set to true to achieve maximum concurrency during execution of query. For more information, see [Batch Bundles](././../azure-api-for-fhir/fhir-rest-api-capabilities.md).
+Parallel optimization for FHIR queries can be enabled by using the HTTP header "x-ms-query-latency-over-efficiency". This value needs to be set to true to achieve maximum concurrency during query execution. For more information, see [Batch Bundles](././../azure-api-for-fhir/fhir-rest-api-capabilities.md).
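+
+As a hedged sketch, a search request that opts in to this behavior from PowerShell might look like the following; the server URL, query, and access token are placeholders.
+
+```azurepowershell-interactive
+# Opt a single FHIR search in to maximum query concurrency (sketch - replace placeholders)
+Invoke-RestMethod -Method Get `
+    -Uri 'https://<your-fhir-server>/Patient?name=<family-name>' `
+    -Headers @{
+        'Authorization'                      = 'Bearer <access-token>'
+        'x-ms-query-latency-over-efficiency' = 'true'
+    }
+```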
## **January 2024**
For more details, visit [#3222](https://github.com/microsoft/fhir-server/pull/32
**Fixed the Error generated when resource is updated using if-match header and PATCH**
-Bug is now fixed and Resource will be updated if matches the Etag header. For details , see [#2877](https://github.com/microsoft/fhir-server/issues/2877)|.
+The bug is now fixed, and the resource is updated if it matches the ETag header. For details, see [#2877](https://github.com/microsoft/fhir-server/issues/2877).
## May 2022
Bug is now fixed and Resource will be updated if matches the Etag header. For de
|Bug fixes |Related information | | :-- | : | |Duplicate resources in search with `_include` |Fixed issue where a single resource can be returned twice in a search that has `_include`. For more information, see [PR #2448](https://github.com/microsoft/fhir-server/pull/2448). |
-|PUT creates on versioned update |Fixed issue were creates with PUT resulted in an error when the versioning policy is configured to `versioned-update`. For more information, see [PR #2457](https://github.com/microsoft/fhir-server/pull/2457). |
+|PUT creates on versioned update |Fixed issue: creating with PUT resulted in an error when the versioning policy is configured to `versioned-update`. For more information, see [PR #2457](https://github.com/microsoft/fhir-server/pull/2457). |
|Invalid header handling on versioned update |Fixed issue where invalid `if-match` header would result in an HTTP 500 error. Now an HTTP Bad Request is returned instead. For more information, see [PR #2467](https://github.com/microsoft/fhir-server/pull/2467). | ## February 2022
Bug is now fixed and Resource will be updated if matches the Etag header. For de
|Enhancements |Related information | | :-- | : | |Added 429 retry and logging in BundleHandler |We sometimes encounter 429 errors when processing a bundle. If the FHIR service receives a 429 at the BundleHandler layer, we abort processing of the bundle and skip the remaining resources. We've added another retry (in addition to the retry present in the data store layer) that will execute one time per resource that encounters a 429. For more about this feature enhancement, see [PR #2400](https://github.com/microsoft/fhir-server/pull/2400).|
-|Billing for `$convert-data` and `$de-id` |Azure API for FHIR's data conversion and deidentified export features are now Generally Available. Billing for `$convert-data` and `$de-id` operations in Azure API for FHIR has been enabled. Billing meters were turned on March 1, 2022. |
+|Billing for `$convert-data` and `$de-id` |Azure API for FHIR's data conversion and de-identified export features are now Generally Available. Billing for `$convert-data` and `$de-id` operations in Azure API for FHIR has been enabled. Billing meters were turned on March 1, 2022. |
### **Bug fixes**
Bug is now fixed and Resource will be updated if matches the Etag header. For de
|Bug fixes |Related information | | :-- | : | |Fixed 500 error when `SearchParameter` Code is null |Fixed an issue with `SearchParameter` if it had a null value for Code, the result would be a 500. Now it results in an `InvalidResourceException` like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) |
-|Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we'll return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) |
+|Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) |
|`_sort` can cause `ChainedSearch` to return incorrect results |Previously, the sort options from the chained search's `SearchOption` object wasn't cleared, causing the sorting options to be passed through to the chained subsearch, which aren't valid. This could result in no results when there should be results. This bug is now fixed [#2347](https://github.com/microsoft/fhir-server/pull/2347). It addressed GitHub bug [#2344](https://github.com/microsoft/fhir-server/issues/2344). |
Bug is now fixed and Resource will be updated if matches the Etag header. For de
| :- | : | |Process Patient-everything links |We've expanded the Patient-everything capabilities to process patient links [#2305](https://github.com/microsoft/fhir-server/pull/2305). For more information, see [Patient-everything in FHIR](../../healthcare-apis/fhir/patient-everything.md#processing-patient-links) documentation. | |Added software name and version to capability statement |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Health Data Services. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
-|Log 500's to `RequestMetric` |Previously, 500s or any unknown/unhandled errors weren't getting logged in `RequestMetric`. They're now getting logged [#2240](https://github.com/microsoft/fhir-server/pull/2240). For more information, see [Enable diagnostic settings in Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md) |
+|Log 500s to `RequestMetric` |Previously, 500s or any unknown/unhandled errors weren't getting logged in `RequestMetric`. They're now getting logged [#2240](https://github.com/microsoft/fhir-server/pull/2240). For more information, see [Enable diagnostic settings in Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md) |
|Compress continuation tokens |In certain instances, the continuation token was too long to be able to follow the [next link](../../healthcare-apis/azure-api-for-fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve this, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250). | ### **Bug fixes**
For information about the features and bug fixes in Azure Health Data Services (
>[!div class="nextstepaction"] >[Release notes: Azure Health Data Services](../release-notes.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
iot-operations Howto Configure Kafka Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md
kubectl create secret generic cs-secret -n azure-iot-operations \
#### Limitations
-Azure Event Hubs [doesn't support all the compression types that Kafka supports](../../event-hubs/azure-event-hubs-kafka-overview.md#compression). Only GZIP compression is supported. Using other compression types might result in errors.
+Azure Event Hubs [doesn't support all the compression types that Kafka supports](../../event-hubs/azure-event-hubs-kafka-overview.md#compression). Only GZIP compression is supported in Azure Event Hubs premium and dedicated tiers currently. Using other compression types might result in errors.
### Other Kafka brokers
The compression field enables compression for the messages sent to Kafka topics.
| Value | Description | | -- | -- | | `None` | No compression or batching is applied. None is the default value if no compression is specified. |
-| `Gzip` | GZIP compression and batching are applied. GZIP is a general-purpose compression algorithm that offers a good balance between compression ratio and speed. GZIP is the only compression method supported by Azure Event Hubs. |
+| `Gzip` | GZIP compression and batching are applied. GZIP is a general-purpose compression algorithm that offers a good balance between compression ratio and speed. Only [GZIP compression is supported in Azure Event Hubs premium and dedicated tiers](../../event-hubs/azure-event-hubs-kafka-overview.md#compression) currently. |
| `Snappy` | Snappy compression and batching are applied. Snappy is a fast compression algorithm that offers moderate compression ratio and speed. | | `Lz4` | LZ4 compression and batching are applied. LZ4 is a fast compression algorithm that offers low compression ratio and high speed. |
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
Title: Create example Standard logic app workflow in Azure portal
+ Title: Create example Standard workflow in Azure portal
description: Create your first example Standard logic app workflow that runs in single-tenant Azure Logic Apps using the Azure portal.+ ms.suite: integration Previously updated : 09/23/2024 Last updated : 09/27/2024 # Customer intent: As a developer, I want to create my first example Standard logic app workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
In this example, the workflow runs when the **Request** trigger receives an inbo
> [!TIP] >
- > You can also find the endpoint URL on your logic app's **Overview** pane in the **Workflow URL** property.
+ > You can also find the endpoint URL on your logic app **Overview** page in the **Workflow URL** property.
> > 1. On the resource menu, select **Overview**. > 1. On the **Overview** pane, find the **Workflow URL** property.
For a stateful workflow, you can review the trigger history for each run, includ
For an existing stateful workflow run, you can rerun the entire workflow with the same inputs that were previously used for that run. For more information, see [Rerun a workflow with same inputs](monitor-logic-apps.md?tabs=standard#resubmit-workflow-run).
+<a name="set-up-managed-identity-storage"></a>
+
+## Set up managed identity access to your storage account
+
+By default, your Standard logic app authenticates access to your Azure Storage account by using a connection string. However, you can set up a user-assigned managed identity to authenticate access instead.
+
+1. In the [Azure portal](https://portal.azure.com), [follow these steps to create a user-assigned managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity).
+
+1. From your user-assigned identity, get the resource ID:
+
+ 1. On the user-assigned managed identity menu, under **Settings**, select **Properties**.
+
+ 1. From the **Id** property, copy and save the resource ID.
+
+1. From your storage account, get the URIs for the Blob, Queue, and Table services:
+
+ 1. On the storage account menu, under **Settings**, select **Endpoints**.
+
+ 1. Copy and save the URIs for **Blob service**, **Queue service**, and **Table service**.
+
+1. On your storage account, add the necessary role assignments for your user-assigned identity:
+
+ 1. On the storage account menu, select **Access control (IAM)**.
+
+ 1. On the **Access control (IAM)** page toolbar, from the **Add** menu, select **Add role assignment**.
+
+ 1. On the **Job function roles** tab, add each of the following roles to the user-assigned identity:
+
+ - **Storage Account Contributor**
+ - **Storage Blob Data Owner**
+ - **Storage Queue Data Contributor**
+ - **Storage Table Data Contributor**
+
+ For more information, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml) and [Understand role assignments](../role-based-access-control/role-assignments.md).
+
+1. [Follow these steps to add the user-assigned managed identity to your Standard logic app resource](authenticate-with-managed-identity.md?tabs=standard#add-user-assigned-identity-to-logic-app-in-the-azure-portal).
+
+1. On your Standard logic app, enable runtime scale monitoring:
+
+ 1. On the logic app menu, under **Settings**, select **Configuration**.
+
+ 1. On the **Workflow runtime settings** tab, for **Runtime Scale Monitoring**, select **On**.
+
+ 1. On the **Configuration** toolbar, select **Save**.
+
+1. On your Standard logic app, set up the resource ID and service URIs:
+
+ 1. On the logic app menu, select **Overview**.
+
+ 1. On the **Overview** page toolbar, select **Stop**.
+
+ 1. On the logic app menu, under **Settings**, select **Environment variables**.
+
+ 1. On the **App settings** tab, select **Add** to add the following app settings and values:
+
+ | App setting | Value |
+ |-|-|
+ | **AzureWebJobsStorage__managedIdentityResourceId** | The resource ID for your user-assigned managed identity |
+ | **AzureWebJobsStorage__blobServiceUri** | The Blob service URI for your storage account |
+ | **AzureWebJobsStorage__queueServiceUri** | The Queue service URI for your storage account |
+ | **AzureWebJobsStorage__tableServiceUri** | The Table service URI for your storage account |
+ | **AzureWebJobsStorage__credential** | **managedIdentity** |
+
+ 1. On the **App settings** tab, delete the app setting named **AzureWebJobsStorage**, which is set to the connection string associated with your storage account.
+
+ 1. When you finish, select **Apply**, which saves your changes and restarts your logic app.
+
+ Your changes might take several moments to take effect. If necessary, on your logic app menu, select **Overview**, and on the toolbar, select **Refresh**.
+
+ The following message might appear, but it isn't an error and doesn't affect your logic app:
+
+ **"AzureWebjobsStorage" app setting is not present.**
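+
+If you prefer to script these app setting changes instead of using the portal, the following sketch shows one way to do it. It assumes the `Az.Functions` cmdlets accept a Standard logic app resource, which is hosted on the Azure Functions runtime; the app name, resource group, and setting values are placeholders.
+
+```azurepowershell-interactive
+# Sketch: script the managed identity storage settings (replace the placeholder values)
+$appName       = '<your-logic-app-name>'
+$resourceGroup = '<your-resource-group-name>'
+
+Update-AzFunctionAppSetting -Name $appName -ResourceGroupName $resourceGroup -AppSetting @{
+    'AzureWebJobsStorage__managedIdentityResourceId' = '<user-assigned-identity-resource-id>'
+    'AzureWebJobsStorage__blobServiceUri'            = '<blob-service-uri>'
+    'AzureWebJobsStorage__queueServiceUri'           = '<queue-service-uri>'
+    'AzureWebJobsStorage__tableServiceUri'           = '<table-service-uri>'
+    'AzureWebJobsStorage__credential'                = 'managedIdentity'
+}
+
+# Remove the connection string-based setting that the managed identity replaces
+Remove-AzFunctionAppSetting -Name $appName -ResourceGroupName $resourceGroup `
+    -AppSettingName 'AzureWebJobsStorage'
+```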
+ <a name="enable-run-history-stateless"></a> ## Enable run history for stateless workflows
To fix this problem, follow these steps to delete the outdated version so that t
> If you get an error such as **"permission denied"** or **"file in use"**, refresh the > page in your browser, and try the previous steps again until the folder is deleted.
-1. In the Azure portal, return to your logic app's **Overview** page, and select **Restart**.
+1. In the Azure portal, return to your logic app and its **Overview** page, and select **Restart**.
The portal automatically gets and uses the latest bundle.
modeling-simulation-workbench Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/best-practices.md
+
+ Title: Best practices for using and administering Azure Modeling and Simulation Workbench
+description: Learn best practices and helpful guidance when working with Azure Modeling and Simulation Workbench.
++++ Last updated : 10/06/2024+
+#customer intent: As a user of Azure Modeling and Simulation Workbench, I want to learn best practices so that I can efficiently and effectively use and administer.
+++
+# Best practices for Azure Modeling and Simulation Workbench
+
+The Azure Modeling and Simulation Workbench is a cloud-based collaboration platform that provides secure, isolated chambers to allow enterprises to work in the cloud. Modeling and Simulation Workbench provides a large selection of powerful virtual machines (VMs), high-performance scalable storage, and control and oversight over what users can export from the platform.
+
+This best practices article provides both users and administrators guidance on how to get the most from the platform, control costs, and work effectively.
+
+## Control costs with chamber idle mode
+
+When a chamber won't be used in the immediate future, [place it into idle mode](./how-to-guide-chamber-idle.md). Idling a chamber significantly reduces costs. For more information, see the [pricing guide](https://azure.microsoft.com/pricing/details/modeling-and-simulation-workbench/#pricing). Idle mode doesn't delete your VMs or storage, but does terminate desktop sessions and chamber license servers.
+
+## Review user allocation to chambers to control cost
+
+Modeling and Simulation Workbench prices chamber access through 10-Pack user connectivity. If your user count increases beyond a multiple of 10, another user pack is added. Review your user allocations to ensure your costs are optimized. For more information, see the [pricing guide](https://azure.microsoft.com/pricing/details/modeling-and-simulation-workbench/#pricing).
+
+## Use an Azure naming resource convention
+
+Depending on complexity, workbenches can have many resources. Adopting a naming convention can help you effectively manage your deployment. The Azure Cloud Adoption Framework has a [naming convention](/azure/cloud-adoption-framework/ready/azure-best-practices/resource-naming) to help you get started.
+
+## Key Vault best practices
+
+Modeling and Simulation Workbench uses [Key Vaults](/azure/key-vault/general/basic-concepts) to store authentication identifiers. See the [Azure Key Vault best practices guide](/azure/key-vault/general/best-practices) for other guidance on effectively using a Key Vault in Azure.
+
+### Use separate Key Vault to broaden security perimeters
+
+Use a separate key vault for every workbench or assigned group of administrators to help keep your deployment secure. If user credentials or a perimeter is breached, a separate key vault for each workbench can reduce the impact.
+
+### Assign two or more Key Vault Secrets Officers
+
+The role of **Secrets Officers** is assigned to the **Workbench Owner** who is tasked with creating and administering the workbench environment. Designating at least two secrets officers can reduce downtime if secrets need to be administered and one administrator isn't available. Consider using Azure Groups to assign this role.
+
+## Use the right storage for the task
+
+Modeling and Simulation Workbench offers several types and tiers for storage. For more information, see the [storage overview](./concept-storage.md).
+
+* Don't save or perform critical work in home directories. Home directories are deleted anytime users are dropped from chambers. Additionally, if you delete users to manage user pack costs, those home directories are deleted. Home directories are intended for resource files or temporary work.
+* Chamber storage is the best place to store vital data and perform application workloads. Chamber storage is high-performance and scalable, with two different performance tiers. To learn how to manage chamber storage, see the [chamber storage how-to](./how-to-guide-manage-chamber-storage.md).
+* Don't place information that shouldn't be shared with other chambers in shared storage. Shared storage is visible to all users of the member chambers.
+* If you plan on idling the chamber and are looking to save cost, create a standard tier of chamber storage and move all files there.
+
+## Using application registrations in Microsoft Entra and Modeling and Simulation Workbench
+
+### Choose a meaningful management approach for application registrations
+
+Application registrations can easily accumulate in an organization and be forgotten, becoming difficult to manage. Use a meaningful name for application registrations made for Modeling and Simulation Workbench so that you can identify them later. Assign at least two owners, or consider using an Azure group to assign ownership.
+
+### Manage application registration secrets
+
+Use a reasonable expiration date for the application secret you create. Refer to your organization's rules on application password lifetime.
+
+### Reuse application registrations across related deployments
+
+Application registrations are authentication brokers for the Modeling and Simulation Workbench. Identity and Access Management (IAM) at the chamber level is responsible for this access. You can use fewer application registrations where it makes sense to do so based on region, user base, project, or security boundaries.
+
+### Delete redirect URIs when deleting connectors
+
+Connectors generate two distinct redirect URIs when created. Anytime you're deleting or rebuilding a connector, delete the associated redirect URI from the application registration.
+
+## Related content
+
+* [Manage chamber storage in Azure Modeling and Simulation Workbench](how-to-guide-manage-chamber-storage.md)
+* [Manage users in Azure Modeling and Simulation Workbench](how-to-guide-manage-users.md)
+* [Manage chamber idle mode](how-to-guide-chamber-idle.md)
modeling-simulation-workbench Troubleshoot Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/troubleshoot-known-issues.md
The Modeling and Simulation Workbench is a secure, cloud-based platform for coll
This Known Issues guide provides troubleshooting and advisory information for resolving or acknowledging issues to be addressed. Where applicable, workaround or mitigation steps are provided.
+## Cadence dependencies
+
+Some users report missing dependencies when a Chamber Admin attempts to install recent releases of Cadence tools on Modeling and Simulation Workbench. To fix this issue, install the missing dependencies.
+
+### Troubleshooting steps
+
+During installation, the Cadence dependency checker `checkSysConf` reports that the following packages are missing from Modeling and Simulation Workbench VMs. Some of those packages are installed, but fail the dependency check due to other dependencies.
+
+* `xterm`
+* `motif`
+* `libXp`
+* `apr`
+* `apr-util`
+
+A Chamber Admin can install these packages with the following command in a terminal:
+
+```bash
+sudo yum install motif apr apr-util xterm
+```
+ ## EDA license upload failures on server name When uploading Electronic Design Automation (EDA) license files with server names that contain a dash ("-") symbol, the chamber license file server fails to process the file. For some license files, the `SERVER` line server name isn't being parsed correctly. The parser fails to tokenize this line in order to reformat for the chamber license server environment.
peering-service About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/about.md
Previously updated : 09/27/2023 Last updated : 10/07/2024 #CustomerIntent: As an administrator, I want learn about Azure Peering Service so I can optimize the connectivity to Microsoft. # Azure Peering Service overview
-Azure Peering Service is a networking service that enhances the connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet. Microsoft has partnered with internet service providers (ISPs), internet exchange partners (IXPs), and software-defined cloud interconnect (SDCI) providers worldwide to provide reliable and high-performing public connectivity with optimal routing from the customer to the Microsoft network.
+Azure Peering Service is a networking service that enhances the connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet. Microsoft partnered with internet service providers (ISPs), internet exchange partners (IXPs), and software-defined cloud interconnect (SDCI) providers worldwide to provide reliable and high-performing public connectivity with optimal routing from the customer to the Microsoft network.
With Peering Service, customers can select a well-connected partner service provider in a given region. Public connectivity is optimized for high reliability and minimal latency from cloud services to the end-user location.
Microsoft and partner service providers ensure that the traffic for the prefixes
> [!NOTE] > For more information about the Microsoft global network, see [Microsoft global network](../networking/microsoft-global-network.md).
->
## Why use Peering Service?
-Enterprises looking for internet-first access to the cloud or considering SD-WAN architecture or with high usage of Microsoft SaaS services need robust and high-performing internet connectivity. Customers can make that transition happen by using Peering Service. Microsoft and service providers have partnered to deliver reliable and performance-centric public connectivity to the Microsoft cloud. Some of the key customer features are listed here:
+Enterprises looking for internet-first access to the cloud or considering SD-WAN architecture or with high usage of Microsoft SaaS services need robust and high-performing internet connectivity. Customers can make that transition happen by using Peering Service. Microsoft and service providers partnered to deliver reliable and performance-centric public connectivity to the Microsoft cloud. Some of the key customer features are listed here:
- Best public routing over the internet to Microsoft Azure Cloud Services for optimal performance and reliability. - Ability to select the preferred service provider to connect to the Microsoft cloud.
Peering Service uses two types of redundancy:
- **Geo-redundancy**
- Microsoft has interconnected with service providers at multiple metro locations so that if one of the Edge nodes has degraded performance, the traffic routes to and from Microsoft via alternate sites. Microsoft routes traffic in its global network by using SDN-based routing policies for optimal performance.
+ Microsoft interconnected with service providers at multiple metro locations so that if one of the Edge nodes has degraded performance, the traffic routes to and from Microsoft via alternate sites. Microsoft routes traffic in its global network by using SDN-based routing policies for optimal performance.
This type of redundancy uses the shortest routing path by always choosing the nearest Microsoft Edge PoP to the end user and ensures that the customer is one network hop (AS hops) away from Microsoft.
peering-service Connection Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/connection-telemetry.md
Previously updated : 06/06/2023 Last updated : 10/07/2024 -+ # Customer intent: Customer wants to access their connection telemetry per prefix to Microsoft services with Azure Peering Service.
peering-service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/connection.md
Previously updated : 07/23/2023 Last updated : 10/07/2024 # Peering Service connection A connection typically refers to a logical information set, identifying a Peering Service. It's defined by specifying the following attributes: -- Logical Name
+- Logical name
- Connectivity partner - Connectivity partner Primary service location - Connectivity partner Backup service location - IP prefixes
-Customer can establish a single connection or multiple connections as per the requirement. A connection is also used as a unit of telemetry collection. For instance, to opt for telemetry alerts, customer must define the connection that will be monitored.
+Customers can establish a single connection or multiple connections as needed. A connection is also used as a unit of telemetry collection. For instance, to opt in to telemetry alerts, you must define the connection that you want to monitor.
> [!NOTE] > When you sign up for Peering Service, we analyze your Windows and Microsoft 365 telemetry in order to provide you with latency measurements for your selected prefixes.
Customer can establish a single connection or multiple connections as per the re
## How to create a peering service connection?
-**Scenario** - Let's say a branch office is spread across different geographic locations as shown in the figure. Here, the customer is required to provide a logical name, Service Provider (SP) name, customer's physical location, and IP prefixes that are (owned by the customer or allocated by the Service Provider) associated with a single connection. The primary and backup service locations with partner help defining the preferred service location for customer. This process must be repeated to create Peering Service for other locations.
+**Scenario** - Let's say a branch office is spread across different geographic locations as shown in the figure. Here, the customer is required to provide a logical name, Service Provider (SP) name, customer's physical location, and IP prefixes (owned by the customer or allocated by the Service Provider) associated with a single connection. The primary and backup service locations with the partner help define the preferred service location for the customer. This process must be repeated to create Peering Service for other locations.
:::image type="content" source="./media/connection/peering-service-connections.png" alt-text="Diagram shows geo redundant connections."::: > [!NOTE] > State level-filtration is considered for the customer's physical location when the connection is geo-located in the United States.
-## Next steps
+## Related content
- To learn how to register Peering Service connection, see [Create Peering Service using the Azure portal](azure-portal.md). - To learn about Peering Service connection telemetry, see [Access Peering Service connection telemetry](connection-telemetry.md).
peering-service Customer Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/customer-walkthrough.md
Title: Azure Peering Service customer walkthrough
-description: Learn about Azure Peering Service and how to onboard.
+description: Learn how to activate and optimize your prefixes with Azure Peering Service.
Previously updated : 07/26/2023 Last updated : 10/07/2024 # Azure Peering Service customer walkthrough
-This section explains the steps to optimize your prefixes with an Internet Service Provider (ISP) or Internet Exchange Provider (IXP) who is a Peering Service partner.
+This article explains the steps to optimize your prefixes with an Internet Service Provider (ISP) or Internet Exchange Provider (IXP) that is a Peering Service partner.
See [Peering Service partners](location-partners.md) for a complete list of Peering Service providers. ## Activate the prefix
-If you have received a Peering Service prefix key from your Peering Service provider, then you can activate your prefixes for optimized routing with Peering Service. Prefix activation, alignment to the right OC partner, and appropriate interconnect location are requirements for optimized routing (to ensure cold potato routing).
+If you already received a Peering Service prefix key from your Peering Service provider, then you can activate your prefixes for optimized routing with Peering Service. Prefix activation, alignment to the right OC partner, and appropriate interconnect location are requirements for optimized routing (to ensure cold potato routing).
To activate the prefix, follow these steps:
To activate the prefix, follow these steps:
:::image type="content" source="./media/customer-walkthrough/peering-service-basics.png" alt-text="Screenshot shows the Basics tab of creating a Peering Service connection in the Azure portal.":::
-1. In the **Configuration** tab, provide details on the location, provider and primary and backup interconnect locations. If the backup location is set to **None**, the traffic fails over to the internet.
+1. In the **Configuration** tab, provide details on the location, provider, and primary and backup interconnect locations. If the backup location is set to **None**, the traffic fails over to the internet.
> [!NOTE] > - The prefix key should be the same as the one obtained from your Peering Service provider.
To activate the prefix, follow these steps:
## Frequently asked questions (FAQ)
-**Q.** Will Microsoft re-advertise my prefixes to the Internet?
+**Q.** Will Microsoft readvertise my prefixes to the Internet?
**A.** No.
-**Q.** My Peering Service prefix has failed validation. How should I proceed?
+**Q.** My Peering Service prefix failed validation. How should I proceed?
-**A.** Review the [Peering Service Prefix Requirements](./peering-service-prefix-requirements.md) and follow the troubleshooting steps described.
+**A.** Review the [Peering Service prefix requirements](./peering-service-prefix-requirements.md) and follow the troubleshooting steps described.
private-link Rbac Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/rbac-permissions.md
Microsoft.Network and the specific resource provider you are deploying, for exam
## Private endpoint
-This section lists the granular permissions required to deploy a private endpoint.
+This section lists the granular permissions required to deploy a private endpoint, manage [private endpoint subnet policies](../private-link/disable-private-endpoint-network-policy.md), and deploy dependent resources.
| Action | Description | | | - |
This section lists the granular permissions required to deploy a private endpoin
| Microsoft.Resources/subscriptions/resourcegroups/resources/read | Read the resources for the resource group | | Microsoft.Network/virtualNetworks/read | Read the virtual network definition | | Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet definition |
-| Microsoft.Network/virtualNetworks/subnets/write | Creates a virtual network subnet or updates an existing virtual network subnet|
-| Microsoft.Network/virtualNetworks/subnets/join/action | Joins a virtual network |
+| Microsoft.Network/virtualNetworks/subnets/write | Creates a virtual network subnet or updates an existing virtual network subnet. <br/> *Not explicitly needed to deploy a private endpoint, but necessary for managing private endpoint subnet policies* |
+| Microsoft.Network/virtualNetworks/subnets/join/action | Allow a private endpoint to join a virtual network |
| Microsoft.Network/privateEndpoints/read | Read a private endpoint resource | | Microsoft.Network/privateEndpoints/write | Creates a new private endpoint, or updates an existing private endpoint | | Microsoft.Network/locations/availablePrivateEndpointTypes/read | Read available private endpoint resources |
Here is the JSON format of the above permissions. Input your own roleName, descr
## Private link service
-This section lists the granular permissions required to deploy a private link service.
+This section lists the granular permissions required to deploy a private link service, manage [private link service subnet policies](../private-link/disable-private-link-service-network-policy.md), and deploy dependent resources.
| Action | Description | | | - |
This section lists the granular permissions required to deploy a private link se
| Microsoft.Resources/subscriptions/resourcegroups/resources/read | Read the resources for the resource group | | Microsoft.Network/virtualNetworks/read | Read the virtual network definition | | Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet definition |
-| Microsoft.Network/virtualNetworks/subnets/write | Creates a virtual network subnet or updates an existing virtual network subnet|
-| Microsoft.Network/privateLinkServices/read | Read a private link service resource|
-| Microsoft.Network/privateLinkServices/write | Creates a new private link service, or updates an existing private link service|
+| Microsoft.Network/virtualNetworks/subnets/write | Creates a virtual network subnet or updates an existing virtual network subnet. <br/> *Not explicitly needed to deploy a private link service, but necessary for managing private link subnet policies* |
+| Microsoft.Network/privateLinkServices/read | Read a private link service resource|
+| Microsoft.Network/privateLinkServices/write | Creates a new private link service, or updates an existing private link service|
| Microsoft.Network/privateLinkServices/privateEndpointConnections/read | Read a private endpoint connection definition | | Microsoft.Network/privateLinkServices/privateEndpointConnections/write | Creates a new private endpoint connection, or updates an existing private endpoint connection|
-| Microsoft.Network/networkSecurityGroups/join/action | Joins a network security group |
-| Microsoft.Network/loadBalancers/read | Read a load balancer definition |
-| Microsoft.Network/loadBalancers/write | Creates a load balancer or updates an existing load balancer |
+| Microsoft.Network/networkSecurityGroups/join/action | Joins a network security group |
+| Microsoft.Network/loadBalancers/read | Read a load balancer definition |
+| Microsoft.Network/loadBalancers/write | Creates a load balancer or updates an existing load balancer |
```JSON {
Typically, a network administrator creates a private endpoint. Depending on your
|Approval method |Minimum RBAC permissions | |||
-|Automatic | `Microsoft.Network/virtualNetworks/**`<br/>`Microsoft.Network/virtualNetworks/subnets/**`<br/>`Microsoft.Network/privateEndpoints/**`<br/>`Microsoft.Network/networkinterfaces/**`<br/>`Microsoft.Network/locations/availablePrivateEndpointTypes/read`<br/>`Microsoft.ApiManagement/service/**`<br/>`Microsoft.ApiManagement/service/privateEndpointConnections/**` |
+|Automatic | `Microsoft.Network/virtualNetworks/**`<br/>`Microsoft.Network/virtualNetworks/subnets/**`<br/>`Microsoft.Network/privateEndpoints/**`<br/>`Microsoft.Network/networkinterfaces/**`<br/>`Microsoft.Network/locations/availablePrivateEndpointTypes/read`<br/>|
|Manual | `Microsoft.Network/virtualNetworks/**`<br/>`Microsoft.Network/virtualNetworks/subnets/**`<br/>`Microsoft.Network/privateEndpoints/**`<br/>`Microsoft.Network/networkinterfaces/**`<br/>`Microsoft.Network/locations/availablePrivateEndpointTypes/read` | ## Next steps
sap Dbms Guide Sqlserver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-sqlserver.md
keywords: 'Azure, SQL Server, SAP, AlwaysOn, Always On'
Previously updated : 11/14/2022 Last updated : 10/07/2024
[Logo_Windows]:media/virtual-machines-shared-sap-shared/Windows.png
-This document covers several different areas to consider when deploying SQL Server for SAP workload in Azure IaaS. As a precondition to this document, you should have read the document [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](./dbms-guide-general.md) and other guides in the [SAP workload on Azure documentation](./get-started.md).
+This document covers several different areas to consider when deploying SQL Server for SAP workload in Azure IaaS. As a precondition to this document, read the document [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](./dbms-guide-general.md) and other guides in the [SAP workload on Azure documentation](./get-started.md).
This document covers several different areas to consider when deploying SQL Serv
In general, you should consider using the most recent SQL Server releases to run SAP workload in Azure IaaS. The latest SQL Server releases offer better integration into some of the Azure services and functionality, or have changes that optimize operations in an Azure IaaS infrastructure.
-General documentation about SQL Server running in Azure VMs can be found in these articles:
+General documentation about SQL Server running in Azure Virtual Machines (VMs) can be found in these articles:
- [SQL Server on Azure Virtual Machines (Windows)](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) - [Automate management with the Windows SQL Server IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management)
General documentation about SQL Server running in Azure VMs can be found in thes
- [Storage: Performance best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-storage) - [HADR configuration best practices (SQL Server on Azure VMs)](/azure/azure-sql/virtual-machines/windows/hadr-cluster-best-practices)
-Not all the content and statements made in the general SQL Server in Azure VM documentation applies to SAP workload. But, the documentation gives a good impression on the principles. an example for functionality not supported for SAP workload is the usage of FCI clustering.
+Not all the content and statements made in the general SQL Server in Azure VM documentation apply to SAP workload. But the documentation gives a good impression of the principles. An example of functionality not supported for SAP workload is the usage of FCI clustering.
There's some SQL Server in IaaS specific information you should know before continuing:
-* **SQL Version Support**: Even with SAP Note [#1928533](https://launchpad.support.sap.com/#/notes/1928533) stating that the minimum supported SQL Server release is SQL Server 2008 R2, the window of supported SQL Server versions on Azure is also dictated by SQL Server's lifecycle. SQL Server 2012 extended maintenance ended mid of 2022. As a result, the current minimum release for newly deployed systems should be [SQL Server 2014](/lifecycle/products/sql-server-2014). The more recent, the better. The latest SQL Server releases offer better integration into some of the Azure services and functionality. Or have changes that optimize operations in an Azure IaaS infrastructure.
-* **Using Images from Azure Marketplace**: The fastest way to deploy a new Microsoft Azure VM is to use an image from the Azure Marketplace. There are images in the Azure Marketplace, which contain the most recent SQL Server releases. The images where SQL Server already is installed can't be immediately used for SAP NetWeaver applications. The reason is the default SQL Server collation is installed within those images and not the collation required by SAP NetWeaver systems. In order to use such images, check the steps documented in chapter [Using a SQL Server image out of the Microsoft Azure Marketplace](./dbms-guide-sqlserver.md).
+* **SQL Version Support**: Even with SAP Note [#1928533](https://launchpad.support.sap.com/#/notes/1928533) stating that the minimum supported SQL Server release is SQL Server 2008 R2, the window of supported SQL Server versions on Azure is also dictated by SQL Server's lifecycle. SQL Server 2012 extended maintenance ended in mid-2022. As a result, the current minimum release for newly deployed systems should be [SQL Server 2014](/lifecycle/products/sql-server-2014). The more recent, the better. The latest SQL Server releases offer better integration into some of the Azure services and functionality, or have changes that optimize operations in an Azure IaaS infrastructure.
+* **Using Images from Azure Marketplace**: The fastest way to deploy a new Microsoft Azure VM is to use an image from the Azure Marketplace. The Azure Marketplace contains images with the most recent SQL Server releases. However, images where SQL Server is already installed can't be used immediately for SAP NetWeaver applications. The reason is that those images come with the default SQL Server collation instead of the collation required by SAP NetWeaver systems. In order to use such images, check the steps documented in the chapter [Using a SQL Server image out of the Microsoft Azure Marketplace](./dbms-guide-sqlserver.md).
* **SQL Server multi-instance support within a single Azure VM**: This deployment method is supported. However, be aware of resource limitations, especially around network and storage bandwidth of the VM type that you're using. Detailed information is available in the article [Sizes for virtual machines in Azure](/azure/virtual-machines/sizes). These quota limitations might prevent you from implementing the same multi-instance architecture as you can implement on-premises. Regarding the configuration and the interference of sharing the resources available within a single VM, the same considerations as on-premises need to be taken into account. * **Multiple SAP databases in one single SQL Server instance in a single VM**: Configurations like these are supported. Considerations of multiple SAP databases sharing the shared resources of a single SQL Server instance are the same as for on-premises deployments. Keep other limits in mind, like the number of disks that can be attached to a specific VM type, as well as network and storage quota limits of specific VM types, as detailed in [Sizes for virtual machines in Azure](/azure/virtual-machines/sizes). ## Recommendations on VM/VHD structure for SAP-related SQL Server deployments
-In accordance with the general description, Operating system, SQL Server executables, the SAP executables should be located or installed separate Azure disks. Typically, most of the SQL Server system databases aren't utilized at a high level by SAP NetWeaver workload. Nevertheless the system databases of SQL Server should be, together with the other SQL Server directories on a separate Azure disk. SQL Server tempdb should be either located on the nonperisisted D:\ drive or on a separate disk.
+In accordance with the general description, the operating system, the SQL Server executables, and the SAP executables should be installed on separate Azure disks. Typically, most of the SQL Server system databases aren't utilized at a high level by SAP NetWeaver workload. Nevertheless, the system databases of SQL Server should be located, together with the other SQL Server directories, on a separate Azure disk. SQL Server tempdb should be located either on the nonpersisted D:\ drive or on a separate disk.
-* With all SAP certified VM types (see SAP Note [#1928533](https://launchpad.support.sap.com/#/notes/1928533)), tempdb data, and log files can be placed on the non-persisted D:\ drive.
-* With SQL Server releases, where SQL Server installs tempdb with one data file by default, it's recommended to use multiple tempdb data files. Be aware D:\ drive volumes are different in size and capabilities based on the VM type. For exact sizes of the D:\ drive of the different VMs, check the article [Sizes for Windows virtual machines in Azure](/azure/virtual-machines/sizes).
+* With all SAP certified VM types (see SAP Note [#1928533](https://launchpad.support.sap.com/#/notes/1928533)), tempdb data and log files can be placed on the nonpersisted D:\ drive.
+* With SQL Server releases where SQL Server installs tempdb with only one data file by default, it's recommended to use multiple tempdb data files. Be aware that D:\ drive volumes differ in size and capabilities based on the VM type. For exact sizes of the D:\ drive of the different VMs, check the article [Sizes for Windows virtual machines in Azure](/azure/virtual-machines/sizes).
These configurations enable tempdb to consume more space and, more importantly, more I/O operations per second (IOPS) and storage bandwidth than the system drive is able to provide. The nonpersistent D:\ drive also offers better I/O latency and throughput. In order to determine the proper tempdb size, you can check the tempdb sizes on existing systems.
A VM configuration, which runs SQL Server with an SAP database and where tempdb
The diagram displays a simple case. As alluded to in the article [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms-guide-general.md), Azure storage type, number, and size of disks depend on different factors. But in general we recommend: - For smaller and mid-range deployments, using one large volume, which contains the SQL Server data files. The reason behind this configuration is that it's easier to deal with different I/O workloads in case the SQL Server data files don't have the same free space. Whereas in large deployments, especially deployments where the customer moved with a heterogeneous database migration to SQL Server in Azure, we used separate disks and then distributed the data files across those disks. Such an architecture is only successful when each disk has the same number of data files, all the data files are the same size, and roughly have the same free space.-- Use the D:\drive for tempdb as long as performance is good enough. If the overall workload is limited in performance by tempdb located on the D:\ drive, you need to move tempdb to Azure premium storage v1 or v2, or Ultra disk as recommended in [this article](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist).
+- Use the D:\ drive for tempdb as long as performance is good enough. If the overall workload is limited in performance by tempdb located on the D:\ drive, you need to move tempdb to Azure premium storage v1 or v2, or Ultra disk as recommended in [this article](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist).
-SQL Server proportional fill mechanism distributes reads and writes to all datafiles evenly provided all SQL Server data files are the same size and have the same frees pace. SAP on SQL Server will deliver the best performance when reads and writes are distributed evenly across all available datafiles. If a database has too few datafiles or the existing data files are highly unbalanced, the best method to correct is an R3load export and import. An R3load export and import involves downtime and should only be done if there's an obvious performance problem that needs to be resolved. If the datafiles are only moderately different sizes, increase all datafiles to the same size, and SQL Server will rebalance data over time. SQL Server will automatically grow datafiles evenly if trace flag 1117 is set or if SQL Server 2016 or higher is used.
+SQL Server proportional fill mechanism distributes reads and writes to all datafiles evenly, provided all SQL Server data files are the same size and have the same free space. SAP on SQL Server delivers the best performance when reads and writes are distributed evenly across all available datafiles. If a database has too few datafiles or the existing data files are highly unbalanced, the best method to correct this is an R3load export and import. An R3load export and import involves downtime and should only be done if there's an obvious performance problem that needs to be resolved. If the datafiles are only moderately different in size, increase all datafiles to the same size, and SQL Server rebalances the data over time. SQL Server automatically grows datafiles evenly if trace flag 1117 is set, or by default if SQL Server 2016 or higher is used without the trace flag.
### Special for M-Series VMs
-For Azure M-Series VM, the latency writing into the transaction log can be reduced, compared to Azure premium storage performance v1, when using Azure Write Accelerator. If the latency provided by premium storage v1 is limiting scalability of the SAP workload, the disk that stores the SQL Server transaction log file can be enabled for Write Accelerator. Details can be read in the document [Write Accelerator](/azure/virtual-machines/how-to-enable-write-accelerator). Azure Write Accelerator doesn't work with Azure premium storage v2 and Ultra disk. In both cases, the latency is better than what Azure premium storage v1 delivers.
+For Azure M-Series VMs, the latency writing into the transaction log can be reduced, compared to Azure premium storage v1 performance, when using Azure Write Accelerator. If the latency provided by premium storage v1 is limiting scalability of the SAP workload, the disk that stores the SQL Server transaction log file can be enabled for Write Accelerator. Details can be read in the document [Write Accelerator](/azure/virtual-machines/how-to-enable-write-accelerator). Azure Write Accelerator doesn't work with Azure premium storage v2 (Premium SSD v2) or Ultra disk. In both cases, the latency is better than what Azure premium storage v1 delivers.
### Formatting the disks
-For SQL Server, the NTFS block size for disks containing SQL Server data and log files should be 64 KB. There's no need to format the D:\ drive. This drive comes pre-formatted.
+For SQL Server, the NTFS block size for disks containing SQL Server data and log files should be 64 KB. There's no need to format the D:\ drive. This drive comes preformatted.
To avoid having the restore or creation of databases initialize the data files by zeroing their content, make sure that the user context the SQL Server service is running in has the user right **Perform volume maintenance tasks**. For more information, see [Database instant file initialization](/sql/relational-databases/databases/database-instant-file-initialization).
-## SQL Server 2014 and more recent - Storing Database Files directly on Azure Blob Storage
+## SQL Server 2014 and more recent SQL Server versions - Storing Database Files directly on Azure Blob Storage
SQL Server 2014 and later releases open the possibility to store database files directly on Azure Blob Storage without the 'wrapper' of a VHD around them. This functionality was meant to address shortcomings of Azure block storage years back. These days, it isn't recommended to use this deployment method; instead, choose either Azure premium storage v1, premium storage v2, or Ultra disk, depending on your requirements.
-## SQL Server 2014 Buffer Pool Extension
-SQL Server 2014 introduced a new feature, which is called [Buffer Pool Extension](/sql/database-engine/configure-windows/buffer-pool-extension). This functionality though tested under SAP workload on Azure didn't provide improvement in hosting workload. Therefore, it shouldn't be considered
- ## Backup/Recovery considerations for SQL Server
-Deploying SQL Server into Azure, you need to review your backup architecture. Even if the system isn't a production system, the SAP database hosted by SQL Server must be backed up periodically. Since Azure Storage keeps three images, a backup is now less important in respect to compensating a storage crash. The priority reason for maintaining a proper backup and recovery plan is more that you can compensate for logical/manual errors by providing point in time recovery capabilities. The goal is to either use backups to restore the database back to a certain point in time. Or to use the backups in Azure to seed another system by copying the existing database.
+When deploying SQL Server into Azure, you need to review your backup architecture. Even if the system isn't a production system, the SQL Server SAP database must be backed up periodically. Since Azure Storage keeps three images, a backup is now less important with respect to compensating for a storage crash. The primary reason for maintaining a proper backup and recovery plan is to provide point-in-time recovery capabilities that compensate for logical or manual errors. The goal is either to use backups to restore the database to a certain point in time, or to use the backups in Azure to seed another system by copying the existing database backup.
There are several ways to back up and restore SQL Server databases in Azure. To get the best overview and details, read the document [Backup and restore for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/backup-restore). The article covers several different possibilities. ## Using a SQL Server image out of the Microsoft Azure Marketplace
-Microsoft offers VMs in the Azure Marketplace, which already contain versions of SQL Server. For SAP customers who require licenses for SQL Server and Windows, using these images might be an opportunity to cover the need for licenses by spinning up VMs with SQL Server already installed. In order to use such images for SAP, the following considerations need to be made:
+Microsoft offers VMs in the Azure Marketplace, which already contain versions of SQL Server. For SAP customers who require licenses for SQL Server and Windows, using these images might be an opportunity to cover the need for licenses by spinning up VMs with SQL Server already installed. In order to use such images for SAP, the following considerations need to be made:
* The SQL Server non-evaluation versions incur higher costs than a 'Windows-only' VM deployed from Azure Marketplace. To compare prices, see [Windows Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/windows/) and [SQL Server Enterprise Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/sql-server-enterprise/).
-* You only can use SQL Server releases, which are supported by SAP.
+* You can only use SQL Server releases that are supported by SAP for their software.
* The collation of the SQL Server instance installed in the VMs offered in the Azure Marketplace isn't the collation that SAP NetWeaver requires the SQL Server instance to run with. You can change the collation, though, with the directions in the following section. ### Changing the SQL Server Collation of a Microsoft Windows/SQL Server VM
-Since the SQL Server images in the Azure Marketplace aren't set up to use the collation, which is required by SAP NetWeaver applications, it needs to be changed immediately after the deployment. For SQL Server, this change of collation can be done with the following steps as soon as the VM has been deployed and an administrator is able to log into the deployed VM:
+Since the SQL Server images in the Azure Marketplace aren't set up to use the collation required by SAP NetWeaver applications, the collation needs to be changed immediately after the deployment. For SQL Server, this change of collation can be done with the following steps as soon as the VM is deployed and an administrator is able to log into the deployed VM:
* Open a Windows Command Window, as administrator. * Change the directory to C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\SQLServer2012.
Latin1-General, binary code point comparison sort for Unicode Data, SQL Server S
If the result is different, STOP any deployment and investigate why the setup command didn't work as expected. Deployment of SAP NetWeaver applications onto a SQL Server instance with SQL Server codepages other than the one mentioned is **NOT** supported for NetWeaver deployments. ## SQL Server High-Availability for SAP in Azure
-Using SQL Server in Azure IaaS deployments for SAP, you have several different possibilities to add to deploy the DBMS layer highly available. Azure provides different up-time SLAs for a single VM using different Azure block storages, a pair of VMs deployed in an Azure availability set, or a pair of VMs deployed across Azure Availability Zones. For production systems, we expect you to deploy a pair of VMs within an virtual machine scale set with flexible orchestration across two availability zones. See [comparison of different deployment types for SAP workload](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for more information. One VM will run the active SQL Server Instance. The other VM will run the passive instance
+Using SQL Server in Azure IaaS deployments for SAP, you have several different possibilities to deploy the database layer in a highly available manner. Azure provides different up-time SLAs for a single VM using different Azure block storages, a pair of VMs deployed in an Azure availability set, or a pair of VMs deployed across Azure Availability Zones. For production systems, we expect you to deploy a pair of VMs within a virtual machine scale set with flexible orchestration across two availability zones. See [comparison of different deployment types for SAP workload](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for more information. One VM runs the active SQL Server instance. The other VM runs the passive instance.
### SQL Server Clustering using Windows Scale-out File Server or Azure shared disk With Windows Server 2016, Microsoft introduced [Storage Spaces Direct](/windows-server/storage/storage-spaces/storage-spaces-direct-overview). Based on Storage Spaces Direct deployment, SQL Server FCI clustering is supported in general. Azure also offers [Azure shared disks](/azure/virtual-machines/disks-shared-enable?tabs=azure-cli) that could be used for Windows clustering. **For SAP workload, we aren't supporting these HA options.**
The SQL Server log shipping functionality was hardly used in Azure to achieve hi
- Disaster Recovery scenarios from one Azure region into another Azure region - Disaster Recovery configuration from on-premises into an Azure region-- Cut-over scenarios from on-premises to Azure. In those cases, log shipping is used to synchronize the new DBMS deployment in Azure with the ongoing production system on-premises. At the time of cutting over, production is shut down and it's made sure that the last and latest transaction log backups got transferred to the Azure DBMS deployment. Then the Azure DBMS deployment is opened up for production.
+- Cut-over scenarios from on-premises to Azure. In those cases, log shipping is used to synchronize the new database deployment in Azure with the ongoing production system on-premises. At the time of cutting over, production is shut down, and it's made sure that the latest transaction log backups are transferred to the Azure database deployment. Then the Azure database deployment is opened up for production.
### SQL Server Always On
Most customers are using the SQL Server Always On functionality for disaster rec
Many customers are using SQL Server [Transparent Data Encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption) when deploying their SAP SQL Server databases in Azure. The SQL Server TDE functionality is fully supported by SAP (see SAP Note [#1380493](https://launchpad.support.sap.com/#/notes/1380493)). ### Applying SQL Server TDE
-In cases where you perform a heterogeneous migration from another DBMS, running on-premises, to Windows/SQL Server running in Azure, you should create your empty target database in SQL Server ahead of time. As next step you would apply SQL Server TDE functionality against this empty database. Reason you want to perform in this sequence is that the process of encrypting the empty database can take quite a while. The SAP import processes would then import the data into the encrypted database during the downtime phase. The overhead of importing into an encrypted database has a way lower time impact than encrypting the database after the export phase in the down time phase. Negative experiences were made when trying to apply TDE with SAP workload running on top of the database. Therefore, recommendation is treating the deployment of TDE as an activity that needs to be done with no or low SAP workload on the particular database. From SQL Server 2016 on, you can stop and resume the TDE scan that performs the initial encryption. The document [Transparent Data Encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption) describes the command and details.
+In cases where you perform a heterogeneous migration from another database, running on-premises, to Windows/SQL Server running in Azure, you should create your empty target database in SQL Server ahead of time. As a next step, you would apply SQL Server TDE functionality against this empty database. The reason you want to perform the steps in this sequence is that the process of encrypting the empty database can take quite a while. The SAP import processes would then import the data into the encrypted database during the downtime phase. The overhead of importing into an encrypted database has a much lower time impact than encrypting the database after the export phase in the downtime phase. There have been negative experiences when trying to apply TDE with SAP workload running on top of the database. Therefore, the recommendation is to treat the deployment of TDE as an activity that needs to be done with no or low SAP workload on the particular database. From SQL Server 2016 on, you can stop and resume the TDE scan that performs the initial encryption. The document [Transparent Data Encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption) describes the command and details.
In cases where you move SAP SQL Server databases from on-premises into Azure, we recommend testing on which infrastructure you can get the encryption applied fastest. For this case, keep these facts in mind: -- You can't define how many threads are used to apply data encryption to the database. The number of threads is majorly dependent on the number of disk volumes the SQL Server data and log files are distributed over. Means the more distinct volumes (drive letters), the more threads will be engaged in parallel to perform the encryption. Such a configuration contradicts a bit with earlier disk configuration suggestion on building one or a smaller number of storage spaces for the SQL Server database files in Azure VMs. A configuration with a few volumes would lead to a few threads executing the encryption. A single thread encrypting is reading 64 KB extents, encrypts it and then write a record into the transaction log file, telling that the extent got encrypted. As a result the load on the transaction log is moderate.
+- You can't define how many threads are used to apply data encryption to the database. The number of threads is majorly dependent on the number of disk volumes the SQL Server data and log files are distributed over. This means the more distinct volumes (drive letters), the more threads are engaged in parallel to perform the encryption. Such a configuration contradicts a bit the earlier disk configuration suggestion of building one or a smaller number of storage spaces for the SQL Server database files in Azure VMs. A configuration with a few volumes would lead to a few threads executing the encryption. A single encrypting thread reads a 64 KB extent, encrypts it, and then writes a record into the transaction log file noting that the extent got encrypted. As a result, the load on the transaction log is moderate.
- In older SQL Server releases, backup compression was no longer effective when you encrypted your SQL Server database. This behavior could develop into an issue when your plan was to encrypt your SQL Server database on-premises and then copy a backup into Azure to restore the database in Azure. SQL Server backup compression can achieve a compression ratio of factor 4. - With SQL Server 2016, SQL Server introduced new functionality that allows compressing backups of encrypted databases in an efficient manner as well. See [this blog](/archive/blogs/sqlcat/sqlsweet16-episode-1-backup-compression-for-tde-enabled-databases) for some details.
In this section, we suggest a set of minimum configurations for different sizes
An example of a configuration for a little SQL Server instance with a database size between 50 GB – 250 GB could look like
-| Configuration | DBMS VM | Comments |
+| Configuration | Database VM | Comments |
| | | | | VM Type | E4s_v3/v4/v5 (4 vCPU/32 GiB RAM) | | | Accelerated Networking | Enable | |
An example of a configuration for a little SQL Server instance with a database s
| Disk aggregation | Storage Spaces if desired | | | File system | NTFS | | | Format block size | 64 KB | |
-| # and type of data disks | Premium storage v1: 2 x P10 (RAID0) <br /> Premium storage v2: 2 x 150 GiB (RAID0) - default IOPS and throughput | Cache = Read Only for premium storage v1 |
-| # and type of log disks | Premium storage v1: 1 x P20 <br /> Premium storage v2: 1 x 128 GiB - default IOPS and throughput | Cache = NONE |
+| # and type of data disks | Premium storage v1: 2 x P10 (RAID0) <br /> Premium storage v2: 2 x 150 GiB (RAID0) - default IOPS and throughput or equivalent Premium SSD v2 | Cache = Read Only for premium storage v1 |
+| # and type of log disks | Premium storage v1: 1 x P20 <br /> Premium storage v2: 1 x 128 GiB - default IOPS and throughput or equivalent Premium SSD v2 | Cache = NONE |
| SQL Server max memory parameter | 90% of Physical RAM | Assuming single instance | An example of a configuration for a small SQL Server instance with a database size between 250 GB – 750 GB, such as a smaller SAP Business Suite system, could look like
-| Configuration | DBMS VM | Comments |
+| Configuration | Database VM | Comments |
| | | | | VM Type | E16s_v3/v4/v5 (16 vCPU/128 GiB RAM) | | | Accelerated Networking | Enable | |
An example of a configuration or a small SQL Server instance with a database siz
| Disk aggregation | Storage Spaces if desired | | | File system | NTFS | | | Format block size | 64 KB | |
-| # and type of data disks | Premium storage v1: 4 x P20 (RAID0) <br /> Premium storage v2: 4 x 100 GiB - 200 GiB (RAID0) - default IOPS and 25 MB/sec extra throughput per disk | Cache = Read Only for premium storage v1 |
-| # and type of log disks | Premium storage v1: 1 x P20 <br /> Premium storage v2: 1 x 200 GiB - default IOPS and throughput | Cache = NONE |
+| # and type of data disks | Premium storage v1: 4 x P20 (RAID0) <br /> Premium storage v2: 4 x 100 GiB - 200 GiB (RAID0) - default IOPS and 25 MB/sec extra throughput per disk or equivalent Premium SSD v2 | Cache = Read Only for premium storage v1 |
+| # and type of log disks | Premium storage v1: 1 x P20 <br /> Premium storage v2: 1 x 200 GiB - default IOPS and throughput or equivalent Premium SSD v2 | Cache = NONE |
| SQL Server max memory parameter | 90% of Physical RAM | Assuming single instance | An example of a configuration for a medium SQL Server instance with a database size between 750 GB – 2,000 GB, such as a medium SAP Business Suite system, could look like
-| Configuration | DBMS VM | Comments |
+| Configuration | Database VM | Comments |
| | | | | VM Type | E64s_v3/v4/v5 (64 vCPU/432 GiB RAM) | | | Accelerated Networking | Enable | |
An example of a configuration for a medium SQL Server instance with a database s
| Disk aggregation | Storage Spaces if desired | | | File system | NTFS | | | Format block size | 64 KB | |
-| # and type of data disks | Premium storage v1: 4 x P30 (RAID0) <br /> Premium storage v2: 4 x 250 GiB - 500 GiB - plus 2,000 IOPS and 75 MB/sec throughput per disk | Cache = Read Only for premium storage v1 |
-| # and type of log disks | Premium storage v1: 1 x P20 <br /> Premium storage v2: 1 x 400 GiB - default IOPS and 75MB/sec extra throughput | Cache = NONE |
+| # and type of data disks | Premium storage v1: 4 x P30 (RAID0) <br /> Premium storage v2: 4 x 250 GiB - 500 GiB - plus 2,000 IOPS and 75 MB/sec throughput per disk or equivalent Premium SSD v2 | Cache = Read Only for premium storage v1 |
+| # and type of log disks | Premium storage v1: 1 x P20 <br /> Premium storage v2: 1 x 400 GiB - default IOPS and 75 MB/sec extra throughput or equivalent Premium SSD v2 | Cache = NONE |
| SQL Server max memory parameter | 90% of Physical RAM | Assuming single instance | An example of a configuration for a larger SQL Server instance with a database size between 2,000 GB and 4,000 GB, such as a larger SAP Business Suite system, could look like
-| Configuration | DBMS VM | Comments |
+| Configuration | Database VM | Comments |
| | | | | VM Type | E96(d)s_v5 (96 vCPU/672 GiB RAM) | | | Accelerated Networking | Enable | |
An example of a configuration for a larger SQL Server instance with a database s
| Disk aggregation | Storage Spaces if desired | | | File system | NTFS | | | Format block size | 64 KB | |
-| # and type of data disks | Premium storage v1: 4 x P30 (RAID0) <br /> Premium storage v2: 4 x 500 GiB - 800 GiB - plus 2500 IOPS and 100 MB/sec throughput per disk | Cache = Read Only for premium storage v1 |
-| # and type of log disks | Premium storage v1: 1 x P20 <br /> Premium storage v2: 1 x 400 GiB - plus 1,000 IOPS and 75MB/sec extra throughput | Cache = NONE |
+| # and type of data disks | Premium storage v1: 4 x P30 (RAID0) <br /> Premium storage v2: 4 x 500 GiB - 800 GiB - plus 2,500 IOPS and 100 MB/sec throughput per disk or equivalent Premium SSD v2 | Cache = Read Only for premium storage v1 |
+| # and type of log disks | Premium storage v1: 1 x P20 <br /> Premium storage v2: 1 x 400 GiB - plus 1,000 IOPS and 75 MB/sec extra throughput or equivalent Premium SSD v2 | Cache = NONE |
| SQL Server max memory parameter | 90% of Physical RAM | Assuming single instance | An example of a configuration for a large SQL Server instance with a database size of 4 TB+, such as a large globally used SAP Business Suite system, could look like
-| Configuration | DBMS VM | Comments |
+| Configuration | Database VM | Comments |
| | | | | VM Type | M-Series (1.0 to 4.0 TB RAM) | | | Accelerated Networking | Enable | |
An example of a configuration for a large SQL Server instance with a database si
| Disk aggregation | Storage Spaces if desired | | | File system | NTFS | | | Format block size | 64 KB | |
-| # and type of data disks | Premium storage v1: 4+ x P40 (RAID0) <br /> Premium storage v2: 4+ x 1,000 GiB - 4,000 GiB - plus 4,500 IOPS and 125 MB/sec throughput per disk | Cache = Read Only for premium storage v1 |
-| # and type of log disks | Premium storage v1: 1 x P30 <br /> Premium storage v2: 1 x 500 GiB - plus 2,000 IOPS and 125 MB/sec throughput | Cache = NONE |
+| # and type of data disks | Premium storage v1: 4+ x P40 (RAID0) <br /> Premium storage v2: 4+ x 1,000 GiB - 4,000 GiB - plus 4,500 IOPS and 125 MB/sec throughput per disk or equivalent Premium SSD v2 | Cache = Read Only for premium storage v1 |
+| # and type of log disks | Premium storage v1: 1 x P30 <br /> Premium storage v2: 1 x 500 GiB - plus 2,000 IOPS and 125 MB/sec throughput or equivalent Premium SSD v2 | Cache = NONE |
| SQL Server max memory parameter | 95% of Physical RAM | Assuming single instance |
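As a rough illustration of how the disk layout from the preceding table maps to a VM definition, the `storageProfile` section of an ARM template could carry the data and log disks like the following sketch. The LUNs and sizes shown are examples tied to the premium storage v1 column (4 x P40 data disks, 1 x P30 log disk) and aren't prescriptive values:

```JSON
{
  "storageProfile": {
    "dataDisks": [
      { "lun": 0, "createOption": "Empty", "diskSizeGB": 2048, "caching": "ReadOnly", "managedDisk": { "storageAccountType": "Premium_LRS" } },
      { "lun": 1, "createOption": "Empty", "diskSizeGB": 2048, "caching": "ReadOnly", "managedDisk": { "storageAccountType": "Premium_LRS" } },
      { "lun": 2, "createOption": "Empty", "diskSizeGB": 2048, "caching": "ReadOnly", "managedDisk": { "storageAccountType": "Premium_LRS" } },
      { "lun": 3, "createOption": "Empty", "diskSizeGB": 2048, "caching": "ReadOnly", "managedDisk": { "storageAccountType": "Premium_LRS" } },
      { "lun": 4, "createOption": "Empty", "diskSizeGB": 1024, "caching": "None", "managedDisk": { "storageAccountType": "Premium_LRS" } }
    ]
  }
}
```

The four 2,048-GiB disks correspond to the P40 data disks with read-only host caching, and the single 1,024-GiB disk to the P30 log disk without caching; striping across the data disks is then done inside the guest, for example with Storage Spaces, as noted in the table.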
-As an example, this configuration is the DBMS VM configuration of an SAP Business Suite on SQL Server. This VM hosts the 30TB database of the single global SAP Business Suite instance of a global company with over $200B annual revenue and over 200K full time employees. The system runs all the financial processing, sales and distribution processing and many more business processes out of different areas, including North American payroll. The system is running in Azure since the beginning of 2018 using Azure M-series VMs as DBMS VMs. As high availability the system is using Always on with one synchronous replica in another Availability Zone of the same Azure region and another asynchronous replica in another Azure region. The NetWeaver application layer is deployed in Ev4 VMs.
+As an example, this configuration is the Database VM configuration of an SAP Business Suite on SQL Server. This VM hosts the 30 TB database of the single global SAP Business Suite instance of a global company with over $200B annual revenue and over 200K full-time employees. The system runs all the financial processing, sales and distribution processing, and many more business processes out of different areas, including North American payroll. The system has been running in Azure since the beginning of 2018, using Azure M-series VMs as database VMs. For high availability, the system is using Always On with one synchronous replica in another Availability Zone of the same Azure region, and another asynchronous replica in another Azure region. The NetWeaver application layer is deployed on the latest D(a)/E(a) VM families.
-| Configuration | DBMS VM | Comments |
+| Configuration | Database VM | Comments |
| | | | | VM Type | M192dms_v2 (192 vCPU/4,196 GiB RAM) | | | Accelerated Networking | Enabled | |
As an example, this configuration is the DBMS VM configuration of an SAP Busines
| Disk aggregation | Storage Spaces | | | File system | NTFS | | | Format block size | 64 KB | |
-| # and type of data disks | Premium storage v1: 16 x P40 | Cache = Read Only |
-| # and type of log disks | Premium storage v1: 1 x P60 | Using Write Accelerator |
-| # and type of tempdb disks | Premium storage v1: 1 x P30 | No caching |
+| # and type of data disks | Premium storage v1: 16 x P40 or equivalent Premium SSD v2 | Cache = Read Only |
+| # and type of log disks | Premium storage v1: 1 x P60 or equivalent Premium SSD v2 | Using Write Accelerator |
+| # and type of tempdb disks | Premium storage v1: 1 x P30 or equivalent Premium SSD v2 | No caching |
| SQL Server max memory parameter | 95% of Physical RAM | | ## <a name="9053f720-6f3b-4483-904d-15dc54141e30"></a>General SQL Server for SAP on Azure Summary
-There are many recommendations in this guide and we recommend you read it more than once before planning your Azure deployment. In general, though, be sure to follow the top general DBMS on Azure-specific recommendations:
+There are many recommendations in this guide and we recommend you read it more than once before planning your Azure deployment. In general, though, be sure to follow the top SQL Server on Azure-specific recommendations:
-1. Use the latest DBMS release, like SQL Server 2019, that has the most advantages in Azure.
+1. Use the latest SQL Server release, like SQL Server 2022, that has the most advantages in Azure.
2. Carefully plan your SAP system landscape in Azure to balance the data file layout and Azure restrictions: * Don't have too many disks, but have enough to ensure you can reach your required IOPS. * Only stripe across disks if you need to achieve a higher throughput.
-3. Never install software or put any files that require persistence on the D:\ drive as it's non-permanent and anything on this drive can be lost at a Windows reboot or VM restart.
-6. Use your DBMS vendor's HA/DR solution to replicate database data.
+3. Never install software or put any files that require persistence on the D:\ drive as it's nonpermanent. Anything on this drive can be lost at a Windows reboot or VM restart.
+6. Use your SQL Server Always On solution to replicate database data.
7. Always use Name Resolution, don't rely on IP addresses. 8. If you're using SQL Server TDE, apply the latest SQL Server patches. 10. Be careful using SQL Server images from the Azure Marketplace. If you use one of the SQL Server images, you must change the instance collation before installing any SAP NetWeaver system on it.
sentinel Ama Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ama-migrate.md
Title: Migrate to the Azure Monitor agent (AMA) from the Log Analytics agent (MMA/OMS) for Microsoft Sentinel
-description: Learn about migrating from the Log Analytics agent (MMA/OMS) to the Azure Monitor agent (AMA), when working with Microsoft Sentinel.
+ Title: Migrate to the Azure Monitor Agent (AMA) from the Log Analytics agent (MMA/OMS) for Microsoft Sentinel
+description: Learn about migrating from the Log Analytics agent (MMA/OMS) to the Azure Monitor Agent (AMA), when working with Microsoft Sentinel.
Previously updated : 04/03/2024 Last updated : 10/01/2024 # AMA migration for Microsoft Sentinel
-This article describes the migration process to the Azure Monitor Agent (AMA) when you have an existing Log Analytics Agent (MMA/OMS), and are working with Microsoft Sentinel.
-> [!IMPORTANT]
-> The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA.
+This article describes the migration process to the Azure Monitor Agent (AMA) when you have an existing, legacy [Log Analytics Agent (MMA/OMS)](/azure/azure-monitor/agents/log-analytics-agent), and are working with Microsoft Sentinel.
-## Prerequisites
-Start with the [Azure Monitor documentation](/azure/azure-monitor/agents/azure-monitor-agent-migration) which provides an agent comparison and general information for this migration process.
-
-This article provides specific details and differences for Microsoft Sentinel.
--
-## Gap analysis between agents
+The Log Analytics agent is [retired as of 31 August, 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you migrate to the AMA.
-The Azure Monitor agent provides extra functionality and a throughput that is 25% better than legacy Log Analytics agents. Migrate to the new AMA connectors to get higher performance, especially if you are using your servers as log forwarders for Windows security events or forwarded events.
-
-The Azure Monitor agent provides the following extra functionality, which is not supported by legacy Log Analytics agents:
+## Prerequisites
-| Log type | Functionality |
-| ||
-| **Windows logs** | Filtering by security event ID <br>Windows event forwarding |
-| **Linux logs** | Multi-homing |
+- Start with the [Azure Monitor documentation](/azure/azure-monitor/agents/azure-monitor-agent-migration), which provides an agent comparison and general information for this migration process. This article provides specific details and differences for Microsoft Sentinel.
-The only logs supported only by the legacy Log Analytics agent are Windows Firewall logs.
## Recommended migration plan
Each organization will have different metrics of success and internal migration
**Include the following steps in your migration process**:
-1. Make sure that you've reviewed necessary prerequisites and other considerations as [documented here](/azure/azure-monitor/agents/azure-monitor-agent-migration#before-you-begin) in the Azure Monitor documentation.
+1. Make sure that you've reviewed necessary prerequisites and other considerations as documented in the Azure Monitor documentation. For more information, see [Before you begin](/azure/azure-monitor/agents/azure-monitor-agent-migration#before-you-begin).
1. Run a proof of concept to test how the AMA sends data to Microsoft Sentinel, ideally in a development or sandbox environment.
- 1. To connect your Windows machines to the [Windows Security Event connector](data-connectors/windows-security-events-via-ama.md), start with **Windows Security Events via AMA** data connector page in Microsoft Sentinel. For more information, see [Windows agent-based connections](connect-services-windows-based.md).
+ 1. In Microsoft Sentinel, install the **Windows Security Events** Microsoft Sentinel solution. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
+
+ 1. To connect your Windows machines to the [Windows Security Event connector](data-connectors/windows-security-events-via-ama.md), start with the **Windows Security Events via AMA** data connector page in Microsoft Sentinel. For more information, see [Windows agent-based connections](connect-services-windows-based.md).
- 1. Go to the **Security Events via Legacy Agent** data connector page. On the **Instructions** tab, under **Configuration** > Step 2, **Select which events to stream**, select **None**. This configures your system so that you won't receive any security events through the MMA/OMS, but other data sources relying on this agent will continue to work. This step affects all machines reporting to your current Log Analytics workspace.
+ 1. Continue with the **Security Events via Legacy Agent** data connector page. On the **Instructions** tab, under **Configuration** > **Step 2** > **Select which events to stream**, select **None**. This configures your system so that you won't receive any security events through the MMA/OMS, but other data sources relying on this agent will continue to work. This step affects all machines reporting to your current Log Analytics workspace.
> [!IMPORTANT]
- > Ingesting data from the same source using two different types of agents will result in double ingestion charges and duplicate events in the Microsoft Sentinel workspace.
+ > Ingesting data from the same source using two different types of agents will result in double ingestion charges and duplicate events in the Microsoft Sentinel workspace.
> > If you need to keep both data connectors running simultaneously, we recommend that you do so only for a limited time for a benchmarking, or test comparison activity, ideally in a separate test workspace. >
-1. Measure the success of your proof of concept.
+1. Measure the success of your proof of concept.
To help with this step, use the **AMA migration tracker** workbook, which displays the servers reporting to your workspaces, and whether they have the legacy MMA, the AMA, or both agents installed. You can also use this workbook to view the DCRs collecting events from your machines, and which events they are collecting.
- For example:
+ Make sure to select your subscription and resource group at the top of the workbook to show data for your environment. For example:
:::image type="content" source="media/ama-migrate/migrate-workbook.png" alt-text="Screenshot of the AMA migration tracker workbook." lightbox="media/ama-migrate/migrate-workbook.png" :::
+ For more information, see [Visualize and monitor your data by using workbooks in Microsoft Sentinel](monitor-your-data.md).
+ Success criteria should include a statistical analysis and comparison of the quantitative data ingested by the MMA/OMS and AMA agents on the same host: - Measure your success over a predefined time period that represents a normal workload for your environment.
Each organization will have different metrics of success and internal migration
- Plan your rollout for AMA agents in your production environment according to your organization's risk profile and change processes.
-3. Roll out the new agent on your production environment and run a final test of the AMA functionality.
+1. Roll out the new agent on your production environment and run a final test of the AMA functionality.
-4. Disconnect any data connectors that rely on the legacy connector, such as Security Events with MMA. Leave the new connector, such as Windows Security Events with AMA, running.
+1. Disconnect any data connectors that rely on the legacy connector, such as Security Events with MMA. Leave the new connector, such as Windows Security Events with AMA, running.
While you can have both the legacy MMA/OMS and the AMA agents running in parallel, prevent duplicate costs and data by making sure that each data source uses only one agent to send data to Microsoft Sentinel.
-5. Check your Microsoft Sentinel workspace to make sure that all your data streams have been replaced using the new AMA-based connectors.
-
-6. Uninstall the legacy agent. For more information, see [Manage the Azure Log Analytics agent ](/azure/azure-monitor/agents/agent-manage#uninstall-agent).
-
-## FAQs
-The following FAQs address issues specific to AMA migration with Microsoft Sentinel. For more information, see [Frequently asked questions for Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview#frequently-asked-questions) in the Azure Monitor documentation.
-
-## What happens if I run both MMA/OMS and AMA in parallel in my Microsoft Sentinel deployment?
-Both the AMA and MMA/OMS agents can co-exist on the same machine. If they both send data, from the same data source to a Microsoft Sentinel workspace, at the same time, from a single host, duplicate events and double ingestion charges will occur.
-
-For your production rollout, we recommend that you configure either an MMA/OMS agent or the AMA for each data source. To address any issues for duplication, see the relevant FAQs in the [Azure Monitor documentation](/azure/azure-monitor/agents/agents-overview#frequently-asked-questions).
-
-## The AMA doesnΓÇÖt yet have the features my Microsoft Sentinel deployment needs to work. Should I migrate yet?
-The legacy Log Analytics agent will be retired on 31 August 2024.
-
-We recommend that you keep up to date with the new features being released for the AMA over time, as it reaches towards parity with the MMA/OMS. Aim to migrate as soon as the features you need to run your Microsoft Sentinel deployment are available in the AMA.
-
-While you can run the MMA and AMA simultaneously, you may want to migrate each connector, one at a time, while running both agents.
+1. Check your Microsoft Sentinel workspace to make sure that all your data streams have been replaced using the new AMA-based connectors.
+1. Uninstall the legacy agent. For more information, see [Manage the Azure Log Analytics agent](/azure/azure-monitor/agents/agent-manage#uninstall-agent).
+For your production rollout, we recommend that you configure the AMA for each data source. To address any duplication issues, see the relevant FAQs in the [Azure Monitor documentation](/azure/azure-monitor/agents/agents-overview#frequently-asked-questions).
-## Next steps
+## Related content
For more information, see: -- [Overview of the Azure Monitor agents](/azure/azure-monitor/agents/agents-overview)
+- [Overview of the Azure Monitor Agents](/azure/azure-monitor/agents/agents-overview)
- [Migrate from Log Analytics agents](/azure/azure-monitor/agents/azure-monitor-agent-migration)-- [Windows Security Events via AMA](data-connectors/windows-security-events-via-ama.md)-- [Security events via Legacy Agent (Windows)](data-connectors/security-events-via-legacy-agent.md) - [Windows agent-based connections](connect-services-windows-based.md)
sentinel Billing Reduce Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-reduce-costs.md
You can reduce costs even further by enrolling tables that contain secondary sec
## Use data collection rules for your Windows Security Events
-The [Windows Security Events connector](connect-windows-security-events.md?tabs=LAA) enables you to stream security events from any computer running Windows Server that's connected to your Microsoft Sentinel workspace, including physical, virtual, or on-premises servers, or in any cloud. This connector includes support for the Azure Monitor agent, which uses data collection rules to define the data to collect from each agent.
+The [Windows Security Events connector](connect-windows-security-events.md?tabs=LAA) enables you to stream security events from any computer running Windows Server that's connected to your Microsoft Sentinel workspace, including physical, virtual, or on-premises servers, or in any cloud. This connector includes support for the Azure Monitor Agent, which uses data collection rules to define the data to collect from each agent.
-Data collection rules enable you to manage collection settings at scale, while still allowing unique, scoped configurations for subsets of machines. For more information, see [Configure data collection for the Azure Monitor agent](/azure/azure-monitor/agents/azure-monitor-agent-data-collection).
+Data collection rules enable you to manage collection settings at scale, while still allowing unique, scoped configurations for subsets of machines. For more information, see [Configure data collection for the Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-data-collection).
Besides the predefined sets of events that you can select to ingest, such as All events, Minimal, or Common, data collection rules enable you to build custom filters and select specific events to ingest. The Azure Monitor Agent uses these rules to filter the data at the source, and then ingests only the events you selected, while leaving everything else behind. Selecting specific events to ingest can help you optimize your costs and save more.
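For illustration, the event selection in such a data collection rule boils down to XPath queries on the Security log. The snippet below is a sketch only; the data source name, event IDs, and workspace placeholder are illustrative examples rather than recommended values:

```JSON
{
  "properties": {
    "dataSources": {
      "windowsEventLogs": [
        {
          "name": "filteredSecurityEvents",
          "streams": [ "Microsoft-SecurityEvent" ],
          "xPathQueries": [
            "Security!*[System[(EventID=4624 or EventID=4625 or EventID=4688)]]"
          ]
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        {
          "name": "sentinelWorkspace",
          "workspaceResourceId": "<workspace-resource-id>"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Microsoft-SecurityEvent" ],
        "destinations": [ "sentinelWorkspace" ]
      }
    ]
  }
}
```

Events that don't match the XPath filter are dropped on the machine itself, which is what keeps ingestion costs down.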
sentinel Configure Connector Login Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-connector-login-detection.md
As the machine learning algorithm requires 30 days' worth of data to build a bas
- [Windows security event sets that can be sent to Microsoft Sentinel](windows-security-event-id-reference.md) - [Windows Security Events via AMA connector for Microsoft Sentinel](data-connectors/windows-security-events-via-ama.md)-- [Security Events via Legacy Agent connector for Microsoft Sentinel](data-connectors/security-events-via-legacy-agent.md)
sentinel Configure Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-data-transformation.md
Before you start configuring DCRs for data transformation:
| If you are ingesting | Ingestion-time transformation is... | Use this DCR type | | -- | - | -- | | **Custom data** through <br>the [**Log Ingestion API**](/azure/azure-monitor/logs/logs-ingestion-api-overview) | <li>Required<li>Included in the DCR that defines the data model | Standard DCR |
-| **Built-in data types** <br>(Syslog, CommonSecurityLog, WindowsEvent, SecurityEvent) <br>using the legacy **Log Analytics Agent (MMA)** | <li>Optional<li>If desired, added to the DCR attached to the Workspace where this data is being ingested | Workspace transformation DCR |
+| **Built-in data types** <br>(Syslog, CommonSecurityLog, WindowsEvent, SecurityEvent) <br>using the Azure Monitor Agent | <li>Optional<li>If desired, added to the DCR that configures how this data is being ingested | Standard DCR |
| **Built-in data types** <br>from most other sources | <li>Optional<li>If desired, added to the DCR attached to the Workspace where this data is being ingested | Workspace transformation DCR | - ## Configure your data transformation Use the following procedures from the Log Analytics and Azure Monitor documentation to configure your data transformation DCRs:
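As a sketch of what an ingestion-time transformation looks like inside a standard DCR, the `dataFlows` section can carry a `transformKql` expression such as the following; the stream, destination name, and filter shown here are illustrative assumptions, not values from this article:

```JSON
{
  "dataFlows": [
    {
      "streams": [ "Microsoft-Syslog" ],
      "destinations": [ "sentinelWorkspace" ],
      "transformKql": "source | where SeverityLevel != 'info' | project-away ProcessID"
    }
  ]
}
```

The query runs against the incoming rows (`source`) before they're written to the workspace, so anything filtered out here is never ingested.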
For more information about data transformation and DCRs, see:
- [Data collection transformations in Azure Monitor Logs (preview)](/azure/azure-monitor/essentials/data-collection-transformations) - [Logs ingestion API in Azure Monitor Logs (Preview)](/azure/azure-monitor/logs/logs-ingestion-api-overview) - [Structure of a data collection rule in Azure Monitor (preview)](/azure/azure-monitor/essentials/data-collection-rule-structure)-- [Configure data collection for the Azure Monitor agent](/azure/azure-monitor/agents/azure-monitor-agent-data-collection)
+- [Configure data collection for the Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-data-collection)
sentinel Connect Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-virtual-desktop.md
For example, monitoring your Azure Virtual Desktop environments can enable you t
Azure Virtual Desktop data in Microsoft Sentinel includes the following types: ++ |Data |Description | |||
-|**Windows event logs** | Windows event logs from the Azure Virtual Desktop environment are streamed into a Microsoft Sentinel-enabled Log Analytics workspace in the same manner as Windows event logs from other Windows machines, outside of the Azure Virtual Desktop environment. <br><br>Install the Log Analytics agent onto your Windows machine and configure the Windows event logs to be sent to the Log Analytics workspace.<br><br>For more information, see:<br>- [Install Log Analytics agent on Windows computers](/azure/azure-monitor/agents/agent-windows)<br>- [Collect Windows event log data sources with Log Analytics agent](/azure/azure-monitor/agents/data-sources-windows-events)<br>- [Connect Windows security events](connect-windows-security-events.md) |
+|**Windows event logs** | Windows event logs from the Azure Virtual Desktop environment are streamed into a Microsoft Sentinel-enabled Log Analytics workspace in the same manner as Windows event logs from other Windows machines, outside of the Azure Virtual Desktop environment. <br><br>Install the Azure Monitor Agent onto your Windows machine and configure the Windows event logs to be sent to the Log Analytics workspace.<br><br>For more information, see:<br>- [Install Azure Monitor Agent on Windows client devices using the client installer](/azure/azure-monitor/agents/azure-monitor-agent-windows-client)<br>- [Collect Windows events with Azure Monitor Agent](/azure/azure-monitor/agents/data-collection-windows-events)<br>- [Windows Security Events via AMA connector for Microsoft Sentinel](data-connectors/windows-security-events-via-ama.md) |
|**Microsoft Defender for Endpoint alerts** | To configure Defender for Endpoint for Azure Virtual Desktop, use the same procedure as you would for any other Windows endpoint. <br><br>For more information, see: <br>- [Set up Microsoft Defender for Endpoint deployment](/windows/security/threat-protection/microsoft-defender-atp/production-deployment)<br>- [Connect data from Microsoft Defender XDR to Microsoft Sentinel](connect-microsoft-365-defender.md) | |**Azure Virtual Desktop diagnostics** | Azure Virtual Desktop diagnostics is a feature of the Azure Virtual Desktop PaaS service, which logs information whenever someone assigned an Azure Virtual Desktop role uses the service. <br><br>Each log contains information about which Azure Virtual Desktop role was involved in the activity, any error messages that appear during the session, tenant information, and user information. <br><br>The diagnostics feature creates activity logs for both user and administrative actions. <br><br>For more information, see [Use Log Analytics for the diagnostics feature in Azure Virtual Desktop](../virtual-desktop/virtual-desktop-fall-2019/diagnostics-log-analytics-2019.md). |
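After the agent and connector are configured, it's worth confirming that events are actually arriving in the workspace. The following is just a sketch; the table to query (here `SecurityEvent`, which the Windows Security Events via AMA connector populates) and the time window depend on the connector and DCR you set up:

```kusto
// Sketch: confirm that Windows security events from session hosts are arriving.
// Adjust the table and time window to match your connector configuration.
SecurityEvent
| where TimeGenerated > ago(1h)
| summarize Events = count() by Computer, EventID
| top 10 by Events
```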
sentinel Connect Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-data-sources.md
Microsoft Sentinel can use agents provided by the Azure Monitor service (on whic
The following sections describe the different types of Microsoft Sentinel agent-based data connectors. To configure connections using agent-based mechanisms, follow the steps in each Microsoft Sentinel data connector page.
-> [!IMPORTANT]
-> The Log Analytics agent will be [**retired on 31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/) and succeeded by the Azure Monitor Agent (AMA). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA. For more information, see [AMA migration for Microsoft Sentinel](ama-migrate.md).
- <a name="syslog"></a><a name="common-event-format-cef"></a> ### Syslog and Common Event Format (CEF)
For some data sources, you can collect logs as files on Windows or Linux compute
To connect using the Log Analytics custom log collection agent, follow the steps in each Microsoft Sentinel data connector page. After successful configuration, the data appears in custom tables.
-For more information, see [Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md).
+For more information, see [Custom Logs via AMA data connector - Configure data ingestion to Microsoft Sentinel from specific applications](unified-connector-custom-device.md).
## Service-to-service integration for data connectors
sentinel Connect Services Windows Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-services-windows-based.md
Title: Connect Microsoft Sentinel to other Microsoft services with a Windows age
description: Learn how to connect Microsoft Sentinel to Microsoft services with Windows agent-based connections. Previously updated : 07/18/2023 Last updated : 10/06/2024 # Connect Microsoft Sentinel to other Microsoft services with a Windows agent-based data connector
-This article describes how to connect Microsoft Sentinel to other Microsoft services by using a Windows agent-based connections. Microsoft Sentinel uses the Azure foundation to provide built-in, service-to-service support for data ingestion from many Azure and Microsoft 365 services, Amazon Web Services, and various Windows Server services. There are a few different methods through which these connections are made.
+This article describes how to connect Microsoft Sentinel to other Microsoft services by using Windows agent-based connections. Microsoft Sentinel uses the Azure Monitor Agent to provide built-in, service-to-service support for data ingestion from many Azure and Microsoft 365 services, Amazon Web Services, and various Windows Server services.
-This article presents information that is common to the group of Windows agent-based data connectors.
+The [Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-overview) uses **Data collection rules (DCRs)** to define the data to collect from each agent. Data collection rules offer you two distinct advantages:
-
-## Azure Monitor Agent
-
-Some connectors based on the Azure Monitor Agent (AMA) are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-The Azure Monitor Agent is currently supported only for Windows Security Events, Windows Forwarded Events, and Windows DNS Events.
-
-The [Azure Monitor agent](/azure/azure-monitor/agents/azure-monitor-agent-overview) uses **Data collection rules (DCRs)** to define the data to collect from each agent. Data collection rules offer you two distinct advantages:
--- **Manage collection settings at scale** while still allowing unique, scoped configurations for subsets of machines. They are independent of the workspace and independent of the virtual machine, which means they can be defined once and reused across machines and environments. See [Configure data collection for the Azure Monitor agent](/azure/azure-monitor/agents/azure-monitor-agent-data-collection).
+- **Manage collection settings at scale** while still allowing unique, scoped configurations for subsets of machines. They are independent of the workspace and independent of the virtual machine, which means they can be defined once and reused across machines and environments. See [Configure data collection for the Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-data-collection).
- **Build custom filters** to choose the exact events you want to ingest. The Azure Monitor Agent uses these rules to filter the data *at the source* and ingest only the events you want, while leaving everything else behind. This can save you a lot of money in data ingestion costs!
-See below how to create data collection rules.
+
+> [!IMPORTANT]
+> Some connectors based on the Azure Monitor Agent (AMA) are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-### Prerequisites
+## Prerequisites
- You must have read and write permissions on the Microsoft Sentinel workspace.
See below how to create data collection rules.
- Windows servers installed on on-premises virtual machines - Windows servers installed on virtual machines in non-Azure clouds -- Data connector specific requirements:
+- For the Windows Forwarded Events data connector:
+
+ - You must have Windows Event Collection (WEC) enabled and running, with the Azure Monitor Agent installed on the WEC machine.
+ - We recommend installing the [Advanced Security Information Model (ASIM)](normalization.md) parsers to ensure full support for data normalization. You can deploy these parsers from the [`Azure-Sentinel` GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Parsers/ASim%20WindowsEvent) using the **Deploy to Azure** button there.
- |Data connector |Licensing, costs, and other information |
- |||
- |Windows Forwarded Events|- You must have Windows Event Collection (WEC) enabled and running.<br>Install the Azure Monitor Agent on the WEC machine. <br>- We recommend installing the [Advanced Security Information Model (ASIM)](normalization.md) parsers to ensure full support for data normalization. You can deploy these parsers from the [`Azure-Sentinel` GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Parsers/ASim%20WindowsEvent) using the **Deploy to Azure** button there.|
- Install the related Microsoft Sentinel solution from the Content Hub in Microsoft Sentinel. For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md).
-### Instructions
+## Create data collection rules via the GUI
-1. From the Microsoft Sentinel navigation menu, select **Data connectors**. Select your connector from the list, and then select **Open connector page** on the details pane. Then follow the on-screen instructions under the **Instructions** tab, as described through the rest of this section.
+1. From Microsoft Sentinel, select **Configuration** > **Data connectors**. Select your connector from the list, and then select **Open connector page** on the details pane. Then follow the on-screen instructions under the **Instructions** tab, as described through the rest of this section.
1. Verify that you have the appropriate permissions as described under the **Prerequisites** section on the connector page.
See below how to create data collection rules.
1. In the **Resources** tab, select **+Add resource(s)** to add machines to which the Data Collection Rule will apply. The **Select a scope** dialog will open, and you will see a list of available subscriptions. Expand a subscription to see its resource groups, and expand a resource group to see the available machines. You will see Azure virtual machines and Azure Arc-enabled servers in the list. You can mark the check boxes of subscriptions or resource groups to select all the machines they contain, or you can select individual machines. Select **Apply** when you've chosen all your machines. At the end of this process, the Azure Monitor Agent will be installed on any selected machines that don't already have it installed.
-1. On the **Collect** tab, choose the events you would like to collect: select **All events** or **Custom** to specify other logs or to filter events using [XPath queries](/azure/azure-monitor/agents/data-collection-windows-events#filter-events-using-xpath-queries) (see note below). Enter expressions in the box that evaluate to specific XML criteria for events to collect, then select **Add**. You can enter up to 20 expressions in a single box, and up to 100 boxes in a rule.
+1. On the **Collect** tab, choose the events you would like to collect: select **All events** or **Custom** to specify other logs or to filter events using [XPath queries](/azure/azure-monitor/agents/data-collection-windows-events#filter-events-using-xpath-queries). Enter expressions in the box that evaluate to specific XML criteria for events to collect, then select **Add**. You can enter up to 20 expressions in a single box, and up to 100 boxes in a rule.
- Learn more about [data collection rules](/azure/azure-monitor/essentials/data-collection-rule-overview) from the Azure Monitor documentation.
+ For more information, see the [Azure Monitor documentation](/azure/azure-monitor/essentials/data-collection-rule-overview).
> [!NOTE] > > - The Windows Security Events connector offers two other [**pre-built event sets**](windows-security-event-id-reference.md) you can choose to collect: **Common** and **Minimal**. >
- > - The Azure Monitor agent supports XPath queries for **[XPath version 1.0](/windows/win32/wes/consuming-events#xpath-10-limitations) only**.
+ > - The Azure Monitor Agent supports XPath queries for **[XPath version 1.0](/windows/win32/wes/consuming-events#xpath-10-limitations) only**.
-1. When you've added all the filter expressions you want, select **Next: Review + create**.
+ To test the validity of an XPath query, use the PowerShell cmdlet **Get-WinEvent** with the *-FilterXPath* parameter. For example:
+
+ ```powershell
+ $XPath = '*[System[EventID=1035]]'
+ Get-WinEvent -LogName 'Application' -FilterXPath $XPath
+ ```
+
+ - If events are returned, the query is valid.
+ - If you receive the message "No events were found that match the specified selection criteria," the query may be valid, but there are no matching events on the local machine.
+ - If you receive the message "The specified query is invalid," the query syntax is invalid.
-1. When you see the "Validation passed" message, select **Create**.
+1. When you've added all the filter expressions you want, select **Next: Review + create**.
-You'll see all your data collection rules (including those created through the API) under **Configuration** on the connector page. From there you can edit or delete existing rules.
+1. When you see the **Validation passed** message, select **Create**.
-> [!TIP]
-> Use the PowerShell cmdlet **Get-WinEvent** with the *-FilterXPath* parameter to test the validity of an XPath query. The following script shows an example:
->
-> ```powershell
-> $XPath = '*[System[EventID=1035]]'
-> Get-WinEvent -LogName 'Application' -FilterXPath $XPath
-> ```
->
-> - If events are returned, the query is valid.
-> - If you receive the message "No events were found that match the specified selection criteria," the query may be valid, but there are no matching events on the local machine.
-> - If you receive the message "The specified query is invalid," the query syntax is invalid.
+You'll see all your data collection rules, including those [created through the API](#create-data-collection-rules-using-the-api), under **Configuration** on the connector page. From there you can edit or delete existing rules.
-### Create data collection rules using the API
+## Create data collection rules using the API
-You can also create data collection rules using the API ([see schema](/rest/api/monitor/data-collection-rules)), which can make life easier if you're creating many rules (if you're an MSSP, for example). Here's an example (for the [Windows Security Events via AMA](./data-connectors/windows-security-events-via-ama.md) connector) that you can use as a template for creating a rule:
+You can also create data collection rules using the API, which can make life easier if you're creating many rules, such as if you're an MSSP. Here's an example (for the [Windows Security Events via AMA](./data-connectors/windows-security-events-via-ama.md) connector) that you can use as a template for creating a rule:
**Request URL and header**
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/m
} ```
-See this [complete description of data collection rules](/azure/azure-monitor/essentials/data-collection-rule-overview) from the Azure Monitor documentation.
-
-## Log Analytics Agent (Legacy)
-
-The Log Analytics agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you start planning your migration to the AMA. For more information, see [AMA migration for Microsoft Sentinel](ama-migrate.md).
-
-### Prerequisites
--- You must have read and write permissions on the Log Analytics workspace, and any workspace that contains machines you want to collect logs from.-- You must have the **Log Analytics Contributor** role on the SecurityInsights (Microsoft Sentinel) solution on those workspaces, in addition to any Microsoft Sentinel roles.-
-### Instructions
-
-1. From the Microsoft Sentinel navigation menu, select **Data connectors**.
-
-1. Select your service (**DNS** or **Windows Firewall**) and then select **Open connector page**.
-
-1. Install and onboard the agent on the device that generates the logs.
-
- | Machine type | Instructions |
- | | |
- | **For an Azure Windows VM** | 1. Under **Choose where to install the agent**, expand **Install agent on Azure Windows virtual machine**. <br><br>2. Select the **Download & install agent for Azure Windows Virtual machines >** link. <br><br>3. In the **Virtual machines** blade, select a virtual machine to install the agent on, and then select **Connect**. Repeat this step for each VM you wish to connect. |
- | **For any other Windows machine** | 1. Under **Choose where to install the agent**, expand **Install agent on non-Azure Windows Machine** <br><br>2. Select the **Download & install agent for non-Azure Windows machines >** link. <br><br>3. In the **Agents management** blade, on the **Windows servers** tab, select the **Download Windows Agent** link for either 32-bit or 64-bit systems, as appropriate. <br><br>4. Using the downloaded executable file, install the agent on the Windows systems of your choice, and configure it using the **Workspace ID and Keys** that appear below the download links in the previous step. |
-
-To allow Windows systems without the necessary internet connectivity to still stream events to Microsoft Sentinel, download and install the **Log Analytics Gateway** on a separate machine, using the **Download Log Analytics Gateway** link on the **Agents Management** page, to act as a proxy. You still need to install the Log Analytics agent on each Windows system whose events you want to collect.
-
-For more information on this scenario, see the [**Log Analytics gateway** documentation](/azure/azure-monitor/agents/gateway).
-
-For additional installation options and further details, see the [**Log Analytics agent** documentation](/azure/azure-monitor/agents/agent-windows).
-
-### Determine the logs to send
-
-For the Windows DNS Server and Windows Firewall connectors, select the **Install solution** button. For the legacy Security Events connector, choose the **event set** you wish to send and select **Update**. For more information, see [Windows security event sets that can be sent to Microsoft Sentinel](windows-security-event-id-reference.md).
-
-You can find and query the data for these services using the table names in their respective sections in the [Data connectors reference](data-connectors-reference.md) page.
-
-### Troubleshoot your Windows DNS Server data connector
-
-If your DNS events don't show up in Microsoft Sentinel:
-
-1. Make sure that DNS analytics logs on your servers are [enabled](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn800669(v=ws.11)#to-enable-dns-diagnostic-logging).
-1. Go to Azure DNS Analytics.
-1. In the **Configuration** area, change any of the settings and save your changes. Change your settings back if you need to, and then save your changes again.
-1. Check your Azure DNS Analytics to make sure that your events and queries display properly.
+For more information, see:
-For more information, see [Gather insights about your DNS infrastructure with the DNS Analytics Preview solution](/previous-versions/azure/azure-monitor/insights/dns-analytics).
+- [Data collection rules (DCRs) in Azure Monitor](/azure/azure-monitor/essentials/data-collection-rule-overview)
+- [Data collection rules API schema](/rest/api/monitor/data-collection-rules)
## Next steps
sentinel Create Custom Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-custom-connector.md
Title: Resources for creating Microsoft Sentinel custom connectors
-description: Learn about available resources for creating custom connectors for Microsoft Sentinel. Methods include the Log Analytics agent and API, Logstash, Logic Apps, PowerShell, and Azure Functions.
+description: Learn about available resources for creating custom connectors for Microsoft Sentinel. Methods include the Log Analytics API, Logstash, Logic Apps, PowerShell, and Azure Functions.
Previously updated : 09/26/2024 Last updated : 10/01/2024
The following table compares essential details about each method for creating cu
|Method description |Capability | Serverless |Complexity | ||||| | **[Codeless Connector Platform (CCP)](#connect-with-the-codeless-connector-platform)** <br>Best for less technical audiences to create SaaS connectors using a configuration file instead of advanced development. | Supports all capabilities available with the code. | Yes | Low; simple, codeless development
-|**[Log Analytics Agent](#connect-with-the-log-analytics-agent)** <br>Best for collecting files from on-premises and IaaS sources | File collection only | No |Low |
-|**[Logstash](#connect-with-logstash)** <br>Best for on-premises and IaaS sources, any source for which a plugin is available, and organizations already familiar with Logstash | Available plugins, plus custom plugin, capabilities provide significant flexibility. | No; requires a VM or VM cluster to run | Low; supports many scenarios with plugins |
+|**[Azure Monitor Agent](#connect-with-the-azure-monitor-agent)** <br>Best for collecting files from on-premises and IaaS sources | File collection, data transformation | No | Low |
+|**[Logstash](#connect-with-logstash)** <br>Best for on-premises and IaaS sources, any source for which a plugin is available, and organizations already familiar with Logstash | Supports all capabilities of the Azure Monitor Agent | No; requires a VM or VM cluster to run | Low; supports many scenarios with plugins |
|**[Logic Apps](#connect-with-logic-apps)** <br>High cost; avoid for high-volume data <br>Best for low-volume cloud sources | Codeless programming allows for limited flexibility, without support for implementing algorithms.<br><br> If no available action already supports your requirements, creating a custom action may add complexity. | Yes | Low; simple, codeless development | |**[PowerShell](#connect-with-powershell)** <br>Best for prototyping and periodic file uploads | Direct support for file collection. <br><br>PowerShell can be used to collect more sources, but will require coding and configuring the script as a service. |No | Low | |**[Log Analytics API](#connect-with-the-log-analytics-api)** <br>Best for ISVs implementing integration, and for unique collection requirements | Supports all capabilities available with the code. | Depends on the implementation | High |
Connectors created using the CCP are fully SaaS, without any requirements for se
For more information, see [Create a codeless connector for Microsoft Sentinel](create-codeless-connector.md).
-## Connect with the Log Analytics agent
+## Connect with the Azure Monitor Agent
-If your data source delivers events in files, we recommend that you use the Azure Monitor Log Analytics agent to create your custom connector.
+If your data source delivers events in text files, we recommend that you use the Azure Monitor Agent to create your custom connector.
-- For more information, see [Collecting custom logs in Azure Monitor](/azure/azure-monitor/agents/data-sources-custom-logs).
+- For more information, see [Collect logs from a text file with Azure Monitor Agent](/azure/azure-monitor/agents/data-collection-log-text).
-- For an example of this method, see [Collecting custom JSON data sources with the Log Analytics agent for Linux in Azure Monitor](/azure/azure-monitor/agents/data-sources-json).
+- For an example of this method, see [Collect logs from a JSON file with Azure Monitor Agent](/azure/azure-monitor/agents/data-collection-log-json).
## Connect with Logstash
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
This article lists all supported, out-of-the-box data connectors and links to ea
> [!IMPORTANT] > - Noted Microsoft Sentinel data connectors are currently in **Preview**. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-> - For connectors that use the Log Analytics agent, the agent will be [retired on **31 August, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). If you are using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you migrate to the the Azure Monitor Agent (AMA). For more information, see [AMA migration for Microsoft Sentinel](ama-migrate.md).
> - [!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)] Data connectors are available as part of the following offerings:
sentinel Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-transformation.md
The following table describes DCR support for Microsoft Sentinel data connector
| - | -- | | **Direct ingestion via [Logs ingestion API](/azure/azure-monitor/logs/logs-ingestion-api-overview)** | Standard DCRs | | [**AMA standard logs**](connect-services-windows-based.md), such as: <li>[Windows Security Events via AMA](./data-connectors/windows-security-events-via-ama.md)<li>[Windows Forwarded Events](./data-connectors/windows-forwarded-events.md)<li>[CEF data](connect-cef-ama.md)<li>[Syslog data](connect-cef-syslog.md) | Standard DCRs |
-| [**MMA standard logs**](connect-services-windows-based.md), such as <li>[Syslog data](connect-syslog.md)<li>[CommonSecurityLog](connect-azure-windows-microsoft-services.md) | Workspace transformation DCRs |
| [**Diagnostic settings-based connections**](connect-services-diagnostic-setting-based.md) | Workspace transformation DCRs, based on the [supported output tables](/azure/azure-monitor/logs/tables-feature-support) for specific data connectors | | **Built-in, service-to-service data connectors**, such as:<li>[Microsoft Office 365](connect-services-api-based.md)<li>[Microsoft Entra ID](connect-azure-active-directory.md)<li>[Amazon S3](connect-aws.md) | Workspace transformation DCRs, based on the [supported output tables](/azure/azure-monitor/logs/tables-feature-support) for specific data connectors | | **Built-in, API-based data connector**, such as: <li>[Codeless data connectors](create-codeless-connector.md) | Standard DCRs |
Ingestion-time data transformation currently has the following known issues for
- Data transformations using *workspace transformation DCRs* are supported only per table, and not per connector.
- There can only be one workspace transformation DCR for an entire workspace. Within that DCR, each table can use a separate input stream with its own transformation. However, if you have two different MMA-based data connectors sending data to the *Syslog* table, they will both have to use the same input stream configuration in the DCR. Splitting data to multiple destinations (Log Analytics workspaces) with a workspace transformation DCR is not possible.
+ There can only be one workspace transformation DCR for an entire workspace. Within that DCR, each table can use a separate input stream with its own transformation. Splitting data to multiple destinations (Log Analytics workspaces) with a workspace transformation DCR is not possible. AMA-based data connectors use the configuration you define in the associated DCR for input and output streams and transformations, and ignore the workspace transformation DCR.
- The following configurations are supported only via API:
Ingestion-time data transformation currently has the following known issues for
- Standard DCRs for custom log ingestion to a standard table. -- It make take up to 60 minutes for the data transformation configurations to apply.
+- It may take up to 60 minutes for the data transformation configurations to apply.
- KQL syntax: Not all operators are supported. For more information, see [**KQL limitations** and **Supported KQL features**](/azure/azure-monitor/essentials/data-collection-transformations-structure#kql-limitations) in the Azure Monitor documentation.
sentinel Entities Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/entities-reference.md
The following section contains a more in-depth look at the full schemas of each
- **NetBiosName + DnsDomain** - **AzureID** - **OMSAgentID**-- ***IoTDevice***
+- **IoTDevice**
#### Weak identifiers of a host entity
sentinel Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md
Previously updated : 07/15/2024 Last updated : 09/30/2024 #Customer intent: As a security operations manager, I want to understand the Microsoft Sentinel's feature availability across different Azure environments so that I can effectively plan and manage our security operations.
While Microsoft Sentinel is also available in the [Microsoft Defender portal](mi
|[Microsoft Purview (Preview)](connect-services-diagnostic-setting-based.md) |Public preview |&#x2705;|&#10060; |&#10060; | |[Microsoft Purview Information Protection](connect-microsoft-purview.md) |Public preview |&#x2705;| &#10060;|&#10060; | |[Office 365](connect-services-api-based.md) |GA |&#x2705;|&#x2705; |&#x2705; |
-|[Security Events via Legacy Agent](connect-services-windows-based.md#log-analytics-agent-legacy) |GA |&#x2705; |&#x2705;|&#x2705; |
|[Summary rules](summary-rules.md) | Public preview |&#x2705; | &#10060; |&#10060; | |[Syslog](connect-syslog.md) |GA |&#x2705;| &#x2705;|&#x2705; | |[Syslog via AMA](connect-cef-syslog-ama.md) |GA |&#x2705;| &#x2705;|&#x2705; |
sentinel Migration Arcsight Detection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-arcsight-detection-rules.md
SecurityEvent
| where SubjectUserName =~ "AutoMatedService" | where isnotempty(SubjectDomainName) ```
-This rule assumes that Microsoft Monitoring Agent (MMA) or Azure Monitoring Agent (AMA) collect the Windows Security Events. Therefore, the rule uses the Microsoft Sentinel SecurityEvent table.
+
+This rule assumes that the Azure Monitoring Agent (AMA) collects the Windows Security Events. Therefore, the rule uses the Microsoft Sentinel [SecurityEvent](/azure/azure-monitor/reference/tables/securityevent) table.
Consider these best practices; a short query sketch follows the list:
- To optimize your queries, avoid case-insensitive operators such as `=~` when possible.
- Use `==` if the value isn't case-sensitive.
- Order the filters so that the `where` statement that filters out the most data comes first.
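As a sketch of how these points combine, the earlier `SecurityEvent` query could be tightened as follows. The exact casing of the account name is an assumption for illustration; keep `=~` if you can't rely on casing:

```kusto
// Sketch only: assumes "AutoMatedService" is stored with this exact casing.
// The most selective filter runs first, and == avoids the cost of =~.
SecurityEvent
| where SubjectUserName == "AutoMatedService"
| where isnotempty(SubjectDomainName)
```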
sentinel Migration Track https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-track.md
To monitor deployed resources and deploy new connectors, in the **Microsoft Sent
- Current ingestion trends - Tables ingesting data - How much data each table is reporting-- Endpoints reporting with Microsoft Monitoring Agent (MMA)-- Endpoints reporting with Azure Monitoring Agent (AMA)-- Endpoints reporting with both the MMA and AMA agents
+- Endpoints reporting with Azure Monitor Agent (AMA)
- Data collection rules in the resource group and the devices linked to the rules - Data connector health (changes and failures) - Health logs within the specified time range
sentinel Monitor Data Connector Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-data-connector-health.md
There are three tabbed sections in this workbook:
:::image type="content" source="media/monitor-data-connector-health/data-health-workbook-2.png" alt-text="data connector health monitoring workbook anomalies page" lightbox="media/monitor-data-connector-health/data-health-workbook-2.png"::: -- The **Agent info** tab shows you information about the health of the Log Analytics agents installed on your various machines, whether Azure VM, other cloud VM, on-premises VM, or physical. You can monitor the following:
+- The **Agent info** tab shows you information about the health of the agents installed on your various machines, whether Azure VM, other cloud VM, on-premises VM, or physical. Monitor system location, heartbeat status and latency, available memory and disk space, and agent operations.
- - System location
-
- - Heartbeat status and latency
-
- - Available memory and disk space
-
- - Agent operations
-
- In this section you must select the tab that describes your machines' environment: choose the **Azure-managed machines** tab if you want to view only the Azure Arc-managed machines; choose the **All machines** tab to view both managed and non-Azure machines with the Log Analytics agent installed.
+ In this section you must select the tab that describes your machines' environment: choose the **Azure-managed machines** tab if you want to view only the Azure Arc-managed machines; choose the **All machines** tab to view both managed and non-Azure machines with the Azure Monitor Agent installed.
:::image type="content" source="media/monitor-data-connector-health/data-health-workbook-3.png" alt-text="data connector health monitoring workbook agent info page" lightbox="media/monitor-data-connector-health/data-health-workbook-3.png":::
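If you want to spot-check the same signal outside the workbook, the agent heartbeat is directly queryable in the workspace. A minimal sketch, with an assumed 15-minute staleness threshold you'd tune for your environment:

```kusto
// Sketch: list machines whose agent hasn't reported a heartbeat recently.
// The 15-minute threshold is an assumption; adjust it to your environment.
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(15m)
```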
sentinel Ops Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ops-guide.md
Schedule the following activities daily.
|**Explore hunting queries and bookmarks**|Explore results for all built-in queries, and update existing hunting queries and bookmarks. Manually generate new incidents or update old incidents if applicable. For more information, see:</br></br>- [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md)</br>- [Hunt for threats with Microsoft Sentinel](hunting.md)</br>- [Keep track of data during hunting with Microsoft Sentinel](bookmarks.md)| |**Analytic rules**|Review and enable new analytics rules as applicable, including both newly released or newly available rules from recently connected data connectors.| |**Data connectors**| Review the status, date, and time of the last log received from each data connector to ensure that data is flowing. Check for new connectors, and review ingestion to ensure set limits aren't exceeded. For more information, see [Data collection best practices](best-practices-data.md) and [Connect data sources](connect-data-sources.md).|
-|**Log Analytics Agent**| Verify that servers and workstations are actively connected to the workspace, and troubleshoot and remediate any failed connections. For more information, see [Log Analytics Agent overview](/azure/azure-monitor/agents/log-analytics-agent).|
+|**Azure Monitor Agent**| Verify that servers and workstations are actively connected to the workspace, and troubleshoot and remediate any failed connections. For more information, see [Azure Monitor Agent overview](/azure/azure-monitor/agents/azure-monitor-agent-overview).|
|**Playbook failures**| Verify playbook run statuses and troubleshoot any failures. For more information, see [Tutorial: Respond to threats by using playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md).| ## Weekly tasks
sentinel Sample Workspace Designs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sample-workspace-designs.md
Contoso's solution includes the following considerations:
The resulting workspace design for Contoso is illustrated in the following image: The suggested solution includes:
sentinel Collect Sap Hana Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/collect-sap-hana-audit-logs.md
This article explains how to collect audit logs from your SAP HANA database.
## Prerequisites
-SAP HANA logs are sent over Syslog. Make sure that your AMA agent or your Log Analytics agent (legacy) is configured to collect Syslog files. For more information, see:
+SAP HANA logs are sent over Syslog. Make sure that your Azure Monitor Agent is configured to collect Syslog files. For more information, see:
For more information, see [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](../connect-cef-syslog-ama.md).
For more information, see [Ingest syslog and CEF messages to Microsoft Sentinel
- [Recommendations for Auditing](https://help.sap.com/viewer/742945a940f240f4a2a0e39f93d3e2d4/2.0.05/en-US/5c34ecd355e44aa9af3b3e6de4bbf5c1.html) - [SAP HANA Security Guide for SAP HANA Platform](https://help.sap.com/docs/SAP_HANA_PLATFORM/b3ee5778bc2e4a089d3299b82ec762a7/4f7cde1125084ea3b8206038530e96ce.html)
-2. Check your operating system Syslog files for any relevant HANA database events.
+1. Check your operating system Syslog files for any relevant HANA database events.
-3. Sign into your HANA database operating system as a user with sudo privileges.
+1. Sign into your HANA database operating system as a user with sudo privileges.
-4. Install an agent on your machine and confirm that your machine is connected. For more information, see:
+1. Install an agent on your machine and confirm that your machine is connected. For more information, see [Install and manage Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-manage?tabs=azure-portal).
- - [Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-manage?tabs=azure-portal)
- - [Log Analytics Agent](/azure/azure-monitor/agents/agent-linux) (legacy)
-
-5. Configure your agent to collect Syslog data. For more information, see:
-
- - [Azure Monitor Agent](/azure/azure-monitor/agents/data-collection-syslog)
- - [Log Analytics Agent](/azure/azure-monitor/agents/data-sources-syslog) (legacy)
+1. Configure your agent to collect Syslog data. For more information, see [Collect Syslog events with Azure Monitor Agent](/azure/azure-monitor/agents/data-collection-syslog).
> [!TIP] > Because the facilities where HANA database events are saved can change between different distributions, we recommend that you add all facilities. Check them against your Syslog logs, and then remove any that aren't relevant.
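To see which facilities are actually carrying HANA-related messages before you trim the list, a workspace query like the following can help. This is a sketch only; the `hdb` match string is an assumption based on common HANA process names and may need adjusting for your system:

```kusto
// Sketch: find the Syslog facilities carrying HANA-related messages.
// "hdb" is an assumed match string for HANA process names; adjust as needed.
Syslog
| where TimeGenerated > ago(1d)
| where ProcessName has "hdb" or SyslogMessage has "hdb"
| summarize Messages = count() by Facility, ProcessName
```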
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
The first piece of information you see for each connector is its *data ingestion
| Microsoft Sentinel Data Collector API | [Connect your data source to the Microsoft Sentinel Data Collector API to ingest data](connect-rest-api-template.md) | | Azure Functions and the REST API | [Use Azure Functions to connect Microsoft Sentinel to your data source](connect-azure-functions-template.md) | | Syslog | [Ingest Syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](connect-cef-syslog-ama.md) |
-| Custom logs | [Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md) |
+| Custom logs | [Custom Logs via AMA data connector - Configure data ingestion to Microsoft Sentinel from specific applications](unified-connector-custom-device.md) |
If your source isn't available, you can [create a custom connector](create-custom-connector.md). Custom connectors use the ingestion API and therefore are similar to direct sources. You most often implement custom connectors by using Azure Logic Apps, which offers a codeless option, or Azure Functions.
Microsoft Sentinel supports two new features for data ingestion and transformati
- [**Logs ingestion API**](/azure/azure-monitor/logs/logs-ingestion-api-overview): Use it to send custom-format logs from any data source to your Log Analytics workspace and then store those logs either in certain specific standard tables, or in custom-formatted tables that you create. You can perform the actual ingestion of these logs by using direct API calls. You can use Azure Monitor [data collection rules](/azure/azure-monitor/essentials/data-collection-rule-overview) to define and configure these workflows. - [**Workspace data transformations for standard logs**](/azure/azure-monitor/essentials/data-collection-transformations-workspace): It uses [data collection rules](/azure/azure-monitor/essentials/data-collection-rule-overview) to filter out irrelevant data, to enrich or tag your data, or to hide sensitive or personal information. You can configure data transformation at ingestion time for the following types of built-in data connectors:
- - Azure Monitor agent (AMA)-based data connectors (based on the new Azure Monitor agent)
- - Microsoft Monitoring agent (MMA)-based data connectors (based on the legacy Azure Monitor Logs Agent)
- - Data connectors that use diagnostics settings
+ - Azure Monitor Agent (AMA)-based data connectors ([Syslog and CEF](connect-cef-syslog-ama.md) | [Windows DNS](connect-dns-ama.md) | [Custom](connect-custom-logs-ama.md))
+ - [Data connectors that use diagnostics settings](connect-services-diagnostic-setting-based.md)
- [Service-to-service data connectors](data-connectors-reference.md) For more information, see:
sentinel Tutorial Enrich Ip Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-enrich-ip-information.md
To complete this tutorial, make sure you have:
- A Log Analytics workspace with the Microsoft Sentinel solution deployed on it and data being ingested into it. -- An Azure user with the following roles assigned on the following resources:
+- An Azure user with the following roles assigned on the following resources:
+ - [**Microsoft Sentinel Contributor**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) on the Log Analytics workspace where Microsoft Sentinel is deployed. - [**Logic App Contributor**](../role-based-access-control/built-in-roles.md#logic-app-contributor), and **Owner** or equivalent, on whichever resource group will contain the playbook created in this tutorial.
To complete this tutorial, make sure you have:
- A (free) [VirusTotal account](https://www.virustotal.com/gui/my-apikey) will suffice for this tutorial. A production implementation requires a VirusTotal Premium account.
+- An Azure Monitor Agent installed on at least one machine in your environment, so that incidents are generated and sent to Microsoft Sentinel.
+ ## Create a playbook from a template Microsoft Sentinel includes ready-made, out-of-the-box playbook templates that you can customize and use to automate a large number of basic SecOps objectives and scenarios. Let's find one to enrich the IP address information in our incidents.
-1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), select the **Configuration** > **Automation** page. For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Configuration** > **Automation**.
+1. In Microsoft Sentinel, select **Configuration** > **Automation**.
1. From the **Automation** page, select the **Playbook templates (Preview)** tab.
-1. Filter the list of templates by tag:
- 1. Select the **Tags** filter toggle at the top of the list (to the right of the **Search** field).
-
- 1. Clear the **Select all** checkbox, then mark the **Enrichment** checkbox. Select **OK**.
-
- For example:
+1. Locate and select one of the **IP Enrichment - Virus Total report** templates, for the entity, incident, or alert trigger. If needed, filter the list by the **Enrichment** tag to find your templates.
+
+1. Select **Create playbook** from the details pane. For example:
- :::image type="content" source="media/tutorial-enrich-ip-information/1-filter-playbook-template-list.png" alt-text="Screenshot of list of playbook templates to be filtered by tags." lightbox="media/tutorial-enrich-ip-information/1-filter-playbook-template-list.png":::
-
-1. Select the **IP Enrichment - Virus Total report** template, and select **Create playbook** from the details pane.
-
- :::image type="content" source="media/tutorial-enrich-ip-information/2-select-playbook-template.png" alt-text="Screenshot of selecting a playbook template from which to create a playbook." lightbox="media/tutorial-enrich-ip-information/2-select-playbook-template.png":::
+ :::image type="content" source="media/restore/select-virus-total.png" alt-text="Screenshot of the IP Enrichment - Virus Total Report - Entity Trigger template selected.":::
1. The **Create playbook** wizard will open. In the **Basics** tab:
- 1. Select your **Subscription**, **Resource group**, and **Region** from their respective drop-down lists.
- 1. Edit the **Playbook name** by adding to the end of the suggested name "*Get-VirusTotalIPReport*". (This way you'll be able to tell which original template this playbook came from, while still ensuring that it has a unique name in case you want to create another playbook from this same template.) Let's call it "*Get-VirusTotalIPReport-Tutorial-1*".
+ 1. Select your **Subscription**, **Resource group**, and **Region** from their respective drop-down lists.
- 1. Let's leave the last two checkboxes unmarked as they are, as we don't need these services in this case:
- - Enable diagnostics logs in Log Analytics
- - Associate with integration service environment
+ 1. Edit the **Playbook name** by adding to the end of the suggested name "*Get-VirusTotalIPReport*". This way you'll be able to tell which original template this playbook came from, while still ensuring that it has a unique name in case you want to create another playbook from this same template. Let's call it "*Get-VirusTotalIPReport-Tutorial-1*".
- :::image type="content" source="media/tutorial-enrich-ip-information/3-playbook-basics-tab.png" alt-text="Screenshot of the Basics tab from the playbook creation wizard.":::
+ 1. Leave the **Enable diagnostics logs in Log Analytics** option unchecked.
1. Select **Next : Connections >**.
Microsoft Sentinel includes ready-made, out-of-the-box playbook templates that y
1. Leave the **Microsoft Sentinel** connection as is (it should say "*Connect with managed identity*").
- 2. If any connections say "*New connection will be configured*," you will be prompted to do so at the next stage of the tutorial. Or, if you already have connections to these resources, select the expander arrow to the left of the connection and choose an existing connection from the expanded list. For this exercise, we'll leave it as is.
+ 2. If any connections say "*New connection will be configured*," you're prompted to do so at the next stage of the tutorial. Or, if you already have connections to these resources, select the expander arrow to the left of the connection and choose an existing connection from the expanded list. For this exercise, we'll leave it as is.
:::image type="content" source="media/tutorial-enrich-ip-information/4-playbook-connections-tab.png" alt-text="Screenshot of the Connections tab of the playbook creation wizard."::: 1. Select **Next : Review and create >**.
-1. In the **Review and create** tab, review all the information you've entered as it's displayed here, and select **Create and continue to designer**.
+1. In the **Review and create** tab, review all the information you've entered as it's displayed here, and select **Create playbook**.
:::image type="content" source="media/tutorial-enrich-ip-information/5-playbook-review-tab.png" alt-text="Screenshot of the Review and create tab from the playbook creation wizard.":::
Microsoft Sentinel includes ready-made, out-of-the-box playbook templates that y
## Authorize logic app connections
-Recall that when we created the playbook from the template, we were told that the Azure Log Analytics Data Collector and Virus Total connections would be configured later.
+Recall that when we created the playbook from the template, we were told that the Azure Log Analytics Data Collector and Virus Total connections would be configured later.
:::image type="content" source="media/tutorial-enrich-ip-information/7-authorize-connectors.png" alt-text="Screenshot of review information from playbook creation wizard.":::
Here's where we do that.
### Authorize Virus Total connection
-1. Select the **For each** action to expand it and review its contents (the actions that will be performed for each IP address).
+1. Select the **For each** action to expand it and review its contents, which include the actions that will be performed for each IP address. For example:
:::image type="content" source="media/tutorial-enrich-ip-information/8-for-each-loop.png" alt-text="Screenshot of for-each loop statement action in logic app designer.":::
-1. The first action item you see is labeled **Connections** and has an orange warning triangle.
+1. The first action item you see is labeled **Connections** and has an orange warning triangle.
- (If instead, that first action is labeled **Get an IP report (Preview)**, that means you already have an existing connection to Virus Total and you can go to the [next step](#next-step-condition).)
+ If instead, that first action is labeled **Get an IP report (Preview)**, that means you already have an existing connection to Virus Total and you can go to the [next step](#next-step-condition).
- 1. Select the **Connections** action to open it.
+ 1. Select the **Connections** action to open it.
1. Select the icon in the **Invalid** column for the displayed connection. :::image type="content" source="media/tutorial-enrich-ip-information/9-virus-total-invalid.png" alt-text="Screenshot of invalid Virus Total connection configuration.":::
But as you'll see, we have more invalid connections we need to authorize.
:::image type="content" source="media/tutorial-enrich-ip-information/15-log-analytics-connection.png" alt-text="Screenshot shows how to enter Workspace ID and key and other connection details for Log Analytics.":::
-1. Enter "*Log Analytics*" as the **Connection name**.
+1. Enter "*Log Analytics*" as the **Connection name**.
-1. For **Workspace Key** and **Workspace ID**, copy and paste the key and ID from your Log Analytics workspace settings. They can be found in the **Agents management** page, inside the **Log Analytics agent instructions** expander.
+1. For **Workspace ID**, copy and paste the ID from the **Overview** page of the Log Analytics workspace settings.
1. Select **Update**.
If you're not going to continue to use this automation scenario, delete the play
1. Mark the check box next to your automation rule in the list, and select **Delete** from the top banner. (If you don't want to delete it, you can select **Disable** instead.)
-## Next steps
+## Related content
Now that you've learned how to automate a basic incident enrichment scenario, learn more about automation and other scenarios you can use it in.
static-web-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/quotas.md
App limits:
| Total storage (all environments) | 500 MB | 2 GB | 2 GB | | Storage (single environment) | 250 MB | 500 MB | 500 MB | | File count | 15,000 | 15,000 | 15,000 |
-| [Custom domains][1] | 2 | 5 | 5 |
+| [Custom domains][1] | 2 | 6 | 6 |
| [Private endpoint][4] | Unavailable | 1 | 1 | | Allowed IP range restrictions | Unavailable | 25 | 25 | | [Authorization (custom roles)][2] | | | |
storage Storage Files Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-overview.md
The following table classifies Microsoft tools and their current suitability for
| :-: | :-- | :- | :- | |![Yes, recommended](medi) | Supported. | Full fidelity.* | |![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| RoboCopy | Supported. Azure file shares can be mounted as network drives. | Full fidelity.* |
-|![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| Azure File Sync | Natively integrated into Azure file shares. | Full fidelity.* |
+|![Yes, recommended](medi) | Natively integrated into Azure file shares. | Full fidelity.* |
|![Yes, recommended](medi) | Supported. | Full fidelity.* | |![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| Storage Migration Service | Indirectly supported. Azure file shares can be mounted as network drives on SMS target servers. | Full fidelity.* | |![Yes, recommended](medi) to load files onto the device)| Supported. </br>(Data Box Disks doesn't support large file shares) | Data Box and Data Box Heavy fully support metadata. </br>Data Box Disks does not preserve file metadata. |
synapse-analytics Restore Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/backuprestore/restore-sql-pool.md
This timeout can be ignored. Review the dedicated SQL pool page in the Azure por
- [User-defined restore points](sqlpool-create-restore-point.md) - [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json)-- [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772)
+- [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](../sql/overview-difference-between-formerly-sql-dw-workspace.md)
synapse-analytics Security White Paper Threat Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-threat-protection.md
For an overview of Azure compliance offerings, download the latest version of th
For more information related to this white paper, check out the following resources: -- [Azure Synapse Analytics Blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/bg-p/AzureSynapseAnalyticsBlog) - [Azure security baseline for Azure Synapse dedicated SQL pool (formerly SQL DW)](/security/benchmark/azure/baselines/synapse-analytics-security-baseline) - [Overview of the Microsoft cloud security benchmark](/security/benchmark/azure/overview) - [Security baselines for Azure](/security/benchmark/azure/security-baselines-overview)
synapse-analytics Synapse Machine Learning Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/synapse-machine-learning-library.md
SynapseML is generally available on Azure Synapse Analytics with enterprise supp
## Next steps
-* To learn more about SynapseML, see the [blog post.](https://www.microsoft.com/en-us/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/)
+* To learn more about SynapseML, see [SynapseML: A simple, multilingual, and massively parallel machine learning library](https://www.microsoft.com/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/).
* [Install SynapseML and get started with examples.](https://microsoft.github.io/SynapseML/docs/Get%20Started/Install%20SynapseML/)
synapse-analytics Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md
In this article, you'll learn how to use backup and restore in Azure Synapse ded
Use dedicated SQL pool restore points to recover or copy your data warehouse to a previous state in the primary region. Use data warehouse geo-redundant backups to restore to a different geographical region. > [!NOTE]
-> Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. To enable workspace features for an existing dedicated SQL pool (formerly SQL DW) refer to [How to enable a workspace for your dedicated SQL pool (formerly SQL DW)](workspace-connected-create.md). For more information, see [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/msft-docs-what-s-the-difference-between-synapse-formerly-sql-dw/ba-p/3597772).
+> Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. To enable workspace features for an existing dedicated SQL pool (formerly SQL DW) refer to [How to enable a workspace for your dedicated SQL pool (formerly SQL DW)](workspace-connected-create.md). For more information, see [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
## What is a data warehouse snapshot
synapse-analytics Pause And Resume Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-portal.md
You can use the Azure portal to pause and resume the dedicated SQL pool compute
If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. > [!NOTE]
-> This article applies to dedicated SQL pools created in Azure Synapse Workspaces and not dedicated SQL pools (formerly SQL DW). There are different PowerShell cmdlets to use for each, for example, use `Suspend-AzSqlDatabase` for a dedicated SQL pool (formerly SQL DW), but `Suspend-AzSynapseSqlPool` for a dedicated SQL pool in an Azure Synapse Workspace. For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+> This article applies to dedicated SQL pools created in Azure Synapse Workspaces and not dedicated SQL pools (formerly SQL DW). There are different PowerShell cmdlets to use for each, for example, use `Suspend-AzSqlDatabase` for a dedicated SQL pool (formerly SQL DW), but `Suspend-AzSynapseSqlPool` for a dedicated SQL pool in an Azure Synapse Workspace. For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
## Sign in to the Azure portal
Follow these steps to clean up resources as you desire.
- You have now paused and resumed compute for your dedicated SQL pool. Continue to the next article to learn more about how to [Load data into a dedicated SQL pool](./load-data-from-azure-blob-storage-using-copy.md). For additional information about managing compute capabilities, see the [Manage compute overview](sql-data-warehouse-manage-compute-overview.md) article. -- For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+- For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
synapse-analytics Pause And Resume Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-powershell.md
If you don't have an Azure subscription, create a [free Azure account](https://a
> [!NOTE] > This article applies to dedicated SQL pools (formerly SQL DW) and not dedicated SQL pools created in Azure Synapse Workspaces. There are different PowerShell cmdlets to use for each, for example, use `Suspend-AzSqlDatabase` for a dedicated SQL pool (formerly SQL DW), but `Suspend-AzSynapseSqlPool` for a dedicated SQL pool in an Azure Synapse Workspace. For instructions to pause and resume a dedicated SQL pool in an Azure Synapse Workspace, see [Quickstart: Pause and resume compute in dedicated SQL pool in an Azure Synapse Workspace with Azure PowerShell](pause-and-resume-compute-workspace-powershell.md).
-> For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+> For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
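For reference, a minimal sketch of the dedicated SQL pool (formerly SQL DW) cmdlets named in the note above; resource names are placeholders and the `Az.Sql` module is assumed.

```powershell
# Pause compute for a dedicated SQL pool (formerly SQL DW); resource names are placeholders.
Suspend-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" -DatabaseName "mySampleDataWarehouse"

# Resume compute when you're ready to run workloads again.
Resume-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" -DatabaseName "mySampleDataWarehouse"
```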
## Before you begin
Follow these steps to clean up resources as you desire.
- To learn more about SQL pool, continue to the [Load data into dedicated SQL pool (formerly SQL DW)](./load-data-from-azure-blob-storage-using-copy.md) article. For additional information about managing compute capabilities, see the [Manage compute overview](sql-data-warehouse-manage-compute-overview.md) article. -- For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+- For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
synapse-analytics Pause And Resume Compute Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-workspace-powershell.md
If you don't have an Azure subscription, create a [free Azure account](https://a
> [!NOTE] > This article applies to dedicated SQL pools created in Azure Synapse Workspaces and not dedicated SQL pools (formerly SQL DW). There are different PowerShell cmdlets to use for each, for example, use `Suspend-AzSqlDatabase` for a dedicated SQL pool (formerly SQL DW), but `Suspend-AzSynapseSqlPool` for a dedicated SQL pool in an Azure Synapse Workspace. For instructions to pause and resume a dedicated SQL pool (formerly SQL DW), see [Quickstart: Pause and resume compute in dedicated SQL pool (formerly SQL DW) with Azure PowerShell](pause-and-resume-compute-powershell.md).
-> For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+> For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
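For reference, a minimal sketch of the workspace cmdlets named in the note above; resource names are placeholders and the `Az.Synapse` module is assumed.

```powershell
# Pause compute for a dedicated SQL pool in an Azure Synapse workspace; names are placeholders.
Suspend-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" -WorkspaceName "myworkspace" -Name "mySqlPool"

# Resume compute for the same pool.
Resume-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" -WorkspaceName "myworkspace" -Name "mySqlPool"
```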
## Before you begin
Follow these steps to clean up resources as you desire.
- To get started with Azure Synapse Analytics, see [Get Started with Azure Synapse Analytics](../get-started.md). - To learn more about dedicated SQL pools in Azure Synapse Analytics, see [What is dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics?](sql-data-warehouse-overview-what-is.md) - To learn more about SQL pool, continue to the [Load data into dedicated SQL pool (formerly SQL DW)](./load-data-from-azure-blob-storage-using-copy.md) article. For additional information about managing compute capabilities, see the [Manage compute overview](sql-data-warehouse-manage-compute-overview.md) article.-- For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+- For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
- See [Quickstart: Scale compute for dedicated SQL pools in Azure Synapse Workspaces with Azure PowerShell](quickstart-scale-compute-workspace-powershell.md)
synapse-analytics Quickstart Configure Workload Isolation Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-configure-workload-isolation-portal.md
# Quickstart: Configure dedicated SQL pool workload isolation using a workload group in the Azure portal
-In this quickstart, you will configure [workload isolation](sql-data-warehouse-workload-isolation.md) by creating a workload group for reserving resources. For purposes of this tutorial, we will create the workload group for data loading called `DataLoads`. The workload group will reserve 20% of the system resources. With 20% isolation for data loads, they are guaranteed resources that allow them to hit SLAs. After creating the workload group, [create a workload classifier](quickstart-create-a-workload-classifier-portal.md) to assign queries to this workload group.
+In this quickstart, you will configure [workload isolation](sql-data-warehouse-workload-isolation.md) by creating a workload group for reserving resources. For purposes of this tutorial, we will create the workload group for data loading called `DataLoads`. The workload group will reserve 20% of the system resources. With 20% isolation for data loads, they are guaranteed resources that allow them to hit SLAs. After creating the workload group, [create a workload classifier](quickstart-create-a-workload-classifier-portal.md) to assign queries to this workload group.
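For readers who prefer scripting over the portal, the following hedged sketch creates the same `DataLoads` workload group with T-SQL issued from PowerShell; it assumes the `SqlServer` module, and the server and database names are placeholders.

```powershell
# Hedged sketch: create the DataLoads workload group with 20% isolation.
# REQUEST_MIN_RESOURCE_GRANT_PERCENT must be a factor of MIN_PERCENTAGE_RESOURCE.
$query = @"
CREATE WORKLOAD GROUP DataLoads WITH
(
    MIN_PERCENTAGE_RESOURCE = 20,
    CAP_PERCENTAGE_RESOURCE = 20,
    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 5
);
"@
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "mySampleDataWarehouse" -Credential (Get-Credential) -Query $query
```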
If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
If you don't have an Azure subscription, create a [free Azure account](https://a
Sign in to the [Azure portal](https://portal.azure.com/). > [!NOTE]
-> Creating a dedicated SQL pool instance in Azure Synapse Analytics may result in a new billable service. For more information, see [Azure Synapse Analytics pricing](https://azure.microsoft.com/pricing/details/sql-data-warehouse/).
+> Creating a dedicated SQL pool instance in Azure Synapse Analytics may result in a new billable service. For more information, see [Azure Synapse Analytics pricing](https://azure.microsoft.com/pricing/details/sql-data-warehouse/).
## Prerequisites
To create a workload group with 20 percent isolation:
:::image type="content" source="./media/quickstart-configure-workload-isolation-portal/configure-wg.png" alt-text="A screenshot of the Azure portal, the workload management page for a dedicated SQL pool. Select Save.":::
-A portal notification appears when the workload group is created. The workload group resources are displayed in the charts below the configured values.
+A portal notification appears when the workload group is created. The workload group resources are displayed in the charts below the configured values.
:::image type="content" source="./media/quickstart-configure-workload-isolation-portal/display-wg.png" alt-text="A screenshot of the Azure portal, showing visualizations for workload group parameters.":::
Follow these steps to clean up resources.
:::image type="content" source="./media/load-data-from-azure-blob-storage-using-polybase/clean-up-resources.png" alt-text="A screenshot of the Azure portal, the workload management page for a dedicated SQL pool. The Delete workload group option is highlighted.":::
-1. To pause compute, select the **Pause** button. When the data warehouse is paused, you see a **Start** button. To resume compute, select **Start**.
+1. To pause compute, select the **Pause** button. When the data warehouse is paused, you see a **Start** button. To resume compute, select **Start**.
1. To remove the data warehouse so you're not charged for compute or storage, select **Delete**. ## Next steps
-To use the `DataLoads` workload group, a [workload classifier](/sql/t-sql/statements/create-workload-classifier-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) needs to be created to route requests to the workload group. Continue to the [create workload classifier](quickstart-create-a-workload-classifier-portal.md) tutorial to create a workload classifier for `DataLoads`.
+To use the `DataLoads` workload group, a [workload classifier](/sql/t-sql/statements/create-workload-classifier-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) needs to be created to route requests to the workload group. Continue to the [create workload classifier](quickstart-create-a-workload-classifier-portal.md) tutorial to create a workload classifier for `DataLoads`.
## See also - [Manage and monitor Workload Management](sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md)-- [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/msft-docs-what-s-the-difference-between-synapse-formerly-sql-dw/ba-p/3597772).
+- [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
synapse-analytics Quickstart Scale Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md
If you don't have an Azure subscription, create a [free Azure account](https://a
> [!NOTE] > This article applies to dedicated SQL pools (formerly SQL DW). This content does not apply to dedicated SQL pools in an Azure Synapse Analytics workspace. For similar instructions for dedicated SQL pools in an Azure Synapse Analytics workspace, see [Quickstart: Scale compute for an Azure Synapse dedicated SQL pool in a Synapse workspace with the Azure portal](quickstart-scale-compute-workspace-portal.md).
-> For more on the differences between dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+> For more on the differences between dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
## Sign in to the Azure portal
synapse-analytics Quickstart Scale Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-powershell.md
If you don't have an Azure subscription, create a [free Azure account](https://a
> [!NOTE] > This article applies to dedicated SQL pools (formerly SQL DW), including those in Azure Synapse connected workspaces. This content does not apply to dedicated SQL pools created in Azure Synapse workspaces. There are different PowerShell cmdlets to use for each, for example, use `Set-AzSqlDatabase` for a dedicated SQL pool (formerly SQL DW), but `Update-AzSynapseSqlPool` for a dedicated SQL pool in an Azure Synapse Workspace. For similar instructions for dedicated SQL pools in Azure Synapse Analytics workspaces, see [Quickstart: Scale compute for dedicated SQL pools in Azure Synapse workspaces with Azure PowerShell](quickstart-scale-compute-workspace-powershell.md).
-> For more on the differences between dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+> For more on the differences between dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
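For reference, a minimal sketch of the scale operation with the formerly SQL DW cmdlet named in the note; the resource names and target service objective are placeholders.

```powershell
# Scale a dedicated SQL pool (formerly SQL DW) to DW300c; resource names are placeholders.
Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" -DatabaseName "mySampleDataWarehouse" -RequestedServiceObjectiveName "DW300c"
```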
## Before you begin
synapse-analytics Quickstart Scale Compute Workspace Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-workspace-portal.md
If you don't have an Azure subscription, create a [free Azure account](https://a
> [!NOTE] > This article applies to dedicated SQL pools created in Azure Synapse Analytics workspaces. This content does not apply to dedicated SQL pools (formerly SQL DW) or dedicated SQL pools (formerly SQL DW) in connected workspaces. For similar instructions for dedicated SQL pools (formerly SQL DW), see [Quickstart: Scale compute for an Azure Synapse dedicated SQL pool (formerly SQL DW) with the Azure portal](quickstart-scale-compute-portal.md).
-> For more on the differences between dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+> For more on the differences between dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
## Sign in to the Azure portal
synapse-analytics Quickstart Scale Compute Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-workspace-powershell.md
If you don't have an Azure subscription, create a [free Azure account](https://a
> [!NOTE] > This article applies to dedicated SQL pools created in Azure Synapse Analytics workspaces. This content does not apply to dedicated SQL pools (formerly SQL DW) or dedicated SQL pools (formerly SQL DW) in connected workspaces. There are different PowerShell cmdlets to use for each, for example, use `Set-AzSqlDatabase` for a dedicated SQL pool (formerly SQL DW), but `Update-AzSynapseSqlPool` for a dedicated SQL pool in an Azure Synapse Workspace. For similar instructions for dedicated SQL pools (formerly SQL DW), see [Quickstart: Scale compute for dedicated SQL pools (formerly SQL DW) using Azure PowerShell](quickstart-scale-compute-powershell.md).
-> For more on the differences between dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+> For more on the differences between dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
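For reference, a minimal sketch of the equivalent scale operation with the workspace cmdlet named in the note; the resource names and target performance level are placeholders.

```powershell
# Scale a dedicated SQL pool in an Azure Synapse workspace to DW300c; names are placeholders.
Update-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" -WorkspaceName "myworkspace" -Name "mySqlPool" -PerformanceLevel "DW300c"
```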
## Before you begin
synapse-analytics Release Notes 10 0 10106 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/release-notes-10-0-10106-0.md
This article summarizes the new features and improvements in the recent releases of [dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-overview-what-is.md) in Azure Synapse Analytics. The article also lists notable content updates that aren't directly related to the release but published in the same time frame. For improvements to other Azure services, see [Service updates](https://azure.microsoft.com/updates).
-> [!NOTE]
-> For the newest release updates on Azure Synapse Analytics, including dedicated SQL pools, please refer to the [Azure Synapse Analytics blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/bg-p/AzureSynapseAnalyticsBlog/label-name/Monthly%20Update), [What's new in Azure Synapse Analytics?](../whats-new.md), or the Synapse Studio homepage in the Azure portal.
- ## Check your dedicated SQL pool (formerly SQL DW) version As new features are rolled out to all regions, check the version deployed to your instance and the latest release notes for feature availability. To check the version, connect to your dedicated SQL pool (formerly SQL DW) via SQL Server Management Studio (SSMS) and run `SELECT @@VERSION;` to return the current version. Use this version to confirm which release has been applied to your dedicated SQL pool (formerly SQL DW). The date in the output identifies the month for the release applied to your dedicated SQL pool (formerly SQL DW). This only applies to service-level improvements.
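If you'd rather not open SSMS, a hedged sketch of the same version check from PowerShell follows; it assumes the `SqlServer` module, and the server and database names are placeholders.

```powershell
# Return the version string so you can match it against the release notes.
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "mySampleDataWarehouse" -Credential (Get-Credential) -Query "SELECT @@VERSION;"
```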
synapse-analytics Sql Data Warehouse Overview What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md
Dedicated SQL pool (formerly SQL DW) represents a collection of analytic resourc
Once your dedicated SQL pool is created, you can import big data with simple [PolyBase](/sql/relational-databases/polybase/polybase-guide?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) T-SQL queries, and then use the power of the distributed query engine to run high-performance analytics. As you integrate and analyze the data, dedicated SQL pool (formerly SQL DW) will become the single version of truth your business can count on for faster and more robust insights. > [!NOTE]
-> Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. To enable workspace features for an existing dedicated SQL pool (formerly SQL DW) refer to [How to enable a workspace for your dedicated SQL pool (formerly SQL DW)](workspace-connected-create.md). For more information, see [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/msft-docs-what-s-the-difference-between-synapse-formerly-sql-dw/ba-p/3597772). Explore the [Azure Synapse Analytics documentation](../overview-what-is.md) and [Get Started with Azure Synapse](../get-started.md).
+> Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. To enable workspace features for an existing dedicated SQL pool (formerly SQL DW) refer to [How to enable a workspace for your dedicated SQL pool (formerly SQL DW)](workspace-connected-create.md). For more information, see [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](../sql/overview-difference-between-formerly-sql-dw-workspace.md). Explore the [Azure Synapse Analytics documentation](../overview-what-is.md) and [Get Started with Azure Synapse](../get-started.md).
## Key component of a big data solution
The analysis results can go to worldwide reporting databases or applications. Bu
- [Load sample data](./load-data-from-azure-blob-storage-using-copy.md). - Explore [Videos](https://azure.microsoft.com/documentation/videos/index/?services=sql-data-warehouse) - [Get Started with Azure Synapse](../get-started.md)-- [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/msft-docs-what-s-the-difference-between-synapse-formerly-sql-dw/ba-p/3597772)
+- [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](../sql/overview-difference-between-formerly-sql-dw-workspace.md)
Or look at some of these other Azure Synapse resources:
synapse-analytics Sql Data Warehouse Reference Powershell Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-powershell-cmdlets.md
Many dedicated SQL pool administrative tasks can be managed using either Azure P
[!INCLUDE [updated-for-az](~/reusable-content/ce-skilling/azure/includes/updated-for-az.md)] > [!NOTE]
-> This article applies for standalone dedicated SQL pools (formerly SQL DW) and are not applicable to a dedicated SQL pool created in an Azure Synapse Analytics workspace. There are different PowerShell cmdlets to use for each, for example, use [Suspend-AzSqlDatabase](/powershell/module/az.sql/suspend-azsqldatabase) for a dedicated SQL pool (formerly SQL DW), but [Suspend-AzSynapseSqlPool](/powershell/module/az.synapse/suspend-azsynapsesqlpool) for a dedicated SQL pool in an Azure Synapse Workspace. For instructions to pause and resume a dedicated SQL pool created in an Azure Synapse Analytics workspace, see [Quickstart: Pause and resume compute in dedicated SQL pool in a Synapse Workspace with Azure PowerShell](pause-and-resume-compute-workspace-powershell.md). For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
+> This article applies to standalone dedicated SQL pools (formerly SQL DW) and isn't applicable to a dedicated SQL pool created in an Azure Synapse Analytics workspace. There are different PowerShell cmdlets to use for each, for example, use [Suspend-AzSqlDatabase](/powershell/module/az.sql/suspend-azsqldatabase) for a dedicated SQL pool (formerly SQL DW), but [Suspend-AzSynapseSqlPool](/powershell/module/az.synapse/suspend-azsynapsesqlpool) for a dedicated SQL pool in an Azure Synapse Workspace. For instructions to pause and resume a dedicated SQL pool created in an Azure Synapse Analytics workspace, see [Quickstart: Pause and resume compute in dedicated SQL pool in a Synapse Workspace with Azure PowerShell](pause-and-resume-compute-workspace-powershell.md). For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
## Get started with Azure PowerShell cmdlets
synapse-analytics Workspace Connected Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/workspace-connected-create.md
All SQL data warehouse users can now access and use an existing dedicated SQL pool (formerly SQL DW) instance via the Synapse Studio and Azure Synapse workspace. Users can use the Synapse Studio and Workspace without impacting automation, connections, or tooling. This article explains how an existing Azure Synapse Analytics user can enable the Synapse workspace features for an existing dedicated SQL pool (formerly SQL DW). The user can expand their existing analytics solution by taking advantage of the new feature-rich capabilities now available via the Synapse workspace and Studio.
-Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. This article is a guide to enable workspace features for an existing dedicated SQL pool (formerly SQL DW). For more information, see [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/msft-docs-what-s-the-difference-between-synapse-formerly-sql-dw/ba-p/3597772).
+Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. This article is a guide to enable workspace features for an existing dedicated SQL pool (formerly SQL DW). For more information, see [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](../sql/overview-difference-between-formerly-sql-dw-workspace.md).
## Prerequisites Before you enable the Synapse workspace features on your data warehouse, you must ensure you have:
synapse-analytics Overview Difference Between Formerly Sql Dw Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-difference-between-formerly-sql-dw-workspace.md
+
+ Title: Difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics workspaces
+description: Learn about the history and difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics workspaces.
+++++ Last updated : 10/06/2024++
+# Difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics workspaces
+
+*Originally posted as a techcommunity blog at: https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772*
+
+There has been confusion for a while when it comes to Microsoft Docs and the two distinct sets of documentation for dedicated SQL pools. When you do an internet search for an Azure Synapse related doc and land on the Microsoft Learn Docs site, the Table of Contents has a toggle switch between two sets of documentation.
+
+This article clarifies which documentation applies to your Synapse Analytics environment.
+
+|Azure Synapse Analytics |Dedicated SQL pools (formerly SQL DW) |
+|||
+|:::image type="content" source="media/overview-difference-between-formerly-sql-dw-workspace/switch-to-azure-synapse.png" alt-text="Screenshot from the Microsoft Learn Docs site showing the Azure Synapse Analytics table of contents."::: | :::image type="content" source="media/overview-difference-between-formerly-sql-dw-workspace/switch-to-dedicated-sql-pool-formerly-sql-dw.png" alt-text="Screenshot from the Microsoft Learn Docs site showing the older dedicated SQL pool (formerly SQL DW) table of contents."::: |
+
+You'll also see notes in many docs trying to highlight which Synapse implementation of dedicated SQL pools the document is referencing.
+
+## Dedicated SQL pools exist in two different modalities
+
+Standalone or existing SQL Data Warehouses were renamed to "dedicated SQL pools (formerly SQL DW)" in November 2020. Ever since, dedicated SQL pools created within Synapse Analytics are "dedicated SQL pools in Synapse workspaces."
+
+Circa 2016, Microsoft adapted its massively parallel processing (MPP) on-premises appliance to the cloud as "Azure SQL Data Warehouse" or "SQL DW" for short.
+
+Historians remember the appliance was named Parallel Data Warehouse (PDW) and then Analytics Platform System (APS), which still powers many on-premises data warehousing solutions today.
+
+Azure SQL Data Warehouse adopted the constructs of Azure SQL DB such as a logical server where administration and networking are controlled. SQL DW could exist on the same server as other SQL DBs. This implementation made it easy for current Azure SQL DB administrators and practitioners to apply the same concepts to the data warehouse.
+
+However, the analytics and insights space has gone through massive changes since 2016. We made a paradigm shift in how data warehousing would be delivered. As SQL DW handled the warehousing, the Synapse workspace expanded upon that and rounded out the analytics portfolio. The new Synapse Workspace experience became generally available in 2020.
++
+The original SQL DW component is just one part of this. It became known as a dedicated SQL pool.
++
+This was a big change that came with more capabilities. The whole platform received a fitting new name: Synapse Analytics.
+
+But what about all the existing SQL DWs? Would they automatically become Synapse Workspaces?
+
+## Rebranding and migration
+
+Azure SQL DW instances weren't automatically upgraded to Synapse Analytics workspaces.
+
+Many factors play into big platform upgrades, and it was best to allow customers to opt in for this. Azure SQL DW was rebranded as "Dedicated SQL pool (formerly SQL DW)" with the intention of making clear that the former SQL DW is in fact the same artifact that lives within Synapse Analytics.
++
+In documentation, you'll also see "Dedicated SQL pool (formerly SQL DW)" referred to as "standalone dedicated SQL pool".
+
+[Migration of a dedicated SQL pool (formerly SQL DW)](../sql-data-warehouse/workspace-connected-create.md) is, in relative terms, easy with just a few steps from the Azure portal. However, it isn't quite a full migration. There's a subtle difference, which you can see in the notification that pops up in the Azure portal.
++
+In a migration, the dedicated SQL pool (formerly SQL DW) is never really migrated. It stays on the logical server it was originally on. The server DNS `server-123.database.windows.net` never becomes `server-123.sql.azuresynapse.net`. Customers that "upgraded" or "migrated" a SQL DW to Synapse Analytics still have a full logical server that can also be shared with Azure SQL Database databases.
+
+## The Migrated SQL DW and Synapse workspace
+
+The upgrade or migration path described in the previous section connects the dedicated SQL pool (formerly SQL DW) to a Synapse workspace. For migrated environments, use documentation in [dedicated SQL pool (formerly SQL DW)](../sql-data-warehouse/sql-data-warehouse-overview-what-is.md) for dedicated SQL pool scenarios. All of the other components of Synapse Analytics would be accessed from the [Synapse Analytics documentation](../overview-what-is.md).
+
+A quick way to visualize this is as a "blend" of all the additional Synapse Analytics workspace capabilities and the original SQL DW.
++
+If you never migrated a SQL DW and you started your journey with creating a Synapse Analytics Workspace, then you simply use [Synapse Analytics documentation](../overview-what-is.md).
++
+## PowerShell differences
+
+One of the biggest areas of confusion in documentation between "dedicated SQL pool (formerly SQL DW)" and "Synapse Analytics" dedicated SQL pools is PowerShell.
+
+The original SQL DW implementation uses a logical server that is the same as Azure SQL Database. There's a shared PowerShell module named [Az.Sql](/powershell/module/az.sql). In this module, to create a new dedicated SQL pool (formerly SQL DW), the cmdlet [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase) has an `Edition` parameter that's used to indicate that you want a `DataWarehouse`.
+
+When Synapse Analytics was released, it came with a different PowerShell module, [Az.Synapse](/powershell/module/az.synapse). To create a dedicated SQL pool in a Synapse Analytics Workspace, you use [New-AzSynapseSqlPool](/powershell/module/az.synapse/new-azsynapsesqlpool). In this PowerShell module, there's no need to include an `Edition` parameter, because the module is used exclusively for Synapse.
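As a hedged illustration of the difference described above (resource names, performance levels, and service objectives are placeholders):

```powershell
# Dedicated SQL pool (formerly SQL DW): created on a logical SQL server with Az.Sql,
# using Edition to request a data warehouse.
New-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -DatabaseName "mySampleDataWarehouse" -Edition "DataWarehouse" -RequestedServiceObjectiveName "DW100c"

# Dedicated SQL pool in a Synapse workspace: created with Az.Synapse, no Edition parameter needed.
New-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" -WorkspaceName "myworkspace" `
    -Name "mySqlPool" -PerformanceLevel "DW100c"
```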
+
+These two modules ARE NOT equal in all cases. There are some actions that can be done in `Az.Sql` that can't be done in `Az.Synapse`. For instance, performing a restore for a dedicated SQL pool (formerly SQL DW) uses the `Restore-AzSqlDatabase` cmdlet while Synapse Analytics uses `Restore-AzSynapseSqlPool`. However, the action to [restore across a subscription boundary](../sql-data-warehouse/sql-data-warehouse-restore-active-paused-dw.md#restore-an-existing-dedicated-sql-pool-formerly-sql-dw-to-a-different-subscription-through-powershell) is only available in the `Az.Sql` module with `Restore-AzSqlDatabase`.
+
+## Related content
+
+- [What is Azure Synapse Analytics?](../overview-what-is.md)
+- [What is dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics?](../sql-data-warehouse/sql-data-warehouse-overview-what-is.md)
+
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
There are several mitigation steps that you can do to avoid this:
### Missing column when using automatic schema inference
-You can easily query files without knowing or specifying schema, by omitting WITH clause. In that case column names and data types will be inferred from the files. Have in mind that if you're reading number of files at once, the schema will be inferred from the first file service gets from the storage. This can mean that some of the columns expected are omitted, all because the file used by the service to define the schema did not contain these columns. To explicitly specify the schema, use OPENROWSET WITH clause. If you specify schema (by using external table or OPENROWSET WITH clause) default lax path mode will be used. That means that the columns that donΓÇÖt exist in some files will be returned as NULLs (for rows from those files). To understand how path mode is used, check the following [documentation](../sql/develop-openrowset.md) and [sample](../sql/develop-openrowset.md#specify-columns-using-json-paths).
+You can easily query files without knowing or specifying the schema by omitting the WITH clause. In that case, column names and data types are inferred from the files. Keep in mind that if you're reading a number of files at once, the schema is inferred from the first file the service gets from the storage. This can mean that some of the expected columns are omitted, because the file used by the service to define the schema didn't contain these columns. To explicitly specify the schema, use the OPENROWSET WITH clause. If you specify the schema (by using an external table or the OPENROWSET WITH clause), the default lax path mode is used. That means that columns that don't exist in some files are returned as NULLs (for rows from those files). To understand how path mode is used, check the following [documentation](../sql/develop-openrowset.md) and [sample](../sql/develop-openrowset.md#specify-columns-using-json-paths).
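To make the lax path mode behavior concrete, here's a hedged sketch that pins the schema with `OPENROWSET ... WITH`; the storage path, column names, and serverless endpoint are placeholders, and the `SqlServer` PowerShell module is assumed.

```powershell
# A column missing from some files comes back as NULL instead of disappearing from the result set.
$query = @"
SELECT TOP 10 *
FROM OPENROWSET(
        BULK 'https://mystorageaccount.dfs.core.windows.net/mycontainer/data/*.parquet',
        FORMAT = 'PARQUET'
     )
WITH (
    id INT,
    name VARCHAR(100),
    optional_column VARCHAR(100)
) AS rows;
"@
Invoke-Sqlcmd -ServerInstance "myworkspace-ondemand.sql.azuresynapse.net" -Database "master" -Credential (Get-Credential) -Query $query
```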
## Configuration
There are some limitations that you might see in Delta Lake support in serverles
### Serverless support Delta 1.0 version
-Serverless SQL pools are reading only Delta Lake 1.0 version. Serverless SQL pools is a [Delta reader with level 1](https://github.com/delta-io/delt#reader-version-requirements), and doesnΓÇÖt support the following features:
+Serverless SQL pools read only Delta Lake version 1.0. A serverless SQL pool is a [Delta reader with level 1](https://github.com/delta-io/delt#reader-version-requirements), and doesn't support the following features:
- Column mappings are ignored - serverless SQL pools will return original column names. - Delete vectors are ignored and the old version of deleted/updated rows will be returned (possibly wrong results). - The following Delta Lake features are not supported: [V2 checkpoints](https://github.com/delta-io/delt#vacuum-protocol-check)
time-series-insights How To Create Environment Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-create-environment-using-portal.md
This article describes how to create an Azure Time Series Insights Gen2 environm
The environment provisioning tutorial will walk you through the process. You'll learn about selecting the correct Time Series ID and view examples from two JSON payloads.</br>
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWzk3P]
+> [!VIDEO 5876a3d5-5867-4d41-95c8-004539622c7f]
## Overview
update-manager Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/scheduled-patching.md
The registry keys listed in [Configure Automatic Updates by editing the registry
We recommend the following limits for the indicators.
-| Indicator | Limit |
-|-|-|
-| Number of schedules per subscription per region | 250 |
-| Total number of resource associations to a schedule | 3,000 |
-| Resource associations on each dynamic scope | 1,000 |
-| Number of dynamic scopes per resource group or subscription per region | 250 |
-| Number of dynamic scopes per schedule | 200 |
-| Total number of subscriptions attached to all dynamic scopes per schedule | 200 |
+| Indicator | Public Cloud Limit | Mooncake/Fairfax Limit |
+|-|-|--|
+| Number of schedules per subscription per region | 250 | 250 |
+| Total number of resource associations to a schedule | 3,000 | 3,000 |
+| Resource associations on each dynamic scope | 1,000 | 1,000 |
+| Number of dynamic scopes per resource group or subscription per region | 250 | 250 |
+| Number of dynamic scopes per schedule | 200 | 30 |
+| Total number of subscriptions attached to all dynamic scopes per schedule | 200 | 30 |
For more information, see the [service limits for Dynamic scope](dynamic-scope-overview.md#service-limits).
virtual-desktop Add Session Hosts Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/add-session-hosts-host-pool.md
Select the relevant tab for your scenario and follow the steps.
:::image type="content" source="media/add-session-hosts-host-pool/agent-install-token.png" alt-text="Screenshot that shows the box for entering a registration token." lightbox="media/add-session-hosts-host-pool/agent-install-token.png":::
-1. Run the `Microsoft.RDInfra.RDAgentBootLoader.Installer-x64.msi` file to install the remaining components.
+1. Run the `Microsoft.RDInfra.RDAgentBootLoader.Installer-x64-<version>.msi` file to install the remaining components.
1. Follow the prompts and complete the installation.
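If you script session host builds, a hedged sketch of an unattended install of the boot loader MSI follows; the download path is a placeholder and `<version>` must be replaced with your actual file name.

```powershell
# Unattended install of the agent boot loader MSI; adjust the path to where you downloaded the file.
Start-Process -FilePath "msiexec.exe" -Wait `
    -ArgumentList '/i "C:\Temp\Microsoft.RDInfra.RDAgentBootLoader.Installer-x64-<version>.msi" /quiet /norestart'
```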
virtual-desktop Configure Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-single-sign-on.md
To configure the service principal, use the [Microsoft Graph PowerShell SDK](/po
Remove-MgServicePrincipalRemoteDesktopSecurityConfigurationTargetDeviceGroup -ServicePrincipalId $WCLspId -TargetDeviceGroupId "<Group object ID>" ```
-## Create a Kerberos Server object
+## Create a Kerberos server object
-If your session hosts meet the following criteria, you must [Create a Kerberos Server object](../active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md#create-a-kerberos-server-object):
+If your session hosts meet the following criteria, you must create a Kerberos server object. For more information, see [Enable passwordless security key sign-in to on-premises resources by using Microsoft Entra ID](/entr#create-a-kerberos-server-object):
-- Your session host is Microsoft Entra hybrid joined. You must have a Kerberos Server object to complete authentication to a domain controller.
+- Your session host is Microsoft Entra hybrid joined. You must have a Kerberos server object to complete authentication to a domain controller.
-- Your session host is Microsoft Entra joined and your environment contains Active Directory domain controllers. You must have a Kerberos Server object for users to access on-premises resources, such as SMB shares, and Windows-integrated authentication to websites.
+- Your session host is Microsoft Entra joined and your environment contains Active Directory domain controllers. You must have a Kerberos server object for users to access on-premises resources, such as SMB shares and Windows-integrated authentication to websites.
> [!IMPORTANT]
-> If you enable single sign-on on Microsoft Entra hybrid joined session hosts without creating a Kerberos server object, one of the following things can happen:
+> If you enable single sign-on on Microsoft Entra hybrid joined session hosts without creating a Kerberos server object, one of the following things can happen when you try to connect to a remote session:
> > - You receive an error message saying the specific session doesn't exist. > - Single sign-on will be skipped and you see a standard authentication dialog for the session host. >
-> To resolve these issues, create the Kerberos Server object, then connect again.
+> To resolve these issues, create the Kerberos server object, then connect again.
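As a hedged sketch of creating the Kerberos server object (assumes the `AzureADHybridAuthenticationManagement` module and the required cloud and domain administrator rights; the domain name is a placeholder, and the linked article remains the authoritative guidance):

```powershell
# Create the Kerberos server object for the on-premises Active Directory domain.
Import-Module AzureADHybridAuthenticationManagement
$domain     = "contoso.corp.com"   # hypothetical on-premises domain
$cloudCred  = Get-Credential -Message "Microsoft Entra ID Global Administrator"
$domainCred = Get-Credential -Message "Active Directory Domain Administrator"
New-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred -DomainCredential $domainCred
```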
## Review your conditional access policies
virtual-wan How To Routing Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md
Title: 'How to configure Virtual WAN Hub routing policies'
-description: Learn how to configure Virtual WAN routing policies
+description: Learn how to configure Virtual WAN routing policies.
# How to configure Virtual WAN Hub routing intent and routing policies
-Virtual WAN Hub routing intent allows you to set up simple and declarative routing policies to send traffic to bump-in-the-wire security solutions like Azure Firewall, Network Virtual Appliances or software-as-a-service (SaaS) solutions deployed within the Virtual WAN hub.
+Virtual WAN Hub routing intent allows you to set up simple and declarative routing policies to send traffic to bump-in-the-wire security solutions like Azure Firewall, Network Virtual Appliances, or software-as-a-service (SaaS) solutions deployed within the Virtual WAN hub.
## Background
-Routing Intent and Routing Policies allow you to configure the Virtual WAN hub to forward Internet-bound and Private (Point-to-site VPN, Site-to-site VPN, ExpressRoute, Virtual Network and Network Virtual Appliance) Traffic to an Azure Firewall, Next-Generation Firewall Network Virtual Appliance (NGFW-NVA) or security software-as-a-service (SaaS) solution deployed in the virtual hub.
+Routing Intent and Routing Policies allow you to configure the Virtual WAN hub to forward Internet-bound and Private (Point-to-site VPN, Site-to-site VPN, ExpressRoute, Virtual Network, and Network Virtual Appliance) Traffic to an Azure Firewall, Next-Generation Firewall (NGFW), Network Virtual Appliance (NVA), or security software-as-a-service (SaaS) solution deployed in the virtual hub.
There are two types of Routing Policies: Internet Traffic and Private Traffic Routing Policies. Each Virtual WAN Hub may have at most one Internet Traffic Routing Policy and one Private Traffic Routing Policy, each with a single Next Hop resource. While Private Traffic includes both branch and Virtual Network address prefixes, Routing Policies consider them as one entity within the Routing Intent concepts.
-* **Internet Traffic Routing Policy**: When an Internet Traffic Routing Policy is configured on a Virtual WAN hub, all branch (Remote User VPN (Point-to-site VPN), Site-to-site VPN, and ExpressRoute) and Virtual Network connections to that Virtual WAN Hub forwards Internet-bound traffic to the **Azure Firewall**, **Third-Party Security provider**, **Network Virtual Appliance** or **SaaS solution** specified as part of the Routing Policy.
+* **Internet Traffic Routing Policy**: When an Internet Traffic Routing Policy is configured on a Virtual WAN hub, all branch (Remote User VPN (Point-to-site VPN), Site-to-site VPN, and ExpressRoute) and Virtual Network connections to that Virtual WAN Hub forwards Internet-bound traffic to the **Azure Firewall**, **Third-Party Security provider**, **Network Virtual Appliance**, or **SaaS solution** specified as part of the Routing Policy.
- In other words, when an Internet Traffic Routing Policy is configured on a Virtual WAN hub, the Virtual WAN advertises a default (0.0.0.0/0) route to all spokes, Gateways and Network Virtual Appliances (deployed in the hub or spoke).
+ In other words, when an Internet Traffic Routing Policy is configured on a Virtual WAN hub, Virtual WAN advertises a default (0.0.0.0/0) route to all spokes, Gateways, and Network Virtual Appliances (deployed in the hub or spoke).
-* **Private Traffic Routing Policy**: When a Private Traffic Routing Policy is configured on a Virtual WAN hub, **all** branch and Virtual Network traffic in and out of the Virtual WAN Hub including inter-hub traffic is forwarded to the Next Hop **Azure Firewall**, **Network Virtual Appliance** or **SaaS solution** resource.
+* **Private Traffic Routing Policy**: When a Private Traffic Routing Policy is configured on a Virtual WAN hub, **all** branch and Virtual Network traffic in and out of the Virtual WAN Hub including inter-hub traffic is forwarded to the Next Hop **Azure Firewall**, **Network Virtual Appliance**, or **SaaS solution** resource.
- In other words, when a Private Traffic Routing Policy is configured on the Virtual WAN Hub, all branch-to-branch, branch-to-virtual network, virtual network-to-branch and inter-hub traffic is sent via Azure Firewall, Network Virtual Appliance or SaaS solution deployed in the Virtual WAN Hub.
+ In other words, when a Private Traffic Routing Policy is configured on the Virtual WAN Hub, all branch-to-branch, branch-to-virtual network, virtual network-to-branch, and inter-hub traffic is sent via Azure Firewall, Network Virtual Appliance, or SaaS solution deployed in the Virtual WAN Hub.
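As a hedged illustration, the following PowerShell sketch configures routing intent with both routing policies pointing at an Azure Firewall already deployed in the hub; resource names are placeholders and the `Az.Network` module is assumed.

```powershell
# Hedged sketch: send both private and Internet-bound traffic to the hub firewall.
$firewall = Get-AzFirewall -ResourceGroupName "myResourceGroup" -Name "AzureFirewall_hub1"

$privatePolicy  = New-AzRoutingPolicy -Name "PrivateTraffic"  -Destination @("PrivateTraffic") -NextHop $firewall.Id
$internetPolicy = New-AzRoutingPolicy -Name "InternetTraffic" -Destination @("Internet")       -NextHop $firewall.Id

New-AzRoutingIntent -ResourceGroupName "myResourceGroup" -VirtualHubName "hub1" `
    -Name "hubRoutingIntent" -RoutingPolicy @($privatePolicy, $internetPolicy)
```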
## Use Cases The following section describes two common scenarios where Routing Policies are applied to Secured Virtual WAN hubs.
-### All Virtual WAN Hubs are secured (deployed with Azure Firewall, NVA or SaaS solution)
+### All Virtual WAN Hubs are secured (deployed with Azure Firewall, NVA, or SaaS solution)
-In this scenario, all Virtual WAN hubs are deployed with an Azure Firewall, NVA or SaaS solution in them. In this scenario, you may configure an Internet Traffic Routing Policy, a Private Traffic Routing Policy or both on each Virtual WAN Hub.
+In this scenario, all Virtual WAN hubs are deployed with an Azure Firewall, NVA, or SaaS solution in them. You may configure an Internet Traffic Routing Policy, a Private Traffic Routing Policy, or both on each Virtual WAN Hub.
Consider the following configuration where Hub 1 and Hub 2 have Routing Policies for both Private and Internet Traffic. **Hub 1 configuration:**
-* Private Traffic Policy with Next Hop Hub 1 Azure Firewall, NVA or SaaS solution
-* Internet Traffic Policy with Next Hop Hub 1 Azure Firewall, NVA or SaaS solution
+* Private Traffic Policy with Next Hop Hub 1 Azure Firewall, NVA, or SaaS solution
+* Internet Traffic Policy with Next Hop Hub 1 Azure Firewall, NVA, or SaaS solution
**Hub 2 configuration:**
-* Private Traffic Policy with Next Hop Hub 2 Azure Firewall, NVA or SaaS solution
-* Internet Traffic Policy with Next Hop Hub 2 Azure Firewall, NVA or SaaS solution
+* Private Traffic Policy with Next Hop Hub 2 Azure Firewall, NVA, or SaaS solution
+* Internet Traffic Policy with Next Hop Hub 2 Azure Firewall, NVA, or SaaS solution
The following are the traffic flows that result from such a configuration.
The following are the traffic flows that result from such a configuration.
| From | To | Hub 1 VNets | Hub 1 branches | Hub 2 VNets | Hub 2 branches| Internet| | -- | -- | - | | | | |
-| Hub 1 VNets | &#8594;| Hub 1 AzFW or NVA| Hub 1 AzFW or NVA | Hub 1 and 2 AzFW, NVA or SaaS | Hub 1 and 2 AzFW, NVA or SaaS | Hub 1 AzFW, NVA or SaaS |
-| Hub 1 Branches | &#8594;| Hub 1 AzFW, NVA or SaaS | Hub 1 AzFW, NVA or SaaS | Hub 1 and 2 AzFW, NVA or SaaS | Hub 1 and 2 AzFW, NVA or SaaS | Hub 1 AzFW, NVA or SaaS|
-| Hub 2 VNets | &#8594;| Hub 1 and 2 AzFW, NVA or SaaS| Hub 1 and 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS| Hub 2 AzFW, NVA or SaaS|
-| Hub 2 Branches | &#8594;| Hub 1 and 2 AzFW, NVA or SaaS| Hub 1 and 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS | Hub 2AzFW, NVA or SaaS|
+| Hub 1 VNets | &#8594;| Hub 1 AzFW or NVA| Hub 1 AzFW or NVA | Hub 1 and 2 AzFW, NVA, or SaaS | Hub 1 and 2 AzFW, NVA, or SaaS | Hub 1 AzFW, NVA, or SaaS |
+| Hub 1 Branches | &#8594;| Hub 1 AzFW, NVA, or SaaS | Hub 1 AzFW, NVA, or SaaS | Hub 1 and 2 AzFW, NVA, or SaaS | Hub 1 and 2 AzFW, NVA, or SaaS | Hub 1 AzFW, NVA, or SaaS|
+| Hub 2 VNets | &#8594;| Hub 1 and 2 AzFW, NVA, or SaaS| Hub 1 and 2 AzFW, NVA, or SaaS | Hub 2 AzFW, NVA, or SaaS | Hub 2 AzFW, NVA, or SaaS| Hub 2 AzFW, NVA, or SaaS|
+| Hub 2 Branches | &#8594;| Hub 1 and 2 AzFW, NVA, or SaaS| Hub 1 and 2 AzFW, NVA, or SaaS | Hub 2 AzFW, NVA, or SaaS | Hub 2 AzFW, NVA, or SaaS | Hub 2 AzFW, NVA, or SaaS|
### Deploying both secured and regular Virtual WAN Hubs
In this scenario, not all hubs in the WAN are Secured Virtual WAN Hubs (hubs tha
Consider the following configuration where Hub 1 (Normal) and Hub 2 (Secured) are deployed in a Virtual WAN. Hub 2 has Routing Policies for both Private and Internet Traffic. **Hub 1 Configuration:**
-* N/A (can't configure Routing Policies if hub isn't deployed with Azure Firewall, NVA or SaaS solution)
+* N/A (can't configure Routing Policies if hub isn't deployed with Azure Firewall, NVA, or SaaS solution)
**Hub 2 Configuration:**
-* Private Traffic Policy with Next Hop Hub 2 Azure Firewall, NVA or SaaS solution.
-* Internet Traffic Policy with Next Hop Hub 2 Azure Firewall, NVA or SaaS solution.
+* Private Traffic Policy with Next Hop Hub 2 Azure Firewall, NVA, or SaaS solution.
+* Internet Traffic Policy with Next Hop Hub 2 Azure Firewall, NVA, or SaaS solution.
:::image type="content" source="./media/routing-policies/one-secured-one-normal-diagram.png"alt-text="Screenshot showing architecture with one secured hub one normal hub."lightbox="./media/routing-policies/one-secured-one-normal-diagram.png":::
Consider the following configuration where Hub 1 (Normal) and Hub 2 (Secured) ar
| From | To | Hub 1 VNets | Hub 1 branches | Hub 2 VNets | Hub 2 branches| Internet | | -- | -- | - | | | | |
-| Hub 1 VNets | &#8594;| Direct | Direct | Hub 2 AzFW, NVA or SaaS| Hub 2 AzFW, NVA or SaaS | - |
-| Hub 1 Branches | &#8594;| Direct | Direct | Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS | - |
-| Hub 2 VNets | &#8594;| Hub 2 AzFW, NVA or SaaS| Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS| Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS|
-| Hub 2 Branches | &#8594;| Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS| Hub 2 AzFW, NVA or SaaS | Hub 2 AzFW, NVA or SaaS|
+| Hub 1 VNets | &#8594;| Direct | Direct | Hub 2 AzFW, NVA, or SaaS| Hub 2 AzFW, NVA, or SaaS | - |
+| Hub 1 Branches | &#8594;| Direct | Direct | Hub 2 AzFW, NVA, or SaaS | Hub 2 AzFW, NVA, or SaaS | - |
+| Hub 2 VNets | &#8594;| Hub 2 AzFW, NVA, or SaaS| Hub 2 AzFW, NVA, or SaaS | Hub 2 AzFW, NVA, or SaaS| Hub 2 AzFW, NVA, or SaaS | Hub 2 AzFW, NVA, or SaaS|
+| Hub 2 Branches | &#8594;| Hub 2 AzFW, NVA, or SaaS | Hub 2 AzFW, NVA, or SaaS | Hub 2 AzFW, NVA, or SaaS| Hub 2 AzFW, NVA, or SaaS | Hub 2 AzFW, NVA, or SaaS|
## <a name="knownlimitations"></a> Known Limitations * Routing Intent is currently available in the Azure public cloud. Microsoft Azure operated by 21Vianet and Azure Government are currently on the roadmap.
-* Routing Intent simplifies routing by managing route table associations and propagations for all connections (Virtual Network, Site-to-site VPN, Point-to-site VPN and ExpressRoute). Virtual WANs with custom route tables and customized policies therefore can't be used with the Routing Intent constructs.
+* Routing Intent simplifies routing by managing route table associations and propagations for all connections (Virtual Network, Site-to-site VPN, Point-to-site VPN, and ExpressRoute). Virtual WANs with custom route tables and customized policies therefore can't be used with the Routing Intent constructs.
* Encrypted ExpressRoute (Site-to-site VPN tunnels running over ExpressRoute circuits) is supported in hubs where routing intent is configured if Azure Firewall is configured to allow traffic between VPN tunnel endpoints (Site-to-site VPN Gateway private IP and on-premises VPN device private IP). For more information on the required configurations, see [Encrypted ExpressRoute with routing intent](#encryptedER). * The following connectivity use cases are **not** supported with Routing Intent: * Static routes in the defaultRouteTable that point to a Virtual Network connection can't be used in conjunction with routing intent. However, you can use the [BGP peering feature](scenario-bgp-peering-hub.md).
- * Static routes on the Virtual Network connection with "static route propagation" are not applied to the next-hop resource specified in private routing policies. Support for applying static routes on Virtual Network connections to private routing policy next-hop is on the roadmap.
+ * Static routes on the Virtual Network connection with "static route propagation" aren't applied to the next-hop resource specified in private routing policies. Support for applying static routes on Virtual Network connections to private routing policy next-hop is on the roadmap.
* The ability to deploy both an SD-WAN connectivity NVA and a separate Firewall NVA or SaaS solution in the **same** Virtual WAN hub is currently on the roadmap. Once routing intent is configured with next hop SaaS solution or Firewall NVA, connectivity between the SD-WAN NVA and Azure is impacted. Instead, deploy the SD-WAN NVA and Firewall NVA or SaaS solution in different Virtual Hubs. Alternatively, you can also deploy the SD-WAN NVA in a spoke Virtual Network connected to the hub and leverage the virtual hub [BGP peering](scenario-bgp-peering-hub.md) capability.
* Network Virtual Appliances (NVAs) can only be specified as the next hop resource for routing intent if they're Next-Generation Firewall or dual-role Next-Generation Firewall and SD-WAN NVAs. Currently, **checkpoint**, **fortinet-ngfw**, and **fortinet-ngfw-and-sdwan** are the only NVAs eligible to be configured as the next hop for routing intent. If you attempt to specify another NVA, Routing Intent creation fails. You can check the type of the NVA by navigating to your Virtual Hub -> Network Virtual Appliances and then looking at the **Vendor** field. [**Palo Alto Networks Cloud NGFW**](how-to-palo-alto-cloud-ngfw.md) is also supported as the next hop for Routing Intent, but is considered a next hop of type **SaaS solution**.
* Routing Intent users who want to connect multiple ExpressRoute circuits to Virtual WAN and want to send traffic between them via a security solution deployed in the hub can open a support case to enable this use case. Reference [enabling connectivity across ExpressRoute circuits](#expressroute) for more information.
Before enabling routing intent, consider the following:
* Enabling routing intent affects the advertisement of prefixes to on-premises. See [prefix advertisements](#prefixadvertisments) for more information.
* You may open a support case to enable connectivity across ExpressRoute circuits via a Firewall appliance in the hub. Enabling this connectivity pattern modifies the prefixes advertised to ExpressRoute circuits. See [About ExpressRoute](#expressroute) for more information.
* Routing intent is the only mechanism in Virtual WAN to enable inter-hub traffic inspection via security appliances deployed in the hub. Inter-hub traffic inspection also requires routing intent to be enabled on all hubs to ensure traffic is routed symmetrically between security appliances deployed in Virtual WAN hubs.
-* Routing intent sends Virtual Network and on-premises traffic to the next hop resource specified in the routing policy. Virtual WAN programs the underlying Azure platform to route your on-premises and Virtual Network traffic in accordance with the configured routing policy and does not process the traffic through the Virtual Hub router. Because packets routed via routing intent are not processed by the router, you do not need to allocate additional [routing infrastructure units](hub-settings.md#capacity) for data-plane packet forwarding on hubs configured with routing intent. However, you may need to allocate additional routing infastructure units based on the number of Virtual Machines in Virtual Networks connected to the Virtual WAN Hub.
-* Routing intent allows you to configure different next-hop resources for private and internet routing policies. For example, you can set the next hop for private routing policies to Azure Firewall in the hub and the next hop for internet routing policy to a NVA or SaaS solution in the hub. Because SaaS solutions and Firewall NVAs are deployed in the same subnet in the Virtual WAN hub, deploying SaaS solutions with a Firewall NVA in the same hub can impact the horizontal scalability of the SaaS solutions as there are less IP addresses avaialble for horizontal scale-out. Additionally, you can have at most one SaaS solution deployed in each Virtual WAN hub.
+* Routing intent sends Virtual Network and on-premises traffic to the next hop resource specified in the routing policy. Virtual WAN programs the underlying Azure platform to route your on-premises and Virtual Network traffic in accordance with the configured routing policy and doesn't process the traffic through the Virtual Hub router. Because packets routed via routing intent aren't processed by the router, you don't need to allocate additional [routing infrastructure units](hub-settings.md#capacity) for data-plane packet forwarding on hubs configured with routing intent. However, you may need to allocate additional routing infrastructure units based on the number of Virtual Machines in Virtual Networks connected to the Virtual WAN Hub.
+* Routing intent allows you to configure different next-hop resources for private and internet routing policies. For example, you can set the next hop for private routing policies to Azure Firewall in the hub and the next hop for internet routing policy to an NVA or SaaS solution in the hub. Because SaaS solutions and Firewall NVAs are deployed in the same subnet in the Virtual WAN hub, deploying SaaS solutions with a Firewall NVA in the same hub can impact the horizontal scalability of the SaaS solutions as there are less IP addresses available for horizontal scale-out. Additionally, you can have at most one SaaS solution deployed in each Virtual WAN hub.
### <a name="prereq"></a> Prerequisites To enable routing intent and policies, your Virtual Hub must meet the below prerequisites:
Routing Intent simplifies routing and configuration by managing route associatio
The following table describes the associated route table and propagated route tables of all connections once routing intent is configured.

|Routing Intent configuration | Associated route table| Propagated route tables|
-| --| --| --|
+|--|--|--|
|Internet|defaultRouteTable| default label (defaultRouteTable of all hubs in the Virtual WAN)|
| Private| defaultRouteTable| noneRouteTable|
|Internet and Private| defaultRouteTable| noneRouteTable|

### <a name="staticroute"></a> Static routes in defaultRouteTable
-The following section describes how routing intent manages static routes in the defaultRouteTable when routing intent is enabled on a hub. The modifications that Routing Intent makes to the defaultRouteTable is irreversible.
+The following section describes how routing intent manages static routes in the defaultRouteTable when routing intent is enabled on a hub. The modifications that Routing Intent makes to the defaultRouteTable are irreversible.
If you remove routing intent, you'll have to manually restore your previous configuration. Therefore, we recommend saving a snapshot of your configuration before enabling routing intent.
When a Virtual hub is configured with a Private Routing policy Virtual WAN adver
* Routes corresponding to prefixes learned from remote hub Virtual Networks, ExpressRoute, Site-to-site VPN, Point-to-site VPN, NVA-in-the-hub, and BGP connections where Routing Intent isn't configured **and** the remote connections propagate to the defaultRouteTable of the local hub.
* Prefixes learned from one ExpressRoute circuit aren't advertised to other ExpressRoute circuits unless Global Reach is enabled. If you want to enable ExpressRoute to ExpressRoute transit through a security solution deployed in the hub, open a support case. For more information, see [Enabling connectivity across ExpressRoute circuits](#expressroute).
+## Key routing scenarios
+The following section describes a few key routing scenarios and routing behaviors when configuring routing intent on a Virtual WAN hub.
+ ### <a name="expressroute"></a> Transit connectivity between ExpressRoute circuits with routing intent Transit connectivity between ExpressRoute circuits within Virtual WAN is provided through two different configurations. Because these two configurations aren't compatible, customers should choose one configuration option to support transit connectivity between two ExpressRoute circuits.
Using the sample VPN configuration and VPN site from above, create firewall rule
|Destination Port| * |
|Protocol| ANY|
-### Performance
+#### Performance for Encrypted ExpressRoute
Configuring private routing policies with Encrypted ExpressRoute routes VPN ESP packets through the next hop security appliance deployed in the hub. As a result, you can expect Encrypted ExpressRoute maximum VPN tunnel throughput of 1 Gbps in both directions (inbound from on-premises and outbound from Azure). To achieve the maximum VPN tunnel throughput, consider the following deployment optimizations:
* Deploy Azure Firewall Premium instead of Azure Firewall Standard or Azure Firewall Basic.
* Ensure Azure Firewall processes the rule that allows traffic between the VPN tunnel endpoints (192.168.1.4 and 192.168.1.5 in the example above) first by making the rule have the highest priority in your Azure Firewall policy. For more information about Azure Firewall rule processing logic, see [Azure Firewall rule processing logic](../firewall/rule-processing.md#rule-processing-using-firewall-policy).
-* Turn off deep-packet for traffic between the VPN tunnel endpoints.For information on how to configure Azure Firewall to exclude traffic from deep-packet inspection, reference [IDPS bypass list documentation](../firewall/premium-features.md#idps).
+* Turn off deep-packet inspection for traffic between the VPN tunnel endpoints. For information on how to configure Azure Firewall to exclude traffic from deep-packet inspection, reference [IDPS bypass list documentation](../firewall/premium-features.md#idps).
* Configure VPN devices to use GCMAES256 for both IPSEC Encryption and Integrity to maximize performance.
+#### Direct routing to NVA instances for dual-role connectivity and firewall NVAs
+
+> [!NOTE]
+> Direct routing to dual-role NVAs used with private routing policies in Virtual WAN only applies to traffic between Virtual Networks and routes learnt via BGP from NVAs deployed in the Virtual WAN hub. ExpressRoute and VPN transit connectivity to NVA-connected on-premises isn't routed directly to NVA instances and is instead routed via the dual-role NVA's load balancer.
+
+Certain Network Virtual Appliances can have both connectivity (SD-WAN) and security (NGFW) capabilities on the same device. These NVAs are considered dual-role NVAs. Check whether or not your NVA is a dual-role NVA under [NVA partners](about-nva-hub.md#partners).
+
+When private routing policies are configured for dual-role NVAs, Virtual WAN automatically advertises routes learnt from that Virtual WAN hub's NVA device to directly connected (local) Virtual Networks, as well as to other Virtual Hubs in the Virtual WAN, with the next hop as the NVA instance as opposed to the NVA's Internal Load Balancer.
+
+For **active-passive NVA configurations** where only one instance of the NVAs is advertising a route for a specific prefix to Virtual WAN (or the AS-PATH length of routes learnt from one of the instances is always the shortest), Virtual WAN ensures that outbound traffic from an Azure Virtual Network is always routed to the active (or preferred) NVA instance.
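As a rough illustration of that selection behavior, here's a minimal Python sketch (not Virtual WAN code; the instance names, prefix, and ASNs are made up) showing how, among advertisements for the same prefix, the route with the shortest AS-PATH wins, so Azure-originated traffic lands on the active instance:

```python
# Minimal sketch (hypothetical data, not Virtual WAN internals): the preferred
# NVA instance is the one whose advertisement for the prefix has the shortest
# AS-PATH, for example because the passive instance prepends its ASN.

from dataclasses import dataclass

@dataclass
class Advertisement:
    prefix: str
    as_path: tuple
    instance: str

ads = [
    Advertisement("10.50.0.0/16", ("65001",), "nva-instance-0"),                  # active
    Advertisement("10.50.0.0/16", ("65001", "65001", "65001"), "nva-instance-1"), # passive (prepended)
]

best = min(ads, key=lambda a: len(a.as_path))
print(f"{best.prefix} -> {best.instance}")  # 10.50.0.0/16 -> nva-instance-0
```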
+
+For **active-active NVA configurations** (multiple NVA instances advertise the same prefix with the same AS-PATH length), Azure automatically performs ECMP to route traffic from Azure to on-premises. Azure's software-defined networking platform doesn't guarantee flow-level symmetry, meaning the inbound flow to Azure and outbound flow from Azure can land on different instances of the NVA. This results in asymmetric routing, and asymmetric flows are dropped by stateful firewall inspection. Therefore, we don't recommend using active-active connectivity patterns where an NVA behaves as a dual-role NVA unless the NVA supports asymmetric forwarding or session sharing/synchronization. For more information on whether your NVA supports asymmetric forwarding or session state sharing/synchronization, reach out to your NVA provider.
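The sketch below (plain Python with an arbitrary hash, not Azure's actual ECMP function; addresses and instance names are hypothetical) illustrates why active-active advertisements can break stateful inspection: a flow and its reply have different 5-tuples, so they can hash to different NVA instances.

```python
# Illustrative only: ECMP-style instance selection by hashing the 5-tuple.
# The real Azure hash differs; the point is that the forward and return flows
# have different 5-tuples and so can land on different NVA instances.

import hashlib

INSTANCES = ["nva-instance-0", "nva-instance-1"]  # hypothetical names

def pick_instance(src_ip, dst_ip, src_port, dst_port, proto="TCP"):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    bucket = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return INSTANCES[bucket % len(INSTANCES)]

outbound = pick_instance("10.1.0.4", "192.168.10.5", 50123, 443)  # VNet -> on-premises
reply    = pick_instance("192.168.10.5", "10.1.0.4", 443, 50123)  # on-premises -> VNet
print(outbound, reply,
      "symmetric" if outbound == reply else "asymmetric: dropped by a stateful NGFW")
```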
+ ## Configuring routing intent through Azure portal
The following section describes common ways to troubleshoot when you configure r
When private routing policies are configured on the Virtual Hub, all traffic between on-premises and Virtual Networks is inspected by Azure Firewall, Network Virtual Appliance, or SaaS solution in the Virtual hub.
-Therefore, the effective routes of the defaultRouteTable show the RFC1918 aggregate prefixes (10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12) with next hop Azure Firewall or Network Virtual Appliance. This reflects that all traffic between Virtual Networks and branches is routed to Azure Firewall, NVA or SaaS solution in the hub for inspection.
+Therefore, the effective routes of the defaultRouteTable show the RFC1918 aggregate prefixes (10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12) with next hop Azure Firewall or Network Virtual Appliance. This reflects that all traffic between Virtual Networks and branches is routed to Azure Firewall, NVA, or SaaS solution in the hub for inspection.
:::image type="content" source="./media/routing-policies/default-route-table-effective-routes.png"alt-text="Screenshot showing effective routes for defaultRouteTable."lightbox="./media/routing-policies/public-routing-policy-nva.png":::
Assuming you have already reviewed the [Known Limitations](#knownlimitations) s
* Scenario-specific troubleshooting:
  * **If you have a non-secured hub (hub without Azure Firewall or NVA) in your Virtual WAN**, make sure connections to the nonsecured hub are propagating to the defaultRouteTable of the hubs with routing intent configured. If propagations aren't set to the defaultRouteTable, connections to the secured hub won't be able to send packets to the nonsecured hub.
  * **If you have Internet Routing Policies configured**, make sure the 'Propagate Default Route' or 'Enable Internet Security' setting is set to 'true' for all connections that should learn the 0.0.0.0/0 default route. Connections where this setting is set to 'false' won't learn the 0.0.0.0/0 route, even if Internet Routing Policies are configured.
- * **If you're using Private Endpoints deployed in Virtual Networks connected to the Virtual Hub**, traffic from on-premises destined for Private Endpoints deployed in Virtual Networks connected to the Virtual WAN hub by default **bypasses** the routing intent next hop Azure Firewall, NVA or SaaS. However, this results in asymmetric routing (which can lead to loss of connectivity between on-premises and Private Endpoints) as Private Endpoints in Spoke Virtual Networks forward on-premises traffic to the Firewall. To ensure routing symmetry, enable [Route Table network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md) on the subnets where Private Endpoints are deployed. Configuring /32 routes corresponding to Private Endpoint private IP addresses in the Private Traffic text box **will not** ensure traffic symmetry when private routing policies are configured on the hub.
+ * **If you're using Private Endpoints deployed in Virtual Networks connected to the Virtual Hub**, traffic from on-premises destined for Private Endpoints deployed in Virtual Networks connected to the Virtual WAN hub by default **bypasses** the routing intent next hop Azure Firewall, NVA, or SaaS. However, this results in asymmetric routing (which can lead to loss of connectivity between on-premises and Private Endpoints) as Private Endpoints in Spoke Virtual Networks forward on-premises traffic to the Firewall. To ensure routing symmetry, enable [Route Table network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md) on the subnets where Private Endpoints are deployed. Configuring /32 routes corresponding to Private Endpoint private IP addresses in the Private Traffic text box **will not** ensure traffic symmetry when private routing policies are configured on the hub.
  * **If you're using Encrypted ExpressRoute with Private Routing Policies**, ensure that your Firewall device has a rule configured to allow traffic between the Virtual WAN Site-to-site VPN Gateway private IP tunnel endpoint and on-premises VPN device. ESP (encrypted outer) packets should appear in Azure Firewall logs. For more information on Encrypted ExpressRoute with routing intent, see [Encrypted ExpressRoute documentation](#encryptedER).

### Troubleshooting Azure Firewall routing issues