Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Technical Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/technical-overview.md | The primary resources you work with in an Azure AD B2C tenant are: An Azure AD B2C tenant is the first resource you need to create to get started with Azure AD B2C. Learn how to: * [Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md).-* [Manage your Azure AD B2C tenant](tenant-management.md) +* [Manage your Azure AD B2C tenant](tenant-management-manage-administrator.md) ## Accounts in Azure AD B2C |
active-directory-b2c | User Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-overview.md | In Azure Active Directory B2C (Azure AD B2C), there are several types of account The following types of accounts are available: - **Work account** - A work account can access resources in a tenant, and with an administrator role, can manage tenants.-- **Guest account** - A guest account can only be a Microsoft account or an Azure AD user that can be used to share administration responsibilities such as [managing a tenant](tenant-management.md).+- **Guest account** - A guest account can only be a Microsoft account or an Azure AD user that can be used to share administration responsibilities such as [managing a tenant](tenant-management-manage-administrator.md). - **Consumer account** - A consumer account is used by a user of the applications you've registered with Azure AD B2C. Consumer accounts can be created by: - The user going through a sign-up user flow in an Azure AD B2C application - Using Microsoft Graph API |
active-directory | How To Mfa Number Match | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md | In addition: >[!IMPORTANT] >MSCHAPv2 doesn't support OTP. If the NPS Server isn't configured to use PAP, user authorization will fail with events in the **AuthZOptCh** log of the NPS Extension server in Event Viewer:<br> >NPS Extension for Azure MFA: Challenge requested in Authentication Ext for User npstesting_ap. + >You can configure the NPS Server to support PAP. If PAP is not an option, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to Approve/Deny push notifications. If your organization uses Remote Desktop Gateway and the user is registered for OTP code along with Microsoft Authenticator push notifications, the user won't be able to meet the Azure AD MFA challenge and Remote Desktop Gateway sign-in will fail. In this case, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to **Approve**/**Deny** push notifications with Microsoft Authenticator. |
active-directory | Scenario Desktop Acquire Token Device Code Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-device-code-flow.md | static async Task<AuthenticationResult> GetATokenForGraph() } catch (MsalUiRequiredException ex) {- // No token found in the cache or AAD insists that a form interactive auth is required (e.g. the tenant admin turned on MFA) + // No token found in the cache or Azure AD insists that a form interactive auth is required (e.g. the tenant admin turned on MFA) // If you want to provide a more complex user experience, check out ex.Classification return await AcquireByDeviceCodeAsync(pca); if accounts: result = app.acquire_token_silent(config["scope"], account=chosen) if not result:- logging.info("No suitable token exists in cache. Let's get a new one from AAD.") + logging.info("No suitable token exists in cache. Let's get a new one from Azure AD.") flow = app.initiate_device_flow(scopes=config["scope"]) if "user_code" not in flow: |
api-management | How To Self Hosted Gateway On Kubernetes In Production | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md | Request throttling in a self-hosted gateway can be enabled by using the API Mana ## Security The self-hosted gateway is able to run as non-root in Kubernetes allowing customers to run the gateway securely. -Here's an example of the security context for the self-hosted gateway: +Here's an example of the security context for the self-hosted gateway container: ```yml securityContext: allowPrivilegeEscalation: false |
automation | Extension Based Hybrid Runbook Worker Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md | Title: Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in description: This article provides information about deploying the extension-based User Hybrid Runbook Worker to run runbooks on Windows or Linux machines in your on-premises datacenter or other cloud environment. Previously updated : 11/09/2022 Last updated : 02/20/2023 #Customer intent: As a developer, I want to learn about extension so that I can efficiently deploy Hybrid Runbook Workers. Azure Automation stores and manages runbooks and then delivers them to one or mo | Windows | Linux (x64)| |||-| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709 and 1803 (excluding Server Core), and <br> ● Windows Server 2012, 2012 R2 | ● Debian GNU/Linux 10 and 11 <br> ● Ubuntu 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7 and 8 | +| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> ● Windows Server 2012, 2012 R2 <br> ● Windows 10 Enterprise (including multi-session) and Pro | ● Debian GNU/Linux 10 and 11 <br> ● Ubuntu 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7 and 8 | + ### Other Requirements |
automation | Migrate Existing Agent Based Hybrid Worker To Extension Based Workers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md | Title: Migrate an existing agent-based hybrid workers to extension-based-workers description: This article provides information on how to migrate an existing agent-based hybrid worker to extension based workers. Previously updated : 12/29/2022 Last updated : 02/20/2023 #Customer intent: As a developer, I want to learn about extension so that I can efficiently migrate agent based hybrid workers to extension based workers. The purpose of the Extension-based approach is to simplify the installation and | Windows | Linux (x64)| |||-| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709 and 1803 (excluding Server Core), and <br> ● Windows Server 2012, 2012 R2 | ● Debian GNU/Linux 10 and 11 <br> ● Ubuntu 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7 and 8 | +| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> ● Windows Server 2012, 2012 R2 <br> ● Windows 10 Enterprise (including multi-session) and Pro| ● Debian GNU/Linux 10 and 11 <br> ● Ubuntu 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7 and 8 | ### Other Requirements |
azure-monitor | Autoscale Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md | The following services are supported by autoscale: | Azure API Management service | [Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md) | | Azure Data Explorer Clusters | [Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling) | | Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) |+| Azure SignalR Service (Premium tier) | [Automatically scale units of an Azure SignalR service](https://learn.microsoft.com/azure/azure-signalr/signalr-howto-scale-autoscale) | | Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) | | Azure Spring Apps | [Set up autoscale for applications](../../spring-apps/how-to-setup-autoscale.md) | | Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) | |
azure-monitor | Data Sources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md | The configuration requirements and content of resource logs vary by resource typ | Destination | Description | Reference | |:|:|:|-| Azure Monitor Logs | Send resource logs to Azure Monitor Logs for analysis with other collected log data. | [Collect Azure resource logs in Log Analytics workspace in Azure Monitor](essentials/resource-logs.md#send-to-azure-storage) | -| Storage | Send resource logs to Azure Storage for archiving. | [Archive Azure resource logs](essentials/resource-logs.md#send-to-log-analytics-workspace) | +| Azure Monitor Logs | Send resource logs to Azure Monitor Logs for analysis with other collected log data. | [Collect Azure resource logs in Log Analytics workspace in Azure Monitor](essentials/resource-logs.md#send-to-log-analytics-workspace) | +| Storage | Send resource logs to Azure Storage for archiving. | [Archive Azure resource logs](essentials/resource-logs.md#send-to-azure-storage) | | Event Hubs | Stream resource logs to other locations using Event Hubs. |[Stream Azure resource logs to an event hub](essentials/resource-logs.md#send-to-azure-event-hubs) | ## Operating system (guest) |
azure-monitor | Prometheus Metrics Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md | Deploy the template with the parameter file using any valid method for deploying ## Verify Deployment -Run the following command to which verify that the daemon set was deployed properly: +Run the following command to verify that the daemon set was deployed properly: ``` kubectl get ds ama-metrics-node --namespace=kube-system |
azure-monitor | Basic Logs Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md | Configure a table for Basic logs if: | [AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests) | Azure Media Services HTTP request details for key, or license acquisition. | | [AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth) | Azure Media Services account health status. | | [AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) | Azure Media Services information about requests to streaming endpoints. |+ | [ASCAuditLogs](/azure/azure-monitor/reference/tables/ASCAuditLogs) | Azure Sphere audit logs generated by Azure Sphere service and devices. | | [ASCDeviceEvents](/azure/azure-monitor/reference/tables/ASCDeviceEvents) | Azure Sphere devices operations, with information about event types, event categories, event classes, event descriptions etc. | | [AVNMNetworkGroupMembershipChange](/azure/azure-monitor/reference/tables/AVNMNetworkGroupMembershipChange) | Azure Virtual Network Manager changes to network group membership of network resources. | | [AZFWNetworkRule](/azure/azure-monitor/reference/tables/AZFWNetworkRule) | Azure Firewalls network rules logs including data plane packet and rule's attributes. | |
azure-monitor | Workbooks Jsonpath | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-jsonpath.md | Title: Azure Monitor workbooks - Transform JSON data with JSONPath -description: Learn how to use JSONPath in Azure Monitor workbooks to transform the results of JSON data returned by a queried endpoint to the format that you want. +description: Use JSONPath in Azure Monitor workbooks to transform the JSON data results to a different data format. ibiza Previously updated : 07/05/2022 Last updated : 02/19/2023 # Use JSONPath to transform JSON data in workbooks -Workbooks can query data from many sources. Some endpoints, such as [Azure Resource Manager](../../azure-resource-manager/management/overview.md) or custom endpoint, can return results in JSON. If the JSON data returned by the queried endpoint isn't configured in a format that you want, JSONPath can be used to transform the results. +Workbooks can query data from many sources. Some endpoints, such as [Azure Resource Manager](../../azure-resource-manager/management/overview.md) or custom endpoints, can return results in JSON. If the JSON data returned by the queried endpoint is in a format that you don't want, you can use JSONPath transformation to convert the JSON to a table structure. You can then use the table to plot [workbook visualizations](./workbooks-overview.md#visualizations). -JSONPath is a query language for JSON that's similar to XPath for XML. Like XPath, JSONPath allows for the extraction and filtration of data out of JSON structure. --By using JSONPath transformation, workbook authors can convert JSON into a table structure. The table can then be used to plot [workbook visualizations](./workbooks-overview.md#visualizations). +JSONPath is a query language for JSON that's similar to XPath for XML. Like XPath, JSONPath allows for the extraction and filtration of data out of the JSON structure. ## Use JSONPath +In this example, the JSON object represents a store's inventory. We're going to create a table of the store's available books listing their titles, authors, and prices. + 1. Switch the workbook to edit mode by selecting **Edit**. 1. Use the **Add** > **Add query** link to add a query control to the workbook. 1. Select the data source as **JSON**. By using JSONPath transformation, workbook authors can convert JSON into a table } } ``` --Let's assume we're given the preceding JSON object as a representation of a store's inventory. Our task is to create a table of the store's available books by listing their titles, authors, and prices. - 1. Select the **Result Settings** tab and switch the result format to **JSON Path**. 1. Apply the following JSON path settings: Let's assume we're given the preceding JSON object as a representation of a stor | Author | `$.author` | | Price | `$.price` | - Column IDs will be the column headers. Column JSON paths fields represent the path from the root of the table to the column value. + Column IDs are the column headers. Column JSON path fields represent the path from the root of the table to the column value. -1. Apply the preceding settings by selecting **Run Query**. +1. Select **Run Query**.  +## Use regular expressions to convert values ++You may have some data that isn't in a standard format. To use that data effectively, you would want to convert that data into a standard format. ++In this example, the published date is in YYYYMMDD format. 
The code interprets this value as a numeric value, not text, which results in right-justified numbers instead of a date. ++You can use the **Type**, **RegEx Match**, and **Replace With** fields in the result settings to convert the result into true dates. ++|Result setting field |Description | +||| +|Type|Allows you to explicitly change the type of the value returned by the API. This field is usually left unset, but you can use this field to force the value to a different type. | +|Regex Match|Allows you to enter a regular expression to take part (or parts) of the value returned by the API instead of the whole value. This field is usually combined with the **Replace With** field. | +|Replace With|Use this field to create the new value along with the regular expression. If this value is empty, the default is `$&`, which is the match result of the expression. See string.replace documentation to see other values that you can use to generate other output.| +++To convert YYYYMMDD format into YYYY-MM-DD format: ++1. Select the Published row in the grid. +1. In the **Type** field, select Date/Time so that the column is usable in charts. +1. In the **Regex Match** field, use this regular expression: `([0-9]{4})([0-9]{2})([0-9]{2})`. This regular expression: + - Matches a four-digit number, then a two-digit number, then another two-digit number. + - The parentheses form capture groups to use in the next step. +1. In the **Replace With** field, use this replacement expression: `$1-$2-$3`. This expression creates a new string with each captured group, with a hyphen between them, turning "12345678" into "1234-56-78". +1. Run the query again. ++  ## Next steps - [Workbooks overview](./workbooks-overview.md) |
azure-resource-manager | Tag Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md | Resource tags support all cost-accruing services. To ensure that cost-accruing s > [!WARNING] > Tags are stored as plain text. Never add sensitive values to tags. Sensitive values could be exposed through many methods, including cost reports, commands that return existing tag definitions, deployment histories, exported templates, and monitoring logs. +> [!WARNING] +> Be careful when using non-English characters in your tags. They can cause decoding failures when your VM's metadata is loaded from IMDS (Instance Metadata Service). + > [!IMPORTANT] > Tag names are case-insensitive for operations. A tag with a tag name, regardless of the casing, is updated or retrieved. However, the resource provider might keep the casing you provide for the tag name. You'll see that casing in cost reports. > |
azure-signalr | Server Graceful Shutdown | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/server-graceful-shutdown.md | In general, there will be four stages in a graceful shutdown process: Azure SignalR Service will try to reroute the client connection on this server to another valid server. - In this scenario, `OnConnectedAsync` and `OnDisconnectedAsync` will be triggered on the new server and the old server respectively with an `IConnectionMigrationFeature` set in the `Context`, which can be used to identify if the client connection was being migrated-in our migrated-out. It could be useful especially for stateful scenarios. + In this scenario, `OnConnectedAsync` and `OnDisconnectedAsync` will be triggered on the new server and the old server respectively with an `IConnectionMigrationFeature` set in the `Context`, which can be used to identify if the client connection was being migrated-in or migrated-out. It could be useful especially for stateful scenarios. The client connection will be immediately migrated after the current message has been delivered, which means the next message will be routed to the new server. |
azure-signalr | Signalr Howto Troubleshoot Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-troubleshoot-guide.md | public class ThreadPoolStarvationDetector : EventListener { // See: https://learn.microsoft.com/dotnet/framework/performance/thread-pool-etw-events#threadpoolworkerthreadadjustmentadjustment if (eventData.EventId == EventIdForThreadPoolWorkerThreadAdjustmentAdjustment &&- eventData.Payload[3] as uint? == ReasonForStarvation) + eventData.Payload[2] as uint? == ReasonForStarvation) { _logger.LogWarning("Thread pool starvation detected!"); } |
azure-web-pubsub | Resource Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/resource-faq.md | Azure Web PubSub service is more suitable for situations where: ## Where does my data reside? -Azure Web PubSub service works as a data processor service and doesn't store any customer content. Azure Web PubSub service processes customer data within the region the customer deploys the service instance in. If you use Azure Web PubSub service together with other Azure services, like Azure Storage for diagnostics, see [this white paper](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/) for guidance about how to keep data residency in Azure regions. +Azure Web PubSub service works as a data processor service and doesn't store any customer data. Azure Web PubSub service processes customer data within the region the customer deploys the service instance in. If you use Azure Web PubSub service together with other Azure services, like Azure Storage for diagnostics, see [this white paper](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/) for guidance about how to keep data residency in Azure regions. |
backup | Restore Sql Database Azure Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-sql-database-azure-vm.md | Title: Restore SQL Server databases on an Azure VM description: This article describes how to restore SQL Server databases that are running on an Azure VM and that are backed up with Azure Backup. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 11/08/2022 Last updated : 02/20/2023 If the total string size of files in a database is greater than a [particular li  +## Recover a database from a .bak file using SSMS ++You can use the *Restore as Files* operation to restore the database files in `.bak` format while restoring from the Azure portal. [Learn more](restore-sql-database-azure-vm.md#restore-as-files). ++When the restoration of the `.bak` file to the Azure virtual machine is complete, you can trigger restore using **TSQL commands** through SSMS. + +To restore the database files to the *original path on the source server*, remove the `MOVE` clause from the TSQL restore query. + +**Example** ++ ```sql + USE [master] + RESTORE DATABASE [<DBName>] FROM DISK = N'<.bak file path>' + ``` ++>[!Note] +>You shouldn't have the same database files on the target server (restore with replace). Also, you can [enable instant file initialization on the target server to reduce the file initialization time overhead](/sql/relational-databases/databases/database-instant-file-initialization?view=sql-server-ver16). ++To relocate the database files from the target restore server, you can build a TSQL command using the `MOVE` clauses. ++ ```sql + USE [master] + RESTORE DATABASE [<DBName>] FROM DISK = N'<.bak file path>' WITH MOVE N'<LogicalName1>' TO N'<TargetFilePath1OnDisk>', MOVE N'<LogicalName2>' TO N'<TargetFilePath2OnDisk>' GO + ``` ++**Example** ++ ```sql + USE [master] + RESTORE DATABASE [test] FROM DISK = N'J:\dbBackupFiles\test.bak' WITH FILE = 1, MOVE N'test' TO N'F:\data\test.mdf', MOVE N'test_log' TO N'G:\log\test_log.ldf', NOUNLOAD, STATS = 5 + GO + ``` ++If there are more than two files for the database, you can add additional `MOVE` clauses to the restore query. You can also use SSMS for database recovery using `.bak` files. [Learn more](/sql/relational-databases/backup-restore/restore-a-database-backup-using-ssms?view=sql-server-ver16). ++>[!Note] +>For large database recovery, we recommend that you use TSQL statements. If you want to relocate the specific database files, see the list of database files in the JSON format created during the **Restore as Files** operation. + ## Cross Region Restore As one of the restore options, Cross Region Restore (CRR) allows you to restore SQL databases hosted on Azure VMs in a secondary region, which is an Azure paired region. |
backup | Tutorial Backup Sap Hana Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-sap-hana-db.md | The Recovery Services vault is now created. ## Enable Cross Region Restore -At the Recovery Services vault, you can enable Cross Region Restore. You must turn on Cross Region Restore before you configure and protect backups on your HANA databases. Learn about [how to turn on Cross Region Restore](./backup-create-rs-vault.md#set-cross-region-restore). +At the Recovery Services vault, you can enable Cross Region Restore. Learn about [how to turn on Cross Region Restore](./backup-create-rs-vault.md#set-cross-region-restore). [Learn more](./backup-azure-recovery-services-vault-overview.md) about Cross Region Restore. |
cosmos-db | Whitepapers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/whitepapers.md | Title: Whitepapers that describe Azure Cosmos DB concepts -description: Get the list of whitepapers for Azure Cosmos DB, these whitepapers describe the concepts in depth. + Title: Conceptual whitepapers ++description: This list of conceptual whitepapers describes various Azure Cosmos DB service, development, and data concepts in depth. Previously updated : 05/07/2021 Last updated : 02/20/2023 -# Azure Cosmos DB whitepapers +# Azure Cosmos DB conceptual whitepapers + [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] Whitepapers allow you to explore Azure Cosmos DB concepts at a deeper level. This article provides you with a list of available whitepapers for Azure Cosmos DB. | **Whitepaper** | **Description** | | | |-|[Schema-Agnostic Indexing with Azure Cosmos DB](https://www.vldb.org/pvldb/vol8/p1668-shukla.pdf) | This paper describes Azure Cosmos DB's indexing subsystem. This paper includes Azure Cosmos DB capabilities such as document representation, query language, document indexing approach, core index support, and early production experiences.| -| [Azure Cosmos DB and personal data](https://servicetrust.microsoft.com/ViewPage/TrustDocuments?command=Download&downloadType=Document&downloadId=87cc6456-4b23-473c-94d3-6c713b8b8956&docTab=6d000410-c9e9-11e7-9a91-892aae8839ad_FAQ_and_White_Papers)| This paper provides guidance for Azure Cosmos DB customers managing a cloud-based database, an on-premises database, or both, and who need to ensure that the personal data in their database systems is handled and protected in accordance with current rules. | -|[Next-gen app development with Azure Cosmos DB](https://azure.microsoft.com/resources/microsoft-azure-cosmos-db-flexible-reliable-cloud-nosql-at-any-scale/) | This paper explores how Azure Cosmos DB is uniquely positioned to address the data requirements of modern apps. It includes three customers spotlights highlighting the API for MongoDB, simplified data management, cost-effective scalability, market-leading performance, and reliability. | +| [Schema-Agnostic Indexing with Azure Cosmos DB](https://www.vldb.org/pvldb/vol8/p1668-shukla.pdf) | This paper describes Azure Cosmos DB's indexing subsystem. This paper includes a deep dive into various Azure Cosmos DB capabilities. These capabilities include: document representation, query language, document indexing approach, core index support, and early production experiences. | +| [Azure Cosmos DB and personal data](https://servicetrust.microsoft.com/ViewPage/TrustDocuments?command=Download&downloadType=Document&downloadId=87cc6456-4b23-473c-94d3-6c713b8b8956&docTab=6d000410-c9e9-11e7-9a91-892aae8839ad_FAQ_and_White_Papers) | This paper provides guidance for Azure Cosmos DB customers managing a cloud-based database, an on-premises database, or both. This paper is ideal for customers who want to learn more about the rules related to how we handle and protect their personal data in their database systems. | +| [Next-gen app development with Azure Cosmos DB](https://azure.microsoft.com/resources/microsoft-azure-cosmos-db-flexible-reliable-cloud-nosql-at-any-scale/) | This paper explores how we uniquely positioned Azure Cosmos DB to address the data requirements of modern apps. 
It includes three customer spotlights highlighting the API for MongoDB, simplified data management, cost-effective scalability, market-leading performance, and reliability. | +| [Cloud-Scale Data for Spring Developers](https://azure.github.io/cloud-scale-data-for-devs-guide/) | This paper helps you build cloud-native Java applications in Azure. It helps you gain insights about using NoSQL and why you should consider Azure Cosmos DB, our fully managed, distributed NoSQL database service on Azure. | ++## Next steps ++- Learn about [common Azure Cosmos DB use cases](use-cases.md) |
cost-management-billing | Tutorial Acm Create Budgets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md | -Budgets in Cost Management help you plan for and drive organizational accountability. They help you inform others about their spending to proactively manage costs, and to monitor how spending progresses over time. You can configure alerts based on your actual cost or forecasted cost to ensure that your spend is within your organizational spend limit. When the budget thresholds you've created are exceeded, only notifications are triggered. None of your resources are affected and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. +Budgets in Cost Management help you plan for and drive organizational accountability. They help you proactively inform others about their spending to manage costs and monitor how spending progresses over time. ++You can configure alerts based on your actual cost or forecasted cost to ensure that your spending is within your organizational spending limit. Notifications are triggered when the budget thresholds you've created are exceeded. None of your resources is affected, and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. Cost and usage data is typically available within 8-24 hours and budgets are evaluated against these costs every 24 hours. Be sure to get familiar with [Cost and usage data updates](./understand-cost-mgt-data.md#cost-and-usage-data-updates-and-retention) specifics. When a budget threshold is met, email notifications are normally sent within an hour of the evaluation. |
data-factory | Data Factory Service Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-service-identity.md | See the following topics that introduce when and how to use managed identity: - [Copy data from/to Azure Data Lake Store using managed identities for Azure resources authentication](connector-azure-data-lake-store.md). See [Managed Identities for Azure Resources Overview](../active-directory/managed-identities-azure-resources/overview.md) for more background on managed identities for Azure resources, on which managed identity in Azure Data Factory is based.++See [Limitations](../active-directory/managed-identities-azure-resources/managed-identities-faq.md#limitations) of managed identities, which also apply to managed identities in Azure Data Factory. |
defender-for-cloud | Concept Cloud Security Posture Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md | Title: Overview of Cloud Security Posture Management (CSPM) description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan. Previously updated : 01/24/2023 Last updated : 02/20/2023 # Cloud Security Posture Management (CSPM) Defender for Cloud continually assesses your resources, subscriptions and organi |Aspect|Details| |-|:-| |Release state:| Foundational CSPM capabilities: GA <br> Defender Cloud Security Posture Management (CSPM): Preview |-| Prerequisites | - **Foundational CSPM capabilities** - None <br> <br> - **Defender Cloud Security Posture Management (CSPM)** - Agentless scanning requires the **Subscription Owner** to enable the plan. Anyone with a lower level of authorization can enable the Defender CSPM plan but the agentless scanner won't be enabled by default due to lack of permissions. Attack path analysis and security explorer won't be populated with vulnerabilities because the agentless scanner is disabled. | +| Prerequisites | - **Foundational CSPM capabilities** - None <br> <br> - **Defender Cloud Security Posture Management (CSPM)** - Agentless scanning requires the **Subscription Owner** to enable the plan. Anyone with a lower level of authorization can enable the Defender CSPM plan but the agentless scanner won't be enabled by default due to lack of permissions. Attack path analysis and security explorer won't populate with vulnerabilities because the agentless scanner is disabled. | |Clouds:| **Foundational CSPM capabilities** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br> <br> For Connected AWS accounts and GCP projects availability, see the [feature availability](#defender-cspm-plan-options) table. <br> <br> **Defender Cloud Security Posture Management (CSPM)** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br> <br> For Connected AWS accounts and GCP projects availability, see the [feature availability](#defender-cspm-plan-options) table. | ## Defender CSPM plan options -The Defender CSPM plan comes with two options, foundational CSPM capabilities and Defender CSPM. When you deploy Defender for Cloud to your subscription and resources, you'll automatically gain the basic coverage offered by the CSPM plan. To gain access to the other capabilities provided by Defender CSPM, you'll need to [enable the Defender Cloud Security Posture Management (CSPM) plan](enable-enhanced-security.md) on your subscription and resources. +Defender for Cloud offers foundational multicloud CSPM capabilities for free. These capabilities are automatically enabled by default on any subscription or account that has onboarded to Defender for Cloud. The foundational CSPM includes asset discovery, continuous assessment and security recommendations for posture hardening, compliance with Microsoft Cloud Security Benchmark (MCSB), and a [Secure score](secure-score-access-and-track.md), which measures the current status of your organization's posture. 
-The following table summarizes what's included in each plan and their cloud availability. +The optional Defender CSPM plan provides advanced posture management capabilities such as [Attack path analysis](#attack-path-analysis), [Cloud security explorer](#cloud-security-explorer), advanced threat hunting, [security governance capabilities](#security-governance-and-regulatory-compliance), and also tools to assess your [security compliance](#security-governance-and-regulatory-compliance) with a wide range of benchmarks, regulatory standards, and any custom security policies required in your organization, industry, or region. ++The following table summarizes each plan and its cloud availability. | Feature | Foundational CSPM capabilities | Defender CSPM | Cloud availability | |--|--|--|--| The following table summarizes what's included in each plan and their cloud avai ## Security governance and regulatory compliance -Security governance and regulatory compliance refer to the policies and processes which organizations have in place to ensure that they comply with laws, rules and regulations put in place by external bodies (government) which control activity in a given jurisdiction. Defender for Cloud allows you to view your regulatory compliance through the regulatory compliance dashboard. +Security governance and regulatory compliance refer to the policies and processes which organizations have in place. These policies ensure that they comply with laws, rules and regulations put in place by external bodies (government) which control activity in a given jurisdiction. Defender for Cloud allows you to view your regulatory compliance through the regulatory compliance dashboard. Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in the standards that you've applied to your subscriptions. The dashboard reflects the status of your compliance with these standards. Learn more about [security and regulatory compliance in Defender for Cloud](conc ## Cloud security explorer -The cloud security graph is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources. For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected is then used to build a graph representing your multicloud environment. +The cloud security graph is a graph-based context engine that exists within Defender for Cloud. The cloud security graph collects data from your multicloud environment and other data sources. For example, the cloud assets inventory, connections and lateral movement possibilities between resources, exposure to internet, permissions, network connections, vulnerabilities and more. The data collected builds a graph representing your multicloud environment. Defender for Cloud then uses the generated graph to perform an attack path analysis and find the issues with the highest risk that exist within your environment. You can also query the graph using the cloud security explorer. |
defender-for-cloud | How To Manage Cloud Security Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md | -Defender for Cloud's contextual security capabilities assists security teams in the reduction of the risk of impactful breaches. Defender for Cloud uses environmental context to perform a risk assessment of your security issues, and identifies the biggest security risks and distinguishes them from less risky issues. +Defender for Cloud's contextual security capabilities assist security teams in reducing the risk of impactful breaches. Defender for Cloud uses environmental context to perform a risk assessment of your security issues, identifies the biggest security risks, and distinguishes them from less risky issues. -By using the cloud security explorer, you can proactively identify security risks in your cloud environment by running graph-based queries on the cloud security graph, which is Defender for Cloud's context engine. You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account. +Use the cloud security explorer to proactively identify security risks in your cloud environment by running graph-based queries on the cloud security graph, which is Defender for Cloud's context engine. You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account. -With the cloud security explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, lateral movement between resources and more. +With the cloud security explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, and lateral movement between resources and across multiple clouds (Azure and AWS). -Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer?](concept-attack-path.md). +Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md). -## Availability +## Prerequisites -| Aspect | Details | -|--|--| -| Release state | Preview | -| Prerequisite | - [Enable agentless scanning](enable-vulnerability-assessment-agentless.md) <br> - [Enable Defender for CSPM](enable-enhanced-security.md) <br> - [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This will also give you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer. | -| Required plans | - Defender Cloud Security Posture Management (CSPM) enabled | -| Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** | -| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) | +- You must [enable agentless scanning](enable-vulnerability-assessment-agentless.md). ++- You must [enable Defender for CSPM](enable-enhanced-security.md). 
++- You must [enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. ++ When you enable Defender for Containers, you also gain the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in the security explorer. ++- Required roles and permissions: + - Security Reader + - Security Admin + - Reader + - Contributor + - Owner ++Check the [cloud availability tables](supported-machines-endpoint-solutions-clouds-servers.md) to see which government and cloud environments are supported. ## Build a query with the cloud security explorer -You can use the cloud security explorer to build queries that can proactively hunt for security risks in your environments. +The cloud security explorer allows you to build queries that can proactively hunt for security risks in your environments with dynamic and efficient features such as: ++- **Multi-cloud and multi-resource queries** - The entity selection control filters are grouped and combined into logical control categories to assist you in building queries across cloud environments and across resources simultaneously. ++- **Custom Search** - Use the dropdown menus to apply filters to build your query. ++- **Query templates** - Use any of the available pre-built query templates to more efficiently build your query. ++- **Share query link** - Copy and share a link of your query with other people. **To build a query**: You can use the cloud security explorer to build queries that can proactively hu 1. Navigate to **Microsoft Defender for Cloud** > **Cloud Security Explorer**. - :::image type="content" source="media/concept-cloud-map/cloud-security-explorer.png" alt-text="Screenshot of the cloud security explorer page." lightbox="media/concept-cloud-map/cloud-security-explorer.png"::: + :::image type="content" source="media/concept-cloud-map/cloud-security-explorer-main-page.png" alt-text="Screenshot of the cloud security explorer page." lightbox="media/concept-cloud-map/cloud-security-explorer-main-page.png"::: -1. Select a resource from the drop-down menu. +1. Search for and select a resource from the drop-down menu. - :::image type="content" source="media/how-to-manage-cloud-security/select-resource.png" alt-text="Screenshot of the resource drop-down menu."::: + :::image type="content" source="media/how-to-manage-cloud-security/cloud-security-explorer-select-resource.png" alt-text="Screenshot of the resource drop-down menu." lightbox="media/how-to-manage-cloud-security/cloud-security-explorer-select-resource.png"::: -1. Select **+** to add other filters to your query. For each filter selected you can add more subfilters as needed. +1. Select **+** to add other filters to your query. + + :::image type="content" source="media/how-to-manage-cloud-security/cloud-security-explorer-query-search.png" alt-text="Screenshot that shows a full query and where to select on the screen to perform the search." lightbox="media/how-to-manage-cloud-security/cloud-security-explorer-query-search.png"::: -1. Select **Search**. +1. Add subfilters as needed. - :::image type="content" source="media/how-to-manage-cloud-security/search-query.png" alt-text="Screenshot that shows a full query and where to select on the screen to perform the search."::: +1. After building your query, select **Search** to run the query. -The results will populate on the bottom of the page. 
+ :::image type="content" source="media/how-to-manage-cloud-security/cloud-security-explorer-query-search-populated.png" alt-text="Screenshot that shows where to select search to run the query and results populated." lightbox="media/how-to-manage-cloud-security/cloud-security-explorer-query-search-populated.png"::: ## Query templates -You can select an existing query template from the bottom of the page by selecting **Open query**. ---You can alter any template to search for specific results by changing the query and selecting search. +Query templates are pre-formatted searches using commonly used filters. Use one of the existing query templates from the bottom of the page by selecting **Open query**. -## Query options -The following information can be queried in the cloud security explorer: +You can modify any template to search for specific results by changing the query and selecting **Search**. -- **Recommendations** - All Defender for Cloud security recommendations. -- **Vulnerabilities** - All vulnerabilities found by Defender for Cloud.+## Share a query -- **Insights** - Contextual data about your cloud resources. - -- **Connections** - Connections that are identified between cloud resources in your environment.+Use the query link to share a query with other people. After creating a query, select **Share query link**. The link is copied to your clipboard. -You can review the [full list of recommendations, insights and connections](attack-path-reference.md). ## Next steps -View the [reference list of attack paths and cloud security graph components](attack-path-reference.md) +View the [reference list of attack paths and cloud security graph components](attack-path-reference.md). -Learn about the [Defender CSPM plan options](concept-cloud-security-posture-management.md#defender-cspm-plan-options) +Learn about the [Defender CSPM plan options](concept-cloud-security-posture-management.md#defender-cspm-plan-options). |
defender-for-cloud | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md | Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 02/09/2023 Last updated : 02/20/2023 # What's new in Microsoft Defender for Cloud? To learn about *planned* changes that are coming soon to Defender for Cloud, see Updates in February include: +- [Enhanced Cloud Security Explorer](#enhanced-cloud-security-explorer) - [Recommendation to find vulnerabilities in running container images for Linux released for General Availability (GA)](#recommendation-to-find-vulnerabilities-in-running-container-images-released-for-general-availability-ga) - [Announcing support for the AWS CIS 1.5.0 compliance standard](#announcing-support-for-the-aws-cis-150-compliance-standard) - [Microsoft Defender for DevOps (preview) is now available in other regions](#microsoft-defender-for-devops-preview-is-now-available-in-other-regions)+- [The built-in policy [Preview]: Private endpoint should be configured for Key Vault has been deprecated](#the-built-in-policy-preview-private-endpoint-should-be-configured-for-key-vault-has-been-deprecated) ++### Enhanced Cloud Security Explorer ++An improved version of the cloud security explorer includes a refreshed user experience that dramatically reduces query friction, adds the ability to run multicloud and multi-resource queries, and embeds documentation for each query option. ++The Cloud Security Explorer now allows you to run cloud-abstract queries across resources. You can use either the pre-built query templates or the custom search to apply filters to build your query. Learn [how to manage Cloud Security Explorer](how-to-manage-cloud-security-explorer.md). ### Recommendation to find vulnerabilities in running container images released for General Availability (GA) This new standard includes both existing and new recommendations that extend Def Learn how to [Manage AWS assessments and standards](how-to-manage-aws-assessments-standards.md). - ### Microsoft Defender for DevOps (preview) is now available in other regions Microsoft Defender for DevOps has expanded its preview and is now available in the West Europe and East Australia regions, when you onboard your Azure DevOps and GitHub resources. Learn more about [Microsoft Defender for DevOps](defender-for-devops-introduction.md). +### The built-in policy \[Preview]: Private endpoint should be configured for Key Vault has been deprecated ++The built-in policy [`[Preview]: Private endpoint should be configured for Key Vault`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) has been deprecated and replaced with the [`[Preview]: Azure Key Vaults should use private link`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) policy. ++Learn more about [integrating Azure Key Vault with Azure Policy](../key-vault/general/azure-policy.md#network-access). + ## January 2023 Updates in January include: |
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 02/12/2023 Last updated : 02/19/2023 # Important upcoming changes to Microsoft Defender for Cloud If you're looking for the latest release notes, you'll find them in the [What's | Planned change | Estimated date for change | |--|--|-| [The built-in policy [Preview]: Private endpoint should be configured for Key Vault is will be deprecated](#the-built-in-policy-preview-private-endpoint-should-be-configured-for-key-vault-will-be-deprecated) | February 2023 | | [Three alerts in Defender for Azure Resource Manager plan will be deprecated](#three-alerts-in-defender-for-azure-resource-manager-plan-will-be-deprecated) | March 2023 | | [Alerts automatic export to Log Analytics workspace will be deprecated](#alerts-automatic-export-to-log-analytics-workspace-will-be-deprecated) | March 2023 | | [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) | April 2023 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | August 2023 | -### The built-in policy \[Preview]: Private endpoint should be configured for Key Vault will be deprecated --**Estimated date for change: February 2023** --The built-in policy [`[Preview]: Private endpoint should be configured for Key Vault`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) is set to be deprecated and will be replaced with the [`[Preview]: Azure Key Vaults should use private link`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) policy. --The related [policy definition](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7c1b1214-f927-48bf-8882-84f0af6588b1) will also be replaced by this new policy in all standards displayed in the regulatory compliance dashboard. - ### Three alerts in Defender for Azure Resource Manager plan will be deprecated **Estimated date for change: March 2023** |
governance | Initiative Definition Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/initiative-definition-structure.md | A parameter has the following properties that are used in the policy initiative **integer**, **float**, or **datetime**. - `metadata`: Defines subproperties primarily used by the Azure portal to display user-friendly information:- - `description`: The explanation of what the parameter is used for. Can be used to provide + - `description`: (Optional) The explanation of what the parameter is used for. Can be used to provide examples of acceptable values. - `displayName`: The friendly name shown in the portal for the parameter. - `strongType`: (Optional) Used when assigning the policy definition through the portal. Provides |
hdinsight | Apache Ambari Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/apache-ambari-email.md | If you select **Start TLS** from the **Create Alert Notification** page, and you 1. Go to the Apache Ambari UI. 2. Go to **Alerts > ManageNotifications > Edit (Edit Notification)**. 3. Select **Add Property**.-4. Add the new property, `mail.smtp.ssl.protocol` with a value of `TLSv1.2`. +4. Add the new property, `mail.smtp.ssl.protocols` with a value of `TLSv1.2`. |
key-vault | Disaster Recovery Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/disaster-recovery-guide.md | You must provide the following inputs to create a Managed HSM resource: - The Azure location. - A list of initial administrators. -The following example creates an HSM named **ContosoMHSM**, in the resource group **ContosoResourceGroup**, residing in the **West US 3** location, with **the current signed in user** as the only administrator. +The following example creates an HSM named **ContosoMHSM2**, in the resource group **ContosoResourceGroup**, residing in the **West US 3** location, with **the current signed in user** as the only administrator. ```azurecli-interactive oid=$(az ad signed-in-user show --query objectId -o tsv) The `az keyvault security-domain upload` command performs the following operations: In the following example, we use the Security Domain from **ContosoMHSM** and 2 of the corresponding private keys, and upload it to **ContosoMHSM2**, which is waiting to receive a Security Domain. ```azurecli-interactive-az keyvault security-domain upload --hsm-name ContosoMHSM2 --sd-exchange-key ContosoMHSM-SDE.cer --sd-file ContosoMHSM-SD.json --sd-wrapping-keys cert_0.key cert_1.key +az keyvault security-domain upload --hsm-name ContosoMHSM2 --sd-exchange-key ContosoMHSM2-SDE.cer --sd-file ContosoMHSM-SD.json --sd-wrapping-keys cert_0.key cert_1.key ``` Now both the source HSM (ContosoMHSM) and the destination HSM (ContosoMHSM2) have the same security domain. We can now restore a full backup from the source HSM into the destination HSM. |
mariadb | Concept Reserved Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concept-reserved-pricing.md | Last updated 06/24/2022 Azure Database for MariaDB now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for MariaDB reserved capacity, you make an upfront commitment on MariaDB servers for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for MariaDB reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. </br> -You do not need to assign the reservation to specific Azure Database for MariaDB servers. An already running Azure Database for MariaDB or ones that are newly deployed, will automatically get the benefit of reserved pricing. By purchasing a reservation, you are pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure database for MariaDB compute charges that match the reservation attributes are no longer charged at the pay-as-you go rates. A reservation does not cover software, networking, or storage charges associated with the MariaDB Database server. At the end of the reservation term, the billing benefit expires, and the Azure Database for MariaDB are billed at the pay-as-you go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for MariaDB reserved capacity offering](https://azure.microsoft.com/pricing/details/mariadb/). </br> +You do not need to assign the reservation to specific Azure Database for MariaDB servers. An already running Azure Database for MariaDB or ones that are newly deployed, will automatically get the benefit of reserved pricing. By purchasing a reservation, you are pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for MariaDB compute charges that match the reservation attributes are no longer charged at the pay-as-you go rates. A reservation does not cover software, networking, or storage charges associated with the MariaDB Database server. At the end of the reservation term, the billing benefit expires, and the Azure Database for MariaDB are billed at the pay-as-you go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for MariaDB reserved capacity offering](https://azure.microsoft.com/pricing/details/mariadb/). </br> You can buy Azure Database for MariaDB reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity: If you have questions or need help, [create a support request](https://portal.az ## Next steps -The vCore reservation discount is applied automatically to the number of Azure Database for MariaDB servers that match the Azure Database for MariaDB reserved capacity reservation scope and attributes. You can update the scope of the Azure database for MariaDB reserved capacity reservation through Azure portal, PowerShell, CLI or through the API. </br></br> +The vCore reservation discount is applied automatically to the number of Azure Database for MariaDB servers that match the Azure Database for MariaDB reserved capacity reservation scope and attributes. 
You can update the scope of the Azure Database for MariaDB reserved capacity reservation through the Azure portal, PowerShell, the CLI, or the API. </br></br> To learn how to manage the Azure Database for MariaDB reserved capacity, see manage Azure Database for MariaDB reserved capacity. To learn more about Azure Reservations, see the following articles: |
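A minimal sketch of that scope update using the `reservations` Azure CLI extension; the order and reservation IDs are placeholders, and flag names should be confirmed with `az reservations reservation update --help`.

```azurecli-interactive
# Switch an existing reservation from a single subscription to shared scope (placeholder IDs)
az extension add --name reservations
az reservations reservation update \
  --reservation-order-id 00000000-0000-0000-0000-000000000000 \
  --reservation-id 11111111-1111-1111-1111-111111111111 \
  --applied-scope-type Shared
```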
mysql | Concepts Azure Ad Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-azure-ad-authentication.md | The following permissions are required to allow the UMI to read from the Microso For guidance about how to grant and use the permissions, refer to [Overview of Microsoft Graph permissions](/graph/permissions-overview) -After you grant the permissions to the UMI, they're enabled for all servers or instances created with the UMI assigned as a server identity. +After you grant the permissions to the UMI, they're enabled for all servers created with the UMI assigned as a server identity. ## Token Validation |
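A minimal sketch of attaching a UMI to a server as its identity, assuming the `az mysql flexible-server identity assign` subcommand in recent Azure CLI versions; the resource group, server, and identity names are placeholders, and the command may expect the identity's full resource ID instead of its name.

```azurecli-interactive
# Create a user-assigned managed identity and attach it to an existing flexible server as its identity
az identity create --resource-group myresourcegroup --name umiservertest
az mysql flexible-server identity assign --resource-group myresourcegroup --server-name mydemoserver --identity umiservertest
```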
mysql | How To Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-azure-ad.md | To create an Azure AD Admin user, follow the following steps. - [GroupMember.Read.All](/graph/permissions-reference#group-permissions): Allows access to Azure AD group information. - [Application.Read.ALL](/graph/permissions-reference#application-resource-permissions): Allows access to Azure AD service principal (application) information. -For guidance about how to grant and use the permissions, refer to [Overview of Microsoft Graph permissions](/graph/permissions-overview) --After you grant the permissions to the UMI, they're enabled for all servers or instances created with the UMI assigned as a server identity. - > [!IMPORTANT] > Only a [Global Administrator](../../active-directory/roles/permissions-reference.md#global-administrator) or [Privileged Role Administrator](../../active-directory/roles/permissions-reference.md#privileged-role-administrator) can grant these permissions. After you grant the permissions to the UMI, they're enabled for all servers or i > [!NOTE] > Only one Azure AD admin can be created per MySQL server, and selecting another overwrites the existing Azure AD admin configured for the server. +### Grant permissions to User assigned managed identity ++The following sample PowerShell script grants the necessary permissions for a UMI. This sample assigns permissions to the UMI `umiservertest`. ++To run the script, you must sign in as a user with a Global Administrator or Privileged Role Administrator role. ++The script grants the `User.Read.All`, `GroupMember.Read.All`, and `Application.Read.ALL` permissions to a UMI to access [Microsoft Graph](/graph/auth/auth-concepts#microsoft-graph-permissions). ++```powershell +# Script to assign permissions to the UMI "umiservertest" ++import-module AzureAD +$tenantId = '<tenantId>' # Your Azure AD tenant ID ++Connect-AzureAD -TenantID $tenantId +# Log in as a user with a "Global Administrator" or "Privileged Role Administrator" role +# Script to assign permissions to an existing UMI +# The following Microsoft Graph permissions are required: +# User.Read.All +# GroupMember.Read.All +# Application.Read.ALL ++# Search for Microsoft Graph +$AAD_SP = Get-AzureADServicePrincipal -SearchString "Microsoft Graph"; +$AAD_SP +# Use Microsoft Graph; in this example, this is the first element $AAD_SP[0] ++#Output ++#ObjectId AppId DisplayName +#-- -- -- +#47d73278-e43c-4cc2-a606-c500b66883ef 00000003-0000-0000-c000-000000000000 Microsoft Graph +#44e2d3f6-97c3-4bc7-9ccd-e26746638b6d 0bf30f3b-4a52-48df-9a82-234910c4a086 Microsoft Graph #Change ++$MSIName = "<managedIdentity>"; # Name of your user-assigned +$MSI = Get-AzureADServicePrincipal -SearchString $MSIName +if($MSI.Count -gt 1) +{ +Write-Output "More than 1 principal found, please find your principal and copy the right object ID. 
Now use the syntax $MSI = Get-AzureADServicePrincipal -ObjectId <your_object_id>" ++# Choose the right UMI ++Exit +} ++# If you have more UMIs with similar names, you have to use the proper $MSI[ ]array number ++# Assign the app roles ++$AAD_AppRole = $AAD_SP.AppRoles | Where-Object {$_.Value -eq "User.Read.All"} +New-AzureADServiceAppRoleAssignment -ObjectId $MSI.ObjectId -PrincipalId $MSI.ObjectId -ResourceId $AAD_SP.ObjectId[0] -Id $AAD_AppRole.Id +$AAD_AppRole = $AAD_SP.AppRoles | Where-Object {$_.Value -eq "GroupMember.Read.All"} +New-AzureADServiceAppRoleAssignment -ObjectId $MSI.ObjectId -PrincipalId $MSI.ObjectId -ResourceId $AAD_SP.ObjectId[0] -Id $AAD_AppRole.Id +$AAD_AppRole = $AAD_SP.AppRoles | Where-Object {$_.Value -eq "Application.Read.All"} +New-AzureADServiceAppRoleAssignment -ObjectId $MSI.ObjectId -PrincipalId $MSI.ObjectId -ResourceId $AAD_SP.ObjectId[0] -Id $AAD_AppRole.Id +``` ++In the final steps of the script, if you have more UMIs with similar names, you have to use the proper `$MSI[ ]array` number. An example is `$AAD_SP.ObjectId[0]`. ++### Check permissions for user-assigned managed identity ++To check permissions for a UMI, go to the [Azure portal](https://portal.azure.com). In the **Azure Active Directory** resource, go to **Enterprise applications**. Select **All Applications** for **Application type**, and search for the UMI that was created. ++Select the UMI, and go to the **Permissions** settings under **Security**. ++After you grant the permissions to the UMI, they're enabled for all servers created with the UMI assigned as a server identity. + ## Connect to Azure Database for MySQL flexible server using Azure AD ### 1 - Authenticate with Azure AD |
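Authenticating with Azure AD amounts to fetching an access token and presenting it as the MySQL password. A minimal sketch with the Azure CLI and the `mysql` client; the server name and Azure AD user are placeholders.

```azurecli-interactive
# Fetch an Azure AD access token scoped to Azure Database for MySQL/PostgreSQL (oss-rdbms)
az login
TOKEN=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)

# Use the token as the password; the signed-in user must have been added as an Azure AD admin or user on the server
mysql -h mydemoserver.mysql.database.azure.com --user user1@contoso.com --enable-cleartext-plugin --password=$TOKEN
```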
postgresql | Concepts Single To Flexible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md | Title: "Migration tool - Azure Database for PostgreSQL Single Server to Flexible Server - Concepts" -description: Concepts about migrating your Single server to Azure database for PostgreSQL Flexible server. +description: Concepts about migrating your Single server to Azure Database for PostgreSQL Flexible server. -Azure Database for PostgreSQL Flexible Server provides zone-redundant high availability, control over price, and control over maintenance windows. You can use the available migration tool to move your databases from Single Server to Flexible Server. To understand the differences between the two deployment options, see [this comparison chart](../flexible-server/concepts-compare-single-server-flexible-server.md). +Azure Database for PostgreSQL powered by the PostgreSQL community edition is available in two deployment modes: +* Flexible Server +* Single Server -Single to Flexible server migration tool is designed to help you with your migration from Single to flexible server task. The tool allows you to initiate migrations for multiple servers and databases in a repeatable way. The tool automates most of the migration steps to make the migration journey across Azure platforms as seamless as possible. The tool is offered **free of cost**. +Flexible Server is the next generation managed PostgreSQL service in Azure that provides maximum flexibility over your database, built-in cost-optimizations, and offers several improvements over Single Server. We recommend new customers to get started with flexible server and highly recommend existing single server customers to migrate to Flexible server to have the best experience of running PostgreSQL workloads on Azure. ->[!NOTE] -> The migration tool is in public preview. Feature, functionality, and user interfaces are subject to change. Migration initiation from Single Server is enabled in preview in these regions: Central US, West US, South Central US, North Central US, East Asia, Switzerland North, Australia South East, UAE North, UK West and Canada East. However, you can use the migration wizard from the Flexible Server side as well, in all regions. +In this article, we provide compelling reasons for single server customers to migrate to flexible server and a walk-through of tackling this migration in a simple, efficient and a hassle-free way. -## Recommended migration path +## Why should you choose Flexible Server? +* **[Superior performance](../flexible-server/overview.md)** - Flexible server runs on Linux VM that is best suited to run PostgreSQL engine as compared to Windows environment, which is the case with Single Server. -The migration tool is agnostic of source and target PostgreSQL versions. Here are some guidelines. +* **[Cost Savings](../flexible-server/how-to-deploy-on-azure-free-account.md)** – Flexible server allows you to stop and start server on-demand to lower your TCO. Your compute tier billing is stopped immediately which allows you to have significant cost savings during development, testing and for time-bound predictable production workloads. -| Source Postgres version (Single Server) | Suggested Target Postgres version (Flexible server) | Remarks | -|:|:-|:--| -| Postgres 9.5 (Retired) | Postgres 13 | You can even directly migrate to Postgres 14. Verify your application compatibility. 
| -| Postgres 9.6 (Retired) | Postgres 13 | You can even directly migrate to Postgres 14. Verify your application compatibility. | -| Postgres 10 (Retiring Nov'22) | Postgres 14 | Verify your application compatibility. | -| Postgres 11 | Postgres 14 | Verify your application compatibility. | -| Postgres 11 | Postgres 11 | You can choose to migrate to the same version in Flexible Server. You can then upgrade to a higher version in Flexible Server | +* **[Support for new PG versions](../flexible-server/concepts-supported-versions.md)** - Flexible server currently supports PG version 11 and onwards till version 14. Newer community versions of PostgreSQL will only be supported in flexible server. ->[!IMPORTANT] -> If Flexible Server is not available in your Single Server region, you may choose to deploy Flexible server in an [alternate region](../flexible-server/overview.md#azure-regions). We continue to add support for more regions with Flexible server. +* **Minimized Latency** – You can collocate your flexible server in the same availability zone as the application server that results in a minimal latency. This option isn't available in Single server. + +* **[Connection Pooling](../flexible-server/concepts-pgbouncer.md)** - Flexible server has a built-in connection pooling mechanism using **pgBouncer** that can support thousands of active connections with low overhead. +* **[Server Parameters](../flexible-server/concepts-server-parameters.md)** - Flexible server offers a richer set of server parameters when compared to Single server for server configuration and tuning. -## Overview +* **[Custom Maintenance Window](../flexible-server/concepts-maintenance.md)** - You can schedule the maintenance window of flexible server for a specific day and time of the week. Single server doesn't have this flexibility. -The migration tool provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target). +* **[High Availability](../flexible-server/concepts-high-availability.md)** - Flexible server supports HA within same availability zone and across availability zones by configuring a warm standby server that is in sync with the primary. -You choose the source server and can select up to eight databases from it. This limitation is per migration task. The migration tool automates the following steps: +* **[Security](../flexible-server/concepts-security.md)** - Flexible server offers multiple layers of information protection and encryption to protect your data. -1. Creates the migration infrastructure in the region of the target server. -2. Creates a public IP address and attaches it to the migration infrastructure. -3. Adds the migration infrastructure's IP address to the allowlist on the firewall rules of both the source and target servers. -4. Creates a migration project with both source and target types as Azure Database for PostgreSQL. -5. Creates a migration activity to migrate the databases specified by the user from the source to the target. -6. Migrates schemas from the source to the target. -7. Creates databases with the same name on the Flexible Server target. -8. Migrates data from the source to the target. +A feature-set comparison between Single server and Flexible server is available [here](../flexible-server/concepts-compare-single-server-flexible-server.md). -The following diagram shows the process flow for migration from Single Server to Flexible Server via the migration tool. 
+All the learnings acquired by running customer workloads on Azure database for PostgreSQL - Single Server is incorporated into Flexible server and we recommend using Flexible server for all your new PostgreSQL deployments on Azure. + +## How to migrate from Single Server to Flexible Server? +Let us first look at the methods you can consider performing the migration from Single server to Flexible server. -The steps in the process are: +**Offline Migration** – In an offline migration, all applications connecting to your single server are stopped and the database(s) is copied to flexible server. -1. Create a Flexible Server target. -2. Invoke migration. -3. Provision the migration infrastructure by using Azure Database Migration Service. -4. Start the migration. - 1. Initial dump/restore (online and offline) - 1. Streaming the changes (online only) -5. Cut over to the target. +**Online Migration** - In an online migration, applications connecting to your single server aren't stopped while database(s) are copied to flexible server. The initial copy of the databases is followed by replication to keep flexible server in sync with the single server. A cutover is performed when the flexible server is in complete sync with the single server resulting in minimal downtime. -The migration tool is exposed through the Azure portal and through easy-to-use Azure CLI commands. It allows you to create migrations, list migrations, display migration details, modify the state of the migration, and delete migrations. +The following table gives an overview of Offline vs Online migration. -## Comparison of migration modes +| Mode | Pros | Cons | +|:|:|:--| +| Offline | - Simple, easy and less complex to execute.<br> - Very fewer chances of failure.<br> - No restrictions in terms of database objects it can handle | Downtime to applications. | +| Online | - Very minimal downtime to application.<br> - Ideal for large databases and for customers having limited downtime requirements.<br>| - Replication used in online migration has multiple restrictions listed in this [doc](https://www.postgresql.org/docs/current/logical-replication-restrictions.html) (e.g Primary Keys needed in all tables) <br> - Tough and much complex to execute than offline migration. <br> - Greater chances of failure due to complexity of migration. <br> There is an impact on the source server’s storage and compute if the migration runs for a long time. The impact needs to be monitored closely during migration. | -The tool supports two modes for migration from Single Server to Flexible Server. The *online* option provides reduced downtime for the migration, with logical replication restrictions. The *offline* option offers a simple migration but might incur extended downtime, depending on the size of databases. +>[!IMPORTANT] +> Offline migration is the recommended way to perform migrations from single server to flexible server. Customers should consider online migrations only if their downtime requirements are not met.  -The following table summarizes the differences between the migration modes. +The following table lists the different tools available for performing the migration from single server to flexible server. 
-| Capability | Online | Offline | -|:|:-|:--| -| Database availability for reads during migration | Available | Available | -| Database availability for writes during migration | Available | Generally not recommended, because any writes initiated after the migration are not captured or migrated | -| Application suitability | Applications that need maximum uptime | Applications that can afford a planned downtime window | -| Environment suitability | Production environments | Usually development environments, testing environments, and some production environments that can afford downtime | -| Suitability for write-heavy workloads | Suitable but expected to reduce the workload during migration | Not applicable, because writes at the source after migration begins are not replicated to the target | -| Manual cutover | Required | Not required | -| Downtime required | Less | More | -| Logical replication limitations | Applicable | Not applicable | -| Migration time required | Depends on the database size and the write activity until cutover | Depends on the database size | +| Tool | Mode | Pros | Cons | +|:|:|:--|:| +| Single to Flex Migration tool (**Recommended**) | Offline | - Managed migration service.<br> - No complex setup/pre-requisites required<br> - Simple to use portal-based migration experience<br> - Fast offline migration tool<br> - No limitations in terms of size of databases it can handle. | Downtime to applications.| +| pg_dump and pg_restore | Offline | - Tried and tested tool that has been in use for long time<br> - Suited for databases of size less than 10 GB<br> | - Need prior knowledge of setting up and using this tool<br> - Slow when compared to other tools <br> Significant downtime to your application.| +| Azure DMS | Online | - Minimal downtime to your application<br> - Free of cost | - Complex setup<br> - High chances of migration failures<br> - Can't handle database of sizes > 1 TB<br> - Can't handle write-intensive workload| -Based on those differences, pick the mode that best works for your workloads. +The next section of the document gives an overview of the Single to Flex Migration tool, its implementation, limitations, and the experience that makes it the recommended tool to perform migrations from single to flexible server. -### Migration considerations for offline mode +>[!NOTE] +> The Single to Flex Migration tool currently supports only **Offline** migrations. Support for online migrations will be introduced later in the tool. -The migration process for offline mode entails a dump of the source Single Server database, followed by a restore at the Flexible Server target. +## Single to Flexible Migration tool - Overview -The following table shows the approximate time for performing offline migrations for databases of various sizes. +The single to flex migration tool is a hosted solution where we spin up a purpose-built docker container in the target Flexible server VM and drive the incoming migrations. This docker container spins up on-demand when a migration is initiated from a single server and gets decommissioned once the migration is completed. The migration container uses a new binary called [pgcopydb](https://github.com/dimitri/pgcopydb) that provides a fast and efficient way of copying databases from one server to another. Though pgcopydb uses the traditional pg_dump and pg_restore for schema migration, it implements its own data migration mechanism that involves multi-process streaming parts from source to target. 
Also, pgcopydb bypasses pg_restore way of index building and drives that internally in a way that all indexes are built concurrently. So, the data migration process is quicker with pgcopydb. Following is the process diagram of the new version of the migration tool. ->[!NOTE] -> Add about 15 minutes for the migration infrastructure to be deployed for each migration task. Each task can migrate up to eight databases. ++The following table shows the time for performing offline migrations for databases of various sizes using the single to flex migration tool. The migration was performed using a flexible server with the SKU – **Standard_D4ds_v4(4 cores, 16GB Memory, 128GB disk and 500 iops)** | Database size | Approximate time taken (HH:MM) | |:|:-| | 1 GB | 00:01 |-| 5 GB | 00:05 | -| 10 GB | 00:10 | -| 50 GB | 00:45 | -| 100 GB | 06:00 | -| 500 GB | 08:00 | -| 1,000 GB | 09:30 | --### Migration considerations for online mode --The migration process for online mode entails a dump of the Single Server database(s), a restore of that dump in the Flexible Server target, and then replication of ongoing changes. You capture change data by using logical decoding. --The time for completing an online migration depends on the incoming writes to the source server. The higher the write workload is on the source, the more time it takes for the data to be replicated to Flexible Server. --To begin the migration in either Online or Offline mode, you can get started with the Prerequisites below. --## Migration prerequisites +| 5 GB | 00:03 | +| 10 GB | 00:08 | +| 50 GB | 00:35 | +| 100 GB | 01:00 | +| 500 GB | 04:00 | +| 1,000 GB | 07:00 | >[!NOTE]-> It is very important to complete the prerequisite steps in this section before you initiate a migration using this tool. --#### Register your subscription for Azure Database Migration Service -- 1. On the Azure portal, go to the subscription of your Target server. -- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-azure-portal.png" alt-text="Screenshot of Azure portal subscription details." lightbox="./media/concepts-single-to-flexible/single-to-flex-azure-portal.png"::: -- 2. On the left menu, select **Resource Providers**. Search for **Microsoft.DataMigration**, and then select **Register**. -- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-register-data-migration.png" alt-text="Screenshot of the Register button for Azure Data Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-register-data-migration.png"::: --#### Enable logical replication -- [Enable logical replication](../single-server/concepts-logical.md) on the source server. -- :::image type="content" source="./media/concepts-single-to-flexible/logical-replication-support.png" alt-text="Screenshot of logical replication support in the Azure portal." lightbox="./media/concepts-single-to-flexible/logical-replication-support.png"::: -- >[!NOTE] - > Enabling logical replication will require a server restart for the change to take effect. --#### Create an Azure Database for PostgreSQL Flexible server -- [Create an Azure Database for PostgreSQL Flexible server](../flexible-server/quickstart-create-server-portal.md) which will be used as the Target (if not created already). +> The above numbers give you an approximation of the time taken to complete the migration. 
To get a precise value for migrating your server, we strongly recommend taking a **PITR (point in time restore)** of your single server and running it against the single to flex migration tool. -#### Set up and configure an Azure Active Directory (Azure AD) app -- [Set up and configure an Azure Active Directory (Azure AD) app](./how-to-set-up-azure-ad-app-portal.md). An Azure AD app is a critical component of the migration tool. It helps with role-based access control as the migration tool accesses both the source and target servers. --#### Assign contributor roles to Azure resources -- Assign [contributor roles](./how-to-set-up-azure-ad-app-portal.md#add-contributor-privileges-to-an-azure-resource) to source server, target server and the migration resource group. In case of private access for source/target server, add Contributor privileges to the corresponding VNet as well. --#### Verify replication privileges for Single server's admin user -- Please run the following query to check if single server's admin user has replication privileges. +>[!IMPORTANT] +> In order to perform faster migrations, pick a higher SKU for your flexible server. You can always change the SKU to match the application needs post migration. -``` - SELECT usename, userepl FROM pg_catalog.pg_user; -``` +## Limitations +* You can have only one active migration to your flexible server. +* You can select a max of eight databases in one migration attempt. If you've more than eight databases, you must wait for the first migration to be complete before initiating another migration for the rest of the databases. Support for migration of more than eight databases in a single migration will be introduced later. +* The source and target server must be in the same Azure region. Cross region migrations are not supported. +* The tool takes care of the migration of data and schema. It doesn't migrate managed service features such as server parameters, connection security details, firewall rules, users, roles and permissions. In the later part of the document, we point you to docs that can help you perform the migration of users, roles and firewall rules from single server to flexible server. +* The migration tool shows the number of tables copied from source to target server. You need to validate the data in target server post migration. +* The tool only migrates user databases and not system databases like template_0, template_1, azure_sys and azure_maintenance.   - Verify that the **userpl** column for the single server's admin user has the value **true**. If it is set to **false**, please grant the replication privileges to the admin user by running the following query on the single server. +## Experience - ``` - ALTER ROLE <adminusername> WITH REPLICATION; -``` +Get started with the Single to Flex migration tool by using any of the following methods: -#### Allow-list required extensions +* [Migrate using the Azure portal](../migrate/how-to-migrate-single-to-flexible-portal.md) +* [Migrate using the Azure CLI](../migrate/how-to-migrate-single-to-flexible-cli.md) - If you are using any PostgreSQL extensions on the Single Server, it has to be allow-listed on the Flexible Server before initiating the migration using the steps below: +## Best practices - 1. Use select command in the Single Server environment to list all the extensions in use. +Here, we go through the phases of an overall database migration journey, with guidance on how to use Single to Flex migration tool in the process. 
- ``` - select * from pg_extension - ``` +### Pre migration - The output of the above command gives the list of extensions currently active on the Single Server +#### Application compatibility +Single server supports PG version 9.6,10 and 11 while Flexible server supports PG version 11, 12, 13 and 14. Given the differences in supported versions, you might be moving across versions while migrating from single to flexible server. If that is the case, make sure your application works well with the version of flexible server you're trying to migrate to. If there are breaking changes, make sure to fix them on your application before migrating to flexible server. Use this [link](https://www.postgresql.org/docs/14/appendix-obsolete.html) to check for any breaking changes while migrating to the target version. - 2. Enable the list of extensions obtained from step 1 in the Flexible Server. Search for the 'azure.extensions' parameter by selecting the Server Parameters tab in the side pane. Select the extensions that are to be allow-listed and click Save. +#### Database migration planning +The most important thing to consider for performing offline migration using the single to flex migration tool is the downtime incurred by the application. - :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-azure-extensions.png" alt-text="Screenshot of PG extension support in the Flexible Server Azure portal." lightbox="./media/concepts-single-to-flexible/single-to-flex-azure-extensions.png"::: +##### How to calculate the downtime?   +In most cases, the non-prod servers (dev, UAT, test, staging) are migrated using offline migrations. Since these servers have less data than the production servers, the migration completes fast. For migration of production server, you need to know the time it would take to complete the migration to plan for it in advance. -### Data and schema migration +The time taken for an offline migration to complete is dependent on several factors that includes the number of databases, size of databases, number of tables inside each database, number of indexes, and the distribution of data across tables. It also depends on the SKU of the source and target server, and the IOPS available on the source and target server. Given the many factors that can affect the migration time, it's hard to estimate the total time for the offline migration to complete. The best approach would be to try it on a server restored from the primary server. -After you finish the prerequisites, migrate the data and schemas by using one of these methods: +For calculating the total downtime to perform offline migration of production server, the following phases are considered. -- [Migrate by using the Azure portal](../migrate/how-to-migrate-single-to-flexible-portal.md)-- [Migrate by using the Azure CLI](../migrate/how-to-migrate-single-to-flexible-cli.md)+* **Migration of PITR** - The best way to get a good estimate on the time taken to migrate your production database server would be to take a point-in time restore of your production server and run the offline migration on this newly restored server. -### Post-migration actions and considerations +* **Migration of Buffer** - After completing the above step, you can plan for actual production migration during a time period when the application traffic is low. This migration can be planned on the same day or probably a week away. By this time, the size of the source server might have increased. 
Update your estimated migration time for your production server based on the amount of this increase. If the increase is significant, you can consider doing another test using the PITR server. But for most servers the size increase shouldn't be significant enough. -- If you are using sequences, once the migration has successfully completed, ensure that you update the current value of sequences in target database to match the values in the source database.+* **Validation** - Once the offline migration completes for the production server, you need to verify if the data in flexible server is an exact copy of the single server. Customers can use opensource/thirdparty tools or can do the validation manually. Prepare the validation steps that you would like to do in advance of the actual migration. Validation can include:  + * Row count match for all the tables involved in the migration.  + * Matching counts for all the database object (tables, sequences, extensions, procedures, indexes) + * Comparing max or min IDs of key application related columns +>[!NOTE] +> The size of databases is not the right metric for validation.The source server might have bloats/dead tuples which can bump up the size on the source server. Also, the storage containers used in single and flexible servers are completely different. It is completely normal to have size differences between source and target servers. If there is an issue in the first three steps of validation, it indicates a problem with the migration.  -- All the resources that the migration tool creates will be automatically cleaned up, whether the migration succeeds, fails, or is canceled. No action is required from you.+* **Migration of server settings** - The users, roles/privileges, server parameters, firewall rules (if applicable), tags, alerts need to be manually copied from single server to flexible server. Users and roles are migrated from Single to Flexible server by following the steps listed in this [doc](../single-server/how-to-upgrade-using-dump-and-restore.md). -- If your migration fails, you can create a new migration task with a different name and retry the operation.+* **Changing connection strings** - Post successful validation, application should change their connection strings to point to flexible server. This activity is coordinated with the application team to make changes to all the references of connection strings pointing to single server. Note that in the flexible server the user parameter in the connection string no longer needs to be in the **username@servername** format. You should just use the **user=username** format for this parameter in the connection string +For example +Psql -h **mysingleserver**.postgres.database.azure.com -u **user1@mysingleserver** -d db1 +should now be of the format +Psql -h **myflexserver**.postgres.database.azure.com -u user1 -d db1 -- If you have more than eight databases on your Single Server and you want to migrate them all, we recommend that you create multiple migration tasks. Each task can migrate up to eight databases.+**Total planned downtime** = **Time to migrate PITR** + **time to migrate Buffer** + **time for Validation** + **time to migrate server settings** + **time to switch connection strings to the flexible server.** -- The migration does not move the database users and roles of the source server. 
You have to manually create these and apply them to the target server after migration.+While most frequently a migration runs without a hitch, it’s good practice to plan for contingencies if there is additional time required for debugging or if a migration may need to be restarted. -- For security reasons, we highly recommended that you delete the Azure AD app after the migration finishes.+#### Migration prerequisites +The following pre-requisites need to be taken care of before using the Single to Flex Migration tool for migration -- After you validate your data and make your application point to Flexible Server, you can consider deleting your Single Server.+##### Network connectivity between Single and Flexible Server +The following table summarizes the possible network configuration on single and flexible server. -## Limitations +|Server Type| Public Access | Private Access | +|:|:-|:--| +| Single Server | Public access turned **ON** with firewall rules or VNet rules (service endpoints). | Public access turned OFF with private end points configured.| +| Flexible Server | Public access with firewall rules | Server deployed inside a VNet and delegated subnet.| -### Size +>[!NOTE] +> Currently private end points are not supported in Flexible server. It will be supported at a later point time. -- You can migrate databases of sizes **up to 1 TB** by using this tool. To migrate larger databases or heavy write workloads, contact your account team to reach out to us or file a support ticket.+The following table summarizes the list of networking scenarios supported by the Single to Flex migration tool. + +|Single Server Config| Flexible Server Config | Supported by Migration tool | +|:|:-|:--| +| Public Access | Public Access | Yes| +| Public Access | Private Access | Yes| +| Private Access | Public Access | No| +| Private Access | Private Access | Yes| -- In one migration attempt, you can migrate up to eight user databases from Single Server to Flexible Server. If you have more databases to migrate, you can create multiple migrations between the same Single Server and Flexible Server.+**Steps needed to establish connectivity between your Single and Flexible Server** +* If your single server is public access (case #1 and case #2 in the above table), there's nothing needed from your end. The single to flex migration tool automatically establishes connection between single and flexible server and the migration will go through. +* If your single server is in private access, then the only supported scenario is when your Flexible server is inside a VNet. If your flexible server is deployed in the same VNet as the private end point of your Single server, connections between single server and flexible server should automatically work provided there is no network security group(NSGs) blocking the connectivity between subnets. If flexible server is deployed in another VNet, [peering should be established between the VNets](../../virtual-network/tutorial-connect-virtual-networks-portal.md) for the connection to work between Single and Flexible server. -### Performance +##### Allow list required extensions +Use the following select command in the Single Server databases to list all the extensions that are being used. -- The migration infrastructure is deployed on a four-vCore virtual machine that might limit migration performance. 
+``` + select * from pg_extension; +``` -- The deployment of migration infrastructure takes 10 to 15 minutes before the actual data migration starts, regardless of the size of data or the migration mode (online or offline).+Search for the **azure.extensions** parameter on the Server Parameters blade on your Flexible server. Select the list of extensions obtained by running the above query on your Single server database to this server parameter and click Save. You should wait for the deployment to complete before proceeding further. -### Replication -- The migration tool uses a logical decoding feature of PostgreSQL to perform the online migration. The decoding feature has the following limitations. For more information about logical replication limitations, see the [PostgreSQL documentation](https://www.postgresql.org/docs/10/logical-replication-restrictions.html).- - Data Definition Language (DDL) commands are not replicated. - - Sequence data is not replicated. - - Truncate commands are not replicated. - - To work around this limitation, use `DELETE` instead of `TRUNCATE`. To avoid accidental `TRUNCATE` invocations, you can revoke the `TRUNCATE` privilege from tables. +>[!NOTE] +> If TIMESCALEDB, PG_PARTMAN or POSTGIS_TIGER_DECODER extensions are used in your single server database, please raise a support request since the Single to Flex migration tool will not handle these extensions. - - Views, materialized views, partition root tables, and foreign tables are not migrated. +Check if the list contains any of the following extensions: +* PG_CRON +* PG_HINT_PLAN +* PG_PARTMAN_BGW +* PG_PREWARM +* PG_STAT_STATEMENTS +* PG_AUDIT +* PGLOGICAL +* WAL2JSON -- Logical decoding will use resources in the Single Server. Consider reducing the workload, or plan to scale CPU/memory resources at the Single Server during the migration.+If yes, then follow the below steps. -### Other limitations +Go to the server parameters blade and search for **shared_preload_libraries** parameter. This parameter indicates the set of extension libraries that are preloaded at the server restart. Pg_cron and pg_stat_statements extensions are selected by default. Select the list of above extensions used by the single server database to this parameter and click on Save. -- The migration tool migrates only data and schemas of the Single Server databases to Flexible Server. It does not migrate other features, such as server parameters, connection security details, firewall rules, users, roles, and permissions. In other words, everything except data and schemas must be manually configured in the Flexible Server target. -- The migration tool does not validate the data in the Flexible Server target after migration. You must do this validation manually.+The changes to this server parameter would require a server restart to come into effect. -- The migration tool migrates only user databases, including Postgres databases. It doesn't migrate system or maintenance databases. -- If migration fails, there is no option to retry the same migration task. You have to create a new migration task with a unique name.+Use the **Save and Restart** option and wait for the postgresql server to restart. -- The migration tool does not include an assessment of your Single Server.+### Migration +Once the pre-migration steps are complete, you're ready to carry out the migration of the production databases of your single server. 
At this point, you've finalized the day and time of production migration along with a planned downtime for your applications. -## Best practices +* Create a flexible server with a **General-Purpose** or **Memory Optimized** compute tier. Pick a minimum 4VCore or higher SKU to complete the migration quickly. Burstable SKUs are blocked for use as migration target servers. +* Don't include HA or geo redundancy option while creating flexible server. You can always enable it with zero downtime once the migration from single server is complete. Don't create any read-replicas yet on the flexible server. +* Before initiating the migration, stop all the applications that connect to your production server. +* Checkpoint the source server by running **checkpoint** command and restart the source server. +This command ensures any remaining applications or connections are disconnected. Additionally, you can run **select * from pg_stat_activity;** after the restart to ensure no applications is connected to the source server. -- As part of discovery and assessment, take the server SKU, CPU usage, storage, database sizes, and extensions usage as some of the critical data to help with migrations.-- Plan the mode of migration for each database. For simpler migrations and smaller databases, consider offline mode.-- Batch similar-sized databases in a migration task. -- Perform large database migrations with one or two databases at a time to avoid source-side load and migration failures.-- Perform test migrations before migrating for production:- - Test migrations are an important for ensuring that you cover all aspects of the database migration, including application testing. - - The best practice is to begin by running a migration entirely for testing purposes. After a newly started migration enters the continuous replication (CDC) phase with minimal lag, make your Flexible Server target the primary database server. Use that target for testing the application to ensure expected performance and results. If you're migrating to a higher Postgres version, test for application compatibility. +Trigger the migration of your production databases using the single to flex migration tool. The migration requires close monitoring, and the monitoring user interface of the migration tool comes in handy. Check the migration status over the period of time to ensure there is progress and wait for the migration to complete. - - After testing is completed, you can migrate the production databases. At this point, you need to finalize the day and time of production migration. Ideally, there's low application use at this time. All stakeholders who need to be involved should be available and ready. +### Post migration +* Once the migration is complete, verify the data on your flexible server and make sure it's an exact copy of the single server. +* Post verification, enable HA/ backup options as needed on your flexible server. +* Change the SKU of the flexible server to match the application needs. This change needs a database server restart. +* Migrate users and roles from single to flexible servers. This step can be done by creating users on flexible servers and providing them with suitable privileges or by using the steps that are listed in this [doc](../single-server/how-to-upgrade-using-dump-and-restore.md). +* If you've changed any server parameters from their default values in single server, copy those server parameter values in flexible server. 
+* Copy other server settings like tags, alerts, firewall rules (if applicable) from single server to flexible server. +* Make changes to your application to point the connection strings to flexible server. +* Monitor the database performance closely to see if it requires performance tuning. - The production migration requires close monitoring. For an online migration, the replication must be completed before you perform the cutover, to prevent data loss. --- Cut over all dependent applications to access the new primary database, and open the applications for production usage.-- After the application starts running on the Flexible Server target, monitor the database performance closely to see if performance tuning is required.--## Other migration methods --The intent of the tool is to provide a seamless migration experience for most workloads. However, you may also choose other options to migrate using [dump/restore](../single-server/how-to-upgrade-using-dump-and-restore.md) or using [Azure Database Migration Service (DMS)](../../dms/tutorial-postgresql-azure-postgresql-online-portal.md) or using any 3rd party tools. -- ## Next steps - [Migrate to Flexible Server by using the Azure portal](../migrate/how-to-migrate-single-to-flexible-portal.md) |
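A minimal sketch of the row-count validation described in the post-migration steps; server, database, and user names are placeholders, and on Single Server the user keeps the `user@servername` format.

```bash
# Compare per-table live row counts between the Single Server source and the Flexible Server target
# n_live_tup is an estimate; run SELECT count(*) on key tables for an exact check
# Export PGPASSWORD or use a .pgpass file for non-interactive runs
QUERY="SELECT schemaname, relname, n_live_tup FROM pg_stat_user_tables ORDER BY 1, 2;"
psql "host=mysingleserver.postgres.database.azure.com dbname=db1 user=user1@mysingleserver sslmode=require" -c "$QUERY" > source_counts.txt
psql "host=myflexserver.postgres.database.azure.com dbname=db1 user=user1 sslmode=require" -c "$QUERY" > target_counts.txt
diff source_counts.txt target_counts.txt && echo "Row counts match"
```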
purview | Catalog Managed Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-managed-vnet.md | Currently, the following data sources are supported to have a managed private en - Azure SQL Managed Instance - Azure Synapse Analytics -Additionally, you can deploy managed private endpoints for your Azure Key Vault resources if you need to run scans using any authentication options rather than Managed Identities, such as SQL Authentication or Account Key. --> [!IMPORTANT] -> If you are planning to scan Azure Synapse workspaces using Managed Virtual Network, you are also required to [configure Azure Synapse workspace firewall access](register-scan-synapse-workspace.md#set-up-azure-synapse-workspace-firewall-access) to enable **Allow Azure services and resources to access this workspace**. Currently, we do not support setting up scans for an Azure Synapse workspace from the Microsoft Purview governance portal, if you cannot enable **Allow Azure services and resources to access this workspace** on your Azure Synapse workspaces. If you cannot enable the firewall: -> - You can use [Microsoft Purview REST API - Scans - Create Or Update](/rest/api/purview/scanningdataplane/scans/create-or-update/) to create a new scan for your Synapse workspaces including dedicated and serverless pools. -> - You must use **SQL Authentication** as authentication mechanism. +Additionally, you can deploy managed private endpoints for your Azure Key Vault resources if you need to run scans using any authentication options rather than Managed Identities, such as SQL Authentication or Account Key. ### Managed Virtual Network |
purview | Concept Asset Normalization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-asset-normalization.md | Before: `https://myAccount.file.core.windows.net//myshare/folderA////folderB/` After: `https://myaccount.file.core.windows.net/myshare/folderA/folderB/` -### Lowercase ADF sections -Applies to: Azure Data Factory --Before: `/subscriptions/01234567-abcd-9876-0000-ba9876543210/resourceGroups/fooBar/providers/Microsoft.DataFactory/factories/fooFactory/pipelines/barPipeline/activities/barFoo` --After: `/subscriptions/01234567-abcd-9876-0000-ba9876543210/resourceGroups/foobar/providers/microsoft.datafactory/factories/foofactory/pipelines/barpipeline/activities/barfoo` - ### Convert to ADL scheme Applies to: Azure Data Lake Storage Gen1 |
purview | Concept Resource Sets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-resource-sets.md | This article helps you understand how Microsoft Purview uses resource sets to ma ## Background info -At-scale data processing systems typically store a single table in storage as multiple files. In the Microsoft Purview data catalog, this concept is represented by using resource sets. A resource set is a single object in the catalog that represents a large number of assets in storage. +At-scale data processing systems typically store a single table in storage as multiple files. In the Microsoft Purview Data Catalog, this concept is represented by using resource sets. A resource set is a single object in the catalog that represents a large number of assets in storage. For example, suppose your Spark cluster has persisted a DataFrame into an Azure Data Lake Storage (ADLS) Gen2 data source. Although in Spark the table looks like a single logical resource, on the disk there are likely thousands of Parquet files, each of which represents a partition of the total DataFrame's contents. IoT data and web log data have the same challenge. Imagine you have a sensor that outputs log files several times a second. It won't take long until you have hundreds of thousands of log files from that single sensor. When Microsoft Purview detects resources that it thinks are part of a resource s ## Advanced resource sets -Microsoft Purview can customize and further enrich your resource set assets through the **Advanced Resource Sets** capability. Advanced resource sets allows Microsoft Purview to understand the underlying partitions of data ingested and enables the creation of [resource set pattern rules](how-to-resource-set-pattern-rules.md) that customize how Microsoft Purview groups resource sets during scanning. +Microsoft Purview can customize and further enrich your resource set assets through the **Advanced Resource Sets** capability. Advanced resource sets allow Microsoft Purview to understand the underlying partitions of data ingested and enables the creation of [resource set pattern rules](how-to-resource-set-pattern-rules.md) that customize how Microsoft Purview groups resource sets during scanning. -When Advanced Resource Sets are enabled, Microsoft Purview run extra aggregations to compute the following information about resource set assets: +When Advanced Resource Sets are enabled, Microsoft Purview runs extra aggregations to compute the following information about resource set assets: - A sample path from a file that comprises the resource set. - A partition count that shows how many files make up the resource set. These properties can be found on the asset details page of the resource set. ### Turning on advanced resource sets -Advanced resource sets is off by default in all new Microsoft Purview instances. Advanced resource sets can be enabled from **Account information** in the management hub. Only those users who are added to the Data Curator role at root collection, can manage Advanced Resource Sets settings. +Advanced resource sets are off by default in all new Microsoft Purview instances. Advanced resource sets can be enabled from **Account information** in the management hub. Only those users who are added to the Data Curator role at root collection, can manage Advanced Resource Sets settings. :::image type="content" source="media/concept-resource-sets/advanced-resource-set-toggle.png" alt-text="Turn on Advanced resource set." 
border="true"::: After enabling advanced resource sets, the additional enrichments will occur on all newly ingested assets. The Microsoft Purview team recommends waiting an hour before scanning in new data lake data after toggling on the feature. > [!IMPORTANT]-> Enabling advanced resource sets will impact the refresh rate of asset and classification insights. When advanced resource sets is on, asset and classification insights will only update twice a day. +> Enabling advanced resource sets will impact the refresh rate of asset and classification insights. When advanced resource sets are on, asset and classification insights will only update twice a day. ## Built-in resource set patterns Microsoft Purview supports the following resource set patterns. These patterns c | Date(yyyy/mm/dd)InPath | {Year}/{Month}/{Day} | Year/month/day pattern spanning multiple folders | -## How resource sets are displayed in the Microsoft Purview data catalog +## How resource sets are displayed in the Microsoft Purview Data Catalog When Microsoft Purview matches a group of assets into a resource set, it attempts to extract the most useful information to use as a display name in the catalog. Some examples of the default naming convention applied: When scanning a storage account, Microsoft Purview uses a set of defined pattern - Putting an asset into the wrong resource set - Incorrectly marking an asset as not being a resource set -To customize or override how Microsoft Purview detects which assets are grouped as resource sets and how they are displayed within the catalog, you can define pattern rules in the management center. For step-by-step instructions and syntax, please see [resource set pattern rules](how-to-resource-set-pattern-rules.md). +To customize or override how Microsoft Purview detects which assets are grouped as resource sets and how they're displayed within the catalog, you can define pattern rules in the management center. For step-by-step instructions and syntax, see [resource set pattern rules](how-to-resource-set-pattern-rules.md). ## Known limitations with resource sets - By default, resource set assets will only be deleted by a scan if [Advanced Resource sets](#advanced-resource-sets) are enabled. If this capability is off, resource set assets can only be deleted manually or via API.-- Currently, resource set assets will apply the first schema and classification discovered by the scan. Subsequent scans won't update the schema. ## Next steps |
purview | Register Scan Power Bi Tenant Cross Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-cross-tenant.md | Use either of the following deployment checklists during the setup, or for troubleshooting - `*.analysis.windows.net` 1. Network connectivity from the self-hosted runtime to Microsoft services is enabled.- 1. [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed. + 1. [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed. Restart the machine after you install the JDK for the change to take effect. 1. In the Power BI tenant, in Azure Active Directory, create a security group. 1. In the Power BI tenant, from the Azure Active Directory tenant, make sure the [Service Principal is a member of the new security group](#authenticate-to-power-bi-tenant). 1. On the Power BI Tenant Admin portal, validate that [Allow service principals to use read-only Power BI admin APIs](#associate-the-security-group-with-power-bi-tenant) is enabled for the new security group. |
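A minimal sketch of scripting the security-group steps with the Azure CLI; the group name and the app registration's client ID are placeholders (on older CLI versions the service principal's object ID is exposed as `objectId` rather than `id`).

```azurecli-interactive
# Create the security group and add the scanning service principal as a member (placeholder values)
az ad group create --display-name "Purview-PowerBI-Scan" --mail-nickname "PurviewPowerBIScan"
SP_OBJECT_ID=$(az ad sp show --id <application-client-id> --query id --output tsv)
az ad group member add --group "Purview-PowerBI-Scan" --member-id $SP_OBJECT_ID
```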
purview | Register Scan Power Bi Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md | Use any of the following deployment checklists during the setup or for troubleshooting - `*.analysis.windows.net` 3. Network connectivity from Self-hosted runtime to Microsoft services is enabled.- 4. [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed. + 4. [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed. Restart the machine after you install the JDK for the change to take effect. 1. In the Azure Active Directory tenant, create a security group. Use any of the following deployment checklists during the setup or for troubleshooting 1. Validate Self-hosted runtime settings: 1. The latest version of [Self-hosted runtime](https://www.microsoft.com/download/details.aspx?id=39717) is installed on the VM.- 2. [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed. + 2. [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed. Restart the machine after you install the JDK for the change to take effect. 1. Validate App registration settings to make sure: 1. An App registration exists in your Azure Active Directory tenant. |
reliability | Availability Zones Service Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md | Azure offerings are grouped into three categories that reflect their _regional_ | [Azure Storage: Azure Data Lake Storage](migrate-storage.md) |  | | [Azure Storage: Disk Storage](migrate-storage.md) |  | | [Azure Storage: Blob Storage](migrate-storage.md) |  |-| [Azure Storage: Managed Disks](migrate-storage.md) |   | +| [Azure Storage: Managed Disks](https://learn.microsoft.com/azure/virtual-machines/disks-redundancy?source=recommendations) |   | | [Azure Virtual Machine Scale Sets](../virtual-machines/availability.md) |   | | [Azure Virtual Machines](../virtual-machines/availability.md) |  | | Virtual Machines: [Av2-Series](../virtual-machines/availability.md) |  | |
sentinel | Configure Audit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-audit.md | Some installations of SAP systems may not have audit log enabled by default. For  -1. Specify a name for the profile in the **Profile/Filter Number** field. +1. Specify a name for the profile in the **Profile/Filter Number** field. ++ > [!NOTE] + > A vanilla SAP installation requires this additional step: right-click the profile you created and create a new filter. 1. Mark the **Filter for recording active** checkbox. |
site-recovery | Azure To Azure Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md | This table summarizes support for the cache storage account used by Site Recover **Setting** | **Support** | **Details** | | -General purpose V2 storage accounts (Hot and Cool tier) | Supported | Usage of GPv2 is recommended because GPv1 does not support ZRS (Zonal Redundant Storage). +General purpose V2 storage accounts (Hot and Cool tier) | Supported | Usage of GPv2 is recommended because GPv1 doesn't support ZRS (Zonal Redundant Storage). Premium storage | Supported | Use Premium Block Blob storage accounts to get High Churn support (in Public Preview). For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md). Region | Same region as virtual machine | Cache storage account should be in the same region as the virtual machine being protected. Subscription | Can be different from source virtual machines | Cache storage account need not be in the same subscription as the source virtual machine(s).-Azure Storage firewalls for virtual networks | Supported | If you are using firewall enabled cache storage account or target storage account, ensure you ['Allow trusted Microsoft services'](../storage/common/storage-network-security.md#exceptions).<br></br>Also, ensure that you allow access to at least one subnet of source Vnet.<br></br>Note: Do not restrict virtual network access to your storage accounts used for Site Recovery. You should allow access from 'All networks'. -Soft delete | Not supported | Soft delete is not supported because once it is enabled on cache storage account, it increases cost. Azure Site Recovery performs frequent creates/deletes of log files while replicating causing costs to increase. +Azure Storage firewalls for virtual networks | Supported | If you're using firewall enabled cache storage account or target storage account, ensure you ['Allow trusted Microsoft services'](../storage/common/storage-network-security.md#exceptions).<br></br>Also, ensure that you allow access to at least one subnet of source Vnet.<br></br>Note: Don't restrict virtual network access to your storage accounts used for Site Recovery. You should allow access from 'All networks'. +Soft delete | Not supported | Soft delete isn't supported because once it is enabled on cache storage account, it increases cost. Azure Site Recovery performs frequent creates/deletes of log files while replicating causing costs to increase. Encryption at rest (CMK) | Supported | Storage account encryption can be configured with customer managed keys (CMK) The following table lists the limits in terms of number of disks that can replicate to a single storage account. 
Windows 7 (x64) with SP1 onwards | From version [9.30](https://support.microsoft **Operating system** | **Details** | -Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,[7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609/), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher), [8.6](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), 8.7 (Azure Site Recovery for 8.7 is not available in China regions). -CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4 (4.18.0-305.30.1.el8_4.x86_64 or later), 8.5 (4.18.0-348.5.1.el8_5.x86_64 or later), 8.6, 8.7 (Azure Site Recovery for 8.7 is not available in China regions). +Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,[7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609/), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher), [8.6](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), 8.7 (Azure Site Recovery for 8.7 isn't available in China regions). +CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4 (4.18.0-305.30.1.el8_4.x86_64 or later), 8.5 (4.18.0-348.5.1.el8_5.x86_64 or later), 8.6, 8.7 (Azure Site Recovery for 8.7 isn't available in China regions). 
Ubuntu 14.04 LTS Server | Includes support for all 14.04.*x* versions; [Supported kernel versions](#supported-ubuntu-kernel-versions-for-azure-virtual-machines); Ubuntu 16.04 LTS Server | Includes support for all 16.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloud-init configuration). Password-based sign-in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu of the failed over VM in the Azure portal. Ubuntu 18.04 LTS Server | Includes support for all 18.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloud-init configuration). Password-based sign-in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu of the failed over VM in the Azure portal. Ubuntu 20.04 LTS server | Includes support for all 20.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines) Debian 7 | Includes support for all 7. *x* versions [Supported kernel versions](#supported-debian-kernel-versions-for-azure-virtual-machines) Debian 8 | Includes support for all 8. *x* versions [Supported kernel versions](#supported-debian-kernel-versions-for-azure-virtual-machines)-Debian 9 | Includes support for 9.1 to 9.13. Debian 9.0 is not supported. [Supported kernel versions](#supported-debian-kernel-versions-for-azure-virtual-machines) +Debian 9 | Includes support for 9.1 to 9.13. Debian 9.0 isn't supported. 
[Supported kernel versions](#supported-debian-kernel-versions-for-azure-virtual-machines) Debian 10 | [Supported kernel versions](#supported-debian-kernel-versions-for-azure-virtual-machines) Debian 11 | [Supported kernel versions](#supported-debian-kernel-versions-for-azure-virtual-machines) SUSE Linux Enterprise Server 12 | SP1, SP2, SP3, SP4, SP5 [(Supported kernel versions)](#supported-suse-linux-enterprise-server-12-kernel-versions-for-azure-virtual-machines) SUSE Linux Enterprise Server 11 | SP4 Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) (running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4, 5, and 6 (UEK3, UEK4, UEK5, UEK6), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6 <br/><br/>8.1 (running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/) Support for rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/)). > [!NOTE]-> For Linux versions, Azure Site Recovery does not support custom OS kernels. Only the stock kernels that are part of the distribution minor version release/update are supported. +> For Linux versions, Azure Site Recovery doesn't support custom OS kernels. Only the stock kernels that are part of the distribution minor version release/update are supported. > [!NOTE] > To support latest Linux kernels within 15 days of release, Azure Site Recovery rolls out hot fix patch on top of latest mobility agent version. This fix is rolled out in between two major version releases. To update to latest version of mobility agent (including hot fix patch), follow steps mentioned in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in Azure to Azure DR scenario. Debian 11 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azur **Release** | **Mobility service version** | **Kernel version** | | | |-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.52](https://support.microsoft.coms/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SUSE Linux Enterprise Server 12 kernels supported in this release. | +SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SUSE Linux Enterprise Server 12 kernels supported in this release. 
| SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.106-azure:5 </br> 4.12.14-16.112-azure | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SLES 12 Azure kernels supported in this release. | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br>4.12.14-16.100-azure:5 </br> 4.12.14-16.103-azure:5 | SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.48](https://support.microsof * Volume > [!NOTE]-> Multipath software is not supported. +> Multipath software isn't supported. ## Replicated machines - compute settings Azure gallery images - Microsoft published | Supported | Supported if the VM run Azure Gallery images - Third party published | Supported | Supported if the VM runs on a supported operating system. Custom images - Third party published | Supported | Supported if the VM runs on a supported operating system. VMs migrated using Site Recovery | Supported | If a VMware VM or physical machine was migrated to Azure using Site Recovery, you need to uninstall the older version of Mobility service running on the machine, and restart the machine before replicating it to another Azure region.-Azure RBAC policies | Not supported | Azure role-based access control (Azure RBAC) policies on VMs are not replicated to the failover VM in target region. -Extensions | Not supported | Extensions are not replicated to the failover VM in target region. It needs to be installed manually after failover. +Azure RBAC policies | Not supported | Azure role-based access control (Azure RBAC) policies on VMs aren't replicated to the failover VM in target region. +Extensions | Not supported | Extensions aren't replicated to the failover VM in target region. It needs to be installed manually after failover. Proximity Placement Groups | Supported | Virtual machines located inside a Proximity Placement Group can be protected using Site Recovery. Tags | Supported | User-generated tags applied on source virtual machines are carried over to target virtual machines post-test failover or failover. Tags on the VM(s) are replicated once every 24 hours for as long as the VM(s) is/are present in the target region. Tags | Supported | User-generated tags applied on source virtual machines are c **Action** | **Details** -- | -Resize disk on replicated VM | Resizing up on the source VM is supported. Resizing down on the source VM is not supported. Resizing should be performed before failover. 
No need to disable/re-enable replication.<br/><br/> If you change the source VM after failover, the changes aren't captured.<br/><br/> If you change the disk size on the Azure VM after failover, changes aren't captured by Site Recovery, and failback will be to the original VM size.<br/><br/> If resizing to >=4 TB, note Azure guidance on disk caching [here](../virtual-machines/premium-storage-performance.md). +Resize disk on replicated VM | Resizing up on the source VM is supported. Resizing down on the source VM isn't supported. Resizing should be performed before failover. No need to disable/re-enable replication.<br/><br/> If you change the source VM after failover, the changes aren't captured.<br/><br/> If you change the disk size on the Azure VM after failover, changes aren't captured by Site Recovery, and failback will be to the original VM size.<br/><br/> If resizing to >=4 TB, note Azure guidance on disk caching [here](../virtual-machines/premium-storage-performance.md). Add a disk to a replicated VM | Supported Offline changes to protected disks | Disconnecting disks and making offline modifications to them require triggering a full resync.-Disk caching | Disk Caching is not supported for disks 4 TiB and larger. If multiple disks are attached to your VM, each disk that is smaller than 4 TiB will support caching. Changing the cache setting of an Azure disk detaches and re-attaches the target disk. If it is the operating system disk, the VM is restarted. Stop all applications/services that might be affected by this disruption before changing the disk cache setting. Not following those recommendations could lead to data corruption. +Disk caching | Disk Caching isn't supported for disks 4 TiB and larger. If multiple disks are attached to your VM, each disk that is smaller than 4 TiB will support caching. Changing the cache setting of an Azure disk detaches and re-attaches the target disk. If it's the operating system disk, the VM is restarted. Stop all applications/services that might be affected by this disruption before changing the disk cache setting. Not following those recommendations could lead to data corruption. ## Replicated machines - storage Data disk - standard storage account | Supported | Data disk - premium storage account | Supported | If a VM has disks spread across premium and standard storage accounts, you can select a different target storage account for each disk, to ensure you have the same storage configuration in the target region. Managed disk - standard | Supported in Azure regions in which Azure Site Recovery is supported. | Managed disk - premium | Supported in Azure regions in which Azure Site Recovery is supported. |-Disk subscription limits | Up to 3000 protected disks per Subscription | Ensure that the Source or Target subscription does not have more than 3000 Azure Site Recovery-protected Disks (Both Data and OS). +Disk subscription limits | Up to 3000 protected disks per Subscription | Ensure that the Source or Target subscription doesn't have more than 3000 Azure Site Recovery-protected Disks (Both Data and OS). Standard SSD | Supported | Redundancy | LRS, ZRS, and GRS are supported. Cool and hot storage | Not supported | VM disks aren't supported on cool and hot storage Storage Spaces | Supported | NVMe storage interface | Not supported-Encryption at host | Not Supported | The VM will get protected, but the failed over VM will not have Encryption at host enabled. 
[See detailed information](../virtual-machines/disks-enable-host-based-encryption-portal.md) to create a VM with end-to-end encryption using Encryption at host. +Encryption at host | Not Supported | The VM will get protected, but the failed over VM won't have Encryption at host enabled. [See detailed information](../virtual-machines/disks-enable-host-based-encryption-portal.md) to create a VM with end-to-end encryption using Encryption at host. Encryption at rest (SSE) | Supported | SSE is the default setting on storage accounts. Encryption at rest (CMK) | Supported | Both Software and HSM keys are supported for managed disks Double Encryption at rest | Supported | Learn more on supported regions for [Windows](../virtual-machines/disk-encryption.md) and [Linux](../virtual-machines/disk-encryption.md) FIPS encryption | Not supported-Azure Disk Encryption (ADE) for Windows OS | Supported for VMs with managed disks. | VMs using unmanaged disks are not supported. <br/><br/> HSM-protected keys are not supported. <br/><br/> Encryption of individual volumes on a single disk is not supported. | -Azure Disk Encryption (ADE) for Linux OS | Supported for VMs with managed disks. | VMs using unmanaged disks are not supported. <br/><br/> HSM-protected keys are not supported. <br/><br/> Encryption of individual volumes on a single disk is not supported. <br><br> Known issue with enabling replication. [Learn more.](./azure-to-azure-troubleshoot-errors.md#enable-protection-failed-as-the-installer-is-unable-to-find-the-root-disk-error-code-151137) | +Azure Disk Encryption (ADE) for Windows OS | Supported for VMs with managed disks. | VMs using unmanaged disks aren't supported. <br/><br/> HSM-protected keys aren't supported. <br/><br/> Encryption of individual volumes on a single disk isn't supported. | +Azure Disk Encryption (ADE) for Linux OS | Supported for VMs with managed disks. | VMs using unmanaged disks aren't supported. <br/><br/> HSM-protected keys aren't supported. <br/><br/> Encryption of individual volumes on a single disk isn't supported. <br><br> Known issue with enabling replication. [Learn more.](./azure-to-azure-troubleshoot-errors.md#enable-protection-failed-as-the-installer-is-unable-to-find-the-root-disk-error-code-151137) | SAS key rotation | Not Supported | If the SAS key for storage accounts is rotated, customer needs to disable and re-enable replication. | Host Caching | Supported-Hot add | Supported | Enabling replication for a data disk that you add to a replicated Azure VM is supported for VMs that use managed disks. <br/><br/> Only one disk can be hot added to an Azure VM at a time. Parallel addition of multiple disks is not supported. | +Hot add | Supported | Enabling replication for a data disk that you add to a replicated Azure VM is supported for VMs that use managed disks. <br/><br/> Only one disk can be hot added to an Azure VM at a time. Parallel addition of multiple disks isn't supported. | Hot remove disk | Not supported | If you remove data disk on the VM, you need to disable replication and enable replication again for the VM. Exclude disk | Support. You must use [PowerShell](azure-to-azure-exclude-disks.md) to configure. | Temporary disks are excluded by default.-Storage Spaces Direct | Supported for crash consistent recovery points. Application consistent recovery points are not supported. | -Scale-out File Server | Supported for crash consistent recovery points. Application consistent recovery points are not supported. 
| -DRBD | Disks that are part of a DRBD setup are not supported. | +Storage Spaces Direct | Supported for crash consistent recovery points. Application consistent recovery points aren't supported. | +Scale-out File Server | Supported for crash consistent recovery points. Application consistent recovery points aren't supported. | +DRBD | Disks that are part of a DRBD setup aren't supported. | LRS | Supported | GRS | Supported | RA-GRS | Supported | ZRS | Supported | -Cool and Hot Storage | Not supported | Virtual machine disks are not supported on cool and hot storage +Cool and Hot Storage | Not supported | Virtual machine disks aren't supported on cool and hot storage Azure Storage firewalls for virtual networks | Supported | If restrict virtual network access to storage accounts, enable [Allow trusted Microsoft services](../storage/common/storage-network-security.md#exceptions). General purpose V2 storage accounts (Both Hot and Cool tier) | Supported | Transaction costs increase substantially compared to General purpose V1 storage accounts Generation 2 (UEFI boot) | Supported Ultra Disks | Not supported Secure transfer option | Supported Write accelerator enabled disks | Not supported Tags | Supported | User-generated tags are replicated every 24 hours.-Soft delete | Not supported | Soft delete is not supported because once it is enabled on a storage account, it increases cost. Azure Site Recovery performs very frequent creates/deletes of log files while replicating causing costs to increase. -iSCSI disks | Not supported | Azure Site Recovery may be used to migrate or failover iSCSI disks into Azure. However, iSCSI disks are not supported for Azure to Azure replication and failover/failback. +Soft delete | Not supported | Soft delete isn't supported because once it's enabled on a storage account, it increases cost. Azure Site Recovery performs very frequent creates/deletes of log files while replicating causing costs to increase. +iSCSI disks | Not supported | Azure Site Recovery may be used to migrate or failover iSCSI disks into Azure. However, iSCSI disks aren't supported for Azure to Azure replication and failover/failback. >[!IMPORTANT] > To avoid performance issues, make sure that you follow VM disk scalability and performance targets for [managed disks](../virtual-machines/disks-scalability-targets.md). If you use default settings, Site Recovery creates the required disks and storage accounts, based on the source configuration. If you customize and select your own settings,follow the disk scalability and performance targets for your source VMs. Premium P20 or P30 or P40 or P50 disk | 16 KB or greater |20 MB/s | 1684 GB per **Setting** | **Support** | **Details** | | -NIC | Maximum number supported for a specific Azure VM size | NICs are created when the VM is created during failover.<br/><br/> The number of NICs on the failover VM depends on the number of NICs on the source VM when replication was enabled. If you add or remove a NIC after enabling replication, it doesn't impact the number of NICs on the replicated VM after failover. <br/><br/> The order of NICs after failover is not guaranteed to be the same as the original order. <br/><br/> You can rename NICs in the target region based on your organization's naming conventions. NIC renaming is supported using PowerShell. -Internet Load Balancer | Not supported | You can set up public/internet load balancers in the primary region. However, public/internet load balancers are not supported by Azure Site Recovery in the DR region. 
+NIC | Maximum number supported for a specific Azure VM size | NICs are created when the VM is created during failover.<br/><br/> The number of NICs on the failover VM depends on the number of NICs on the source VM when replication was enabled. If you add or remove a NIC after enabling replication, it doesn't impact the number of NICs on the replicated VM after failover. <br/><br/> The order of NICs after failover isn't guaranteed to be the same as the original order. <br/><br/> You can rename NICs in the target region based on your organization's naming conventions. NIC renaming is supported using PowerShell. +Internet Load Balancer | Not supported | You can set up public/internet load balancers in the primary region. However, public/internet load balancers aren't supported by Azure Site Recovery in the DR region. Internal Load balancer | Supported | Associate the preconfigured load balancer using an Azure Automation script in a recovery plan. Public IP address | Supported | Associate an existing public IP address with the NIC. Or, create a public IP address and associate it with the NIC using an Azure Automation script in a recovery plan. NSG on NIC | Supported | Associate the NSG with the NIC using an Azure Automation script in a recovery plan. Azure DNS | Supported | Custom DNS | Supported | Unauthenticated proxy | Supported | [Learn more](./azure-to-azure-about-networking.md) Authenticated Proxy | Not supported | If the VM is using an authenticated proxy for outbound connectivity, it can't be replicated using Azure Site Recovery.-VPN site-to-site connection to on-premises<br/><br/>(with or without ExpressRoute)| Supported | Ensure that the UDRs and NSGs are configured in such a way that the Site Recovery traffic is not routed to on-premises. [Learn more](./azure-to-azure-about-networking.md) +VPN site-to-site connection to on-premises<br/><br/>(with or without ExpressRoute)| Supported | Ensure that the UDRs and NSGs are configured in such a way that the Site Recovery traffic isn't routed to on-premises. [Learn more](./azure-to-azure-about-networking.md) VNET to VNET connection | Supported | [Learn more](./azure-to-azure-about-networking.md) Virtual Network Service Endpoints | Supported | If you are restricting the virtual network access to storage accounts, ensure that the trusted Microsoft services are allowed access to the storage account. Accelerated networking | Supported | Accelerated networking can be enabled on the recovery VM only if it is enabled on the source VM also. [Learn more](azure-vm-disaster-recovery-with-accelerated-networking.md).-Palo Alto Network Appliance | Not supported | With third-party appliances, there are often restrictions imposed by the provider inside the Virtual Machine. Azure Site Recovery needs agent, extensions, and outbound connectivity to be available. But the appliance does not let any outbound activity to be configured inside the Virtual Machine. +Palo Alto Network Appliance | Not supported | With third-party appliances, there are often restrictions imposed by the provider inside the Virtual Machine. Azure Site Recovery needs agent, extensions, and outbound connectivity to be available. But the appliance doesn't let any outbound activity to be configured inside the Virtual Machine. IPv6 | Not supported | Mixed configurations that include both IPv4 and IPv6 are also not supported. Free up the subnet of the IPv6 range before any Site Recovery operation. 
Private link access to Site Recovery service | Supported | [Learn more](azure-to-azure-how-to-enable-replication-private-endpoints.md) Tags | Supported | User-generated tags on NICs are replicated every 24 hours. ## Next steps - Read [networking guidance](./azure-to-azure-about-networking.md) for replicating Azure VMs.-- Deploy disaster recovery by [replicating Azure VMs](./azure-to-azure-quickstart.md).+- Deploy disaster recovery by [replicating Azure VMs](./azure-to-azure-quickstart.md). |
site-recovery | Monitor Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitor-log-analytics.md | Title: Monitor Azure Site Recovery with Azure Monitor Logs description: Learn how to monitor Azure Site Recovery with Azure Monitor Logs (Log Analytics) Previously updated : 11/15/2019 Last updated : 02/07/2023 -Azure Monitor Logs provide a log data platform that collects activity and resource logs, along with other monitoring data. Within Azure Monitor Logs, you use Log Analytics to write and test log queries, and to interactively analyze log data. You can visualize and query log results, and configure alerts to take actions based on monitored data. +Azure Monitor Logs provide a log data platform that collects activity and resource logs, along with other monitoring data. Within Azure Monitor Logs, you use Log Analytics to write and test log queries and interactively analyze log data. You can visualize and query log results, and configure alerts to take actions based on monitored data. For Site Recovery, you can use Azure Monitor Logs to help you do the following: - **Monitor Site Recovery health and status**. For example, you can monitor replication health, test failover status, Site Recovery events, recovery point objectives (RPOs) for protected machines, and disk/data change rates. - **Set up alerts for Site Recovery**. For example, you can configure alerts for machine health, test failover status, or Site Recovery job status. -Using Azure Monitor Logs with Site Recovery is supported for **Azure to Azure** replication, and **VMware VM/physical server to Azure** replication. +Using Azure Monitor Logs with Site Recovery is supported for **Azure to Azure** replication and **VMware VM/physical server to Azure** replication. > [!NOTE]-> To get the churn data logs and upload rate logs for VMware and physical machines, you need to install a Microsoft monitoring agent on the Process Server. This agent sends the logs of the replicating machines to the workspace. This capability is available only for 9.30 mobility agent version onwards. +> To get the churn data logs and upload rate logs for VMware and physical machines, you need to install a Microsoft monitoring agent on the Process Server. This agent sends the logs of the replicating machines to the workspace. This capability is available only for the 9.30 mobility agent version onwards. -## Before you start +## Prerequisites Here's what you need: -- At least one machine protected in a Recovery Services vault.+- At least one machine is protected in a Recovery Services vault. - A Log Analytics workspace to store Site Recovery logs. [Learn about](../azure-monitor/logs/quick-create-workspace.md) setting up a workspace. - A basic understanding of how to write, run, and analyze log queries in Log Analytics. [Learn more](../azure-monitor/logs/log-analytics-tutorial.md). We recommend that you review [common monitoring questions](monitoring-common-que  2. In **Diagnostic settings**, specify a name, and check the box **Send to Log Analytics**.-3. Select the Azure Monitor Logs subscription, and the Log Analytics workspace. +3. Select the Azure Monitor Logs subscription and the Log Analytics workspace. 4. Select **Azure Diagnostics** in the toggle. 5. From the log list, select all the logs with the prefix **AzureSiteRecovery**. Then select **OK**. You can capture the data churn rate information and source data upload rate info 1. Go to the Log Analytics workspace and select **Advanced Settings**. 2. 
Select **Connected Sources** page and further select **Windows Servers**.-3. Download the Windows Agent (64 bit) on the Process Server. +3. Download the Windows Agent (64-bit) on the Process Server. 4. [Obtain the workspace ID and key](../azure-monitor/agents/agent-windows.md#workspace-id-and-key) 5. [Configure agent to use TLS 1.2](../azure-monitor/agents/agent-windows.md#configure-agent-to-use-tls-12) 6. [Complete the agent installation](../azure-monitor/agents/agent-windows.md#install-the-agent) by providing the obtained workspace ID and key.-7. Once the installation is complete, go to Log Analytics workspace and select **Advanced Settings**. Go to the **Data** page and select **Windows Performance Counters**. -8. Select **'+'** to add the following two counters with sample interval of 300 seconds: +7. Once the installation is complete, go to Log Analytics workspace and select **Legacy agents management**. Go to the **Data** page and select **Windows Performance Counters**. +8. Select **'+'** to add the following two counters with a sample interval of 300 seconds: - ASRAnalytics(*)\SourceVmChurnRate - ASRAnalytics(*)\SourceVmThrpRate Category contains "Upload", "UploadRate", "none")  > [!Note] > Ensure you set up the monitoring agent on the Process Server to fetch these logs. Refer [steps to configure monitoring agent](#configure-microsoft-monitoring-agent-on-the-process-server-to-send-churn-and-upload-rate-logs). -This query plots a trend graph for a specific disk, **disk0**, of a replicated item, **win-9r7sfh9qlru**, which represents the data change rate (Write Bytes per Second) and data upload rate. You can find the disk name on **Disks** blade of the replicated item in the recovery services vault. Instance name to be used in the query is DNS name of the machine followed by _ and disk name as in this example. +This query plots a trend graph for a specific disk, **disk0**, of a replicated item, **win-9r7sfh9qlru**, which represents the data change rate (Write Bytes per Second) and data upload rate. You can find the disk name on the **Disks** blade of the replicated item in the recovery services vault. The instance name to be used in the query is the DNS name of the machine followed by _ and the disk name as in this example. ``` Perf Process Server pushes this data every 5 minutes to the Log Analytics workspace. ### Query disaster recovery summary (Azure to Azure) -This query plots a summary table for Azure VMs replicated to a secondary Azure region. It shows VM name, replication and protection status, RPO, test failover status, Mobility agent version, any active replication errors, and the source location. +This query plots a summary table for Azure VMs replicated to a secondary Azure region. It shows the VM name, replication, and protection status, RPO, test failover status, Mobility agent version, any active replication errors, and the source location. ``` AzureDiagnostics  AzureDiagnostics  ### Query disaster recovery summary (VMware/physical servers) -This query plots a summary table for VMware VMs and physical servers replicated to Azure. It shows machine name, replication and protection status, RPO, test failover status, Mobility agent version, any active replication errors, and the relevant process server. +This query plots a summary table for VMware VMs and physical servers replicated to Azure. It shows the machine name, replication and protection status, RPO, test failover status, Mobility agent version, any active replication errors, and the relevant process server. 
``` AzureDiagnostics  AzureDiagnostics   | summarize hint.strategy=partitioned arg_max(TimeGenerated, *) by name_s   | summarize count() ```-For the alert, set **Threshold value** to 20. +For the alert, set **Threshold value** to `20`. ### Single machine in a critical state AzureDiagnostics   | summarize hint.strategy=partitioned arg_max(TimeGenerated, *) by name_s   | summarize count()  ```-For the alert, set **Threshold value** to 1. +For the alert, set **Threshold value** to `1`. ### Multiple machines exceed RPO AzureDiagnostics   | project name_s , rpoInSeconds_d   | summarize count()  ```-For the alert, set **Threshold value** to 20. +For the alert, set **Threshold value** to `20`. ### Single machine exceeds RPO AzureDiagnostics   | project name_s , rpoInSeconds_d   | summarize count()  ```-For the alert, set **Threshold value** to 1. +For the alert, set **Threshold value** to `1`. ### Test failover for multiple machines exceeds 90 days AzureDiagnostics  | summarize hint.strategy=partitioned arg_max(TimeGenerated, *) by name_s   | summarize count()  ```-For the alert, set **Threshold value** to 20. +For the alert, set **Threshold value** to `20`. -### Test failover for single machine exceeds 90 days +### Test failover for a single machine exceeds 90 days Set up an alert if the last successful test failover for a specific VM was more than 90 days ago. ``` AzureDiagnostics  | summarize hint.strategy=partitioned arg_max(TimeGenerated, *) by name_s   | summarize count()  ```-For the alert, set **Threshold value** to 1. +For the alert, set **Threshold value** to `1`. ### Site Recovery job fails |
site-recovery | Vmware Physical Manage Mobility Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md | |
stream-analytics | Capture Event Hub Data Delta Lake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/capture-event-hub-data-delta-lake.md | Use the following steps to configure a Stream Analytics job to capture data in A 1. On the **Azure Data Lake Storage Gen2** configuration page, follow these steps: 1. Select the subscription, storage account name, and container from the drop-down menu. 1. Once the subscription is selected, the authentication method and storage account key should be automatically filled in. - 1. For **Delta table path**, it's used to specify the location and name of your Delta Lake table stored in Azure Data Lake Storage Gen2. You can choose to use one or more path segments to define the path to the delta table and the delta table name. To learn more, see to [Write to Delta Lake table](./write-to-delta-lake.md). + 1. Use **Delta table path** to specify the location and name of your Delta Lake table stored in Azure Data Lake Storage Gen2. You can choose to use one or more path segments to define the path to the delta table and the delta table name. To learn more, see [Write to Delta Lake table (Public Preview)](./write-to-delta-lake.md). 1. Select **Connect**. :::image type="content" source="./media/capture-event-hub-data-delta-lake/blob-configuration.png" alt-text="First screenshot showing the Blob window where you edit a blob's connection configuration." lightbox="./media/capture-event-hub-data-delta-lake/blob-configuration.png" ::: |
stream-analytics | Job Diagram With Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/job-diagram-with-metrics.md | The processor diagram in physical job diagram visualizes the processor topology | | | | **Input** or **Output** | This processor is used for reading input or writing output data streams. | | **ReferenceData** | This processor is used for fetching the reference data. |- | **Computing** | This processor is used for processing the stream data according to the query logic, for example, aggregating, filtering, grouping with window, etc.. To learn more about the stream data computation query functions, see [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference). | + | **Computing** | This processor is used for processing the stream data according to the query logic, for example, aggregating, filtering, grouping with window, etc. To learn more about the stream data computation query functions, see [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference). | | **MarshallerUpstream** and **MarshallerDownstream** | When there's stream data interaction among streaming nodes, there will be two marshaller processors: 1). **MarshallerUpstream** for sending the data in the upstream streaming node and 2). **MarshallerDownstream** for receiving the data in the downstream streaming node. |- | **Merger** | This processor is to receive the crossing-partition stream data, which were outputted from several upstream streaming nodes. The best practice to optimize job performance is to update query to remove the merger processor to make the job become parallel since the merger processor is the bottleneck of the job. The job diagram simulator feature within VSCode ASA extension can help you simulating your query locally when you optimizing your job query. To learn more, see [Optimize query using job diagram simulator (preview)](./optimize-query-using-job-diagram-simulator.md). | + | **Merger** | This processor receives cross-partition stream data that's output from several upstream streaming nodes. Because the merger processor is the bottleneck of the job, the best practice for optimizing job performance is to update the query so that the merger processor is removed and the job becomes parallel. The job diagram simulator feature in the Visual Studio Code ASA extension can help you simulate your query locally while you optimize it. To learn more, see [Optimize query using job diagram simulator (preview)](./optimize-query-using-job-diagram-simulator.md). | | The processor diagram in physical job diagram visualizes the processor topology * **Adapter type**: it shows the type of the input or output adapter. Stream Analytics supports various input sources and output destinations. Each input source or output destination has a dedicated adapter type. It's only available in input processor and output processor. For example, "InputBlob" represents the ADLS Gen2 input where the input processor receives the data from; "OutputDocumentDb" represents the Cosmos DB output where the output processor outputs the data to. 
- To learn more details of the input and output types, see [Azure Stream Analytics inputs overview](./stream-analytics-define-inputs.md), and [Azure Stream Analytics outputs overview](./stream-analytics-define-outputs.md) + To learn more about the input and output types, see [Azure Stream Analytics inputs overview](./stream-analytics-define-inputs.md) and [Azure Stream Analytics outputs overview](./stream-analytics-define-outputs.md). * **Partition IDs**: it shows which partition IDs' data are being processed by this processor. It's only available in input processor and output processor. * **Serializer type**: it shows the serialization type. Stream Analytics supports several [serialization types](./stream-analytics-define-inputs.md). It's only available in input processor and output processor. |
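To make the parallelization advice above concrete, here's a minimal sketch of an embarrassingly parallel Stream Analytics query: the grouping includes the input partition key, so each streaming node can process its own partitions and no merger processor is needed. The input, output, and field names (`[eventhub-input]`, `[adls-output]`, `EntryTime`, `TollBoothId`, `PartitionId`) are hypothetical placeholders, not names taken from the article above.

```SQL
-- Hypothetical sketch: keep the query aligned with the input partitions so the
-- job stays parallel and avoids the merger bottleneck described above.
SELECT
    TollBoothId,
    PartitionId,
    COUNT(*) AS VehicleCount,
    System.Timestamp() AS WindowEnd
INTO
    [adls-output]
FROM
    [eventhub-input] TIMESTAMP BY EntryTime
    PARTITION BY PartitionId
GROUP BY
    TollBoothId,
    PartitionId,
    TumblingWindow(minute, 3)
```

Because `PartitionId` flows from the input through the `GROUP BY`, the data never has to cross partitions, which is exactly the condition that removes the merger processor from the diagram.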
stream-analytics | No Code Build Power Bi Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-build-power-bi-dashboard.md | + + Title: Build real-time dashboard with Power BI dataset produced from Stream Analytics no code editor +description: Learn how to use the no code editor to easily create a Stream Analytics job to produce the Power BI dataset, and use it to build the real-time dashboard. It continuously reads from Event Hubs, and outputs the data into Power BI dataset to build the real-time Power BI dashboard. +++++ Last updated : 2/17/2023+++# Build real-time dashboard with Power BI dataset produced from Stream Analytics no code editor ++This article describes how you can use the no code editor to easily create a Stream Analytics job to produce processed data into Power BI dataset. It continuously reads from your Event Hubs, processes and outputs the data into Power BI dataset to build the real-time Power BI dashboard. ++## Prerequisites ++- Your Azure Event Hubs resources must be publicly accessible and not be behind a firewall or secured in an Azure Virtual Network +- You should have an existing Power BI workspace, and you have the permission to create dataset there. +- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format. ++## Develop a Stream Analytics job to create Power BI dataset with selected data ++1. In the [Azure portal](https://portal.azure.com), locate and select the Azure Event Hubs instance. +1. Select **Features** > **Process Data** and then select **Start** on the **Build the real-time data dashboard with Power BI** card. ++ :::image type="content" source="./media/no-code-build-power-bi-dashboard/event-hub-process-data-templates.png" alt-text="Screenshot showing the Filter and ingest to ADLS Gen2 card where you select Start." lightbox="./media/no-code-build-power-bi-dashboard/event-hub-process-data-templates.png" ::: ++1. Enter a name for the Stream Analytics job, then select **Create**. + + :::image type="content" source="./media/no-code-build-power-bi-dashboard/create-new-stream-analytics-job.png" alt-text="Screenshot showing where to enter a job name." lightbox="./media/no-code-build-power-bi-dashboard/create-new-stream-analytics-job.png" ::: ++1. Specify the **Serialization type** of your data in the Event Hubs window and the **Authentication method** that the job will use it to connect to the Event Hubs. Then select **Connect**. + :::image type="content" source="./media/no-code-build-power-bi-dashboard/event-hub-configuration.png" alt-text="Screenshot showing the Event Hubs connection configuration." lightbox="./media/no-code-build-power-bi-dashboard/event-hub-configuration.png" ::: ++1. When the connection is established successfully and you have data streams flowing into your Event Hubs instance, you immediately see two things: + - Fields that are present in the input data. You can choose **Add field** or select the three dot symbol next to a field to remove, rename, or change its type. + :::image type="content" source="./media/no-code-build-power-bi-dashboard/no-code-schema.png" alt-text="Screenshot showing the Event Hubs field list where you can remove, rename, or change the field type." lightbox="./media/no-code-build-power-bi-dashboard/no-code-schema.png" ::: + - A live sample of incoming data in the **Data preview** table under the diagram view. It automatically refreshes periodically. You can select **Pause streaming preview** to see a static view of the sample input data. 
+ :::image type="content" source="./media/no-code-build-power-bi-dashboard/no-code-sample-input.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/no-code-build-power-bi-dashboard/no-code-sample-input.png" ::: +++1. Select the **Manage** tile. In the **Manage fields** configuration panel, choose the fields you want to output. If you want to add all the fields, select **Add all fields**. ++ :::image type="content" source="./media/no-code-build-power-bi-dashboard/manage-fields-configuration.png" alt-text="Screenshot that shows the manage field operator configuration." lightbox="./media/no-code-build-power-bi-dashboard/manage-fields-configuration.png" ::: ++1. Select the **Power BI** tile. In the **Power BI** configuration panel, fill in the needed parameters and connect. + - **Dataset**: the Power BI destination where the Azure Stream Analytics job output data is written. + - **Table**: the table name in the dataset that the output data goes to. ++ :::image type="content" source="./media/no-code-build-power-bi-dashboard/no-code-pbi-configuration.png" alt-text="Screenshot that shows the Power BI output configuration." lightbox="./media/no-code-build-power-bi-dashboard/no-code-pbi-configuration.png" ::: ++1. Optionally, select **Get static preview/Refresh static preview** to see the data preview that will be ingested in the event hub. + :::image type="content" source="./media/no-code-build-power-bi-dashboard/no-code-output-static-preview.png" alt-text="Screenshot showing the Get static preview/Refresh static preview option." lightbox="./media/no-code-build-power-bi-dashboard/no-code-output-static-preview.png" ::: ++1. Select **Save** and then select **Start** to start the Stream Analytics job. + :::image type="content" source="./media/no-code-build-power-bi-dashboard/no-code-save-start.png" alt-text="Screenshot showing the Save and Start options." lightbox="./media/no-code-build-power-bi-dashboard/no-code-save-start.png" ::: ++1. To start the job, specify: + - The number of **Streaming Units (SUs)** the job runs with. SUs represent the amount of compute and memory allocated to the job. We recommend that you start with three and then adjust as needed. + - **Output data error handling**: It allows you to specify the behavior you want when a job's output to your destination fails due to data errors. By default, your job retries until the write operation succeeds. You can also choose to drop such output events. + :::image type="content" source="./media/no-code-build-power-bi-dashboard/no-code-start-job.png" alt-text="Screenshot showing the Start Stream Analytics job options where you can change the output time, set the number of streaming units, and select the Output data error handling options." lightbox="./media/no-code-build-power-bi-dashboard/no-code-start-job.png" ::: ++1. After you select **Start**, the job starts running within two minutes, and the metrics open in the tab section below. ++ :::image type="content" source="./media/no-code-build-power-bi-dashboard/job-metrics-after-started.png" alt-text="Screenshot that shows the job metrics after it's started." lightbox="./media/no-code-build-power-bi-dashboard/job-metrics-after-started.png" ::: ++ You can also see the job under the Process Data section on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it or stop and restart it, as needed. 
++ :::image type="content" source="./media/no-code-build-power-bi-dashboard/no-code-list-jobs.png" alt-text="Screenshot of the Stream Analytics jobs tab where you view the running jobs status." lightbox="./media/no-code-build-power-bi-dashboard/no-code-list-jobs.png" ::: ++## Build the real-time dashboard in Power BI +Now you have the Azure Stream Analytics job running, and the data is continuously written into the table in the Power BI dataset you've configured. You can now create the real-time dashboard in the Power BI workspace. ++1. Go to the Power BI workspace that you configured in the Power BI output tile above, select **+ New** in the top left corner, and then choose **Dashboard** and give the new dashboard a name. + :::image type="content" source="./media/no-code-build-power-bi-dashboard/pbi-dashboard-creation.png" alt-text="Screenshot of the pbi dashboard creation." lightbox="./media/no-code-build-power-bi-dashboard/pbi-dashboard-creation.png" ::: +2. Once the new dashboard is created, you're taken to it. Select **Edit**, and choose **+ Add a tile** in the top menu bar. A pane opens on the right. Select **Custom Streaming Data** to go to the next page. + :::image type="content" source="./media/no-code-build-power-bi-dashboard/pbi-dashboard-add-tile.png" alt-text="Screenshot of the pbi dashboard adding tile." lightbox="./media/no-code-build-power-bi-dashboard/pbi-dashboard-add-tile.png" ::: +3. Select the streaming dataset (for example **nocode-pbi-demo-xujx**) that you configured in the Power BI node, and go to the next page. + :::image type="content" source="./media/no-code-build-power-bi-dashboard/pbi-dashboard-add-tile-select-dataset.png" alt-text="Screenshot of the pbi dashboard adding tile with selected dataset." lightbox="./media/no-code-build-power-bi-dashboard/pbi-dashboard-add-tile-select-dataset.png" ::: +4. Fill in the tile details, and follow the next step to complete the tile configuration. + :::image type="content" source="./media/no-code-build-power-bi-dashboard/pbi-dashboard-add-tile-details.png" alt-text="Screenshot of the pbi dashboard adding tile with configured details." lightbox="./media/no-code-build-power-bi-dashboard/pbi-dashboard-add-tile-details.png" ::: +5. Then, you can adjust its size and get the continuously updated dashboard as shown below. + :::image type="content" source="./media/no-code-build-power-bi-dashboard/pbi-dashboard-report.png" alt-text="Screenshot of the pbi dashboard report." lightbox="./media/no-code-build-power-bi-dashboard/pbi-dashboard-report.png" ::: +++## Next steps ++Learn more about Azure Stream Analytics and how to monitor the job you've created. ++* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md) +* [Monitor Stream Analytics job with Azure portal](stream-analytics-monitoring.md) +* [Azure Stream Analytics no code editor](./no-code-stream-processing.md) |
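For reference, the job that the steps above configure corresponds roughly to a simple pass-through query in the Stream Analytics query language: selected fields are read from the Event Hubs input and written to the Power BI dataset output. This is only an illustrative sketch; the input, output, and field names (`[eventhub-input]`, `[powerbi-dataset-output]`, `DeviceId`, `Temperature`) are hypothetical, and the no-code editor generates the actual job for you.

```SQL
-- Illustrative sketch of the no-code pipeline above (hypothetical names):
-- forward the managed fields from the Event Hubs input to the Power BI output.
SELECT
    DeviceId,
    Temperature,
    EventEnqueuedUtcTime
INTO
    [powerbi-dataset-output]
FROM
    [eventhub-input]
```

`EventEnqueuedUtcTime` is the enqueue-time field that Stream Analytics exposes for Event Hubs inputs; carrying a timestamp like this through is handy when you build time-based visuals on the dashboard.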
stream-analytics | No Code Power Bi Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-power-bi-tutorial.md | Title: Build near real-time dashboard with Azure Synapse Analytics and Power BI -description: Use no code editor to compute aggregations and write to Azure Synapse Analytics and build near-real time dashboards using Power BI + Title: Build real-time dashboard with Azure Synapse Analytics and Power BI +description: Use no code editor to compute aggregations and write to Azure Synapse Analytics and build real-time dashboards using Power BI Previously updated : 05/25/2022 Last updated : 02/17/2023 -# Build real time Power BI dashboards with Stream Analytics no code editor +# Build real-time Power BI dashboards with Stream Analytics no code editor This tutorial shows how you can use the Stream Analytics no code editor to compute aggregates on real time data streams and store it in Azure Synapse Analytics. In this tutorial, you learn how to: In this tutorial, you learn how to: Before you start, make sure you've completed the following steps: -* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/). -* Deploy the TollApp event generator to Azure, use this link to [Deploy TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json). Set the 'interval' parameter to 1. And use a new resource group for this step. -* Create an [Azure Synapse Analytics workspace](../synapse-analytics/get-started-create-workspace.md) with a [Dedicated SQL pool](../synapse-analytics/get-started-analyze-sql-pool.md#create-a-dedicated-sql-pool). -* Create a table named `carsummary` using your Dedicated SQL pool. You can do it by running the following SQL script: +1. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/). +2. Deploy the TollApp event generator to Azure, use this link to [Deploy TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json). Set the 'interval' parameter to 1. And use a new resource group for this step. +3. Create an [Azure Synapse Analytics workspace](../synapse-analytics/get-started-create-workspace.md) with a [Dedicated SQL pool](../synapse-analytics/get-started-analyze-sql-pool.md#create-a-dedicated-sql-pool). + > [!NOTE] + > If you'd like to build the real-time Power BI dashboard directly without capturing the data into database, you can skip step#3 and 4, then go to this guide to [<u>build real-time dashboard with Power BI dataset produced by Stream Analytics job</u>](./no-code-build-power-bi-dashboard.md). ++4. Create a table named `carsummary` using your Dedicated SQL pool. You can do it by running the following SQL script: ```SQL CREATE TABLE carsummary ( Before you start, make sure you've completed the following steps: ) WITH ( CLUSTERED COLUMNSTORE INDEX ) ; ``` ++ ## Use no code editor to create a Stream Analytics job 1. Locate the Resource Group in which the TollApp event generator was deployed. 2. Select the Azure Event Hubs **namespace**. Before you start, make sure you've completed the following steps: ## Create a Power BI visualization 1. 
Download the latest version of [Power BI desktop](https://powerbi.microsoft.com/desktop).-2. Use the Power BI connector for Azure Synapse SQL to connect to your database. +2. Use the Power BI connector for Azure Synapse SQL to connect to your database with **DirectQuery**. 3. Use this query to fetch data from your database: ```SQL SELECT [Make],[CarCount],[times] Before you start, make sure you've completed the following steps: * X-axis as times * Y-axis as CarCount * Legend as Make- You'll then see a chart that can be published. You can configure [automatic page refresh](/power-bi/create-reports/desktop-automatic-page-refresh#authoring-reports-with-automatic-page-refresh-in-power-bi-desktop) and set it to 3 minutes to get a near-real time view. + You'll then see a chart that can be published. You can configure [automatic page refresh](/power-bi/create-reports/desktop-automatic-page-refresh#authoring-reports-with-automatic-page-refresh-in-power-bi-desktop) and set it to 3 minutes to get a real-time view. [](./media/stream-analytics-no-code/no-code-power-bi-real-time-dashboard.png#lightbox) +## More options ++Besides Azure Synapse SQL, you can also use Azure SQL Database as the no-code editor output to receive the streaming data. You can then use the Power BI connector to connect to that SQL database with **DirectQuery** as well to build the real-time dashboard. ++This is another good way to build a real-time dashboard from your streaming data. For more information about the SQL Database output, see [Transform and ingest to SQL Database](./no-code-transform-filter-ingest-sql.md). ++ ## Clean up resources 1. Locate your Event Hubs instance and see the list of Stream Analytics jobs under the **Process Data** section. Stop any jobs that are running. 2. Go to the resource group you used while deploying the TollApp event generator. 3. Select **Delete resource group**. Type the name of the resource group to confirm deletion. ## Next steps-In this tutorial, you created a Stream Analytics job using the no code editor to define aggregations and write results to Azure Synapse Analytics. You then used the Power BI to build a near-real time dashboard to see the results produced by the job. +In this tutorial, you created a Stream Analytics job using the no code editor to define aggregations and write results to Azure Synapse Analytics. You then used Power BI to build a real-time dashboard to see the results produced by the job. > [!div class="nextstepaction"] > [No code stream processing with Azure Stream Analytics](https://aka.ms/asanocodeux) |
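The `carsummary` DDL in the tutorial's prerequisites is only partially visible above. A minimal sketch of what the complete script might look like follows; the three columns mirror the fields selected in the Power BI query (`Make`, `CarCount`, `times`), and the data types are assumptions rather than values quoted from the article.

```SQL
-- Hedged sketch of the carsummary table; column types are assumed.
CREATE TABLE carsummary
(
    Make nvarchar(20),
    CarCount int,
    times datetime
)
WITH ( CLUSTERED COLUMNSTORE INDEX ) ;
```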
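Similarly, the DirectQuery step above quotes only the select list. A hedged sketch of the full statement, assuming it reads from the same `carsummary` table and orders rows by the window timestamp:

```SQL
-- Assumed full form of the query used in the DirectQuery step.
SELECT [Make], [CarCount], [times]
FROM [dbo].[carsummary]
ORDER BY [times] ASC;
```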
stream-analytics | No Code Stream Processing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-stream-processing.md | +- [Build real-time dashboard with Power BI dataset](./no-code-build-power-bi-dashboard.md) +- [Capture data from Event Hubs in Delta Lake format (preview)](./capture-event-hub-data-delta-lake.md) - [Filtering and ingesting to Azure Synapse SQL](./filter-ingest-synapse-sql.md) - [Capturing your Event Hubs data in Parquet format in Azure Data Lake Storage Gen2](./capture-event-hub-data-parquet.md) - [Materializing data in Azure Cosmos DB](./no-code-materialize-cosmos-db.md) Under the **Outputs** section on the ribbon, select **ADLS Gen2** as the output When you're connecting to Azure Data Lake Storage Gen2, if you select **Managed Identity** as the authentication mode, then the Storage Blob Data Contributor role will be granted to the managed identity for the Stream Analytics job. To learn more about managed identities for Azure Data Lake Storage Gen2, see [Use managed identities to authenticate your Azure Stream Analytics job to Azure Blob Storage](blob-output-managed-identity.md). -Managed identities eliminate the limitations of user-based authentication methods. These limitations include the need to reauthenticate because of password changes or user token expirations that occur every 90 days. +Managed identities eliminate the limitations of user-based authentication methods. These limitations include the need to reauthenticate because of password changes or user token expirations that occur every 90 days.  +**Exactly once delivery (preview)** is supported for ADLS Gen2 as a no-code editor output. You can enable it in the **Write mode** section of the ADLS Gen2 configuration. For more information about this feature, see [Exactly once delivery (preview) in Azure Data Lake Gen2](./blob-storage-azure-data-lake-gen2-output.md#exactly-once-delivery-public-preview). +++**Write to Delta Lake table (preview)** is supported for ADLS Gen2 as a no-code editor output. You can access this option in the **Serialization** section of the ADLS Gen2 configuration. For more information about this feature, see [Write to Delta Lake table (Public Preview)](./write-to-delta-lake.md). ++ ### Azure Synapse Analytics Azure Stream Analytics jobs can send output to a dedicated SQL pool table in Azure Synapse Analytics and can process throughput rates up to 200 MB per second. Stream Analytics supports the most demanding real-time analytics and hot-path data processing needs for workloads like reporting and dashboarding. |
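Once a job writes to a Delta Lake table in ADLS Gen2, one quick way to spot-check the output is to query it from a Synapse serverless SQL pool, which can read the Delta format with `OPENROWSET`. The storage URL below is a placeholder, not a path from the article.

```SQL
-- Hypothetical spot-check of a Delta Lake table written by the job.
-- Replace the URL with the ADLS Gen2 container and path configured in the output.
SELECT TOP 10 *
FROM OPENROWSET(
        BULK 'https://<storage-account>.dfs.core.windows.net/<container>/<delta-table-path>/',
        FORMAT = 'DELTA'
    ) AS job_output;
```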
stream-analytics | No Code Transform Filter Ingest Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-transform-filter-ingest-sql.md | This article describes how you can use the no code editor to easily create a Str :::image type="content" source="./media/no-code-transform-filter-ingest-sql/group-by-operation-configuration.png" alt-text="Screenshot that shows the group by operator configuration." lightbox="./media/no-code-transform-filter-ingest-sql/group-by-operation-configuration.png" ::: -1. Select the **Manage** tile. In the **Manage fields** configuration panel, choose the fields you want to output to event hub. If you want to add all the fields, click **Add all fields**. +1. Select the **Manage** tile. In the **Manage fields** configuration panel, choose the fields you want to output. If you want to add all the fields, click **Add all fields**. :::image type="content" source="./media/no-code-transform-filter-ingest-sql/manage-fields-configuration.png" alt-text="Screenshot that shows the manage field operator configuration." lightbox="./media/no-code-transform-filter-ingest-sql/manage-fields-configuration.png" ::: |
stream-analytics | Stream Analytics Job Physical Diagram With Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-physical-diagram-with-metrics.md | Job with parallelization is the scalable scenario in Stream Analytics that can p :::image type="content" source="./media/job-physical-diagram-debug/1-non-parallel-job-diagram.png" alt-text="Screenshot that shows the non-parallel job in physical diagram." lightbox="./media/job-physical-diagram-debug/1-non-parallel-job-diagram.png"::: -You may consider to optimize it to parallel job (as example below) by rewriting your query or updating input/output configurations with the **job diagram simulator** within VSCode ASA extension or query editor in Azure portal. To learn more, see [Optimize query using job diagram simulator (preview)](./optimize-query-using-job-diagram-simulator.md). +You may consider optimizing it into a parallel job (as in the example below) by rewriting your query or updating input/output configurations with the **job diagram simulator** within the Visual Studio Code ASA extension or the query editor in the Azure portal. To learn more, see [Optimize query using job diagram simulator (preview)](./optimize-query-using-job-diagram-simulator.md). :::image type="content" source="./media/job-physical-diagram-debug/1-dataskew-overview.png" alt-text="Screenshot that shows data skew view with physical diagram." lightbox="./media/job-physical-diagram-debug/1-dataskew-overview.png"::: |
synapse-analytics | Data Explorer Compare | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/data-explorer-compare.md | We recommend starting with Synapse Data Explorer if you are looking for a unifie | **Business Continuity** | Availability Zones | Optional | Enabled by default where Availability Zones are available | | **SKU** | Compute options | 22+ Azure VM SKUs to choose from | Simplified to Synapse workload types SKUs | | **Integrations** | Built-in ingestion pipelines | Event Hub, Event Grid, IoT Hub | Event Hub, Event Grid, and IoT Hub supported via the Azure portal for non-managed VNet |-| | Spark integration | Azure Data Explorer linked service: Built-in Kusto Spark integration with support for Azure Active Directory pass-though authentication, Synapse Workspace MSI, and Service Principal | Built-in Kusto Spark connector integration with support for Azure Active Directory pass-though authentication, Synapse Workspace MSI, and Service Principal | +| | Spark integration | Azure Data Explorer linked service: Built-in Kusto Spark integration with support for Azure Active Directory pass-through authentication, Synapse Workspace MSI, and Service Principal | Built-in Kusto Spark connector integration with support for Azure Active Directory pass-through authentication, Synapse Workspace MSI, and Service Principal | | | KQL artifacts management | ✗ | Save KQL queries and integrate with Git | | | Metadata sync | ✗ | ✗ | | **Features** | KQL queries | ✓ | ✓ | |
traffic-manager | Quickstart Create Traffic Manager Profile Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-bicep.md | description: This quickstart article describes how to create an Azure Traffic Ma Previously updated : 06/20/2022 Last updated : 02/19/2023 |
traffic-manager | Quickstart Create Traffic Manager Profile Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-template.md | description: This quickstart article describes how to create an Azure Traffic Ma Previously updated : 09/01/2020 Last updated : 02/19/2023 |
update-center | Manage Update Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-update-settings.md | description: The article describes how to manage the update settings for your Wi Previously updated : 04/21/2022 Last updated : 01/30/2023 The article describes how to configure update settings from Update management ce To configure update settings on your machines on a single VM, follow these steps: >[!NOTE]-> You can schedule updates from the Overview or Machines blade in update management center (preview) page or from the selected VM. +> You can schedule updates from the Overview blade or Machines blade on the update management center (preview) page, or from the selected VM. # [From Overview blade](#tab/manage-single-overview) 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In **Update management center**, select **Overview**, select your **Subscription**, and select **Update settings**.-1. In **Change update settings**, select the update settings that you want to change for your machine and select **Next**. +1. In **Change update settings**, select **+Add machine** to select the machine for which you want to change the update settings. +1. In **Select resources**, select the machine and select **Add**. +1. On the **Change update settings** page, you'll see the machine classified by operating system, along with the list of update settings that you can select and apply. :::image type="content" source="./media/manage-update-settings/update-setting-to-change.png" alt-text="Highlighting the Update settings to change option in the Azure portal."::: The following update settings are available for configuration for the selected machine(s): - - **Periodic assessment** - enable periodic **Assessment** to run every 24 hours. + - **Periodic assessment** - Periodic assessment is set to run every 24 hours. You can either enable or disable this setting. - - **Hot patching** - for Azure VMs, you can enable [hot patching](../automanage/automanage-hotpatch.md) on supported Windows Server Azure Edition Virtual Machines (VMs) don't require a reboot after installation. You can use update management center (preview) to install patches with other patch classifications or to schedule patch installation when you require immediate critical patch deployment. + - **Hot patch** - You can enable [hot patching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition Virtual Machines (VMs). Hot patching is a new way to install updates on supported *Windows Server Azure Edition* virtual machines that doesn't require a reboot after installation. You can use update management center (preview) to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable, or reset this setting. - **Patch orchestration** option provides the following: - - **Automatic by operating system** - When the workload running on the VM doesn't have to meet availability targets, operating system updates are automatically downloaded and installed. Machines are rebooted as needed. + - **Automatic by OS (Windows Automatic Updates)** - When the workload running on the VM doesn't have to meet availability targets, the operating system updates are automatically downloaded and installed. Machines are rebooted as needed. 
- **Azure-orchestrated** - Patch orchestration set to Azure-orchestrated for an Azure VM (not applicable for Arc-enabled server) has two different implications depending on whether customer [schedule](../update-center/scheduled-patching.md#) is attached to it or not. | Patch orchestration type | Description | To configure update settings on your machines on a single VM, follow these steps |Azure-orchestrated with schedule attached | Patching will happen according to the schedule and [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) will not take effect on the machine. Patch orchestration set to Azure-orchestrated is a necessary pre-condition for enabling schedules. You cannot enable a machine for a custom schedule unless you set Patch orchestration to Azure-orchestrated. | - Available *Critical* and *Security* patches are downloaded and applied automatically on the Azure VM using [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.- - **Manual updates** - Configures the Windows Update agent by setting [configure automatic updates](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates). - - **Image Default** - Only supported for Linux Virtual Machines, this mode honors the default patching configuration in the image used to create the VM. + - **Manual updates** - This mode disables Windows automatic updates on VMs. Patches are installed manually or using a different solution. + - **Image Default** - Only supported for Linux Virtual Machines, this mode uses the default patching configuration in the image used to create the VM. -1. In **Machines**, select the checkbox for your machine and Select **Next** to continue. --1. In **Review and change**, verify your selected resources and the update settings and select **Review and change**. +1. After you make the selection, select **Save**. # [From Machines blade](#tab/manage-single-machines) 1. Sign in to the [Azure portal](https://portal.azure.com).-1. In **Update management center**, select **Machines**, your **subscription**, and select the checkbox of your machine from the list and select **Update settings**. +1. In **Update management center**, select **Machines** > your **subscription**. +1. Select the checkbox of your machine from the list and select **Update settings**. 1. Select **Update Settings** to proceed with the type of update for your machine.-1. In **Change update settings**, you can select the update settings that you want to change for your machines and follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm). +1. In **Change update settings**, select **+Add machine** to select the machine for which you want to change the update settings. +1. In **Select resources**, select the machine, select **Add**, and then follow the procedure from step 5 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm). # [From a selected VM](#tab/singlevm-schedule-home) 1. Select your virtual machine and the **virtual machines | Updates** page opens. 1. Under **Operations**, select **Updates**.-1. In **Updates**, select **Go to Updates using Update Center**. -1. 
In **Updates preview**, select **Update Settings**. +1. In **Updates (Preview)**, select **Update Settings**. 1. In **Change update settings**, you can select the update settings that you want to change for your machine and follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm). A notification appears to confirm that the update settings are successfully chan To configure update settings on your machines at scale, follow these steps: >[!NOTE]-> You can schedule updates from the Overview or Machines blade. +> You can schedule updates from the Overview blade or Machines blade. # [From Overview blade](#tab/manage-scale-overview) To configure update settings on your machines at scale, follow these steps: 1. In **Update management center**, select **Overview**, select your **Subscription** and select **Update settings**. -1. In **Change update settings**, select the update settings that you want to change for your machines follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm). +1. In **Change update settings**, select the update settings that you want to change for your machines. Follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm). # [From Machines blade](#tab/manage-scale-machines) 1. Sign in to the [Azure portal](https://portal.azure.com).-1. In **Update management center**, select **Machines**, your **subscription**, and select the checkbox for all your machines from the list and select **Update settings**. +1. In **Update management center**, select **Machines** > your **subscription**, and select the checkbox for all your machines from the list. 1. Select **Update Settings** to proceed with the type of update for your machines. 1. In **Change update settings**, you can select the update settings that you want to change for your machine and follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm). |