Updates from: 11/12/2024 02:04:16
Service Microsoft Docs article Related commit history on GitHub Change details
azure-app-configuration Howto Feature Filters Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-feature-filters-javascript.md
+
+ Title: Enable conditional features with a custom filter in a Node.js application
+
+description: Learn how to implement a custom feature filter to enable conditional feature flags for your Node.js application.
+
+ms.devlang: javascript
++++ Last updated : 09/26/2024++
+# Tutorial: Enable conditional features with a custom filter in a JavaScript application
+
+Feature flags can use feature filters to enable features conditionally. To learn more about feature filters, see [Tutorial: Enable conditional features with feature filters](./howto-feature-filters.md).
+
+The example used in this tutorial is based on the Node.js application introduced in the feature management [quickstart](./quickstart-feature-flag-javascript.md). Before proceeding further, complete the quickstart to create a Node.js application with a *Beta* feature flag. Once completed, you must [add a custom feature filter](./howto-feature-filters.md) to the *Beta* feature flag in your App Configuration store.
+
+In this tutorial, you'll learn how to implement a custom feature filter and use the feature filter to enable features conditionally. This tutorial uses the Node.js console app as an example, but you can also use the custom feature filter in other JavaScript applications.
+
+## Prerequisites
+
+- Create a [console app with a feature flag](./quickstart-feature-flag-javascript.md).
+- [Add a custom feature filter to the feature flag](./howto-feature-filters.md)
+
+## Implement a custom feature filter
+
+You've added a custom feature filter named **Random** with a **Percentage** parameter for your *Beta* feature flag in the prerequisites. Next, you implement the feature filter to enable the *Beta* feature flag based on the chance defined by the **Percentage** parameter.
+
+1. Open the file *app.js* and add the `RandomFilter` with the following code.
+
+ ``` javascript
+ class RandomFilter {
+ name = "Random";
+ evaluate(context) {
+ const percentage = context.parameters.Percentage;
+ const randomNumber = Math.random() * 100;
+ return randomNumber <= percentage;
+ }
+ }
+ ```
+
+ You added a `RandomFilter` class that has a single method named `evaluate`, which is called whenever a feature flag is evaluated. In `evaluate`, a feature filter enables a feature flag by returning `true`.
+
+   You set the name of `RandomFilter` to **Random**, which matches the filter name you set in the *Beta* feature flag in Azure App Configuration.
+
+1. Register the `RandomFilter` when creating the `FeatureManager`.
+
+ ``` javascript
+ const fm = new FeatureManager(ffProvider, {customFilters: [new RandomFilter()]});
+ ```
+
+## Feature filter in action
+
+When you run the application, the configuration provider loads the *Beta* feature flag from Azure App Configuration. The result of the `isEnabled("Beta")` method is printed to the console. Because the `RandomFilter` is implemented and used by the *Beta* feature flag, the result is `true` 50 percent of the time and `false` the other 50 percent of the time.
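+
+For reference, the evaluation loop from the quickstart's *app.js* stays the same. With the `RandomFilter` registered, a minimal sketch (assuming `settings`, `ffProvider`, and `sleepInMs` are set up as in the quickstart, inside the async `run` function) looks like this:
+
+``` javascript
+const fm = new FeatureManager(ffProvider, {customFilters: [new RandomFilter()]});
+
+while (true) {
+    await settings.refresh();                     // Refresh to get the latest feature flag settings
+    const isEnabled = await fm.isEnabled("Beta"); // The RandomFilter decides the outcome
+    console.log(`Beta is enabled: ${isEnabled}`);
+    await sleepInMs(5000);
+}
+```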
+
+Running the application will show that the *Beta* feature flag is sometimes enabled and sometimes not.
+
+``` bash
+Beta is enabled: true
+Beta is enabled: false
+Beta is enabled: false
+Beta is enabled: true
+Beta is enabled: true
+Beta is enabled: false
+Beta is enabled: false
+Beta is enabled: false
+Beta is enabled: true
+Beta is enabled: true
+```
+
+## Next steps
+
+To learn more about the built-in feature filters, continue to the following tutorials.
+
+> [!div class="nextstepaction"]
+> [Enable features on a schedule](./howto-timewindow-filter.md)
+
+> [!div class="nextstepaction"]
+> [Roll out features to targeted audience](./howto-targetingfilter.md)
azure-app-configuration Howto Feature Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-feature-filters.md
You can create custom feature filters that enable features based on your specifi
> [!div class="mx-imgBorder"] > ![Screenshot of the Azure portal, applying new custom filter.](./media/feature-filters/feature-flag-edit-apply-filter.png)
-You have successfully added a custom filter to a feature flag. Follow the instructions in the [Next Steps](#next-steps) section to implement the feature filter into your application for the language or platform you are using.
+ You have successfully added a custom filter to a feature flag.
-## Next steps
-
-In this tutorial, you learned the concept of feature filter and added a custom feature filter to a feature flag.
-
-To learn how to implement a custom feature filter, continue to the following tutorial:
+1. Continue to the following instructions to implement the feature filter into your application for the language or platform you are using.
-> [!div class="nextstepaction"]
-> [ASP.NET Core](./howto-feature-filters-aspnet-core.md)
+ - [ASP.NET Core](./howto-feature-filters-aspnet-core.md)
+ - [Node.js](./howto-feature-filters-javascript.md)
+ - [Python](./howto-feature-filters-python.md)
-> [!div class="nextstepaction"]
-> [Python](./howto-feature-filters-python.md)
+## Next steps
To learn more about the built-in feature filters, continue to the following tutorials:
azure-app-configuration Howto Targetingfilter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-targetingfilter.md
In this article, you will learn how to add and configure a targeting filter for
> [!div class="mx-imgBorder"] > ![Screenshot of the Azure portal, applying new targeting filter.](./media/feature-filters/feature-flag-edit-apply-targeting-filter.png)
-Now, you successfully added a targeting filter for your feature flag. This targeting filter will use the targeting rule you configured to enable or disable the feature flag for specific users and groups. Follow the instructions in the [Next Steps](#next-steps) section to learn how it works in your application for the language or platform you are using.
+ Now, you successfully added a targeting filter for your feature flag. This targeting filter will use the targeting rule you configured to enable or disable the feature flag for specific users and groups.
-## Next steps
-
-In this tutorial, you learned the concept of the targeting filter and added it to a feature flag.
+1. Continue to the following instructions to use the feature flag with a targeting filter in your application for the language or platform you are using.
-To learn how to use the feature flag with a targeting filter in your application, continue to the following tutorial.
+ - [ASP.NET Core](./howto-targetingfilter-aspnet-core.md)
-> [!div class="nextstepaction"]
-> [ASP.NET Core](./howto-targetingfilter-aspnet-core.md)
+## Next steps
To learn more about the feature filters, continue to the following tutorials:
azure-app-configuration Howto Timewindow Filter Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-timewindow-filter-javascript.md
+
+ Title: Enable features on a schedule in a Node.js application
+
+description: Learn how to enable feature flags on a schedule in a Node.js application.
+
+ms.devlang: javascript
++++ Last updated : 09/26/2024++
+# Tutorial: Enable features on a schedule in a Node.js application
+
+In this tutorial, you use the time window filter to enable a feature on a schedule for a Node.js application.
+
+The example used in this tutorial is based on the Node.js application introduced in the feature management [quickstart](./quickstart-feature-flag-javascript.md). Before proceeding further, complete the quickstart to create a Node.js application with a *Beta* feature flag. Once completed, you must [add a time window filter](./howto-timewindow-filter.md) to the *Beta* feature flag in your App Configuration store.
+
+## Prerequisites
+
+- Create a [Node.js application with a feature flag](./quickstart-feature-flag-javascript.md).
+- [Add a time window filter to the feature flag](./howto-timewindow-filter.md)
+
+## Use the time window filter
+
+You've added a time window filter for your *Beta* feature flag in the prerequisites. Next, you'll use the feature flag with the time window filter in your Node.js application.
+
+When you create a feature manager, the built-in feature filters are automatically added to its feature filter collection.
+
+``` javascript
+const fm = new FeatureManager(ffProvider);
+```
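+
+The rest of the evaluation logic is unchanged from the quickstart. As a reminder, a minimal sketch of the loop in *app.js* (inside the async `run` function, with `settings` and `sleepInMs` set up as in the quickstart) looks like this:
+
+``` javascript
+while (true) {
+    await settings.refresh();                     // Refresh to get the latest feature flag settings
+    const isEnabled = await fm.isEnabled("Beta"); // The time window filter decides the result
+    console.log(`Beta is enabled: ${isEnabled}`);
+    await sleepInMs(5000);
+}
+```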
+
+## Time window filter in action
+
+When you run the application, the configuration provider loads the *Beta* feature flag from Azure App Configuration. The result of the `isEnabled("Beta")` method will be printed to the console. If your current time is earlier than the start time set for the time window filter, the *Beta* feature flag will be disabled by the time window filter.
+
+You'll see the following console outputs.
+
+``` bash
+Beta is enabled: false
+Beta is enabled: false
+Beta is enabled: false
+Beta is enabled: false
+Beta is enabled: false
+Beta is enabled: false
+```
+
+Once the start time has passed, you'll notice that the *Beta* feature flag is enabled by the time window filter.
+
+You'll see the console outputs change as the *Beta* feature flag is enabled.
+
+``` bash
+Beta is enabled: false
+Beta is enabled: false
+Beta is enabled: false
+Beta is enabled: false
+Beta is enabled: false
+Beta is enabled: false
+Beta is enabled: true
+Beta is enabled: true
+Beta is enabled: true
+Beta is enabled: true
+```
+
+## Next steps
+
+To learn more about the feature filters, continue to the following tutorials.
+
+> [!div class="nextstepaction"]
+> [Enable conditional features with feature filters](./howto-feature-filters.md)
+
+> [!div class="nextstepaction"]
+> [Roll out features to targeted audience](./howto-targetingfilter.md)
azure-app-configuration Howto Timewindow Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-timewindow-filter.md
In this article, you will learn how to add and configure a time window filter fo
> [!div class="mx-imgBorder"] > ![Screenshot of the Azure portal, applying new time window filter.](./media/feature-filters/feature-flag-edit-apply-timewindow-filter.png)
-Now, you successfully added a time window filter to a feature flag. Follow the instructions in the [Next Steps](#next-steps) section to learn how it works in your application for the language or platform you are using.
+ Now, you successfully added a time window filter to a feature flag.
-## Next steps
-
-In this tutorial, you learned the concept of the time window filter and added it to a feature flag.
+1. Continue to the following instructions to use the feature flag with a time window filter in your application for the language or platform you are using.
-To learn how to use the feature flag with a time window filter in your application, continue to the following tutorial.
+ - [ASP.NET Core](./howto-timewindow-filter-aspnet-core.md)
+ - [Node.js](./howto-timewindow-filter-javascript.md)
-> [!div class="nextstepaction"]
-> [ASP.NET Core](./howto-timewindow-filter-aspnet-core.md)
+## Next steps
To learn more about the feature filters, continue to the following tutorials:
azure-app-configuration Quickstart Feature Flag Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-javascript.md
+
+ Title: Quickstart for adding feature flags to JavaScript apps
+
+description: In this quickstart, add feature flags to a Node.js app and manage them using Azure App Configuration.
++
+ms.devlang: javascript
+ Last updated : 09/26/2024++
+#Customer intent: As a JavaScript developer, I want to use feature flags to control feature availability quickly and confidently.
++
+# Quickstart: Add feature flags to a Node.js console app
+
+In this quickstart, you incorporate Azure App Configuration into a Node.js console app to create an end-to-end implementation of feature management. You can use App Configuration to centrally store all your feature flags and control their states.
+
+The JavaScript Feature Management libraries extend the framework with feature flag support. They seamlessly integrate with App Configuration through its JavaScript configuration provider. As an example, this tutorial shows how to use the JavaScript Feature Management library in a Node.js app.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
+- [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule). For information about installing Node.js either directly on Windows or using the Windows Subsystem for Linux (WSL), see [Get started with Node.js](/windows/dev-environment/javascript/nodejs-overview)
+
+## Add a feature flag
+
+Add a feature flag called *Beta* to the App Configuration store and leave **Label** and **Description** with their default values. For more information about how to add feature flags to a store using the Azure portal or the CLI, go to [Create a feature flag](./manage-feature-flags.md#create-a-feature-flag).
+
+> [!div class="mx-imgBorder"]
+> ![Enable feature flag named Beta](media/quickstart-feature-flag-javascript/add-beta-feature-flag.png)
++
+## Use the feature flag
+
+1. Install the Feature Management library by using the `npm install` command.
+
+ ``` console
+ npm install @microsoft/feature-management
+ ```
+
+1. Create a file named *app.js* and add the following code.
+
+ ``` javascript
+ const sleepInMs = require("util").promisify(setTimeout);
+ const { load } = require("@azure/app-configuration-provider");
+ const { FeatureManager, ConfigurationMapFeatureFlagProvider} = require("@microsoft/feature-management")
+ const connectionString = process.env.AZURE_APPCONFIG_CONNECTION_STRING;
+
+ async function run() {
+ // Connect to Azure App Configuration using connection string
+ const settings = await load(connectionString, {
+ featureFlagOptions: {
+ enabled: true,
+ // Note: selectors must be explicitly provided for feature flags.
+ selectors: [{
+ keyFilter: "*"
+ }],
+ refresh: {
+ enabled: true,
+ refreshIntervalInMs: 10_000
+ }
+ }
+ });
+
+ // Create a feature flag provider which uses a map as feature flag source
+ const ffProvider = new ConfigurationMapFeatureFlagProvider(settings);
+ // Create a feature manager which will evaluate the feature flag
+ const fm = new FeatureManager(ffProvider);
+
+ while (true) {
+ await settings.refresh(); // Refresh to get the latest feature flag settings
+ const isEnabled = await fm.isEnabled("Beta"); // Evaluate the feature flag
+ console.log(`Beta is enabled: ${isEnabled}`);
+ await sleepInMs(5000);
+ }
+ }
+
+ run().catch(console.error);
+ ```
+
+## Run the application
+
+1. Set an environment variable named **AZURE_APPCONFIG_CONNECTION_STRING** to the connection string of your App Configuration store. At the command line, run the following command:
+
+ ### [Windows command prompt](#tab/windowscommandprompt)
+
+ To run the app locally using the Windows command prompt, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```cmd
+ setx AZURE_APPCONFIG_CONNECTION_STRING "<app-configuration-store-connection-string>"
+ ```
+
+ ### [PowerShell](#tab/powershell)
+
+ If you use Windows PowerShell, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```azurepowershell
+ $Env:AZURE_APPCONFIG_CONNECTION_STRING = "<app-configuration-store-connection-string>"
+ ```
+
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```console
+ export AZURE_APPCONFIG_CONNECTION_STRING='<app-configuration-store-connection-string>'
+ ```
+
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command and replace `<app-configuration-store-connection-string>` with the connection string of your app configuration store:
+
+ ```console
+ export AZURE_APPCONFIG_CONNECTION_STRING='<app-configuration-store-connection-string>'
+ ```
+
+
+
+1. Run the following command to run the app locally:
+
+ ``` console
+ node app.js
+ ```
+
+1. You will see the following console outputs because the *Beta* feature flag is disabled.
+
+ ``` console
+ Beta is enabled: false
+ ```
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **All resources**, and select the App Configuration store that you created previously.
+
+1. Select **Feature manager** and locate the *Beta* feature flag. Enable the flag by selecting the checkbox under **Enabled**.
+
+1. Wait for a few seconds and you will see the console outputs change.
+
+ ``` console
+ Beta is enabled: true
+ ```
azure-app-configuration Quickstart Feature Flag Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-python.md
In this quickstart, you'll create a feature flag in Azure App Configuration and use it to dynamically control Python apps to create an end-to-end implementation of feature management.
-The feature management support extends the dynamic configuration feature in App Configuration. These examples in the quickstart build on thePpython apps introduced in the dynamic configuration tutorial. Before you continue, finish the [quickstart](./quickstart-python-provider.md) and the [tutorial](./enable-dynamic-configuration-python.md) to create python apps with dynamic configuration first.
+The feature management support extends the dynamic configuration feature in App Configuration. These examples in the quickstart build on the Python app introduced in the dynamic configuration tutorial. Before you continue, finish the [quickstart](./quickstart-python-provider.md) and the [tutorial](./enable-dynamic-configuration-python.md) to create Python apps with dynamic configuration first.
This library does **not** have a dependency on any Azure libraries. They seamlessly integrate with App Configuration through its Python configuration provider.
azure-app-configuration Quickstart Javascript Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-javascript-provider.md
Title: Quickstart for using Azure App Configuration with JavaScript apps description: In this quickstart, create a Node.js app with Azure App Configuration to centralize storage and management of application settings separate from your code. -+ ms.devlang: javascript Previously updated : 04/18/2024- Last updated : 11/07/2024+ #Customer intent: As a JavaScript developer, I want to manage all my app settings in one place.
-# Quickstart: Create a JavaScript app with Azure App Configuration
+# Quickstart: Create a Node.js console app with Azure App Configuration
In this quickstart, you use Azure App Configuration to centralize storage and management of application settings using the [Azure App Configuration JavaScript provider client library](https://github.com/Azure/AppConfiguration-JavaScriptProvider).
run().catch(console.error);
In this quickstart, you created a new App Configuration store and learned how to access key-values using the App Configuration JavaScript provider in a Node.js app. To learn how to configure your app to dynamically refresh configuration settings, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Enable dynamic configuration](./enable-dynamic-configuration-javascript.md)
+> [Enable dynamic configuration](./enable-dynamic-configuration-javascript.md)
azure-cache-for-redis Cache Azure Active Directory For Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md
The following table includes links to code samples. They demonstrate how to conn
| Client library | Language | Link to sample code| |-|-|-| | StackExchange.Redis | .NET | [StackExchange.Redis code sample](https://github.com/Azure/Microsoft.Azure.StackExchangeRedis) |
+| go-redis | Go | [go-redis code sample](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#example-package-Redis) |
| redis-py | Python | [redis-py code sample](https://aka.ms/redis/aad/sample-code/python) | | Jedis | Java | [Jedis code sample](https://aka.ms/redis/aad/sample-code/java-jedis) | | Lettuce | Java | [Lettuce code sample](https://aka.ms/redis/aad/sample-code/java-lettuce) |
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md
Your data in an Azure Storage account is [always replicated](../storage/common/s
Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage provides LRS and ZRS redundancy options for replicating data in the primary region. For applications requiring high availability, you can choose geo-replication to a secondary region that is hundreds of kilometers away from the primary region. Azure Storage offers GRS and GZRS options for copying data to a secondary region. More options are available to you for configuring read access (RA) to the secondary region (RA-GRS and RA-GZRS), as explained in [Read access to data in the secondary region](../storage/common/storage-redundancy.md#read-access-to-data-in-the-secondary-region).
-Azure Storage redundancy options can have implications on data residency as Azure relies on [paired regions](../availability-zones/cross-region-replication-azure.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS). For example, if you're concerned about geo-replication across regions that span country boundaries, you may want to choose LRS or ZRS to keep Azure Storage data at rest within the geographic boundaries of the country in which the primary region is located. Similarly, [geo replication for Azure SQL Database](/azure/azure-sql/database/active-geo-replication-overview) can be obtained by configuring asynchronous replication of transactions to any region in the world, although it's recommended that paired regions be used for this purpose as well. If you need to keep relational data inside the geographic boundaries of your country/region, you shouldn't configure Azure SQL Database asynchronous replication to a region outside that country/region.
+Azure Storage redundancy options can have implications on data residency as Azure relies on [paired regions](../availability-zones/cross-region-replication-azure.md) to deliver [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS). For example, if you're concerned about geo-replication across regions that span country/region boundaries, you may want to choose LRS or ZRS to keep Azure Storage data at rest within the geographic boundaries of the country/region in which the primary region is located. Similarly, [geo replication for Azure SQL Database](/azure/azure-sql/database/active-geo-replication-overview) can be obtained by configuring asynchronous replication of transactions to any region in the world, although it's recommended that paired regions be used for this purpose as well. If you need to keep relational data inside the geographic boundaries of your country/region, you shouldn't configure Azure SQL Database asynchronous replication to a region outside that country/region.
As described on the [data location page](https://azure.microsoft.com/global-infrastructure/data-residency/), most Azure **regional** services honor the data at rest commitment to ensure that your data remains within the geographic boundary where the corresponding service is deployed. A handful of exceptions to this rule are noted on the data location page. You should review these exceptions to determine if the type of data stored outside your chosen deployment Geography meets your needs.
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
Title: Bicep CLI commands and overview
description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 10/23/2024 Last updated : 10/30/2024 # Bicep CLI commands
To install a specific version:
az bicep install --version v0.3.255 ```
+## jsonrpc
+
+The `jsonrpc` command enables running the Bicep CLI with a JSON-RPC interface, allowing for programmatic interaction with structured output and avoiding cold-start delays when compiling multiple files. This setup also supports building libraries to interact with Bicep files programmatically in non-.NET languages.
+
+The wire format for sending and receiving input/output is header-delimited, using the following structure, where `\r` and `\n` represent carriage return and line feed characters:
+
+```
+Content-Length: <length>\r\n\r\n<message>\r\n\r\n
+```
+
+* `<length>` is the length of the `<message>` string, including the trailing `\r\n\r\n`.
+* `<message>` is the raw JSON message.
+
+For example:
+
+```
+Content-Length: 72\r\n\r\n{"jsonrpc": "2.0", "id": 0, "method": "bicep/version", "params": {}}\r\n\r\n
+```
+
+The following messages show an example of requesting the Bicep version.
+
+* The input:
+
+ ```json
+ {
+ "jsonrpc": "2.0",
+ "id": 0,
+ "method": "bicep/version",
+ "params": {}
+ }
+ ```
+
+* The output:
+
+ ```json
+ {
+ "jsonrpc": "2.0",
+ "id": 0,
+ "result": {
+ "version": "0.24.211"
+ }
+ }
+ ```
+
+For the available methods & request/response bodies, see [`ICliJsonRpcProtocol.cs`](https://github.com/Azure/bicep/blob/main/src/Bicep.Cli/Rpc/ICliJsonRpcProtocol.cs).
+For an example establishing a JSONRPC connection and interacting with Bicep files programmatically using Node, see [`jsonrpc.test.ts`](https://github.com/Azure/bicep/blob/main/src/Bicep.Cli.E2eTests/src/jsonrpc.test.ts).
+
+### Usage for named pipe
+
+Use the following syntax to connect to an existing named pipe as a JSONRPC client.
+
+```bicep cli
+bicep jsonrpc --pipe <named_pipe>
+```
+
+`<named_pipe>` is an existing named pipe to connect the JSONRPC client to.
+
+To connect to a named pipe on macOS/Linux:
+
+```bicep cli
+bicep jsonrpc --pipe /tmp/bicep-81375a8084b474fa2eaedda1702a7aa40e2eaa24b3.sock
+```
+
+To connect to a named pipe on Windows:
+
+```bicep cli
+bicep jsonrpc --pipe \\.\pipe\\bicep-81375a8084b474fa2eaedda1702a7aa40e2eaa24b3.sock
+```
+
+For more examples, see [C#](https://github.com/Azure/bicep/blob/096c32f9d5c42bfb85dff550f72f3fe16f8142c7/src/Bicep.Cli.IntegrationTests/JsonRpcCommandTests.cs#L24-L50) and [node.js](https://github.com/anthony-c-martin/bicep-node/blob/4769e402f2d2c1da8d27df86cb3d62677e7a7456/src/utils/jsonrpc.ts#L117-L151).
+
+### Usage for TCP socket
+
+Use the following syntax to connect to an existing TCP socket as a JSONRPC client.
+
+```bicep cli
+bicep jsonrpc --socket <tcp_socket>
+```
+
+`<tcp_socket>` is a socket number to connect the JSONRPC client to.
+
+To connect to a TCP socket:
+
+```bicep cli
+bicep jsonrpc --socket 12345
+```
+
+### Usage for stdin and stdout
+
+Use the following syntax to run the JSONRPC interface using stdin & stdout for messages.
+
+```bicep cli
+bicep jsonrpc --stdio
+```
+ ## lint The `lint` command returns the errors and the [linter rule](./linter.md) violations of a Bicep file.
module stgModule 'br:exampleregistry.azurecr.io/bicep/modules/storage:v1' = {
The local cache is found in: -- On Windows
+* On Windows
```path %USERPROFILE%\.bicep\br\<registry-name>.azurecr.io\<module-path\<tag> ``` -- On Linux
+* On Linux
```path /home/<username>/.bicep ``` -- On Mac
+* On Mac
```path ~/.bicep
If the Bicep CLI hasn't been installed, you'll encounter an error message statin
To learn about deploying a Bicep file, see: -- [Azure CLI](deploy-cli.md)-- [Cloud Shell](deploy-cloud-shell.md)-- [PowerShell](deploy-powershell.md)
+* [Azure CLI](deploy-cli.md)
+* [Cloud Shell](deploy-cloud-shell.md)
+* [PowerShell](deploy-powershell.md)
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md
Title: Support matrix for Azure Blobs backup description: Provides a summary of support settings and limitations when backing up Azure Blobs. Previously updated : 09/11/2024 Last updated : 11/11/2024
Operational backup of blobs uses blob point-in-time restore, blob versioning, so
- Operational backup supports block blobs in standard general-purpose v2 storage accounts only. Storage accounts with hierarchical namespace enabled (that is, ADLS Gen2 accounts) aren't supported. <br><br> Also, any page blobs, append blobs, and premium blobs in your storage account won't be restored and only block blobs will be restored. - Blob backup is also supported when the storage account has private endpoints.
+- The backup operation isn't supported for blobs that are uploaded by using [Data Lake Storage APIs](/rest/api/storageservices/data-lake-storage-gen2).
**Other limitations**:
cdn Cdn App Dev Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-app-dev-net.md
We can then confirm the prompts to run the rest of the program.
## Next Steps
-To see the completed project from this walkthrough, [download the sample](https://code.msdn.microsoft.com/Azure-CDN-Management-1f2fba2c).
- To find more documentation on the Azure CDN Management Library for .NET, view the [reference on MSDN](/dotnet/api/overview/azure/cdn). Manage your CDN resources with [PowerShell](cdn-manage-powershell.md).
cost-management-billing Export Cost Data Storage Account Sas Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/export-cost-data-storage-account-sas-key.md
Often, partners don't have their own Azure subscriptions in the tenant associate
## Requirements
+- **Availability:** This feature is available only in the public cloud.
+ - You must be a partner with a Microsoft Partner Agreement. Your customers on the Azure plan must have a signed Microsoft Customer Agreement.
- - SAS key-based export isn't supported for indirect enterprise agreements.
+- SAS key-based export isn't supported for indirect enterprise agreements.
- SAS key-based export is available for partners that sign in to the Azure portal from a partner tenant. However, the SAS key option isn't supported if you're using Azure Lighthouse for customer management. - You must be global admin for your partner organization's billing account. - You must have access to configure a storage account that's in a different tenant of your partner organization. You're responsible for maintaining permissions and data access when your export data to your storage account.
cost-management-billing Mca Setup Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-setup-account.md
Your new billing account simplifies billing for your organization and provides e
3. You use an invoice section to organize your costs based on your needs, similar to departments in your Enterprise Agreement enrollment. Department becomes invoice sections and department administrators become owners of the respective invoice sections. To learn more about invoice sections, see [understand invoice sections](../understand/mca-overview.md#invoice-sections). 4. The accounts that were created in your Enterprise Agreement aren't supported in the new billing account. The account's subscriptions belong to the respective invoice section for their department. Account owners can create and manage subscriptions for their invoice sections.
+> [!NOTE]
+> When you have SQL Server licenses applied with centrally managed SQL Azure Hybrid Benefit in your Enterprise Agreement and then transfer the agreement to a Microsoft Customer Agreement (enterprise), the licenses don't automatically transfer. After your new agreement migration completes, you must manually assign licenses with centrally managed SQL Hybrid Benefit. For more information about planning and getting started, see [Transition to centrally managed Azure Hybrid Benefit](../scope-level/transition-existing.md). For more information about centrally managed Azure Hybrid Benefit, see [What is centrally managed Azure Hybrid Benefit for SQL Server](../scope-level/overview-azure-hybrid-benefit-scope.md).
+ ## Changes to billing administrator access Depending on their access, billing administrators on your Enterprise Agreement enrollment get access to the billing scopes on the new account. The following lists explain the changes to access that result from setup:
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
When you exchange reservations, the new purchase currency amount must be greater
1. When done, select **Next: Review**. 1. Review your reservations to return and new reservations to purchase and then select **Confirm exchange**.
-## Exchange nonpremium storage for premium storage
+## Exchange nonpremium storage for premium storage or vice versa
-You can exchange a reservation purchased for a VM size that doesn't support premium storage to a corresponding VM size that does. For example, an _F1_ for an _F1s_. To make the exchange, go to Reservation Details and select **Exchange**. The exchange doesn't reset the term of the reserved instance or create a new transaction.
+You can exchange a reservation purchased for a VM size that doesn't support premium storage to a corresponding VM size that does, and vice versa. For example, an _F1_ for an _F1s_ or an _F1s_ for an _F1_. To make the exchange, go to Reservation Details and select **Exchange**. The exchange doesn't reset the term of the reserved instance or create a new transaction. Also, the new reservation will be for the same region, and there are no charges for this exchange.
If you're exchanging for a different size, series, region, or payment frequency, the term is reset for the new reservation. ## How transactions are processed
cost-management-billing Overview Azure Hybrid Benefit Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/overview-azure-hybrid-benefit-scope.md
Title: What is centrally managed Azure Hybrid Benefit for SQL Server?
description: Azure Hybrid Benefit is a licensing benefit that lets you bring your on-premises core-based Windows Server and SQL Server licenses with active Software Assurance (or subscription) to Azure. Previously updated : 03/21/2024 Last updated : 11/11/2024
To use centrally managed licenses, you must have a specific role assigned to you
- Billing profile contributor If you don't have one of the roles, your organization must assign one to you. For more information about how to become a member of the roles, see [Manage billing roles](../manage/understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
+> [!NOTE]
+> When you have SQL Server licenses applied with centrally managed SQL Azure Hybrid Benefit in your Enterprise Agreement and then transfer the agreement to a Microsoft Customer Agreement (enterprise), the licenses don't automatically transfer. After your new agreement migration completes, you must manually assign licenses with centrally managed SQL Hybrid Benefit. For more information about migrating from an Enterprise Agreement to a Microsoft Customer Agreement (enterprise), see [Set up your billing account for a Microsoft Customer Agreement](../manage/mca-setup-account.md).
+ At a high level, here's how centrally managed Azure Hybrid Benefit works: 1. First, confirm that all your SQL Server VMs are visible to you and Azure by enabling automatic registration of the self-installed SQL server images with the IaaS extension. For more information, see [Register multiple SQL VMs in Azure with the SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk).
cost-management-billing Transition Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/transition-existing.md
Title: Transition to centrally managed Azure Hybrid Benefit
description: This article describes the changes and several transition scenarios to illustrate transitioning to centrally managed Azure Hybrid Benefit. Previously updated : 03/21/2024 Last updated : 11/11/2024
When you transition to centrally managed Azure Hybrid Benefit, it removes the need to configure the benefit at the resource level. This article describes the changes and several transition scenarios to illustrate the result. For a better understanding about how the new scope-level license management experience applies licenses and discounts to your resources, see [What is centrally managed Azure Hybrid Benefit?](overview-azure-hybrid-benefit-scope.md)
+> [!NOTE]
+> When you have SQL Server licenses applied with centrally managed SQL Azure Hybrid Benefit in your Enterprise Agreement and then transfer the agreement to a Microsoft Customer Agreement (enterprise), the licenses don't automatically transfer. After your new agreement migration completes, you must manually assign licenses with centrally managed SQL Hybrid Benefit. For more information about migrating from an Enterprise Agreement to a Microsoft Customer Agreement (enterprise), see [Set up your billing account for a Microsoft Customer Agreement](../manage/mca-setup-account.md).
+ ## Changes to individual resource configuration When you assign licenses to a subscription using the new experience, changes are shown in the Azure portal. Afterward, you can't manage the benefit at the resource level. Any changes that you make at a scope level override settings at the individual resource level.
data-factory Connector Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mysql.md
Previously updated : 10/09/2024 Last updated : 10/28/2024
When copying data from MySQL, the following mappings are used from MySQL data ty
| `time` |`TimeSpan` |`TimeSpan` | | `timestamp` |`Datetime` |`Datetime` | | `tinyblob` |`Byte[]` |`Byte[]` |
-| `tinyint` |`SByte` |`Int16` |
+| `tinyint` |`SByte` <br/> (`tinyint(1)` is mapped to `Boolean`) |`Int16` |
| `tinyint unsigned` |`Int16` |`Int16` | | `tinytext` |`String` |`String` | | `varchar` |`String` |`String` |
Here are steps that help you upgrade your MySQL connector:
1. The latest driver version v2 supports more MySQL versions. For more information, see [Supported capabilities](connector-mysql.md#supported-capabilities).
+### Best practices for MySQL connector recommended version
+
+This section introduces best practices for using the recommended version of the MySQL connector.
+
+#### Cannot load SSL key
+
+- **Symptoms**: If you use the MySQL connector recommended version with SSL Key as a connection property, you may encounter the following error message: `Could not load the client key from your_pem_file: Unrecognized PEM header: --BEGIN PRIVATE KEY--`
+
+- **Cause**: The recommended version can't decrypt keys in the PKCS#8 format.
+
+- **Recommendation**: Convert the PEM key from PKCS#8 to PKCS#1, as sketched below.
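+
+For example, here's a minimal Node.js sketch of the conversion (assuming an unencrypted RSA key; the file names are placeholders). You can also perform the same conversion with OpenSSL.
+
+```javascript
+// Re-export a PKCS#8 PEM private key as PKCS#1 (works for unencrypted RSA keys).
+const { createPrivateKey } = require("node:crypto");
+const { readFileSync, writeFileSync } = require("node:fs");
+
+const pkcs8Pem = readFileSync("client-key-pkcs8.pem", "utf8"); // placeholder input file
+const pkcs1Pem = createPrivateKey(pkcs8Pem).export({ type: "pkcs1", format: "pem" });
+writeFileSync("client-key-pkcs1.pem", pkcs1Pem);               // placeholder output file
+```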
+ ## Differences between the recommended and the legacy driver version The table below shows the data type mapping differences between MySQL using the recommended and the legacy driver version.
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
Previously updated : 10/22/2024 Last updated : 11/11/2024 ai-usage: ai-assisted
These generic properties are supported for the Snowflake linked service:
| warehouse | The default virtual warehouse used for the session after connecting. |Yes| | authenticationType | Type of authentication used to connect to the Snowflake service. Allowed values are: **Basic** (Default) and **KeyPair**. Refer to corresponding sections below on more properties and examples respectively. | No | | role | The default security role used for the session after connecting. | No |
+| host | The host name of the Snowflake account. For example: `contoso.snowflakecomputing.com`. Host names with the `.cn` suffix are also supported.| No |
| connectVia | The [integration runtime](concepts-integration-runtime.md) that is used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime (if your data store is located in a private network). If not specified, it uses the default Azure integration runtime. | No | This Snowflake connector supports the following authentication types. See the corresponding sections for details.
firewall Protect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md
When the previous command has succeeded, save the firewall frontend IP address f
FWPUBLIC_IP=$(az network public-ip show -g $RG -n $FWPUBLICIP_NAME --query "ipAddress" -o tsv) FWPRIVATE_IP=$(az network firewall show -g $RG -n $FWNAME --query "ipConfigurations[0].privateIPAddress" -o tsv)++
+# Set the firewall as the VNet DNS server so that DNS queries are visible in the firewall logs
+
+az network vnet update -g $RG --name $VNET_NAME --dns-servers $FWPRIVATE_IP
``` > [!NOTE]
az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aks
```azurecli az network firewall application-rule create -g $RG -f $FWNAME --collection-name 'aksfwar' -n 'fqdn' --source-addresses '*' --protocols 'http=80' 'https=443' --fqdn-tags "AzureKubernetesService" --action allow --priority 100+
+# Set firewall application rules to allow Kubernetes to reach storage and image resources
+
+az network firewall application-rule create -g $RG -f $FWNAME --collection-name 'aksfwarweb' -n 'storage' --source-addresses '10.42.1.0/24' --protocols 'https=443' --target-fqdns '*.blob.storage.azure.net' '*.blob.core.windows.net' --action allow --priority 101
+az network firewall application-rule create -g $RG -f $FWNAME --collection-name 'aksfwarweb' -n 'website' --source-addresses '10.42.1.0/24' --protocols 'https=443' --target-fqdns 'ghcr.io' '*.docker.io' '*.docker.com' '*.githubusercontent.com'
``` See [Azure Firewall documentation](overview.md) to learn more about the Azure Firewall service.
frontdoor Front Door Cdn Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-cdn-comparison.md
The following table provides a comparison between Azure Front Door and Azure CDN
| Azure Policy integration | &check; | &check; | &check; | | | | | Azure Advisory integration | &check; | &check; | | &check; | &check; | &check; | | Managed Identities with Azure Key Vault | &check; | &check; | | &check; | | |
-| **Pricing** | | | | | | |
+| **Pricing** | [Azure Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor/) | | | [Azure CDN pricing](https://azure.microsoft.com/pricing/details/cdn/) | | |
| Simplified pricing | &check; | &check; | | &check; | &check; | &check; | ## Services on retirement path
frontdoor Front Door Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-diagnostics.md
Previously updated : 12/19/2023 Last updated : 11/05/2024 zone_pivot_groups: front-door-tiers
The metrics listed in the following table are recorded and stored free of charge
|--|--|--|--| | Byte Hit Ratio | The percentage of traffic that was served from the Azure Front Door cache, computed against the total egress traffic. The byte hit ratio is low if most of the traffic is forwarded to the origin rather than served from the cache. <br/><br/> **Byte Hit Ratio** = (egress from edge - egress from origin)/egress from edge. <br/><br/> Scenarios excluded from bytes hit ratio calculations:<ul><li>You explicitly disable caching, either through the Rules Engine or query string caching behavior.</li><li>You explicitly configure a `Cache-Control` directive with the `no-store` or `private` cache directives.</li></ul> | Endpoint | Avg, Min | | Origin Health Percentage | The percentage of successful health probes sent from Azure Front Door to origins. | Origin, Origin Group | Avg |
-| Origin Latency | Azure Front Door calculates the time from sending the request to the origin to receiving the last response byte from the origin. | Endpoint, Origin | Avg, Max |
+| Origin Latency | Azure Front Door calculates the time from sending the request to the origin to receiving the last response byte from the origin. WebSocket is excluded from the origin latency.| Endpoint, Origin | Avg, Max |
| Origin Request Count | The number of requests sent from Azure Front Door to origins. | Endpoint, Origin, HTTP Status, HTTP Status Group | Avg, Sum | | Percentage of 4XX | The percentage of all the client requests for which the response status code is 4XX. | Endpoint, Client Country, Client Region | Avg, Max | | Percentage of 5XX | The percentage of all the client requests for which the response status code is 5XX. | Endpoint, Client Country, Client Region | Avg, Max | | Request Count | The number of client requests served through Azure Front Door, including requests served entirely from the cache. | Endpoint, Client Country, Client Region, HTTP Status, HTTP Status Group | Avg, Sum | | Request Size | The number of bytes sent in requests from clients to Azure Front Door. | Endpoint, Client Country, client Region, HTTP Status, HTTP Status Group | Avg, Max | | Response Size | The number of bytes sent as responses from Front Door to clients. | Endpoint, client Country, client Region, HTTP Status, HTTP Status Group | Avg, Max |
-| Total Latency | Azure Front Door receives the client request and sends the last response byte to the client. This is the total time taken. | Endpoint, Client Country, Client Region, HTTP Status, HTTP Status Group | Avg, Max |
+| Total Latency | Azure Front Door receives the client request and sends the last response byte to the client. This is the total time taken. For WebSocket, this metric refers to the time it takes to establish the WebSocket connection. | Endpoint, Client Country, Client Region, HTTP Status, HTTP Status Group | Avg, Max |
| Web Application Firewall Request Count | The number of requests processed by the Azure Front Door web application firewall. | Action, Policy Name, Rule Name | Avg, Sum | > [!NOTE]
Information about every request is logged into the access log. Each access log e
| Property | Description | |-|-| | TrackingReference | The unique reference string that identifies a request served by Azure Front Door. The tracking reference is sent to the client and to the origin by using the `X-Azure-Ref` headers. Use the tracking reference when searching for a specific request in the access or WAF logs. |
-| Time | The date and time when the Azure Front Door edge delivered requested contents to client (in UTC). |
+| Time | The date and time when the Azure Front Door edge delivered requested contents to client (in UTC). For WebSocket connections, the time represents when the connection gets closed. |
| HttpMethod | HTTP method used by the request: DELETE, GET, HEAD, OPTIONS, PATCH, POST, or PUT. | | HttpVersion | The HTTP version that the client specified in the request. | | RequestUri | The URI of the received request. This field contains the full scheme, port, domain, path, and query string. | | HostName | The host name in the request from client. If you enable custom domains and have wildcard domain (`*.contoso.com`), the HostName log field's value is `subdomain-from-client-request.contoso.com`. If you use the Azure Front Door domain (`contoso-123.z01.azurefd.net`), the HostName log field's value is `contoso-123.z01.azurefd.net`. |
-| RequestBytes | The size of the HTTP request message in bytes, including the request headers and the request body. |
-| ResponseBytes | The size of the HTTP response message in bytes. |
+| RequestBytes | The size of the HTTP request message in bytes, including the request headers and the request body. For WebSocket connections, this is the total number of bytes sent from the client to the server through the connection.|
+| ResponseBytes | The size of the HTTP response message in bytes. For WebSocket connections, this is the total number of bytes sent from the server to the client through the connection.|
| UserAgent | The user agent that the client used. Typically, the user agent identifies the browser type. | | ClientIp | The IP address of the client that made the original request. If there was an `X-Forwarded-For` header in the request, then the client IP address is taken from the header. | | SocketIp | The IP address of the direct connection to the Azure Front Door edge. If the client used an HTTP proxy or a load balancer to send the request, the value of SocketIp is the IP address of the proxy or load balancer. |
-| timeTaken | The length of time from when the Azure Front Door edge received the client's request to the time that Azure Front Door sent the last byte of the response to the client, in seconds. This field doesn't take into account network latency and TCP buffering. |
-| RequestProtocol | The protocol that the client specified in the request. Possible values include: **HTTP**, **HTTPS**. |
+| TimeTaken | The duration from when the Azure Front Door edge received the client's request to when the last byte of the response was sent to the client, measured in seconds. This metric excludes network latency and TCP buffering. For WebSocket connections, it represents the connection duration from establishment to closure. |
+| RequestProtocol | The protocol specified by the client in the request. Possible values include: **HTTP**, **HTTPS**. For WebSocket, the protocols are **WS**, **WSS**. Only requests that successfully upgrade to WebSocket will have WS/WSS. |
| SecurityProtocol | The TLS/SSL protocol version used by the request, or null if the request didn't use encryption. Possible values include: **SSLv3**, **TLSv1**, **TLSv1.1**, **TLSv1.2**. | | SecurityCipher | When the value for the request protocol is HTTPS, this field indicates the TLS/SSL cipher negotiated by the client and Azure Front Door. | | Endpoint | The domain name of the Azure Front Door endpoint, such as `contoso-123.z01.azurefd.net`. |
Front Door currently provides diagnostic logs. Diagnostic logs provide individua
| TimeTaken | The length of time from first byte of request into Front Door to last byte of response out, in seconds. | | TrackingReference | The unique reference string that identifies a request served by Front Door, also sent as X-Azure-Ref header to the client. Required for searching details in the access logs for a specific request. | | UserAgent | The browser type that the client used. |
-| ErrorInfo | This field contains the specific type of error for further troubleshooting. </br> Possible values include: </br> **NoError**: Indicates no error was found. </br> **CertificateError**: Generic SSL certificate error.</br> **CertificateNameCheckFailed**: The host name in the SSL certificate is invalid or doesn't match. </br> **ClientDisconnected**: Request failure because of client network connection. </br> **UnspecifiedClientError**: Generic client error. </br> **InvalidRequest**: Invalid request. It might occur because of malformed header, body, and URL. </br> **DNSFailure**: DNS Failure. </br> **DNSNameNotResolved**: The server name or address couldn't be resolved. </br> **OriginConnectionAborted**: The connection with the origin was stopped abruptly. </br> **OriginConnectionError**: Generic origin connection error. </br> **OriginConnectionRefused**: The connection with the origin wasn't able to established. </br> **OriginError**: Generic origin error. </br> **OriginInvalidResponse**: Origin returned an invalid or unrecognized response. </br> **OriginTimeout**: The timeout period for origin request expired. </br> **ResponseHeaderTooBig**: The origin returned too large of a response header. </br> **RestrictedIP**: The request was blocked because of restricted IP. </br> **SSLHandshakeError**: Unable to establish connection with origin because of SSL hand shake failure. </br> **UnspecifiedError**: An error occurred that didnΓÇÖt fit in any of the errors in the table. </br> **SSLMismatchedSNI**:The request was invalid because the HTTP message header didn't match the value presented in the TLS SNI extension during SSL/TLS connection setup.|
+| ErrorInfo | This field contains the specific type of error for further troubleshooting. </br> Possible values include: </br> **NoError**: Indicates no error was found. </br> **CertificateError**: Generic SSL certificate error.</br> **CertificateNameCheckFailed**: The host name in the SSL certificate is invalid or doesn't match. </br> **ClientDisconnected**: Request failure because of client network connection. </br> **UnspecifiedClientError**: Generic client error. </br> **InvalidRequest**: Invalid request. It might occur because of malformed header, body, and URL. </br> **DNSFailure**: DNS Failure. </br> **DNSNameNotResolved**: The server name or address couldn't be resolved. </br> **OriginConnectionAborted**: The connection with the origin was stopped abruptly. </br> **OriginConnectionError**: Generic origin connection error. </br> **OriginConnectionRefused**: The connection with the origin couldn't be established. </br> **OriginError**: Generic origin error. </br> **OriginInvalidResponse**: Origin returned an invalid or unrecognized response. </br> **OriginTimeout**: The timeout period for origin request expired. </br> **ResponseHeaderTooBig**: The origin returned too large of a response header. </br> **RestrictedIP**: The request was blocked because of restricted IP. </br> **SSLHandshakeError**: Unable to establish connection with origin because of SSL handshake failure. </br> **UnspecifiedError**: An error occurred that didn't fit in any of the errors in the table. </br> **SSLMismatchedSNI**: The request was invalid because the HTTP message header didn't match the value presented in the TLS SNI extension during SSL/TLS connection setup.|
| Result | `SSLMismatchedSNI` is a status code that signifies a successful request with a mismatch warning between the SNI and the host header. This status code implies domain fronting, a technique that violates Azure Front Door's terms of service. Requests with `SSLMismatchedSNI` will be rejected after January 22, 2024.| | Sni | This field specifies the Server Name Indication (SNI) that is sent during the TLS/SSL handshake. It can be used to identify the exact SNI value if there was a `SSLMismatchedSNI` status code. Additionally, it can be compared with the host value in the `requestUri` field to detect and resolve the mismatch issue. |
frontdoor Websocket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/websocket.md
+
+ Title: Azure Front Door WebSocket (preview)
+description: This article describes how WebSocket works on Azure Front Door for real-time bidirectional communication between a server and a client over a long-running TCP connection.
++++ Last updated : 11/11/2024++++
+# Azure Front Door WebSocket (preview)
+
+Azure Front Door supports WebSocket on both Standard and Premium tiers without requiring any extra configurations. WebSocket, standardized in [RFC6455](https://tools.ietf.org/html/rfc6455), is a TCP-based protocol that facilitates full-duplex communication between a server and a client over a long-running TCP connection. It eliminates the need for polling as required in HTTP and avoids some of the overhead of HTTP. It can reuse the same TCP connection for multiple requests or responses, resulting in a more efficient utilization of resources. This enables more interactive and real-time scenarios.
+
+> [!IMPORTANT]
+> Azure Front Door WebSocket is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+WebSocket is ideal for applications needing real-time updates or continuous data streams, such as chat apps, dashboards, financial updates, GPS, online education, live streaming, and gaming. For instance, a trading website can use WebSocket to push and update pricing data in real-time.
+
+## Use WebSocket on Azure Front Door
+
+When using WebSocket on Azure Front Door, consider the following:
+
+- Once a connection is upgraded to WebSocket, Azure Front Door transmits data between clients and the origin server without performing any inspections or manipulations during the established connection.
+- Web Application Firewall (WAF) inspections are applied during the WebSocket establishment phase. After the connection is established, the WAF doesn't perform further inspections.
+- Health probes to origins are conducted using the HTTP protocol.
+- Disable caching for WebSocket routes. For routes with caching enabled, Azure Front Door doesn't forward the WebSocket Upgrade header to the origin and treats it as an HTTP request, disregarding cache rules. This results in a failed WebSocket upgrade request.
+- The idle timeout is 5 minutes. If Azure Front Door doesn't detect any data transmission from the origin or the client within the past 5 minutes, the connection is considered idle and is closed.
+- Currently, WebSocket connections on Azure Front Door remain open for no longer than 4 hours. The WebSocket connection can be dropped due to underlying server upgrades or other maintenance activities. We highly recommend you implement retry logic in your application.
+
+## How the WebSocket protocol works
+
+WebSocket protocols use port 80 for standard WebSocket connections and port 443 for WebSocket connections over TLS/SSL. As a stateful protocol, the connection between clients and the server remains active until terminated by either party. WebSocket connections begin as an HTTP Upgrade request with the `ws:` or `wss:` scheme. These connections are established by upgrading an HTTP request/response using the `Connection: Upgrade`, `Upgrade: websocket`, `Sec-WebSocket-Key`, and `Sec-WebSocket-Version` headers, as shown in the request header examples.
+
+The handshake from the client looks as follows:
+
+```
+ GET /chat HTTP/1.1
+ Host: server.example.com
+ Upgrade: websocket
+ Connection: Upgrade
+ Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
+ Origin: http://example.com
+ Sec-WebSocket-Protocol: chat, superchat
+ Sec-WebSocket-Version: 13
+```
+
+The server responds with a `101 Switching Protocols` status code to indicate that it's switching to the WebSocket protocol as requested by the client. The response includes the `Connection: Upgrade` and `Upgrade: websocket` headers, confirming the protocol switch. The `Sec-WebSocket-Accept` header is returned to validate that the connection was successfully upgraded.
+
+The handshake from the server looks as follows:
+
+```
+ HTTP/1.1 101 Switching Protocols
+ Upgrade: websocket
+ Connection: Upgrade
+ Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
+ Sec-WebSocket-Protocol: chat
+```
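+
+For reference, the `Sec-WebSocket-Accept` value is derived from the client's `Sec-WebSocket-Key`: the server appends the GUID defined in RFC 6455, hashes the result with SHA-1, and Base64-encodes the digest. The following Node.js sketch, which uses only the built-in `crypto` module and an illustrative helper name, reproduces the value shown in the sample handshake:
+
+```javascript
+const crypto = require("crypto");
+
+// GUID defined in RFC 6455 for deriving Sec-WebSocket-Accept.
+const WEBSOCKET_GUID = "258EAFA5-E914-47DA-95CA-C5AB5DC85B11";
+
+// Illustrative helper: compute the accept value for a given Sec-WebSocket-Key.
+function computeAcceptValue(secWebSocketKey) {
+  return crypto
+    .createHash("sha1")
+    .update(secWebSocketKey + WEBSOCKET_GUID)
+    .digest("base64");
+}
+
+console.log(computeAcceptValue("dGhlIHNhbXBsZSBub25jZQ=="));
+// Prints: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
+```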
+
+After the client receives the server response, the WebSocket connection is open to start transmitting data. If the WebSocket connection gets disconnected by the client or server, or by a network disruption, the client application is expected to reinitiate the connection with the server.
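+
+As an example of the retry logic recommended earlier, the following Node.js sketch reconnects with a capped exponential backoff whenever the connection closes, for instance after the idle timeout or the maximum connection lifetime. It's a minimal sketch that assumes the widely used `ws` npm package, and the endpoint URL is a placeholder:
+
+```javascript
+const WebSocket = require("ws");
+
+// Placeholder endpoint; replace with your own Azure Front Door endpoint.
+const ENDPOINT = "wss://contoso-endpoint.example.com/chat";
+
+function connect(attempt = 0) {
+  const ws = new WebSocket(ENDPOINT);
+
+  ws.on("open", () => {
+    attempt = 0; // reset the backoff after a successful connection
+    ws.send("hello");
+  });
+
+  ws.on("message", (data) => console.log("received:", data.toString()));
+
+  ws.on("error", (err) => console.error("websocket error:", err.message));
+
+  // Reconnect with capped exponential backoff when the connection drops.
+  ws.on("close", () => {
+    const delay = Math.min(30000, 1000 * 2 ** attempt);
+    setTimeout(() => connect(attempt + 1), delay);
+  });
+}
+
+connect();
+```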
+
+## Next steps
+
+- Learn how to [create an Azure Front Door](../create-front-door-portal.md) profile.
+- Learn how Azure Front Door [routes traffic](../front-door-routing-architecture.md) to your origin.
iot-operations Howto Enable Secure Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-enable-secure-settings.md
Title: Enable secure settings
-description: Enable secure settings on your Azure IoT Operations Preview deployment by configuring an Azure Key Vault and enabling workload identities.
+description: Enable secure settings on your Azure IoT Operations Preview deployment by configuring an Azure key vault and enabling workload identities.
Last updated 11/04/2024
-#CustomerIntent: I deployed Azure IoT Operations with test settings for the quickstart scenario, now I want to enable secure settings to use the full feature set.
+#CustomerIntent: I deployed Azure IoT Operations with test settings for the quickstart scenario, and now I want to enable secure settings to use the full feature set.
-# Enable secure settings in Azure IoT Operations Preview deployment
+# Enable secure settings in an Azure IoT Operations Preview deployment
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-The secure settings for Azure IoT Operations include the setup of Secrets Management and user-assigned managed identity for cloud connections, for example, an OPC UA server, or dataflow endpoints.
+The secure settings for Azure IoT Operations include the setup of secrets management and a user-assigned managed identity for cloud connections; for example, an OPC UA server or dataflow endpoints.
This article provides instructions for enabling secure settings if you didn't do so during your initial deployment. ## Prerequisites
-* An Azure IoT Operations instance deployed with test settings. For example, if you followed the instructions in [Quickstart: Run Azure IoT Operations in Codespaces](../get-started-end-to-end-sample/quickstart-deploy.md).
+* An Azure IoT Operations instance deployed with test settings. For example, follow the instructions in [Quickstart: Run Azure IoT Operations Preview in GitHub Codespaces](../get-started-end-to-end-sample/quickstart-deploy.md).
-* Azure CLI installed on your development machine. This scenario requires Azure CLI version 2.64.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+* Azure CLI installed on your development machine. This scenario requires Azure CLI version 2.64.0 or later. Use `az --version` to check your version and `az upgrade` to update, if necessary. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
-* The latest versions of the following extensions for Azure CLI:
+* The latest versions of the following extensions for the Azure CLI:
```azurecli az extension add --upgrade --name azure-iot-ops az extension add --upgrade --name connectedk8s ```
-## Configure cluster for workload identity
+## Configure a cluster for a workload identity
-A workload identity is an identity you assign to a software workload (such as an application, service, script, or container) to authenticate and access other services and resources. The workload identity feature needs to be enabled on your cluster, so that the [Azure Key Vault Secret Store extension for Kubernetes](/azure/azure-arc/kubernetes/secret-store-extension) and Azure IoT Operations can access Microsoft Entra ID protected resources. To learn more, see [What are workload identities?](/entra/workload-id/workload-identities-overview).
+A *workload identity* is an identity that you assign to a software workload (such as an application, service, script, or container) to authenticate and access other services and resources. The workload identity feature needs to be enabled on your cluster, so that the [Azure Key Vault Secret Store extension for Kubernetes](/azure/azure-arc/kubernetes/secret-store-extension) and Azure IoT Operations can access Microsoft Entra ID protected resources. To learn more, see [What are workload identities?](/entra/workload-id/workload-identities-overview).
> [!NOTE]
-> This step only applies to Ubuntu + K3s clusters. The quickstart script for Azure Kubernetes Service (AKS) Edge Essentials used in [Prepare your Azure Arc-enabled Kubernetes cluster](../deploy-iot-ops/howto-prepare-cluster.md) enables workload identity by default. If you have an AKS Edge Essentials cluster, continue to the next section.
+> This step applies only to Ubuntu + K3s clusters. The quickstart script for Azure Kubernetes Service (AKS) Edge Essentials used in [Prepare your Azure Arc-enabled Kubernetes cluster](../deploy-iot-ops/howto-prepare-cluster.md) enables a workload identity by default. If you have an AKS Edge Essentials cluster, continue to the next section.
-If you aren't sure whether your K3s cluster already has workload identity enabled or not, run the [az connectedk8s show](/cli/azure/connectedk8s#az-connectedk8s-show) command to check:
+If you aren't sure whether or not your K3s cluster already has workload identity enabled, run the [az connectedk8s show](/cli/azure/connectedk8s#az-connectedk8s-show) command to check:
```azurecli az connectedk8s show --name <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --query "{oidcIssuerEnabled:oidcIssuerProfile.enabled, workloadIdentityEnabled: securityProfile.workloadIdentity.enabled}" ```
-If not already set up, use the following steps to enable workload identity on an existing connected K3s cluster:
+To enable a workload identity on an existing connected K3s cluster:
-1. Use the [az connectedk8s update](/cli/azure/connectedk8s#az-connectedk8s-update) command to enable the workload identity feature on the cluster.
+1. Use the [az connectedk8s update](/cli/azure/connectedk8s#az-connectedk8s-update) command to enable the workload identity feature on the cluster:
```azurecli #!/bin/bash
If not already set up, use the following steps to enable workload identity on an
RESOURCE_GROUP="<RESOURCE_GROUP>" CLUSTER_NAME="<CLUSTER_NAME>"
- # Enable workload identity
+ # Enable a workload identity
az connectedk8s update --resource-group $RESOURCE_GROUP \ --name $CLUSTER_NAME \ --enable-oidc-issuer --enable-workload-identity ```
-1. Use the [az connectedk8s show](/cli/azure/connectedk8s#az-connectedk8s-show) command to get the cluster's issuer url. Take a note to add it later in K3s config file.
+1. Use the [az connectedk8s show](/cli/azure/connectedk8s#az-connectedk8s-show) command to get the cluster's issuer URL. You'll add the URL later in the K3s configuration file.
```azurecli #!/bin/bash
If not already set up, use the following steps to enable workload identity on an
RESOURCE_GROUP="<RESOURCE_GROUP>" CLUSTER_NAME="<CLUSTER_NAME>"
- # Get the cluster's issuer url
+ # Get the cluster's issuer URL
SERVICE_ACCOUNT_ISSUER=$(az connectedk8s show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --query oidcIssuerProfile.issuerUrl --output tsv) echo "SERVICE_ACCOUNT_ISSUER = $SERVICE_ACCOUNT_ISSUER" ```
-1. Create a K3s config file.
+1. Create a K3s configuration file:
```bash sudo nano /etc/rancher/k3s/config.yaml
If not already set up, use the following steps to enable workload identity on an
- service-account-max-token-expiration=24h ```
-1. Save and exit the file editor.
+1. Save and close the file editor.
-1. Restart k3s.
+1. Restart k3s:
```bash systemctl restart k3s ```
-## Set up Secrets Management
+## Set up secrets management
-Secrets Management for Azure IoT Operations uses Secret Store extension to sync the secrets from an Azure Key Vault and store them on the edge as Kubernetes secrets. Secret Store extension requires a user assigned managed identity with access to the Azure Key Vault where secrets are stored. To learn more, see [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview).
+Secrets management for Azure IoT Operations uses the Secret Store extension to sync the secrets from an Azure key vault and store them on the edge as Kubernetes secrets. The Secret Store extension requires a user-assigned managed identity with access to the Azure key vault where secrets are stored. To learn more, see [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview).
-Follow these steps to set up Secrets Management:
+To set up secrets management:
-1. [Create an Azure Key Vault](/azure/key-vault/secrets/quick-create-cli#create-a-key-vault) that is used to store secrets, and [give your user account permissions to manage secrets](/azure/key-vault/secrets/quick-create-cli#give-your-user-account-permissions-to-manage-secrets-in-key-vault) with the `Key Vaults Secrets Officer` role.
-1. [Create a user-assigned managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) for Secret Store extension.
+1. [Create an Azure key vault](/azure/key-vault/secrets/quick-create-cli#create-a-key-vault) that's used to store secrets, and [give your user account permissions to manage secrets](/azure/key-vault/secrets/quick-create-cli#give-your-user-account-permissions-to-manage-secrets-in-key-vault) with the `Key Vault Secrets Officer` role.
+1. [Create a user-assigned managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) for the Secret Store extension.
1. Use the [az iot ops secretsync enable](/cli/azure/iot/ops/secretsync#az-iot-ops-secretsync-enable) command to set up the Azure IoT Operations instance for secret synchronization. This command:
- - Creates a federated identity credential using the user-assigned managed identity.
- - Adds a role assignment to the user-assigned managed identity for access to the Azure Key Vault.
+ - Creates a federated identity credential by using the user-assigned managed identity.
+ - Adds a role assignment to the user-assigned managed identity for access to the Azure key vault.
- Adds a minimum secret provider class associated with the Azure IoT Operations instance. # [Bash](#tab/bash)
-
+ ```azurecli # Variable block INSTANCE_NAME="<INSTANCE_NAME>"
Follow these steps to set up Secrets Management:
--mi-user-assigned $USER_ASSIGNED_MI_RESOURCE_ID \ --kv-resource-id $KEYVAULT_RESOURCE_ID ```
-
+ # [PowerShell](#tab/powershell)
-
+ ```azurecli # Variable block INSTANCE_NAME="<INSTANCE_NAME>"
Follow these steps to set up Secrets Management:
--mi-user-assigned $USER_ASSIGNED_MI_RESOURCE_ID ` --kv-resource-id $KEYVAULT_RESOURCE_ID ```
-
+
-Now that secret synchronization setup is complete, you can refer to [Manage Secrets](./howto-manage-secrets.md) to learn how to use secrets with Azure IoT Operations.
+Now that secret synchronization setup is complete, you can refer to [Manage secrets for your Azure IoT Operations Preview deployment](./howto-manage-secrets.md) to learn how to use secrets with Azure IoT Operations.
-## Set up user-assigned managed identity for cloud connections
+## Set up a user-assigned managed identity for cloud connections
-Some Azure IoT Operations components like dataflow endpoints use user-assigned managed identity for cloud connections. It's recommended to use a separate identity from the one used to set up Secrets Management.
+Some Azure IoT Operations components, like dataflow endpoints, use a user-assigned managed identity for cloud connections. We recommend that you use a separate identity from the one that you used to set up secrets management.
-1. [Create a user-assigned managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) which is used for cloud connections.
+1. [Create a user-assigned managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) that's used for cloud connections.
> [!NOTE]
- > You will need to grant the identity permission to whichever cloud resource this will be used for.
+ > You'll need to grant the identity permission to whichever cloud resource you'll use the managed identity for.
-1. Use the [az iot ops identity assign](/cli/azure/iot/ops) command to assign the identity to the Azure IoT Operations instance. This command also creates a federated identity credential using the OIDC issuer of the indicated connected cluster and the Azure IoT Operations service account.
+1. Use the [az iot ops identity assign](/cli/azure/iot/ops) command to assign the identity to the Azure IoT Operations instance. This command also creates a federated identity credential by using the OIDC issuer of the indicated connected cluster and the Azure IoT Operations service account.
# [Bash](#tab/bash)
-
+ ```azurecli # Variable block INSTANCE_NAME="<INSTANCE_NAME>"
Some Azure IoT Operations components like dataflow endpoints use user-assigned m
--resource-group $RESOURCE_GROUP \ --mi-user-assigned $USER_ASSIGNED_MI_RESOURCE_ID ```
-
+ # [PowerShell](#tab/powershell)
-
+ ```azurecli # Variable block $INSTANCE_NAME="<INSTANCE_NAME>"
Some Azure IoT Operations components like dataflow endpoints use user-assigned m
$USER_ASSIGNED_MI_RESOURCE_ID=$(az identity show --name $USER_ASSIGNED_MI_NAME --resource-group $RESOURCE_GROUP --query id --output tsv)
- #Assign the identity to the Azure IoT Operations instance
+ # Assign the identity to the Azure IoT Operations instance
az iot ops identity assign --name $INSTANCE_NAME ` --resource-group $RESOURCE_GROUP ` --mi-user-assigned $USER_ASSIGNED_MI_RESOURCE_ID ```
-
+
-Now, you can use this managed identity in dataflow endpoints for cloud connections.
+Now you can use this managed identity in dataflow endpoints for cloud connections.
migrate Common Questions Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-business-case.md
ms. Previously updated : 04/22/2024 Last updated : 11/08/2024
To verify in an existing project:
Germany West Central and Sweden Central
-### How do I add facilities costs to my business case?
-
-1. Go to your business case and select **Edit assumptions** and choose **On-premises cost assumptions**.
-1. Select **Facilities** tab.
-1. Specify estimated annual lease/colocation/power costs that you want to include as facilities costs in the calculations.
-
-If you aren't aware of your facilities costs, use the following methodology.
-
-#### Step-by-step guide to calculate facilities costs
- The facilities cost calculation in Azure Migrate is based on the Cloud Economics methodology, tailored specifically for your on-premises datacenter. This methodology is based on a colocation model, which prescribes an average cost value per kWh, which includes space, power and lease costs, which usually comprise facilities costs for a datacenter. ΓÇ»
-1. **Determine the current energy consumption (in kWh) for your workloads**: Energy consumption by current workloads = Energy consumption for compute resources + Energy consumption for storage resources.
- 1. **Energy consumption for compute resources**:ΓÇ»
- 1. **Determine the total number of physical cores in your on-premises infrastructure**: In case you don't have the number of physical cores, you can use the formula - Total number of physical cores = Total number of virtual cores/2.
- 1. **Input the number of physical cores into the given formula**: Energy consumption for compute resources (kWh) = Total number of physical cores * On-Prem Thermal Design Power or TDP (kWh per core) * Integration of Load factor * On-premises Power Utilization Efficiency or PUE.
- 1. If you aren't aware of the values of TDP, Integration of Load factor and On-premises PUE for your datacenter, you can use the following assumptions for your calculations:
- 1. On-Prem TDP (kWh per core) = **0.009**
- 1. Integration of Load factor = **2.00**
- 1. On-Prem PUE = **1.80**
- 1. **Energy consumption for storage resources**:
- 1. **Determine the total storage in use for your on-premises infrastructure in Terabytes (TB)**.
- 1. **Input the storage in TB into the given formula**: Energy consumption for storage resources (kWh) = Total storage capacity in TB * On-Prem storage Power Rating (kWh per TB) * Conversion of energy consumption into Peak consumption * Integration of Load factor * On-premises PUE (Power utilization effectiveness).
- 1. If you aren't aware of the values of On-premises storage power rating, conversion factor for energy consumption into peak consumption, and Integration of Load factor and On-premises PUE, you can use the following assumptions for your calculations:
- 1. On-Prem storage power rating (kWh per TB) = **10**
- 1. Conversion of energy consumption into peak consumption = **0.0001**
- 1. Integration of Load factor = **2.00**
- 1. On-Prem PUE = **1.80**
-1. **Determine the unused energy capacity for your on-premises infrastructure**: By default you can assume that **40%** of the datacenter energy capacity remains unused.
-1. **Determine the total energy capacity of the datacenter**: Total energy capacity = Energy consumption by current workloads / (1-unused energy capacity).
-1. **Calculate total facilities costs per year**: Facilities costs per year = Total energy capacity * Average colocation costs ($ per kWh per month) * 12. You can assume the average colocation cost = **$340 per kWh per month**.
-
-**Sample example**
-
-Assume that Contoso, an e-commerce company has 10,000 virtual cores and 5,000 TB of storage. Let's use the formula to calculate facilities cost:
-1. Total number physical cores = **10,000/2** = **5,000**
-1. Energy consumption for compute resources = **5,000 * 0.009 * 2 * 1.8 = 162 kWh**
-1. Energy consumption for storage resources = **5,000 * 10 * 0.0001 * 2 * 1.8 = 18 kWh**
-1. Energy consumption for current workloads = **(162 + 18) kWh = 180 kWh**
-1. Total energy capacity of datacenter = **180/(1-0.4) = 300 kWh**
-1. Yearly facilities cost = **300 kWh * $340 per kWh * 12 = $1,224,000 = $1.224 Mn**
 ### What do the different migration strategies mean? **Migration Strategy** | **Details** | **Assessment insights** | |
migrate Concepts Business Case Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-business-case-calculation.md
Cost components for running on-premises servers. For TCO calculations, an annual
| | Software - Windows Server licensing | License cost | Calculated per two core pack license pricing of Windows Server. | | | Windows Server - Extended Security Update (ESU) | License cost | Calculated for 3 years after the end of support of Windows server license: <br/><br/> ESU (Year 1) – 75% of the license cost <br/><br/> ESU (Year 2) – 100% of the license cost <br/><br/> ESU (Year 3) – 125% of the license cost <br/><br/>| | | | Software Assurance | Calculated per year as in settings. |
-| | Virtualization software for servers running in VMware environment | Virtualization Software (VMware license cost + support) | License cost for vSphere Standard license + Production support for vSphere Standard license. *Not included- other hypervisor software cost* or *Antivirus / Monitoring Agents*.|
+| | Virtualization software for servers running in VMware environment | Virtualization Software (VMware license cost) | License cost based on VMware Cloud Foundation licensing.|
| Storage | Storage Hardware | | The total storage hardware acquisition cost is calculated by multiplying the Total volume of storage attached to per GB cost. Default is USD 2 per GB per month. | | | Storage Maintenance | | Default is 10% of storage hardware acquisition cost. | | Network | Network Hardware and software | Network equipment (Cabinets, switches, routers, load balancers etc.) and software | As an industry standard and used by sellers in Business cases, it's a % of compute and storage cost. Default is 10% of storage and compute cost. | | | Maintenance | Maintenance | Defaulted to 15% of network hardware and software cost. | | Security | General Servers | Server security cost | Default is USD 250 per year per server. This is multiplied with number of servers (General servers)| | | SQL Servers | SQL protection cost | Default is USD 1000 per year per server. This is multiplied with number of servers running SQL |
-| Facilities | Facilities & Infrastructure | DC Facilities - Lease and Power | Facilities cost isn't applicable for Azure cost. |
+| Facilities | Facilities & Infrastructure | DC Facilities - Lease and Power | The Facilities cost is based on a colocation model, which includes space, power, and lease costs per kWh.<br> Annual facilities cost = Total energy capacity * Average colocation costs * 12. (Assume 40% of datacenter energy capacity remains unused.) <br> Total energy capacity = Energy consumption by current workloads / (1 - unused energy capacity). <br>To determine energy consumption for your workloads: <br>- Compute resources: Total physical cores * On-Prem TDP (0.009 kWh per core) * Load factor (2.00) * On-Prem PUE (1.80).<br> - Storage resources: Total storage in TB * On-Prem storage power rating (10 kWh per TB) * Conversion factor (0.0001) * Load factor (2.00) * On-Prem PUE (1.80). |
| Labor | Labor | IT admin | DC admin cost = ((Number of virtual machines) / (Avg. # of virtual machines that can be managed by a full-time administrator)) * 730 * 12 | | Management | Management Software licensing | System center Management software | Used for cost of the System center management software that includes monitoring, hardware and virtual machine provisioning, automation, backup and configuration management capabilities. Cost of Microsoft system center management software is added when the system center agents are identified on any of the discovered resources. This is applicable only for windows servers and SQL servers related scenarios and includes Software assurance. |
-| | | VMware Vcenter Management software | This is the cost associated with VMware management software i.e. Management software cost for vSphere Standard + production support cost of management software. Not included- other hypervisor software cost or Antivirus/Monitoring Agents. |
| | | Other Management software | This is the cost of the management software for Partner management products. | | | Management cost other than software | Monitoring cost | Specify costs other than monitoring software. Default is USD 430 per year per server. This is multiplied with the number of servers. The default used is the cost associated with a monitoring administrator. | | | | Patch Management cost | Specify costs other than patch management software. Default is USD 430 per year per server. This is multiplied with the number of servers. Default is the cost associated with a patch management administrator. |
Cost components for running on-premises servers. For TCO calculations, an annual
#### Azure cost | **Cost heads** | **Category** | **Component** | **Logic** |
- | | | |
+| | | | |
| Compute | Compute (IaaS) | Azure VM, SQL Server on Azure VM | Compute cost (with AHUB) from Azure VM assessment, Compute cost (with AHUB) from Azure SQL assessment | | | Compute (PaaS) | Azure SQL MI or Azure SQL DB | Compute cost (with AHUB) from Azure SQL assessment. | | | Compute(PaaS) | Azure App Service or Azure Kubernetes Service | Plan cost from Azure App Service and/or Node pool cost from Azure Kubernetes Service. |
Cost components for running on-premises servers. For TCO calculations, an annual
| Storage  | Storage Hardware | | Estimated as a sum of total storage hardware acquisition cost + software maintenance cost. <br> Total storage hardware acquisition cost = Total volume of storage attached to VMs (across all machines) * Cost per GB per month * 12. Cost per GB can be customized in the assumptions similar to the current On-premises storage cost. | | Network | Network Hardware and software  | Network equipment (Cabinets, switches, routers, load balancers etc.) and software  | Estimated as a sum of total network hardware and software cost + network maintenance cost  Total network hardware and software cost is defaulted to 10%* (compute and licensing +storage cost) and can be customized in the assumptions. Network maintenance cost is defaulted to 15%*(Total network hardware and software cost) and can be customized in the assumptions Same as current On-premises networking cost. | | Security | General Servers  | Server security cost | Estimated as sum of total protection cost for general servers and SQL workloads using MDC via Azure Arc. MDC Servers plan 2 is assumed for servers. Microsoft Defender for SQL on Azure-connected databases is assumed for SQL Server |
-| Facilities | Facilities & Infrastructure | DC Facilities - Lease and Power | Based on user input. Same as current On-premises facilities cost. |
+| Facilities | Facilities & Infrastructure | DC Facilities - Lease and Power | The facilities cost is based on a colocation model, which includes space, power, and lease costs per kWh.<br> Annual facilities cost = Total energy capacity * Average colocation costs * 12. (Assume 40% of datacenter energy capacity remains unused.) <br> Total energy capacity = Energy consumption by current workloads / (1 - unused energy capacity). <br>To determine energy consumption for your workloads: <br>- Compute resources: Total physical cores * On-Prem TDP (0.009 kWh per core) * Load factor (2.00) * On-Prem PUE (1.80).<br>- Storage resources: Total storage in TB * On-Prem storage power rating (10 kWh per TB) * Conversion factor (0.0001) * Load factor (2.00) * On-Prem PUE (1.80). |
| Labor | Labor  | IT admin | Same as current On-premises labor cost.| | Management | Management Software licensing | System center or other management software | Estimated as sum of total management cost for general servers. This includes monitoring and patching. Patching is assumed to be free via Azure Update Manager as it is included in MDC Servers plan 2. Monitoring cost is calculated per day based on log storage and alerts and multiplied*365 Estimated as 70% of on-premises management labor cost by default as it is assumed that 30% of labor effects could be redirected to other high impact projects for the company due to productivity improvements.  Labor costs can be customized in Azure Arc setting under Azure cost assumptions.|
Cost components for running on-premises servers. For TCO calculations, an annual
**CAPEX & OPEX** | Component | Sub component | Assumptions | Azure retained |
- | | | |
+| | | | |
| **Capital Asset Expense (CAPEX) (A)** | | | | | Server Depreciation | (Total server hardware acquisition cost)/(Depreciable life) | Depreciable life = 4 years | | | Storage Depreciation | (Total storage hardware acquisition cost)/(Depreciable life) | Depreciable life = 4 years | |
migrate How To View A Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-view-a-business-case.md
There are four major reports that you need to review:
This card covers your potential total cost of ownership savings based on the chosen migration strategy. It includes one year savings from compute, storage, network, labor, and facilities cost (based on assumptions) to help you envision how Azure benefits can turn into cost savings. You can see the insights of different cost categories in the **On-premises vs Azure** report. ### Estimated on-premises cost
-It covers the cost of running all the servers scoped in the business case using some of the industry benchmarks. It doesn't cover Facilities (lease/colocation/power) cost by default, but you can edit it in the on-premises cost assumptions section. It includes one time cost for some of the capital expenditures like hardware acquisition etc., and annual cost for other components that you might pay as operating expenses like maintenance etc.
+It covers the cost of running all the servers scoped in the business case using some of the industry benchmarks. It includes a one-time cost for some of the capital expenditures, like hardware acquisition, and an annual cost for other components that you might pay as operating expenses, like maintenance.
### Estimated Azure cost It covers the cost of all servers and workloads that have been identified as ready for migration/modernization as per the recommendation. Refer to the respective [Azure IaaS](how-to-view-a-business-case.md#azure-iaas-report) and [Azure PaaS](how-to-view-a-business-case.md#azure-paas-report) report for details. The Azure cost is calculated based on the right sized Azure configuration, ideal migration target, and most suitable pricing offers for your workloads. You can override the migration strategy, target location, or other settings in the 'Azure cost' assumptions to see how your savings could change by migrating to Azure.
migrate Migrate Servers To Azure Using Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-servers-to-azure-using-private-link.md
With discovery completed, you can begin replication of Hyper-V VMs to Azure.
> You can update replication settings any time before replication starts, **Manage** > **Replicating machines**. Settings can't be changed after replication starts. Next, follow the instructions to [perform migrations](tutorial-migrate-hyper-v.md#migrate-vms).
-]
+ ### Grant access permissions to the Recovery Services vault You must grant the permissions to the Recovery Services vault for authenticated access to the cache/replication storage account.
migrate Prepare For Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/prepare-for-agentless-migration.md
This article provides an overview of the changes performed when you [migrate VMware VMs to Azure via the agentless migration](./tutorial-migrate-vmware.md) method using the Migration and modernization tool.
-Before you migrate your on-premises VM to Azure, you may require a few changes to make the VM ready for Azure. These changes are important to ensure that the migrated VM can boot successfully in Azure and connectivity to the Azure VM can be established-.
+Before you migrate your on-premises VM to Azure, you may require a few changes to make the VM ready for Azure. These changes are important to ensure that the migrated VM can boot successfully in Azure and connectivity to the Azure VM can be established.
Azure Migrate automatically handles these configuration changes for the following operating system versions for both Linux and Windows. This process is called *Hydration*. **Operating system versions supported for hydration**
network-watcher Vnet Flow Logs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-powershell.md
$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup # Create a VNet flow log.
-New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id
+New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id -FormatVersion 2
``` ## Enable virtual network flow logs and traffic analytics
$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName
$workspace = New-AzOperationalInsightsWorkspace -Name myWorkspace -ResourceGroupName myResourceGroup -Location EastUS # Create a VNet flow log.
-New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id -EnableTrafficAnalytics -TrafficAnalyticsWorkspaceId $workspace.ResourceId -TrafficAnalyticsInterval 10
+New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id -FormatVersion 2 -EnableTrafficAnalytics -TrafficAnalyticsWorkspaceId $workspace.ResourceId -TrafficAnalyticsInterval 10
``` ## List all flow logs in a region
reliability Migrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-sql-database.md
To disable zone-redundancy for a single database or an elastic pool, you can use
# [PowerShell](#tab/powershell) ```powershell
-set-azsqlDatabase -ResourceGroupName "<Resource-Group-Name>" -DatabaseName "<Database-Name>" -ServerName "<Server-Name>" -ZoneRedundant:$false
+Set-AzSqlDatabase -ResourceGroupName "<Resource-Group-Name>" -DatabaseName "<Database-Name>" -ServerName "<Server-Name>" -ZoneRedundant:$false
``` # [CLI](#tab/cli) ```azurecli
-az sql db update --resource-group "RSETLEM-AzureSQLDB" --server "rs-az-testserver1" --name "TestDB1" --zone-redundant false
+az sql db update --resource-group "<Resource-Group-Name>" --server "<Server-Name>" --name "<Database-Name>" --zone-redundant false
``` # [ARM](#tab/arm)
Set-AzSqlElasticpool -ResourceGroupName "<Resource-Group-Name>" -ServerName "<S
# [CLI](#tab/cli) ```azurecli
-az sql elastic-pool update --resource-group "RSETLEM-AzureSQLDB" --server "rs-az-testserver1" --name "testep10" --zone-redundant false
+az sql elastic-pool update --resource-group "<Resource-Group-Name>" --server "<Server-Name>" --name "<Elastic-Pool-Name>" --zone-redundant false
```
site-recovery Concepts Multiple Ip Address Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-multiple-ip-address-failover.md
description: This article describes how to configure the failover of secondary I
Previously updated : 04/29/2024 Last updated : 11/11/2024
To configure secondary IP address failover, follow these steps:
2. Under **Secondary IP Configuration**, select **Edit** to modify it.
- :::image type="content" source="./media/concepts-multiple-ip-address-failover/network-edit.png" alt-text="Screenshot of Network Tab Edit Mode." lightbox="./media/concepts-multiple-ip-address-failover/network-edit-expanded.png":::
+ :::image type="content" source="./media/concepts-multiple-ip-address-failover/network-edit.png" alt-text="Screenshot of Network Tab Edit Mode." :::
3. Select **+ IP configurations**. You have two options: you can either add all IP configurations, or select and add individual IP configurations.
site-recovery Disaster Recovery For Edge Zone Via Vm Flow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/disaster-recovery-for-edge-zone-via-vm-flow-tutorial.md
description: Learn how to set up disaster recovery for Virtual machines on Azure
Previously updated : 04/19/2023 Last updated : 11/11/2024
To enable replication to a secondary location, follow the below steps:
1. On the Azure portal, select **Virtual machines** and select a VM to replicate. 1. On the left pane, under **Operations**, select **Disaster recovery**.
- :::image type="content" source="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/disaster-recovery.png" alt-text=" Screenshot of Select Disaster Recovery option."lightbox="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/disaster-recovery-expanded.png":::
+ :::image type="content" source="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/disaster-recovery.png" alt-text="Screenshot of Select Disaster Recovery option." lightbox="./media/disaster-recovery-for-edge-zone-vm-flow-tutorial/disaster-recovery.png":::
1. In **Basics**, select the **Target region** or an Azure Public MEC. - Option 1: **Public MEC to Region**
site-recovery Tutorial Prepare Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/tutorial-prepare-azure.md
On-premises machines are replicated to Azure managed disks. When failover occurs
1. After deleting the pre-existing address range, select **Add an IP address space**. :::image type="Protection state" source="media/tutorial-prepare-azure/add-ip-address-space.png" alt-text="Screenshot of the adding IP.":::
- 1. In **Starting address** enter **10.0.0.**
+ 1. In **Starting address** enter **10.0.0.0**
1. Under **Address space size**, select **/24 (256 addresses)**. 1. Select **Add**.
storage Elastic San Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-scale-targets.md
The following regions are regions with higher base storage capacity available, a
##### Lower available base storage capacity
-The following regions are regions with higher base storage capacity available, and the table following the regions outlines their scale targets: East Asia, Korea Central, South Africa North, France Central, Southeast Asia, West US 3, Sweden Central, Switzerland North.
+The following regions are regions with lower base storage capacity available, and the table following the regions outlines their scale targets: East Asia, Korea Central, South Africa North, France Central, Southeast Asia, West US 3, Sweden Central, Switzerland North.
|Resource |Values |
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
The improvement applies to the following scenarios, when the server endpoint loc
This improvement will be gradually enabled in all regions within the next month. Once the improvement is enabled in your region, you will see a Provisioning steps tab in the portal after server endpoint creation which allows you to easily determine when the server endpoint is ready for use. For more information, see [Create an Azure File Sync server endpoint](file-sync-server-endpoint-create.md#provisioning-steps) documentation.
-**Preview: Managed Identity support for Azure File Sync service and servers**
+**Preview: Managed Identities support for Azure File Sync service and servers**
Azure File Sync support for managed identities eliminates the need for shared keys as a method of authentication by utilizing a system-assigned managed identity provided by Microsoft Entra ID. When you enable this configuration, the system-assigned managed identities will be used for the following scenarios:
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
Azure Files and Azure File Sync are updated regularly to offer new features and
The Azure File Sync v19 release improves performance, security, and adds support for Windows Server 2025: - Faster server provisioning and improved disaster recovery for Azure File Sync server endpoints - Sync performance improvements-- Preview: Managed Identity support for Azure File Sync service and servers
+- Preview: Managed Identities support for Azure File Sync service and servers
- Azure File Sync agent support for Windows Server 2025 To learn more, see the [Azure File Sync release notes](../file-sync/file-sync-release-notes.md#version-19100).
synapse-analytics Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/concepts-data-flow-overview.md
-+ Last updated 12/16/2020
-# Data flows in Azure Synapse Analytics
-
-## What are data flows?
+# What are data flows in Azure Synapse Analytics?
Data flows are visually designed data transformations in Azure Synapse Analytics. Data flows allow data engineers to develop data transformation logic without writing code. The resulting data flows are executed as activities within Azure Synapse Analytics pipelines that use scaled-out Apache Spark clusters. Data flow activities can be operationalized using existing Azure Synapse Analytics scheduling, control, flow, and monitoring capabilities. Data flows provide an entirely visual experience with no coding required. Your data flows run on Synapse-managed execution clusters for scaled-out data processing. Azure Synapse Analytics handles all the code translation, path optimization, and execution of your data flow jobs.
-## Getting started
+## Get started
Data flows are created from the Develop pane in Synapse studio. To create a data flow, select the plus sign next to **Develop**, and then select **Data Flow**.
This action takes you to the data flow canvas, where you can create your transfo
## Authoring data flows
-Data flow has a unique authoring canvas designed to make building transformation logic easy. The data flow canvas is separated into three parts: the top bar, the graph, and the configuration panel.
+Data flow has a unique authoring canvas designed to make building transformation logic easy. The data flow canvas is separated into three parts: the top bar, the graph, and the configuration panel.
![Screenshot shows the data flow canvas with top bar, graph, and configuration panel labeled.](media/data-flow/canvas-1.png)
Data flow integrates with existing Azure Synapse Analytics monitoring capabiliti
The Azure Synapse Analytics team has created a [performance tuning guide](../data-factory/concepts-data-flow-performance.md?context=/azure/synapse-analytics/context/context) to help you optimize the execution time of your data flows after building your business logic.
-## Next steps
+## Related content
* Learn how to create a [source transformation](../data-factory/data-flow-source.md?context=/azure/synapse-analytics/context/context). * Learn how to build your data flows in [debug mode](../data-factory/concepts-data-flow-debug-mode.md?context=/azure/synapse-analytics/context/context).
synapse-analytics Concepts Data Factory Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-integration/concepts-data-factory-differences.md
description: Learn how the data integration capabilities of Azure Synapse Analyt
-+ Last updated 02/15/2022
In Azure Synapse Analytics, the data integration capabilities such as Synapse pipelines and data flows are based upon those of Azure Data Factory. For more information, see [what is Azure Data Factory](../../data-factory/introduction.md). -
-## Available features in ADF & Azure Synapse Analytics
- Check below table for features availability: | Category | Feature | Azure Data Factory | Azure Synapse Analytics |
Check below table for features availability:
| **GIT Repository Integration** | GIT Integration | Γ£ô | Γ£ô | | **Monitoring** | Monitoring of Spark Jobs for Data Flow | Γ£ù | Γ£ô *Leverage the Synapse Spark pools* |
-## Next steps
- Get started with data integration in your Synapse workspace by learning how to [ingest data into an Azure Data Lake Storage gen2 account](data-integration-data-lake.md).
synapse-analytics Concepts Database Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/concepts-database-templates.md
Title: Azure Synapse database templates concepts
-description: Learn more about the database templates within Azure Synapse
+description: Learn how you can standardize data in your lake with the database templates within Azure Synapse.
-+ Last updated 11/02/2021
Azure Synapse Analytics provides industry specific database templates to help st
## Enterprise templates
-Enterprise database templates contain a subset of tables that are most likely to be of interest to an organization within a specific industry. It provides a high-level overview and describes the connectivity between the related business areas. These templates serve as an accelerator for many types of large projects. For example, the retail template has one enterprise template called "Retail".
+Enterprise database templates contain a subset of tables that are most likely to be of interest to an organization within a specific industry. It provides a high-level overview and describes the connectivity between the related business areas. These templates serve as an accelerator for many types of large projects. For example, the retail template has one enterprise template called "Retail".
![Enterprise template example](./media/concepts-database-templates/enterprise-template-example.png)
Relationships are associations or interactions between any two tables. For examp
Lake database allows for the underlying data to be partitioned for a table for better performance. You can set partition configuration in the storage settings of a table in database editor.
-## Next steps
+## Related content
Continue to explore the capabilities of the database designer using the links below. - [Quick start](quick-start-create-lake-database.md)
synapse-analytics Concepts Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/concepts-lake-database.md
-+ Last updated 11/02/2021 - # Lake database
-The lake database in Azure Synapse Analytics enables customers to bring together database design, meta information about the data that is stored and a possibility to describe how and where the data should be stored. Lake database addresses the challenge of today's data lakes where it is hard to understand how data is structured.
+The lake database in Azure Synapse Analytics enables customers to bring together database design, meta information about the data that is stored and a possibility to describe how and where the data should be stored. Lake database addresses the challenge of today's data lakes where it's hard to understand how data is structured.
![Lake database overview](./media/concepts-lake-database/lake-database-overview.png) - ## Database designer
-The new database designer in Synapse Studio gives you the possibility to create a data model for your lake database and add additional information to it. Every Entity and Attribute can be described to provide more information about the model, which not only contains Entities but relationships as well. In particular, the inability to model relationships has been a challenge for the interaction on the data lake. This challenge is now addressed with an integrated designer that provides possibilities that have been available in databases but not on the lake. Also the capability to add descriptions and possible demo values to the model allows people who are interacting with it in the future to have information where they need it to get a better understanding about the data.
+The new database designer in Synapse Studio gives you the possibility to create a data model for your lake database and add additional information to it. Every Entity and Attribute can be described to provide more information about the model, which not only contains Entities but relationships as well. In particular, the inability to model relationships has been a challenge for the interaction on the data lake. This challenge is now addressed with an integrated designer that provides possibilities that have been available in databases but not on the lake. Also the capability to add descriptions and possible demo values to the model allows people who are interacting with it in the future to have information where they need it to get a better understanding about the data.
-> [!NOTE]
+> [!NOTE]
> The maximum size of metadata in a lake database is 10 GB. Attempting to publish or update a model that exceeds 10 GB in size will fail. To resolve this issue, reduce the model size by removing tables and columns. Consider splitting large models into multiple lake databases to avoid this limit.
-## Data storage
+## Data storage
-Lake databases use a data lake on the Azure Storage account to store the data of the database. The data can be stored in Parquet, Delta or CSV format and different settings can be used to optimize the storage. Every lake database uses a linked service to define the location of the root data folder. For every entity, separate folders are created by default within this database folder on the data lake. By default all tables within a lake database use the same format but the formats and location of the data can be changed per entity if that is requested.
+Lake databases use a data lake on the Azure Storage account to store the data of the database. The data can be stored in Parquet, Delta, or CSV format and different settings can be used to optimize the storage. Every lake database uses a linked service to define the location of the root data folder. For every entity, separate folders are created by default within this database folder on the data lake. By default all tables within a lake database use the same format but the formats and location of the data can be changed per entity if that is requested.
-> [!NOTE]
+> [!NOTE]
> Publishing a lake database does not create any of the underlying structures or schemas needed to query the data in Spark or SQL. After publishing, load data into your lake database using [pipelines](../data-integration/data-integration-data-lake.md) to begin querying it. > > Currently, Delta format support for lake databases is not supported in Synapse Studio.
Lake databases use a data lake on the Azure Storage account to store the data of
## Database compute
-The lake database is exposed in Synapse SQL serverless SQL pool and Apache Spark providing users with the capability to decouple storage from compute. The metadata that is associated with the lake database makes it easy for different compute engines to not only provide an integrated experience but also use additional information (for example, relationships) that was not originally supported on the data lake.
+The lake database is exposed in Synapse SQL serverless SQL pool and Apache Spark providing users with the capability to decouple storage from compute. The metadata that is associated with the lake database makes it easy for different compute engines to not only provide an integrated experience but also use additional information (for example, relationships) that wasn't originally supported on the data lake.
-## Next steps
+## Related content
Continue to explore the capabilities of the database designer using the links below. - [Create lake database quick start](quick-start-create-lake-database.md)
synapse-analytics Quick Start Create Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/quick-start-create-lake-database.md
-+ Last updated 08/16/2022 # Quickstart: Create a new lake database leveraging database templates
-This quick start gives you a complete sample scenario on how you can apply database templates to create a lake database, align data to your new model, and use the integrated experience to analyze the data.
+This quick start gives you a complete sample scenario on how you can apply database templates to create a lake database, align data to your new model, and use the integrated experience to analyze the data.
## Prerequisites+ - At least **Synapse User** role permissions are required for exploring a lake database template from Gallery. - **Synapse Administrator**, or **Synapse Contributor** permissions are required on the Azure Synapse workspace for creating a lake database. - **Storage Blob Data Contributor** permissions are required on data lake when using create table **From data lake** option.
Use the new database templates functionality to create a lake database that you
For our scenario we will use the `Retail` database template and select the following entities:
+- **RetailProduct** - A product is anything that can be offered to a market that might satisfy a need by potential customers. That product is the sum of all physical, psychological, symbolic, and service attributes associated with it.
+- **Transaction** - The lowest level of executable work or customer activity.
A transaction consists of one or more discrete events.
+- **TransactionLineItem** - The components of a Transaction broken down by Product and Quantity, one per line item.
+- **Party** - A party is an individual, organization, legal entity, social organization, or business unit of interest to the business.
+- **Customer** - A customer is an individual or legal entity that has or has purchased a product or service.
+- **Channel** - A channel is a means by which products or services are sold and/or distributed.
The easiest way to find entities is by using the search box above the different business areas that contain the tables.
You can use lake database to train your machine learning models and score the da
## Next steps Continue to explore the capabilities of the database designer using the links below.
+- [Database templates concept](concepts-database-templates.md)
+- [Lake database concept](concepts-lake-database.md)
+- [How to create a lake database](create-empty-lake-database.md)
synapse-analytics Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/known-issues.md
Last updated 04/08/2024 -+ # Azure Synapse Analytics known issues
synapse-analytics Quickstart Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-data-flow.md
Title: "Quickstart: Transform data using a mapping data flow"
+ Title: 'Quickstart: Transform data using a mapping data flow'
description: This tutorial provides step-by-step instructions for using Azure Synapse Analytics to transform data with mapping data flow. -+ Last updated 02/15/2022
In this quickstart, you do the following steps:
* **Azure Synapse workspace**: Create a Synapse workspace using the Azure portal following the instructions in [Quickstart: Create a Synapse workspace](quickstart-create-workspace.md). * **Azure storage account**: You use ADLS storage as *source* and *sink* data stores. If you don't have a storage account, see [Create an Azure storage account](../storage/common/storage-account-create.md) for steps to create one.
- The file that we are transforming in this tutorial is MoviesDB.csv, which can be found [here](https://raw.githubusercontent.com/djpmsft/adf-ready-demo/master/moviesDB.csv). To retrieve the file from GitHub, copy the contents to a text editor of your choice to save locally as a .csv file. To upload the file to your storage account, see [Upload blobs with the Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md). The examples will be referencing a container named 'sample-data'.
+ The file that we're transforming in this tutorial is MoviesDB.csv, which can be found [here](https://raw.githubusercontent.com/djpmsft/adf-ready-demo/master/moviesDB.csv). To retrieve the file from GitHub, copy the contents to a text editor of your choice to save locally as a .csv file. To upload the file to your storage account, see [Upload blobs with the Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md). The examples will be referencing a container named 'sample-data'.
### Navigate to the Synapse Studio
In this quickstart, we use the workspace named "adftest2020" as an example. It w
A pipeline contains the logical flow for an execution of a set of activities. In this section, you'll create a pipeline that contains a Data Flow activity.
-1. Go to the **Integrate** tab. Select on the plus icon next to the pipelines header and select Pipeline.
+1. Go to the **Integrate** tab. Select the plus icon next to the pipelines header and select Pipeline.
![Create a new pipeline](media/doc-common-process/new-pipeline.png)
A pipeline contains the logical flow for an execution of a set of activities. In
1. Under *Move and Transform* in the *Activities* pane, drag **Data flow** onto the pipeline canvas.
-1. In the **Adding data flow** page pop-up, select **Create new data flow** -> **Data flow**. Click **OK** when done.
+1. In the **Adding data flow** page pop-up, select **Create new data flow** -> **Data flow**. Select **OK** when done.
![Create a data flow](media/quickstart-data-flow/new-data-flow.png)
Once you create your Data Flow, you'll be automatically sent to the data flow ca
1. In the data flow canvas, add a source by clicking on the **Add Source** box.
-1. Name your source **MoviesDB**. Click on **New** to create a new source dataset.
+1. Name your source **MoviesDB**. Select **New** to create a new source dataset.
![Create a new source dataset](media/quickstart-data-flow/new-source-dataset.png)
-1. Choose **Azure Data Lake Storage Gen2**. Click Continue.
+1. Choose **Azure Data Lake Storage Gen2**. Select **Continue**.
![Choose Azure Data Lake Storage Gen2](media/quickstart-data-flow/select-source-dataset.png)
-1. Choose **DelimitedText**. Click Continue.
+1. Choose **DelimitedText**. Select **Continue**.
1. Name your dataset **MoviesDB**. In the linked service dropdown, choose **New**.
-1. In the linked service creation screen, name your ADLS Gen2 linked service **ADLSGen2** and specify your authentication method. Then enter your connection credentials. In this quickstart, we're using Account key to connect to our storage account. You can click **Test connection** to verify your credentials were entered correctly. Click **Create** when finished.
+1. In the linked service creation screen, name your ADLS Gen2 linked service **ADLSGen2** and specify your authentication method. Then enter your connection credentials. In this quickstart, we're using Account key to connect to our storage account. You can select **Test connection** to verify your credentials were entered correctly. Select **Create** when finished.
![Create a source linked service](media/quickstart-data-flow/adls-gen2-linked-service.png)
-1. Once you're back at the dataset creation screen, under the **File path** field, enter where your file is located. In this quickstart, the file "MoviesDB.csv" is located in container "sample-data". As the file has headers, check **First row as header**. Select **From connection/store** to import the header schema directly from the file in storage. Click **OK** when done.
+1. Once you're back at the dataset creation screen, under the **File path** field, enter where your file is located. In this quickstart, the file "MoviesDB.csv" is located in container "sample-data". As the file has headers, check **First row as header**. Select **From connection/store** to import the header schema directly from the file in storage. Select **OK** when done.
![Source dataset settings](media/quickstart-data-flow/source-dataset-properties.png)
-1. If your debug cluster has started, go to the **Data Preview** tab of the source transformation and click **Refresh** to get a snapshot of the data. You can use data preview to verify your transformation is configured correctly.
+1. If your debug cluster has started, go to the **Data Preview** tab of the source transformation and select **Refresh** to get a snapshot of the data. You can use data preview to verify your transformation is configured correctly.
![Data preview](media/quickstart-data-flow/data-preview.png)
-1. Next to your source node on the data flow canvas, click on the plus icon to add a new transformation. The first transformation you're adding is a **Filter**.
+1. Next to your source node on the data flow canvas, select the plus icon to add a new transformation. The first transformation you're adding is a **Filter**.
![Add a filter](media/quickstart-data-flow/add-filter.png)
-1. Name your filter transformation **FilterYears**. Click on the expression box next to **Filter on** to open the expression builder. Here you'll specify your filtering condition.
+1. Name your filter transformation **FilterYears**. Select the expression box next to **Filter on** to open the expression builder. Here you'll specify your filtering condition.
1. The data flow expression builder lets you interactively build expressions to use in various transformations. Expressions can include built-in functions, columns from the input schema, and user-defined parameters. For more information on how to build expressions, see [Data Flow expression builder](../data-factory/concepts-data-flow-expression-builder.md?toc=%2fazure%2fsynapse-analytics%2ftoc.json).
Once you create your Data Flow, you'll be automatically sent to the data flow ca
![Specify filtering condition](media/quickstart-data-flow/visual-expression-builder.png)
- If you've a debug cluster active, you can verify your logic by clicking **Refresh** to see expression output compared to the inputs used. There's more than one right answer on how you can accomplish this logic using the data flow expression language.
+ If you have a debug cluster active, you can verify your logic by clicking **Refresh** to see expression output compared to the inputs used. There's more than one right answer on how you can accomplish this logic using the data flow expression language.
- Click **Save and Finish** once you're done with your expression.
+ Select **Save and Finish** once you're done with your expression.
1. Fetch a **Data Preview** to verify the filter is working correctly.
Once you create your Data Flow, you'll be automatically sent to the data flow ca
![Aggregate settings 1](media/quickstart-data-flow/aggregate-settings.png)
-1. Go to the **Aggregates** tab. In the left text box, name the aggregate column **AverageComedyRating**. Click on the right expression box to enter the aggregate expression via the expression builder.
+1. Go to the **Aggregates** tab. In the left text box, name the aggregate column **AverageComedyRating**. Select the right expression box to enter the aggregate expression via the expression builder.
![Aggregate settings 2](media/quickstart-data-flow/aggregate-settings-2.png)
Once you create your Data Flow, you'll be automatically sent to the data flow ca
`avg(toInteger(Rating))`
- Click **Save and Finish** when done.
+ Select **Save and Finish** when done.
![Average comedy rating](media/quickstart-data-flow/average-comedy-rating.png)
Once you create your Data Flow, you'll be automatically sent to the data flow ca
![Add a Sink](media/quickstart-data-flow/add-sink.png)
-1. Name your sink **Sink**. Click **New** to create your sink dataset.
+1. Name your sink **Sink**. Select **New** to create your sink dataset.
-1. Choose **Azure Data Lake Storage Gen2**. Click Continue.
+1. Choose **Azure Data Lake Storage Gen2**. Select **Continue**.
-1. Choose **DelimitedText**. Click Continue.
+1. Choose **DelimitedText**. Select **Continue**.
-1. Name your sink dataset **MoviesSink**. For linked service, choose the ADLS Gen2 linked service you created in step 7. Enter an output folder to write your data to. In this quickstart, we're writing to folder 'output' in container 'sample-data'. The folder doesn't need to exist beforehand and can be dynamically created. Set **First row as header** as true and select **None** for **Import schema**. Click **OK** when done.
+1. Name your sink dataset **MoviesSink**. For linked service, choose the ADLS Gen2 linked service you created in step 7. Enter an output folder to write your data to. In this quickstart, we're writing to folder 'output' in container 'sample-data'. The folder doesn't need to exist beforehand and can be dynamically created. Set **First row as header** as true and select **None** for **Import schema**. Select **OK** when done.
![Sink dataset properties](media/quickstart-data-flow/sink-dataset-properties.png)
Now you've finished building your data flow. You're ready to run it in your pipe
## Running and monitoring the Data Flow
-You can debug a pipeline before you publish it. In this step, you're going to trigger a debug run of the data flow pipeline. While data preview doesn't write data, a debug run will write data to your sink destination.
+You can debug a pipeline before you publish it. In this step, you're going to trigger a debug run of the data flow pipeline. While data preview doesn't write data, a debug run writes data to your sink destination.
-1. Go to the pipeline canvas. Click **Debug** to trigger a debug run.
+1. Go to the pipeline canvas. Select **Debug** to trigger a debug run.
![Debug pipeline](media/quickstart-data-flow/debug-pipeline.png)
-1. Pipeline debug of Data Flow activities uses the active debug cluster but still take at least a minute to initialize. You can track the progress via the **Output** tab. Once the run is successful, click on the eyeglasses icon to open the monitoring pane.
+1. Pipeline debug of Data Flow activities uses the active debug cluster but still takes at least a minute to initialize. You can track the progress via the **Output** tab. Once the run is successful, select the eyeglasses icon to open the monitoring pane.
![Debugging output](media/quickstart-data-flow/debugging-output.png)
You can debug a pipeline before you publish it. In this step, you're going to tr
![Transformation monitoring](media/quickstart-data-flow/4-transformations.png)
-1. Click on a transformation to get detailed information about the columns and partitioning of the data.
+1. Select a transformation to get detailed information about the columns and partitioning of the data.
![Transformation details](media/quickstart-data-flow/transformation-details.png) If you followed this quickstart correctly, you should have written 83 rows and 2 columns into your sink folder. You can verify the data by checking your blob storage. - ## Next steps Advance to the following articles to learn about Azure Synapse Analytics support:
synapse-analytics Quickstart Transform Data Using Spark Job Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-transform-data-using-spark-job-definition.md
Title: "Quickstart: Transform data using Apache Spark job definition"
+ Title: 'Quickstart: Transform data using Apache Spark job definition'
description: This tutorial provides step-by-step instructions for using Azure Synapse Analytics to transform data with Apache Spark job definition. -+ Last updated 02/15/2022
In this quickstart, you'll use Azure Synapse Analytics to create a pipeline usin
* **Azure Synapse workspace**: Create a Synapse workspace using the Azure portal following the instructions in [Quickstart: Create a Synapse workspace](quickstart-create-workspace.md). * **Apache Spark job definition**: Create an Apache Spark job definition in the Synapse workspace following the instructions in [Tutorial: Create Apache Spark job definition in Synapse Studio](spark/apache-spark-job-definitions.md). - ### Navigate to the Synapse Studio After your Azure Synapse workspace is created, you have two ways to open Synapse Studio:
On this panel, you can reference to the Spark job definition to run.
* Expand the Spark job definition list, you can choose an existing Apache Spark job definition. You can also create a new Apache Spark job definition by selecting the **New** button to reference the Spark job definition to be run.
-* (Optional) You can fill in information for Apache Spark job definition. If the following settings are empty, the settings of the spark job definition itself will be used to run; if the following settings are not empty, these settings will replace the settings of the spark job definition itself.
+* (Optional) You can fill in information for the Apache Spark job definition. If the following settings are empty, the settings of the Spark job definition itself are used to run the job; if they aren't empty, these settings replace the settings of the Spark job definition itself.
| Property | Description | | -- | -- |
On this panel, you can reference to the Spark job definition to run.
|Main class name| The fully qualified identifier or the main class that is in the main definition file. <br> Sample: `WordCount`| |Command-line arguments| You can add command-line arguments by clicking the **New** button. It should be noted that adding command-line arguments will override the command-line arguments defined by the Spark job definition. <br> *Sample: `abfss://…/path/to/shakespeare.txt` `abfss://…/path/to/result`* <br> | |Apache Spark pool| You can select Apache Spark pool from the list.|
- |Python code reference| Additional Python code files used for reference in the main definition file. <br> It supports passing files (.py, .py3, .zip) to the "pyFiles" property. It will override the "pyFiles" property defined in Spark job definition. <br>|
- |Reference files | Additional files used for reference in the main definition file. |
+ |Python code reference| Other Python code files used for reference in the main definition file. <br> It supports passing files (.py, .py3, .zip) to the "pyFiles" property. It will override the "pyFiles" property defined in Spark job definition. <br>|
+ |Reference files | Other files used for reference in the main definition file. |
|Dynamically allocate executors| This setting maps to the dynamic allocation property in Spark configuration for Spark Application executors allocation.| |Min executors| Min number of executors to be allocated in the specified Spark pool for the job.| |Max executors| Max number of executors to be allocated in the specified Spark pool for the job.| |Driver size| Number of cores and memory to be used for driver given in the specified Apache Spark pool for the job.|
- |Spark configuration| Specify values for Spark configuration properties listed in the topic: Spark Configuration - Application properties. Users can use default configuration and customized configuration. |
-
+ |Spark configuration| Specify values for Spark configuration properties listed in the article: Spark Configuration - Application properties. Users can use default configuration and customized configuration. |
+ ![Spark job definition pipeline settings](media/quickstart-transform-data-using-spark-job-definition/spark-job-definition-pipline-settings.png) * You can add dynamic content by selecting the **Add Dynamic Content** button or by pressing the shortcut key <kbd>Alt</kbd>+<kbd>Shift</kbd>+<kbd>D</kbd>. In the **Add Dynamic Content** page, you can use any combination of expressions, functions, and system variables to add to dynamic content.
You can add properties for Apache Spark job definition activity in this panel.
![user properties](media/quickstart-transform-data-using-spark-job-definition/user-properties.png)
-## Next steps
+## Related content
Advance to the following articles to learn about Azure Synapse Analytics support:
synapse-analytics Gen2 Migration Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/gen2-migration-schedule.md
Last updated 01/21/2020 -+ - azure-synapse + # Upgrade your dedicated SQL pool (formerly SQL DW) to Gen2
-Microsoft is helping to reduce the entry-level cost of running a dedicated SQL pool (formerly SQL DW). Lower compute tiers capable of handling demanding queries are now available for dedicated SQL pool (formerly SQL DW). Read the full announcement [Lower compute tier support for Gen2](https://azure.microsoft.com/blog/azure-sql-data-warehouse-gen2-now-supports-lower-compute-tiers/). The new offering is available in the regions noted in the table below. For supported regions, existing Gen1 dedicated SQL pool (formerly SQL DW) can be upgraded to Gen2 through either:
+Microsoft is helping to reduce the entry-level cost of running a dedicated SQL pool (formerly SQL DW). Lower compute tiers capable of handling demanding queries are now available for dedicated SQL pool (formerly SQL DW). Read the full announcement [Lower compute tier support for Gen2](https://azure.microsoft.com/blog/azure-sql-data-warehouse-gen2-now-supports-lower-compute-tiers/). The new offering is available in the regions noted in the table below. For supported regions, existing Gen1 dedicated SQL pool (formerly SQL DW) can be upgraded to Gen2 through either:
-- **The automatic upgrade process:** Automatic upgrades don't start as soon as the service is available in a region. When automatic upgrades start in a specific region, individual data warehouse upgrades will take place during your selected maintenance schedule.
+- **The automatic upgrade process:** Automatic upgrades don't start as soon as the service is available in a region. When automatic upgrades start in a specific region, individual data warehouse upgrades will take place during your selected maintenance schedule.
- [**Self-upgrade to Gen2:**](#self-upgrade-to-gen2) You can control when to upgrade by doing a self-upgrade to Gen2. If your region is not yet supported, you can restore from a restore point directly to a Gen2 instance in a supported region. ## Automated Schedule and Region Availability Table
The following table summarizes by region when the Lower Gen2 compute tier will b
Based on the availability chart above, we'll be scheduling automated upgrades for your Gen1 instances. To avoid any unexpected interruptions on the availability of the dedicated SQL pool (formerly SQL DW), the automated upgrades will be scheduled during your maintenance schedule. The ability to create a new Gen1 instance will be disabled in regions undergoing auto upgrade to Gen2. Gen1 will be deprecated once the automatic upgrades have been completed. For more information on schedules, see [View a maintenance schedule](maintenance-scheduling.md#view-a-maintenance-schedule)
-The upgrade process will involve a brief drop in connectivity (approximately 5 min) as we restart your dedicated SQL pool (formerly SQL DW). Once your dedicated SQL pool (formerly SQL DW) has been restarted, it will be fully available for use. However, you may experience a degradation in performance while the upgrade process continues to upgrade the data files in the background. The total time for the performance degradation will vary dependent on the size of your data files.
+The upgrade process will involve a brief drop in connectivity (approximately 5 min) as we restart your dedicated SQL pool (formerly SQL DW). Once your dedicated SQL pool (formerly SQL DW) has been restarted, it will be fully available for use. However, you may experience a degradation in performance while the upgrade process continues to upgrade the data files in the background. The total time for the performance degradation will vary dependent on the size of your data files.
You can also expedite the data file upgrade process by running [Alter Index rebuild](sql-data-warehouse-tables-index.md) on all primary columnstore tables using a larger SLO and resource class after the restart.
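As a rough sketch of that rebuild issued from the command line with sqlcmd (the server, database, credentials, and the table name `dbo.FactSales` are placeholders for illustration, not values from this article):

```cmd
REM Rebuild the columnstore index on one primary columnstore table (repeat per table).
REM Run while connected as a user assigned to a larger resource class for faster rebuilds.
sqlcmd -S <server>.database.windows.net -d <database> -U <user> -P <password> -Q "ALTER INDEX ALL ON dbo.FactSales REBUILD;"
```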
You can also expedite the data file upgrade process by running [Alter Index rebu
You can choose to self-upgrade by following these steps on an existing Gen1 dedicated SQL pool (formerly SQL DW). If you choose to self-upgrade, you must complete it before the automatic upgrade process begins in your region. Doing so ensures that you avoid any risk of the automatic upgrades causing a conflict.
-There are two options when conducting a self-upgrade. You can either upgrade your current dedicated SQL pool (formerly SQL DW) in-place or you can restore a Gen1 dedicated SQL pool (formerly SQL DW) into a Gen2 instance.
+There are two options when conducting a self-upgrade. You can either upgrade your current dedicated SQL pool (formerly SQL DW) in-place or you can restore a Gen1 dedicated SQL pool (formerly SQL DW) into a Gen2 instance.
-- [Upgrade in-place](upgrade-to-latest-generation.md) - This option will upgrade your existing Gen1 dedicated SQL pool (formerly SQL DW) to Gen2. The upgrade process will involve a brief drop in connectivity (approximately 5 min) as we restart your dedicated SQL pool (formerly SQL DW). Once restarted, it will be fully available for use. If you experience issues during the upgrade, open a [support request](sql-data-warehouse-get-started-create-support-ticket.md) and reference "Gen2 upgrade" as the possible cause.-- [Upgrade from restore point](sql-data-warehouse-restore-points.md) - Create a user-defined restore point on your current Gen1 dedicated SQL pool (formerly SQL DW) and then restore directly to a Gen2 instance. The existing Gen1 dedicated SQL pool (formerly SQL DW) will stay in place. Once the restore has been completed, your Gen2 dedicated SQL pool (formerly SQL DW) will be fully available for use. Once you have run all testing and validation processes on the restored Gen2 instance, the original Gen1 instance can be deleted.
+- [Upgrade in-place](upgrade-to-latest-generation.md) - This option will upgrade your existing Gen1 dedicated SQL pool (formerly SQL DW) to Gen2. The upgrade process will involve a brief drop in connectivity (approximately 5 min) as we restart your dedicated SQL pool (formerly SQL DW). Once restarted, it will be fully available for use. If you experience issues during the upgrade, open a [support request](sql-data-warehouse-get-started-create-support-ticket.md) and reference "Gen2 upgrade" as the possible cause.
+- [Upgrade from restore point](sql-data-warehouse-restore-points.md) - Create a user-defined restore point on your current Gen1 dedicated SQL pool (formerly SQL DW) and then restore directly to a Gen2 instance. The existing Gen1 dedicated SQL pool (formerly SQL DW) will stay in place. Once the restore has been completed, your Gen2 dedicated SQL pool (formerly SQL DW) will be fully available for use. Once you have run all testing and validation processes on the restored Gen2 instance, the original Gen1 instance can be deleted.
- Step 1: From the Azure portal, [create a user-defined restore point](sql-data-warehouse-restore-active-paused-dw.md). - Step 2: When restoring from a user-defined restore point, set the "performance Level" to your preferred Gen2 tier.
For more information, see [Upgrade to Gen2](upgrade-to-latest-generation.md).
**Q: How will the upgrades affect my automation scripts?** -- A: Any automation script that references a Service Level Objective should be changed to correspond to the Gen2 equivalent. See details [here](upgrade-to-latest-generation.md#upgrade-in-a-supported-region-using-the-azure-portal).
+- A: Any automation script that references a Service Level Objective should be changed to correspond to the Gen2 equivalent. See details [here](upgrade-to-latest-generation.md#upgrade-in-a-supported-region-using-the-azure-portal).
**Q: How long does a self-upgrade normally take?** - A: You can upgrade in place or upgrade from a restore point.
- - Upgrading in place will cause your dedicated SQL pool (formerly SQL DW) to momentarily pause and resume. A background process will continue while the dedicated SQL pool (formerly SQL DW) is online.
- - It takes longer if you are upgrading through a restore point, because the upgrade will go through the full restore process.
+ - Upgrading in place will cause your dedicated SQL pool (formerly SQL DW) to momentarily pause and resume. A background process will continue while the dedicated SQL pool (formerly SQL DW) is online.
+ - It takes longer if you're upgrading through a restore point, because the upgrade will go through the full restore process.
**Q: How long will the auto upgrade take?**
For more information, see [Upgrade to Gen2](upgrade-to-latest-generation.md).
**Q: When will this automatic upgrade take place?** -- A: During your maintenance schedule. Leveraging your chosen maintenance schedule will minimize disruption to your business.
+- A: During your maintenance schedule. Using your chosen maintenance schedule will minimize disruption to your business.
**Q: What should I do if my background upgrade process seems to be stuck?** -- A: Kick off a reindex of your Columnstore tables. Note that reindexing of the table will be offline during this operation.
+- A: Kick off a reindex of your Columnstore tables. Reindexing of the table will be offline during this operation.
**Q: What if Gen2 does not have the Service Level Objective I have on Gen1?** -- A: If you are running a DW600 or DW1200 on Gen1, it is advised to use DW500c or DW1000c respectively since Gen2 provides more memory, resources, and higher performance than Gen1.
+- A: If you're running a DW600 or DW1200 on Gen1, it's advised to use DW500c or DW1000c respectively since Gen2 provides more memory, resources, and higher performance than Gen1.
**Q: Can I disable geo-backup?**
For more information, see [Upgrade to Gen2](upgrade-to-latest-generation.md).
**Q: Is there a difference in T-SQL syntax between Gen1 and Gen2?** -- A: There is no change in the T-SQL language syntax from Gen1 to Gen2.
+- A: There's no change in the T-SQL language syntax from Gen1 to Gen2.
**Q: Does Gen2 support Maintenance Windows?**
For more information, see [Upgrade to Gen2](upgrade-to-latest-generation.md).
- A: No. After a region has been upgraded, the creation of new Gen1 instances will be disabled.
-## Next steps
+## Related content
- [Upgrade steps](upgrade-to-latest-generation.md) - [Maintenance windows](maintenance-scheduling.md)
synapse-analytics Overview Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-architecture.md
description: Learn how Azure Synapse SQL combines distributed query processing c
-+ Last updated 11/01/2022
-# Azure Synapse SQL architecture
+# What is Azure Synapse SQL architecture?
This article describes the architecture components of Synapse SQL. It also explains how Azure Synapse SQL combines distributed query processing capabilities with Azure Storage to achieve high performance and scalability.
The diagram below shows a replicated table that is cached on the first distribut
:::image type="content" source="./media/overview-architecture/replicated-table.png" alt-text="Screenshot of the replicated table cached on the first distribution on each compute node." lightbox="./media/overview-architecture/replicated-table.png" :::
-## Next steps
+## Related content
Now that you know a bit about Synapse SQL, learn how to quickly [create a dedicated SQL pool](../quickstart-create-sql-pool-portal.md) and [load sample data](../sql-data-warehouse/sql-data-warehouse-load-from-azure-blob-storage-with-polybase.md). Or start [using serverless SQL pool](../quickstart-sql-on-demand.md). If you're new to Azure, you may find the [Azure glossary](../../azure-glossary-cloud-terminology.md) helpful as you encounter new terminology.
synapse-analytics Troubleshoot Synapse Studio Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/troubleshoot/troubleshoot-synapse-studio-powershell.md
Title: Troubleshoot Synapse Studio connectivity
-description: Troubleshoot Azure Synapse Studio connectivity using PowerShell
+description: In this article, we provide steps to troubleshoot Azure Synapse Studio connectivity problems by using PowerShell.
-+ Last updated 10/30/2020
If you're a network administrator and tuning your firewall configuration for Azu
* All the test items (requests) marked with "Passed" mean they have passed connectivity tests, regardless of the HTTP status code. For the failed requests, the reason is shown in yellow, such as `NamedResolutionFailure` or `ConnectFailure`. These reasons might help you figure out whether there are misconfigurations with your network environment. - ## Next steps If the previous steps don't help to resolve your issue, [create a support ticket](../sql-data-warehouse/sql-data-warehouse-get-started-create-support-ticket.md).
trusted-signing How To Cert Revocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/how-to-cert-revocation.md
Title: Revoke a certificate profile in Trusted Signing description: Learn how to revoke a Trusted Signing certificate in the Azure portal. -+
trusted-signing How To Change Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/how-to-change-sku.md
Title: Change the account SKU description: Learn how to change your SKU or pricing tier for a Trusted Signing account. -+
update-manager Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshoot.md
Title: Troubleshoot known issues with Azure Update Manager description: This article provides details on known issues and how to troubleshoot any problems with Azure Update Manager. Previously updated : 09/11/2024 Last updated : 11/11/2024
virtual-desktop Client Device Redirection Intune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/client-device-redirection-intune.md
Title: Configure client device redirection settings for Windows App and the Remote Desktop app using Microsoft Intune
-description: Learn how to configure redirection settings for iOS/iPadOS Windows App and Android Remote Desktop client using Microsoft Intune.
+description: Learn how to configure redirection settings for iOS/iPadOS Windows App, Android Remote Desktop client, and Android Windows App (preview) using Microsoft Intune.
Previously updated : 10/31/2024 Last updated : 11/09/2024 # Configure client device redirection settings for Windows App and the Remote Desktop app using Microsoft Intune > [!IMPORTANT]
-> Configure redirection settings for the **Remote Desktop app on Android** using Microsoft Intune is currently in PREVIEW. Configure redirection settings for **Windows App on iOS/iPadOS** using Microsoft Intune is generally available.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Configuring redirection settings for the **Remote Desktop app on Android** and **Windows App on Android** using Microsoft Intune is currently in PREVIEW. Configuring redirection settings for **Windows App on iOS/iPadOS** using Microsoft Intune is generally available. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
> [!TIP] > This article contains information for multiple products that use the Remote Desktop Protocol (RDP) to provide remote access to Windows desktops and applications.
For Windows App:
| Device platform | Managed devices | Unmanaged devices | |--|:--:|:--:| | iOS and iPadOS | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Android | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
For the Remote Desktop app: | Device platform | Managed devices | Unmanaged devices | |--|:--:|:--:|
-| iOS and iPadOS | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
| Android | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | ## Example scenarios
Before you can configure redirection settings on a client device using Microsoft
- A client device running one of the following versions of Windows App or the Remote Desktop app: - For Windows App:
- - iOS and iPadOS: 11.0.4 or later.
+ - iOS/iPadOS: 11.0.4 or later.
+ - Android: 1.0.145 or later.
- Remote Desktop app: - Android: 10.0.19.1279 or later.
+- The latest version of:
+ - iOS/iPadOS: Microsoft Authenticator app
+ - Android: Company Portal app, installed in the same profile as Windows App for personal devices. Both apps must be in the same profile: either both in the personal profile or both in the work profile.
+ - There are more Intune prerequisites for configuring app configuration policies, app protection policies, and Conditional Access policies. For more information, see: - [App configuration policies for Microsoft Intune](/mem/intune/apps/app-configuration-policies-overview). - [How to create and assign app protection policies](/mem/intune/apps/app-protection-policies). - [Use app-based Conditional Access policies with Intune](/mem/intune/protect/app-based-conditional-access-intune).
+
+> [!IMPORTANT]
+> Intune mobile application management (MAM) functionality isn't currently supported on Android 15 by Remote Desktop or Windows App (preview). MAM runs on older versions of Android. Support for MAM on Android 15 for Windows App (preview) will be added in an upcoming release.
## Create a managed app filter
To create and apply an app configuration policy for managed devices, follow the
## Create an app configuration policy for managed apps
-You need to create a separate [app configuration policy for managed apps](/mem/intune/apps/app-configuration-policies-overview#managed-devices) for Windows App (iOS/iPadOS) and the Remote Desktop app (Android), which enables you to provide configuration settings. Don't configure both Android and iOS in the same configuration policy or you won't be able to configure policy targeting based on managed and unmanaged devices.
+You need to create a separate [app configuration policy for managed apps](/mem/intune/apps/app-configuration-policies-overview#managed-devices) for Windows App (iOS/iPadOS) and the Windows App (preview) or Remote Desktop app (Android), which enables you to provide configuration settings. Don't configure both Android and iOS in the same configuration policy or you won't be able to configure policy targeting based on managed and unmanaged devices.
To create and apply an app configuration policy for managed apps, follow the steps in [App configuration policies for Intune App SDK managed apps](/mem/intune/apps/app-configuration-policies-managed-app) and use the following settings: -- On the **Basics** tab, select **Select public apps**, then search for and select **Remote Desktop** for Android and **Windows App** for iOS/iPadOS.
+- On the **Basics** tab, select **Select public apps**, then search for and select **Remote Desktop** for Android and **Windows App** for iOS/iPadOS. For **Windows App (preview)** for Android, select **Select custom apps**, then enter **com.microsoft.rdc.androidx.beta** in the Bundle or Package ID field under More Apps.
- On the **Settings** tab, expand **General configuration settings**, then enter the following name and value pairs for each redirection setting you want to configure exactly as shown. These values correspond to the RDP properties listed on [Supported RDP properties](/azure/virtual-desktop/rdp-properties#device-redirection), but the syntax is different:
You need to create a separate [app protection policy](/mem/intune/apps/app-prote
To create and apply an app protection policy, follow the steps in [How to create and assign app protection policies](/mem/intune/apps/app-protection-policies) and use the following settings. -- On the **Apps** tab, select **Select public apps**, then search for and select **Remote Desktop** for Android and **Windows App** for iOS/iPadOS.
+- On the **Apps** tab, select **Select public apps**, then search for and select **Remote Desktop** for Android and **Windows App** for iOS/iPadOS. For **Windows App (preview)** for Android, select **Select custom apps**, then enter **com.microsoft.rdc.androidx.beta** in the Bundle or Package ID field under More Apps.
- On the **Data protection** tab, only the following settings are relevant to Windows App and the Remote Desktop app. The other settings don't apply as Windows App and the Remote Desktop app interact with the session host and not with data in the app. On mobile devices, unapproved keyboards are a source of keystroke logging and theft.
Now that you configure Intune to manage device redirection on personal devices,
## Known issues
-When creating an app configuration policy or an app protection policy for Android, Remote Desktop is listed twice. Add both apps. This will be updated soon so Remote Desktop is only shown once.
+- When creating an app configuration policy or an app protection policy for Android, Remote Desktop is listed twice. Add both apps. This will be updated soon so Remote Desktop is only shown once.
+
+- Windows App (Preview) exits without warning if Company Portal and Windows App aren't installed in the same profile. The solution is to install both apps in the same profile: either both in the personal profile or both in the work profile.
virtual-desktop Session Host Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/session-host-update.md
Here are known issues and limitations:
- Avoid modifying a session host configuration in a host pool with no session hosts at the same time a session host is being created as this can result in a host pool with inconsistent session host properties.
+- Updates with large batch sizes can result in intermittent failures with the error code `AgentRegistrationFailureGeneric`. If this occurs for a subset of session hosts being updated, [retrying the update](session-host-update-configure.md#pause-resume-cancel-or-retry-an-update) typically resolves the issue.
+ ## Next steps - Learn how to [update session hosts in a host pool with a session host configuration using session host update](session-host-update-configure.md).
virtual-desktop Troubleshoot Session Host Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-session-host-update.md
If you get the error **Error: SessionHostConfiguration does not exist** when usi
When you update session hosts using session host update, it's possible that an individual session host fails to update. In this case, session host update attempts to roll back the update on that session host. The intention for the rollback is to maintain the capacity of the entire host pool, even though this session host is rolled back to a previous version of the session host configuration, rather than forcing the session host to be unavailable and reducing the capacity of the host pool. Other session hosts in the host pool that successfully updated aren't rolled back. Session hosts that didn't start updating aren't updated.
-Once a session host fails to update, session host update completes updating the current batch of session hosts, then marks the update as failed. In this scenario, the only options are to retry the update or cancel it. If you retry the update, session host update again attempts to update the session that failed, plus the remaining session hosts not previously attempted. The existing batch size is used. If a session host fails to update a second time, it goes into an error state marked **Rollback-failed skipped** and is ignored for updates.
+Once a session host fails to update, session host update completes updating the current batch of session hosts, then marks the update as failed. In this scenario, the only options are to retry the update or cancel it. If you retry the update, session host update again attempts to update the session hosts that failed, plus the remaining session hosts not previously attempted. The existing batch size is used.
-If a session host fails to roll back successfully, it isn't available to host session and capacity is reduced. The session host isn't the same as the other session hosts in the host pool and it match the session host configuration. You should investigate why the update of the session host failed and resolve the issue before scheduling a new update. Once you schedule a new update, session host update attempts to update the session host that failed so they all match, plus any session hosts that weren't started in the previous update attempt.
+If a session host fails to roll back successfully, it isn't available to host sessions and capacity is reduced. The session host isn't the same as the other session hosts in the host pool and doesn't match the session host configuration. You should investigate why the update of the session host failed and resolve the issue before scheduling a new update. Once you schedule a new update, session host update attempts to update the session hosts that failed so they all match, plus any session hosts that weren't started in the previous update attempt.
An update can fail with the following status:
An update can fail with the following status:
|--|--| | Update failed to initiate | The update flow is incorrect. For example, an image that's incompatible with the virtual machine SKU. You can't retry the update; you need to cancel it and schedule a new update. | | Update failed | The update failed while it was in progress. If you retry the update, it continues with the session host it stopped at previously. |
-| Session host rollback failed | If a session host fails to update, session host update tries to roll back the update on that session host. If the rollback fails and you retry the update, the session host is ignored and the update continues with the rest. |
+| Session host rollback failed | If a session host fails to update, session host update tries to roll back the update on that session host. If the rollback fails and you retry the update, it continues with the session host it stopped at previously. |
You can get any errors for an update by following the steps to [Monitor the progress of an update](session-host-update-configure.md?tabs=powershell#monitor-the-progress-of-an-update). When you use Azure PowerShell, the variable `$updateProgress` contains error details in the following properties:
Once you identify the issue, you can either [retry the update, or cancel it and
### An update failed to initiate
-When a session host update is initiated, the service validates whether it will be able to successfully complete the update. When a session host update fails prior to starting, the update ends and changes can be made to the session host configuration to retry the update. As the Azure resources are stored in your subscription, they can be modified by other processes; session host creation can still fail using the session host configuration even after this validation check is completed.
+When a session host update is initiated, the service validates whether it will be able to successfully complete the update. When a session host update fails prior to starting, the update ends and changes can be made to the session host configuration. As the Azure resources are stored in your subscription, they can be modified by other processes; session host creation can still fail using the session host configuration even after this validation check is completed.
Here are some example failures that prevent an update from starting:
Here are some example failures that prevent an update from starting:
### Failures during an update
-Here are some example failures that can happen during an update:
+Session host update starts with an initial batch size of 1 to validate that the provided session host configuration will result in healthy session hosts. Failures that occur during the first validation batch are most often due to parameters within the session host configuration and are typically not resolved by retrying the update. Failures that occur after the validation batch are often intermittent and can be resolved by [retrying the update](session-host-update-configure.md#pause-resume-cancel-or-retry-an-update).
+
+Here are some example failures that can occur during an update:
- **VM creation failures**: VM creation can fail for a variety of reasons not specific to Azure Virtual Desktop, for example the exhaustion of subscription capacity, or issues with the provided image. You should review the error message provided to determine the appropriate remediation. Open a support case with Azure support if you need further assistance. -- **Agent installation, domain join, and session host health errors or timeout**: in most cases, agent, domain join, and other session host health errors can be resolved by reviewing guidance for addressing deployment and domain join failures for Azure Virtual Desktop. In addition, you should ensure that your image doesn't have the PowerShell DSC extension installed in the image. If it is installed, remove the folder `C:\packages\plugin folder` from the image.
+- **Agent installation, domain join, and session host health errors or timeout**: Agent, domain join, and other session host health errors that occur in the first validation batch can often be resolved by reviewing guidance for addressing deployment and domain join failures for Azure Virtual Desktop, and by ensuring your image doesn't have the PowerShell DSC extension installed. If the extension is installed on the image, remove the folder `C:\packages\plugin` from the image. If the failure is intermittent, with some session hosts successfully updating and others encountering an error such as `AgentRegistrationFailureGeneric`, [retrying the update](session-host-update-configure.md#pause-resume-cancel-or-retry-an-update) can often resolve the issue.
- **Resource modification and access errors**: modifying resources that are impacted in the update can result in errors during an update. Some of the errors that can result include deletion of resources and resource groups, changes to permissions, changes to power state, and changes to drain mode. In addition, if your Azure resources are locked and/or Azure policy limits the Azure Virtual Desktop service from modifying your session hosts, the update fails. Review Azure activity logs if you encounter related errors. Open a support case with Azure support if you need further assistance.
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Previously updated : 10/18/2024 Last updated : 11/08/2024 # What's new in Azure Virtual Desktop?
Make sure to check back here often to keep up with new updates.
Here's what changed in October 2024:
-### Windows 11, version 24H2 images are now available in Azure Marketplace
+### Yubikey smart card redirection on iOS and iPadOS is now in preview
-Images for Windows 11 Enterprise, version 24H2 and Windows 11 Enterprise multi-session, version 24H2 are now available in the Azure Marketplace. These images also include versions with Microsoft 365 apps. The Azure portal will be updated later this month to allow the convenient selection of 24H2 images when creating session hosts from within the Azure Virtual Desktop service.
+Yubico and Microsoft have partnered to provide smart card redirection for iOS and iPadOS Windows App users, which is available in preview starting in version 11.0.4. The Yubico integration supports the latest [YubiKey 5 portfolio](https://www.yubico.com/products/yubikey-5-overview/).
+
+For YubiKey support, contact [Yubico Support Services](https://www.yubico.com/support/support-services/).
+
+### AVC Mixed mode support for Azure Virtual Desktop and Windows 365 session desktop when multimedia redirection is not enabled
+
+AVC Mixed Mode is now available in the default graphics profile. When multimedia redirection isn't enabled, AVC/h.264 is used to encode detected image content instead of the RemoteFX image encoder. This improves performance when encoding images relative to bitrate and framerate in network-constrained scenarios.
+
+For more information, see [Graphics encoding over the Remote Desktop Protocol](graphics-encoding.md).
+
+### New Teams SlimCore changes are now available
+
+Microsoft Teams on Azure Virtual Desktop supports chat and collaboration. With media optimizations, it also supports calling and meeting functionality by redirecting it to the local device when using Windows App or the Remote Desktop client on a supported platform.
+
+There are two versions of Teams, classic Teams and [new Teams](/microsoftteams/new-teams-desktop-admin), and you can use either with Azure Virtual Desktop. New Teams has feature parity with classic Teams, and improves performance, reliability, and security.
+
+New Teams can use either SlimCore or the WebRTC Redirector Service. SlimCore is now available. If you use SlimCore, you should also install the WebRTC Redirector Service. This allows a user to fall back to WebRTC, such as if they roam between different devices that don't support the new optimization architecture. For more information about SlimCore and how to opt into the preview, see [New VDI solution for Teams](/microsoftteams/vdi-2).
+
+For more information, see [Use Microsoft Teams on Azure Virtual Desktop](teams-on-avd.md).
+
+### Multimedia redirection for video playback and calls in a remote session
+
+Multimedia redirection call redirection is now generally available. Multimedia redirection redirects video playback and calls in a remote session from Azure Virtual Desktop, a Windows 365 Cloud PC, or Microsoft Dev Box to your local device for faster processing and rendering.
+
+For more information, see [Multimedia redirection for video playback and calls in a remote session](multimedia-redirection-video-playback-calls.md?tabs=intune&pivots=azure-virtual-desktop).
+
+### Standardized naming of selectable images in Azure Virtual Desktop is now available
+
+Image naming is now consistent when selecting images from the dropdown menu. As all new images published are Gen2, we're dropping this suffix from the display name in the Azure Virtual Desktop dropdowns and will only add Gen1 when it's required. The change doesn't impact naming in the Azure Marketplace.
+
+### Windows 11, version 24H2 images are now available in the Azure Marketplace
+
+Windows 11 Enterprise, version 24H2 and Windows 11 Enterprise multi-session, version 24H2 images are now available in the Azure Marketplace. The updated images, Windows 11 + Microsoft 365 apps and Windows 11, are available.
For additional information to configure languages other than English, see [Install language packs on Windows 11 Enterprise VMs in Azure Virtual Desktop](windows-11-language-packs.md).
+### Configuring client device redirection settings for Windows App on iOS/iPadOS using Microsoft Intune
+
+You can now use Microsoft Intune Mobile Application Management to check for device posture and manage redirections for Windows App on iOS and iPadOS. You can use Microsoft Intune on both corporate-managed and personal devices.
+
+For more information, see [Configure client device redirection settings for Windows App and the Remote Desktop app using Microsoft Intune](client-device-redirection-intune.md).
+ ## September 2024 Here's what changed in September 2024:
virtual-network Virtual Network Test Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-test-latency.md
Many other common network latency test tools, such as Ping, don't measure TCP or
Latte and SockPerf measure only TCP or UDP payload delivery times. These tools use the following approach to measure network latency between two physical or virtual computers:
-1. Create a two-way communications channel between the computers by designating one as sender and one as receiver.
+1. Create a two-way communication channel between the computers by designating one as sender and one as receiver.
1. Send and receive packets in both directions and measure the round-trip time (RTT). ## Tips and best practices to optimize network latency
Use the following best practices to test and analyze network latency:
1. Test the effects on network latency of changing any of the following components: - Operating system (OS) or network stack software, including configuration changes.
- - VM deployment method, such as deploying to an availability zone or proximity placement group (PPG).
+ - VM deployment methods, such as deploying to an availability zone or proximity placement group (PPG).
- VM properties, such as Accelerated Networking or size changes. - Virtual network configuration, such as routing or filtering changes.
Use the following best practices to test and analyze network latency:
## Test VMs with Latte or SockPerf
-Use the following procedures to install and test network latency with [Latte](https://github.com/mellanox/sockperf) for Windows or [SockPerf](https://github.com/mellanox/sockperf) for Linux.
+Use the following procedures to install and test network latency with [Latte](https://github.com/microsoft/latte) for Windows or [SockPerf](https://github.com/mellanox/sockperf) for Linux.
# [Windows](#tab/windows) ### Install Latte and configure VMs
-1. [Download the latest version of latte.exe](https://github.com/microsoft/latte/releases/download/v0/latte.exe) to both VMs, into a separate folder such as *c:\\tools*.
+1. [Download the latest version of latte.exe](https://github.com/microsoft/latte/releases/latest/download/latte.exe) to both VMs and put it in a separate folder such as *c:/tools*.
1. On the *receiver* VM, create a Windows Defender Firewall `allow` rule to allow the Latte traffic to arrive. It's easier to allow the *latte.exe* program by name than to allow specific inbound TCP ports. In the command, replace the `<path>` placeholder with the path you downloaded *latte.exe* to, such as *c:\\tools\\*.
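   The exact command is in the source article; as a sketch of such a rule, assuming *latte.exe* was saved to *c:\tools* (the rule name and path here are illustrative):

   ```cmd
   REM Allow inbound traffic for latte.exe by program name, on any port (illustrative rule name and path)
   netsh advfirewall firewall add rule name="Latte" dir=in action=allow program="c:\tools\latte.exe" enable=yes
   ```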
Run *latte.exe* from the Windows command line, not from PowerShell.
The following example shows the command for a VM with an IP address of `10.0.0.4`:<br><br>`latte -a 10.0.0.4:5005 -i 65100`
-1. On the *sender* VM, run the same command as on the receiver, except with `-c` added to indicate the *client* or sender VM. Again replace the `<receiver IP address>`, `<port>`, and `<iterations>` placeholders with your own values.
+1. On the *sender* VM, run the same command as on the receiver, except with `-c` added to indicate the *client* or sender VM. Again, replace the `<receiver IP address>`, `<port>`, and `<iterations>` placeholders with your own values.
```cmd latte -c -a <receiver IP address>:<port> -i <iterations>
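   REM For example, using the same illustrative address, port, and iteration count as the receiver example above:
   latte -c -a 10.0.0.4:5005 -i 65100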