Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
api-center | Build Register Apis Vscode Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/build-register-apis-vscode-extension.md | Title: Build and register APIs - Azure API Center - VS Code extension + Title: Build and register APIs - VS Code extension description: API developers can use the Azure API Center extension for Visual Studio Code to build and register APIs in their organization's API center. Previously updated : 09/23/2024 Last updated : 10/16/2024 + -# Customer intent: As an API developer, I want to use my Visual Studio Code environment to build, discover, explore, and consume APIs in my organization's API center. +# Customer intent: As an API developer, I want to use my Visual Studio Code environment to register APIs in my organization's API center as part of my development workflow. # Build and register APIs with the Azure API Center extension for Visual Studio Code -To build, discover, explore, and consume APIs in your [API center](overview.md), you can use the Azure API Center extension in your Visual Studio Code development environment. The extension provides the following features for API developers: +API developers in your organization can build and register APIs in your [API center](overview.md) inventory by using the Azure API Center extension for Visual Studio Code. API developers can: -* **Build APIs** - Make APIs you're building discoverable to others by registering them in your API center directly or using CI/CD pipelines in GitHub or Azure DevOps. Shift-left API design conformance checks into Visual Studio Code with integrated linting support. Ensure that new API versions don't break API consumers with breaking change detection. +* Add an existing API to an API center as a one-time operation, or integrate a development pipeline to register APIs as part of a CI/CD workflow. +* Generate OpenAPI specification files from API code using GitHub Copilot, and register the API to an API center. -* **Discover APIs** - Browse the APIs in your API center, and view their details and documentation. +API developers can also take advantage of features in the extension to [discover and consume APIs](discover-apis-vscode-extension.md) in the API center and ensure [API governance](govern-apis-vscode-extension.md). -* **Explore APIs** - Use Swagger UI or REST client to explore API requests and responses. -* **Consume APIs** - Generate API SDK clients for your favorite language including JavaScript, TypeScript, .NET, Python, and Java, using the Microsoft Kiota engine that generates SDKs for Microsoft Graph, GitHub, and more. +The following Visual Studio Code extensions are needed for the specified scenarios: -> [!VIDEO https://www.youtube.com/embed/62X0NALedCc] --## Prerequisites --* One or more API centers in your Azure subscription. If you haven't created one already, see [Quickstart: Create your API center](set-up-api-center.md). -- Currently, you need to be assigned the Contributor role or higher permissions to manage API centers with the extension. --* [Visual Studio Code](https://code.visualstudio.com/) - -* [Azure API Center extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=apidev.azure-api-center) -- > [!NOTE] - > Where noted, certain features are available only in the extension's pre-release version. 
[!INCLUDE [vscode-extension-prerelease-features](includes/vscode-extension-prerelease-features.md)] - -The following Visual Studio Code extensions are optional and needed only for certain scenarios as indicated: --* [REST client extension](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) - to send HTTP requests and view the responses in Visual Studio Code directly -* [Microsoft Kiota extension](https://marketplace.visualstudio.com/items?itemName=ms-graph.kiota) - to generate API clients -* [Spectral extension](https://marketplace.visualstudio.com/items?itemName=stoplight.spectral) - to run shift-left API design conformance checks in Visual Studio Code -* [Optic CLI](https://github.com/opticdev/optic) - to detect breaking changes between API specification documents +* [GitHub Actions](https://marketplace.visualstudio.com/items?itemName=GitHub.vscode-github-actions) - to register APIs using a CI/CD pipeline with GitHub Actions +* [Azure Pipelines](https://marketplace.visualstudio.com/items?itemName=ms-azure-devops.azure-pipelines) - to register APIs using a CI/CD pipeline with Azure Pipelines * [GitHub Copilot](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot) - to generate OpenAPI specification files from API code- -## Setup + -1. Install the Azure API Center extension for Visual Studio Code from the [Visual Studio Code Marketplace](https://marketplace.visualstudio.com/items?itemName=apidev.azure-api-center). Install optional extensions as needed. -1. In Visual Studio Code, in the Activity Bar on the left, select API Center. -1. If you're not signed in to your Azure account, select **Sign in to Azure...**, and follow the prompts to sign in. - Select an Azure account with the API center (or API centers) you wish to view APIs from. You can also filter on specific subscriptions if you have many to view from. +## Register an API - step by step -## Register APIs --Register an API in your API center directly from Visual Studio Code, either by registering it as a one-time operation or with a CI/CD pipeline. +The following steps register an API in your API center as a one-time operation. 1. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. Type **Azure API Center: Register API** and hit **Enter**.-1. Select how you want to register your API with your API center: - * **Step-by-step** is best for one-time registration of APIs. - * **CI/CD** adds a preconfigured GitHub or Azure DevOps pipeline to your active Visual Studio Code workspace that is run as part of a CI/CD workflow on each commit to source control. It's recommended to inventory APIs with your API center using CI/CD to ensure API metadata including specification and version stay current in your API center as the API continues to evolve over time. -1. Complete registration steps: - * For **Step-by-step**, select the API center to register APIs with, and answer prompts with information including API title, type, lifecycle stage, version, and specification to complete API registration. - * For **CI/CD**, select either **GitHub** or **Azure DevOps**, depending on your preferred source control mechanism. A Visual Studio Code workspace must be open for the Azure API Center extension to add a pipeline to your workspace. After the file is added, complete steps documented in the CI/CD pipeline file itself to configure Azure Pipeline/GitHub Action environment variables and identity. On push to source control, the API will be registered in your API center. 
-- Learn more about setting up a [GitHub Actions workflow](register-apis-github-actions.md) to register APIs with your API center. ---## API design conformance --To ensure design conformance with organizational standards as you build APIs, the Azure API Center extension for Visual Studio Code provides integrated support for API specification linting with Spectral. +1. Select **Manual**. +1. Select the API center to register APIs with. +1. Answer prompts with information including API title, type, version title, version lifecycle, definition title, specification name, and definition file to complete API registration. -1. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. Type **Azure API Center: Set active API Style Guide** and hit **Enter**. -2. Select one of the default rules provided, or, if your organization has a style guide already available, use **Select Local File** or **Input Remote URL** to specify the active ruleset in Visual Studio Code. Hit **Enter**. +The API is added to your API center inventory. -Once an active API style guide is set, opening any OpenAPI or AsyncAPI-based specification file will trigger a local linting operation in Visual Studio Code. Results are displayed both inline in the editor, as well as in the Problems window (**View** > **Problems** or **Ctrl+Shift+M**). +## Register APIs - CI/CD pipeline +The following steps register an API in your API center with a CI/CD pipeline. With this option, add a preconfigured GitHub or Azure DevOps pipeline to your active Visual Studio Code workspace that is run as part of a CI/CD workflow on each commit to source control. It's recommended to inventory APIs with your API center using CI/CD to ensure API metadata including specification and version stay current in your API center as the API continues to evolve over time. -## Breaking change detection --When introducing new versions of your API, it's important to ensure that changes introduced do not break API consumers on previous versions of your API. The Azure API Center extension for Visual Studio Code makes this easy with breaking change detection for OpenAPI specification documents powered by Optic. --1. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. Type **Azure API Center: Detect Breaking Change** and hit **Enter**. -2. Select the first API specification document to compare. Valid options include API specifications found in your API center, a local file, or the active editor in Visual Studio Code. -3. Select the second API specification document to compare. Valid options include API specifications found in your API center, a local file, or the active editor in Visual Studio Code. --Visual Studio Code will open a diff view between the two API specifications. Any breaking changes are displayed both inline in the editor, as well as in the Problems window (**View** > **Problems** or **Ctrl+Shift+M**). +1. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. Type **Azure API Center: Register API** and hit **Enter**. +1. Select **CI/CD**. +1. Select either **GitHub** or **Azure DevOps**, depending on your preferred source control mechanism. A Visual Studio Code workspace must be open for the Azure API Center extension to add a pipeline to your workspace. After the file is added, complete steps documented in the CI/CD pipeline file itself to configure required environment variables and identity. On push to source control, the API is registered in your API center. 
+Learn more about setting up a [GitHub Actions workflow](register-apis-github-actions.md) to register APIs with your API center. ## Generate OpenAPI specification file from API code -Use the power of GitHub Copilot with the Azure API Center extension for Visual Studio Code to create an OpenAPI specification file from your API code. Right-click on the API code, select **Copilot** from the options, and select **Generate API documentation**. This will create an OpenAPI specification file. +Use the power of GitHub Copilot with the Azure API Center extension for Visual Studio Code to create an OpenAPI specification file from your API code. Right-click on the API code, select **Copilot** from the options, and select **Generate API documentation**. GitHub Copilot creates an OpenAPI specification file. > [!NOTE] > This feature is available in the pre-release version of the API Center extension. Use the power of GitHub Copilot with the Azure API Center extension for Visual S After generating the OpenAPI specification file and checking for accuracy, you can register the API with your API center using the **Azure API Center: Register API** command. -## Discover APIs --Your API center resources appear in the tree view on the left-hand side. Expand an API center resource to see APIs, versions, definitions, environments, and deployments. ---Search for APIs within an API Center by using the search icon shown in the **Apis** tree view item. --> [!TIP] -> Optionally enable a [platform API catalog](enable-platform-api-catalog-vscode-extension.md) for your API center in Visual Studio Code so that app developers in your organization can discover APIs in a centralized location. The platform API catalog is a read-only view of the API inventory. --## View API documentation --You can view the documentation for an API definition in your API center and try API operations. This feature is only available for OpenAPI-based APIs in your API center. --1. Expand the API Center tree view to show an API definition. -1. Right-click on the definition, and select **Open API Documentation**. A new tab appears with the Swagger UI for the API definition. -- :::image type="content" source="media/build-register-apis-vscode-extension/view-api-documentation.png" alt-text="Screenshot of API documentation in Visual Studio Code." lightbox="media/build-register-apis-vscode-extension/view-api-documentation.png"::: --1. To try the API, select an endpoint, select **Try it out**, enter required parameters, and select **Execute**. -- > [!NOTE] - > Depending on the API, you might need to provide authorization credentials or an API key to try the API. -- > [!TIP] - > Using the pre-release version of the extension, you can generate API documentation in Markdown, a format that's easy to maintain and share with end users. Right-click on the definition, and select **Generate Markdown**. --## Generate HTTP file --You can view a `.http` file based on the API definition in your API center. If the REST Client extension is installed, you can make requests directory from the Visual Studio Code editor. This feature is only available for OpenAPI-based APIs in your API center. --1. Expand the API Center tree view to show an API definition. -1. Right-click on the definition, and select **Generate HTTP File**. A new tab appears that renders a .http document populated by the API specification. 
-- :::image type="content" source="media/build-register-apis-vscode-extension/generate-http-file.png" alt-text="Screenshot of generating a .http file in Visual Studio Code." lightbox="media/build-register-apis-vscode-extension/generate-http-file.png"::: --1. To make a request, select an endpoint, and select **Send Request**. -- > [!NOTE] - > Depending on the API, you might need to provide authorization credentials or an API key to make the request. --## Generate API client --Use the Microsoft Kiota extension to generate an API client for your favorite language. This feature is only available for OpenAPI-based APIs in your API center. --1. Expand the API Center tree view to show an API definition. -1. Right-click on the definition, and select **Generate API Client**. The **Kiota OpenAPI Generator** pane appears. -1. Select the API endpoints and HTTP operations you wish to include in your SDKs. -1. Select **Generate API client**. - 1. Enter configuration details about the SDK name, namespace, and output directory. - 1. Select the language for the generated SDK. - - :::image type="content" source="media/build-register-apis-vscode-extension/generate-api-client.png" alt-text="Screenshot of Kiota OpenAPI Explorer in Visual Studio Code." lightbox="media/build-register-apis-vscode-extension/generate-api-client.png"::: - -The client is generated. --For details on using the Kiota extension, see [Microsoft Kiota extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-graph.kiota). --## Export API specification --You can export an API specification from a definition and then download it as a file. --To export a specification in the extension's tree view: --1. Expand the API Center tree view to show an API definition. -1. Right-click on the definition, and select **Export API Specification Document**. A new tab appears that renders an API specification document. -- :::image type="content" source="media/build-register-apis-vscode-extension/export-specification.png" alt-text="Screenshot of exporting API specification in Visual Studio Code." lightbox="media/build-register-apis-vscode-extension/export-specification.png"::: --You can also export a specification using the Command Palette: --1. Type the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. -1. Select **Azure API Center: Export API Specification Document**. -1. Make selections to navigate to an API definition. A new tab appears that renders an API specification document. ## Related content * [Azure API Center - key concepts](key-concepts.md)+* [Discover and consume APIs with the Azure API Center extension for Visual Studio Code](discover-apis-vscode-extension.md) +* [Govern APIs with the Azure API Center extension for Visual Studio Code](govern-apis-vscode-extension.md) * [Enable and view platform API catalog in Visual Studio Code](enable-platform-api-catalog-vscode-extension.md) |
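To make the CI/CD registration option described above more concrete, the following is a minimal sketch of a GitHub Actions workflow that signs in to Azure and registers an API definition with the Azure CLI. The workflow name, file paths, secret name (`AZURE_CREDENTIALS`), and the `az apic api register` command with its flags are assumptions for illustration; the pipeline file that the extension adds to your workspace remains the authoritative starting point.

```yaml
# Hypothetical workflow - file paths, secret names, and CLI flags are illustrative assumptions.
name: Register API with API Center
on:
  push:
    branches: [main]
    paths: ["apis/**"]

jobs:
  register:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Sign in with a service principal stored in a repository secret (assumed name).
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      # Register (or update) the API from its OpenAPI definition. Verify the exact
      # command and flags with `az apic api register --help` for your CLI version.
      - name: Register API in API Center
        run: |
          az extension add --name apic-extension --upgrade
          az apic api register \
            --resource-group my-resource-group \
            --service-name my-api-center \
            --api-location apis/my-api/openapi.yaml
```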
api-center | Discover Apis Vscode Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/discover-apis-vscode-extension.md | + + Title: Discover APIs - VS Code extension +description: API developers can use the Azure API Center extension for Visual Studio Code to discover APIs in their organization's API center. +++ Last updated : 10/16/2024+++# Customer intent: As an API developer, I want to use my Visual Studio Code environment to discover and consume APIs in my organizations API center. +++# Discover and consume APIs with the Azure API Center extension for Visual Studio Code ++API developers in your organization can discover and consume APIs in your [API center](overview.md) by using the Azure API Center extension for Visual Studio Code. The extension provides the following features: ++* **Discover APIs** - Browse the APIs in your API center, and view their details and documentation. ++* **Consume APIs** - Generate API SDK clients in their favorite language including JavaScript, TypeScript, .NET, Python, and Java, using the Microsoft Kiota engine that generates SDKs for Microsoft Graph, GitHub, and more. ++API developers can also take advantage of features in the extension to [register APIs](build-register-apis-vscode-extension.md) in the API center and ensure [API governance](govern-apis-vscode-extension.md). ++> [!TIP] +> If you want enterprise app developers to discover APIs in a centralized location, optionally enable a [platform API catalog](enable-platform-api-catalog-vscode-extension.md) for your API center in Visual Studio Code. The platform API catalog is a read-only view of the API inventory. ++ +* [REST client extension](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) - to send HTTP requests and view the responses in Visual Studio Code directly +* [Microsoft Kiota extension](https://marketplace.visualstudio.com/items?itemName=ms-graph.kiota) - to generate API clients +++## Discover APIs ++API center resources appear in the tree view on the left-hand side. Expand an API center resource to see APIs, versions, definitions, environments, and deployments. +++Search for APIs within an API Center by using the search icon shown in the **Apis** tree view item. ++## View API documentation ++You can view the documentation for an API definition in your API center and try API operations. This feature is only available for OpenAPI-based APIs in your API center. ++1. Expand the API Center tree view to show an API definition. +1. Right-click on the definition, and select **Open API Documentation**. A new tab appears with the Swagger UI for the API definition. ++ :::image type="content" source="media/discover-apis-vscode-extension/view-api-documentation.png" alt-text="Screenshot of API documentation in Visual Studio Code." lightbox="media/discover-apis-vscode-extension/view-api-documentation.png"::: ++1. To try the API, select an endpoint, select **Try it out**, enter required parameters, and select **Execute**. ++ > [!NOTE] + > Depending on the API, you might need to provide authorization credentials or an API key to try the API. ++ > [!TIP] + > Using the pre-release version of the extension, you can generate API documentation in Markdown, a format that's easy to maintain and share with end users. Right-click on the definition, and select **Generate Markdown**. ++## Generate HTTP file ++You can view a `.http` file based on the API definition in your API center. 
If the REST Client extension is installed, you can make requests directly from the Visual Studio Code editor. This feature is only available for OpenAPI-based APIs in your API center. ++1. Expand the API Center tree view to show an API definition. +1. Right-click on the definition, and select **Generate HTTP File**. A new tab appears that renders a .http document populated by the API specification. ++ :::image type="content" source="media/discover-apis-vscode-extension/generate-http-file.png" alt-text="Screenshot of generating a .http file in Visual Studio Code." lightbox="media/discover-apis-vscode-extension/generate-http-file.png"::: ++1. To make a request, select an endpoint, and select **Send Request**. ++ > [!NOTE] + > Depending on the API, you might need to provide authorization credentials or an API key to make the request. ++## Generate API client ++Use the Microsoft Kiota extension to generate an API client for your favorite language. This feature is only available for OpenAPI-based APIs in your API center. ++1. Expand the API Center tree view to show an API definition. +1. Right-click on the definition, and select **Generate API Client**. The **Kiota OpenAPI Generator** pane appears. +1. Select the API endpoints and HTTP operations you wish to include in your SDKs. +1. Select **Generate API client**. + 1. Enter configuration details about the SDK name, namespace, and output directory. + 1. Select the language for the generated SDK. + + :::image type="content" source="media/discover-apis-vscode-extension/generate-api-client.png" alt-text="Screenshot of Kiota OpenAPI Explorer in Visual Studio Code." lightbox="media/discover-apis-vscode-extension/generate-api-client.png"::: + +The client is generated. ++For details on using the Kiota extension, see [Microsoft Kiota extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-graph.kiota). ++## Export API specification ++You can export an API specification from a definition and then download it as a file. ++To export a specification in the extension's tree view: ++1. Expand the API Center tree view to show an API definition. +1. Right-click on the definition, and select **Export API Specification Document**. A new tab appears that renders an API specification document. ++ :::image type="content" source="media/discover-apis-vscode-extension/export-specification.png" alt-text="Screenshot of exporting API specification in Visual Studio Code." lightbox="media/discover-apis-vscode-extension/export-specification.png"::: ++You can also export a specification using the Command Palette: ++1. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. +1. Select **Azure API Center: Export API Specification Document**. +1. Make selections to navigate to an API definition. A new tab appears that renders an API specification document. ++## Related content ++* [Azure API Center - key concepts](key-concepts.md) +* [Build and register APIs with the Azure API Center extension for Visual Studio Code](build-register-apis-vscode-extension.md) +* [Govern APIs with the Azure API Center extension for Visual Studio Code](govern-apis-vscode-extension.md) +* [Enable and view platform API catalog in Visual Studio Code](enable-platform-api-catalog-vscode-extension.md) + |
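For reference, a generated `.http` file follows the plain-text format that the REST Client extension understands, roughly like the sketch below. The base URL, paths, and the subscription-key header are illustrative assumptions rather than output from a specific API definition.

```http
@baseUrl = https://api.contoso.example
@apiKey = <your-api-key>

### List widgets
GET {{baseUrl}}/widgets?page=1
Accept: application/json
Ocp-Apim-Subscription-Key: {{apiKey}}

### Create a widget
POST {{baseUrl}}/widgets
Content-Type: application/json
Ocp-Apim-Subscription-Key: {{apiKey}}

{
  "name": "sample widget"
}
```

With the REST Client extension installed, a **Send Request** link appears above each `###` block.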
api-center | Enable Platform Api Catalog Vscode Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-platform-api-catalog-vscode-extension.md | +> [!TIP] +> The Visual Studio Code extension provides more features for API developers who have permissions to manage an Azure API center. For example, API developers can register APIs in the API center directly or by using CI/CD pipelines. [Learn more](build-register-apis-vscode-extension.md). + ## Prerequisites ### For API center administrators |
api-center | Govern Apis Vscode Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/govern-apis-vscode-extension.md | + + Title: Govern APIs - VS Code extension +description: API developers can use the Azure API Center extension for Visual Studio Code to govern their organization's APIs. +++ Last updated : 10/16/2024+++# Customer intent: As an API developer, I want to use my Visual Studio Code environment to check API compliance in my organization's API center. +++# Govern APIs with the Azure API Center extension for Visual Studio Code ++To maximize success of your API governance efforts, it's critical to shift-left governance early into the API development cycle. This approach allows API developers to create APIs correctly from the beginning, saving them from wasted development effort and mitigating noncompliant APIs later in the development process. ++The Azure API Center extension for Visual Studio Code includes the following governance capabilities for API developers: + +* Evaluating API designs against API style guides as the API is developed in Visual Studio Code. +* Early detection of breaking changes so that APIs remain reliable and function as expected, preserving the trust of end-users and stakeholders. ++API developers can also take advantage of features in the extension to [register APIs](build-register-apis-vscode-extension.md) in the API center and [discover and consume APIs](discover-apis-vscode-extension.md). +++* [Spectral extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=stoplight.spectral) - to run shift-left API design conformance checks in Visual Studio Code +* [Optic CLI](https://github.com/opticdev/optic) - to detect breaking changes between API specification documents +++## API design conformance ++To ensure design conformance with organizational standards as you build APIs, the Azure API Center extension for Visual Studio Code provides integrated support for API specification linting with [Spectral](https://stoplight.io/open-source/spectral). ++1. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. Type **Azure API Center: Set active API Style Guide** and hit **Enter**. +2. Select one of the default rules provided, or, if your organization has a style guide already available, use **Select Local File** or **Input Remote URL** to specify the active ruleset in Visual Studio Code. Hit **Enter**. ++Once an active API style guide is set, opening any OpenAPI or AsyncAPI-based specification file triggers a local linting operation in Visual Studio Code. Results are displayed both inline in the editor and in the Problems window (**View** > **Problems** or **Ctrl+Shift+M**). +++## Breaking change detection ++When introducing new versions of your API, it's important to ensure that changes introduced do not break API consumers on previous versions of your API. The Azure API Center extension for Visual Studio Code makes this easy with breaking change detection for OpenAPI specification documents powered by [Optic](https://github.com/opticdev/optic). ++1. Use the **Ctrl+Shift+P** keyboard shortcut to open the Command Palette. Type **Azure API Center: Detect Breaking Change** and hit **Enter**. +2. Select the first API specification document to compare. Valid options include API specifications found in your API center, a local file, or the active editor in Visual Studio Code. +3. Select the second API specification document to compare. 
Valid options include API specifications found in your API center, a local file, or the active editor in Visual Studio Code. ++Visual Studio Code opens a diff view between the two API specifications. Any breaking changes are displayed both inline in the editor and in the Problems window (**View** > **Problems** or **Ctrl+Shift+M**). ++++## Related content ++* [Azure API Center - key concepts](key-concepts.md) +* [Build and register APIs with the Azure API Center extension for Visual Studio Code](build-register-apis-vscode-extension.md) +* [Discover and consume APIs with the Azure API Center extension for Visual Studio Code](discover-apis-vscode-extension.md) +* [Enable and view platform API catalog in Visual Studio Code](enable-platform-api-catalog-vscode-extension.md) + |
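For the **Select Local File** option mentioned above, an organizational style guide is just a Spectral ruleset file. The following is a minimal, hypothetical `.spectral.yaml` that extends the built-in OpenAPI rules and adds one naming rule; the rule names, severities, and regex are illustrative only.

```yaml
# Hypothetical .spectral.yaml - adapt rule names, severities, and patterns to your standards.
extends: ["spectral:oas"]
rules:
  # Adjust the severity of built-in OpenAPI rules.
  info-contact: error
  operation-description: warn
  # Example custom rule: path segments should be kebab-case.
  paths-kebab-case:
    description: Paths should be kebab-case.
    severity: warn
    given: $.paths[*]~
    then:
      function: pattern
      functionOptions:
        match: "^(/[a-z0-9.{}-]+)+$"
```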
api-management | Api Management Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md | More information about policies: ||||||--| | [Trace](trace-policy.md) | Adds custom traces into the [request tracing](./api-management-howto-api-inspector.md) output in the test console, Application Insights telemetries, and resource logs. | Yes | Yes<sup>1</sup> | Yes | Yes | | [Emit metrics](emit-metric-policy.md) | Sends custom metrics to Application Insights at execution. | Yes | Yes | Yes | Yes |-| [Emit Azure OpenAI token metrics](azure-openai-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of large language model tokens through Azure OpenAI service APIs. | Yes | Yes | No | No | -| [Emit large language model API token metrics](llm-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of large language model (LLM) tokens through LLM APIs. | Yes | Yes | No | No | +| [Emit Azure OpenAI token metrics](azure-openai-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of large language model tokens through Azure OpenAI service APIs. | Yes | Yes | No | Yes | +| [Emit large language model API token metrics](llm-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of large language model (LLM) tokens through LLM APIs. | Yes | Yes | No | Yes | <sup>1</sup> In the V2 gateway, the `trace` policy currently does not add tracing output in the test console. |
app-service | App Service Key Vault References | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-key-vault-references.md | A key vault reference is of the form `@Microsoft.KeyVault({referenceString})`, w > [!div class="mx-tdBreakAll"] > | Reference string | Description | > |--||-> | SecretUri=_secretUri_ | The **SecretUri** should be the full data-plane URI of a secret in the vault, optionally including a version, e.g., `https://myvault.vault.azure.net/secrets/mysecret/` or `https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931` | +> | SecretUri=_secretUri_ | The **SecretUri** should be the full data-plane URI of a secret in the vault, for example `https://myvault.vault.azure.net/secrets/mysecret`. Optionally, include a version, such as `https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931`. | > | VaultName=_vaultName_;SecretName=_secretName_;SecretVersion=_secretVersion_ | The **VaultName** is required and is the vault name. The **SecretName** is required and is the secret name. The **SecretVersion** is optional but if present indicates the version of the secret to use. | -For example, a complete reference would look like the following string: +For example, a complete reference without a specific version would look like the following string: ```-@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/) +@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret) ``` Alternatively: |
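For example, one way to apply a reference like the ones above is to set it as an app setting with the Azure CLI; the resource group, app, and setting names below are placeholders.

```azurecli
# Set an app setting whose value is a Key Vault reference (names are placeholders).
az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --settings MySecret="@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret)"
```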
app-service | Configure Common | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md | Here, you can configure some common settings for the app. Some settings require > [!NOTE] > Most modern browsers support HTTP/2 protocol over TLS only, while non-encrypted traffic continues to use HTTP/1.1. To ensure that client browsers connect to your app with HTTP/2, secure your custom DNS name. For more information, see [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md). - **Web sockets**: For [ASP.NET SignalR] or [socket.io](https://socket.io/), for example.- - **Always On**: Keeps the app loaded even when there's no traffic. When **Always On** isn't turned on (default), the app is unloaded after 20 minutes without any incoming requests. The unloaded app can cause high latency for new requests because of its warm-up time. When **Always On** is turned on, the front-end load balancer sends a GET request to the application root every five minutes. The continuous ping prevents the app from being unloaded. + - **Always On**: Keeps the app loaded even when there's no traffic. When **Always On** isn't turned on (default), the app is unloaded after 20 minutes without any incoming requests. The unloaded app can cause high latency for new requests because of its warm-up time. When **Always On** is turned on, the front-end load balancer sends a GET request to the application root every five minutes. It's important to ensure this request receives a 200 OK response to ensure any re-imaging operations are performed correctly. The continuous ping prevents the app from being unloaded. Always On is required for continuous WebJobs or for WebJobs that are triggered using a CRON expression. - **Session affinity**: In a multi-instance deployment, ensure that the client is routed to the same instance for the life of the session. You can set this option to **Off** for stateless applications. |
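If you prefer to script the **Always On** setting described above instead of using the portal, a minimal Azure CLI sketch looks like the following (placeholder names):

```azurecli
# Turn on Always On for an existing app (resource names are placeholders).
az webapp config set --resource-group <resource-group> --name <app-name> --always-on true
```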
app-service | Deploy Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md | To use [user-level credentials](#1-generate-deployment-credentials), paste the e When you configure the GitHub workflow file later, you use the secret for the input `creds` of the [Azure/login](https://github.com/marketplace/actions/azure-login). For example: ```yaml-- uses: azure/login@v1+- uses: azure/login@v2 with: creds: ${{ secrets.AZURE_CREDENTIALS }} ``` Check out references on Azure GitHub Actions and workflows: - [Azure/k8s-deploy action](https://github.com/Azure/k8s-deploy) - [Actions workflows to deploy to Azure](https://github.com/Azure/actions-workflow-samples) - [Starter Workflows](https://github.com/actions/starter-workflows)-- [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows)+- [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows) |
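To show how the updated `azure/login@v2` step fits into a complete pipeline, here's a hedged sketch of a workflow that signs in with the `AZURE_CREDENTIALS` secret and then deploys with the `azure/webapps-deploy` action. The app name and package path are assumptions for illustration.

```yaml
# Illustrative workflow - adjust the app name, package path, and trigger to your project.
name: Deploy to Azure App Service
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      # Deploy the checked-out content to the app.
      - uses: azure/webapps-deploy@v3
        with:
          app-name: my-app-service-app
          package: .
```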
app-service | Deploy Staging Slots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md | When you swap two slots (usually from a staging slot *as the source* into the pr 1. Now that the source slot has the pre-swap app previously in the target slot, perform the same operation by applying all settings and restarting the instances. -At any point of the swap operation, all work of initializing the swapped apps happens on the source slot. The target slot remains online while the source slot is being prepared and warmed up, regardless of where the swap succeeds or fails. To swap a staging slot with the production slot, make sure that the production slot is always the target slot. This way, the swap operation doesn't affect your production app. +At any point of the swap operation, all work of initializing the swapped apps happens on the source slot. The target slot remains online while the source slot is being prepared and warmed up, regardless of whether the swap succeeds or fails. To swap a staging slot with the production slot, make sure that the production slot is always the target slot. This way, the swap operation doesn't affect your production app. > [!NOTE] > The instances in your former production slot (those that will be swapped into staging after this swap operation) will be recycled quickly in the last step of the swap process. If you have any long-running operations in your application, they will be abandoned when the workers recycle. This also applies to function apps. Therefore, your application code should be written in a fault-tolerant way. |
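As a concrete example of keeping production as the target slot, a swap initiated from the Azure CLI might look like the following sketch (placeholder resource names):

```azurecli
# Swap the staging slot (source) into production (target); names are placeholders.
az webapp deployment slot swap \
  --resource-group <resource-group> \
  --name <app-name> \
  --slot staging \
  --target-slot production
```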
automation | Region Mappings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/region-mappings.md | -> Start/Stop VMs v1 is retired and we recommend you to start using [Start/Stop VMs v2](../../azure-functions/start-stop-vms/overview.md) which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. +> - Automation Update Management retired on **31 August 2024**, and we recommend that you use Azure Update Manager. Follow the guidelines for [migration from Automation Update Management to Azure Update Manager](../../update-manager/migration-overview.md). +> - Start/Stop VMs v1 is retired, and we recommend that you start using [Start/Stop VMs v2](../../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. [!INCLUDE [./log-analytics-retirement-announcement.md](../includes/log-analytics-retirement-announcement.md)] |
automation | Source Control Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md | Title: Use source control integration in Azure Automation description: This article tells you how to synchronize Azure Automation source control with other repositories. Previously updated : 09/10/2024 Last updated : 10/17/2024 Azure Automation supports three types of source control: > If you have both a Run As account and managed identity enabled, then managed identity is given preference. > [!Important]-> Azure Automation Run As Account will retire on **September 30, 2023** and will be replaced with Managed Identities. Before that date, you need to [migrate from a Run As account to Managed identities](migrate-run-as-accounts-managed-identity.md). +> Azure Automation Run As Account retired on **September 30, 2023**. We recommend that you use [Managed Identities](migrate-run-as-accounts-managed-identity.md). > [!NOTE] > According to [this](/azure/devops/organizations/accounts/change-application-access-policies#application-connection-policies) Azure DevOps documentation, the **Third-party application access via OAuth** policy is set to **off** by default for all new organizations. If you try to configure source control in Azure Automation with **Azure DevOps (Git)** as the source control type without enabling **Third-party application access via OAuth** under the Policies tile of Organization Settings in Azure DevOps, you might get a **SourceControl securityToken is invalid** error. To avoid this error, make sure you first enable **Third-party application access via OAuth** under the Policies tile of Organization Settings in Azure DevOps. |
automation | Operating System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md | description: This article describes the supported Windows and Linux operating sy Previously updated : 09/15/2024 Last updated : 10/17/2024 # Operating systems supported by Update Management -> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). - This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by Update Management. [!INCLUDE [./automation-update-management-retirement-announcement.md](../includes/automation-update-management-retirement-announcement.md)] |
azure-app-configuration | Feature Management Python Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/feature-management-python-reference.md | The `feature_flags` provided to `FeatureManager` can either be the `AzureAppConf Creating a feature filter provides a way to enable features based on criteria that you define. To implement a feature filter, the `FeatureFilter` interface must be implemented. `FeatureFilter` has a single method named `evaluate`. When a feature specifies that it can be enabled for a feature filter, the `evaluate` method is called. If `evaluate` returns `true`, it means the feature should be enabled. -The following snippet demonstrates how to add a customized feature filter `MyCriteriaFilter`. +The following snippet demonstrates how to add a customized feature filter `MyCustomFilter`. ```python feature_manager = FeatureManager(feature_flags, feature_filters=[MyCustomFilter()]) Either a user can be specified directly in the `is_enabled` call or a `Targeting ```python # Directly specifying the user-feature_manager = FeatureManager(feature_flags, "test_user") +result = is_enabled(feature_flags, "test_user") # Using a TargetingContext-feature_manager = FeatureManager(feature_flags, TargetingContext(user_id="test_user", groups=["Ring1"])) +result = is_enabled(feature_flags, TargetingContext(user_id="test_user", groups=["Ring1"])) ``` ### Targeting exclusion |
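To make the feature-filter description above more concrete, here's a minimal sketch of a custom filter. The import path, the `evaluate` signature, and the parameter names are assumptions based on the interface described in the article, so verify them against the library version you're using.

```python
# Sketch only: import path, evaluate() signature, and parameter keys are assumptions.
from featuremanagement import FeatureManager, FeatureFilter


class MyCustomFilter(FeatureFilter):
    def evaluate(self, context, **kwargs):
        # 'context' is assumed to carry the parameters configured on the feature flag.
        # Return True to enable the feature, False otherwise.
        allowed_region = context.get("parameters", {}).get("AllowedRegion")
        return kwargs.get("region") == allowed_region


# feature_flags: the AzureAppConfigurationProvider or JSON dict described in the article.
feature_manager = FeatureManager(feature_flags, feature_filters=[MyCustomFilter()])
```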
azure-cache-for-redis | Cache Best Practices Enterprise Tiers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-enterprise-tiers.md | Redis Enterprise, on the other hand, can use multiple vCPUs for the Redis instan The tables show the number of vCPUs used for the primary shards, not the replica shards. Shards don't map one-to-one to the number of vCPUs. The tables only illustrate vCPUs, not shards. Some configurations use more shards than available vCPUs to boost performance in some usage scenarios. -### E1 (preview) +### E1 |Capacity|Effective vCPUs| |:|:| The [data persistence](cache-how-to-premium-persistence.md) feature in the Enter Many customers want to use persistence to take periodic backups of the data on their cache. We don't recommend that you use data persistence in this way. Instead, use the [import/export](cache-how-to-import-export-data.md) feature. You can export copies of cache data in RDB format directly into your chosen storage account and trigger the data export as frequently as you require. Export can be triggered either from the portal or by using the CLI, PowerShell, or SDK tools. -## E1 (preview) SKU Limitations +## E1 SKU Limitations -The E1 (preview) SKU is intended for dev/test scenarios, primarily. E1 runs on smaller [burstable VMs](/azure/virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model). Burstable VMs offer variable performance based on how much CPU is consumed. Unlike other Enterprise SKU offerings, you can't _scale out_ the E1 SKU, although it's still possible to _scale up_ to a larger SKU. The E1 SKU also doesn't support [active geo-replication](cache-how-to-active-geo-replication.md). +The E1 SKU is intended for dev/test scenarios, primarily. E1 runs on smaller [burstable VMs](/azure/virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model). Burstable VMs offer variable performance based on how much CPU is consumed. Unlike other Enterprise SKU offerings, you can't _scale out_ the E1 SKU, although it's still possible to _scale up_ to a larger SKU. The E1 SKU also doesn't support [active geo-replication](cache-how-to-active-geo-replication.md). ## Related content |
azure-functions | Flex Consumption Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-plan.md | In Flex Consumption, many of the standard application settings and site configur Keep these other considerations in mind when using Flex Consumption plan during the current preview: + **Host**: There is a 30 seconds timeout for the app initialization. If your function app takes longer than 30 seconds to start you will see gRPC related System.TimeoutException entries. This timeout will be configurable and a more clear exception will be implemented as part of [this host work item](https://github.com/Azure/azure-functions-host/issues/10482).-+ **Durable Functions Performance**: Due to the per function scaling nature of Flex Consumption, to ensure the best performance for Durable Functions we recommend setting the [Always Ready instance count](./flex-consumption-how-to.md#set-always-ready-instance-counts) for the `durable` group to `1`. Also, with the Azure Storage provider, consider reducing the [queue polling interval](./durable/durable-functions-azure-storage-provider.md#queue-polling) to 10 seconds or less. ++ **Durable Functions**: Due to the per function scaling nature of Flex Consumption, to ensure the best performance for Durable Functions we recommend setting the [Always Ready instance count](./flex-consumption-how-to.md#set-always-ready-instance-counts) for the `durable` group to `1`. Also, with the Azure Storage provider, consider reducing the [queue polling interval](./durable/durable-functions-azure-storage-provider.md#queue-polling) to 10 seconds or less. Only Azure Storage is supported as a backend storage providers for Flex Consumption hosted durable functions. + **VNet Integration** Ensure that the `Microsoft.App` Azure resource provider is enabled for your subscription by [following these instructions](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider). The subnet delegation required by Flex Consumption apps is `Microsoft.App/environments`. + **Triggers**: All triggers are fully supported except for Kafka and Azure SQL triggers. The Blob storage trigger only supports the [Event Grid source](./functions-event-grid-blob-trigger.md). Non-C# function apps must use version `[4.0.0, 5.0.0)` of the [extension bundle](./functions-bindings-register.md#extension-bundles), or a later version. + **Regions**: Not all regions are currently supported. To learn more, see [View currently supported regions](flex-consumption-how-to.md#view-currently-supported-regions). Keep these other considerations in mind when using Flex Consumption plan during + **Managed dependencies**: [Managed dependencies in PowerShell](functions-reference-powershell.md#dependency-management) aren't supported by Flex Consumption. You must instead [define your own custom modules](functions-reference-powershell.md#custom-modules). + **Diagnostic settings**: Diagnostic settings are not currently supported. + **Certificates**: Loading certificates with the WEBSITE_LOAD_CERTIFICATES app setting is currently not supported.- -## Related articles ++ **Key Vault References**: Key Vault references in app settings do not work when Key Vault is network access restricted, even if the function app has Virtual Network integration. 
The current workaround is to directly reference the Key Vault in code and read the required secrets.++## Related articles [Azure Functions hosting options](functions-scale.md) [Create and manage function apps in the Flex Consumption plan](flex-consumption-how-to.md) |
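For example, the workaround noted above might look like the following Python sketch, which reads a secret directly by using the function app's managed identity. The vault URL and secret name are placeholders, and the app still needs data-plane access to the vault over the integrated network.

```python
# Sketch of reading a secret directly from Key Vault (vault URL and secret name are placeholders).
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # uses the function app's managed identity when deployed
client = SecretClient(vault_url="https://myvault.vault.azure.net", credential=credential)

# Requires the app's identity to have secret-read permissions on the vault.
my_secret = client.get_secret("mysecret").value
```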
azure-functions | Functions Develop Vs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md | For a full list of the bindings supported by Functions, see [Supported bindings] Azure Functions Core Tools lets you run Azure Functions project on your local development computer. When you press F5 to debug a Functions project, the local Functions host (func.exe) starts to listen on a local port (usually 7071). Any callable function endpoints are written to the output, and you can use these endpoints for testing your functions. For more information, see [Work with Azure Functions Core Tools](functions-run-local.md). You're prompted to install these tools the first time you start a function from Visual Studio. ++> [!IMPORTANT] +> Starting with version 4.0.6517 of the Core Tools, in-process model projects must reference [version 4.5.0 or later of `Microsoft.NET.Sdk.Functions`](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/4.5.0). If an earlier version is used, the `func start` command will error. ++ To start your function in Visual Studio in debug mode: 1. Press F5. If prompted, accept the request from Visual Studio to download and install Azure Functions Core (CLI) tools. You might also need to enable a firewall exception so that the tools can handle HTTP requests. |
azure-functions | Functions Dotnet Class Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md | The following example shows the relevant parts of the `.csproj` files that have <AzureFunctionsVersion>v4</AzureFunctionsVersion> </PropertyGroup> <ItemGroup>- <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.4.0" /> + <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.5.0" /> </ItemGroup> ``` +> [!IMPORTANT] +> Starting with version 4.0.6517 of the Core Tools, in-process model projects must reference [version 4.5.0 or later of `Microsoft.NET.Sdk.Functions`](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/4.5.0). If an earlier version is used, the `func start` command will error. + # [v1.x](#tab/v1) ```xml |
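If an existing in-process project references an older version, one way to move to the required package version is with the .NET CLI from the project directory:

```console
dotnet add package Microsoft.NET.Sdk.Functions --version 4.5.0
```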
azure-functions | Functions Run Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md | For help with version-related issues, see [Core Tools versions](#v2). In the terminal window or from a command prompt, run the following command to create a project in the `MyProjFolder` folder: ::: zone pivot="programming-language-csharp"-### [Isolated process](#tab/isolated-process) +### [Isolated worker model](#tab/isolated-process) ```console func init MyProjFolder --worker-runtime dotnet-isolated func init MyProjFolder --worker-runtime dotnet-isolated By default this command creates a project that runs in-process with the Functions host on the current [Long-Term Support (LTS) version of .NET Core]. You can use the `--target-framework` option to target a specific supported version of .NET, including .NET Framework. For more information, see the [`func init`](functions-core-tools-reference.md#func-init) reference. -### [In-process](#tab/in-process) +### [In-process model](#tab/in-process) ```console func init MyProjFolder --worker-runtime dotnet mvn clean package mvn azure-functions:run ``` ::: zone-end ++### [Isolated worker model](#tab/isolated-process) ++``` +func start +``` ++### [In-process model](#tab/in-process) ++``` +func start +``` ++> [!IMPORTANT] +> Starting with version 4.0.6517 of the Core Tools, in-process model projects must reference [version 4.5.0 or later of `Microsoft.NET.Sdk.Functions`](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/4.5.0). If an earlier version is used, the `func start` command will error. +++ ``` func start ``` If you must use a binding extension or an extension version not in a supported b Major versions of Azure Functions Core Tools are linked to specific major versions of the Azure Functions runtime. For example, version 4.x of Core Tools supports version 4.x of the Functions runtime. This version is the recommended major version of both the Functions runtime and Core Tools. You can determine the latest release version of Core Tools in the [Azure Functions Core Tools repository](https://github.com/Azure/azure-functions-core-tools/releases/latest). +<a name="in-process-minimum-version"></a> +Starting with version 4.0.6517 of the Core Tools, in-process model projects must reference [version 4.5.0 or later of `Microsoft.NET.Sdk.Functions`](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/4.5.0). If an earlier version is used, the `func start` command will error. + Run the following command to determine the version of your current Core Tools installation: ```command |
azure-maps | Migrate Get Static Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-get-static-map.md | https://dev.virtualearth.net/REST/v1/Imagery/Map/Road/51.504810,-0.113629/15?map Azure Maps _Get Map Static Image_ API sample GET request: ``` http-https://atlas.microsoft.com/map/static?api-version=2024-04-01&tilesetId=microsoft.base.road&zoom=15¢er=-0.113629,51.504810&subscription-key={Your-Azure-Maps-Subscription-key} +https://atlas.microsoft.com/map/static?api-version=2024-04-01&tilesetId=microsoft.base.road&zoom=15¢er=-0.113629,51.504810&height=500&Width=500&pins=default||-0.113629 51.504810||&subscription-key={Your-Azure-Maps-Subscription-key} ``` ## Response examples |
azure-maps | Spatial Io Connect Wfs Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-connect-wfs-service.md | The `WfsClient` class supports the following features: The `atlas.io.ogc.WfsClient` class in the spatial IO module makes it easy to query a WFS service and convert the responses into GeoJSON objects. This GeoJSON object can then be used for other mapping purposes. +<!-- The [Simple WFS example] sample shows how to easily query a Web Feature Service (WFS) and renders the returned features on the map. For the source code for this sample, see [Simple WFS example source code]. :::image type="content" source="./media/spatial-io-connect-wfs-service/simple-wfs-example.png"alt-text="A screenshot that shows the results of a WFS overlay on a map.":::--<!-- -> [!VIDEO //codepen.io/azuremaps/embed/MWwvVYY/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] -> ## Supported filters |
azure-vmware | Configure Azure Elastic San | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-azure-elastic-san.md | Azure Elastic storage area network (SAN) addresses the problem of workload optim The following prerequisites are required to continue. -- Verify you have a Dev/Test private cloud in a [region that Elastic SAN is available in](../storage/elastic-san/elastic-san-create.md).+- Verify you have a private cloud in a [region that Elastic SAN is available in](../storage/elastic-san/elastic-san-create.md). - Know the availability zone your private cloud is in. - - In the UI, select an Azure VMware Solution host. - > [!NOTE] - > The host exposes its Availability Zone. You should use that AZ when deploying other Azure resources for the same subscription. + - In the UI, select an Azure VMware Solution host. + > [!NOTE] + > The host exposes its Availability Zone. You should use that AZ when deploying other Azure resources for the same subscription. - You have permission to set up new resources in the subscription your private cloud is in. -## Set preview feature flags +- Reserve a dedicated address block for your external storage. -To use ElasticSAN with Azure VMware Solution, you need to set three feature flags on your subscription: +## Supported host types -- earlyAccess-- iSCSIMultipath-- ElasticSanDatastore+To use Elastic SAN with Azure VMware Solution, you can use any of these three host types: -Setting a feature flag can be done in the subscription overview page in the Azure portal. +- AV36  -1. Under the **Settings** section, select **Preview features**. -1. On the **Preview features** page, use the search bar to find the feature flags you need to register. Once found, select the feature flag you want to register and select **Register** at the top. -1. Verify the **State** of the feature is changed to **Registered** with a green checkmark. +- AV36P  +- AV52  ++Using AV64 with Elastic SAN is not currently supported. ## Set up Elastic SAN -In this section, you create a virtual network for your Elastic SAN. Then you create the Elastic SAN that includes creating at least one volume group and one volume that becomes your VMFS datastore. Next, you set up a Private Endpoint for your Elastic SAN that allows your private cloud to connect to the Elastic SAN volume. Then you're ready to add an Elastic SAN volume as a datastore in your private cloud. +In this section, you create a virtual network for your Elastic SAN. Then you create the Elastic SAN that includes creating at least one volume group and one volume that becomes your VMFS datastore. Next, you set up private endpoints for your Elastic SAN that allows your private cloud to connect to the Elastic SAN volume. Then you're ready to add an Elastic SAN volume as a datastore in your private cloud. 1. Use one of the following instruction options to set up a dedicated virtual network for your Elastic SAN: - [Azure portal](../virtual-network/quick-create-portal.md) - [Azure PowerShell module](../virtual-network/quick-create-powershell.md) - [Azure CLI](../virtual-network/quick-create-cli.md) 1. Use one of the following instruction options to set up an Elastic SAN, your dedicated volume group, and initial volume in that group:- > [!IMPORTANT] - > Create your Elastic SAN in the same region and availability zone as your private cloud for best performance. 
- - [Azure portal](/azure/storage/elastic-san/elastic-san-create?tabs=azure-portal) - - [PowerShell](/azure/storage/elastic-san/elastic-san-create?tabs=azure-powershell) - - [Azure CLI](/azure/storage/elastic-san/elastic-san-create?tabs=azure-cli) + > [!IMPORTANT] + > + > Create your Elastic SAN in the same region and availability zone as your private cloud for best performance. + - [Azure portal](/azure/storage/elastic-san/elastic-san-create?tabs=azure-portal) + - [PowerShell](/azure/storage/elastic-san/elastic-san-create?tabs=azure-powershell) + - [Azure CLI](/azure/storage/elastic-san/elastic-san-create?tabs=azure-cli) 1. Use one of the following instructions to configure a Private Endpoint (PE) for your Elastic SAN:- > [!IMPORTANT] - > You must have a Private Endpoint set up for your dedicated volume group to be able to connect your SDDC to the Elastic SAN. - - [PowerShell](/azure/storage/elastic-san/elastic-san-networking?tabs=azure-powershell#configure-a-private-endpoint) - - [Azure CLI](/azure/storage/elastic-san/elastic-san-networking?tabs=azure-cli#tabpanel_2_azure-cli) + > [!IMPORTANT] + > + > You must have a Private Endpoint set up for your dedicated volume group to be able to connect your SDDC to the Elastic SAN. + - [Azure Portal](/azure/storage/elastic-san/elastic-san-networking?tabs=azure-portal#tabpanel_2_azure-portal) + - [PowerShell](/azure/storage/elastic-san/elastic-san-networking?tabs=azure-powershell#configure-a-private-endpoint) + - [Azure CLI](/azure/storage/elastic-san/elastic-san-networking?tabs=azure-cli#tabpanel_2_azure-cli) -## Add an Elastic SAN volume as a datastore +## Configuration recommendations ++You should use multiple private endpoints to establish multiple sessions between an Elastic SAN and each volume group you intend to connect to your SDDC. Because of how Elastic SAN handles sessions, having multiple sessions comes with two benefits: increased performance thanks to parallelization, and increased reliability to handle single session disconnects due to unexpected factors like network glitches. When you establish multiple sessions, it mitigates the impact of session disconnects, as long as the connection re-established within a few seconds, your other sessions help load-balance traffic. ++ > [!NOTE] + > Session disconnects may still show up as "All Paths Down" or "APD" events, which can be seen in the Events section of the ESXi Host at vCenter. You can also see them in the logs: it will show the identifier of a device or filesystem, and state it has entered the All Paths Down state. -Once all three feature flags (earlyAccess, iSCSIMultipath, ElasticSanDatastore) are set on your subscription, you can use the Azure portal to add the Elastic SAN volume as a datastore in your private cloud. Use the steps in [configure external storage address block](#configure-external-storage-address-block) to add, connect, disconnect, and delete Elastic SAN. +Each private endpoint provides two sessions to Elastic SAN per host. The recommended number of sessions to Elastic SAN per host is 8, but because the maximum number of sessions an Elastic SAN datastore can handle is 128, the ideal number for your setup depends on the number of hosts in your private cloud. + > [!IMPORTANT] + > You should configure all Private Endpoints before attaching a volume as a datastore. Adding Private Endpoints after a volume is attached as a datastore will require detaching the datastore and reconnecting it to the cluster. 
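As a worked example based on the numbers above: each private endpoint contributes 2 sessions per host, so 4 private endpoints give each host the recommended 8 sessions. A private cloud with 16 hosts would then consume 16 × 8 = 128 sessions, which is the datastore maximum; larger clusters need proportionally fewer private endpoints per volume group to stay within that limit.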
## Configure external storage address block Start by providing an IP block for deploying external storage. Navigate to the **Storage** tab in your Azure VMware Solution private cloud in the Azure portal. The address block should be a /24 network. Start by providing an IP block for deploying external storage. Navigate to the * - The address block can't overlap any of the following restricted network blocks: 100.72.0.0/15 - The address block provided is used to enable multipathing from the ESXi hosts to the target, it can’t be edited or changed. If you do need to change it, submit a support request. -After you provide an External storage address block, you can connect to an Elastic SAN volume from the same page. - ## Connect Elastic SAN -First, you need to connect your SDDC express route with the private endpoint you set up for your Elastic SAN volume group. Instructions on how to establish this connection can be found in the Tutorial, [Configure networking for your VMware private cloud in Azure](../azure-vmware/tutorial-configure-networking.md). +After you provide an External storage address block, you need to connect your private cloud express route with the private endpoint(s) you set up for your Elastic SAN volume group(s). To learn how to establish these connections, see [Configure networking for your VMware private cloud in Azure](../azure-vmware/tutorial-configure-networking.md). ++> [!NOTE] +> Connection to Elastic SAN from Azure VMWare Solution happens via private endpoints to provide the highest network security. Since your private cloud connects to Elastic SAN in Azure through an ExpressRoute virtual network gateway, you may experience intermittent connectivity issues during [gateway maintenance](/azure/expressroute/expressroute-about-virtual-network-gateways). +> These connectivity issues aren't expected to impact the availability of the datastore backed by Elastic SAN as the connection will be re-established within seconds. The potential impact from gateway maintenance is covered under the [Service Level Agreement](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1) for ExpressRoute virtual network gateways and private endpoints. +## Add an Elastic SAN volume as a datastore Once your SDDC express route is connected with the private endpoint for your Elastic SAN volume group, use the following steps to connect the volume to your SDDC: 1. From the left navigation in your Azure VMware Solution private cloud, select **Storage**, then **+ Connect Elastic SAN**. 1. Select your **Subscription**, **Resource**, **Volume Group**, **Volume(s)**, and **Client cluster**. 1. From section, "Rename datastore as per VMware requirements", under **Volume name** > **Data store name**, give names to the Elastic SAN volumes.- > [!NOTE] - > For best performance, verify that your Elastic SAN volume and private cloud are in the same Region and Availability Zone. + > [!NOTE] + > For best performance, verify that your Elastic SAN volume and private cloud are in the same Region and Availability Zone. ## Disconnect and delete an Elastic SAN-based datastore To delete the Elastic SAN-based datastore, use the following steps from the Azur :::image type="content" source="media/configure-azure-elastic-san/elastic-san-datastore-list-ellipsis-removal.png" alt-text="Screenshot showing Elastic SAN volume removal." border="false"lightbox="media/configure-azure-elastic-san/elastic-san-datastore-list-ellipsis-removal.png"::: 1. 
Optionally you can delete the volume you previously created in your Elastic SAN.- > [!NOTE] - > This operation can't be completed if virtual machines or virtual disks reside on an Elastic SAN VMFS Datastore. + > [!NOTE] + > This operation can't be completed if virtual machines or virtual disks reside on an Elastic SAN VMFS Datastore. |
azure-web-pubsub | Howto Mqtt Pubsub Among Mqtt Clients | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-mqtt-pubsub-among-mqtt-clients.md | + + Title: PubSub among MQTT web clients ++description: A how-to guide that shows how to subscribe to messages on a topic and send messages to a topic without the involvement of a typical application server ++++ Last updated : 10/17/2024++# Publish/subscribe among MQTT web clients ++This quickstart guide demonstrates how to +> [!div class="checklist"] +> * **connect** to your Web PubSub resource +> * **subscribe** to messages on a specific topic +> * **publish** messages to a topic ++## Prerequisites +- A Web PubSub resource. To create one, you can follow the guidance: [Create a Web PubSub resource](./howto-develop-create-instance.md) +- A code editor, such as Visual Studio Code +- Dependencies for the language you plan to use ++> [!NOTE] +> Except for the MQTT client libraries mentioned below, you can choose any standard MQTT client libraries that meet the following requirements to connect to Web PubSub: +> * Support WebSocket transport. +> * Support MQTT protocol 3.1.1 or 5.0. ++# [JavaScript](#tab/javascript) ++```bash +mkdir pubsub_among_clients +cd pubsub_among_clients ++npm install mqtt +``` ++# [C#](#tab/csharp) ++```bash +mkdir pubsub_among_clients +cd pubsub_among_clients ++# Create a new .net console project +dotnet new console ++dotnet add package MqttNet +``` ++# [Python](#tab/python) +```bash +mkdir pubsub_among_clients +cd pubsub_among_clients ++pip install paho-mqtt +``` ++<!--Java, Go, C++(Using VCPKG)--> +++## Connect to Web PubSub ++An MQTT client uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/clients/mqtt/hubs/<hub_name>?access_token=<token>`. ++A client can obtain the Client Access URL in a few ways. It's best practice to not hard code the Client Access URL in your code. In production, we usually set up an app server to return this URL on demand. [Generate Client Access URL](./howto-generate-client-access-url.md) describes the practice in detail. ++For this quickstart, you can copy and paste one from the Azure portal, as shown in the following diagram. ++![The diagram shows how to get MQTT client access url.](./media/quickstarts-pubsub-among-mqtt-clients/portal-mqtt-client-access-uri-generation.png) ++As shown in the preceding diagram, the client has permissions to send messages to topic `group1` and to subscribe to topic `group2`. +++The following code shows how to connect MQTT clients to Web PubSub with MQTT protocol version 5.0, clean start, and a 30-second session expiry interval. 
++# [JavaScript](#tab/javascript) ++Create a file with name `index.js` and add following code ++```javascript +const mqtt = require('mqtt'); +var client = mqtt.connect(`wss://<service_name>.webpubsub.azure.com/clients/mqtt/hubs/<hub_name>?access_token=<token>`, + { + clientId: "client1", + protocolVersion: 5, // Use MQTT 5.0 protocol + clean: true, + properties: { + sessionExpiryInterval: 30, + }, + }); +``` ++# [C#](#tab/csharp) ++Edit the `Program.cs` file and add following code ++```csharp +using MQTTnet; +using MQTTnet.Client; ++var mqttFactory = new MqttFactory(); +var client = mqttFactory.CreateMqttClient(); +var mqttClientOptions = new MqttClientOptionsBuilder() + .WithWebSocketServer((MqttClientWebSocketOptionsBuilder b) => + b.WithUri("wss://<service_name>.webpubsub.azure.com/clients/mqtt/hubs/<hub_name>?access_token=<token>")) + .WithClientId("client1") + .WithProtocolVersion(MQTTnet.Formatter.MqttProtocolVersion.V500) + .WithCleanStart() + .WithSessionExpiryInterval(30) + .Build(); +await client.ConnectAsync(mqttClientOptions, CancellationToken.None); +``` ++# [Python](#tab/python) +```python +import paho.mqtt.client as mqtt +from paho.mqtt.packettypes import PacketTypes ++def on_connect(client, userdata, flags, reasonCode, properties): + print("Connected with result code "+str(reasonCode)) ++def on_connect_fail(client, userData): + print("Connection failed") + print(userData) ++def on_log(client, userdata, level, buf): + print("log: ", buf) ++host = "<service_name>.webpubsub.azure.com" +port = 443 +client = mqtt.Client(client_id= client_id, transport="websockets", protocol= mqtt.MQTTv5) +client.ws_set_options(path="/clients/mqtt/hubs/<hub_name>?access_token=<token>") +client.tls_set() +client.on_connect = on_connect +client.on_connect_fail = on_connect_fail +client.on_log = on_log +connect_properties.SessionExpiryInterval = 30 +client.connect(host, port, clean_start = True, properties=connect_properties) +``` +++### Troubleshooting ++If your client failed to connect, you could use the Azure Monitor for troubleshooting. See [Monitor Azure Web PubSub](./howto-azure-monitor.md) for more details. ++You can check the connection parameters and get more detailed error messages from the Azure Monitor. For example, the following screenshot of Azure Log Analytics shows that the connection was rejected because it set an invalid keep alive interval. +![Screenshot of Azure Log Analytics.](./media/quickstarts-pubsub-among-mqtt-clients/diagnostic-log.png) ++## Subscribe to a topic ++To receive messages from topics, the client +- must subscribe to the topic it wishes to receive messages from +- has a callback to handle message event ++The following code shows a client subscribes to topics named `group2`. ++# [JavaScript](#tab/javascript) ++```javascript +// ...code from the last step ++// Provide callback to the message event. +client.on("message", async (topic, payload, packet) => { + console.log(topic, payload) +}); ++// Subscribe to a topic. +client.subscribe("group2", { qos: 1 }, (err, granted) => { console.log("subscribe", granted); }) ++``` ++# [C#](#tab/csharp) ++```csharp +// ...code from the last step ++// Provide callback to the message event. +client.ApplicationMessageReceivedAsync += (args) => +{ + Console.WriteLine($"Received message on topic '{args.ApplicationMessage.Topic}': {System.Text.Encoding.UTF8.GetString(args.ApplicationMessage.PayloadSegment)}"); + return Task.CompletedTask; +}; +// Subscribe to a topic "topic". 
+await client.SubscribeAsync("group2", MQTTnet.Protocol.MqttQualityOfServiceLevel.AtLeastOnce); +``` ++# [Python](#tab/python) ++```python +# ...code from the last step ++# Provide callback to the message event. +def subscriber_on_message(client, userdata, msg): + print(msg.topic+" "+str(msg.payload)) +client.on_message = subscriber_on_message ++# Subscribe to the topic "group2". +client.subscribe("group2") ++# Blocking call that processes network traffic, dispatches callbacks and +# handles reconnecting. +# Other loop*() functions are available that give a threaded interface and a +# manual interface. +client.loop_forever() +``` ++++## Publish a message to a group +In the previous step, we set up everything needed to receive messages on a topic. Now we send messages to the topic `group1`. ++# [JavaScript](#tab/javascript) ++```javascript +// ...code from the last step ++// Send message "Hello World" in the "text" format to "group1". +client.publish("group1", "Hello World!") +``` ++# [C#](#tab/csharp) ++```csharp +// ...code from the last step ++// Send message "Hello World" in the "text" format to "group1". +await client.PublishStringAsync("group1", "Hello World!"); +``` ++# [Python](#tab/python) ++```python +# ...code from the last step ++# Send message "Hello World" in the "text" format to "group1". +client.publish("group1", "Hello World!") +``` +++By using the client SDK, you now know how to +> [!div class="checklist"] +> * **connect** to your Web PubSub resource +> * **subscribe** to topics +> * **publish** messages to topics |
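If you want a single Python client from the preceding snippets to both receive messages on `group2` and publish to `group1` in the same script, note that `loop_forever()` blocks. Here's a minimal sketch using paho-mqtt's background network loop; it assumes the same `client` object created in the connect and subscribe steps, and the topic names follow the example above:

```python
# ...code from the connect and subscribe steps

# Run the network loop in a background thread instead of blocking with loop_forever().
client.loop_start()

# The same client can now publish while its subscription callback keeps receiving messages.
client.publish("group1", "Hello World!")

# ...keep the program alive as long as needed, then shut the loop down cleanly:
# client.loop_stop()
```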
azure-web-pubsub | Overview Mqtt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/overview-mqtt.md | Title: MQTT support in Azure Web PubSub service -description: Get an overview of Azure Web PubSub's support for the MQTT protocols, understand typical use case scenarios to use MQTT in Azure Web PubSub, and learn the key benefits of MQTT in Azure Web PubSub. +description: Get an overview of Azure Web PubSub's support for the MQTT protocols, understand typical use case scenarios of when to use MQTT in Azure Web PubSub, and learn the key benefits of MQTT in Azure Web PubSub. keywords: MQTT, MQTT on Azure Web PubSub, MQTT over WebSocket Previously updated : 07/15/2024 Last updated : 10/17/2024 -# Overview: MQTT in Azure Web PubSub service (Preview) --[MQTT](https://mqtt.org/) is a lightweight pub/sub messaging protocol designed for devices with constrained resources. Azure Web PubSub service now natively supports MQTT over WebSocket transport. --You can use MQTT protocols in Web PubSub service for the following scenarios: --* Pub/Sub among MQTT clients and Web PubSub native clients. -* Broadcast messages to MQTT clients. -* Get notifications for MQTT client lifetime events. +# MQTT in Azure Web PubSub service (Preview) > [!NOTE] > MQTT support in Azure Web PubSub is in preview stage. -## Key features --### Standard MQTT protocols support --Web PubSub service supports MQTT 3.1.1 and 5.0 protocols in a standard way that any MQTT SDK with WebSocket transport support can connect to Web PubSub. Users who wish to use Web PubSub in a programming language that doesn't have a native Web PubSub SDK can still connect and communicate using MQTT. --### Cross-protocol communication --MQTT clients can communicate with clients of other Web PubSub protocols. Find more details [here](./reference-mqtt-cross-protocol-communication.md) --### Easy MQTT adoption for current Web PubSub users --Current users of Azure Web PubSub can use MQTT protocol with minimal modifications to their existing upstream servers. The Web PubSub REST API is already equipped to handle MQTT connections, simplifying the transition process. +## Overview +[MQTT](https://mqtt.org/) is a lightweight pub/sub messaging protocol designed for devices with constrained resources. Azure Web PubSub service now natively supports MQTT over WebSocket transport, enabling cross-communication between MQTT web clients and other Web PubSub clients -### Client-to-server request/response model +This new capability addresses two key use cases: +1. Real-time applications with mixed protocols: You can allow clients using different protocols to exchange data in real-time through Azure Web PubSub service. -In addition to the client-to-client pub/sub model provided by the MQTT protocols, Web PubSub also support a client-to-server request/response model. Basically Web PubSub converts a specific kind of MQTT application messages into HTTP requests to registered webhooks, and sends the HTTP responses as application messages back to the MQTT clients. +2. Support for more programming languages: You can use any MQTT library to connect with the service, making it possible to integrate with applications written in languages like C++, beyond the available SDKs in C#, JavaScript, Python, and Java. -For more details, see [MQTT custom event handler protocol](./reference-mqtt-cloud-events.md#user-custom_event-event). 
+ItΓÇÖs important to note that this MQTT support is a lightweight adaptation of the MQTT protocol and extends only to the features already supported by Azure Web PubSub. Some MQTT features that aren't supported include: +- Wildcard subscriptions +- Retained messages +- Shared subscriptions +- Topic alias -## MQTT feature support status +For a comprehensive list of what MQTT features are supported, read [this documentation article](./reference-mqtt-support-status.md). -Web PubSub support MQTT protocol version 3.1.1 and 5.0. The supported features include but not limited to: +For a more comprehensive MQTT broker solution on Azure, we recommend exploring [Azure Event Grid](../event-grid/overview.md). -* All the levels of Quality Of Service including at most once, at least once and exactly once. -* Persistent session. MQTT sessions are preserved for up to 30 seconds when client connections are interrupted. -* Last Will & Testament -* Client Certificate Authentication --### Additional features supported for MQTT 5.0 --* Message Expiry Interval and Session Expiry Interval -* Subscription Identifier. -* Assigned Client ID. -* Flow Control -* Server-Sent Disconnect --### Not supported feature --* Wildcard subscription -* Retained messages -* Topic alias -* Shared subscription +## Real-time data exchange patterns enabled by the MQTT support +- Pub/Sub among MQTT web clients and Web PubSub native clients +- Broadcast messages to MQTT web clients +- Receive notifications for lifetime events of MQTT web client ## How MQTT is adapted into Web PubSub's system -This section assumes you have basic knowledge about MQTT protocols and Web PubSub. You can find the definitions of MQTT terms in [MQTT V5.0.0 Spec](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901003). You can also learn basic concepts of Web PubSub in [Basic Concepts](./key-concepts.md). +> [!NOTE] +> This section assumes you have basic knowledge about MQTT protocols and Azure Web PubSub. -The following table shows similar or equivalent term mappings between MQTT and Web PubSub. It helps you understand how we adapt MQTT concepts into the Web PubSub's system. It's essential if you want to use the [data-plane REST API](./reference-rest-api-data-plane.md) or [client event handlers](./howto-develop-eventhandler.md) to interact with MQTT clients. +Azure Web PubSub service now recognizes MQTT messages and translates them to its native protocols. The following table shows similar or equivalent term mappings between MQTT and Web PubSub. It helps you understand how we adapt MQTT concepts into those found in Web PubSub. It's essential if you want to use the [data-plane REST API](./reference-rest-api-data-plane.md) or [client event handlers](./howto-develop-eventhandler.md) to interact with MQTT web clients. [!INCLUDE [MQTT-Term-Mappings](includes/mqtt-term-mappings.md)] -## Client authentication and authorization --In general, a server to authenticate and authorize MQTT clients is required. There are two workflows supported by Web PubSub to authenticate and authorize MQTT clients. --* Workflow 1: The MQTT client gets a [JWT(JSON Web Token)](https://jwt.io) from somewhere with its credential, usually from an auth server. Then the client includes the token in the WebSocket upgrading request to the Web PubSub service, and the Web PubSub service validates the token and auth the client. This workflow is enabled by default. 
--![Diagram of MQTT Auth Workflow With JWT.](./media/howto-connect-mqtt-websocket-client/mqtt-jwt-auth-workflow.png) --* Workflow 2: The MQTT client sends an MQTT CONNECT packet after it establishes a WebSocket connection with the service, then the service calls an API in the upstream server. The upstream server can auth the client according to the username and password fields in the MQTT connection request, and the TLS certificate from the client. This workflow needs explicit configuration. -<!--Add link to tutorial and configuration--> --![Diagram of MQTT Auth Workflow With Upstream Server.](./media/howto-connect-mqtt-websocket-client/mqtt-upstream-auth-workflow.png) --These two workflows can be used individually or in combination. If they're used in together, the auth result in the latter workflow would be honored by the service. --For details on client authentication and authorization, see [How To Connect MQTT Clients to Web PubSub](./howto-connect-mqtt-websocket-client.md). --## Client lifetime event notification --You can register event handlers to get notification when a Web PubSub client connection is started or ended, that is, an MQTT session started or ended. --* [Event handler in Azure Web PubSub service](./howto-develop-eventhandler.md) -* [MQTT CloudEvents Protocol](./reference-mqtt-cloud-events.md) --## REST API support --You can use REST API to do the following things: --* Publish messages to a topic, a connection, a Web PubSub user, or all the connections. -* Manage client permissions and subscriptions. --[REST API specification for MQTT](./reference-rest-api-mqtt.md) --## Event listener support --> [!NOTE] -> Sending MQTT client events to Event Hubs is not supported yet. - ## Next step -> [!div class="nextstepaction"] -> [Quickstart: Pub/Sub among MQTT clients](./quickstarts-pubsub-among-mqtt-clients.md) - > [!div class="nextstepaction"] > [How To Connect MQTT Clients to Web PubSub](./howto-connect-mqtt-websocket-client.md)+> [!div class="nextstepaction"] +> [Quickstart: Pub/Sub among MQTT clients](./howto-mqtt-pubsub-among-mqtt-clients.md) + |
azure-web-pubsub | Reference Mqtt Support Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-mqtt-support-status.md | + + Title: MQTT feature support status in Azure Web PubSub service +description: Comprehensive list of MQTT features supported and unsupported by the Azure Web PubSub service +keywords: MQTT, MQTT on Azure Web PubSub, MQTT over WebSocket ++ Last updated : 10/17/2024++++# MQTT feature support status in Azure Web PubSub service ++Azure Web PubSub service supports the MQTT protocol by translating MQTT messages into its native protocol, enabling cross-communication between MQTT web clients and other Web PubSub clients. Since the MQTT support is a lightweight adaptation of the MQTT protocol, it extends only to the features already supported by the Azure Web PubSub service. Refer to the following lists for what's supported and not supported. ++## Feature support for MQTT versions 3.1.1 and 5.0 +Azure Web PubSub supports MQTT protocol versions 3.1.1 and 5.0. The supported features include, but aren't limited to: ++- All levels of Quality of Service, including at most once, at least once, and exactly once. +- Persistent session. MQTT sessions are preserved for up to 30 seconds when client connections are interrupted, and restored when the client re-establishes a connection with the service. Beyond 30 seconds, the service makes no guarantee that the disrupted session is restored. +- Last will & testament +- Client certificate authentication ++## More features supported for MQTT 5.0 +- Message expiry interval and session expiry interval +- Subscription identifier +- Assigned client ID +- Flow control +- Server-sent disconnect ++## Unsupported features +- Wildcard subscription +- Retained messages +- Topic alias +- Shared subscription |
backup | Backup Azure Policy Supported Skus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-policy-supported-skus.md | OpenLogic | CentOS–LVM | 6.X, 7.X OpenLogic | CentOS–SRIOV | 6.X, 7.X cloudera | cloudera-centos-os | 7.X ->[!Caution] ->CentOS is end-of-life. [Learn more](/azure/virtual-machines/workloads/centos/centos-end-of-life). |
backup | Backup Azure Restore Files From Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md | In Linux, the OS of the computer used to restore files must support the file sys | SLES | 12 and above | | openSUSE | 42.2 and above | ->[!Caution] ->CentOS is end-of-life. [Learn more](/azure/virtual-machines/workloads/centos/centos-end-of-life). - ### Additional components The script also requires Python and bash components to execute and connect securely to the recovery point. |
backup | Backup Support Matrix Iaas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md | Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 09/11/2024 Last updated : 10/17/2024 Adding a disk to a protected VM | Supported. Resizing a disk on a protected VM | Supported. Shared storage| Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up. [Shared disks](/azure/virtual-machines/disks-shared-enable) | Not supported. <br><br> - You can exclude shared disk with Enhanced policy and backup the other supported disks in the VM. <br><br> - You can use S2D to create a shared disk or standalone volumes by combining capacities from disks in different VMs. Azure Backup doesn't support backup of a shared volume (between VMs for database cluster or cluster Configuration) created using S2D.-<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). <br><br> [Supported regions](/azure/virtual-machines/disks-types#ultra-disk-limitations). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks. <br><br> - GRS type vaults cannot be used for enabling backup. <br><br> - File-level restore is currently not supported for machines using Ultra disks. -<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). <br><br> [Supported regions](/azure/virtual-machines/disks-types#regional-availability). <br><br> - Configuration of Premium SSD v2 disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks and GRS type vaults cannot be used for enabling backup. <br><br> - File-level restore is currently not supported for machines using Premium SSD v2 disks. +<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md).[Learn about the disk considerations for Azure VM](/azure/virtual-machines/disks-types#ultra-disk-limitations). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - File-level restore is currently not supported for machines using Ultra disks. <br><br> - GRS vaults and Cross-Region Restore are currently supported in the following regions for machines using Ultra Disks: Southeast Asia, East Asia, North Europe, West Europe, East US, West US, and West US 3. +<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). [Learn about the disk considerations for Azure VM](/azure/virtual-machines/disks-types#regional-availability). <br><br> - Configuration of Premium SSD v2 disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - File-level restore is currently not supported for machines using Premium SSD v2 disks. 
<br><br> - GRS vaults and Cross-Region Restore are currently supported in the following regions for machines using Premium SSDv2 Disks: Southeast Asia, East Asia, North Europe, West Europe, East US, West US, and West US 3. [Temporary disks](/azure/virtual-machines/managed-disks-overview#temporary-disk) | Azure Backup doesn't back up temporary disks. NVMe/[ephemeral disks](/azure/virtual-machines/ephemeral-os-disks) | Not supported. [Resilient File System (ReFS)](/windows-server/storage/refs/refs-overview) restore | Supported. Volume Shadow Copy Service (VSS) supports app-consistent backups on ReFS. |
backup | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md | Title: What's new in the Azure Backup service description: Learn about the new features in the Azure Backup service. Previously updated : 09/11/2024 Last updated : 10/17/2024 - ignite-2023 You can learn more about the new releases by bookmarking this page or by [subscr ## Updates summary - October 2024+ - [GRS and CRR support for Azure VMs using Premium SSD v2 and Ultra Disk is now generally available.](#grs-and-crr-support-for-azure-vms-using-premium-ssd-v2-and-ultra-disk-is-now-generally-available) - [Back up Azure VMs with Extended Zones](#back-up-azure-vms-with-extended-zones-preview) - July 2024 - [Azure Blob vaulted backup is now generally available](#azure-blob-vaulted-backup-is-now-generally-available) You can learn more about the new releases by bookmarking this page or by [subscr - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview) +## GRS and CRR support for Azure VMs using Premium SSD v2 and Ultra Disk is now generally available. ++Azure Backup now supports backup of Azure VMs using Premium SSD v2 and Ultra Disk on GRS vaults and supports Cross-Region Restore (CRR). With Geo-redundant storage (GRS) and cross-region restore support, you can protect your virtual machines from data loss during a disaster and perform periodic audits by restoring data on demand in the secondary region. ++>[!Note] +>The Premium SSD v2 offering provides the most advanced block storage solution designed for a broad range of IO-intensive enterprise production workloads that require sub-millisecond disk latencies as well as high IOPS and throughput, at a low cost. ++For more information, see the [VM backup support matrix for the supported features and region availability](backup-support-matrix-iaas.md#vm-storage-support). ++ ## Back up Azure VMs with Extended Zones (preview) -Azure Backup now enables you to back up your Azure virtual machines in [Azure Extended Zones](../extended-zones/overview.md). Azure Extended Zones offer enhanced resiliency by distributing resources across multiple physical locations within an Azure region. You can back up multiple Azure virtual machines in Azure Extended Zones. +Azure Backup now enables you to back up your Azure virtual machines in the [Azure Extended Zones](../extended-zones/overview.md). Azure Extended Zones offer enhanced resiliency by distributing resources across multiple physical locations within an Azure region. You can back up multiple Azure virtual machines in Azure Extended Zones. For more information, see [Back up an Azure Virtual Machine in Azure Extended Zones](./backup-azure-vms-enhanced-policy.md). |
batch | Batch Linux Nodes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-linux-nodes.md | When you create a virtual machine image reference, you must specify the followin | **Image reference property** | **Example** | | | |-| Publisher |Canonical | -| Offer |UbuntuServer | -| SKU |20.04-LTS | +| Publisher |canonical | +| Offer |0001-com-ubuntu-server-focal | +| SKU |20_04-lts | | Version |latest | > [!TIP] new_pool.start_task = start_task # Create an ImageReference which specifies the Marketplace # virtual machine image to install on the nodes ir = batchmodels.ImageReference(- publisher="Canonical", - offer="UbuntuServer", - sku="20.04-LTS", + publisher="canonical", + offer="0001-com-ubuntu-server-focal", + sku="20_04-lts", version="latest") # Create the VirtualMachineConfiguration, specifying images = client.account.list_supported_images() image = None for img in images: if (img.image_reference.publisher.lower() == "canonical" and- img.image_reference.offer.lower() == "ubuntuserver" and - img.image_reference.sku.lower() == "20.04-lts"): + img.image_reference.offer.lower() == "0001-com-ubuntu-server-focal" and + img.image_reference.sku.lower() == "20_04-lts"): image = img break List<ImageInformation> images = ImageInformation image = null; foreach (var img in images) {- if (img.ImageReference.Publisher == "Canonical" && - img.ImageReference.Offer == "UbuntuServer" && - img.ImageReference.Sku == "20.04-LTS") + if (img.ImageReference.Publisher == "canonical" && + img.ImageReference.Offer == "0001-com-ubuntu-server-focal" && + img.ImageReference.Sku == "20_04-lts") { image = img; break; Although the previous snippet uses the [PoolOperations.istSupportedImages](/dotn ```csharp ImageReference imageReference = new ImageReference(- publisher: "Canonical", - offer: "UbuntuServer", - sku: "20.04-LTS", + publisher: "canonical", + offer: "0001-com-ubuntu-server-focal", + sku: "20_04-lts", version: "latest"); ``` ::: zone-end |
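If you want to confirm the current `publisher`, `offer`, and `sku` strings before updating pool configuration, one option is to list the images your Batch account supports from the Azure CLI. This is an illustrative sketch rather than part of the original article: it assumes you've already signed in to the account with `az batch account login`, and the exact output fields can vary by CLI version.

```azurecli-interactive
# List Batch-supported marketplace images and filter to Canonical Ubuntu 20.04 entries.
az batch pool supported-images list \
    --query "[?imageReference.publisher=='canonical' && contains(imageReference.offer, 'ubuntu-server-focal')].{publisher:imageReference.publisher, offer:imageReference.offer, sku:imageReference.sku, nodeAgentSku:nodeAgentSkuId}" \
    --output table
```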
communication-services | Email Attachment Inline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-attachment-inline.md | + + Title: Enhance email communication with inline attachments ++description: Inline attachments enable you to embed images directly within the email body. ++++ Last updated : 09/30/2024+++++# Enhance email communication with inline attachments ++Email communication is more than just text. It's about creating engaging and visually appealing messages that capture the recipient's attention. ++One way to engage email recipients is by using inline attachments, which enable you to embed images directly within the email body. ++Inline attachments are images or other media files that are embedded directly within the email content, rather than being sent as separate attachments. ++Inline attachments let the recipient view the images as part of the email body, enhancing the overall visual appeal and engagement. ++## Using inline attachments ++Inline attachments are typically used for: ++- Improved Engagement: Inline images can make your emails more visually appealing and engaging. +- Better Branding: Embedding your logo or other brand elements directly in the email can reinforce your brand identity. +- Enhanced User Experience: Inline images can help illustrate your message more effectively, making it easier for recipients to understand and act on your content. ++## Benefits of using CID for inline attachments ++We use the HTML attribute content-ID (CID) to embed images directly into the email body. ++Using CID for inline attachments is considered the best approach for the following reasons: ++- Reliability: CID embedding references the image data using a unique identifier, rather than embedding the data directly in the email body. CID embedding ensures that the images are reliably displayed across different email clients and platforms. +- Efficiency: CID enables you to attach the image to the email and reference it within the HTML content using the unique content-ID. This method is more efficient than base64 encoding, which can significantly increase the size of the email and affect deliverability. +- Compatibility: CID is supported by most email clients, ensuring that your inline images are displayed correctly for most recipients. +- Security: Using CID avoids the need to host images on external servers, which can pose security risks. Instead, the images are included as part of the email, reducing the risk of external content being blocked or flagged as suspicious. ++## Related articles ++- [Quickstart - Send email with attachments using Azure Communication Services](../../quickstarts/email/send-email-advanced/send-email-with-attachments.md) +- [Quickstart - Send email with inline attachments using Azure Communication Services](../../quickstarts/email/send-email-advanced/send-email-with-inline-attachments.md) |
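To make the CID concept above concrete, here's a minimal sketch of sending an email with an inline, CID-referenced image using the `azure-communication-email` Python package. Treat it as illustrative rather than authoritative: the connection string, sender, and recipient are placeholders, and the exact attachment property names (such as `contentId`) can vary by SDK and API version, so confirm them against the inline-attachments quickstart linked above.

```python
import base64
from azure.communication.email import EmailClient

client = EmailClient.from_connection_string("<your-acs-connection-string>")  # placeholder

# Read and base64-encode the image that will be embedded inline.
with open("logo.png", "rb") as image_file:
    logo_base64 = base64.b64encode(image_file.read()).decode()

message = {
    "senderAddress": "<sender@your-verified-domain.com>",            # placeholder
    "recipients": {"to": [{"address": "<recipient@example.com>"}]},  # placeholder
    "content": {
        "subject": "Inline image example",
        # The img element references the attachment through the cid: scheme.
        "html": "<html><body><img src='cid:company-logo' alt='Company logo'></body></html>",
    },
    "attachments": [
        {
            "name": "logo.png",
            "contentType": "image/png",
            "contentInBytes": logo_base64,
            "contentId": "company-logo",  # assumed property name; must match the cid: reference in the HTML body
        }
    ],
}

poller = client.begin_send(message)
print(poller.result())
```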
communication-services | Phone Number Management For Australia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-australia.md | More details on eligible subscription types are as follows: | Country/Region | | :- | | Australia |+| Switzerland | [!INCLUDE [Azure Prepayment](../../includes/azure-prepayment.md)] |
communication-services | Phone Number Management For France | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-france.md | More details on eligible subscription types are as follows: |Japan| |Netherlands| |Spain|+|Switzerland| |United Kingdom| |United States| |
communication-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md | We recommend acquiring identities and tokens before creating chat threads or sta For more information, see the [identity concept overview](./authentication.md) page. ## SMS+ When sending or receiving a high volume of messages, you might receive a ```429``` error. This error indicates you're hitting the service limitations, and your messages are queued to be sent once the number of requests is below the threshold. Rate Limits for SMS: Rate Limits for SMS: |Send Message|Alphanumeric Sender ID |Per resource|60|600|600| ### Action to take+ If you have requirements that exceed the rate-limits, submit [a request to Azure Support](/azure/azure-portal/supportability/how-to-create-azure-support-request) to enable higher throughput. For more information on the SMS SDK and service, see the [SMS SDK overview](./sm You can send a limited number of email messages. If you exceed the following limits for your subscription, your requests are rejected. You can attempt these requests again, after the Retry-After time passes. Take action before reaching the limit by requesting to raise your sending volume limits if needed. -The Azure Communication Services email service is designed to support high throughput. However, the service imposes initial rate limits to help customers onboard smoothly and avoid some of the issues that can occur when switching to a new email service. We recommend gradually increasing your email volume using Azure Communication Services Email over a period of two to four weeks, while closely monitoring the delivery status of your emails. This gradual increase allows third-party email service providers to adapt to the change in IP for your domain's email traffic, thus protecting your sender reputation and maintaining the reliability of your email delivery. +The Azure Communication Services email service is designed to support high throughput. However, the service imposes initial rate limits to help customers onboard smoothly and avoid some of the issues that can occur when switching to a new email service. We recommend gradually increasing your email volume using Azure Communication Services Email over a period of two to four weeks, while closely monitoring the delivery status of your emails. This gradual increase enables third-party email service providers to adapt to the change in IP for your domain's email traffic. The gradual change gives you time to protect your sender reputation and maintain the reliability of your email delivery. -We approve higher limits for customers based on use case requirements, domain reputation, traffic patterns, and failure rates. To request higher limits, follow the instructions at [Quota increase for email domains](./email/email-quota-increase.md). Note that higher quotas are only available for verified custom domains, not Azure-managed domains. +We approve higher limits for customers based on use case requirements, domain reputation, traffic patterns, and failure rates. To request higher limits, follow the instructions at [Quota increase for email domains](./email/email-quota-increase.md). Higher quotas are only available for verified custom domains, not Azure-managed domains. 
### Rate Limits We approve higher limits for customers based on use case requirements, domain re | | | | Number of recipients in Email | 50 | | Total email request size (including attachments) | 10 MB |+| Maximum authenticated connections per subscription | 250 | For all message size limits, you need to consider that that base64 encoding increases the size of the message. You need to increase the size value to account for the message size increase that occurs after the message attachments and any other binary data are Base64 encoded. Base64 encoding increases the size of the message by about 33%, so the message size is about 33% larger than the message sizes before encoding. For example, if you specify a maximum message size value of ~10 MB, you can expect a realistic maximum message size value of approximately ~7.5 MB. To increase your email quota, follow the instructions at [Quota increase for ema ### Size Limits -| **Name** | Limit | -|--|--| +| **Name** | Limit | +| | | |Number of participants in thread|250 | |Batch of participants - CreateThread|200 | |Batch of participants - AddParticipant|200 | The Calling SDK doesn't enforce these limits, but your users might experience pe The following timeouts apply to the Communication Services Calling SDKs: -| Action | Timeout in seconds | -| | | -| Reconnect/removal participant | 120 | -| Add or remove new modality from a call (Start/stop video or screen sharing) | 40 | -| Call Transfer operation timeout | 60 | -| 1:1 call establishment timeout | 85 | -| Group call establishment timeout | 85 | -| PSTN call establishment timeout | 115 | -| Promote 1:1 call to a group call timeout | 115 | -+| Action | Timeout in seconds | +| | | +| Reconnect/removal participant | 120 | +| Add or remove new modality from a call (Start/stop video or screen sharing) | 40 | +| Call Transfer operation timeout | 60 | +| 1:1 call establishment timeout | 85 | +| Group call establishment timeout | 85 | +| PSTN call establishment timeout | 115 | +| Promote 1:1 call to a group call timeout | 115 | ### Action to take For more information about the voice and video calling SDK and service, see the [calling SDK overview](./voice-video-calling/calling-sdk-features.md) page or [known issues](./known-issues.md). You can also [submit a request to Azure Support](/azure/azure-portal/supportability/how-to-create-azure-support-request) to increase some of the limits, pending review by our vetting team. ## Job Router+ When sending or receiving a high volume of requests, you might receive a ```ThrottleLimitExceededException``` error. This error indicates you're hitting the service limitations, and your requests fail until the token of bucket to handle requests is replenished after a certain time. Rate Limits for Job Router: -|Operation|Scope|Timeframe (seconds)| Limit (number of requests) | Timeout in seconds| -||--|-|-|-| -|General Requests|Per Resource|10|1000|10| +| Operation | Scope | Timeframe (seconds) | Limit (number of requests) | Timeout in seconds | +| | | | | | +| General Requests | Per Resource | 10 | 1000 | 10 | ### Action to take+ If you need to send a volume of messages that exceeds the rate limits, email us at acs-ccap@microsoft.com. ## Teams Interoperability and Microsoft Graph+ Using a Teams interoperability scenario, you'll likely use some Microsoft Graph APIs to create [meetings](/graph/cloud-communications-online-meetings). 
Each service offered through Microsoft Graph has different limitations; service-specific limits are [described here](/graph/throttling) in more detail. ### Action to take+ When you implement error handling, use the HTTP error code 429 to detect throttling. The failed response includes the `Retry-After` response header. Backing off requests using the `Retry-After` delay is the fastest way to recover from throttling because Microsoft Graph continues to log resource usage while a client is being throttled. -You can find more information on Microsoft Graph [throttling](/graph/throttling) limits in the [Microsoft Graph](/graph/overview) documentation. +You can find more information about Microsoft Graph [throttling](/graph/throttling) limits in the [Microsoft Graph](/graph/overview) documentation. ## Next steps+ See the [help and support](../support.md) options. |
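Following the throttling guidance in this section, here's a minimal, generic sketch of detecting HTTP 429 and backing off for the `Retry-After` interval. It's illustrative only: the URL, headers, and payload are placeholders rather than a specific documented endpoint, and it assumes the service returns `Retry-After` as a number of seconds.

```python
import time
import requests

def post_with_backoff(url, headers, payload, max_attempts=5):
    """POST to a rate-limited API, waiting out the Retry-After interval on HTTP 429."""
    response = None
    for _ in range(max_attempts):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response
        # Throttled: wait as long as the service asks before retrying.
        retry_after_seconds = int(response.headers.get("Retry-After", "1"))
        time.sleep(retry_after_seconds)
    return response
```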
communication-services | Troubleshoot Web Voip Quality | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/troubleshoot-web-voip-quality.md | Title: Azure Communication Services troubleshooting VoIP call quality description: Learn how to troubleshoot web VoIP call quality with Azure Communication Services.---++ Previously updated : 3/6/2024 Last updated : 10/17/2024 - # Troubleshoot VoIP call quality -This article describes how to troubleshoot and improve web Voice over Internet Protocol (VoIP) call quality in Azure Communication Services. --Voice and video calling experiences are an essential communication tool for businesses, organizations, and individuals in today's world. However, customers can experience quality problems. Four network parameters can affect quality in calls: available bandwidth, round-trip time (RTT), packet loss, and jitter. +This article describes how to troubleshoot and improve web Voice over Internet Protocol (VoIP) call quality in Azure Communication Services. Voice and video calling experiences are an essential communication tool for businesses, organizations, and individuals in today's world. However, customers can experience quality problems. Four network parameters can affect quality in calls: available bandwidth, round-trip time (RTT), packet loss, and jitter. If quality problems arise with VoIP calling in Azure Communication Services, follow the troubleshooting guidance in this article to ensure the best-possible user experience. When a caller or callee reports audio interference or background noise on a call Also, make sure that the application you're using for web calling is hosted on the latest SDK. For more information, see [Azure Communication Services Calling Web (JavaScript) SDK - Release History](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md). -## Pre-call checkups +## Precall checkups When you're using the internet at various locations, you experience different internet speeds. Factors like the following examples can affect internet speed and reliability: If you tried all the previous actions and still face quality problems, [create a ## End of Call Survey -Enable the End of Call Survey feature to give Azure Communication Services users the option to submit qualitative feedback about their call experience. +Enable the End of Call Survey feature to give Azure Communication Services users the option to submit qualitative feedback about their call experience. By enabling end of call survey you can learn more about end users calling experience and get insight of how you might improve that experience. For more information, see [End of Call Survey overview](end-of-call-survey-concept.md) and the related tutorial [Use the End of Call Survey to collect user feedback](../../tutorials/end-of-call-survey-tutorial.md). ## Related content-+- For detailed deep dive inspection on how to trouble shoot call quality and reliability see [here](../../resources/troubleshooting/voice-video-calling/general-troubleshooting-strategies/overview.md). +- For information about Calling SDK error codes, see [Troubleshooting in Azure Communication Services](../../resources/troubleshooting/voice-video-calling/troubleshooting-codes.md). Use these codes to help determine why a call ended and how to mitigate the issue. 
- For information about using Call Quality Dashboard (CQD) to view interoperability call logs, see [Use CQD to manage call and meeting quality in Microsoft Teams](/microsoftteams/quality-of-experience-review-guide).--- For information about Calling SDK error codes, see [Troubleshooting in Azure Communication Services](../../resources/troubleshooting/voice-video-calling/troubleshooting-codes.md). Use these codes to help determine why a call ended.- - To ensure smooth functioning of the application and provide better user experience, app developers should follow a checklist. For more information, see the blog post [Checklist for advanced calling experiences in web browsers](https://techcommunity.microsoft.com/t5/azure-communication-services/checklist-for-advanced-calling-experiences-in-web-browsers/ba-p/3266312).- - For more information about preparing your network or your customer's network, see [Network recommendations](network-requirements.md).- - For best practices regarding Azure Communication Services web calling, see [Best practices: Azure Communication Services calling SDKs](../best-practices.md). |
communication-services | Connect Email Communication Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/connect-email-communication-resource.md | -In this quick start, you learn how to connect a verified domain in Azure Communication Services to send email. +This quickstart describes how to connect a verified domain in Azure Communication Services to send email. ::: zone pivot="azure-portal" [!INCLUDE [connect-domain-portal](./includes/connect-domain-portal.md)] |
container-apps | Tutorial Java Quarkus Connect Managed Identity Postgresql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md | What you will learn: * [Java JDK](/azure/developer/java/fundamentals/java-support-on-azure) * [Maven](https://maven.apache.org) * [Docker](https://docs.docker.com/get-docker/)-* [GraalVM](https://www.graalvm.org/downloads/) ## 2. Create a container registry Create a resource group with the [az group create](/cli/azure/group#az-group-cre The following example creates a resource group named `myResourceGroup` in the East US Azure region. ```azurecli-interactive-az group create --name myResourceGroup --location eastus +RESOURCE_GROUP="myResourceGroup" +LOCATION="eastus" ++az group create --name $RESOURCE_GROUP --location $LOCATION ``` -Create an Azure container registry instance using the [az acr create](/cli/azure/acr#az-acr-create) command. The registry name must be unique within Azure, contain 5-50 alphanumeric characters. All letters must be specified in lower case. In the following example, `mycontainerregistry007` is used. Update this to a unique value. +Create an Azure container registry instance using the [az acr create](/cli/azure/acr#az-acr-create) command and retrieve its login server using the [az acr show](/cli/azure/acr#az-acr-show) command. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. All letters must be specified in lower case. In the following example, `mycontainerregistry007` is used. Update this to a unique value. ```azurecli-interactive+REGISTRY_NAME=mycontainerregistry007 az acr create \- --resource-group myResourceGroup \ - --name mycontainerregistry007 \ + --resource-group $RESOURCE_GROUP \ + --name $REGISTRY_NAME \ --sku Basic++REGISTRY_SERVER=$(az acr show \ + --name $REGISTRY_NAME \ + --query 'loginServer' \ + --output tsv | tr -d '\r') ``` ## 3. 
Clone the sample app and prepare the container image cd quarkus-quickstarts/hibernate-orm-panache-quickstart ```xml <dependency>- <groupId>com.azure</groupId> - <artifactId>azure-identity-providers-jdbc-postgresql</artifactId> - <version>1.0.0-beta.1</version> + <groupId>com.azure</groupId> + <artifactId>azure-identity-extensions</artifactId> + <version>1.1.20</version> </dependency> ``` cd quarkus-quickstarts/hibernate-orm-panache-quickstart Delete the existing content in *application.properties* and replace with the following to configure the database for dev, test, and production modes: ```properties- quarkus.package.type=uber-jar - quarkus.hibernate-orm.database.generation=drop-and-create quarkus.datasource.db-kind=postgresql quarkus.datasource.jdbc.max-size=8 cd quarkus-quickstarts/hibernate-orm-panache-quickstart quarkus.hibernate-orm.sql-load-script=import.sql quarkus.datasource.jdbc.acquisition-timeout = 10 - %dev.quarkus.datasource.username=${AZURE_CLIENT_NAME} - %dev.quarkus.datasource.jdbc.url=jdbc:postgresql://${DBHOST}.postgres.database.azure.com:5432/${DBNAME}?\ - authenticationPluginClassName=com.azure.identity.providers.postgresql.AzureIdentityPostgresqlAuthenticationPlugin\ - &sslmode=require\ - &azure.clientId=${AZURE_CLIENT_ID}\ - &azure.clientSecret=${AZURE_CLIENT_SECRET}\ - &azure.tenantId=${AZURE_TENANT_ID} -- %prod.quarkus.datasource.username=${AZURE_MI_NAME} - %prod.quarkus.datasource.jdbc.url=jdbc:postgresql://${DBHOST}.postgres.database.azure.com:5432/${DBNAME}?\ - authenticationPluginClassName=com.azure.identity.providers.postgresql.AzureIdentityPostgresqlAuthenticationPlugin\ + %dev.quarkus.datasource.username=${CURRENT_USERNAME} + %dev.quarkus.datasource.jdbc.url=jdbc:postgresql://${AZURE_POSTGRESQL_HOST}:${AZURE_POSTGRESQL_PORT}/${AZURE_POSTGRESQL_DATABASE}?\ + authenticationPluginClassName=com.azure.identity.extensions.jdbc.postgresql.AzurePostgresqlAuthenticationPlugin\ + &sslmode=require ++ %prod.quarkus.datasource.username=${AZURE_POSTGRESQL_USERNAME} + %prod.quarkus.datasource.jdbc.url=jdbc:postgresql://${AZURE_POSTGRESQL_HOST}:${AZURE_POSTGRESQL_PORT}/${AZURE_POSTGRESQL_DATABASE}?\ + authenticationPluginClassName=com.azure.identity.extensions.jdbc.postgresql.AzurePostgresqlAuthenticationPlugin\ &sslmode=require %dev.quarkus.class-loading.parent-first-artifacts=com.azure:azure-core::jar,\ cd quarkus-quickstarts/hibernate-orm-panache-quickstart io.netty:netty-transport::jar,\ io.netty:netty-buffer::jar,\ com.azure:azure-identity::jar,\- com.azure:azure-identity-providers-core::jar,\ - com.azure:azure-identity-providers-jdbc-postgresql::jar,\ + com.azure:azure-identity-extensions::jar,\ com.fasterxml.jackson.core:jackson-core::jar,\ com.fasterxml.jackson.core:jackson-annotations::jar,\ com.fasterxml.jackson.core:jackson-databind::jar,\ cd quarkus-quickstarts/hibernate-orm-panache-quickstart com.nimbusds:nimbus-jose-jwt::jar,\ net.minidev:json-smart::jar,\ net.minidev:accessors-smart::jar,\- io.netty:netty-transport-native-unix-common::jar + io.netty:netty-transport-native-unix-common::jar,\ + net.java.dev.jna:jna::jar ``` ### Build and push a Docker image to the container registry 1. Build the container image. - Run the following command to build the Quarkus app image. You must tag it with the fully qualified name of your registry login server. The login server name is in the format *\<registry-name\>.azurecr.io* (must be all lowercase), for example, *mycontainerregistry007.azurecr.io*. Replace the name with your own registry name. 
+ Run the following command to build the Quarkus app image. You must tag it with the fully qualified name of your registry login server. ```bash- mvnw quarkus:add-extension -Dextensions="container-image-jib" - mvnw clean package -Pnative -Dquarkus.native.container-build=true -Dquarkus.container-image.build=true -Dquarkus.container-image.registry=mycontainerregistry007 -Dquarkus.container-image.name=quarkus-postgres-passwordless-app -Dquarkus.container-image.tag=v1 + CONTAINER_IMAGE=${REGISTRY_SERVER}/quarkus-postgres-passwordless-app:v1 ++ mvn quarkus:add-extension -Dextensions="container-image-jib" + mvn clean package -Dquarkus.container-image.build=true -Dquarkus.container-image.image=${CONTAINER_IMAGE} ``` 1. Log in to the registry. - Before pushing container images, you must log in to the registry. To do so, use the [az acr login][az-acr-login] command. Specify only the registry resource name when signing in with the Azure CLI. Don't use the fully qualified login server name. + Before pushing container images, you must log in to the registry. To do so, use the [az acr login][az-acr-login] command. ```azurecli-interactive- az acr login --name <registry-name> + az acr login --name $REGISTRY_NAME ``` The command returns a `Login Succeeded` message once completed. 1. Push the image to the registry. - Use [docker push][docker-push] to push the image to the registry instance. Replace `mycontainerregistry007` with the login server name of your registry instance. This example creates the `quarkus-postgres-passwordless-app` repository, containing the `quarkus-postgres-passwordless-app:v1` image. + Use [docker push][docker-push] to push the image to the registry instance. This example creates the `quarkus-postgres-passwordless-app` repository, containing the `quarkus-postgres-passwordless-app:v1` image. ```bash- docker push mycontainerregistry007/quarkus-postgres-passwordless-app:v1 + docker push $CONTAINER_IMAGE ``` ## 4. Create a Container App on Azure cd quarkus-quickstarts/hibernate-orm-panache-quickstart 1. Create a Container Apps instance by running the following command. Make sure you replace the value of the environment variables with the actual name and location you want to use. ```azurecli-interactive- RESOURCE_GROUP="myResourceGroup" - LOCATION="eastus" CONTAINERAPPS_ENVIRONMENT="my-environment" az containerapp env create \ cd quarkus-quickstarts/hibernate-orm-panache-quickstart --location $LOCATION ``` -1. Create a container app with your app image by running the following command. Replace the placeholders with your values. To find the container registry admin account details, see [Authenticate with an Azure container registry](/azure/container-registry/container-registry-authentication) +1. 
Create a container app with your app image by running the following command: ```azurecli-interactive- CONTAINER_IMAGE_NAME=quarkus-postgres-passwordless-app:v1 - REGISTRY_SERVER=mycontainerregistry007 - REGISTRY_USERNAME=<REGISTRY_USERNAME> - REGISTRY_PASSWORD=<REGISTRY_PASSWORD> -+ APP_NAME=my-container-app az containerapp create \ --resource-group $RESOURCE_GROUP \- --name my-container-app \ - --image $CONTAINER_IMAGE_NAME \ + --name $APP_NAME \ + --image $CONTAINER_IMAGE \ --environment $CONTAINERAPPS_ENVIRONMENT \ --registry-server $REGISTRY_SERVER \- --registry-username $REGISTRY_USERNAME \ - --registry-password $REGISTRY_PASSWORD + --registry-identity system \ + --ingress 'external' \ + --target-port 8080 \ + --min-replicas 1 ```+ + > [!NOTE] + > The options `--registry-username` and `--registry-password` are still supported but aren't recommended because using the identity system is more secure. ## 5. Create and connect a PostgreSQL database with identity connectivity Next, create a PostgreSQL Database and configure your container app to connect t ```azurecli-interactive DB_SERVER_NAME='msdocs-quarkus-postgres-webapp-db'- ADMIN_USERNAME='demoadmin' - ADMIN_PASSWORD='<admin-password>' az postgres flexible-server create \ --resource-group $RESOURCE_GROUP \ --name $DB_SERVER_NAME \ --location $LOCATION \- --admin-user $DB_USERNAME \ - --admin-password $DB_PASSWORD \ - --sku-name GP_Gen5_2 + --public-access None \ + --sku-name Standard_B1ms \ + --tier Burstable \ + --active-directory-auth Enabled ```+ + > [!NOTE] + > The options `--admin-user` and `--admin-password` are still supported but aren't recommended because using the identity system is more secure. The following parameters are used in the above Azure CLI command: - * *resource-group* → Use the same resource group name in which you created the web app, for example `msdocs-quarkus-postgres-webapp-rg`. + * *resource-group* → Use the same resource group name in which you created the web app - for example, `msdocs-quarkus-postgres-webapp-rg`. * *name* → The PostgreSQL database server name. This name must be **unique across all Azure** (the server endpoint becomes `https://<name>.postgres.database.azure.com`). Allowed characters are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and server identifier. (`msdocs-quarkus-postgres-webapp-db`)- * *location* → Use the same location used for the web app. - * *admin-user* → Username for the administrator account. It can't be `azure_superuser`, `admin`, `administrator`, `root`, `guest`, or `public`. For example, `demoadmin` is okay. - * *admin-password* → Password of the administrator user. It must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters. -- > [!IMPORTANT] - > When creating usernames or passwords **do not** use the `$` character. Later in this tutorial, you will create environment variables with these values where the `$` character has special meaning within the Linux container used to run Java apps. -+ * *location* → Use the same location used for the web app. Change to a different location if it doesn't work. * *public-access* → `None` which sets the server in public access mode with no firewall rules. Rules will be created in a later step.- * *sku-name* → The name of the pricing tier and compute configuration, for example `GP_Gen5_2`. 
For more information, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/). + * *sku-name* → The name of the pricing tier and compute configuration - for example, `Standard_B1ms`. For more information, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/). + * *tier* → The compute tier of the server. For more information, see [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/). + * *active-directory-auth* → `Enabled` to enable Microsoft Entra authentication. 1. Create a database named `fruits` within the PostgreSQL service with this command: ```azurecli-interactive+ DB_NAME=fruits az postgres flexible-server db create \ --resource-group $RESOURCE_GROUP \ --server-name $DB_SERVER_NAME \- --database-name fruits + --database-name $DB_NAME ``` 1. Install the [Service Connector](../service-connector/overview.md) passwordless extension for the Azure CLI: ```azurecli-interactive- az extension add --name serviceconnector-passwordless --upgrade + az extension add --name serviceconnector-passwordless --upgrade --allow-preview true ``` 1. Connect the database to the container app with a system-assigned managed identity, using the connection command. Next, create a PostgreSQL Database and configure your container app to connect t ```azurecli-interactive az containerapp connection create postgres-flexible \ --resource-group $RESOURCE_GROUP \- --name my-container-app \ + --name $APP_NAME \ --target-resource-group $RESOURCE_GROUP \ --server $DB_SERVER_NAME \- --database fruits \ - --managed-identity + --database $DB_NAME \ + --system-identity \ + --container $APP_NAME ``` ## 6. Review your changes Next, create a PostgreSQL Database and configure your container app to connect t You can find the application URL(FQDN) by using the following command: ```azurecli-interactive-az containerapp list --resource-group $RESOURCE_GROUP +echo https://$(az containerapp show \ + --name $APP_NAME \ + --resource-group $RESOURCE_GROUP \ + --query properties.configuration.ingress.fqdn \ + --output tsv) ``` When the new webpage shows your list of fruits, your app is connecting to the database using the managed identity. You should now be able to edit fruit list as before. |
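As an optional check after the tutorial steps, the same shell variables defined earlier (`REGISTRY_NAME`, `RESOURCE_GROUP`, `APP_NAME`) can be reused to confirm the pushed image, the database connection, and the app's console output. The commands below are a sketch only and aren't part of the original walkthrough:

```azurecli-interactive
# Confirm the image tag landed in the registry.
az acr repository show-tags \
    --name $REGISTRY_NAME \
    --repository quarkus-postgres-passwordless-app \
    --output table

# List the Service Connector connections created for the container app.
az containerapp connection list \
    --resource-group $RESOURCE_GROUP \
    --name $APP_NAME \
    --output table

# Tail recent console logs from the running app.
az containerapp logs show \
    --name $APP_NAME \
    --resource-group $RESOURCE_GROUP \
    --tail 50
```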
cost-management-billing | Pay By Invoice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md | To qualify for wire transfer payments, you must: > [!IMPORTANT] > - You must pay all outstanding charges before switching to payment by wire transfer.-> - If you switch to payment by wire transfer, you can't switch back to paying by credit or debit card, except for one-time payments. +> - If you switch to payment by wire transfer, you can't switch back to paying by credit or debit card as your recurring form of payment. However, you can make manual, one-time (non-recurring) payments with a credit or debit card. > - As of September 30, 2023, Microsoft no longer accepts checks as a payment method. ## Submit a request to set up payment by wire transfer |
deployment-environments | How To Create Access Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-access-environments.md | To access an environment: ### Deploy an environment ```azurecli-az devcenter dev environment deploy-action --action-id "deploy" --dev-center-name <devcenterName> \ +az devcenter dev environment deploy --action-id "deploy" --dev-center-name <devcenterName> \ -g <resourceGroupName> --project-name <projectName> --environment-name <environmentName> --parameters <parametersJsonString> ``` |
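If you want to try the corrected `deploy` command, the `--parameters` argument takes a JSON string. The following sketch uses hypothetical dev center, resource group, project, environment, and parameter values; substitute your own:

```azurecli
# Hypothetical values for illustration only; the parameter keys depend on your environment definition.
PARAMS='{"location": "eastus"}'

az devcenter dev environment deploy --action-id "deploy" --dev-center-name contoso-devcenter \
    -g contoso-rg --project-name contoso-project --environment-name dev-env-01 --parameters "$PARAMS"
```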
digital-twins | Concepts Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md | Sometimes, you might want to define a relationship without a specific target, so Here's an example of a relationship on a DTDL model that doesn't have a target. In this example, the relationship is for defining what sensors a Room might have, and the relationship can connect to any type. ### Properties of relationships For a comprehensive list of the fields that may appear as part of a component, s Here's a basic example of a component on a DTDL model. This example shows a Room model that makes use of a thermostat model as a component. If other models in this solution should also contain a thermostat, they can reference the same thermostat model as a component in their own definitions, just like Room does. The following example re-imagines the Home model from the earlier DTDL example a In this case, Core contributes an ID and name to Home. Other models can also extend the Core model to get these properties as well. Here's a Room model extending the same parent interface: Once inheritance is applied, the extending interface exposes all properties from the entire inheritance chain. |
expressroute | Expressroute Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md | The following table shows locations by service provider. If you want to view ava | **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** | ✓ | ✓ | Taipei | | **[Fastweb](https://www.fastweb.it/grandi-aziende/dati-voce/scheda-prodotto/fast-company/)** | ✓ |✓ | Milan | | **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** | ✓ | ✓ | Montreal<br/>Quebec City<br/>Toronto2 |-| **[Flo Networks](https://flo.net/)** | ✓ | ✓ | Dallas<br/>Los Angeles<br/>Miami<br/>Queretaro(Mexico City)<br/>Sao Paulo<br/>Washington DC **Locations are listed under Neurtrona Networks and Transtelco as providers for circuit creation* | +| **[Flo Networks](https://flo.net/)** | ✓ | ✓ | Dallas<br/>Los Angeles<br/>Miami<br/>Queretaro(Mexico City)<br/>Sao Paulo<br/>Washington DC<br/>**Locations are listed under Neurtrona Networks and Transtelco as providers for circuit creation* | | **[GBI](https://www.gbiinc.com/microsoft-azure/)** | ✓ | ✓ | Dubai2<br/>Frankfurt | | **[GÉANT](https://www.geant.org/Networks)** | ✓ | ✓ | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>Madrid2<br/>Marseille | | **[GlobalConnect](https://www.globalconnect.no/)** | ✓ | ✓ | Amsterdam<br/>Copenhagen<br/>Oslo<br/>Stavanger<br/>Stockholm | |
firewall | Monitor Firewall Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/monitor-firewall-reference.md | There are a few ways to verify the update was successful, but you can navigate t To create a diagnostic setting and enable Resource Specific Table, see [Create diagnostic settings in Azure Monitor](/azure/azure-monitor/essentials/create-diagnostic-settings). +## Flow trace + The firewall logs show traffic through the firewall in the first attempt of a TCP connection, known as the *SYN* packet. However, such an entry doesn't show the full journey of the packet in the TCP handshake. As a result, it's difficult to troubleshoot if a packet is dropped, or asymmetric routing occurred. The Azure Firewall Flow Trace Log addresses this concern. > [!TIP] |
firewall | Rule Processing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md | Here's an example policy: Assuming BaseRCG1 is a rule collection group priority (200) that contains the rule collections: DNATRC1, DNATRC3,NetworkRC1.\ BaseRCG2 is a rule collection group priority (300) that contains the rule collections: AppRC2, NetworkRC2.\ ChildRCG1 is a rule collection group priority (300) that contains the rule collections: ChNetRC1, ChAppRC1.\-ChildRCG2 is a rule collection group that contains the rule collections: ChNetRC2, ChAppRC2,ChDNATRC3. +ChildRCG2 is a rule collection group priority (650) that contains the rule collections: ChNetRC2, ChAppRC2,ChDNATRC3. As per following table: As per following table: |ChAppRC2 | Application rule collection |2000 |7 |-| |ChDNATRC3 | DNAT rule collection | 3000 | 2 |-| -Initial Processing: +Initial Iteration for DNAT Rules: The process begins by examining the rule collection group (RCG) with the lowest number, which is BaseRCG1 with a priority of 200. Within this group, it searches for DNAT rule collections and evaluates them according to their priorities. In this case, DNATRC1 (priority 600) and DNATRC3 (priority 610) are found and processed accordingly.\-Next, it moves to the next RCG, BaseRCG2 (priority 200), but finds no DNAT rule collection.\ +Next, it moves to the next RCG, BaseRCG2 (priority 300), but finds no DNAT rule collection.\ Following that, it proceeds to ChildRCG1 (priority 300), also without a DNAT rule collection.\ Finally, it checks ChildRCG2 (priority 650) and finds the ChDNATRC3 rule collection (priority 3000). -Iteration Within Rule Collection Groups: +Iteration for NETWORK Rules: Returning to BaseRCG1, the iteration continues, this time for NETWORK rules. Only NetworkRC1 (priority 800) is found.\ Then, it moves to BaseRCG2, where NetworkRC2 (priority 1300) is located.\ |
hdinsight | Apache Hbase Build Java Maven Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-build-java-maven-linux.md | description: Learn how to use Apache Maven to build a Java-based Apache HBase ap Previously updated : 10/17/2023 Last updated : 10/17/2024 # Build Java applications for Apache HBase The following steps use `scp` to copy the JAR to the primary head node of your A ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net ``` -3. To create an HBase table using the Java application, use the following command in your open ssh connection: +3. To create a HBase table using the Java application, use the following command in your open ssh connection: ```bash yarn jar hbaseapp-1.0-SNAPSHOT.jar com.microsoft.examples.CreateTable |
hdinsight | Apache Hbase Migrate Hdinsight 5 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-migrate-hdinsight-5-1.md | Title: Migrate an HBase cluster to an HDInsight 5.1 - Azure HDInsight -description: Learn how to migrate Apache HBase clusters in Azure HDInsight to an HDInsight 5.1. + Title: Migrate a HBase cluster to a HDInsight 5.1 - Azure HDInsight +description: Learn how to migrate Apache HBase clusters in Azure HDInsight to a HDInsight 5.1. Previously updated : 10/03/2023 Last updated : 10/17/2024 -# Migrate an Apache HBase cluster to an HDInsight 5.1 +# Migrate an Apache HBase cluster to a HDInsight 5.1 This article discusses how to update your Apache HBase cluster on Azure HDInsight to a newer version. Use these detailed steps and commands to migrate your Apache HBase cluster. 1. Check Hbase hbck to verify cluster health - 1. Verify HBCK Report page on HBase UI. Healthy cluster does not show any inconsistencies + 1. Verify HBCK Report page on HBase UI. Healthy cluster doesn't show any inconsistencies :::image type="content" source="./media/apache-hbase-migrate-new-version/verify-hbck-report.png" alt-text="Screenshot showing how to verify HBCK report." lightbox="./media/apache-hbase-migrate-new-version/verify-hbck-report.png"::: 1. If any inconsistencies exist, fix inconsistencies using [hbase hbck2](/azure/hdinsight/hbase/how-to-use-hbck2-tool/) |
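As a command-line complement to the HBCK Report page, the consistency check can also be run from an SSH session on a head node. This is a sketch only; in HBase 2.x the built-in `hbase hbck` command reports inconsistencies without repairing them, and any fixes should go through the HBCK2 tool linked above:

```bash
# Report (but don't repair) inconsistencies; save the output for review.
hbase hbck -details 2>&1 | tee hbck-report.txt

# The summary near the end of the report states how many inconsistencies were detected.
grep -i "inconsistencies" hbck-report.txt
```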
hdinsight | Apache Hbase Provision Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-provision-vnet.md | description: Get started using HBase in Azure HDInsight. Learn how to create HDI Previously updated : 10/16/2023 Last updated : 10/17/2024 # Create Apache HBase clusters on HDInsight in Azure Virtual Network In this section, you create a Linux-based Apache HBase cluster with the dependen Resource group|Select **Create new**, and specify a new resource group name.| |Location|Select a location for the resource group.| |Cluster Name|Enter a name for the Hadoop cluster to be created.|- |Cluster Login User Name and Password|The default User Name is **admin**. Provide a password.| + |Cluster sign-in User Name and Password|The default User Name is **admin**. Provide a password.| |Ssh User Name and Password|The default User Name is **sshuser**. Provide a password.| Select **I agree to the terms and the conditions**. 1. Select **Purchase**. It takes about around 20 minutes to create a cluster. Once the cluster is created, you can select the cluster in the portal to open it. -After you complete the article, you might want to delete the cluster. With HDInsight, your data is stored in Azure Storage, so you can safely delete a cluster when it is not in use. You are also charged for an HDInsight cluster, even when it is not in use. Since the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they are not in use. For the instructions of deleting a cluster, see [Manage Apache Hadoop clusters in HDInsight by using the Azure portal](../hdinsight-administer-use-portal-linux.md#delete-clusters). +After you complete the article, you might want to delete the cluster. With HDInsight, your data is stored in Azure Storage, so you can safely delete a cluster when it isn't in use. You're also charged for a HDInsight cluster, even when it isn't in use. Since the charges for the cluster are many times more than the charges for storage, it makes economic sense to delete clusters when they aren't in use. For the instructions of deleting a cluster, see [Manage Apache Hadoop clusters in HDInsight by using the Azure portal](../hdinsight-administer-use-portal-linux.md#delete-clusters). To begin working with your new HBase cluster, you can use the procedures found in [Get started using Apache HBase with Apache Hadoop in HDInsight](./apache-hbase-tutorial-get-started-linux.md). Create an infrastructure as a service (IaaS) virtual machine into the same Azure > [!IMPORTANT] > Replace `CLUSTERNAME` with the name you used when creating the HDInsight cluster in previous steps. -By using these values, the virtual machine is placed in the same virtual network and subnet as the HDInsight cluster. This configuration allows them to directly communicate with each other. There is a way to create an HDInsight cluster with an empty edge node. The edge node can be used to manage the cluster. For more information, see [Use empty edge nodes in HDInsight](../hdinsight-apps-use-edge-node.md). +By using these values, the virtual machine is placed in the same virtual network and subnet as the HDInsight cluster. This configuration allows them to directly communicate with each other. There's a way to create a HDInsight cluster with an empty edge node. The edge node can be used to manage the cluster. For more information, see [Use empty edge nodes in HDInsight](../hdinsight-apps-use-edge-node.md). 
### Obtain fully qualified domain name |
hdinsight | Hdinsight 5X Component Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-5x-component-versioning.md | Title: Open-source components and versions - Azure HDInsight 5.x description: Learn about the open-source components and versions in Azure HDInsight 5.x. Previously updated : 10/26/2023 Last updated : 10/17/2024 # HDInsight 5.x component versions |
hdinsight | Hdinsight Administer Use Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-powershell.md | description: Learn how to perform administrative tasks for the Apache Hadoop clu Previously updated : 10/16/2023 Last updated : 10/17/2024 # Manage Apache Hadoop clusters in HDInsight by using Azure PowerShell |
hdinsight | Hdinsight Apps Install Hiveserver2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-install-hiveserver2.md | In this section, you install an additional HiveServer2 onto your target hosts. In this article, you've learned how to install HiveServer2 onto your cluster. To learn more about edge nodes and applications, see the following articles: * [Install edge node](hdinsight-apps-use-edge-node.md): Learn how to install an edge node onto your HDInsight cluster.-* [Install HDInsight applications](hdinsight-apps-install-applications.md): Learn how to install an HDInsight application to your clusters. +* [Install HDInsight applications](hdinsight-apps-install-applications.md): Learn how to install a HDInsight application to your clusters. * [Azure SQL DTU Connection Limits](/azure/azure-sql/database/resource-limits-dtu-single-databases): Learn about Azure SQL database limits using DTU. * [Azure SQL vCore Connection Limits](/azure/azure-sql/database/resource-limits-vcore-elastic-pools): Learn about Azure SQL database limits using vCores. |
hdinsight | Hdinsight Config For Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-config-for-vscode.md | Title: Azure HDInsight configuration settings reference description: Introduce the configuration of Azure HDInsight extension. Previously updated : 10/16/2023 Last updated : 10/17/2024 For general information about working with settings in VS Code, refer to [User a | HDInsight: Azure Environment | Azure | Azure environment | | HDInsight: Disable Open Survey Link | Checked | Enable/Disable opening HDInsight survey | | HDInsight: Enable Skip Pyspark Installation | Unchecked | Enable/Disable skipping pyspark installation |-| HDInsight: Login Tips Enable | Unchecked | When this option is checked, there is a prompt when logging in to Azure | +| HDInsight: Sign-in Tips Enable | Unchecked | When this option is checked, there's a prompt when logging in to Azure | | HDInsight: Previous Extension Version | Display the version number of the current extension | Show the previous extension version| | HDInsight: Results Font Family | -apple-system, BlinkMacSystemFont, Segoe WPC, Segoe UI, HelveticaNeue-Light, Ubuntu, Droid Sans, sans-serif | Set the font family for the results grid; set to blank to use the editor font | | HDInsight: Results Font Size | 13 |Set the font size for the results gird; set to blank to use the editor size | | HDInsight Cluster: Linked Cluster | -- | Linked clusters urls. Also can edit the JSON file to set | | HDInsight Hive: Apply Localization | Unchecked | [Optional] Configuration options for localizing into Visual Studio Code's configured locale (must restart Visual Studio Code for settings to take effect)|-| HDInsight Hive: Copy Include Headers | Unchecked | [Optional] Configuration option for copying results from the Results View | +| HDInsight Hive: Copy Includes Headers | Unchecked | [Optional] Configuration option for copying results from the Results Views | | HDInsight Hive: Copy Remove New Line | Checked | [Optional] Configuration options for copying multi-line results from the Results View | | HDInsight Hive › Format: Align Column Definitions In Columns | Unchecked | Should column definition be aligned | | HDInsight Hive › Format: Datatype Casing | none | Should data types be formatted as UPPERCASE, lowercase, or none (not formatted) | For general information about working with settings in VS Code, refer to [User a | HDInsight Hive › Format: Place Select Statement References On New Line | Unchecked | Is reference to objects in a SELECT statement be split into separate lines? 
For example, for 'SELECT C1, C2 FROM T1' both C1 and C2 is on separate lines | HDInsight Hive: Log Debug Info | Unchecked | [Optional] Log debug output to the VS Code console (Help -> Toggle Developer Tools) | HDInsight Hive: Messages Default Open | Checked | True for the messages pane to be open by default; false for closed|-| HDInsight Hive: Results Font Family | -apple-system, BlinkMacSystemFont, Segoe WPC,Segoe UI, HelveticaNeue-Light, Ubuntu, Droid Sans, sans-serif | Set the font family for the results grid; set to blank to use the editor font | +| HDInsight Hive: Results Font Family | -apple-system, BlinkMacSystemFont, Segoe WPC, Segoe UI, HelveticaNeue-Light, Ubuntu, Droid Sans, sans-serif | Set the font family for the results grid; set to blank to use the editor font | | HDInsight Hive: Results Font Size | 13 | Set the font size for the results grid; set to blank to use the editor size | | HDInsight Hive › Save as `csv`: Include Headers | Checked | [Optional] When true, column headers are included when saving results as CSV | | HDInsight Hive: Shortcuts | -- | Shortcuts related to the results window | For general information about working with settings in VS Code, refer to [User a | HDInsight Job Submission: Livy `Conf` | -- | Livy Configuration. POST/batches | | HDInsight Jupyter: Append Results| Checked | Whether to append the results to the results window or to clear and display them. | | HDInsight Jupyter: Languages | -- | Default settings per language. |-| HDInsight Jupyter › Log: Verbose | Unchecked | If you enable verbose logging. | +| HDInsight Jupyter › Log: Verbose | Unchecked | If you enable verbose logging | | HDInsight Jupyter › Notebook: Startup Args | Can add item | `jupyter notebook` command-line arguments. Each argument is a separate item in the array. For a full list type `jupyter notebook--help` in a terminal window. | | HDInsight Jupyter › Notebook: Startup Folder | ${workspaceRoot} |-- | | HDInsight Jupyter: Python Extension Enabled | Checked | Use Python-Interactive-Window of ms-python extension when submitting pySpark Interactive jobs. Otherwise, use our own `jupyter` window. | |
hdinsight | Hdinsight Hadoop Optimize Hive Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-optimize-hive-query.md | description: This article describes how to optimize your Apache Hive queries in Previously updated : 10/16/2023 Last updated : 10/17/2024 # Optimize Apache Hive queries in Azure HDInsight For more information on running Hive queries on various HDInsight cluster types, ## Scale out worker nodes -Increasing the number of worker nodes in an HDInsight cluster allows the work to use more mappers and reducers to be run in parallel. There are two ways you can increase out scale in HDInsight: +Increasing the number of worker nodes in a HDInsight cluster allows the work to use more mappers and reducers to be run in parallel. There are two ways you can increase out scale in HDInsight: * When you create a cluster, you can specify the number of worker nodes using the Azure portal, Azure PowerShell, or command-line interface. For more information, see [Create HDInsight clusters](hdinsight-hadoop-provision-linux-clusters.md). The following screenshot shows the worker node configuration on the Azure portal: |
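In addition to setting the node count at creation time or scaling from the portal, an existing cluster can be resized from the Azure CLI. The cluster and resource group names below are hypothetical placeholders:

```azurecli-interactive
# Scale an existing HDInsight cluster to 10 worker nodes (hypothetical names).
az hdinsight resize \
    --resource-group myResourceGroup \
    --name myhdicluster \
    --workernode-count 10
```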
hdinsight | Hdinsight Overview Before You Start | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview-before-you-start.md | Title: Before you start with Azure HDInsight description: In Azure HDInsight, few points to be considered before starting to create a cluster. Previously updated : 10/16/2023 Last updated : 10/17/2024 # Consider the below points before starting to create a cluster. For more information, see how to [Migrate HDInsight cluster to a newer version]( Microsoft will only support machines that are created by the HDInsight service (for example, HDInsight clusters, edge nodes, and worker nodes). We don't support third-party client machines or moving the HDInsight libraries from a supported machine to an external machine. -While this third-party integration may work for some time, it is not recommended in production environments because the scenario isn't supported. +While this third-party integration may work for some time, it isn't recommended in production environments because the scenario isn't supported. When you open a support request for an unsupported scenario, you'll be asked to ***reproduce the problem in a supported scenario*** so we can investigate. Any fixes provided would be for the supported scenario only. ### Supported ways to integrate third party applications -* [Install HDInsight applications](hdinsight-apps-install-applications.md): Learn how to install an HDInsight application to your clusters. +* [Install HDInsight applications](hdinsight-apps-install-applications.md): Learn how to install a HDInsight application to your clusters. * [Install custom HDInsight applications](hdinsight-apps-install-custom-applications.md): learn how to deploy an unpublished HDInsight application to HDInsight. * [Publish HDInsight applications](hdinsight-apps-publish-applications.md): Learn how to publish your custom HDInsight applications to Azure Marketplace. |
hdinsight | Hdinsight Sdk Dotnet Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-dotnet-samples.md | description: Find C# .NET examples on GitHub for common tasks using the HDInsigh Previously updated : 10/16/2023 Last updated : 10/17/2024 # Azure HDInsight: .NET samples |
hdinsight | Hdinsight Troubleshoot Hive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-hive.md | description: Get answers to common questions about working with Apache Hive and keywords: Azure HDInsight, Hive, FAQ, troubleshooting guide, common questions Previously updated : 10/16/2023 Last updated : 10/17/2024 # Troubleshoot Apache Hive by using Azure HDInsight |
hdinsight | Apache Hadoop Connect Hive Power Bi Directquery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hadoop-connect-hive-power-bi-directquery.md | description: Use Microsoft Power BI to visualize Interactive Query Hive data fro Previously updated : 10/16/2023 Last updated : 10/17/2024 # Visualize Interactive Query Apache Hive data with Microsoft Power BI using direct query in HDInsight This article describes how to connect Microsoft Power BI to Azure HDInsight Inte :::image type="content" source="./media/apache-hadoop-connect-hive-power-bi-directquery/hdinsight-power-bi-visualization.png" alt-text="HDInsight Power BI the map report." border="true"::: -You can use the [Apache Hive ODBC driver](../hadoop/apache-hadoop-connect-hive-power-bi.md) to do import via the generic ODBC connector in Power BI Desktop. However it is not recommended for BI workloads given non-interactive nature of the Hive query engine. [HDInsight Interactive Query connector](./apache-hadoop-connect-hive-power-bi-directquery.md) and [HDInsight Apache Spark connector](/power-bi/spark-on-hdinsight-with-direct-connect) are better choices for their performance. +You can use the [Apache Hive ODBC driver](../hadoop/apache-hadoop-connect-hive-power-bi.md) to do import via the generic ODBC connector in Power BI Desktop. However it isn't recommended for BI workloads given non-interactive nature of the Hive query engine. [HDInsight Interactive Query connector](./apache-hadoop-connect-hive-power-bi-directquery.md) and [HDInsight Apache Spark connector](/power-bi/spark-on-hdinsight-with-direct-connect) are better choices for their performance. ## Prerequisites Before going through this article, you must have the following items: -* **HDInsight cluster**. The cluster can be either an HDInsight cluster with Apache Hive or a newly released Interactive Query cluster. For creating clusters, see [Create cluster](../hadoop/apache-hadoop-linux-tutorial-get-started.md). +* **HDInsight cluster**. The cluster can be either a HDInsight cluster with Apache Hive or a newly released Interactive Query cluster. For creating clusters, see [Create cluster](../hadoop/apache-hadoop-linux-tutorial-get-started.md). * **[Microsoft Power BI Desktop](https://powerbi.microsoft.com/desktop/)**. You can download a copy from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=45331). ## Load data from HDInsight The `hivesampletable` Hive table comes with all HDInsight clusters. :::image type="content" source="./media/apache-hadoop-connect-hive-power-bi-directquery/hdinsight-power-bi-open-odbc.png" alt-text="HDInsight Power BI Get Data More." border="true"::: -3. From the **Get Data** window, enter **hdinsight** in the search box. +3. From the `Get Data` window, enter **hdinsight** in the search box. 4. From the search results, select **HDInsight Interactive Query**, and then select **Connect**. If you don't see **HDInsight Interactive Query**, you need to update your Power BI Desktop to the latest version. |
hdinsight | Apache Interactive Query Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-interactive-query-get-started.md | description: An introduction to Interactive Query, also called Apache Hive LLAP, Previously updated : 10/16/2023 Last updated : 10/17/2024 #Customer intent: As a developer new to Interactive Query in Azure HDInsight, I want to have a basic understanding of Interactive Query so I can decide if I want to use it rather than build my own cluster. To execute Hive queries, you have the following options: |Microsoft Power BI|See [Visualize Interactive Query Apache Hive data with Power BI in Azure HDInsight](./apache-hadoop-connect-hive-power-bi-directquery.md), and [Visualize big data with Power BI in Azure HDInsight](../hadoop/apache-hadoop-connect-hive-power-bi.md).| |Visual Studio|See [Connect to Azure HDInsight and run Apache Hive queries using Data Lake Tools for Visual Studio](../hadoop/apache-hadoop-visual-studio-tools-get-started.md#run-interactive-apache-hive-queries).| |Visual Studio Code|See [Use Visual Studio Code for Apache Hive, LLAP, or pySpark](../hdinsight-for-vscode.md).|-|Apache Ambari Hive View|See [Use Apache Hive View with Apache Hadoop in Azure HDInsight](../hadoop/apache-hadoop-use-hive-ambari-view.md). Hive View is not available for HDInsight 4.0.| -|Apache Beeline|See [Use Apache Hive with Apache Hadoop in HDInsight with Beeline](../hadoop/apache-hadoop-use-hive-beeline.md). You can use Beeline from either the head node or from an empty edge node. We recommend using Beeline from an empty edge node. For information about creating an HDInsight cluster by using an empty edge node, see [Use empty edge nodes in HDInsight](../hdinsight-apps-use-edge-node.md).| +|Apache Ambari Hive View|See [Use Apache Hive View with Apache Hadoop in Azure HDInsight](../hadoop/apache-hadoop-use-hive-ambari-view.md). Hive View isn't available for HDInsight 4.0.| +|Apache Beeline|See [Use Apache Hive with Apache Hadoop in HDInsight with Beeline](../hadoop/apache-hadoop-use-hive-beeline.md). You can use Beeline from either the head node or from an empty edge node. We recommend using Beeline from an empty edge node. For information about creating a HDInsight cluster by using an empty edge node, see [Use empty edge nodes in HDInsight](../hdinsight-apps-use-edge-node.md).| |Hive ODBC|See [Connect Excel to Apache Hadoop with the Microsoft Hive ODBC driver](../hadoop/apache-hadoop-connect-excel-hive-odbc-driver.md).| To find the Java Database Connectivity (JDBC) connection string: |
hdinsight | Quickstart Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/quickstart-bicep.md | The Bicep file used in this quickstart is from [Azure Quickstart Templates](http Two Azure resources are defined in the Bicep file: * [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts): create an Azure Storage Account.-* [Microsoft.HDInsight/cluster](/azure/templates/microsoft.hdinsight/clusters): create an HDInsight cluster. +* [Microsoft.HDInsight/cluster](/azure/templates/microsoft.hdinsight/clusters): create a HDInsight cluster. ### Deploy the Bicep file Two Azure resources are defined in the Bicep file: You need to provide values for the parameters: * Replace **\<cluster-name\>** with the name of the HDInsight cluster to create.- * Replace **\<cluster-username\>** with the credentials used to submit jobs to the cluster and to log in to cluster dashboards. - * Replace **\<ssh-username\>** with the credentials used to remotely access the cluster. The username can not be admin username. + * Replace **\<cluster-username\>** with the credentials used to submit jobs to the cluster and to sign-in to cluster dashboards. + * Replace **\<ssh-username\>** with the credentials used to remotely access the cluster. The username can't be admin username. - You are prompted to enter the following password: + You're prompted to enter the following password: - * **clusterLoginPassword**, which must be at least 10 characters long and contain one digit, one uppercase letter, one lowercase letter, and one non-alphanumeric character except single-quote, double-quote, backslash, right-bracket, full-stop. It also must not contain three consecutive characters from the cluster username or SSH username. - * **sshPassword**, which must be 6-72 characters long and must contain at least one digit, one uppercase letter, and one lowercase letter. It must not contain any three consecutive characters from the cluster login name. + * **clusterLoginPassword**, which must be at least 10 characters long and contain one digit, one uppercase letter, one lowercase letter, and one nonalphanumeric character except single-quote, double-quote, backslash, right-bracket, full-stop. It also must not contain three consecutive characters from the cluster username or SSH username. + * **sshPassword**, which must be 6-72 characters long and must contain at least one digit, one uppercase letter, and one lowercase letter. It must not contain any three consecutive characters from the cluster sign in name. > [!NOTE] > When the deployment finishes, you should see a message indicating the deployment succeeded. |
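A sketch of deploying the Bicep file with the Azure CLI follows. The file name `main.bicep` is an assumption, and the parameter names mirror the placeholders described above rather than being confirmed names from the template; you're prompted for the two passwords interactively:

```azurecli-interactive
az deployment group create \
    --resource-group myResourceGroup \
    --template-file main.bicep \
    --parameters clusterName=<cluster-name> \
                 clusterLoginUserName=<cluster-username> \
                 sshUserName=<ssh-username>
```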
hdinsight | Migrate 5 1 Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/migrate-5-1-versions.md | Title: Migrate Apache Kafka workloads to Azure HDInsight 5.1 description: Learn how to migrate Apache Kafka workloads on HDInsight 4.0 to HDInsight 5.1. Previously updated : 10/26/2023 Last updated : 10/17/2024 # Migrate Apache Kafka workloads to Azure HDInsight 5.1 |
hdinsight | Optimize Hbase Ambari | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/optimize-hbase-ambari.md | Title: Optimize Apache HBase with Apache Ambari in Azure HDInsight description: Use the Apache Ambari web UI to configure and optimize Apache HBase. Previously updated : 10/16/2023 Last updated : 10/17/2024 # Optimize Apache HBase with Apache Ambari in Azure HDInsight |
hdinsight | Apache Spark Run Machine Learning Automl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-run-machine-learning-automl.md | Title: Run Azure Machine Learning workloads on Apache Spark in HDInsight description: Learn how to run Azure Machine Learning workloads with automated machine learning (AutoML) on Apache Spark in Azure HDInsight. Previously updated : 10/16/2023 Last updated : 10/17/2024 # Run Azure Machine Learning workloads with automated machine learning on Apache Spark in HDInsight -Azure Machine Learning simplifies and accelerates the building, training, and deployment of machine learning models. In automated machine learning (AutoML), you start with training data that has a defined target feature. Iterate through combinations of algorithms and feature selections automatically select the best model for your data based on the training scores. HDInsight allows customers to provision clusters with hundreds of nodes. AutoML running on Spark in an HDInsight cluster allows users to use compute capacity across these nodes to run training jobs in a scale-out fashion, and to run multiple training jobs in parallel. It allows users to run AutoML experiments while sharing the compute with their other big data workloads. +Azure Machine Learning simplifies and accelerates the building, training, and deployment of machine learning models. In automated machine learning (AutoML), you start with training data that has a defined target feature. Iterate through combinations of algorithms and feature selections automatically select the best model for your data based on the training scores. HDInsight allows customers to provision clusters with hundreds of nodes. AutoML running on Spark in a HDInsight cluster allows users to use compute capacity across these nodes to run training jobs in a scale-out fashion, and to run multiple training jobs in parallel. It allows users to run AutoML experiments while sharing the compute with their other big data workloads. -## Install Azure Machine Learning on an HDInsight cluster +## Install Azure Machine Learning on a HDInsight cluster For general tutorials of automated machine learning, see [Tutorial: Use automated machine learning to build your regression model](/azure/machine-learning/tutorial-auto-train-models).-All new HDInsight-Spark clusters come pre-installed with AzureML-AutoML SDK. +All new HDInsight-Spark clusters come preinstalled with AzureML-AutoML SDK. > [!Note] > Azure Machine Learning packages are installed into Python3 conda environment. The installed Jupyter Notebook should be run using the PySpark3 kernel. You can use Zeppelin notebooks to use AutoML as well. ## Authentication for workspace -Workspace creation and experiment submission require an authentication token. This token can be generated using an [Microsoft Entra application](../../active-directory/develop/app-objects-and-service-principals.md). An [Microsoft Entra user](/azure/developer/python/sdk/authentication-overview) can also be used to generate the required authentication token, if multi-factor authentication isn't enabled on the account. +Workspace creation and experiment submission require an authentication token. This token can be generated using an [Microsoft Entra application](../../active-directory/develop/app-objects-and-service-principals.md). 
A [Microsoft Entra user](/azure/developer/python/sdk/authentication-overview) can also be used to generate the required authentication token, if multifactor authentication isn't enabled on the account. The following code snippet creates an authentication token using a **Microsoft Entra application**. |
hdinsight | Use Pig | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/use-pig.md | description: Learn how to use Pig with Apache Hadoop on HDInsight. Previously updated : 10/16/2023 Last updated : 10/17/2024 # Use Apache Pig with Apache Hadoop on HDInsight Learn how to use [Apache Pig](https://pig.apache.org/) with HDInsight. -Apache Pig is a platform for creating programs for Apache Hadoop by using a procedural language known as *Pig Latin*. Pig is an alternative to Java for creating *MapReduce* solutions, and it is included with Azure HDInsight. Use the following table to discover the various ways that Pig can be used with HDInsight: +Apache Pig is a platform for creating programs for Apache Hadoop by using a procedural language known as *Pig Latin*. Pig is an alternative to Java for creating *MapReduce* solutions, and it's included with Azure HDInsight. Use the following table to discover the various ways that Pig can be used with HDInsight: ## <a id="why"></a>Why use Apache Pig For more information about Pig Latin, see [Pig Latin Reference Manual 1](https:/ ## <a id="data"></a>Example data -HDInsight provides various example data sets, which are stored in the `/example/data` and `/HdiSamples` directories. These directories are in the default storage for your cluster. The Pig example in this document uses the *log4j* file from `/example/data/sample.log`. +HDInsight provides various example data sets, which are stored in the `/example/data` and `/HdiSamples` directories. These directories are in the default storage for your cluster. The Pig example in this document uses the *Log4j* file from `/example/data/sample.log`. Each log inside the file consists of a line of fields that contains a `[LOG LEVEL]` field to show the type and the severity, for example: |
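The sample lines themselves aren't reproduced here, but you can peek at a few of them from an SSH session on the cluster. This assumes the cluster's default storage exposes the sample data at the path given above:

```bash
# Show the first few log lines, each of which includes a [LOG LEVEL] field.
hdfs dfs -cat /example/data/sample.log | head -n 10
```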
iot-edge | Tutorial Develop For Linux On Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux-on-windows.md | Typically, you'll want to test and debug each module before running it within an > [!WARNING] > Make sure the last line of the template _ENTRYPOINT ["dotnet", "IotEdgeModule1.dll"]_ the name of the DLL matches the name of your IoT Edge module project. - ![Screenshot of setting the Dockerfile template](./media/tutorial-develop-for-linux-on-windows/visual-studio-solution.png) - 1. To establish an SSH connection with the Linux module, we need to create an RSA key. Open an elevated PowerShell session and run the following commands to create a new RSA key. Make sure you save the RSA key under the same IoT Edge module folder, and the name of the key is _id\_rsa_. ```cmd az iot edge set-modules --hub-name my-iot-hub --device-id my-device --content ./ | **Private Key File** | Full path to the id_rsa that created in a previous step | | **Passphrase** | Passphrase used for the key created in a previous step | -- ![Screenshot of how to connect to a remote system](./media/tutorial-develop-for-linux-on-windows/connect-remote-system.png) - 1. After successfully connecting to the module using SSH, then you can choose the process and select Attach. For the C# module you need to choose process dotnet and **Attach to** to Managed (CoreCLR). It may take 10 to 20 seconds the first time. - [ ![Screenshot of how to attach an edge module process.](./media/tutorial-develop-for-linux-on-windows/attach-process.png) ](./media/tutorial-develop-for-linux-on-windows/attach-process.png#lightbox) - 1. Set a breakpoint to inspect the module. * If developing in C#, set a breakpoint in the `PipeMessage()` function in **ModuleBackgroundService.cs**. |
iot-hub-device-update | Device Update Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-groups.md | |
iot-operations | Concept Iot Operations In Layered Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/concept-iot-operations-in-layered-network.md | |
iot-operations | Howto Configure Aks Edge Essentials Layered Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-aks-edge-essentials-layered-network.md | |
iot-operations | Howto Configure L3 Cluster Layered Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-l3-cluster-layered-network.md | |
iot-operations | Howto Configure L4 Cluster Layered Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-l4-cluster-layered-network.md | |
iot-operations | Howto Configure Layered Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-layered-network.md | |
iot-operations | Howto Connect Arc Enabled Servers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-connect-arc-enabled-servers.md | |
iot-operations | Howto Deploy Aks Layered Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-deploy-aks-layered-network.md | |
iot-operations | Overview Layered Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/overview-layered-network.md | |
iot-operations | Howto Configure Tls Manual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-tls-manual.md | Both EC and RSA keys are supported, but all certificates in the chain must use t 1. Create a full server certificate chain, where the order of the certificates matters: the server certificate is the first one in the file, the intermediate is the second. ```bash- cat mqtts-endpoint.crt intermediate_ca.crt > server_chain.pem + cat mqtts-endpoint.crt intermediate_ca.crt > server_chain.crt ``` 1. Create a Kubernetes secret with the server certificate chain and server key using kubectl. |
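A sketch of the kubectl command for that step follows. The secret name, key file name, and namespace are illustrative assumptions, not values taken from the article:

```bash
# Hypothetical names: adjust the secret name, key file, and namespace for your deployment.
kubectl create secret tls broker-server-cert \
    --cert server_chain.crt \
    --key mqtts-endpoint.key \
    --namespace azure-iot-operations
```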
iot | Concepts Manage Device Reconnections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-manage-device-reconnections.md | Title: Manage device reconnections to create resilient applications description: Manage the device connection and reconnection process to ensure resilient applications by using the Azure IoT Hub device SDKs.-+ Previously updated : 04/04/2024 Last updated : 10/17/2024 |
iot | Howto Use Iot Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/howto-use-iot-explorer.md | |
modeling-simulation-workbench | Limits Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/limits-quotas.md | vCPU quotas listed are initial default. More capacity can be requested. | Supports parallel deployment? | Yes | Multiple VMs can be deployed simultaneously. | | Deployment location | Same as workbench. | | -## Storage --| Item | Quota or limit | Notes | -|-|--|| -| Home volume quota | 200-GB limit | For entire volume, shared across all users. | -| `datain` volume | 1-TB limit | | -| `dataout` volume | 1-TB limit | | -| File size limit on data pipeline | 100-GB limit per file | | -| Chamber storage volumes | 4 TB initial, 4-TB increments, 20-TB quota | More quota can be requested. | -| Shared storage volumes | 4 TB initial, 4-TB increments, 20-TB quota | More quota can be requested. | +## Storage and data pipeline ++| Item | Quota or limit | Notes | +|-|-|| +| `datain` volume | 1 TB limit | | +| `dataout` volume | 1 TB limit | | +| File size limit on data pipeline | 100 GB limit per file | See `datain` and `dataout` volume limits. | +| Chamber storage volumes | 4 TB min, up to 20 TB per volume | Default quota can be increased with support request. | +| Shared storage volumes | 4 TB min, up to 20 TB per volume | Default quota can be increased with support request. | ## Networking |
network-watcher | Nsg Flow Logs Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-overview.md | Last updated 09/26/2024 Network security group (NSG) flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a [network security group](../virtual-network/network-security-groups-overview.md). Flow data is sent to Azure Storage from where you can access it and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice. - ## Why use flow logs? It's vital to monitor, manage, and know your own network so that you can protect and optimize it. You need to know the current state of the network, who's connecting, and where users are connecting from. You also need to know which ports are open to the internet, what network behavior is expected, what network behavior is irregular, and when sudden rises in traffic happen. Here's an example format of a version 1 NSG flow log: "time": "2017-02-16T22:00:32.8950000Z", "systemId": "55ff55ff-aa66-bb77-cc88-99dd99dd99dd", "category": "NetworkSecurityGroupFlowEvent",- "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", + "resourceId": "/SUBSCRIPTIONS/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", "properties": { "Version": 1, Here's an example format of a version 1 NSG flow log: "time": "2017-02-16T22:01:32.8960000Z", "systemId": "55ff55ff-aa66-bb77-cc88-99dd99dd99dd", "category": "NetworkSecurityGroupFlowEvent",- "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", + "resourceId": "/SUBSCRIPTIONS/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", "properties": { "Version": 1, Here's an example format of a version 1 NSG flow log: "time": "2017-02-16T22:00:32.8950000Z", "systemId": "55ff55ff-aa66-bb77-cc88-99dd99dd99dd", "category": "NetworkSecurityGroupFlowEvent",- "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", + "resourceId": "/SUBSCRIPTIONS/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", "properties": { "Version": 1, Here's an example format of a version 1 NSG flow log: "time": "2017-02-16T22:01:32.8960000Z", "systemId": "55ff55ff-aa66-bb77-cc88-99dd99dd99dd", "category": "NetworkSecurityGroupFlowEvent",- "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", + "resourceId": "/SUBSCRIPTIONS/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", "properties": { "Version": 1, Here's an example format of a version 1 NSG flow log: "time": "2017-02-16T22:02:32.9040000Z", "systemId": 
"55ff55ff-aa66-bb77-cc88-99dd99dd99dd", "category": "NetworkSecurityGroupFlowEvent",- "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", + "resourceId": "/SUBSCRIPTIONS/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", "properties": { "Version": 1, Here's an example format of a version 2 NSG flow log: "time": "2018-11-13T12:00:35.3899262Z", "systemId": "66aa66aa-bb77-cc88-dd99-00ee00ee00ee", "category": "NetworkSecurityGroupFlowEvent",- "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", + "resourceId": "/SUBSCRIPTIONS/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", "properties": { "Version": 2, Here's an example format of a version 2 NSG flow log: "time": "2018-11-13T12:01:35.3918317Z", "systemId": "66aa66aa-bb77-cc88-dd99-00ee00ee00ee", "category": "NetworkSecurityGroupFlowEvent",- "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", + "resourceId": "/SUBSCRIPTIONS/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/RESOURCEGROUPS/FABRIKAMRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/FABRIAKMVM1-NSG", "operationName": "NetworkSecurityGroupFlowEvents", "properties": { "Version": 2, Here's an example format of a version 2 NSG flow log: ### Log tuple and bandwidth calculation -![Screenshot that shows an example of a flow log tuple.](./media/nsg-flow-logs-overview/tuple.png) -Here's an example bandwidth calculation for flow tuples from a TCP conversation between 185.170.185.105:35370 and 10.2.0.4:23: +Here's an example of bandwidth calculation for flow tuples from a TCP conversation between `203.0.113.105:35370` and `10.0.0.5:443`: -`1493763938,185.170.185.105,10.2.0.4,35370,23,T,I,A,B,,,,` -`1493695838,185.170.185.105,10.2.0.4,35370,23,T,I,A,C,1021,588096,8005,4610880` -`1493696138,185.170.185.105,10.2.0.4,35370,23,T,I,A,E,52,29952,47,27072` +`1708978215,203.0.113.105,10.0.0.5,35370,443,T,I,A,B,,,,` +`1708978215,203.0.113.105,10.0.0.5,35370,443,T,I,A,C,1021,588096,8005,4610880` +`1708978215,203.0.113.105,10.0.0.5,35370,443,T,I,A,E,52,29952,47,27072` For continuation (`C`) and end (`E`) flow states, byte and packet counts are aggregate counts from the time of the previous flow's tuple record. In the example conversation, the total number of packets transferred is 1021+52+8005+47 = 9125. The total number of bytes transferred is 588096+29952+4610880+27072 = 5256000. |
operator-nexus | Concepts Network Fabric Controller | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-network-fabric-controller.md | Title: Azure Operator Nexus Network Fabric Controller description: Overview of Network Fabric Controller for Azure Operator Nexus.--++ Last updated 12/18/2023 |
resource-mover | Support Matrix Move Region Azure Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-azure-vm.md | description: Review support for moving Azure VMs between regions with Azure Reso Previously updated : 03/29/2024 Last updated : 10/16/2024 # Support for moving Azure VMs between Azure regions -> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). - This article summarizes support and prerequisites when you move virtual machines and related network resources across Azure regions using Resource Mover. ## Windows VM support Resource Mover supports Azure VMs running these Linux operating systems. **Operating system** | **Details** | Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,[7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1-CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 8.0, 8.1 Ubuntu 14.04 LTS Server | [Supported kernel versions](#supported-ubuntu-kernel-versions) Ubuntu 16.04 LTS Server | [Supported kernel version](#supported-ubuntu-kernel-versions)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloud-init configuration). Password-based sign-in can be reenabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu of the failed over VM in the Azure portal. Ubuntu 18.04 LTS Server | [Supported kernel version](#supported-ubuntu-kernel-versions). |
sentinel | Enable Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enable-monitoring.md | description: Monitor supported data connectors by using the SentinelHealth data Previously updated : 08/01/2024- Last updated : 10/17/2024+appliesto: Microsoft Sentinel in the Azure portal and the Microsoft Defender portal #Customer intent: As a security engineer, I want to configure auditing and health monitoring for my Microsoft Sentinel resources so that I can ensure the integrity and health of our security infrastructure. To implement the health and audit feature using API (Bicep/AZURE RESOURCE MANAGE ## Turn on auditing and health monitoring for your workspace -1. In Microsoft Sentinel, under the **Configuration** menu on the left, select **Settings**. +To get started, enable auditing and health monitoring from the Microsoft Sentinel settings. -1. Select **Settings** from the banner. +1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Configuration**, select **Settings** > **Settings**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), under **System**, select **Settings** > **Microsoft Sentinel**. -1. Scroll down to the **Auditing and health monitoring** section and select it to expand. +1. Select **Auditing and health monitoring**. 1. Select **Enable** to enable auditing and health monitoring across all resource types and to send the auditing and monitoring data to your Microsoft Sentinel workspace (and nowhere else). Or, select the **Configure diagnostic settings** link to enable health monitoring only for the data collector and/or automation resources, or to configure advanced options, like more places to send the data. + #### [Azure portal](#tab/azure-portal) :::image type="content" source="media/enable-monitoring/enable-health-monitoring.png" alt-text="Screenshot shows how to get to the health monitoring settings."::: + #### [Defender portal](#tab/defender-portal) + :::image type="content" source="media/enable-monitoring/enable-health-monitoring-defender.png" alt-text="Screenshot shows how to get to the health monitoring settings in the Defender portal."::: ++ + If you selected **Enable**, then the button will gray out and change to read **Enabling...** and then **Enabled**. At that point, auditing and health monitoring is enabled, and you're done! The appropriate diagnostic settings were added behind the scenes, and you can view and edit them by selecting the **Configure diagnostic settings** link. 1. If you selected **Configure diagnostic settings**, then in the **Diagnostic settings** screen, select **+ Add diagnostic setting**. The *SentinelHealth* and *SentinelAudit* data tables are created at the first ev ## Verify that the tables are receiving data -In the Microsoft Sentinel **Logs** page, run a query on the *SentinelHealth* table. For example: +Run Kusto Query Language (KQL) queries in the Azure portal or the Defender portal to make sure you're getting health and auditing data. ++1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **General**, select **Logs**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), under **Investigation & response**, select **Hunting** > **Advanced hunting**. ++1. Run a query on the *SentinelHealth* table. For example: ++ ```kusto + _SentinelHealth() + | take 20 + ``` ++1. Run a query on the *SentinelAudit* table. 
For example: -```kusto -_SentinelHealth() - | take 20 -``` + ```kusto + _SentinelAudit() + | take 20 + ``` ## Supported data tables and resource types |
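As an optional complement to the portal steps above, here is a minimal sketch (not from the article) that runs the same `_SentinelHealth()` check programmatically using the `azure-identity` and `azure-monitor-query` Python packages; the workspace ID is a placeholder you would replace with your own Log Analytics workspace GUID.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder: replace with your Log Analytics workspace ID (GUID).
WORKSPACE_ID = "<your-workspace-id>"

client = LogsQueryClient(DefaultAzureCredential())

# Same check as the portal query: confirm the SentinelHealth table has rows.
response = client.query_workspace(
    workspace_id=WORKSPACE_ID,
    query="_SentinelHealth() | take 20",
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```

If no rows come back, the tables may simply not exist yet; they're created at the first health or audit event, as noted above.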
sentinel | Monitor Data Connector Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-data-connector-health.md | description: Use the SentinelHealth data table and the Health Monitoring workboo Previously updated : 02/11/2024 Last updated : 10/17/2024 -+appliesto: Microsoft Sentinel in the Azure portal and the Microsoft Defender portal #Customer intent: As a security analyst, I want to monitor the health and performance of my data connectors so that I can ensure uninterrupted data ingestion and quickly address any issues. The following features allow you to perform this monitoring from within Microsof ## Use the health monitoring workbook -1. From the Microsoft Sentinel portal, select **Content hub** from the **Content management** section of the navigation menu. +To get started, install the **Data collection health monitoring** workbook from the **Content hub** and view or create a copy of the template from the **Workbooks** section of Microsoft Sentinel. ++1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**. 1. In the **Content hub**, enter *health* in the search bar, and select **Data collection health monitoring** from among the results. 1. Select **Install** from the details pane. When you see a notification message that the workbook is installed, or if instead of *Install*, you see *Configuration*, proceed to the next step. -1. Select **Workbooks** from the **Threat management** section of the navigation menu. +1. In Microsoft Sentinel, under **Threat management**, select **Workbooks**. 1. In the **Workbooks** page, select the **Templates** tab, enter *health* in the search bar, and select **Data collection health monitoring** from among the results. |
service-connector | Quickstart Portal Aks Connection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-aks-connection.md | Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure. 1. Select **Next: Authentication** to choose an authentication method. - ### [Workload identity](#tab/UMI) + ### [Workload identity (recommended)](#tab/UMI) Select **Workload identity** to authenticate through [Microsoft Entra workload identity](/entra/workload-id/workload-identities-overview) to one or more instances of an Azure service. Then select a user-assigned managed identity to enable workload identity. Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure. 1. Select **Network View** to see all the service connections in a network topology view. :::image type="content" source="./media/aks-quickstart/list-and-view.png" alt-text="Screenshot of the Azure portal, listing and viewing the connections."::: +## Update your container ++Now that you created a connection between your AKS cluster and target service, you need to retrieve the connection secrets and deploy them in your container. ++1. In the [Azure portal](https://portal.azure.com/), navigate to your AKS cluster resource and select **Service Connector (Preview)**. +1. Select the newly created connection, and then select **YAML snippet**. This action opens a panel displaying a sample YAML file generated by Service Connector. +1. To set the connection secrets as environment variables in your container, you have two options: + + * Directly create a deployment using the YAML sample code snippet provided. The snippet includes highlighted sections showing the secret object that will be injected as the environment variables. Select **Apply** to proceed with this method. ++ :::image type="content" source="media/aks-quickstart/sample-yaml-snippet.png" alt-text="Screenshot of the Azure portal showing the sample YAML snippet to create a new connection in AKS."::: ++ * Alternatively, under **Resource Type**, select **Kubernetes Workload**, and then select an existing Kubernetes workload. This action sets the secret object of your new connection as the environment variables for the selected workload. After selecting the workload, select **Apply**. ++ :::image type="content" source="media/aks-quickstart/kubernetes-snippet.png" alt-text="Screenshot of the Azure portal showing the Kubernetes snippet to create a new connection in AKS."::: + ## Next steps Follow the following tutorials to start connecting to Azure services on AKS cluster with Service Connector. |
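To make the **Kubernetes Workload** option above concrete, here is a hedged sketch using the official `kubernetes` Python client to inject a connection secret into an existing deployment as environment variables, which is the effect the portal applies on your behalf; the namespace, deployment name, and secret name are illustrative placeholders, not values from the article.

```python
from kubernetes import client, config

# Placeholders: replace with your workload and the secret created by Service Connector.
NAMESPACE = "default"
DEPLOYMENT = "my-app"
CONNECTION_SECRET = "sc-my-connection-secret"

config.load_kube_config()  # uses your current kubectl context
apps = client.AppsV1Api()

deployment = apps.read_namespaced_deployment(name=DEPLOYMENT, namespace=NAMESPACE)
container = deployment.spec.template.spec.containers[0]

# Expose every key in the connection secret as an environment variable.
env_from = client.V1EnvFromSource(
    secret_ref=client.V1SecretEnvSource(name=CONNECTION_SECRET)
)
container.env_from = (container.env_from or []) + [env_from]

apps.patch_namespaced_deployment(name=DEPLOYMENT, namespace=NAMESPACE, body=deployment)
```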
site-recovery | Azure Stack Site Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-stack-site-recovery.md | Title: Replicate Azure Stack VMs to Azure using Azure Site Recovery description: Learn how to set up disaster recovery to Azure for Azure Stack VMs with the Azure Site Recovery service. Previously updated : 09/11/2024 Last updated : 10/16/2024 -# Replicate Azure Stack VMs to Azure --> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). +# Replicate Azure Stack VMs to Azure using Azure Site Recovery This article shows you how to set up disaster recovery for Azure Stack VMs to Azure, using the [Azure Site Recovery service](site-recovery-overview.md). Make sure that the VMs are running one of the operating systems summarized in the **Operating system** | **Details** | **64-bit Windows** | Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2 (from SP1)-**CentOS** | 5.2 to 5.11, 6.1 to 6.9, 7.0 to 7.3 **Ubuntu** | 14.04 LTS server, 16.04 LTS server. Review [supported kernels](vmware-physical-azure-support-matrix.md#ubuntu-kernel-versions) ### Prepare for Mobility service installation |
site-recovery | Azure To Azure Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md | Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 07/15/2024 Last updated : 10/16/2024 -> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). - This article summarizes support and prerequisites for disaster recovery of Azure VMs from one Azure region to another, using the [Azure Site Recovery](site-recovery-overview.md) service. ## Deployment method support Windows 7 (x64) with SP1 onwards | From version [9.30](https://support.microsoft **Operating system** | **Details** | Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,[7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609/), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher), [8.6](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), 8.7, 8.8, 8.9, 8.10, 9.0, 9.1, 9.2, 9.3, 9.4 <br> RHEL `9.x` is supported for the [following kernel versions](#supported-kernel-versions-for-red-hat-enterprise-linux-for-azure-virtual-machines).-CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4 (4.18.0-305.30.1.el8_4.x86_64 or later), 8.5 (4.18.0-348.5.1.el8_5.x86_64 or later), 8.6, 8.7. Ubuntu 14.04 LTS Server | Includes support for all 14.04.*x* versions; [Supported kernel versions](#supported-ubuntu-kernel-versions-for-azure-virtual-machines); Ubuntu 16.04 LTS Server | Includes support for all 16.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloud-init configuration). Password-based sign-in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu of the failed over VM in the Azure portal. 
Ubuntu 18.04 LTS Server | Includes support for all 18.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloud-init configuration). Password-based sign-in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu of the failed over VM in the Azure portal. |
site-recovery | Azure To Azure Troubleshoot Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-errors.md | -> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). - This article describes how to troubleshoot common errors in Azure Site Recovery during replication and recovery of [Azure virtual machines](azure-to-azure-tutorial-enable-replication.md) (VM) from one region to another. For more information about supported configurations, see the [support matrix for replicating Azure VMs](azure-to-azure-support-matrix.md). ## Azure resource quota issues (error code 150097) The installer is unable to find the root disk that hosts the root file-system. Perform the following steps to fix this issue. -1. Find the agent bits under the directory _/var/lib/waagent_ on RHEL and CentOS machines using the below command: <br> +1. Find the agent bits under the directory _/var/lib/waagent_ on RHEL machines using the below command: <br> `# find /var/lib/ -name Micro\*.gz` |
site-recovery | Azure Vm Disaster Recovery With Accelerated Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking.md | -> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). - Accelerated Networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the datapath, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types. The following picture shows communication between two VMs with and without accelerated networking: :::image type="content" source="./media/azure-vm-disaster-recovery-with-accelerated-networking/accelerated-networking-benefit.png" alt-text="Screenshot of difference between accelerated and nonaccelerated networking." lightbox="./media/azure-vm-disaster-recovery-with-accelerated-networking/accelerated-networking-benefit.png"::: The following distributions are supported out of the box from the Azure Gallery: * **Ubuntu 16.04** * **SLES 12 SP3** * **RHEL 7.4**-* **CentOS 7.4** * **CoreOS Linux** * **Debian "Stretch" with backports kernel** * **Oracle Linux 7.4** |
site-recovery | Site Recovery Failover To Azure Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-failover-to-azure-troubleshoot.md | -> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). - You may receive one of the following errors while doing failover of a virtual machine to Azure. To troubleshoot, use the described steps for each error condition. ## Failover failed with Error ID 28031 If you're able to connect to the machine using RDP but can't open serial console grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg ``` -* If the machine OS is CentOS 7.*, run the following command on the failover Azure VM with root permissions. Reboot the VM after the command. -- ```console - grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg - ``` - ## Unexpected shutdown message (Event ID 6008) When booting up a Windows VM post failover, if you receive an unexpected shutdown message on the recovered VM, it indicates that a VM shutdown state wasn't captured in the recovery point used for failover. This happens when you recover to a point when the VM hadn't been fully shut down. |
site-recovery | Site Recovery Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new-archive.md | Last updated 12/27/2023 This article contains information on older features and updates in the Azure Site Recovery service. The primary [What's new in Azure Site Recovery](./site-recovery-whats-new.md) article contains the latest updates. +## Updates (February 2023) ++### Update Rollup 66 ++[Update rollup 66](https://support.microsoft.com/en-us/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) provides the following updates: ++**Update** | **Details** + | +**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. +**Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. +**Azure VM disaster recovery** | Added support for Ubuntu 22.04, RHEL 8.7 and CentOS 8.7 Linux distro. +**VMware VM/physical disaster recovery to Azure** | Added support for Ubuntu 22.04, RHEL 8.7 and CentOS 8.7 Linux distro. ++## Updates (November 2022) ++### Update Rollup 65 ++[Update rollup 65](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) provides the following updates: ++**Update** | **Details** + | +**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. +**Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. +**Azure VM disaster recovery** | Added support for Debian 11 and SUSE Linux Enterprise Server 15 SP 4 Linux distro. +**VMware VM/physical disaster recovery to Azure** | Added support for Debian 11 and SUSE Linux Enterprise Server 15 SP 4 Linux distro.<br/><br/> Added Modernized VMware to Azure DR support for government clouds. [Learn more](./replication-appliance-support-matrix.md#allow-urls-for-government-clouds). +++## Updates (October 2022) ++### Update Rollup 64 ++[Update rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) provides the following updates: ++**Update** | **Details** + | +**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. +**Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. +**Azure VM disaster recovery** | Added support for Ubuntu 20.04 Linux distro. +**VMware VM/physical disaster recovery to Azure** | Added support for Ubuntu 20.04 Linux distro.<br/><br/> Modernized experience to enable disaster recovery of VMware virtual machines is now generally available.[Learn more](https://azure.microsoft.com/updates/vmware-dr-ga-with-asr).<br/><br/> Protecting physical machines modernized experience is now supported.<br/><br/> Protecting machines with private endpoint and managed identity enabled is now supported with modernized experience. ++## Updates (August 2022) ++### Update Rollup 63 ++[Update rollup 63](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) provides the following updates: ++**Update** | **Details** + | +**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. +**Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. 
+**Azure VM disaster recovery** | Added support for Oracle Linux 8.6 Linux distro. +**VMware VM/physical disaster recovery to Azure** | Added support for Oracle Linux 8.6 Linux distro.<br/><br/> Introduced the migration capability to move existing replications from classic to modernized experience for disaster recovery of VMware virtual machines, enabled using Azure Site Recovery. [Learn more](move-from-classic-to-modernized-vmware-disaster-recovery.md). +++## Updates (July 2022) ++### Update Rollup 62 ++[Update rollup 62](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) provides the following updates: ++> [!Note] +> - The 9.49 version has not been released for VMware replications to Azure preview experience. ++**Update** | **Details** + | +**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. +**Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. +**Azure VM disaster recovery** | Added support for RHEL 8.6 and CentOS 8.6 Linux distros. +**VMware VM/physical disaster recovery to Azure** | Added support for RHEL 8.6 and CentOS 8.6 Linux distros.<br/><br/> Added support for configuring proxy bypass rules for VMware and Hyper-V replications, using private endpoints.<br/><br/> Added fixes related to various security issues present in the classic experience. +**Hyper-V disaster recovery to Azure** | Added support for configuring proxy bypass rules for VMware and Hyper-V replications, using private endpoints. ++## Updates (March 2022) ++### Update Rollup 61 ++[Update rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) provides the following updates: ++**Update** | **Details** + | +**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. +**Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. +**Azure VM disaster recovery** | Added support for more kernels for Debian 10 and Ubuntu 20.04 Linux distros. <br/><br/> Added public preview support for on-Demand Capacity Reservation integration. +**VMware VM/physical disaster recovery to Azure** | Added support for thin provisioned LVM volumes.<br/><br/> ++## Updates (January 2022) ++### Update Rollup 60 ++[Update rollup 60](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) provides the following updates: ++**Update** | **Details** + | +**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. +**Issue fixes/improvements** | A number of fixes and improvement as detailed in the rollup KB article. +**Azure VM disaster recovery** | Support added for retention points to be available for up to 15 days.<br/><br/>Added support for replication to be enabled on Azure virtual machines via Azure Policy. <br/><br/> Added support for ZRS managed disks when replicating Azure virtual machines. <br/><br/> Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux 8.4 and Red Hat Enterprise Linux 8.5 <br/><br/> +**VMware VM/physical disaster recovery to Azure** | Support added for retention points to be available for up to 15 days.<br/><br/>Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux 8.4 and Red Hat Enterprise Linux 8.5 <br/><br/> ++ ## Updates (November 2021) ### Update Rollup 59 |
site-recovery | Site Recovery Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md | -> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). - The [Azure Site Recovery](site-recovery-overview.md) service is updated and improved on an ongoing basis. To help you stay up-to-date, this article provides you with information about the latest releases, new features, and new content. This page is updated regularly. You can follow and subscribe to Site Recovery update notifications in the [Azure updates](https://azure.microsoft.com/updates/?product=site-recovery) channel. Update [rollup 75](https://support.microsoft.com/topic/update-rollup-75-for-azur **VMware VM/physical disaster recovery to Azure** | Added support for Oracle Linux 8.7 with UEK7 kernel, RHEL 9, Cent OS 9 and Oracle Linux 9 Linux distros. <br> <br/> Added support for Windows Server 2019 as the Azure Site Recovery replication appliance. <br> <br/> Added support for Microsoft Edge to be the default browser in Appliance Configuration Manager. <br> <br/> Added support to select an Availability set or a Proximity Placement group, after enabling replication using modernized VMware/Physical machine replication scenario. -## Updates (February 2023) --### Update Rollup 66 --[Update rollup 66](https://support.microsoft.com/en-us/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) provides the following updates: --**Update** | **Details** - | -**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. -**Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. -**Azure VM disaster recovery** | Added support for Ubuntu 22.04, RHEL 8.7 and CentOS 8.7 Linux distro. -**VMware VM/physical disaster recovery to Azure** | Added support for Ubuntu 22.04, RHEL 8.7 and CentOS 8.7 Linux distro. --## Updates (November 2022) --### Update Rollup 65 --[Update rollup 65](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) provides the following updates: --**Update** | **Details** - | -**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. -**Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. -**Azure VM disaster recovery** | Added support for Debian 11 and SUSE Linux Enterprise Server 15 SP 4 Linux distro. -**VMware VM/physical disaster recovery to Azure** | Added support for Debian 11 and SUSE Linux Enterprise Server 15 SP 4 Linux distro.<br/><br/> Added Modernized VMware to Azure DR support for government clouds. [Learn more](./replication-appliance-support-matrix.md#allow-urls-for-government-clouds). ---## Updates (October 2022) --### Update Rollup 64 --[Update rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) provides the following updates: --**Update** | **Details** - | -**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. -**Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. 
-**Azure VM disaster recovery** | Added support for Ubuntu 20.04 Linux distro. -**VMware VM/physical disaster recovery to Azure** | Added support for Ubuntu 20.04 Linux distro.<br/><br/> Modernized experience to enable disaster recovery of VMware virtual machines is now generally available.[Learn more](https://azure.microsoft.com/updates/vmware-dr-ga-with-asr).<br/><br/> Protecting physical machines modernized experience is now supported.<br/><br/> Protecting machines with private endpoint and managed identity enabled is now supported with modernized experience. --## Updates (August 2022) --### Update Rollup 63 --[Update rollup 63](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) provides the following updates: --**Update** | **Details** - | -**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. -**Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. -**Azure VM disaster recovery** | Added support for Oracle Linux 8.6 Linux distro. -**VMware VM/physical disaster recovery to Azure** | Added support for Oracle Linux 8.6 Linux distro.<br/><br/> Introduced the migration capability to move existing replications from classic to modernized experience for disaster recovery of VMware virtual machines, enabled using Azure Site Recovery. [Learn more](move-from-classic-to-modernized-vmware-disaster-recovery.md). ---## Updates (July 2022) --### Update Rollup 62 --[Update rollup 62](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) provides the following updates: --> [!Note] -> - The 9.49 version has not been released for VMware replications to Azure preview experience. --**Update** | **Details** - | -**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. -**Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. -**Azure VM disaster recovery** | Added support for RHEL 8.6 and CentOS 8.6 Linux distros. -**VMware VM/physical disaster recovery to Azure** | Added support for RHEL 8.6 and CentOS 8.6 Linux distros.<br/><br/> Added support for configuring proxy bypass rules for VMware and Hyper-V replications, using private endpoints.<br/><br/> Added fixes related to various security issues present in the classic experience. -**Hyper-V disaster recovery to Azure** | Added support for configuring proxy bypass rules for VMware and Hyper-V replications, using private endpoints. --## Updates (March 2022) --### Update Rollup 61 --[Update rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) provides the following updates: --**Update** | **Details** - | -**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. -**Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. -**Azure VM disaster recovery** | Added support for more kernels for Debian 10 and Ubuntu 20.04 Linux distros. <br/><br/> Added public preview support for on-Demand Capacity Reservation integration. 
-**VMware VM/physical disaster recovery to Azure** | Added support for thin provisioned LVM volumes.<br/><br/> --## Updates (January 2022) --### Update Rollup 60 --[Update rollup 60](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) provides the following updates: --**Update** | **Details** - | -**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. -**Issue fixes/improvements** | A number of fixes and improvement as detailed in the rollup KB article. -**Azure VM disaster recovery** | Support added for retention points to be available for up to 15 days.<br/><br/>Added support for replication to be enabled on Azure virtual machines via Azure Policy. <br/><br/> Added support for ZRS managed disks when replicating Azure virtual machines. <br/><br/> Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux 8.4 and Red Hat Enterprise Linux 8.5 <br/><br/> -**VMware VM/physical disaster recovery to Azure** | Support added for retention points to be available for up to 15 days.<br/><br/>Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux 8.4 and Red Hat Enterprise Linux 8.5 <br/><br/> -- ## Next steps Keep up-to-date with our updates on the [Azure Updates](https://azure.microsoft.com/updates/?product=site-recovery) page. |
site-recovery | Vmware Azure Install Linux Master Target | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-linux-master-target.md | -> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). - After you fail over your virtual machines to Azure, you can fail back the virtual machines to the on-premises site. To fail back, you need to reprotect the virtual machine from Azure to the on-premises site. For this process, you need an on-premises master target server to receive the traffic. If your protected virtual machine is a Windows virtual machine, then you need a Windows master target. For a Linux virtual machine, you need a Linux master target. Read the following steps to learn how to create and install a Linux master target. Use the following steps to create a retention disk: 3. Format the drive, and then create a file system on the new drive: **mkfs.ext4 /dev/mapper/\<Retention disk's multipath id>**. - ![File system](./media/vmware-azure-install-linux-master-target/image23-centos.png) - 4. After you create the file system, mount the retention disk. ```bash |
site-recovery | Vmware Physical Azure Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md | Windows 7 with SP1 64-bit | Supported from [Update rollup 36](https://support.mi | Linux | Only 64-bit system is supported. 32-bit system isn't supported.<br/><br/>Every Linux server should have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) installed. It is required to boot the server in Azure after test failover/failover. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. <br/><br/> Site Recovery orchestrates failover to run Linux servers in Azure. However Linux vendors might limit support to only distribution versions that haven't reached end-of-life.<br/><br/> On Linux distributions, only the stock kernels that are part of the distribution minor version release/update are supported.<br/><br/> Upgrading protected machines across major Linux distribution versions isn't supported. To upgrade, disable replication, upgrade the operating system, and then enable replication again.<br/><br/> [Learn more](https://support.microsoft.com/help/2941892/support-for-linux-and-open-source-technology-in-azure) about support for Linux and open-source technology in Azure.<br/><br/> Chained IO isn't supported by Site Recovery. Linux Red Hat Enterprise | 5.2 to 5.11</b><br/> 6.1 to 6.10</b> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9 Beta version](https://support.microsoft.com/help/4578241/), [7.9](https://support.microsoft.com/help/4590304/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher), [8.6](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), 8.7, 8.8, 8.9, 8.10, 9.0, 9.1, 9.2, 9.3, 9.4 <br/> Few older kernels on servers running Red Hat Enterprise Linux 5.2-5.11 & 6.1-6.10 don't have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. <br> <br> **Notes**: <br> - Support for Linux Red Hat Enterprise versions `8.9`, `8.10`, `9.0`, `9.1`, `9.2`, `9.3` and `9.4` is only available for Modernized experience and isn't available for Classic experience. 
<br> - RHEL `9.x` is supported for [the following kernel versions](#supported-kernel-versions-for-red-hat-enterprise-linux-for-azure-virtual-machines) |-Linux: CentOS | 5.2 to 5.11</b><br/> 6.1 to 6.10</b><br/> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or later), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or later), 8.6, 8.7 <br/><br/> Few older kernels on servers running CentOS 5.2-5.11 & 6.1-6.10 don't have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. Ubuntu | Ubuntu 14.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions)<br/>Ubuntu 16.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 18.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 20.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) <br> Ubuntu 22.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. </br> (*includes support for all 14.04.*x*, 16.04.*x*, 18.04.*x*, 20.04.*x* versions) Debian | Debian 7/Debian 8 (includes support for all 7. *x*, 8. *x* versions). [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). <br/> Debian 9 (includes support for 9.1 to 9.13. Debian 9.0 isn't supported.). [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). <br/> Debian 10, Debian 11, Debian 12 [(Review supported kernel versions)](#debian-kernel-versions). SUSE Linux | SUSE Linux Enterprise Server 12 SP1, SP2, SP3, SP4, [SP5](https://support.microsoft.com/help/4570609) [(review supported kernel versions)](#suse-linux-enterprise-server-12-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 15, 15 SP1, SP2, SP3, SP4, SP5 [(review supported kernel versions)](#suse-linux-enterprise-server-15-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 11 SP3. [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). 
</br> SUSE Linux Enterprise Server 11 SP4 </br> **Note**: Upgrading replicated machines from SUSE Linux Enterprise Server 11 SP3 to SP4 isn't supported. To upgrade, disable replication and re-enable after the upgrade. <br/> Support for SUSE Linux Enterprise Server 15 SP5 is available for Modernized experience only.| |
site-recovery | Vmware Physical Manage Mobility Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md | -> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). - You set up mobility agent on your server when you use Azure Site Recovery for disaster recovery of VMware VMs and physical servers to Azure. Mobility agent coordinates communications between your protected machine, configuration server/scale-out process server and manages data replication. This article summarizes common tasks for managing mobility agent after it's deployed. >[!TIP] You set up mobility agent on your server when you use Azure Site Recovery for di ## Update mobility service from Azure portal 1. Before you start ensure that the configuration server, scale-out process servers, and any master target servers that are a part of your deployment are updated before you update the Mobility Service on protected machines.- 1. From 9.36 version onwards, for SUSE Linux Enterprise Server 11 SP3, RHEL 5, CentOS 5, Debian 7 ensure the latest installer is [available on the configuration server and scale-out process server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). + 1. From 9.36 version onwards, for SUSE Linux Enterprise Server 11 SP3, RHEL 5, and Debian 7 ensure the latest installer is [available on the configuration server and scale-out process server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). 1. In the portal open the vault > **Replicated items**. 1. If the configuration server is the latest version, you see a notification that reads "New Site recovery replication agent update is available. Click to install." |
site-recovery | Vmware Physical Mobility Service Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md | -> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). - When you set up disaster recovery for VMware virtual machines (VM) and physical servers using [Azure Site Recovery](site-recovery-overview.md), you install the Site Recovery Mobility service on each on-premises VMware VM and physical server. The Mobility service captures data, writes on the machine, and forwards them to the Site Recovery process server. The Mobility service is installed by the Mobility service agent software that you can deploy using the following methods: - [Push installation](#push-installation): When protection is enabled via the Azure portal, Site Recovery installs the Mobility service on the server. Push installation is an integral part of the job that runs from the Azure portal - Ensure that all push installation [prerequisites](vmware-azure-install-mobility-service.md) are met. - Ensure that all server configurations meet the criteria in the [Support matrix for disaster recovery of VMware VMs and physical servers to Azure](vmware-physical-azure-support-matrix.md).-- From 9.36 version onwards, ensure the latest installer for SUSE Linux Enterprise Server 11 SP3, SUSE Linux Enterprise Server 11 SP4, RHEL 5, CentOS 5, Debian 7, Debian 8, Ubuntu 14.04 is [available on the configuration server and scale-out process server](#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server).+- From 9.36 version onwards, ensure the latest installer for SUSE Linux Enterprise Server 11 SP3, SUSE Linux Enterprise Server 11 SP4, RHEL 5, Debian 7, Debian 8, Ubuntu 14.04 is [available on the configuration server and scale-out process server](#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). - From 9.52 version onwards, ensure the latest installer for Debian 9, is [available on the configuration server and scale-out process server](#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). 
The push installation workflow is described in the following sections: On the configuration server, go to the folder _%ProgramData%\ASR\home\svsystems\ Installer file | Operating system (64-bit only) | `Microsoft-ASR_UA_version_Windows_GA_date_release.exe` | Windows Server 2016 </br> Windows Server 2012 R2 </br> Windows Server 2012 </br> Windows Server 2008 R2 SP1 <br> Windows Server 2019 <br> Windows Server 2022-[To be downloaded and placed in this folder manually](#rhel-5-or-centos-5-server) | Red Hat Enterprise Linux (RHEL) 5 </br> CentOS 5 -`Microsoft-ASR_UA_version_RHEL6-64_GA_date_release.tar.gz` | Red Hat Enterprise Linux (RHEL) 6 </br> CentOS 6 -`Microsoft-ASR_UA_version_RHEL7-64_GA_date_release.tar.gz` | Red Hat Enterprise Linux (RHEL) 7 </br> CentOS 7 -`Microsoft-ASR_UA_version_RHEL8-64_GA_date_release.tar.gz` | Red Hat Enterprise Linux (RHEL) 8 </br> CentOS 8 +[To be downloaded and placed in this folder manually](#rhel-5) | Red Hat Enterprise Linux (RHEL) 5 +`Microsoft-ASR_UA_version_RHEL6-64_GA_date_release.tar.gz` | Red Hat Enterprise Linux (RHEL) 6 +`Microsoft-ASR_UA_version_RHEL7-64_GA_date_release.tar.gz` | Red Hat Enterprise Linux (RHEL) 7 +`Microsoft-ASR_UA_version_RHEL8-64_GA_date_release.tar.gz` | Red Hat Enterprise Linux (RHEL) 8 `Microsoft-ASR_UA_version_SLES12-64_GA_date_release.tar.gz` | SUSE Linux Enterprise Server 12 SP1 </br> Includes SP2 and SP3. [To be downloaded and placed in this folder manually](#suse-11-sp3-or-suse-11-sp4-server) | SUSE Linux Enterprise Server 11 SP3 [To be downloaded and placed in this folder manually](#suse-11-sp3-or-suse-11-sp4-server) | SUSE Linux Enterprise Server 11 SP4 As a **prerequisite to update or protect SUSE Linux Enterprise Server 11 SP3 or 1. **For example**, if install path is C:\Program Files (x86)\Microsoft Azure Site Recovery, then the above mentioned directories are 1. C:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository -### RHEL 5 Or CentOS 5 server +### RHEL 5 As a **prerequisite to update or protect RHEL 5 machines** from 9.36 version onwards: 1. Ensure latest mobility agent installer is downloaded from Microsoft Download Center and placed in push installer repository on configuration server and all scale-out process servers-2. [Download](site-recovery-whats-new.md) the latest RHEL 5 or CentOS 5 agent installer. -3. Navigate to Configuration server, copy the RHEL 5 or CentOS 5 agent installer on the path - INSTALL_DIR\home\svsystems\pushinstallsvc\repository +2. [Download](site-recovery-whats-new.md) the latest RHEL 5 agent installer. +3. Navigate to Configuration server, copy the RHEL 5 agent installer on the path - INSTALL_DIR\home\svsystems\pushinstallsvc\repository 1. After copying the latest installer, restart InMage PushInstall service. 1. Now, navigate to associated scale-out process servers, repeat step 3 and step 4. 1. **For example**, if install path is C:\Program Files (x86)\Microsoft Azure Site Recovery, then the above mentioned directories will be |
spring-apps | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/whats-new.md | Azure Spring Apps is improved on an ongoing basis. To help you stay up to date w This article is updated quarterly, so revisit it regularly. You can also visit [Azure updates](https://azure.microsoft.com/updates/?query=azure%20spring), where you can search for updates or browse by category. +## Q3 2024 ++The following updates are now available: ++- **Conveniently access app logs in the Azure portal**: We now offer a more convenient and efficient way to query app logs and do log streaming on the Azure portal. This new approach supplements manually composing queries to fetch application logs from the Log Analytics workspace and accessing the log stream through the Azure CLI. For more information, see the [Stream logs](how-to-log-streaming.md?tabs=azure-portal#stream-logs) section of [Stream Azure Spring Apps application console logs in real time](how-to-log-streaming.md). ++- **Regular infrastructure maintenance in the Enterprise plan**: + - Regular upgrade to keep managed components up-to-date: + - Service Registry: upgraded to 1.3.1. + - Application Configuration Service: upgraded to 2.3.1, including a critical fix of missing content details in the logging for ConfigMap and secret creation. + - Spring Cloud Gateway: upgraded to 2.2.5, including a critical fix for a routing rule persistence issue. + - API Portal: upgraded to 1.5.0. + - App Live View: upgraded to 1.8.0. + - App Accelerator: upgraded to 1.8.1. + - Build service: + - Go buildpack: added support for Go 1.22, deprecated Go 1.20, changed default version from Go 1.20 to Go 1.21. + - NodeJS buildpack: changed default version from Node.js 19 to Node.js 20. + - Java Native Image buildpack: deprecated Java 20, added Java 21. + - PHP buildpack: added PHP 8.3 + - Regular upgrade to keep Azure Kubernetes Service up-to-date: upgraded to 1.29.7. ++- **Regular infrastructure maintenance in the Basic and Standard plans**: + - Regular upgrade to keep managed components up-to-date: + - Config server image: upgraded to 1.0.20240930. + - Eureka server image: upgraded to 1.0.20240930. + - Base image for apps: upgraded to Azure Linux 2.0.20231130. + - Regular upgrade to keep Azure Kubernetes Service up-to-date: upgraded to 1.29.7. + ## Q2 2024 The following updates are now available in the Enterprise plan: |
storage | Container Storage Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-release-notes.md | -This article provides the release notes for Azure Container Storage. It's important to note that minor releases introduce new functionalities in a backward-compatible manner (for example, 1.1.0 GA). Patch releases focus on bug fixes, security updates, and smaller improvements (for example, 1.1.1). +This article provides the release notes for Azure Container Storage. It's important to note that minor releases introduce new functionalities in a backward-compatible manner (for example, 1.1.0 GA). Patch releases focus on bug fixes, security updates, and smaller improvements (for example, 1.1.2). ## Supported versions The following Azure Container Storage versions are supported: | Milestone | Status | |-|-| -|1.1.1- Minor Release | Supported | +|1.1.2- Patch Release | Supported | +|1.1.1- Patch Release | Supported | |1.1.0- General Availability| Supported | ## Unsupported versions The following Azure Container Storage versions are no longer supported: 1.0.6-pr ## Minor vs. patch versions -Minor versions introduce small improvements, performance enhancements, or minor new features without breaking existing functionality. For example, version 1.1.0 would move to 1.2.0. Patch versions are released more frequently than minor versions. They focus solely on bug fixes and security updates. For example, version 1.1.1 would be updated to 1.1.2. +Minor versions introduce small improvements, performance enhancements, or minor new features without breaking existing functionality. For example, version 1.1.0 would move to 1.2.0. Patch versions are released more frequently than minor versions. They focus solely on bug fixes and security updates. For example, version 1.1.2 would be updated to 1.1.3. ++## Version 1.1.2 ++### Improvements and issues that are fixed +- **Bug fixes and performance improvements**: We improved the overall system stability by fixing general bugs and optimizing performance. +- **Security Enhancements**: This release improves security by updating package dependencies and Microsoft container images and improving container image builds to reduce dependencies. +- **Volume attachment fixes**: We also resolved an issue where volumes remained in a published state on nodes that were no longer present in the cluster, causing volume mounts to fail. This fix ensures that volumes are properly detached and reattached, allowing workloads to continue without interruptions. + ## Version 1.1.1 |
synapse-analytics | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md | - Title: Previous monthly updates in Azure Synapse Analytics -description: Archive of the new features and documentation improvements for Azure Synapse Analytics --- Previously updated : 07/21/2023------# What's New in Azure Synapse Analytics Archive --This article describes previous month updates to Azure Synapse Analytics. For the most current month's release, check out [Azure Synapse Analytics latest updates](whats-new.md). Each update links to the Azure Synapse Analytics blog and an article that provides more information. --## Generally available features --The following table lists a past history of the features of Azure Synapse Analytics that have transitioned from preview to general availability (GA). --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| July 2022 | **Apache Spark™ 3.2 for Synapse Analytics** | Apache Spark™ 3.2 for Synapse Analytics is now generally available. Review the [official release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) and [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md). Highlights of what got better in Spark 3.2 in the [Azure Synapse Analytics July Update 2022](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_1).| -| July 2022 | **Apache Spark in Azure Synapse Intelligent Cache feature** | Intelligent Cache for Spark automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md).| -| June 2022 | **Map Data tool** | The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. To learn more about the Map Data tool, read [Map Data in Azure Synapse Analytics](./database-designer/overview-map-data.md).| -| June 2022 | **User Defined Functions** | User defined functions (UDFs) are now generally available. To learn more, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628). | -| May 2022 | **Azure Synapse Data Explorer connector for Power Automate, Logic Apps, and Power Apps** | The Azure Data Explorer connector forΓÇ»Power AutomateΓÇ»enables you to orchestrate and schedule flows, send notifications, and alerts, as part of a scheduled or triggered task. To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/azure/data-explorer/flow) and [Usage examples for Azure Data Explorer connector to Power Automate](/azure/data-explorer/flow-usage). | -| April 2022 | **Cross-subscription restore for Azure Synapse SQL** | With the PowerShell `Az.Sql` module 3.8 update, the [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase) cmdlet can be used for cross-subscription restore of dedicated SQL pools. 
To learn more, see [Blog: Restore a dedicated SQL pool (formerly SQL DW) to a different subscription](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3280185). This feature is now generally available for dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in a Synapse workspace. [What's the difference?](https://aka.ms/dedicatedSQLpooldiff) | -| April 2022 | **Database Designer** | The database designer allows users to visually create databases within Synapse Studio without writing a single line of code. For more information, see [Announcing General Availability of Database Designer](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-general-availability-of-database-designer-amp/ba-p/3294234). Read more about [lake databases](database-designer/concepts-lake-database.md) and learn [How to modify an existing lake database using the database designer](database-designer/modify-lake-database.md).| -| April 2022 | **Database Templates** | New industry-specific database templates were introduced in the [Synapse Database Templates General Availability blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-database-templates-general-availability-and-new-synapse/ba-p/3289790). Learn more about [Database templates](database-designer/concepts-database-templates.md) and [the improved exploration experience](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3295633#TOCREF_5).| -| April 2022 | **Synapse Monitoring Operator RBAC role** | The Synapse Monitoring Operator RBAC (role-based access control) role allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. For more information, review the [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md).| -| March 2022 | **Flowlets** | Flowlets help you design portions of new data flow logic, or extract portions of an existing data flow, and save them as a separate artifact inside your Synapse workspace. You can then reuse these flowlets inside other data flows. To learn more, review the [Flowlets GA announcement blog post](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450) and read [Flowlets in mapping data flow](../data-factory/concepts-data-flow-flowlet.md). | -| March 2022 | **Change Feed connectors** | Changed data capture (CDC) feed data flow source transformations for Azure Cosmos DB, Azure Blob Storage, ADLS Gen1, ADLS Gen2, and Common Data Model (CDM) are now generally available. By simply checking a box, you can tell ADF to manage a checkpoint automatically for you and only read the latest rows that were updated or inserted since the last pipeline run. 
To learn more, review the [Change Feed connectors GA preview blog post](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450) and read [Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-azure-data-lake-storage.md).| -| March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools and dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022. | -| March 2022 | **Synapse Spark Common Data Model (CDM) connector** | The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md). | -| November 2021 | **PREDICT** | The T-SQL [PREDICT](/sql/t-sql/queries/predict-transact-sql) syntax is now generally available for dedicated SQL pools. Get started with the [Machine learning model scoring wizard for dedicated SQL pools](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md).| -| October 2021 | **Synapse RBAC Roles** | [Synapse role-based access control (RBAC) roles are now generally available](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#synapse-rbac). Learn more about [Synapse RBAC roles](./security/synapse-workspace-synapse-rbac-roles.md) and [Azure Synapse role-based access control (RBAC) using PowerShell](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/retrieve-azure-synapse-role-based-access-control-rbac/ba-p/3466419#:~:text=Synapse%20RBAC%20is%20used%20to%20manage%20who%20can%3A,job%20execution%2C%20review%20job%20output%2C%20and%20execution%20logs.).| --## Community --This section is an archive of Azure Synapse Analytics community opportunities and the [Azure Synapse Influencer program](https://aka.ms/synapseinfluencers) from Microsoft. --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| May 2022 | **Azure Synapse Influencer program** | Sign up for our free [Azure Synapse Influencer program](https://aka.ms/synapseinfluencers) and get connected with a community of Synapse-users who are dedicated to helping others achieve more with cloud analytics. Register now for our next [Synapse Influencer Ask the Experts session](https://aka.ms/synapseinfluencers/#events). It's free to attend and everyone is welcome to participate and join the discussion on Synapse-related topics. You can [watch past recorded Ask the Experts events](https://aka.ms/ATE-RecordedSessions) on the [Azure Synapse YouTube channel](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g). | -| March 2022 | **Azure Synapse Analytics and Microsoft MVP YouTube video series** | A joint activity with the Azure Synapse product team and the Microsoft MVP community, a new [YouTube MVP Video Series about the Azure Synapse features](https://www.youtube.com/playlist?list=PLzUAjXZBFU9MEK2trKw_PGk4o4XrOzw4H) has launched. 
See more at the [Azure Synapse Analytics YouTube channel](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g).| --## Apache Spark for Azure Synapse Analytics --This section is an archive of features and capabilities of [Apache Spark for Azure Synapse Analytics](spark/apache-spark-overview.md). --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| May 2022 | **Azure Synapse dedicated SQL pool connector for Apache Spark now available in Python** | Previously, the [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md) was only available using Scala. Now, [the dedicated SQL pool connector for Apache Spark can be used with Python on Spark 3](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_6). | -| May 2022 | **Manage Azure Synapse Apache Spark configuration** | With the new [Apache Spark configurations](./spark/apache-spark-azure-create-spark-configuration.md) feature, you can create a standalone Spark configuration artifact with auto-suggestions and built-in validation rules. The Spark configuration artifact allows you to share your Spark configuration within and across Azure Synapse workspaces. You can also easily associate your Spark configuration with a Spark pool, a Notebook, and a Spark job definition for reuse and minimize the need to copy the Spark configuration in multiple places. | -| April 2022 | **Apache Spark 3.2 for Synapse Analytics** | Apache Spark 3.2 for Synapse Analytics with preview availability. Review the [official Spark 3.2 release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) and [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md). | -| April 2022 | **Parameterization for Spark job definition** | You can now assign parameters dynamically based on variables, metadata, or specifying Pipeline specific parameters for the Spark job definition activity. For more details, read [Transform data using Apache Spark job definition](quickstart-transform-data-using-spark-job-definition.md#settings-tab). | -| April 2022 | **Apache Spark notebook snapshot** | You can access a snapshot of the Notebook when there's a Pipeline Notebook run failure or when there's a long-running Notebook job. To learn more, read [Transform data by running a Synapse notebook](synapse-notebook-activity.md?tabs=classical#see-notebook-activity-run-history) and [Introduction to Microsoft Spark utilities](./spark/microsoft-spark-utilities.md?pivots=programming-language-scala#reference-a-notebook-1). | -| March 2022 | **Synapse Spark Common Data Model (CDM) connector** | The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md). | -| March 2022 | **Performance optimization for Synapse Spark dedicated SQL pool connector** | New improvements to the [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](spark/synapse-spark-sql-pool-import-export.md) reduce data movement and leverage `COPY INTO`. 
Performance tests indicated at least ~5x improvement over the previous version. No action is required from the user to leverage these enhancements. For more information, see [Blog: Synapse Spark Dedicated SQL Pool (DW) Connector: Performance Improvements](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_10).| -| March 2022 | **Support for all Spark Dataframe SaveMode choices** | The [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](spark/synapse-spark-sql-pool-import-export.md) now supports all four Spark Dataframe SaveMode choices: Append, Overwrite, ErrorIfExists, Ignore. For more information on Spark SaveMode, read the [official Apache Spark documentation](https://archive.apache.org/dist/spark/docs/1.6.0/api/java/org/apache/spark/sql/SaveMode.html?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). | -| March 2022 | **Apache Spark in Azure Synapse Analytics Intelligent Cache feature** | Intelligent Cache for Spark automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more on this preview feature, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md) or see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_12).| --## Data integration --This section is an archive of features and capabilities of Azure Synapse Analytics data integration. Learn how to [Load data into Azure Synapse Analytics using Azure Data Factory (ADF) or a Synapse pipeline](../data-factory/load-azure-sql-data-warehouse.md). --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| June 2022 | **SAP CDC connector preview** | A new data connector for SAP Change Data Capture (CDC) is now available in preview. For more information, see [Announcing Public Preview of the SAP CDC solution in Azure Data Factory and Azure Synapse Analytics](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-the-sap-cdc-solution-in-azure-dat).| -| June 2022 | **Fuzzy join option in Join Transformation** | Fuzzy matching with a similarity threshold score slider has been added to the [Join transformation in Mapping Data Flows](../data-factory/data-flow-join.md). | -| June 2022 | **Map Data tool GA** | We're excited to announce that the [Map Data tool](./database-designer/overview-map-data.md) is now Generally Available. The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. | -| June 2022 | **Rerun pipeline with new parameters** | You can now change pipeline parameters when rerunning a pipeline from the Monitoring page without having to return to the pipeline editor. To learn more, read [Rerun pipelines and activities](../data-factory/monitor-visually.md#rerun-pipelines-and-activities).| -| June 2022 | **User Defined Functions GA** | [User defined functions (UDFs) in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628) are now generally available (GA). 
| -| May 2022 | **Export pipeline monitoring as a CSV** | The ability to [export pipeline monitoring to CSV and other monitoring improvements](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531) has been introduced to ADF. | -| May 2022 | **Automatic incremental source data loading from PostgreSQL and MySQL** | Automatic [incremental source data loading from PostgreSQL and MySQL](../data-factory/tutorial-incremental-copy-overview.md) to Synapse SQL and Azure Database is now natively available in ADF. | -| May 2022 | **Assert transformation error handling** | Error handling has now been added to sinks following an [assert transformation in mapping data flow](../data-factory/data-flow-assert.md). You can now choose whether to output the failed rows to the selected sink or to a separate file. | -| May 2022 | **Mapping data flows projection editing** | In mapping data flows, you can now [update source projection column names and column types](../data-factory/data-flow-source.md). | -| April 2022 | **Dataverse connector for Synapse Data Flows** | Dataverse is now a source and sink connector to Synapse Data Flows. You can [Copy and transform data from Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-dynamics-crm-office-365.md?tabs=data-factory).| -| April 2022 | **Configurable Synapse Pipelines Web activity response timeout** | With the response timeout property `httpRequestTimeout`, you can [define a timeout for the HTTP request up to 10 minutes](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/web-activity-response-timeout-improvement/ba-p/3260307). Web activities work exceptionally well with APIs that follow [the asynchronous request-reply pattern](/azure/architecture/patterns/async-request-reply), a suggested approach for building scalable web APIs/services. | -| March 2022 | **SFTP connector for Synapse data flows** | A native SFTP connector in Synapse data flows is supported to read and write data from SFTP using the visual low-code data flows interface in Synapse. To learn more, see [Copy and transform data in SFTP server using Azure Data Factory or Azure Synapse Analytics](../data-factory/connector-sftp.md).| -| March 2022 | **Data flow improvements to Data Preview** | Review features added to the [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). | -| March 2022 | **Pipeline script activity** | You can now [Transform data by using the Script activity](../data-factory/transform-data-using-script.md) to invoke SQL commands to perform both DDL and DML. | -| December 2021 | **Custom partitions for Synapse link for Azure Cosmos DB** | Improve query execution times for your Spark queries by creating custom partitions based on fields frequently used in your queries. To learn more, see [Custom partitioning in Azure Synapse Link for Azure Cosmos DB (Preview)](/azure/cosmos-db/custom-partitioning-analytical-store). | --## Database Templates & Database Designer --This section is an archive of features and capabilities of [database templates](./database-designer/overview-database-templates.md) and [the database designer](database-designer/quick-start-create-lake-database.md). 
--|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| April 2022 | **Database Designer** | The database designer allows users to visually create databases within Synapse Studio without writing a single line of code. For more information, see [Announcing General Availability of Database Designer](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-general-availability-of-database-designer-amp/ba-p/3294234). Read more about [lake databases](database-designer/concepts-lake-database.md) and learn [How to modify an existing lake database using the database designer](database-designer/modify-lake-database.md).| -| April 2022 | **Database Templates** | New industry-specific database templates were introduced in the [Synapse Database Templates General Availability blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-database-templates-general-availability-and-new-synapse/ba-p/3289790). Learn more about [Database templates](database-designer/concepts-database-templates.md) and [the improved exploration experience](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3295633#TOCREF_5).| -| April 2022 | **Clone lake database** | In Synapse Studio, you can now clone a database using the action menu available on the lake database. To learn more, read [How-to: Clone a lake database](./database-designer/clone-lake-database.md). | -| April 2022 | **Use wildcards to specify custom folder hierarchies** | Lake databases sit on top of data that is in the lake and this data can live in nested folders that don't fit into clean partition patterns. You can now use wildcards to specify custom folder hierarchies. To learn more, read [How-to: Modify a datalake](./database-designer/modify-lake-database.md). | -| January 2022 | **New database templates** | Learn more about new industry-specific [Automotive, Genomics, Manufacturing, and Pharmaceuticals templates](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/four-additional-azure-synapse-database-templates-now-available/ba-p/3058044) and get started with [database templates](./database-designer/overview-database-templates.md) in the Synapse Studio gallery. | --## Developer experience --This section is an archive of quality of life and feature improvements for [developers in Azure Synapse Analytics](sql/develop-overview.md). --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| May 2022 | **Updated Azure Synapse Analyzer Report** | Learn about the new features in [version 2.0 of the Synapse Analyzer report](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/updated-synapse-analyzer-report-workload-management-and-ability/ba-p/3580269).| -| April 2022 | **Azure Synapse Analyzer Report** | The [Azure Synapse Analyzer Report](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analyzer-report-to-monitor-and-improve-azure/ba-p/3276960) helps you identify common issues that may be present in your database that can lead to performance issues.| -| April 2022 | **Reference unpublished notebooks** | Now, when using %run notebooks, you can [enable 'unpublished notebook reference'](spark/apache-spark-development-using-notebooks.md#reference-unpublished-notebook), which will allow you to reference unpublished notebooks. 
When enabled, notebook run will fetch the current contents in the notebook web cache, meaning the changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). | -| March 2022 | **Code cells with exception to show standard output**| Now in Synapse notebooks, both standard output and exception messages are shown when a code statement fails for Python and Scala languages. For examples, see [Synapse notebooks: Code cells with exception to show standard output](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1).| -| March 2022 | **Partial output is available for running notebook code cells** | Now in Synapse notebooks, you can see anything you write (with `println` commands, for example) as the cell executes, instead of waiting until it ends. For examples, see [Synapse notebooks: Partial output is available for running notebook code cells ](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1).| -| March 2022 | **Dynamically control your Spark session configuration with pipeline parameters** | Now in Synapse notebooks, you can use pipeline parameters to configure the session with the notebook %%configure magic. For examples, see [Synapse notebooks: Dynamically control your Spark session configuration with pipeline parameters](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_2).| -| March 2022 | **Reuse and manage notebook sessions** | Now in Synapse notebooks, it's easy to reuse an active session conveniently without having to start a new one and to see and manage your active sessions in the **Active sessions** list. To view your sessions, select the 3 dots in the notebook and select **Manage sessions.** For examples, see [Synapse notebooks: Reuse and manage notebook sessions](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_3).| -| March 2022 | **Support for Python logging** | Now in Synapse notebooks, anything written through the Python logging module is captured, in addition to the driver logs. For examples, see [Synapse notebooks: Support for Python logging](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_4).| --## Machine Learning --This section is an archive of features and improvements to machine learning models in Azure Synapse Analytics. --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| June 2022 | **Distributed Deep Neural Network Training (preview)** | The Azure Synapse runtime also includes supporting libraries like Petastorm and Horovod, which are commonly used for distributed training. This feature is currently available in preview. The Azure Synapse Analytics runtime for Apache Spark 3.1 and 3.2 also now includes support for the most common deep learning libraries like TensorFlow and PyTorch. To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md). | -| November 2021 | **PREDICT** | The T-SQL [PREDICT](/sql/t-sql/queries/predict-transact-sql) syntax is now generally available for dedicated SQL pools. 
Get started with the [Machine learning model scoring wizard for dedicated SQL pools](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md).| --## Samples and guidance --This section is an archive of guidance and sample project resources for Azure Synapse Analytics. --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure AI services](/azure/ai-services/) models, AI models from partners, and bring-your-own-data models. | -| June 2022 | **Migration guides for Oracle** | A new Microsoft-authored migration guide for Oracle to Azure Synapse Analytics is now available. [Design and performance for Oracle migrations](migration-guides/oracle/1-design-performance-migration.md). | -| June 2022 | **Azure Synapse success by design** | The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. | -| June 2022 | **Migration guides for Teradata** | A new Microsoft-authored migration guide for Teradata to Azure Synapse Analytics is now available. [Design and performance for Teradata migrations](migration-guides/teradat). | -| June 2022 | **Migration guides for IBM Netezza** | A new Microsoft-authored migration guide for IBM Netezza to Azure Synapse Analytics is now available. [Design and performance for IBM Netezza migrations](migration-guides/netezz). | --## Security --This section is an archive of security features and settings in Azure Synapse Analytics. --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| April 2022 | **Synapse Monitoring Operator RBAC role** | The Synapse Monitoring Operator role-based access control (RBAC) role allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. For more information, review the [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md).| -| March 2022 | **Enforce minimal TLS version** | You can now raise or lower the minimum TLS version for dedicated SQL pools in Synapse workspaces. To learn more, see [Azure SQL connectivity settings](/azure/azure-sql/database/connectivity-settings#minimal-tls-version). The [workspace managed SQL API](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update) can be used to modify the minimum TLS settings.| -| March 2022 | **Azure Synapse Analytics now supports Azure Active Directory (Azure AD) only authentication** | You can now use Azure Active Directory authentication to centrally manage access to all Azure Synapse resources, including SQL pools. You can [disable local authentication](sql/active-directory-authentication.md#disable-local-authentication) upon creation or after a workspace is created through the Azure portal.| -| December 2021 | **User-Assigned managed identities** | Now you can use user-assigned managed identities in linked services for authentication in Synapse Pipelines and Dataflows. 
To learn more, see [Credentials in Azure Data Factory and Azure Synapse](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory).| -| December 2021 | **Browse ADLS Gen2 folders in the Azure Synapse Analytics workspace** | You can now [browse and secure an Azure Data Lake Storage Gen2 (ADLS Gen2) container or folder](how-to-access-container-with-access-control-lists.md) in your Azure Synapse Analytics workspace by connecting to a specific container or folder in Synapse Studio.| -| December 2021 | **TLS 1.2 enforced for new Synapse Workspaces** | Starting in December 2021, [a requirement for TLS 1.2](security/connectivity-settings.md#minimal-tls-version) has been implemented for new Synapse Workspaces only. | --## Azure Synapse Data Explorer --Azure Data Explorer (ADX) is a fast and highly scalable data exploration service for log and telemetry data. It offers ingestion from Event Hubs, IoT Hubs, blobs written to blob containers, and Azure Stream Analytics jobs. This section is an archive of features and capabilities of [the Azure Synapse Data Explorer](data-explorer/data-explorer-overview.md) and [the Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/). Read more about [What is the difference between Azure Synapse Data Explorer and Azure Data Explorer? (Preview)](data-explorer/data-explorer-compare.md) --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| June 2022 | **Web Explorer new homepage** | The new Azure Synapse [Web Explorer homepage](https://dataexplorer.azure.com/home) makes it even easier to get started with Synapse Web Explorer. | -| June 2022 | **Web Explorer sample gallery** | The [Web Explorer sample gallery](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552) provides end-to-end samples of how customers leverage Synapse Data Explorer popular use cases such as Logs Data, Metrics Data, IoT data and Basic big data examples. | -| June 2022 | **Web Explorer dashboards drill through capabilities** | You can now [use drillthroughs as parameters in your Synapse Web Explorer dashboards](/azure/data-explorer/dashboard-parameters#use-drillthroughs-as-dashboard-parameters). | -| June 2022 | **Time Zone settings for Web Explorer** | The [Time Zone settings of the Web Explorer](/azure/data-explorer/web-query-data#change-datetime-to-specific-time-zone) now apply to both the Query results and to the Dashboard. By changing the time zone, the dashboards will be automatically refreshed to present the data with the selected time zone. | -| May 2022 | **Synapse Data Explorer live query in Excel** | Using the [new Data Explorer web experience Open in Excel feature](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/open-live-kusto-query-in-excel/ba-p/3198500), you can now provide access to live results of your query by sharing the connected Excel Workbook with colleagues and team members. You can open the live query in an Excel Workbook and refresh it directly from Excel to get the most up to date query results. To create an Excel Workbook connected to Synapse Data Explorer, [start by running a query in the Web experience](https://aka.ms/adx.help.livequery). | -| May 2022 | **Use Managed Identities for external SQL Server tables** | With Managed Identity support, Synapse Data Explorer table definition is now simpler and more secure. 
You can now [use managed identities](/azure/data-explorer/managed-identities-overview) instead of entering your credentials. To learn more about external tables, read [Create and alter SQL Server external tables](/azure/data-explorer/kusto/management/external-sql-tables).| -| May 2022 | **Azure Synapse Data Explorer connector for Microsoft Power Automate, Logic Apps, and Power Apps** | New Azure Data Explorer connectors for Power Automate are generally available (GA). To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/azure/data-explorer/flow), the [Microsoft Logic App and Azure Data Explorer](/azure/data-explorer/kusto/tools/logicapps), and the ability to [Create Power Apps application to query data in Azure Data Explorer](/azure/data-explorer/power-apps-connector). | -| May 2022 | **Dynamic events routing from event hub to multiple databases** | We now support [routing events data from Azure Event Hub/Azure IoT Hub/Azure Event Grid to multiple databases](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_15) hosted in a single ADX cluster. To learn more about dynamic routing, read [Ingest from event hub](/azure/data-explorer/ingest-data-event-hub-overview#events-routing). | -| May 2022 | **Configure a database using a KQL inline script as part of JSON ARM deployment template** | Running a [Kusto Query Language (KQL) script to configure your database](/azure/data-explorer/database-script) can now be done using a script provided inline as a parameter to a JSON ARM template. | --## Azure Synapse Link --Azure Synapse Link is an automated system for replicating data from [SQL Server or Azure SQL Database](synapse-link/sql-synapse-link-overview.md), [Azure Cosmos DB](/azure/cosmos-db/synapse-link?context=%2fazure%2fsynapse-analytics%2fcontext%2fcontext), or [Dataverse](/power-apps/maker/data-platform/export-to-data-lake?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext) into Azure Synapse Analytics. This section is an archive of news about the Azure Synapse Link feature. --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| May 2022 | **Azure Synapse Link for SQL preview** | Azure Synapse Link for SQL is in preview for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. The [Azure Synapse Link for SQL preview has been announced](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986). For more information, see [Blog: Azure Synapse Link for SQL Deep Dive](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-link-for-sql-deep-dive/ba-p/3567645).| --## Synapse SQL --This section is an archive of improvements and features in SQL pools in Azure Synapse Analytics. --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| June 2022 | **Result set size limit increase** | The [maximum size of query result sets](./sql/resources-self-help-sql-on-demand.md?tabs=x80070002#constraints) in serverless SQL pools has been increased from 200 GB to 400 GB. 
| -| May 2022 | **Automatic character column length calculation for serverless SQL pools** | It's no longer necessary to define character column lengths for serverless SQL pools in the data lake. You can get optimal query performance [without having to define the schema](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-may-update-2022/ba-p/3430970#TOCREF_4), because the serverless SQL pool will use automatically calculated average column lengths and cardinality estimation. | -| April 2022 | **Cross-subscription restore for Azure Synapse SQL GA** | With the PowerShell `Az.Sql` module 3.8 update, the [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase) cmdlet can be used for cross-subscription restore of dedicated SQL pools. To learn more, see [Restore a dedicated SQL pool to a different subscription](sql-data-warehouse/sql-data-warehouse-restore-active-paused-dw.md#restore-an-existing-dedicated-sql-pool-formerly-sql-dw-to-a-different-subscription-through-powershell). This feature is now generally available for dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in a Synapse workspace. [What's the difference?](https://aka.ms/dedicatedSQLpooldiff)| -| April 2022 | **Recover SQL pool from dropped server or workspace** | With the PowerShell Restore cmdlets in `Az.Sql` and `Az.Synapse` modules, you can now restore from a deleted server or workspace without filing a support ticket. For more information, see [Restore a dedicated SQL pool from a deleted Azure Synapse workspace](backuprestore/restore-sql-pool-from-deleted-workspace.md) or [Restore a standalone dedicated SQL pool (formerly SQL DW) from a deleted server](backuprestore/restore-sql-pool-from-deleted-workspace.md), depending on your scenario. | -| March 2022 | **Column level encryption for dedicated SQL pools** | [Column level encryption](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true) is now generally available for use on new and existing Azure SQL logical servers with Azure Synapse dedicated SQL pools and dedicated SQL pools in Azure Synapse workspaces. SQL Server Data Tools (SSDT) support for column level encryption for the dedicated SQL pools is available starting with the 17.2 Preview 2 build of Visual Studio 2022.| -| March 2022 | **Parallel execution for CETAS** | Better performance for [CREATE TABLE AS SELECT](sql/develop-tables-cetas.md) (CETAS) and subsequent SELECT statements is now made possible by the use of parallel execution plans. For examples, see [Better performance for CETAS and subsequent SELECTs](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_7).| --## Previous monthly updates in Azure Synapse Analytics --What follows is the previous format of monthly news updates for Synapse Analytics. --## June 2022 update --### General --* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure AI services](/azure/ai-services/) models, AI models from partners, and bring-your-own-data models. 
--* **Azure Synapse success by design** - Project success is no accident and requires careful planning and execution. The Synapse Analytics Success by Design playbooks are now available. The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. These guides contain best practices from the most challenging and complex solution implementations incorporating Azure Synapse. To learn more about the Azure Synapse proof of concept playbook, read [Success by Design](./guidance/success-by-design-introduction.md). --### SQL --**Result set size limit increase** - We know that you turn to Azure Synapse Analytics to work with large amounts of data. With that in mind, the maximum size of query result sets in Serverless SQL pools has been increased from 200 GB to 400 GB. This limit is shared between concurrent queries. To learn more about this size limit increase and other constraints, read [Self-help for serverless SQL pool](./sql/resources-self-help-sql-on-demand.md?tabs=x80070002#constraints). --### Synapse data explorer --* **Web Explorer new homepage** - The new Synapse Web Explorer homepage makes it even easier to get started with Synapse Web Explorer. The [Web Explorer homepage](https://dataexplorer.azure.com/home) now includes the following sections: -- * Get started – Sample gallery offering example queries and dashboards for popular Synapse Data Explorer use cases. - * Recommended – Popular learning modules designed to help you master Synapse Web Explorer and KQL. - * Documentation – Synapse Web Explorer basic and advanced documentation. --* **Web Explorer sample gallery** - A great way to learn about a product is to see how it is being used by others. The Web Explorer sample gallery provides end-to-end samples of how customers leverage Synapse Data Explorer popular use cases such as Logs Data, Metrics Data, IoT data and Basic big data examples. Each sample includes the dataset, well-documented queries, and a sample dashboard. To learn more about the sample gallery, read [Azure Data Explorer in 60 minutes with the new samples gallery](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552). --* **Web Explorer dashboards drill through capabilities** - You can now add drill through capabilities to your Synapse Web Explorer dashboards. The new drill through capabilities allow you to easily jump back and forth between dashboard pages. This is made possible by using a contextual filter to connect your dashboards. Defining these contextual drill throughs is done by editing the visual interactions of the selected tile in your dashboard. To learn more about drill through capabilities, read [Use drillthroughs as dashboard parameters](/azure/data-explorer/dashboard-parameters#use-drillthroughs-as-dashboard-parameters). --* **Time Zone settings for Web Explorer** - Being able to display data in different time zones is very powerful. You can now decide to view the data in UTC time, your local time zone, or the time zone of the monitored device/machine. The Time Zone settings of the Web Explorer now apply to both the Query results and to the Dashboard. By changing the time zone, the dashboards will be automatically refreshed to present the data with the selected time zone. 
For more information on time zone settings, read [Change datetime to specific time zone](/azure/data-explorer/web-query-data#change-datetime-to-specific-time-zone). --### Data integration --* **Fuzzy Join option in Join Transformation** - Fuzzy matching with a sliding similarity score option has been added to the Join transformation in Mapping Data Flows. You can create inner and outer joins on data values that are similar rather than exact matches! Previously, you would have had to use an exact match. The sliding scale value goes from 60% to 100%, making it easy to adjust the similarity threshold of the match. To learn more about fuzzy joins, read [Join transformation in mapping data flow](../data-factory/data-flow-join.md). --* **Map Data [Generally Available]** - We're excited to announce that the Map Data tool is now Generally Available. The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. To learn more about Map Data, read [Map Data in Azure Synapse Analytics](./database-designer/overview-map-data.md). --* **Rerun pipeline with new parameters** - You can now change pipeline parameters when rerunning a pipeline from the Monitoring page without having to return to the pipeline editor. After running a pipeline with new parameters, you can easily monitor the new run against the old ones without having to toggle between pages. To learn more about rerunning pipelines with new parameters, read [Rerun pipelines and activities](../data-factory/monitor-visually.md#rerun-pipelines-and-activities). --* **User Defined Functions [Generally Available]** - We're excited to announce that user defined functions (UDFs) are now Generally Available. With user-defined functions, you can create customized expressions that can be reused across multiple mapping data flows. You no longer have to use the same string manipulation, math calculations, or other complex logic several times. User-defined functions will be grouped in libraries to help developers group common sets of functions. To learn more about user defined functions, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628). --### Machine learning --**Distributed Deep Neural Network Training with Horovod and Petastorm [Public Preview]** - To simplify the process for creating and managing GPU-accelerated pools, Azure Synapse takes care of pre-installing low-level libraries and setting up all the complex networking requirements between compute nodes. This integration allows users to get started with GPU-accelerated pools within just a few minutes. --Now, Azure Synapse Analytics provides built-in support for deep learning infrastructure. The Azure Synapse Analytics runtime for Apache Spark 3.1 and 3.2 now includes support for the most common deep learning libraries like TensorFlow and PyTorch. The Azure Synapse runtime also includes supporting libraries like Petastorm and Horovod, which are commonly used for distributed training. This feature is currently available in Public Preview. --To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md). --## May 2022 update --The following updates are new to Azure Synapse Analytics this month. 
--### General --**Get connected with the new Azure Synapse Influencer program!** [Join a community of Azure Synapse Influencers](https://aka.ms/synapseinfluencers) who are helping each other achieve more with cloud analytics! The Azure Synapse Influencer program recognizes Azure Synapse Analytics users and advocates who actively support the community by sharing Synapse-related content, announcements, and product news via social media. --### SQL --* **Data Warehouse Migration guide for Dedicated SQL Pools in Azure Synapse Analytics** - With the benefits that cloud migration offers, we hear that you often look for steps, processes, or guidelines to follow for quick and easy migrations from existing data warehouse environments. We just released a set of [Data Warehouse migration guides](./migration-guides/index.yml) to make your transition to dedicated SQL Pools in Azure Synapse Analytics easier. --* **Automatic character column length calculation** - It's no longer necessary to define character column lengths! Serverless SQL pools let you query files in the data lake without knowing the schema upfront. The best practice was to specify the lengths of character columns to get optimal performance. Not anymore! With this new feature, you can get optimal query performance without having to define the schema. The serverless SQL pool will calculate the average column length for each inferred character column or character column defined as larger than 100 bytes. The schema will stay the same, while the serverless SQL pool will use the calculated average column lengths internally. It will also automatically calculate the cardinality estimation in case there was no previously created statistic. --### Apache Spark for Synapse --* **Azure Synapse Dedicated SQL Pool Connector for Apache Spark Now Available in Python** - Previously, the Azure Synapse Dedicated SQL Pool connector was only available using Scala. Now, it can be used with Python on Spark 3. The only difference between the Scala and Python implementations is the optional Scala callback handle, which allows you to receive post-write metrics. -- The following are now supported in Python on Spark 3: -- * Read using Azure Active Directory (AD) Authentication or Basic Authentication - * Write to Internal Table using Azure AD Authentication or Basic Authentication - * Write to External Table using Azure AD Authentication or Basic Authentication -- To learn more about the connector in Python, read [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md). --* **Manage Azure Synapse Apache Spark configuration** - Apache Spark configuration management is always a challenging task because Spark has hundreds of properties. It is also challenging for you to know the optimal value for Spark configurations. With the new Spark configuration management feature, you can create a standalone Spark configuration artifact with auto-suggestions and built-in validation rules. The Spark configuration artifact allows you to share your Spark configuration within and across Azure Synapse workspaces. You can also easily associate your Spark configuration with a Spark pool, a Notebook, and a Spark job definition for reuse and minimize the need to copy the Spark configuration in multiple places. To learn more about the new Spark configuration management feature, read [Manage Apache Spark configuration](./spark/apache-spark-azure-create-spark-configuration.md). 
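To make the dedicated SQL pool connector's new Python availability described above more concrete, here is a minimal PySpark sketch of a read and a write against a dedicated SQL pool. It assumes a notebook running on an Azure Synapse Apache Spark 3 pool where the connector is preinstalled and the `spark` session already exists; the `synapsesql` method and `Constants` option names follow the linked connector article, and every angle-bracketed name is a placeholder.

```python
# Minimal sketch (assumptions): runs in a notebook on an Azure Synapse Apache Spark 3 pool,
# where the dedicated SQL pool connector and the `spark` session are already available.
# Method and option names follow the linked connector article; angle-bracketed names are placeholders.
import com.microsoft.spark.sqlanalytics  # exposes the synapsesql reader/writer on Spark 3 pools
from com.microsoft.spark.sqlanalytics.Constants import Constants

# Read from a dedicated SQL pool table using the signed-in Azure AD identity.
df = (spark.read
      .option(Constants.SERVER, "<workspace-name>.sql.azuresynapse.net")
      .synapsesql("<database>.<schema>.<source_table>"))

# Write the result back to an internal table in the same dedicated SQL pool.
(df.write
   .option(Constants.SERVER, "<workspace-name>.sql.azuresynapse.net")
   .mode("overwrite")
   .synapsesql("<database>.<schema>.<target_table>"))
```

Basic (SQL) authentication is also supported, as noted above; verify the exact option names against the connector article before relying on this sketch.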
--### Synapse Data Explorer --* **Synapse Data Explorer live query in Excel** - Using the new Data Explorer web experience Open in Excel feature, you can now provide access to live results of your query by sharing the connected Excel Workbook with colleagues and team members. You can open the live query in an Excel Workbook and refresh it directly from Excel to get the most up-to-date query results. To learn more about Excel live query, read [Open live query in Excel](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/open-live-kusto-query-in-excel/ba-p/3198500). --* **Use Managed Identities for External SQL Server Tables** - One of the key benefits of Azure Synapse is the ability to bring together data integration, enterprise data warehousing, and big data analytics. With Managed Identity support, Synapse Data Explorer table definition is now simpler and more secure. You can now use managed identities instead of entering your credentials. -- An external SQL table is a schema entity that references data stored outside the Synapse Data Explorer database. Using the Create and alter SQL Server external tables command, External SQL tables can easily be added to the Synapse Data Explorer database schema. -- To learn more about managed identities, read [Managed identities overview](/azure/data-explorer/managed-identities-overview). -- To learn more about external tables, read [Create and alter SQL Server external tables](/azure/data-explorer/kusto/management/external-sql-tables). --* **New KQL Learn module (2 out of 3) is live!** - The power of Kusto Query Language (KQL) is its simplicity to query structured, semi-structured, and unstructured data together. To make it easier for you to learn KQL, we are releasing Learn modules. Previously, we released [Write your first query with Kusto Query Language](/training/modules/write-first-query-kusto-query-language/). New this month is [Gain insights from your data by using Kusto Query Language](/training/modules/gain-insights-data-kusto-query-language/). -- KQL is the query language used to query Synapse Data Explorer big data. KQL has a fast-growing user community, with hundreds of thousands of developers, data engineers, data analysts, and students. -- Check out the newest [KQL Learn module](/training/modules/gain-insights-data-kusto-query-language/) and see for yourself how easy it is to become a KQL master. -- To learn more about KQL, read [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/). --* **Azure Synapse Data Explorer connector for Microsoft Power Automate, Logic Apps, and Power Apps [Generally Available]** - The Azure Data Explorer connector for Power Automate enables you to orchestrate and schedule flows, send notifications, and alerts, as part of a scheduled or triggered task. To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/azure/data-explorer/flow) and [Usage examples for Azure Data Explorer connector to Power Automate](/azure/data-explorer/flow-usage). --* **Dynamic events routing from event hub to multiple databases** - Routing events from Event Hub/IoT Hub/Event Grid is an activity commonly performed by Azure Data Explorer (ADX) users. Previously, you could route events only to a single database per defined connection. If you wanted to route the events to multiple databases, you needed to create multiple ADX cluster connections. -- To simplify the experience, we now support routing events data to multiple databases hosted in a single ADX cluster. 
To learn more about dynamic routing, read [Ingest from event hub](/azure/data-explorer/ingest-data-event-hub-overview#events-routing). --* **Configure a database using a KQL inline script as part of JSON ARM deployment template** - Previously, Azure Data Explorer supported running a Kusto Query Language (KQL) script to configure your database during Azure Resource Manager (ARM) template deployment. Now, this can be done using a script provided inline as a parameter to a JSON ARM template. To learn more about using a KQL inline script, read [Configure a database using a Kusto Query Language script](/azure/data-explorer/database-script). --### Data Integration --* **Export pipeline monitoring as a CSV** - The ability to export pipeline monitoring to CSV has been added after receiving many community requests for the feature. Simply filter the Pipeline runs screen to the data you want and select **Export to CSV**. To learn more about exporting pipeline monitoring and other monitoring improvements, read [Azure Data Factory monitoring improvements](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531). --* **Incremental data loading made easy for Synapse and Azure Database for PostgreSQL and MySQL** - In a data integration solution, incrementally loading data after an initial full data load is a widely used scenario. Automatic incremental source data loading is now natively available for Synapse SQL and Azure Database for PostgreSQL and MySQL. Users can "enable incremental extract" and only inserted or updated rows will be read by the pipeline. To learn more about incremental data loading, read [Incrementally copy data from a source data store to a destination data store](../data-factory/tutorial-incremental-copy-overview.md). --* **User-Defined Functions for Mapping Data Flows [Public Preview]** - We hear that you can find yourself doing the same string manipulation, math calculations, or other complex logic several times. Now, with the new user-defined function feature, you can create customized expressions that can be reused across multiple mapping data flows. User-defined functions will be grouped in libraries to help developers group common sets of functions. Once you've created a data flow library, you can add in your user-defined functions. You can even add in multiple arguments to make your function more reusable. To learn more about user-defined functions, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628). --* **Assert Error Handling** - Error handling has now been added to sinks following an assert transformation. Assert transformations enable you to build custom rules for data quality and data validation. You can now choose whether to output the failed rows to the selected sink or to a separate file. To learn more about error handling, read [Assert data transformation in mapping data flow](../data-factory/data-flow-assert.md). --* **Mapping data flows projection editing** - New UI updates have been made to source projection editing in mapping data flows. You can now update source projection column names and column types. To learn more about source projection editing, read [Source transformation in mapping data flow](../data-factory/data-flow-source.md). 
--### Azure Synapse Link --**Azure Synapse Link for SQL Server** - At Microsoft Build 2022, we announced the Public Preview availability of Azure Synapse Link for SQL, for both SQL Server 2022 and Azure SQL Database. Data-driven, quality insights are critical for companies to stay competitive. The speed to achieve those insights can make all the difference. The costly and time-consuming nature of traditional ETL and ELT pipelines is no longer enough. With this release, you can now take advantage of low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. This makes it easier to run BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, read [Announcing the Public Preview of Azure Synapse Link for SQL](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986) and [watch our YouTube video](https://www.youtube.com/embed/pgusZy34-Ek). --## Apr 2022 update --The following updates are new to Azure Synapse Analytics this month. --### SQL --* Cross-subscription restore for Azure Synapse SQL is now generally available. Previously, it took many undocumented steps to restore a dedicated SQL pool to another subscription. Now, with the PowerShell Az.Sql module 3.8 update, the Restore-AzSqlDatabase cmdlet can be used for cross-subscription restore. To learn more, see [Restore a dedicated SQL pool (formerly SQL DW) to a different subscription](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-april-update-2022/ba-p/3280185). --* It is now possible to recover a SQL pool from a dropped server or workspace. With the PowerShell Restore cmdlets in Az.Sql and Az.Synapse modules, you can now restore from a deleted server or workspace without filing a support ticket. For more information, read [Synapse workspace SQL pools](./backuprestore/restore-sql-pool-from-deleted-workspace.md) or [standalone SQL pools (formerly SQL DW)](./sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md), depending on your scenario. --### Synapse database templates and database designer --* Based on popular customer feedback, we've made significant improvements to our exploration experience when creating a lake database using an industry template. To learn more, read [Quickstart: Create a new Lake database leveraging database templates](./database-designer/quick-start-create-lake-database.md). --* We've added the option to clone a lake database. This unlocks additional opportunities to manage new versions of databases or support schemas that evolve in discrete steps. You can quickly clone a database using the action menu available on the lake database. To learn more, read [How-to: Clone a lake database](./database-designer/clone-lake-database.md). --* You can now use wildcards to specify custom folder hierarchies. Lake databases sit on top of data that is in the lake and this data can live in nested folders that don't fit into clean partition patterns. Previously, querying lake databases required that your data exists in a simple directory structure that you could browse using the folder icon without the ability to manually specify directory structure or use wildcard characters. To learn more, read [How-to: Modify a datalake](./database-designer/modify-lake-database.md). 
--### Apache Spark for Synapse --* We are excited to announce the preview availability of Apache Spark™ 3.2 on Synapse Analytics. This new version incorporates user-requested enhancements and resolves 1,700+ Jira tickets. Please review the [official release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) for the complete list of fixes and features and review the [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md). --* Assigning parameters dynamically based on variables, metadata, or specifying Pipeline specific parameters has been one of your top feature requests. Now, with the release of parameterization for the Spark job definition activity, you can do just that. For more details, read [Transform data using Apache Spark job definition](quickstart-transform-data-using-spark-job-definition.md#settings-tab). --* We often receive customer requests to access the snapshot of the Notebook when there is a Pipeline Notebook run failure or there is a long-running Notebook job. With the release of the Synapse Notebook snapshot feature, you can now view the snapshot of the Notebook activity run with the original Notebook code, the cell output, and the input parameters. You can also access the snapshot of the referenced Notebook from the referencing Notebook cell output if you refer to other Notebooks through Spark utils. To learn more, read [Transform data by running a Synapse notebook](synapse-notebook-activity.md?tabs=classical#see-notebook-activity-run-history) and [Introduction to Microsoft Spark utilities](./spark/microsoft-spark-utilities.md?pivots=programming-language-scala#reference-a-notebook-1). --### Security --* The Synapse Monitoring Operator RBAC role is now generally available. Since the GA of Synapse, customers have asked for a fine-grained RBAC (role-based access control) role that allows a user persona to monitor the execution of Synapse Pipelines and Spark applications without having the ability to run or cancel the execution of these applications. Now, customers can assign the Synapse Monitoring Operator role to such monitoring personas. This allows organizations to stay compliant while having flexibility in the delegation of tasks to individuals or teams. Learn more by reading [Synapse RBAC Roles](security/synapse-workspace-synapse-rbac-roles.md). -### Data integration --* Microsoft has added Dataverse as a source and sink connector to Synapse Data Flows so that you can now build low-code data transformation ETL jobs in Synapse directly accessing your Dataverse environment. For more details on how to use this new connector, read [Mapping data flow properties](../data-factory/connector-dynamics-crm-office-365.md#mapping-data-flow-properties). --* We heard from you that a 1-minute timeout for Web activity was not long enough, especially in cases of synchronous APIs. Now, with the response timeout property 'httpRequestTimeout', you can define timeout for the HTTP request up to 10 minutes. Learn more by reading [Web activity response timeout improvements](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/web-activity-response-timeout-improvement/ba-p/3260307). 
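To show where that response timeout property sits, the following is a hedged sketch of a Web activity definition as it might appear in a pipeline's JSON, expressed here as a Python dict. The activity name, URL, and body are placeholders, and the timespan format shown for `httpRequestTimeout` is an assumption; only the property name comes from the update above.

```python
import json

# Illustrative only: a Web activity fragment with the response timeout raised from the
# 1-minute default. The activity name, URL, and body are placeholders; 'httpRequestTimeout'
# is the property named in the update above, and the timespan format is an assumption.
web_activity = {
    "name": "CallLongRunningApi",
    "type": "WebActivity",
    "typeProperties": {
        "url": "https://example.com/api/long-running-sync",
        "method": "POST",
        "body": {"runDate": "2022-05-01"},
        "httpRequestTimeout": "00:10:00",
    },
}

# This dict is what would sit in the pipeline definition's "activities" array.
print(json.dumps(web_activity, indent=2))
```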
--### Developer experience --* Previously, if you wanted to reference a notebook in another notebook, you could only reference published or committed content. Now, when using %run, you can enable 'unpublished notebook reference', which allows you to reference unpublished notebooks. When enabled, the notebook run fetches the current contents from the notebook web cache, meaning the changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). To learn more, read [Reference unpublished notebook](spark/apache-spark-development-using-notebooks.md#reference-unpublished-notebook). --## Mar 2022 update --The following updates are new to Azure Synapse Analytics this month. --### Developer Experience --* Code cells in Synapse notebooks that result in an exception will now show standard output along with the exception message. This feature is supported for the Python and Scala languages. To learn more, see the [example output when a code statement fails](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1). --* Synapse notebooks now support partial output when running code cells. To learn more, see the [examples at this blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_1). --* You can now dynamically control Spark session configuration for the notebook activity with pipeline parameters. To learn more, see [parameterized session configuration from a pipeline](./spark/apache-spark-development-using-notebooks.md?tabs=classical#parameterized-session-configuration-from-pipeline). --* You can now reuse and manage notebook sessions without having to start a new one. You can easily connect a selected notebook to an active session started from another notebook in the list. You can detach a session from a notebook, stop the session, and monitor it. To learn more, see [how to manage your active notebook sessions](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_3). --* Synapse notebooks now capture anything written through the Python logging module, in addition to the driver logs. To learn more, see [support for Python logging](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_4). --### SQL --* Column Level Encryption for Azure Synapse dedicated SQL Pools is now Generally Available. With column level encryption, you can use different protection keys for each column, with each key having its own access permissions. The data in CLE-enforced columns is encrypted on disk and remains encrypted in memory until the DECRYPTBYKEY function is used to decrypt it. To learn more, see [how to encrypt a data column](/sql/relational-databases/security/encryption/encrypt-a-column-of-data?view=azure-sqldw-latest&preserve-view=true). --* Serverless SQL pools now support better performance for CETAS (Create External Table as Select) and subsequent SELECT queries. The performance improvements include a parallel execution plan, resulting in faster CETAS execution, and the ability to output multiple files.
To learn more, see [CETAS with Synapse SQL](./sql/develop-tables-cetas.md) article and the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_7) --### Apache Spark for Synapse --* Synapse Spark Common Data Model (CDM) Connector is now Generally Available. The CDM format reader/writer enables a Spark program to read and write CDM entities in a CDM folder via Spark dataframes. To learn more, see [how the CDM connector supports reading, writing data, examples, & known issues](./spark/data-sources/apache-spark-cdm-connector.md). --* Synapse Spark Dedicated SQL Pool (DW) Connector now supports improved performance. The new architecture eliminates redundant data movement and uses COPY-INTO instead of PolyBase. You can authenticate through SQL basic authentication or opt into the Azure Active Directory/Azure AD based authentication method. It now has ~5x improvements over the previous version. To learn more, see [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md) --* Synapse Spark Dedicated SQL Pool (DW) Connector now supports all Spark Dataframe SaveMode choices. It supports Append, Overwrite, ErrorIfExists, and Ignore modes. The Append and Overwrite are critical for managing data ingestion at scale. To learn more, see [DataFrame write SaveMode support](./spark/synapse-spark-sql-pool-import-export.md#supported-dataframe-save-modes) --* Accelerate Spark execution speed using the new Intelligent Cache feature. This feature is currently in public preview. Intelligent Cache automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md) or see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_12) --### Security --* Azure Synapse Analytics now supports Azure Active Directory (Azure AD) authentication. You can turn on Azure AD authentication during the workspace creation or after the workspace is created. To learn more, see [how to use Azure AD authentication with Synapse SQL](./sql/active-directory-authentication.md). --* API support to raise or lower minimal TLS version for workspace managed SQL Server Dedicated SQL. To learn more, see [how to update the minimum TLS setting](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update) or read the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_15) for more details. --### Data Integration --* Flowlets and CDC Connectors are now Generally Available. Flowlets in Synapse Data Flows allow for reusable and composable ETL logic. To learn more, see [Flowlets in mapping data flow](../data-factory/concepts-data-flow-flowlet.md) or see the [blog post.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_17) --* sFTP connector for Synapse data flows. You can read and write data while transforming data from sftp using the visual low-code data flows interface in Synapse. 
To learn more, see [source transformation](../data-factory/connector-sftp.md#source-transformation) --* Data flow improvements to Data Preview. To learn more, see [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng) --* Pipeline script activity. The Script Activity enables data engineers to build powerful data integration pipelines that can read from and write to Synapse databases, and other database types. To learn more, see [Transform data by using the Script activity in Azure Data Factory or Synapse Analytics](../data-factory/transform-data-using-script.md) --## Feb 2022 update --The following updates are new to Azure Synapse Analytics this month. --### SQL --* Serverless SQL Pools now support more consistent query execution times. [Learn how Serverless SQL pools automatically detect spikes in read latency and support consistent query execution time.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_2) --* [The `OPENJSON` function makes it easy to get array element indexes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_3). To learn more, see how the OPENJSON function in a serverless SQL pool allows you to [parse nested arrays and return one row for each JSON array element with the index of each element](/sql/t-sql/functions/openjson-transact-sql?view=azure-sqldw-latest&preserve-view=true#array-element-identity). --### Data integration --* [Upserting data is now supported by the copy activity](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_5). See how you can natively load data into a temporary table and then merge that data into a sink table with [upsert.](../data-factory/connector-azure-sql-database.md?tabs=data-factory#upsert-data) --* [Transform Dynamics Data Visually in Synapse Data Flows.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_6) Learn more on how to use a [Dynamics dataset or an inline dataset as source and sink types to transform data at scale.](../data-factory/connector-dynamics-crm-office-365.md?tabs=data-factory#mapping-data-flow-properties) --* [Connect to your SQL sources in data flows using Always Encrypted](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_7). To learn more, see [how to securely connect to your SQL databases from Synapse data flows using Always Encrypted.](../data-factory/connector-azure-sql-database.md?tabs=data-factory) --* [Capture descriptions from asserts in Data Flows](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_8) To learn more, see [how to define your own dynamic descriptive messages](../data-factory/data-flow-expressions-usage.md#assertErrorMessages) in the assert data flow transformation at the row or column level. 
--* [Easily define schemas for complex type fields.](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-february-update-2022/ba-p/3221841#TOCREF_9) To learn more, see how you can make the engine to [automatically detect the schema of an embedded complex field inside a string column](../data-factory/data-flow-parse.md). --## Jan 2022 update --The following updates are new to Azure Synapse Analytics this month. --### Apache Spark for Synapse --You can now use four new database templates in Azure Synapse. [Learn more about Automotive, Genomics, Manufacturing, and Pharmaceuticals templates from the blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/four-additional-azure-synapse-database-templates-now-available/ba-p/3058044) or the [database templates article](./database-designer/overview-database-templates.md). These templates are currently in public preview and are available within the Synapse Studio gallery. --### Machine Learning --Improvements to the Synapse Machine Learning library v0.9.5 (previously called MMLSpark). This release simplifies the creation of massively scalable machine learning pipelines with Apache Spark. To learn more, [read the blog post about the new capabilities in this release](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_3) or see the [full release notes](https://microsoft.github.io/SynapseML/) --### Security --* The Azure Synapse Analytics security overview - A whitepaper that covers the five layers of security. The security layers include authentication, access control, data protection, network security, and threat protection. [Understand each security feature in detailed](./guidance/security-white-paper-introduction.md) to implement an industry-standard security baseline and protect your data on the cloud. --* TLS 1.2 is now required for newly created Synapse Workspaces. To learn more, see how [TLS 1.2 provides enhanced security using this article](./security/connectivity-settings.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_6). Sign-in attempts to a newly created Synapse workspace from connections using TLS versions lower than 1.2 will fail. --### Data Integration --* Data quality validation rules using Assert transformation - You can now easily add data quality, data validation, and schema validation to your Synapse ETL jobs by using Assert transformation in Synapse data flows. To learn more, see the [Assert transformation in mapping data flow article](../data-factory/data-flow-assert.md) or [the blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_8). --* Native data flow connector for Dynamics - Synapse data flows can now read and write data directly to Dynamics through the new data flow Dynamics connector. Learn more on how to [Create data sets in data flows to read, transform, aggregate, join, etc. using this article](../data-factory/connector-dynamics-crm-office-365.md) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_9). You can then write the data back into Dynamics using the built-in Synapse Spark compute. 
--* IntelliSense and auto-complete added to pipeline expressions - IntelliSense makes creating and editing expressions easy. To learn more, see how to [check your expression syntax, find functions, and add code to your pipelines](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/intellisense-support-in-expression-builder-for-more-productive/ba-p/3041459). --### Synapse SQL --* COPY schema discovery for complex data ingestion. To learn more, see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_12) or [how GitHub leveraged this functionality in Introducing Automatic Schema Discovery with auto table creation for complex datatypes](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/introducing-automatic-schema-discovery-with-auto-table-creation/ba-p/3068927). --* Serverless SQL pools now support the HASHBYTES function. HASHBYTES is a T-SQL function that hashes values. Learn how to use [hash values for distributing data in this article](/sql/t-sql/functions/hashbytes-transact-sql) or the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-january-update-2022/ba-p/3071681#TOCREF_13). --## December 2021 update --The following updates are new to Azure Synapse Analytics this month. --### Apache Spark for Synapse --* Accelerate Spark workloads with NVIDIA GPU acceleration [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--16536080) [article](./spark/apache-spark-rapids-gpu.md) -* Mount remote storage to a Synapse Spark pool [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1823990543) [article](./spark/synapse-file-mount-api.md) -* Natively read & write data in ADLS with Pandas [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-663522290) [article](./spark/tutorial-use-pandas-spark-pool.md) -* Dynamic allocation of executors for Spark [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1143932173) [article](./spark/apache-spark-autoscale.md) --### Machine Learning --* The Synapse Machine Learning library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--463873803) [article](https://microsoft.github.io/SynapseML/docs/Overview/) -* Getting started with state-of-the-art pre-built intelligent models [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-2023639030) [article](./machine-learning/tutorial-form-recognizer-use-mmlspark.md) -* Building responsible AI systems with the Synapse ML library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-914346508) [article](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/Responsible%20AI/Interpreting%20Model%20Predictions/) -* PREDICT is now GA for Synapse Dedicated SQL pools [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1594404878)
[article](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md) -* Simple & scalable scoring with PREDICT and MLFlow for Apache Spark for Synapse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--213049585) [article](./machine-learning/tutorial-score-model-predict-spark-pool.md) -* Retail AI solutions [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--2020504048) [article](./machine-learning/quickstart-industry-ai-solutions.md) --### Security --* User-Assigned managed identities now supported in Synapse Pipelines in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1340445678) [article](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory) -* Browse ADLS Gen2 folders in an Azure Synapse Analytics workspace in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1147067155) [article](how-to-access-container-with-access-control-lists.md) --### Data Integration --* Pipeline Fail activity [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1827125525) [article](../data-factory/control-flow-fail-activity.md) -* Mapping Data Flow gets new native connectors [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-717833003) [article](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/mapping-data-flow-gets-new-native-connectors/ba-p/2866754) -* More notebook export formats: HTML, Python, and LaTeX [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF3) -* Three new chart types in notebook view: box plot, histogram, and pivot table [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF4) -* Reconnect to lost notebook session [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF5) --### Integrate --* Azure Synapse Link for Dataverse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1397891373) [article](/powerapps/maker/data-platform/azure-synapse-link-synapse) -* Custom partitions for Azure Synapse Link for Azure Cosmos DB in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--409563090) [article](/azure/cosmos-db/custom-partitioning-analytical-store) -* Map data tool (Public Preview), a no-code guided ETL experience [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](./database-designer/overview-map-data.md) -* Quick reuse of spark cluster [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](../data-factory/concepts-integration-runtime-performance.md#time-to-live) 
-* External Call transformation [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF9) [article](../data-factory/data-flow-external-call.md) -* Flowlets (Public Preview) [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF10) [article](../data-factory/concepts-data-flow-flowlet.md) --## November 2021 update --The following updates are new to Azure Synapse Analytics this month. --### Synapse Data Explorer --* Synapse Data Explorer now available in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1022327194) [article](./data-explorer/data-explorer-overview.md) --### Work with Databases and Data Lakes --* Introducing Lake databases (formerly known as Spark databases) [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--795630373) [article](./database-designer/concepts-lake-database.md) -* Lake database designer now available in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1691882460) [article](./database-designer/concepts-lake-database.md#database-designer) -* Database Templates and Database Designer [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--115572003) [article](./database-designer/concepts-database-templates.md) --### SQL --* Delta Lake support for serverless SQL is generally available [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-564486367) [article](./sql/query-delta-lake-format.md) -* Query multiple file paths using OPENROWSET in serverless SQL [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1242968096) [article](./sql/query-single-csv-file.md) -* Serverless SQL queries can now return up to 200 GB of results [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1110860013) [article](./sql/resources-self-help-sql-on-demand.md) -* Handling invalid rows with OPENROWSET in serverless SQL [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--696594450) [article](./sql/develop-openrowset.md) --### Apache Spark for Synapse --* Accelerate Spark workloads with NVIDIA GPU acceleration [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--16536080) [article](./spark/apache-spark-rapids-gpu.md) -* Mount remote storage to a Synapse Spark pool [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1823990543) [article](./spark/synapse-file-mount-api.md) -* Natively read & write data in ADLS with Pandas [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-663522290) [article](./spark/tutorial-use-pandas-spark-pool.md) -* Dynamic 
allocation of executors for Spark [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1143932173) [article](./spark/apache-spark-autoscale.md) --### Machine Learning --* The Synapse Machine Learning library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--463873803) [article](https://microsoft.github.io/SynapseML/docs/Overview/) -* Getting started with state-of-the-art pre-built intelligent models [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-2023639030) [article](./machine-learning/tutorial-form-recognizer-use-mmlspark.md) -* Building responsible AI systems with the Synapse ML library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-914346508) [article](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/Responsible%20AI/Interpreting%20Model%20Predictions/) -* PREDICT is now GA for Synapse Dedicated SQL pools [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1594404878) [article](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md) -* Simple & scalable scoring with PREDICT and MLFlow for Apache Spark for Synapse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--213049585) [article](./machine-learning/tutorial-score-model-predict-spark-pool.md) -* Retail AI solutions [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--2020504048) [article](./machine-learning/quickstart-industry-ai-solutions.md) --### Security --* User-Assigned managed identities now supported in Synapse Pipelines in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--1340445678) [article](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory) -* Browse ADLS Gen2 folders in an Azure Synapse Analytics workspace in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1147067155) [article](how-to-access-container-with-access-control-lists.md) --### Data Integration --* Pipeline Fail activity [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1827125525) [article](../data-factory/control-flow-fail-activity.md) -* Mapping Data Flow gets new native connectors [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-717833003) [article](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/mapping-data-flow-gets-new-native-connectors/ba-p/2866754) --### Azure Synapse Link --* Azure Synapse Link for Dataverse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1397891373) [article](/powerapps/maker/data-platform/azure-synapse-link-synapse) -* Custom partitions for Azure 
Synapse Link for Azure Cosmos DB in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--409563090) [article](/azure/cosmos-db/custom-partitioning-analytical-store) --## October 2021 update --The following updates are new to Azure Synapse Analytics this month. --### General --* Manage your cost with Azure Synapse pre-purchase plans [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#manage-cost) [article](../cost-management-billing/reservations/synapse-analytics-pre-purchase-plan.md) -* Move your Azure Synapse workspace across Azure regions [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#move-workspace-region) [article](how-to-move-workspace-from-one-region-to-another.md) --### Apache Spark for Synapse --* Spark performance optimizations [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#spark-performance) --### Security --* All Synapse RBAC roles are now generally available for use in production [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#synapse-rbac) [article](./security/synapse-workspace-synapse-rbac-roles.md) -* Apply User-Assigned Managed Identities for Double Encryption [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#user-assigned-managed-identities) [article](./security/workspaces-encryption.md) -* Synapse Administrators now have elevated access to dedicated SQL pools [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#elevated-access) [article](./security/synapse-workspace-access-control-overview.md) --### Governance --* Synapse workspaces can now automatically push lineage data to Microsoft Purview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#synapse-purview-lineage) [article](../purview/how-to-lineage-azure-synapse-analytics.md) --### Integrate --* Use Stringify in data flows to easily transform complex data types to strings [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#stringify-transform) [article](../data-factory/data-flow-stringify.md) -* Control Spark session time-to-live (TTL) in data flows [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#data-flowspark-ttl) [article](../data-factory/concepts-integration-runtime-performance.md) --### CI/CD & Git --* Deploy Synapse workspaces using GitHub Actions [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#deploy-synapse-github-action) [article](./cicd/continuous-integration-delivery.md#configure-github-actions-secrets) -* More control creating Git branches in Synapse Studio [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#create-git-branch-in-studio) [article](./cicd/source-control.md#creating-feature-branches) --### Developer Experience --* Enhanced Markdown editing in Synapse notebooks preview 
[blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#notebook-markdown-toolbar) [article](./spark/apache-spark-development-using-notebooks.md) -* Pandas dataframes automatically render as nicely formatted HTML tables [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#pandas-dataframe-html) [article](./spark/apache-spark-data-visualization.md) -* Use IPython widgets in Synapse Notebooks [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#notebook-ipythong-widgets) [article](./spark/apache-spark-development-using-notebooks.md) -* Mssparkutils runtime context now available for Python and Scala [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics/azure-synapse-analytics-october-update/ba-p/2875372#mssparkutils-context) [article](./spark/microsoft-spark-utilities.md?pivots=programming-language-python) --## Next steps --[Get started with Azure Synapse Analytics](get-started.md) |
synapse-analytics | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md | - Title: What's new? -description: Learn about the new features and documentation improvements for Azure Synapse Analytics --- Previously updated : 08/01/2023------# What's new in Azure Synapse Analytics? --This page is continuously updated with a recent review of what's new in [Azure Synapse Analytics](overview-what-is.md), and also what features are currently in preview. To follow the latest in Azure Synapse news and features, see the [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) and [companion videos on YouTube](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g). --For older updates, review past [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) posts or [previous updates in Azure Synapse Analytics](whats-new-archive.md). --> [!IMPORTANT] -> [Microsoft Fabric has been announced!](https://azure.microsoft.com/blog/introducing-microsoft-fabric-data-analytics-for-the-era-of-ai/) -> - Learn about this exciting new preview and discover [What is Microsoft Fabric?](/fabric/get-started/microsoft-fabric-overview) -> - Get started with [end-to-end tutorials in Microsoft Fabric](/fabric/get-started/end-to-end-tutorials). -> - See [What's new in Microsoft Fabric?](/fabric/get-started/whats-new) --## Features currently in preview --The following table lists the features of Azure Synapse Analytics that are currently in preview. Preview features are sorted alphabetically. --> [!NOTE] -> Features currently in preview are available under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/); review them for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. Azure Synapse Analytics provides previews to give you a chance to evaluate and [share feedback with the product group](https://feedback.azure.com/d365community/forum/9b9ba8e4-0825-ec11-b6e6-000d3a4f07b8) on features before they become generally available (GA). --| **Feature** | **Learn more**| -|:-- |:-- | -| **Apache Spark Delta Lake tables in serverless SQL pools** | The ability for serverless SQL pools to access Delta Lake tables created in Spark databases is in preview. For more information, see [Azure Synapse Analytics shared metadata tables](metadat).| -| **Apache Spark elastic pool storage** | Azure Synapse Analytics Spark pools now support elastic pool storage in preview. Elastic pool storage allows the Spark engine to monitor worker node temporary storage and attach more disks if needed. No action is required, and you should see fewer job failures as a result. For more information, see [Azure Synapse Analytics Spark elastic pool storage](spark/apache-spark-pool-configurations.md#elastic-pool-storage).| -| **Apache Spark R language support** | Built-in [R support for Apache Spark](spark/apache-spark-r-language.md) is now in preview. | -| **Azure Synapse Data Explorer** | The [Azure Synapse Data Explorer](./data-explorer/data-explorer-overview.md) provides an interactive query experience to unlock insights from log and telemetry data. Connectors for Azure Data Explorer are available for Synapse Data Explorer.
For more news, see [Azure Synapse Data Explorer (preview)](#azure-synapse-data-explorer-preview).| -| **Browse ADLS Gen2 folders in the Azure Synapse Analytics workspace** | You can now browse an Azure Data Lake Storage Gen2 (ADLS Gen2) container or folder in your Azure Synapse Analytics workspace in Synapse Studio. To learn more, see [Browse an ADLS Gen2 folder with ACLs in Azure Synapse Analytics](how-to-access-container-with-access-control-lists.md).| -| **Capture changed data from Cosmos DB analytical store** | Azure Cosmos DB analytical store now supports change data capture (CDC) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for MongoDB. For more information, see [Capture Changed Data from your Cosmos DB analytical store](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/capture-changed-data-from-your-cosmos-db-analytical-store/ba-p/3783530) and [DevBlog: Change Data Capture (CDC) with Azure Cosmos DB analytical store](https://devblogs.microsoft.com/cosmosdb/now-in-preview-change-data-capture-cdc-with-azure-cosmos-db-analytical-store/).| -| **Distribution Advisor** | The Distribution Advisor is a new preview feature in Azure Synapse dedicated SQL pools Gen2 that analyzes queries and recommends the best distribution strategies for tables to improve query performance. For more information, see [Distribution Advisor in Azure Synapse SQL](sql/distribution-advisor.md).| -| **Distributed Deep Neural Network Training** | Learn more about new distributed training libraries like Horovod, Petastorm, TensorFlow, and PyTorch in [Deep learning tutorials](./machine-learning/concept-deep-learning.md). | -| **Embed ADX dashboards** | Azure Data Explorer dashboards can be [embedded in an IFrame and hosted in third party apps](/azure/data-explorer/kusto/api/monaco/host-web-ux-in-iframe). | -| **Reject options for delimited text files** | [Reject options for CREATE EXTERNAL TABLE on delimited files](/sql/t-sql/statements/create-external-table-transact-sql?view=azure-sqldw-latest&preserve-view=true#reject-options-1) are in preview. | -| **Spark Advisor for Azure Synapse Notebook** | The [Spark Advisor for Azure Synapse Notebook](monitoring/apache-spark-advisor.md) analyzes code run by Spark and displays real-time advice for Notebooks. The Spark advisor offers recommendations for code optimization based on built-in common patterns, performs error analysis, and locates the root cause of failures.| -| **Time-To-Live in managed virtual network (VNet)** | Reserve compute for the time-to-live (TTL) period in the managed virtual network, saving time and improving efficiency. For more information on this preview, see [Announcing public preview of Time-To-Live (TTL) in managed virtual network](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879).| -| **User-Assigned managed identities** | Now you can use user-assigned managed identities in linked services for authentication in Synapse Pipelines and Dataflows. To learn more, see [Credentials in Azure Data Factory and Azure Synapse](../data-factory/credentials.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=data-factory).| --## Generally available features --The following table lists the features of Azure Synapse Analytics that have transitioned from preview to general availability (GA) within the last 12 months.
--|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| April 2023 | **Apache Spark Optimized Write** | [Optimize Write](spark/optimize-write-for-apache-spark.md) is a Delta Lake on Azure Synapse feature that reduces the number of files written by Apache Spark 3 (3.1 and 3.2) and aims to increase the individual file size of the written data.| -| March 2023 | **Cosmos DB Synapse Link for Azure Data Explorer GA** | Azure Data Explorer supports fully managed data ingestion from Azure Cosmos DB using a change feed. We now support [Cosmos DB accounts behind a Managed Private Endpoint](/azure/data-explorer/security-network-managed-private-endpoint-create) or Service Endpoint. For more information, see [Ingest data from Azure Cosmos DB into Azure Data Explorer](/azure/data-explorer/ingest-data-cosmos-db-connection). | -| March 2023 | **Multi-column distribution in dedicated SQL pools** | You can now [Hash Distribute tables on multiple columns](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/multi-column-distribution-for-dedicated-sql-pools-is-now-ga/ba-p/3774529) for a more even distribution of the base table, reducing data skew over time and improving query performance. For more information on this generally available feature, see the three options: [CREATE MATERIALIZED VIEW](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true), [CREATE TABLE distribution options](/sql/t-sql/statements/create-table-azure-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true#TableDistributionOptions), or [CREATE TABLE AS SELECT distribution options](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true#table-distribution-options).| -| March 2023 | **Deploying Synapse SQL serverless using SSDT** | SqlPackage's [long-awaited support for Azure Synapse Analytics serverless SQL pools](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/deploying-synapse-sql-serverless-objects-across-environments/ba-p/3751953) is now available starting with [the 161.8089.0 SqlPackage](/sql/tools/sqlpackage/release-notes-sqlpackage?view=sql-server-ver16&preserve-view=true#16180890-sqlpackage). Serverless SQL pools are [supported for both the extract and publish actions](/sql/tools/sqlpackage/sqlpackage-for-azure-synapse-analytics?view=sql-server-ver16&preserve-view=true#support-for-serverless-sql-pools). | -| February 2023 | **ADX Dashboards GA** | [Now generally available, Azure Data Explorer dashboards](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/general-availability-adx-dashboards/ba-p/3749361) using [the Azure Data Explorer web UI](https://dataexplorer.azure.com/) allow you to explore your data from end-to-end, starting with data ingestion, running queries, and ultimately building dashboards. | -| February 2023 | **UTF-8 and Japanese collations support for dedicated SQL pools** | Both UTF-8 support and Japanese collations are now [generally available for dedicated SQL pools](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/utf-8-and-japanese-collations-support-for-dedicated-sql-pools-is/ba-p/3740989). | -| February 2023 | **Azure Synapse Runtime for Apache Spark 3.3** | The [Azure Synapse Runtime for Apache Spark 3.3](spark/apache-spark-33-runtime.md) is now generally available.
Based on our testing using the 1TB TPC-H industry benchmark, you're likely to see [up to 77% increased performance](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_2). | -| December 2022 | **SSIS IR Express virtual network injection** | Both the standard and express methods to [inject your SSIS Integration Runtime (IR) into a VNet](https://techcommunity.microsoft.com/t5/sql-server-integration-services/vnet-or-no-vnet-secure-data-access-from-ssis-in-azure-data/ba-p/1062056) are generally available now. For more information, see [General Availability of Express Virtual Network injection for SSIS in Azure Data Factory](https://techcommunity.microsoft.com/t5/sql-server-integration-services/general-availability-of-express-virtual-network-injection-for/ba-p/3699993).| -| November 2022 | **Ingest data from Azure Stream Analytics into Synapse Data Explorer** | The ability to use an Azure Stream Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster is now generally available. For more information, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector) and [ADX output from Azure Stream Analytics](../stream-analytics/azure-database-explorer-output.md).| -| November 2022 | **Azure Synapse Link for SQL** | Azure Synapse Link for SQL is now generally available for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link for SQL feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, visit [What is Azure Synapse Link for SQL?](synapse-link/sql-synapse-link-overview.md)| -| October 2022 | **SAP CDC connector GA** | The data connector for SAP Change Data Capture (CDC) is now GA. For more information, see [Announcing Public Preview of the SAP CDC solution in Azure Data Factory and Azure Synapse Analytics](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-the-sap-cdc-solution-in-azure-dat).| -| September 2022 | **MERGE T-SQL syntax** | [MERGE T-SQL syntax](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true) has been a highly requested addition to the Synapse T-SQL library. As in SQL Server, the MERGE syntax encapsulates INSERTs/UPDATEs/DELETEs into a single high-performance statement. Available in dedicated SQL pools in version 10.0.17829 and above. For more, see the [MERGE T-SQL announcement blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/merge-t-sql-for-dedicated-sql-pools-is-now-ga/ba-p/3634331).| -| July 2022 | **Apache Spark™ 3.2 for Synapse Analytics** | Apache Spark™ 3.2 for Synapse Analytics is now generally available. Review the [official release notes](https://spark.apache.org/releases/spark-release-3-2-0.html) and [migration guidelines between Spark 3.1 and 3.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-31-to-32) to assess potential changes to your applications. For more details, read [Apache Spark version support and Azure Synapse Runtime for Apache Spark 3.2](./spark/apache-spark-version-support.md).
Highlights of what got better in Spark 3.2 in the [Azure Synapse Analytics July Update 2022](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_1).| -| July 2022 | **Apache Spark in Azure Synapse Intelligent Cache feature** | Intelligent Cache for Spark automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md).| -| June 2022 | **Map Data tool** | The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. To learn more about the Map Data tool, read [Map Data in Azure Synapse Analytics](./database-designer/overview-map-data.md).| -| June 2022 | **User Defined Functions** | User defined functions (UDFs) are now generally available. To learn more, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628). | --## Community --This section summarizes new Azure Synapse Analytics community opportunities and the [Azure Synapse Influencer program](https://aka.ms/synapseinfluencers) from Microsoft. --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| April 2023 | **Azure Synapse MVP Corner** | March highlights from the [Microsoft Azure Synapse MVP blog series Azure Synapse MVP Corner](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-march-2023/ba-p/3812901).| -| March 2023 | **Azure Synapse MVP Corner** | February highlights from the [Microsoft Azure Synapse MVP blog series Azure Synapse MVP Corner](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-february-2023/ba-p/3777900).| -| February 2023 | **Azure Synapse MVP Corner** | January highlights from the [Microsoft Azure Synapse MVP blog series Azure Synapse MVP Corner](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-january-2023/ba-p/3748470).| -| January 2023 | **Azure Synapse MVP Corner** | December highlights from the [Microsoft Azure Synapse MVP blog series Azure Synapse MVP Corner](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-december-2022/ba-p/3718122).| -| December 2022 | **Azure Synapse MVP Corner** | November highlights from the Microsoft Azure Synapse MVP blog series in this month's [Azure Synapse MVP Corner](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-november-2022/ba-p/3696939).| -| November 2022 | **Azure Synapse Influencer program** | The Azure Synapse Influencer program provides exclusive events and Q&A sessions like Ask the Experts with the Microsoft product team, where members can interact directly with product experts by asking any questions on various rotating topics. Get feedback from members of [Azure Synapse Analytics influencer community](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-influencer-program-passionate-about-azure-synapse/ba-p/3672906). 
| -| October 2022 | **Azure Synapse MVP Corner** | October highlights from the Microsoft Azure Synapse MVP blog series in this month's [Azure Synapse MVP Corner](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-october-2022/ba-p/3668048).| -| September 2022 | **Azure Synapse MVP Corner** | September highlights from the Microsoft Azure Synapse MVP blog series in this month's [Azure Synapse MVP Corner](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-mvp-corner-september-2022/ba-p/3643960).| -| May 2022 | **Azure Synapse Influencer program** | Sign up for our free [Azure Synapse Influencer program](https://aka.ms/synapseinfluencers) and get connected with a community of Synapse users who are dedicated to helping others achieve more with cloud analytics. Register now for our next [Synapse Influencer Ask the Experts session](https://aka.ms/synapseinfluencers/#events). It's free to attend and everyone is welcome to participate and join the discussion on Synapse-related topics. You can [watch past recorded Ask the Experts events](https://aka.ms/ATE-RecordedSessions) on the [Azure Synapse YouTube channel](https://www.youtube.com/channel/UCsZ4IlYjjVxqe5OZ14tyh5g). | --## Apache Spark for Azure Synapse Analytics --This section summarizes recent new features and capabilities of [Apache Spark for Azure Synapse Analytics](spark/apache-spark-overview.md). --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| April 2023 | **Delta Lake - Low Shuffle Merge** | [Low Shuffle Merge optimization for Delta tables](spark/low-shuffle-merge-for-apache-spark.md) is now available in Apache Spark 3.2 and 3.3 pools. You can now update a Delta table with advanced conditions using the Delta Lake MERGE command. | -| March 2023 | **Library management new ability: in-line installation** | `%pip` and `%conda` are now available in Apache Spark for Synapse! These commands can be used in notebooks to install Python packages for the current session. For more information, see [Manage session-scoped Python packages through %pip and %conda commands](spark/apache-spark-manage-session-packages.md#manage-session-scoped-python-packages-through-pip-and-conda-commands). | -| March 2023 | **Increasing Azure Synapse Analytics Spark performance up to 77%** | More regions are receiving the [performance increase for Azure Synapse Spark workloads](https://azure.microsoft.com/updates/increasing-azure-synapse-analytics-spark-performance-by-up-to-77/), including most recently Korea Central, Central India, and Australia Southeast. | -| March 2023 | **Azure Synapse Spark Notebook – Unit Testing** | Learn how to [test and create unit test cases for Spark jobs developed using Synapse Notebook](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-spark-notebook-unit-testing/ba-p/3725137). | -| March 2023 | **Apache Spark 2.4 and 3.1 retirement cycle** | The Azure Synapse runtimes for Apache Spark 2.4 and 3.1 have entered the [retirement cycle](spark/runtime-for-apache-spark-lifecycle-and-supportability.md). Apache Spark 2.4 will be retired September 29, 2023, and Apache Spark 3.1 will be retired as of January 26, 2024. You should relocate your workloads to a newer Apache Spark runtime within this period.
Read more at [Apache Spark runtimes in Azure Synapse](spark/apache-spark-version-support.md) and view the [Spark migration guide](https://spark.apache.org/docs/latest/core-migration-guide.html).| -| February 2023 | **Azure Synapse Runtime for Apache Spark 3.3** | The [Azure Synapse Runtime for Apache Spark 3.3](spark/apache-spark-33-runtime.md) is now generally available. Based on our testing using the 1TB TPC-H industry benchmark, you're likely to see [up to 77% increased performance](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_2). | -| January 2023 | **Spark Advisor for Azure Synapse Notebook** | The [Spark Advisor for Azure Synapse Notebook](monitoring/apache-spark-advisor.md) analyzes code run by Spark and displays real-time advice for Notebooks. The Spark advisor offers recommendations for code optimization based on built-in common patterns, performs error analysis, and locates the root cause of failures. | -| January 2023 | **Improve Spark pool utilization with Synapse Genie** | The Synapse Genie Framework improves Spark pool utilization by executing multiple Synapse notebooks on the same Spark pool instance. Read more about this [metadata-driven utility written in Python](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/improve-spark-pool-utilization-with-synapse-genie/ba-p/3690428). | -| November 2022 | **Azure Synapse Runtime for Apache Spark 3.3** | The [Azure Synapse Runtime for Apache Spark 3.3](spark/apache-spark-33-runtime.md) is currently in preview. For more information, see the [Apache Spark 3.3 preview blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-runtime-for-apache-spark-3-3-is-now-in-public/ba-p/3686449). Based on our testing using the 1TB TPC-H industry benchmark, you're likely to see [up to 77% increased performance](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_2). | -| September 2022 | **New informative Livy error codes** | [More precise error codes](spark/apache-spark-handle-livy-error.md) describe the cause of failure and replace the previous generic error codes. Previously, all errors in failing Spark jobs surfaced with a generic error code displaying `LIVY_JOB_STATE_DEAD`. | -| September 2022 | **New query optimization techniques in Apache Spark for Azure Synapse Analytics** | Read the [findings from Microsoft's work](https://vldb.org/pvldb/vol15/p936-rajan.pdf) to gain considerable performance benefits across the board on the reference TPC-DS workload as well as a significant reduction in query plan generation time. | -| August 2022 | **Apache Spark elastic pool storage** | Azure Synapse Analytics Spark pools now support elastic pool storage in preview. Elastic pool storage allows the Spark engine to monitor worker nodes' temporary storage and attach additional disks if needed. No action is required, and you should see fewer job failures as a result.
For more information, see [Blog: Azure Synapse Analytics Spark elastic pool storage is available for public preview](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_8).| -| August 2022 | **Apache Spark Optimized Write** | Optimize Write is a Delta Lake on Synapse preview feature that reduces the number of files written by Apache Spark 3 (3.1 and 3.2) and aims to increase individual file size of the written data. To learn more, see [The need for optimize write on Apache Spark](spark/optimize-write-for-apache-spark.md).| --## Data integration --This section summarizes recent new features and capabilities of Azure Synapse Analytics data integration. Learn how to [Load data into Azure Synapse Analytics using Azure Data Factory (ADF) or a Synapse pipeline](../data-factory/load-azure-sql-data-warehouse.md). --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| April 2023 | **Capture changed data from Cosmos DB analytical store (Public Preview)** | Azure Cosmos DB analytical store now supports change data capture (CDC) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for MongoDB. For more information, see [Capture Changed Data from your Cosmos DB analytical store](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/capture-changed-data-from-your-cosmos-db-analytical-store/ba-p/3783530) and [DevBlog: Change Data Capture (CDC) with Azure Cosmos DB analytical store](https://devblogs.microsoft.com/cosmosdb/now-in-preview-change-data-capture-cdc-with-azure-cosmos-db-analytical-store/).| -| March 2023 | **Deep dive: Synapse pipelines storage event trigger security** | This Customer Success Engineering blog post is a deep dive into [Azure Synapse pipelines storage event trigger security](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-pipelines-storage-event-trigger-security-deep-dive/ba-p/3778250). ADF and Synapse Pipelines offer a feature that allows pipeline execution to be triggered based on various events, such as storage blob creation or deletion. This can be used by customers to implement event-driven pipeline orchestration.| -| January 2023 | **SQL CDC incremental extract now supports numeric columns** | Enabling incremental [extract from SQL Server CDC in dataflows](../data-factory/connector-sql-server.md?tabs=data-factory#native-change-data-capture) allows you to only process rows that have changed since the last time that pipeline was executed. Supported incremental column types now include date/time and numeric columns. | -| December 2022 | **Express virtual network injection** | Both the standard and express methods to [inject your SSIS Integration Runtime (IR) into a VNet](https://techcommunity.microsoft.com/t5/sql-server-integration-services/vnet-or-no-vnet-secure-data-access-from-ssis-in-azure-data/ba-p/1062056) are generally available now. For more information, see [General Availability of Express Virtual Network injection for SSIS in Azure Data Factory](https://techcommunity.microsoft.com/t5/sql-server-integration-services/general-availability-of-express-virtual-network-injection-for/ba-p/3699993).| -| October 2022 | **SAP CDC connector GA** | The data connector for SAP Change Data Capture (CDC) is now GA. 
For more information, see [Announcing Public Preview of the SAP CDC solution in Azure Data Factory and Azure Synapse Analytics](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-the-sap-cdc-solution-in-azure-dat).| -| September 2022 | **Gantt chart view** | You can now view your activity runs with a Gantt chart in [Azure Data Factory Integration Runtime monitoring](../data-factory/monitor-integration-runtime.md). | -| September 2022 | **Monitoring improvements** | We've released [a new bundle of improvements to the monitoring experience](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/further-adf-monitoring-improvements/ba-p/3607669) based on community feedback. | -| September 2022 | **Maximum column optimization in mapping dataflow** | For delimited text data sources such as CSVs, a new **maximum columns** setting allows you to [set the maximum number of columns](../data-factory/format-delimited-text.md#mapping-data-flow-properties). | -| September 2022 | **NUMBER to integer conversion in Oracle data source connector** | New property to convert Oracle NUMBER type to a corresponding integer type in source via the new property **convertDecimalToInteger**. For more information, see the [Oracle source connector](../data-factory/connector-oracle.md?tabs=data-factory#oracle-as-source).| -| September 2022 | **Support for sending a body with HTTP request DELETE method in Web activity** | New support for sending a body (optional) when using the DELETE method in Web activity. For more information, see the available [Type properties for the Web activity](../data-factory/control-flow-web-activity.md#type-properties). | -| August 2022 | **Mapping data flows now support visual Cast transformation** | You can [use the cast transformation](../data-factory/data-flow-cast.md) to easily modify the data types of individual columns in a data flow. | -| August 2022 | **Default activity timeout changed to 12 hours** | The [default activity timeout is now 12 hours](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/azure-data-factory-changing-default-pipeline-activity-timeout/ba-p/3598729). | -| August 2022 | **Pipeline expression builder ease-of-use enhancements** | We've [updated our expression builder UI to make pipeline designing easier](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/coming-soon-to-adf-more-pipeline-expression-builder-ease-of-use/ba-p/3567196). | -| August 2022 | **New UI for mapping dataflow inline dataset types**| We've updated our data flow source UI to [make it easier to find your inline dataset type](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_21).| -| July 2022 | **Time-To-Live in managed virtual network (VNet)** | Reserve compute for the time-to-live (TTL) in managed virtual network TTL period, saving time and improving efficiency. For more information on this preview, see [Announcing public preview of Time-To-Live (TTL) in managed virtual network](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879).| -| June 2022 | **SAP CDC connector preview** | A new data connector for SAP Change Data Capture (CDC) is now available in preview. 
For more information, see [Announcing Public Preview of the SAP CDC solution in Azure Data Factory and Azure Synapse Analytics](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-the-sap-cdc-solution-in-azure-dat).| -| June 2022 | **Fuzzy join option in Join Transformation** | A fuzzy matching option with a similarity threshold score slider has been added to the [Join transformation in Mapping Data Flows](../data-factory/data-flow-join.md). | -| June 2022 | **Map Data tool GA** | We're excited to announce that the [Map Data tool](./database-designer/overview-map-data.md) is now Generally Available. The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. | -| June 2022 | **Rerun pipeline with new parameters** | You can now change pipeline parameters when rerunning a pipeline from the Monitoring page without having to return to the pipeline editor. To learn more, read [Rerun pipelines and activities](../data-factory/monitor-visually.md#rerun-pipelines-and-activities).| -| June 2022 | **User Defined Functions GA** | [User defined functions (UDFs) in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628) are now generally available (GA). | --## Database Templates & Database Designer --This section summarizes recent new features and capabilities of [database templates](./database-designer/overview-database-templates.md) and [the database designer](database-designer/quick-start-create-lake-database.md). --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| July 2022 | **Browse industry templates** | Browse industry templates and add tables to create your own lake database. Learn more about [ways you can browse industry templates](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/ways-you-can-browse-industry-templates-and-add-tables-to-create/ba-p/3495011) and get started with [Quickstart: Create a new lake database leveraging database templates](database-designer/quick-start-create-lake-database.md).| --## Developer experience --This section summarizes recent quality-of-life and feature improvements for [developers in Azure Synapse Analytics](sql/develop-overview.md). --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| May 2023 | **Using Azure DevOps with Synapse Workspaces to create hot fixes in production environments** | Blog post on how to [deploy a fix from your development Synapse Workspace into a production Synapse Workspace without adversely affecting ongoing development projects](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/using-azure-devops-with-synapse-workspaces-to-create-hot-fixes/ba-p/3809631). | -| December 2022 | **MSSparkUtils is the Swiss Army knife inside Synapse Spark** | MSSparkUtils (Microsoft Spark Utilities) is a built-in package that helps you easily perform common tasks, including the ability to [share results between notebooks](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/mssparkutils-is-the-swiss-army-knife-inside-synapse-spark/ba-p/3673355). 
| -| September 2022 | **Synapse CICD for publishing workspace artifacts** | Integrating Synapse Studio with a Source Control System such as [Azure DevOps Git](https://dev.azure.com/) or [GitHub](https://github.com/) has been shown as one of Synapse Studio's preferred features to collaborate and provide [source control for Azure Synapse](cicd/source-control.md). The Visual Studio marketplace has a [Synapse workspace deployment task](https://marketplace.visualstudio.com/items?itemName=AzureSynapseWorkspace.synapsecicd-deploy) to automate publishing.| -| July 2022 | **Synapse Notebooks compatibility with IPython** | The official kernel for Jupyter notebooks is IPython and it's now supported in Synapse Notebooks. For more information, see [Synapse Notebooks is now fully compatible with IPython](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_14).| -| July 2022 | **Mssparkutils now has spark.stop() method** | A new API `mssparkutils.session.stop()` has been added to the mssparkutils package. This feature becomes handy when there are multiple sessions running against the same Spark pool. The new API is available for Scala and Python. To learn more, see [Stop an interactive session](spark/microsoft-spark-utilities.md#stop-an-interactive-session).| --## Machine Learning --This section summarizes recent new features and improvements to machine learning models in Azure Synapse Analytics. --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| March 2023 | **Using OpenAI GPT in Synapse Analytics** | Microsoft offers Azure OpenAI as an Azure Cognitive Service, and you can [access Azure OpenAI's GPT models from within Synapse Spark](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/using-openai-gpt-in-synapse-analytics/ba-p/3751815). | -| November 2022 | **R Support (preview)** | Azure Synapse Analytics [now provides built-in R support for Apache Spark](./spark/apache-spark-r-language.md), currently in preview. For an example, [install an R library from CRAN and CRAN snapshots](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_16). | -| August 2022 | **SynapseML v.0.10.0** | New [release of SynapseML v0.10.0](https://github.com/microsoft/SynapseML/releases/tag/v0.10.0) (previously MMLSpark), an open-source library that aims to simplify the creation of massively scalable machine learning pipelines. Learn more about the [latest additions to SynapseML](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/exciting-new-release-of-synapseml/ba-p/3589606) and get started with [SynapseML](https://aka.ms/spark).| -| August 2022 | **.NET support** | SynapseML v0.10 [adds full support for .NET languages](https://devblogs.microsoft.com/dotnet/announcing-synapseml-for-dotnet/) like C# and F#. For a .NET SynapseML example, see [.NET Example with LightGBMClassifier](https://microsoft.github.io/SynapseML/docs/Reference/Quickstart%20-%20LightGBM%20in%20Dotnet/).| -| August 2022 | **Azure OpenAI Service support** | SynapseML now allows users to tap into 175-Billion parameter language models (GPT-3) from OpenAI that can generate and complete text and code near human parity. 
For more information, see [Azure OpenAI for Big Data](https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/OpenAI/).| -| August 2022 | **MLflow platform support** | SynapseML models now integrate with [MLflow](https://microsoft.github.io/SynapseML/docs/Use%20with%20MLFlow/Overview/) with full support for saving, loading, deployment, and [autologging](https://microsoft.github.io/SynapseML/docs/Use%20with%20MLFlow/Autologging/).| -| August 2022 | **SynapseML in Binder** | We know that Spark can be intimidating for first-time users, but fear not: with Binder, you can [explore and experiment with SynapseML in Binder](https://mybinder.org/v2/gh/microsoft/SynapseML/93d7ccf?labpath=notebooks%2Ffeatures) with zero setup, installation, infrastructure, or Azure account required.| -| June 2022 | **Distributed Deep Neural Network Training (preview)** | The Azure Synapse Analytics runtime for Apache Spark 3.1 and 3.2 now includes support for the most common deep learning libraries, like TensorFlow and PyTorch, along with supporting libraries like Petastorm and Horovod that are commonly used for distributed training. This feature is currently available in preview. To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md). | --## Samples and guidance --This section summarizes new guidance and sample project resources for Azure Synapse Analytics. --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| May 2023 | **Implementing Slow Change Dimension with Synapse** | Demonstrates how to [use a serverless SQL Pool to implement Slow Change Dimension type 2 on top of a data lake](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/transforming-your-data-lake-implementing-slow-change-dimension/ba-p/3718996). | -| May 2023 | **CI & CD With Azure Synapse Dedicated SQL Pool** | This blog article shows how to use version control, continuous integration and deployment, and best practices to [manage the ALM lifecycle of an Azure Synapse Data Warehouse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/ci-amp-cd-with-azure-synapse-dedicated-sql-pool/ba-p/3810686). | -| March 2023 | **Create a Data Solution on Azure Synapse Analytics with Snapshot Serengeti** | This is [a four-part series on building an end-to-end data analytics and machine learning solution on Azure Synapse Analytics](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/create-a-data-solution-on-azure-synapse-analytics-with-snapshot/ba-p/3764726). The dataset used in this solution is the Snapshot Serengeti dataset, which consists of a large-scale collection of camera trap images. | -| March 2023 | **Introduction to Kusto Query Language (KQL)** | This Customer Success Engineering blog post provides an [introduction to Kusto Query Language (KQL)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/introduction-to-kusto-query-language-kql/ba-p/3758349), a powerful query language to analyze large volumes of structured, semi-structured, and unstructured (free text) data. | -| March 2023 | **Creating a custom disaster recovery plan for your Synapse workspace** | A multi-part blog series on [creating a disaster recovery plan for your Synapse workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/creating-a-custom-disaster-recovery-plan-for-your-synapse/ba-p/3746072). 
| -| March 2023 | **Azure Synapse connectivity: public endpoints, private endpoints, managed VNet and managed private endpoints** | A three-part expert-written blog series on Azure Synapse connectivity for the various networking options, including [inbound dedicated pool public endpoint connectivity](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-connectivity-series-part-1-inbound-sql-dw-connections-on/ba-p/3589170), [Azure Synapse private endpoints](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-connectivity-series-part-2-inbound-synapse-private/ba-p/3705160), and [managed VNet and managed private endpoints](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-connectivity-series-part-3-synapse-managed-vnet-and/ba-p/3706983). | -| February 2023 | **Historical monitoring dashboards for Azure Synapse dedicated SQL pools** | A walkthrough of the steps to [enable historical monitoring using Azure Monitor Workbook templates on top of Azure Metrics and Azure Log Analytics](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/historical-monitoring-dashboards-for-azure-synapse-dedicated-sql/ba-p/3725322). | -| January 2023 | **Read Data Lake with Synapse Serverless pools** | A two-part guide on how to [use OPENROWSET to query a path within the lake](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-to-read-data-lake-with-synapse-serverless-part-1/ba-p/3692779) or [use an external table to query a path within the lake](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-to-read-data-lake-with-synapse-serverless-part-2/ba-p/3692806). | -| January 2023 | **Structured streaming in Synapse Spark** | A detailed example of [streaming IoT temperature data from IoT devices into Synapse Spark](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/structured-streaming-in-synapse-spark/ba-p/3692836). | -| January 2023 | **Create DNS alias for dedicated SQL pool in Synapse workspace for disaster recovery** | A [custom DNS for dedicated SQL pools (formerly SQL DW)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/create-dns-alias-for-dedicated-sql-pool-in-synapse-workspace-for/ba-p/3675676) can redirect client programs during a disaster. | -| December 2022 | **Azure Synapse - Data Lake vs. Delta Lake vs. Data Lakehouse** | Read a new Success Engineering blog post demystifying the terms [Data Lake, Delta Lake, and Data Lakehouse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-data-lake-vs-delta-lake-vs-data-lakehouse/ba-p/3673653). | -| November 2022 | **How Data Exfiltration Protection (DEP) impacts Azure Synapse Analytics Pipelines** | [Data Exfiltration Protection (DEP)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-data-exfiltration-protection-dep-impacts-azure-synapse/ba-p/3676146) is a feature that enables additional restrictions on the ability of Azure Synapse Analytics to connect to other services. | -| November 2022 | **Getting started with REST APIs for Azure Synapse Analytics - Apache Spark Pool** | We provide [instructions on how to set up and use Synapse REST endpoints and describe the Apache Spark Pool operations supported by REST APIs](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/getting-started-with-rest-apis-for-azure-synapse-analytics/ba-p/3668474). 
| -| November 2022 | **Demystifying Azure Synapse Data Explorer** | A two-part explainer to [demystify Data Explorer in Azure Synapse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/demystifying-data-explorer/ba-p/3636191) and [data ingestion with Azure Synapse Data Explorer](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/demystifying-data-ingestion-in-azure-synapse-data-explorer/ba-p/3661133). | -| November 2022 | **Synapse Spark Delta Time Travel** | Delta Lake [time travel enables point-in-time query snapshots or even rolls back erroneous updates](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-spark-delta-time-travel/ba-p/3646789). | -| September 2022 | **What is the difference between Synapse dedicated SQL pool (formerly SQL DW) and Serverless SQL pool?** | Understand dedicated vs serverless pools and their concurrency. Read more at [basic concepts of dedicated SQL pools and serverless SQL pools](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/understand-synapse-dedicated-sql-pool-formerly-sql-dw-and/ba-p/3594628).| -| September 2022 | **Reading Delta Lake in dedicated SQL Pool** | [Sample script](https://github.com/microsoft/Azure_Synapse_Toolbox/tree/master/TSQL_Queries/Delta%20Lake) to import Delta Lake files directly into the dedicated SQL Pool and support features like time-travel. For an explanation, see [Reading Delta Lake in dedicated SQL Pool](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/reading-delta-lake-in-dedicated-sql-pool/ba-p/3571053).| -| September 2022 | **Azure Synapse Customer Success Engineering blog series** | The new [Azure Synapse Customer Success Engineering blog series](https://aka.ms/synapsecseblog) launches with a detailed introduction to [Building the Lakehouse - Implementing a Data Lake Strategy with Azure Synapse](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/building-the-lakehouse-implementing-a-data-lake-strategy-with/ba-p/3612291).| -| June 2022 | **Azure Orbital analytics with Synapse Analytics** | We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics. The sample solution also demonstrates how to integrate geospatial-specific [Azure AI services](/azure/ai-services/) models, AI models from partners, and bring-your-own-data models. | -| June 2022 | **Migration guides for Oracle** | A new Microsoft-authored migration guide for Oracle to Azure Synapse Analytics is now available. [Design and performance for Oracle migrations](migration-guides/oracle/1-design-performance-migration.md). | -| June 2022 | **Azure Synapse success by design** | The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. | -| June 2022 | **Migration guides for Teradata** | A new Microsoft-authored migration guide for Teradata to Azure Synapse Analytics is now available. [Design and performance for Teradata migrations](migration-guides/teradat). | -| June 2022 | **Migration guides for IBM Netezza** | A new Microsoft-authored migration guide for IBM Netezza to Azure Synapse Analytics is now available. 
[Design and performance for IBM Netezza migrations](migration-guides/netezz). | --## Security --This section summarizes recent new security features and settings in Azure Synapse Analytics. --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| December 2022 | **How Data Exfiltration Protection (DEP) impacts Azure Synapse Analytics Pipelines** | [Data Exfiltration Protection (DEP)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-data-exfiltration-protection-dep-impacts-azure-synapse/ba-p/3676146) is a feature that enables additional restrictions on the ability of Azure Synapse Analytics to connect to other services. | -| August 2022 | **Execute Azure Synapse Spark Notebooks with system-assigned managed identity** | You can [now execute Spark Notebooks with the system-assigned managed identity (or workspace managed identity)](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_30) by enabling *Run as managed identity* from the **Configure** session menu. With this feature, you are able to validate that your notebook works as expected when using the system-assigned managed identity, before using the notebook in a pipeline. For more information, see [Managed identity for Azure Synapse](synapse-service-identity.md).| -| July 2022 | **Changes to permissions needed for publishing to Git** | Now, only Git permissions and the Synapse Artifact Publisher (Synapse RBAC) role are needed to commit changes in Git-mode. For more information, see [Access control enforcement in Synapse Studio](security/synapse-workspace-access-control-overview.md#access-control-enforcement-in-synapse-studio).| --## Azure Synapse Data Explorer (preview) --Azure Data Explorer (ADX) is a fast and highly scalable data exploration service for log and telemetry data. It offers ingestion from Event Hubs, IoT Hubs, blobs written to blob containers, and Azure Stream Analytics jobs. This section summarizes recent new features and capabilities of [the Azure Synapse Data Explorer](data-explorer/data-explorer-overview.md) and [the Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/). Read more about [What is the difference between Azure Synapse Data Explorer and Azure Data Explorer?](data-explorer/data-explorer-compare.md) --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| April 2023 | **ARM template to deploy Azure Data Explorer DB with Cosmos DB connection** | An [ARM template is now available to quickly deploy an Azure Data Explorer cluster](/samples/azure/azure-quickstart-templates/kusto-cosmos-db/) with System Assigned Identity, a database, an Azure Cosmos DB account (NoSQL), an Azure Cosmos DB database, an Azure Cosmos DB container, and a data connection between the Cosmos DB container and the Kusto database (using the system assigned identity). | -| April 2023 | **Ingest data from Azure Event Hubs to ADX free tier** | Azure Data Explorer now supports integration with Event Hubs in the ADX free tier. For more information, see [Free Event Hub data analysis with Azure Data Explorer](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/free-event-hub-data-analysis-with-azure-data-explorer/ba-p/3775034). 
| -| March 2023 | **View cluster history in Kusto Data Explorer** | It is now easier to track the history of queries and commands run on a Kusto cluster using [`.show queries`](/azure/data-explorer/kusto/management/queries) and [`.show commands-and-queries`](/azure/data-explorer/kusto/management/commands-and-queries). | -| March 2023 | **Amazon S3 support in Kusto Web Explorer** | You can now [ingest data from Amazon S3](/azure/data-explorer/kusto/api/connection-strings/storage-connection-strings) seamlessly via the Ingestion Hub in Kusto Web Explorer (KWE). | -| March 2023 | **Plotly visuals support** | Use the [Plotly graphing library](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/plotly-visualizations-in-azure-data-explorer/ba-p/3717768) to create visualizations for [a KQL query using 'render' operator](/azure/data-explorer/kusto/query/renderoperator?pivots=azuredataexplorer) or interactively when [building ADX dashboards](/azure/data-explorer/azure-data-explorer-dashboards). | -| February 2023 | **ADX Dashboards GA** | [Now generally available, Azure Data Explorer dashboards](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/general-availability-adx-dashboards/ba-p/3749361) using [the Azure Data Explorer web UI](https://dataexplorer.azure.com/) allow you to explore your data from end-to-end, starting with data ingestion, running queries, and ultimately building dashboards. | -| February 2023 | **ADX file ingestion supports up to 1000 files** | The [ADX ingestion wizard](/azure/data-explorer/ingest-data-wizard) now supports up to 1000 files (previously 10) at once. | -| January 2023 | **Apache Log4j 2 connector for Azure Data Explorer** | The [Apache Log4J 2 sink for Azure Data Explorer](https://github.com/Azure/azure-kusto-log4j) was developed to easily stream your Log4j 2 log data to Azure Data Explorer, where you can analyze, visualize, and alert on your logs in real-time. For more information, see [Getting started with Apache Log4j and Azure Data Explorer](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/getting-started-with-apache-log4j-and-azure-data-explorer/ba-p/3705242). | -| January 2023 | **Ingest preexisting Event Hub events to ADX** | ADX can now ingest Event Hubs data that existed before the creation of an Event Hubs data connection in your ADX cluster via the [Event retrieval start date](/azure/data-explorer/ingest-data-event-hub#event-retrieval-start-date). | -| January 2023 | **Multivariate Anomaly Detection** | ADX contains native support for [detecting anomalies over multiple time series](/azure/data-explorer/anomaly-detection) by using the function [series_decompose_anomalies()](/azure/data-explorer/kusto/query/series-decompose-anomaliesfunction). For more information, see [Multivariate Anomaly Detection](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/multivariate-anomaly-detection-in-azure-data-explorer/ba-p/3689616). | -| January 2023 | **Improved conditional formatting in dashboard** | [Conditional formatting](/azure/data-explorer/dashboard-customize-visuals#conditional-formatting) helps in surfacing anomalies or outlier data points visually. Now you can either format a visual by using conditions or by applying themes to numeric columns or discrete values to non-numeric ones. 
| -| January 2023 | **New display options for pie chart displays** | Focus on the data you care about with new display options for [pie chart visualizations in Dashboards](/azure/data-explorer/dashboard-customize-visuals#pie-chart). | -| December 2022 | **ADX Kusto Web Explorer (KWE) JPath viewer** | JPath notation describes the path to one or more elements in a JSON document. Use the new expanded view to quickly get a specific element of a JSON text, and easily copy its path expression. For an example, see [JPath viewer](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/adx-web-ui-updates-december-2022/ba-p/3704159#jpath).| -| December 2022 | **Demystifying data consumption using Azure Synapse Data Explorer** | A guide to the various ways of [retrieving, consuming and visualizing data from Azure Synapse Data Explorer](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/demystifying-data-consumption-using-azure-synapse-data-explorer/ba-p/3684265). | -| November 2022 | **Table Level Sharing support via Azure Data Share** | We have now [added Table level sharing support](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_10) via the [Azure Data Share interface](https://azure.microsoft.com/products/data-share/#overview) where you can share specific tables in the database. This allows you to easily and securely share your data with people in your company or external partners. | -| November 2022 | **Ingest data from Azure Stream Analytics into Synapse Data Explorer** | The ability to use a Streaming Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster is now generally available. For more information, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector) and [ADX output from Azure Stream Analytics](../stream-analytics/azure-database-explorer-output.md).| -| November 2022 | **Parse-kv operator** | The new [parse-kv operator](/azure/data-explorer/kusto/query/parse-kv-operator) extracts structured information from a string expression and represents the information in a key/value form. You can use a [specified delimiter](/azure/data-explorer/kusto/query/parse-kv-operator#specified-delimeter), a [non-specified delimiter](/azure/data-explorer/kusto/query/parse-kv-operator#non-specified-delimiter), or [Regex](/azure/data-explorer/kusto/query/parse-kv-operator#regex) via a [RE2 regular expression](/azure/data-explorer/kusto/query/re2). | -| October 2022 | **Leaders and followers in ADX clusters** | Use the [database page in the Azure portal](https://ms.portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Kusto%2Fclusters) to easily identify all the [follower databases following a leader, and the leader for a given follower](/azure/data-explorer/follower). | -| October 2022 | **Aliasing follower databases** |The [follower database feature](/azure/data-explorer/follower) allows you to attach a database located in a different cluster to your Azure Data Explorer cluster. [Now you can override the database name](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-update-2022/ba-p/3680019#TOCREF_12) while establishing a follower relationship. 
| -| October 2022 | **Ingest data from OpenTelemetry** | [OpenTelemetry (OTel)](https://opentelemetry.io/docs/concepts/what-is-opentelemetry/) is a vendor-neutral open-source application observability framework. The OpenTelemetry exporter [supports ingestion of data from many receivers into Azure Data Explorer](/azure/data-explorer/open-telemetry-connector). | -| October 2022 | **Ingest data from Telegraf** | Telegraf is an open source, lightweight, minimal memory footprint agent for collecting, processing, and writing telemetry data including logs, metrics, and IoT data. The [Azure Data Explorer output plugin serves as the connector from Telegraf](/azure/data-explorer/ingest-data-telegraf) and supports ingestion of data from many types of input plugins into Azure Data Explorer. | -| September 2022 | **Azure Data Explorer Kusto emulator** | The [ADX Emulator is a Docker Image](/azure/data-explorer/kusto-emulator-overview) exposing an ADX Query Engine endpoint. You can use it to create databases and ingest and query data. The emulator understands Kusto Query Language (KQL) the same way the Azure Service does. | -| September 2022 | **Logstash connector proxy configuration** | The Azure Data Explorer (ADX) Logstash plugin enables you to process events from [Logstash](https://github.com/Azure/logstash-output-kusto) into an ADX database for analysis. Version 1.0.5 now supports HTTP/HTTPS proxies.| -| September 2022 | **Kafka support for Protobuf format** | The [ADX Kafka sink connector](https://www.confluent.io/hub/microsoftcorporation/kafka-sink-azure-kusto) leverages the Kafka Connect framework and provides an adapter to ingest data from Kafka in JSON, Avro, String, and now the [Protobuf format](https://developers.google.com/protocol-buffers) in the latest update. Read more about [Ingesting Protobuf data from Kafka to Azure Data Explorer](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/ingesting-protobuf-data-from-kafka-to-azure-data-explorer/ba-p/3595793). | -| September 2022 | **Funnel visuals** | [Funnel is the latest visual we added to Azure Data Explorer dashboards](/azure/data-explorer/dashboard-customize-visuals#funnel) following the feedback we received from customers. | -| September 2022 | **.NET and Node.js support in Sample App Generator** | The [Azure Data Explorer (ADX) sample app generator wizard](https://dataexplorer.azure.com/oneclick/generatecode?sourceType=file&programingLang=C) is a tool that allows you to [create a working app to ingest and query your data](/azure/data-explorer/sample-app-generator-wizard) in your preferred programming language. Now, generating sample apps in .NET and Node.js is supported along with the previously available options Java and Python. | -| August 2022 | **Protobuf support in Kafka sink** | [Azure Data Explorer Kafka sink](https://github.com/Azure/kafka-sink-azure-kusto) - a gold certified Confluent connector - helps ingest data from Kafka to Azure Data Explorer. We have [added Protobuf support in the connector](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/ingesting-protobuf-data-from-kafka-to-azure-data-explorer/ba-p/3595793) to help customers bring Protobuf data into ADX. | -| August 2022 | **Native support for Amazon S3** | The `.ingest into` ADX command ingests data into a table by "pulling" the data from one or more cloud storage files. The command now [supports Amazon S3 URLs](/azure/data-explorer/kusto/management/data-ingestion/ingest-from-storage). 
For an example, read the blog post announcing [Continuous data ingestion from S3](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-supports-native-ingestion-from-amazon-s3/ba-p/3606746).| -| August 2022 | **Embed ADX dashboards** | The ADX web UI and dashboards can be [embedded in an IFrame and hosted in third party apps](/azure/data-explorer/kusto/api/monaco/host-web-ux-in-iframe). | -| August 2022 | **Free cluster upgrade option** | You can now [upgrade your Azure Data Explorer free cluster to a full cluster](/azure/data-explorer/start-for-free-upgrade) that removes the storage limitation, allowing you more capacity to grow your data. | -| August 2022 | **Analyze fresh ADX data from Excel pivot table** | Now you can [Use fresh and unlimited volume of ADX data (Kusto) from your favorite analytic tool, Excel pivot tables](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/use-fresh-and-unlimited-volume-of-adx-data-kusto-from-your/ba-p/3588894). MDX queries generated by the Pivot code will find their way to the Kusto backend as KQL statements that aggregate the data as needed by the pivot and back to Excel.| -| August 2022 | **Query results - color by value** | Highlight unique data at-a-glance in query results to visually group rows that share identical values for a specific column. Use **Explore results** and **Color by value** to [apply color to rows based on the selected column](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_14).| -| August 2022 | **Web explorer - crosshair support for charts** | The `ysplit` property now supports [the crosshair visual](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-august-update-2022/ba-p/3535126#TOCREF_15) (vertical lines that move along the mouse pointer) for many charts. | -| July 2022 | **Scan operator** | The powerful [scan operator](/azure/data-explorer/kusto/query/scan-operator) enables efficient and scalable process mining, sequence analytics, and user analytics in ADX. Common scenarios for using `scan` include preventive maintenance for IoT devices, funnel analysis, recursive calculation, security scenarios looking for known attack steps, and more. | -| July 2022 | **Ingest data from Azure Stream Analytics into Synapse Data Explorer (Preview)** | You can now use a Streaming Analytics job to collect data from an event hub and send it to your Azure Data Explorer cluster using the Azure portal or an ARM template. For more information, see [Ingest data from Azure Stream Analytics into Azure Data Explorer](/azure/data-explorer/stream-analytics-connector). | -| July 2022 | **Render charts for each y column** | Synapse Web Data Explorer now supports rendering charts for each y column. For an example, see the [Azure Synapse Analytics July Update 2022](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-july-update-2022/ba-p/3535089#TOCREF_6).| -| June 2022 | **Web Explorer new homepage** | The new Azure Synapse [Web Explorer homepage](https://dataexplorer.azure.com/home) makes it even easier to get started with Synapse Web Explorer. 
| -| June 2022 | **Web Explorer sample gallery** | The [Web Explorer sample gallery](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552) provides end-to-end samples of how customers leverage Synapse Data Explorer for popular use cases such as logs data, metrics data, IoT data, and basic big data examples. | -| June 2022 | **Web Explorer dashboards drill through capabilities** | You can now [use drillthroughs as parameters in your Synapse Web Explorer dashboards](/azure/data-explorer/dashboard-parameters#use-drillthroughs-as-dashboard-parameters). | -| June 2022 | **Time Zone settings for Web Explorer** | The [Time Zone settings of the Web Explorer](/azure/data-explorer/web-query-data#change-datetime-to-specific-time-zone) now apply to both the query results and the dashboard. By changing the time zone, the dashboards are automatically refreshed to present the data with the selected time zone. | --## Azure Synapse Link --Azure Synapse Link is an automated system for replicating data from [SQL Server or Azure SQL Database](synapse-link/sql-synapse-link-overview.md), [Azure Cosmos DB](/azure/cosmos-db/synapse-link?context=%2fazure%2fsynapse-analytics%2fcontext%2fcontext), or [Dataverse](/power-apps/maker/data-platform/export-to-data-lake?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext) into Azure Synapse Analytics. This section summarizes recent news about the Azure Synapse Link feature. --|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| March 2023 | **Cosmos DB Synapse Link for Azure Data Explorer GA** | Azure Data Explorer supports fully managed data ingestion from Azure Cosmos DB using a change feed. We now support [Cosmos DB accounts behind a Managed Private Endpoint](/azure/data-explorer/security-network-managed-private-endpoint-create) or Service Endpoint. For more information, see [Ingest data from Azure Cosmos DB into Azure Data Explorer](/azure/data-explorer/ingest-data-cosmos-db-connection). | -| January 2023 | **Cosmos DB Synapse Link for Azure Data Explorer preview** | Azure Data Explorer supports fully managed data ingestion from Azure Cosmos DB using a change feed. For more information, see [Ingest data from Azure Cosmos DB into Azure Data Explorer (Preview)](/azure/data-explorer/ingest-data-cosmos-db-connection). | -| November 2022 | **Azure Synapse Link for SQL** | Azure Synapse Link for SQL is now generally available for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link for SQL feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. For more information, see [What is Azure Synapse Link for SQL?](synapse-link/sql-synapse-link-overview.md)| -| July 2022 | **Batch mode** | Decide between cost and latency in Azure Synapse Link for SQL by selecting *continuous* or *batch* mode to replicate your data. Batch mode allows you to save even more on costs by paying for the ingestion service only during the batch loads instead of continuously. You can select between 20 and 60 minutes for batch processing.| --## Synapse SQL --This section summarizes recent improvements and features in SQL pools in Azure Synapse Analytics. 
--|**Month** | **Feature** | **Learn more**| -|:-- |:-- | :-- | -| June 2023 | **Updated diagnostic settings fields** | Nine fields have been [added to the dedicated SQL pool diagnostic settings logs](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/missing-fields-added-to-dedicated-sql-pool-diagnostic-settings/ba-p/3844011). | -| March 2023 | **Create alerts for your Azure Synapse dedicated SQL pool** | This Customer Success Engineering blog post provides steps to [configure alerts for your Azure Synapse dedicated SQL pool](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/create-alerts-for-your-synapse-dedicated-sql-pool/ba-p/3773256) and provides recommended alerts to get you started. | -| March 2023 | **Performance Tuning Synapse Dedicated Pools - Understanding the Query Lifecycle** | This Customer Success Engineering blog post is a deep dive into [Understanding Query Lifecycle to Maximize Performance](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/performance-tuning-synapse-dedicated-pools-understanding-the/ba-p/3717260). | -| March 2023 | **GREATEST and LEAST T-SQL syntax support** | [GREATEST](/sql/t-sql/functions/logical-functions-greatest-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [LEAST](/sql/t-sql/functions/logical-functions-least-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions are now available in both serverless and dedicated SQL pools. These scalar-valued functions return the maximum and minimum value out of a list of one or more expressions. | -| March 2023| **Multi-column distribution in dedicated SQL pools GA** | You can now [Hash Distribute tables on multiple columns](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/multi-column-distribution-for-dedicated-sql-pools-is-now-ga/ba-p/3774529) for a more even distribution of the base table, reducing data skew over time and improving query performance. For more information on this generally available feature, see the three options: [CREATE MATERIALIZED VIEW](/sql/t-sql/statements/create-materialized-view-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true), [CREATE TABLE distribution options](/sql/t-sql/statements/create-table-azure-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true#TableDistributionOptions), or [CREATE TABLE AS SELECT distribution options](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse?view=azure-sqldw-latest&preserve-view=true#table-distribution-options).| -| March 2023 | **Deploying Synapse SQL serverless using SSDT** | SqlPackage's [long-awaited support for Azure Synapse Analytics serverless SQL pools](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/deploying-synapse-sql-serverless-objects-across-environments/ba-p/3751953) is now available starting with [the 161.8089.0 SqlPackage](/sql/tools/sqlpackage/release-notes-sqlpackage?view=sql-server-ver16&preserve-view=true#16180890-sqlpackage). Serverless SQL pools are [supported for both the extract and publish actions](/sql/tools/sqlpackage/sqlpackage-for-azure-synapse-analytics?view=sql-server-ver16&preserve-view=true#support-for-serverless-sql-pools). 
| -| February 2023 | **UTF-8 and Japanese collations support for dedicated SQL pools** | Both UTF-8 support and Japanese collations are now [generally available for dedicated SQL pools](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/utf-8-and-japanese-collations-support-for-dedicated-sql-pools-is/ba-p/3740989). | -| September 2022 | **Auto-statistics for OPENROWSET in CSV datasets** | Serverless SQL pool will [automatically create statistics](sql/develop-tables-statistics.md#statistics-in-serverless-sql-pool) for CSV datasets when needed to ensure an optimal query execution plan for OPENROWSET queries. | -| September 2022 | **MERGE T-SQL syntax** | [T-SQL MERGE syntax](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true) has been a highly requested addition to the Synapse T-SQL library. MERGE encapsulates INSERTs/UPDATEs/DELETEs into a single statement. Available in dedicated SQL pools in version 10.0.17829 and above. For more, see the [MERGE T-SQL announcement blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/merge-t-sql-for-dedicated-sql-pools-is-now-ga/ba-p/3634331).| -| August 2022| **Apache Spark Delta Lake tables in serverless SQL pools** | The ability for serverless SQL pools to access Delta Lake tables created in Spark databases is in preview. For more information, see [Azure Synapse Analytics shared metadata tables](metadat).| -| August 2022| **Multi-column distribution in dedicated SQL pools** | You can now Hash Distribute tables on multiple columns for a more even distribution of the base table, reducing data skew over time and improving query performance. For more information on opting in to the preview, see [CREATE TABLE distribution options](/sql/t-sql/statements/create-table-azure-sql-data-warehouse#TableDistributionOptions) or [CREATE TABLE AS SELECT distribution options](/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse#table-distribution-options).| -| August 2022| **Distribution Advisor**| The Distribution Advisor is a new preview feature in Azure Synapse dedicated SQL pools Gen2 that analyzes queries and recommends the best distribution strategies for tables to improve query performance. For more information, see [Distribution Advisor in Azure Synapse SQL](sql/distribution-advisor.md).| -| August 2022 | **Add SQL objects and users in Lake databases** | New capabilities announced for lake databases in serverless SQL pools: create schemas, views, procedures, and inline table-valued functions. You can also add database users from your Azure Active Directory domain and assign them to the db_datareader role. For more information, see [Access lake databases using serverless SQL pool in Azure Synapse Analytics](metadat).| --## Learn more --For older updates, review past [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate) posts or [previous updates in Azure Synapse Analytics](whats-new-archive.md). 
--- [Get started with Azure Synapse Analytics](get-started.md)-- [Introduction to Azure Synapse Analytics](/training/modules/introduction-azure-synapse-analytics/)-- [Realize Integrated Analytical Solutions with Azure Synapse Analytics](/training/paths/realize-integrated-analytical-solutions-with-azure-synapse-analytics/)-- [Data integration at scale with Azure Data Factory or Azure Synapse Pipeline](/training/paths/data-integration-scale-azure-data-factory/)-- [Microsoft Training Learning Paths for Azure Synapse](/training/browse/?terms=synapse&resource_type=learning%20path)-- [Azure Synapse Analytics in Microsoft Q&A](/answers/topics/azure-synapse-analytics.html)--## Next steps --- [Azure Synapse Analytics Blog](https://aka.ms/SynapseMonthlyUpdate)-- [Become an Azure Synapse Influencer](https://aka.ms/synapseinfluencers)-- [Azure Synapse Analytics terminology](overview-terminology.md)-- [Azure Synapse Analytics migration guides](migration-guides/index.yml)-- [Azure Synapse Analytics frequently asked questions](overview-faq.yml) |
update-manager | Manage Arc Enabled Servers Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-arc-enabled-servers-programmatically.md | description: This article tells how to use Azure Update Manager using REST API w Previously updated : 05/13/2024 Last updated : 10/17/2024 The following table describes the elements of the request body: | `maximumDuration` | Maximum amount of time in minutes the OS update operation can take. It must be an ISO 8601-compliant duration string such as `PT100M`. | | `rebootSetting` | Flag to state if you should reboot the machine and if the Guest OS update installation needs it for completion. Acceptable values are: `IfRequired, NeverReboot, AlwaysReboot`. | | `windowsParameters` | Parameter options for Guest OS update on a machine running a supported Microsoft Windows Server operating system. |-| `windowsParameters - classificationsToInclude` | List of categories or classifications of OS updates to apply, as supported and provided by Windows Server OS. Acceptable values are: `Critical, Security, UpdateRollUp, FeaturePack, ServicePack, Definition, Tools, Update` | +| `windowsParameters - classificationsToInclude` | List of categories or classifications of OS updates to apply, as supported and provided by Windows Server OS. Acceptable values are: `Critical, Security, UpdateRollup, FeaturePack, ServicePack, Definition, Tools, Update` | | `windowsParameters - kbNumbersToInclude` | List of Windows Update KB IDs that are available to the machine and that you need to install. If you've included any 'classificationsToInclude', the KBs available in the category are installed. 'kbNumbersToInclude' is an option to provide a list of specific KB IDs, over and above the classifications, that you want installed. For example: `1234` | | `windowsParameters - kbNumbersToExclude` | List of Windows Update KB Ids that are available to the machine and that should **not** be installed. If you've included any 'classificationsToInclude', the KBs available in the category will be installed. 'kbNumbersToExclude' is an option to provide a list of specific KB IDs that you want to ensure don't get installed. For example: `5678` | | `maxPatchPublishDate` | This is used to install patches that were published on or before this given max published date.| |
update-manager | Manage Vms Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-vms-programmatically.md | description: This article tells how to use Azure Update Manager in Azure using R Previously updated : 05/13/2024 Last updated : 10/17/2024 The following table describes the elements of the request body: | `maximumDuration` | Maximum amount of time that the operation runs. It must be an ISO 8601-compliant duration string such as `PT4H` (4 hours). | | `rebootSetting` | Flag to state if machine should be rebooted and if Guest OS update installation requires it for completion. Acceptable values are: `IfRequired, NeverReboot, AlwaysReboot`. | | `windowsParameters` | Parameter options for Guest OS update on Azure VMs running a supported Microsoft Windows Server operating system. |-| `windowsParameters - classificationsToInclude` | List of categories/classifications to be used for selecting the updates to be installed on the machine. Acceptable values are: `Critical, Security, UpdateRollUp, FeaturePack, ServicePack, Definition, Tools, Updates` | +| `windowsParameters - classificationsToInclude` | List of categories/classifications to be used for selecting the updates to be installed on the machine. Acceptable values are: `Critical, Security, UpdateRollup, FeaturePack, ServicePack, Definition, Tools, Updates` | | `windowsParameters - kbNumbersToInclude` | List of Windows Update KB Ids that should be installed. All updates belonging to the classifications provided in `classificationsToInclude` list will be installed. `kbNumbersToInclude` is an optional list of specific KBs to be installed in addition to the classifications. For example: `1234` | | `windowsParameters - kbNumbersToExclude` | List of Windows Update KB Ids that should **not** be installed. This parameter overrides `windowsParameters - classificationsToInclude`, meaning a Windows Update KB ID specified here won't be installed even if it belongs to the classification provided under `classificationsToInclude` parameter. | | `maxPatchPublishDate` | This is used to install patches that were published on or before this given max published date.| |
virtual-desktop | Whats New Sxs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-sxs.md | Title: What's new in the Azure Virtual Desktop SxS Network Stack? - Azure description: New features and product updates for the Azure Virtual Desktop SxS Network Stack. Previously updated : 08/13/2024 Last updated : 10/16/2024 Here's information about the SxS Network Stack. | Release | Latest version | |--|--|-| Production | 1.0.2404.16760 | -| Validation | 1.0.2404.16760 | +| Production | 1.0.2407.05700 | +| Validation | 1.0.2407.05700 | +## Version 1.0.2407.05700 ++*Published: September 2024* ++In this release, we've made the following changes: ++- [HEVC preview](whats-new.md#enabling-hevc-gpu-acceleration-for-azure-virtual-desktop-is-now-in-preview) support. +- Addressed an issue in the RemoteApp scenario that could cause the text highlight color in the File Explorer's address bar to appear incorrectly. + ## Version 1.0.2404.16760 *Published: July 2024* |
virtual-network-manager | Concept Connectivity Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-connectivity-configuration.md | A hub-and-spoke is a network topology in which you have a virtual network select In this configuration, you have settings you can enable such as *direct connectivity* between spoke virtual networks. By default, this connectivity is only for virtual networks in the same region. To allow connectivity across different Azure regions, you need to enable *Global mesh*. You can also enable *Gateway* transit to allow spoke virtual networks to use the VPN or ExpressRoute gateway deployed in the hub. +If checked, any peerings that do not match the contents of this configuration can be removed, even if these peerings were manually created after this configuration is deployed. If you remove a VNet from a network group used in the configuration, your network manager removes only the peerings it created. + ### Direct connectivity Enabling *Direct connectivity* creates an overlay of a [*connected group*](#connected-group) on top of your hub and spoke topology, which contains spoke virtual networks of a given group. Direct connectivity allows a spoke VNet to talk directly to other VNets in its spoke group, but not to VNets in other spokes. |
virtual-network | Virtual Network Bandwidth Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-bandwidth-testing.md | -> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). - This article describes how to use the free NTTTCP tool from Microsoft to test network bandwidth and throughput performance on Azure Windows or Linux virtual machines (VMs). A tool like NTTTCP targets the network for testing and minimizes the use of other resources that could affect performance. ## Prerequisites To measure throughput from Linux machines, use [NTTTCP-for-Linux](https://github 1. Prepare both the sender and receiver VMs for NTTTCP-for-Linux by running the following commands, depending on your distro: - - For **CentOS**, install `gcc` , `make` and `git`. -- ``` bash - sudo yum install gcc -y - sudo yum install git -y - sudo yum install make -y - ``` - - For **Ubuntu**, install `build-essential` and `git`. ```bash |
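As a companion to the Ubuntu preparation step quoted above, here's a minimal sketch of building NTTTCP-for-Linux from source after installing `build-essential` and `git`; the repository URL and build commands follow the tool's public README, so verify them against the current version before relying on them.

```bash
# Minimal sketch: prepare an Ubuntu VM and build NTTTCP-for-Linux from source.
sudo apt-get update
sudo apt-get install -y build-essential git

# Clone and build the tool (steps per the project's README).
git clone https://github.com/microsoft/ntttcp-for-linux.git
cd ntttcp-for-linux/src
make
sudo make install   # installs the ntttcp binary

# Run ntttcp in receiver mode on one VM and sender mode on the other,
# using the options listed in the tool's help output, to measure throughput between them.
```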
virtual-wan | How To Network Virtual Appliance Inbound | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-network-virtual-appliance-inbound.md | To enable the DNAT use case, associate one or more Azure Public IP address resour ### Example -In the following example, users access an application hosted in an Azure Virtual Network (Application IP 10.60.0.4) connect to a DNAT Public IP (4.4.4.4) assigned to the NVA on Port 443. +In the following example, users accessing an application hosted in an Azure Virtual Network (application IP 10.60.0.4) connect to a DNAT public IP (198.51.100.4) assigned to the NVA on port 443. The following configurations are performed: -* **Internet inbound** IP addresses assigned to the NVA are 4.4.4.4 and 5.5.5.5. -* **NVA DNAT rule** is programmed to translate traffic with destination 4.4.4.4:443 to 10.60.0.4:443. +* **Internet inbound** IP addresses assigned to the NVA are 198.51.100.4 and 192.0.2.4. +* **NVA DNAT rule** is programmed to translate traffic with destination 198.51.100.4:443 to 10.60.0.4:443. * The NVA orchestrator interfaces with Azure APIs to create **inbound security rules**, and the Virtual WAN control plane programs the infrastructure appropriately to support the traffic flow. #### Inbound traffic flow |
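For illustration only: the DNAT rule in this example is programmed in the NVA vendor's own orchestration software, not through an Azure command. The generic Linux `iptables` rule below is just a sketch of the same translation (198.51.100.4:443 to 10.60.0.4:443) to show conceptually what the NVA's DNAT rule accomplishes.

```bash
# Generic illustration of the DNAT translation described above.
# A real Virtual WAN NVA programs this in its vendor software; this is not an Azure command.
sudo iptables -t nat -A PREROUTING \
  -d 198.51.100.4 -p tcp --dport 443 \
  -j DNAT --to-destination 10.60.0.4:443
```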
virtual-wan | Scenario Isolate Virtual Networks Branches | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-isolate-virtual-networks-branches.md | |
vpn-gateway | Vpn Gateway Howto Vnet Vnet Resource Manager Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md | You can create either a bidirectional or a single-direction connection. For this * **First virtual network gateway**: Select **VNet1GW** from the dropdown. * **Second virtual network gateway**: Select **VNet4GW** from the dropdown.- * **Shared key (PSK)**: In this field, enter a shared key for your connection. You can generate or create this key yourself. In a site-to-site connection, the key you use is the same for your on-premises device and your virtual network gateway connection. The concept is similar here, except that rather than connecting to a VPN device, you're connecting to another virtual network gateway. + * **Shared key (PSK)**: In this field, enter a shared key for your connection. You can generate or create this key yourself. In a site-to-site connection, the key you use is the same for your on-premises device and your virtual network gateway connection. The concept is similar here, except that rather than connecting to a VPN device, you're connecting to another virtual network gateway. The important thing when specifying a shared key is that it must be exactly the same for both sides of the connection. * **IKE Protocol**: IKEv2 1. For this exercise, you can leave the rest of the settings as their default values. 1. Select **Review + create**, then **Create** to validate and create your connections. You can create either a bidirectional or a single-direction connection. For this ## Add more connections -If you want to add more connections, navigate to the virtual network gateway from which you want to create the connection, then select **Connections**. You can create another VNet-to-VNet connection, or create an IPsec Site-to-Site connection to an on-premises location. Be sure to adjust the **Connection type** to match the type of connection you want to create. Before you create more connections, verify that the address space for your virtual network doesn't overlap with any of the address spaces you want to connect to. For steps to create a Site-to-Site connection, see [Create a Site-to-Site connection](./tutorial-site-to-site-portal.md). +If you want to add more connections, navigate to the virtual network gateway from which you want to create the connection, then select **Connections**. You can create another VNet-to-VNet connection, or create an IPsec Site-to-Site connection to an on-premises location. Be sure to adjust the **Connection type** to match the type of connection you want to create. When you configure a connection that uses a shared key, make sure that the shared key is exactly the same for both sides of the connection. Before you create more connections, verify that the address space for your virtual network doesn't overlap with any of the address spaces you want to connect to. For steps to create a Site-to-Site connection, see [Create a Site-to-Site connection](./tutorial-site-to-site-portal.md). ## VNet-to-VNet FAQ |
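To underline the shared-key point in this entry, here's a hedged Azure CLI sketch that creates both directions of a VNet-to-VNet connection with the identical key. The gateway names (VNet1GW, VNet4GW) come from the article excerpt, while the resource group and connection names are illustrative assumptions; check the parameters against the current `az network vpn-connection create` reference.

```bash
# Minimal sketch (assumed resource group and connection names): create both directions
# of a VNet-to-VNet connection, passing the SAME shared key to each side.
SHARED_KEY="<your-shared-key>"

az network vpn-connection create \
  --name VNet1toVNet4 \
  --resource-group TestRG1 \
  --vnet-gateway1 VNet1GW \
  --vnet-gateway2 VNet4GW \
  --shared-key "$SHARED_KEY"

# If the second gateway lives in a different resource group, pass its full
# resource ID to --vnet-gateway2 instead of just its name.
az network vpn-connection create \
  --name VNet4toVNet1 \
  --resource-group TestRG1 \
  --vnet-gateway1 VNet4GW \
  --vnet-gateway2 VNet1GW \
  --shared-key "$SHARED_KEY"
```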