Updates from: 10/30/2023 02:11:12
Service | Microsoft Docs article | Related commit history on GitHub | Change details
ai-services Quickstart Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-image.md
-+ Last updated 05/08/2023
ai-services Quickstart Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-text.md
-+ Last updated 07/18/2023
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
description: Use this article to import and use your data in Azure OpenAI.
-+
aks Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/devops-pipeline.md
Last updated 10/11/2023-+ zone_pivot_groups: pipelines-version
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
description: Learn how to connect to Azure Kubernetes Service (AKS) cluster node
Last updated 10/04/2023 -+ #Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
aks Tutorial Kubernetes Paas Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-paas-services.md
Title: Kubernetes on Azure tutorial - Use PaaS services with an Azure Kubernetes
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to use the Azure Service Bus service with your AKS cluster. Last updated 10/23/2023-+ #Customer intent: As a developer, I want to learn how to use PaaS services with an Azure Kubernetes Service (AKS) cluster so that I can deploy and manage my applications.
In the next tutorial, you learn how to scale an application in AKS.
[get-az-service-bus-namespace]: /powershell/module/az.servicebus/get-azservicebusnamespace [get-az-service-bus-key]: /powershell/module/az.servicebus/get-azservicebuskey [import-azakscredential]: /powershell/module/az.aks/import-azakscredential
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
aks Upgrade Aks Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-aks-cluster.md
Title: Upgrade an Azure Kubernetes Service (AKS) cluster description: Learn how to upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates. -+ Last updated 10/19/2023
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
Title: Upgrade options for Azure Kubernetes Service (AKS) clusters description: Learn the different ways to upgrade an Azure Kubernetes Service (AKS) cluster. -+ Last updated 10/19/2023- # Upgrade options for Azure Kubernetes Service (AKS) clusters
attestation Logs Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/logs-data-reference.md
Last updated 10/16/2023 -+ # Data reference of Azure Attestation logs
For a reference of all Azure Monitor Logs / Log Analytics tables, including info
## Diagnostics tables Azure Attestation uses the [Azure Activity](/azure/azure-monitor/reference/tables/azureactivity) and [Azure Attestation Diagnostics](/azure/azure-monitor/reference/tables/azureattestationdiagnostics) tables to store resource log information. -
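Where a quick check of those tables is useful, the workspace can be queried directly; a minimal Python sketch, assuming the azure-identity and azure-monitor-query packages and a placeholder Log Analytics workspace ID:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Hypothetical workspace ID; replace with the workspace that receives the attestation resource logs.
workspace_id = "00000000-0000-0000-0000-000000000000"

client = LogsQueryClient(DefaultAzureCredential())

# Pull a few recent rows from the Azure Attestation diagnostics table.
response = client.query_workspace(
    workspace_id,
    "AzureAttestationDiagnostics | take 10",
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```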
attestation Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/monitor-logs.md
Last updated 10/16/2023 -+ # Monitor Azure Attestation
attestation Trust Domain Extensions Eat Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/trust-domain-extensions-eat-profile.md
Last updated 10/18/2023 -+ # Azure Attestation EAT profile for Intel® Trust Domain Extensions (TDX)
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
Or to upgrade a resource bridge on Azure Stack HCI, run: `az arcappliance upgrad
Currently, private cloud providers differ in how they perform Arc resource bridge upgrades. Review the following information to see how to upgrade your Arc resource bridge for a specific provider.
-For Arc-enabled VMware vSphere (preview), manual upgrade is available, and cloud-managed upgrade is supported for appliances on version 1.0.15 and higher. When Arc-enabled VMware vSphere announces General Availability, appliances on version 1.0.15 and higher will receive cloud-managed upgrade as the default experience. Appliances that are below version 1.0.15 must be manually upgraded.
+For Arc-enabled VMware vSphere (preview), manual upgrade is available, and cloud-managed upgrade is supported for appliances on version 1.0.15 and higher. When Arc-enabled VMware vSphere announces General Availability, appliances on version 1.0.15 and higher will receive cloud-managed upgrade as the default experience. Appliances that are below version 1.0.15 must be manually upgraded. A manual upgrade only upgrades the appliance to the next version, not the latest version. If you have multiple versions to upgrade, then another option is to review the steps for [performing a recovery](/azure/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion), then delete the appliance VM and perform the recovery steps. This will deploy a new Arc resource bridge using the latest version and reconnect pre-existing Azure resources.
[Azure Arc VM management (preview) on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade or a support request for cloud-managed upgrade. For subsequent upgrades, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
azure-arc Remove Vcenter From Arc Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware.md
Last updated 03/28/2022 -+ # Customer intent: As an infrastructure admin, I want to cleanly remove my VMware vCenter environment from Azure Arc-enabled VMware vSphere.
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
In contrast, for clustered caches, we recommend using the metrics with the suffi
- The total number of commands processed by the cache server during the specified reporting interval. This value maps to `total_commands_processed` from the Redis INFO command. When Azure Cache for Redis is used purely for pub/sub, there will be no metrics for `Cache Hits`, `Cache Misses`, `Gets`, or `Sets`, but there will be `Total Operations` metrics that reflect the cache usage for pub/sub operations. - Used Memory - The amount of cache memory in MB that is used for key/value pairs in the cache during the specified reporting interval. This value maps to `used_memory` from the Redis INFO command. This value doesn't include metadata or fragmentation.
+ - On the Enterprise and Enterprise Flash tiers, the Used Memory value includes the memory in both the primary and replica nodes. This can make the metric appear twice as large as expected.
- Used Memory Percentage - The percent of total memory that is being used during the specified reporting interval. This value references the `used_memory` value from the Redis INFO command to calculate the percentage. This value doesn't include fragmentation. - Used Memory RSS
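For spot-checking these counters outside of Azure Monitor, the same values can be read straight from the Redis INFO command; a minimal sketch using the redis-py client, with a placeholder host name and access key:

```python
import redis

# Hypothetical cache endpoint and access key; Azure Cache for Redis uses TLS on port 6380.
cache = redis.Redis(
    host="mycache.redis.cache.windows.net",
    port=6380,
    ssl=True,
    password="<primary-access-key>",
)

info = cache.info()
print("total_commands_processed:", info["total_commands_processed"])  # maps to Total Operations
print("used_memory (bytes):", info["used_memory"])                    # maps to Used Memory
```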
azure-functions Durable Functions Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-entities.md
Last updated 10/24/2023 ms.devlang: csharp, java, javascript, python-+ zone_pivot_groups: df-languages #Customer intent: As a developer, I want to learn what durable entities are and how to use them to solve distributed, stateful problems in my applications.
azure-functions Functions Bindings Dapr Input Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-input-secret.md
description: Learn how to access Dapr Secret input binding data during function
Last updated 10/11/2023 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Input State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-input-state.md
description: Learn how to provide Dapr State input binding data during a functio
Last updated 10/11/2023 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Output Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output-invoke.md
description: Learn how to send data to a Dapr Invoke output binding during funct
Last updated 10/11/2023 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Output Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output-publish.md
description: Learn how to provide Dapr Publish output binding data using Azure F
Last updated 10/11/2023 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
The Python v1 model requires no additional changes, aside from setting up the ou
## Next steps [Learn more about Dapr publish and subscribe.](https://docs.dapr.io/developing-applications/building-blocks/pubsub/)-
azure-functions Functions Bindings Dapr Output State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output-state.md
description: Learn how to provide Dapr State output binding data during a functi
Last updated 10/11/2023 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-output.md
description: Learn how to provide Dapr Binding output binding data during a func
Last updated 10/11/2023 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
The Python v1 model requires no additional changes, aside from setting up the ou
## Next steps
-[Learn more about Dapr service invocation.](https://docs.dapr.io/developing-applications/building-blocks/bindings/)
+[Learn more about Dapr service invocation.](https://docs.dapr.io/developing-applications/building-blocks/bindings/)
azure-functions Functions Bindings Dapr Trigger Svc Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-trigger-svc-invoke.md
description: Learn how to run Azure Functions as Dapr service invocation data ch
Last updated 10/11/2023 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Trigger Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-trigger-topic.md
description: Learn how to run Azure Functions as Dapr topic data changes.
Last updated 10/11/2023 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr-trigger.md
description: Learn how to run Azure Functions as Dapr input binding data changes
Last updated 10/11/2023 ms.devlang: csharp, java, javascript, powershell, python-+ zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Bindings Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr.md
Title: Dapr Extension for Azure Functions description: Learn to use the Dapr triggers and bindings in Azure Functions. + Last updated 10/11/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
Learn how to use the Dapr Extension for Azure Functions via the provided samples
## Next steps
-[Learn more about Dapr.](https://docs.dapr.io/)
+[Learn more about Dapr.](https://docs.dapr.io/)
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
1. On the **Actions** tab, select or create the required [action groups](./action-groups.md).
-1. <a name="custom-props"></a>(Optional) In the **Custom properties** section, if you've configured action groups for this alert rule, you can add your own properties to include in the alert notification payload. You can use these properties in the actions called by the action group, such as webhook, Azure function or logic app actions.
+1. <a name="custom-props"></a>(Optional) In the **Advanced options** section of the **Details** tab, the **Custom properties** section appears if you've configured action groups for this alert rule. In this section, you can add your own properties to include in the alert notification payload. You can use these properties in the actions called by the action group, such as webhook, Azure function, or logic app actions.
The custom properties are specified as key:value pairs, using either static text, a dynamic value extracted from the alert payload, or a combination of both.
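As a purely hypothetical illustration of that key:value shape, the custom properties portion of a rule payload might be built like this in Python; the property names are made up, and the `${...}` form used for dynamic extraction from the alert payload is an assumption here:

```python
# Hypothetical custom properties for an alert rule payload.
# Static text and (assumed) ${...} references into the alert payload can be combined.
custom_properties = {
    "Environment": "Production",                                # static text
    "MonitorCondition": "${data.essentials.monitorCondition}",  # assumed dynamic extraction syntax
    "Summary": "Alert ${data.essentials.alertRule} fired",      # combination of both
}

print(custom_properties)
```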
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
Four different configmaps can be configured to provide scrape configuration and
* metric keep-lists - this setting is used to control which metrics are listed to be allowed from each default target and to change the default behavior * scrape intervals for default/pre-defined targets. `30 secs` is the default scrape frequency and it can be changed per default target using this configmap * debug-mode - turning this ON helps to debug missing metric/ingestion issues - see more on [troubleshooting](prometheus-metrics-troubleshoot.md#debug-mode)
-2. [`ama-metrics-prometheus-config`](https://aka.ms/azureprometheus-addon-rs-configmap)
+2. [`ama-metrics-prometheus-config`](https://aka.ms/azureprometheus-addon-rs-configmap) (**Recommended**)
This configmap can be used to provide the Prometheus scrape config for the addon replica. The addon runs a singleton replica, and any cluster-level services can be discovered and scraped by providing scrape jobs in this configmap. You can take the sample configmap from the above GitHub repo, add the scrape jobs you need, and apply/deploy the configmap to the `kube-system` namespace for your cluster.
-3. [`ama-metrics-prometheus-config-node`](https://aka.ms/azureprometheus-addon-ds-configmap)
+3. [`ama-metrics-prometheus-config-node`](https://aka.ms/azureprometheus-addon-ds-configmap) (**Advanced**)
This configmap can be used to provide the Prometheus scrape config for the addon DaemonSet that runs on every **Linux** node in the cluster, and any node-level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use the `$NODE_IP` variable in your scrape config, which is substituted with the corresponding node's IP address in the DaemonSet pod running on each node. This way you get access to scrape anything that runs on that node from the metrics addon DaemonSet. **Please be careful when you use discoveries in scrape config in this node-level configmap, as every node in the cluster will set up and discover the target(s) and will collect redundant metrics.** You can take the sample configmap from the above GitHub repo, add the scrape jobs you need, and apply/deploy the configmap to the `kube-system` namespace for your cluster.
-4. [`ama-metrics-prometheus-config-node-windows`](https://aka.ms/azureprometheus-addon-ds-configmap-windows)
+4. [`ama-metrics-prometheus-config-node-windows`](https://aka.ms/azureprometheus-addon-ds-configmap-windows) (**Advanced**)
This configmap can be used to provide the Prometheus scrape config for the addon DaemonSet that runs on every **Windows** node in the cluster, and node-level targets on each node can be scraped by providing scrape jobs in this configmap. When you use this configmap, you can use the `$NODE_IP` variable in your scrape config, which is substituted with the corresponding node's IP address in the DaemonSet pod running on each node. This way you get access to scrape anything that runs on that node from the metrics addon DaemonSet. **Please be careful when you use discoveries in scrape config in this node-level configmap, as every node in the cluster will set up and discover the target(s) and will collect redundant metrics.** You can take the sample configmap from the above GitHub repo, add the scrape jobs you need, and apply/deploy the configmap to the `kube-system` namespace for your cluster.
You can configure the metrics add-on to scrape targets other than the default on
Follow the instructions to [create, validate, and apply the configmap](prometheus-metrics-scrape-validate.md) for your cluster.
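Applying the scrape-config configmap can also be scripted; a minimal sketch using the official Kubernetes Python client, assuming a local kubeconfig with access to the cluster, a local file holding the scrape config, and a data key name that is an assumption here:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig with access to the cluster
core_v1 = client.CoreV1Api()

# Hypothetical local file containing the Prometheus scrape config to deploy.
with open("prometheus-config") as f:
    scrape_config = f.read()

configmap = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(
        name="ama-metrics-prometheus-config",  # replica-level custom scrape config
        namespace="kube-system",
    ),
    data={"prometheus-config": scrape_config},  # assumed data key expected by the addon
)

core_v1.create_namespaced_config_map(namespace="kube-system", body=configmap)
```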
-### Advanced setup: Configure custom Prometheus scrape jobs for the DaemonSet
-The `ama-metrics` Replica pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` Replica pod to the `ama-metrics` DaemonSet pod.
-
-The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap), is similar to the replica-set configmap, and can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery. Otherwise, each node tries to scrape all targets and makes many calls to the Kubernetes API server.
-
-Example:- The following `node-exporter` config is one of the default targets for the DaemonSet pods. It uses the `$NODE_IP` environment variable, which is already set for every `ama-metrics` add-on container to target a specific port on the node.
-
- ```yaml
- - job_name: nodesample
- scrape_interval: 30s
- scheme: http
- metrics_path: /metrics
- relabel_configs:
- - source_labels: [__metrics_path__]
- regex: (.*)
- target_label: metrics_path
- - source_labels: [__address__]
- replacement: '$NODE_NAME'
- target_label: instance
- static_configs:
- - targets: ['$NODE_IP:9100']
- ```
-
-Custom scrape targets can follow the same format by using `static_configs` with targets and using the `$NODE_IP` environment variable and specifying the port to scrape. Each pod of the DaemonSet takes the config, scrapes the metrics, and sends them for that node.
## Prometheus configuration tips and examples
azure-monitor Prometheus Metrics Scrape Validate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-validate.md
In addition to the default scrape targets that Azure Monitor Prometheus agent scrapes by default, use the following steps to provide more scrape config to the agent using a configmap. The Azure Monitor Prometheus agent doesn't understand or process operator [CRDs](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) for scrape configuration, but instead uses the native Prometheus configuration as defined in [Prometheus configuration](https://aka.ms/azureprometheus-promioconfig-scrape). The three configmaps that can be used for custom target scraping are:
-- ama-metrics-prometheus-config - When a configmap with this name is created, scrape jobs defined in it are run from the Azure monitor metrics replica pod running in the cluster.
-- ama-metrics-prometheus-config-node - When a configmap with this name is created, scrape jobs defined in it are run from each **Linux** DaemonSet pod running in the cluster. For more information, see [Advanced Setup](prometheus-metrics-scrape-configuration.md#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset).
-- ama-metrics-prometheus-config-node-windows - When a configmap with this name is created, scrape jobs defined in it are run from each **windows** DaemonSet. For more information, see [Advanced Setup](prometheus-metrics-scrape-configuration.md#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset).
+- ama-metrics-prometheus-config (**Recommended**) - When a configmap with this name is created, scrape jobs defined in it are run from the Azure monitor metrics replica pod running in the cluster.
+- ama-metrics-prometheus-config-node (**Advanced**) - When a configmap with this name is created, scrape jobs defined in it are run from each **Linux** DaemonSet pod running in the cluster. For more information, see [Advanced Setup](#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset).
+- ama-metrics-prometheus-config-node-windows (**Advanced**) - When a configmap with this name is created, scrape jobs defined in it are run from each **windows** DaemonSet. For more information, see [Advanced Setup](#advanced-setup-configure-custom-prometheus-scrape-jobs-for-the-daemonset).
## Create Prometheus configuration file
A sample of the `ama-metrics-prometheus-config` configmap is [here](https://gith
### Troubleshooting If you successfully created the configmap (ama-metrics-prometheus-config or ama-metrics-prometheus-config-node) in the **kube-system** namespace and still don't see the custom targets being scraped, check for errors in the **replica pod** logs (for the **ama-metrics-prometheus-config** configmap) or the **DaemonSet pod** logs (for the **ama-metrics-prometheus-config-node** configmap) using *kubectl logs*, and make sure there are no errors in the *Start Merging Default and Custom Prometheus Config* section with prefix *prometheus-config-merger*
+> [!NOTE]
+> ### Advanced setup: Configure custom Prometheus scrape jobs for the DaemonSet
+>
+> The `ama-metrics` Replica pod consumes the custom Prometheus config and scrapes the specified targets. For a cluster with a large number of nodes and pods and a large volume of metrics to scrape, some of the applicable custom scrape targets can be off-loaded from the single `ama-metrics` Replica pod to the `ama-metrics` DaemonSet pod.
+>
+> The [ama-metrics-prometheus-config-node configmap](https://aka.ms/azureprometheus-addon-ds-configmap) is similar to the replica-set configmap and can be created to have static scrape configs on each node. The scrape config should only target a single node and shouldn't use service discovery/pod annotations. Otherwise, each node tries to scrape all targets and makes many calls to the Kubernetes API server.
+>
+> Custom scrape targets can follow the same format by using `static_configs` with targets and using the `$NODE_IP` environment variable and specifying the port to scrape. Each pod of the DaemonSet takes the config, scrapes the metrics, and sends them for that node.
+>
+> Example: The following `node-exporter` config is one of the default targets for the DaemonSet pods. It uses the `$NODE_IP` environment variable, which is already set for every `ama-metrics` add-on container to target a specific port on the node.
+ ```yaml
+ - job_name: nodesample
+ scrape_interval: 30s
+ scheme: http
+ metrics_path: /metrics
+ relabel_configs:
+ - source_labels: [__metrics_path__]
+ regex: (.*)
+ target_label: metrics_path
+ - source_labels: [__address__]
+ replacement: '$NODE_NAME'
+ target_label: instance
+ static_configs:
+ - targets: ['$NODE_IP:9100']
+ ```
+ ## Next steps - [Learn more about collecting Prometheus metrics](../essentials/prometheus-metrics-overview.md).
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
-+ Last updated 10/19/2023
azure-monitor Tutorial Logs Ingestion Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-code.md
Title: 'Sample code to send data to Azure Monitor using Logs ingestion API' description: Sample code using REST API and client libraries for Logs ingestion API in Azure Monitor. Previously updated : 09/14/2023 Last updated : 10/27/2023 # Sample code to send data to Azure Monitor using Logs ingestion API
This article provides sample code using the [Logs ingestion API](logs-ingestion-
- Custom table in a Log Analytics workspace - Data collection endpoint (DCE) to receive data - Data collection rule (DCR) to direct the data to the target table-- AD application with access to the DCR
+- Microsoft Entra application with access to the DCR
## Sample code
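For orientation, the upload pattern the article's samples follow looks roughly like this in Python; a minimal sketch, assuming the azure-identity and azure-monitor-ingestion packages and placeholder DCE, DCR, and stream values:

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

# Hypothetical values; replace with your data collection endpoint, DCR immutable ID, and stream name.
endpoint = "https://my-dce.eastus-1.ingest.monitor.azure.com"
rule_id = "dcr-00000000000000000000000000000000"
stream_name = "Custom-MyTable_CL"

# The credential resolves to the Microsoft Entra application (or other identity) granted access to the DCR.
client = LogsIngestionClient(endpoint=endpoint, credential=DefaultAzureCredential())

# Hypothetical record shape; the columns must match the stream declared in the DCR.
logs = [
    {"TimeGenerated": "2023-10-27T00:00:00Z", "RawData": "Sample log entry"},
]

client.upload(rule_id=rule_id, stream_name=stream_name, logs=logs)
```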
azure-monitor Vminsights Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-maps.md
In VM insights, you can view discovered application components on Windows and Li
For information about configuring VM insights, see [Enable VM insights](vminsights-enable-overview.md).
+## Limitations
+
+- If you're duplicating IP ranges either with VMs or Azure Virtual Machine Scale Sets across subnets and virtual networks, VM insights Map might display incorrect information. This issue is known. We're investigating options to improve this experience.
+- The Map feature currently only supports IPv4. We're investigating support for IPv6. We also support IPv4 that's tunnelled inside IPv6.
+- A map for a resource group or other large group might be difficult to view. Although we've made improvements to Map to handle large and complex configurations, we realize a map can have many nodes, connections, and nodes working as a cluster. We're committed to continuing to enhance support to increase scalability.
+- In the Free pricing tier, the VM insights Map feature supports only five machines that are connected to a Log Analytics workspace.
+ ## Prerequisites To enable the Map feature in VM insights, the virtual machine requires one of the following agents:
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
VM insights monitors key operating system performance indicators related to proc
- Support tuning and optimization to achieve efficiency. - Support capacity planning.
+> [!NOTE]
+> The network chart on the Performance tab looks different from the network chart on the Azure VM overview page because the overview page displays charts based on the host's measurement of activity in the guest VM. The network chart on the Azure VM overview only displays network traffic that will be billed. Inter-virtual network traffic isn't included. The data and charts shown for VM insights are based on data from the guest VM. The network chart displays all TCP/IP traffic that's inbound and outbound to that VM, including inter-virtual network traffic.
++ ## Limitations Limitations in performance collection with VM insights:
chaos-studio Chaos Studio Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-configure-customer-managed-keys.md
+
+ Title: Configure customer-managed keys (preview) for experiment encryption
+
+description: Learn how to configure customer-managed keys (preview) for your Azure Chaos Studio experiment resource using Azure Blob Storage
++++ Last updated : 10/06/2023++
+
+# Configure customer-managed keys (preview) for Azure Chaos Studio using Azure Blob Storage
+
+Azure Chaos Studio automatically encrypts all data stored in your experiment resource with keys that Microsoft provides (service-managed keys). As an optional feature, you can add a second layer of security by also providing your own (customer-managed) encryption key(s). Customer-managed keys offer greater flexibility for controlling access and key-rotation policies.
+
+When you use customer-managed encryption keys, you need to specify a user-assigned managed identity (UMI) to retrieve the key. The UMI you create needs to match the UMI that you use for the Chaos Studio experiment.
+
+When configured, Azure Chaos Studio uses Azure Storage, which uses the customer-managed key to encrypt all of your experiment execution and result data within your own Storage account.
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+- An existing user-assigned managed identity. For more information about creating a user-assigned managed identity, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity).
+
+- A public-access enabled Azure storage account
+
+## Limitations
+
+- Azure Chaos Studio experiments can't automatically rotate the customer-managed key to use the latest version of the encryption key. You would do key rotation directly in your chosen Azure Storage account.
+
+- CMK-enabled experiments can **only** be created and used with our **2023-10-27-preview REST API**. There is **no** support for CMK-enabled experiments in our GA-stable REST API until H1 2024.
+
+- Azure Chaos Studio currently **only supports creating Chaos Studio Customer-Managed-Key experiments via the Command Line using our 2023-10-27-preview REST API**. As a result, you **cannot** create a Chaos Studio experiment with CMK enabled via the Azure portal. We plan to add this functionality in H1 of 2024.
+
+- The storage account must have **public access from all networks** enabled for Azure Chaos Studio experiments to be able to use it. If you have a hard requirement from your organization, reach out to your CSA for potential solutions.
+
+## Configure your Azure storage account
+
+When creating and/or updating your storage account to use for a CMK experiment, navigate to the **Encryption** tab, set the encryption type to **Customer-managed keys (CMK)**, and fill out all the required information.
+> [!NOTE]
+> The User-assigned managed identity that you use should match the one you use for the corresponding Chaos Studio CMK-enabled experiment.
+
+## Use customer-managed keys with Azure Chaos Studio
+
+You can only configure customer-managed encryption keys when you create a new Azure Chaos Studio experiment resource. When you specify the encryption key details, you also have to select a user-assigned managed identity to retrieve the key from Azure Key Vault.
+
+> [!NOTE]
+> The UMI should be the SAME user-assigned managed identity you use with your Chaos Studio experiment resource, otherwise the Chaos Studio CMK experiment fails to Create/Update.
+
+
+# [Azure CLI](#tab/azure-cli)
+
+
+The following code sample shows an example PUT command for creating or updating a Chaos Studio experiment resource to enable customer-managed keys:
+
+> [!NOTE]
+>The two parameters specific to CMK experiments are under the "customerDataStorage" block, in which we ask for the resource ID of the Azure Blob Storage account you want to use to store your experiment data and the name of the Blob Storage container to use or create.
+
+```HTTP
+PUT https://management.azure.com/subscriptions/<yourSubscriptionID>/resourceGroups/exampleRG/providers/Microsoft.Chaos/experiments/exampleExperiment?api-version=2023-10-27-preview
+
+{
+ "location": "eastus2euap",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "steps": [
+ {
+ "name": "step1",
+ "branches": [
+ {
+ "name": "branch1",
+ "actions": [
+ {
+ "type": "continuous",
+ "name": "urn:csci:microsoft:virtualMachine:shutdown/1.0",
+ "selectorId": "selector1",
+ "duration": "PT10M",
+ "parameters": [
+ {
+ "key": "abruptShutdown",
+ "value": "false"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ ],
+ "selectors": [
+ {
+ "type": "List",
+ "id": "selector1",
+ "targets": [
+ {
+ "type": "ChaosTarget",
+ "id": "/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588291/resourceGroups/exampleRG/providers/Microsoft.Compute/virtualMachines/exampleVM/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachine"
+ }
+ ]
+ }
+ ],
+ "customerDataStorage": {
+ "storageAccountResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/exampleRG/providers/Microsoft.Storage/storageAccounts/exampleStorage",
+ "blobContainerName": "azurechaosstudioexperiments"
+ }
+ }
+}
+```
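Because there's no dedicated CLI or PowerShell module for this preview API, one way to issue that PUT from the command line is a short Python script; a sketch assuming the azure-identity and requests packages and that the request body shown above is saved as a hypothetical experiment.json file:

```python
import json

import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<yourSubscriptionID>"  # placeholder, as in the sample above
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/resourceGroups/exampleRG/providers/Microsoft.Chaos/experiments/exampleExperiment"
    "?api-version=2023-10-27-preview"
)

# Acquire an ARM token for the signed-in identity.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default")

with open("experiment.json") as f:  # hypothetical file containing the request body shown above
    body = json.load(f)

response = requests.put(
    url,
    json=body,
    headers={"Authorization": f"Bearer {token.token}"},
)
print(response.status_code, response.text)
```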
+## Disable CMK on a Chaos Studio experiment
+
+If you run the same PUT command from the previous example on an existing CMK-enabled experiment resource, but leave the fields in "customerDataStorage" empty, CMK is disabled on an experiment.
+
+## Re-enable CMK on a Chaos Studio experiment
+
+If you run the same PUT command from the previous example on an existing experiment resource using the 2023-10-27-preview REST API and populate the fields in "customerDataStorage", CMK is re-enabled on an experiment.
+
+## Change the user-assigned managed identity for retrieving the encryption key
+
+You can change the managed identity for customer-managed keys for an existing Chaos Studio experiment at any time. The outcome would be identical to updating the User-assigned Managed identity for any Chaos Studio experiment.
+> [!NOTE]
+>If the User-Assigned Managed Identity does NOT have the correct permissions to retrieve the CMK from your key vault and write to the Blob Storage, the PUT command to update the UMI fails.
+
+### List whether an experiment is CMK-enabled or not
+
+Using the "Get Experiment" command from the 2023-10-27-preview REST API, the response shows you whether the "CustomerDataStorage" properties have been populated or not, which is how you can tell whether an experiment has CMK enabled or not.
+
+## Update the customer-managed encryption key being used by your Azure Storage Account
+
+You can change the key that you're using at any time, because Azure Chaos Studio uses your own Azure Storage account for encryption with your CMK.
++
+
+## Frequently asked questions
+
+### Is there an extra charge to enable customer-managed keys?
+
+While there's no charge associated directly from Azure Chaos Studio, the use of Azure Blob Storage and Azure Key Vault could carry some additional cost subject to those services' individual pricing.
+
+### Are customer-managed keys supported for existing Azure Chaos Studio experiments?
+
+This feature is currently only available for Azure Chaos Studio experiments created using our **2023-10-27-preview** REST API.
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via Azure Resource Manager template, REST API, or the Azure CLI, JSON-escaped Chaos Mesh spec that uses the [NetworkChaos kind](https://chaos-mesh.org/docs/simulate-network-chaos-on-kubernetes/#create-experiments-using-the-yaml-files). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. |
+| jsonSpec | A JSON-formatted Chaos Mesh spec that uses the [NetworkChaos kind](https://chaos-mesh.org/docs/simulate-network-chaos-on-kubernetes/#create-experiments-using-the-yaml-files). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Use single-quotes within the JSON or escape the quotes with a backslash character. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. Specifying duration within the `jsonSpec` isn't necessary, but will be used if available. |
### Sample JSON
Currently, only virtual machine scale sets configured with the **Uniform** orche
"parameters": [ { "key": "jsonSpec",
- "value": "{\"action\":\"delay\",\"mode\":\"one\",\"selector\":{\"namespaces\":[\"default\"],\"labelSelectors\":{\"app\":\"web-show\"}},\"delay\":{\"latency\":\"10ms\",\"correlation\":\"100\",\"jitter\":\"0ms\"}}"
+ "value": "{\"action\":\"delay\",\"mode\":\"one\",\"selector\":{\"namespaces\":[\"default\"]},\"delay\":{\"latency\":\"200ms\",\"correlation\":\"100\",\"jitter\":\"0ms\"}}}"
} ], "selectorid": "myResources"
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:podChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via Azure Resource Manager template, REST API, or the Azure CLI, JSON-escaped Chaos Mesh spec that uses the [PodChaos kind](https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/#create-experiments-using-yaml-configuration-files). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. |
+| jsonSpec | A JSON-formatted Chaos Mesh spec that uses the [PodChaos kind](https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/#create-experiments-using-yaml-configuration-files). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Use single-quotes within the JSON or escape the quotes with a backslash character. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. Specifying duration within the `jsonSpec` isn't necessary, but will be used if available. |
### Sample JSON
Currently, only virtual machine scale sets configured with the **Uniform** orche
"parameters": [ { "key": "jsonSpec",
- "value": "{\"action\":\"pod-failure\",\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app.kubernetes.io\/component\":\"tikv\"}}}"
+ "value": "{\"action\":\"pod-failure\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]}}"
} ], "selectorid": "myResources"
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:stressChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via Azure Resource Manager template, REST API, or the Azure CLI, JSON-escaped Chaos Mesh spec that uses the [StressChaos kind](https://chaos-mesh.org/docs/simulate-heavy-stress-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. |
+| jsonSpec | A JSON-formatted Chaos Mesh spec that uses the [StressChaos kind](https://chaos-mesh.org/docs/simulate-heavy-stress-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Use single-quotes within the JSON or escape the quotes with a backslash character. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. Specifying duration within the `jsonSpec` isn't necessary, but will be used if available. |
### Sample JSON
Currently, only virtual machine scale sets configured with the **Uniform** orche
"parameters": [ { "key": "jsonSpec",
- "value": "{\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app\":\"app1\"}},\"stressors\":{\"memory\":{\"workers\":4,\"size\":\"256MB\"}}}"
+ "value": "{\"mode\":\"one\",\"selector\":{\"namespaces\":[\"default\"]},\"stressors\":{\"cpu\":{\"workers\":1,\"load\":50},\"memory\":{\"workers\":4,\"size\":\"256MB\"}}"
} ], "selectorid": "myResources"
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:IOChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via Azure Resource Manager template, REST API, or the Azure CLI, JSON-escaped Chaos Mesh spec that uses the [IOChaos kind](https://chaos-mesh.org/docs/simulate-io-chaos-on-kubernetes/#create-experiments-using-the-yaml-files). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. |
+| jsonSpec | A JSON-formatted Chaos Mesh spec that uses the [IOChaos kind](https://chaos-mesh.org/docs/simulate-io-chaos-on-kubernetes/#create-experiments-using-the-yaml-files). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Use single-quotes within the JSON or escape the quotes with a backslash character. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. Specifying duration within the `jsonSpec` isn't necessary, but will be used if available. |
### Sample JSON
Currently, only virtual machine scale sets configured with the **Uniform** orche
"parameters": [ { "key": "jsonSpec",
- "value": "{\"action\":\"latency\",\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app\":\"etcd\"}},\"volumePath\":\"\/var\/run\/etcd\",\"path\":\"\/var\/run\/etcd\/**\/*\",\"delay\":\"100ms\",\"percent\":50}"
+ "value": "{\"action\":\"latency\",\"mode\":\"one\",\"selector\":{\"app\":\"etcd\"},\"volumePath\":\"\/var\/run\/etcd\",\"path\":\"\/var\/run\/etcd\/**\/*\",\"delay\":\"100ms\",\"percent\":50}"
} ], "selectorid": "myResources"
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:timeChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via Azure Resource Manager template, REST API, or the Azure CLI, JSON-escaped Chaos Mesh spec that uses the [TimeChaos kind](https://chaos-mesh.org/docs/simulate-time-chaos-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. |
+| jsonSpec | A JSON-formatted Chaos Mesh spec that uses the [TimeChaos kind](https://chaos-mesh.org/docs/simulate-time-chaos-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Use single-quotes within the JSON or escape the quotes with a backslash character. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. Specifying duration within the `jsonSpec` isn't necessary, but will be used if available. |
### Sample JSON
Currently, only virtual machine scale sets configured with the **Uniform** orche
"parameters": [ { "key": "jsonSpec",
- "value": "{\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app\":\"app1\"}},\"timeOffset\":\"-10m100ns\"}"
+ "value": "{\"mode\":\"one\",\"selector\":{\"namespaces\":[\"default\"]},\"timeOffset\":\"-10m100ns\"}"
} ], "selectorid": "myResources"
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:kernelChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via Azure Resource Manager template, REST API, or the Azure CLI, JSON-escaped Chaos Mesh spec that uses the [KernelChaos kind](https://chaos-mesh.org/docs/simulate-kernel-chaos-on-kubernetes/#configuration-file).You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. |
+| jsonSpec | A JSON-formatted Chaos Mesh spec that uses the [KernelChaos kind](https://chaos-mesh.org/docs/simulate-kernel-chaos-on-kubernetes/#configuration-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Use single-quotes within the JSON or escape the quotes with a backslash character. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. Specifying duration within the `jsonSpec` isn't necessary, but will be used if available. |
### Sample JSON
Currently, only virtual machine scale sets configured with the **Uniform** orche
"parameters": [ { "key": "jsonSpec",
- "value": "{\"mode\":\"one\",\"selector\":{\"namespaces\":[\"chaos-mount\"]},\"failKernRequest\":{\"callchain\":[{\"funcname\":\"__x64_sys_mount\"}],\"failtype\":0}}"
+ "value": "{\"mode\":\"one\",\"selector\":{\"namespaces\":[\"default\"]},\"failKernRequest\":{\"callchain\":[{\"funcname\":\"__x64_sys_mount\"}],\"failtype\":0}}"
} ], "selectorid": "myResources"
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:httpChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via Azure Resource Manager template, REST API, or the Azure CLI, JSON-escaped Chaos Mesh spec that uses the [HTTPChaos kind](https://chaos-mesh.org/docs/simulate-http-chaos-on-kubernetes/#create-experiments). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. |
+| jsonSpec | A JSON-formatted Chaos Mesh spec that uses the [HTTPChaos kind](https://chaos-mesh.org/docs/simulate-http-chaos-on-kubernetes/#create-experiments). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Use single-quotes within the JSON or escape the quotes with a backslash character. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. Specifying duration within the `jsonSpec` isn't necessary, but will be used if available. |
### Sample JSON
Currently, only virtual machine scale sets configured with the **Uniform** orche
"parameters": [ { "key": "jsonSpec",
- "value": "{\"mode\":\"all\",\"selector\":{\"labelSelectors\":{\"app\":\"nginx\"}},\"target\":\"Request\",\"port\":80,\"method\":\"GET\",\"path\":\"\/api\",\"abort\":true,\"scheduler\":{\"cron\":\"@every 10m\"}}"
+ "value": "{\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"target\":\"Request\",\"port\":80,\"method\":\"GET\",\"path\":\"/api\",\"abort\":true}"
} ], "selectorid": "myResources"
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Prerequisites | The AKS cluster must [have Chaos Mesh deployed](chaos-studio-tutorial-aks-portal.md) and the [DNS service must be installed](https://chaos-mesh.org/docs/simulate-dns-chaos-on-kubernetes/#deploy-chaos-dns-service). | | Urn | urn:csci:microsoft:azureKubernetesServiceChaosMesh:dnsChaos/2.1 | | Parameters (key, value) | |
-| jsonSpec | A JSON-formatted and, if created via an Azure Resource Manager template, REST API, or the Azure CLI, JSON-escaped Chaos Mesh spec that uses the [DNSChaos kind](https://chaos-mesh.org/docs/simulate-dns-chaos-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Then use a JSON string escape tool like [JSON Escape / Unescape](https://www.freeformatter.com/json-escape.html) to escape the JSON spec. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. |
+| jsonSpec | A JSON-formatted Chaos Mesh spec that uses the [DNSChaos kind](https://chaos-mesh.org/docs/simulate-dns-chaos-on-kubernetes/#create-experiments-using-the-yaml-file). You can use a YAML-to-JSON converter like [Convert YAML To JSON](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minify it. Use single-quotes within the JSON or escape the quotes with a backslash character. Only include the YAML under the `jsonSpec` property. Don't include information like metadata and kind. Specifying duration within the `jsonSpec` isn't necessary, but will be used if available. |
### Sample JSON
Currently, only virtual machine scale sets configured with the **Uniform** orche
"parameters": [ { "key": "jsonSpec",
- "value": "{\"action\":\"random\",\"mode\":\"all\",\"patterns\":[\"google.com\",\"chaos-mesh.*\",\"github.?om\"],\"selector\":{\"namespaces\":[\"busybox\"]}}"
+ "value": "{\"action\":\"random\",\"mode\":\"all\",\"patterns\":[\"google.com\",\"chaos-mesh.*\",\"github.?om\"],\"selector\":{\"namespaces\":[\"default\"]}}"
} ], "selectorid": "myResources"
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Fault type | Continuous. | | Parameters (key, value) | | | certificateName | Name of Azure Key Vault certificate on which the fault is executed. |
-| version | Certificate version that should be updated. If not specified, the latest version is updated. |
+| version | Certificate version that should be disabled. If not specified, the latest version is disabled. |
### Sample JSON
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
During the public preview of Azure Chaos Studio, there are a few limitations and
- **Supported browsers** - The Chaos Studio portal experience has only been tested on the following browsers: * **Windows:** Microsoft Edge, Google Chrome, and Firefox * **MacOS:** Safari, Google Chrome, and Firefox
-- **Terraform** - Chaos Studio does not support Terraform at this time.
-- **Powershell modules** - Chaos Studio does not have dedicated Powershell modules at this time. For Powershell, use our REST API
-- **Azure CLI** - Chaos Studio does not have dedicated AzCLI modules at this time. Use our REST API from AzCLI
-- **Azure Policy** - Chaos Studio does not support the applicable built-in policies for our service (audit policy for customer-managed keys and Private Link) at this time.
-- **Private Link** To use Private Link for Agent Service, you need to have your subscription allowlisted and use our preview API version. We do not support Azure Portal UI experiments for Agent-based experiments using Private Link. These restrictions do NOT apply to our Service-direct faults
-- **Customer-Managed Keys** You will need to use our 2023-10-27-preview REST API via a CLI to create CMK-enabled experiments. We do not support Portal UI experiments using CMK at this time.
-- **Lockbox** At present, we do not have integration with Customer Lockbox.
-- **Java SDK** At present, we do not have a dedicated Java SDK. If this is something you would use, reach out to us with your feature request.
-- **Built-in roles** - Chaos Studio does not currently have its own built-in roles. Permissions may be attained to run a chaos experiment by either assigning an [Azure built-in role](chaos-studio-fault-providers.md) or a created custom role to the experiment's identity.
+- **Terraform** - Chaos Studio doesn't support Terraform at this time.
+- **PowerShell modules** - Chaos Studio doesn't have dedicated PowerShell modules at this time. For PowerShell, use our REST API.
+- **Azure CLI** - Chaos Studio doesn't have dedicated AzCLI modules at this time. Use our REST API from AzCLI.
+- **Azure Policy** - Chaos Studio doesn't support the applicable built-in policies for our service (audit policy for customer-managed keys and Private Link) at this time.
+- **Private Link** To use Private Link for Agent Service, you need to have your subscription allowlisted and use our preview API version. We don't support Azure portal UI experiments for Agent-based experiments using Private Link. These restrictions do NOT apply to our Service-direct faults.
+- **Customer-Managed Keys** You need to use our 2023-10-27-preview REST API via a CLI to create CMK-enabled experiments. We don't support portal UI experiments using CMK at this time.
+- **Lockbox** At present, we don't have integration with Customer Lockbox.
+- **Java SDK** At present, we don't have a dedicated Java SDK. If this is something you would use, reach out to us with your feature request.
+- **Built-in roles** - Chaos Studio doesn't currently have its own built-in roles. Permissions can be attained to run a chaos experiment by either assigning an [Azure built-in role](chaos-studio-fault-providers.md) or a created custom role to the experiment's identity.
+- **Agent Service Tags** Currently we don't have service tags available for our Agent-based faults.
## Known issues When you pick target resources for an agent-based fault in the experiment designer, it's possible to select virtual machines or virtual machine scale sets with an operating system not supported by the fault selected.
chaos-studio Chaos Studio Quickstart Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-quickstart-azure-portal.md
Create an Azure resource and ensure that it's one of the supported [fault provid
![Screenshot that shows the Targets view in the Azure portal.](images/quickstart-virtual-machine-enabled.png)
-1. A notification appears and indicates that the resources selected were successfully enabled.
+1. Confirm that the desired resource is listed. Select **Review + Enable**, then **Enable**.
+
+1. A notification appears and indicates that the resource selected was successfully enabled.
![Screenshot that shows a notification that indicates that targets were successfully enabled.](images/tutorial-service-direct-targets-enable-confirm.png) ## Create an experiment 1. Select **Experiments**.
- ![Screenshot that shows selecting Experiments.](images/quickstart-left-experiment.png)
-1. Select **Add an experiment**.
+ ![Screenshot that shows selecting Experiments.](images/quickstart-left-experiment.png)
- ![Screenshot that shows Add an experiment in the Azure portal.](images/add-an-experiment.png)
+1. Select **Create** > **New experiment**.
1. Fill in the **Subscription**, **Resource Group**, and **Location** boxes where you want to deploy the chaos experiment. Give your experiment a name. Select **Next: Experiment designer**. ![Screenshot that shows adding experiment basics.](images/quickstart-service-direct-add-basics.png)
-1. In the Chaos Studio experiment designer, give a friendly name to your **Step** and **Branch**. Select **Add fault**.
+1. In the Chaos Studio experiment designer, give a friendly name to your **Step** and **Branch**. Select **Add action > Add fault**.
![Screenshot that shows the Experiment designer.](images/quickstart-service-direct-add-designer.png)
chaos-studio Chaos Studio Set Up App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-set-up-app-insights.md
Once you have met all the prerequisite steps, copy the **Instrumentation Key** f
[![Screenshot that shows Instrumentation Key in App Insights.](images/step-1a-app-insights.png)](images/step-1a-app-insights.png#lightbox) ## Step 2: Enable the Target Platform for your Agent-Based Fault with Application Insights
-Navigate to the Chaos Studio overview page and click on the **Targets** blade under the "Experiments Management" section. Find the target platform, ensure it's enabled for agent-based faults, and select "Manage Actions" in the right-most column. See screenshot below for an example:
+Navigate to the Chaos Studio overview page and click on the **Targets** blade under the "Experiments Management" section. Find the target platform. If it's already enabled as a target for agent-based experiments, you need to disable it as a target and then select "enable for agent-based targets" to bring up the Chaos Studio agent target configuration pane.
+See screenshot below for an example:
<br/> <br/>
Navigate to the Chaos Studio overview page and click on the **Targets** blade un
[![Screenshot that shows the Chaos Targets Page.](images/step-2a-app-insights.png)](images/step-2a-app-insights.png#lightbox) ## Step 3: Add your Application Insights account and Instrumentation key
-At this point, the resource configuration page seen in the screenshot should come up . After configuring your managed identity, make sure Application Insights is "Enabled" and then select your desired Application Insights Account and enter the Instrumentation Key you copied in Step 1. Once you have filled out the required information, you can click "Review+Create" to deploy your resource.
+At this point, the Agent target configuration page shown in the screenshot appears. After configuring your managed identity, make sure Application Insights is "Enabled", then select your desired Application Insights account and enter the Instrumentation Key you copied in Step 1. Once you fill out the required information, select "Review + Create" to deploy your resource.
<br/>
chaos-studio Chaos Studio Tutorial Agent Based Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-cli.md
Chaos Studio can't inject faults against a VM unless that VM was added to Chaos
Virtual machines have two target types. One target type enables service-direct faults (where no agent is required). The other target type enables agent-based faults (which requires the installation of an agent). The chaos agent is an application installed on your VM as a [VM extension](../virtual-machines/extensions/overview.md). You use it to inject faults in the guest operating system.
-### Install stress-ng (Linux only)
-
-The Chaos Studio agent for Linux requires stress-ng. This open-source application can cause various stress events on a VM. To install stress-ng, [connect to your Linux VM](../virtual-machines/ssh-keys-portal.md). Then run the appropriate installation command for your package manager. For example:
-
-```bash
-sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng
-```
-
-Or:
-
-```bash
-sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && sudo yum -y install stress-ng
-```
- ### Enable the chaos target and capabilities Next, set up a Microsoft-Agent target on each VM or virtual machine scale set that specifies the user-assigned managed identity that the agent uses to connect to Chaos Studio. In this example, we use one managed identity for all VMs. A target must be created via REST API. In this example, we use the `az rest` CLI command to execute the REST API calls.
Next, set up a Microsoft-Agent target on each VM or virtual machine scale set th
```azurecli-interactive az rest --method put --uri https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2021-09-15-preview --body @target.json --query properties.agentProfileId -o tsv ```
+
+ If you receive a PowerShell parsing error, switch to a Bash terminal as recommended for this tutorial or surround the referenced JSON file in single quotes (`--body '@target.json'`).
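   For reference, here's a minimal sketch of both invocation styles, assuming `$RESOURCE_ID` is already set in the shell you're using:

   ```azurecli-interactive
   # Bash: the @file reference can be passed as-is
   az rest --method put --uri "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2021-09-15-preview" --body @target.json --query properties.agentProfileId -o tsv

   # PowerShell: single-quote the @file reference so it isn't parsed as a splatting operator
   az rest --method put --uri "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2021-09-15-preview" --body '@target.json' --query properties.agentProfileId -o tsv
   ```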
1. Copy down the GUID for the **agentProfileId** returned by this command for use in a later step.
-1. Create the capabilities by replacing `$RESOURCE_ID` with the resource ID of the target VM or virtual machine scale set. Replace `$CAPABILITY` with the [name of the fault capability you're enabling](chaos-studio-fault-library.md).
+1. Create the capabilities by replacing `$RESOURCE_ID` with the resource ID of the target VM or virtual machine scale set. Replace `$CAPABILITY` with the [name of the fault capability you're enabling](chaos-studio-fault-library.md) (for example, `CPUPressure-1.0`).
```azurecli-interactive az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent/capabilities/$CAPABILITY?api-version=2021-09-15-preview" --body "{\"properties\":{}}"
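   As a concrete illustration, here's a sketch with hypothetical placeholder values (the resource ID and capability name below are examples, not values taken from your environment):

   ```azurecli-interactive
   # Hypothetical placeholders for illustration only
   RESOURCE_ID="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
   CAPABILITY="CPUPressure-1.0"

   # Enable the CPU pressure capability on the agent-based target
   az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent/capabilities/$CAPABILITY?api-version=2021-09-15-preview" --body "{\"properties\":{}}"
   ```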
chaos-studio Chaos Studio Tutorial Agent Based Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-portal.md
Chaos Studio can't inject faults against a VM unless that VM was added to Chaos
Virtual machines have two target types. One target type enables service-direct faults (where no agent is required). Another target type enables agent-based faults (which requires the installation of an agent). The chaos agent is an application installed on your VM as a [VM extension](../virtual-machines/extensions/overview.md). You use it to inject faults in the guest operating system.
-### Install stress-ng
-
-The Chaos Studio agent for Linux requires stress-ng. This open-source application can cause various stress events on a VM. To install stress-ng, [connect to your Linux VM](../virtual-machines/ssh-keys-portal.md). Then run the appropriate installation command for your package manager. For example:
-
-```bash
-sudo apt-get update && sudo apt-get -y install unzip && sudo apt-get -y install stress-ng
-```
-
-Or:
-
-```bash
-sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && sudo yum -y install stress-ng
-```
- ### Enable the chaos target, capabilities, and agent > [!IMPORTANT]
You've now successfully added your Linux VM to Chaos Studio. In the **Targets**
## Create an experiment Now you can create your experiment. A chaos experiment defines the actions you want to take against target resources. The actions are organized and run in sequential steps. The chaos experiment also defines the actions you want to take against branches, which run in parallel.
-1. Select the **Experiments** tab in Chaos Studio. In this view, you can see and manage all your chaos experiments. Select **Add an experiment**.
+1. Select the **Experiments** tab in Chaos Studio. In this view, you can see and manage all your chaos experiments. Select **Create** > **New experiment**.
![Screenshot that shows the Experiments view in the Azure portal.](images/tutorial-agent-based-add.png) 1. Fill in the **Subscription**, **Resource Group**, and **Location** where you want to deploy the chaos experiment. Give your experiment a name. Select **Next: Experiment designer**. ![Screenshot that shows adding basic experiment details.](images/tutorial-agent-based-add-basics.png)
-1. You're now in the Chaos Studio experiment designer. You can build your experiment by adding steps, branches, and faults. Give a friendly name to your **Step** and **Branch**. Then select **Add fault**.
+1. You're now in the Chaos Studio experiment designer. You can build your experiment by adding steps, branches, and faults. Give a friendly name to your **Step** and **Branch**. Then select **Add action > Add fault**.
![Screenshot that shows the experiment designer.](images/tutorial-agent-based-add-designer.png) 1. Select **CPU Pressure** from the dropdown list. Fill in **Duration** with the number of minutes to apply pressure. Fill in **pressureLevel** with the amount of CPU pressure to apply. Leave **virtualMachineScaleSetInstances** blank. Select **Next: Target resources**.
chaos-studio Chaos Studio Tutorial Aks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-cli.md
Now you can create your experiment. A chaos experiment defines the actions you w
```json {"action":"pod-failure","mode":"all","selector":{"namespaces":["default"]}} ```
- 1. Use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec.
+ 1. Use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec, or change the double-quotes to single-quotes.
```json {\"action\":\"pod-failure\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]}} ```
+ ```json
+ {'action':'pod-failure','mode':'all','selector':{'namespaces':['default']}}
+ ```
+ 1. Create your experiment JSON by starting with the following JSON sample. Modify the JSON to correspond to the experiment you want to run by using the [Create Experiment API](/rest/api/chaosstudio/experiments/create-or-update), the [fault library](chaos-studio-fault-library.md), and the `jsonSpec` created in the previous step. ```json
chaos-studio Chaos Studio Tutorial Aks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-portal.md
Chaos Studio can't inject faults against a resource unless that resource is adde
1. Select the checkbox next to your AKS cluster. Select **Enable targets** and then select **Enable service-direct targets** from the dropdown menu. ![Screenshot that shows enabling targets in the Azure portal.](images/tutorial-aks-targets-enable.png)+
+1. Confirm that the desired resource is listed. Select **Review + Enable**, then **Enable**.
+ 1. A notification appears that indicates that the resources you selected were successfully enabled. ![Screenshot that shows the notification showing that the target was successfully enabled.](images/tutorial-aks-targets-enable-confirm.png)
You've now successfully added your AKS cluster to Chaos Studio. In the **Targets
## Create an experiment Now you can create your experiment. A chaos experiment defines the actions you want to take against target resources. The actions are organized and run in sequential steps. The chaos experiment also defines the actions you want to take against branches, which run in parallel.
-1. Select the **Experiments** tab in Chaos Studio. In this view, you can see and manage all your chaos experiments. Select **Add an experiment**
+1. Select the **Experiments** tab in Chaos Studio. In this view, you can see and manage all your chaos experiments. Select **Create** > **New experiment**.
![Screenshot that shows the Experiments view in the Azure portal.](images/tutorial-aks-add.png) 1. Fill in the **Subscription**, **Resource Group**, and **Location** where you want to deploy the chaos experiment. Give your experiment a name. Select **Next: Experiment designer**. ![Screenshot that shows adding basic experiment details.](images/tutorial-aks-add-basics.png)
-1. You're now in the Chaos Studio experiment designer. The experiment designer allows you to build your experiment by adding steps, branches, and faults. Give a friendly name to your **Step** and **Branch** and select **Add fault**.
+1. You're now in the Chaos Studio experiment designer. The experiment designer allows you to build your experiment by adding steps, branches, and faults. Give a friendly name to your **Step** and **Branch** and select **Add action > Add fault**.
![Screenshot that shows the experiment designer.](images/tutorial-aks-add-designer.png) 1. Select **AKS Chaos Mesh Pod Chaos** from the dropdown list. Fill in **Duration** with the number of minutes you want the failure to last and **jsonSpec** with the following information:
chaos-studio Chaos Studio Tutorial Dynamic Target Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-dynamic-target-portal.md
You've now successfully added your virtual machine scale set to Chaos Studio.
Now you can create your experiment. A chaos experiment defines the actions you want to take against target resources. The actions are organized and run in sequential steps. The chaos experiment also defines the actions you want to take against branches, which run in parallel.
-1. In Chaos Studio, go to **Experiments** > **Create**.
+1. In Chaos Studio, go to **Experiments** > **Create** > **New experiment**.
[![Screenshot that shows the Experiments screen, with the Create button highlighted.](images/tutorial-dynamic-targets-experiment-browse.png)](images/tutorial-dynamic-targets-experiment-browse.png#lightbox) 1. Add a name for your experiment that complies with resource naming guidelines. Select **Next: Experiment designer**.
chaos-studio Chaos Studio Tutorial Service Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-service-direct-portal.md
Chaos Studio can't inject faults against a resource unless that resource is adde
1. Select the checkbox next to your Azure Cosmos DB account. Select **Enable targets** and then select **Enable service-direct targets** from the dropdown menu. ![Screenshot that shows enabling targets in the Azure portal.](images/tutorial-service-direct-targets-enable.png)+
+1. Confirm that the desired resource is listed. Select **Review + Enable**, then **Enable**.
+ 1. A notification appears that indicates that the resources selected were successfully enabled. ![Screenshot that shows a notification showing the target was successfully enabled.](images/tutorial-service-direct-targets-enable-confirm.png)
You've now successfully added your Azure Cosmos DB account to Chaos Studio. In t
## Create an experiment Now you can create your experiment. A chaos experiment defines the actions you want to take against target resources. The actions are organized and run in sequential steps. The chaos experiment also defines the actions you want to take against branches, which run in parallel.
-1. Select the **Experiments** tab in Chaos Studio. In this view, you can see and manage all your chaos experiments. Select **Add an experiment**.
+1. Select the **Experiments** tab in Chaos Studio. In this view, you can see and manage all your chaos experiments. Select **Create** > **New experiment**.
![Screenshot that shows the Experiments view in the Azure portal.](images/tutorial-service-direct-add.png) 1. Fill in the **Subscription**, **Resource Group**, and **Location** where you want to deploy the chaos experiment. Give your experiment a name. Select **Next: Experiment designer**. ![Screenshot that shows adding basic experiment details.](images/tutorial-service-direct-add-basics.png)
-1. You're now in the Chaos Studio experiment designer. The experiment designer allows you to build your experiment by adding steps, branches, and faults. Give a friendly name to your **Step** and **Branch** and select **Add fault**.
+1. You're now in the Chaos Studio experiment designer. The experiment designer allows you to build your experiment by adding steps, branches, and faults. Give a friendly name to your **Step** and **Branch** and select **Add action > Add fault**.
![Screenshot that shows the experiment designer.](images/tutorial-service-direct-add-designer.png) 1. Select **CosmosDB Failover** from the dropdown list. Fill in **Duration** with the number of minutes you want the failure to last and **readRegion** with the read region of your Azure Cosmos DB account. Select **Next: Target resources**.
communication-services Troubleshoot Tls Certificate Sip Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/monitoring-troubleshooting-telephony/troubleshoot-tls-certificate-sip-options.md
One of the intermediary devices (such as a firewall or a router) on the path bet
- [Monitor direct routing](./monitor-direct-routing.md) - [Plan for Azure direct routing](../direct-routing-infrastructure.md) - [Pair the Session Border Controller and configure voice routing](../direct-routing-provisioning.md)-- [Outbound call to a phone number](../../../quickstarts/telephony/pstn-call.md)
+- [Outbound call to a phone number](../../../quickstarts/telephony/pstn-call.md)
communication-services Theming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/theming.md
Title: Theming over the UI Library
+ Title: Theming the UI Library
description: Use Azure Communication Services UI Library for Mobile native to set up Theming Previously updated : 05/24/2022 Last updated : 10/27/2023 zone_pivot_groups: acs-plat-web-ios-android #Customer intent: As a developer, I want to set up the Theming of my application
-# Theming
+# Theming the UI Library
-ACS UI Library uses components and icons from both [Fluent UI](https://developer.microsoft.com/fluentui), the cross-platform design system that's used by Microsoft. As a result, the components are built with usability, accessibility, and localization in mind.
+Azure Communication Services UI Library is a set of components, icons and composites designed to make it easier for you to build high-quality user interfaces for your projects. The UI Library uses components and icons from [Fluent UI](https://developer.microsoft.com/fluentui), the cross-platform design system that's used by Microsoft. As a result, the components are built with usability, accessibility, and localization in mind.
+
+The UI Library is fully documented for developers on a separate site. Our documentation is interactive and designed to make it easy to understand how the APIs work by giving you the ability to try them out directly from a web page. See the [UI Library documentation](https://azure.github.io/communication-ui-library/?path=/docs/overview--page) for more information.
## Prerequisites
ACS UI Library uses components and icons from both [Fluent UI](https://developer
- Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md) ::: zone pivot="platform-web" ::: zone-end ::: zone pivot="platform-android" ::: zone-end ::: zone pivot="platform-ios" ::: zone-end ## Next steps
communication-services Handle Advanced Messaging Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/advanced-messaging/whatsapp/handle-advanced-messaging-events.md
The Event Grid Viewer is a sample site that allows you to view incoming events f
- Select **Create**.
-3. Now if you navigate back to the "Events" option in left panel of your ACS resource, you should be able to see the new event subscription with the Advanced Messaging events.
+3. Now if you navigate back to the "Events" option in the left panel of your Azure Communication Services resource, you should see the new event subscription with the Advanced Messaging events.
:::image type="content" source="./media/handle-advanced-messaging-events/verify-events.png" alt-text="Screenshot that shows two Advanced messaging events subscribed.":::
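If you prefer to verify the subscription from the command line instead of the portal, a sketch like the following lists the event subscriptions on the resource (the resource ID is a placeholder; this assumes you're signed in to the Azure CLI):

```azurecli-interactive
# List Event Grid subscriptions created on the Communication Services resource
az eventgrid event-subscription list \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Communication/communicationServices/<acs-resource-name>" \
  --output table
```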
communication-services Get Started With Voice Video Calling Custom Teams Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md
zone_pivot_groups: acs-plat-web-ios-android-windows-+ # QuickStart: Add 1:1 video calling as a Teams user to your application
container-registry Tutorial Rotate Revoke Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-rotate-revoke-customer-managed-keys.md
To change the access policy of the managed identity that your registry uses, run
az keyvault delete-policy \ --resource-group <resource-group-name> \ --name <key-vault-name> \
- --key_id <key-vault-key-id>
+ --object-id <managed-identity-object-id>
``` To delete the individual versions of a key, run the [az-keyvault-key-delete](/cli/azure/keyvault/key#az-keyvault-key-delete) command. This operation requires the *keys/delete* permission.
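For reference, a sketch of that key deletion call with placeholder names (this assumes the signed-in identity holds the *keys/delete* permission on the vault):

```azurecli-interactive
# Delete a key from the key vault (placeholder names)
az keyvault key delete \
  --vault-name <key-vault-name> \
  --name <key-name>
```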
cost-management-billing Poland Limited Time Sql Services Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/poland-limited-time-sql-services-reservations.md
+
+ Title: Save on select Azure SQL Services in Poland Central for a limited time
+description: Learn how to save up to 66 percent on select Azure SQL Services in Poland Central for a limited time.
+++++ Last updated : 10/27/2023++++
+# Save on select Azure SQL Services in Poland Central for a limited time
+
+Save up to 66 percent compared to pay-as-you-go pricing when you purchase one or three-year reserved capacity for select [Azure SQL Database](/azure/azure-sql/database/reserved-capacity-overview), [SQL Managed Instances](/azure/azure-sql/database/reserved-capacity-overview), and [Azure Database for MySQL](../../mysql/single-server/concept-reserved-pricing.md) in Poland Central for a limited time. This offer is available from November 1, 2023, through March 31, 2024.
+
+## Purchase the limited time offer
+
+To take advantage of this limited-time offer, [purchase](https://aka.ms/reservations) a one or three-year term for select Azure SQL Databases, SQL Managed Instances, and Azure Database for MySQL in the Poland Central region.
+
+### Buy a reservation
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Select **All services** > **Reservations**.
+1. Select **Add** and then select a qualified product listed in the [Terms and conditions of the limited time offer](#terms-and-conditions-of-the-limited-time-offer) section.
+1. Select the [scope](prepare-buy-reservation.md#reservation-scoping-options), and then a billing subscription that you want to use for the reservation. You can change the reservation scope after purchase.
+1. Set the **Region** to **Poland Central**.
+1. Select a reservation term and billing frequency.
+1. Select **Add to cart**.
+1. In the cart, you can change the quantity. After you review your cart and you're ready to purchase, select **Next: Review + buy**.
+1. Select **Buy now**.
+
+You can view the reservation in the [Reservations](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade/Reservations) page in the Azure portal.
+
+## Charge back limited time offer costs
+
+Enterprise Agreement and Microsoft Customer Agreement billing readers can view amortized cost data for reservations. They can use the cost data to charge back the monetary value for a subscription, resource group, resource, or a tag to their partners. In amortized data, the effective price is the prorated hourly reservation cost. The cost is the total cost of reservation usage by the resource on that day. Users with an individual subscription can get the amortized cost data from their usage file. For more information, see [Charge back Azure Reservation costs](charge-back-usage.md).
+
+## Terms and conditions of the limited time offer
+
+These terms and conditions (hereinafter referred to as "terms") govern the limited time offer ("offer") provided by Microsoft to customers purchasing a one or three year reserved capacity for Azure SQL Databases, SQL Managed Instances, and Azure Database for MySQL in Poland Central between November 1, 2023 (12 AM Pacific Standard Time) and March 31, 2024 (11:59 PM Pacific Standard Time), for any of the following:
+
+- Azure Database for MySQL Single Server General Purpose - Compute Gen5
+- Azure Database for MySQL Single Server Memory Optimized - Compute Gen5
+- SQL Database Single/Elastic Pool Business Critical - Compute DC-Series
+- SQL Database Single/Elastic Pool Business Critical - Compute Gen5
+- SQL Database Single/Elastic Pool Business Critical - Compute M Series
+- SQL Database Single/Elastic Pool General Purpose - Compute DC-Series
+- SQL Database Single/Elastic Pool General Purpose - Compute FSv2 Series
+- SQL Database Single/Elastic Pool General Purpose - Compute Gen5
+- SQL Database SingleDB/Elastic Pool Hyperscale - Compute Gen5
+- SQL Managed Instance Business Critical - Compute Gen5
+- SQL Managed Instance General Purpose - Compute Gen5
+
+The 66 percent saving is based on one Azure Database for MySQL Single Server Memory Optimized - Compute Gen5 instance in the Poland Central region running for 36 months, comparing the pay-as-you-go rate with the reduced rate for a three-year reserved capacity. Actual savings might vary based on location, term commitment, instance type, or usage. The savings don't include operating system costs. For more information about pricing, see [Poland Central SQL Services reservation savings](/legal/cost-management-billing/reservations/poland-central-limited-time-sql-services).
+
+**Eligibility** - The Offer is open to individuals who meet the following criteria:
+
+- To buy a reservation, you must have the owner role or reservation purchaser role on an Azure subscription that's of one of the following types:
+ - Enterprise (MS-AZR-0017P or MS-AZR-0148P)
+ - Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P)
+ - Microsoft Customer Agreement
+- Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations) to purchase Azure Reservations. You can't purchase a reservation if you have a custom role that mimics the owner role or reservation purchaser role on an Azure subscription. You must use the built-in owner or built-in reservation purchaser role.
+- For more information about who can purchase a reservation, see [Buy an Azure reservation](prepare-buy-reservation.md).
+
+**Offer details** - For Azure SQL Database or SQL Managed Instance, you make a commitment for SQL Database or SQL Managed Instance use for one or three years to get a significant discount on the compute costs. To purchase reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
+
+You don't need to assign the reservation to a specific database or managed instance. Matching existing deployments that are already running or ones that are newly deployed automatically get the benefit. Hence, purchasing reserved capacity doesn't modify your existing resource infrastructure, and no failover or downtime is triggered on existing resources. By purchasing a reservation, you commit to usage for the compute costs for one or three years. As soon as you buy a reservation, the compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates.
+
+For more information, see [Save compute costs with reserved capacity - Azure SQL Database & SQL Managed Instance](/azure/azure-sql/database/reserved-capacity-overview).
+
+For Azure Database for MySQL, you make an upfront commitment on MySQL server for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for MySQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
+
+You don't need to assign the reservation to specific Azure Database for MySQL servers. Azure Database for MySQL servers that are already running or newly deployed automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for one or three years. As soon as you buy a reservation, the Azure Database for MySQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation doesn't cover software, networking, or storage charges associated with the MySQL Database server. At the end of the reservation term, the billing benefit expires. Azure Database for MySQL usage is then billed at the pay-as-you-go price.
+
+For more information, see [Prepay for compute with reserved capacity - Azure Database for MySQL](/azure/mysql/single-server/concept-reserved-pricing).
+
+- Additional taxes might apply.
+- Payment will be processed using the payment method on file for the selected subscriptions.
+- Estimated savings are calculated based on your current on-demand rate.
+
+**Qualifying purchase** - To be eligible for the 66 percent discount, customers must purchase one or three year reserved capacity for Azure SQL Databases, SQL Managed Instances, and Azure Database for MySQL for one of the following qualified services in Poland Central between November 1, 2023, and March 31, 2024.
+
+- Azure Database for MySQL Single Server General Purpose - Compute Gen5
+- Azure Database for MySQL Single Server Memory Optimized - Compute Gen5
+- SQL Database Single/Elastic Pool Business Critical - Compute DC-Series
+- SQL Database Single/Elastic Pool Business Critical - Compute Gen5
+- SQL Database Single/Elastic Pool Business Critical - Compute M Series
+- SQL Database Single/Elastic Pool General Purpose - Compute DC-Series
+- SQL Database Single/Elastic Pool General Purpose - Compute FSv2 Series
+- SQL Database Single/Elastic Pool General Purpose - Compute Gen5
+- SQL Database SingleDB/Elastic Pool Hyperscale - Compute Gen5
+- SQL Managed Instance Business Critical - Compute Gen5
+- SQL Managed Instance General Purpose - Compute Gen5
+
+**Discount limitations**
+
+- A reservation discount is "use-it-or-lose-it." So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
+
+- When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are lost.
+
+- Stopped resources are billed and continue to use reservation hours. Deallocate or delete resources or scale-in other resources to use your available reservation hours with other workloads.
+
+- For more information about how reservation discounts are applied, see [How a reservation discount is applied](reservation-discount-application.md).
+
+**Exchanges and refunds** - The offer follows standard exchange and refund policies for reservations. For more information about exchanges and refunds, see [Self-service exchanges and refunds for Azure Reservations](exchange-and-refund-azure-reservations.md).
+
+**Renewals**
+
+- The renewal price **will not be** the limited time offer price, but the price available at time of renewal.
+- For more information about renewals, see [Automatically renew Azure reservations](reservation-renew.md).
+
+**Termination or modification** - Microsoft reserves the right to modify, suspend, or terminate the offer at any time without prior notice.
+
+If you have purchased the one or three year reserved capacity for Azure SQL Databases, SQL Managed Instances, and Azure Database for MySQL qualified services in Poland Central between November 1, 2023, and March 31, 2024, you'll continue to get the discount throughout the purchased term length, even if the offer is canceled.
+
+By participating in the offer, customers agree to be bound by these terms and the decisions of Microsoft. Microsoft reserves the right to disqualify any customer who violates these terms or engages in any fraudulent or harmful activities related to the offer.
+
+## Next steps
+
+- [Understand Azure reservation discount](reservation-discount-application.md)
+- [Purchase reserved capacity in the Azure portal](https://portal.azure.com/#view/Microsoft_Azure_Reservations/ReservationsBrowseBlade)
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
The optional Defender CSPM plan, provides advanced posture management capabiliti
Microsoft Defender CSPM protects across all your multicloud workloads, but billing only applies for Servers, Database, and Storage accounts at $5/billable resource/month. The underlying compute services for AKS are regarded as servers for billing purposes. > [!NOTE]
->
+>
> - The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on August 1, 2023. Billing will apply for Servers, Database, and Storage resources. Billable workloads will be VMs, Storage accounts, OSS DBs, SQL PaaS, & SQL servers on machines. >
-> - This price includes free vulnerability assessments for 20 unique images per charged resource, whereby the count will be based on the previous month's consumption. Every subsequent scan will be charged at $0.29 per image digest. The majority of customers are not expected to incur any additional image scan charges. For subscriptions that are both under the Defender CSPM and Defender for Containers plans, free vulnerability assessment will be calculated based on free image scans provided via the Defender for Containers plan, as specified [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+> - This price includes free vulnerability assessments for 20 unique images per charged resource, whereby the count will be based on the previous month's consumption. Every subsequent scan will be charged at $0.29 per image digest. The majority of customers are not expected to incur any additional image scan charges. For subscriptions that are both under the Defender CSPM and Defender for Containers plans, free vulnerability assessment will be calculated based on free image scans provided via the Defender for Containers plan, as specified [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Plan availability
The following table summarizes each plan and their cloud availability.
## Next steps
-Learn about Defender for Cloud's [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
+- Watch video: [Predict future security incidents! Cloud Security Posture Management with Microsoft Defender](https://www.youtube.com/watch?v=jF3NSR_OepI)
+- Learn about Defender for Cloud's [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
In this section of the wizard, you select the Defender for Cloud plans that you
:::image type="content" source="media/quickstart-onboard-aws/aws-configure-access.png" alt-text="Screenshot that shows deployment options and instructions for configuring access.":::
+ > [!NOTE]
+ > If you select **Management account** to create a connector to a management account, the tab to onboard with Terraform isn't visible in the UI. However, you can still onboard using Terraform, similar to what's covered in [Onboarding your AWS/GCP environment to Microsoft Defender for Cloud with Terraform - Microsoft Community Hub](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/onboarding-your-aws-gcp-environment-to-microsoft-defender-for/ba-p/3798664).
+ 1. Follow the on-screen instructions for the selected deployment method to complete the required dependencies on AWS. If you're onboarding a management account, you need to run the CloudFormation template both as Stack and as StackSet. Connectors are created for the member accounts up to 24 hours after the onboarding. 1. Select **Next: Review and generate**.
defender-for-iot Virtual Sensor Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-vmware.md
Title: OT sensor VM (VMware ESXi) - Microsoft Defender for IoT description: Learn about deploying a Microsoft Defender for IoT OT sensor as a virtual appliance using VMware ESXi. Previously updated : 04/24/2022 Last updated : 08/20/2023
This procedure describes how to create a virtual machine by using ESXi.
1. For **CD/DVD Drive 1**, select **Datastore ISO file** and choose the ISO file that you uploaded earlier.
+1. In your VM options, under **Boot Options**, change the **Firmware** setting to **BIOS**. Make sure that you're not booting from EFI.
+ 1. Select **Next** > **Finish**. ## Software installation
defender-for-iot Cli Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md
Supported attributes are defined as follows:
||| |`-h`, `--help` | Shows the help message and exits. | |`[-n <NAME>]`, `[--name <NAME>]` | Define the rule's name.|
-|`[-ts <TIMES>]` `[--time_span <TIMES>]` | Defines the time span for which the rule is active, using the following syntax: `xx:yy-xx:yy, xx:yy-xx:yy` |
+|`[-ts <TIMES>]` `[--time_span <TIMES>]` | Defines the time span for which the rule is active, using the following syntax: `hh:mm-hh:mm, hh:mm-hh:mm` |
|`[-dir <DIRECTION>]`, `--direction <DIRECTION>` | Address direction to exclude. Use one of the following values: `both`, `src`, `dst`| |`[-dev <DEVICES>]`, `[--devices <DEVICES>]` | Device addresses or address types to exclude, using the following syntax: `ip-x.x.x.x`, `mac-xx:xx:xx:xx:xx:xx`, `subnet:x.x.x.x/x`| | `[-a <ALERTS>]`, `--alerts <ALERTS>`|Alert names to exclude, by hex value. For example: `0x00000, 0x000001` |
Supported attributes are defined as follows:
||| |`-h`, `--help` | Shows the help message and exits. | |`[-n <NAME>]`, `[--name <NAME>]` | The name of the rule you want to modify.|
-|`[-ts <TIMES>]` `[--time_span <TIMES>]` | Defines the time span for which the rule is active, using the following syntax: `xx:yy-xx:yy, xx:yy-xx:yy` |
+|`[-ts <TIMES>]` `[--time_span <TIMES>]` | Defines the time span for which the rule is active, using the following syntax: `hh:mm-hh:mm, hh:mm-hh:mm` |
|`[-dir <DIRECTION>]`, `--direction <DIRECTION>` | Address direction to exclude. Use one of the following values: `both`, `src`, `dst`| |`[-dev <DEVICES>]`, `[--devices <DEVICES>]` | Device addresses or address types to exclude, using the following syntax: `ip-x.x.x.x`, `mac-xx:xx:xx:xx:xx:xx`, `subnet:x.x.x.x/x`| | `[-a <ALERTS>]`, `--alerts <ALERTS>`|Alert names to exclude, by hex value. For example: `0x00000, 0x000001` |
Supported attributes are defined as follows:
||| |`-h`, `--help` | Shows the help message and exits. | |`[-n <NAME>]`, `[--name <NAME>]` | The name of the rule you want to delete.|
-|`[-ts <TIMES>]` `[--time_span <TIMES>]` | Defines the time span for which the rule is active, using the following syntax: `xx:yy-xx:yy, xx:yy-xx:yy` |
+|`[-ts <TIMES>]` `[--time_span <TIMES>]` | Defines the time span for which the rule is active, using the following syntax: `hh:mm-hh:mm, hh:mm-hh:mm` |
|`[-dir <DIRECTION>]`, `--direction <DIRECTION>` | Address direction to exclude. Use one of the following values: `both`, `src`, `dst`| |`[-dev <DEVICES>]`, `[--devices <DEVICES>]` | Device addresses or address types to exclude, using the following syntax: `ip-x.x.x.x`, `mac-xx:xx:xx:xx:xx:xx`, `subnet:x.x.x.x/x`| | `[-a <ALERTS>]`, `--alerts <ALERTS>`|Alert names to exclude, by hex value. For example: `0x00000, 0x000001` |
hdinsight-aks Create Cluster Using Arm Template Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/create-cluster-using-arm-template-script.md
Title: Export ARM template in Azure HDInsight on AKS description: How to create an ARM template to cluster using script in Azure HDInsight on AKS + Last updated 08/29/2023
hdinsight-aks Create Cluster Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/create-cluster-using-arm-template.md
Title: Export cluster ARM template description: Learn how to Create cluster ARM template + Last updated 08/29/2023
hdinsight-aks Assign Kafka Topic Event Message To Azure Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2.md
Title: Write event messages into Azure Data Lake Storage Gen2 with DataStream API
-description: Learn how to write event messages into Azure Data Lake Storage Gen2 with DataStream API
+ Title: Write event messages into Azure Data Lake Storage Gen2 with Apache Flink® DataStream API
+description: Learn how to write event messages into Azure Data Lake Storage Gen2 with Apache Flink® DataStream API
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Write event messages into Azure Data Lake Storage Gen2 with DataStream API
+# Write event messages into Azure Data Lake Storage Gen2 with Apache Flink® DataStream API
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
Apache Flink uses file systems to consume and persistently store data, both for
## Prerequisites
-* [HDInsight on AKS Apache Flink 1.16.0](../flink/flink-create-cluster-portal.md)
-* [HDInsight Kafka](../../hdinsight/kafk)
- * You're required to ensure the network settings are taken care as described on [Using HDInsight Kafka](../flink/process-and-consume-data.md); that's to make sure HDInsight on AKS Flink and HDInsight Kafka are in the same Virtual Network
+* [Apache Flink cluster on HDInsight on AKS ](../flink/flink-create-cluster-portal.md)
+* [Apache Kafka cluster on HDInsight](../../hdinsight/kafk)
+ * You're required to ensure that the network settings are taken care of as described in [Using Apache Kafka on HDInsight](../flink/process-and-consume-data.md); that's to make sure the HDInsight on AKS and HDInsight clusters are in the same virtual network.
* Use MSI to access ADLS Gen2 * IntelliJ for development on an Azure VM in HDInsight on AKS Virtual Network
Flink provides an Apache Kafka connector for reading data from and writing data
*abfsGen2.java* > [!Note]
-> Replace [HDInsight Kafka](../../hdinsight/kafk)bootStrapServers with your own brokers for Kafka 2.4 or 3.2
+> Replace [Apache Kafka on HDInsight cluster](../../hdinsight/kafk) bootStrapServers with your own brokers for Kafka 2.4 or 3.2
``` java package contoso.example;
You can specify a rolling policy that rolls the in-progress part file on any of
## Reference - [Apache Kafka Connector](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/kafka) - [Flink DataStream Filesystem](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/filesystem)
+- [Apache Flink Website](https://flink.apache.org/)
+- Apache, Apache Kafka, Kafka, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/azure-databricks.md
Title: Incorporate Flink DataStream into Azure Databricks Delta Lake Table
-description: Learn about incorporate Flink DataStream into Azure Databricks Delta Lake Table in HDInsight on AKS - Apache Flink
+ Title: Incorporate Apache Flink® DataStream into Azure Databricks Delta Lake Table
+description: Learn about incorporate Apache Flink® DataStream into Azure Databricks Delta Lake Table
Previously updated : 10/05/2023 Last updated : 10/27/2023
-# Incorporate Flink DataStream into Azure Databricks Delta Lake Table
+# Incorporate Apache Flink® DataStream into Azure Databricks Delta Lake Tables
-This example shows how to sink stream data landed into Azure ADLS Gen2 from HDInsight Flink cluster on AKS applications into Delta Lake tables using Azure Databricks Auto Loader.
+This example shows how to sink stream data that an Apache Flink cluster on HDInsight on AKS lands in Azure ADLS Gen2 into Delta Lake tables by using Azure Databricks Auto Loader.
## Prerequisites -- [HDInsight Flink 1.16.0 on AKS](./flink-create-cluster-portal.md)-- [HDInsight Kafka 3.2.0](../../hdinsight/kafk)
+- [Apache Flink 1.16.0 on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
+- [Apache Kafka 3.2 on HDInsight](../../hdinsight/kafk)
- [Azure Databricks](/azure/databricks/getting-started/) in the same VNET as HDInsight on AKS - [ADLS Gen2](/azure/databricks/getting-started/connect-to-azure-storage/) and Service Principal
Databricks Auto Loader makes it easy to stream data land into object storage fro
Here are the steps how you can use data from Flink in Azure Databricks delta live tables.
-### Create Kafka table on Flink SQL
+### Create Apache Kafka® table on Apache Flink® SQL
In this step, you can create a Kafka table and an ADLS Gen2 table on Flink SQL. For the purpose of this document, we're using an airplanes_state_real_time table; you can use any topic of your choice.
AS SELECT * FROM cloud_files("dbfs:/mnt/contosoflinkgen2/flink/airplanes_state_r
### Check Delta Live Table on Azure Databricks Notebook :::image type="content" source="media/azure-databricks/delta-live-table-azure.png" alt-text="Screenshot shows check Delta Live Table on Azure Databricks Notebook." lightbox="media/azure-databricks/delta-live-table-azure.png":::+
+### Reference
+
+* Apache, Apache Kafka, Kafka, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Azure Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/azure-iot-hub.md
Title: Process real-time IoT data on Flink with Azure HDInsight on AKS
-description: How to integrate Azure IoT Hub and Apache Flink
+ Title: Process real-time IoT data on Apache Flink® with Azure HDInsight on AKS
+description: How to integrate Azure IoT Hub and Apache Flink®
Last updated 10/03/2023
-# Process real-time IoT data on Flink with Azure HDInsight on AKS
+# Process real-time IoT data on Apache Flink® with Azure HDInsight on AKS
Azure IoT Hub is a managed service hosted in the cloud that acts as a central message hub for communication between an IoT application and its attached devices. You can connect millions of devices and their backend solutions reliably and securely. Almost any device can be connected to an IoT hub. ## Prerequisites 1. [Create an Azure IoTHub](/azure/iot-hub/iot-hub-create-through-portal/)
-2. [Create an HDInsight on AKS Flink cluster](./flink-create-cluster-portal.md)
+2. [Create Flink cluster on HDInsight on AKS](./flink-create-cluster-portal.md)
## Configure Flink cluster
public class StreamingJob {
Submit job using HDInsight on AKS's [Flink job submission API](./flink-job-management.md) :::image type="content" source="./media/azure-iot-hub/create-new-job.png" alt-text="Screenshot shows create a new job." lightbox="./media/azure-iot-hub/create-new-job.png":::+
+### Reference
+
+- [Apache Flink Website](https://flink.apache.org/)
+- Apache, Apache Kafka, Kafka, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Change Data Capture Connectors For Apache Flink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/change-data-capture-connectors-for-apache-flink.md
Title: How to perform Change Data Capture of SQL Server with DataStream API and DataStream Source.
-description: Learn how to perform Change Data Capture of SQL Server with DataStream API and DataStream Source.
+ Title: How to perform Change Data Capture of SQL Server with Apache Flink® DataStream API and DataStream Source.
+description: Learn how to perform Change Data Capture of SQL Server with Apache Flink® DataStream API and DataStream Source.
Last updated 08/29/2023
-# Change Data Capture of SQL Server with DataStream API and DataStream Source
+# Change Data Capture of SQL Server with Apache Flink® DataStream API and DataStream Source on HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
In this article, learn how to perform Change Data Capture of SQL Server using Da
## Prerequisites
-* [HDInsight on AKS Apache Flink 1.16.0](../flink/flink-create-cluster-portal.md)
-* [HDInsight Kafka](../../hdinsight/kafk)
- * You're required to ensure the network settings are taken care as described on [Using HDInsight Kafka](../flink/process-and-consume-data.md); that's to make sure HDInsight on AKS Flink and HDInsight Kafka are in the same VNet
+* [Apache Flink cluster on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
+* [Apache Kafka cluster on HDInsight](../../hdinsight/kafk)
+ * You're required to ensure that the network settings are taken care of as described in [Using HDInsight Kafka](../flink/process-and-consume-data.md); that's to make sure the HDInsight on AKS and HDInsight clusters are in the same VNet
* Azure SQLServer
-* HDInsight Kafka cluster and HDInsight on AKS Flink clusters are located in the same VNet
* Install [IntelliJ IDEA](https://www.jetbrains.com/idea/download/#section=windows) for development on an Azure VM, which is located in the HDInsight VNet ### SQLServer CDC Connector
public class mssqlSinkToKafka {
* [SQLServer CDC Connector](https://github.com/ververic) is licensed under [Apache 2.0 License](https://github.com/ververica/flink-cdc-connectors/blob/master/LICENSE) * [Apache Kafka in Azure HDInsight](../../hdinsight/kafk)
-* [Flink Kafka Connector](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/kafka/#behind-the-scene)
+* [Kafka Connector](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/kafka/#behind-the-scene)
+* Apache, Apache Kafka, Kafka, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Cosmos Db For Apache Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/cosmos-db-for-apache-cassandra.md
Title: Using Azure Cosmos DB (Apache Cassandra) with HDInsight on AKS - Flink
-description: Learn how to Sink HDInsight Kafka message into Azure Cosmos DB for Apache Cassandra, with Apache Flink running on HDInsight on AKS.
+ Title: Using Azure Cosmos DB for Apache Cassandra® with HDInsight on AKS for Apache Flink®
+description: Learn how to Sink Apache Kafka® message into Azure Cosmos DB for Apache Cassandra®, with Apache Flink® running on HDInsight on AKS.
Last updated 08/29/2023
-# Sink Kafka messages into Azure Cosmos DB for Apache Cassandra, with HDInsight on AKS - Flink
+# Sink Apache Kafka® messages into Azure Cosmos DB for Apache Cassandra, with Apache Flink® on HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This example uses [HDInsight on AKS Flink 1.16.0](../flink/flink-overview.md) to sink [HDInsight Kafka 3.2.0](/azure/hdinsight/kafka/apache-kafka-introduction) messages into [Azure Cosmos DB for Apache Cassandra](/azure/cosmos-db/cassandra/introduction)
+This example uses [Apache Flink](../flink/flink-overview.md) to sink [HDInsight for Apache Kafka](/azure/hdinsight/kafka/apache-kafka-introduction) messages into [Azure Cosmos DB for Apache Cassandra](/azure/cosmos-db/cassandra/introduction)
+
+This example is useful when engineers prefer real-time aggregated data for analysis. With access to historical aggregated data, you can build machine learning (ML) models to generate insights or actions. You can also ingest IoT data into Apache Flink to aggregate data in real time and store it in Apache Cassandra.
## Prerequisites
-* [HDInsight on AKS Flink 1.16.0](../flink/flink-create-cluster-portal.md)
-* [HDInsight 5.1 Kafka 3.2](../../hdinsight/kafk)
+* [Apache Flink 1.16.0 on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
+* [Apache Kafka 3.2 on HDInsight](../../hdinsight/kafk)
* [Azure Cosmos DB for Apache Cassandra](../../cosmos-db/cassandra/index.yml)
-* Prepare an Ubuntu VM as maven project development env in the same VNet as HDInsight on AKS.
+* An Ubuntu VM as the Maven project development environment, in the same VNet as the HDInsight on AKS cluster.
## Azure Cosmos DB for Apache Cassandra
public class CassandraSink implements SinkFunction<Tuple3<Integer, String, Strin
**main class: CassandraDemo.java** > [!Note]
-> * Replace Kafka Broker IPs with your cluster broker IPs
+> * Replace Kafka Broker IPs with your Kafka cluster broker IPs
> * Prepare topic > * user `/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 3 --topic user --bootstrap-server wn0-flinkd:9092`
Run UserProfile class in /azure-cosmos-db-cassandra-java-getting-started-main/sr
bin/flink run -c com.azure.cosmosdb.cassandra.examples.UserProfile -j cosmosdb-cassandra-examples.jar ```
-## Sink Kafka Topics into Cosmos DB (Apache Cassandra)
+## Sink Kafka Topics into Cosmos DB for Apache Cassandra
Run CassandraDemo class to sink Kafka topic into Cosmos DB for Apache Cassandra
bin/flink run -c com.azure.cosmosdb.cassandra.examples.CassandraDemo -j cosmosdb
## Validate Apache Flink Job Submission
-Check job on HDInsight on AKS Flink UI
+Check job on Flink Web UI on HDInsight on AKS Cluster
:::image type="content" source="./media/cosmos-db-for-apache-cassandra/check-output-on-flink-ui.png" alt-text="Screenshot showing how to check the job on HDInsight on AKS Flink UI." lightbox="./media/cosmos-db-for-apache-cassandra/check-output-on-flink-ui.png":::
sshuser@hn0-flinkd:~$ python user.py | /usr/hdp/current/kafka-broker/bin/kafka-c
* [Azure Cosmos DB for Apache Cassandra](../../cosmos-db/cassandr). * [Create a API for Cassandra account in Azure Cosmos DB](../../cosmos-db/cassandr) * [Azure Samples ](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-java-getting-started)
+* Apache, Apache Kafka, Kafka, Apache Flink, Flink, Apache Cassandra, Cassandra and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Create Kafka Table Flink Kafka Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/create-kafka-table-flink-kafka-sql-connector.md
Title: How to create Kafka table on Apache FlinkSQL - Azure portal
-description: Learn how to create Kafka table on Apache FlinkSQL
+ Title: How to create Apache Kafka table on an Apache Flink® on HDInsight on AKS
+description: Learn how to create Apache Kafka table on Apache Flink®
Previously updated : 10/06/2023 Last updated : 10/27/2023
-# Create Kafka table on Apache FlinkSQL
+# Create Apache Kafka® table on Apache Flink® on HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
Using this example, learn how to create a Kafka table on Apache Flink SQL.
## Prerequisites
-* [HDInsight Kafka](../../hdinsight/kafk)
-* [HDInsight on AKS Apache Flink 1.16.0](../flink/flink-create-cluster-portal.md)
+* [Apache Kafka cluster on HDInsight](../../hdinsight/kafk)
+* [Apache Flink cluster on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
## Kafka SQL connector on Apache Flink The Kafka connector allows for reading data from and writing data into Kafka topics. For more information, refer [Apache Kafka SQL Connector](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/kafka)
-## Create a Kafka table on Apache Flink SQL
+## Create a Kafka table on Flink SQL
### Prepare topic and data on HDInsight Kafka
Detailed instructions are provided on how to use Secure Shell for [Flink SQL cli
### Download Kafka SQL Connector & Dependencies into SSH
-We're using the **Kafka 3.2.0** dependencies in the below step, You're required to update the command based on your Kafka version on HDInsight.
+We're using the **Kafka 3.2.0** dependencies in the following step. Update the command based on the Kafka version of your HDInsight cluster.
``` wget https://repo1.maven.org/maven2/org/apache/kafka/kafka-clients/3.2.0/kafka-clients-3.2.0.jar wget https://repo1.maven.org/maven2/org/apache/flink/flink-connector-kafka/1.16.0/flink-connector-kafka-1.16.0.jar
Here are the streaming jobs on Flink Web UI
## Reference * [Apache Kafka SQL Connector](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/kafka)
+* Apache, Apache Kafka, Kafka, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Datastream Api Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/datastream-api-mongodb.md
Title: DataStream API for MongoDB as a source and sink on Apache Flink
-description: Learn how to use DataStream API for MongoDB as a source and sink on Apache Flink
+ Title: Use DataStream API for MongoDB as a source and sink with Apache Flink®
+description: Learn how to use Apache Flink® DataStream API on HDInsight on AKS for MongoDB as a source and sink
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# DataStream API for MongoDB as a source and sink on Apache Flink
+# Use Apache Flink® DataStream API on HDInsight on AKS for MongoDB as a source and sink
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
In this example, you learn how to use MongoDB to source and sink with DataStream
## Prerequisites
-* [HDInsight on AKS Flink 1.16.0](../flink/flink-create-cluster-portal.md)
+* [Flink cluster 1.16.0 on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
* For this demonstration, use a Windows VM as the Maven project development environment in the same VNet as HDInsight on AKS.
-* We use the [Apache Flink - MongoDB Connector](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/mongodb/)
+* We use the [MongoDB Connector](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/mongodb/)
* For this demonstration, use an Ubuntu VM in the same VNet as HDInsight on AKS, and install MongoDB on this VM. ## Installation of MongoDB on Ubuntu VM
test> db.click_events.find()
**Use Mongo DB's admin.click_events collection as a source, and sink to ADLS Gen2** :::image type="content" source="./media/datastream-api-mongodb/step-5-mongodb-collection-adls-gen2.png" alt-text="Screenshot displays How to create a node and connect to web SSH." border="true" lightbox="./media/datastream-api-mongodb/step-5-mongodb-collection-adls-gen2.png":::+
+### Reference
+
+- Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Fabric Lakehouse Flink Datastream Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/fabric-lakehouse-flink-datastream-api.md
Title: Microsoft Fabric with Apache Flink in HDInsight on AKS
-description: An introduction to lakehouse on Microsoft Fabric with Apache Flink over HDInsight on AKS
+ Title: Microsoft Fabric with Apache Flink® in HDInsight on AKS
+description: An introduction to lakehouse on Microsoft Fabric with Apache Flink® on HDInsight on AKS
Last updated 08/29/2023
-# Connect to OneLake in Microsoft Fabric with HDInsight on AKS cluster for Apache Flink
+# Connect to OneLake in Microsoft Fabric with HDInsight on AKS cluster for Apache Flink®
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This example demonstrates on how to use HDInsight on AKS Apache Flink with [Microsoft Fabric](/fabric/get-started/microsoft-fabric-overview).
+This example demonstrates how to use an HDInsight on AKS cluster for Apache Flink® with [Microsoft Fabric](/fabric/get-started/microsoft-fabric-overview).
[Microsoft Fabric](/fabric/get-started/microsoft-fabric-overview) is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place. + With Fabric, you don't need to piece together different services from multiple vendors. Instead, you can enjoy a highly integrated, end-to-end, and easy-to-use product that is designed to simplify your analytics needs. In this example, you learn how to connect to OneLake in Microsoft Fabric with HDInsight on AKS cluster for Apache Flink. ## Prerequisites
-* [HDInsight on AKS Flink 1.16.0](../flink/flink-create-cluster-portal.md)
+* [Flink cluster on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
* Create a workspace with at least a Premium Capacity license mode on [Power BI](https://app.powerbi.com/) * [Create a Lake House](/fabric/data-engineering/tutorial-build-lakehouse) on this workspace
public class onelakeDemo {
``` ### Package the jar and submit to Flink
-Here, we use the packaged jar and submit to Flink cluster
+Here, we use the packaged jar and submit it to the Flink cluster in HDInsight on AKS
:::image type="content" source="./media/fabric-lakehouse-flink-datastream-api/jar-submit-flink-step-1.png" alt-text="Screenshot showing How to submit packaged jar and submitting to Flink cluster - step 1." border="true" lightbox="./media/fabric-lakehouse-flink-datastream-api/jar-submit-flink-step-1.png":::
Let's check the output on Microsoft Fabric
### References * [Microsoft Fabric](/fabric/get-started/microsoft-fabric-overview) * [Microsoft Fabric Lakehouse](/fabric/data-engineering/lakehouse-overview)
+* Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Flink Catalog Delta Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-catalog-delta-hive.md
Title: Table API and SQL - Use Delta Catalog type with Hive in HDInsight on AKS - Apache Flink
-description: Learn about how to create Apache Flink-Delta Catalog in HDInsight on AKS - Apache Flink
+ Title: Table API and SQL - Use Delta Catalog type with Hive with Apache Flink® on Azure HDInsight on AKS
+description: Learn about how to create Delta Catalog with Apache Flink® on Azure HDInsight on AKS
Last updated 08/29/2023
-# Create Apache Flink-Delta Catalog
+# Create Delta Catalog with Apache Flink® on Azure HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
We use arrival data of flights from a sample dataset; you can choose a table of you
You can view the Delta Table output on the ABFS container :::image type="content" source="media/flink-catalog-delta-hive/flink-catalog-delta-hive-output.png" alt-text="Screenshot showing output of the delta table in ABFS.":::+
+### Reference
+
+- [Apache Flink Website](https://flink.apache.org/)
+- Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Flink Catalog Iceberg Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-catalog-iceberg-hive.md
Title: Table API and SQL - Use Iceberg Catalog type with Hive in HDInsight on AKS - Apache Flink
-description: Learn how to create Apache Flink-Iceberg Catalog in HDInsight on AKS - Apache Flink
+ Title: Table API and SQL - Use Iceberg Catalog type with Hive in Apache Flink® on HDInsight on AKS
+description: Learn how to create Iceberg Catalog in Apache Flink® on HDInsight on AKS
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Create Apache Flink-Iceberg Catalog
+# Create Iceberg Catalog in Apache Flink® on HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-[Apache Iceberg](https://iceberg.apache.org/) is an open table format for huge analytic datasets. Iceberg adds tables to compute engines like Flink, using a high-performance table format that works just like a SQL table. Apache Iceberg [supports](https://iceberg.apache.org/multi-engine-support/#apache-flink) both Apache Flink's DataStream API and Table API.
+[Apache Iceberg](https://iceberg.apache.org/) is an open table format for huge analytic datasets. Iceberg adds tables to compute engines like Apache Flink, using a high-performance table format that works just like a SQL table. Apache Iceberg [supports](https://iceberg.apache.org/multi-engine-support/#apache-flink) both Apache Flink's DataStream API and Table API.
-In this article, we learn how to use Iceberg Table managed in Hive catalog, with HDInsight on AKS - Flink
+In this article, we learn how to use an Iceberg table managed in the Hive catalog, with Apache Flink on an HDInsight on AKS cluster.
## Prerequisites - You're required to have an operational Flink cluster with secure shell; learn how to [create a cluster](../flink/flink-create-cluster-portal.md)
With the following steps, we illustrate how you can create Flink-Iceberg Catalog
You can view the Iceberg Table output on the ABFS container :::image type="content" source="./media/flink-catalog-iceberg-hive/flink-catalog-iceberg-hive-output.png" alt-text="Screenshot showing output of the Iceberg table in ABFS.":::+
+### Reference
+
+- [Apache Flink Website](https://flink.apache.org/)
+- Apache, Apache Hive, Hive, Apache Iceberg, Iceberg, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Flink Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-cluster-configuration.md
Title: Flink cluster configuration - HDInsight on AKS - Apache Flink
-description: Learn Flink cluster configuration troubleshoot in HDInsight on AKS - Apache Flink
+ Title: Troubleshoot Apache Flink® on HDInsight on AKS
+description: Learn to troubleshoot Apache Flink® cluster configurations on HDInsight on AKS
Last updated 09/26/2023
-# Troubleshoot Flink cluster configuration
+# Troubleshoot Apache Flink® cluster configurations on HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
Some of the errors may occur due to environment conditions and be transient. The
- Detailed error message.
-1. Contact support team with this information.
+1. Contact [support team](../hdinsight-aks-support-help.md) with this information.
| Error code | Description | ||| | System.DependencyFailure | Failure in one of cluster components. |
+### Reference
+- [Apache Flink Website](https://flink.apache.org/)
+- Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Flink Configuration Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-configuration-management.md
Title: Apache Flink Configuration Management in HDInsight on AKS
+ Title: Apache Flink® Configuration Management in HDInsight on AKS
description: Learn about Apache Flink Configuration Management in HDInsight on AKS Last updated 08/29/2023
-# Apache Flink configuration management
+# Apache Flink® Configuration management in HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-HDInsight on AKS provides a set of default configurations of Apache Flink for most properties and a few based on common application profiles. However, in case you're required to tweak Flink configuration properties to improve performance for certain applications with state usage, parallelism, or memory settings, you can change certain properties at cluster level using **Configuration management** section in HDInsight on AKS Flink.
+HDInsight on AKS provides a set of default configurations of Apache Flink for most properties and a few based on common application profiles. However, in case you're required to tweak Flink configuration properties to improve performance for certain applications with state usage, parallelism, or memory settings, you can change certain properties at cluster level using **Configuration management** section in HDInsight on AKS cluster.
1. Go to **Configuration Management** section on your Apache Flink cluster page
The state backend determines how Flink manages and persists the state of your ap
`state.backend: <value>`
-By default HDInsight on AKS Flink uses rocks db
+By default, Apache Flink clusters in HDInsight on AKS use RocksDB
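As a minimal sketch (assuming the Flink 1.16 Java API and a hypothetical ABFS path), the same choices can also be made per job in code, overriding the cluster-level defaults shown above:

```java
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Job-level equivalent of `state.backend: rocksdb`.
        env.setStateBackend(new EmbeddedRocksDBStateBackend());

        // Checkpoint every 60 seconds; the ABFS path below is a placeholder.
        env.enableCheckpointing(60_000);
        env.getCheckpointConfig()
           .setCheckpointStorage("abfs://<container>@<storageaccount>.dfs.core.windows.net/flink/checkpoints");

        // ... build and execute the pipeline here.
    }
}
```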
## Checkpoint Storage Path
Since savepoint is provided in the job, the Flink knows from where to start proc
### Reference
-[Apache Flink Configurations](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/)
+- [Apache Flink Configurations](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/)
+- Apache, Apache Kafka, Kafka, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Flink Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-create-cluster-portal.md
Title: Create an Apache Flink cluster - Azure portal
-description: Creating an Apache Flink cluster in HDInsight on AKS in the Azure portal.
+ Title: Create an Apache Flink® cluster in HDInsight on AKS using Azure portal
+description: Creating an Apache Flink cluster in HDInsight on AKS with Azure portal.
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Create an Apache Flink cluster in the Azure portal
+# Create an Apache Flink® cluster in HDInsight on AKS with Azure portal
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-Complete the following steps to create an Apache Flink cluster by using the Azure portal.
+Complete the following steps to create an Apache Flink cluster on Azure portal.
## Prerequisites
Flink clusters can be created once cluster pool deployment has been completed, l
1. On the **Review + create** page, look for the **Validation succeeded** message at the top of the page and then click **Create**. The **Deployment is in process** page is displayed while the cluster is created. It takes 5-10 minutes to create the cluster. Once the cluster is created, the **"Your deployment is complete"** message is displayed. If you navigate away from the page, you can check your Notifications for the current status.+
+> [!NOTE]
+> Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Flink How To Setup Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-how-to-setup-event-hub.md
Title: How to connect HDInsight on AKS Flink with Azure Event Hubs for Apache Kafka®
-description: Learn how to connect HDInsight on AKS Flink with Azure Event Hubs for Apache Kafka®
+ Title: How to connect Apache Flink® on HDInsight on AKS with Azure Event Hubs for Apache Kafka®
+description: Learn how to connect Apache Flink® on HDInsight on AKS with Azure Event Hubs for Apache Kafka®
Last updated 08/29/2023
-# Connect HDInsight on AKS Flink with Azure Event Hubs for Apache Kafka®
+# Connect Apache Flink® on HDInsight on AKS with Azure Event Hubs for Apache Kafka®
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] A well-known use case for Apache Flink is stream analytics. Many users choose to consume data streams, which are ingested using Apache Kafka. Typical installations of Flink and Kafka start with event streams being pushed to Kafka, which can be consumed by Flink jobs. Azure Event Hubs provides an Apache Kafka endpoint on an event hub, which enables users to connect to the event hub using the Kafka protocol.
-In this article, we explore how to connect [Azure Event Hubs](/azure/event-hubs/event-hubs-about) with [HDInsight on AKS Flink](./flink-overview.md) and cover the following
+In this article, we explore how to connect [Azure Event Hubs](/azure/event-hubs/event-hubs-about) with [Apache Flink on HDInsight on AKS](./flink-overview.md) and cover the following
> [!div class="checklist"] > * Create an Event Hubs namespace
In this article, we explore how to connect [Azure Event Hubs](/azure/event-hubs/
1. Once the code is executed, the events are stored in the topic **"TEST"** :::image type="content" source="./media/flink-eventhub/events-stored-in-topic.png" alt-text="Screenshot showing Event Hubs stored in topic." border="true" lightbox="./media/flink-eventhub/events-stored-in-topic.png":::+
+### Reference
+
+- [Apache Flink Website](https://flink.apache.org/)
+- Apache, Apache Kafka, Kafka, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Flink Job Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-job-management.md
Title: Flink job management in HDInsight on AKS
+ Title: Apache Flink® job management in HDInsight on AKS
description: HDInsight on AKS provides a feature to manage and submit Apache Flink jobs directly through the Azure portal Last updated 09/07/2023
-# Flink job management
+# Apache Flink® job management in HDInsight on AKS clusters
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-HDInsight on AKS provides a feature to manage and submit Apache Flink jobs directly through the Azure portal (user-friendly interface) and ARM Rest APIs.
+HDInsight on AKS provides a feature to manage and submit Apache Flink® jobs directly through the Azure portal (user-friendly interface) and ARM Rest APIs.
-This feature empowers users to efficiently control and monitor their Flink jobs without requiring deep cluster-level knowledge.
+This feature empowers users to efficiently control and monitor their Apache Flink jobs without requiring deep cluster-level knowledge.
## Benefits
Portal --> HDInsight on AKS Cluster Pool --> Flink Cluster --> Settings --> Flin
### <a id="arm-rest-api">Job Management Using Rest API</a>
-HDInsight on AKS - Flink supports user friendly ARM Rest APIs to submit job and manage job. Using this Flink REST API, you can seamlessly integrate Flink job operations into your Azure Pipeline. Whether you're launching new jobs, updating running jobs, or performing various job operations, this streamlined approach eliminates manual steps and empowers you to manage your Flink cluster efficiently.
+HDInsight on AKS supports user-friendly ARM REST APIs to submit and manage jobs. Using this Flink REST API, you can seamlessly integrate Flink job operations into your Azure Pipeline. Whether you're launching new jobs, updating running jobs, or performing various job operations, this streamlined approach eliminates manual steps and empowers you to manage your Flink cluster efficiently.
#### Base URL format for Rest API
To authenticate with the Flink ARM REST API, users need to get the bearer token or acces
| Property | Description | Default Value | Mandatory | | -- | -- | - | |
 - | jobType | Type of Job.It should be "FlinkJob" | | Yes|
 + | jobType | Type of Job. It should be "FlinkJob" | | Yes|
| jobName | Unique name for the job. This is displayed on the portal. The job name should be in lowercase letters.| | Yes | | action | It indicates the operation type on the job. It should always be "NEW" for a new job launch. | | Yes | | jobJarDirectory | Storage path for the job jar directory. Users should create the directory in cluster storage and upload the job jar.| Yes |
To authenticate Flink ARM Rest API users, need to get the bearer token or acces
> [!NOTE] > When any action is in progress, actionResult will indicate it with the value 'IN_PROGRESS'. On successful completion, it will show 'SUCCESS', and in case of failure, it will be 'FAILED'.+
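For illustration only, and not the documented API surface: a hedged Java sketch of how a client could call such a job-management endpoint with a bearer token. The endpoint URL, action segment, API version, and payload envelope are placeholders and assumptions; only the property names (jobType, jobName, action, jobJarDirectory) come from the table above.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SubmitFlinkJobSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: the real resource path, action segment, and api-version
        // are defined by the Job management API, not by this sketch.
        String endpoint = "https://management.azure.com/<flink-cluster-resource-id>/<job-action>?api-version=<api-version>";
        String bearerToken = "<azure-ad-access-token>";

        // Property names taken from the table above; values are examples only.
        String body = "{"
            + "\"jobType\": \"FlinkJob\","
            + "\"jobName\": \"wordcount\","
            + "\"action\": \"NEW\","
            + "\"jobJarDirectory\": \"abfs://<container>@<account>.dfs.core.windows.net/jars\""
            + "}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(endpoint))
            .header("Authorization", "Bearer " + bearerToken)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```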
+### Reference
+
+- [Apache Flink Job Scheduling](https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/internals/job_scheduling/)
+- Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+
hdinsight-aks Flink Job Orchestration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-job-orchestration.md
Title: Azure data factory managed airflow - HDInsight on AKS
-description: Learn how to perform Flink job orchestration using Azure Data Factory managed airflow
+ Title: Azure Data Factory Managed Airflow with Apache Flink® on HDInsight on AKS
+description: Learn how to perform Apache Flink® job orchestration using Azure Data Factory Managed Airflow
Previously updated : 10/11/2023 Last updated : 10/28/2023
-# Flink job orchestration using Azure Data Factory managed airflow
+# Apache Flink® job orchestration using Azure Data Factory Managed Airflow
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This article covers managing HDInsight Flink job using Azure REST API ([refer Job Management REST API section in this tutorial](flink-job-management.md)) and orchestration data pipeline with Azure Data Factory Managed Airflow. [Azure Data Factory Managed Airflow](/azure/data-factory/concept-managed-airflow) service is a simple and efficient way to create and manage [Apache Airflow](https://airflow.apache.org/) environments, enabling you to run data pipelines at scale easily.
+This article covers managing a Flink job using [Azure REST API](flink-job-management.md#arm-rest-api) and orchestrating a data pipeline with Azure Data Factory Managed Airflow. The [Azure Data Factory Managed Airflow](/azure/data-factory/concept-managed-airflow) service is a simple and efficient way to create and manage [Apache Airflow](https://airflow.apache.org/) environments, enabling you to run data pipelines at scale easily.
-Apache Airflow is an open-source platform that programmatically creates, schedules, and monitors complex data workflows. It allows you to define a set of tasks, called operators, that can be combined into directed acyclic graphs (DAGs) to represent data pipelines.
+Apache Airflow is an open-source platform that programmatically creates, schedules, and monitors complex data workflows. It allows you to define a set of tasks, called operators, that can be combined into directed acyclic graphs (DAGs) to represent data pipelines.
The following diagram shows the placement of Airflow, Key Vault, and HDInsight on AKS in Azure.
It is recommended to rotate access keys or secrets periodically.
1. [Setup Flink Cluster](flink-create-cluster-portal.md)
-1. Upload your Flink Job jar to the storage account -  It can be the primary storage account associated with the Flink cluster or any other storage account, where Assign the “Storage Blob Data Owner” role to the user-assigned MSI used for the cluster to this storage account.
+1. Upload your Flink job jar to the storage account. It can be the primary storage account associated with the Flink cluster or any other storage account; on that storage account, assign the "Storage Blob Data Owner" role to the user-assigned MSI used by the cluster.
1. Azure Key Vault - You can follow [this tutorial to create a new Azure Key Vault](/azure/key-vault/general/quick-create-portal/) in case, if you don't have one.
-1. Create [Microsoft Entra service principal](/cli/azure/ad/sp/) to access Key Vault – Grant permission to access Azure Key Vault with the “Key Vault Secrets Officer” role, and make a note of ‘appId’, ‘password’, and ‘tenant’ from the response. We need to use the same for Airflow to use Key Vault storage as backends for storing sensitive information.
+1. Create [Microsoft Entra service principal](/cli/azure/ad/sp/) to access Key Vault – Grant permission to access Azure Key Vault with the “Key Vault Secrets Officer” role, and make a note of ‘appId’, ‘password’, and ‘tenant’ from the response. We need to use the same for Airflow to use Key Vault storage as backends for storing sensitive information.
``` az ad sp create-for-rbac -n <sp name> --role "Key Vault Secrets Officer" --scopes <key vault Resource ID> ```
-1. Create Managed Airflow [enable with Azure Key Vault to store and manage your sensitive information in a secure and centralized manner](/azure/data-factory/enable-azure-key-vault-for-managed-airflow). By doing this, you can use variables and connections, and they automatically be stored in Azure Key Vault. The name of connections and variables need to be prefixed by variables_prefix  defined in AIRFLOW__SECRETS__BACKEND_KWARGS. For example, If variables_prefix has a value as  hdinsight-aks-variables then for a variable key of hello, you would want to store your Variable at hdinsight-aks-variable -hello.
+1. Create Managed Airflow enabled with [Azure Key Vault](/azure/data-factory/enable-azure-key-vault-for-managed-airflow) to store and manage your sensitive information in a secure and centralized manner. By doing this, you can use variables and connections, and they are automatically stored in Azure Key Vault. The names of connections and variables need to be prefixed by the variables_prefix defined in AIRFLOW__SECRETS__BACKEND_KWARGS. For example, if variables_prefix has a value of hdinsight-aks-variables, then for a variable key of hello, you would store your variable at hdinsight-aks-variables-hello.
- Add the following settings for the Airflow configuration overrides in integrated runtime properties:
The wordcount.py is an example of orchestrating a Flink job submission using Apa
The DAG has two tasks: -- get OAuth Token
+- get `OAuth Token`
- Invoke HDInsight Flink Job Submission Azure REST API to submit a new job
The DAG expects to have setup for the Service Principal, as described during the
```
- Refer to the [sample code](https://github.com/Azure-Samples/hdinsight-aks/blob/main/flink/airflow-python-sample-code).
-
+### Reference
+
+- Refer to the [sample code](https://github.com/Azure-Samples/hdinsight-aks/blob/main/flink/airflow-python-sample-code).
+- [Apache Flink Website](https://flink.apache.org/)
+- Apache, Apache Airflow, Airflow, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Flink Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-overview.md
Title: What is Apache Flink in Azure HDInsight on AKS? (Preview)
-description: An introduction to Apache Flink in Azure HDInsight on AKS.
+ Title: What is Apache Flink® in Azure HDInsight on AKS? (Preview)
+description: An introduction to Apache Flink® in Azure HDInsight on AKS.
Previously updated : 08/29/2023 Last updated : 10/28/2023
-# What is Apache Flink in Azure HDInsight on AKS? (Preview)
+# What is Apache Flink® in Azure HDInsight on AKS? (Preview)
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
Apache Flink is an excellent choice to develop and run many different types of a
Read more on common use cases described on [Apache Flink Use cases](https://flink.apache.org/use-cases/#use-cases)
-## Apache Flink Cluster Deployment Types
-Flink can execute applications in Session mode or Application mode. Currently HDInsight on AKS supports only Session clusters. You can run multiple Flink jobs on a Session cluster.
+Apache Flink clusters in HDInsight on AKS are a fully managed service. Benefits of creating a Flink cluster in HDInsight on AKS are listed here.
+
+| Feature | Description |
+| | |
+| Ease of creation | You can create a new Flink cluster in HDInsight in minutes using the Azure portal, Azure PowerShell, or the SDK. See [Get started with Apache Flink cluster in HDInsight on AKS](flink-create-cluster-portal.md). |
+| Ease of use | Flink clusters in HDInsight on AKS include portal-based configuration management and scaling. In addition, with the job management API, you can use the REST API or Azure portal for job management.|
+| REST APIs | Flink clusters in HDInsight on AKS include [Job management API](flink-job-management.md), a REST API-based Flink job submission method to remotely submit and monitor jobs on Azure portal.|
+| Deployment Type | Flink can execute applications in Session mode or Application mode. Currently, HDInsight on AKS supports only Session clusters. You can run multiple Flink jobs on a Session cluster. App mode is on the roadmap for HDInsight on AKS clusters.|
+| Support for Metastore | Flink clusters in HDInsight on AKS can support catalogs with [Hive Metastore](hive-dialect-flink.md) in different open file formats with remote checkpoints to Azure Data Lake Storage Gen2.|
+| Support for Azure Storage | Flink clusters in HDInsight can use Azure Data Lake Storage Gen2 as File sink. For more information on Data Lake Storage Gen2, see [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md).|
+| Integration with Azure services | Flink cluster in HDInsight on AKS comes with an integration to Kafka along with [Azure Event Hubs](flink-how-to-setup-event-hub.md) and [Azure HDInsight](process-and-consume-data.md). You can build streaming applications using the Event Hubs or HDInsight. |
+| Adaptability | HDInsight on AKS allows you to scale the Flink cluster nodes based on schedule with the Autoscale feature. See [Automatically scale Azure HDInsight on AKS clusters](../hdinsight-on-aks-autoscale-clusters.md). |
+| State Backend | HDInsight on AKS uses [RocksDB](http://rocksdb.org) as the default StateBackend. RocksDB is an embeddable persistent key-value store for fast storage.|
+| Checkpoints | Checkpointing is enabled in HDInsight on AKS clusters by default. Default settings on HDInsight on AKS maintain the last five checkpoints in persistent storage. In case your job fails, the job can be restarted from the latest checkpoint.|
+| Incremental Checkpoints | RocksDB supports Incremental Checkpoints. We encourage the use of incremental checkpoints for large state; you need to enable this feature manually. Setting `state.backend.incremental: true` as a default in your `flink-conf.yaml` enables incremental checkpoints, unless the application overrides this setting in the code. This statement is true by default. You can alternatively configure this value directly in the code (overrides the config default): `EmbeddedRocksDBStateBackend backend = new EmbeddedRocksDBStateBackend(true);`. By default, we preserve the last five checkpoints in the configured checkpoint directory. This value can be changed in the configuration management section with `state.checkpoints.num-retained: 5` (see the sketch after this table).|
+
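A minimal, hedged sketch of the incremental-checkpoint settings mentioned in the table above, assuming the Flink 1.16 Java API; the configuration keys come from the table, while the checkpoint interval and everything else is illustrative:

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class IncrementalCheckpointSketch {
    public static void main(String[] args) throws Exception {
        // Config-style defaults, mirroring the flink-conf.yaml keys above.
        Configuration conf = new Configuration();
        conf.setBoolean("state.backend.incremental", true);   // enable incremental checkpoints
        conf.setInteger("state.checkpoints.num-retained", 5);  // keep the last five checkpoints

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);

        // Or override the config default directly in code.
        env.setStateBackend(new EmbeddedRocksDBStateBackend(true));
        env.enableCheckpointing(30_000); // checkpoint every 30 seconds

        // ... build and execute the pipeline here.
    }
}
```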
+Apache Flink clusters in HDInsight include the following components; they are available on the clusters by default.
+
+* [DataStreamAPI](https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/datastream/overview/#what-is-a-datastream)
+* [TableAPI & SQL](https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/overview/#table-api--sql).
+
+Refer to the [Roadmap](../whats-new.md#coming-soon) on what's coming soon!
## Apache Flink Job Management
Flink schedules jobs using three distributed components, Job manager, Task manag
:::image type="content" source="./media/flink-overview/flink-process.png" alt-text="Flink process diagram showing how the job, Job manager, Task manager, and Job client work together.":::
-## Checkpoints in Apache Flink
-
-Every function and operator in Flink can be stateful. Stateful functions store data across the processing of individual elements/events, making state a critical building block for any type of more elaborate operation. In order to make state fault tolerant, Flink needs to **checkpoint the state**. Checkpoints allow Flink to recover state and positions in the streams to give the application the same semantics as a failure-free execution that means they play an important role for Flink to recover from failure both its state and the corresponding stream positions.
-
-Checkpointing is enabled in HDInsight on AKS Flink by default. Default settings on HDInsight on AKS maintain the last five checkpoints in persistent storage. In case, your job fails, the job can be restarted from the latest checkpoint.
-
-## State Backends
-
-Backends determine where state is stored. Stream processing applications are often stateful, *remembering* information from processed events and using it to influence further event processing. In Flink, the remembered information, that is, state, is stored locally in the configured state backend.
-
-When checkpointing is activated, such state is persisted upon checkpoints to guard against data loss and recover consistently. How the state is represented internally, and how and where it's persisted upon checkpoints depends on the chosen **State Backend**. HDInsight on AKS uses the RocksDB as default StateBackend.
-
-**Supported state backends:**
-
-* HashMapStateBackend
-* EmbeddedRocksDBStateBackend
-
-### The HashMapStateBackend
-
-The `HashMapStateBackend` holds data internally as objects on the Java heap. Key/value state and window operators hold hash tables that store the values, triggers, etc.
-
-The HashMapStateBackend is encouraged for:
-
-* Jobs with large state, long windows, large key/value states.
-* All high-availability setups.
-
-it 's also recommended to set managed memory to zero. This value ensures that the maximum amount of memory is allocated for user code on the JVM.
-Unlike `EmbeddedRocksDBStateBackend`, the `HashMapStateBackend` stores data as objects on the heap so that it 's unsafe to reuse objects.
-
-### The EmbeddedRocksDBStateBackend
-
-The `EmbeddedRocksDBStateBackend` holds in-flight data in a [RocksDB](http://rocksdb.org) database that is (per default). Unlike storing java objects in `HashMapStateBackend`, data is stored as serialized byte arrays, which mainly define the type serializer, resulting in key comparisons being byte-wise instead of using Java's `hashCode()` and `equals()` methods.
-
-By default, we use RocksDb as the state backend. RocksDB is an embeddable persistent key-value store for fast storage.
-
-```
-state.backend: rocksdb
-state.checkpoints.dir: <STORAGE_LOCATION>
-```
-By default, HDInsight on AKS stores the checkpoints in the storage account configured by the user, so that the checkpoints are persisted.
-
-### Incremental Checkpoints
-
-RocksDB supports Incremental Checkpoints, which can dramatically reduce the checkpointing time in comparison to full checkpoints. Instead of producing a full, self-contained backup of the state backend, incremental checkpoints only record the changes that happened since the latest completed checkpoint. An incremental checkpoint builds upon (typically multiple) previous checkpoints.
-
-Flink applies RocksDB's internal compaction mechanism in a way that is self-consolidating over time. As a result, the incremental checkpoint history in Flink doesn't grow indefinitely, and old checkpoints are eventually subsumed and pruned automatically. Recovery time of incremental checkpoints may be longer or shorter compared to full checkpoints. If your network bandwidth is the bottleneck, it may take a bit longer to restore from an incremental checkpoint, because it implies fetching more data (more deltas).
-
-Restore from an incremental checkpoint is faster, if the bottleneck is your CPU or IOPs, because restore from an incremental checkpoint means not to rebuild the local RocksDB tables from FlinkΓÇÖs canonical key value snapshot format (used in savepoints and full checkpoints).
-
-While we encourage the use of incremental checkpoints for large state, you need to enable this feature manually:
-
-* Setting a default in your `flink-conf.yaml: state.backend.incremental: true` enables incremental checkpoints, unless the application overrides this setting in the code. This statement is true by default.
-* You can alternatively configure this value directly in the code (overrides the config default):
-
-```
-EmbeddedRocksDBStateBackend` backend = new `EmbeddedRocksDBStateBackend(true);
-```
-
-By default, we preserve the last five checkpoints in the checkpoint dir configured.
-
-This value can be changed by changing the following config"
-
-`state.checkpoints.num-retained: 5`
-
-## Windowing in Flink
-
-Windowing is a key feature in stream processing systems such as Apache Flink. Windowing splits the continuous stream into finite batches on which computations can be performed. In Flink, windowing can be done on the entire steam or per-key basis.
-
-Windowing refers to the process of dividing a stream of events into finite, nonoverlapping segments called windows. This feature allows users to perform computations on specific subsets of data based on time or key-based criteria.
-
-Windows allow users to split the streamed data into segments that can be processed. Due to the unbounded nature of data streams, there's no situation where all the data is available, because users would be waiting indefinitely for new data points to arrive - so instead, windowing offers a way to define a subset of data points that you can then process and analyze. The trigger defines when the window is considered ready for processing, and the function set for the window specifies how to process the data.
-
-Learn [more](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/datastream/operators/windows/)
- ### Reference
-[Apache Flink](https://flink.apache.org/)
+- [Apache Flink Website](https://flink.apache.org/)
+- Apache, Apache Kafka, Kafka, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Flink Table Api And Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-table-api-and-sql.md
Title: Table API and SQL - HDInsight on AKS - Apache Flink
-description: Learn about Table API and SQL in HDInsight on AKS - Apache Flink
+ Title: Table API and SQL in Apache Flink® clusters on HDInsight on AKS
+description: Learn about Table API and SQL in Apache Flink® clusters on HDInsight on AKS
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Table API and SQL in HDInsight on AKS - Apache Flink
+# Table API and SQL in Apache Flink® clusters on HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
Like other SQL engines, Flink queries operate on top of tables. It differs from
Flink data processing pipelines begin with source tables and end with sink tables. Source tables produce rows operated over during the query execution; they're the tables referenced in the *FROM* clause of a query. Connectors can be of type HDInsight Kafka, HDInsight HBase, Azure Event Hubs, databases, filesystems, or any other system whose connector lies in the classpath.
-## Using SQL Client in HDInsight on AKS - Flink
+## Using Flink SQL Client in HDInsight on AKS clusters
You can refer to this article on how to use the CLI from [Secure Shell](./flink-web-ssh-on-portal-to-flink-sql.md) on the Azure portal. Here are some quick samples of how to get started.
Write out to **Sink Table** from **Source Table**:
GROUP BY grade; ```
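As a self-contained sketch of the source-to-sink pattern shown above, using the built-in `datagen` and `print` connectors instead of real Kafka or Hive tables (the table names and columns here are illustrative assumptions, not the article's own tables):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SourceToSinkSketch {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Source table: generated rows stand in for a real connector such as Kafka.
        tEnv.executeSql(
            "CREATE TABLE students (" +
            "  name STRING," +
            "  grade INT" +
            ") WITH (" +
            "  'connector' = 'datagen'," +
            "  'rows-per-second' = '5'," +
            "  'fields.grade.min' = '1'," +
            "  'fields.grade.max' = '5'" +
            ")");

        // Sink table: prints the aggregated rows to the TaskManager logs.
        tEnv.executeSql(
            "CREATE TABLE grade_counts (" +
            "  grade INT," +
            "  cnt BIGINT" +
            ") WITH ('connector' = 'print')");

        // Write out to the sink table from the source table.
        tEnv.executeSql(
            "INSERT INTO grade_counts SELECT grade, COUNT(*) FROM students GROUP BY grade").await();
    }
}
```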
-## Adding Dependencies for Apache Flink SQL
+## Adding Dependencies
JAR statements are used to add user jars into the classpath or remove user jars from the classpath or show added jars in the classpath in the runtime.
Flink SQL> REMOVE JAR '/path/hello.jar';
[INFO] The specified jar is removed from session classloader. ```
-## Hive Metastore in HDInsight on AKS - Flink
+## Hive Metastore in Apache Flink® clusters on HDInsight on AKS
Catalogs provide metadata, such as databases, tables, partitions, views, and functions, and the information needed to access data stored in a database or other external systems.
The *GenericInMemoryCatalog* is an in-memory implementation of a catalog. All th
The *HiveCatalog* serves two purposes: as persistent storage for pure Flink metadata, and as an interface for reading and writing existing Hive metadata. > [!NOTE]
-> In HDInsight on AKS, Flink comes with an integrated option of Hive Metastore. You can opt for Hive Metastore during [cluster creation](../flink/flink-create-cluster-portal.md)
+> HDInsight on AKS clusters comes with an integrated option of Hive Metastore for Apache Flink. You can opt for Hive Metastore during [cluster creation](../flink/flink-create-cluster-portal.md)
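A minimal sketch of registering a *HiveCatalog* from the Table API in Java, assuming the `flink-connector-hive` dependency is on the classpath; the catalog name and default database are examples, and the `/opt/hive-conf` directory mirrors the configuration location mentioned later in this section:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class HiveCatalogSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Catalog name and default database are examples; the Hive configuration
        // directory matches the /opt/hive-conf location referenced in this section.
        HiveCatalog hiveCatalog = new HiveCatalog("myhive", "default", "/opt/hive-conf");
        tEnv.registerCatalog("myhive", hiveCatalog);
        tEnv.useCatalog("myhive");

        // Tables created from here on are persisted in the Hive Metastore.
        tEnv.executeSql("SHOW DATABASES").print();
    }
}
```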
## How to Create and Register Flink Databases to Catalogs
You can refer to this article on how to use the CLI and get started with Flink SQL Clie
USE CATALOG myhive; ``` > [!NOTE]
- > HDInsight on AKS Flink supports **Hive 3.1.2** and **Hadoop 3.3.2**. The `hive-conf-dir` is set to location `/opt/hive-conf`
+ > HDInsight on AKS supports **Hive 3.1.2** and **Hadoop 3.3.2**. The `hive-conf-dir` is set to location `/opt/hive-conf`
- Let us create a database in the Hive catalog and make it the default for the session (unless changed). :::image type="content" source="./media/flink-table-sql-api/create-default-hive-catalog.png" alt-text="Screenshot showing creating database in hive catalog and making it default catalog for the session.":::
You can refer this article on how to use CLI and get started with Flink SQL Clie
CREATE TABLE partitioned_hive_table(x int, days STRING) PARTITIONED BY (days) WITH ( 'connector' = 'hive', 'sink.partition-commit.delay'='1 s', 'sink.partition-commit.policy.kind'='metastore,success-file'); ``` > [!IMPORTANT]
-> There is a known limitation in Flink. The last 'n' columns are chosen for partitions, irrespective of user defined partition column. [FLINK-32596](https://issues.apache.org/jira/browse/FLINK-32596) The partition key will be wrong when use Flink dialect to create Hive table.
+> There is a known limitation in Apache Flink. The last 'n' columns are chosen for partitions, irrespective of the user-defined partition column. [FLINK-32596](https://issues.apache.org/jira/browse/FLINK-32596) The partition key will be wrong when using the Flink dialect to create a Hive table.
+
+### Reference
+- [Apache Flink Table API & SQL](https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/overview/#table-api--sql)
+- Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Flink Web Ssh On Portal To Flink Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-web-ssh-on-portal-to-flink-sql.md
Title: How to enter the HDInsight on AKS Flink CLI client using Secure Shell (SSH) on Azure portal
-description: How to enter the HDInsight on AKS Flink SQL & DStream CLI client using webssh on Azure portal
+ Title: How to enter the Apache Flink® CLI client using Secure Shell (SSH) on HDInsight on AKS clusters with Azure portal
+description: How to enter Apache Flink® SQL & DStream CLI client using webssh on HDInsight on AKS clusters with Azure portal
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Access CLI Client using Secure Shell (SSH) on Azure portal
+# Access Apache Flink® CLI client using Secure Shell (SSH) on HDInsight on AKS clusters with Azure portal
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This example guides how to enter the HDInsight on AKS Flink CLI client using SSH on Azure portal, we cover both Flink SQL and Flink DataStream
+This example guides you through how to enter the Apache Flink CLI client on HDInsight on AKS clusters using SSH on the Azure portal; we cover both Flink SQL and Flink DataStream
## Prerequisites - You're required to select SSH during [creation](./flink-create-cluster-portal.md) of Flink Cluster
Submitting a job means uploading the job's JAR to the SSH pod and initiating t
## Reference * [Flink SQL Client](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sqlclient/)
+* Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Fraud Detection Flink Datastream Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/fraud-detection-flink-datastream-api.md
Title: Fraud detection with the Apache Flink DataStream API
-description: Learn about Fraud detection with the Apache Flink DataStream API
+ Title: Fraud detection with the Apache Flink® DataStream API
+description: Learn about Fraud detection with the Apache Flink® DataStream API
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Fraud detection with the Apache Flink DataStream API
+# Fraud detection with the Apache Flink® DataStream API
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
In this article, learn how to run Fraud detection use case with the Apache Flink
## Prerequisites
-* [HDInsight on AKS Flink 1.16.0](../flink/flink-create-cluster-portal.md)
+* [Flink cluster 1.16.0 on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
* IntelliJ Idea community edition installed locally ## Develop code in IDE
After making the code changes, create the jar using the following steps in Intel
## Reference * [Fraud Detector v2: State + Time](https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/try-flink/datastream/#fraud-detector-v2-state--time--1008465039) * [Sample TransactionIterator.java](https://github.com/apache/flink/blob/master/flink-walkthroughs/flink-walkthrough-common/src/main/java/org/apache/flink/walkthrough/common/source/TransactionIterator.java)
+* Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Hive Dialect Flink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/hive-dialect-flink.md
Title: Hive dialect in Flink
-description: Hive dialect in Flink HDInsight on AKS
+ Title: Hive dialect in Apache Flink® clusters on HDInsight on AKS
+description: how to use Hive dialect in Apache Flink® clusters on HDInsight on AKS
Previously updated : 09/18/2023 Last updated : 10/27/2023
-# Hive dialect in Flink
+# Hive dialect in Apache Flink® clusters on HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-In this article, learn how to use Hive dialect in HDInsight on AKS - Flink.
+In this article, learn how to use Hive dialect in Apache Flink clusters on HDInsight on AKS.
## Introduction
-The user cannot change the default `flink` dialect to hive dialect for their usage on HDInsight on AKS - Flink. All the SQL operations fail once changed to hive dialect with the following error.
+The user cannot change the default `flink` dialect to hive dialect for their usage on HDInsight on AKS clusters. All the SQL operations fail once changed to hive dialect with the following error.
```Caused by:
This issue arises due to an open [Hive Jira](https://issues.apach
:::image type="content" source="./media/hive-dialect-flink/flink-container-table-2.png" alt-text="Screenshot shows container table 2." lightbox="./media/hive-dialect-flink/flink-container-table-2.png"::: :::image type="content" source="./media/hive-dialect-flink/flink-container-table-3.png" alt-text="Screenshot shows container table 3." lightbox="./media/hive-dialect-flink/flink-container-table-3.png":::+
+### Reference
+- [Hive Dialect in Apache Flink](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/hive/hive_dialect/#hive-dialect)
+- Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Integration Of Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/integration-of-azure-data-explorer.md
Title: Integration of Azure Data Explorer and Flink
-description: Integration of Azure Data Explorer and Flink in HDInsight on AKS
+ Title: Integration of Azure Data Explorer and Apache Flink®
+description: Integration of Azure Data Explorer and Apache Flink® in HDInsight on AKS
Last updated 09/18/2023
-# Integration of Azure Data Explorer and Flink
+# Integration of Azure Data Explorer and Apache Flink®
Azure Data Explorer is a fully managed, high-performance, big data analytics platform that makes it easy to analyze high volumes of data in near real time.
-ADX helps users in analysis of large volumes of data from streaming applications, websites, IoT devices, etc. Integrating Flink with ADX helps you to process real-time data and analyze it in ADX.
+ADX helps users in analysis of large volumes of data from streaming applications, websites, IoT devices, etc. Integrating Apache Flink with ADX helps you to process real-time data and analyze it in ADX.
## Prerequisites -- [Create HDInsight on AKS Flink cluster](./flink-create-cluster-portal.md)
+- [Create Apache Flink cluster on HDInsight on AKS](./flink-create-cluster-portal.md)
- [Create Azure data explorer](/azure/data-explorer/create-cluster-and-database/) ## Steps to use Azure Data Explorer as sink in Flink
-1. [Create HDInsight on AKS Flink cluster](./flink-create-cluster-portal.md).
+1. [Create Flink cluster](./flink-create-cluster-portal.md).
1. [Create ADX with database](/azure/data-explorer/create-cluster-and-database/) and table as required.
ADX helps users in analysis of large volumes of data from streaming applications
There is no delay in writing the data to the Kusto table from Flink.
+### Reference
+
+- [Apache Flink Website](https://flink.apache.org/)
+- Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Join Stream Kafka Table Filesystem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/join-stream-kafka-table-filesystem.md
Title: Enrich the events from Kafka with the attributes from FileSystem with Apache Flink
-description: Learn how to join stream from Kafka with table from fileSystem using DataStream API
+ Title: Enrich the events from Apache Kafka® with the attributes from FileSystem with Apache Flink®
+description: Learn how to join stream from Kafka with table from fileSystem using Apache Flink® DataStream API
Last updated 08/29/2023
-# Enrich the events from Kafka with attributes from ADLS Gen2 with Apache Flink
+# Enrich the events from Apache Kafka® with attributes from ADLS Gen2 with Apache Flink®
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
In this article, you can learn how you can enrich the real time events by joinin
## Prerequisites
-* [HDInsight on AKS Flink 1.16.0](../flink/flink-create-cluster-portal.md)
-* [HDInsight Kafka](../../hdinsight/kafk)
- * You're required to ensure the network settings are taken care as described on [Using HDInsight Kafka](../flink/process-and-consume-data.md); that's to make sure HDInsight on AKS Flink and HDInsight Kafka are in the same VNet
+* [Flink cluster on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
+* [Kafka cluster on HDInsight](../../hdinsight/kafk)
 + * You're required to ensure the network settings are taken care of as described in [Using Kafka on HDInsight](../flink/process-and-consume-data.md); that's to make sure the HDInsight on AKS and HDInsight clusters are in the same VNet
* For this demonstration, we're using a Windows VM as the Maven project development environment in the same VNet as HDInsight on AKS ## Kafka topic preparation
We continue to produce and consume the user activity and item attributes in the
## Reference
-[Flink Examples](https://github.com/flink-extended/)
+- [Flink Examples](https://github.com/flink-extended/)
+- [Apache Flink Website](https://flink.apache.org/)
+- Apache, Apache Kafka, Kafka, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Monitor Changes Postgres Table Flink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/monitor-changes-postgres-table-flink.md
Title: Change Data Capture (CDC) of PostgreSQL table using Apache FlinkSQL
-description: Learn how to perform CDC on PostgreSQL table using Apache FlinkSQL CDC
+ Title: Change Data Capture (CDC) of PostgreSQL table using Apache Flink®
+description: Learn how to perform CDC on PostgreSQL table using Apache Flink®
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Change Data Capture (CDC) of PostgreSQL table using Apache FlinkSQL
+# Change Data Capture (CDC) of PostgreSQL table using Apache Flink®
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
Now, let's learn how to monitor changes on PostgreSQL table using Flink-SQL CDC.
## Prerequisites * [Azure PostgresSQL flexible server Version 14.7](/azure/postgresql/flexible-server/overview)
-* [HDInsight on AKS Flink 1.16.0](./flink-create-cluster-portal.md)
+* [Apache Flink Cluster on HDInsight on AKS](./flink-create-cluster-portal.md)
* Linux virtual Machine to use PostgreSQL client * Add the NSG rule that allows inbound and outbound connections on port 5432 in HDInsight on AKS pool subnet.
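Before the detailed walkthrough, here is a minimal, hedged sketch of what a CDC-backed table definition can look like from the Table API in Java, assuming the `postgres-cdc` connector jar is available on the cluster; the host, credentials, column list, and table names are placeholders, not values from this article:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class PostgresCdcSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Placeholder connection details; the column list must match your PostgreSQL table.
        tEnv.executeSql(
            "CREATE TABLE orders_cdc (" +
            "  order_id INT," +
            "  customer STRING," +
            "  PRIMARY KEY (order_id) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'postgres-cdc'," +
            "  'hostname' = '<server>.postgres.database.azure.com'," +
            "  'port' = '5432'," +
            "  'username' = '<user>'," +
            "  'password' = '<password>'," +
            "  'database-name' = '<database>'," +
            "  'schema-name' = 'public'," +
            "  'table-name' = 'orders'," +
            "  'decoding.plugin.name' = 'pgoutput'" +
            ")");

        // Each insert/update/delete on the source table arrives as a changelog row.
        tEnv.executeSql("SELECT * FROM orders_cdc").print();
    }
}
```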
Now, let's learn how to monitor changes on PostgreSQL table using Flink-SQL CDC.
### Reference
-[PostgreSQL CDC Connector](https://ververica.github.io/flink-cdc-connectors/release-2.1/content/connectors/postgres-cdc.html) is licensed under [Apache 2.0 License](https://github.com/ververica/flink-cdc-connectors/blob/master/LICENSE)
+- [Apache Flink Website](https://flink.apache.org/)
+- [PostgreSQL CDC Connector](https://ververica.github.io/flink-cdc-connectors/release-2.1/content/connectors/postgres-cdc.html) is licensed under [Apache 2.0 License](https://github.com/ververica/flink-cdc-connectors/blob/master/LICENSE)
+- Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Process And Consume Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/process-and-consume-data.md
Title: Using HDInsight Kafka with HDInsight on AKS Apache Flink
-description: Learn how to use HDInsight Kafka with HDInsight on AKS Apache Flink
+ Title: Using Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS
+description: Learn how to use Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Using HDInsight Kafka with HDInsight on AKS - Apache Flink
+# Using Apache Kafka® on HDInsight with Apache Flink® on HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] A well-known use case for Apache Flink is stream analytics. Many users choose to consume data streams, which are ingested using Apache Kafka. Typical installations of Flink and Kafka start with event streams being pushed to Kafka, which can be consumed by Flink jobs.
-This example uses HDInsight on AKS Flink 1.16.0 to process streaming data consuming and producing Kafka topic.
+This example uses HDInsight on AKS clusters running Flink 1.16.0 to process streaming data, consuming and producing Kafka topics.
> [!NOTE] > FlinkKafkaConsumer is deprecated and will be removed with Flink 1.17, please use KafkaSource instead.
This example uses HDInsight on AKS Flink 1.16.0 to process streaming data consum
* Create an [HDInsight on AKS Cluster pool](../quickstart-create-cluster.md) with the same VNet. * Create a Flink cluster in the cluster pool you created.
-## Apache Flink-Kafka Connector
+## Apache Kafka Connector
Flink provides an [Apache Kafka Connector](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/kafka/) for reading data from and writing data to Kafka topics with exactly once guarantees.
Flink provides an [Apache Kafka Connector](https://nightlies.apache.org/flink/fl
## Building Kafka Sink
-Kafka sink provides a builder class to construct an instance of a KafkaSink. We use the same to construct our Sink and use it along with HDInsight on AKS Flink
+Kafka sink provides a builder class to construct an instance of a KafkaSink. We use this builder to construct our sink and use it with the Flink cluster running on HDInsight on AKS.
**SinKafkaToKafka.java** ``` java
public class Event {
## Reference * [Apache Kafka Connector](https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/datastream/kafka)
+* Apache, Apache Kafka, Kafka, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Sink Kafka To Kibana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/sink-kafka-to-kibana.md
Title: Use Elasticsearch along with HDInsight on AKS - Apache Flink
-description: Learn how to use Elasticsearch along HDInsight on AKS - Apache Flink
+ Title: Use Elasticsearch along with Apache Flink® on HDInsight on AKS
+description: Learn how to use Elasticsearch along Apache Flink® on HDInsight on AKS
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Using Elasticsearch with HDInsight on AKS - Apache Flink
+# Using Elasticsearch with Apache Flink® on HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-Flink for real-time analytics can be used to build a dashboard application that visualizes the streaming data using Elasticsearch and Kibana.
+You can use Apache Flink for real-time analytics to build a dashboard application that visualizes streaming data using Elasticsearch and Kibana.
Flink can be used to analyze a stream of taxi ride events and compute metrics. Metrics can include number of rides per hour, the average fare per ride, or the most popular pickup locations. You can write these metrics to an Elasticsearch index using a Flink sink and use Kibana to connect and create charts or dashboards to display metrics in real-time.
-In this article, learn how to Use Elastic along HDInsight Flink.
+In this article, learn how to use Elasticsearch along with Apache Flink® on HDInsight on AKS.
## Elasticsearch and Kibana
For more information, refer
## Prerequisites
-* [HDInsight on AKS Flink 1.16.0](./flink-create-cluster-portal.md)
+* [Create Flink 1.16.0 cluster](./flink-create-cluster-portal.md)
* Elasticsearch-7.13.2 * Kibana-7.13.2 * [HDInsight 5.0 - Kafka 2.4.1](../../hdinsight/kafk)
You can find the job in running state on your Flink Web UI
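For orientation, a sink table defined through the Elasticsearch SQL connector typically looks like the hedged sketch below; the host, index, and column names are assumptions for illustration, not values from this article.
``` sql
-- Hedged sketch (assumed host/index/columns): write aggregated metrics from
-- Flink SQL into an Elasticsearch index that Kibana can chart.
CREATE TABLE rides_per_hour_es (
  window_start TIMESTAMP(3),
  ride_count BIGINT,
  PRIMARY KEY (window_start) NOT ENFORCED
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://<elasticsearch-host>:9200',
  'index' = 'rides_per_hour'
);

-- An INSERT INTO rides_per_hour_es SELECT ... statement then keeps the index
-- continuously updated from the Kafka-backed source table.
```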
## Reference * [Apache Kafka SQL Connector](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/table/kafka) * [Elasticsearch SQL Connector](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/table/elasticsearch)
+* Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Sink Sql Server Table Using Flink Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/sink-sql-server-table-using-flink-sql.md
Title: Change Data Capture (CDC) of SQL Server using Apache Flink SQL
-description: Learn how to perform CDC of SQL Server using Apache Flink SQL
+ Title: Change Data Capture (CDC) of SQL Server using Apache Flink®
+description: Learn how to perform CDC of SQL Server using Apache Flink®
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Change Data Capture (CDC) of SQL Server using Apache Flink SQL
+# Change Data Capture (CDC) of SQL Server using Apache Flink®
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] Change Data Capture (CDC) is a technique you can use to track row-level changes in database tables in response to create, update, and delete operations. In this article, we use [CDC Connectors for Apache Flink®](https://github.com/ververica/flink-cdc-connectors), which offer a set of source connectors for Apache Flink. The connectors integrate [Debezium®](https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/formats/debezium/#debezium-format) as the engine to capture the data changes.
-Flink supports to interpret Debezium JSON and Avro messages as INSERT/UPDATE/DELETE messages into Flink SQL system.
+Apache Flink supports interpreting Debezium JSON and Avro messages as INSERT/UPDATE/DELETE messages in the Flink SQL system.
This support is useful in many cases to:
This support is useful in many cases to:
Now, let us learn how to use Change Data Capture (CDC) of SQL Server using Flink SQL. The SQLServer CDC connector allows for reading snapshot data and incremental data from a SQL Server database. ## Prerequisites
- * [HDInsight on AKS Flink 1.16.0](../flink/flink-create-cluster-portal.md)
+ * [Flink cluster on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
* [Azure SQL Server](/azure/azure-sql/azure-sql-iaas-vs-paas-what-is-overview)
-### Apache Flink SQLServer CDC Connector
+### SQLServer CDC Connector
-The SQLServer CDC connector is a Flink Source connector, which reads database snapshot first and then continues to read change events with exactly once processing even failures happen. This example uses FLINK CDC to create a SQLServerCDC table on FLINK SQL
+The SQLServer CDC connector is a Flink source connector, which reads the database snapshot first and then continues to read change events with exactly-once processing, even when failures happen. This example uses Flink CDC to create a SQL Server CDC table in Flink SQL, as sketched below.
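In outline, that table definition looks like the following hedged sketch; the server, database, schema, and credential values are placeholders rather than values from this article.
``` sql
-- Hedged sketch (assumed server/database/credentials): expose a SQL Server
-- table as a change-capture source with the sqlserver-cdc connector.
CREATE TABLE orders_cdc (
  order_id INT,
  customer_name STRING,
  order_status STRING,
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'sqlserver-cdc',
  'hostname' = '<sql-server-name>.database.windows.net',
  'port' = '1433',
  'username' = '<username>',
  'password' = '<password>',
  'database-name' = '<database>',
  'schema-name' = 'dbo',
  'table-name' = 'orders'
);
```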
-### Use SSH to use Flink SQL-client
+### Use SSH to use Flink SQL client
We have already covered how to use [secure shell](./flink-web-ssh-on-portal-to-flink-sql.md) with Flink in detail.
Monitor the table on Flink SQL
### Reference * [SQLServer CDC Connector](https://ververica.github.io/flink-cdc-connectors/master/content/connectors/sqlserver-cdc.html) is licensed under [Apache 2.0 License](https://github.com/ververica/flink-cdc-connectors/blob/master/LICENSE)
+* Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Use Apache Nifi With Datastream Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-apache-nifi-with-datastream-api.md
Title: Use Apache NiFi with HDInsight on AKS Apache Flink to publish into ADLS Gen2
-description: Learn how to use Apache NiFi to consume Processed Kafka topic from HDInsight Apache Flink on AKS and publish into ADLS Gen2
+ Title: Use Apache NiFi with HDInsight on AKS clusters running Apache Flink® to publish into ADLS Gen2
+description: Learn how to use Apache NiFi to consume processed Apache Kafka® topic from Apache Flink® on HDInsight on AKS clusters and publish into ADLS Gen2
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Use Apache NiFi to consume processed Kafka topics from Apache Flink and publish into ADLS Gen2
+# Use Apache NiFi to consume processed Apache Kafka® topics from Apache Flink® and publish into ADLS Gen2
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
By combining the low latency streaming features of Apache Flink and the dataflow
## Prerequisites
-* [HDInsight on AKS Flink 1.16.0](../flink/flink-create-cluster-portal.md)
-* [HDInsight Kafka](../../hdinsight/kafk)
- * You're required to ensure the network settings are taken care as described on [Using HDInsight Kafka](../flink/process-and-consume-data.md); that's to make sure HDInsight on AKS Flink and HDInsight Kafka are in the same VNet
+* [Flink cluster on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
+* [Kafka cluster on HDInsight](../../hdinsight/kafk)
+ * You're required to ensure the network settings are taken care of as described in [Using Kafka on HDInsight](../flink/process-and-consume-data.md); that's to make sure the HDInsight on AKS and HDInsight clusters are in the same VNet.
* For this demonstration, we're using a Windows VM as the Maven project development environment in the same VNet as HDInsight on AKS * For this demonstration, we're using an Ubuntu VM in the same VNet as HDInsight on AKS, and install Apache NiFi 1.22.0 on this VM
By combining the low latency streaming features of Apache Flink and the dataflow
For purposes of this demonstration, we're using an HDInsight Kafka cluster. Let us prepare an HDInsight Kafka topic for the demo. > [!NOTE]
-> Setup a HDInsight [Kafka](../../hdinsight/kafk) Cluster and Replace broker list with your own list before you get started for both Kafka 2.4 and 3.2.
+> Set up an HDInsight cluster with [Apache Kafka](../../hdinsight/kafk) and replace the broker list with your own list before you get started, for both Kafka 2.4 and 3.2.
-**HDInsight Kafka 2.4.1**
+**Kafka 2.4.1**
``` /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 3 --topic click_events --zookeeper zk0-contsk:2181 ```
-**HDInsight Kafka 3.2.0**
+**Kafka 3.2.0**
``` /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 2 --partitions 3 --topic click_events --bootstrap-server wn0-contsk:9092 ```
Here, we configure NiFi properties in order to be accessed outside the localhost
:::image type="content" source="./media/use-apache-nifi-with-datastream-api/step-2-configuring-nifi.png" alt-text="Screenshot showing how to define NiFi properties." border="true" lightbox="./media/use-apache-nifi-with-datastream-api/step-2-configuring-nifi.png":::
-## Process streaming data from HDInsight Kafka On HDInsight on AKS Flink
+## Process streaming data from Kafka cluster on HDInsight with Flink cluster on HDInsight on AKS
Let us develop the source code with Maven to build the jar.
public class ClickSource implements SourceFunction<Event> {
``` **Maven pom.xml**
-You can replace 2.4.1 with 3.2.0 in case you're using HDInsight Kafka 3.2.0, where applicable on the pom.xml
+You can replace 2.4.1 with 3.2.0 in case you're using Kafka 3.2.0 on HDInsight, where applicable in the pom.xml.
``` xml <?xml version="1.0" encoding="UTF-8"?>
You can replace 2.4.1 with 3.2.0 in case you're using HDInsight Kafka 3.2.0, whe
</project> ```
-## Submit streaming job to HDInsight on AKS - Flink
+## Submit streaming job to Flink cluster on HDInsight on AKS
-Now, lets submit streaming job as mentioned in the previous step into HDInsight on AKS - Flink
+Now, let's submit the streaming job mentioned in the previous step to the Flink cluster.
:::image type="content" source="./media/use-apache-nifi-with-datastream-api/step-5-flink-ui-job-submission.png" alt-text="Screenshot showing how to submit the streaming job from FLink UI." border="true" lightbox="./media/use-apache-nifi-with-datastream-api/step-5-flink-ui-job-submission.png":::
-## Check the topic on HDInsight Kafka
+## Check the topic on Kafka cluster
-Check the topic on HDInsight Kafka.
+Check the topic on Kafka.
``` root@hn0-contos:/home/sshuser# /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --topic click_events --bootstrap-server wn0-contos:9092
Once you have assigned a managed identity to the Azure VM, you need to make sure
* [Azure Data Lake Storage](https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-azure-nar/1.12.0/org.apache.nifi.processors.azure.storage.PutAzureDataLakeStorage/https://docsupdatetracker.net/index.html) * [ADLS Credentials Controller Service](https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-azure-nar/1.12.0/org.apache.nifi.services.azure.storage.ADLSCredentialsControllerService/https://docsupdatetracker.net/index.html) * [Download IntelliJ IDEA for development](https://www.jetbrains.com/idea/download/#section=windows)
+* Apache, Apache Kafka, Kafka, Apache Flink, Flink, Apache NiFi, NiFi, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Use Azure Pipelines To Run Flink Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-azure-pipelines-to-run-flink-jobs.md
Title: How to use Azure Pipelines with HDInsight on AKS - Flink
-description: Learn how to use Azure Pipelines with HDInsight on AKS - Flink
+ Title: How to use Azure Pipelines with Apache Flink® on HDInsight on AKS
+description: Learn how to use Azure Pipelines with Apache Flink®
Previously updated : 09/25/2023 Last updated : 10/27/2023
-# How to use Azure Pipelines with HDInsight on AKS - Flink
+# How to use Azure Pipelines with Apache Flink® on HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-In this article, you'll learn how to use Azure Pipelines with HDInsight on AKS to submit Flink jobs via the cluster's REST API. We guide you through the process using a sample YAML pipeline and a PowerShell script, both of which streamline the automation of the REST API interactions.
+In this article, you'll learn how to use Azure Pipelines with HDInsight on AKS to submit Flink jobs with the cluster's REST API. We guide you through the process using a sample YAML pipeline and a PowerShell script, both of which streamline the automation of the REST API interactions.
## Prerequisites
In this article, you'll learn how to use Azure Pipelines with HDInsight on AKS t
az ad sp create-for-rbac -n azure-flink-pipeline --role Contributor --scopes /subscriptions/abdc-1234-abcd-1234-abcd-1234/resourceGroups/myResourceGroupName/providers/Microsoft.HDInsight/clusterpools/hiloclusterpool/clusters/flinkcluster` ```
+### Reference
+
+- [Apache Flink Website](https://flink.apache.org/)
+
+> [!NOTE]
+> Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
++ ### Create a key vault 1. Create an Azure Key Vault. You can follow [this tutorial](/azure/key-vault/general/quick-create-portal) to create a new Azure Key Vault.
hdinsight-aks Use Flink Cli To Submit Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-flink-cli-to-submit-jobs.md
Title: How to use Apache Flink CLI to submit jobs
-description: Learn how to use Apache Flink CLI to submit jobs
+ Title: How to use Apache Flink® CLI to submit jobs
+description: Learn how to use Apache Flink® CLI to submit jobs
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Apache Flink Command-Line Interface (CLI)
+# Apache Flink® Command-Line Interface (CLI) on HDInsight on AKS clusters
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
Both installing and updating the CLI require rerunning the install script. Insta
curl -L https://aka.ms/hdionaksflinkcliinstalllinux | bash ```
-This command installs Flink CLI in the user's home directory (`$HOME/flink-cli`). The script can also be downloaded and run locally. You may have to restart your shell in order for changes to take effect.
+This command installs Flink CLI in the user's home directory (`$HOME/flink-cli`). The script can also be downloaded and run locally. You might have to restart your shell in order for changes to take effect.
## Run an Apache Flink command to test
Here are some examples of actions supported by Flink's CLI tool:
| run | This action executes jobs. It requires at least the jar containing the job. Flink- or job-related arguments can be passed if necessary. | | info | This action can be used to print an optimized execution graph of the passed job. Again, the jar containing the job needs to be passed. | | list | This action *lists all running or scheduled jobs*.|
-| savepoint | This action can be used to *create or disposing savepoints* for a given job. It may be necessary to specify a savepoint directory besides the JobID. |
+| savepoint | This action can be used to *create or dispose of savepoints* for a given job. It might be necessary to specify a savepoint directory besides the JobID. |
| cancel | This action can be used to *cancel running jobs* based on their JobID. | | stop | This action combines the *cancel and savepoint actions to stop* a running job but also creates a savepoint to start from again. |
bin/flink <action> --help
> [!TIP] > * If you have a Proxy blocking the connection: In order to get the installation scripts, your proxy needs to allow HTTPS connections to the following addresses: `https://aka.ms/` and `https://hdiconfigactions.blob.core.windows.net` > * To resolve the issue, add the user or group to the [authorization profile](../hdinsight-on-aks-manage-authorization-profile.md).+
+### Reference
+
+- [Apache Flink Website](https://flink.apache.org/)
+- Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Use Flink Delta Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-flink-delta-connector.md
Title: How to use Apache Flink & Delta connector in HDInsight on AKS
-description: Learn how to use Apache Flink-Delta connector
+ Title: How to use Apache Flink® on HDInsight on AKS with Flink/Delta connector
+description: Learn how to use Flink/Delta Connector
Last updated 08/29/2023
-# How to use Apache Flink-Delta connector
+# How to use Flink/Delta Connector
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
In this article, you learn how to use Flink-Delta connector
> * Write the data to a delta table. > * Query it in Power BI.
-## What is Apache Flink-Delta connector
+## What is Flink/Delta connector
-Flink-Delta Connector is a JVM library to read and write data from Apache Flink applications to Delta tables utilizing the Delta Standalone JVM library. The connector provides exactly once delivery guarantee.
+Flink/Delta Connector is a JVM library for reading and writing data from Apache Flink applications to Delta tables, utilizing the Delta Standalone JVM library. The connector provides an exactly-once delivery guarantee.
## Apache Flink-Delta Connector includes * DeltaSink for writing data from Apache Flink to a Delta table. * DeltaSource for reading Delta tables using Apache Flink.
-We are using the following connector, to match with the HDInsight on AKS Flink version.
+We are using the following connector to match the Apache Flink version running on the HDInsight on AKS cluster.
|Connector's version| Flink's version| |-|-|
We are using the following connector, to match with the HDInsight on AKS Flink v
## Prerequisites
-* [HDInsight on AKS Flink 1.16.0](./flink-create-cluster-portal.md)
+* [Create Flink 1.16.0 cluster](./flink-create-cluster-portal.md)
* storage account * [Power BI desktop](https://www.microsoft.com/download/details.aspx?id=58494)
Once the data is in delta sink, you can run the query in Power BI desktop and cr
* [Delta connectors](https://github.com/delta-io/connectors/tree/master/flink). * [Delta Power BI connectors](https://github.com/delta-io/connectors/tree/master/powerbi).
+* Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Use Flink To Sink Kafka Message Into Hbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-flink-to-sink-kafka-message-into-hbase.md
Title: Write messages to HBase with DataStream API
-description: Learn how to write messages to HBase with DataStream API
+ Title: Write messages to Apache HBase® with Apache Flink® DataStream API
+description: Learn how to write messages to Apache HBase with Apache Flink DataStream API
Last updated 08/29/2023
-# Write messages to HBase with DataStream API
+# Write messages to Apache HBase® with Apache Flink® DataStream API
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
In a real world scenario, this example is a stream analytics layer to realize va
## Prerequisites
-* [HDInsight on AKS Flink 1.16.0](../flink/flink-create-cluster-portal.md)
-* [HDInsight Kafka](../flink/process-and-consume-data.md)
-* [HDInsight HBase 2.4.11](../../hdinsight/hbase/apache-hbase-tutorial-get-started-linux.md#create-apache-hbase-cluster)
- * You're required to make sure HDInsight on AKS Flink can connect to HDInsight HBase Master(zk), with same virtual network.
+* [Apache Flink cluster on HDInsight on AKS](../flink/flink-create-cluster-portal.md)
+* [Apache Kafka cluster on HDInsight](../flink/process-and-consume-data.md)
+* [Apache HBase 2.4.11 cluster on HDInsight](../../hdinsight/hbase/apache-hbase-tutorial-get-started-linux.md#create-apache-hbase-cluster)
+ * You're required to ensure the HDInsight on AKS cluster can connect to the HDInsight cluster, within the same virtual network.
* Maven project on IntelliJ IDEA for development on an Azure VM in the same VNet ## Implementation Steps
if __name__ == "__main__":
main() ```
-**Use pipeline to produce Kafka topic**
+**Use pipeline to produce Apache Kafka topic**
We're going to use click_events for the Kafka topic ```
python weblog.py | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh -
..... ```
-**Create HBase table on HDInsight HBase**
+**Create HBase table on HDInsight cluster**
``` sql root@hn0-contos:/home/sshuser# hbase shell
Took 0.9531 seconds
## References * [Apache Kafka Connector](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/kafka) * [Download IntelliJ IDEA](https://www.jetbrains.com/idea/download/#section=windows)
+* Apache, Apache Kafka, Kafka, Apache HBase, HBase, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Use Hive Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-hive-catalog.md
Title: Use Hive Catalog, Hive Read & Write demo on Apache Flink SQL
-description: Learn how to use Hive Catalog, Hive Read & Write demo on Apache Flink SQL
+ Title: Use Hive Catalog, Hive Read & Write demo on Apache Flink®
+description: Learn how to use Hive Catalog, Hive Read & Write demo on Apache Flink® on HDInsight on AKS
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# How to use Hive Catalog with Apache Flink SQL
+# How to use Hive Catalog with Apache Flink® on HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-This example uses HiveΓÇÖs Metastore as a persistent catalog with Apache FlinkΓÇÖs HiveCatalog. We will use this functionality for storing Kafka table and MySQL table metadata on Flink across sessions. Flink uses Kafka table registered in Hive Catalog as a source, perform some lookup and sink result to MySQL database
+This example uses Hive's Metastore as a persistent catalog with Apache Flink's Hive Catalog. We use this functionality for storing Kafka table and MySQL table metadata on Flink across sessions. Flink uses the Kafka table registered in the Hive Catalog as a source, performs some lookups, and sinks the result to a MySQL database.
## Prerequisites
-* [HDInsight on AKS Flink 1.16.0 with Hive Metastore 3.1.2](../flink/flink-create-cluster-portal.md)
-* [HDInsight Kafka](../../hdinsight/kafk)
- * You're required to ensure the network settings are complete as described on [Using HDInsight Kafka](../flink/process-and-consume-data.md); that's to make sure HDInsight on AKS Flink and HDInsight Kafka are in the same VNet
+* [Apache Flink Cluster on HDInsight on AKS with Hive Metastore 3.1.2](../flink/flink-create-cluster-portal.md)
+* [Apache Kafka cluster on HDInsight](../../hdinsight/kafk)
+ * You're required to ensure the network settings are complete as described in [Using Kafka](../flink/process-and-consume-data.md); that's to make sure the HDInsight on AKS and HDInsight clusters are in the same VNet.
* MySQL 8.0.33
-## Apache Hive on Flink
+## Apache Hive on Apache Flink
Flink offers a two-fold integration with Hive.
Flink offers a two-fold integration with Hive.
- The second is to offer Flink as an alternative engine for reading and writing Hive tables. - The HiveCatalog is designed to be "out of the box" compatible with existing Hive installations. You don't need to modify your existing Hive Metastore or change the data placement or partitioning of your tables.
-You may refer to this page for more details on [Apache Hive](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/table/hive/overview/)
+For more information, see [Apache Hive](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/table/hive/overview/)
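Once the environment is prepared (next section), registering the Hive Metastore as a catalog from the Flink SQL client reduces to something like this hedged sketch; the catalog name and `hive-conf-dir` path are assumptions for illustration.
``` sql
-- Hedged sketch (assumed catalog name and hive-conf-dir): register the Hive
-- Metastore as a persistent catalog so table definitions survive sessions.
CREATE CATALOG myhive WITH (
  'type' = 'hive',
  'default-database' = 'default',
  'hive-conf-dir' = '/opt/hive-conf'
);

USE CATALOG myhive;
-- Tables created from here on (for example, the Kafka source table) are stored
-- in the Hive Metastore and remain available across Flink SQL sessions.
```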
## Environment preparation
wget https://repo1.maven.org/maven2/org/apache/flink/flink-connector-kafka/1.16.
**Moving the planner jar**
-Move the jar flink-table-planner_2.12-1.16.0-0.0.18.jar located in webssh pod's /opt to /lib and move out the jar flink-table-planner-loader-1.16.0-0.0.18.jar from /lib. Please refer to [issue](https://issues.apache.org/jira/browse/FLINK-25128) for more details. Perform the following steps to move the planner jar.
+Move the jar flink-table-planner_2.12-1.16.0-0.0.18.jar located in webssh pod's /opt to /lib and move out the jar flink-table-planner-loader-1.16.0-0.0.18.jar from /lib. Refer to [issue](https://issues.apache.org/jira/browse/FLINK-25128) for more details. Perform the following steps to move the planner jar.
``` mv /opt/flink-webssh/opt/flink-table-planner_2.12-1.16.0-0.0.18.jar /opt/flink-webssh/lib/
FROM kafka_user_orders where product_id = 104;
### Reference * [Apache Hive](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/table/hive/overview/)
+* Apache, Apache Hive, Hive, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Use Hive Metastore Datastream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-hive-metastore-datastream.md
Title: Use Hive Metastore with Apache Flink DataStream API
-description: Use Hive Metastore with Apache Flink DataStream API
+ Title: Use Hive Metastore with Apache Flink® DataStream API
+description: Use Hive Metastore with Apache Flink® DataStream API
Last updated 08/29/2023
-# Use Hive Metastore with Apache Flink DataStream API
+# Use Hive Metastore with Apache Flink® DataStream API
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)] Over the years, Hive Metastore has evolved into a de facto metadata center in the Hadoop ecosystem. Many companies have a separate Hive Metastore service instance in their production environments to manage all their metadata (Hive or non-Hive metadata). For users who have both Hive and Flink deployments, HiveCatalog enables them to use Hive Metastore to manage Flink's metadata.
-## Supported Hive versions for HDInsight on AKS - Apache Flink
+## Supported Hive versions for Apache Flink clusters on HDInsight on AKS
Supported Hive Version: - 3.1
public static void main(String[] args) throws Exception
``` ## References
-[Apache Flink - Hive read & write](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/table/hive/hive_read_write/)
+- [Hive read & write](https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/table/hive/hive_read_write/)
+- Apache, Apache Hive, Hive, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Monitor With Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/monitor-with-prometheus-grafana.md
Title: Monitoring with Azure Managed Prometheus and Grafana
description: Learn how to monitor with Azure Managed Prometheus and Grafana Previously updated : 08/29/2023 Last updated : 10/27/2023 # Monitoring with Azure Managed Prometheus and Grafana
This article covers the details of enabling the monitoring feature in HDInsight
* An Azure Managed Prometheus workspace. You can think of this workspace as a unique Azure Monitor logs environment with its own data repository, data sources, and solutions. For the instructions, see [Create an Azure Managed Prometheus workspace](../azure-monitor/essentials/azure-monitor-workspace-manage.md). * Azure Managed Grafana workspace. For the instructions, see [Create an Azure Managed Grafana workspace](../managed-grafan). * An [HDInsight on AKS cluster](./quickstart-create-cluster.md). Currently, you can use Azure Managed Prometheus with the following HDInsight on AKS cluster types:
- * Apache Spark
- * Apache Flink
+ * Apache Spark™
+ * Apache Flink®
* Trino For the instructions on how to create an HDInsight on AKS cluster, see [Get started with Azure HDInsight on AKS](./overview.md).
You can use the Grafana dashboard to view the service and system. Trino cluster
1. View the metric as per selection. :::image type="content" source="./media/monitor-with-prometheus-grafana/view-output.png" alt-text="Screenshot showing how to view the output." border="true" lightbox="./media/monitor-with-prometheus-grafana/view-output.png":::+
+## Reference
+
+* Apache, Apache Spark, Spark, and associated open source project names are [trademarks](./trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Secure Traffic By Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/secure-traffic-by-firewall.md
Title: Use firewall to restrict outbound traffic on HDInsight on AKS using Azure CLI description: Learn how to secure traffic using firewall on HDInsight on AKS using Azure CLI + Last updated 08/3/2023
The following steps provide details about the specific network and application r
## How to debug If you find the cluster works unexpectedly, you can check the firewall logs to find which traffic is blocked.-
hdinsight-aks Azure Hdinsight Spark On Aks Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/azure-hdinsight-spark-on-aks-delta-lake.md
Title: How to use Delta Lake scenario in Azure HDInsight on AKS Spark cluster.
-description: Learn how to use Delta Lake scenario in Azure HDInsight on AKS Spark cluster.
+ Title: How to use Delta Lake in Azure HDInsight on AKS with Apache Spark™ cluster.
+description: Learn how to use Delta Lake scenario in Azure HDInsight on AKS with Apache Spark™ cluster.
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Use Delta Lake scenario in Azure HDInsight on AKS Spark cluster (Preview)
+# Use Delta Lake in Azure HDInsight on AKS with Apache Spark™ cluster (Preview)
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-[Azure HDInsight on AKS](../overview.md) is a managed cloud-based service for big data analytics that helps organizations process large amounts data. This tutorial shows how to use Delta Lake scenario in Azure HDInsight on AKS Spark cluster.
+[Azure HDInsight on AKS](../overview.md) is a managed cloud-based service for big data analytics that helps organizations process large amounts of data. This tutorial shows how to use Delta Lake in Azure HDInsight on AKS with an Apache Spark™ cluster.
## Prerequisite
-1. Create an [Azure HDInsight on AKS Spark cluster](./create-spark-cluster.md)
+1. Create an [Apache Spark™ cluster in Azure HDInsight on AKS](./create-spark-cluster.md)
- :::image type="content" source="./media/azure-hdinsight-spark-on-aks-delta-lake/create-spark-cluster.png" alt-text="Screenshot showing spark cluster creation." lightbox="./media/azure-hdinsight-spark-on-aks-delta-lake/create-spark-cluster.png":::
+ :::image type="content" source="./media/azure-hdinsight-spark-on-aks-delta-lake/create-spark-cluster.png" alt-text="Screenshot showing spark cluster creation." lightbox="./media/azure-hdinsight-spark-on-aks-delta-lake/create-spark-cluster.png":::
1. Run Delta Lake scenario in Jupyter Notebook. Create a Jupyter notebook and select "Spark" while creating a notebook, since the following example is in Scala.
Last updated 08/29/2023
### Provide require configurations for the delta lake
-Delta Lake Spark Compatibility matrix - [Delta Lake](https://docs.delta.io/latest/releases.html), change Delta Lake version based on Spark Version.
+For the Delta Lake and Apache Spark compatibility matrix, see [Delta Lake](https://docs.delta.io/latest/releases.html); change the Delta Lake version based on the Apache Spark version.
``` %%configure -f { "conf": {"spark.jars.packages": "io.delta:delta-core_2.12:1.0.1,net.andreinc:mockneat:0.4.8",
dfTxLog.select(col("add")("path").alias("file_path")).withColumn("version",subst
:::image type="content" source="./media/azure-hdinsight-spark-on-aks-delta-lake/data-after-each-data-load.png" alt-text="Screenshot KPI data after each data load." border="true" lightbox="./media/azure-hdinsight-spark-on-aks-delta-lake/data-after-each-data-load.png":::
+## Reference
+
+* Apache, Apache Spark, Spark, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+
hdinsight-aks Configuration Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/configuration-management.md
Title: Configuration management in HDInsight on AKS Spark
-description: Learn how to perform Configuration management in HDInsight on AKS Spark
+ Title: Configuration management in HDInsight on AKS with Apache Spark™
+description: Learn how to perform Configuration management in HDInsight on AKS with Apache Spark™ cluster
Previously updated : 08/29/2023 Last updated : 10/19/2023
-# Configuration management in HDInsight on AKS Spark
+# Configuration management in HDInsight on AKS with Apache Spark™ cluster
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-Azure HDInsight on AKS is a managed cloud-based service for big data analytics that helps organizations process large amounts data. This tutorial shows how to use configuration management in Azure HDInsight on AKS Spark cluster.
+Azure HDInsight on AKS is a managed cloud-based service for big data analytics that helps organizations process large amounts of data. This tutorial shows how to use configuration management in Azure HDInsight on AKS with an Apache Spark™ cluster.
-Configuration management is used to add specific configurations into the spark cluster.
+Configuration management is used to add specific configurations into the Apache Spark cluster.
When a user updates a configuration in the management portal, the corresponding service is restarted in a rolling manner.
When user updates a configuration in the management portal the corresponding ser
> Selecting **Save** will restart the clusters. > It is advisable not to have any active jobs while making configuration changes, since restarting the cluster may impact the active jobs.
+## Reference
+
+* Apache, Apache Spark, Spark, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+
## Next steps * [Library management in Spark](./library-management.md)
hdinsight-aks Connect To One Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/connect-to-one-lake-storage.md
Title: Connect to OneLake Storage
description: Learn how to connect to OneLake storage Previously updated : 08/29/2023 Last updated : 10/27/2023 # Connect to OneLake Storage
Last updated 08/29/2023
This tutorial shows how to connect to OneLake with a Jupyter notebook from an Azure HDInsight on AKS cluster.
-1. Create an HDInsight on AKS Spark cluster. Follow these instructions: Set up clusters in HDInsight on AKS.
+1. Create an HDInsight on AKS cluster with Apache Spark™. Follow these instructions: Set up clusters in HDInsight on AKS.
1. While providing cluster information, remember your Cluster login Username and Password, as you need them later to access the cluster. 1. Create a user assigned managed identity (UAMI): Create for Azure HDInsight on AKS - UAMI and choose it as the identity in the **Storage** screen.
This tutorial shows how to connect to OneLake with a Jupyter notebook from an Az
1. In the Azure portal, look for your cluster and select the notebook. :::image type="content" source="./media/connect-to-one-lake-storage/overview-page.png" alt-text="Screenshot showing cluster overview page." lightbox="./media/connect-to-one-lake-storage/overview-page.png":::
-1. Create a new Spark Notebook.
+1. Create a new Notebook and select type as **pyspark**.
1. Copy the workspace and Lakehouse names into your notebook and build your OneLake URL for your Lakehouse. Now you can read any file from this file path. ``` fp = 'abfss://' + 'Workspace Name' + '@onelake.dfs.fabric.microsoft.com/' + 'Lakehouse Name' + '/Files/'
This tutorial shows how to connect to OneLake with a Jupyter notebook from an Az
`writecsvdf = df.write.format("csv").save(fp + "out.csv")` 1. Test that your data was successfully written by checking in your Lakehouse or by reading your newly loaded file.+
+## Reference
+
+* Apache, Apache Spark, Spark, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Hdinsight On Aks Spark Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/hdinsight-on-aks-spark-overview.md
Title: What is Apache Spark in HDInsight on AKS? (Preview)
-description: An introduction to Apache Spark in HDInsight on AKS
+ Title: What is Apache Spark™ in HDInsight on AKS? (Preview)
+description: An introduction to Apache Spark™ in HDInsight on AKS
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# What is Apache Spark in HDInsight on AKS? (Preview)
+# What is Apache Spark™ in HDInsight on AKS? (Preview)
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
-Apache Spark is a parallel processing framework that supports in-memory processing to boost the performance of big-data analytic applications.
+Apache Spark™ is a parallel processing framework that supports in-memory processing to boost the performance of big-data analytic applications.
-Spark provides primitives for in-memory cluster computing. A Spark job can load and cache data into memory and query it repeatedly. In-memory computing is faster than disk-based applications, such as Hadoop, which shares data through Hadoop distributed file system (HDFS). Spark allows integration with the Scala and Python programming languages to let you manipulate distributed data sets like local collections. There's no need to structure everything as map and reduce operations.
+Apache Spark™ provides primitives for in-memory cluster computing. A Spark job can load and cache data into memory and query it repeatedly. In-memory computing is faster than disk-based applications, such as Hadoop, which shares data through Hadoop distributed file system (HDFS). Apache Spark allows integration with the Scala and Python programming languages to let you manipulate distributed data sets like local collections. There's no need to structure everything as map and reduce operations.
:::image type="content" source="./media/spark-overview/spark-overview.png" alt-text="Diagram showing Spark overview in HDInsight on AKS.":::
-## HDInsight Spark in AKS
+## Apache Spark cluster with HDInsight on AKS
Azure HDInsight is a managed, full-spectrum, open-source analytics service for enterprises.
-Apache Spark in Azure HDInsight is the managed spark service in Microsoft Azure. With Apache Spark on AKS in Azure HDInsight, you can store and process your data all within Azure. Spark clusters in HDInsight are compatible with or [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md), allows you to apply Spark processing on your existing data stores.
+Apache Spark™ in Azure HDInsight on AKS is the managed Spark service in Microsoft Azure. With Apache Spark in Azure HDInsight on AKS, you can store and process your data all within Azure. Spark clusters in HDInsight are compatible with [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md), which allows you to apply Spark processing on your existing data stores.
The Apache Spark framework for HDInsight on AKS enables fast data analytics and cluster computing using in-memory processing. Jupyter Notebook lets you interact with your data, combine code with markdown text, and do simple visualizations.
-Spark on AKS in HDInsight composed of multiple components as pods.
+Apache Spark on AKS in HDInsight is composed of multiple components running as pods.
## Cluster Controllers Cluster controllers are responsible for installing and managing the respective services. Various controllers are installed and managed in a Spark cluster.
-## Spark service components
+## Apache Spark service components
**Zookeeper service:** A three-node Zookeeper cluster serves as the distributed coordinator or high-availability storage for other services. **Yarn service:** A Hadoop Yarn cluster; Spark jobs are scheduled in the cluster as Yarn applications.
-**Client Interfaces:** HDInsight on AKS Spark provides various client interfaces. Livy Server, Jupyter Notebook, Spark History Server, provides Spark services to HDInsight on AKS users.
+**Client Interfaces:** Apache Spark clusters in HDInsight on AKS provide various client interfaces. Livy Server, Jupyter Notebook, and Spark History Server provide Spark services to HDInsight on AKS users.
+
+## Reference
+
+* Apache, Apache Spark, Spark, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Submit Manage Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/submit-manage-jobs.md
Title: How to submit and manage jobs on a Spark cluster in Azure HDInsight on AKS
-description: Learn how to submit and manage jobs on a Spark cluster in HDInsight on AKS
+ Title: How to submit and manage jobs on an Apache Spark™ cluster in Azure HDInsight on AKS
+description: Learn how to submit and manage jobs on an Apache Spark™ cluster in HDInsight on AKS
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# Submit and manage jobs on a Spark cluster in HDInsight on AKS
+# Submit and manage jobs on an Apache Spark™ cluster in HDInsight on AKS
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
Once the cluster is created, user can use various interfaces to submit and manag
## Using Jupyter ### Prerequisites
-An Apache Spark cluster on HDInsight on AKS. For more information, seeΓÇ»[Create an Apache Spark cluster](./create-spark-cluster.md).
+An Apache Spark™ cluster on HDInsight on AKS. For more information, see [Create an Apache Spark cluster](./create-spark-cluster.md).
Jupyter Notebook is an interactive notebook environment that supports various programming languages. ### Create a Jupyter Notebook
-1. Navigate to the Spark cluster page and open the **Overview** tab. Click on Jupyter, it asks you to authenticate and open the Jupyter web page.
+1. Navigate to the Apache Spark™ cluster page and open the **Overview** tab. Select Jupyter; it asks you to authenticate and opens the Jupyter web page.
:::image type="content" source="./media/submit-manage-jobs/select-jupyter-notebook.png" alt-text="Screenshot of how to select Jupyter notebook." border="true" lightbox="./media/submit-manage-jobs/select-jupyter-notebook.png":::
Jupyter Notebook is an interactive notebook environment that supports various pr
## Using Apache Zeppelin notebooks
-HDInsight on AKS Spark clusters includeΓÇ»[Apache Zeppelin notebooks](https://zeppelin.apache.org/). Use the notebooks to run Apache Spark jobs. In this article, you learn how to use the Zeppelin notebook on an HDInsight on AKS cluster.
+Apache Spark clusters in HDInsight on AKS include [Apache Zeppelin notebooks](https://zeppelin.apache.org/). Use the notebooks to run Apache Spark jobs. In this article, you learn how to use the Zeppelin notebook on an HDInsight on AKS cluster.
### Prerequisites An Apache Spark cluster on HDInsight on AKS. For instructions, see [Create an Apache Spark cluster](./create-spark-cluster.md). #### Launch an Apache Zeppelin notebook
-1. Navigate to the Spark cluster Overview page and select Zeppelin notebook from Cluster dashboards. It prompts to authenticate and open the Zeppelin page.
+1. Navigate to the Apache Spark cluster Overview page and select Zeppelin notebook from Cluster dashboards. It prompts to authenticate and open the Zeppelin page.
:::image type="content" source="./media/submit-manage-jobs/select-zeppelin.png" alt-text="Screenshot of how to select Zeppelin." lightbox="./media/submit-manage-jobs/select-zeppelin.png":::
An Apache Spark cluster on HDInsight on AKS. For instructions, see [Create an
:::image type="content" source="./media/submit-manage-jobs/run-spark-submit-job.png" alt-text="Screenshot showing how to run Spark submit job." lightbox="./media/submit-manage-jobs/view-vim-file.png":::
-## Monitor queries on a Spark cluster in HDInsight on AKS
+## Monitor queries on an Apache Spark cluster in HDInsight on AKS
#### Spark History UI
An Apache Spark cluster on HDInsight on AKS. For instructions, see [Create an
:::image type="content" source="./media/submit-manage-jobs/view-logs.png" alt-text="View Logs." lightbox="./media/submit-manage-jobs/view-logs.png":::
+## Reference
+* Apache, Apache Spark, Spark, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Use Hive Metastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/use-hive-metastore.md
Title: How to use Hive metastore in Spark
-description: Learn how to use Hive metastore in Spark
+ Title: How to use Hive metastore in Apache Spark™
+description: Learn how to use Hive metastore in Apache Spark™
Previously updated : 08/29/2023 Last updated : 10/27/2023
-# How to use Hive metastore in Spark
+# How to use Hive metastore with Apache Spark™ cluster
[!INCLUDE [feature-in-preview](../includes/feature-in-preview.md)]
Azure HDInsight on AKS supports custom meta stores, which are recommended for pr
1. Create Azure SQL database 1. Create a key vault for storing the credentials
-1. Configure Metastore while you create a HDInsight Spark cluster
+1. Configure Metastore while you create an HDInsight on AKS cluster with Apache Spark™
1. Operate on External Metastore (Shows databases and do a select limit 1). While you create the cluster, HDInsight service needs to connect to the external metastore and verify your credentials.
While you create the cluster, HDInsight service needs to connect to the external
:::image type="content" source="./media/use-hive-metastore/basic-tab.png" alt-text="Screenshot showing the basic tab." lightbox="./media/use-hive-metastore/basic-tab.png":::
-1. The rest of the details are to be filled in as per the cluster creation rules for [HDInsight on AKS Spark cluster](./create-spark-cluster.md).
+1. The rest of the details are to be filled in as per the cluster creation rules for [Apache Spark cluster in HDInsight on AKS](./create-spark-cluster.md).
1. Click on **Review and Create.**
While you create the cluster, HDInsight service needs to connect to the external
`>> spark.sql("select * from sampleTable").show()` :::image type="content" source="./media/use-hive-metastore/read-table.png" alt-text="Screenshot showing how to read table." lightbox="./media/use-hive-metastore/read-table.png":::
+
+## Reference
+* Apache, Apache Spark, Spark, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Trino Add Catalogs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-add-catalogs.md
Title: Configure catalogs in Azure HDInsight on AKS description: Add catalogs to an existing Trino cluster in HDInsight on AKS + Last updated 10/19/2023
This article demonstrates how you can add a new catalog to your cluster using AR
|values|It's possible to specify the catalog configuration using the content property as a single string, and using separate key-value pairs for each individual Trino catalog property, as shown for the memory catalog.| Deploy the updated ARM template to reflect the changes in your cluster. Learn how to [deploy an ARM template](/azure/azure-resource-manager/templates/deploy-portal).-
hdinsight-aks Trino Service Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-service-configuration.md
Title: Trino cluster configuration description: How to perform service configuration for Trino clusters for HDInsight on AKS. + Last updated 10/19/2023
hdinsight-aks Trino Ui Jdbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-ui-jdbc-driver.md
Title: Trino JDBC driver description: Using the Trino JDBC driver. + Last updated 10/19/2023
healthcare-apis Dicom Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-register-application.md
Title: Register a client application for the DICOM service in Microsoft Entra ID
-description: How to register a client application for the DICOM service in Microsoft Entra ID.
+description: Learn how to register a client application for the DICOM service in Microsoft Entra ID.
# Register a client application for the DICOM service
-In this article, you'll learn how to register a client application for the DICOM&reg; service. You can find more information on [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
+In this article, you learn how to register a client application for the DICOM&reg; service. You can find more information on [Register an application with the Microsoft identity platform](../../active-directory/develop/quickstart-register-app.md).
## Register a new application 1. In the [Azure portal](https://portal.azure.com), select **Microsoft Entra ID**.
-2. Select **App registrations**.
-[ ![Screen shot of new app registration window.](media/register-application-one.png) ](media/register-application-one.png#lightbox)
-3. Select **New registration**.
-4. For Supported account types, select **Accounts in this organization directory only**. Leave the other options as is.
-[ ![Screenshot of new registration account options.](media/register-application-two.png) ](media/register-application-two.png#lightbox)
-5. Select **Register**.
+1. Select **App registrations**.
+
+ [![Screenshot that shows new app registration window.](media/register-application-one.png) ](media/register-application-one.png#lightbox)
+
+1. Select **New registration**.
+1. For **Supported account types**, select **Accounts in this organizational directory only**. Leave the other options as is.
+
+ [![Screenshot that shows new registration account options.](media/register-application-two.png) ](media/register-application-two.png#lightbox)
+
+1. Select **Register**.
## Application ID (client ID)
-After registering a new application, you can find the application (client) ID and Directory (tenant) ID from the overview menu option. Make a note of the values for use later.
+After you register a new application, you can find the **Application (client) ID** and **Directory (tenant) ID** from the **Overview** menu option. Make a note of the values for use later.
-[ ![Screenshot of client ID overview panel.](media/register-application-three.png) ](media/register-application-three.png#lightbox)
+[![Screenshot that shows the client ID Overview pane.](media/register-application-three.png) ](media/register-application-three.png#lightbox)
-## Authentication setting: confidential vs. public
+## Authentication setting: Confidential vs. public
-Select **Authentication** to review the settings. The default value for **Allow public client flows** is "No".
+Select **Authentication** to review the settings. The default value for **Allow public client flows** is **No**.
If you keep this default value, the application registration is a **confidential client application** and a certificate or secret is required.
-[ ![Screenshot of confidential client application.](media/register-application-five.png) ](media/register-application-five.png#lightbox)
+[![Screenshot that shows confidential client application.](media/register-application-five.png) ](media/register-application-five.png#lightbox)
-If you change the default value to "Yes" for the "Allow public client flows" option in the advanced setting, the application registration is a **public client application** and a certificate or secret isn't required. The "Yes" value is useful when you want to use the client application in your mobile app or a JavaScript app where you don't want to store any secrets.
+If you change the default value to **Yes** for the **Allow public client flows** option in the **Advanced** setting, the application registration is a **public client application** and a certificate or secret isn't required. The **Yes** value is useful when you want to use the client application in your mobile app or a JavaScript app where you don't want to store any secrets.
For tools that require a redirect URL, select **Add a platform** to configure the platform.
->[!NOTE]
->
->For Postman, select **Mobile and desktop applications**. Enter "https://www.getpostman.com/oauth2/callback" in the **Custom redirect URIs** section. Select the **Configure** button to save the setting.
+> [!NOTE]
+> For Postman, select **Mobile and desktop applications**. Enter `https://www.getpostman.com/oauth2/callback` in the **Custom redirect URIs** section. Select **Configure** to save the setting.
-[ ![Screenshot of configure other services.](media/register-application-five-bravo.png) ](media/register-application-five-bravo.png#lightbox)
+[![Screenshot that shows configuring other services.](media/register-application-five-bravo.png) ](media/register-application-five-bravo.png#lightbox)
## Certificates & secrets
-Select **Certificates & Secrets** and select **New Client Secret**.
+Select **Certificates & secrets** and select **New client secret**.
Add and then copy the secret value.
-[ ![Screenshot of certificates and secrets.](media/register-application-six.png) ](media/register-application-six.png#lightbox)
+[![Screenshot that shows the Certificates & secrets pane.](media/register-application-six.png) ](media/register-application-six.png#lightbox)
Optionally, you can upload a certificate (public key) and use the Certificate ID, a GUID value associated with the certificate. For testing purposes, you can create a self-signed certificate using tools such as the PowerShell command line, `New-SelfSignedCertificate`, and then export the certificate from the certificate store. ## API permissions
-The following steps are required for the DICOM service. In addition, user access permissions or role assignments for the Azure Health Data Services are managed through RBAC. For more details, visit [Configure Azure RBAC for Azure Health Data Services](./../configure-azure-rbac.md).
+The following steps are required for the DICOM service. In addition, user access permissions or role assignments for Azure Health Data Services are managed through role-based access control (RBAC). For more information, see [Configure Azure RBAC for Azure Health Data Services](./../configure-azure-rbac.md).
-1. Select the **API permissions** blade.
+1. Select the **API permissions** pane.
- [ ![Screenshot of API permission page with Add a permission button highlighted.](./media/dicom-add-apis-permissions.png) ](./media/dicom-add-apis-permissions.png#lightbox)
+ [![Screenshot that shows the API permissions page with the Add a permission button highlighted.](./media/dicom-add-apis-permissions.png) ](./media/dicom-add-apis-permissions.png#lightbox)
-2. Select **Add a permission**.
+1. Select **Add a permission**.
- Add a permission to the DICOM service by searching for **Azure API for DICOM** under **APIs my organization** uses.
+ Add a permission to the DICOM service by searching for **Azure API for DICOM** under **APIs my organization uses**.
- [ ![Screenshot of Search API permissions page with the APIs my organization uses tab selected.](./media/dicom-search-apis-permissions.png) ](./media/dicom-search-apis-permissions.png#lightbox)
+ [![Screenshot that shows the Search API permissions page with the APIs my organization uses tab selected.](./media/dicom-search-apis-permissions.png) ](./media/dicom-search-apis-permissions.png#lightbox)
- The search result for Azure API for DICOM will only return if you've already deployed the DICOM service in the workspace.
+ The search result for Azure API for DICOM only returns if you've already deployed the DICOM service in the workspace.
- If you're referencing a different resource application, select your DICOM API Resource Application Registration that you created previously under **APIs my organization**.
+ If you're referencing a different resource application, select your DICOM API resource application registration that you created previously under **APIs my organization uses**.
-3. Select scopes (permissions) that the confidential client application will ask for on behalf of a user. Select **Dicom.ReadWrite**, and then select **Add permissions**.
+1. Select scopes (permissions) that the confidential client application asks for on behalf of a user. Select **Dicom.ReadWrite**, and then select **Add permissions**.
- [ ![Screenshot of scopes (permissions) that the client application will ask for on behalf of a user.](./media/dicom-select-scopes-new.png) ](./media/dicom-select-scopes-new.png#lightbox)
+ [![Screenshot that shows scopes (permissions) that the client application asks for on behalf of a user.](./media/dicom-select-scopes-new.png) ](./media/dicom-select-scopes-new.png#lightbox)
-Your application registration is now complete.
+Your application registration is now finished.
[!INCLUDE [DICOM trademark statement](../includes/healthcare-apis-dicom-trademark.md)]
healthcare-apis Get Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-access-token.md
Title: Get an access token for the DICOM service in Azure Health Data Services
description: Find out how to secure your access to the DICOM service with a token. Use the Azure command-line tool and unique identifiers to manage your medical images. -+ Last updated 10/13/2023
You can use a token with the DICOM service [using cURL](dicomweb-standard-apis-c
```
curl -X GET --header "Authorization: Bearer $token" https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com/v<version of REST API>/changefeed
```
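The `$token` value used in the preceding call can be obtained with the Azure CLI. This is a minimal sketch that assumes the standard DICOM service resource URL used elsewhere in this article:

```azurecli-interactive
# Sign in, then request an access token for the DICOM service resource and store it in $token
az login
token=$(az account get-access-token --resource=https://dicom.healthcareapis.azure.com --query accessToken --output tsv)
```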
healthcare-apis Get Started With Analytics Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-analytics-dicom.md
Title: Get started using DICOM data in analytics workloads - Azure Health Data Services
-description: This guide demonstrates how to use Azure Data Factory and Microsoft Fabric to perform analytics on DICOM data.
+description: This article demonstrates how to use Azure Data Factory and Microsoft Fabric to perform analytics on DICOM data.
Last updated 10/13/2023
-# Get Started using DICOM Data in Analytics Workloads
+# Get started using DICOM data in analytics workloads
-This article details how to get started using DICOM&reg; data in analytics workloads with Azure Data Factory and Microsoft Fabric.
+This article describes how to get started by using DICOM&reg; data in analytics workloads with Azure Data Factory and Microsoft Fabric.
## Prerequisites
-Before getting started, ensure you have done the following steps:
-
-* Deploy an instance of the [DICOM Service](deploy-dicom-services-in-azure.md).
-* Create a [storage account with Azure Data lake Storage Gen2 (ADLS Gen2) capabilities](../../storage/blobs/create-data-lake-storage-account.md) by enabling a hierarchical namespace.
- * Create a container to store DICOM metadata, for example, named "dicom".
-* Create an instance of [Azure Data Factory (ADF)](../../data-factory/quickstart-create-data-factory.md).
- * Ensure that a [system assigned managed identity](../../data-factory/data-factory-service-identity.md) has been enabled.
-* Create a [Lakehouse](/fabric/data-engineering/tutorial-build-lakehouse) in Microsoft Fabric.
-* Add role assignments to the ADF system assigned managed identity for the DICOM Service and the ADLS Gen2 storage account.
+
+Before you get started, ensure that you've done the following steps:
+
+* Deploy an instance of the [DICOM service](deploy-dicom-services-in-azure.md).
+* Create a [storage account with Azure Data Lake Storage Gen2 capabilities](../../storage/blobs/create-data-lake-storage-account.md) by enabling a hierarchical namespace:
+ * Create a container to store DICOM metadata, for example, named `dicom`.
+* Create a [Data Factory](../../data-factory/quickstart-create-data-factory.md) instance:
+ * Enable a [system-assigned managed identity](../../data-factory/data-factory-service-identity.md).
+* Create a [lakehouse](/fabric/data-engineering/tutorial-build-lakehouse) in Fabric.
+* Add role assignments to the Data Factory system-assigned managed identity for the DICOM service and the Data Lake Storage Gen2 storage account (an Azure CLI sketch of these prerequisites follows this list):
* Add the **DICOM Data Reader** role to grant permission to the DICOM service.
- * Add the **Storage Blob Data Contributor** role to grant permission to the ADLS Gen2 account.
+ * Add the **Storage Blob Data Contributor** role to grant permission to the Data Lake Storage Gen2 account.
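As noted in the list, the storage and role-assignment prerequisites can also be scripted with the Azure CLI. This is a rough sketch with placeholder names, principal IDs, and scopes, not the documented procedure:

```azurecli-interactive
# Storage account with a hierarchical namespace (Data Lake Storage Gen2) and a container for DICOM metadata
az storage account create --name <storageaccount> --resource-group <resource-group> --kind StorageV2 --hns true
az storage container create --account-name <storageaccount> --name dicom --auth-mode login

# Grant the Data Factory system-assigned managed identity access to the DICOM service and the storage account
az role assignment create --assignee <data-factory-principal-id> --role "DICOM Data Reader" --scope <dicom-service-resource-id>
az role assignment create --assignee <data-factory-principal-id> --role "Storage Blob Data Contributor" --scope <storage-account-resource-id>
```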
-## Configure an Azure Data Factory pipeline for the DICOM service
+## Configure a Data Factory pipeline for the DICOM service
-In this example, an Azure Data Factory [pipeline](../../data-factory/concepts-pipelines-activities.md) will be used to write DICOM attributes for instances, series, and studies into a storage account in a [Delta table](https://delta.io/) format.
+In this example, a Data Factory [pipeline](../../data-factory/concepts-pipelines-activities.md) is used to write DICOM attributes for instances, series, and studies into a storage account in a [Delta table](https://delta.io/) format.
-From the Azure portal, open the Azure Data Factory instance and select **Launch Studio** to begin.
+From the Azure portal, open the Data Factory instance and select **Launch studio** to begin.
### Create linked services
-Azure Data Factory pipelines read from _data sources_ and write to _data sinks_, typically other Azure services. These connections to other services are managed as _linked services_. The pipeline in this example will read data from a DICOM service and write its output to a storage account, so a linked service must be created for both.
-#### Create linked service for the DICOM service
-1. In the Azure Data Factory Studio, select **Manage** from the navigation menu. Under **Connections** select **Linked services** and then select **New**.
+Data Factory pipelines read from _data sources_ and write to _data sinks_, which are typically other Azure services. These connections to other services are managed as _linked services_.
+
+The pipeline in this example reads data from a DICOM service and writes its output to a storage account, so a linked service must be created for both.
+
+#### Create a linked service for the DICOM service
+
+1. In Azure Data Factory Studio, select **Manage** from the menu on the left. Under **Connections**, select **Linked services** and then select **New**.
+ :::image type="content" source="media/data-factory-linked-services.png" alt-text="Screenshot that shows the Linked services screen in Data Factory." lightbox="media/data-factory-linked-services.png":::
-2. On the New linked service panel, search for "REST". Select the **REST** tile and then **Continue**.
+1. On the **New linked service** pane, search for **REST**. Select the **REST** tile and then select **Continue**.
+ :::image type="content" source="media/data-factory-rest.png" alt-text="Screenshot that shows the New linked service pane with the REST tile selected." lightbox="media/data-factory-rest.png":::
-3. Enter a **Name** and **Description** for the linked service.
+1. Enter a **Name** and **Description** for the linked service.
+ :::image type="content" source="media/data-factory-linked-service-dicom.png" alt-text="Screenshot that shows the New linked service pane with DICOM service details." lightbox="media/data-factory-linked-service-dicom.png":::
-4. In the **Base URL** field, enter the Service URL for your DICOM service. For example, a DICOM service named `contosoclinic` in the `contosohealth` workspace will have the Service URL `https://contosohealth-contosoclinic.dicom.azurehealthcareapis.com`.
+1. In the **Base URL** field, enter the service URL for your DICOM service. For example, a DICOM service named `contosoclinic` in the `contosohealth` workspace has the service URL `https://contosohealth-contosoclinic.dicom.azurehealthcareapis.com`.
-5. For Authentication type, select **System Assigned Managed Identity**.
+1. For **Authentication type**, select **System Assigned Managed Identity**.
-6. For **AAD resource**, enter `https://dicom.healthcareapis.azure.com`. Note, this URL is the same for all DICOM service instances.
+1. For **AAD resource**, enter `https://dicom.healthcareapis.azure.com`. This URL is the same for all DICOM service instances.
-7. After populating the required fields, select **Test connection** to ensure the identity's roles are correctly configured.
+1. After you fill in the required fields, select **Test connection** to ensure the identity's roles are correctly configured.
-8. When the connection test is successful, select **Create**.
+1. When the connection test is successful, select **Create**.
-#### Create linked service for Azure Data Lake Storage Gen2
-1. In the Azure Data Factory Studio, select **Manage** from the navigation menu. Under **Connections** select **Linked services** and then select **New**.
+#### Create a linked service for Azure Data Lake Storage Gen2
-2. On the New linked service panel, search for "Azure Data Lake Storage Gen2". Select the **Azure Data Lake Storage Gen2** tile and then **Continue**.
+1. In Data Factory Studio, select **Manage** from the menu on the left. Under **Connections**, select **Linked services** and then select **New**.
+1. On the **New linked service** pane, search for **Azure Data Lake Storage Gen2**. Select the **Azure Data Lake Storage Gen2** tile and then select **Continue**.
-3. Enter a **Name** and **Description** for the linked service.
+ :::image type="content" source="media/data-factory-adls.png" alt-text="Screenshot that shows the New linked service pane with the Azure Data Lake Storage Gen2 tile selected." lightbox="media/data-factory-adls.png":::
+1. Enter a **Name** and **Description** for the linked service.
-4. For Authentication type, select **System Assigned Managed Identity**.
+ :::image type="content" source="media/data-factory-linked-service-adls.png" alt-text="Screenshot that shows the New linked service pane with Data Lake Storage Gen2 details." lightbox="media/data-factory-linked-service-adls.png":::
-5. Enter the storage account details by entering the URL to the storage account manually or by selecting the Azure subscription and storage account from dropdowns.
+1. For **Authentication type**, select **System Assigned Managed Identity**.
-6. After populating the required fields, select **Test connection** to ensure the identity's roles are correctly configured.
+1. Enter the storage account details by entering the URL to the storage account manually. Or you can select the Azure subscription and storage account from dropdowns.
-7. When the connection test is successful, select **Create**.
+1. After you fill in the required fields, select **Test connection** to ensure the identity's roles are correctly configured.
+
+1. When the connection test is successful, select **Create**.
### Create a pipeline for DICOM data
-Azure Data Factory pipelines are a collection of _activities_ that perform a task, like copying DICOM metadata to Delta tables. This section details the creation of a pipeline that regularly synchronizes DICOM data to Delta tables as data is added to, updated in, and deleted from a DICOM service.
-1. Select **Author** from the navigation menu. In the **Factory Resources** pane, select the plus (+) to add a new resource. Select **Pipeline** and then **Template gallery** from the menu.
+Data Factory pipelines are a collection of _activities_ that perform a task, like copying DICOM metadata to Delta tables. This section details the creation of a pipeline that regularly synchronizes DICOM data to Delta tables as data is added to, updated in, and deleted from a DICOM service.
+
+1. Select **Author** from the menu on the left. In the **Factory Resources** pane, select the plus sign (+) to add a new resource. Select **Pipeline** and then select **Template gallery** from the menu.
+
+ :::image type="content" source="media/data-factory-create-pipeline-menu.png" alt-text="Screenshot that shows Template gallery selected under Pipeline." lightbox="media/data-factory-create-pipeline-menu.png":::
+1. In the **Template gallery**, search for **DICOM**. Select the **Copy DICOM Metadata Changes to ADLS Gen2 in Delta Format** tile and then select **Continue**.
-2. In the Template gallery, search for "DICOM". Select the **Copy DICOM Metadata Changes to ADLS Gen2 in Delta Format** tile and then **Continue**.
+ :::image type="content" source="media/data-factory-gallery-dicom.png" alt-text="Screenshot that shows the DICOM template selected in the Template gallery." lightbox="media/data-factory-gallery-dicom.png":::
+1. In the **Inputs** section, select the linked services previously created for the DICOM service and Data Lake Storage Gen2 account.
-3. In the **Inputs** section, select the linked services previously created for the DICOM service and Azure Data Lake Storage Gen2 account.
+ :::image type="content" source="media/data-factory-create-pipeline.png" alt-text="Screenshot that shows the Inputs section with linked services selected." lightbox="media/data-factory-create-pipeline.png":::
+1. Select **Use this template** to create the new pipeline.
-4. Select **Use this template** to create the new pipeline.
+## Schedule a pipeline
-## Scheduling a pipeline
-Pipelines are scheduled by _triggers_. There are different types of triggers including _schedule triggers_, which allow pipelines to be triggered on a wall-clock schedule, and _manual triggers_, which trigger pipelines on demand. In this example, a _tumbling window trigger_ is used to periodically run the pipeline given a starting point and regular time interval. For more information about triggers, see the [pipeline execution and triggers article](../../data-factory/concepts-pipeline-execution-triggers.md).
+Pipelines are scheduled by _triggers_. There are different types of triggers. _Schedule triggers_ allow pipelines to be triggered on a wall-clock schedule. _Manual triggers_ trigger pipelines on demand.
+
+In this example, a _tumbling window trigger_ is used to periodically run the pipeline given a starting point and regular time interval. For more information about triggers, see [Pipeline execution and triggers in Azure Data Factory or Azure Synapse Analytics](../../data-factory/concepts-pipeline-execution-triggers.md).
### Create a new tumbling window trigger
-1. Select **Author** from the navigation menu. Select the pipeline for the DICOM service and select **Add trigger** and **New/Edit** from the menu bar.
+1. Select **Author** from the menu on the left. Select the pipeline for the DICOM service and select **Add trigger** and **New/Edit** from the menu bar.
+
+ :::image type="content" source="media/data-factory-add-trigger.png" alt-text="Screenshot that shows the pipeline view of Data Factory Studio with the Add trigger button on the menu bar selected." lightbox="media/data-factory-add-trigger.png":::
-2. In the **Add triggers** panel, select the **Choose trigger** dropdown and then **New**.
+1. On the **Add triggers** pane, select the **Choose trigger** dropdown and then select **New**.
-3. Enter a **Name** and **Description** for the trigger.
+1. Enter a **Name** and **Description** for the trigger.
+ :::image type="content" source="media/data-factory-new-trigger.png" alt-text="Screenshot that shows the New trigger pane with the Name, Description, Type, Date, and Recurrence fields." lightbox="media/data-factory-new-trigger.png":::
-4. Select **Tumbling window** as the type.
+1. Select **Tumbling window** as the **Type**.
-5. To configure a pipeline that runs hourly, set the recurrence to **1 Hour**.
+1. To configure a pipeline that runs hourly, set the **Recurrence** to **1 Hour**.
-6. Expand the **Advanced** section and enter a **Delay** of **15 minutes**. This will allow any pending operations at the end of an hour to complete before processing.
+1. Expand the **Advanced** section and enter a **Delay** of **15 minutes**. This setting allows any pending operations at the end of an hour to complete before processing.
-7. Set the **Max concurrency** to **1** to ensure consistency across tables.
+1. Set **Max concurrency** to **1** to ensure consistency across tables.
-8. Select **Ok** to continue configuring the trigger run parameters.
+1. Select **OK** to continue configuring the trigger run parameters.
### Configure trigger run parameters
-Triggers not only define when to run a pipeline, they also include [parameters](../../data-factory/how-to-use-trigger-parameterization.md) that are passed to the pipeline execution. The **Copy DICOM Metadata Changes to Delta** template defines a few parameters detailed in the table below. Note, if no value is supplied during configuration, the listed default value will be used for each parameter.
+
+Triggers define when to run a pipeline. They also include [parameters](../../data-factory/how-to-use-trigger-parameterization.md) that are passed to the pipeline execution. The **Copy DICOM Metadata Changes to Delta** template defines a few parameters that are described in the following table. If no value is supplied during configuration, the listed default value is used for each parameter.
| Parameter name | Description | Default value |
| :- | :- | : |
-| BatchSize | The maximum number of changes to retrieve at a time from the change feed (max 200). | `200` |
-| ApiVersion | The API version for the Azure DICOM Service (min 2). | `2` |
-| StartTime | The inclusive start time for DICOM changes. | `0001-01-01T00:00:00Z` |
-| EndTime | The exclusive end time for DICOM changes. | `9999-12-31T23:59:59Z` |
-| ContainerName | The container name for the resulting Delta tables. | `dicom` |
-| InstanceTablePath | The path containing the Delta table for DICOM SOP instances within the container.| `instance` |
-| SeriesTablePath | The path containing the Delta table for DICOM series within the container. | `series` |
-| StudyTablePath | The path containing the Delta table for DICOM studies within the container. | `study` |
-| RetentionHours | The maximum retention in hours for data in the Delta tables. | `720` |
+| BatchSize | The maximum number of changes to retrieve at a time from the change feed (maximum 200) | `200` |
+| ApiVersion | The API version for the Azure DICOM service (minimum 2) | `2` |
+| StartTime | The inclusive start time for DICOM changes | `0001-01-01T00:00:00Z` |
+| EndTime | The exclusive end time for DICOM changes | `9999-12-31T23:59:59Z` |
+| ContainerName | The container name for the resulting Delta tables | `dicom` |
+| InstanceTablePath | The path that contains the Delta table for DICOM SOP instances within the container | `instance` |
+| SeriesTablePath | The path that contains the Delta table for DICOM series within the container | `series` |
+| StudyTablePath | The path that contains the Delta table for DICOM studies within the container | `study` |
+| RetentionHours | The maximum retention in hours for data in the Delta tables | `720` |
+
+1. On the **Trigger Run Parameters** pane, enter the **ContainerName** value that matches the name of the storage container created in the prerequisites.
-1. In the **Trigger run parameters** panel, enter in the **ContainerName** that matches the name of the storage container created in the prerequisites.
+ :::image type="content" source="media/data-factory-trigger-parameters.png" alt-text="Screenshot that shows the Trigger Run Parameters pane, with StartTime and EndTime values entered." lightbox="media/data-factory-trigger-parameters.png":::
+1. For **StartTime**, use the system variable `@formatDateTime(trigger().outputs.windowStartTime)`.
-2. For **StartTime** use the system variable `@formatDateTime(trigger().outputs.windowStartTime)`.
+1. For **EndTime**, use the system variable `@formatDateTime(trigger().outputs.windowEndTime)`.
-3. For **EndTime** use the system variable `@formatDateTime(trigger().outputs.windowEndTime)`.
+ > [!NOTE]
+ > Only tumbling window triggers support the system variables:
+ > * `@trigger().outputs.windowStartTime` and
+ > * `@trigger().outputs.windowEndTime`
+ >
+ > Schedule triggers use different system variables:
+ > * `@trigger().scheduledTime` and
+ > * `@trigger().startTime`
+ >
+ > Learn more about [trigger types](../../data-factory/concepts-pipeline-execution-triggers.md#trigger-type-comparison).
-> [!NOTE]
-> Only tumbling window triggers support the system variables:
-> * `@trigger().outputs.windowStartTime` and
-> * `@trigger().outputs.windowEndTime`
->
-> Schedule triggers use different system variables:
-> * `@trigger().scheduledTime` and
-> * `@trigger().startTime`
->
-> Learn more about [trigger types](../../data-factory/concepts-pipeline-execution-triggers.md#trigger-type-comparison).
+1. Select **Save** to create the new trigger. Select **Publish** to begin your trigger running on the defined schedule.
-4. Select **Save** to create the new trigger. Be sure to select **Publish** on the menu bar to begin your trigger running on the defined schedule.
+ :::image type="content" source="media/data-factory-publish.png" alt-text="Screenshot that shows the Publish button on the main menu bar." lightbox="media/data-factory-publish.png":::
+After the trigger is published, it can be triggered manually by using the **Trigger now** option. If the start time was set for a value in the past, the pipeline starts immediately.
-After the trigger is published, it can be triggered manually using the **Trigger now** option. If the start time was set for a value in the past, the pipeline will start immediately.
+## Monitor pipeline runs
-## Monitoring pipeline runs
-Trigger runs and their associated pipeline runs can be monitored in the **Monitor** tab. Here, users can browse when each pipeline ran, how long it took to execute, and potentially debug any problems that arose.
+You can monitor trigger runs and their associated pipeline runs on the **Monitor** tab. Here, you can browse when each pipeline ran and how long it took to run. You can also potentially debug any problems that arose.
## Microsoft Fabric
-[Microsoft Fabric](https://www.microsoft.com/microsoft-fabric) is an all-in-one analytics solution that sits on top of [Microsoft OneLake](/fabric/onelake/onelake-overview). With the use of [Microsoft Fabric Lakehouse](/fabric/data-engineering/lakehouse-overview), data in OneLake can be managed, structured, and analyzed in a single location. Any data outside of OneLake, written to Azure Data Lake Storage Gen2, can be connected to OneLake as shortcuts to take advantage of Fabric's suite of tools.
-### Creating shortcuts
-1. Navigate to the lakehouse created in the prerequisites. In the **Explorer** view, select the triple-dot menu (...) next to the **Tables** folder.
+[Fabric](https://www.microsoft.com/microsoft-fabric) is an all-in-one analytics solution that sits on top of [Microsoft OneLake](/fabric/onelake/onelake-overview). With the use of a [Fabric lakehouse](/fabric/data-engineering/lakehouse-overview), you can manage, structure, and analyze data in OneLake in a single location. Any data outside of OneLake, written to Data Lake Storage Gen2, can be connected to OneLake as shortcuts to take advantage of Fabric's suite of tools.
+
+### Create shortcuts
+
+1. Go to the lakehouse created in the prerequisites. In the **Explorer** view, select the ellipsis menu (**...**) next to the **Tables** folder.
+
+1. Select **New shortcut** to create a new shortcut to the storage account that contains the DICOM analytics data.
-2. Select **New shortcut** to create a new shortcut to the storage account that contains the DICOM analytics data.
+ :::image type="content" source="media/fabric-create-shortcut.png" alt-text="Screenshot that shows the New shortcut option in the Explorer view." lightbox="media/fabric-create-shortcut.png":::
+1. Select **Azure Data Lake Storage Gen2** as the source for the shortcut.
-3. Select **Azure Data Lake Storage Gen2** as the source for the shortcut.
+ :::image type="content" source="media/fabric-new-shortcut.png" alt-text="Screenshot that shows the New shortcut view with the Azure Data Lake Storage Gen2 tile." lightbox="media/fabric-new-shortcut.png":::
+1. Under **Connection settings**, enter the **URL** you used in the [Linked services](#create-a-linked-service-for-azure-data-lake-storage-gen2) section.
-4. Under **Connection settings**, enter the **URL** used in the [Linked Services](#create-linked-service-for-azure-data-lake-storage-gen2) section above.
+ :::image type="content" source="media/fabric-connection-settings.png" alt-text="Screenshot that shows the connection settings for the Azure Data Lake Storage Gen2 account." lightbox="media/fabric-connection-settings.png":::
+1. Select an existing connection or create a new connection by selecting the **Authentication kind** you want to use.
-5. Select an existing connection or create a new connection, selecting the Authentication kind you want to use.
+ > [!NOTE]
+ > There are a few options for authenticating between Data Lake Storage Gen2 and Fabric. You can use an organizational account or a service principal. We don't recommend using account keys or shared access signature tokens.
-> [!NOTE]
-> For authenticating between Azure Data Lake Storage Gen2 and Microsoft Fabric, there are a few options, including an organizational account and service principal; it is not recommended to use account keys or Shared Access Signature (SAS) tokens.
+1. Select **Next**.
-6. Select **Next**.
+1. Enter a **Shortcut Name** that represents the data created by the Data Factory pipeline. For example, for the `instance` Delta table, the shortcut name should probably be **instance**.
-7. Enter a **Shortcut Name** that represents the data created by the Azure Data Factory pipeline. For example, for the `instance` Delta table, the shortcut name should probably be **instance**.
+1. Enter the **Sub Path** that matches the `ContainerName` parameter from [run parameters](#configure-trigger-run-parameters) configuration and the name of the table for the shortcut. For example, use `/dicom/instance` for the Delta table with the path `instance` in the `dicom` container.
-8. Enter the **Sub Path** that matches the `ContainerName` parameter from [run parameters](#configure-trigger-run-parameters) configuration and the name of the table for the shortcut. For example, use "/dicom/instance" for the Delta table with the path `instance` in the `dicom` container.
+1. Select **Create** to create the shortcut.
-9. Select **Create** to create the shortcut.
+1. Repeat steps 2 to 9 to add the remaining shortcuts to the other Delta tables in the storage account (for example, `series` and `study`).
-10. Repeat steps 2-9 for adding the remaining shortcuts to the other Delta tables in the storage account (e.g. `series` and `study`).
+After you've created the shortcuts, expand a table to show the names and types of the columns.
-After the shortcuts have been created, expanding a table will show the names and types of the columns.
+### Run notebooks
-### Running notebooks
-Once the tables have been created in the lakehouse, they can be queried from [Microsoft Fabric notebooks](/fabric/data-engineering/how-to-use-notebook). Notebooks may be created directly from the lakehouse by selecting **Open Notebook** from the menu bar.
+After the tables are created in the lakehouse, you can query them from [Fabric notebooks](/fabric/data-engineering/how-to-use-notebook). You can create notebooks directly from the lakehouse by selecting **Open Notebook** from the menu bar.
-On the notebook page, the contents of the lakehouse can still be viewed on the left-hand side, including the newly added tables. At the top of the page, select the language for the notebook (the language may also be configured for individual cells). The following example will use Spark SQL.
+On the notebook page, the contents of the lakehouse can still be viewed on the left side, including the newly added tables. At the top of the page, select the language for the notebook. The language can also be configured for individual cells. The following example uses Spark SQL.
-#### Query tables using Spark SQL
-In the cell editor, enter a simple Spark SQL query like a `SELECT` statement.
+#### Query tables by using Spark SQL
+
+In the cell editor, enter a Spark SQL query like a `SELECT` statement.
```sql
SELECT * from instance
```
-This query will select all of the contents from the `instance` table. When ready, select the **Run cell** button to execute the query.
+This query selects all the contents from the `instance` table. When you're ready, select **Run cell** to run the query.
-After a few seconds, the results of the query should appear in a table beneath the cell like (the time might be longer if this is the first Spark query in the session as the Spark context will need to be initialized).
+After a few seconds, the results of the query appear in a table underneath the cell like the example shown here. The amount of time might be longer if this Spark query is the first in the session because the Spark context needs to be initialized.
## Summary

In this article, you learned how to:
-* Use Azure Data Factory templates to create a pipeline from the DICOM service to an Azure Data Lake Storage Gen2 account
-* Configure a trigger to extract DICOM metadata on an hourly schedule
-* Use shortcuts to connect DICOM data in a storage account to a Microsoft Fabric lakehouse
-* Use notebooks to query for DICOM data in the lakehouse
-## Next steps
+* Use Data Factory templates to create a pipeline from the DICOM service to a Data Lake Storage Gen2 account.
+* Configure a trigger to extract DICOM metadata on an hourly schedule.
+* Use shortcuts to connect DICOM data in a storage account to a Fabric lakehouse.
+* Use notebooks to query for DICOM data in the lakehouse.
-Learn more about Azure Data Factory pipelines:
+## Next steps
-* [Pipelines and activities in Azure Data Factory](../../data-factory/concepts-pipelines-activities.md)
+Learn more about Data Factory pipelines:
-* [How to use Microsoft Fabric notebooks](/fabric/data-engineering/how-to-use-notebook)
+* [Pipelines and activities in Data Factory](../../data-factory/concepts-pipelines-activities.md)
+* [Use Microsoft Fabric notebooks](/fabric/data-engineering/how-to-use-notebook)
-
[!INCLUDE [DICOM trademark statement](../includes/healthcare-apis-dicom-trademark.md)]
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
ms.suite: integration
Last updated 10/10/2023--+ # Customer intent: As a logic apps developer, I want to create a Standard logic app workflow that runs in single-tenant Azure Logic Apps using Visual Studio Code.
machine-learning How To Package Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-package-models.md
reviewer: msakande + Last updated 10/04/2023
migrate Discovered Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discovered-metadata.md
ms.
Last updated 02/24/2023-+ # Metadata discovered by Azure Migrate appliance
operator-nexus Howto Use Mde Runtime Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-use-mde-runtime-protection.md
Last updated 10/15/2023-+ # Introduction to the Microsoft Defender for Endpoint runtime protection service
operator-nexus Quickstarts Virtual Machine Deployment Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-virtual-machine-deployment-arm.md
Last updated 07/30/2023-+ # Quickstart: Create an Azure Operator Nexus virtual machine by using Azure Resource Manager template (ARM template)
operator-nexus Quickstarts Virtual Machine Deployment Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-virtual-machine-deployment-bicep.md
Last updated 07/30/2023-+ # Quickstart: Create an Azure Operator Nexus virtual machine by using Bicep
partner-solutions Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create-cli.md
Title: Create Apache Kafka for Confluent Cloud through Azure CLI description: This article describes how to use the Azure CLI to create an instance of Apache Kafka for Confluent Cloud. + Last updated 06/07/2021 - # QuickStart: Get started with Apache Kafka for Confluent Cloud - Azure CLI
partner-solutions Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create-powershell.md
Title: Create Apache Kafka for Confluent Cloud through Azure PowerShell description: This article describes how to use Azure PowerShell to create an instance of Apache Kafka for Confluent Cloud. + Last updated 11/03/2021 - # QuickStart: Get started with Apache Kafka for Confluent Cloud - Azure PowerShell
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/manage.md
Title: Manage a Confluent Cloud description: This article describes management of a Confluent Cloud on the Azure portal. How to set up single sign-on, delete a Confluent organization, and get support. + Last updated 06/07/2021
service-bus-messaging Service Bus Premium Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-premium-messaging.md
Title: Azure Service Bus premium and standard tiers description: This article describes standard and premium tiers of Azure Service Bus. Compares these tiers and provides technical differences. -+ Last updated 05/02/2023
service-fabric How To Managed Cluster Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-application-gateway.md
-+ Last updated 09/05/2023
service-fabric How To Managed Cluster Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-ddos-protection.md
-+ Last updated 09/05/2023
service-fabric How To Managed Cluster Troubleshoot Snat Port Exhaustion Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-troubleshoot-snat-port-exhaustion-issues.md
-+ Last updated 09/05/2023
site-recovery Azure To Azure How To Enable Replication Cmk Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-cmk-disks.md
description: This article describes how to configure replication for VMs with cu
+ Last updated 10/09/2023 - # Replicate machines with Customer-Managed Keys (CMK) enabled disks
spring-apps Application Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/application-observability.md
Last updated 10/02/2023-+ # Optimize application observability for Azure Spring Apps
spring-apps Quickstart Deploy Restful Api App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-restful-api-app.md
Last updated 10/02/2023 -+ # Quickstart: Deploy RESTful API application to Azure Spring Apps
storage Transport Layer Security Configure Migrate To TLS2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-migrate-to-TLS2.md
+
+ Title: Migrate to Transport Layer Security (TLS) 1.2 for Azure Blob Storage
+
+description: Avoid disruptions to client applications that connect to your storage account by migrating to Transport Layer Security (TLS) version 1.2.
+++++ Last updated : 10/27/2023++
+ms.devlang: csharp
++
+# Migrate to TLS 1.2 for Azure Blob Storage
+
+On **Nov 1, 2024**, Azure Blob Storage will stop supporting versions 1.0 and 1.1 of Transport Layer Security (TLS). TLS 1.2 will become the new minimum TLS version. This change affects all existing and new Blob Storage accounts that use TLS 1.0 and 1.1, in all clouds. Storage accounts that already use TLS 1.2 aren't affected by this change.
+
+To avoid disruptions to applications that connect to your storage account, ensure that your account requires clients to send and receive data by using TLS **1.2**, and remove dependencies on TLS versions 1.0 and 1.1.
+
+## About Transport Layer Security
+
+Transport Layer Security (TLS) is an internet security protocol that establishes encryption channels over networks to encrypt communication between your applications and servers. When storage data is accessed via HTTPS connections, communication between client applications and the storage account is encrypted using TLS.
+
+TLS encrypts data sent over the internet to prevent malicious users from accessing private, sensitive information. The client and server perform a TLS handshake to verify each other's identity and determine how they communicate. During the handshake, each party identifies which TLS versions they use. The client and server can communicate if they both support a common version.
+
+## Why use TLS 1.2?
+
+We recommend that customers secure their infrastructure by using TLS 1.2 with Azure Storage. The older TLS versions (1.0 and 1.1) are being deprecated and removed to meet evolving technology and regulatory standards (FedRAMP, NIST) and to provide improved security for our customers.
+
+TLS 1.2 is more secure and faster than TLS 1.0 and 1.1, which don't support modern cryptographic algorithms and cipher suites. Although many customers who use Azure Storage are already using TLS 1.2, we're sharing further guidance to accelerate this transition for customers that are still using TLS 1.0 or 1.1.
+
+## Configure clients to use TLS 1.2
+
+First, identify each client that makes requests to the Blob Storage service of your account. Then, ensure that each client uses TLS 1.2 to make those requests.
+
+For each client application, we recommend the following tasks.
+
+- Update the operating system to the latest version.
+
+- Update your development libraries and frameworks to their latest versions. (For example, Python 3.6 and 3.7 support TLS 1.2).
+
+- Fix hardcoded instances of older security protocols TLS 1.0 and 1.1.
+
+- Configure clients to use TLS 1.2. See [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json).
+
+For more detailed guidance, see the [checklist to deprecate older TLS versions in your environment](/security/engineering/solving-tls1-problem#figure-1-security-protocol-support-by-os-version).
+
+> [!IMPORTANT]
+> Notify your customers and partners of your product or service's migration to TLS 1.2 so that they can make the necessary changes to their applications.
+
+## Enforce TLS 1.2 as the minimum allowed version
+
+In advance of the deprecation date, you can use Azure Policy to enforce a minimum TLS version.
+
+To understand how configuring the minimum TLS version might affect client applications, we recommend that you enable logging for your Azure Storage account and analyze the logs after an interval of time to detect which TLS versions client applications are using.
+
+When you're confident that traffic from clients that use older TLS versions is minimal, or that it's acceptable to fail requests made with an older TLS version, you can begin enforcing a minimum TLS version on your storage account.
+
+To learn how to detect the TLS versions used by client applications, and then enforce TLS 1.2 as the minimum allowed version, see [Enforce a minimum required version of Transport Layer Security (TLS) for incoming requests for Azure Storage](transport-layer-security-configure-minimum-version.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json#detect-the-tls-version-used-by-client-applications).
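When you're ready to enforce the minimum version, the setting can also be applied with the Azure CLI. This is a minimal sketch with placeholder names; see the linked article for the full procedure:

```azurecli-interactive
# Require TLS 1.2 or later for all requests to the storage account
az storage account update --name <storage-account> --resource-group <resource-group> --min-tls-version TLS1_2
```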
++
+## Quick tips
+
+- Windows 8+ has TLS 1.2 enabled by default.
+
+- Windows Server 2016+ has TLS 1.2 enabled by default.
+
+- When possible, avoid hardcoding the protocol version. Instead, configure your applications to always defer to your operating system's default TLS version.
+
+- For example, you can enable the SystemDefaultTLSVersion flag in .NET Framework applications to defer to your operating system's default version. This approach lets your applications take advantage of future TLS versions.
+
+- If you can't avoid hardcoding, specify TLS 1.2.
+
+- Upgrade applications that target .NET Framework 4.5 or earlier. Instead, use .NET Framework 4.7 or later because these versions support TLS 1.2.
+
+ For example, Visual Studio 2013 doesn't support TLS 1.2. Instead, use at least the latest release of Visual Studio 2017.
+
+- Use [Qualys SSL Labs](https://www.ssllabs.com/) to identify which TLS version is requested by clients connecting to your application.
+
+- Use [Fiddler](https://www.telerik.com/fiddler) to identify which TLS version your client uses when you send out HTTPS requests.
+
+## Next steps
+
+- [Solving the TLS 1.0 Problem, 2nd Edition](/security/engineering/solving-tls1-problem) – deep dive into migrating to TLS 1.2.
+
+- [How to enable TLS 1.2 on clients](/mem/configmgr/core/plan-design/security/enable-tls-1-2-client) – for Microsoft Configuration Manager.
+
+- [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) – contains instructions to update TLS version in PowerShell
+
+- [Enable support for TLS 1.2 in your environment for Microsoft Entra ID TLS 1.1 and 1.0 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment) – contains information on updating TLS version for WinHTTP.
+
+- [Transport Layer Security (TLS) best practices with the .NET Framework](/dotnet/framework/network-programming/tls) – best practices when configuring security protocols for applications targeting .NET Framework.
+
+- [TLS best practices with the .NET Framework](https://github.com/dotnet/docs/issues/4675) – GitHub to ask questions about best practices with .NET Framework.
+
+- [Troubleshooting TLS 1.2 compatibility with PowerShell](https://github.com/microsoft/azure-devops-tls12) – probe to check TLS 1.2 compatibility and identify issues when incompatible with PowerShell
stream-analytics Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md
To be able to upload certificates, you must have "**Key Vault Administrator**"
| Members | \<Your account information or email> |
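If you prefer the command line, the same role assignment can be made with the Azure CLI. This is a sketch with placeholder values rather than the documented portal steps:

```azurecli-interactive
# Assign yourself the Key Vault Administrator role on the key vault
az role assignment create --assignee <your-user-principal-name> --role "Key Vault Administrator" --scope <key-vault-resource-id>
```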
-### Upload Certificate to Key vault
+### Upload Certificate to Key vault via Azure CLI
-You can use Azure CLI to upload certificates as secrets to your key vault or use the Azure portal to upload the certificate as a secret.
> [!IMPORTANT]
> You must have "**Key Vault Administrator**" permissions for your key vault for this command to work properly.
-> You must upload the certificate as a secret.
-> Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job.
+> You must upload the certificate as a secret. You must use Azure CLI to upload certificates as secrets to your key vault.
+> Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job
-#### Option One - Upload certificate via Azure CLI
+For guidance on setting up the Azure CLI, see [Get started with Azure CLI](https://learn.microsoft.com/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli).
-The following command can upload the certificate as a secret to your key vault.
+Follow these steps to upload your certificate as a secret to your key vault by using the Azure CLI (for example, from a PowerShell terminal):
+**Sign in to the Azure CLI:**
```azurecli-interactive
-az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to secret>
-
+az login
```
-#### Option Two - Upload certificate via the Azure portal
-Use the following steps to upload a certificate as a secret using the Azure portal in your key vault:
-1. Select **Secrets**.
-
-1. Select **Generate/Import** > **Add role assignment** to open the **Add role assignment** page.
-
-1. Complete the following configuration for creating a secret:
+**Connect to your subscription containing your key vault:**
+```azurecli-interactive
+az account set --subscription <subscription name>
+```
- | Setting | Value |
- | | |
- | Upload Options | Certificate |
- | Upload certificate | \<select the certificate to upload> |
- | Name | \<Name you want to give your secret> |
- | activation date | (optional) |
- | expiration date | (optional) |
+**The following command can upload the certificate as a secret to your key vault:**
+```azurecli-interactive
+az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to secret>
+```
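Optionally, you can confirm that the secret exists. This is a sketch that reuses the placeholder names from the preceding command:

```azurecli-interactive
# Show only the secret's ID to confirm the upload succeeded; omit --query to see all attributes, including the value
az keyvault secret show --vault-name <your key vault> --name <name of the secret> --query id --output tsv
```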
### Configure Managed identity

Azure Stream Analytics requires you to configure managed identity to access key vault.
Visit the [Run your Azure Stream Analytics job in an Azure Virtual Network docum
* When configuring your Azure Stream Analytics jobs to use VNET/SWIFT, your job must be configured with at least six (6) streaming units or one (1) V2 streaming unit.
* When using mTLS or SASL_SSL with Azure Key vault, you must convert your Java Key Store to PEM format.
* The minimum version of Kafka you can configure Azure Stream Analytics to connect to is version 0.10.
+* Azure Stream Analytics doesn't support authentication to Confluent Cloud by using OAuth or SAML single sign-on (SSO). You must use an API key via the SASL_SSL protocol.
> [!NOTE]
> For direct help with using the Azure Stream Analytics Kafka output, please reach out to [askasa@microsoft.com](mailto:askasa@microsoft.com).
stream-analytics Stream Analytics Define Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md
The following are the major use cases:
* Log Aggregation
* Stream Processing
-Azure Stream Analytics lets you connect directly to Kafka clusters to ingest data. The solution is low code and entirely managed by the Azure Stream Analytics team at Microsoft, allowing it to meet business compliance standards. The Kafka Adapters are backward compatible and support all versions with the latest client release starting from version 0.10. Users can connect to Kafka clusters inside a VNET and Kafka clusters with a public endpoint, depending on the configurations. The configuration relies on existing Kafka configuration conventions. Supported compression types are None, Gzip, Snappy, LZ4, and Zstd.
+Azure Stream Analytics lets you connect directly to Kafka clusters to ingest data. The solution is low code and entirely managed by the Azure Stream Analytics team at Microsoft, allowing it to meet business compliance standards. The ASA Kafka input is backward compatible and supports all versions with the latest client release starting from version 0.10. Users can connect to Kafka clusters inside a VNET and Kafka clusters with a public endpoint, depending on the configurations. The configuration relies on existing Kafka configuration conventions. Supported compression types are None, Gzip, Snappy, LZ4, and Zstd.
## Configuration

The following table lists the property names and their description for creating a Kafka Input:
You can use four types of security protocols to connect to your Kafka clusters:
### Connect to Confluent Cloud using API key
-The ASA Kafka adapter is a librdkafka-based client, and to connect to confluent cloud, you need TLS certificates that confluent cloud uses for server auth.
+The ASA Kafka input is a librdkafka-based client, and to connect to confluent cloud, you need TLS certificates that confluent cloud uses for server auth.
Confluent uses TLS certificates from Let's Encrypt, an open certificate authority (CA). You can download the ISRG Root X1 certificate in PEM format from the site of [LetsEncrypt](https://letsencrypt.org/certificates/). To authenticate by using the API key Confluent offers, you must use the SASL_SSL protocol and complete the configuration as follows:
To be able to upload certificates, you must have "**Key Vault Administrator**"
| Members | \<Your account information or email> |
-### Upload Certificate to Key vault
+### Upload Certificate to Key vault via Azure CLI
-You can use Azure CLI to upload certificates as secrets to your key vault or use the Azure portal to upload the certificate as a secret.
> [!IMPORTANT]
-> You must upload the certificate as a secret.
-
-#### Option One - Upload certificate via Azure CLI
+> You must have "**Key Vault Administrator**" permissions for your key vault for this command to work properly.
+> You must upload the certificate as a secret. You must use Azure CLI to upload certificates as secrets to your key vault.
+> Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job
+For guidance on setting up the Azure CLI, see [Get started with Azure CLI](https://learn.microsoft.com/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli).
The following command can upload the certificate as a secret to your key vault. You must have "**Key Vault Administrator**" permissions access to your Key vault for this command to work properly.
+**Sign in to the Azure CLI:**
```azurecli-interactive
-az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to secret>
-
+az login
```
-#### Option Two - Upload certificate via the Azure portal
-Use the following steps to upload a certificate as a secret using the Azure portal in your key vault:
-1. Select **Secrets**.
-
-1. Select **Generate/Import** > **Add role assignment** to open the **Add role assignment** page.
-
-1. Complete the following configuration for creating a secret:
+**Connect to your subscription containing your key vault:**
+```azurecli-interactive
+az account set --subscription <subscription name>
+```
- | Setting | Value |
- | | |
- | Upload Options | Certificate |
- | Upload certificate | \<select the certificate to upload> |
- | Name | \<Name you want to give your secret> |
+**The following command can upload the certificate as a secret to your key vault:**
+```azurecli-interactive
+az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to secret>
+```
### Configure Managed identity
Visit the [Run your Azure Stream Analytics job in an Azure Virtual Network docum
### Limitations
-* When configuring your Azure Stream Analytics jobs to use VNET/SWIFT, your job must be configured with at least six (6) streaming units.
+* When configuring your Azure Stream Analytics jobs to use VNET/SWIFT, your job must be configured with at least six (6) streaming units or one (1) V2 streaming unit.
* When using mTLS or SASL_SSL with Azure Key vault, you must convert your Java Key Store to PEM format.
* The minimum version of Kafka you can configure Azure Stream Analytics to connect to is version 0.10.
* Azure Stream Analytics doesn't support authentication to Confluent Cloud by using OAuth or SAML single sign-on (SSO). You must use an API key via the SASL_SSL protocol.

> [!NOTE]
-> For direct help with using the Azure Stream Analytics Kafka adapter, please reach out to [askasa@microsoft.com](mailto:askasa@microsoft.com).
+> For direct help with using the Azure Stream Analytics Kafka input, please reach out to [askasa@microsoft.com](mailto:askasa@microsoft.com).
>
virtual-desktop Deploy Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-virtual-desktop.md
Title: Deploy Azure Virtual Desktop - Azure Virtual Desktop description: Learn how to deploy Azure Virtual Desktop by creating a host pool, workspace, application group, session hosts, and assign users. + Last updated 10/25/2023
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
Title: Prerequisites for Azure Virtual Desktop description: Find what prerequisites you need to complete to successfully connect your users to their Windows desktops and applications. -+ Last updated 10/25/2023
virtual-machines Auto Shutdown Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/auto-shutdown-vm.md
-+ Last updated 09/27/2023- # Auto-shutdown the VM
For more information on how to delete a virtual machine, see [delete a VM](./del
Learn about sizes and how to resize a VM: - Types of virtual machine [sizes.](./sizes.md)-- Change the [size of a virtual machine](./resize-vm.md).
+- Change the [size of a virtual machine](./resize-vm.md).
virtual-machines Disks Migrate Lrs Zrs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-migrate-lrs-zrs.md
Last updated 10/19/2023 -+ # Convert a disk from LRS to ZRS
virtual-wan Virtual Wan Expressroute About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-expressroute-about.md
A virtual hub can contain gateways for site-to-site, ExpressRoute, or point-to-s
## ExpressRoute circuit SKUs supported in Virtual WAN

The following ExpressRoute circuit SKUs can be connected to the hub gateway: Local, Standard, and Premium. ExpressRoute Direct circuits are also supported with Virtual WAN. To learn more about different SKUs, visit [ExpressRoute Circuit SKUs](../expressroute/expressroute-faqs.md#what-is-the-connectivity-scope-for-different-expressroute-circuit-skus). ExpressRoute Local circuits can only be connected to ExpressRoute gateways in the same region, but they can still access resources in spoke virtual networks located in other regions.
-## ExpressRoute performance
+## ExpressRoute gateway performance
ExpressRoute gateways are provisioned in units of 2 Gbps. One scale unit = 2 Gbps with support up to 10 scale units = 20 Gbps.
ExpressRoute gateways are provisioned in units of 2 Gbps. One scale unit = 2 Gbp
## BGP with ExpressRoute in Virtual WAN

Dynamic routing (BGP) is supported. For more information, please see [Dynamic Route Exchange with ExpressRoute](../expressroute/expressroute-routing.md#dynamic-route-exchange). The ASN of the ExpressRoute gateway in the hub and ExpressRoute circuit are fixed and can't be edited at this time.

## ExpressRoute connection concepts
Dynamic routing (BGP) is supported. For more information, please see [Dynamic Ro
> If you have configured a 0.0.0.0/0 route statically in a virtual hub route table or dynamically via a network virtual appliance for traffic inspection, that traffic will bypass inspection when destined for Azure Storage and is in the same region as the ExpressRoute gateway in the virtual hub. As a workaround, you can either use [Private Link](../private-link/private-link-overview.md) to access Azure Storage or put the Azure Storage service in a different region than the virtual hub. >
+## ExpressRoute limits in Virtual WAN
+| Maximum number of circuits connected to the same virtual hub's ExpressRoute gateway | Limit |
+| | |
+| Maximum number of circuits in the same peering location connected to the same virtual hub | 4 |
+| Maximum number of circuits in different peering locations connected to the same virtual hub | 8 |
+
+The above two limits hold true regardless of the number of ExpressRoute gateway scale units deployed. For ExpressRoute circuit route limits, please see [ExpressRoute Circuit Route Advertisement Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#route-advertisement-limits).
## Next steps

Next, for a tutorial on connecting an ExpressRoute circuit to Virtual WAN, see: