Updates from: 11/07/2024 02:06:08
Service Microsoft Docs article Related commit history on GitHub Change details
api-management Import Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-soap-api.md
Previously updated : 10/26/2022 Last updated : 11/05/2024 # Import SOAP API to API Management
To define a wildcard SOAP action:
1. In the portal, select the API you created in the previous step.
1. In the **Design** tab, select **+ Add Operation**.
1. Enter a **Display name** for the operation.
-1. In the URL, select `POST` and enter `/soapAction={any}` in the resource. The template parameter inside the curly brackets is arbitrary and doesn't affect the execution.
+1. In the URL, select `POST` and enter `/?soapAction={any}` in the resource. The template parameter inside the curly brackets is arbitrary and doesn't affect the execution.
+
+> [!NOTE]
+> Don't use the **OpenAPI specification** editor in the **Design** tab to modify a SOAP API.
+ [!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-append-apis.md)]
api-management Xml To Json Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/xml-to-json-policy.md
The `xml-to-json` policy converts a request or response body from XML to JSON. T
## Policy statement ```xml
-<xml-to-json kind="javascript-friendly | direct" apply="always | content-type-xml" consider-accept-header="true | false" always-array-children="true | false"/>
+<xml-to-json kind="javascript-friendly | direct" apply="always | content-type-xml" consider-accept-header="true | false" always-array-child-elements="true | false"/>
```
The `xml-to-json` policy converts a request or response body from XML to JSON. T
|kind|The attribute must be set to one of the following values.<br /><br /> - `javascript-friendly` - the converted JSON has a form friendly to JavaScript developers.<br />- `direct` - the converted JSON reflects the original XML document's structure.<br/><br/>Policy expressions are allowed.|Yes|N/A|
|apply|The attribute must be set to one of the following values.<br /><br /> - `always` - convert always.<br />- `content-type-xml` - convert only if the response Content-Type header indicates presence of XML.<br/><br/>Policy expressions are allowed.|Yes|N/A|
|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - `true` - apply conversion if JSON is requested in the request Accept header.<br />- `false` - always apply conversion.<br/><br/>Policy expressions are allowed.|No|`true`|
-|always-array-children|The attribute must be set to one of the following values.<br /><br /> - `true` - Always convert child elements into a JSON array.<br />- `false` - Only convert multiple child elements into a JSON array. Convert a single child element into a JSON object.<br/><br/>Policy expressions are allowed.|No|`false`|
+|always-array-child-elements|The attribute must be set to one of the following values.<br /><br /> - `true` - Always convert child elements into a JSON array.<br />- `false` - Only convert multiple child elements into a JSON array. Convert a single child element into a JSON object.<br/><br/>Policy expressions are allowed.|No|`false`|
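
For illustration, here's a minimal sketch of the policy in the `outbound` section with concrete attribute values; the values shown are examples only:

```xml
<policies>
    <outbound>
        <base />
        <!-- Convert the XML response body to JavaScript-friendly JSON whenever the backend returns XML -->
        <xml-to-json kind="javascript-friendly" apply="content-type-xml" consider-accept-header="false" always-array-child-elements="false" />
    </outbound>
</policies>
```
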
## Usage
The `xml-to-json` policy converts a request or response body from XML to JSON. T
* [Transformation](api-management-policies.md#transformation)
application-gateway Hsts Http Headers Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/hsts-http-headers-portal.md
+
+ Title: Use header rewrite to add HSTS header in portal - Azure Application Gateway
+description: Learn how to use the Azure portal to configure an Azure Application Gateway with HSTS Policy
++++ Last updated : 11/06/2024+++
+# Add HSTS headers with Azure Application Gateway - Azure portal
+
+This article describes how to use the [header rewrite](./rewrite-http-headers-url.md) feature in the [Application Gateway v2 SKU](./application-gateway-autoscaling-zone-redundant.md) to add the HTTP Strict-Transport-Security (HSTS) response header and better secure traffic through Application Gateway.
+
+HSTS policy helps protect your sites against man-in-the-middle, cookie-hijacking, and protocol downgrade attacks. After a client establishes the first successful HTTPS connection with your HSTS-enabled website, the HSTS header ensures that the client accesses the site only through HTTPS going forward.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Before you begin
+
+You need to have an Application Gateway v2 SKU deployment to complete the steps in this article. Rewriting headers isn't supported in the v1 SKU. If you don't have the v2 SKU, create an [Application Gateway v2 SKU](./tutorial-autoscale-ps.md) deployment before you begin.
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
+
+## Create required objects
+
+To configure HSTS policy, you must first complete these steps:
+
+1. Create the objects that are required for adding an HSTS header:
+
+ - **HTTP Listener**: Create a basic or multisite HTTP listener. This listener must listen on port 80, and the protocol must be set to HTTP.
+
+ - **HTTPS Listener**: Create a basic or multisite HTTPS listener. This listener must listen on port 443, have the protocol set to HTTPS, and contain a certificate.
+
+2. Create a routing rule that redirects all the traffic from the HTTP listener to the HTTPS listener.
+
+To learn more about how to set up HTTP to HTTPS redirection, see [HTTP to HTTPS redirection](./redirect-http-to-https-portal.md).
+
+## Configure HSTS policy
+
+In this example, we add the Strict-Transport-Security response header by using the rewrite rules of Application Gateway.
+
+1. Select **All resources**, and then select your application gateway.
+
+2. Select **Rewrites** in the left pane.
+
+3. Select **Rewrite set**:
+
+ :::image type="content" source="./media/hsts-http-headers-portal/add-rewrite-set.png" alt-text="Screenshot that shows how to add a rewrite set." lightbox="./media/hsts-http-headers-portal/add-rewrite-set.png":::
+
+4. Provide a name for the rewrite set and associate it with a routing rule:
+
+ - Enter the name for the rewrite set in the **Name** box.
+ - Select one or more of the rules listed in the **Associated routing rules** list. You can select only rules that haven't been associated with other rewrite sets. The rules that have already been associated with other rewrite sets are dimmed.
+ - Select **Next**.
+
+ :::image type="content" source="./media/hsts-http-headers-portal/name-and-association.png" alt-text="Screenshot that shows how to add the name and association for a rewrite set.":::
+
+5. Create a rewrite rule:
+
+ - Select **Add rewrite rule**.
+
+ :::image type="content" source="./media/hsts-http-headers-portal/add-rewrite-rule.png" alt-text="Screenshot that shows how to add a rewrite rule.":::
+
+ - Enter a name for the rewrite rule in the **Rewrite rule name** box. Enter a number in the **Rule sequence** box.
+
+ :::image type="content" source="./media/hsts-http-headers-portal/rule-name.png" alt-text="Screenshot that shows how to add a rewrite rule name.":::
+
+6. Add an action to rewrite the response header:
+
+ - In the **Rewrite type** list, select **Response Header**.
+
+ - In the **Action type** list, select **Set**.
+
+ - Under **Header name**, select **Common header**.
+
+ - In the **Common header** list, select **Strict-Transport-Security**.
+
+ - Enter the header value. In this example, we'll use `max-age=31536000; includeSubdomains; preload` as the header value.
+
+ - Select **OK**.
+
+ :::image type="content" source="./media/hsts-http-headers-portal/action.png" alt-text="Screenshot that shows how to add an action.":::
+
+7. Select **Create** to create the rewrite set:
+
+ :::image type="content" source="./media/hsts-http-headers-portal/create-rewrite-set.png" alt-text="Screenshot that shows how to click create." lightbox="./media/hsts-http-headers-portal/create-rewrite-set.png":::
+
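+After the rewrite set is applied, responses served through the associated HTTPS listener should include a Strict-Transport-Security header, as in this sketch of a response (the status line is illustrative):
+
+```http
+HTTP/1.1 200 OK
+Strict-Transport-Security: max-age=31536000; includeSubdomains; preload
+```
+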
+## Limitations and Recommendations
+
+ - To maximize security, apply the HSTS header as early as possible when users begin an HTTPS session. To enforce HTTPS for a given domain, the browser only needs to observe the STS header once, so the header should be added to the home pages and critical pages of a site. That alone isn't sufficient, however; it's a best practice to cover as much of the URL space as possible and to prioritize non-cacheable content.
+
+ - In this example, the Strict-Transport-Security response header is set to `max-age=31536000; includeSubdomains; preload`. You can also set the header to `max-age=31536000; includeSubdomains`, omitting preload. Preloading strengthens HSTS by ensuring that clients always access the site over HTTPS, even on their first visit. To ensure that users never reach the site over HTTP, submit your domain and subdomains to https://hstspreload.org/. Although the preload list is hosted by Google, all major browsers use it.
+
+ - HSTS policy doesn't prevent attacks against TLS itself or attacks on the servers.
+
+## Next steps
+
+To learn more about the header directives, see [Strict-Transport-Security](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security).
+
+To learn more about how to set up some common header rewrite use cases, see [common header rewrite scenarios](./rewrite-http-headers-url.md).
azure-app-configuration Reference Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/reference-kubernetes-provider.md
# Azure App Configuration Kubernetes Provider reference
-The following reference outlines the properties supported by the Azure App Configuration Kubernetes Provider `v2.0.0`. See [release notes](https://github.com/Azure/AppConfiguration/blob/main/releaseNotes/KubernetesProvider.md) for more information on the change.
+The following reference outlines the properties supported by the Azure App Configuration Kubernetes Provider `v2.1.0`. See [release notes](https://github.com/Azure/AppConfiguration/blob/main/releaseNotes/KubernetesProvider.md) for more information on the change.
## Properties
An `AzureAppConfigurationProvider` resource has the following top-level child pr
|endpoint|The endpoint of Azure App Configuration, which you would like to retrieve the key-values from.|alternative|string|
|connectionStringReference|The name of the Kubernetes Secret that contains Azure App Configuration connection string.|alternative|string|
|replicaDiscoveryEnabled|The setting that determines whether replicas of Azure App Configuration are automatically discovered and used for failover. If the property is absent, a default value of `true` is used.|false|bool|
+|loadBalancingEnabled|The setting that enables your workload to distribute requests to App Configuration across all available replicas. If the property is absent, a default value of `false` is used.|false|bool|
|target|The destination of the retrieved key-values in Kubernetes.|true|object|
|auth|The authentication method to access Azure App Configuration.|false|object|
|configuration|The settings for querying and processing key-values in Azure App Configuration.|false|object|
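
As an illustration, the following is a minimal sketch of an `AzureAppConfigurationProvider` resource that uses several of these properties; the endpoint, resource names, and `apiVersion` are placeholders that you would adapt to your deployment:

```yaml
apiVersion: azconfig.io/v1
kind: AzureAppConfigurationProvider
metadata:
  name: appconfigurationprovider-sample
spec:
  endpoint: https://<your-store-name>.azconfig.io
  replicaDiscoveryEnabled: true
  loadBalancingEnabled: true
  target:
    configMapName: configmap-created-by-appconfig-provider
  configuration:
    refresh:
      enabled: true
      interval: 30s
      monitoring:
        keyValues:
          - key: sentinel
```
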
The `spec.configuration.refresh` property has the following child properties.
|Name|Description|Required|Type|
|||||
|enabled|The setting that determines whether key-values from Azure App Configuration are automatically refreshed. If the property is absent, a default value of `false` is used.|false|bool|
-|monitoring|The key-values monitored for change detection, aka sentinel keys. The key-values from Azure App Configuration are refreshed only if at least one of the monitored key-values is changed.|true|object|
+|monitoring|The key-values monitored for change detection, aka sentinel keys. The key-values from Azure App Configuration are refreshed only if at least one of the monitored key-values is changed. If this property is absent, all the selected key-values will be monitored for refresh. |false|object|
|interval|The interval at which the key-values are refreshed from Azure App Configuration. It must be greater than or equal to 1 second. If the property is absent, a default value of 30 seconds is used.|false|duration string|

The `spec.configuration.refresh.monitoring.keyValues` property is an array of objects with the following child properties.
azure-functions Container Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/container-concepts.md
description: Describes the options for and benefits of running your function cod
Previously updated : 04/05/2024 Last updated : 10/13/2024 #CustomerIntent: As a developer, I want to understand the options that are available to me for hosting function apps in Linux containers so I can choose the best development and deployment options for containerized deployments of function code to Azure. # Linux container support in Azure Functions
-When you plan and develop your individual functions to run in Azure Functions, you are typically focused on the code itself. Azure Functions makes it easy to deploy just your code project to a function app in Azure. When you deploy your code project to a function app that runs on Linux, the project runs in a container that is created for you automatically. This container is managed by Functions.
+When you plan and develop your individual functions to run in Azure Functions, you're typically focused on the code itself. Azure Functions makes it easy to deploy just your code project to a function app in Azure. When you deploy your code project to a function app that runs on Linux, the project runs in a container that is created for you automatically. This container is managed by Functions.
Functions also supports containerized function app deployments. In a containerized deployment, you create your own function app instance in a local Docker container from a supported base image. You can then deploy this _containerized_ function app to a hosting environment in Azure. Creating your own function app container lets you customize or otherwise control the immediate runtime environment of your function code.
+
## Container hosting options

There are several options for hosting your containerized function apps in Azure:

| Hosting option | Benefits |
| | |
-| **[Azure Container Apps]** | Azure Functions provides integrated support for developing, deploying, and managing containerized function apps on [Azure Container Apps](../container-apps/overview.md). Use Azure Container Apps to host your function app containers when you need to run your event-driven functions in Azure in the same environment as other microservices, APIs, websites, workflows, or any container hosted programs. Container Apps hosting lets you run your functions in a managed Kubernetes-based environment with built-in support for open-source monitoring, mTLS, Dapr, and KEDA. Container Apps uses the power of the underlying Azure Kubernetes Service (AKS) while removing the complexity of having to work with Kubernetes APIs. |
+| **[Azure Container Apps]** | Azure Functions provides integrated support for developing, deploying, and managing containerized function apps on [Azure Container Apps](../container-apps/overview.md). This enables you to manage your apps using the same Functions tools and pages in the Azure portal. Use Azure Container Apps to host your function app containers when you need to run your event-driven functions in Azure in the same environment as other microservices, APIs, websites, workflows, or any container hosted programs. Container Apps hosting lets you run your functions in a managed Kubernetes-based environment with built-in support for open-source monitoring, mTLS, Dapr, and KEDA. Container Apps supports scale-to-zero and provides a serverless, pay-for-what-you-use hosting model. You can also request dedicated hardware, even GPUs, by using workload profiles. _Recommended hosting option for running containerized function apps on Azure._ |
| **Azure Arc-enabled Kubernetes clusters (preview)** | You can host your function apps on Azure Arc-enabled Kubernetes clusters as either a [code-only deployment](./create-first-function-arc-cli.md) or in a [custom Linux container](./create-first-function-arc-custom-container.md). Azure Arc lets you attach Kubernetes clusters so that you can manage and configure them in Azure. _Hosting Azure Functions containers on Azure Arc-enabled Kubernetes clusters is currently in preview._ |
-| **[Azure Functions]** | You can deploy your containerized function apps to run in either an [Elastic Premium plan](./functions-premium-plan.md) or a [Dedicated plan](./dedicated-plan.md). Premium plan hosting provides you with the benefits of dynamic scaling. You might want to use Dedicated plan hosting to take advantage of existing unused App Service plan resources. |
-| **[Kubernetes]** | Because the Azure Functions runtime provides flexibility in hosting where and how you want, you can host and manage your function app containers directly in Kubernetes clusters. [KEDA](https://keda.sh) (Kubernetes-based Event Driven Autoscaling) pairs seamlessly with the Azure Functions runtime and tooling to provide event driven scale in Kubernetes. Just keep in mind that running your containerized function apps on Kubernetes, either by using KEDA or by direct deployment, is an open-source effort that you can use free of cost, with best-effort support provided by contributors and from the community. |
+| **[Azure Functions]** | You can host your containerized function apps in Azure Functions by running the container in either an [Elastic Premium plan](./functions-premium-plan.md) or a [Dedicated plan](./dedicated-plan.md). Premium plan hosting provides you with the benefits of dynamic scaling. You might want to use Dedicated plan hosting to take advantage of existing unused App Service plan resources. |
+| **[Kubernetes]** | Because the Azure Functions runtime provides flexibility in hosting where and how you want, you can host and manage your function app containers directly in Kubernetes clusters. [KEDA](https://keda.sh) (Kubernetes-based Event Driven Autoscaling) pairs seamlessly with the Azure Functions runtime and tooling to provide event driven scale in Kubernetes. Just keep in mind that running your containerized function apps on Kubernetes, either by using KEDA or by direct deployment, is an open-source effort that you can use free of cost, with best-effort support provided by contributors and from the community. You're responsible for maintaining your own function app containers in a cluster, even when deploying to Azure Kubernetes Service (AKS). |
+
+## Feature support comparison
+
+The degree to which various features and behaviors of Azure Functions are supported when running your function app in a container depends on the container hosting option you choose.
+
+| Feature/behavior | [Container Apps (integrated)][Azure Container Apps] | [Container Apps (direct)](../container-apps/overview.md) | [Premium plan](./functions-premium-plan.md) | [Dedicated plan](./dedicated-plan.md) | [Kubernetes] |
+| | | ||-| |
+| Product support | Yes | No | Yes |Yes | No |
+| Functions portal integration | Yes | No | Yes | Yes | No |
+| [Event-driven scaling](./event-driven-scaling.md) | Yes<sup>5</sup> | Yes ([scale rules](../container-apps/scale-app.md#scale-rules)) | Yes | No | No |
+| Maximum scale (instances) | 1000<sup>1</sup> | 1000<sup>1</sup> | 100<sup>2</sup> | 10-30<sup>3</sup> | Varies by cluster |
+| [Scale-to-zero instances](./event-driven-scaling.md#scale-in-behaviors) | Yes | Yes | No | No | KEDA |
+| Execution time limit | Unbounded<sup>6</sup>| Unbounded<sup>6</sup> | Unbounded<sup>7</sup> | Unbounded<sup>8</sup> | None |
+| [Core Tools deployment](./functions-run-local.md#deploy-containers) | [`func azurecontainerapps`](./functions-core-tools-reference.md#func-azurecontainerapps-deploy) | No | No | No | [`func kubernetes`](./functions-core-tools-reference.md#func-kubernetes-deploy) |
+| [Revisions](../container-apps/revisions.md) | No | Yes |No |No |No |
+| [Deployment slots](./functions-deployment-slots.md) |No |No |Yes |Yes |No |
+| [Streaming logs](./streaming-logs.md) | Yes | [Yes](../container-apps/log-streaming.md) | Yes | Yes | No |
+| [Console access](../container-apps/container-console.md) | Not currently available<sup>4</sup> | Yes | Yes (using [Kudu](./functions-how-to-custom-container.md#enable-ssh-connections)) | Yes (using [Kudu](./functions-how-to-custom-container.md#enable-ssh-connections)) | Yes (in pods [using `kubectl`](https://kubernetes.io/docs/reference/kubectl/)) |
+| Cold start mitigation | Minimum replicas | [Scale rules](../container-apps/scale-app.md#scale-rules) | [Always-ready/pre-warmed instances](functions-premium-plan.md#eliminate-cold-starts) | n/a | n/a |
+| [App Service authentication](../app-service/overview-authentication-authorization.md) | Not currently available<sup>4</sup> | Yes | Yes | Yes | No |
+| [Custom domain names](../app-service/app-service-web-tutorial-custom-domain.md) | Not currently available<sup>4</sup> | Yes | Yes | Yes | No |
+| [Private key certificates](../app-service/overview-tls.md) | Not currently available<sup>4</sup> | Yes | Yes | Yes | No |
+| Virtual networks | Yes | Yes | Yes | Yes | Yes |
+| Availability zones | Yes | Yes | Yes | Yes | Yes |
+| Diagnostics | Not currently available<sup>4</sup> | [Yes](../container-apps/troubleshooting.md#use-the-diagnose-and-solve-problems-tool) | [Yes](./functions-diagnostics.md) | [Yes](./functions-diagnostics.md) | No |
+| Dedicated hardware | Yes ([workload profiles](../container-apps/workload-profiles-overview.md)) | Yes ([workload profiles](../container-apps/workload-profiles-overview.md)) | No | Yes | Yes |
+| Dedicated GPUs | Yes ([workload profiles](../container-apps/workload-profiles-overview.md)) | Yes ([workload profiles](../container-apps/workload-profiles-overview.md)) | No | No | Yes |
+| [Configurable memory/CPU count](../container-apps/workload-profiles-overview.md) | Yes | Yes | No | No | Yes |
+| "Free grant" option | [Yes](../container-apps/billing.md#consumption-plan) | [Yes](../container-apps/billing.md#consumption-plan) | No | No | No |
+| Pricing details | [Container Apps billing](../container-apps/billing.md) | [Container Apps billing](../container-apps/billing.md) | [Premium plan billing](./functions-premium-plan.md#billing) | [Dedicated plan billing](./dedicated-plan.md#billing) | [AKS pricing](/azure/aks/free-standard-pricing-tiers) |
+| Service name requirements | 2-32 characters: limited to lowercase letters, numbers, and hyphens. Must start with a letter and end with an alphanumeric character. | 2-32 characters: limited to lowercase letters, numbers, and hyphens. Must start with a letter and end with an alphanumeric character. | Less than 64 characters: limited to alphanumeric characters and hyphens. Can't start with or end in a hyphen. | Less than 64 characters: limited to alphanumeric characters and hyphens. Can't start with or end in a hyphen. | Less than 253 characters: limited to alphanumeric characters and hyphens. Must start and end with an alphanumeric character. |
+
+1. On Container Apps, the default is 10 instances, but you can set the [maximum number of replicas](../container-apps/scale-app.md#scale-definition), which has an overall maximum of 1000. This setting is honored as long as there's enough core quota available. When you create your function app from the Azure portal, you're limited to 300 instances.
+2. In some regions, Linux apps on a Premium plan can scale to 100 instances. For more information, see the [Premium plan article](functions-premium-plan.md#region-max-scale-out). <br/>
+3. For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
+4. Feature parity is a goal of integrated hosting on Azure Container Apps.
+5. Requires [KEDA](./functions-kubernetes-keda.md); supported by most triggers. To learn which triggers support event-driven scaling, see [Considerations for Container Apps hosting](functions-container-apps-hosting.md#considerations-for-container-apps-hosting).
+6. When the [minimum number of replicas](../container-apps/scale-app.md#scale-definition) is set to zero, the default timeout depends on the specific triggers used in the app.
+7. There's no maximum execution timeout duration enforced. However, the grace period given to a function execution is 60 minutes [during scale in](event-driven-scaling.md#scale-in-behaviors), and a grace period of 10 minutes is given during platform updates.
+8. Requires the App Service plan be set to [Always On](dedicated-plan.md#always-on). A grace period of 10 minutes is given during platform updates.
## Getting started
There are several options for hosting your containerized function apps in Azure:
[Azure Container Apps]: functions-container-apps-hosting.md
[Kubernetes]: functions-kubernetes-keda.md
[Azure Functions]: functions-how-to-custom-container.md?pivots=azure-functions#azure-portal-create-using-containers
-[Azure Arc-enabled Kubernetes clusters]
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Application settings in a function app contain configuration options that affect
In this article, example connection string values are truncated for readability.
-Because Azure Functions leverages the Azure App Service platform for hosting, you might find some settings relevant to your function app hosting documented in [Environment variables and app settings in Azure App Service](../app-service/reference-app-settings.md).
+Because Azure Functions uses the Azure App Service platform for hosting, you might find some settings relevant to your function app hosting documented in [Environment variables and app settings in Azure App Service](../app-service/reference-app-settings.md).
## App setting considerations
This authentication requirement is applied to connections from the Functions hos
The connection string for Application Insights. Don't use both `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING`. While the use of `APPLICATIONINSIGHTS_CONNECTION_STRING` is recommended in all cases, it's required in the following cases:
-+ When your function app requires the added customizations supported by using the connection string.
-+ When your Application Insights instance runs in a sovereign cloud, which requires a custom endpoint.
++ When your function app requires the added customizations supported by using the connection string
++ When your Application Insights instance runs in a sovereign cloud, which requires a custom endpoint
+
+For more information, see [Connection strings](/azure/azure-monitor/app/sdk-connection-string).
When this app setting is omitted or set to `false`, a page similar to the follow
## AzureWebJobsDotNetReleaseCompilation
-`true` means use Release mode when compiling .NET code; `false` means use Debug mode. Default is `true`.
+`true` means use `Release` mode when compiling .NET code; `false` means use `Debug` mode. Default is `true`.
|Key|Sample value| |||
For Node.js v18 or lower, the app setting is used, and the default behavior depe
## FUNCTIONS\_REQUEST\_BODY\_SIZE\_LIMIT
-Overrides the default limit on the body size of requests sent to HTTP endpoints. The value is given in bytes, with a default maximum request size of 104857600 bytes.
+Overrides the default limit on the body size of requests sent to HTTP endpoints. The value is given in bytes, with a default maximum request size of 104,857,600 bytes (100 MB).
|Key|Sample value| |||
Indicates whether the `/home` directory is shared across scaled instances, with
Some configurations must be maintained at the App Service level as site settings, such as language versions. These settings are managed in the portal, by using REST APIs, or by using Azure CLI or Azure PowerShell. The following are site settings that could be required, depending on your runtime language, OS, and versions:
+## AcrUseManagedIdentityCreds
+
+Indicates whether the image is obtained from an Azure Container Registry instance using managed identity authentication. A value of `true` requires that managed identity be used, which is recommended over stored authentication credentials as a security best practice.
+
+## AcrUserManagedIdentityID
+
+Indicates the managed identity to use when obtaining the image from an Azure Container Registry instance. Requires that `AcrUseManagedIdentityCreds` is set to `true`. These are the valid values:
+
+| Value | Description |
+| - | - |
+| `system` | The system assigned managed identity of the function app is used. |
+| `<USER_IDENTITY_RESOURCE_ID>` | The fully qualified resource ID of a user-assigned managed identity. |
+
+The identity that you specify must be added to the `ACRPull` role in the container registry. For more information, see [Create and configure a function app on Azure with the image](functions-deploy-container-apps.md?tabs=acr#create-and-configure-a-function-app-on-azure-with-the-image).
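+
+As a sketch, these values surface as site configuration properties on the function app resource (`Microsoft.Web/sites`); the identity resource ID is a placeholder:
+
+```json
+"siteConfig": {
+  "acrUseManagedIdentityCreds": true,
+  "acrUserManagedIdentityID": "<USER_IDENTITY_RESOURCE_ID>"
+}
+```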
+
+## alwaysOn
+
+On a function app running in a [Dedicated (App Service) plan](./dedicated-plan.md), the Functions runtime goes idle after a few minutes of inactivity, at which point only requests to an HTTP trigger _wake up_ your function app. To make sure that your non-HTTP triggered functions run correctly, including Timer trigger functions, enable Always On for the function app by setting the `alwaysOn` site setting to a value of `true`.
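+
+For example, this Azure CLI sketch enables the setting; substitute your own app and resource group names:
+
+```azurecli
+az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --always-on true
+```
+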
In the [Flex Consumption plan](./flex-consumption-plan.md), these site propertie
| `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` |Replaced by functionAppConfig's deployment section|
| `WEBSITE_CONTENTOVERVNET` |Not used for networking in Flex Consumption|
| `WEBSITE_CONTENTSHARE` |Replaced by functionAppConfig's deployment section|
-| `WEBSITE_DNS_SERVER` |DNS is inherited from the integrated VNet in Flex|
+| `WEBSITE_DNS_SERVER` |DNS is inherited from the integrated virtual network in Flex|
| `WEBSITE_NODE_DEFAULT_VERSION` |Replaced by `version` in `properties.functionAppConfig.runtime`|
| `WEBSITE_RUN_FROM_PACKAGE`|Not used for deployments in Flex Consumption|
| `WEBSITE_SKIP_CONTENTSHARE_VALIDATION` |Content share is not used in Flex Consumption|
azure-functions Functions Bindings Azure Mysql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-mysql-input.md
+
+ Title: Azure Database for MySQL input binding for Functions
+description: Learn to use the Azure Database for MySQL input binding in Azure Functions.
+++ Last updated : 9/26/2024++
+zone_pivot_groups: programming-languages-set-functions
++
+# Azure Database for MySQL input binding for Azure Functions (Preview)
+
+When a function runs, the Azure Database for MySQL input binding retrieves data from a database and passes it to the input parameter of the function.
+
+For information on setup and configuration details, see the [overview](./functions-bindings-azure-mysql.md).
++
+## Examples
+<a id="example"></a>
++++
+# [Isolated worker model](#tab/isolated-process)
+
+More samples for the Azure Database for MySQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples).
+
+This section contains the following examples:
+
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c-oop)
+* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c-oop)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-c-oop)
+
+The examples refer to a `Product` class and a corresponding database table:
+
+```csharp
+namespace AzureMySqlSamples.Common
+{
+ public class Product
+ {
+ public int? ProductId { get; set; }
+
+ public string Name { get; set; }
+
+ public int Cost { get; set; }
+
+        public override bool Equals(object obj)
+        {
+            // Minimal equality check for the sample: compares all three properties.
+            return obj is Product other && ProductId == other.ProductId && Name == other.Name && Cost == other.Cost;
+        }
+ }
+}
+```
+
+```sql
+DROP TABLE IF EXISTS Products;
+
+CREATE TABLE Products (
+ ProductId int PRIMARY KEY,
+ Name varchar(100) NULL,
+ Cost int NULL
+);
+```
++
+<a id="http-trigger-look-up-id-from-query-string-c-oop"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a C# function that retrieves a single record. The function is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md) that uses a query string to specify the ID. That ID is used to retrieve a `Product` record with the specified query.
+
+> [!NOTE]
+> The HTTP query string parameter is case-sensitive.
+>
+
+```cs
+using System.Collections.Generic;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.MySql;
+using Microsoft.Azure.Functions.Worker.Http;
+using AzureMySqlSamples.Common;
+
+namespace AzureMySqlSamples.InputBindingIsolatedSamples
+{
+ public static class GetProductById
+ {
+ [Function(nameof(GetProductById))]
+ public static IEnumerable<Product> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproducts/{productId}")]
+ HttpRequestData req,
+ [MySqlInput("select * from Products where ProductId = @productId",
+ "MySqlConnectionString",
+ parameters: "@ProductId={productId}")]
+ IEnumerable<Product> products)
+ {
+ return products;
+ }
+ }
+}
+```
+
+<a id="http-trigger-get-multiple-items-from-route-data-c-oop"></a>
+### HTTP trigger, get multiple rows from route parameter
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves rows returned by the query. The function is [triggered by an HTTP request](./functions-bindings-http-webhook-trigger.md) that uses route data to specify the value of a query parameter. That parameter is used to filter the `Product` records in the specified query.
+
+```cs
+using System.Collections.Generic;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.MySql;
+using Microsoft.Azure.Functions.Worker.Http;
+using AzureMySqlSamples.Common;
+
+namespace AzureMySqlSamples.InputBindingIsolatedSamples
+{
+ public static class GetProducts
+ {
+ [Function(nameof(GetProducts))]
+ public static IEnumerable<Product> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproducts")]
+ HttpRequestData req,
+ [MySqlInput("select * from Products",
+ "MySqlConnectionString")]
+ IEnumerable<Product> products)
+ {
+ return products;
+ }
+ }
+}
+```
+
+<a id="http-trigger-delete-one-or-multiple-rows-c-oop"></a>
+### HTTP trigger, delete rows
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `DeleteProductsCost` must be created on the MySQL database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
+
+```sql
+DROP PROCEDURE IF EXISTS DeleteProductsCost;
+
+Create Procedure DeleteProductsCost(cost INT)
+BEGIN
+ DELETE from Products where Products.cost = cost;
+END
+```
+
+```cs
+using System.Collections.Generic;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.WebJobs.Extensions.MySql;
+using AzureMySqlSamples.Common;
+
+namespace AzureMySqlSamples.InputBindingSamples
+{
+ public static class GetProductsStoredProcedure
+ {
+ [FunctionName(nameof(GetProductsStoredProcedure))]
+ public static IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproducts-storedprocedure/{cost}")]
+ HttpRequest req,
+ [MySql("DeleteProductsCost",
+ "MySqlConnectionString",
+ commandType: System.Data.CommandType.StoredProcedure,
+ parameters: "@Cost={cost}")]
+ IEnumerable<Product> products)
+ {
+ return new OkObjectResult(products);
+ }
+ }
+}
+```
+
+# [In-process model](#tab/in-process)
+
+More samples for the Azure Database for MySQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples/samples-csharp).
+
+This section contains the following examples:
+
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c)
+* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-c)
+
+The examples refer to a `Product` class and a corresponding database table:
+
+```csharp
+namespace AzureMySqlSamples.Common
+{
+ public class Product
+ {
+ public int? ProductId { get; set; }
+
+ public string Name { get; set; }
+
+ public int Cost { get; set; }
+
+        public override bool Equals(object obj)
+        {
+            // Minimal equality check for the sample: compares all three properties.
+            return obj is Product other && ProductId == other.ProductId && Name == other.Name && Cost == other.Cost;
+        }
+ }
+}
+```
+
+```sql
+DROP TABLE IF EXISTS Products;
+
+CREATE TABLE Products (
+ ProductId int PRIMARY KEY,
+ Name varchar(100) NULL,
+ Cost int NULL
+);
+```
+
+<a id="http-trigger-look-up-id-from-query-string-c"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request that uses a query string to specify the ID. That ID is used to retrieve a `Product` record with the specified query.
+
+> [!NOTE]
+> The HTTP query string parameter is case-sensitive.
+>
+
+```cs
+using System.Collections.Generic;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.WebJobs.Extensions.MySql;
+using AzureMySqlSamples.Common;
+
+namespace AzureMySqlSamples.InputBindingSamples
+{
+ public static class GetProductById
+ {
+ [FunctionName("GetProductById")]
+ public static IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Function, "get", Route = "getproducts/{productId}")] HttpRequest req,
+ [MySql("select * from Products where ProductId = @productId",
+ "MySqlConnectionString",
+ parameters: "@ProductId={productId}")]
+ IEnumerable<Product> products)
+ {
+ return new OkObjectResult(products);
+ }
+ }
+}
+```
+
+<a id="http-trigger-get-multiple-items-from-route-data-c"></a>
+### HTTP trigger, get multiple rows from route parameter
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves documents returned by the query. The function is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request that uses route data to specify the value of a query parameter. That parameter is used to filter the `Product` records in the specified query.
+
+```cs
+using System.Collections.Generic;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.WebJobs.Extensions.MySql;
+using AzureMySqlSamples.Common;
+
+namespace AzureMySqlSamples.InputBindingSamples
+{
+ public static class GetProducts
+ {
+ [FunctionName("GetProducts")]
+ public static IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Function, "get", Route = "getproducts")] HttpRequest req,
+ [MySql("select * from Products",
+ "MySqlConnectionString")]
+ IEnumerable<Product> products)
+ {
+ return new OkObjectResult(products);
+ }
+ }
+}
+```
+
+<a id="http-trigger-delete-one-or-multiple-rows-c"></a>
+### HTTP trigger, delete rows
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `DeleteProductsCost` must be created on the MySQL database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
+
+```sql
+DROP PROCEDURE IF EXISTS DeleteProductsCost;
+
+Create Procedure DeleteProductsCost(cost INT)
+BEGIN
+ DELETE from Products where Products.cost = cost;
+END
+```
+
+```cs
+using System.Collections.Generic;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.WebJobs.Extensions.MySql;
+using AzureMySqlSamples.Common;
+
+namespace AzureMySqlSamples.InputBindingSamples
+{
+ public static class GetProductsStoredProcedure
+ {
+ [FunctionName(nameof(GetProductsStoredProcedure))]
+ public static IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproducts-storedprocedure/{cost}")]
+ HttpRequest req,
+ [MySql("DeleteProductsCost",
+ "MySqlConnectionString",
+ commandType: System.Data.CommandType.StoredProcedure,
+ parameters: "@Cost={cost}")]
+ IEnumerable<Product> products)
+ {
+ return new OkObjectResult(products);
+ }
+ }
+}
+```
++++++
+More samples for the Azure Database for MySQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples/samples-java).
+
+This section contains the following examples:
+
+* [HTTP trigger, get multiple rows](#http-trigger-get-multiple-items-java)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-java)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-java)
+
+The examples refer to a `Product` class and a corresponding database table:
+
+```java
+package com.function.Common;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+public class Product {
+ @JsonProperty("ProductId")
+ private int ProductId;
+ @JsonProperty("Name")
+ private String Name;
+ @JsonProperty("Cost")
+ private int Cost;
+
+ public Product() {
+    }
+
+    // Getters and setters are omitted here for brevity.
+}
+```
+
+```sql
+DROP TABLE IF EXISTS Products;
+
+CREATE TABLE Products (
+ ProductId int PRIMARY KEY,
+ Name varchar(100) NULL,
+ Cost int NULL
+);
+```
+
+<a id="http-trigger-get-multiple-items-java"></a>
+### HTTP trigger, get multiple rows
+
+The following example shows a MySQL input binding in a Java function that is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request and reads from a query and returns the results in the HTTP response.
+
+```java
+package com.function;
+
+import com.function.Common.Product;
+import com.microsoft.azure.functions.HttpMethod;
+import com.microsoft.azure.functions.HttpRequestMessage;
+import com.microsoft.azure.functions.HttpResponseMessage;
+import com.microsoft.azure.functions.HttpStatus;
+import com.microsoft.azure.functions.annotation.AuthorizationLevel;
+import com.microsoft.azure.functions.annotation.FunctionName;
+import com.microsoft.azure.functions.annotation.HttpTrigger;
+import com.microsoft.azure.functions.mysql.annotation.CommandType;
+import com.microsoft.azure.functions.mysql.annotation.MySqlInput;
+
+import java.util.Optional;
+
+public class GetProducts {
+ @FunctionName("GetProducts")
+ public HttpResponseMessage run(
+ @HttpTrigger(
+ name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS,
+            route = "getproducts")
+ HttpRequestMessage<Optional<String>> request,
+ @MySqlInput(
+ name = "products",
+ commandText = "SELECT * FROM Products",
+ commandType = CommandType.Text,
+ connectionStringSetting = "MySqlConnectionString")
+ Product[] products) {
+
+ return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "application/json").body(products).build();
+ }
+}
+```
+
+<a id="http-trigger-look-up-id-from-query-string-java"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a MySQL input binding in a Java function that is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request and reads from a query filtered by a parameter from the query string and returns the row in the HTTP response.
+
+```java
+public class GetProductById {
+ @FunctionName("GetProductById")
+ public HttpResponseMessage run(
+ @HttpTrigger(
+ name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS,
+ route = "getproducts/{productid}")
+ HttpRequestMessage<Optional<String>> request,
+ @MySqlInput(
+ name = "products",
+ commandText = "SELECT * FROM Products WHERE ProductId= @productId",
+ commandType = CommandType.Text,
+ parameters = "@productId={productid}",
+ connectionStringSetting = "MySqlConnectionString")
+ Product[] products) {
+
+ return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "application/json").body(products).build();
+ }
+}
+```
+
+<a id="http-trigger-delete-one-or-multiple-rows-java"></a>
+### HTTP trigger, delete rows
+
+The following example shows a MySQL input binding in a Java function that is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request and executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `DeleteProductsCost` must be created on the database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
+
+```sql
+DROP PROCEDURE IF EXISTS DeleteProductsCost;
+
+Create Procedure DeleteProductsCost(cost INT)
+BEGIN
+ DELETE from Products where Products.cost = cost;
+END
+```
+
+```java
+public class DeleteProductsStoredProcedure {
+ @FunctionName("DeleteProductsStoredProcedure")
+ public HttpResponseMessage run(
+ @HttpTrigger(
+ name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS,
+ route = "Deleteproducts-storedprocedure/{cost}")
+ HttpRequestMessage<Optional<String>> request,
+ @MySqlInput(
+ name = "products",
+ commandText = "DeleteProductsCost",
+ commandType = CommandType.StoredProcedure,
+ parameters = "@Cost={cost}",
+ connectionStringSetting = "MySqlConnectionString")
+ Product[] products) {
+
+ return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "application/json").body(products).build();
+ }
+}
+```
+++
+More samples for the Azure Database for MySQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples/samples-js).
+
+This section contains the following examples:
+
+* [HTTP trigger, get multiple rows](#http-trigger-get-multiple-items-javascript)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-javascript)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-javascript)
+
+The examples refer to a database table:
+
+```sql
+DROP TABLE IF EXISTS Products;
+
+CREATE TABLE Products (
+ ProductId int PRIMARY KEY,
+ Name varchar(100) NULL,
+ Cost int NULL
+);
+```
+
+<a id="http-trigger-get-multiple-items-javascript"></a>
+### HTTP trigger, get multiple rows
+
+The following example shows a MYSQL input binding that is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request and reads from a query and returns the results in the HTTP response.
+++
+# [Model v4](#tab/nodejs-v4)
+
+```typescript
+import { app, HttpRequest, HttpResponseInit, input, InvocationContext } from '@azure/functions';
+
+const mysqlInput = input.generic({
+    type: 'mysql',
+ commandText: 'select * from Products',
+ commandType: 'Text',
+ connectionStringSetting: 'MySqlConnectionString',
+});
+
+export async function httpTrigger1(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> {
+ context.log('HTTP trigger and MySQL input binding function processed a request.');
+ const products = context.extraInputs.get(mysqlInput);
+ return {
+ jsonBody: products,
+ };
+}
+
+app.http('httpTrigger1', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ extraInputs: [mysqlInput],
+ handler: httpTrigger1,
+});
+```
++
+# [Model v3](#tab/nodejs-v3)
+
+TypeScript samples aren't documented for model v3.
+++++
+# [Model v4](#tab/nodejs-v4)
+
+```javascript
+const { app, input } = require('@azure/functions');
+
+const mysqlInput = input.generic({
+ type: 'mysql',
+ commandText: 'select * from Products where Cost = @Cost',
+ commandType: 'Text',
+ connectionStringSetting: 'MySqlConnectionString'
+})
+
+app.http('GetProducts', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ route: 'getproducts/{cost}',
+ extraInputs: [mysqlInput],
+ handler: async (request, context) => {
+ const products = JSON.stringify(context.extraInputs.get(mysqlInput));
+
+ return {
+ status: 200,
+ body: products
+ };
+ }
+});
+```
++
+# [Model v3](#tab/nodejs-v3)
+
+The following example shows the binding data in the function.json file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get"
+ ],
+ "route": "getproducts"
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "products",
+ "type": "mysql",
+ "direction": "in",
+ "commandText": "select * from Products",
+ "commandType": "Text",
+ "connectionStringSetting": "MySqlConnectionString"
+ }
+ ],
+ "disabled": false
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample JavaScript code:
+
+```javascript
+module.exports = async function (context, req, products) {
+ context.log('JavaScript HTTP trigger and MySQL input binding function processed a request.');
+
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ mimetype: "application/json",
+ body: products
+ };
+}
+```
+++++
+<a id="http-trigger-look-up-id-from-query-string-javascript"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a MySQL input binding that is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request and reads from a query filtered by a parameter from the query string and returns the row in the HTTP response.
++
+# [Model v4](#tab/nodejs-v4)
+
+```typescript
+import { app, HttpRequest, HttpResponseInit, input, InvocationContext } from '@azure/functions';
+
+const mysqlInput = input.generic({
+    type: 'mysql',
+ commandText: 'select * from Products where ProductId= @productId',
+ commandType: 'Text',
+ parameters: '@productId={productid}',
+ connectionStringSetting: 'MySqlConnectionString',
+});
+
+export async function httpTrigger1(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> {
+ context.log('HTTP trigger and MySQL input binding function processed a request.');
+ const products = context.extraInputs.get(mysqlInput);
+ return {
+ jsonBody: products,
+ };
+}
+
+app.http('httpTrigger1', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ extraInputs: [mysqlInput],
+ handler: httpTrigger1,
+});
+```
+
+# [Model v3](#tab/nodejs-v3)
+
+TypeScript samples aren't documented for model v3.
+++++
+# [Model v4](#tab/nodejs-v4)
+
+```javascript
+const { app, input } = require('@azure/functions');
+
+const mysqlInput = input.generic({
+ type: 'mysql',
+ commandText: 'select * from Products where ProductId= @productId',
+ commandType: 'Text',
+ parameters: '@productId={productid}',
+ connectionStringSetting: 'MySqlConnectionString'
+})
+
+app.http('GetProducts', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ route: 'getproducts/{productid}',
+ extraInputs: [mysqlInput],
+ handler: async (request, context) => {
+ const products = JSON.stringify(context.extraInputs.get(mysqlInput));
+
+ return {
+ status: 200,
+ body: products
+ };
+ }
+});
+```
+
+# [Model v3](#tab/nodejs-v3)
+
+The following example shows the binding data in the function.json file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get"
+ ],
+ "route": "getproducts/{productid}"
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "products",
+ "type": "mysql",
+ "direction": "in",
+ "commandText": "select * from Products where ProductId= @productId",
+ "commandType": "Text",
+ "parameters": "@productId={productid}",
+ "connectionStringSetting": "MySqlConnectionString"
+ }
+ ],
+ "disabled": false
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample JavaScript code:
+
+```javascript
+module.exports = async function (context, req, products) {
+ context.log('JavaScript HTTP trigger function processed a request.');
+ context.log(JSON.stringify(products));
+ return {
+ status: 200,
+ body: products
+ };
+}
+```
+++++
+<a id="http-trigger-delete-one-or-multiple-rows-javascript"></a>
+### HTTP trigger, delete rows
+
+The following example shows a MySQL input binding that is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request and executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `DeleteProductsCost` must be created on the database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
+
+```sql
+DROP PROCEDURE IF EXISTS DeleteProductsCost;
+
+Create Procedure DeleteProductsCost(cost INT)
+BEGIN
+ DELETE from Products where Products.cost = cost;
+END
+```
+++
+# [Model v4](#tab/nodejs-v4)
+
+```typescript
+import { app, HttpRequest, HttpResponseInit, input, InvocationContext } from '@azure/functions';
+
+const mysqlInput = input.generic({
+    type: 'mysql',
+ commandText: 'DeleteProductsCost',
+ commandType: 'StoredProcedure',
+ parameters: '@Cost={cost}',
+ connectionStringSetting: 'MySqlConnectionString',
+});
+
+export async function httpTrigger1(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> {
+ context.log('HTTP trigger and MySQL input binding function processed a request.');
+ const products = context.extraInputs.get(mysqlInput);
+ return {
+ jsonBody: products,
+ };
+}
+
+app.http('httpTrigger1', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ extraInputs: [mysqlInput],
+ handler: httpTrigger1,
+});
+```
+
+# [Model v3](#tab/nodejs-v3)
+
+TypeScript samples aren't documented for model v3.
+++++
+# [Model v4](#tab/nodejs-v4)
+
+```javascript
+const { app, input } = require('@azure/functions');
+
+const mysqlInput = input.generic({
+ type: 'mysql',
+ commandText: 'DeleteProductsCost',
+ commandType: 'StoredProcedure',
+ parameters: '@Cost={cost}',
+ connectionStringSetting: 'MySqlConnectionString'
+})
+
+app.http('httpTrigger1', {
+ methods: ['POST'],
+ authLevel: 'anonymous',
+ route: 'DeleteProductsByCost',
+ extraInputs: [mysqlInput],
+ handler: async (request, context) => {
+ const products = JSON.stringify(context.extraInputs.get(mysqlInput));
+
+ return {
+ status: 200,
+ body: products
+ };
+ }
+});
+```
+
+# [Model v3](#tab/nodejs-v3)
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get"
+ ],
+ "route": "DeleteProductsByCost"
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "products",
+ "type": "mysql",
+ "direction": "in",
+ "commandText": "DeleteProductsCost",
+ "commandType": "StoredProcedure",
+ "parameters": "@Cost={cost}",
+ "connectionStringSetting": "MySqlConnectionString"
+ }
+ ],
+ "disabled": false
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample JavaScript code:
++
+```javascript
+module.exports = async function (context, req, products) {
+ context.log('JavaScript HTTP trigger function processed a request.');
+ context.log(JSON.stringify(products));
+ return {
+ status: 200,
+ body: products
+ };
+}
+```
+++++
+More samples for the Azure Database for MySQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples/samples-powershell).
+
+This section contains the following examples:
+
+* [HTTP trigger, get multiple rows](#http-trigger-get-multiple-items-powershell)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-powershell)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-powershell)
+
+The examples refer to a database table:
+
+```sql
+DROP TABLE IF EXISTS Products;
+
+CREATE TABLE Products (
+ ProductId int PRIMARY KEY,
+ Name varchar(100) NULL,
+ Cost int NULL
+);
+```
+
+<a id="http-trigger-get-multiple-items-powershell"></a>
+### HTTP trigger, get multiple rows
+
+The following example shows a MySQL input binding in a function.json file and a PowerShell function that is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request and reads from a query and returns the results in the HTTP response.
+
+The following example shows the binding data in the function.json file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "Request",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get"
+ ],
+ "route": "getproducts/{cost}"
+ },
+ {
+ "name": "response",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "products",
+ "type": "mysql",
+ "direction": "in",
+ "commandText": "select * from Products",
+ "commandType": "Text",
+ "connectionStringSetting": "MySqlConnectionString"
+ }
+ ],
+ "disabled": false
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following example is sample PowerShell code for the function in the `run.ps1` file:
+
+```powershell
+using namespace System.Net
+
+param($Request, $TriggerMetadata, $products)
+
+Write-Host "PowerShell function with MySql Input Binding processed a request."
+
+Push-OutputBinding -Name response -Value ([HttpResponseContext]@{
+ StatusCode = [System.Net.HttpStatusCode]::OK
+ Body = $products
+})
+```
+
+<a id="http-trigger-look-up-id-from-query-string-powershell"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a MySQL input binding in a PowerShell function that is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request, reads from a query filtered by a parameter from the query string, and returns the row in the HTTP response.
+
+The following example is binding data in the function.json file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "Request",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get"
+ ],
+ "route": "getproducts/{productid}"
+ },
+ {
+ "name": "response",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "products",
+ "type": "mysql",
+ "direction": "in",
+ "commandText": "select * from Products where ProductId= @productId",
+ "commandType": "Text",
+ "parameters": "MySqlConnectionString",
+ "connectionStringSetting": "MySqlConnectionString"
+ }
+ ],
+ "disabled": false
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample PowerShell code for the function in the `run.ps1` file:
++
+```powershell
+using namespace System.Net
+
+param($Request, $TriggerMetadata, $products)
+
+Write-Host "PowerShell function with MySql Input Binding processed a request."
+
+Push-OutputBinding -Name response -Value ([HttpResponseContext]@{
+ StatusCode = [System.Net.HttpStatusCode]::OK
+ Body = $products
+})
+```
+
+<a id="http-trigger-delete-one-or-multiple-rows-powershell"></a>
+### HTTP trigger, delete rows
+
+The following example shows a MySQL input binding in a function.json file and a PowerShell function that is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request and executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `DeleteProductsCost` must be created on the database. In this example, the stored procedure deletes the records whose cost matches the value of the parameter.
+
+```sql
+DROP PROCEDURE IF EXISTS DeleteProductsCost;
+
+Create Procedure DeleteProductsCost(cost INT)
+BEGIN
+    DELETE from Products where Products.cost = cost;
+END
+```
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "Request",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get"
+ ],
+ "route": "deleteproducts-storedprocedure/{cost}"
+ },
+ {
+ "name": "response",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "products",
+ "type": "mysql",
+ "direction": "in",
+ "commandText": "DeleteProductsCost",
+ "commandType": "StoredProcedure",
+ "parameters": "@Cost={cost}",
+ "connectionStringSetting": "MySqlConnectionString"
+ }
+ ],
+ "disabled": false
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following example is sample PowerShell code for the function in the `run.ps1` file:
++
+```powershell
+using namespace System.Net
+
+param($Request, $TriggerMetadata, $products)
+
+Write-Host "PowerShell function with MySql Input Binding processed a request."
+
+Push-OutputBinding -Name response -Value ([HttpResponseContext]@{
+ StatusCode = [System.Net.HttpStatusCode]::OK
+ Body = $products
+})
+```
+++
+More samples for the Azure Database for MySQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples/samples-python).
+
+This section contains the following examples:
+
+* [HTTP trigger, get multiple rows](#http-trigger-get-multiple-items-python)
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-python)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-python)
+
+The examples refer to a database table:
+
+```sql
+DROP TABLE IF EXISTS Products;
+
+CREATE TABLE Products (
+ ProductId int PRIMARY KEY,
+ Name varchar(100) NULL,
+ Cost int NULL
+);
+```
+
+> [!NOTE]
+> Azure Functions Python library version 1.22.0b4 must be used for Python.
+>
+
+<a id="http-trigger-get-multiple-items-python"></a>
+### HTTP trigger, get multiple rows
+
+The following example shows a MySQL input binding in a function.json file and a Python function that is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request, reads from a query, and returns the results in the HTTP response.
+
+# [v2](#tab/python-v2)
+
+The following example is sample Python code for the function_app.py file:
+
+```python
+import azure.functions as func
+import datetime
+import json
+import logging
+
+app = func.FunctionApp()
+
+
+@app.generic_trigger(arg_name="req", type="httpTrigger", route="getproducts/{cost}")
+@app.generic_output_binding(arg_name="$return", type="http")
+@app.generic_input_binding(arg_name="products", type="mysql",
+                        command_text="select * from Products",
+                        command_type="Text",
+                        connection_string_setting="MySqlConnectionString")
+def mysql_test(req: func.HttpRequest, products: func.MySqlRowList) -> func.HttpResponse:
+ rows = list(map(lambda r: json.loads(r.to_json()), products))
+
+ return func.HttpResponse(
+ json.dumps(rows),
+ status_code=200,
+ mimetype="application/json"
+ )
+```
+
+# [v1](#tab/python-v1)
+
+The following example is binding data in the function.json file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get"
+ ],
+ "route": "getproducts/{cost}"
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "products",
+ "type": "mysql",
+ "direction": "in",
+ "commandText": "select * from Products",
+ "commandType": "Text",
+ "connectionStringSetting": "MySqlConnectionString"
+ }
+ ],
+ "disabled": false
+ }
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample Python code:
++
+```python
+import azure.functions as func
+import json
+
+def main(req: func.HttpRequest, products: func.MySqlRowList) -> func.HttpResponse:
+ rows = list(map(lambda r: json.loads(r.to_json()), products))
+
+ return func.HttpResponse(
+ json.dumps(rows),
+ status_code=200,
+ mimetype="application/json"
+ )
+```
+++
+<a id="http-trigger-look-up-id-from-query-string-python"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a MySQL input binding in a Python function that is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request, reads from a query filtered by a parameter from the query string, and returns the row in the HTTP response.
+
+# [v2](#tab/python-v2)
+
+The following example is sample Python code for the function_app.py file:
+
+```python
+import azure.functions as func
+import datetime
+import json
+import logging
+
+app = func.FunctionApp()
+
+
+@app.generic_trigger(arg_name="req", type="httpTrigger", route="getproducts/{cost}")
+@app.generic_output_binding(arg_name="$return", type="http")
+@app.generic_input_binding(arg_name="products", type="mysql",
+ commandText= "select * from Products where ProductId= @productId",
+ command_type="Text",
+ parameters= "@productId={productid}",
+ connection_string_setting="MySqlConnectionString")
+def mysql_test(req: func.HttpRequest, products: func.MySqlRowList) -> func.HttpResponse:
+ rows = list(map(lambda r: json.loads(r.to_json()), products))
+
+ return func.HttpResponse(
+ json.dumps(rows),
+ status_code=200,
+ mimetype="application/json"
+ )
+```
+
+# [v1](#tab/python-v1)
+
+The following is binding data in the function.json file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get"
+ ],
+ "route": "getproducts/{productid}"
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "products",
+ "type": "mysql",
+ "direction": "in",
+ "commandText": "select * from Products where ProductId= @productId",
+ "commandType": "Text",
+ "parameters": "@productId={productid}",
+ "connectionStringSetting": "MySqlConnectionString"
+ }
+ ],
+ "disabled": false
+ }
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following example is sample Python code:
++
+```python
+import azure.functions as func
+import json
+
+def main(req: func.HttpRequest, products: func.MySqlRowList) -> func.HttpResponse:
+ rows = list(map(lambda r: json.loads(r.to_json()), products))
+
+ return func.HttpResponse(
+ json.dumps(rows),
+ status_code=200,
+ mimetype="application/json"
+ )
+```
+++
+<a id="http-trigger-delete-one-or-multiple-rows-python"></a>
+### HTTP trigger, delete rows
+
+The following example shows a MySQL input binding in a function.json file and a Python function that is [triggered by an HTTP](./functions-bindings-http-webhook-trigger.md) request and executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `DeleteProductsCost` must be created on the database. In this example, the stored procedure deletes the records whose cost matches the value of the parameter.
+
+```sql
+DROP PROCEDURE IF EXISTS DeleteProductsCost;
+
+Create Procedure DeleteProductsCost(cost INT)
+BEGIN
+ DELETE from Products where Products.cost = cost;
+END
+```
+
+# [v2](#tab/python-v2)
+
+The following example is sample Python code for the function_app.py file:
+
+```python
+import azure.functions as func
+import datetime
+import json
+import logging
+
+app = func.FunctionApp()
+
+
+@app.generic_trigger(arg_name="req", type="httpTrigger", route="getproducts/{cost}")
+@app.generic_output_binding(arg_name="$return", type="http")
+@app.generic_input_binding(arg_name="products", type="mysql",
+                        command_text="DeleteProductsCost",
+                        command_type="StoredProcedure",
+                        parameters="@Cost={cost}",
+                        connection_string_setting="MySqlConnectionString")
+def mysql_test(req: func.HttpRequest, products: func.MySqlRowList) -> func.HttpResponse:
+ rows = list(map(lambda r: json.loads(r.to_json()), products))
+
+ return func.HttpResponse(
+ json.dumps(rows),
+ status_code=200,
+ mimetype="application/json"
+ )
+```
+
+# [v1](#tab/python-v1)
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get"
+ ],
+ "route": "getproducts-storedprocedure/{cost}"
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "products",
+ "type": "mysql",
+ "direction": "in",
+ "commandText": "DeleteProductsCost",
+ "commandType": "StoredProcedure",
+ "parameters": "@Cost={cost}",
+ "connectionStringSetting": "MySqlConnectionString"
+ }
+ ],
+ "disabled": false
+ }
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following is sample Python code:
++
+```python
+import json
+import azure.functions as func
+
+# The input binding executes the `DeleteProductsCost` stored procedure.
+# The Parameters argument passes the `{cost}` specified in the URL that triggers the function,
+# `getproducts-storedprocedure/{cost}`, as the value of the `@Cost` parameter to the stored procedure.
+# CommandType is set to `StoredProcedure`, since the binding invokes a stored procedure by name.
+def main(req: func.HttpRequest, products: func.MySqlRowList) -> func.HttpResponse:
+ rows = list(map(lambda r: json.loads(r.to_json()), products))
+
+ return func.HttpResponse(
+ json.dumps(rows),
+ status_code=200,
+ mimetype="application/json"
+ )
+```
+++++
+## Attributes
+
+The [C# library](functions-dotnet-class-library.md) uses the `MySqlAttribute` attribute to declare the MySQL bindings on the function. The attribute has the following properties:
+
+| Attribute property |Description|
+|||
+| **CommandText** | Required. The MySQL query command or name of the stored procedure executed by the binding. |
+| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. |
+| **CommandType** | Required. A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+| **Parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
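+
+As a rough illustration of how these properties fit together, the following in-process C# sketch applies the attribute to a function parameter. It's modeled on the attribute usage shown in the output binding article, with `CommandType` and `Parameters` assumed to be settable named properties; the `Product` class and the route are placeholders.
+
+```csharp
+using System.Collections.Generic;
+using System.Data;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.WebJobs.Extensions.MySql;
+
+public static class GetProductsByCost
+{
+    [FunctionName(nameof(GetProductsByCost))]
+    public static IActionResult Run(
+        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "getproducts/{cost}")] HttpRequest req,
+        // Hypothetical usage: command text and connection string setting name as constructor
+        // arguments, with the remaining attribute properties set by name.
+        [MySql("SELECT * FROM Products WHERE Cost = @Cost", "MySqlConnectionString",
+            CommandType = CommandType.Text,
+            Parameters = "@Cost={cost}")] IEnumerable<Product> products)
+    {
+        // The binding has already executed the query; the function only returns the rows.
+        return new OkObjectResult(products);
+    }
+}
+```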
++
+## Annotations
+
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@MySqlInput` annotation on parameters whose value comes from Azure Database for MySQL. This annotation supports the following elements:
+
+| Element |Description|
+|||
+| **commandText** | Required. The MySQL query command or name of the stored procedure executed by the binding. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. |
+| **commandType** | Required. A [CommandType](/dotnet/api/system.data.commandtype) value, which is ["Text"](/dotnet/api/system.data.commandtype#fields) for a query and ["StoredProcedure"](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+|**name** | Required. The unique name of the function binding. |
+| **parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
+++
+## Configuration
+
+# [Model v4](#tab/nodejs-v4)
+
+The following table explains the properties that you can set on the `options` object passed to the `input.generic()` method.
+
+| Property | Description |
+||-|
+| **commandText** | Required. The MySQL query command or name of the stored procedure executed by the binding. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. Optional keywords in the connection string value are [available to refine MySQL bindings connectivity](./functions-bindings-azure-mysql.md#mysql-connection-string). |
+| **commandType** | Required. A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+| **parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
+
+# [Model v3](#tab/nodejs-v3)
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+| Property | Description |
+||-|
+|**type** | Required. Must be set to `mysql`. |
+|**direction** | Required. Must be set to `in`. |
+|**name** | Required. The name of the variable that represents the query results in function code. |
+| **commandText** | Required. The MySQL query command or name of the stored procedure executed by the binding. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. Optional keywords in the connection string value are [available to refine MySQL bindings connectivity](./functions-bindings-azure-mysql.md#mysql-connection-string). |
+| **commandType** | Required. A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+| **parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
++++
+## Configuration
+
+The following table explains the binding configuration properties that you set in the function.json file.
+
+|function.json property | Description|
+||-|
+|**type** | Required. Must be set to `mysql`. |
+|**direction** | Required. Must be set to `in`. |
+|**name** | Required. The name of the variable that represents the query results in function code. |
+| **commandText** | Required. The MySQL query command or name of the stored procedure executed by the binding. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. Optional keywords in the connection string value are [available to refine MySQL bindings connectivity](./functions-bindings-azure-mysql.md#mysql-connection-string). |
+| **commandType** | Required. A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+| **parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
++++
+## Usage
+
+The attribute's constructor takes the MySQL command text, the command type, parameters, and the connection string setting name. The command can be a MySQL query with the command type `System.Data.CommandType.Text` or a stored procedure name with the command type `System.Data.CommandType.StoredProcedure`. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](https://dev.mysql.com/doc/connector-net/en/connector-net-connections-string.html) to Azure Database for MySQL.
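+
+For example, during local development the connection string can be stored in `local.settings.json`. The following is a minimal sketch; the setting name matches the `MySqlConnectionString` setting used in the examples in this article, and the server, user, password, and database values are placeholders:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "MySqlConnectionString": "server=<servername>.mysql.database.azure.com;uid=<username>;pwd=<password>;database=<databasename>"
+  }
+}
+```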
++
+If an exception occurs when a MySQL input binding is executed, the function code doesn't run. This might result in an error code being returned, such as an HTTP trigger returning a 500 error code.
+
+## Next steps
+
+- [Save data to a database (Output binding)](./functions-bindings-azure-mysql-output.md)
+- [Run a function from an HTTP request (trigger)](./functions-bindings-http-webhook-trigger.md)
+
azure-functions Functions Bindings Azure Mysql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-mysql-output.md
+
+ Title: Azure Database for MySQL output binding for Functions
+description: Learn to use the Azure Database for MySQL output binding in Azure Functions.
+++ Last updated : 6/26/2024++
+zone_pivot_groups: programming-languages-set-functions
++
+# Azure Database for MySQL output binding for Azure Functions (Preview)
+
+The Azure Database for MySQL output binding lets you write to a database.
+
+For information on setup and configuration details, see the [overview](./functions-bindings-azure-mysql.md).
++
+## Examples
+<a id="example"></a>
++++
+# [Isolated worker model](#tab/isolated-process)
+
+More samples for the Azure Database for MySQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples).
+
+This section contains the following example:
+
+* [HTTP trigger, write one record](#http-trigger-write-one-record-c-oop)
+
+The examples refer to a `Product` class and a corresponding database table:
+
+```csharp
+namespace AzureMySqlSamples.Common
+{
+ public class Product
+ {
+ public int? ProductId { get; set; }
+
+ public string Name { get; set; }
+
+ public int Cost { get; set; }
+
+        // Equality is based on the product values; shown here in a compact, compilable form.
+        public override bool Equals(object obj) =>
+            obj is Product other && ProductId == other.ProductId && Name == other.Name && Cost == other.Cost;
+
+        public override int GetHashCode() => (ProductId, Name, Cost).GetHashCode();
+ }
+}
+```
+
+```sql
+DROP TABLE IF EXISTS Products;
+
+CREATE TABLE Products (
+ ProductId int PRIMARY KEY,
+ Name varchar(100) NULL,
+ Cost int NULL
+);
+```
+
+<a id="http-trigger-write-one-record-c-oop"></a>
+
+### HTTP trigger, write one record
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body.
+
+```cs
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.MySql;
+using Microsoft.Azure.Functions.Worker.Http;
+using AzureMySqlSamples.Common;
+
+namespace AzureMySqlSamples.OutputBindingSamples
+{
+ public static class AddProduct
+ {
+ [FunctionName(nameof(AddProduct))]
+ public static IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addproduct")]
+ [FromBody] Product prod,
+ [MySql("Products", "MySqlConnectionString")] out Product product)
+ {
+ product = prod;
+ return new CreatedResult($"/api/addproduct", product);
+ }
+ }
+}
+```
+
+# [In-process model](#tab/in-process)
+
+More samples for the Azure Database for MySQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples/samples-csharp).
+
+This section contains the following example:
+
+* [HTTP trigger, write one record](#http-trigger-write-one-record-c)
+
+The examples refer to a `Product` class and a corresponding database table:
+
+```csharp
+namespace AzureMySqlSamples.Common
+{
+ public class Product
+ {
+ public int? ProductId { get; set; }
+
+ public string Name { get; set; }
+
+ public int Cost { get; set; }
+
+        // Equality is based on the product values; shown here in a compact, compilable form.
+        public override bool Equals(object obj) =>
+            obj is Product other && ProductId == other.ProductId && Name == other.Name && Cost == other.Cost;
+
+        public override int GetHashCode() => (ProductId, Name, Cost).GetHashCode();
+ }
+}
+```
+
+```sql
+DROP TABLE IF EXISTS Products;
+
+CREATE TABLE Products (
+ ProductId int PRIMARY KEY,
+ Name varchar(100) NULL,
+ Cost int NULL
+);
+```
+
+<a id="http-trigger-write-one-record-c"></a>
+
+### HTTP trigger, write one record
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body.
+
+```csharp
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.WebJobs.Extensions.MySql;
+using AzureMySqlSamples.Common;
+
+namespace AzureMySqlSamples.OutputBindingSamples
+{
+ public static class AddProduct
+ {
+ [FunctionName(nameof(AddProduct))]
+ public static IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addproduct")]
+ [FromBody] Product prod,
+ [MySql("Products", "MySqlConnectionString")] out Product product)
+ {
+ product = prod;
+ return new CreatedResult($"/api/addproduct", product);
+ }
+ }
+}
+```
++++++
+More samples for the Azure Database for MySQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples/samples-java).
+
+This section contains the following example:
+
+* [HTTP trigger, write a record to a table](#http-trigger-write-record-to-table-java)
+
+The examples refer to a `Product` class and a corresponding database table:
+
+```java
+package com.function.Common;
+
+import com.fasterxml.jackson.annotation.JsonProperty;
+
+public class Product {
+ @JsonProperty("ProductId")
+ private int ProductId;
+ @JsonProperty("Name")
+ private String Name;
+ @JsonProperty("Cost")
+ private int Cost;
+
+ public Product() {
+ }
+
+ public Product(int productId, String name, int cost) {
+ ProductId = productId;
+ Name = name;
+ Cost = cost;
+ }
+}
+```
+
+```sql
+DROP TABLE IF EXISTS Products;
+
+CREATE TABLE Products (
+ ProductId int PRIMARY KEY,
+ Name varchar(100) NULL,
+ Cost int NULL
+);
+```
+
+<a id="http-trigger-write-record-to-table-java"></a>
+### HTTP trigger, write a record to a table
+
+The following example shows a MySQL output binding in a Java function that adds a record to a table, using data provided in an HTTP POST request as a JSON body. The function parses the JSON body by using the Jackson `ObjectMapper`. The sample also declares a dependency on the [com.google.code.gson](https://github.com/google/gson) library:
+
+```xml
+<dependency>
+ <groupId>com.google.code.gson</groupId>
+ <artifactId>gson</artifactId>
+ <version>2.10.1</version>
+</dependency>
+```
+
+```java
+package com.function;
+
+import com.microsoft.azure.functions.HttpMethod;
+import com.microsoft.azure.functions.HttpRequestMessage;
+import com.microsoft.azure.functions.HttpResponseMessage;
+import com.microsoft.azure.functions.HttpStatus;
+import com.microsoft.azure.functions.OutputBinding;
+import com.microsoft.azure.functions.annotation.AuthorizationLevel;
+import com.microsoft.azure.functions.annotation.FunctionName;
+import com.microsoft.azure.functions.annotation.HttpTrigger;
+import com.microsoft.azure.functions.mysql.annotation.MySqlOutput;
+import com.fasterxml.jackson.core.JsonParseException;
+import com.fasterxml.jackson.databind.JsonMappingException;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.function.Common.Product;
+
+import java.io.IOException;
+import java.util.Optional;
+
+public class AddProduct {
+ @FunctionName("AddProduct")
+ public HttpResponseMessage run(
+ @HttpTrigger(
+ name = "req",
+ methods = {HttpMethod.POST},
+ authLevel = AuthorizationLevel.ANONYMOUS,
+ route = "addproduct")
+ HttpRequestMessage<Optional<String>> request,
+ @MySqlOutput(
+ name = "product",
+ commandText = "Products",
+ connectionStringSetting = "MySqlConnectionString")
+ OutputBinding<Product> product) throws JsonParseException, JsonMappingException, IOException {
+
+ String json = request.getBody().get();
+ ObjectMapper mapper = new ObjectMapper();
+ Product p = mapper.readValue(json, Product.class);
+ product.setValue(p);
+
+ return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "application/json").body(product).build();
+ }
+}
+```
+++
+More samples for the Azure Database for MySQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples).
+
+This section contains the following example:
+
+* [HTTP trigger, write records to a table](#http-trigger-write-records-to-table-javascript)
+
+The examples refer to a database table:
+
+```sql
+DROP TABLE IF EXISTS Products;
+
+CREATE TABLE Products (
+ ProductId int PRIMARY KEY,
+ Name varchar(100) NULL,
+ Cost int NULL
+);
+```
+
+<a id="http-trigger-write-records-to-table-javascript"></a>
+### HTTP trigger, write records to a table
+
+The following example shows a MySQL output binding that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+++
+# [Model v4](#tab/nodejs-v4)
+
+```typescript
+import { app, HttpRequest, HttpResponseInit, InvocationContext, output } from '@azure/functions';
+
+const mysqlOutput = output.generic({
+    type: 'mysql',
+    commandText: 'Products',
+    connectionStringSetting: 'MySqlConnectionString',
+});
+
+// Upsert the product, which will insert it into the Products table if the primary key (ProductId) for that item doesn't exist.
+// If it does then update it to have the new name and cost.
+app.http('AddProduct', {
+    methods: ['POST'],
+    authLevel: 'anonymous',
+    extraOutputs: [mysqlOutput],
+    handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
+        // Note that this expects the body to be a JSON object or array of objects which have a property
+        // matching each of the columns in the table to upsert to.
+        const product = await request.json();
+        context.extraOutputs.set(mysqlOutput, product);
+
+        return {
+            status: 201,
+            body: JSON.stringify(product)
+        };
+    }
+});
+```
+
+# [Model v3](#tab/nodejs-v3)
+
+TypeScript samples aren't documented for model v3.
+++++
+# [Model v4](#tab/nodejs-v4)
+
+```javascript
+const { app, output } = require('@azure/functions');
+
+const mysqlOutput = output.generic({
+ type: 'mysql',
+ commandText: 'Products',
+ connectionStringSetting: 'MySqlConnectionString'
+})
+
+// Upsert the product, which will insert it into the Products table if the primary key (ProductId) for that item doesn't exist.
+// If it does then update it to have the new name and cost.
+app.http('AddProduct', {
+ methods: ['POST'],
+ authLevel: 'anonymous',
+ extraOutputs: [mysqlOutput],
+ handler: async (request, context) => {
+ // Note that this expects the body to be a JSON object or array of objects which have a property
+ // matching each of the columns in the table to upsert to.
+ const product = await request.json();
+ context.extraOutputs.set(mysqlOutput, product);
+
+ return {
+ status: 201,
+ body: JSON.stringify(product)
+ };
+ }
+});
+```
++
+# [Model v3](#tab/nodejs-v3)
+
+The following example is binding data in the function.json file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "req",
+ "direction": "in",
+ "type": "httpTrigger",
+ "methods": [
+ "post"
+ ],
+ "route": "addproduct"
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "product",
+ "type": "mysql",
+ "direction": "out",
+ "commandText": "Products",
+ "connectionStringSetting": "MySqlConnectionString"
+ }
+ ],
+ "disabled": false
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following example is sample JavaScript code:
+
+```javascript
+module.exports = async function (context, req) {
+    context.log('JavaScript HTTP trigger and MySQL output binding function processed a request.');
+
+    // Assign the JSON request body to the MySQL output binding, which is named "product" in function.json.
+    context.bindings.product = req.body;
+
+    return {
+        status: 201,
+        body: req.body
+    };
+}
+```
++++
+More samples for the Azure Database for MySQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples/samples-powershell).
+
+This section contains the following example:
+
+* [HTTP trigger, write records to a table](#http-trigger-write-records-to-table-powershell)
+
+The examples refer to a database table:
+
+```sql
+DROP TABLE IF EXISTS Products;
+
+CREATE TABLE Products (
+ ProductId int PRIMARY KEY,
+ Name varchar(100) NULL,
+ Cost int NULL
+);
+```
+
+<a id="http-trigger-write-records-to-table-powershell"></a>
+### HTTP trigger, write records to a table
+
+The following example shows a MySQL output binding in a function.json file and a PowerShell function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+
+The following example is binding data in the function.json file:
+
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "Request",
+ "direction": "in",
+ "type": "httpTrigger",
+ "methods": [
+ "post"
+ ],
+ "route": "addproduct"
+ },
+ {
+ "name": "response",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "product",
+ "type": "mysql",
+ "direction": "out",
+ "commandText": "Products",
+ "connectionStringSetting": "MySqlConnectionString"
+ }
+ ],
+ "disabled": false
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following example is sample PowerShell code for the function in the `run.ps1` file:
+
+```powershell
+using namespace System.Net
+
+# Trigger binding data passed in via param block
+param($Request, $TriggerMetadata)
+
+# Write to the Azure Functions log stream.
+Write-Host "PowerShell function with MySql Output Binding processed a request."
+
+# Note that this expects the body to be a JSON object or array of objects
+# which have a property matching each of the columns in the table to upsert to.
+$req_body = $Request.Body
+
+# Assign the value we want to pass to the MySql Output binding.
+# The -Name value corresponds to the name property in the function.json for the binding
+Push-OutputBinding -Name product -Value $req_body
+
+# Assign the value to return as the HTTP response.
+# The -Name value matches the name property in the function.json for the binding
+Push-OutputBinding -Name response -Value ([HttpResponseContext]@{
+ StatusCode = [HttpStatusCode]::OK
+ Body = $req_body
+})
+```
++++
+More samples for the Azure Database for MySQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples/samples-python).
+
+This section contains the following example:
+
+* [HTTP trigger, write records to a table](#http-trigger-write-records-to-table-python)
+
+The examples refer to a database table:
+
+```sql
+DROP TABLE IF EXISTS Products;
+
+CREATE TABLE Products (
+ ProductId int PRIMARY KEY,
+ Name varchar(100) NULL,
+ Cost int NULL
+);
+```
+
+> [!NOTE]
+> Azure Functions Python library version 1.22.0b4 must be used for Python.
+>
++
+<a id="http-trigger-write-records-to-table-python"></a>
+### HTTP trigger, write records to a table
+
+The following example shows a MySQL output binding in a function.json file and a Python function that adds records to a table, using data provided in an HTTP POST request as a JSON body.
+
+# [v2](#tab/python-v2)
+
+The following example is sample Python code for the function_app.py file:
+
+```python
+import json
+
+import azure.functions as func
+
+app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+@app.generic_trigger(arg_name="req", type="httpTrigger", route="addproduct")
+@app.generic_output_binding(arg_name="$return", type="http")
+@app.generic_output_binding(arg_name="r", type="mysql",
+ command_text="Products",
+ connection_string_setting="MySqlConnectionString")
+def mysql_output(req: func.HttpRequest, r: func.Out[func.MySqlRow]) \
+ -> func.HttpResponse:
+ body = json.loads(req.get_body())
+ row = func.MySqlRow.from_dict(body)
+ r.set(row)
+
+ return func.HttpResponse(
+ body=req.get_body(),
+ status_code=201,
+ mimetype="application/json"
+ )
+```
+
+# [v1](#tab/python-v1)
+
+The following example is binding data in the function.json file:
+
+```json
+{
+ "scriptFile": "__init__.py",
+ "bindings": [
+ {
+ "authLevel": "function",
+ "name": "req",
+ "direction": "in",
+ "type": "httpTrigger",
+ "methods": [
+ "post"
+ ],
+ "route": "addproduct"
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ },
+ {
+ "name": "product",
+ "type": "mysql",
+ "direction": "out",
+ "commandText": "Products",
+ "connectionStringSetting": "MySqlConnectionString"
+ }
+ ],
+ "disabled": false
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+The following example is sample Python code:
+
+```python
+import json
+import azure.functions as func
+
+def main(req: func.HttpRequest, product: func.Out[func.MySqlRow]) -> func.HttpResponse:
+ """Upsert the product, which will insert it into the Products table if the primary key
+ (ProductId) for that item doesn't exist. If it does then update it to have the new name
+ and cost.
+ """
+
+ # Note that this expects the body to be a JSON object which
+ # have a property matching each of the columns in the table to upsert to.
+ body = json.loads(req.get_body())
+ row = func.MySqlRow.from_dict(body)
+ product.set(row)
+
+ return func.HttpResponse(
+ body=req.get_body(),
+ status_code=201,
+ mimetype="application/json"
+ )
+```
+++++
+## Attributes
+
+The [C# library](functions-dotnet-class-library.md) uses the `MySqlAttribute` attribute to declare the MySQL bindings on the function. The attribute has the following properties:
+
+| Attribute property |Description|
+|||
+| **CommandText** | Required. The name of the table being written to by the binding. |
+| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable. |
+++
+## Annotations
+
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@MySqlOutput` annotation on parameters whose value is written to Azure Database for MySQL. This annotation supports the following elements:
+
+| Element |Description|
+|||
+| **commandText** | Required. The name of the table being written to by the binding. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable.|
+|**name** | Required. The unique name of the function binding. |
+++
+## Configuration
+
+# [Model v4](#tab/nodejs-v4)
+
+The following table explains the properties that you can set on the `options` object passed to the `output.generic()` method.
+
+| Property | Description |
+||-|
+| **commandText** | Required. The name of the table being written to by the binding. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable. |
+
+# [Model v3](#tab/nodejs-v3)
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+| Property | Description |
+||-|
+|**type** | Required. Must be set to `mysql`.|
+|**direction** | Required. Must be set to `out`. |
+|**name** | Required. The name of the variable that represents the entity in function code. |
+| **commandText** | Required. The name of the table being written to by the binding. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable. |
++++
+## Configuration
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Required. Must be set to `mysql`.|
+|**direction** | Required. Must be set to `out`. |
+|**name** | Required. The name of the variable that represents the entity in function code. |
+| **commandText** | Required. The name of the table being written to by the binding. |
+| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable. |
+++
+> [!NOTE]
+> The output binding supports all special characters in column names, including `$`, `` ` ``, `-`, and `_`, as described in the MySQL [identifier documentation](https://dev.mysql.com/doc/refman/8.0/en/identifiers.html).
+>
+> Whether you can declare class members whose names contain such special characters depends on the programming language. For example, C# places restrictions on [identifier names](https://learn.microsoft.com/dotnet/csharp/fundamentals/coding-style/identifier-names).
+>
+> To cover column names with special characters that can't be expressed as class members, use a `JObject` as the output type. For a detailed example, see [AddProductJObject.cs](https://github.com/Azure/azure-functions-mysql-extension/blob/main/samples/samples-csharp/OutputBindingSamples/AddProductJObject.cs) on GitHub.
+>
+
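+As a rough illustration of the `JObject` approach, the following in-process C# sketch is loosely modeled on the in-process example earlier in this article; the route, class name, and property values are illustrative, and the linked `AddProductJObject.cs` sample is the authoritative version.
+
+```csharp
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.WebJobs.Extensions.MySql;
+using Newtonsoft.Json.Linq;
+
+public static class AddProductJObjectSketch
+{
+    [FunctionName(nameof(AddProductJObjectSketch))]
+    public static IActionResult Run(
+        [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addproduct-jobject")] HttpRequest req,
+        [MySql("Products", "MySqlConnectionString")] out JObject product)
+    {
+        // JObject property names can contain characters that aren't valid in C# identifiers,
+        // so they can map directly to column names that use special characters.
+        product = new JObject
+        {
+            ["ProductId"] = 1,
+            ["Name"] = "Sample product",
+            ["Cost"] = 10
+        };
+        return new CreatedResult("/api/addproduct-jobject", product);
+    }
+}
+```
+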
+## Usage
+
+The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the connection string to Azure Database for MySQL.
+
+If an exception occurs when a MySQL output binding is executed, the function code stops executing. This might result in an error code being returned, such as an HTTP trigger returning a 500 error code.
+
+## Next steps
+
+- [Read data from a database (Input binding)](./functions-bindings-azure-mysql-input.md)
+
azure-functions Functions Bindings Azure Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-mysql.md
+
+ Title: Azure Database for MySQL bindings for Functions
+description: Understand how to use Azure Database for MySQL bindings in Azure Functions.
+++
+ - build-2023
+ - devx-track-extended-java
+ - devx-track-js
+ - devx-track-python
+ - ignite-2023
Last updated : 10/26/2024++
+zone_pivot_groups: programming-languages-set-functions-lang-workers
++
+# Azure Database for MySQL bindings for Azure Functions overview (Preview)
+
+This set of articles explains how to work with [Azure Database for MySQL](/azure/mysql/index) bindings in Azure Functions. During the preview, Azure Functions supports input and output bindings for Azure Database for MySQL.
+
+| Action | Type |
+|||
+| Read data from a database | [Input binding](./functions-bindings-azure-mysql-input.md) |
+| Save data to a database |[Output binding](./functions-bindings-azure-mysql-output.md) |
++
+## Install extension
+
+The extension NuGet package you install depends on the C# mode you're using in your function app:
+
+# [Isolated worker model](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
+
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.MySql/1.0.3-preview/).
+
+```bash
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.MySql --version 1.0.3-preview
+```
+
+# [In-process model](#tab/in-process)
+
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.MySql/1.0.3-preview).
+
+```bash
+dotnet add package Microsoft.Azure.WebJobs.Extensions.MySql --version 1.0.3-preview
+```
+++++++
+## Install bundle
+
+The MySQL bindings extension is part of the v4 [extension bundle](https://learn.microsoft.com/azure/azure-functions/functions-bindings-register#extension-bundles), which is specified in your host.json project file.
++
+### Preview Bundle v4.x
+
+You can use the preview extension bundle by adding or replacing the following code in your `host.json` file:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
++++++
+## Functions runtime
++
+## Install bundle
+
+The MySQL bindings extension is part of the v4 [extension bundle](https://learn.microsoft.com/azure/azure-functions/functions-bindings-register#extension-bundles), which is specified in your host.json project file.
++
+### Preview Bundle v4.x
+
+You can use the preview extension bundle by adding or replacing the following code in your `host.json` file:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
+++++++
+## Install bundle
+
+The MySQL bindings extension is part of the v4 [extension bundle](https://learn.microsoft.com/azure/azure-functions/functions-bindings-register#extension-bundles), which is specified in your host.json project file.
+
+### Preview Bundle v4.x
+
+You can use the preview extension bundle by adding or replacing the following code in your `host.json` file:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
+++
+## Update packages
+
+For Java functions, add the preview MySQL annotation library by updating the `pom.xml` file in your Java Azure Functions project, as seen in the following snippet:
+
+```xml
+<dependency>
+    <groupId>com.microsoft.azure.functions</groupId>
+    <artifactId>azure-functions-java-library-mysql</artifactId>
+    <version>1.0.1-preview</version>
+</dependency>
+```
++
+## MySQL connection string
+
+Azure Database for MySQL bindings for Azure Functions have a required connection string property on all bindings. The binding passes the connection string to the MySql.Data.MySqlClient library and supports the connection string format defined in the [MySqlClient ConnectionString documentation](https://dev.mysql.com/doc/connector-net/en/connector-net-connections-string.html). Notable keywords include the following; an example connection string appears after the list.
+
+- `server`: The host on which the server instance is running. The value can be a host name, IPv4 address, or IPv6 address.
+- `uid`: The MySQL user account to use for authentication.
+- `pwd`: The password to use for authentication.
+- `database`: The default database for the connection. If no database is specified, the connection has no default database.
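+
+The following is an example connection string value assembled from these keywords; the server, user, password, and database names are placeholders:
+
+```
+server=<servername>.mysql.database.azure.com;uid=<username>;pwd=<password>;database=<databasename>
+```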
+
+## Considerations
+
+- The Azure Database for MySQL bindings support version 4.x and later of the Functions runtime.
+- Source code for the Azure Database for MySQL bindings can be found in [this GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/src).
+- The bindings require connectivity to an Azure Database for MySQL instance.
+- Output bindings against tables with columns of spatial data types `GEOMETRY`, `POINT`, or `POLYGON` aren't supported and data upserts will fail.
+
+## Samples
+
+In addition to the samples for C#, Java, JavaScript, PowerShell, and Python available in the [Azure Database for MySQL bindings GitHub repository](https://github.com/Azure/azure-functions-mysql-extension/tree/main/samples), more are available in Azure Samples.
++
+## Next steps
+
+- [Read data from a database (Input binding)](./functions-bindings-azure-mysql-input.md)
+- [Save data to a database (Output binding)](./functions-bindings-azure-mysql-output.md)
+
azure-functions Functions Container Apps Hosting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-container-apps-hosting.md
Title: Azure Container Apps hosting of Azure Functions description: Learn about how you can use Azure Functions on Azure Container Apps to host and manage containerized function apps in Azure. Previously updated : 07/04/2024 Last updated : 11/05/2024 # Customer intent: As a cloud developer, I want to learn more about hosting my function apps in Linux containers managed by Azure Container Apps.
If you run into any issues with these managed resource groups, you should contac
Keep in mind the following considerations when deploying your function app containers to Container Apps: + While all triggers can be used, only the following triggers can dynamically scale (from zero instances) when running in a Container Apps environment:
- + HTTP
+ + Azure Event Grid
+ + Azure Event Hubs
+ + Azure Blob storage (event-based)
+ Azure Queue Storage + Azure Service Bus
- + Azure Event Hubs
+ + Durable Functions (MSSQL storage provider)
+ + HTTP
+ Kafka + Timer + These limitations apply to Kafka triggers: + The protocol value of `ssl` isn't supported when hosted on Container Apps. Use a [different protocol value](functions-bindings-kafka-trigger.md?pivots=programming-language-csharp#attributes). + For a Kafka trigger to dynamically scale when connected to Event Hubs, the `username` property must resolve to an application setting that contains the actual username value. When the default `$ConnectionString` value is used, the Kafka trigger won't be able to cause the app to scale dynamically. + For the built-in Container Apps [policy definitions](../container-apps/policy-reference.md#policy-definitions), currently only environment-level policies apply to Azure Functions containers.
-+ You can use managed identities both for [trigger and binding connections](functions-reference.md#configure-an-identity-based-connection) and for [deployments from an Azure Container Registry](https://azure.github.io/AppService/2021/07/03/Linux-container-from-ACR-with-private-endpoint.html#using-user-assigned-managed-identity).
-+ When either your function app and Azure Container Registry-based deployment use managed identity-based connections, you can't modify the CPU and memory allocation settings in the portal. You must instead [use the Azure CLI](functions-how-to-custom-container.md?tabs=acr%2Cazure-cli2%2Cazure-cli&pivots=container-apps#container-apps-workload-profiles).
++ You can use managed identities for these connections:
+ + [Deployment from an Azure Container Registry](functions-deploy-container-apps.md?tabs=acr#create-and-configure-a-function-app-on-azure-with-the-image)
+ + [Triggers and bindings](functions-reference.md#configure-an-identity-based-connection)
+ + [Required host storage connection](functions-identity-based-connections-tutorial.md)
+ You currently can't move a Container Apps hosted function app deployment between resource groups or between subscriptions. Instead, you would have to recreate the existing containerized app deployment in a new resource group, subscription, or region. + When using Container Apps, you don't have direct access to the lower-level Kubernetes APIs. + The `containerapp` extension conflicts with the `appservice-kube` extension in Azure CLI. If you have previously published apps to Azure Arc, run `az extension list` and make sure that `appservice-kube` isn't installed. If it is, you can remove it by running `az extension remove -n appservice-kube`.
azure-functions Functions Core Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-core-tools-reference.md
The following Kubernetes deployment options are available:
Core Tools uses the local Docker CLI to build and publish the image. Make sure your Docker is already installed locally. Run the `docker login` command to connect to your account.
-To learn more, see [Deploying a function app to Kubernetes](functions-kubernetes-keda.md#deploying-a-function-app-to-kubernetes).
+Azure Functions supports hosting your containerized functions either in Azure Container Apps or in Azure Functions. Running your containers directly in a Kubernetes cluster or in Azure Kubernetes Service (AKS) isn't officially supported by Azure Functions. To learn more, see [Linux container support in Azure Functions](container-concepts.md).
## `func kubernetes install`
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
Title: Automate function app resource deployment to Azure
description: Learn how to build, validate, and use a Bicep file or an Azure Resource Manager template to deploy your function app and related Azure resources. ms.assetid: d20743e3-aab6-442c-a836-9bcea09bfd32 Previously updated : 08/22/2024 Last updated : 10/21/2024 zone_pivot_groups: functions-hosting-plan
You need to set the connection string of this storage account as the `AzureWebJo
Deployments to an app running in the Flex Consumption plan require a container in Azure Blob Storage as the deployment source. You can use either the default storage account or you can specify a separate storage account. For more information, see [Configure deployment settings](flex-consumption-how-to.md#configure-deployment-settings).
-This deployment account must already be configured when you create your app, including the specific container used for deployments. To learn more about configuring deployments, see [Deployment sources](#deployment-sources-2).
+This deployment account must already be configured when you create your app, including the specific container used for deployments. To learn more about configuring deployments, see [Deployment sources](#deployment-sources).
This example shows how to create a container in the storage account:
For the snippet in context, see [this deployment example](https://github.com/Azu
-Other deployment settings are [configured with the app itself](#deployment-sources-2).
+Other deployment settings are [configured with the app itself](#deployment-sources).
::: zone-end ### Enable storage logs
For a complete end-to-end example, see this [azuredeploy.json template](https://
::: zone-end ## Deployment sources
+You can use the [`linuxFxVersion`](./functions-app-settings.md#linuxfxversion) site setting to request that a specific Linux container be deployed to your app when it's created. More settings are required to access images in a private repository. For more information, see [Application configuration](#application-configuration).
Your Bicep file or ARM template can optionally also define a deployment for your function code, which could include these methods: + [Zip deployment package](./deployment-zip-push.md) + [Linux container](./functions-how-to-custom-container.md) ::: zone-end ::: zone pivot="flex-consumption-plan"
-## Deployment sources
- In the Flex Consumption plan, your project code is deployed from a zip-compressed package published to a Blob storage container. For more information, see [Deployment](flex-consumption-plan.md#deployment). The specific storage account and container used for deployments, the authentication method, and credentials are set in the `functionAppConfig.deployment.storage` element of the `properties` for the site. The container and any application settings must exist when the app is created. For an example of how to create the storage container, see [Deployment container](#deployment-container). This example uses a system assigned managed identity to access the specified blob storage container, which is created elsewhere in the deployment:
For a complete reference example, see [this ARM template](https://github.com/Azu
When using a connection string instead of managed identities, you need to instead set the `authentication.type` to `StorageAccountConnectionString` and set `authentication.storageAccountConnectionStringName` to the name of the application setting that contains the deployment storage account connection string. ::: zone-end ::: zone pivot="consumption-plan"
-## Deployment sources
- Your Bicep file or ARM template can optionally also define a deployment for your function code using a [zip deployment package](./deployment-zip-push.md). ::: zone-end ::: zone pivot="dedicated-plan,premium-plan,consumption-plan"
These site settings are required on the `siteConfig` property:
+ [`alwaysOn`](functions-app-settings.md#alwayson) + [`linuxFxVersion`](functions-app-settings.md#linuxfxversion) ::: zone-end
+These site settings are required only when using managed identities to obtain the image from an Azure Container Registry instance:
+++ [`AcrUseManagedIdentityCreds`](functions-app-settings.md#acrusemanagedidentitycreds)++ [`AcrUserManagedIdentityID`](functions-app-settings.md#acrusermanagedidentityid) ::: zone pivot="consumption-plan,premium-plan,dedicated-plan" These application settings are required (or recommended) for a specific operating system and hosting option: ::: zone-end
azure-functions Streaming Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/streaming-logs.md
While developing an application, you often want to see what's being written to t
There are two ways to view a stream of log files being generated by your function executions.
-* **Built-in log streaming**: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during [local development](functions-develop-local.md) and when you use the **Test** tab in the portal. All log-based information is displayed. For more information, see [Stream logs](../app-service/troubleshoot-diagnostic-logs.md#stream-logs). This streaming method supports only a single instance, and can't be used with an app running on Linux in a Consumption plan. When your function is scaled to multiple instances, data from other instances isn't shown using this method.
+* **Live Metrics Stream (recommended)**: when your function app is [connected to Application Insights](configure-monitoring.md#enable-application-insights-integration), you can view log data and other metrics in near real-time in the Azure portal using [Live Metrics Stream](/azure/azure-monitor/app/live-stream). Use this method when monitoring functions running on multiple instances; it supports all plan types. This method uses [sampled data](configure-monitoring.md#configure-sampling).
-* **Live Metrics Stream**: when your function app is [connected to Application Insights](configure-monitoring.md#enable-application-insights-integration), you can view log data and other metrics in near real-time in the Azure portal using [Live Metrics Stream](/azure/azure-monitor/app/live-stream). Use this method when monitoring functions running on multiple-instances and supports all plan types. This method uses [sampled data](configure-monitoring.md#configure-sampling).
+* **Built-in log streaming**: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during [local development](functions-develop-local.md) and when you use the **Test** tab in the portal. All log-based information is displayed. For more information, see [Stream logs](../app-service/troubleshoot-diagnostic-logs.md#stream-logs). This streaming method supports only a single instance, and can't be used with an app running on Linux in a Consumption plan. When your function is scaled to multiple instances, data from other instances isn't shown using this method.
Log streams can be viewed both in the portal and in most local development environments.
azure-netapp-files Backup Configure Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-manual.md
Previously updated : 07/03/2024 Last updated : 11/05/2024 # Configure manual backups for Azure NetApp Files
The following list summarizes manual backup behaviors:
>[!NOTE] >The option to disable backups is no longer available beginning with the 2023.09 API version. If your workflows require the disable function, you can still use an API version earlier than 2023.09 or the Azure CLI. -- ## Requirements * Azure NetApp Files requires you to assign a backup vault before allowing backup creation on a volume. To configure a backup vault, see [Manage backup vaults](backup-vault-manage.md) for more information.
azure-netapp-files Backup Configure Policy Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-policy-based.md
Previously updated : 05/27/2024 Last updated : 11/05/2024 # Configure policy-based backups for Azure NetApp Files
Assigning a policy creates a baseline snapshot that is the current state of the
[!INCLUDE [consideration regarding deleting backups after deleting resource or subscription](includes/disable-delete-backup.md)] - ## Configure a backup policy A backup policy enables a volume to be protected on a regularly scheduled interval. It does not require snapshot policies to be configured. Backup policies will continue the daily cadence based on the time of day when the backup policy is linked to the volume, using the time zone of the Azure region where the volume exists. Weekly schedules are preset to occur each Monday after the daily cadence. Monthly schedules are preset to occur on the first day of each calendar month after the daily cadence. If backups are needed at a specific time/day, consider using [manual backups](backup-configure-manual.md).
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
description: Learn about the platform updates to Azure VMware Solution.
Previously updated : 8/20/2024 Last updated : 11/6/2024 # What's new in Azure VMware Solution Microsoft regularly applies important updates to the Azure VMware Solution for new features and software lifecycle management. You should receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](architecture-private-clouds.md#host-maintenance-and-lifecycle-management).
+## November 2024
+
+Azure VMware Solution is now ready to update all existing Azure Commercial customers from vSphere 7 to vSphere 8 (Stretched Clusters & Azure Government still pending). Over the coming months, all customers will receive a scheduling notice for this upgrade. If you want to prioritize your vSphere 8 upgrade, open a [Service Request](https://rc.portal.azure.com/#create/Microsoft.Support) with Microsoft requesting a "Priority vSphere 8 upgrade" for your private cloud. [Learn more](architecture-private-clouds.md#vmware-software-versions)
+
+All new Azure VMware Solution private clouds are being deployed with VMware vSphere 8.0 version in [Microsoft Azure Government](https://azure.microsoft.com/explore/global-infrastructure/government/#why-azure). [Learn more](architecture-private-clouds.md#vmware-software-versions)
+ ## October 2024
-The VMware Cloud Foundations (VCF) license portability feature on Azure VMware Solution is to modernize your VMware workload by bringing your VCF entitlements to Azure VMware Solution and take advantage of incredible cost savings.
+The VMware Cloud Foundation (VCF) license portability feature on Azure VMware Solution allows you to bring your VCF entitlement to Azure VMware Solution and take advantage of potential cost savings.
## August 2024
-All new Azure VMware Solution private clouds are being deployed with VMware vSphere 8.0 version in Azure Commercial. [Learn more](architecture-private-clouds.md#vmware-software-versions)
+All new Azure VMware Solution private clouds are being deployed with VMware vSphere 8.0 version in Azure Commercial (Stretched Clusters excluded). [Learn more](architecture-private-clouds.md#vmware-software-versions)
Azure VMware Solution was approved to be added as a service within the [DoD SRG Impact Level 4 (IL4)](/azure/azure-government/compliance/azure-services-in-fedramp-auditscope#azure-government-services-by-audit-scope) Provisional Authorization (PA) in [Microsoft Azure Government](https://azure.microsoft.com/explore/global-infrastructure/government/#why-azure).
batch Monitor Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/monitor-application-insights.md
description: Learn how to instrument an Azure Batch .NET application using the A
ms.devlang: csharp Previously updated : 06/13/2024 Last updated : 11/06/2024 # Monitor and debug an Azure Batch .NET application with Application Insights
This article shows how to add and configure the Application Insights library int
A sample C# solution with code to accompany this article is available on [GitHub](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ApplicationInsights). This example adds Application Insights instrumentation code to the [TopNWords](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/TopNWords) example. If you're not familiar with that example, try building and running TopNWords first. Doing this will help you understand a basic Batch workflow of processing a set of input blobs in parallel on multiple compute nodes.
-> [!TIP]
-> As an alternative, configure your Batch solution to display Application Insights data such as VM performance counters in Batch Explorer. [Batch Explorer](https://github.com/Azure/BatchExplorer) is a free, rich-featured, standalone client tool to help create, debug, and monitor Azure Batch applications. Download an [installation package](https://azure.github.io/BatchExplorer/) for Mac, Linux, or Windows. See the [batch-insights repo](https://github.com/Azure/batch-insights) for quick steps to enable Application Insights data in Batch Explorer.
- ## Prerequisites - [Visual Studio 2017 or later](https://www.visualstudio.com/vs) - [Batch account and linked storage account](batch-account-create-portal.md) - [Application Insights resource](/previous-versions/azure/azure-monitor/app/create-new-resource). Use the Azure portal to create an Application Insights *resource*. Select the *General* **Application type**. - Copy the [instrumentation key](/previous-versions/azure/azure-monitor/app/create-new-resource#copy-the-instrumentation-key) from the Azure portal. You'll need this value later.
-
+ > [!NOTE] > You may be [charged](https://azure.microsoft.com/pricing/details/application-insights/) for data stored in Application Insights. This includes the diagnostic and monitoring data discussed in this article.
private static readonly List<string> AIFilesToUpload = new List<string>()
"Microsoft.AI.PerfCounterCollector.dll", "Microsoft.AI.ServerTelemetryChannel.dll", "Microsoft.AI.WindowsServer.dll",
-
+ // custom telemetry initializer assemblies "Microsoft.Azure.Batch.Samples.TelemetryInitializer.dll", };
for (int i = 1; i <= topNWordsConfiguration.NumberOfTasks; i++)
accountSettings.StorageAccountName, accountSettings.StorageAccountKey));
- //This is the list of files to stage to a container -- for each job, one container is created and
+ //This is the list of files to stage to a container -- for each job, one container is created and
//files all resolve to Azure Blobs by their name (so two tasks with the same named file will create just 1 blob in //the container). task.FilesToStage = new List<IFileStagingProvider>
for (int i = 1; i <= topNWordsConfiguration.NumberOfTasks; i++)
foreach (FileToStage stagedFile in aiStagedFiles) { task.FilesToStage.Add(stagedFile);
- }
+ }
task.RunElevated = false; tasksToRun.Add(task); }
To view trace logs in your Application Insights resource, click **Live Stream**
### View trace logs
-To view trace logs in your Applications Insights resource, click **Search**. This view shows a list of diagnostic data captured by Application Insights including traces, events, and exceptions.
+To view trace logs in your Application Insights resource, click **Search**. This view shows a list of diagnostic data captured by Application Insights including traces, events, and exceptions.
The following screenshot shows how a single trace for a task is logged and later queried for debugging purposes.
To create a sample chart:
## Monitor compute nodes continuously You may have noticed that all metrics, including performance counters, are only logged when the tasks are running. This behavior is useful because it limits the amount of
-data that Application Insights logs. However, there are cases when you would always like to monitor the compute nodes. For example, they might be running background work which is not scheduled via the Batch service. In this case, set up a monitoring process to run for the life of the compute node.
+data that Application Insights logs. However, there are cases when you would always like to monitor the compute nodes. For example, they might be running background work which is not scheduled via the Batch service. In this case, set up a monitoring process to run for the life of the compute node.
One way to achieve this behavior is to spawn a process that loads the Application Insights library and runs in the background. In the example, the start task loads the binaries on the machine and keeps a process running indefinitely. Configure the Application Insights configuration file for this process to emit additional data you're interested in, such as performance counters.
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
When you hit service limitations, you receive an HTTP status code 429 (Too many
- Reduce the frequency of calls. - Avoid immediate retries because all requests accrue against your usage limits.
-You can find more general guidance on how to set up your service architecture to handle throttling and limitations in the [Azure Architecture](/azure/architecture) documentation for [throttling patterns](/azure/architecture/patterns/throttling). Throttling limits can be increased through a request to Azure Support.
+You can find more general guidance on how to set up your service architecture to handle throttling and limitations in the [Azure Architecture](/azure/architecture) documentation for [throttling patterns](/azure/architecture/patterns/throttling). To increase throttling limits, you need to make a request to [Azure Support](../support.md).
1. Open the [Azure portal](https://ms.portal.azure.com/) and sign in. 2. Select [Help+Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
Number purchase limits can be increased through a request to Azure Support.
8. Add **Additional details** as needed, then click **Next**. 9. At **Review + create** check the information, make changes as needed, then click **Create**. -- ## Identity | Operation | Timeframes (seconds) | Limit (number of requests) |
For more information, see the [identity concept overview](./authentication.md) p
When sending or receiving a high volume of messages, you might receive a ```429``` error. This error indicates you're hitting the service limitations, and your messages are queued to be sent once the number of requests is below the threshold.
-### Rate Limits for SMS
+Rate Limits for SMS:
|Operation|Number Type |Scope|Timeframe (s)| Limit (request #) | Message units per minute| |||--|-|-|-|
For more information on the SMS SDK and service, see the [SMS SDK overview](./sm
## Email
-You can send a limited number of email messages. If you exceed the [email rate limits](#rate-limits-for-email) for your subscription, your requests are rejected. You can attempt these requests again, after the Retry-After time passes. Take action before reaching the limit by requesting to raise your sending volume limits if needed.
-
-The Azure Communication Services email service is designed to support high throughput. However, the service imposes initial rate limits to help customers onboard smoothly and avoid some of the issues that can occur when switching to a new email service.
-
-We recommend gradually increasing your email volume using Azure Communication Services Email over a period of two to four weeks, while closely monitoring the delivery status of your emails. This gradual increase enables third-party email service providers to adapt to the change in IP for your domain's email traffic. The gradual change gives you time to protect your sender reputation and maintain the reliability of your email delivery.
-
-Azure Communication Services email service supports high volume up to 1-2 million messages per hour. High throughput can be enabled based on several factors, including:
-- Customer peak traffic-- Business needs-- Ability to manage failure rates-- Domain reputation-
-### Failure Rate Requirements
-
-To enable a high email quota, your email failure rate must be less than one percent (1%). If your failure rate is high, you must resolve the issues before requesting a quota increase.
-Customers are expected to actively monitor their failure rates.
+You can send a limited number of email messages. If you exceed the following limits for your subscription, your requests are rejected. You can attempt these requests again after the Retry-After time passes. Take action before reaching the limit by requesting to raise your sending volume limits if needed.
-If the failure rate increases after a quota increase, Azure Communication Services will contact the customer for immediate action and a resolution timeline. In extreme cases, if the failure rate isn't managed within the specified timeline, Azure Communication Services may reduce or suspend service until the issue is resolved.
-
-#### Related articles
-
-Azure Communication Services provides rich logs and analytics to help monitor and manage failure rates. For more information, see the following articles:
--- [Improve sender reputation in Azure Communication Services email](./email/sender-reputation-managed-suppression-list.md)-- [Email Insights](./analytics/insights/email-insights.md)-- [Enable logs via Diagnostic Settings in Azure Monitor](./analytics/enable-logging.md)-- [Quickstart: Handle Email events](../quickstarts/email/handle-email-events.md)-- [Quickstart: Manage domain suppression lists in Azure Communication Services using the management client libraries](../quickstarts/email/manage-suppression-list-management-sdks.md)-
-> [!NOTE]
-> To request higher limits, follow the instructions at [Quota increase for email domains](./email/email-quota-increase.md). Higher quotas are only available for verified custom domains, not Azure-managed domains.
+The Azure Communication Services email service is designed to support high throughput. However, the service imposes initial rate limits to help customers onboard smoothly and avoid some of the issues that can occur when switching to a new email service. We recommend gradually increasing your email volume using Azure Communication Services Email over a period of two to four weeks, while closely monitoring the delivery status of your emails. This gradual increase enables third-party email service providers to adapt to the change in IP for your domain's email traffic. The gradual change gives you time to protect your sender reputation and maintain the reliability of your email delivery.
### Rate Limits for Email
+We approve higher limits for customers based on use case requirements, domain reputation, traffic patterns, and failure rates. To request higher limits, follow the instructions at [Quota increase for email domains](./email/email-quota-increase.md). Higher quotas are only available for verified custom domains, not Azure-managed domains.
+ [Custom Domains](../quickstarts/email/add-custom-verified-domains.md) | Operation | Scope | Timeframe (minutes) | Limit (number of emails) |
To increase your email quota, follow the instructions at [Quota increase for ema
| Send typing indicator | per Chat thread | 10 | 30 | > [!NOTE]
-> \* Read receipts and typing indicators are not supported on chat threads with more than 20 participants.
+> \* Read receipts and typing indicators are not supported on chat threads with more than 20 participants.
### Chat storage
For more information about the voice and video calling SDK and service, see the
When sending or receiving a high volume of requests, you might receive a ```ThrottleLimitExceededException``` error. This error indicates you're hitting the service limitations, and your requests fail until the token of bucket to handle requests is replenished after a certain time.
-### Rate Limits for Job Router
+Rate Limits for Job Router:
| Operation | Scope | Timeframe (seconds) | Limit (number of requests) | Timeout in seconds | | | | | | |
If you need to send a volume of messages that exceeds the rate limits, email us
## Teams Interoperability and Microsoft Graph
-Using a Teams interoperability scenario, you often use Microsoft Graph APIs to create [meetings](/graph/cloud-communications-online-meetings).
+In a Teams interoperability scenario, you'll likely use some Microsoft Graph APIs to create [meetings](/graph/cloud-communications-online-meetings).
Each service offered through Microsoft Graph has different limitations; service-specific limits are [described here](/graph/throttling) in more detail.
communication-services Ui Library Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/ui-library/ui-library-use-cases.md
Title: UI Library use cases
-description: Learn about the UI Library and how it can help you build communication experiences
+description: Learn about the Azure Communication Services UI Library and how it can help you build communication experiences.
communication-services Get Started Chat Ui Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/ui-library/get-started-chat-ui-library.md
Title: Quickstart - Integrate chat experiences in your app by using UI Library
+ Title: 'Quickstart: Integrate chat experiences in your app by using UI Library'
-description: Get started with Azure Communication Services UI Library composites to add Chat communication experiences to your applications.
+description: Get started with Azure Communication Services UI Library composites to add chat communication experiences to your applications.
Last updated 11/29/2022
Get started with Azure Communication Services UI Library to quickly integrate communication experiences into your applications. In this quickstart, learn how to integrate UI Library chat composites into an application and set up the experience for your app users.
-Communication Services UI Library renders a full chat experience right in your application. It takes care of connecting to Azure Communication Services chat services, and updates participant's presence automatically. As a developer, you need to worry about where in your app's user experience you want the chat experience to launch and only create the Azure Communication Services resources as required.
+Azure Communication Services UI Library renders a full chat experience right in your application. It takes care of connecting to Azure Communication Services chat services and updates a participant's presence automatically. As a developer, you need to decide where in your app's user experience you want the chat experience to start and create only the Azure Communication Services resources as required.
::: zone pivot="platform-web"
Communication Services UI Library renders a full chat experience right in your a
## Clean up resources
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group.
+If you want to clean up and remove an Azure Communication Services subscription, you can delete the resource or resource group.
Deleting the resource group also deletes any other resources associated with it.
communication-services Email Detect Sensitive Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/email-detect-sensitive-content.md
+
+ Title: Pre-send email analysis - Detecting sensitive data and inappropriate content using Azure AI
+
+description: How to detect sensitive data and inappropriate content in email messages before sending, using Azure AI in Azure Communication Services.
+++++ Last updated : 10/30/2024+++++
+# Pre-send email analysis: Detecting sensitive data and inappropriate content using Azure AI
+
+Azure Communication Services email enables organizations to send high volume messages to their customers using their applications. This tutorial shows how to leverage Azure AI to ensure that your messages accurately reflect your business's brand and reputation before sending them. Azure AI offers services to analyze your email content for sensitive data and identify inappropriate content.
+
+This tutorial describes how to use Azure AI Text Analytics to check for sensitive data and Azure AI Content Safety to identify inappropriate text content. Use these functions to check your content before sending the email using Azure Communication Services.
+
+## Prerequisites
+
+You need to complete these quickstarts to set up the Azure AI resources:
+
+- [Quickstart: Detect Personally Identifying Information (PII) in text](/azure/ai-services/language-service/personally-identifiable-information/quickstart)
+
+- [Quickstart: Moderate text and images with content safety in Azure AI Studio](/azure/ai-studio/quickstarts/content-safety)
+
+## Prerequisite check
+
+1. In a terminal or command window, run the following command to check which .NET Framework versions are installed.
+
+ `reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP"`
+
+2. View the subdomains associated with your Email Communication Services resource. Sign in to the Azure portal. Locate your Email Communication Services resource. Open the **Provision domains** tab from the left navigation pane.
+
+ :::image type="content" source="./media/email-view-subdomains.png" alt-text="Screenshot showing the email subdomains in your Email Communication Services resource in the Azure portal.":::
+
+ > [!NOTE]
+ > Make sure that the email sub-domain you plan to use for sending email is verified in your email communication resource. For more information, see [Quickstart: How to add custom verified email domains](../quickstarts/email/add-custom-verified-domains.md).
+
+3. View the domains connected to your Azure Communication Services resource. Sign in to the Azure portal. Locate your Azure Communication Services resource. Open the **Email** > **Domains** tab from the left navigation pane.
+
+ :::image type="content" source="./media/email-view-connected-domains.png" alt-text="Screenshot showing the email domains connected to your Email Communication Services resource in the Azure portal.":::
+
+ > [!NOTE]
+ > Verified custom sub-domains must be connected with your Azure Communication Services resource before you use the resource to send emails. For more information, see [Quickstart: How to connect a verified email domain](../quickstarts/email/connect-email-communication-resource.md).
+
+## Create a new C# application
+
+This section describes how to create a new C# application, install required packages, and create the Main function.
+
+1. In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name `EmailPreCheck`. This command creates a simple "Hello World" C# project with a single source file: `Program.cs`.
+
+ `dotnet new console -o EmailPreCheck`
+
+2. Change your directory to the newly created `EmailPreCheck` app folder and use the `dotnet build` command to compile your application.
+
+ `cd EmailPreCheck`
+
+ `dotnet build`
+
+### Install required packages
+
+From the application directory, install the Azure Communication Services Email client and Azure AI libraries for .NET packages using the `dotnet add package` commands.
+
+`dotnet add package Azure.Communication.Email`
+
+`dotnet add package Azure.AI.TextAnalytics`
+
+`dotnet add package Azure.AI.ContentSafety`
+
+## Create the Main function
+
+Open `Program.cs` and replace the existing contents with the following code. The `using` directives include the `Azure.Communication.Email` and `Azure.AI` namespaces. The rest of the code outlines the `Main` function for your program.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Threading;
+using System.Threading.Tasks;
+
+using Azure;
+using Azure.Communication.Email;
+using Azure.AI.TextAnalytics;
+using Azure.AI.ContentSafety;
+namespace SendEmail
+{
+ internal class Program
+ {
+ static async Task Main(string[] args) {
+ // Authenticate and create the Azure Communication Services email client
+
+ // Set sample content
+
+ // Pre-check for sensitive data and inappropriate content
+
+ // Send Email
+ }
+ }
+}
+```
+
+## Add function that checks for sensitive data
+
+Create a new function to analyze the email subject and body for sensitive data such as social security numbers and credit card numbers.
+
+```csharp
+private static async Task<bool> AnalyzeSensitiveData(List<string> documents)
+{
+// Client authentication goes here
++
+// Function implementation goes here
+
+}
+```
+
+### Create the Text Analytics client with authentication
+
+Create a new function with a Text Analytics client that also retrieves your connection information. Add the following code into the `AnalyzeSensitiveData` function to retrieve the connection key and endpoint for the resource from environment variables named `LANGUAGE_KEY` and `LANGUAGE_ENDPOINT`. It also creates the new `TextAnalyticsClient` and `AzureKeyCredential` variables. For more information about managing your Text Analytics connection information, see [Quickstart: Detect Personally Identifiable Information \(PII\) > Get your key and endpoint](/azure/ai-services/language-service/personally-identifiable-information/quickstart#get-your-key-and-endpoint).
+
+```csharp
+// This example requires environment variables named "LANGUAGE_KEY" and "LANGUAGE_ENDPOINT"
+string languageKey = Environment.GetEnvironmentVariable("LANGUAGE_KEY");
+string languageEndpoint = Environment.GetEnvironmentVariable("LANGUAGE_ENDPOINT");
+var client = new TextAnalyticsClient(new Uri(languageEndpoint), new AzureKeyCredential(languageKey));
+
+```
+
+### Check the content for sensitive data
+
+Loop through the content to check for any sensitive data. Start the sensitivity check with a baseline of `false`. If sensitive data is found, return `true`.
+
+Add the following code into the `AnalyzeSensitiveData` function following the line that creates the `TextAnalyticsClient` variable.
+
+```csharp
+bool sensitiveDataDetected = false; // start with a baseline of no sensitive data
+var actions = new TextAnalyticsActions
+{
+ RecognizePiiEntitiesActions = new List<RecognizePiiEntitiesAction> { new RecognizePiiEntitiesAction() }
+};
+
+var operation = await client.StartAnalyzeActionsAsync(documents, actions);
+await operation.WaitForCompletionAsync();
+
+await foreach (var documentResults in operation.Value)
+{
+ foreach (var actionResult in documentResults.RecognizePiiEntitiesResults)
+ {
+ if (actionResult.HasError)
+ {
+ Console.WriteLine($"Error: {actionResult.Error.ErrorCode} - {actionResult.Error.Message}");
+
+ }
+ else
+ {
+ foreach (var document in actionResult.DocumentsResults)
+ {
+ foreach (var entity in document.Entities)
+ {
+ if (document.Entities.Count > 0)
+ {
+                            sensitiveDataDetected = true; // Sensitive data detected
+ }
+
+ }
+ }
+ }
+
+ }
+}
+return sensitiveDataDetected;
+```
+
+## Add function that checks for inappropriate content
+
+Create another new function to analyze the email subject and body for inappropriate content such as hate or violence.
+
+```csharp
+static async Task<bool> AnalyzeInappropriateContent(List<string> documents)
+{
+// Client authentication goes here
+
+// Function implementation goes here
+}
+```
+
+### Create the Content Safety client with authentication
+
+Create a new function with a Content Safety client that also retrieves your connection information. Add the following code into the `AnalyzeInappropriateContent` function to retrieve the connection key and endpoint for the resource from environment variables named `CONTENT_LANGUAGE_KEY` and `CONTENT_LANGUAGE_ENDPOINT`. It also creates a new `ContentSafetyClient` variable. If you're using the same Azure AI instance for Text Analytics, these values remain the same. For more information about managing your Content Safety connection information, see [Quickstart: Detect Personally Identifiable Information (PII) > Create environment variables](/azure/ai-services/language-service/personally-identifiable-information/quickstart#create-environment-variables).
+
+```csharp
+// This example requires environment variables named "CONTENT_LANGUAGE_KEY" and "CONTENT_LANGUAGE_ENDPOINT"
+string contentSafetyLanguageKey = Environment.GetEnvironmentVariable("CONTENT_LANGUAGE_KEY");
+string contentSafetyEndpoint = Environment.GetEnvironmentVariable("CONTENT_LANGUAGE_ENDPOINT");
+var client = new ContentSafetyClient(new Uri(contentSafetyEndpoint), new AzureKeyCredential(contentSafetyLanguageKey));
+```
+
+### Check for inappropriate content
+
+Loop through the content to check for inappropriate content. Start the inappropriate content detection with a baseline of `false`. If inappropriate content is found, return `true`.
+
+Add the following code into the `AnalyzeInappropriateContent` function after the line that creates the `ContentSafetyClient` variable.
+
+```csharp
+bool inappropriateTextDetected = false;
+foreach (var document in documents)
+{
+ var options = new AnalyzeTextOptions(document);
+ AnalyzeTextResult response = await client.AnalyzeTextAsync(options);
+ // Check the response
+ if (response != null)
+ {
+ // Access the results from the response
+ foreach (var category in response.CategoriesAnalysis)
+ {
+ if (category.Severity > 2) // Severity: 0=safe, 2=low, 4=medium, 6=high
+ {
+ inappropriateTextDetected = true;
+ }
+ }
+ }
+ else
+ {
+ Console.WriteLine("Failed to analyze content.");
+ }
+}
+return inappropriateTextDetected; // true if inappropriate content was detected
+```
+
+## Update the Main function to run prechecks and send email
+
+Now that you added the two functions for checking for sensitive data and inappropriate content, you can call them before sending email from Azure Communication Services.
+
+### Create and authenticate the email client
+
+You have a few options available for authenticating to an email client. This example fetches your connection string from an environment variable.
+
+Open `Program.cs` in an editor. Add the following code to the body of the Main function to initialize an `EmailClient` with your connection string. This code retrieves the connection string for the resource from an environment variable named `COMMUNICATION_SERVICES_CONNECTION_STRING`. For more information about managing your resource connection string, see [Quickstart: Create and manage Communication Services resources > Store your connection string](../quickstarts/create-communication-resource.md#store-your-connection-string).
+
+```csharp
+// This code shows how to fetch your connection string from an environment variable.
+string connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING");
+EmailClient emailClient = new EmailClient(connectionString);
+```
+
+### Add sample content
+
+Add the sample email content into the Main function, following the lines that create the email client.
+
+You need to get the sender email address. For more information about Azure Communication Services email domains, see [Quickstart: How to add Azure Managed Domains to Email Communication Service](../quickstarts/email/add-custom-verified-domains.md).
+
+Modify the recipient email address variable.
+
+Put both the subject and the message body into a `List<string>` which can be used by the two content checking functions.
+
+```csharp
+//Set sample content
+var sender = "donotreply@xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.azurecomm.net"; // get the send email from your email resource in the Azure Portal
+var recipient = "emailalias@contoso.com"; // modify the recipient
+var subject = "Precheck Azure Communication Service Email with Azure AI";
+var htmlContent = "<html><body><h1>Precheck email test</h1><br/><h4>This email message is sent from Azure Communication Service Email. </h4>";
+htmlContent += "<p> My SSN is 123-12-1234. My Credit Card Number is: 1234 4321 5678 8765. My address is 1011 Main St, Redmond, WA, 998052 </p>";
+htmlContent += "<p>A 51-year-old man was found dead in his car. There were blood stains on the dashboard and windscreen.";
+htmlContent += "At autopsy, a deep, oblique, long incised injury was found on the front of the neck. It turns out that he died by suicide.</p>";
+htmlContent += "</body></html>";
+List<string> documents = new List<string> { subject, htmlContent };
+```
+
+### Pre-check content before sending email
+
+You need to call the two functions to look for violations and use the results to determine whether or not to send the email. Add the following code to the Main function after the sample content.
+
+```csharp
+// Pre-Check content
+bool containsSensitiveData = await AnalyzeSensitiveData(documents);
+bool containsInappropriateContent = await AnalyzeInappropriateContent(documents);
+
+// Send the email only if no sensitive data or inappropriate content is detected
+if (containsSensitiveData == false && containsInappropriateContent == false)
+{
+
+ /// Send the email message with WaitUntil.Started
+ EmailSendOperation emailSendOperation = await emailClient.SendAsync(
+ Azure.WaitUntil.Started,
+ sender,
+ recipient,
+ subject,
+ htmlContent);
+
+ /// Call UpdateStatus on the email send operation to poll for the status manually
+ try
+ {
+ while (true)
+ {
+ await emailSendOperation.UpdateStatusAsync();
+ if (emailSendOperation.HasCompleted)
+ {
+ break;
+ }
+ await Task.Delay(100);
+ }
+
+ if (emailSendOperation.HasValue)
+ {
+ Console.WriteLine($"Email queued for delivery. Status = {emailSendOperation.Value.Status}");
+ }
+ }
+ catch (RequestFailedException ex)
+ {
+ Console.WriteLine($"Email send failed with Code = {ex.ErrorCode} and Message = {ex.Message}");
+ }
+
+ /// Get the OperationId so that it can be used for tracking the message for troubleshooting
+ string operationId = emailSendOperation.Id;
+ Console.WriteLine($"Email operation id = {operationId}");
+}
+else
+{
+ Console.WriteLine("Sensitive data and/or inappropriate content detected, email not sent\n\n");
+}
+```
+
+## Next steps
+
+- Learn more about [Azure Communication Services](../overview.md).
+- Learn more about [Azure AI Studio](/azure/ai-studio/).
communication-services Inline Image Tutorial Interop Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/inline-image-tutorial-interop-chat.md
Title: Enable inline image using UI Library in Teams Interoperability Chat
-description: Learn how to use the UI Library to enable inline image support in Teams Interoperability Chat
+description: Learn how to use UI Library to enable inline image support in Teams Interoperability Chat.
-# Enable inline image using UI Library in Teams Interoperability Chat
+# Enable inline image by using UI Library in Teams Interoperability Chat
-In a Teams Interoperability Chat ("Interop Chat"), we can enable Azure Communication Service end users to receive inline images sent by Teams users. Additionally, when rich text editor is enabled, Azure Communication Service end users can send inline images to Teams users. Refer to [UI Library Use Cases](../concepts/ui-library/ui-library-use-cases.md) to learn more.
+With Teams Interoperability Chat ("Interop Chat"), you can enable Azure Communication Services users to receive inline images sent by Teams users. When a rich text editor is enabled, Azure Communication Services users can send inline images to Teams users. To learn more, see [UI Library use cases](../concepts/ui-library/ui-library-use-cases.md).
-> [!IMPORTANT]
->
-> Receiving inline images feature comes with the CallWithChat Composite without additional setups.
-> Sending inline images feature can be enabled by set `richTextEditor` to true under the CallWithChatCompositeOptions.
+The feature in Azure Communication Services for receiving inline images comes with the `CallWithChat` composite without extra setup. To enable the feature in Azure Communication Services for sending inline images, set `richTextEditor` to `true` under `CallWithChatCompositeOptions`.
> [!IMPORTANT]
-> The sending inline image feature of Azure Communication Services is currently in preview.
+> The feature in Azure Communication Services for sending inline images is currently in preview.
> > Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities. > > For more information, review [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - ## Download code Access the code for this tutorial on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-quickstart-teams-interop-meeting-chat).
Access the code for this tutorial on [GitHub](https://github.com/Azure-Samples/c
- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).-- [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions. Use the `node --version` command to check your version.-- An active Communication Services resource and connection string. [Create a Communication Services resource](../quickstarts/create-communication-resource.md).-- Using the UI library version [1.15.0](https://www.npmjs.com/package/@azure/communication-react/v/1.15.0) or the latest for receiving inline images. Using the UI library version [1.19.0-beta.1](https://www.npmjs.com/package/@azure/communication-react/v/1.19.0-beta.1) or the latest beta version for sending inline images.-- Have a Teams meeting created and the meeting link ready.-- Be familiar with how [ChatWithChat Composite](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-basicexample--basic-example) works.-
+- [Node.js](https://nodejs.org/), Active Long-Term Support (LTS) and Maintenance LTS versions. Use the `node --version` command to check your version.
+- An active Azure Communication Services resource and connection string. For more information, see [Create an Azure Communication Services resource](../quickstarts/create-communication-resource.md).
+- UI Library version [1.15.0](https://www.npmjs.com/package/@azure/communication-react/v/1.15.0) or the latest version for receiving inline images. Use the UI Library version [1.19.0-beta.1](https://www.npmjs.com/package/@azure/communication-react/v/1.19.0-beta.1) or the latest beta version for sending inline images.
+- A Teams meeting created and the meeting link ready.
+- Familiarity with how the [CallWithChat composite](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-basicexample--basic-example) works.
## Background
-First of all, we need to understand that Teams Interop Chat has to be part of a Teams meeting currently. When the Teams user creates an online meeting, a chat thread would be created and associated with the meeting. To enable the Azure Communication Service end user joining the chat and starting to send/receive messages, a meeting participant (a Teams user) would need to admit them to the call first. Otherwise, they don't have access to the chat.
+First of all, Teams Interop Chat must be part of a Teams meeting currently. When the Teams user creates an online meeting, a chat thread is created and associated with the meeting. To enable the Azure Communication Services user to join the chat and start to send or receive messages, a meeting participant (a Teams user) must admit them to the call first. Otherwise, they don't have access to the chat.
-Once the Azure Communication Service end user is admitted to the call, they would be able to start to chat with other participants on the call. In this tutorial, we're checking out how inline image works in Interop chat.
+After the Azure Communication Services user is admitted to the call, they can start to chat with other participants on the call. In this tutorial, you learn how the feature for sending and receiving inline images works in Interop Chat.
## Overview
-As mentioned previously, since we need to join a Teams meeting first, we need to use the ChatWithChat Composite from the UI library.
+Because you need to join a Teams meeting first, you need to use the `CallWithChat` composite from UI Library.
-Let's follow the basic example from the [storybook page](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-basicexample--basic-example) to create a ChatWithChat Composite.
+Let's follow the basic example from the [storybook page](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-basicexample--basic-example) to create a `CallWithChat` composite.
From the sample code, it needs `CallWithChatExampleProps`, which is defined as the following code snippet:
export type CallWithChatExampleProps = {
}; ```
-There no specific setup needed to enable receiving inline images. However, to be able to send inline images, `richTextEditor` function need to be enabled through the `CallWithChatExampleProps`. Here's a code snippet on how to enable it:
+
+No specific setup is needed to enable receiving inline images. But to send inline images, the `richTextEditor` function must be enabled through `CallWithChatExampleProps`. Here's a code snippet on how to enable it:
+ ```js <CallWithChatExperience // ...any other call with chat props
There no specific setup needed to enable receiving inline images. However, to be
``` -
-To be able to start the Composite for meeting chat, we need to pass `TeamsMeetingLinkLocator`, which looks like this:
+To start the composite for meeting chat, you need to pass `TeamsMeetingLinkLocator`, which looks like this:
```js { "meetingLink": "<TEAMS_MEETING_LINK>" } ```
-This is all you need - and there's no other setup needed.
-
+No other setup is needed.
## Run the code
-Let's run `npm run start` then you should be able to access our sample app via `localhost:3000` like the following screenshot:
-
-![Screenshot of a Azure Communication Services UI library.](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of a Azure Communication Services UI library.")
-
-Simply click on the chat button located in the bottom to reveal the chat panel and now if Teams user sends an image, you should see something like the following screenshot:
+Let's run `npm run start`. Then you can access the sample app via `localhost:3000`.
-!["Screenshot of a Teams client sending 2 inline images."](./media/inline-image-tutorial-interop-chat-1.png "Screenshot of a Teams client sending 2 inline images.")
+![Screenshot that shows Azure Communication Services UI Library.](./media/inline-image-tutorial-interop-chat-0.png "Screenshot that shows Azure Communication Services UI Library.")
-![Screenshot of Azure Communication Services UI library receiving two inline images.](./media/inline-image-tutorial-interop-chat-2.png "Screenshot of Azure Communication Services UI library receiving 2 inline images.")
+Select the chat button located at the bottom of the pane to open the chat pane. Now, if a Teams user sends an image, you should see something like the following screenshot.
+!["Screenshot that shows a Teams client sending two inline images."](./media/inline-image-tutorial-interop-chat-1.png "Screenshot that shows a Teams client sending two inline images.")
-When sending inline images is enabled, you should see something like the following screenshot:
-![Screenshot of Azure Communication Services UI library sending two inline images and editing messages.](./media/inline-image-tutorial-interop-chat-3.png "Screenshot of Azure Communication Services UI library sending 2 inline images and editing messages.")
+![Screenshot that shows Azure Communication Services UI Library receiving two inline images.](./media/inline-image-tutorial-interop-chat-2.png "Screenshot that shows Azure Communication Services UI Library receiving two inline images.")
-!["Screenshot of a Teams client receiving 2 inline images."](./media/inline-image-tutorial-interop-chat-4.png "Screenshot of a Teams client receiving 2 inline images.")
+When the feature for sending inline images is enabled, you should see something like the following screenshot.
+![Screenshot that shows Azure Communication Services UI Library sending two inline images and editing messages.](./media/inline-image-tutorial-interop-chat-3.png "Screenshot that shows Azure Communication Services UI Library sending two inline images and editing messages.")
-## Known Issues
+!["Screenshot that shows a Teams client receiving two inline images."](./media/inline-image-tutorial-interop-chat-4.png "Screenshot that shows a Teams client receiving two inline images.")
-* The UI library might not support certain GIF images at this time. The user might receive a static image instead.
-* The Web UI library doesn't support Clips (short videos) sent by the Teams users at this time.
-* For certain Android devices, pasting of a single image is only supported when long pressing on the rich text editor and choosing
-paste. Selecting from the clipboard view from keyboard may not be supported.
+## Known issues
+* UI Library might not support certain GIF images at this time. The user might receive a static image instead.
+* The web UI Library doesn't support clips (short videos) sent by Teams users at this time.
+* For certain Android devices, pasting a single image is supported only when you hold down the rich text editor and select **Paste**. Selecting from the clipboard view by using the keyboard might not be supported.
-## Next steps
+## Next step
> [!div class="nextstepaction"]
-> [Check the rest of the UI Library](https://azure.github.io/communication-ui-library/)
+> [Check the rest of UI Library](https://azure.github.io/communication-ui-library/)
-You may also want to:
+You might also want to:
- [Check UI Library use cases](../concepts/ui-library/ui-library-use-cases.md) - [Add chat to your app](../quickstarts/chat/get-started.md)-- [Creating user access tokens](../quickstarts/identity/access-tokens.md)
+- [Create user access tokens](../quickstarts/identity/access-tokens.md)
- [Learn about client and server architecture](../concepts/client-and-server-architecture.md) - [Learn about authentication](../concepts/authentication.md)-- [Add file sharing with UI Library in Azure Communication Service Chat](./file-sharing-tutorial-acs-chat.md)
+- [Add file sharing with UI Library in Azure Communication Services Chat](./file-sharing-tutorial-acs-chat.md)
- [Add file sharing with UI Library in Teams Interoperability Chat](./file-sharing-tutorial-interop-chat.md)
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md
Previously updated : 01/05/2024 Last updated : 09/12/2024 # Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics
These properties are supported for the linked service:
For a full list of sections and properties available for defining datasets, see [Datasets](concepts-datasets-linked-services.md).
+Azure Data Factory supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Binary format](format-binary.md)
+- [Delimited text format](format-delimited-text.md)
+- [Excel format](format-excel.md)
+- [Iceberg format](format-iceberg.md)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
+- [XML format](format-xml.md)
The following properties are supported for Data Lake Storage Gen2 under `location` settings in the format-based dataset:
The following properties are supported for Data Lake Storage Gen2 under `storeSe
### Azure Data Lake Storage Gen2 as a sink type
+Azure Data Factory supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Binary format](format-binary.md)
+- [Delimited text format](format-delimited-text.md)
+- [Iceberg format](format-iceberg.md)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
The following properties are supported for Data Lake Storage Gen2 under `storeSettings` settings in format-based copy sink:
In this case, all files that were sourced under /data/sales are moved to /backup
### Sink properties
-In the sink transformation, you can write to either a container or folder in Azure Data Lake Storage Gen2. the **Settings** tab lets you manage how the files get written.
+In the sink transformation, you can write to either a container or folder in Azure Data Lake Storage Gen2. The **Settings** tab lets you manage how the files get written.
:::image type="content" source="media/data-flow/file-sink-settings.png" alt-text="sink options":::
data-factory Connector Deprecation Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-deprecation-plan.md
Title: Planned connector deprecations for Azure Data Factory
-description: This page describes future deprecations for some connectors of Azure Data Factory.
+ Title: Upgrade plan for Azure Data Factory connectors
+description: This article describes future upgrades for some connectors of Azure Data Factory.
Previously updated : 10/16/2024 Last updated : 11/06/2024
-# Planned connector deprecations for Azure Data Factory
+# Upgrade plan for Azure Data Factory connectors
-This article describes future deprecations for some connectors of Azure Data Factory.
+This article describes future upgrades for some connectors of Azure Data Factory.
> [!NOTE] > "Deprecated" means we intend to remove the connector from a future release. Unless they are in *Preview*, connectors remain fully supported until they are officially deprecated. This deprecation notification can span a few months or longer. After removal, the connector will no longer work. This notice is to allow you sufficient time to plan and update your code before the connector is deprecated.
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 01/09/2024 Last updated : 11/05/2024
The following file formats are supported. Refer to each article for format-based
- [Delimited text format](format-delimited-text.md) - [Delta format](format-delta.md) - [Excel format](format-excel.md)
+- [Iceberg format](format-iceberg.md)
- [JSON format](format-json.md) - [ORC format](format-orc.md) - [Parquet format](format-parquet.md)
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-overview.md
Previously updated : 08/02/2024 Last updated : 11/05/2024
To copy data from a source to a sink, the service that runs the Copy activity pe
### Supported file formats
+Azure Data Factory supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Binary format](format-binary.md)
+- [Delimited text format](format-delimited-text.md)
+- [Excel format](format-excel.md)
+- [Iceberg format](format-iceberg.md) (only for Azure Data Lake Storage Gen2)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
+- [XML format](format-xml.md)
You can use the Copy activity to copy files as-is between two file-based data stores, in which case the data is copied efficiently without any serialization or deserialization. In addition, you can also parse or generate files of a given format, for example, you can perform the following:
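For illustration, a Copy activity that copies files as-is between two Azure Data Lake Storage Gen2 stores might look like the following sketch. The dataset names are placeholders for Binary datasets defined separately; because the data is treated as binary, no serialization or deserialization occurs.

```json
{
  "name": "CopyFilesAsIs",
  "type": "Copy",
  "inputs": [ { "referenceName": "SourceBinaryDataset", "type": "DatasetReference" } ],
  "outputs": [ { "referenceName": "SinkBinaryDataset", "type": "DatasetReference" } ],
  "typeProperties": {
    "source": {
      "type": "BinarySource",
      "storeSettings": { "type": "AzureBlobFSReadSettings", "recursive": true }
    },
    "sink": {
      "type": "BinarySink",
      "storeSettings": { "type": "AzureBlobFSWriteSettings" }
    }
  }
}
```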
data-factory Format Iceberg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-iceberg.md
+
+ Title: Iceberg format in Azure Data Factory
+
+description: This article describes how to use the Iceberg format in Azure Data Factory and Azure Synapse Analytics.
++++ Last updated : 09/12/2024+++
+# Iceberg format in Azure Data Factory and Azure Synapse Analytics
++
+Follow this article when you want to **write the data into Iceberg format**.
+
+Iceberg format is supported for the following connectors:
+
+- [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md)
+
+You can use the Iceberg dataset in the [Copy activity](copy-activity-overview.md).
+
+## Dataset properties
+
+For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article. This section provides a list of properties supported by the Iceberg format dataset.
+
+| Property | Description | Required |
+| - | | -- |
+| type | The type property of the dataset must be set to **Iceberg**. | Yes |
+| location | Location settings of the file(s). Each file-based connector has its own location type and supported properties under `location`. | Yes |
+
+Below is an example of an Iceberg dataset on Azure Data Lake Storage Gen2:
+
+```json
+{
+ "name": "IcebergDataset",
+ "properties": {
+ "type": "Iceberg",
+ "linkedServiceName": {
+ "referenceName": "<Azure Data Lake Storage Gen2 linked service name>",
+ "type": "LinkedServiceReference"
+ },
+ "schema": [ < physical schema, optional, auto retrieved during authoring >
+ ],
+ "typeProperties": {
+ "location": {
+ "type": "AzureBlobFSLocation",
+ "fileSystem": "filesystemname",
+                "folderPath": "folder/subfolder"
+ }
+ }
+ }
+}
+
+```
+
+## Copy activity properties
+
+For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by the Iceberg sink.
+
+### Iceberg as sink
+
+The following properties are supported in the copy activity **sink** section.
+
+| Property | Description | Required |
+| -------- | ----------- | -------- |
+| type | The type property of the copy activity sink must be set to **IcebergSink**. | Yes |
+| formatSettings | A group of properties. Refer to **Iceberg write settings** table below. | No |
+| storeSettings | A group of properties on how to write data to a data store. Each file-based connector has its own supported write settings under `storeSettings`. | No |
+
+Supported **Iceberg write settings** under `formatSettings`:
+
+| Property | Description | Required |
+| -------- | ----------- | -------- |
+| type | The type of formatSettings must be set to **IcebergWriteSettings**. | Yes |
+
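The following is a minimal sketch of a Copy activity definition with an Iceberg sink that writes to the dataset shown earlier. The activity name and source placeholders are illustrative, and the `AzureBlobFSWriteSettings` store settings type assumes an Azure Data Lake Storage Gen2 sink; adjust these to match your pipeline.

```json
{
    "name": "CopyToIceberg",
    "type": "Copy",
    "inputs": [
        {
            "referenceName": "<source dataset name>",
            "type": "DatasetReference"
        }
    ],
    "outputs": [
        {
            "referenceName": "IcebergDataset",
            "type": "DatasetReference"
        }
    ],
    "typeProperties": {
        "source": {
            "type": "<source type>"
        },
        "sink": {
            "type": "IcebergSink",
            "formatSettings": {
                "type": "IcebergWriteSettings"
            },
            "storeSettings": {
                "type": "AzureBlobFSWriteSettings"
            }
        }
    }
}
```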
+## Related connectors and formats
+
+Here are some common connectors and formats related to the Iceberg format:
+
+- [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md)
+- [Binary format](format-binary.md)
+- [Delta format](format-delta.md)
+- [Excel format](format-excel.md)
+- [JSON format](format-json.md)
+- [Parquet format](format-parquet.md)
+
+## Related content
+
+- [Data type mapping in dataset schemas](copy-activity-schema-and-type-mapping.md#data-type-mapping)
+- [Copy activity overview](copy-activity-overview.md)
defender-for-iot Hpe Proliant Dl360 Gen11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360-gen11.md
+
+ Title: HPE ProLiant DL360 Gen 11 OT monitoring - Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL360 Gen 11 appliance when used for OT monitoring with Microsoft Defender for IoT.
Last updated : 03/14/2024+++
+# HPE ProLiant DL360 Gen 11
+
+This article describes the **HPE ProLiant DL360 Gen 11** appliance for OT sensors, customized for use with Microsoft Defender for IoT.
+
+| Appliance characteristic |Details |
+|---|---|
+|**Hardware profile** | C5600 |
+|**Performance** | Max bandwidth: 3 Gbps <br> Max devices: 12,000 |
+|**Physical specifications** | Mounting: 1U|
+|**Status** | Supported, available preconfigured|
+
+The following image displays the hardware elements on the HPE ProLiant DL360 Gen11 that are used by Defender for IoT:
++
+## Specifications
+
+|Component |Specifications|
+|---|---|
+|**Chassis** |1U rack server |
+|**Physical Characteristics** | HPE DL360 Gen11 8SFF |
+|**Processor** | INT Xeon-S 4510 CPU for HPE OEM |
+|**Chipset** | Intel C262|
+|**Memory** | 4 HPE 32GB (1x32GB) Dual Rank x8 DDR5-5600 CAS-46-45-45 EC8 Registered Smart Memory Kit |
+|**Storage**| 6 HPE 2.4TB SAS 12G Mission Critical 10K SFF BC 3-year Warranty 512e Multi Vendor HDD |
+|**Network controller**| On-board: 8 x 1 Gb |
+|**Management** | HPE iLO Advanced |
+|**Power** |HPE 800W Flex Slot Titanium Hot Plug Low Halogen Power Supply Kit |
+|**Rack support** | HPE 1U Rail 3 kit |
+
+## HPE DL360 Gen 11 Plus (NHP 4SFF) - Bill of materials
+
+|PN |Description |Quantity|
+|-- | --| |
+|**P55428-B21** | HPE OEM ProLiant DL360 Gen11 SFF NC Configure-to-order Server |1|
+|**P55428-B21#B19** | HPE OEM ProLiant DL360 Gen11 SFF NC Configure-to-order Server |1|
+|**P67824-B21** | INT Xeon-S 4510 CPU for HPE OEM |2|
+|**P64706-B21** | HPE 32GB (1x32GB) Dual Rank x8 DDR5-5600 CAS-46-45-45 EC8 Registered Smart Memory Kit |4|
+|**P48896-B21** | HPE ProLiant DL360 Gen11 8SFF x4 U.3 Tri-Mode Backplane Kit |1|
+|**P28352-B21** | HPE 2.4TB SAS 12G Mission Critical 10K SFF BC 3-year Warranty 512e Multi Vendor HDD |6|
+|**P48901-B21** | HPE ProLiant DL360 Gen11 x16 Full Height Riser Kit |1|
+|**P51178-B21** | Broadcom BCM5719 Ethernet 1Gb 4-port BASE-T Adapter for HPE |1|
+|**P47789-B21** | HPE MR216i-o Gen11 x16 Lanes without Cache OCP SPDM Storage Controller |1|
+|**P10097-B21** | Broadcom BCM57416 Ethernet 10Gb 2-port BASE-T OCP3 Adapter for HPE |1|
+|**P48907-B21** | HPE ProLiant DL3X0 Gen11 1U Standard Fan Kit |1|
+|**P54697-B21** | HPE ProLiant DL3X0 Gen11 1U 2P Standard Fan Kit |1|
+|**865438-B21** | HPE 800W Flex Slot Titanium Hot Plug Low Halogen Power Supply Kit |2|
+|**AF573A** | HPE C13 - C14 WW 250V 10Amp Flint Gray 2.0m Jumper Cord |2|
+|**P48830-B21** | HPE ProLiant DL3XX Gen11 CPU2 to OCP2 x8 Enablement Kit |1|
+|**P52416-B21** | HPE ProLiant DL360 Gen11 OROC Tri-Mode Cable Kit |1|
+|**BD505A** | HPE iLO Advanced 1-server License with 3yr Support on iLO Licensed Features |1|
+|**P48904-B21** | HPE ProLiant DL3X0 Gen11 1U Standard Heat Sink Kit |2|
+|**P52341-B21** | HPE ProLiant DL3XX Gen11 Easy Install Rail 3 Kit |1|
+
+## HPE ProLiant DL360 Gen 11 installation
+
+This section describes how to install OT sensor software on the HPE ProLiant DL360 Gen 11 appliance and includes adjusting the appliance's BIOS configuration.
+
+During this procedure, you configure the iLO port. We recommend that you also change the default password provided for the administrative user.
+
+> [!NOTE]
+> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
+>
+
+### Enable remote access and update the password
+
+Use the following procedure to set up network options and update the default password.
+
+**To enable and update the password**:
+
+1. Connect a screen and a keyboard to the HP appliance, turn on the appliance, and press **F9**.
+
+ :::image type="content" source="../media/tutorial-install-components/hpe-proliant-screen-v2.png" alt-text="Screenshot that shows the HPE ProLiant window.":::
+
+1. Go to **System Utilities** > **System Configuration** > **iLO 5 Configuration Utility** > **Network Options**.
+
+ :::image type="content" source="../media/tutorial-install-components/system-configuration-window-v2.png" alt-text="Screenshot that shows the System Configuration window.":::
+
+ 1. Select **Shared Network Port-LOM** from the **Network Interface Adapter** field.
+
+ 1. Set **Enable DHCP** to **Off**.
+
+ 1. Enter the IP address, subnet mask, and gateway IP address.
+
+1. Select **F10: Save**.
+
+1. Select **Esc** to get back to the **iLO 5 Configuration Utility**, and then select **User Management**.
+
+1. Select **Edit/Remove User**. The administrator is the only default user defined.
+
+1. Change the default password and select **F10: Save**.
+
+### Set up the BIOS and RAID array
+
+This procedure describes how to configure the BIOS configuration for an unconfigured sensor appliance.
+If any of the steps below are missing in the BIOS, make sure that the hardware matches the specifications above.
+
+HPE BIOS iLO is system management software designed to give administrators remote control of HPE hardware. It allows administrators to monitor system performance, configure settings, and troubleshoot hardware issues from a web browser. It can also be used to update the system BIOS and firmware. The BIOS can be set up locally or remotely. To set up the BIOS remotely from a management computer, the HPE IP address and the management computer's IP address must be on the same subnet.
+
+**To configure the HPE BIOS**:
+
+> [!IMPORTANT]
+> Make sure your server is running HPE SPP 2022.03.1 (BIOS version U32 v2.6.2) or later.
+
+1. Select **System Utilities** > **System Configuration** > **BIOS/Platform Configuration (RBSU)**.
+
+1. In the **BIOS/Ethernet Adapter/NIC Configuration**, disable LLDP Agent for all NIC cards.
+
+1. In the **BIOS/Platform Configuration (RBSU)** form, select **Boot Options**.
+
+1. Change **Boot Mode** to **UEFI BIOS Mode**, and then select **F10: Save**.
+
+1. Select **Esc** twice to close the **System Configuration** form.
+
+1. Select **Embedded RAID1: HPE Smart Array P408i-a SR Gen 10** > **Array Configuration** > **Create Array**.
+
+1. In the **Create Array** form, select all the drives, and enable RAID Level 5.
+
+> [!NOTE]
+> For **Data-at-Rest** encryption, see the HPE guidance for activating RAID Secure Encryption or using Self-Encrypting-Drives (SED).
+>
++
+### Install OT sensor software on the HPE DL360
+
+This procedure describes how to install OT sensor software on the HPE DL360.
+
+The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+
+**To install OT sensor software**:
+
+1. Connect a screen and keyboard to the appliance, and then connect to the CLI.
+
+1. Connect an external CD or disk-on-key that contains the sensor software you downloaded from the Azure portal.
+
+1. Continue with the generic procedure for installing OT sensor software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
+
+Then, use any of the following procedures to continue:
+
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../legacy-central-management/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
dev-box Concept Dev Box Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-network-requirements.md
Previously updated : 05/29/2024 Last updated : 10/31/2024 #Customer intent: As a platform engineer, I want to understand Dev Box networking requirements so that developers can access the resources they need.
To use your own network and provision [Microsoft Entra hybrid joined](/azure/dev
- The Azure virtual network must be able to resolve Domain Name Services (DNS) entries for your Active Directory Domain Services (AD DS) environment. To support this resolution, define your AD DS DNS servers as the DNS servers for the virtual network. - The Azure virtual network must have network access to an enterprise domain controller, either in Azure or on-premises.
-When connecting to resources on-premises through Microsoft Entra hybrid joins, work with your Azure network topology expert. Best practice is to implement a [hub-and-spoke network topology](/azure/cloud-adoption-framework/ready/azure-best-practices/hub-spoke-network-topology). The hub is the central point that connects to your on-premises network; you can use an Express Route, a site-to-site VPN, or a point-to-site VPN. The spoke is the virtual network that contains the dev boxes. You peer the dev box virtual network to the on-premises connected virtual network to provide access to on-premises resources. Hub and spoke topology can help you manage network traffic and security.
- > [!IMPORTANT] > When using your own network, Microsoft Dev Box currently does not support moving network interfaces to a different virtual network or a different subnet.
You can check that your dev boxes can connect to these FQDNs and endpoints by fo
> [!IMPORTANT] > Microsoft doesn't support dev box deployments where the FQDNs and endpoints listed in this article are blocked.
+### Use FQDN tags and service tags for endpoints through Azure Firewall
+
+Managing network security controls for dev boxes can be complex. To simplify configuration, use fully qualified domain name (FQDN) tags and service tags to allow network traffic.
+
+- **FQDN tags**
+
+ An [FQDN tag](/azure/firewall/fqdn-tags) is a predefined tag in Azure Firewall that represents a group of fully qualified domain names. By using FQDN tags, you can easily create and maintain egress rules for specific services like Windows 365 without manually specifying each domain name.
+
+ The groupings defined by FQDN tags can overlap. For example, the Windows365 FQDN tag includes Azure Virtual Desktop (AVD) endpoints for standard ports; see the [Windows365 tag reference](/windows-365/enterprise/azure-firewall-windows-365#windows365-tag).
+
+ Non-Microsoft firewalls don't usually support FQDN tags or service tags. There might be a different term for the same functionality; check your firewall documentation.
+
+- **Service tags**
+
+ A [service tag](/azure/firewall/service-tags) represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. Service tags can be used in both [Network Security Group (NSG)](/azure/virtual-network/network-security-groups-overview) and [Azure Firewall](/azure/firewall/service-tags) rules to restrict outbound network access, and in [User Defined Route (UDR)](/azure/virtual-network/virtual-networks-udr-overview#user-defined) to customize traffic routing behavior.
+
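For example, the following Azure PowerShell sketch adds an application rule collection that references the *Windows365* FQDN tag to an existing Azure Firewall. The firewall, resource group, and source address range are hypothetical; substitute your own values.

```azurepowershell
# Hypothetical firewall and resource group names.
$azfw = Get-AzFirewall -Name "devbox-fw" -ResourceGroupName "devbox-rg"

# Allow outbound traffic covered by the Windows365 FQDN tag from the dev box subnet.
$rule = New-AzFirewallApplicationRule -Name "Allow-Windows365" `
    -SourceAddress "10.0.1.0/24" `
    -FqdnTag "Windows365"

$collection = New-AzFirewallApplicationRuleCollection -Name "DevBoxEgress" `
    -Priority 200 -ActionType Allow -Rule $rule

# Add the collection to the firewall and commit the change.
$azfw.AddApplicationRuleCollection($collection)
$azfw | Set-AzFirewall
```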
+## Required endpoints for physical device network connectivity
Although most of the configuration is for the cloud-based dev box network, end user connectivity occurs from a physical device. Therefore, you must also follow the connectivity guidelines on the physical device network. |Device or service |Network connectivity required URLs and ports |Description |
Although most of the configuration is for the cloud-based dev box network, end u
|Azure Virtual Desktop session host virtual machine |[Link](/azure/virtual-desktop/safe-url-list?tabs=azure#session-host-virtual-machines) |Remote connectivity between dev boxes and the backend Azure Virtual Desktop service.| |Windows 365 service |[Link](/windows-365/enterprise/requirements-network?tabs=enterprise%2Cent#windows-365-service) |Provisioning and health checks.|
-## Required endpoints
+Any device you use to connect to a dev box must have access to the following FQDNs and endpoints. Allowing these FQDNs and endpoints is essential for a reliable client experience. Blocking access to these FQDNs and endpoints is unsupported and affects service functionality.
-The following URLs and ports are required for the provisioning of dev boxes and the Azure Network Connection (ANC) health checks. All endpoints connect over port 443 unless otherwise specified.
-
-# [Windows 365 service endpoints](#tab/W365)
-- *.infra.windows365.microsoft.com-- cpcsaamssa1prodprap01.blob.core.windows.net-- cpcsaamssa1prodprau01.blob.core.windows.net-- cpcsaamssa1prodpreu01.blob.core.windows.net-- cpcsaamssa1prodpreu02.blob.core.windows.net-- cpcsaamssa1prodprna01.blob.core.windows.net-- cpcsaamssa1prodprna02.blob.core.windows.net-- cpcstcnryprodprap01.blob.core.windows.net-- cpcstcnryprodprau01.blob.core.windows.net-- cpcstcnryprodpreu01.blob.core.windows.net-- cpcstcnryprodpreu02.blob.core.windows.net-- cpcstcnryprodprna01.blob.core.windows.net-- cpcstcnryprodprna02.blob.core.windows.net-- cpcstprovprodpreu01.blob.core.windows.net-- cpcstprovprodpreu02.blob.core.windows.net-- cpcstprovprodprna01.blob.core.windows.net-- cpcstprovprodprna02.blob.core.windows.net-- cpcstprovprodprap01.blob.core.windows.net-- cpcstprovprodprau01.blob.core.windows.net-- prna01.prod.cpcgateway.trafficmanager.net-- prna02.prod.cpcgateway.trafficmanager.net-- preu01.prod.cpcgateway.trafficmanager.net-- preu02.prod.cpcgateway.trafficmanager.net-- prap01.prod.cpcgateway.trafficmanager.net-- prau01.prod.cpcgateway.trafficmanager.net-
-# [Dev box communication endpoints](#tab/DevBox)
-- *.agentmanagement.dc.azure.com--- endpointdiscovery.cmdagent.trafficmanager.net-- registration.prna01.cmdagent.trafficmanager.net-- registration.preu01.cmdagent.trafficmanager.net-- registration.prap01.cmdagent.trafficmanager.net-- registration.prau01.cmdagent.trafficmanager.net-- registration.prna02.cmdagent.trafficmanager.net-
-# [Registration endpoints](#tab/Registration)
-- login.microsoftonline.com-- login.live.com-- enterpriseregistration.windows.net-- global.azure-devices-provisioning.net (443 & 5671 outbound)-- hm-iot-in-prod-prap01.azure-devices.net (443 & 5671 outbound)-- hm-iot-in-prod-prau01.azure-devices.net (443 & 5671 outbound)-- hm-iot-in-prod-preu01.azure-devices.net (443 & 5671 outbound)-- hm-iot-in-prod-prna01.azure-devices.net (443 & 5671 outbound)-- hm-iot-in-prod-prna02.azure-devices.net (443 & 5671 outbound)-- hm-iot-in-2-prod-preu01.azure-devices.net (443 & 5671 outbound)-- hm-iot-in-2-prod-prna01.azure-devices.net (443 & 5671 outbound)-- hm-iot-in-3-prod-preu01.azure-devices.net (443 & 5671 outbound)-- hm-iot-in-3-prod-prna01.azure-devices.net (443 & 5671 outbound)---
-## Use FQDN tags and service tags for endpoints through Azure Firewall
-
-Managing network security controls for dev boxes can be complex. To simplify configuration, use fully qualified domain name (FQDN) tags and service tags to allow network traffic.
--- **FQDN tags**
+|Address |Protocol |Outbound port |Purpose |Clients |
+|---|---|---|---|---|
+|login.microsoftonline.com |TCP |443 |Authentication to Microsoft Online Services |All |
+|*.wvd.microsoft.com |TCP |443 |Service traffic |All |
+|*.servicebus.windows.net |TCP |443 |Troubleshooting data |All |
+|go.microsoft.com |TCP |443 |Microsoft FWLinks |All |
+|aka.ms |TCP |443 |Microsoft URL shortener |All |
+|learn.microsoft.com |TCP |443 |Documentation |All |
+|privacy.microsoft.com |TCP |443 |Privacy statement |All |
+|query.prod.cms.rt.microsoft.com |TCP |443 |Download an MSI to update the client. Required for automatic updates. |Windows Desktop |
- An [FQDN tag](/azure/firewall/fqdn-tags) is a predefined tag in Azure Firewall that represents a group of fully qualified domain names. By using FQDN tags, you can easily create and maintain egress rules for specific services like Windows 365 without manually specifying each domain name.
+These FQDNs and endpoints only correspond to client sites and resources.
- Non-Microsoft firewalls don't usually support FQDN tags or service tags. There might be a different term for the same functionality; check your firewall documentation.
+## Required endpoints for dev box provisioning
-- **Service tags**
+The following URLs and ports are required for the provisioning of dev boxes and the Azure Network Connection (ANC) health checks. All endpoints connect over port 443 unless otherwise specified.
- A [service tag](/azure/virtual-network/service-tags-overview) represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. Service tags can be used in both [Network Security Group (NSG)](/azure/virtual-network/network-security-groups-overview) and [Azure Firewall](/azure/firewall/service-tags) rules to restrict outbound network access, and in [User Defined Route (UDR)](/azure/virtual-network/virtual-networks-udr-overview#user-defined) to customize traffic routing behavior.
+| Category | Endpoints | FQDN tag or Service tag |
+|---|---|---|
+| **Dev box communication endpoints** | *.agentmanagement.dc.azure.com<br>*.cmdagent.trafficmanager.net | N/A |
+| **Windows 365 service and registration endpoints** | For current Windows 365 registration endpoints, see [Windows 365 network requirements](/windows-365/enterprise/requirements-network?tabs=enterprise%2Cent#windows-365-service). | FQDN tag: *Windows365* |
+| **Azure Virtual Desktop service endpoints** | For current AVD service endpoints, see [Session host virtual machines](/azure/virtual-desktop/required-fqdn-endpoint?tabs=azure#session-host-virtual-machines). | FQDN tag: *WindowsVirtualDesktop* |
+| **Microsoft Entra ID** | FQDNs and endpoints for Microsoft Entra ID can be found under ID 56, 59 and 125 in [Office 365 URLs and IP address ranges](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). | Service tag: *AzureActiveDirectory* |
+| **Microsoft Intune** | For current FQDNs and endpoints for Microsoft Intune, see [Intune core service](/mem/intune/fundamentals/intune-endpoints?tabs=north-america#intune-core-service).| FQDN tag: *MicrosoftIntune* |
-The listed FQDNs and endpoints and tags only correspond to Azure Virtual Desktop sites and resources. They don't include FQDNs and endpoints for other services such as Microsoft Entra ID. For service tags for other services, see [Available service tags](/azure/virtual-network/service-tags-overview#available-service-tags).
+The listed FQDNs, endpoints, and tags correspond to the required resources. They don't include FQDNs and endpoints for every service. For service tags for other services, see [Available service tags](/azure/virtual-network/service-tags-overview#available-service-tags).
Azure Virtual Desktop doesn't have a list of IP address ranges that you can unblock instead of FQDNs to allow network traffic. If you're using a Next Generation Firewall (NGFW), you need to use a dynamic list made for Azure IP addresses to make sure you can connect.
This list doesn't include FQDNs and endpoints for other services such as Microso
## Remote Desktop Protocol (RDP) broker service endpoints
-Direct connectivity to Azure Virtual Desktop RDP broker service endpoints is critical for remote performance to a dev box. These endpoints affect both connectivity and latency. To align with the Microsoft 365 network connectivity principles, you should categorize these endpoints as *Optimize* endpoints, and use a [Remote Desktop Protocol (RDP) Shortpath](/windows-365/enterprise/rdp-shortpath-public-networks) from your Azure virtual network to those endpoints. RDP Shortpath can provide another connection path for improved dev box connectivity, especially in suboptimal network conditions.
+Direct connectivity to Azure Virtual Desktop RDP broker service endpoints is critical for remote performance to a dev box. These endpoints affect both connectivity and latency. To align with the Microsoft 365 network connectivity principles, you should categorize these endpoints as *Optimize* endpoints, and use a [Remote Desktop Protocol (RDP) Shortpath](/windows-365/enterprise/rdp-shortpath-public-networks) from your Azure virtual network to those endpoints. RDP Shortpath can provide another connection path for improved dev box connectivity, especially in suboptimal network conditions.
+
+To make it easier to configure network security controls, use Azure Virtual Desktop service tags to identify those endpoints for direct routing using an Azure Networking User Defined Route (UDR). A UDR results in direct routing between your virtual network and the RDP broker for lowest latency.
-To make it easier to configure network security controls, use Azure Virtual Desktop service tags to identify those endpoints for direct routing using an Azure Networking User Defined Route (UDR). A UDR results in direct routing between your virtual network and the RDP broker for lowest latency. For more information about Azure Service Tags, see Azure service tags overview.
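For example, the following Azure PowerShell sketch adds a route that uses the *WindowsVirtualDesktop* service tag as the address prefix, sending RDP broker traffic directly to the Internet. The route table and resource group names are hypothetical, and the sketch assumes the route table is associated with the dev box subnet.

```azurepowershell
# Hypothetical route table and resource group names.
$rt = Get-AzRouteTable -Name "devbox-routes" -ResourceGroupName "devbox-rg"

# Route Azure Virtual Desktop (RDP broker) traffic directly to the Internet,
# bypassing any default route that points to an NVA or on-premises firewall.
Add-AzRouteConfig -RouteTable $rt `
    -Name "AvdDirect" `
    -AddressPrefix "WindowsVirtualDesktop" `
    -NextHopType "Internet" | Set-AzRouteTable
```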
Changing the network routes of a dev box (at the network layer or at the dev box layer like VPN) might break the connection between the dev box and the Azure Virtual Desktop RDP broker. If so, the end user is disconnected from their dev box until a connection is re-established. ## DNS requirements
Configure your Azure Virtual Network where the dev boxes are provisioned as foll
> Adding at least two DNS servers, as you would with a physical PC, helps mitigate the risk of a single point of failure in name resolution. For more information, see configuring [Azure Virtual Networks settings](/azure/virtual-network/manage-virtual-network#change-dns-servers). - ## Connecting to on-premises resources You can allow dev boxes to connect to on-premises resources through a hybrid connection. Work with your Azure network expert to implement a [hub and spoke networking topology](/azure/cloud-adoption-framework/ready/azure-best-practices/hub-spoke-network-topology). The hub is the central point that connects to your on-premises network; you can use an Express Route, a site-to-site VPN, or a point-to-site VPN. The spoke is the virtual network that contains the dev boxes. Hub and spoke topology can help you manage network traffic and security. You peer the dev box virtual network to the on-premises connected virtual network to provide access to on-premises resources. ## Traffic interception technologies
-Some enterprise customers use traffic interception, SSL decryption, deep packet inspection, and other similar technologies for security teams to monitor network traffic. Dev box provisioning might need direct access to the virtual machine. These traffic interception technologies can cause issues with running Azure network connection checks or dev box provisioning. Make sure no network interception is enforced for dev boxes provisioned within Microsoft Dev Box.
+Some enterprise customers use traffic interception, TLS decryption, deep packet inspection, and other similar technologies for security teams to monitor network traffic. These traffic interception technologies can cause issues with running Azure network connection checks or dev box provisioning. Make sure no network interception is enforced for dev boxes provisioned within Microsoft Dev Box.
Traffic interception technologies can exacerbate latency issues. You can use a [Remote Desktop Protocol (RDP) Shortpath](/windows-365/enterprise/rdp-shortpath-public-networks) to help minimize latency issues.
-## End user devices
-
-Any device on which you use one of the Remote Desktop clients to connect to Azure Virtual Desktop must have access to the following FQDNs and endpoints. Allowing these FQDNs and endpoints is essential for a reliable client experience. Blocking access to these FQDNs and endpoints is unsupported and affects service functionality.
-
-|Address |Protocol |Outbound port |Purpose |Clients |
-||||||
-|login.microsoftonline.com |TCP |443 |Authentication to Microsoft Online Services |All |
-|*.wvd.microsoft.com |TCP |443 |Service traffic |All |
-|*.servicebus.windows.net |TCP |443 |Troubleshooting data |All |
-|go.microsoft.com |TCP |443 |Microsoft FWLinks |All |
-|aka.ms |TCP |443 |Microsoft URL shortener |All |
-|learn.microsoft.com |TCP |443 |Documentation |All |
-|privacy.microsoft.com |TCP |443 |Privacy statement |All |
-|query.prod.cms.rt.microsoft.com |TCP |443 |Download an MSI to update the client. Required for automatic updates. |Windows Desktop |
-
-These FQDNs and endpoints only correspond to client sites and resources. This list doesn't include FQDNs and endpoints for other services such as Microsoft Entra ID or Office 365. Microsoft Entra FQDNs and endpoints can be found under ID 56, 59 and 125 in Office 365 URLs and IP address ranges.
- ## Troubleshooting This section covers some common connection and network issues.
This section covers some common connection and network issues.
- **Logon attempt failed**
- If the dev box user encounters logon problems and sees an error message indicating that the logon attempt failed, ensure you enabled the PKU2U protocol on both the local PC and the session host.
+ If the dev box user encounters sign-in problems and sees an error message indicating that the sign-in attempt failed, ensure you enabled the PKU2U protocol on both the local PC and the session host.
- For more information about troubleshooting logon errors, see [Troubleshoot connections to Microsoft Entra joined VMs - Windows Desktop client](/azure/virtual-desktop/troubleshoot-azure-ad-connections#the-logon-attempt-failed).
+ For more information about troubleshooting sign-in errors, see [Troubleshoot connections to Microsoft Entra joined VMs - Windows Desktop client](/azure/virtual-desktop/troubleshoot-azure-ad-connections#the-logon-attempt-failed).
- **Group policy issues in hybrid environments**
This section covers some common connection and network issues.
### IPv6 addressing issues
-If you're experiencing IPv6 issues, check that the *Microsoft.AzureActiveDirectory* service endpoint is not enabled on the virtual network or subnet. This service endpoint converts the IPv4 to IPv6.
+If you're experiencing IPv6 issues, check that the *Microsoft.AzureActiveDirectory* service endpoint isn't enabled on the virtual network or subnet. This service endpoint converts the IPv4 to IPv6.
For more information, see [Virtual Network service endpoints](/azure/virtual-network/virtual-network-service-endpoints-overview). ### Updating dev box definition image issues
-When you update the image used in a dev box definition, you must ensure that you have sufficient IP addresses available in your virtual network. Additional free IP addresses are necessary for the Azure Network connection health check. If the health check fails the dev box definition will not update. You need 1 additional IP address per dev box, and two IP addresses for the health check and Dev Box infrastructure.
+When you update the image used in a dev box definition, you must ensure that you have sufficient IP addresses available in your virtual network. More free IP addresses are necessary for the Azure Network connection health check. If the health check fails, the dev box definition doesn't update. You need one extra IP address per dev box, and one IP address for the health check and Dev Box infrastructure.
For more information about updating dev box definition images, see [Update a dev box definition](how-to-manage-dev-box-definitions.md#update-a-dev-box-definition).
event-hubs Apache Kafka Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/apache-kafka-configurations.md
Property | Recommended Values | Permitted Range | Notes
Property | Recommended Values | Permitted Range | Notes |:|--:|
-`retries` | 2 | | Default is 2147483647.
+`retries` | > 0 | | Default is 2147483647.
`request.timeout.ms` | 30000 .. 60000 | > 20000| Event Hubs will internally default to a minimum of 20,000 ms. `librdkafka` default value is 5000, which can be problematic. *While requests with lower timeout values are accepted, client behavior isn't guaranteed.* `partitioner` | `consistent_random` | See librdkafka documentation | `consistent_random` is default and best. Empty and null keys are handled ideally for most cases. `compression.codec` | `none` || Compression currently not supported.
firewall Forced Tunneling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/forced-tunneling.md
Previously updated : 03/22/2024 Last updated : 09/10/2024 # Azure Firewall forced tunneling
-When you configure a new Azure Firewall, you can route all Internet-bound traffic to a designated next hop instead of going directly to the Internet. For example, you might have a default route advertised via BGP or using User Defined Route (UDR) to force traffic to an on-premises edge firewall or other network virtual appliance (NVA) to process network traffic before it's passed to the Internet. To support this configuration, you must create Azure Firewall with forced tunneling configuration enabled. This is a mandatory requirement to avoid service disruption.
+When you configure a new Azure Firewall, you can route all Internet-bound traffic to a designated next hop instead of going directly to the Internet. For example, you could have a default route advertised via BGP or using User Defined Routes (UDRs) to force traffic to an on-premises edge firewall or other network virtual appliance (NVA) to process network traffic before it's passed to the Internet. To support this configuration, you must create an Azure Firewall with the Firewall Management NIC enabled.
-If you have a pre-existing firewall, you must stop/start the firewall in forced tunneling mode to support this configuration. Stopping/starting the firewall can be used to configure forced tunneling the firewall without the need to redeploy a new one. You should do this during maintenance hours to avoid disruptions. For more information, see the [Azure Firewall FAQ](firewall-faq.yml#how-can-i-stop-and-start-azure-firewall) about stopping and restarting a firewall in forced tunnelling mode.
-You might prefer not to expose a public IP address directly to the Internet. In this case, you can deploy Azure Firewall in forced tunneling mode without a public IP address. This configuration creates a management interface with a public IP address that is used by Azure Firewall for its operations. The public IP address is used exclusively by the Azure platform and can't be used for any other purpose. The tenant data path network can be configured without a public IP address, and Internet traffic can be forced tunneled to another firewall or blocked.
+You might prefer not to expose a public IP address directly to the Internet. In this case, you can deploy Azure Firewall with the Management NIC enabled without a public IP address. When the Management NIC is enabled, it creates a management interface with a public IP address that is used by Azure Firewall for its operations. The public IP address is used exclusively by the Azure platform and can't be used for any other purpose. The tenant data path network can be configured without a public IP address, and Internet traffic can be forced tunneled to another firewall or blocked.
-Azure Firewall provides automatic SNAT for all outbound traffic to public IP addresses. Azure Firewall doesnΓÇÖt SNAT when the destination IP address is a private IP address range per IANA RFC 1918. This logic works perfectly when you egress directly to the Internet. However, with forced tunneling enabled, Internet-bound traffic is SNATed to one of the firewall private IP addresses in the AzureFirewallSubnet. This hides the source address from your on-premises firewall. You can configure Azure Firewall to not SNAT regardless of the destination IP address by adding *0.0.0.0/0* as your private IP address range. With this configuration, Azure Firewall can never egress directly to the Internet. For more information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md).
+Azure Firewall provides automatic SNAT for all outbound traffic to public IP addresses. Azure Firewall doesn't SNAT when the destination IP address is a private IP address range per IANA RFC 1918. This logic works perfectly when you egress directly to the Internet. However, with forced tunneling configured, Internet-bound traffic might be SNATed to one of the firewall private IP addresses in the AzureFirewallSubnet. This hides the source address from your on-premises firewall. You can configure Azure Firewall to not SNAT regardless of the destination IP address by adding *0.0.0.0/0* as your private IP address range. With this configuration, Azure Firewall can never egress directly to the Internet. For more information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md).
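For example, the following Azure PowerShell sketch (with hypothetical firewall and resource group names) sets *0.0.0.0/0* as the private IP address range so the firewall never applies SNAT:

```azurepowershell
# Hypothetical firewall and resource group names.
$azfw = Get-AzFirewall -Name "azfw" -ResourceGroupName "firewall-rg"

# Treat every destination as private so the firewall never SNATs outbound traffic.
$azfw.PrivateRange = @("0.0.0.0/0")
Set-AzFirewall -AzureFirewall $azfw
```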
+
+Azure Firewall also supports split tunneling, which is the ability to selectively route traffic. For example, you can configure Azure Firewall to direct all traffic to your on-premises network while routing traffic to the Internet for KMS activation, ensuring the KMS server is activated. You can do this using route tables on the AzureFirewallSubnet. For more information, see [Configuring Azure Firewall in Forced Tunneling mode - Microsoft Community Hub](https://techcommunity.microsoft.com/t5/azure-network-security-blog/configuring-azure-firewall-in-forced-tunneling-mode/ba-p/3581955).
> [!IMPORTANT] > If you deploy Azure Firewall inside of a Virtual WAN Hub (Secured Virtual Hub), advertising the default route over Express Route or VPN Gateway is not currently supported. A fix is being investigated. > [!IMPORTANT]
-> DNAT isn't supported with forced tunneling enabled. Firewalls deployed with forced tunneling enabled can't support inbound access from the Internet because of asymmetric routing.
+> DNAT isn't supported with forced tunneling enabled. Firewalls deployed with Forced Tunneling enabled can't support inbound access from the Internet because of asymmetric routing. However, firewalls with a Management NIC still support DNAT.
## Forced tunneling configuration
-You can configure forced tunneling during Firewall creation by enabling forced tunneling mode as shown in the following screenshot. To support forced tunneling, Service Management traffic is separated from customer traffic. Another dedicated subnet named **AzureFirewallManagementSubnet** (minimum subnet size /26) is required with its own associated public IP address. This public IP address is for management traffic. It's used exclusively by the Azure platform and can't be used for any other purpose.
-
-In forced tunneling mode, the Azure Firewall service incorporates the Management subnet (AzureFirewallManagementSubnet) for its *operational* purposes. By default, the service associates a system-provided route table to the Management subnet. The only route allowed on this subnet is a default route to the Internet and *Propagate gateway* routes must be disabled. Avoid associating customer route tables to the Management subnet when you create the firewall.
--
-Within this configuration, the *AzureFirewallSubnet* can now include routes to any on-premises firewall or NVA to process traffic before it's passed to the Internet. You can also publish these routes via BGP to *AzureFirewallSubnet* if **Propagate gateway routes** is enabled on this subnet.
+When the Firewall Management NIC is enabled, the *AzureFirewallSubnet* can now include routes to any on-premises firewall or NVA to process traffic before it's passed to the Internet. You can also publish these routes via BGP to *AzureFirewallSubnet* if **Propagate gateway routes** is enabled on this subnet.
For example, you can create a default route on the *AzureFirewallSubnet* with your VPN gateway as the next hop to get to your on-premises device. Or you can enable **Propagate gateway routes** to get the appropriate routes to the on-premises network. -
-If you enable forced tunneling, Internet-bound traffic is SNATed to one of the firewall private IP addresses in AzureFirewallSubnet, hiding the source from your on-premises firewall.
+If you configure forced tunneling, Internet-bound traffic is SNATed to one of the firewall private IP addresses in AzureFirewallSubnet, hiding the source from your on-premises firewall.
If your organization uses a public IP address range for private networks, Azure Firewall SNATs the traffic to one of the firewall private IP addresses in AzureFirewallSubnet. However, you can configure Azure Firewall to **not** SNAT your public IP address range. For more information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md).
-Once you configure Azure Firewall to support forced tunneling, you can't undo the configuration. If you remove all other IP configurations on your firewall, the management IP configuration is removed as well, and the firewall is deallocated. The public IP address assigned to the management IP configuration can't be removed, but you can assign a different public IP address.
+## Related content
-## Next steps
+- [Azure Firewall Management NIC](management-nic.md)
-- [Tutorial: Deploy and configure Azure Firewall in a hybrid network using the Azure portal](tutorial-hybrid-portal.md)
firewall Management Nic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/management-nic.md
+
+ Title: Azure Firewall Management NIC
+description: You can configure a Management NIC to support the Forced Tunneling and Packet Capture features.
++ Last updated : 11/6/2024+++++
+# Azure Firewall Management NIC
+
+> [!NOTE]
+> This feature was previously called Forced Tunneling. Originally, a Management NIC was required only for Forced Tunneling. However, upcoming Firewall features will also require a Management NIC, so it has been decoupled from Forced Tunneling. All relevant documentation has been updated to reflect this.
+
+An Azure Firewall Management NIC separates firewall management traffic from customer traffic. Upcoming Firewall features will also require a Management NIC. To support any of these capabilities, you must create an Azure Firewall with the Firewall Management NIC enabled or enable it on an existing Azure Firewall. This is a mandatory requirement to avoid service disruption.
+
+## What happens when you enable the Management NIC
+
+If you enable a Management NIC, the firewall routes its management traffic via the AzureFirewallManagementSubnet (minimum subnet size /26) with its associated public IP address. You assign this public IP address for the firewall's management traffic. It's used exclusively by the Azure platform and can't be used for any other purpose. All traffic required for firewall operational purposes is incorporated into the AzureFirewallManagementSubnet.
+
+By default, the service associates a system-provided route table to the Management subnet. The only route allowed on this subnet is a default route to the Internet and *Propagate gateway routes* must be disabled. Avoid associating customer route tables to the Management subnet, as this can cause service disruptions if configured incorrectly. If you do associate a route table, then ensure it has a default route to the Internet to avoid service disruptions.
++
+## Enable the Management NIC on existing firewalls
+
+For Standard and Premium firewall versions, the Firewall Management NIC must be manually enabled during the create process as shown previously, but all Basic Firewall versions and all Secured Hub firewalls always have a Management NIC enabled.
+
+For a pre-existing firewall, you must stop the firewall and then restart it with the Firewall Management NIC enabled to support forced tunneling. Stopping and starting the firewall enables the Firewall Management NIC without deleting the existing firewall and redeploying a new one. To avoid disruptions, always stop and start the firewall during maintenance hours, including when you enable the Firewall Management NIC.
+
+Use the following steps:
+
+1. Create the `AzureFirewallManagementSubnet` on the Azure portal and use the appropriate IP address range for the virtual network.
+
+ :::image type="content" source="media/management-nic/firewall-management-subnet.png" alt-text="Screenshot showing add a subnet.":::
+1. Create the new management public IP address with the same properties as the existing firewall public IP address: SKU, Tier, and Location.
+
+ :::image type="content" source="media/management-nic/firewall-management-ip.png" lightbox="media/management-nic/firewall-management-ip.png" alt-text="Screenshot showing the public IP address creation.":::
+
+1. Stop the firewall
+
+ Use the information in [Azure Firewall FAQ](firewall-faq.yml#how-can-i-stop-and-start-azure-firewall) to stop the firewall:
+
+ ```azurepowershell
+ $azfw = Get-AzFirewall -Name "FW Name" -ResourceGroupName "RG Name"
+ $azfw.Deallocate()
+ Set-AzFirewall -AzureFirewall $azfw
+ ```
+
+
+1. Start the firewall with the management public IP address and subnet.
+
+ Start a firewall with one public IP address and a Management public IP address:
+
+ ```azurepowershell
+ $azfw = Get-AzFirewall -Name "FW Name" -ResourceGroupName "RG Name"
+ $vnet = Get-AzVirtualNetwork -Name "VNet Name" -ResourceGroupName "RG Name"
+ $pip = Get-AzPublicIpAddress -Name "azfwpublicip" -ResourceGroupName "RG Name"
+ $mgmtPip = Get-AzPublicIpAddress -Name "mgmtpip" -ResourceGroupName "RG Name"
+ $azfw.Allocate($vnet, $pip, $mgmtPip)
+ $azfw | Set-AzFirewall
+ ```
+
+ Start a firewall with two public IP addresses and a Management public IP address:
+
+ ```azurepowershell
+ $azfw = Get-AzFirewall -Name "FW Name" -ResourceGroupName "RG Name"
+ $vnet = Get-AzVirtualNetwork -Name "VNet Name" -ResourceGroupName "RG Name"
+ $pip1 = Get-AzPublicIpAddress -Name "azfwpublicip" -ResourceGroupName "RG Name"
+ $pip2 = Get-AzPublicIpAddress -Name "azfwpublicip2" -ResourceGroupName "RG Name"
+ $mgmtPip = Get-AzPublicIpAddress -Name "mgmtpip" -ResourceGroupName "RG Name"
+ $azfw.Allocate($vnet,@($pip1,$pip2), $mgmtPip)
+ $azfw | Set-AzFirewall
+ ```
+
+ > [!NOTE]
+ > You must reallocate a firewall and public IP to the original resource group and subscription. When stop/start is performed, the private IP address of the firewall may change to a different IP address within the subnet. This can affect the connectivity of previously configured route tables.
+
+Now when you view the firewall in the Azure portal, you see the assigned Management public IP address:
+++
+> [!NOTE]
+> If you remove all other IP address configurations on your firewall, the management IP address configuration is removed as well, and the firewall is deallocated. The public IP address assigned to the management IP address configuration can't be removed, but you can assign a different public IP address.
+
+## Related content
+
+- [Azure Firewall forced tunneling](forced-tunneling.md)
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
zone_pivot_groups: front-door-tiers
# End-to-end TLS with Azure Front Door
+> [!IMPORTANT]
+> Support for TLS 1.0 and 1.1 will be discontinued on March 1, 2025.
+ Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), is the standard security technology for establishing an encrypted link between a web server and a client, like a web browser. This link ensures that all data passed between the server and the client remain private and encrypted. To meet your security or compliance requirements, Azure Front Door supports end-to-end TLS encryption. Front Door TLS/SSL offload terminates the TLS connection, decrypts the traffic at the Azure Front Door, and re-encrypts the traffic before forwarding it to the origin. When connections to the origin use the origin's public IP address, it's a good security practice to configure HTTPS as the forwarding protocol on your Azure Front Door. By using HTTPS as the forwarding protocol, you can enforce end-to-end TLS encryption for the entire processing of the request from the client to the origin. TLS/SSL offload is also supported if you deploy a private origin with Azure Front Door Premium using the [Private Link](private-link.md) feature.
Azure Front Door offloads the TLS sessions at the edge and decrypts client reque
## Supported TLS versions
-Azure Front Door supports four versions of the TLS protocol: TLS versions 1.0, 1.1, 1.2 and 1.3. All Azure Front Door profiles created after September 2019 use TLS 1.2 as the default minimum with TLS 1.3 enabled, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
+Azure Front Door currently supports four versions of the TLS protocol: TLS versions 1.0, 1.1, 1.2 and 1.3. All Azure Front Door profiles created after September 2019 use TLS 1.2 as the default minimum with TLS 1.3 enabled, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility. Support for TLS 1.0 and 1.1 will be discontinued on March 1, 2025.
Although Azure Front Door supports TLS 1.2, which introduced client/mutual authentication in RFC 5246, Azure Front Door doesn't currently support client/mutual authentication (mTLS).
-You can configure the minimum TLS version in Azure Front Door in the custom domain HTTPS settings using the Azure portal or theΓÇ»[Azure REST API](/rest/api/frontdoorservice/frontdoor/frontdoors/createorupdate#minimumtlsversion). Currently, you can choose between 1.0 and 1.2. As such, specifying TLS 1.2 as the minimum version controls the minimum acceptable TLS version Azure Front Door will accept from a client. For minimum TLS version 1.2 the negotiation will attempt to establish TLS 1.3 and then TLS 1.2, while for minimum TLS version 1.0 all four versions will be attempted. When Azure Front Door initiates TLS traffic to the origin, it will attempt to negotiate the best TLS version that the origin can reliably and consistently accept. Supported TLS versions for origin connections are TLS 1.0, TLS 1.1, TLS 1.2 and TLS 1.3.
+You can configure the minimum TLS version in Azure Front Door in the custom domain HTTPS settings using the Azure portal or the [Azure REST API](/rest/api/frontdoorservice/frontdoor/frontdoors/createorupdate#minimumtlsversion). Currently, you can choose between 1.0 and 1.2. As such, specifying TLS 1.2 as the minimum version controls the minimum acceptable TLS version Azure Front Door will accept from a client. For minimum TLS version 1.2 the negotiation will attempt to establish TLS 1.3 and then TLS 1.2, while for minimum TLS version 1.0 all four versions will be attempted. When Azure Front Door initiates TLS traffic to the origin, it will attempt to negotiate the best TLS version that the origin can reliably and consistently accept. Supported TLS versions for origin connections are TLS 1.0, TLS 1.1, TLS 1.2 and TLS 1.3. Support for TLS 1.0 and 1.1 will be discontinued on March 1, 2025.
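As an illustration, a custom HTTPS configuration body sent to the linked REST API might set the minimum TLS version as follows. This is a sketch only; verify the property names and values against the current REST reference for your Front Door tier.

```json
{
    "certificateSource": "FrontDoor",
    "protocolType": "ServerNameIndication",
    "minimumTlsVersion": "1.2"
}
```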
> [!NOTE] > * Clients with TLS 1.3 enabled are required to support one of the Microsoft SDL compliant EC Curves, including Secp384r1, Secp256r1, and Secp521, in order to successfully make requests with Azure Front Door using TLS 1.3.
healthcare-apis Access Healthcare Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/access-healthcare-apis.md
Last updated 04/29/2024-+ # Access Azure Health Data Services
healthcare-apis Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/availability-zones.md
Last updated 10/15/2024-+ # Availability Zones for Azure Health Data Services
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-private-link.md
Last updated 05/06/2024-+ # Configure Private Link for Azure Health Data Services
healthcare-apis Health Data Services Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/health-data-services-get-started.md
Last updated 06/10/2024-+ # Introduction to Azure Health Data Services
healthcare-apis Healthcare Apis Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-faqs.md
Last updated 12/15/2022-+ # Frequently asked questions about Azure Health Data Services
healthcare-apis Healthcare Apis Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-quickstart.md
Last updated 06/07/2024-+
healthcare-apis Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/logging.md
Last updated 09/12/2024-+ # Logging for Azure Health Data Services
healthcare-apis Network Access Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/network-access-security.md
Last updated 09/12/2024-+ # Manage network access security in Azure Health Data Services
healthcare-apis Release Notes 2021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2021.md
Last updated 03/13/2024-+
healthcare-apis Release Notes 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2022.md
Last updated 03/13/2024-+
healthcare-apis Release Notes 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2023.md
Last updated 03/13/2024-+
healthcare-apis Release Notes 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2024.md
Last updated 07/29/2024-+
healthcare-apis Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/workspace-overview.md
Last updated 1/5/2023-+ # What is Azure Health Data Services workspace?
iot-operations Howto Configure Dataflow Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-dataflow-profile.md
spec:
-> [!IMPORTANT]
-> Currently in public preview, adjusting the instance count may result in message loss. At this time, it's recommended to not adjust the instance count for a profile with active dataflows.
+> [!CAUTION]
+> Currently in public preview, adjusting the instance count may result in message loss or duplication. At this time, it's recommended to not adjust the instance count for a profile with active dataflows.
## Diagnostic settings
resource dataflowProfile 'Microsoft.IoTOperations/instances/dataflowProfiles@202
parent: aioInstance name: '<NAME>' properties: {
- instanceCount: <COUNT>
+ instanceCount: 1
diagnostics: { { logs: {
metadata:
name: '<NAME>' namespace: azure-iot-operations spec:
- instanceCount: <COUNT>
+ instanceCount: 1
diagnostics: logs: level: debug
iot-operations Howto Configure Kafka Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md
Previously updated : 11/04/2024 Last updated : 11/06/2024 ai-usage: ai-assisted #CustomerIntent: As an operator, I want to understand how to configure dataflow endpoints for Kafka in Azure IoT Operations so that I can send data to and from Kafka endpoints.
To set up bi-directional communication between Azure IoT Operations Preview and
[Azure Event Hubs is compatible with the Kafka protocol](../../event-hubs/azure-event-hubs-kafka-overview.md) and can be used with dataflows with some limitations.
-### Create an Azure Event Hubs namespace and event hub in it
+### Create an Azure Event Hubs namespace and event hub
First, [create a Kafka-enabled Azure Event Hubs namespace](../../event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md)
To configure a dataflow endpoint for non-Event-Hub Kafka brokers, set the host,
| -- | - | | Name | The name of the dataflow endpoint. | | Host | The hostname of the Kafka broker in the format `<Kafa-broker-host>:xxxx`. Include port number in the host setting. |
- | Authentication method| The method used for authentication. Choose *SASL* or *X509 certificate*. |
+ | Authentication method| The method used for authentication. Choose *SASL*. |
| SASL type | The type of SASL authentication. Choose *Plain*, *ScramSha256*, or *ScramSha512*. Required if using *SASL*. |
- | Synced secret name | The name of the secret. Required if using *SASL* or *X509*. |
+ | Synced secret name | The name of the secret. Required if using *SASL*. |
| Username reference of token secret | The reference to the username in the SASL token secret. Required if using *SASL*. |
- | X509 client certificate | The X.509 client certificate used for authentication. Required if using *X509*. |
- | X509 intermediate certificates | The intermediate certificates for the X.509 client certificate chain. Required if using *X509*. |
- | X509 client key | The private key corresponding to the X.509 client certificate. Required if using *X509*. |
+ 1. Select **Apply** to provision the endpoint.
The secret must be in the same namespace as the Kafka dataflow endpoint. The sec
<!-- TODO: double check! -->
-### X.509
-
-To use X.509 for authentication, update the authentication section of the Kafka settings to use the X509Certificate method and specify reference to the secret that holds the X.509 certificate.
-
-# [Portal](#tab/portal)
-
-In the operations experience dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **X509 certificate**.
-
-Enter the following settings for the endpoint:
-
-| Setting | Description |
-| | - |
-| Synced secret name | The name of the secret. |
-| X509 client certificate | The X.509 client certificate used for authentication. |
-| X509 intermediate certificates | The intermediate certificates for the X.509 client certificate chain. |
-| X509 client key | The private key corresponding to the X.509 client certificate. |
-
-# [Bicep](#tab/bicep)
--
-```bicep
-kafkaSettings: {
- authentication: {
- method: 'X509Certificate'
- x509CertificateSettings: {
- secretRef: '<SECRET_NAME>'
- }
- }
-}
-```
-
-# [Kubernetes](#tab/kubernetes)
-
-The secret must be in the same namespace as the Kafka dataflow endpoint. Use Kubernetes TLS secret containing the public certificate and private key. For example:
-
-```bash
-kubectl create secret tls my-tls-secret -n azure-iot-operations \
- --cert=path/to/cert/file \
- --key=path/to/key/file
-```
-
-```yaml
-kafkaSettings:
- authentication:
- method: X509Certificate
- x509CertificateSettings:
- secretRef: <SECRET_NAME>
-```
--- ### System-assigned managed identity
iot-operations Howto Configure Mqtt Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-mqtt-endpoint.md
kubectl create configmap client-ca-configmap --from-file root_ca.crt -n azure-io
### Client ID prefix
-You can set a client ID prefix for the MQTT client. The client ID is generated by appending the dataflow instance name to the prefix.
+You can set a client ID prefix for the MQTT client. The client ID is generated by appending the dataflow instance name to the prefix.
+
+> [!CAUTION]
+> Most applications shouldn't modify the client ID prefix, and you shouldn't change it after an initial IoT Operations deployment. Changing the client ID prefix after deployment might result in data loss.
# [Portal](#tab/portal)
iot-operations Concept About State Store Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/concept-about-state-store-protocol.md
Any other failure follows the state store's general error reporting pattern:
When a `keyName` being monitored via `KEYNOTIFY` is modified or deleted, the state store sends a notification to the client. The topic is determined by convention - the client doesn't specify the topic during the `KEYNOTIFY` process.
-The topic is defined in the following example. The `clientId` is an upper-case hex encoded representation of the MQTT ClientId of the client that initiated the `KEYNOTIFY` request and `keyName` is a hex encoded representation of the key that changed.
+The topic is defined in the following example. The `clientId` is an upper-case hex encoded representation of the MQTT ClientId of the client that initiated the `KEYNOTIFY` request and `keyName` is a hex encoded representation of the key that changed. The state store follows the Base 16 encoding rules of [RFC 4648 - The Base16, Base32, and Base64 Data Encodings](https://datatracker.ietf.org/doc/html/rfc4648#section-8) for this encoding.
```console clients/statestore/v1/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/{clientId}/command/notify/{keyName}
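As an illustration of that convention, the following shell sketch builds the notification topic for a hypothetical client ID and key name by Base16-encoding both values as upper-case hex; the names used here are examples only, not values from the state store.

```bash
# Hypothetical values; replace with your MQTT client ID and state store key
CLIENT_ID="my-dataflow-client"
KEY_NAME="my-key"

# RFC 4648 Base16 (hex) encode, upper case
CLIENT_ID_HEX=$(printf '%s' "$CLIENT_ID" | xxd -p | tr -d '\n' | tr 'a-f' 'A-F')
KEY_NAME_HEX=$(printf '%s' "$KEY_NAME" | xxd -p | tr -d '\n' | tr 'a-f' 'A-F')

echo "clients/statestore/v1/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/${CLIENT_ID_HEX}/command/notify/${KEY_NAME_HEX}"
```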
iot-operations Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/troubleshoot/known-issues.md
kubectl delete pod aio-opc-opc.tcp-1-f95d76c54-w9v9c -n azure-iot-operations
- You can't use anonymous authentication for MQTT and Kafka endpoints when you deploy dataflow endpoints from the operations experience UI. The current workaround is to use a YAML configuration file and apply it by using `kubectl`. -- Changing the instance count in a dataflow profile on an active dataflow might result in new messages being discarded or in messages being duplicated on the destination.
+- Currently in public preview, adjusting the instance count (`instanceCount`) in a dataflow profile might result in messages being discarded or duplicated on the destination. At this time, we recommend that you don't adjust the instance count for a profile with active dataflows.
- When you create a dataflow, if you set the `dataSources` field as an empty list, the dataflow crashes. The current workaround is to always enter at least one value in the data sources.
load-balancer Admin State Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/admin-state-overview.md
Previously updated : 05/29/2024 Last updated : 10/17/2024
Administrative state (Admin state) is a feature of Azure Load Balancer that allows you to override the Load Balancer's health probe behavior on a per backend pool instance basis. This feature is useful in scenarios where you would like to take down your backend instance for maintenance, patching, or testing. -- ## Why use admin state? Admin state is useful in scenarios where you want to have more control over the behavior of your Load Balancer. For example, you can set the admin state to up to always consider the backend instance eligible for new connections, even if the health probe indicates otherwise. Conversely, you can set the admin state to down to prevent new connections, even if the health probe indicates that the backend instance is healthy. This can be useful for maintenance or other scenarios where you want to temporarily take a backend instance out of rotation.
load-balancer Cross Subscription How To Attach Backend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-subscription-how-to-attach-backend.md
Previously updated : 06/18/2024 Last updated : 10/17/2024
In this article, you learn how to attach a cross-subscription backend to an Azur
A [cross-subscription load balancer](cross-subscription-overview.md) can reference a virtual network that resides in a different subscription other than the load balancers. This feature allows you to deploy a load balancer in one subscription and reference a virtual network in another subscription. - [!INCLUDE [load-balancer-cross-subscription-prerequisites](../../includes/load-balancer-cross-subscription-prerequisites.md)] [!INCLUDE [load-balancer-cross-subscription-azure-sign-in](../../includes/load-balancer-cross-subscription-azure-sign-in.md)]
load-balancer Cross Subscription How To Attach Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-subscription-how-to-attach-frontend.md
Previously updated : 06/18/2024 Last updated : 10/17/2024
In this article, you learn how to create a load balancer in one Azure subscripti
A [cross-subscription load balancer](cross-subscription-overview.md) can reference a virtual network that resides in a different subscription other than the load balancers. This feature allows you to deploy a load balancer in one subscription and reference a virtual network in another subscription. - ## Prerequisites # [Azure PowerShell](#tab/azurepowershell)
load-balancer Cross Subscription How To Global Backend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-subscription-how-to-global-backend.md
Previously updated : 06/18/2024 Last updated : 10/17/2024
In this article, you learn how to create a global load balancer with cross-subsc
A [cross-subscription load balancer](cross-subscription-overview.md) can reference a virtual network that resides in a different subscription other than the load balancers. This feature allows you to deploy a load balancer in one subscription and reference a virtual network in another subscription. - ## Prerequisites # [Azure PowerShell](#tab/azurepowershell)
load-balancer Cross Subscription How To Internal Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-subscription-how-to-internal-load-balancer.md
Previously updated : 06/18/2024 Last updated : 10/17/2024 #CustomerIntent: As a < type of user >, I want < what? > so that < why? > .
In this how-to guide, you learn how to create a cross-subscription internal load
A [cross-subscription internal load balancer (ILB)](cross-subscription-overview.md) can reference a virtual network that resides in a different subscription other than the load balancers. This feature allows you to deploy a load balancer in one subscription and reference a virtual network in another subscription. - [!INCLUDE [load-balancer-cross-subscription-prerequisites](../../includes/load-balancer-cross-subscription-prerequisites.md)] [!INCLUDE [load-balancer-cross-subscription-azure-sign-in](../../includes/load-balancer-cross-subscription-azure-sign-in.md)]
load-balancer Cross Subscription Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-subscription-overview.md
Previously updated : 06/18/2024 Last updated : 10/17/2024
Azure Load Balancer supports cross-subscription load balancing, where the fronte
This article provides an overview of cross-subscription load balancing with Azure Load Balancer, and the scenarios it supports. - ## What is cross-subscription load balancing? Cross-subscription load balancing allows you to deploy Azure Load Balancer resources across multiple subscriptions. This feature enables you to deploy a load balancer in one subscription and have the frontend IP and backend pool instances in a different subscription. This capability is useful for organizations that have separate subscriptions for networking and application resources.
load-balancer Manage Admin State How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-admin-state-how-to.md
Administrative State (Admin State) is a feature of Azure Load Balancer that allo
You can use the Azure portal, Azure PowerShell, or Azure CLI to manage the admin state for a backend pool instance. Each section provides instructions for each method with examples for setting, updating, or removing an admin state configuration. - ## Prerequisites # [Azure portal](#tab/azureportal)
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
A load test configuration uses the following keys:
| `testName` | string | N | | **Deprecated**. Unique identifier of the load test. This setting is replaced by `testId`. You can still run existing tests with the `testName` field. | | `displayName` | string | N | | Display name of the test. This value is shown in the list of tests in the Azure portal. If not provided, `testId` is used as the display name. | | `description` | string | N | | Short description of the test. The value has a maximum length of 100 characters. |
-| `testType` | string | Y | | Test type. Possible values:<br/><ul><li>`URL`: URL-based load test</li><li>`JMX`: JMeter-based load test</li></ul> |
-| `testPlan` | string | Y | | Reference to the test plan file.<br/><ul><li>If `testType: JMX`: relative path to the JMeter test script.</li><li>If `testType: URL`: relative path to the [requests JSON file](./how-to-add-requests-to-url-based-test.md).</li></ul> |
+| `testType` | string | Y | | Test type. Possible values:<br/><ul><li>`URL`: URL-based load test</li><li>`JMX`: JMeter-based load test</li><li>`Locust`: Locust-based load test</li></ul> |
+| `testPlan` | string | Y | | Reference to the test plan file.<br/><ul><li>If `testType: JMX`: relative path to the JMeter test script.</li><li>If `testType: Locust`: relative path to the Locust test script.</li><li>If `testType: URL`: relative path to the [requests JSON file](./how-to-add-requests-to-url-based-test.md).</li></ul> |
| `engineInstances` | integer | Y | | Number of parallel test engine instances for running the test plan. Learn more about [configuring high-scale load](./how-to-high-scale-load.md). |
-| `configurationFiles` | array of string | N | | List of external files, required by the test script. For example, CSV data files, images, or any other data file.<br/>Azure Load Testing uploads all files in the same folder as the test script. In the JMeter script, only refer to external files using the file name, and remove any file path information. |
+| `configurationFiles` | array of string | N | | List of external files, required by the test script. For example, CSV data files, images, or any other data file.<br/>Azure Load Testing uploads all files in the same folder as the test script. In the JMeter script or the Locust script, only refer to external files using the file name, and remove any file path information. |
| `failureCriteria` | object | N | | List of load test fail criteria. See [failureCriteria](#failurecriteria-configuration) for more details. | | `autoStop` | string or object | N | | Automatically stop the load test when the error percentage exceeds a value.<br/>Possible values:<br/>- `disable`: don't stop a load test automatically.<br/>- *object*: see [autostop](#autostop-configuration) configuration for more details. |
-| `properties` | object | N | | JMeter user property file references. See [properties](#properties-configuration) for more details. |
-| `zipArtifacts` | array of string| N | | Specifies the list of zip artifact files. For files other than JMeter scripts and user properties, if the file size exceeds 50 MB, compress them into a ZIP file. Ensure that the ZIP file remains below 50 MB in size. Only 5 ZIP artifacts are allowed with a maximum of 1000 files in each and uncompressed size of 1 GB. Only applies when `testType: JMX`. |
+| `properties` | object | N | | <ul><li>If `testType: JMX`: JMeter user property file references.</li><li>If `testType: Locust`: Locust configuration file references.</li></ul> See [properties](#properties-configuration) for more details. |
+| `zipArtifacts` | array of string| N | | Specifies the list of zip artifact files. For files other than the JMeter script and user properties (JMeter-based tests) or the Locust script and configuration files (Locust-based tests), compress them into a ZIP file if the file size exceeds 50 MB. Ensure that the ZIP file remains below 50 MB in size. Only 5 ZIP artifacts are allowed, with a maximum of 1000 files in each and an uncompressed size of 1 GB. Only applies for `testType: JMX` and `testType: Locust`. |
| `splitAllCSVs` | boolean | N | False | Split the input CSV files evenly across all test engine instances. For more information, see [Read a CSV file in load tests](./how-to-read-csv-data.md#split-csv-input-data-across-test-engines). |
-| `secrets` | object | N | | List of secrets that the Apache JMeter script references. See [secrets](#secrets-configuration) for more details. |
-| `env` | object | N | | List of environment variables that the Apache JMeter script references. See [environment variables](#env-configuration) for more details. |
-| `certificates` | object | N | | List of client certificates for authenticating with application endpoints in the JMeter script. See [certificates](#certificates-configuration) for more details.|
+| `secrets` | object | N | | List of secrets that the Apache JMeter or Locust script references. See [secrets](#secrets-configuration) for more details. |
+| `env` | object | N | | List of environment variables that the Apache JMeter or Locust script references. See [environment variables](#env-configuration) for more details. |
+| `certificates` | object | N | | List of client certificates for authenticating with application endpoints in the JMeter or Locust script. See [certificates](#certificates-configuration) for more details.|
| `keyVaultReferenceIdentity` | string | N | | Resource ID of the user-assigned managed identity for accessing the secrets from your Azure Key Vault. If you use a system-managed identity, this information isn't needed. Make sure to grant this user-assigned identity access to your Azure key vault. Learn more about [managed identities in Azure Load Testing](./how-to-use-a-managed-identity.md). | | `subnetId` | string | N | | Resource ID of the virtual network subnet for testing privately hosted endpoints. This subnet hosts the injected test engine VMs. For more information, see [how to load test privately hosted endpoints](./how-to-test-private-endpoint.md). | | `publicIPDisabled` | boolean | N | | Disable the deployment of a public IP address, load balancer, and network security group while testing a private endpoint. For more information, see [how to load test privately hosted endpoints](./how-to-test-private-endpoint.md). |
You can specify a JMeter user properties file for your load test. The user prope
| Key | Type | Default value | Description | | -- | -- | -- | - |
-| `userPropertyFile` | string | | File to use as an Apache JMeter [user properties file](https://jmeter.apache.org/usermanual/test_plan.html#properties). The file is uploaded to the Azure Load Testing resource alongside the JMeter test script and other configuration files. If the file is in a subfolder on your local machine, use a path relative to the location of the test script. |
+| `userPropertyFile` | string | | File to use as an Apache JMeter [user properties file](https://jmeter.apache.org/usermanual/test_plan.html#properties) or a Locust [configuration file](https://docs.locust.io/en/stable/configuration.html#configuration-file). For Locust, files with extensions .conf, .ini, and .toml are supported as configuration files. The file is uploaded to the Azure Load Testing resource alongside the test script and other configuration files. If the file is in a subfolder on your local machine, use a path relative to the location of the test script. |
#### User property file configuration sample
properties:
userPropertyFile: 'user.properties' ```
+The following code snippet shows a load test configuration, which specifies a Locust configuration file.
+
+```yaml
+version: v0.1
+testId: SampleTest
+displayName: Sample Test
+description: Load test website home page
+testPlan: SampleTest.py
+testType: Locust
+engineInstances: 1
+properties:
+ userPropertyFile: 'locust.conf'
+```
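For context, a minimal `locust.conf` file like the one referenced above might contain standard Locust configuration options such as the following; the option names come from Locust's documented configuration settings, and the values are illustrative only.

```
# locust.conf - illustrative values only
users = 50
spawn-rate = 5
run-time = 5m
```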
+ ### `secrets` configuration You can store secret values in Azure Key Vault and reference them in your test plan. Learn more about [using secrets with Azure Load Testing](./how-to-parameterize-load-tests.md).
managed-grafana Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md
Each data source also has its own limits that can be reflected in Azure Managed
## Managed identities
-Each Azure Managed Grafana instance can only have one user-assigned managed identity, or one user-assigned managed identity assigned.
+Each Azure Managed Grafana instance can only be assigned one managed identity, user-assigned or system-assigned, but not both.
## Related links
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/migrate-support-matrix-vmware.md
Last updated 11/04/2024
-zone_pivot_groups: vmware-discovery-requirements
# Support matrix for VMware discovery
To assess servers, first, create an Azure Migrate project. The Azure Migrate: Di
As you plan your migration of VMware servers to Azure, see the [migration support matrix](../migrate-support-matrix-vmware-migration.md). - ## VMware requirements VMware | Details
VMware | Details
vCenter Server | Servers that you want to discover and assess must be managed by vCenter Server version 8.0, 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Discovering servers by providing ESXi host details in the appliance currently isn't supported. <br /><br /> IPv6 addresses aren't supported for vCenter Server (for discovery and assessment of servers) and ESXi hosts (for replication of servers). Permissions | The Azure Migrate: Discovery and assessment tool requires a vCenter Server read-only account.<br /><br /> If you want to use the tool for software inventory, agentless dependency analysis, web apps, and SQL discovery, the account must have privileges for guest operations on VMware virtual machines (VMs). - ## Server requirements
VMware | Details
Operating systems | All Windows and Linux operating systems can be assessed for migration. Storage | Disks attached to SCSI, IDE, and SATA-based controllers are supported. - ## Azure Migrate appliance requirements
Here are more requirements for the appliance:
- In Azure Government, you must deploy the appliance by using a [script](../deploy-appliance-script-government.md). - The appliance must be able to access specific URLs in [public clouds](../migrate-appliance.md#public-cloud-urls) and [government clouds](../migrate-appliance.md#government-cloud-urls). --- ## Port access requirements Device | Connection
Azure Migrate appliance | Inbound connections on TCP port 3389 to allow remote d
vCenter Server | Inbound connections on TCP port 443 to allow the appliance to collect configuration and performance metadata for assessments. <br /><br /> The appliance connects to vCenter on port 443 by default. If vCenter Server listens on a different port, you can modify the port when you set up discovery. ESXi hosts | For [discovery of software inventory](../how-to-discover-applications.md) or [agentless dependency analysis](../concepts-dependency-visualization.md#agentless-analysis), the appliance connects to ESXi hosts on TCP port 443 to discover software inventory and dependencies on the servers. -- ## Software inventory requirements
Server access | You can add multiple domain and nondomain (Windows/Linux) creden
Port access | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running servers on which you want to perform software inventory. The server running vCenter Server returns an ESXi host connection to download the file that contains the details of the software inventory. <br /><br /> If you use domain credentials, the Azure Migrate appliance must be able to connect to the following TCP and UDP ports: <br /> <br />TCP 135 – RPC Endpoint<br />TCP 389 – LDAP<br />TCP 636 – LDAP SSL<br />TCP 445 – SMB<br />TCP/UDP 88 – Kerberos authentication<br />TCP/UDP 464 – Kerberos change operations Discovery | Software inventory is performed from vCenter Server by using VMware Tools installed on the servers.<br/><br/> The appliance gathers the information about the software inventory from the server running vCenter Server through vSphere APIs.<br/><br/> Software inventory is agentless. No agent is installed on the server, and the appliance doesn't connect directly to the servers. -- ## SQL Server instance and database discovery requirements [Software inventory](../how-to-discover-applications.md) identifies SQL Server instances. The appliance attempts to connect to the respective SQL Server instances through the Windows authentication or SQL Server authentication credentials in the appliance configuration manager by using this information. The appliance can connect to only those SQL Server instances to which it has network line of sight. Software inventory by itself might not need network line of sight.
Use the following sample scripts to create a login and provision it with the nec
--GO ``` - ## Web apps discovery requirements
Required privileges | Local admin. | Root or sudo user.
> [!NOTE] > Data is always encrypted at rest and during transit. -- ## Dependency analysis requirements (agentless)
Discovery method | Dependency information between servers is gathered by using
> [!Note] > In some recent Linux OS versions, the netstat command was replaced by the `ss` command; keep that in mind when preparing the servers. - ## Dependency analysis requirements (agent-based)
Management | When you register agents to the workspace, use the ID and key provi
Internet connectivity | If servers aren't connected to the internet, install the Log Analytics gateway on the servers. Azure Government | Agent-based dependency analysis isn't supported. ## Limitations
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
The following sections show you how to set up Azure Database for MySQL - Flexibl
The sample is a Java application backed by a MySQL database, and is deployed to the OpenShift cluster using Source-to-Image (S2I). For more information about S2I, see the [S2I Documentation](http://red.ht/eap-aro-s2i).
+> [!NOTE]
+> Because Azure Workload Identity isn't yet supported by Azure Red Hat OpenShift, this article still uses a username and password for database authentication instead of passwordless database connections.
+ Open a shell and set the following environment variables. Replace the substitutions as appropriate. ```bash
operator-nexus List Of Metrics Collected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/list-of-metrics-collected.md
All these metrics for Nexus Cluster are collected and delivered to Azure Monitor
|KubeletRunningPods|Kubelet|Kubelet Running Pods|Count|Number of pods running on the node. In the absence of data, this metric will retain the most recent value emitted|Host| |KubeletRuntimeOperationsErrorsTotal|Kubelet|Kubelet Runtime Operations Errors Total|Count|Cumulative number of runtime operation errors by operation type. In the absence of data, this metric will retain the most recent value emitted|Host, Operation Type| |KubeletStartedPodsErrorsTotal|Kubelet|Kubelet Started Pods Errors Total|Count|Cumulative number of errors when starting pods. In the absence of data, this metric will retain the most recent value emitted|Host|
-|KubeletVolumeStatsAvailableBytes|Kubelet|Volume Available Bytes|Bytes|Number of available bytes in the volume. In the absence of data, this metric will retain the most recent value emitted|Host, Namespace, Persistent Volume Claim|
-|KubeletVolumeStatsCapacityBytes|Kubelet|Volume Capacity Bytes|Bytes|Capacity of the volume. In the absence of data, this metric will retain the most recent value emitted|Host, Namespace, Persistent Volume Claim|
-|KubeletVolumeStatsUsedBytes|Kubelet|Volume Used Bytes|Bytes|Number of used bytes in the volume. In the absence of data, this metric will retain the most recent value emitted|Host, Namespace, Persistent Volume Claim|
### ***Kubernetes Node***
All these metrics for Nexus Cluster are collected and delivered to Azure Monitor
|KubevirtVirtControllerReady|VMOrchestrator|Kubevirt Virt Controller Ready|Unspecified|Indication for a virt-controller that is ready to take the lead. The value is 1 if the virt-controller is ready, 0 otherwise. In the absence of data, this metric will default to 0|Pod Name| |KubevirtVirtOperatorReady|VMOrchestrator|Kubevirt Virt Operator Ready|Unspecified|Indication for a virt operator being ready. The value is 1 if the virt operator is ready, 0 otherwise. In the absence of data, this metric will default to 0|Pod Name| |KubevirtVmiMemoryActualBalloonBytes|VMOrchestrator|Kubevirt VMI Memory Balloon Bytes|Bytes|Current balloon size. In the absence of data, this metric will default to 0|Name, Node|
-|KubevirtVmiMemoryAvailableBytes|VMOrchestrator|Kubevirt VMI Memory Available Bytes|Bytes|Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS does not initialize all assigned pages. In the absence of data, this metric will default to 0|Name, Node|
+|KubevirtVmiMemoryAvailableBytes|VMOrchestrator|Kubevirt VMI Memory Available Bytes|Bytes|Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS doesn't initialize all assigned pages. In the absence of data, this metric will default to 0|Name, Node|
|KubevirtVmiMemorySwapInTrafficBytesTotal|VMOrchestrator|Kubevirt VMI Mem Swp In Traffic Bytes|Bytes|The total amount of data read from swap space of the guest. In the absence of data, this metric will retain the most recent value emitted|Name, Node| |KubevirtVmiMemoryDomainBytesTotal|VMOrchestrator|Kubevirt VMI Mem Dom Bytes (Preview)|Bytes|The amount of memory allocated to the domain. The memory value in the domain XML file. In the absence of data, this metric will retain the most recent value emitted|Node| |KubevirtVmiMemorySwapOutTrafficBytesTotal|VMOrchestrator|Kubevirt VMI Mem Swp Out Traffic Bytes|Bytes|The total amount of memory written out to swap space of the guest. In the absence of data, this metric will retain the most recent value emitted|Name, Node|
All these metrics for Nexus Cluster are collected and delivered to Azure Monitor
|KubevirtVmiNetworkReceivePacketsTotal|VMOrchestrator|Kubevirt VMI Net Rx Packets|Bytes|Total network traffic received packets. In the absence of data, this metric will retain the most recent value emitted|Interface, Name, Node| |KubevirtVmiNetworkTransmitPacketsDroppedTotal|VMOrchestrator|Kubevirt VMI Net Tx Packets Drop|Bytes|The total number of transmit packets dropped on virtual NIC (vNIC) interfaces. In the absence of data, this metric will retain the most recent value emitted|Interface, Name, Node| |KubevirtVmiNetworkTransmitPacketsTotal|VMOrchestrator|Kubevirt VMI Net Tx Packets Total|Bytes|Total network traffic transmitted packets. In the absence of data, this metric will retain the most recent value emitted|Interface, Name, Node|
-|KubevirtVmiOutdatedInstances|VMOrchestrator|Kubevirt VMI Outdated Count|Count|Indication for the total number of VirtualMachineInstance (VMI) workloads that are not running within the most up-to-date version of the virt-launcher environment. In the absence of data, this metric will default to 0||
+|KubevirtVmiOutdatedInstances|VMOrchestrator|Kubevirt VMI Outdated Count|Count|Indication for the total number of VirtualMachineInstance (VMI) workloads that aren't running within the most up-to-date version of the virt-launcher environment. In the absence of data, this metric will default to 0||
|KubevirtVmiPhaseCount|VMOrchestrator|Kubevirt VMI Phase Count|Count|Sum of Virtual Machine Instances (VMIs) per phase and node. Phase can be one of the following values: Pending, Scheduling, Scheduled, Running, Succeeded, Failed, Unknown. In the absence of data, this metric will retain the most recent value emitted|Node, Phase, Workload| |KubevirtVmiStorageIopsReadTotal|VMOrchestrator|Kubevirt VMI Storage IOPS Read Total|Count|Total number of Input/Output (I/O) read operations. In the absence of data, this metric will retain the most recent value emitted|Drive, Name, Node| |KubevirtVmiStorageIopsWriteTotal|VMOrchestrator|Kubevirt VMI Storage IOPS Write Total|Count|Total number of Input/Output (I/O) write operations. In the absence of data, this metric will retain the most recent value emitted|Drive, Name, Node|
All these metrics for Nexus Cluster are collected and delivered to Azure Monitor
| Metric | Category | Display Name | Unit | Description | Dimensions | |-|:-:|:--:|:-:|:--:|:--:|
-|NexusClusterHeartbeatConnectionStatus|Nexus Cluster|Cluster Heartbeat Connection Status|Count|Indicates whether the Cluster is having issues communicating with the Cluster Manager. The value of the metric is 0 when the connection is healthy and 1 when it is unhealthy. In the absence of data, this metric will retain the most recent value emitted|Reason|
+|NexusClusterHeartbeatConnectionStatus|Nexus Cluster|Cluster Heartbeat Connection Status|Count|Indicates whether the Cluster is having issues communicating with the Cluster Manager. The value of the metric is 0 when the connection is healthy and 1 when it's unhealthy. In the absence of data, this metric will retain the most recent value emitted|Reason|
|NexusClusterMachineGroupUpgrade|Nexus Cluster|Cluster Machine Group Upgrade|Count|Tracks Cluster Machine Group Upgrades performed. The value of the metric is 0 when the result is successful and 1 for all other results. In the absence of data, this metric will retain the most recent value emitted|Machine Group, Result, Upgraded From Version, Upgraded To Version| ## Baremetal servers
All the metrics from Storage appliance are collected and delivered to Azure Moni
|PurefaArraySpaceProvisionedBytes|Storage Array|Nexus Storage Array Space Prov (Deprecated)|Bytes|Deprecated - Overall space provisioned for the pure storage array. In the absence of data, this metric will retain the most recent value emitted|| |PurefaArraySpaceUsage|Storage Array|Nexus Storage Array Space Used (Deprecated)|Percent|Deprecated - Space usage of the pure storage array in percentage. In the absence of data, this metric will default to 0|| |PurefaArraySpaceUsedBytes|Storage Array|Nexus Storage Array Space Used Bytes (Deprecated)|Bytes|Deprecated - Overall space used for the pure storage array. In the absence of data, this metric will retain the most recent value emitted|Dimension|
-|PurefaHardwareChassisHealth|Storage Array|Nexus Storage HW Chassis Health (Deprecated)|Count|Deprecated - Denotes whether a hardware chassis of the pure storage array is healthy or not. A value of 0 means the chassis is healthy, a value of 1 means it is unhealthy. In the absence of data, this metric will default to 0||
-|PurefaHardwareControllerHealth|Storage Array|Nexus Storage HW Controller Health (Deprecated)|Count|Deprecated - Denotes whether a hardware controller of the pure storage array is healthy or not. A value of 0 means the controller is healthy, a value of 1 means it is unhealthy. In the absence of data, this metric will default to 0|Controller|
+|PurefaHardwareChassisHealth|Storage Array|Nexus Storage HW Chassis Health (Deprecated)|Count|Deprecated - Denotes whether a hardware chassis of the pure storage array is healthy or not. A value of 0 means the chassis is healthy, a value of 1 means it's unhealthy. In the absence of data, this metric will default to 0||
+|PurefaHardwareControllerHealth|Storage Array|Nexus Storage HW Controller Health (Deprecated)|Count|Deprecated - Denotes whether a hardware controller of the pure storage array is healthy or not. A value of 0 means the controller is healthy, a value of 1 means it's unhealthy. In the absence of data, this metric will default to 0|Controller|
|PurefaHardwarePowerVolt|Storage Array|Nexus Storage Hardware Power Volts (Deprecated)|Unspecified|Deprecated - Hardware power supply voltage of the pure storage array. In the absence of data, this metric will default to 0|Power Supply| |PurefaHardwareTemperatureCelsiusByChassis|Storage Array|Nexus Storage Hardware Temp Celsius By Chassis (Deprecated)|Unspecified|Deprecated - Hardware temperature, in Celsius, of the controller in the pure storage array. In the absence of data, this metric will retain the most recent value emitted|Sensor, Chassis| |PurefaHardwareTemperatureCelsiusByController|Storage Array|Nexus Storage Hardware Temp Celsius By Controller (Deprecated)|Unspecified|Deprecated - Hardware temperature, in Celsius, of the controller in the pure storage array. In the absence of data, this metric will retain the most recent value emitted|Controller, Sensor|
operator-service-manager Best Practices Onboard Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/best-practices-onboard-deploy.md
The component name is defined in the NFDV:
## Cleanup considerations
-Delete operator resources in the following order to make sure no orphaned resources are left behind:
-
+### Operator Resources
+As the first step towards cleaning up a deployed environment, start by deleting operator resources in the following order:
- SNS - Site - CGV
-> [!IMPORTANT]
-> Make sure SNS is deleted before you delete the NFDV.
Only once these operator resources are successfully deleted should a user proceed to delete other environment resources, such as the NAKS cluster.
-Delete publisher resources in the following order to make sure no orphaned resources are left behind:
+> [!IMPORTANT]
+> Deleting resources out of order can result in orphaned resources left behind.
-- CGS
+### Publisher Resources
+As the first step towards cleaning up an onboarded environment, start by deleting publisher resources in the following order:
- NSDV - NSDG+
+> [!IMPORTANT]
+> Make sure SNS is deleted before you delete the NFDV.
+ - NFDV - NFDG - Artifact Manifest - Artifact Store - Publisher
-## Considerations if your NF runs cert-manager
- > [!IMPORTANT]
-> This guidance applies only to certain releases. Check your version for proper behavior.
-
-From release 1.0.2728-50 to release Version 2.0.2777-132, AOSM uses cert-manager to store and rotate certificates. As part of this change, AOSM deploys a cert-manager operator, and associate CRDs, in the azurehybridnetwork namespace. Since having multiple cert-manager operators, even deployed in separate namespaces, will watch across all namespaces, only one cert-manager can be effectively run on the cluster.
-
-Any user trying to install cert-manager on the cluster, as part of a workload deployment, will get a deployment failure with an error that the CRD ΓÇ£exists and cannot be imported into the current release.ΓÇ¥ To avoid this error, the recommendation is to skip installing cert-manager, instead take dependency on cert-manager operator and CRD already installed by AOSM.
-
-### Other Configuration Changes to Consider
-
-In addition to disabling the NfApp associated with the old user cert-manager, we have found other changes may be needed;
-1. If one NfApp contains both cert-manager and the CA installation, these must broken into two NfApps, so that the partner can disable cert-manager but enable CA installation.
-2. If any other NfApps have DependsOn references to the old user cert-manager NfApp, these will need to be removed.
-3. If any other NfApps reference the old user cert-manager namespace value, this will need to be changed to the new azurehybridnetwork namespace value.
-
-### Cert-Manager Version Compatibility & Management
-
-For the cert-manager operator, our current deployed version is 1.14.5. Users should test for compatibility with this version. Future cert-manager operator upgrades will be supported via the NFO extension upgrade process.
-
-For the CRD resources, our current deployed version is 1.14.5. Users should test for compatibility with this version. Since management of a common cluster CRD is something typically handled by a cluster administrator, we are working to enable CRD resource upgrades via standard Nexus Add-on process.
+> Deleting resources out of order can result in orphaned resources left behind.
## NfApp Sequential Ordering Behavior ### Overview- By default, containerized network function applications (NfApps) are installed or updated based on the sequential order in which they appear in the network function design version (NFDV). For delete, the NfApps are deleted in the reverse order specified. Where a publisher needs to define specific ordering of NfApps, different from the default, a dependsOnProfile is used to define a unique sequence for install, update and delete operations. ### How to use dependsOnProfile- A publisher can use the dependsOnProfile in the NFDV to control the sequence of helm executions for NfApps. Given the following example, on install operation the NfApps will be deployed in the following order: dummyApplication1, dummyApplication2, then dummyApplication. On update operation, the NfApps will be updated in the following order: dummyApplication2, dummyApplication1, then dummyApplication. On delete operation, the NfApps will be deleted in the following order: dummyApplication2, dummyApplication1, then dummyApplication. ```json
A publisher can use the dependsOnProfile in the NFDV to control the sequence of
``` ### Common Errors- As of today, if dependsOnProfile provided in the NFDV is invalid, the NF operation will fail with a validation error. The validation error message is shown in the operation status resource and looks similar to the following example. ```json
operator-service-manager Get Started With Cluster Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/get-started-with-cluster-registry.md
# Get started with cluster registry
+Improve resiliency for cloud native network functions with Azure Operator Service Manager (AOSM) cluster registry (CR).
+
+## Document history
* Created & First Published: July 26, 2024 * Updated for HA: October 16, 2024
+* Updated for GC: November 5, 2024
-## Overview
-Improve resiliency for cloud native network functions with Azure Operator Service Manager (AOSM) cluster registry (CR). This feature requires the following minimum environment:
+## Feature dependencies
+This feature requires the following minimum environment:
* Minimum AOSM ARM API Version: 2023-09-01 * First version, no high availability (HA) for Network Function (NF) kubernetes extension: 1.0.2711-7 * First version, with HA for NF kubernetes extension: 2.0.2810-144
+* First version, with GC for NF kubernetes extension: 2.0.2860-160
-## Introduction
+## Cluster registry overview
Azure Operator Service Manager (AOSM) cluster registry (CR) enables a local copy of container images in the Nexus K8s cluster. When the containerized network function (CNF) is installed with cluster registry enabled, the container images are pulled from the remote AOSM artifact store and saved to this local cluster registry. Using a mutating webhook, cluster registry automatically intercepts image requests and substitutes the local registry path to avoid publisher packaging changes. With cluster registry, CNF access to container images survives loss of connectivity to the remote artifact store. ### Key use cases and benefits
The cluster registry feature deploys helper pods on the target edge cluster to a
* This pod stores and retrieves container images for CNF. ### Cluster registry garbage collection
-AOSM cluster extension runs a background job to regularly clean up container images. The job schedule and conditions are configured by end-user, but by default the job runs once per days at a 0% utilization threshold. This job will check if the cluster registry usage has reached the specified threshold, and if so, it will initiate the garbage collection process.
+The AOSM cluster extension runs a background garbage collection (GC) job to regularly clean up container images. This job runs on a schedule, checks whether cluster registry usage has reached the specified threshold, and if so, initiates the garbage collection process. The job schedule and threshold are configured by the end user, but by default the job runs once per day at a 0% utilization threshold.
#### Clean up garbage image manifests AOSM maintains references between pod owner resources and the images they consume in the cluster registry. When the image cleanup process starts, images that aren't linked to any pods are identified and soft deleted to remove them from the cluster registry. This type of soft delete doesn't immediately free cluster registry storage space. Actual image file removal depends on the CNCF distribution registry garbage collection outlined below.
AOSM sets up the cluster registry using open source [CNCF distribution registry]
> This process requires the cluster registry to be in read-only mode. If images are uploaded while the registry isn't in read-only mode, there's a risk that image layers are mistakenly deleted, leading to a corrupted image. The registry requires a read-only lock for a duration of up to 1 minute. Consequently, AOSM defers other NF deployments while the cluster registry is in read-only mode. #### Garbage collection configuration parameters
-Customers can adjust the following settings to configure the schedule and conditions for the garbage collection job.
+The following parameters configure the schedule and threshold for the garbage collection job.
* global.networkfunctionextension.clusterRegistry.clusterRegistryGCCadence * global.networkfunctionextension.clusterRegistry.clusterRegistryGCThreshold
-* For more configuration details, please refer to the [Network function extension installation instructions](manage-network-function-operator.md)
+* For more configuration details, please refer to the latest [Network function extension installation instructions](manage-network-function-operator.md). A sample command for supplying these settings is shown after this list.
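For illustration, these settings can typically be supplied as extension configuration when updating the network function Kubernetes extension. The resource names, extension name, cron-style cadence format, and threshold value below are placeholder assumptions rather than documented defaults; confirm the exact keys and value formats in the installation instructions linked above.

```bash
# Sketch only: update the AOSM network function extension with GC settings
az k8s-extension update \
  --resource-group <resource-group> \
  --cluster-name <connected-cluster-name> \
  --cluster-type connectedClusters \
  --name <nf-extension-name> \
  --configuration-settings \
    global.networkfunctionextension.clusterRegistry.clusterRegistryGCCadence="0 0 * * *" \
    global.networkfunctionextension.clusterRegistry.clusterRegistryGCThreshold=20
```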
## High availability and resiliency considerations The AOSM NF extension relies on a mutating webhook and edge registry to support key features.
All AOSM operator containers are configured with appropriate request, limit for
* Pod anti-affinity only deals with the initial placement of pods; subsequent pod scaling and repair follow standard K8s scheduling logic. ## Frequently Asked Questions
-* Can I use AOSM cluster registry with a CNF application previously deployed?
- * If there's a CNF application already deployed without cluster registry, the container images are not available automatically. The cluster registry must be enabled before deploying the network function with AOSM.
-* Can I change the storage size after a deployment?
- * Storage size can't be modified after the initial deployment. We recommend configuring the volume size by 3x to 4x of the starting size.
+#### Can I use AOSM cluster registry with a CNF application previously deployed?
+If there's a CNF application already deployed without cluster registry, the container images are not available automatically. The cluster registry must be enabled before deploying the network function with AOSM.
+
+#### Can I change the storage size after a deployment?
+Storage size can't be modified after the initial deployment. We recommend configuring the volume size at 3x to 4x of the starting size.
+
+#### Can I list the files presently stored in the cluster repository?
+The following command can be used to list files in a human readable format:
+```bash
+ kubectl get artifacts -A -o jsonpath='{range .items[*]}{.spec.sourceArtifact}{"\n"}{end}'
+```
+This command should produce output similar to the following:
+```bash
+ ppleltestpublisheras2f88b55037.azurecr.io/nginx:1.0.0
+```
operator-service-manager Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/release-notes.md
Through MicrosoftΓÇÖs Secure Future Initiative (SFI), this release delivers the
* NFO - Signing of helm package used by network function extension. * NFO - Signing of core image used by network function extension.
-* NFO - Use of Cert-manager for service certificate management and rotation. This change can result in failed SNS deployments if not properly reconciled. For guidance on the impact of this change, see our [best practice recommendations](best-practices-onboard-deploy.md#considerations-if-your-nf-runs-cert-manager).
+* NFO - Use of Cert-manager for service certificate management and rotation. This change can result in failed SNS deployments if not properly reconciled.
* NFO - Automated refresh of AOSM certificates during extension installation. * NFO - A dedicated service account for the preupgrade job to safeguard against modifications to the existing network function extension service account. * RP - The service principles (SPs) used for deploying site & Network Function (NF) now require ΓÇ£Microsoft.ExtendedLocation/customLocations/readΓÇ¥ permission. The SPs that deploy day N scenario now require "Microsoft.Kubernetes/connectedClusters/listClusterUserCredentials/action" permission. This change can result in failed SNS deployments if not properly reconciled
oracle Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/database-overview.md
To purchase Oracle Database@Azure, contact [Oracle's sales team](https://go.orac
Billing and payment for the service is done through Azure. Payment for Oracle Database@Azure counts toward your Microsoft Azure Consumption Commitment (MACC). Existing Oracle Database software customers can use the Bring Your Own License (BYOL) option or Unlimited License Agreements (ULAs). On your regular Microsoft Azure invoices, you can see charges for Oracle Database@Azure alongside charges for your other Azure Marketplace services.
+## Compliance
+
+Oracle Database@Azure is an Oracle Cloud database service that runs Oracle Database workloads in a customer's Azure environment. Oracle Database@Azure offers various Oracle Database services through the customer's Microsoft Azure environment. This service allows customers to monitor database metrics, audit logs, events, logging data, and telemetry natively in Azure. It runs on infrastructure managed by Oracle's Cloud Infrastructure operations team, which performs software patching, infrastructure updates, and other operations through a connection to Oracle Cloud.
+All infrastructure for Oracle Database@Azure is co-located in Azure's physical data centers and uses Azure Virtual Network for networking, managed within the Azure environment. Federated identity and access management for Oracle Database@Azure is provided by Microsoft Entra ID.
+
+For detailed information on the compliance certifications, please visit the [Microsoft Services Trust Portal](https://servicetrust.microsoft.com/) and the [Oracle compliance website](https://docs.oracle.com/en-us/iaas/Content/multicloud/compliance.htm). If you have further questions about Oracle Database@Azure compliance, please reach out to your account team or get information through [Oracle and Microsoft support for Oracle Database@Azure](https://docs.oracle.com/en-us/iaas/Content/multicloud/oaahelp.htm).
+
+## Available regions
+
+Oracle Database@Azure is available in the following locations. Oracle Database@Azure infrastructure resources must be provisioned in the Azure regions listed.
+
+|Azure region|Oracle Exadata Database@Azure|Oracle Autonomous Database@Azure|
+|-|:-:|:--:|
+|East US |&check; | &check;|
+|Germany West Central | &check;|&check; |
+|France Central |&check; | |
+|UK South |&check; |&check; |
+|Canada Central |&check; |&check; |
+|Australia East |&check; |&check; |
+
+## Oracle Support scope and contact information
+
+Oracle Support is your first line of support for all Oracle Database@Azure issues. Oracle Support can help you with the following types of Oracle Database@Azure issues:
+
+- Database connection issues (Oracle TNS)
+- Oracle Database performance issues
+- Oracle Database error resolution
+- Networking issues related to communications with the OCI tenancy associated with the service
+- Quota (limits) increases to receive more capacity
+- Scaling to add more compute and storage capacity to Oracle Database@Azure
+- New generation hardware upgrades
+- Billing issues related to Oracle Database@Azure
+
+If you contact Oracle Support, be sure to tell your Oracle Support agent that your issue is related to Oracle Database@Azure. Support requests for this service are handled by a support team that specializes in these deployments. A member of this specialized team contacts you directly.
+
+1. Call **1-800-223-1711.** If you're outside of the United States, visit [Oracle Support Contacts Global Directory](https://www.oracle.com/support/contact.html) to find contact information for your country or region.
+2. Choose option "2" to open a new Service Request (SR).
+3. Choose option "4" for "unsure".
+4. Enter "#" each time you're asked for your CSI number. At the third attempt, your call is directed to an Oracle Support agent.
+5. Let the agent know that you have an issue with your multicloud system, and the name of the product (for example, Oracle Exadata Database@Azure or Oracle Autonomous Database@Azure). An internal Service Request is opened on your behalf and a support engineer contacts you directly.
+
+You can also submit a question to the Oracle Database@Azure forum in Oracle's [Cloud Customer Connect](https://community.oracle.com/customerconnect/categories/oracle-cloud-infrastructure-and-platform) community. This option is available to all customers.
+
+## Azure Support scope and contact information
+
+Azure provides support for the following, collaborating with OCI as needed:
+
+- Virtual networking issues including those involving network address translation (NAT), firewalls, DNS and traffic management, and delegated Azure subnets.
+- Bastion and virtual machine (VM) issues including database host connection, software installation, latency, and host performance.
+- VM metrics, database logs, database events.
+
+See [Contact Microsoft Azure Support](https://support.microsoft.com/topic/contact-microsoft-azure-support-2315e669-8b1f-493b-5fb1-d88a8736ffe4) in the Azure documentation for information on Azure support. For SLA information about the service offering, please refer to the [Oracle PaaS and IaaS Public Cloud Services Pillar Document](https://www.oracle.com/contracts/docs/paas_iaas_pub_cld_srvs_pillar_4021422.pdf).
+ ## Next steps - [Onboard with Oracle Database@Azure](onboard-oracle-database.md) - [Provision and manage Oracle Database@Azure](provision-oracle-database.md)
oracle Oracle Database Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/oracle-database-regions.md
The following table lists Azure regions and corresponding OCI regions that suppo
| Southeast Asia | Singapore (Singapore) | ✓ | ✓ | | Japan East | Japan East (Tokyo) | ✓ | |
-## Brazil (APAC)
+## Brazil
+ | Azure region | OCI region | Oracle Exadata Database@Azure | Oracle Autonomous Database@Azure | |-|--|-|-| | Brazil South | Brazil Southeast (Vinhedo) | Γ£ô | |
reliability Migrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-sql-database.md
To create a geo-replica of the database:
## Disable zone-redundancy
-To disable zone-redundancy for a single database or an elastic pool, you can use the portal or ARM API.
+To disable zone-redundancy for a single database or an elastic pool, you can use the portal, ARM API, PowerShell, or CLI.
-To disable zone-redundancy for Hyperscale service tier, you can reverse the steps documented in [Redeployment (Hyperscale)](#redeployment-hyperscale).
+### Disable zone-redundancy for a single database
-# [Elastic pool](#tab/pool)
-**To disable zone-redundancy with Azure portal:**
+# [Portal](#tab/portal)
-1. Go to the [Azure portal](https://portal.azure.com) to find and select the elastic pool that you no longer want to be zone-redundant.
+1. Go to the [Azure portal](https://portal.azure.com) to find and select the database that you no longer want to be zone-redundant.
1. Select **Settings**, and then select **Configure**.
-1. Select **No** for **Would you like to make this elastic pool zone redundant?**.
+1. Select **No** for **Would you like to make this database zone redundant?**
1. Select **Save**.
-**To disable zone-redundancy with PowerShell:**
+# [PowerShell](#tab/powershell)
```powershell
-Set-AzSqlElasticpool -ResourceGroupName "RSETLEM-AzureSQLDB" -ServerName "rs-az-testserver1" -ElasticPoolName "testep10" -ZoneRedundant:$false
+Set-AzSqlDatabase -ResourceGroupName "<Resource-Group-Name>" -DatabaseName "<Database-Name>" -ServerName "<Server-Name>" -ZoneRedundant:$false
```
-**To disable zone-redundancy with Azure CLI:**
+# [CLI](#tab/cli)
```azurecli
-az sql elastic-pool update --resource-group "RSETLEM-AzureSQLDB" --server "rs-az-testserver1" --name "testep10" --zone-redundant false
+az sql db update --resource-group "RSETLEM-AzureSQLDB" --server "rs-az-testserver1" --name "TestDB1" --zone-redundant false
```
-**To disable zone-redundancy with ARM,** see [Databases - Create Or Update in ARM](/rest/api/sql/elastic-pools/create-or-update?tabs=HTTP) and use the `properties.zoneRedundant` property.
+# [ARM](#tab/arm)
+
+See [Databases - Create Or Update in ARM](/rest/api/sql/2022-05-01-preview/databases/create-or-update?tabs=HTTP) and use the `properties.zoneRedundant` property.
-# [Single database](#tab/single)
+
+### Disable zone-redundancy for an elastic pool
-**To disable zone-redundancy with Azure portal:**
+# [Portal](#tab/portal)
-1. Go to the [Azure portal](https://portal.azure.com) to find and select the database that you no longer want to be zone-redundant.
+1. Go to the [Azure portal](https://portal.azure.com) to find and select the elastic pool that you no longer want to be zone-redundant.
1. Select **Settings**, and then select **Configure**.
-1. Select **No** for **Would you like to make this database zone redundant?**
+1. Select **No** for **Would you like to make this elastic pool zone redundant?**.
1. Select **Save**.
-**To disable zone-redundancy with PowerShell:**
+# [PowerShell](#tab/powershell)
```powershell
-set-azsqlDatabase -ResourceGroupName "RSETLEM-AzureSQLDB" -DatabaseName "TestDB1" -ServerName "rs-az-testserver1" -ZoneRedundant:$false
+Set-AzSqlElasticpool -ResourceGroupName "<Resource-Group-Name>" -ServerName "<Server-Name>" -ElasticPoolName "<Elastic-Pool-Name>" -ZoneRedundant:$false
```
-**To disable zone-redundancy with Azure CLI:**
+# [CLI](#tab/cli)
```azurecli
-az sql db update --resource-group "RSETLEM-AzureSQLDB" --server "rs-az-testserver1" --name "TestDB1" --zone-redundant false
+az sql elastic-pool update --resource-group "RSETLEM-AzureSQLDB" --server "rs-az-testserver1" --name "testep10" --zone-redundant false
```
-**To disable zone-redundancy with ARM,** see [Databases - Create Or Update in ARM](/rest/api/sql/2022-05-01-preview/databases/create-or-update?tabs=HTTP) and use the `properties.zoneRedundant` property.
+
+# [ARM](#tab/arm)
+
+See [Databases - Create Or Update in ARM](/rest/api/sql/elastic-pools/create-or-update?tabs=HTTP) and use the `properties.zoneRedundant` property.
+++
+To disable zone-redundancy for Hyperscale service tier, you can reverse the steps documented in [Redeployment (Hyperscale)](#redeployment-hyperscale).
+ ## Next steps
resource-mover Tutorial Move Region Encrypted Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-encrypted-virtual-machines.md
Now that you've prepared the resources prepared, you can initiate the move.
- Resource Mover re-creates other resources by using the prepared ARM templates. There's usually no downtime. - After you've moved the resources, their status changes to *Commit move pending*.
- :::image type="content" source="./media/tutorial-move-region-encrypted-virtual-machines/resources-commit-move-pending.png" alt-text="Screenshot of a list of resources with a 'Commit move pending' status." lightbox="./media/tutorial-move-region-encrypted-virtual-machines/resources-commit-move-pending.png" :::
- ## Discard or commit the move
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
Previously updated : 10/29/2024 Last updated : 11/05/2024
In the SAP workload documentation space, you can find the following areas:
- **Azure Monitor for SAP solutions**: Microsoft developed monitoring solutions specifically for SAP supported OS and DBMS, as well as S/4HANA and NetWeaver. This section documents the deployment and usage of the service.

## Change Log
+- November 5, 2024: Add missing step to start HANA in [High availability of SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md).
- November 1, 2024: Adding HANA high-availability hook ChkSrv for [dying indexserver for RHEL based cluster setups](./sap-hana-high-availability-rhel.md#implement-sap-hana-system-replication-hooks).
- October 29, 2024: Some changes on disk caching and smaller updates in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md), plus fixing some typos in HANA storage configuration documents.
- October 28, 2024: Added information on RedHat support and the configuration of Azure fence agents for VMs in the Azure Government cloud to the document [Set up Pacemaker on Red Hat Enterprise Linux in Azure](./high-availability-guide-rhel-pacemaker.md).
sap Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-rhel.md
Previously updated : 10/16/2024 Last updated : 11/05/2024
The steps in this section use the following prefixes:
hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2 ```
+1. **[2]** Start HANA.
+
+ Run the following command as <hanasid\>adm to start HANA:
+
+ ```bash
+ sapcontrol -nr 03 -function StartSystem
+ ```
+ 1. **[1]** Check replication status. Check the replication status and wait until all databases are in sync. If the status remains UNKNOWN, check your firewall settings.
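As an illustration of this check (not part of the original article), the replication state can be queried on the primary node; the SID `HN1` and instance number `03` follow the earlier commands and should be adjusted for your landscape.

```bash
# Run as <hanasid>adm on the primary node; SID HN1 and instance number 03 are assumed from the commands above.
hdbnsutil -sr_state

# The Python support script reports per-database sync status; the path varies by installation.
python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py
```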
sentinel Connect Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-data-sources.md
Title: Microsoft Sentinel data connectors
description: Learn about supported data connectors, like Microsoft Defender XDR (formerly Microsoft 365 Defender), Microsoft 365 and Office 365, Microsoft Entra ID, ATP, and Defender for Cloud Apps to Microsoft Sentinel. Previously updated : 03/02/2024 Last updated : 11/06/2024 appliesto: - Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal -- #Customer intent: As a security engineer, I want to use data connectors to integrate various data sources into Microsoft Sentinel so that I can enhance threat detection and response capabilities.- # Microsoft Sentinel data connectors
To add more data connectors, install the solution associated with the data conne
## REST API integration for data connectors
-Many security technologies provide a set of APIs for retrieving log files. Some data sources can use those APIs to connect to Microsoft Sentinel.
-
-Data connectors that use APIs either integrate from the provider side or integrate using Azure Functions, as described in the following sections.
-
-### Integration on the provider side
-
-An API integration built by the provider connects with the provider data sources and pushes data into Microsoft Sentinel custom log tables by using the Azure Monitor Data Collector API. For more information, see [Send log data to Azure Monitor by using the HTTP Data Collector API](/azure/azure-monitor/logs/data-collector-api?branch=main&tabs=powershell).
-
-To learn about REST API integration, read your provider documentation and [Connect your data source to Microsoft Sentinel's REST-API to ingest data](connect-rest-api-template.md).
-
-### Integration using Azure Functions
-
-Integrations that use Azure Functions to connect with a provider API first format the data, and then send it to Microsoft Sentinel custom log tables using the Azure Monitor Data Collector API.
+Many security solutions provide a set of APIs for retrieving log files and other security data from their product or service. Those APIs connect to Microsoft Sentinel with one of the following methods:
+- The data source APIs are configured with the [Codeless Connector Platform](create-codeless-connector.md).
+- The data connector uses the Log Ingestion API for Azure Monitor as part of an Azure Function or Logic App.
-For more information, see:
-- [Send log data to Azure Monitor by using the HTTP Data Collector API](/azure/azure-monitor/logs/data-collector-api?branch=main&tabs=powershell)
+For more information about connecting with Azure Functions, see the following articles:
- [Use Azure Functions to connect your data source to Microsoft Sentinel](connect-azure-functions-template.md) - [Azure Functions documentation](../azure-functions/index.yml)
+- [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/)
-Integrations that use Azure Functions might have extra data ingestion costs, because you host Azure Functions in your Azure organization. Learn more about [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/).
+For more information about connecting with Logic Apps, see [Connect with Logic Apps](create-custom-connector.md#connect-with-logic-apps).
## Agent-based integration for data connectors
sentinel Create Custom Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-custom-connector.md
Title: Resources for creating Microsoft Sentinel custom connectors
description: Learn about available resources for creating custom connectors for Microsoft Sentinel. Methods include the Log Analytics API, Logstash, Logic Apps, PowerShell, and Azure Functions. Previously updated : 10/01/2024 Last updated : 11/06/2024 #Customer intent: As a security engineer, I want to know which Microsoft Sentinel custom data connector would be most appropriate to build for ingesting data from sources with no out-of-the-box solution.
The following table compares essential details about each method for creating cu
|**[Azure Monitor Agent](#connect-with-the-azure-monitor-agent)** <br>Best for collecting files from on-premises and IaaS sources | File collection, data transformation | No | Low | |**[Logstash](#connect-with-logstash)** <br>Best for on-premises and IaaS sources, any source for which a plugin is available, and organizations already familiar with Logstash | Supports all capabilities of the Azure Monitor Agent | No; requires a VM or VM cluster to run | Low; supports many scenarios with plugins | |**[Logic Apps](#connect-with-logic-apps)** <br>High cost; avoid for high-volume data <br>Best for low-volume cloud sources | Codeless programming allows for limited flexibility, without support for implementing algorithms.<br><br> If no available action already supports your requirements, creating a custom action may add complexity. | Yes | Low; simple, codeless development |
-|**[PowerShell](#connect-with-powershell)** <br>Best for prototyping and periodic file uploads | Direct support for file collection. <br><br>PowerShell can be used to collect more sources, but will require coding and configuring the script as a service. |No | Low |
-|**[Log Analytics API](#connect-with-the-log-analytics-api)** <br>Best for ISVs implementing integration, and for unique collection requirements | Supports all capabilities available with the code. | Depends on the implementation | High |
+|**[Log Ingestion API in Azure Monitor](#connect-with-the-log-ingestion-api)** <br>Best for ISVs implementing integration, and for unique collection requirements | Supports all capabilities available with the code. | Depends on the implementation | High |
|**[Azure Functions](#connect-with-azure-functions)** <br>Best for high-volume cloud sources, and for unique collection requirements | Supports all capabilities available with the code. | Yes | High; requires programming knowledge | > [!TIP] > For comparisons of using Logic Apps and Azure Functions for the same connector, see: >
-> - [Ingest Fastly Web Application Firewall logs into Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/ingest-fastly-web-application-firewall-logs-into-azure-sentinel/ba-p/1238804)
+> - [Ingest Fastly Web Application Firewall logs into Microsoft Sentinel](https://techcommunity.microsoft.com/blog/microsoftsentinelblog/ingest-fastly-web-application-firewall-logs-into-azure-sentinel/1238804)
> - Office 365 (Microsoft Sentinel GitHub community): [Logic App connector](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Get-O365Data) | [Azure Function connector](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/O365%20Data) > ## Connect with the Codeless Connector Platform
-The Codeless Connector Platform (CCP) provides a configuration file that can be used by both customers and partners, and then deployed to your own workspace, or as a solution to Microsoft Sentinel's solution's gallery.
+The Codeless Connector Platform (CCP) provides a configuration file that can be used by both customers and partners, and then deployed to your own workspace, or as a solution to Microsoft Sentinel's content hub.
Connectors created using the CCP are fully SaaS, without any requirements for service installations, and also include health monitoring and full support from Microsoft Sentinel.
With the Microsoft Sentinel Logstash Output plugin, you can use any Logstash inp
For examples of using Logstash as a custom connector, see: -- [Hunting for Capital One Breach TTPs in AWS logs using Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/hunting-for-capital-one-breach-ttps-in-aws-logs-using-azure/ba-p/1019767) (blog)
+- [Hunting for Capital One Breach TTPs in AWS logs using Microsoft Sentinel](https://techcommunity.microsoft.com/blog/microsoftsentinelblog/hunting-for-capital-one-breach-ttps-in-aws-logs-using-azure-sentinelpart-i/1014258) (blog)
- [Radware Microsoft Sentinel implementation guide](https://support.radware.com/ci/okcsFattach/get/1025459_3) For examples of useful Logstash plugins, see:
For examples of useful Logstash plugins, see:
- [Google_pubsub input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-google_pubsub.html) > [!TIP]
-> Logstash also enables scaled data collection using a cluster. For more information, see [Using a load-balanced Logstash VM at scale](https://techcommunity.microsoft.com/t5/azure-sentinel/scaling-up-syslog-cef-collection/ba-p/1185854).
+> Logstash also enables scaled data collection using a cluster. For more information, see [Using a load-balanced Logstash VM at scale](https://techcommunity.microsoft.com/blog/microsoftsentinelblog/scaling-up-syslog-cef-collection/1185854).
> ## Connect with Logic Apps
For examples of how you can create a custom connector for Microsoft Sentinel usi
- [Create a data pipeline with the Data Collector API](/connectors/azureloganalyticsdatacollector/) - [Palo Alto Prisma Logic App connector using a webhook](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Ingest-Prisma) (Microsoft Sentinel GitHub community)-- [Secure your Microsoft Teams calls with scheduled activation](https://techcommunity.microsoft.com/t5/azure-sentinel/secure-your-calls-monitoring-microsoft-teams-callrecords/ba-p/1574600) (blog)-- [Ingesting AlienVault OTX threat indicators into Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/ingesting-alien-vault-otx-threat-indicators-into-azure-sentinel/ba-p/1086566) (blog)
+- [Secure your Microsoft Teams calls with scheduled activation](https://techcommunity.microsoft.com/blog/microsoftsentinelblog/secure-your-calls--monitoring-microsoft-teams-callrecords-activity-logs-using-az/1574600) (blog)
+- [Ingesting AlienVault OTX threat indicators into Microsoft Sentinel](https://techcommunity.microsoft.com/blog/microsoftsentinelblog/ingesting-alien-vault-otx-threat-indicators-into-azure-sentinel/1086566) (blog)
-## Connect with PowerShell
-
-The [Upload-AzMonitorLog PowerShell script](https://www.powershellgallery.com/packages/Upload-AzMonitorLog/) enables you to use PowerShell to stream events or context information to Microsoft Sentinel from the command line. This streaming effectively creates a custom connector between your data source and Microsoft Sentinel.
-
-For example, the following script uploads a CSV file to Microsoft Sentinel:
-
-``` PowerShell
-Import-Csv .\testcsv.csv
-| .\Upload-AzMonitorLog.ps1
--WorkspaceId '69f7ec3e-cae3-458d-b4ea-6975385-6e426'--WorkspaceKey $WSKey--LogTypeName 'MyNewCSV'--AddComputerName--AdditionalDataTaggingName "MyAdditionalField"--AdditionalDataTaggingValue "Foo"
-```
-
-The [Upload-AzMonitorLog PowerShell script](https://www.powershellgallery.com/packages/Upload-AzMonitorLog/) script uses the following parameters:
-
-|Parameter |Description |
-|||
-|**WorkspaceId** | Your Microsoft Sentinel workspace ID, where you'll be storing your data. [Find your workspace ID and key](#find-your-workspace-id-and-key). |
-|**WorkspaceKey** | The primary or secondary key for the Microsoft Sentinel workspace where you'll be storing your data. [Find your workspace ID and key](#find-your-workspace-id-and-key). |
-|**LogTypeName** | The name of the custom log table where you want to store the data. A suffix of **_CL** will automatically be added to the end of your table name. |
-|**AddComputerName** | When this parameter exists, the script adds the current computer name to every log record, in a field named **Computer**. |
-|**TaggedAzureResourceId** | When this parameter exists, the script associates all uploaded log records with the specified Azure resource. <br><br>This association enables the uploaded log records for resource-context queries, and adheres to resource-centric, role-based access control. |
-|**AdditionalDataTaggingName** | When this parameter exists, the script adds another field to every log record, with the configured name, and the value that's configured for the **AdditionalDataTaggingValue** parameter. <br><br>In this case, **AdditionalDataTaggingValue** must not be empty. |
-|**AdditionalDataTaggingValue** | When this parameter exists, the script adds another field to every log record, with the configured value, and the field name configured for the **AdditionalDataTaggingName** parameter. <br><br>If the **AdditionalDataTaggingName** parameter is empty, but a value is configured, the default field name is **DataTagging**. |
--
-### Find your workspace ID and key
-
-Find the details for the **WorkspaceID** and **WorkspaceKey** parameters in Microsoft Sentinel:
-
-1. In Microsoft Sentinel, select **Settings** on the left, and then select the **Workspace settings** tab.
-
-1. Under **Get started with Log Analytics** > **1 Connect a data source**, select **Windows and Linux agents management**.
-
-1. Find your workspace ID, primary key, and secondary key on the **Windows servers** tabs.
-
-## Connect with the Log Analytics API
+## Connect with the Log Ingestion API
You can stream events to Microsoft Sentinel by using the Logs Ingestion API in Azure Monitor to call a RESTful endpoint directly. While calling a RESTful endpoint directly requires more programming, it also provides more flexibility.
-For more information, see the [Log Analytics Data collector API](/azure/azure-monitor/logs/data-collector-api), especially the following examples:
--- [C#](/azure/azure-monitor/logs/data-collector-api#sample-requests)-- [Python](/azure/azure-monitor/logs/data-collector-api#sample-requests)
+For more information, see the following articles:
+- [Log Ingestion API in Azure Monitor](/azure/azure-monitor/logs/logs-ingestion-api-overview).
+- [Sample code to send data to Azure Monitor using Logs ingestion API](/azure/azure-monitor/logs/tutorial-logs-ingestion-code).
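As a rough sketch (not taken from the linked samples), a single record can be pushed to a data collection rule (DCR) stream with `az rest`. The data collection endpoint URI, DCR immutable ID, stream name, and column names are placeholders, and a matching DCR and custom table are assumed to exist already.

```azurecli
# Illustrative sketch only: POST one record to a Logs Ingestion API stream.
# Replace <dce-uri>, <dcr-immutable-id>, and Custom-MyTable_CL with your own values.
az rest --method post \
  --url "https://<dce-uri>.ingest.monitor.azure.com/dataCollectionRules/<dcr-immutable-id>/streams/Custom-MyTable_CL?api-version=2023-01-01" \
  --resource "https://monitor.azure.com" \
  --headers "Content-Type=application/json" \
  --body '[{"TimeGenerated": "2024-11-06T00:00:00Z", "RawData": "sample event"}]'
```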
## Connect with Azure Functions
For examples of this method, see:
- [Connect your Proofpoint TAP to Microsoft Sentinel with Azure Function](./data-connectors/proofpoint-tap-using-azure-functions.md) - [Connect your Qualys VM to Microsoft Sentinel with Azure Function](data-connectors/qualys-vulnerability-management-using-azure-functions.md) - [Ingesting XML, CSV, or other formats of data](/azure/azure-monitor/logs/create-pipeline-datacollector-api#ingesting-xml-csv-or-other-formats-of-data)-- [Monitoring Zoom with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/monitoring-zoom-with-azure-sentinel/ba-p/1341516) (blog)
+- [Monitoring Zoom with Microsoft Sentinel](https://techcommunity.microsoft.com/blog/microsoftsentinelblog/monitoring-zoom-with-azure-sentinel/1341516) (blog)
- [Deploy a Function App for getting Office 365 Management API data into Microsoft Sentinel](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/O365%20Data) (Microsoft Sentinel GitHub community) ## Parse your custom connector data
Use the data ingested into Microsoft Sentinel to secure your environment with an
- [Investigate incidents](investigate-cases.md) - [Detect threats](threat-detection.md) - [Automate threat prevention](tutorial-respond-threats-playbook.md)-- [Hunt for threats](hunting.md)-
-Also, learn about one example of creating a custom connector to monitor Zoom: [Monitoring Zoom with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/monitoring-zoom-with-azure-sentinel/ba-p/1341516).
+- [Hunt for threats](hunting.md)
sentinel Sample Workspace Designs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sample-workspace-designs.md
The Contoso Corporation is a multinational business with headquarters in London.
Due to an acquisition several years ago, Contoso has two Microsoft Entra tenants: `contoso.onmicrosoft.com` and `wingtip.onmicrosoft.com`. Each tenant has its own Office 365 instance and multiple Azure subscriptions, as shown in the following image: ### Contoso compliance and regional deployment
Contoso's solution includes the following considerations:
The resulting workspace design for Contoso is illustrated in the following image: The suggested solution includes:
Fabrikam's solution includes the following considerations:
The resulting workspace design for Fabrikam is illustrated in the following image, including only key log sources for the sake of design simplicity: The suggested solution includes:
The Adventure Works solution includes the following considerations:
The resulting workspace design for Adventure Works is illustrated in the following image, including only key log sources for the sake of design simplicity: The suggested solution includes:
site-recovery Deploy Vmware Azure Replication Appliance Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-modernized.md
Title: Deploy Azure Site Recovery replication appliance - Modernized
description: This article describes how to replicate appliance for VMware disaster recovery to Azure with Azure Site Recovery - Modernized Previously updated : 04/04/2024 Last updated : 11/06/2024
>[!NOTE] > The information in this article applies to Azure Site Recovery - Modernized. For information about configuration server requirements in Classic releases, [see this article](vmware-azure-configuration-server-requirements.md).-
->[!NOTE]
+>
> Ensure you create a new and exclusive Recovery Services vault for setting up the ASR replication appliance. Don't use an existing vault.
+> [!IMPORTANT]
+> Microsoft recommends that you use roles with the fewest permissions. This helps improve security for your organization. Global Administrator is a highly privileged role that should be limited to emergency scenarios when you can't use an existing role.
+ You deploy an on-premises replication appliance when you use [Azure Site Recovery](site-recovery-overview.md) for disaster recovery of VMware VMs or physical servers to Azure. - The replication appliance coordinates communications between on-premises VMware and Azure. It also manages data replication.
site-recovery Site Recovery Monitor And Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-monitor-and-troubleshoot.md
To test the working of the alerts for a test VM using Azure Site Recovery, you c
You can view the alerts settings under **Recovery Services Vault** > **Settings** > **Properties** > **Monitoring Settings**. The built-in alerts for Site Recovery are enabled by default, but you can disable either or both categories of Site Recovery alerts. Select the checkbox to opt out of classic alerts for Site Recovery and only use built-in alerts. Otherwise, duplicate alerts are generated for both the classic and built-in categories.

### Manage Azure Site Recovery alerts in Business Continuity Center
site-recovery Vmware Azure Set Up Replication Tutorial Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-set-up-replication-tutorial-modernized.md
Title: Set up VMware VM disaster recovery to Azure with Azure Site Recovery - Mo
description: Learn how to set up disaster recovery to Azure for on-premises VMware VMs with Azure Site Recovery - Modernized. Previously updated : 05/23/2024 Last updated : 11/06/2024 # Set up disaster recovery to Azure for on-premises VMware VMs - Modernized
+> [!IMPORTANT]
+> Microsoft recommends that you use roles with the fewest permissions. This helps improve security for your organization. Global Administrator is a highly privileged role that should be limited to emergency scenarios when you can't use an existing role.
+ This article describes how to enable replication for on-premises VMware VMs, for disaster recovery to Azure using the Modernized VMware/Physical machine protection experience. For information on how to set up disaster recovery in Azure Site Recovery Classic releases, see [the tutorial](vmware-azure-tutorial.md).
storage Monitor Blob Storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage-reference.md
Title: Monitoring data reference for Azure Blob Storage description: This article contains important reference material you need when you monitor Azure Blob Storage. Previously updated : 08/27/2024 Last updated : 11/05/2024
For the metrics supporting dimensions, you need to specify the dimension value t
The following sections describe the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
+> [!NOTE]
+> The field names listed in each section below are valid when resource logs are sent to Azure storage or to an event hub. When the logs are sent to a Log Analytics workspace, the field names might be different.
+ ### Fields that describe the operation ```json { "time": "2019-02-28T19:10:21.2123117Z",
- "resourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/mytestrp/providers/Microsoft.Storage/storageAccounts/testaccount1/blobServices/default",
+ "resourceId": "/subscriptions/00001111-aaaa-2222-bbbb-3333cccc4444/resourceGroups/mytestrp/providers/Microsoft.Storage/storageAccounts/testaccount1/blobServices/default",
"category": "StorageWrite", "operationName": "PutBlob", "operationVersion": "2017-04-17",
The following sections describe the properties for Azure Storage resource logs w
"statusText": "Success", "durationMs": 5, "callerIpAddress": "192.168.0.1:11111",
- "correlationId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "correlationId": "aaaa0000-bb11-2222-33cc-444444dddddd",
"location": "uswestcentral", "uri": "http://mystorageaccount.blob.core.windows.net/cont1/blobname?timeout=10" }
The following sections describe the properties for Azure Storage resource logs w
"authorization": [ { "action": "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
- "denyAssignmentId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "denyAssignmentId": "aaaa0000-bb11-2222-33cc-444444dddddd",
"principals": [ {
- "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "id": "aaaaaaaa-bbbb-cccc-1111-222222222222",
"type": "User" } ], "reason": "Policy", "result": "Granted",
- "roleAssignmentId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "roleDefinitionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "roleAssignmentId": "00aa00aa-bb11-cc22-dd33-44ee44ee44ee",
+ "roleDefinitionId": "11bb11bb-cc22-dd33-ee44-55ff55ff55ff",
"type": "RBAC" } ],
The following sections describe the properties for Azure Storage resource logs w
"objectKey": "/samplestorageaccount/samplecontainer/sampleblob.png" }, "requester": {
- "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "appId": "00001111-aaaa-2222-bbbb-3333cccc4444",
"audience": "https://storage.azure.com/",
- "objectId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "tokenIssuer": "https://sts.windows.net/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ "objectId": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
+ "tenantId": "aaaabbbb-0000-cccc-1111-dddd2222eeee",
+ "tokenIssuer": "https://sts.windows.net/2c2c2c2c-3333-dddd-4444-5e5e5e5e5e5e",
+ "uniqueName": "someone@example.com"
},
+ "delegatedResource": {
+ "tenantId": "aaaabbbb-0000-cccc-1111-dddd2222eeee",
+ "resourceId": "a0a0a0a0-bbbb-cccc-dddd-e1e1e1e1e1e1",
+ "objectId": "a0a0a0a0-bbbb-cccc-dddd-e1e1e1e1e1e1"
+ },
"type": "OAuth" }, }
The following sections describe the properties for Azure Storage resource logs w
```json { "properties": {
- "accountName": "testaccount1",
- "requestUrl": "https://testaccount1.blob.core.windows.net:443/upload?restype=container&comp=list&prefix=&delimiter=/&marker=&maxresults=30&include=metadata&_=1551405598426",
+ "accountName": "contoso",
+ "requestUrl": "https://contoso.blob.core.windows.net:443/upload?restype=container&comp=list&prefix=&delimiter=/&marker=&maxresults=30&include=metadata&_=1551405598426",
"userAgentHeader": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36 Edge/17.17134",
- "referrerHeader": "blob:https://portal.azure.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "referrerHeader": "blob:https://portal.azure.com/00001111-aaaa-2222-bbbb-3333cccc4444",
"clientRequestId": "", "etag": "", "serverLatencyMs": 63,
The following sections describe the properties for Azure Storage resource logs w
"smbFileId" : " 0x9223442405598953", "smbSessionID" : "0x8530280128000049", "smbCommandMajor" : "0x6",
- "smbCommandMinor" : "DirectoryCloseAndDelete"
+ "smbCommandMinor" : "DirectoryCloseAndDelete",
+ "downloadRange" : "bytes=4-4194307",
+ "accessTier": "None",
+ "sourceAccessTier": "Hot",
+ "rehydratePriority":"High"
} } ```
storage Storage Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-private-endpoints.md
Previously updated : 06/22/2023 Last updated : 11/05/2024
You can use [private endpoints](../../private-link/private-endpoint-overview.md)
Using private endpoints for your storage account enables you to: -- Secure your storage account by configuring the storage firewall to block all connections on the public endpoint for the storage service.
+- Secure your storage account by using a private link. You can manually configure the storage firewall to block connections on the public endpoint of the storage service. Creating a private link does not automatically block connections on the public endpoint (a minimal example of this manual step follows this list).
- Increase security for the virtual network (VNet), by enabling you to block exfiltration of data from the VNet. - Securely connect to storage accounts from on-premises networks that connect to the VNet using [VPN](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../../expressroute/expressroute-locations.md) with private-peering.
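Because creating a private endpoint doesn't block the public endpoint by itself, the manual step mentioned in the first bullet can be sketched as follows; this example isn't part of the original article, and the account and resource group names are placeholders.

```azurecli
# Sketch only: block all public network access once the private endpoint is in place.
# Replace the <...> placeholders with your own values.
az storage account update \
  --name <storage-account-name> \
  --resource-group <resource-group-name> \
  --public-network-access Disabled
```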
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
mssparkutils.fs.append("file path", "content to append", True) # Set the last parameter as True to create the file if it does not exist
``` ::: zone-end
+> [!NOTE]
+> ```mssparkutils.fs.append()``` and ```mssparkutils.fs.put()``` do not support concurrent writing to the same file due to lack of atomicity guarantees.
+ ### Delete file or directory Removes a file or a directory.
virtual-desktop Create Fslogix Profile Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-fslogix-profile-container.md
Title: Configure FSLogix profile container on Azure Virtual Desktop with Azure N
description: Learn how to configure FSLogix profile container on Azure Virtual Desktop with Azure NetApp Files. Previously updated : 07/01/2020 Last updated : 11/05/2024
The instructions in this guide are specifically for Azure Virtual Desktop users.
## Considerations
-* To optimize performance and scalability, the number of _concurrent_ users accessing FSLogix profile containers stored on a single Azure NetApp Files regular volume should be limited to 3,000. Having more than 3,000 _concurrent_ users on a single volume causes significant increased latency on the volume. If your scenario requires more than 3,000 _concurrent_ users, divide users across multiple regular volumes or use a large volume. A single large volume can store FSLogix profiles for up to 50,000 _concurrent_ users. For more information on large volumes, see [Requirements and considerations for large volumes](../azure-netapp-files/large-volumes-requirements-considerations.md).
+* To optimize performance and scalability, the number of concurrent user connections accessing FSLogix profile containers stored on a single Azure NetApp Files regular volume should be limited to 3,000.
+ A user connection is defined as either:
+
+ - a connection to an [FSLogix profile container](/fslogix/concepts-container-types#profile-container)
+ - a connection to an [FSLogix ODFC container](/fslogix/concepts-container-types#odfc-container)
+
+ If you're utilizing both FSLogix profiles and FSLogix ODFC containers, note that a single regular volume should contain no more than 3,000 FSLogix profiles _or_ FSLogix ODFC containers (combined). Having more than 3,000 concurrent user connections on a single volume causes significant increased latency on the volume. If your scenario requires more than 3,000 concurrent user connections, divide users across multiple regular volumes or use a large volume.
+ A single large volume can accommodate up to 50,000 concurrent user connections for FSLogix containers. For more information on large volumes, see [Requirements and considerations for large volumes](../azure-netapp-files/large-volumes-requirements-considerations.md).
+ If you're utilizing both FSLogix profiles and FSLogix ODFC containers, note that a single large volume should contain no more than 50,000 FSLogix profiles _or_ FSLogix ODFC containers (combined).
* To protect your FSLogix profile containers, consider using [Azure NetApp Files snapshots](../azure-netapp-files/snapshots-introduction.md) and [Azure NetApp Files backup](../azure-netapp-files/backup-introduction.md).
virtual-network-manager Concept Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-limitations.md
Previously updated : 10/30/2024 Last updated : 11/06/2024 #CustomerIntent: As a network admin, I want to understand the limitations in Azure Virtual Network Manager so that I can properly deploy it in my environment.
This article provides an overview of the current limitations when you're using [
* Azure Virtual Network Manager policies don't support the standard evaluation cycle for policy compliance. For more information, see [Evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers). * The move of the subscription where the Azure Virtual Network Manager instance exists to another tenant is not supported.
-## Limitations for connected groups
+## Limitations and limits for peering and connected groups
-* A connected group can have up to 250 virtual networks. Virtual networks in a [mesh topology](concept-connectivity-configuration.md#mesh-network-topology) are in a [connected group](concept-connectivity-configuration.md#connected-group), so a mesh configuration has a limit of 250 virtual networks.
-* BareMetal Infastructures are not supported. This includes the following BareMetal Infrastructures:
+* A virtual network can be peered with up to 1000 virtual networks using Azure Virtual Network Manager's hub-and-spoke topology. This means that you can peer up to 1000 spoke virtual networks to a hub virtual network.
+* By default, a [connected group](concept-connectivity-configuration.md) can have up to 250 virtual networks. This is a soft limit and can be increased up to 1000 virtual networks by submitting a request using [this form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbRzeHatNxLHpJshECDnD5QidURTM2OERMQlYxWkE1UTNBMlRNUkJUNkhDTy4u&route=shorturl).
+* By default, a virtual network can be part of up to two connected groups. For example, a virtual network:
+ * Can be part of two mesh configurations.
+ * Can be part of a mesh topology and a network group that has direct connectivity enabled in a hub-and-spoke topology.
+ * Can be part of two network groups with direct connectivity enabled in the same or a different hub-and-spoke configuration.
+* The following BareMetal Infrastructures are not supported:
* [Azure NetApp Files](../azure-netapp-files/index.yml) * [Azure VMware Solution](../azure-vmware/index.yml) * [Nutanix Cloud Clusters on Azure](../baremetal-infrastructure/workloads/nc2-on-azure/about-nc2-on-azure.md) * [Oracle Database@Azure](../oracle/oracle-db/oracle-database-what-is-new.md) * [Azure Payment HSM](/azure/payment-hsm/solution-design)
-* Maximum number of private endpoints per connected group is 1000.
-* You can have network groups with or without [direct connectivity](concept-connectivity-configuration.md#direct-connectivity) enabled in the same [hub-and-spoke configuration](concept-connectivity-configuration.md#hub-and-spoke-topology), as long as the total number of virtual networks peered to the hub doesn't exceed 500 virtual networks.
- * If the network group peered to the hub *has direct connectivity enabled*, these virtual networks are in a connected group, so the network group has a limit of 250 virtual networks.
- * If the network group peered to the hub *doesn't have direct connectivity enabled*, the network group can have up to the total limit for a hub-and-spoke topology.
-* A virtual network can be part of up to two connected groups. For example, a virtual network:
-
- * Can be part of two mesh configurations.
- * Can be part of a mesh topology and a network group that has direct connectivity enabled in a hub-and-spoke topology.
- * Can be part of two network groups with direct connectivity enabled in the same or a different hub-and-spoke configuration.
-
+* The maximum number of private endpoints per connected group is 1000.
* You can have virtual networks with overlapping IP spaces in the same connected group. However, communication to an overlapped IP address is dropped. ## Limitations for security admin rules
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
You create custom routes by either creating [user-defined](#user-defined) routes
### User-defined
-To customize your traffic routes, you shouldn't modify the default routes but you should create custom, or user-defined(static) routes which override Azure's default system routes. In Azure, you create a route table, then associate the route table to zero or more virtual network subnets. Each subnet can have zero or one route table associated to it. To learn about the maximum number of routes you can add to a route table and the maximum number of user-defined route tables you can create per Azure subscription, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits). When you create a route table and associate it to a subnet, the table's routes are combined with the subnet's default routes. If there are conflicting route assignments, user-defined routes override the default routes.
+To customize your traffic routes, you shouldn't modify the default routes but you should create custom, or user-defined(static) routes which override Azure's default system routes. In Azure, you create a route table, then associate the route table to zero or more virtual network subnets. Each subnet can have zero or one route table associated to it. To learn about the maximum number of routes you can add to a route table and the maximum number of user-defined route tables you can create per Azure subscription, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits).
+
+By default, a route table can contain up to 400 user-defined routes (UDRs). With Azure Virtual Network Manager's [routing configuration](../virtual-network-manager/concept-user-defined-route.md), this limit can be expanded to 1000 UDRs per route table. This increased limit supports more advanced routing setups, such as directing traffic from on-premises data centers through a firewall to each spoke virtual network in a hub-and-spoke topology when you have a higher number of spoke virtual networks.
+
+When you create a route table and associate it to a subnet, the table's routes are combined with the subnet's default routes. If there are conflicting route assignments, user-defined routes override the default routes.
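As an illustrative sketch (not part of the original article), the following sequence creates a route table, adds a single user-defined route that sends all traffic to a network virtual appliance, and associates the table with a subnet; the resource names and the next-hop IP address are placeholders.

```azurecli
# Sketch only: create a route table, add a 0.0.0.0/0 UDR pointing at an NVA, and attach it to a subnet.
# Resource names and the next-hop IP address are placeholders.
az network route-table create --resource-group MyResourceGroup --name MyRouteTable

az network route-table route create \
  --resource-group MyResourceGroup \
  --route-table-name MyRouteTable \
  --name ToNva \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name MySubnet \
  --route-table MyRouteTable
```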
You can specify the following next hop types when creating a user-defined route:
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
The following features are currently in gated public preview. After working with
|1|ExpressRoute connectivity with Azure Storage and the 0.0.0.0/0 route|If you have configured a 0.0.0.0/0 route statically in a virtual hub route table or dynamically via a network virtual appliance for traffic inspection, that traffic will bypass inspection when destined for Azure Storage and is in the same region as the ExpressRoute gateway in the virtual hub. | | As a workaround, you can either use [Private Link](../private-link/private-link-overview.md) to access Azure Storage or put the Azure Storage service in a different region than the virtual hub.| |2| Default routes (0/0) won't propagate inter-hub |0/0 routes won't propagate between two virtual WAN hubs. | June 2020 | None. Note: While the Virtual WAN team has fixed the issue, wherein static routes defined in the static route section of the VNet peering page propagate to route tables listed in "propagate to route tables" or the labels listed in "propagate to route tables" on the VNet connection page, default routes (0/0) won't propagate inter-hub. | |3| Two ExpressRoute circuits in the same peering location connected to multiple hubs |If you have two ExpressRoute circuits in the same peering location, and both of these circuits are connected to multiple virtual hubs in the same Virtual WAN, then connectivity to your Azure resources might be impacted. | July 2023 | Make sure each virtual hub has at least 1 virtual network connected to it. This ensures connectivity to your Azure resources. The Virtual WAN team is also working on a fix for this issue. |
-|4| ExpressRoute ECMP Support | Today, ExpressRoute ECMP is not enabled by default for virtual hub deployments. When multiple ExpressRoute circuits are connected to a Virtual WAN hub, ECMP enables traffic from spoke virtual networks to on-premises over ExpressRoute to be distributed across all ExpressRoute circuits advertising the same on-premises routes. | | To enable ECMP for your Virtual WAN hub, please reach out to virtual-wan-ecmp@microsoft.com. |
+|4| ExpressRoute ECMP Support | Today, ExpressRoute ECMP is not enabled by default for virtual hub deployments. When multiple ExpressRoute circuits are connected to a Virtual WAN hub, ECMP enables traffic from spoke virtual networks to on-premises over ExpressRoute to be distributed across all ExpressRoute circuits advertising the same on-premises routes. | | To enable ECMP for your Virtual WAN hub, please reach out to virtual-wan-ecmp@microsoft.com after January 1, 2025. |
| 5| Virtual WAN hub address prefixes are not advertised to other Virtual WAN hubs in the same Virtual WAN.| You can't leverage Virtual WAN hub-to-hub full mesh routing capabilities to provide connectivity between NVA orchestration software deployed in a VNET or on-premises connected to a Virtual WAN hub to an Integrated NVA or SaaS solution deployed in a different Virtual WAN hub. | | If your NVA or SaaS orchestrator is deployed on-premises, connect that on-premises site to all Virtual WAN hubs with NVAs or SaaS solutions deployed in them. If your orchestrator is in an Azure VNET, manage NVAs or SaaS solutions using public IP. Support for Azure VNET orchestrators is on the roadmap.| |6| Configuring routing intent to route between connectivity and firewall NVAs in the same Virtual WAN Hub| Virtual WAN routing intent private routing policy does not support routing between an SD-WAN NVA and a Firewall NVA (or SaaS solution) deployed in the same Virtual hub.| | Deploy the connectivity and firewall integrated NVAs in two different hubs in the same Azure region. Alternatively, deploy the connectivity NVA to a spoke Virtual Network connected to your Virtual WAN Hub and leverage the [BGP peering](scenario-bgp-peering-hub.md).| | 7| BGP between the Virtual WAN hub router and NVAs deployed in the Virtual WAN hub does not come up if the ASN used for BGP peering is updated post-deployment.|Virtual Hub router expects NVA in the hub to use the ASN that was configured on the router when the NVA was first deployed. Updating the ASN associated with the NVA on the NVA resource does not properly register the new ASN with the Virtual Hub router so the router rejects BGP sessions from the NVA if the NVA OS is configured to use the new ASN. | |Delete and recreate the NVA in the Virtual WAN hub with the correct ASN.|