Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
api-management | Authentication Basic Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-basic-policy.md | |
api-management | Authentication Certificate Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-certificate-policy.md | - Use the `authentication-certificate` policy to authenticate with a backend service using a client certificate. When the certificate is [installed into API Management](./api-management-howto-mutual-certificates.md) first, identify it by its thumbprint or certificate ID (resource name). + Use the `authentication-certificate` policy to authenticate with a backend service using a client certificate. When the certificate is [installed into API Management](./api-management-howto-mutual-certificates.md) first, identify it by its thumbprint or certificate ID (resource name). + > [!CAUTION] > If the certificate references a certificate stored in Azure Key Vault, identify it using the certificate ID. When a key vault certificate is rotated, its thumbprint in API Management will change, and the policy will not resolve the new certificate if it is identified by thumbprint. +### Usage notes ++- We recommend configuring [key vault certificates](api-management-howto-mutual-certificates.md) to manage certificates used to secure access to backend services. +- If you configure a certificate password in this policy, we recommend using a [named value](api-management-howto-properties.md). ++ ## Examples ### Client certificate identified by the certificate ID |
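The row above distinguishes identifying a certificate by thumbprint versus by certificate ID. As a rough sketch (the helper function is hypothetical; the `authentication-certificate` element and its `certificate-id`/`thumbprint` attributes follow the policy reference), the two forms can be generated like this:

```python
import xml.etree.ElementTree as ET

def authentication_certificate(certificate_id=None, thumbprint=None):
    """Build an <authentication-certificate> policy element (sketch).

    Prefer certificate-id for certificates stored in Azure Key Vault: a
    rotated key vault certificate gets a new thumbprint, so a policy that
    references the old thumbprint stops resolving the certificate.
    """
    if (certificate_id is None) == (thumbprint is None):
        raise ValueError("specify exactly one of certificate_id or thumbprint")
    attrs = (
        {"certificate-id": certificate_id}
        if certificate_id
        else {"thumbprint": thumbprint}
    )
    return ET.tostring(ET.Element("authentication-certificate", attrs), encoding="unicode")

print(authentication_certificate(certificate_id="my-keyvault-cert"))
# <authentication-certificate certificate-id="my-keyvault-cert" />
```

The mutual-exclusion check mirrors the guidance in the caution note: pick one identifier, and make it the certificate ID when the certificate can rotate.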
api-management | Migrate Stv1 To Stv2 Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2-vnet.md | Title: Migrate to stv2 platform - Azure API Management - VNet injected -description: Migrate your Azure API Management instance in-place from stv1 to stv2 compute platform. Follow these migration steps if your instance is deployed (injected) in an external or internal VNet. +description: Migrate Azure API Management in-place from stv1 to stv2 platform. Follow these migration steps for instances injected in an external or internal VNet. Previously updated : 08/19/2024 Last updated : 09/16/2024 +> [!NOTE] +> **New in August 2024**: To help you migrate your VNet-injected instance, we've improved the migration options in the portal! You can now migrate your instance in-place and keep the same subnet and IP address. + For a VNet-injected instance, you have the following migration options: -* [**Option 1: Keep the same subnet**](#option-1-migrate-and-keep-same-subnet) - Migrate the instance in-place and keep the instance's existing subnet configuration. You can choose whether the API Management instance's original VIP address is preserved (recommended) or whether a new VIP address will be generated. Currently, the [Migrate to Stv2](/rest/api/apimanagement/api-management-service/migratetostv2) REST API supports migrating the instance using the same subnet configuration. +* [**Option 1: Keep the same subnet**](#option-1-migrate-and-keep-same-subnet) - Migrate the instance in-place and keep the instance's existing subnet configuration. You can choose whether the API Management instance's original VIP address is preserved (recommended) or whether a new VIP address will be generated. -* [**Option 2: Change to a new subnet**](#option-2-migrate-and-change-to-new-subnet) - Migrate your instance by specifying a different subnet in the same or a different VNet. 
After migration, optionally migrate back to the instance's original subnet. The migration process changes the VIP address(es) of the instance. After migration, you need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address(es). Currently, the **Platform migration** blade in the Azure portal supports this migration option. +* [**Option 2: Change to a new subnet**](#option-2-migrate-and-change-to-new-subnet) - Migrate your instance by specifying a different subnet in the same or a different VNet. After migration, optionally migrate back to the instance's original subnet. The migration process changes the VIP address(es) of the instance. After migration, you need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address(es). If you need to migrate a *non-VNet-injected* API Management instance hosted on the `stv1` platform, see [Migrate a non-VNet-injected API Management instance to the stv2 platform](migrate-stv1-to-stv2-no-vnet.md). API Management platform migration from `stv1` to `stv2` involves updating the un * The upgrade process involves creating a new compute in parallel to the old compute, which can take up to 45 minutes. Plan longer times for multi-region deployments and in scenarios that involve changing the subnet more than once. * The API Management status in the Azure portal will be **Updating**.-* For certain migration options, the VIP address (or addresses, for a multi-region deployment) of the instance will change. If you migrate and keep the same subnet configuration, you can choose to preserve the VIP address or a new public VIP will be generated. - [!INCLUDE [api-management-migration-no-preserve-ip](../../includes/api-management-migration-no-preserve-ip.md)] +* For certain migration options, you can choose to preserve the VIP address or a new public VIP will be generated. * For migration scenarios when a new VIP address is generated: * Azure manages the migration. 
* The gateway DNS still points to the old compute if a custom domain is in use. API Management platform migration from `stv1` to `stv2` involves updating the un * For an instance in internal VNet mode, customer manages the DNS, so the DNS entries continue to point to old compute until updated by the customer. * It's the DNS that points to either the new or the old compute and hence no downtime to the APIs. * Changes are required to your firewall rules, if any, to allow the new compute subnet to reach the backends.- * After successful migration, the old compute is automatically decommissioned after a short period. Using the **Platform migration** blade in the portal, you can enable a migration setting to retain the old gateway for 48 hours. *The 48 hour delay option is only available for VNet-injected services.* + * After successful migration, the old compute is automatically decommissioned after a short period. When changing to a new subnet using the **Platform migration** blade in the portal, you can enable a migration setting to retain the old gateway for 48 hours. *The 48 hour delay option is only available for VNet-injected services.* ## Prerequisites Other prerequisites are specific to the migration options in the following secti ## Option 1: Migrate and keep same subnet -You can migrate your API Management instance to the `stv2` platform keeping the existing subnet configuration, which simplifies your migration. Currently, you can use the Migrate to Stv2 REST API to migrate the instance using the same subnet configuration. +You can migrate your API Management instance to the `stv2` platform keeping the existing subnet configuration, which simplifies your migration. You can migrate using the **Platform migration** blade in the Azure portal or the Migrate to Stv2 REST API. 
### Prerequisites You can migrate your API Management instance to the `stv2` platform keeping the ### Public IP address options - same-subnet migration - You can choose whether the API Management instance's original VIP address is preserved (recommended) or whether a new VIP address will be generated. * **Preserve virtual IP address** - If you preserve the VIP address in a VNet in external mode, API requests can remain responsive during migration (see [Expected downtime](#expected-downtime-and-compute-retention)); for a VNet in internal mode, temporary downtime is expected. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 45 minutes. No further configuration is required after migration. With this option, the `stv1` compute is deleted permanently after the migration is complete. There is no option to retain it temporarily. + The following image shows a high-level overview of what happens when the IP address is preserved. ++ :::image type="content" source="media/migrate-stv1-to-stv2-vnet/apim-preserve-ip.gif" alt-text="Diagram of in-place migration to same subnet and preserving IP address."::: + * **New virtual IP address** - If you choose this option, API Management generates a new VIP address for your instance. API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 30 minutes. After migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address. - With this option, the `stv1` compute is retained for a period by default after migration is complete so that you can validate the migrated instance and confirm the network and DNS configuration. + With this option, the `stv1` compute is retained for a short period by default after migration is complete so that you can validate the migrated instance and confirm the network and DNS configuration. 
++ The following image shows a high-level overview of what happens when a new IP address is generated. ++ :::image type="content" source="media/migrate-stv1-to-stv2-vnet/apim-new-ip.gif" alt-text="Diagram of in-place migration to same subnet and generating new IP address."::: [!INCLUDE [api-management-migration-precreated-ip](../../includes/api-management-migration-precreated-ip.md)] When migrating a VNet-injected instance and keeping the same subnet configuratio |External | Preserve VIP | No downtime; traffic is served on a temporary IP address for up to 20 minutes during migration to the new `stv2` deployment | No retention | |External | New VIP | No downtime | Retained by default for 15 minutes to allow you to update network dependencies | |Internal | Preserve VIP | Downtime for approximately 20 minutes during migration while the existing IP address is assigned to the new `stv2` deployment. | No retention |-|Internal | New VIP | No downtime | Retained by default for 4 hours to allow you to update network dependencies | +|Internal | New VIP | No downtime | Retained by default for 15 minutes to allow you to update network dependencies; can be extended to 48 hours when using the portal | -### Migration script +### Migration steps - keep the same subnet +#### [Portal](#tab/portal) +1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance. +1. In the left menu, under **Settings**, select **Platform migration**. +1. Under **Select a migration option**, select **Keep the same subnet**. +1. Under **Choose an IP address option**, select one of the two IP address options. + > [!NOTE] + > If your VNet is in external mode, take note of the precreated public IP address for the migration process. Use this address to configure network connectivity for your migrated instance. +1. 
(For instances injected in internal mode and migrating to a new VIP) Under **Choose the scenario that aligns with your requirements**, choose one of the two options, depending on whether you want to maintain the original `stv1` compute for a period after migration. +1. Select **Verify** to run automated checks on the subnet. If problems are detected, adjust your subnet configuration and run checks again. For other network dependencies, such as DNS and firewall rules, check manually. +1. Confirm that you want to migrate, and select **Start migration**. + The status of your API Management instance changes to **Updating**. The migration process takes approximately 45 minutes to complete. When the status changes to **Online**, migration is complete. ++#### [Azure CLI](#tab/cli) ++### Migration script + > [!NOTE] > If your API Management instance is deployed in multiple regions, the REST API migrates the VNet settings for all locations of your instance using a single call. ++ ## Option 2: Migrate and change to new subnet Using the Azure portal, you can migrate your instance by specifying a different subnet in the same or a different VNet. After migration, optionally migrate back to the instance's original subnet. The following image shows a high level overview of what happens during migration [!INCLUDE [api-management-publicip-internal-vnet](../../includes/api-management-publicip-internal-vnet.md)] -### Migration steps +### Migration steps - change to a new subnet 1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance. 1. In the left menu, under **Settings**, select **Platform migration**.-1. On the **Platform migration** page, in **Step 1**, review migration requirements and prerequisites. -1. In **Step 2**, choose migration settings: - * Select a location to migrate. - * Select the **Virtual network**, **Subnet**, and optional **Public IP address** you want to migrate to. 
- - :::image type="content" source="media/migrate-stv1-to-stv2-vnet/select-location.png" alt-text="Screenshot of selecting network migration settings in the portal." lightbox="media/migrate-stv1-to-stv2-vnet/select-location.png"::: -- * Select either **Return to original subnet as soon as possible** or **Stay in the new subnet and keep stv1 compute around for 48 hours** after migration. If you choose the former, the `stv1` compute will be deleted approximately 15 minutes after migration, allowing you to proceed directly with [manual migration back to the original subnet](#optional-migrate-back-to-original-subnet) if desired. If you choose the latter, the `stv1` compute is retained for 48 hours. You can use this period to validate your network settings and connectivity. +1. Under **Select a migration option**, select **Change to a new subnet**. +1. Under **Choose the scenario that aligns with your requirements**, choose one of the two options, depending on whether you want to maintain the original `stv1` compute for a period after migration. - :::image type="content" source="media/migrate-stv1-to-stv2-vnet/enable-retain-gateway.png" alt-text="Screenshot of options to retain stv1 compute in the portal." lightbox="media/migrate-stv1-to-stv2-vnet/enable-retain-gateway.png"::: --1. In **Step 3**, confirm you want to migrate, and select **Migrate**. + :::image type="content" source="media/migrate-stv1-to-stv2-vnet/enable-retain-gateway.png" alt-text="Screenshot of options to retain stv1 compute in the portal."::: +1. Under **Define migration settings for each location**: + 1. Select a location to migrate. + 1. Select the **Virtual network**, **Subnet**, and optional **Public IP address** you want to migrate to. + + :::image type="content" source="media/migrate-stv1-to-stv2-vnet/select-location.png" alt-text="Screenshot of selecting network migration settings in the portal." lightbox="media/migrate-stv1-to-stv2-vnet/select-location.png"::: +1. 
Under **Verify that your subnet meets migration requirements**, select **Verify** to run automated checks on the subnet. If problems are detected, adjust your subnet configuration and run checks again. For other network dependencies, such as DNS and firewall rules, check manually. +1. Confirm that you want to migrate, and select **Migrate**. The status of your API Management instance changes to **Updating**. The migration process takes approximately 45 minutes to complete. When the status changes to **Online**, migration is complete. If your API Management instance is deployed in multiple regions, repeat the preceding steps to continue migrating VNet settings for the remaining locations of your instance. After you update the VNet configuration, the status of your API Management insta For VNet-injected instances, see the prerequisites for the options to [migrate and keep the same subnet](#option-1-migrate-and-keep-same-subnet) or to [migrate and change to a new subnet](#option-2-migrate-and-change-to-new-subnet). -- **Will the migration cause a downtime?**+- **Will the migration cause downtime?** When migrating a VNet-injected instance and keeping the same subnet configuration, minimal or no downtime for the API gateway is expected. See the summary table in [Expected downtime](#expected-downtime-and-compute-retention). After you update the VNet configuration, the status of your API Management insta `stv1` to `stv2` migration involves updating the compute platform alone and the internal storage layer isn't changed. Hence all the configuration is safe during the migration process. This includes the system-assigned managed identity, which if enabled is preserved. -- **How to confirm that the migration is complete and successful?**+- **How to confirm that the migration is complete and successful** The migration is considered complete and successful when the status in the **Overview** page reads **Online** along with the platform version being either `stv2` or `stv2.1`. 
Also verify that the network status in the **Network** blade shows green for all required connectivity. After you update the VNet configuration, the status of your API Management insta - **Can I preserve the IP address of the instance?** - Yes, you can preserve the IP address by [migrating and keeping the same subnet](#option-1-migrate-and-keep-same-subnet). - + Yes, the **Platform migration** blade in the portal and the REST API have options to preserve the IP address. ++ - **Is there a migration path without modifying the existing instance?** Yes, you need a [side-by-side migration](migrate-stv1-to-stv2.md#alternative-side-by-side-deployment). That means you create a new API Management instance in parallel with your current instance and copy the configuration over to the new instance. After you update the VNet configuration, the status of your API Management insta - **What functionality is not available during migration?** - API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) is locked for 30 minutes. In scenarios where you migrate to a new subnet, after migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address. -+ API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) is locked for 30 minutes. - **How long will the migration take?** - The expected duration for a migration to a new VNet configuration is approximately 45 minutes. The indicator to check if the migration was already performed is to check if Status of your instance is back to **Online** and not **Updating**. + The expected duration for a migration to a new VNet configuration is approximately 45 minutes. The indicator to check if the migration was already performed is to check if Status of your instance is back to **Online** and not **Updating**. 
Plan longer times for multi-region deployments and in scenarios that involve changing the subnet more than once. - **Is there a way to validate the VNet configuration before attempting migration?** After you update the VNet configuration, the status of your API Management insta If there's a failure during the migration process, the instance will automatically roll back to the `stv1` platform. However, after the service migrates successfully, you can't roll back to the `stv1` platform. - When migrating and changing to a new VIP, after migration there is a short window of time during which the old gateway continues to serve traffic and you can confirm your network settings. See [Confirm settings before old gateway is purged](#confirm-settings-before-old-gateway-is-purged). - - **Is there any change required in custom domain/private DNS zones?** With VNet-injected instances in internal mode and changing to a new VIP, you'll need to update the private DNS zones to the new VNet IP address acquired after the migration. Pay attention to update non-Azure DNS zones, too (for example, your on-premises DNS servers pointing to API Management private IP address). However, in external mode, the migration process will automatically update the default domains if in use. After you update the VNet configuration, the status of your API Management insta - **Can I upgrade my stv1 instance to the same subnet?** - - Currently, you can only upgrade to the same subnet in a single pass when using the [Migrate to stv2 REST API](#option-1-migrate-and-keep-same-subnet). - - Currently, if you use the **Platform migration** blade in the portal, you need to migrate to a new subnet and then migrate back to the original subnet: - - The old gateway takes between 15 mins to 45 mins to vacate the subnet, so that you can initiate the move. However, you can enable a migration setting to retain the old gateway for 48 hours. 
- - Ensure that the old subnet's networking for [NSG](./virtual-network-reference.md#required-ports) and [firewall](./api-management-using-with-vnet.md?tabs=stv2#force-tunnel-traffic-to-on-premises-firewall-using-expressroute-or-network-virtual-appliance) is updated for `stv2` dependencies. - - Subnet IP address allocation is nondeterministic, therefore the original ILB (ingress) IP for "internal mode" deployments may change when you move back to the original subnet. This would require a DNS change if you're using A records. + Yes, use the **Platform migration** blade in the portal, or use the [Migrate to stv2 REST API](#option-1-migrate-and-keep-same-subnet). -- **Can I test the new gateway before switching the live traffic?**+- **Can I test the new gateway in a new subnet before switching the live traffic?** - - By default, the old and the new managed gateways coexist for 15 mins, which is a small window of time to validate the deployment. You can enable a migration setting to retain the old gateway for 48 hours. This change keeps the old and the new managed gateways active to receive traffic and facilitate validation. + - When you migrate to a new subnet, by default the old and the new managed gateways coexist for 15 minutes, which is a small window of time to validate the deployment. You can enable a migration setting to retain the old gateway for 48 hours. This change keeps the old and the new managed gateways active to receive traffic and facilitate validation. - The migration process automatically updates the default domain names, and if being used, the traffic routes to the new gateways immediately. - If custom domain names are in use, the corresponding DNS records might need to be updated with the new IP address if not using CNAME. Customers can update their hosts file to the new API Management IP and validate the instance before making the switch. During this validation process, the old gateway continues to serve the live traffic. 
-- **Are there any considerations when using default domain name?**+- **Are there any considerations when using default domain name in a new subnet?** Instances that are using the default DNS name in external mode have the DNS autoupdated by the migration process. Moreover, the management endpoint, which always uses the default domain name, is automatically updated by the migration process. Since the switch happens immediately on a successful migration, the new instance starts receiving traffic immediately, and it's critical that any networking restrictions/dependencies are taken care of in advance to avoid impacted APIs being unavailable. |
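The migration row above repeatedly references the Migrate to Stv2 REST API, a POST action on the API Management service resource with a mode for preserving or regenerating the VIP. A minimal sketch of building that request follows; the helper is hypothetical, and the `api-version` and `mode` strings are assumptions to confirm against the REST API reference:

```python
# Sketch: build the "Migrate to Stv2" REST call for an API Management instance.
API_VERSION = "2023-03-01-preview"  # assumed api-version; check the REST reference

def migrate_to_stv2_request(subscription_id, resource_group, service_name,
                            preserve_ip=True):
    """Return (url, body) for the POST that migrates an instance in-place.

    preserve_ip=True keeps the original VIP and subnet (recommended);
    preserve_ip=False asks Azure to generate a new public VIP instead.
    """
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.ApiManagement/service/{service_name}"
        f"/migrateToStv2?api-version={API_VERSION}"
    )
    body = {"mode": "PreserveIp" if preserve_ip else "NewIP"}  # assumed mode values
    return url, body

url, body = migrate_to_stv2_request("0000-sub", "my-rg", "my-apim")
print(url)
print(body)
```

For multi-region instances, the row notes that a single call migrates the VNet settings for all locations, so no per-location loop is needed.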
api-management | Proxy Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/proxy-policy.md | The `proxy` policy allows you to route requests forwarded to backends via an HTT - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation - [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace +### Usage notes ++- We recommend using [named values](api-management-howto-properties.md) to provide credentials, with secrets protected in a key vault. ++ ## Example In this example, [named values](api-management-howto-properties.md) are used for the username and password to avoid storing sensitive information in the policy document. |
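The proxy-policy row above recommends named values for credentials. As a sketch (hypothetical helper; the `{{named-value}}` placeholder syntax and the `proxy` element's `url`/`username`/`password` attributes follow the policy reference), a policy fragment that keeps secrets out of the policy document can be generated like this:

```python
import xml.etree.ElementTree as ET

def proxy_policy(url, username_named_value=None, password_named_value=None):
    """Build a <proxy> policy element (sketch).

    Credentials are emitted as {{named-value}} placeholders, so the actual
    secrets live in named values (ideally Key Vault-backed), not in the
    policy document itself.
    """
    attrs = {"url": url}
    if username_named_value:
        attrs["username"] = "{{%s}}" % username_named_value
    if password_named_value:
        attrs["password"] = "{{%s}}" % password_named_value
    return ET.tostring(ET.Element("proxy", attrs), encoding="unicode")

print(proxy_policy("http://192.168.1.1:8080", "proxy-user", "proxy-password"))
```

At runtime the gateway resolves each `{{name}}` placeholder to the named value's current secret, so rotating the credential requires no policy change.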
api-management | Virtual Network Workspaces Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-workspaces-resources.md | For information about networking options in API Management, see [Use a virtual n * The subnet can't be shared with another Azure resource, including another workspace gateway. +## Subnet size ++* Minimum: /27 (32 addresses) +* Maximum: /24 (256 addresses) - recommended + ## Subnet delegation The subnet must be delegated as follows to enable the desired inbound and outbound access. |
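The subnet-size bounds in the row above (/27 minimum, /24 maximum and recommended) can be sanity-checked with Python's standard `ipaddress` module; the `subnet_ok` helper is illustrative, not an Azure API:

```python
import ipaddress

def subnet_ok(cidr):
    """Check a subnet against the documented bounds: /27 (32 addresses)
    minimum, /24 (256 addresses) maximum/recommended."""
    net = ipaddress.ip_network(cidr, strict=True)
    # Larger prefix length means a smaller subnet, so /24 <= prefix <= /27.
    return 24 <= net.prefixlen <= 27

print(ipaddress.ip_network("10.0.0.0/27").num_addresses)  # 32
print(ipaddress.ip_network("10.0.0.0/24").num_addresses)  # 256
print(subnet_ok("10.0.0.0/26"))  # True (64 addresses fits the range)
print(subnet_ok("10.0.0.0/28"))  # False (16 addresses, too small)
```

Note that some of a subnet's addresses are reserved by Azure, so the usable count is lower than `num_addresses`.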
application-gateway | Alb Controller Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/alb-controller-release-notes.md | Instructions for new or existing deployments of ALB Controller are found in the | ALB Controller Version | Gateway API Version | Kubernetes Version | Release Notes | | - | - | | - |-| 1.0.2| v1 | v1.26, v1.27, v1.28, v1.29 | ECDSA + RSA certificate support for both Ingress and Gateway API, Ingress fixes, Server-sent events support | +| 1.2.3| v1.1 | v1.26, v1.27, v1.28, v1.29, v1.30 | Gateway API v1.1, gRPC support, frontend mutual authentication, readiness probe fixes, custom health probe port and TLS mode | ## Release history | ALB Controller Version | Gateway API Version | Kubernetes Version | Release Notes | | - | - | | - |+| 1.0.2| v1 | v1.26, v1.27, v1.28, v1.29 | ECDSA + RSA certificate support for both Ingress and Gateway API, Ingress fixes, Server-sent events support | | 1.0.0| v1 | v1.26, v1.27, v1.28 | General Availability! URL redirect for both Gateway and Ingress API, v1beta1 -> v1 of Gateway API, quality improvements<br/>Breaking Changes: TLS Policy for Gateway API [PolicyTargetReference](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io%2fv1alpha2.PolicyTargetReferenceWithSectionName)<br/>Listener is now referred to as [SectionName](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.SectionName)<br/>Fixes: Request timeout of 3 seconds, [HealthCheckPolicy interval](https://github.com/Azure/AKS/issues/4086), [pod crash for missing API fields](https://github.com/Azure/AKS/issues/4087) | | 0.6.3 | v1beta1 | v1.25 | Hotfix to address handling of Application Gateway for Containers frontends during controller restart in managed scenario | | 0.6.2 | - | - | Skipped release | |
application-gateway | Api Specification Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/api-specification-kubernetes.md | deployment status.</p> </td> </tr><tr><td><p>"InProgress"</p></td> <td><p>AlbReasonInProgress indicates whether the Application Gateway for Containers resource-is in the process of being created, updated or deleted.</p> +is in the process of being created, updated, or deleted.</p> </td> </tr></tbody> </table> AlbStatus <h3 id="alb.networking.azure.io/v1.BackendTLSPolicy">BackendTLSPolicy </h3> <div>-<p>BackendTLSPolicy is the schema for the BackendTLSPolicys API</p> +<p>BackendTLSPolicy is the schema for the BackendTLSPolicys API.</p> </div> <table> <thead> BackendTLSPolicyConfig <em>(Optional)</em> <p>Override defines policy configuration that should override policy configuration attached below the targeted resource in the hierarchy.</p>-<p>Note: Override is currently not supported and will result in a validation error. +<p>Note: Override is currently not supported and results in a validation error. Support for Override will be added in a future release.</p> </td> </tr> int (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.BackendTLSPolicy">BackendTLSPolicy</a>) </p> <div>-<p>BackendTLSPolicySpec defines the desired state of BackendTLSPolicy</p> +<p>BackendTLSPolicySpec defines the desired state of BackendTLSPolicy.</p> </div> <table> <thead> BackendTLSPolicyConfig <em>(Optional)</em> <p>Override defines policy configuration that should override policy configuration attached below the targeted resource in the hierarchy.</p>-<p>Note: Override is currently not supported and will result in a validation error. +<p>Note: Override is currently not supported and results in a validation error. 
Support for Override will be added in a future release.</p> </td> </tr> vocabulary to describe BackendTLSPolicy state.</p> (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.BackendTLSPolicyConfig">BackendTLSPolicyConfig</a>) </p> <div>-<p>CommonTLSPolicy is the schema for the CommonTLSPolicy API</p> +<p>CommonTLSPolicy is the schema for the CommonTLSPolicy API.</p> </div> <table> <thead> CommonTLSPolicyVerify </td> <td> <em>(Optional)</em>-<p>Verify provides the options to verify the backend certificate</p> +<p>Verify provides the options to verify the peer certificate.</p> </td> </tr> </tbody> CommonTLSPolicyVerify (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.CommonTLSPolicy">CommonTLSPolicy</a>) </p> <div>-<p>CommonTLSPolicyVerify defines the schema for the CommonTLSPolicyVerify API</p> +<p>CommonTLSPolicyVerify defines the schema for the CommonTLSPolicyVerify API.</p> </div> <table> <thead> Gateway API .SecretObjectReference </em> </td> <td>-<p>CaCertificateRef is the CA certificate used to verify peer certificate of -the backend.</p> +<p>CaCertificateRef is the CA certificate used to verify peer certificate.</p> </td> </tr> <tr> string </em> </td> <td>+<em>(Optional)</em> <p>SubjectAltName is the subject alternative name used to verify peer-certificate of the backend.</p> +certificate.</p> </td> </tr> </tbody> Kubernetes core API.</p> <tbody> <tr> <td>-<code>PolicyTargetReference</code><br/> +<code>NamespacedPolicyTargetReference</code><br/> <em>-<a href="https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.PolicyTargetReference"> -Gateway API .PolicyTargetReference +<a href="https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1alpha2.NamespacedPolicyTargetReference"> +Gateway API alpha2.NamespacedPolicyTargetReference </a> </em> </td> <td> <p>-(Members of <code>PolicyTargetReference</code> are embedded into this type.) 
+(Members of <code>NamespacedPolicyTargetReference</code> are embedded into this type.) </p> </td> </tr> resources, SectionNames is interpreted as the following:</p> <li>Gateway: Listener Name</li> <li>Service: Port Name</li> </ul>-<p>If a SectionNames is specified, but does not exist on the targeted object, -the Policy will fail to attach, and the policy implementation will record +<p>If a SectionNames is specified, but doesn’t exist on the targeted object, +the Policy fails to attach, and the policy implementation will record a <code>ResolvedRefs</code> or similar Condition in the Policy’s status.</p> </td> </tr> FrontendTLSPolicyConfig <em>(Optional)</em> <p>Override defines policy configuration that should override policy configuration attached below the targeted resource in the hierarchy.</p>-<p>Note: Override is currently not supported and will result in a validation error. +<p>Note: Override is currently not supported and results in a validation error. Support for Override will be added in a future release.</p> </td> </tr> When the given FrontendTLSPolicy is correctly configured</p> </tr><tr><td><p>"InvalidFrontendTLSPolicy"</p></td> <td><p>FrontendTLSPolicyReasonInvalid is the reason when the FrontendTLSPolicy isn’t Accepted</p> </td>+</tr><tr><td><p>"InvalidCertificateRef"</p></td> +<td><p>FrontendTLSPolicyReasonInvalidCertificateRef is used when an invalid certificate is referenced</p> +</td> +</tr><tr><td><p>"InvalidDefault"</p></td> +<td><p>FrontendTLSPolicyReasonInvalidDefault is used when the default is invalid</p> +</td> </tr><tr><td><p>"InvalidGateway"</p></td> <td><p>FrontendTLSPolicyReasonInvalidGateway is used when the gateway is invalid</p> </td> When the given FrontendTLSPolicy is correctly configured</p> </tr><tr><td><p>"InvalidPolicyType"</p></td> <td><p>FrontendTLSPolicyReasonInvalidPolicyType is used when the policy type is invalid</p> </td>+</tr><tr><td><p>"InvalidTargetReference"</p></td> 
+<td><p>FrontendTLSPolicyReasonInvalidTargetReference is used when the target reference is invalid</p> +</td> </tr><tr><td><p>"NoTargetReference"</p></td> <td><p>FrontendTLSPolicyReasonNoTargetReference is used when there’s no target reference</p> </td> Policy.</p> <tbody> <tr> <td>+<code>verify</code><br/> +<em> +<a href="#alb.networking.azure.io/v1.MTLSPolicyVerify"> +MTLSPolicyVerify +</a> +</em> +</td> +<td> +<em>(Optional)</em> +<p>Verify provides the options to verify the peer certificate.</p> +</td> +</tr> +<tr> +<td> <code>policyType</code><br/> <em> <a href="#alb.networking.azure.io/v1.PolicyType"> PolicyType </em> </td> <td>+<em>(Optional)</em> <p>Type is the type of the policy.</p> </td> </tr> FrontendTLSPolicyConfig <em>(Optional)</em> <p>Override defines policy configuration that should override policy configuration attached below the targeted resource in the hierarchy.</p>-<p>Note: Override is currently not supported and will result in a validation error. +<p>Note: Override is currently not supported and result in a validation error. 
Support for Override will be added in a future release.</p> </td> </tr> This is a strict version of the policy “2023-06”.</p> </td> </tr></tbody> </table>+<h3 id="alb.networking.azure.io/v1.GRPCSpecifiers">GRPCSpecifiers +</h3> +<p> +(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HealthCheckPolicyConfig">HealthCheckPolicyConfig</a>) +</p> +<div> +<p>GRPCSpecifiers defines the schema for GRPC HealthCheck.</p> +</div> +<table> +<thead> +<tr> +<th>Field</th> +<th>Description</th> +</tr> +</thead> +<tbody> +<tr> +<td> +<code>authority</code><br/> +<em> +string +</em> +</td> +<td> +<em>(Optional)</em> +<p>Authority if present is used as the value of the Authority header in the health check.</p> +</td> +</tr> +<tr> +<td> +<code>service</code><br/> +<em> +string +</em> +</td> +<td> +<em>(Optional)</em> +<p>Service allows the configuration of a Health check registered under a different service name.</p> +</td> +</tr> +</tbody> +</table> <h3 id="alb.networking.azure.io/v1.HTTPHeader">HTTPHeader </h3> <p> the implementation setting the Accepted Condition for the Route to <code>status: <tr> <td>/foo/bar</td> <td>/foo</td>-<td> </td> +<td></td> <td>/bar</td> </tr> <tr> <td>/foo/</td> <td>/foo</td>-<td> </td> +<td></td> <td>/</td> </tr> <tr> <td>/foo</td> <td>/foo</td>-<td> </td> +<td></td> <td>/</td> </tr> <tr> match the prefix <code>/abc</code>, but the path <code>/abcd</code> wouldn&rsquo (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HealthCheckPolicyConfig">HealthCheckPolicyConfig</a>) </p> <div>-<p>HTTPSpecifiers defines the schema for HTTP HealthCheck check specification</p> +<p>HTTPSpecifiers defines the schema for HTTP HealthCheck check specification.</p> </div> <table> <thead> HTTPMatch <p>HeaderFilter defines a filter that modifies the headers of an HTTP request or response. Only one action for a given header name is permitted. 
Filters specifying multiple actions of the same or different type for any one-header name are invalid and will be rejected. +header name are invalid and rejected. Configuration to set or add multiple values for a header must use RFC 7230 header value formatting, separating each value with a comma.</p> </div> my-header2: bar</p> <div> <p>HeaderName is the name of a header or query parameter.</p> </div>-<h3 id="alb.networking.azure.io/v1.HealthCheckPolicy">HealthCheckPolicy -</h3> +<h3 id="alb.networking.azure.io/v1.HealthCheckPolicy">HealthCheckPolicy</h3> <div>-<p>HealthCheckPolicy is the schema for the HealthCheckPolicy API</p> +<p>HealthCheckPolicy is the schema for the HealthCheckPolicy API.</p> </div> <table> <thead> particular HealthCheckPolicy condition type is raised.</p> <th>Description</th> </tr> </thead>-<tbody><tr><td><p>"Accepted"</p></td> -<td><p>HealthCheckPolicyReasonAccepted is used to set the HealthCheckPolicyConditionReason to Accepted -When the given HealthCheckPolicy is correctly configured</p> +<tbody><tr><td><p>"BackendTLSPolicyNotFound"</p></td> +<td><p>BackendTLSPolicyConditionNotFound is used when the BackendTLSPolicy is not found for the service.</p> +</td> +</tr><tr><td><p>"Accepted"</p></td> +<td><p>HealthCheckPolicyReasonAccepted is used to set the HealthCheckPolicyConditionReason to Accepted. 
+When the given HealthCheckPolicy is correctly configured.</p> </td> </tr><tr><td><p>"InvalidHealthCheckPolicy"</p></td>-<td><p>HealthCheckPolicyReasonInvalid is the reason when the HealthCheckPolicy isn’t Accepted</p> +<td><p>HealthCheckPolicyReasonInvalid is the reason when the HealthCheckPolicy isn’t Accepted.</p> </td> </tr><tr><td><p>"InvalidGroup"</p></td>-<td><p>HealthCheckPolicyReasonInvalidGroup is used when the group is invalid</p> +<td><p>HealthCheckPolicyReasonInvalidGroup is used when the group is invalid.</p> </td> </tr><tr><td><p>"InvalidKind"</p></td>-<td><p>HealthCheckPolicyReasonInvalidKind is used when the kind/group is invalid</p> +<td><p>HealthCheckPolicyReasonInvalidKind is used when the kind/group is invalid.</p> </td> </tr><tr><td><p>"InvalidName"</p></td>-<td><p>HealthCheckPolicyReasonInvalidName is used when the name is invalid</p> +<td><p>HealthCheckPolicyReasonInvalidName is used when the name is invalid.</p> </td> </tr><tr><td><p>"InvalidPort"</p></td>-<td><p>HealthCheckPolicyReasonInvalidPort is used when the port is invalid</p> +<td><p>HealthCheckPolicyReasonInvalidPort is used when the port is invalid.</p> </td> </tr><tr><td><p>"InvalidService"</p></td>-<td><p>HealthCheckPolicyReasonInvalidService is used when the Service is invalid</p> +<td><p>HealthCheckPolicyReasonInvalidService is used when the Service is invalid.</p> </td> </tr><tr><td><p>"NoTargetReference"</p></td>-<td><p>HealthCheckPolicyReasonNoTargetReference is used when there’s no target reference</p> +<td><p>HealthCheckPolicyReasonNoTargetReference is used when there’s no target reference.</p> </td> </tr><tr><td><p>"OverrideNotSupported"</p></td>-<td><p>HealthCheckPolicyReasonOverrideNotSupported is used when the override isn’t supported</p> +<td><p>HealthCheckPolicyReasonOverrideNotSupported is used when the override isn’t supported.</p> </td> </tr><tr><td><p>"RefNotPermitted"</p></td>-<td><p>HealthCheckPolicyReasonRefNotPermitted is used when the ref isn’t 
permitted</p> +<td><p>HealthCheckPolicyReasonRefNotPermitted is used when the ref isn’t permitted.</p> </td> </tr><tr><td><p>"SectionNamesNotPermitted"</p></td>-<td><p>HealthCheckPolicyReasonSectionNamesNotPermitted is used when the section names aren’t permitted</p> +<td><p>HealthCheckPolicyReasonSectionNamesNotPermitted is used when the section names aren’t permitted.</p> </td> </tr></tbody> </table> field.</p> </tr> </thead> <tbody><tr><td><p>"Accepted"</p></td>-<td><p>HealthCheckPolicyConditionAccepted is used to set the HealthCheckPolicyConditionType to Accepted</p> +<td><p>HealthCheckPolicyConditionAccepted is used to set the HealthCheckPolicyConditionType to Accepted.</p> </td> </tr><tr><td><p>"ResolvedRefs"</p></td>-<td><p>HealthCheckPolicyConditionResolvedRefs is used to set the HealthCheckPolicyCondition to ResolvedRefs</p> +<td><p>HealthCheckPolicyConditionResolvedRefs is used to set the HealthCheckPolicyCondition to ResolvedRefs.</p> </td> </tr></tbody> </table> field.</p> (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HealthCheckPolicySpec">HealthCheckPolicySpec</a>) </p> <div>-<p>HealthCheckPolicyConfig defines the schema for HealthCheck check specification</p> +<p>HealthCheckPolicyConfig defines the schema for HealthCheck check specification.</p> </div> <table> <thead> considered failed.</p> </tr> <tr> <td>+<code>port</code><br/> +<em> +int32 +</em> +</td> +<td> +<em>(Optional)</em> +<p>Port is the port to use for HealthCheck checks.</p> +</td> +</tr> +<tr> +<td> <code>unhealthyThreshold</code><br/> <em> int32 int32 </td> <td> <em>(Optional)</em>-<p>UnhealthyThreshold is the number of consecutive failed HealthCheck checks</p> +<p>UnhealthyThreshold is the number of consecutive failed HealthCheck checks.</p> </td> </tr> <tr> int32 </td> <td> <em>(Optional)</em>-<p>HealthyThreshold is the number of consecutive successful HealthCheck checks</p> +<p>HealthyThreshold is the number of consecutive successful HealthCheck checks.</p> +</td> +</tr> 
+<tr> +<td> +<code>useTLS</code><br/> +<em> +bool +</em> +</td> +<td> +<em>(Optional)</em> +<p>UseTLS indicates whether health check should enforce TLS. +By default, health check will use the same protocol as the +service if the same port is used for health check. If the port +is different, health check will be plaintext.</p> </td> </tr> <tr> HTTPSpecifiers target resource.</p> </td> </tr>+<tr> +<td> +<code>grpc</code><br/> +<em> +<a href="#alb.networking.azure.io/v1.GRPCSpecifiers"> +GRPCSpecifiers +</a> +</em> +</td> +<td> +<p>GRPC configures a gRPC v1 HealthCheck (<a href="https://github.com/grpc/grpc-proto/blob/master/grpc/health/v1/health.proto">https://github.com/grpc/grpc-proto/blob/master/grpc/health/v1/health.proto</a>) +against the target resource.</p> +</td> +</tr> </tbody> </table> <h3 id="alb.networking.azure.io/v1.HealthCheckPolicySpec">HealthCheckPolicySpec target resource.</p> (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.HealthCheckPolicy">HealthCheckPolicy</a>) </p> <div>-<p>HealthCheckPolicySpec defines the desired state of HealthCheckPolicy</p> +<p>HealthCheckPolicySpec defines the desired state of HealthCheckPolicy.</p> </div> <table> <thead> Kubernetes meta/v1.Duration </tr> </tbody> </table>+<h3 id="alb.networking.azure.io/v1.MTLSPolicyVerify">MTLSPolicyVerify +</h3> +<p> +(<em>Appears on:</em><a href="#alb.networking.azure.io/v1.FrontendTLSPolicyConfig">FrontendTLSPolicyConfig</a>) +</p> +<div> +<p>MTLSPolicyVerify defines the schema for the MTLSPolicyVerify API.</p> +</div> +<table> +<thead> +<tr> +<th>Field</th> +<th>Description</th> +</tr> +</thead> +<tbody> +<tr> +<td> +<code>caCertificateRef</code><br/> +<em> +<a href="https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.SecretObjectReference"> +Gateway API .SecretObjectReference +</a> +</em> +</td> +<td> +<p>CaCertificateRef is the CA certificate used to verify peer certificate.</p> +</td> +</tr> +<tr> +<td> +<code>subjectAltNames</code><br/> +<em> 
+[]string +</em> +</td> +<td> +<em>(Optional)</em> +<p>SubjectAltNames is the list of subject alternative names used to verify peer +certificate.</p> +</td> +</tr> +</tbody> +</table> <h3 id="alb.networking.azure.io/v1.PolicyType">PolicyType </h3> <p> Valid Protocol values are:</p> </tr> </thead> <tbody><tr><td><p>"HTTP"</p></td>-<td><p>HTTP implies that the service uses HTTP</p> +<td><p>ProtocolHTTP implies that the service uses HTTP.</p> </td> </tr><tr><td><p>"HTTPS"</p></td>-<td><p>HTTPS implies that the service uses HTTPS</p> +<td><p>ProtocolHTTPS implies that the service uses HTTPS.</p> </td> </tr><tr><td><p>"TCP"</p></td>-<td><p>TCP implies that the service uses plain TCP</p> +<td><p>ProtocolTCP implies that the service uses plain TCP.</p> </td> </tr></tbody> </table> header from an HTTP response before it’s sent to the client.</p> <h3 id="alb.networking.azure.io/v1.RoutePolicy">RoutePolicy </h3> <div>-<p>RoutePolicy is the schema for the RoutePolicy API</p> +<p>RoutePolicy is the schema for the RoutePolicy API.</p> </div> <table> <thead> RoutePolicyConfig <em>(Optional)</em> <p>Override defines policy configuration that should override policy configuration attached below the targeted resource in the hierarchy.</p>-<p>Note: Override is currently not supported and will result in a validation error. +<p>Note: Override is currently not supported and result in a validation error. 
Support for Override will be added in a future release.</p> </td> </tr> When the given RoutePolicy is correctly configured</p> </tr><tr><td><p>"InvalidRoutePolicy"</p></td> <td><p>RoutePolicyReasonInvalid is the reason when the RoutePolicy isn’t Accepted</p> </td>+</tr><tr><td><p>"InvalidGRPCRoute"</p></td> +<td><p>RoutePolicyReasonInvalidGRPCRoute is used when the GRPCRoute is invalid</p> +</td> </tr><tr><td><p>"InvalidGroup"</p></td> <td><p>RoutePolicyReasonInvalidGroup is used when the group is invalid</p> </td> SessionAffinity (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.RoutePolicy">RoutePolicy</a>) </p> <div>-<p>RoutePolicySpec defines the desired state of RoutePolicy</p> +<p>RoutePolicySpec defines the desired state of RoutePolicy.</p> </div> <table> <thead> RoutePolicyConfig <em>(Optional)</em> <p>Override defines policy configuration that should override policy configuration attached below the targeted resource in the hierarchy.</p>-<p>Note: Override is currently not supported and will result in a validation error. +<p>Note: Override is currently not supported and result in a validation error. Support for Override will be added in a future release.</p> </td> </tr> vocabulary to describe RoutePolicy state.</p> (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.RoutePolicyConfig">RoutePolicyConfig</a>) </p> <div>-<p>RouteTimeouts defines the schema for Timeouts specification</p> +<p>RouteTimeouts defines the schema for Timeouts specification.</p> </div> <table> <thead> Kubernetes meta/v1.Duration (<em>Appears on:</em><a href="#alb.networking.azure.io/v1.IngressBackendSettings">IngressBackendSettings</a>, <a href="#alb.networking.azure.io/v1.RoutePolicyConfig">RoutePolicyConfig</a>) </p> <div>-<p>SessionAffinity defines the schema for Session Affinity specification</p> +<p>SessionAffinity defines the schema for Session Affinity specification.</p> </div> <table> <thead> |
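Pulling the pieces of this API reference together, the new `MTLSPolicyVerify` type and the `SectionNames` semantics above might combine in a `FrontendTLSPolicy` along these lines (a sketch only; the gateway, listener, and secret names are hypothetical):

```yaml
apiVersion: alb.networking.azure.io/v1
kind: FrontendTLSPolicy
metadata:
  name: mtls-policy            # hypothetical policy name
  namespace: test-infra
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: gateway-01           # hypothetical gateway
    namespace: test-infra
    sectionNames:
    - mtls-listener            # for a Gateway target, SectionNames is a listener name
  default:
    verify:
      caCertificateRef:        # CA certificate used to verify the peer certificate
        group: ""
        kind: Secret
        name: ca.bundle        # hypothetical secret holding the CA certificate
        namespace: test-infra
```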
application-gateway | Custom Health Probe | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/custom-health-probe.md | The following properties make up custom health probes: | Property | Default Value | | -- | - |-| interval | How often in seconds health probes should be sent to the backend target. The minimum interval must be > 0 seconds. | -| timeout | How long in seconds the request should wait until it's marked as a failure The minimum interval must be > 0 seconds. | +| interval | How often in seconds health probes should be sent to the backend target. The minimum interval must be > 0 seconds. | +| timeout | How long in seconds the request should wait until it's marked as a failure. The minimum timeout must be > 0 seconds. | | healthyThreshold | Number of health probes before marking the target endpoint healthy. The minimum value must be > 0. |+| port | The port number used when probing the backend target. | | unhealthyThreshold | Number of health probes to fail before the backend target should be labeled unhealthy. The minimum value must be > 0. |+| grpc | Specified if the backend service is expecting gRPC connections. The value must be `{}`. | +| (http) | Specified if the backend service is expecting HTTP connections. | | (http) host | The hostname specified in the request to the backend target. | | (http) path | The specific path of the request. If a single file should be loaded, the path might be /index.html. | | (http -> match) statusCodes | Contains two properties, `start` and `end`, that define the range of valid HTTP status codes returned from the backend. |+| useTLS | Indicates whether the health check should enforce TLS. If not specified, the health check uses the same protocol as the service if the same port is used for the health check. If the port is different, the health check is cleartext.
| [![A diagram showing the Application Gateway for Containers using custom health probes to determine backend health.](./media/custom-health-probe/custom-health-probe.png)](./media/custom-health-probe/custom-health-probe.png#lightbox) When the default health probe is used, the following values for each health prob | healthyThreshold | 1 probe | | unhealthyThreshold | 3 probes | | port | The port number used is defined by the backend port number in the Ingress resource or HttpRoute backend port in the HttpRoute resource. |-| protocol | HTTP or HTTPS<sup>1</sup> | | (http) host | localhost | | (http) path | / |+| useTLS | HTTP for HTTP and HTTPS when TLS is specified.<sup>1</sup> | -<sup>1</sup> HTTPS will be used when a backendTLSPolicy references a target backend service (for Gateway API implementation) or IngressExtension with a backendSetting protocol of HTTPS (for Ingress API implementation) is specified. +<sup>1</sup> HTTPS is used when a backendTLSPolicy references a target backend service (for Gateway API implementation) or an IngressExtension with a backendSetting protocol of HTTPS (for Ingress API implementation) is specified. >[!Note] >Health probes are initiated with the `User-Agent` value of `Microsoft-Azure-Application-LB/AGC`. When the default health probe is used, the following values for each health prob In both Gateway API and Ingress API, a custom health probe can be defined by creating a [_HealthCheckPolicy_ resource](api-specification-kubernetes.md#alb.networking.azure.io/v1.HealthCheckPolicy) and referencing a service the health probes should check against. As the service is referenced by an HTTPRoute or Ingress resource with a class reference to Application Gateway for Containers, the custom health probe is used for each reference. -In this example, the health probe emitted by Application Gateway for Containers sends the hostname contoso.com to the pods that make up _test-service_.
The request path is `/`, a probe is emitted every 5 seconds and wait 3 seconds before determining the connection has timed out. If a response is received, an HTTP response code between 200 and 299 (inclusive of 200 and 299) is considered healthy, all other responses are considered unhealthy. +In this example, the health probe emitted by Application Gateway for Containers sends the hostname contoso.com to the pods that make up _test-service_. The requested protocol is `http` with a path of `/`. A probe is emitted every 5 seconds and waits 3 seconds before determining the connection has timed out. If a response is received, an HTTP response code between 200 and 299 (inclusive of 200 and 299) is considered healthy; all other responses are considered unhealthy. ```bash kubectl apply -f - <<EOF spec: timeout: 3s healthyThreshold: 1 unhealthyThreshold: 1+ port: 8123 + # grpc: {} # defined if probing a gRPC endpoint http: host: contoso.com path: / spec: statusCodes: - start: 200 end: 299+ useTLS: true EOF ``` |
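As a plain-shell illustration of the `statusCodes` match semantics in the example above (a sketch, not product code), the `start` and `end` bounds are inclusive on both ends:

```shell
# Hypothetical helper mirroring the inclusive statusCodes range (start: 200, end: 299).
is_healthy() {
  code=$1
  if [ "$code" -ge 200 ] && [ "$code" -le 299 ]; then
    echo "healthy"
  else
    echo "unhealthy"
  fi
}

is_healthy 200   # healthy (start is inclusive)
is_healthy 299   # healthy (end is inclusive)
is_healthy 302   # unhealthy
```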
application-gateway | Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/diagnostics.md | Each access log entry in Application Gateway for Containers contains the followi | clientIp | IP address of the client initiating the request to the frontend of Application Gateway for Containers | | frontendName | Name of the Application Gateway for Containers frontend that received the request from the client | | frontendPort | Port number the request was listened on by Application Gateway for Containers |+| frontendTLSFailureReason | Contains information on why TLS negotiation failed. Commonly used for understanding failed authentication requests for client mutual authentication | +| frontendTLSPeerFingerprint | The fingerprint (thumbprint) of the certificate presented by a client to the frontend of Application Gateway for Containers | | hostName | Host header value received from the client by Application Gateway for Containers | | httpMethod | HTTP Method of the request received from the client by Application Gateway for Containers as per [RFC 7231](https://datatracker.ietf.org/doc/html/rfc7231#section-4.3). | | httpStatusCode | HTTP Status code returned from Application Gateway for Containers to the client | Here's an example of the access log emitted in JSON format to a storage account. "backendTimeTaken": "-", "clientIp": "xxx.xxx.xxx.xxx:52526", "frontendName": "frontend-primary",- "frontendPort": "80", + "frontendPort": "443", + "frontendTLSFailureReason": "-", + "frontendTLSPeerFingerprint": "2c01bbc93009ad1fc977fe9115fae7ad298b665f", "hostName": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.fzXX.alb.azure.com", "httpMethod": "GET", "httpStatusCode": "200", Here's an example of the access log emitted in JSON format to a storage account.
"responseBodyBytes": "91", "responseHeaderBytes": "190", "timeTaken": "2",- "tlsCipher": "-", + "tlsCipher": "TLS_AES_256_GCM_SHA384", "tlsProtocol": "-", "trackingId": "0ef125db-7fb7-48a0-b3fe-03fe0ffed873", "userAgent": "curl\/7.81.0" |
application-gateway | Grpc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/grpc.md | + + Title: gRPC with Application Gateway for Containers +description: Learn how to configure Application Gateway for Containers with support for gRPC. +++++ Last updated : 9/16/2024++++# gRPC on Application Gateway for Containers ++## What is gRPC? ++[gRPC](https://grpc.io/docs/what-is-grpc/introduction/) is a modern, high-performance framework that evolves the age-old [remote procedure call (RPC)](https://en.wikipedia.org/wiki/Remote_procedure_call) protocol. At the application level, gRPC streamlines messaging between clients and back-end services. Originating from Google, gRPC is open source and part of the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/) ecosystem of cloud-native offerings. CNCF considers gRPC an [incubating project](https://github.com/cncf/toc/blob/main/process/graduation_criteria.md). Incubating means end users are using the technology in production applications, and the project has a healthy number of contributors. ++A typical gRPC client app exposes a local, in-process function that implements a business operation. Under the covers, that local function invokes another function on a remote machine. What appears to be a local call essentially becomes a transparent out-of-process call to a remote service. The RPC plumbing abstracts the point-to-point networking communication, serialization, and execution between computers. ++In cloud-native applications, developers often work across programming languages, frameworks, and technologies. This *interoperability* complicates message contracts and the plumbing required for cross-platform communication. gRPC provides a "uniform horizontal layer" that abstracts these concerns. Developers code in their native platform focused on business functionality, while gRPC handles communication plumbing. 
++gRPC offers comprehensive support across most popular development stacks, including Java, JavaScript, C#, Go, Swift, and NodeJS. ++## gRPC Benefits ++gRPC uses HTTP/2 for its transport protocol. While compatible with HTTP 1.1, HTTP/2 features many advanced capabilities: ++- A binary framing protocol for data transport - unlike HTTP 1.1, which is text based. +- Multiplexing support for sending multiple parallel requests over the same connection - HTTP 1.1 limits processing to one request/response message at a time. +- Bidirectional full-duplex communication for sending both client requests and server responses simultaneously. +- Built-in streaming enabling requests and responses to asynchronously stream large data sets. +- Header compression that reduces network usage. ++gRPC is lightweight and highly performant. It can be up to 8x faster than JSON serialization with messages 60-80% smaller. ++## Protocol Buffers ++gRPC embraces an open-source technology called [Protocol Buffers](https://developers.google.com/protocol-buffers/docs/overview). They provide a highly efficient and platform-neutral serialization format for serializing structured messages that services send to each other. Using a cross-platform Interface Definition Language (IDL), developers define a service contract for each microservice. The contract, implemented as a text-based `.proto` file, describes the methods, inputs, and outputs for each service. The same contract file can be used for gRPC clients and services built on different development platforms. ++Using the proto file, the Protobuf compiler, `protoc`, generates both client and service code for your target platform. The code includes the following components: ++- Strongly typed objects, shared by the client and service, that represent the service operations and data elements for a message. +- A strongly typed base class with the required network plumbing that the remote gRPC service can inherit and extend. 
+- A client stub that contains the required plumbing to invoke the remote gRPC service. ++At run time, each message is serialized as a standard Protobuf representation and exchanged between the client and remote service. Unlike JSON or XML, Protobuf messages are serialized as compiled binary bytes. ++## RPC life cycle ++gRPC has four [RPC life cycles](https://grpc.io/docs/what-is-grpc/core-concepts/#rpc-life-cycle) that describe how a client interacts with the gRPC server. ++### Unary RPC ++In the unary life cycle, a request is made to the gRPC server and a response is returned. ++![Diagram depicting unary gRPC life cycle.](./media/grpc/grpc-unary.png) ++### Client streaming RPC ++In the client streaming life cycle, a request is made to the gRPC server, and then the client streams a sequence of additional messages to the server without the need for the server to return additional responses. ++![Diagram depicting client streaming gRPC life cycle.](./media/grpc/grpc-client-streaming.png) ++### Server streaming RPC ++In the server streaming life cycle, a request is made to the gRPC server, and then the server streams a sequence of messages back to the client without the need for the client to send additional requests. ++![Diagram depicting server streaming gRPC life cycle.](./media/grpc/grpc-server-streaming.png) ++### Bidirectional streaming RPC ++In the bidirectional streaming life cycle, a request is made to the gRPC server, and both the client and server send a sequence of messages, operating independently from each other. ++![Diagram depicting bidirectional streaming gRPC life cycle.](./media/grpc/grpc-bidrectional-streaming.png) ++## gRPC implementation in Application Gateway for Containers ++Application Gateway for Containers can proxy requests following each of the four life cycles: unary, client streaming, server streaming, and bidirectional streaming.
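As a minimal sketch of the Protocol Buffers contract side: the service and method names below match the `ChatBotService`/`TalkBack` routing example in the next section, while the message types are hypothetical (assumed for illustration only):

```proto
syntax = "proto3";

// Hypothetical contract for the ChatBotService used in the GRPCRoute example.
service ChatBotService {
  // Unary RPC: one request in, one response out.
  rpc TalkBack (TalkBackRequest) returns (TalkBackReply);
}

message TalkBackRequest {
  string text = 1;
}

message TalkBackReply {
  string text = 1;
}
```

Running `protoc` over a contract like this generates the strongly typed objects, service base class, and client stub described above.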
++### gRPC definition ++Configuration is implemented through Kubernetes Gateway API by definition of a [GRPCRoute](https://gateway-api.sigs.k8s.io/api-types/grpcroute/) resource (no support is offered for gRPC in Ingress API for Application Gateway for Containers). Each GRPCRoute resource must reference a Gateway resource. More than one GRPCRoute resource may reference the same gateway, provided the rules to handle the request are unique. ++For example, the following GRPCRoute would be attached to a gateway called `gateway-01`. ++```yaml +apiVersion: gateway.networking.k8s.io/v1 +kind: GRPCRoute +metadata: + name: grpc-route-example + namespace: grpc-namespace +spec: + parentRefs: + - name: gateway-01 + namespace: gateway-namespace + rules: + - matches: + - method: + service: ChatBotService + method: TalkBack + backendRefs: + - name: grpc-talkback + port: 8080 +``` ++>[!Note] +>gRPC is only supported using Gateway API for Application Gateway for Containers. ++### Health probes ++By default, Application Gateway for Containers attempts to initiate a TCP handshake to the backend port running the gRPC service. If the handshake completes, the backend is considered healthy. ++If you use a HealthCheckPolicy as a custom health probe, the defined policy determines probe behavior. ++Here's an example of a HealthCheckPolicy for a gRPC backend. ++```yaml +apiVersion: alb.networking.azure.io/v1 +kind: HealthCheckPolicy +metadata: + name: gateway-health-check-policy + namespace: test-infra +spec: + targetRef: + group: "" + kind: Service + name: test-service + namespace: test-infra + default: + interval: 5s + timeout: 3s + healthyThreshold: 1 + unhealthyThreshold: 1 + port: 8123 + grpc: {} # defined if probing a gRPC endpoint + useTLS: true +``` ++In this example, the `port`, `grpc`, and `useTLS` settings are optional; however, if the service contains multiple pods and the gRPC pod is exposed on a different port, you can explicitly define how the probe should be initiated against that pod. |
application-gateway | How To Frontend Mtls Gateway Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-frontend-mtls-gateway-api.md | + + Title: Frontend MTLS with Application Gateway for Containers - Gateway API +description: Learn how to configure Application Gateway for Containers with support for frontend MTLS authentication. +++++ Last updated : 9/16/2024++++# Frontend MTLS with Application Gateway for Containers - Gateway API ++This document helps set up an example application that uses the following resources from Gateway API. Steps are provided to: ++- Create a [Gateway](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) resource with one HTTPS listener. +- Create an [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) resource that references a backend service. +- Create a [FrontendTLSPolicy](api-specification-kubernetes.md#alb.networking.azure.io/v1.FrontendTLSPolicy) resource that has a CA certificate. ++## Background ++Mutual Transport Layer Security (MTLS) is a process that relies on certificates to encrypt communications and identify clients to a service. This enables Application Gateway for Containers to further increase its security posture by only trusting connections from authenticated devices. ++See the following figure: ++[![A diagram showing the Application Gateway for Containers frontend MTLS process.](./media/how-to-frontend-mtls-gateway-api/frontend-mtls.png)](./media/how-to-frontend-mtls-gateway-api/frontend-mtls.png#lightbox) ++The valid client certificate flow shows a client presenting a certificate to the frontend of Application Gateway for Containers. Application Gateway for Containers determines the certificate is valid and proxies the request to the backend target. The response is ultimately returned to the client. 
++The revoked client certificate flow shows a client presenting a revoked certificate to the frontend of Application Gateway for Containers. Application Gateway for Containers determines the certificate isn't valid and prevents the request from being proxied to the backend target. The client receives an HTTP 400 (bad request) response and a corresponding reason. ++## Prerequisites ++1. If following the BYO deployment strategy, ensure you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md). +2. If following the ALB managed deployment strategy, ensure you provision your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provision the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md). +3. Deploy sample HTTP application: ++ Apply the following deployment.yaml file on your cluster to create a sample web application and deploy sample secrets to demonstrate frontend mutual authentication (mTLS). ++ ```bash + kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml + ``` + + This command creates the following on your cluster: + + - A namespace called `test-infra` + - One service called `echo` in the `test-infra` namespace + - One deployment called `echo` in the `test-infra` namespace + - One secret called `listener-tls-secret` in the `test-infra` namespace ++### Generate certificate(s) ++For this example, we create a root certificate and issue a client certificate from the root. If you already have a root certificate and client certificate, you may skip these steps.
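As a sanity check, the individual generation steps that follow can be run end-to-end and the resulting chain verified with `openssl verify`; this sketch compresses the same commands into one block (subject names shortened, run in a scratch directory):

```shell
cd "$(mktemp -d)"

# Root CA: private key and self-signed certificate.
openssl genrsa -out root.key 2048
openssl req -x509 -new -nodes -key root.key -sha256 -days 1024 -out root.crt \
  -subj "/O=Contoso/CN=contoso-root"

# Client: private key, CSR, and a certificate signed by the root.
openssl genrsa -out client.key 2048
openssl req -new -key client.key -out client.csr -subj "/O=Contoso/CN=contoso-client"
openssl x509 -req -in client.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -out client.crt -days 1024 -sha256

# Confirm the client certificate chains to the root CA.
openssl verify -CAfile root.crt client.crt   # prints: client.crt: OK
```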
++#### Generate a private key for the root certificate ++`openssl genrsa -out root.key 2048` ++#### Generate a root certificate ++`openssl req -x509 -new -nodes -key root.key -sha256 -days 1024 -out root.crt -subj "/C=US/ST=North Dakota/L=Fargo/O=Contoso/CN=contoso-root"` ++#### Generate a private key for the client certificate ++`openssl genrsa -out client.key 2048` ++#### Create a certificate signing request for the client certificate ++`openssl req -new -key client.key -out client.csr -subj "/C=US/ST=North Dakota/L=Fargo/O=Contoso/CN=contoso-client"` ++#### Generate a client certificate signed by the root certificate ++`openssl x509 -req -in client.csr -CA root.crt -CAkey root.key -CAcreateserial -out client.crt -days 1024 -sha256` ++## Deploy the required Gateway API resources ++# [ALB managed deployment](#tab/alb-managed) ++Create a gateway: ++```bash +kubectl apply -f - <<EOF +apiVersion: gateway.networking.k8s.io/v1 +kind: Gateway +metadata: + name: gateway-01 + namespace: test-infra + annotations: + alb.networking.azure.io/alb-namespace: alb-test-infra + alb.networking.azure.io/alb-name: alb-test +spec: + gatewayClassName: azure-alb-external + listeners: + - name: mtls-listener + port: 443 + protocol: HTTPS + allowedRoutes: + namespaces: + from: Same + tls: + mode: Terminate + certificateRefs: + - kind: Secret + group: "" + name: contoso.com +EOF +``` +++# [Bring your own (BYO) deployment](#tab/byo) ++1. Set the following environment variables: ++ ```bash + RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>' + RESOURCE_NAME='alb-test' ++ RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv) + FRONTEND_NAME='frontend' + az network alb frontend create -g $RESOURCE_GROUP -n $FRONTEND_NAME --alb-name $RESOURCE_NAME + ``` ++2.
Create a Gateway ++ ```bash + kubectl apply -f - <<EOF + apiVersion: gateway.networking.k8s.io/v1 + kind: Gateway + metadata: + name: gateway-01 + namespace: test-infra + annotations: + alb.networking.azure.io/alb-id: $RESOURCE_ID + spec: + gatewayClassName: azure-alb-external + listeners: + - name: mtls-listener + port: 443 + protocol: HTTPS + allowedRoutes: + namespaces: + from: Same + tls: + mode: Terminate + certificateRefs: + - kind : Secret + group: "" + name: contoso.com + addresses: + - type: alb.networking.azure.io/alb-frontend + value: $FRONTEND_NAME + EOF + ``` ++++Once the gateway resource is created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway. ++```bash +kubectl get gateway gateway-01 -n test-infra -o yaml +``` ++Example output of successful gateway creation: ++```yaml +status: + addresses: + - type: IPAddress + value: xxxx.yyyy.alb.azure.com + conditions: + - lastTransitionTime: "2023-06-19T21:04:55Z" + message: Valid Gateway + observedGeneration: 1 + reason: Accepted + status: "True" + type: Accepted + - lastTransitionTime: "2023-06-19T21:04:55Z" + message: Application Gateway For Containers resource has been successfully updated. + observedGeneration: 1 + reason: Programmed + status: "True" + type: Programmed + listeners: + - attachedRoutes: 0 + conditions: + - lastTransitionTime: "2023-06-19T21:04:55Z" + message: "" + observedGeneration: 1 + reason: ResolvedRefs + status: "True" + type: ResolvedRefs + - lastTransitionTime: "2023-06-19T21:04:55Z" + message: Listener is accepted + observedGeneration: 1 + reason: Accepted + status: "True" + type: Accepted + - lastTransitionTime: "2023-06-19T21:04:55Z" + message: Application Gateway For Containers resource has been successfully updated. 
+ observedGeneration: 1 + reason: Programmed + status: "True" + type: Programmed + name: https-listener + supportedKinds: + - group: gateway.networking.k8s.io + kind: HTTPRoute +``` ++Once the gateway is created, create an HTTPRoute resource. ++```bash +kubectl apply -f - <<EOF +apiVersion: gateway.networking.k8s.io/v1 +kind: HTTPRoute +metadata: + name: https-route + namespace: test-infra +spec: + parentRefs: + - name: gateway-01 + rules: + - backendRefs: + - name: mtls-app + port: 443 +EOF +``` ++Once the HTTPRoute resource is created, ensure the route is _Accepted_ and the Application Gateway for Containers resource is _Programmed_. ++```bash +kubectl get httproute https-route -n test-infra -o yaml +``` ++Verify the status of the Application Gateway for Containers resource is successfully updated. ++```yaml +status: + parents: + - conditions: + - lastTransitionTime: "2023-06-19T22:18:23Z" + message: "" + observedGeneration: 1 + reason: ResolvedRefs + status: "True" + type: ResolvedRefs + - lastTransitionTime: "2023-06-19T22:18:23Z" + message: Route is Accepted + observedGeneration: 1 + reason: Accepted + status: "True" + type: Accepted + - lastTransitionTime: "2023-06-19T22:18:23Z" + message: Application Gateway For Containers resource has been successfully updated. 
+ observedGeneration: 1 + reason: Programmed + status: "True" + type: Programmed + controllerName: alb.networking.azure.io/alb-controller + parentRef: + group: gateway.networking.k8s.io + kind: Gateway + name: gateway-01 + namespace: test-infra + ``` ++Create a FrontendTLSPolicy ++```bash +kubectl apply -f - <<EOF +apiVersion: alb.networking.azure.io/v1 +kind: FrontendTLSPolicy +metadata: + name: mtls-policy + namespace: test-infra +spec: + targetRef: + group: gateway.networking.k8s.io + kind: Gateway + name: gateway-01 + namespace: test-infra + sectionNames: + - mtls-listener + default: + verify: + caCertificateRef: + name: ca.bundle + group: "" + kind: Secret + namespace: test-infra + subjectAltName: "contoso-client" +EOF +``` ++Once the FrontendTLSPolicy object is created, check the status on the object to ensure that the policy is valid: ++```bash +kubectl get frontendtlspolicy mtls-policy -n test-infra -o yaml +``` ++Example output of valid FrontendTLSPolicy object creation: ++```yaml +status: + conditions: + - lastTransitionTime: "2023-06-29T16:54:42Z" + message: Valid FrontendTLSPolicy + observedGeneration: 1 + reason: Accepted + status: "True" + type: Accepted +``` ++## Test access to the application ++Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN: ++```bash +fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}') +``` ++Curling this FQDN should return responses from the backend as configured on the HTTPRoute. ++```bash +curl --cert client.crt --key client.key --insecure https://$fqdn/ +``` ++Congratulations, you have installed ALB Controller, deployed a backend application, authenticated via client certificate, and routed traffic to the application via the gateway on Application Gateway for Containers. |
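As an aside, the certificate commands in this walkthrough can be sanity-checked locally before any secrets are created. The following condensed sketch repeats the generation steps and adds an `openssl verify` check; the verification step is an addition for illustration, not part of the original walkthrough.

```bash
# Root CA: private key plus self-signed certificate (same parameters as the walkthrough)
openssl genrsa -out root.key 2048
openssl req -x509 -new -nodes -key root.key -sha256 -days 1024 -out root.crt \
  -subj "/C=US/ST=North Dakota/L=Fargo/O=Contoso/CN=contoso-root"

# Client certificate: key, CSR, then sign the CSR with the root CA
openssl genrsa -out client.key 2048
openssl req -new -key client.key -out client.csr \
  -subj "/C=US/ST=North Dakota/L=Fargo/O=Contoso/CN=contoso-client"
openssl x509 -req -in client.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -out client.crt -days 1024 -sha256

# Confirm the client certificate chains to the root before uploading anything
openssl verify -CAfile root.crt client.crt
```

The `FrontendTLSPolicy` in this example references a secret named `ca.bundle`; assuming that secret should hold the root certificate, one way to create it is `kubectl create secret generic ca.bundle -n test-infra --from-file=ca.crt=root.crt` (the secret key name `ca.crt` is an assumption, not taken from this article).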
application-gateway | How To Url Redirect Ingress Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-redirect-ingress-api.md | A redirect sets the response status code returned to clients to understand the p The following figure illustrates an example of a request destined for _contoso.com/summer-promotion_ being redirected to _contoso.com/shop/category/5_. In addition, a second request initiated to contoso.com via http protocol is returned a redirect to initiate a new connection to its https variant. -[ ![A diagram showing the Application Gateway for Containers returning a redirect URL to a client.](./media/how-to-url-redirect-ingress-api/url-redirect.png) ](./media/how-to-url-redirect-ingress-api/url-redirect.png#lightbox) +[![A diagram showing the Application Gateway for Containers returning a redirect URL to a client.](./media/how-to-url-redirect-ingress-api/url-redirect.png)](./media/how-to-url-redirect-ingress-api/url-redirect.png#lightbox) ## Prerequisites |
application-gateway | How To Url Rewrite Gateway Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-gateway-api.md | The following figure illustrates an example of a request destined for _contoso.c 1. If following the BYO deployment strategy, ensure you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md). 2. If following the ALB managed deployment strategy, ensure you provision your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provision the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).-3. Deploy sample HTTP application +3. Deploy sample HTTP application: ++ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate traffic splitting / weighted round robin support. - Apply the following deployment.yaml file on your cluster to deploy a sample TLS certificate to demonstrate redirect capabilities. 
- ```bash- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml + kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml ```+ + This command creates the following on your cluster: - This command creates the following on your cluster: -- - A namespace called `test-infra` - - One service called `echo` in the `test-infra` namespace - - One deployment called `echo` in the `test-infra` namespace - - One secret called `listener-tls-secret` in the `test-infra` namespace + - A namespace called `test-infra` + - Two services called `backend-v1` and `backend-v2` in the `test-infra` namespace + - Two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace ## Deploy the required Gateway API resources |
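After the Gateway API resources are in place, the URL rewrite itself is expressed as a filter on an HTTPRoute rule. The following is a minimal sketch only; the gateway name `gateway-01`, the path values, and the backend port are illustrative assumptions, not taken from this article:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: rewrite-route
  namespace: test-infra
spec:
  parentRefs:
  - name: gateway-01
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /shop
    filters:
    - type: URLRewrite            # rewrite the matched prefix before proxying
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /store
    backendRefs:
    - name: backend-v1
      port: 8080
```

With this rule, a request for `/shop/category/5` would be forwarded to the backend as `/store/category/5`, while the client-facing URL is unchanged.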
application-gateway | Migrate From Agic To Agc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/migrate-from-agic-to-agc.md | Prior to migration, it is important to identify any dependencies on Application Such dependencies include: - Web Application Firewall (WAF)-- Frontend Mutual Authentication - Private IP - Ports other than 80 and 443 - Configurable request timeout values |
application-gateway | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/overview.md | Application Gateway for Containers supports the following features for traffic m - Automatic retries - Autoscaling - Availability zone resiliency-- Default and custom health probes+- Custom and default health probes - ECDSA and RSA certificate support+- gRPC - Header rewrite - HTTP/2 - HTTPS traffic management: Application Gateway for Containers supports the following features for traffic m - Query string - Methods - Ports (80/443)-- Mutual authentication (mTLS) to backend target+- Mutual authentication (mTLS) to frontend, backend, or end-to-end - Server-sent event (SSE) support - Traffic splitting / weighted round robin - TLS policies |
application-gateway | Quickstart Deploy Application Gateway For Containers Alb Controller | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-deploy-application-gateway-for-containers-alb-controller.md | You need to complete the following tasks before deploying Application Gateway fo az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \ --namespace $HELM_NAMESPACE \- --version 1.0.2 \ + --version 1.2.3 \ --set albController.namespace=$CONTROLLER_NAMESPACE \ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) ``` You need to complete the following tasks before deploying Application Gateway fo az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \ --namespace $HELM_NAMESPACE \- --version 1.0.2 \ + --version 1.2.3 \ --set albController.namespace=$CONTROLLER_NAMESPACE \ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) ``` |
application-gateway | Troubleshooting Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/troubleshooting-guide.md | Example output: | NAME | READY | UP-TO-DATE | AVAILABLE | AGE | CONTAINERS | IMAGES | SELECTOR | | | -- | - | | - | -- | - | -- |-| alb-controller | 2/2 | 2 | 2 | 18d | alb-controller | mcr.microsoft.com/application-lb/images/alb-controller:**1.0.2** | app=alb-controller | -| alb-controller-bootstrap | 1/1 | 1 | 1 | 18d | alb-controller-bootstrap | mcr.microsoft.com/application-lb/images/alb-controller-bootstrap:**1.0.2** | app=alb-controller-bootstrap | +| alb-controller | 2/2 | 2 | 2 | 18d | alb-controller | mcr.microsoft.com/application-lb/images/alb-controller:**1.2.3** | app=alb-controller | +| alb-controller-bootstrap | 1/1 | 1 | 1 | 18d | alb-controller-bootstrap | mcr.microsoft.com/application-lb/images/alb-controller-bootstrap:**1.2.3** | app=alb-controller-bootstrap | -In this example, the ALB controller version is **1.0.2**. +In this example, the ALB controller version is **1.2.3**. The ALB Controller version can be upgraded by running the `helm upgrade alb-controller` command. For more information, see [Install the ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#install-the-alb-controller). |
application-gateway | Ingress Controller Add Health Probes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-add-health-probes.md | -By default, Ingress controller will provision an HTTP GET probe for the exposed pods. +By default, Ingress controller provisions an HTTP GET probe for the exposed pods. The probe properties can be customized by adding a [Readiness or Liveness Probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) to your `deployment`/`pod` spec. +> [!TIP] +> Also see [What is Application Gateway for Containers](for-containers/overview.md). + ## With `readinessProbe` or `livenessProbe` ```yaml apiVersion: networking.k8s.io/v1 |
application-gateway | Ingress Controller Annotations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-annotations.md | Title: Application Gateway Ingress Controller annotations -description: This article provides documentation on the annotations specific to the Application Gateway Ingress Controller. +description: This article provides documentation on the annotations that are specific to the Application Gateway Ingress Controller. Previously updated : 5/13/2024 Last updated : 9/17/2024 # Annotations for Application Gateway Ingress Controller -The Kubernetes Ingress resource can be annotated with arbitrary key/value pairs. AGIC relies on annotations to program Application Gateway features, which aren't configurable using the Ingress YAML. Ingress annotations are applied to all HTTP settings, backend pools, and listeners derived from an ingress resource. +You can annotate the Kubernetes ingress resource with arbitrary key/value pairs. Application Gateway Ingress Controller (AGIC) relies on annotations to program Azure Application Gateway features that aren't configurable via the ingress YAML. Ingress annotations are applied to all HTTP settings, backend pools, and listeners derived from an ingress resource. ++> [!TIP] +> Also see [What is Application Gateway for Containers](for-containers/overview.md). ## List of supported annotations -For an Ingress resource to be observed by AGIC, it **must be annotated** with `kubernetes.io/ingress.class: azure/application-gateway`. Only then AGIC works with the Ingress resource in question. +For AGIC to observe an ingress resource, the resource *must be annotated* with `kubernetes.io/ingress.class: azure/application-gateway`. 
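As a minimal illustration of the required class annotation (the service name and port here are placeholders, not from this article), an ingress that AGIC observes carries the annotation in its metadata:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```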
-| Annotation Key | Value Type | Default Value | Allowed Values | +| Annotation key | Value type | Default value | Allowed values | | -- | -- | -- | -- | | [appgw.ingress.kubernetes.io/backend-path-prefix](#backend-path-prefix) | `string` | `nil` || | [appgw.ingress.kubernetes.io/backend-hostname](#backend-hostname) | `string` | `nil` || For an Ingress resource to be observed by AGIC, it **must be annotated** with `k ## Backend Path Prefix -The following annotation allows the backend path specified in an ingress resource to be rewritten with prefix specified in this annotation. It allows users to expose services whose endpoints are different than endpoint names used to expose a service in an ingress resource. +The following annotation allows the backend path specified in an ingress resource to be rewritten with the specified prefix. Use it to expose services whose endpoints are different from the endpoint names that you use to expose a service in an ingress resource. ### Usage spec: number: 80 ``` -In the previous example, you've defined an ingress resource named `go-server-ingress-bkprefix` with an annotation `appgw.ingress.kubernetes.io/backend-path-prefix: "/test/"`. The annotation tells application gateway to create an HTTP setting, which has a path prefix override for the path `/hello` to `/test/`. +The preceding example defines an ingress resource named `go-server-ingress-bkprefix` with an annotation named `appgw.ingress.kubernetes.io/backend-path-prefix: "/test/"`. The annotation tells Application Gateway to create an HTTP setting that has a path prefix override for the path `/hello` to `/test/`. -> [!NOTE] -> In the above example, only one rule is defined. However, the annotations are applicable to the entire ingress resource, so if a user defined multiple rules, the backend path prefix would be set up for each of the paths specified. 
If a user wants different rules with different path prefixes (even for the same service), they would need to define different ingress resources. +The example defines only one rule. However, the annotations apply to the entire ingress resource. So if you define multiple rules, you set up the backend path prefix for each of the specified paths. If you want different rules with different path prefixes (even for the same service), you need to define different ingress resources. ## Backend Hostname -This annotation allows us to specify the host name that Application Gateway should use while talking to the Pods. +Use the following annotation to specify the hostname that Application Gateway should use while talking to the pods. ### Usage spec: ## Custom Health Probe -Application Gateway [can be configured](./application-gateway-probe-overview.md) to send custom health probes to the backend address pool. When these annotations are present, Kubernetes Ingress controller [creates a custom probe](./application-gateway-create-probe-portal.md) to monitor the backend application and applies the changes to the application gateway. +You can [configure Application Gateway](./application-gateway-probe-overview.md) to send custom health probes to the backend address pool. When the following annotations are present, the Kubernetes ingress controller [creates a custom probe](./application-gateway-create-probe-portal.md) to monitor the backend application. The controller then applies the changes to Application Gateway. 
-`health-probe-hostname`: This annotation allows a custom hostname on the health probe.<br> -`health-probe-port`: This annotation configures a custom health probe port.<br> -`health-probe-path`: This annotation defines a path for the health probe.<br> -`health-probe-status-code`: This annotation allows the health probe to accept different HTTP status codes.<br> -`health-probe-interval`: This annotation defines the interval that the health probe runs at.<br> -`health-probe-timeout`: This annotation defines how long the health probe will wait for a response before failing the probe.<br> -`health-probe-unhealthy-threshold`: This annotation defines how many health probes must fail for the backend to be marked as unhealthy. +- `health-probe-hostname`: This annotation allows a custom hostname on the health probe. +- `health-probe-port`: This annotation configures a custom port for the health probe. +- `health-probe-path`: This annotation defines a path for the health probe. +- `health-probe-status-code`: This annotation allows the health probe to accept different HTTP status codes. +- `health-probe-interval`: This annotation defines the interval at which the health probe runs. +- `health-probe-timeout`: This annotation defines how long the health probe waits for a response before failing the probe. +- `health-probe-unhealthy-threshold`: This annotation defines how many health probes must fail for the backend to be marked as unhealthy. ### Usage spec: ## TLS Redirect -Application Gateway [can be configured](./redirect-overview.md) to automatically redirect HTTP URLs to their HTTPS counterparts. When this annotation is present and TLS is properly configured, Kubernetes Ingress controller creates a [routing rule with a redirection configuration](./redirect-http-to-https-portal.md#add-a-routing-rule-with-a-redirection-configuration) and applies the changes to your Application Gateway. The redirect created will be HTTP `301 Moved Permanently`. 
+You can [configure Application Gateway](./redirect-overview.md) to automatically redirect HTTP URLs to their HTTPS counterparts. When this annotation is present and TLS is properly configured, the Kubernetes ingress controller creates a [routing rule with a redirection configuration](./redirect-http-to-https-portal.md#add-a-routing-rule-with-a-redirection-configuration). The controller then applies the changes to your Application Gateway instance. The created redirect is HTTP `301 Moved Permanently`. ### Usage spec: ## Connection Draining -`connection-draining`: This annotation allows us to specify whether to enable connection draining. -`connection-draining-timeout`: This annotation allows us to specify a timeout, after which Application Gateway terminates the requests to the draining backend endpoint. +Use the following annotations if you want to use connection draining: ++- `connection-draining`: This annotation specifies whether to enable connection draining. +- `connection-draining-timeout`: This annotation specifies a timeout, after which Application Gateway terminates the requests to the draining backend endpoint. ### Usage spec: ## Cookie Based Affinity -The following annotation allows you to specify whether to enable cookie based affinity. +Use the following annotation to enable cookie-based affinity. ### Usage spec: ## Request Timeout -The following annotation allows you to specify the request timeout in seconds, after which Application Gateway fails the request if response is not received. +Use the following annotation to specify the request timeout in seconds. After the timeout, Application Gateway fails a request if the response isn't received. ### Usage spec: ## Use Private IP -The following annotation allows you to specify whether to expose this endpoint on Private IP of Application Gateway. +Use the following annotation to specify whether to expose this endpoint on a private IP address of Application Gateway. 
-> [!NOTE] -> * For Application Gateway that doesn't have a private IP, Ingresses with `appgw.ingress.kubernetes.io/use-private-ip: "true"` is ignored. This is reflected in the controller logs and ingress events for those ingresses with `NoPrivateIP` warning. +For an Application Gateway instance that doesn't have a private IP, ingresses with `appgw.ingress.kubernetes.io/use-private-ip: "true"` are ignored. The controller logs and ingress events for those ingresses show a `NoPrivateIP` warning. ### Usage spec: ## Override Frontend Port -The annotation allows you to configure a frontend listener to use different ports other than 80/443 for http/https. +Use the following annotation to configure a frontend listener to use ports other than 80 for HTTP and 443 for HTTPS. -If the port is within the App Gw authorized range (1 - 64999), this listener will be created on this specific port. If an invalid port or no port is set in the annotation, the configuration will fall back on default 80 or 443. +If the port is within the Application Gateway authorized range (1 to 64999), the listener is created on this specific port. If you set an invalid port or no port in the annotation, the configuration uses the default of 80 or 443. ### Usage spec: ``` > [!NOTE]->External request will need to target http://somehost:8080 instead of http://somehost. +> External requests need to target `http://somehost:8080` instead of `http://somehost`. ## Backend Protocol -The following annotation allows you to specify the protocol that Application Gateway should use while communicating with the pods. Supported Protocols are `http` and `https`. +Use the following to specify the protocol that Application Gateway should use when it communicates with the pods. Supported protocols are HTTP and HTTPS. -> [!NOTE] -> While self-signed certificates are supported on Application Gateway, currently AGIC only supports `https` when pods are using a certificate signed by a well-known CA. 
-> -> Don't use port 80 with HTTPS and port 443 with HTTP on the pods. +Although self-signed certificates are supported on Application Gateway, AGIC currently supports HTTPS only when pods use a certificate signed by a well-known certificate authority. ++Don't use port 80 with HTTPS and port 443 with HTTP on the pods. ### Usage spec: ## Hostname Extension -Application Gateway can be configured to accept multiple hostnames. The hostname-extention annotation allows for this by letting you define multiple hostnames including wildcard hostnames. This will append the hostnames onto the FQDN that is defined in the ingress spec.rules.host on the frontend listener so it is [configured as a multisite listener.](./multiple-site-overview.md) +You can configure Application Gateway to accept multiple hostnames. Use the `hostname-extension` annotation to define multiple hostnames, including wildcard hostnames. This action appends the hostnames onto the FQDN that's defined in the ingress `spec.rules.host` information on the frontend listener, so it's [configured as a multisite listener](./multiple-site-overview.md). ### Usage spec: number: 443 ``` -> [!NOTE] -> In the above example the listener would be configured to accept traffic for the hostnames "hostname1.contoso.com" and "hostname2.contoso.com" +The preceding example configures the listener to accept traffic for the hostnames `hostname1.contoso.com` and `hostname2.contoso.com`. ## WAF Policy for Path -This annotation allows you to attach an already created WAF policy to the list paths for a host within a Kubernetes Ingress resource being annotated. +Use the following annotation to attach an existing web application firewall (WAF) policy to the list paths for a host within a Kubernetes ingress resource that's being annotated. The WAF policy is applied to both `/ad-server` and `/auth` URLs. ### Usage spec: pathType: Exact ``` -> [!NOTE] -> The WAF policy will be applied to both /ad-server and /auth URLs. 
- ## Application Gateway SSL Certificate -The SSL certificate [can be configured to Application Gateway](/cli/azure/network/application-gateway/ssl-cert#az-network-application-gateway-ssl-cert-create) either from a local PFX certificate file or a reference to an Azure Key Vault unversioned secret ID. When the annotation is present with a certificate name and the certificate is pre-installed in Application Gateway, Kubernetes Ingress controller will create a routing rule with a HTTPS listener and apply the changes to your App Gateway. appgw-ssl-certificate annotation can also be used together with ssl-redirect annotation in case of SSL redirect. --Please refer to appgw-ssl-certificate feature for more details. +You can [configure the SSL certificate to Application Gateway](/cli/azure/network/application-gateway/ssl-cert#az-network-application-gateway-ssl-cert-create) from either a local PFX certificate file or a reference to an Azure Key Vault unversioned secret ID. When the annotation is present with a certificate name and the certificate is preinstalled in Application Gateway, the Kubernetes ingress controller creates a routing rule with an HTTPS listener and applies the changes to your Application Gateway instance. You can also use the `appgw-ssl-certificate` annotation together with an `ssl-redirect` annotation in the case of an SSL redirect. > [!NOTE]-> Annotation "appgw-ssl-certificate" will be ignored when TLS Spec is defined in ingress at the same time. If a user wants different certs with different hosts(multi tls certificate termination), they would need to define different ingress resources. +> The `appgw-ssl-certificate` annotation is ignored when the TLS specification is defined in ingress at the same time. If you want different certificates with different hosts (termination of multiple TLS certificates), you need to define different ingress resources. 
### Usage spec: ## Application Gateway SSL Profile -Users can configure a ssl profile on the Application Gateway per listener. When the annotation is present with a profile name and the profile is pre-installed in the Application Gateway, Kubernetes Ingress controller will create a routing rule with a HTTPS listener and apply the changes to your App Gateway. +You can configure an SSL profile on the Application Gateway instance per listener. When the annotation is present with a profile name and the profile is preinstalled in Application Gateway, the Kubernetes ingress controller creates a routing rule with an HTTPS listener and applies the changes to your Application Gateway instance. ### Usage spec: ## Application Gateway Trusted Root Certificate -Users now can configure their own root certificates to Application Gateway to be trusted via AGIC. The annotation appgw-trusted-root-certificate can be used together with annotation backend-protocol to indicate end-to-end ssl encryption, multiple root certificates, separated by comma, if specified, for example, "name-of-my-root-cert1,name-of-my-root-certificate2". +You now can configure your own root certificates to Application Gateway to be trusted via AGIC. You can use the `appgw-trusted-root-certificate` annotation together with the `backend-protocol` annotation to indicate end-to-end SSL encryption. If you specify multiple root certificates, separate them with a comma; for example, `name-of-my-root-cert1,name-of-my-root-cert2`. ### Usage spec: ## Rewrite Rule Set -The following annotation allows you to assign an existing rewrite rule set to the corresponding request routing rule. +Use the following annotation to assign an existing rewrite rule set to the corresponding request routing rule. ### Usage spec: ## Rewrite Rule Set Custom Resource -> [!Note] -> [Application Gateway for Containers](https://aka.ms/agc) has been released, which introduces numerous performance, resilience, and feature changes. 
Please consider leveraging Application Gateway for Containers for your next deployment. -> URL Rewrite rules for Application Gateway for Containers may be found [here for Gateway API](./for-containers/how-to-url-rewrite-gateway-api.md) and [here for Ingress API](for-containers/how-to-url-rewrite-ingress-api.md). -> Header Rewrite rules for Application Gateway for Containers may be found [here for Gateway API](./for-containers/how-to-header-rewrite-gateway-api.md). +> [!NOTE] +> The release of [Application Gateway for Containers](https://aka.ms/agc) introduces numerous performance, resilience, and feature changes. Consider using Application Gateway for Containers for your next deployment. +> +> You can find URL rewrite rules for Application Gateway for Containers in [this article about the Gateway API](./for-containers/how-to-url-rewrite-gateway-api.md) and [this article about the Ingress API](for-containers/how-to-url-rewrite-ingress-api.md). You can find header rewrite rules for Application Gateway for Containers in [this article about the Gateway API](./for-containers/how-to-header-rewrite-gateway-api.md). ++Application Gateway allows you to rewrite selected contents of requests and responses. With this feature, you can translate URLs, change query string parameters, and modify request and response headers. You can also use this feature to add conditions to ensure that the URL or the specified headers are rewritten only when certain conditions are met. Rewrite Rule Set Custom Resource brings this feature to AGIC. -> [!Note] -> This feature is supported since 1.6.0-rc1. Use [`appgw.ingress.kubernetes.io/rewrite-rule-set`](#rewrite-rule-set), which allows using an existing rewrite rule set on Application Gateway. +HTTP headers allow a client and server to pass additional information with a request or response. 
By rewriting these headers, you can accomplish important tasks like adding security-related header fields (for example, `HSTS` or `X-XSS-Protection`), removing response header fields that might reveal sensitive information, and removing port information from `X-Forwarded-For` headers. -Application Gateway allows you to rewrite selected content of requests and responses. With this feature, you can translate URLs, query string parameters as well as modify request and response headers. It also allows you to add conditions to ensure that the URL or the specified headers are rewritten only when certain conditions are met. These conditions are based on the request and response information. Rewrite Rule Set Custom Resource brings this feature to AGIC. +With the URL rewrite capability, you can: -HTTP headers allow a client and server to pass additional information with a request or response. By rewriting these headers, you can accomplish important tasks, such as adding security-related header fields like HSTS/ X-XSS-Protection, removing response header fields that might reveal sensitive information, and removing port information from X-Forwarded-For headers. +- Rewrite the hostname, path, and query string of the request URL. +- Choose to rewrite the URL of all requests or only requests that match one or more of the conditions that you set. These conditions are based on the request and response properties (request header, response header, and server variables). +- Choose to route the request based on either the original URL or the rewritten URL. -With URL rewrite capability, you can: - Rewrite the host name, path and query string of the request URL - Choose to rewrite the URL of all requests or only those requests which match one or more of the conditions you set. These conditions are based on the request and response properties (request header, response header and server variables). 
- Choose to route the request based on either the original URL or the rewritten URL +> [!NOTE] +> This feature has been supported since version 1.6.0-rc1. Use [`appgw.ingress.kubernetes.io/rewrite-rule-set`](#rewrite-rule-set), which allows using an existing rewrite rule set on Application Gateway. ### Usage spec: ## Rule Priority -This annotation allows for application gateway ingress controller to explicitly set the priority of the associated [Request Routing Rules.](./multiple-site-overview.md#request-routing-rules-evaluation-order) +The following annotation allows the Application Gateway ingress controller to explicitly set the priority of the associated [request routing rules](./multiple-site-overview.md#request-routing-rules-evaluation-order). ### Usage spec: number: 8080 ``` -In the above example the request routing rule would have a priority of 10 set. +The preceding example sets a priority of 10 for the request routing rule. |
application-gateway | Ingress Controller Autoscale Pods | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-autoscale-pods.md | Use the following two components: > The Azure Kubernetes Metrics Adapter is no longer maintained. Kubernetes Event-driven Autoscaling (KEDA) is an alternative.<br> > Also see [Application Gateway for Containers](for-containers/overview.md). +> [!TIP] +> Also see [What is Application Gateway for Containers](for-containers/overview.md). + ## Setting up Azure Kubernetes Metric Adapter 1. First, create a Microsoft Entra service principal and assign it `Monitoring Reader` access over the Application Gateway's resource group. |
application-gateway | Ingress Controller Cookie Affinity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-cookie-affinity.md | +> [!TIP] +> Also see [What is Application Gateway for Containers](for-containers/overview.md). + ## Example ```yaml apiVersion: networking.k8s.io/v1 |
application-gateway | Ingress Controller Disable Addon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-disable-addon.md | -Application Gateway Ingress Controller (AGIC) deployed as an AKS add-on allows you to enable and disable the add-on with one line in Azure CLI. The life cycle of the Application Gateway will differ when you disable the AGIC add-on, depending on if the Application Gateway was created by the AGIC add-on, or if it was deployed separately from the AGIC add-on. You can run the same command to re-enable the AGIC add-on if you ever disable it, or to enable the AGIC add-on using an existing AKS cluster and Application Gateway. +Application Gateway Ingress Controller (AGIC) deployed as an AKS add-on allows you to enable and disable the add-on with one line in Azure CLI. The life cycle of the Application Gateway differs when you disable the AGIC add-on, depending on whether the Application Gateway was created by the AGIC add-on, or whether it was deployed separately from the AGIC add-on. You can run the same command to re-enable the AGIC add-on if you ever disable it, or to enable the AGIC add-on using an existing AKS cluster and Application Gateway. ++> [!TIP] +> Also see [What is Application Gateway for Containers](for-containers/overview.md). ## Disabling AGIC add-on with associated Application Gateway -If the AGIC add-on automatically deployed the Application Gateway for you when you first set everything up, then disabling the AGIC add-on will by default delete the Application Gateway based on a couple criteria. There are two criteria that the AGIC add-on looks for to determine if it should delete the associated Application Gateway when you disable it: +If the AGIC add-on automatically deployed the Application Gateway for you when you first set up everything, then disabling the AGIC add-on will by default delete the Application Gateway based on a couple of criteria. 
There are two criteria that the AGIC add-on looks for to determine if it should delete the associated Application Gateway when you disable it: - Is the Application Gateway that the AGIC add-on is associated with deployed in the MC_* node resource group? - Does the Application Gateway that the AGIC add-on is associated with have the tag "created-by: ingress-appgw"? The tag is used by AGIC to determine if the Application Gateway was deployed by the add-on or not. -If both criteria are met, then the AGIC add-on will delete the Application Gateway it created when the add-on is disabled; however, it won't delete the public IP or the subnet in which the Application Gateway was deployed with/in. If the first criteria is not met, then it won't matter if the Application Gateway has the "created-by: ingress-appgw" tag - disabling the add-on won't delete the Application Gateway. Likewise, if the second criteria is not met, i.e. the Application Gateway lacks that tag, then disabling the add-on won't delete the Application Gateway in the MC_* node resource group. +If both criteria are met, then the AGIC add-on will delete the Application Gateway it created when the add-on is disabled; however, it won't delete the public IP or the subnet in which the Application Gateway was deployed. If the first criterion isn't met, then it won't matter if the Application Gateway has the "created-by: ingress-appgw" tag - disabling the add-on won't delete the Application Gateway. Likewise, if the second criterion isn't met (that is, the Application Gateway lacks that tag), disabling the add-on won't delete the Application Gateway in the MC_* node resource group. > [!TIP] > If you don't want the Application Gateway to be deleted when disabling the add-on, but it meets both criteria, then remove the "created-by: ingress-appgw" tag to prevent the add-on from deleting your Application Gateway. 
az aks enable-addons -n <AKS-cluster-name> -g <AKS-cluster-resource-group> -a in ``` ## Next steps-For more details on how to enable the AGIC add-on using an existing Application Gateway and AKS cluster, see [AGIC add-on brownfield deployment](tutorial-ingress-controller-add-on-existing.md). +For more information on how to enable the AGIC add-on using an existing Application Gateway and AKS cluster, see [AGIC add-on brownfield deployment](tutorial-ingress-controller-add-on-existing.md). |
application-gateway | Ingress Controller Install Existing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md | Gateway should that become necessary [Helm](/azure/aks/kubernetes-helm) is a package manager for Kubernetes, used to install the `application-gateway-kubernetes-ingress` package. > [!NOTE]-> If you use [Cloud Shell](https://shell.azure.com/), you don't need to install Helm. Azure Cloud Shell comes with Helm version 3. Skip the first step and just add the AGIC Helm repository. +> If you use [Cloud Shell](https://shell.azure.com/), you don't need to install Helm. Azure Cloud Shell comes with Helm version 3. -1. Install [Helm](/azure/aks/kubernetes-helm) and run the following to add `application-gateway-kubernetes-ingress` helm package: +Install [Helm](/azure/aks/kubernetes-helm) and run the following: - - *Kubernetes RBAC enabled* AKS cluster + - *Kubernetes RBAC enabled* AKS cluster - ```bash - kubectl create serviceaccount --namespace kube-system tiller-sa - kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller-sa - helm init --tiller-namespace kube-system --service-account tiller-sa - ``` --2. Add the AGIC Helm repository: - ```bash - helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/ - helm repo update - ``` + ```bash + kubectl create serviceaccount --namespace kube-system tiller-sa + kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller-sa + helm init --tiller-namespace kube-system --service-account tiller-sa + ``` ## Azure Resource Manager Authentication kubectl apply -f $file -n $namespace ## Install Ingress Controller as a Helm Chart -In the first few steps, we install Helm's Tiller on your Kubernetes cluster. 
Use [Cloud Shell](https://shell.azure.com/) to install the AGIC Helm package: +In the first few steps, we installed Helm's Tiller on your Kubernetes cluster. Use [Cloud Shell](https://shell.azure.com/) to install the AGIC Helm package: -1. Add the `application-gateway-kubernetes-ingress` helm repo and perform a helm update +1. Perform a Helm repo update ```bash- helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/ helm repo update ``` In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use > [!NOTE] > The `<identity-client-id>` is a property of the Microsoft Entra Workload ID you set up in the previous section. You can retrieve this information by running the following command: `az identity show -g <resourcegroup> -n <identity-name>`, where `<resourcegroup>` is the resource group hosting the infrastructure resources related to the AKS cluster, Application Gateway, and managed identity. -1. Install Helm chart `application-gateway-kubernetes-ingress` with the `helm-config.yaml` configuration from the previous step +1. 
Install the Helm chart with the `helm-config.yaml` configuration from the previous step ```bash- helm install -f <helm-config.yaml> application-gateway-kubernetes-ingress/ingress-azure + helm install agic-controller oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure --version 1.7.5 -f helm-config.yaml ``` Alternatively, you can combine the `helm-config.yaml` and the Helm command in one step: ```bash- helm install ./helm/ingress-azure \ - --name ingress-azure \ + helm install agic-controller oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure \ + --version 1.7.5 \ --namespace default \ --debug \ --set appgw.name=applicationgatewayABCD \ Apply the Helm changes: helm upgrade \ --recreate-pods \ -f helm-config.yaml \- ingress-azure application-gateway-kubernetes-ingress/ingress-azure + agic-controller + oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure ``` As a result, your AKS cluster has a new instance of `AzureIngressProhibitedTarget` called `prohibit-all-targets`: Since Helm with `appgw.shared=true` and the default `prohibit-all-targets` block Let's assume that we already have a working AKS cluster, Application Gateway, and configured AGIC in our cluster. We have an Ingress for `prod.contoso.com` and are successfully serving traffic for it from the cluster. We want to add `staging.contoso.com` to our-existing Application Gateway, but need to host it on a [VM](https://azure.microsoft.com/services/virtual-machines/). We -are going to reuse the existing Application Gateway and manually configure a listener and backend pools for +existing Application Gateway, but need to host it on a [VM](https://azure.microsoft.com/services/virtual-machines/). We're going to reuse the existing Application Gateway and manually configure a listener and backend pools for `staging.contoso.com`. 
But manually tweaking Application Gateway config (using [portal](https://portal.azure.com), [ARM APIs](/rest/api/resources/) or [Terraform](https://www.terraform.io/)) would conflict with AGIC's assumptions of full ownership. Shortly after we apply |
application-gateway | Ingress Controller Install New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md | -installed in an environment with no pre-existing components. +installed in an environment with no preexisting components. > [!TIP] > Also see [What is Application Gateway for Containers](for-containers/overview.md). choose to use another environment, ensure the following command-line tools are i ## Create an Identity -Follow the steps below to create a Microsoft Entra [service principal object](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object). Record the `appId`, `password`, and `objectId` values - these values will be used in the following steps. +Follow the steps below to create a Microsoft Entra [service principal object](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object). Record the `appId`, `password`, and `objectId` values - these values are used in the following steps. 1. Create AD service principal ([Read more about Azure RBAC](../role-based-access-control/overview.md)): ```azurecli Follow the steps below to create a Microsoft Entra [service principal object](.. ``` The output of this command is `objectId`, which will be used in the Azure Resource Manager template below -1. Create the parameter file that will be used in the Azure Resource Manager template deployment later. +1. Create the parameter file that is used in the Azure Resource Manager template deployment later. ```bash cat <<EOF > parameters.json { Follow the steps below to create a Microsoft Entra [service principal object](.. 
To deploy a **Kubernetes RBAC** enabled cluster, set the `aksEnableRBAC` field to `true` ## Deploy Components-This step will add the following components to your subscription: +This step adds the following components to your subscription: - [Azure Kubernetes Service](/azure/aks/intro-kubernetes) - [Application Gateway](./overview.md) v2 With the instructions in the previous section, we created and configured a new A ### Set up Kubernetes Credentials For the following steps, we need to set up the [kubectl](https://kubectl.docs.kubernetes.io/) command,-which we'll use to connect to our new Kubernetes cluster. [Cloud Shell](https://shell.azure.com/) has `kubectl` already installed. We'll use `az` CLI to obtain credentials for Kubernetes. +which we use to connect to our new Kubernetes cluster. [Cloud Shell](https://shell.azure.com/) has `kubectl` already installed. We'll use `az` CLI to obtain credentials for Kubernetes. Get credentials for your newly deployed AKS ([read more](/azure/aks/manage-azure-rbac#use-azure-rbac-for-kubernetes-authorization-with-kubectl)): To install Microsoft Entra Pod Identity to your cluster: ``` ### Install Helm-[Helm](/azure/aks/kubernetes-helm) is a package manager for Kubernetes. We'll use it to install the `application-gateway-kubernetes-ingress` package. +[Helm](/azure/aks/kubernetes-helm) is a package manager for Kubernetes. We use it to install the `application-gateway-kubernetes-ingress` package. > [!NOTE] > If you use [Cloud Shell](https://shell.azure.com/), you don't need to install Helm. Azure Cloud Shell comes with Helm version 3. Skip the first step and just add the AGIC Helm repository. -1. Install [Helm](/azure/aks/kubernetes-helm) and run the following to add `application-gateway-kubernetes-ingress` helm package: +1. Install [Helm](/azure/aks/kubernetes-helm) and run the following: - *Kubernetes RBAC enabled* AKS cluster To install Microsoft Entra Pod Identity to your cluster: helm init ``` -2. 
Add the AGIC Helm repository: - ```bash - helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/ - helm repo update - ``` - ### Install Ingress Controller Helm Chart 1. Use the `deployment-outputs.json` file created above and create the following variables. To install Microsoft Entra Pod Identity to your cluster: 1. Install the Application Gateway ingress controller package: ```bash- helm install -f helm-config.yaml --generate-name application-gateway-kubernetes-ingress/ingress-azure + helm install agic-controller oci://mcr.microsoft.com/azure-application-gateway/charts/ingress-azure --version 1.7.5 -f helm-config.yaml ``` ## Install a Sample App |
azure-netapp-files | Azure Netapp Files Create Volumes Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md | This article shows you how to create an SMB3 volume. For NFS volumes, see [Creat * You must have already set up a capacity pool. See [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md). * A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md). * [!INCLUDE [50 GiB volume preview](./includes/50-gib-volume.md)]-* The [non-browsable shares](#non-browsable-share) and [access-based enumeration](#access-based-enumeration) features are currently in preview. You must register each feature before you can use it: --1. Register the feature: -- ```azurepowershell-interactive - Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSmbNonBrowsable - Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBAccessBasedEnumeration - ``` --2. Check the status of the feature registration: -- > [!NOTE] - > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing. -- ```azurepowershell-interactive - Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSmbNonBrowsable - Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBAccessBasedEnumeration - ``` -You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status. ## Configure Active Directory connections Before creating an SMB volume, you need to create an Active Directory connection * <a name="access-based-enumeration"></a> If you want to enable access-based enumeration, select **Enable Access Based Enumeration**. 
- This feature will hide directories and files created under a share from users who do not have access permissions to the files or folders under the share. Users will still be able to view the share. + Hide directories and files created under a share from users who don't have access permissions to the files or folders under the share. Users are still able to view the share. * <a name="non-browsable-share"></a> You can enable the **non-browsable-share feature.** - This feature prevents the Windows client from browsing the share. The share does not show up in the Windows File Browser or in the list of shares when you run the `net view \\server /all` command. -- > [!IMPORTANT] - > Both the access-based enumeration and non-browsable shares features are currently in preview. If this is your first time using either, refer to the steps in [Before you begin](#before-you-begin) to register either feature. + Prevent the Windows client from browsing the share. The share doesn't show up in the Windows File Browser or in the list of shares when you run the `net view \\server /all` command. * <a name="continuous-availability"></a>If you want to enable Continuous Availability for the SMB volume, select **Enable Continuous Availability**. |
azure-netapp-files | Create Volumes Dual Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md | To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu * A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md). * [!INCLUDE [50 GiB volume preview](./includes/50-gib-volume.md)]-* The [non-browsable shares](#non-browsable-share) and [access-based enumeration](#access-based-enumeration) features are currently in preview. You must register each feature before you can use it: --1. Register the feature: -- ```azurepowershell-interactive - Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSmbNonBrowsable - Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBAccessBasedEnumeration - ``` --2. Check the status of the feature registration: -- > [!NOTE] - > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing. -- ```azurepowershell-interactive - Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSmbNonBrowsable - Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSMBAccessBasedEnumeration - ``` -You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status. ## Considerations You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` * <a name="access-based-enumeration"></a> If you want to enable access-based enumeration, select **Enable Access Based Enumeration**. - This feature hides directories and files created under a share from users who do not have access permissions. You can still view the share. 
You can only enable access-based enumeration if the dual-protocol volume uses NTFS security style. + Access-based enumeration hides directories and files created under a share from users who do not have access permissions. Users can still view the share. You can only enable access-based enumeration if the dual-protocol volume uses NTFS security style. * <a name="non-browsable-share"></a> You can enable the **non-browsable-share feature.** This feature prevents the Windows client from browsing the share. The share does not show up in the Windows File Browser or in the list of shares when you run the `net view \\server /all` command. - > [!IMPORTANT] - > The access-based enumeration and non-browsable shares features are currently in preview. If this is your first time using either, refer to the steps in [Before you begin](#before-you-begin) to register the features. - * Customize **Unix Permissions** as needed to specify change permissions for the mount path. The setting does not apply to the files under the mount path. The default setting is `0770`. This default setting grants read, write, and execute permissions to the owner and the group, but no permissions are granted to other users. Registration requirement and considerations apply for setting **Unix Permissions**. Follow the instructions in [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md). |
azure-netapp-files | Reserved Capacity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/reserved-capacity.md | - Title: Reserved capacity for Azure NetApp Files -description: Learn how to optimize TCO with capacity reservations in Azure NetApp Files. ------- Previously updated : 09/16/2024---# Reserved capacity for Azure NetApp Files --You can save money on the storage costs for Azure NetApp Files with capacity reservations. Azure NetApp Files reserved capacity offers you a discount on capacity for storage costs when you commit to a reservation for one or three years, optimizing your TCO. A reservation provides a fixed amount of storage capacity for the term of the reservation. --Azure NetApp Files reserved capacity can significantly reduce your capacity costs for storing data in your Azure NetApp Files volumes. How much you save depends on the total capacity you choose to reserve, and the [service level](azure-netapp-files-service-levels.md) chosen. --For pricing information about reservation capacity for Azure NetApp Files, see [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/). --## Reservation terms for Azure NetApp Files --This section describes the terms of an Azure NetApp Files capacity reservation. -->[!NOTE] ->Azure NetApp Files reserved capacity covers matching capacity pools in the selected service level and region. When using capacity pools configured with [Standard storage with cool access](manage-cool-access.md), only "hot" tier consumption is covered by the reserved capacity benefit. --### Reservation capacity --You can purchase Azure NetApp Files reserved capacity in units of 100 TiB and 1 PiB per month for a one- or three-year term for a particular service level within a region. --### Reservation scope --Azure NetApp Files reserved capacity is available for a single subscription and multiple subscriptions (shared scope). 
When scoped to a single subscription, the reservation discount is applied to the selected subscription only. When scoped to multiple subscriptions, the reservation discount is shared across those subscriptions within the customer's billing context. --A reservation applies to your usage within the purchased scope and cannot be limited to a specific NetApp account, capacity pools, container, or object within the subscription. --Any capacity reservation for Azure NetApp Files covers only the capacity pools within the service level selected. Add-on features such as cross-region replication and backup are not included in the reservation. As soon as you buy a reservation, the capacity charges that match the reservation attributes are charged at the discount rates instead of the pay-as-you go rates. --For more information on Azure reservations, see [What are Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md). --### Supported service level options --Azure NetApp Files reserved capacity is available for Standard, Premium, and Ultra service levels in units of 100 TiB and 1 PiB. --### Requirements for purchase --To purchase reserved capacity: -* You must be in the **Owner** role for at least one Enterprise or individual subscription with pay-as-you-go rates. -* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the EA portal. Or, if that setting is disabled, you must be an EA Admin on the subscription. -* For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can buy Azure NetApp Files reserved capacity. --## Determine required capacity before purchase --When you purchase an Azure NetApp Files reservation, you must choose the region and tier for the reservation. Your reservation is valid only for data stored in that region and tier. For example, suppose you purchase a reservation for Azure NetApp Files *Premium* service level in US East. 
That reservation applies to neither *Premium* capacity pools for that subscription in US West nor capacity pools for other service levels (for example, *Ultra* service level in US East). Additional reservations can be purchased. --Reservations are available for 100-TiB or 1-PiB increments, with higher discounts for 1-PiB increments. When you purchase a reservation in the Azure portal, Microsoft might provide you with recommendations based on your previous usage to help determine which reservation you should purchase. --Purchasing an Azure NetApp Files reserved capacity does not automatically increase your regional capacity. Azure reservations for Azure NetApp Files are not an on-demand capacity guarantee. If your capacity reservation requires a quota increase, it's recommended you complete that before making the reservation. For more information, see [Regional capacity in Azure NetApp Files](regional-capacity-quota.md). --## Purchase Azure NetApp Files reserved capacity --You can purchase Azure NetApp Files reserved capacity through the [Azure portal](https://portal.azure.com/). You can pay for the reservation up front or with monthly payments. For more information about purchasing with monthly payments, see [Purchase Azure reservations with up front or monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md). --To purchase reserved capacity: --1. Navigate to the [**Purchase reservations**](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/Browse_AddCommand) blade in the Azure portal. --2. Select **Azure NetApp Files** to buy a new reservation. --3. Fill in the required fields as described in the table that appears. --4. After you select the parameters for your reservation, the Azure portal displays the cost. The portal also shows the discount percentage over pay-as-you-go billing. --5. In the **Purchase reservations** blade, review the total cost of the reservation. 
You can also provide a name for the reservation. --After you purchase a reservation, it is automatically applied to any existing [Azure NetApp Files capacity pools](azure-netapp-files-set-up-capacity-pool.md) that match the terms of the reservation. If you haven't created any Azure NetApp Files capacity pools, the reservation applies when you create a resource that matches the terms of the reservation. In either case, the term of the reservation begins immediately after a successful purchase. --## Exchange or refund a reservation --You can exchange or refund a reservation, with certain limitations. For more information about Azure Reservations policies, see [Self-service exchanges and refunds for Azure Reservations](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md). --<!-- -### Exchange a reservation --Exchanging a reservation enables you to receive a prorated refund based on the unused portion of the reservation. You can then apply the refund to the purchase price of a new Azure NetApp Files reservation. --There's no limit on the number of exchanges you can make. Also, there's no fee associated with an exchange. The new reservation that you purchase must be of equal or greater value than the prorated credit from the original reservation. An Azure NetApp Files reservation can be exchanged only for another Azure NetApp Files reservation, and not for a reservation for any other Azure service. --### Refund a reservation --You can cancel an Azure NetApp Files reservation at any time. When you cancel, you'll receive a prorated refund based on the remaining term of the reservation, minus a 12% early termination fee. The maximum refund per year is $50,000. --Cancelling a reservation immediately terminates the reservation and returns the remaining months to Microsoft. The remaining prorated balance, minus the fee, will be refunded to your original form of purchase. 
--> --## Expiration of a reservation --When a reservation expires, any Azure NetApp Files capacity that you are using under that reservation is billed at the pay-as-you go rate. Reservations don't renew automatically. --An email notification is sent 30 days prior to the expiration of the reservation, and again on the expiration date. To continue taking advantage of the cost savings that a reservation provides, renew it no later than the expiration date. --## Need help? Contact us --If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458). --## Next steps --* [What are Azure Reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md) -* [Understand how reservation discounts are applied to Azure storage services](../cost-management-billing/reservations/understand-storage-charges.md) |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | Azure NetApp Files is updated regularly. This article provides a summary about t ## September 2024 -* [Reserved capacity](reserved-capacity.md) is now generally available (GA) +* [Access-based enumeration](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) is now generally available (GA) - Pay-as-you-go pricing is the most convenient way to purchase cloud storage when your workloads are dynamic or changing over time. However, some workloads are more predictable with stable capacity usage over an extended period. These workloads can benefit from savings in exchange for a longer-term commitment. With a one-year or three-year commitment of Azure NetApp Files reserved capacity, you can save up to 34% on sustained usage of Azure NetApp Files. Reserved capacity is available in stackable increments of 100 TiB and 1 PiB on Standard, Premium and Ultra service levels in a given region. Reserved capacity can be used in a single subscription (single-subscription scope), or across multiple subscriptions (shared scope) in the same Azure tenant. Azure NetApp Files reserved capacity benefits are automatically applied to existing Azure NetApp Files capacity pools in the matching region and service level. Azure NetApp Files reserved capacity not only provides cost savings but also improves financial predictability and stability, allowing for more effective budgeting. Additional usage is conveniently billed at the regular pay-as-you-go rate. + In environments with Azure NetApp Files volumes shared among multiple departments, projects, and users, many users can see the existence of other files and folders in directory listings even if they don't have permissions to access those items. 
Enabling Access-based enumeration (ABE) on Azure NetApp Files volumes ensures users only see those files and folders in directory listings that they have permission to access. If a user doesn't have read or equivalent permissions for a folder, the Windows client hides the folder from the user's view. This capability provides an additional layer of security by only displaying files and folders a user has access to, and conversely hiding file and folder information a user has no access to. You can enable ABE on Azure NetApp Files SMB volumes and dual-protocol volumes with NTFS security style. ++* [Non-browsable shares](azure-netapp-files-create-volumes-smb.md#non-browsable-share) are now generally available (GA) ++ By default, Azure NetApp Files SMB and dual-protocol volumes show up in the list of shares in Windows File Explorer. You might want to exclude specific Azure NetApp Files volumes from being listed. You can configure these volumes as non-browsable in Azure NetApp Files. This feature prevents the Windows client from browsing the share so the share doesn't show up in the Windows File Explorer. This capability provides an additional layer of security by not displaying these shares. This setting doesn't impact permissions. Users who have access to the share maintain their existing access. - For more detail, see the [Azure NetApp Files reserved capacity](reserved-capacity.md) or see reservations in the Azure portal. - ## August 2024 * [Azure NetApp Files storage with cool access](cool-access-introduction.md) is now generally available (GA) and supported with the Standard, Premium, and Ultra service levels. Cool access is also now supported for destination volumes in cross-region/cross-zone relationships. |
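The ABE behavior described in the NetApp Files entry above can be illustrated with a short sketch. This is a toy model of the semantics only (the `abe_listing` helper and the sample share names are hypothetical, not part of Azure NetApp Files or NTFS):

```python
# Illustrative sketch of access-based enumeration (ABE): a directory
# listing only includes items the requesting user can read. The
# permission model here is a toy stand-in for NTFS ACLs.
def abe_listing(items, user):
    """items: mapping of folder name -> set of users with read access."""
    return sorted(name for name, readers in items.items() if user in readers)

share = {
    "finance": {"alice"},
    "public": {"alice", "bob"},
    "hr": {"carol"},
}
print(abe_listing(share, "bob"))  # bob sees only the folder he can read
```

With ABE disabled, `bob` would see all three folder names even though he can open only one of them; with ABE enabled, the other two are hidden from his listing entirely.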
azure-resource-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/overview.md | Bicep provides the following advantages: You can also create Bicep files in Visual Studio with the [Bicep extension for Visual Studio](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.visualstudiobicep). -- **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Bicep files are idempotent, which means you can deploy the same file many times and get the same resource types in the same state. You can develop one file that represents the desired state, rather than developing lots of separate files to represent updates. For example, the following file creates a storage account. If you deploy this template and the storage account with the specified properties already exists , no changes are made.+- **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Bicep files are idempotent, which means you can deploy the same file many times and get the same resource types in the same state. You can develop one file that represents the desired state, rather than developing lots of separate files to represent updates. For example, the following file creates a storage account. If you deploy this template and the storage account with the specified properties already exists, no changes are made. # [Bicep](#tab/bicep) Bicep automatically manages dependencies between resources. You can avoid settin The structure of the Bicep file is more flexible than the JSON template. You can declare parameters, variables, and outputs anywhere in the file. In JSON, you have to declare all parameters, variables, and outputs within the corresponding sections of the template. 
+ ## Next steps Get started with the [Quickstart](./quickstart-create-bicep-use-visual-studio-code.md). |
azure-resource-manager | Deployment History Deletions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-history-deletions.md | Title: Deployment history deletions description: Describes how Azure Resource Manager automatically deletes deployments from the deployment history. Deployments are deleted when the history is close to exceeding the limit of 800. Previously updated : 03/20/2024 Last updated : 08/28/2024 Every time you deploy a template, information about the deployment is written to Azure Resource Manager automatically deletes deployments from your history as you near the limit. Automatic deletion is a change from past behavior. Previously, you had to manually delete deployments from the deployment history to avoid getting an error. This change was implemented on August 6, 2020. -**Automatic deletions are supported for resource group and subscription deployments. Currently, deployments in the history for [management group](deploy-to-management-group.md) and [tenant](deploy-to-tenant.md) deployments aren't automatically deleted.** - > [!NOTE] > Deleting a deployment from the history doesn't affect any of the resources that were deployed. If the current user doesn't have the required permissions, automatic deletion is You can opt out of automatic deletions from the history. **Use this option only when you want to manage the deployment history yourself.** The limit of 800 deployments in the history is still enforced. If you exceed 800 deployments, you'll receive an error and your deployment will fail. -To disable automatic deletions, register the `Microsoft.Resources/DisableDeploymentGrooming` feature flag. When you register the feature flag, you opt out of automatic deletions for the entire Azure subscription. You can't opt out for only a particular resource group. To reenable automatic deletions, unregister the feature flag. 
+To disable automatic deletions at the tenant or the management group scope, open a support ticket. For the instructions, see [Request support](./overview.md#get-support). ++To disable automatic deletions at the subscription scope, register the `Microsoft.Resources/DisableDeploymentGrooming` feature flag. When you register the feature flag, you opt out of automatic deletions for the entire Azure subscription. You can't opt out for only a particular resource group. To reenable automatic deletions, unregister the feature flag. # [PowerShell](#tab/azure-powershell) |
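The feature-flag registration described above is normally done with `Register-AzProviderFeature` or `az feature register`; under the hood those call the ARM Features REST API. As a hedged sketch, the helper below only builds the registration URL (the subscription ID is a placeholder, and `feature_register_url` is an illustrative name, not part of any Azure SDK):

```python
# Build the ARM REST URL that registers a preview feature flag on a
# subscription, such as Microsoft.Resources/DisableDeploymentGrooming.
def feature_register_url(subscription_id, namespace, feature,
                         api_version="2021-07-01"):
    """Return the POST target for registering a subscription feature."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/providers/Microsoft.Features/providers/{namespace}"
        f"/features/{feature}/register"
        f"?api-version={api_version}"
    )

url = feature_register_url("00000000-0000-0000-0000-000000000000",
                           "Microsoft.Resources", "DisableDeploymentGrooming")
print(url)
# A client would POST to this URL with a bearer token to opt the whole
# subscription out of automatic deployment-history deletions.
```

Unregistering (to reenable automatic deletions) uses the same path with `/unregister` in place of `/register`.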
azure-resource-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/overview.md | If you're trying to decide between using ARM templates and one of the other infr * **Declarative syntax**: ARM templates allow you to create and deploy an entire Azure infrastructure declaratively. For example, you can deploy not only virtual machines, but also the network infrastructure, storage systems, and any other resources you may need. -* **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Templates are idempotent, which means you can deploy the same template many times and get the same resource types in the same state. You can develop one template that represents the desired state, rather than developing lots of separate templates to represent updates. For example, the following file creates a storage account. If you deploy this template and the storage account with the specified properties already exists , no changes is made. +* **Repeatable results**: Repeatedly deploy your infrastructure throughout the development lifecycle and have confidence your resources are deployed in a consistent manner. Templates are idempotent, which means you can deploy the same template many times and get the same resource types in the same state. You can develop one template that represents the desired state, rather than developing lots of separate templates to represent updates. For example, the following file creates a storage account. If you deploy this template and the storage account with the specified properties already exists, no changes are made. ```json { After creating your template, you may wish to share it with other users in your This approach means you can safely share templates that meet your organization's standards. 
+ ## Next steps * For a step-by-step tutorial that guides you through the process of creating a template, see [Tutorial: Create and deploy your first ARM template](template-tutorial-create-first-template.md). |
azure-vmware | Azure Vmware Solution Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md | Refer to the table to find details about resolution dates or possible workaround | [VMSA-2023-023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | A risk assessment of CVE-2023-34048 was conducted and it was determined that sufficient controls are in place within Azure VMware Solution to reduce the risk of CVE-2023-34048 from a CVSS Base Score of 9.8 to an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 aren't exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. Azure VMware Solution is currently rolling out [7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/https://docsupdatetracker.net/index.html) to address this issue. | March 2024 - Resolved in [ESXi 7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/https://docsupdatetracker.net/index.html) | | The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) | Nov 2023 | The AV64 SKU now supports 7 Fault Domains and all vSAN storage policies. 
For more information, see [AV64 supported Azure regions](architecture-private-clouds.md#azure-region-availability-zone-az-to-sku-mapping-table) | June 2024 | | VMware HCX version 4.8.0 Network Extension (NE) Appliance VMs running in High Availability (HA) mode may experience intermittent Standby to Active failover. For more information, see [HCX - NE appliances in HA mode experience intermittent failover (96352)](https://kb.vmware.com/s/article/96352) | Jan 2024 | Avoid upgrading to VMware HCX 4.8.0 if you're using NE appliances in a HA configuration. | Feb 2024 - Resolved in [VMware HCX 4.8.2](https://docs.vmware.com/en/VMware-HCX/4.8.2/rn/vmware-hcx-482-release-notes/https://docsupdatetracker.net/index.html) |-| [VMSA-2024-0006](https://www.vmware.com/security/advisories/VMSA-2024-0006.html) ESXi Use-after-free and Out-of-bounds write vulnerability | March 2024 | For ESXi 7.0, Microsoft worked with Broadcom on an AVS specific hotfix as part of the [ESXi 7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/https://docsupdatetracker.net/index.html) rollout. For the 8.0 rollout, Azure VMware Solution is deploying [vCenter Server 8.0 U2b & ESXi 8.0 U2b](architecture-private-clouds.md#vmware-software-versions) which is not vulnerable. | Auguest 2024 - Resolved in [ESXi 7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/https://docsupdatetracker.net/index.html) and [vCenter Server 8.0 U2b & ESXi 8.0 U2b](architecture-private-clouds.md#vmware-software-versions) | +| [VMSA-2024-0006](https://www.vmware.com/security/advisories/VMSA-2024-0006.html) ESXi Use-after-free and Out-of-bounds write vulnerability | March 2024 | For ESXi 7.0, Microsoft worked with Broadcom on an AVS specific hotfix as part of the [ESXi 7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/https://docsupdatetracker.net/index.html) rollout. 
For the 8.0 rollout, Azure VMware Solution is deploying [vCenter Server 8.0 U2b & ESXi 8.0 U2b](architecture-private-clouds.md#vmware-software-versions) which is not vulnerable. | August 2024 - Resolved in [ESXi 7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/https://docsupdatetracker.net/index.html) and [vCenter Server 8.0 U2b & ESXi 8.0 U2b](architecture-private-clouds.md#vmware-software-versions) | | When I run the VMware HCX Service Mesh Diagnostic wizard, all diagnostic tests will be passed (green check mark), yet failed probes will be reported. See [HCX - Service Mesh diagnostics test returns 2 failed probes](https://knowledge.broadcom.com/external/article?legacyId=96708) | 2024 | None, this will be fixed in 4.9+. | N/A | | [VMSA-2024-0011](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24308) Out-of-bounds read/write vulnerability (CVE-2024-22273) | June 2024 | Microsoft has confirmed the applicability of the CVE-2024-22273 vulnerability and it will be addressed in the upcoming 8.0u2b Update. | July 2024 | | [VMSA-2024-0012](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24453) Multiple Vulnerabilities in the DCERPC Protocol and Local Privilege Escalations | June 2024 | Microsoft, working with Broadcom, adjudicated the risk of these vulnerabilities at an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 aren't exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. 
A plan is being put in place to address these vulnerabilities at a future date TBD. | N/A | |
bastion | Configuration Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/configuration-settings.md | A SKU is also known as a Tier. Azure Bastion supports multiple SKU tiers. When y ### <a name="premium"></a>Premium SKU (Preview) -The Premium SKU is a new SKU that supports Bastion features such as Session Recording and Private-Only Bastion. When you deploy bastion, only select the Premium SKU if you need the features that it supports. +The Premium SKU is a new SKU that supports Bastion features such as [Session Recording](session-recording.md) and [Private-Only Bastion](private-only-deployment.md). When you deploy Bastion, we recommend that you select the Premium SKU only if you need the features that it supports. ### Specify SKU |
batch | Tutorial Parallel Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-dotnet.md | The following sections break down the sample application into the steps that it ### Authenticate Blob and Batch clients -To interact with the linked storage account, the app uses the Azure.Storage.Blobs Library for .NET. Using the [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) class which takes a reference to the account Uri and authenticating [Token](/dotnet/api/azure.core.tokencredentia) such as [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential). +To interact with the linked storage account, the app uses the Azure.Storage.Blobs library for .NET. It creates a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient), which takes the account URI and a [TokenCredential](/dotnet/api/azure.core.tokencredential) such as [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential). ```csharp // TODO: Replace <storage-account-name> with your actual storage account name |
container-apps | Environment Variables | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment-variables.md | These environment variables are loaded onto your Container App during runtime. You can configure the Environment Variables upon the creation of the Container App or later by creating a new revision. +> [!NOTE] +> To avoid confusion, we recommend that you don't duplicate environment variables. When multiple environment variables have the same name, the last one in the list takes effect. + ### [Azure portal](#tab/portal) If you're creating a new Container App through the [Azure portal](https://portal.azure.com), you can set up the environment variables on the Container section: |
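The last-one-wins behavior called out in the Container Apps note above can be sketched in a few lines. The `resolve_env` helper and the sample variable names are illustrative only, not part of Container Apps:

```python
# Illustrative sketch: when multiple environment variables share a name,
# the last entry in the list takes effect, mirroring the note above.
def resolve_env(entries):
    """Collapse an ordered list of (name, value) pairs; later entries win."""
    resolved = {}
    for name, value in entries:
        resolved[name] = value  # a later duplicate overwrites an earlier one
    return resolved

entries = [("LOG_LEVEL", "info"), ("PORT", "8080"), ("LOG_LEVEL", "debug")]
print(resolve_env(entries))  # the final LOG_LEVEL entry, "debug", wins
```

This is why duplicated names are confusing in practice: whichever entry happens to be listed last silently shadows the others.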
cost-management-billing | Save Compute Costs Reservations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/save-compute-costs-reservations.md | For more information, see [Self-service exchanges and refunds for Azure Reservat - **Azure Dedicated Host** - Only the compute costs are included with the Dedicated host. - **Azure Disk Storage reservations** - A reservation only covers premium SSDs of P30 size or greater. It doesn't cover any other disk types or sizes smaller than P30. - **Azure Backup Storage reserved capacity** - A capacity reservation lowers storage costs of backup data in a Recovery Services Vault.-- **Azure NetApp Files** - A capacity reservation covers matching capacity pools in the selected service level and region. When using capacity pools configured with [Standard storage with cool access](../../azure-netapp-files/manage-cool-access.md), only "hot" tier consumption is covered by the reserved capacity benefit. Software plans: For Windows virtual machines and SQL Database, the reservation discount doesn't ## Need help? Contact us. -If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458). +If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458). ## Next steps |
defender-for-iot | Device Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/device-inventory.md | Defender for IoT's device inventory helps you identify details about specific de - **Manage all your IoT/OT devices** by building up-to-date inventory that includes all your managed and unmanaged devices -- **Protect devices with risk-based approach** to identify risks such as missing patches, vulnerabilities and prioritize fixes based on risk scoring and automated threat modeling+- **Protect devices with risk-based approach** to identify risks such as missing patches, vulnerabilities, and prioritize fixes based on risk scoring and automated threat modeling - **Update your inventory** by deleting irrelevant devices and adding organization-specific information to emphasize your organization preferences For more information, see: ## Automatically consolidated devices -When you've deployed Defender for IoT at scale, with several OT sensors, each sensor might detect different aspects of the same device. To prevent duplicated devices in your device inventory, Defender for IoT assumes that any devices found in the same zone, with a logical combination of similar characteristics, is the same device. Defender for IoT automatically consolidates these devices and lists them only once in the device inventory. +When you deploy Defender for IoT at scale, with several OT sensors, each sensor might detect different aspects of the same device. To prevent duplicated devices in your device inventory, Defender for IoT assumes that any devices found in the same zone, with a logical combination of similar characteristics, is the same device. Defender for IoT automatically consolidates these devices and lists them only once in the device inventory. For example, any devices with the same IP and MAC address detected in the same zone are consolidated and identified as a single device in the device inventory. 
If multiple sensors detect separate devices with recurring IP addresses, you might want each of these devices to be identified separately. In such cases, [onboard your OT sensors](onboard-sensors.md) to different zones so that each device is identified as a separate and unique device, even if they have the same IP address. Devices that have the same MAC addresses, but different IP addresses aren't merged, and continue to be listed as unique devices. Mark OT devices as *important* to highlight them for extra tracking. On an OT se ## Device inventory column data -The following table lists the columns available in the Defender for IoT device inventory on the Azure portal. Starred items **(*)** are also available from the OT sensor. +The following table lists the columns available in the Defender for IoT device inventory on the Azure portal and the OT sensor, a description of each column, and whether it's editable and on which platform. Starred items **(*)** are also available from the OT sensor. > [!NOTE] > Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. -|Name |Description | -||| -|**Authorization** * |Editable. Determines whether or not the device is marked as *authorized*. This value might need to change as the device security changes. | -|**Business Function** | Editable. Describes the device's business function. | -| **Class** | Editable. The device's class. <br>Default: `IoT` | -|**Data source** | The source of the data, such as a micro agent, OT sensor, or Microsoft Defender for Endpoint. <br>Default: `MicroAgent`| -|**Description** * | Editable. The device's description. | -| **Device Id** | The device's Azure-assigned ID number. 
| -| **Firmware model** | The device's firmware model.| -| **Firmware vendor** | Editable. The vendor of the device's firmware. | -| **Firmware version** * |Editable. The device's firmware version. | -|**First seen** * | The date and time the device was first seen. Shown in `MM/DD/YYYY HH:MM:SS AM/PM` format. On the OT sensor, shown as **Discovered**.| -|**Importance** | Editable. The device's important level: `Low`, `Medium`, or `High`. | -| **IPv4 Address** | The device's IPv4 address. | -|**IPv6 Address** | The device's IPv6 address.| -|**Last activity** * | The date and time the device last sent an event through to Azure or to the OT sensor, depending on where you're viewing the device inventory. Shown in `MM/DD/YYYY HH:MM:SS AM/PM` format. | -|**Location** | Editable. The device's physical location. | -| **MAC Address** * | The device's MAC address. | -|**Model** *| Editable The device's hardware model. | -|**Name** * | Mandatory, and editable. The device's name as the sensor discovered it, or as entered by the user. | -|**Network location** (Public preview) | The device's network location. Displays whether the device is defined as *local* or *routed*, according to the configured subnets. | -|**OS architecture** | Editable. The device's operating system architecture. | -|**OS distribution** | Editable. The device's operating system distribution, such as Android, Linux, and Haiku. | -|**OS platform** * | Editable. The device's operating system, if detected. On the OT sensor, shown as **Operating System**. | -|**OS version** | Editable. The device's operating system version, such as Windows 10 or Ubuntu 20.04.1. | -|**PLC mode** * | The device's PLC operating mode, including both the *Key* state (physical / logical) and the *Run* state (logical). If both states are the same, then only one state is listed.<br><br>- Possible *Key* states include: `Run`, `Program`, `Remote`, `Stop`, `Invalid`, and `Programming Disabled`. 
<br><br>- Possible *Run* states are `Run`, `Program`, `Stop`, `Paused`, `Exception`, `Halted`, `Trapped`, `Idle`, or `Offline`. | -|**Programming device** * | Editable. Defines whether the device is defined as a *Programming Device*, performing programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations. | -|**Protocols** *| The protocols that the device uses. | -| **Purdue level** | Editable. The Purdue level in which the device exists.| -|**Scanner device** * | Editable. Defines whether the device performs scanning-like activities in the network. | -|**Sensor**| The sensor the device is connected to. | -|**Serial number** *| The device's serial number. | -| **Site** | The device's site. <br><br>All Enterprise IoT sensors are automatically added to the **Enterprise network** site. | -| **Slots** | The number of slots the device has. | -| **Subtype** | Editable. The device's subtype, such as *Speaker* or *Smart TV*. <br>**Default**: `Managed Device` | -| **Tags** | Editable. The device's tags. | -|**Type** * | Editable. The device type, such as *Communication* or *Industrial*. <br>**Default**: `Miscellaneous` | -|**Vendor** *| The name of the device's vendor, as defined in the MAC address. | -| **VLAN** * | The device's VLAN. | -|**Zone** | The device's zone. | --The following columns are available on OT sensors only: --- The device's **DHCP Address**-- The device's **FQDN** address and **FQDN Last Lookup Time**-- The device **Groups** that include the device, as [defined on the OT sensor's device map](how-to-work-with-the-sensor-device-map.md#create-a-custom-device-group)-- The device's **Module address**-- The device's **Rack** and **Slot**-- The number of **Unacknowledged Alerts** alerts associated with the device+|Name |Description | Editable| +|||-| +|**Authorization** * |Determines whether or not the device is marked as *authorized*. This value might need to change as the device security changes. 
Toggle **Authorized device**. | Editable in Azure and OT sensor| +|**Business Function** | Describes the device's business function. |Editable in Azure| +| **Class** | The device's class. <br>Default: `IoT` |Editable in Azure| +|**Data source** | The source of the data, such as a micro agent, OT sensor, or Microsoft Defender for Endpoint. <br>Default: `MicroAgent` | Not editable| +|**Description** * |The device's description. |Editable in Azure and OT sensor| +| **Device Id** | The device's Azure-assigned ID number. |Not editable| +| **Firmware model** | The device's firmware model. |Editable in Azure| +| **Firmware vendor** | The vendor of the device's firmware. |Not editable| +| **Firmware version** * |The device's firmware version. |Editable in Azure | +|**First seen** * | The date and time the device was first seen. Shown in `MM/DD/YYYY HH:MM:SS AM/PM` format. On the OT sensor, shown as **Discovered**.|Not editable| +|**Importance** | The device's importance level: `Low`, `Medium`, or `High`. |Editable in Azure| +| **IPv4 Address** *| The device's IPv4 address. |Not editable| +|**IPv6 Address** | The device's IPv6 address.|Not editable| +|**Last activity** * | The date and time the device last sent an event through to Azure or to the OT sensor, depending on where you're viewing the device inventory. Shown in `MM/DD/YYYY HH:MM:SS AM/PM` format. |Not editable| +|**Location** | The device's physical location. |Editable in Azure| +| **MAC Address** * | The device's MAC address. |Not editable| +|**Model** *| The device's hardware model. |Editable in Azure | +|**Name** * | Mandatory. The device's name as the sensor discovered it, or as entered by the user. |Editable in Azure and OT sensor| +|**Network location** (Public preview) * | The device's network location. Displays whether the device is defined as *local* or *routed*, according to the configured subnets. |Not editable| +|**OS architecture** |The device's operating system architecture. 
|Not editable| +|**OS distribution** | The device's operating system distribution, such as Android, Linux, and Haiku. |Not editable| +|**OS platform** * | The device's operating system, if detected. On the OT sensor, shown as **Operating System**. |Editable in OT Sensor| +|**OS version** | The device's operating system version, such as Windows 10 or Ubuntu 20.04.1. |Not editable| +|**PLC mode** * | The device's PLC operating mode, including both the *Key* state (physical / logical) and the *Run* state (logical). If both states are the same, then only one state is listed.<br><br>- Possible *Key* states include: `Run`, `Program`, `Remote`, `Stop`, `Invalid`, and `Programming Disabled`. <br><br>- Possible *Run* states are `Run`, `Program`, `Stop`, `Paused`, `Exception`, `Halted`, `Trapped`, `Idle`, or `Offline`. | Editable in OT Sensor| +|**Programming device** * | Defines whether the device is defined as a *Programming Device*, performing programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations. |Editable in Azure and OT sensor | +|**Protocols** *| The protocols that the device uses. |Not editable| +| **Purdue level** | The Purdue level in which the device exists.| Editable in OT sensor | +|**Scanner device** * |Defines whether the device performs scanning-like activities in the network. |Editable in OT Sensor| +|**Sensor**| The sensor the device is connected to. |Not editable| +|**Serial number** *| The device's serial number. |Not editable| +| **Site** | The device's site. <br><br>All Enterprise IoT sensors are automatically added to the **Enterprise network** site. |Not editable| +| **Slots** * | The number of slots the device has. |Not editable| +| **Subtype** | The device's subtype, such as *Speaker* or *Smart TV*. <br>**Default**: `Managed Device` |Editable in Azure| +| **Tags** | The device's tags. |Editable in Azure| +|**Type** * | The device type, such as *Communication* or *Industrial*. 
<br>**Default**: `Miscellaneous` |Editable in Azure and OT sensor | +|**Vendor** *| The name of the device's vendor, as defined in the MAC address. |Editable in Azure | +| **VLAN** * | The device's VLAN. |Not editable| +|**Zone** | The device's zone. |Not editable| ++The following columns are available in the OT sensors only, and aren't editable. ++- The device's **DHCP Address**. +- The device's **FQDN** address and **FQDN Last Lookup Time**. +- The device **Groups** that include the device, as [defined on the OT sensor's device map](how-to-work-with-the-sensor-device-map.md#create-a-custom-device-group). +- The device's **Module address**. +- The device's **Rack**. +- The number of **Unacknowledged Alerts** associated with the device. > [!NOTE] > The additional **Agent type** and **Agent version** columns are used by device builders. For more information, see [Microsoft Defender for IoT for device builders documentation](../device-builders/index.yml). |
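The zone-based consolidation rule described earlier in the Defender for IoT entry (same IP and MAC address in the same zone means one device; different zones keep devices separate) can be sketched as follows. This is a simplified illustration, not Defender for IoT's actual merge logic:

```python
# Illustrative sketch of device consolidation: observations sharing a
# (zone, IP, MAC) key collapse into one inventory entry, while the same
# IP/MAC pair seen in a different zone stays a separate device.
def consolidate(observations):
    """observations: iterable of (zone, ip, mac, sensor) tuples."""
    inventory = {}
    for zone, ip, mac, sensor in observations:
        key = (zone, ip, mac)
        inventory.setdefault(key, set()).add(sensor)
    return inventory

seen = [
    ("zone-a", "10.0.0.5", "aa:bb:cc:00:11:22", "sensor-1"),
    ("zone-a", "10.0.0.5", "aa:bb:cc:00:11:22", "sensor-2"),  # merged
    ("zone-b", "10.0.0.5", "aa:bb:cc:00:11:22", "sensor-3"),  # separate zone
]
print(len(consolidate(seen)))  # two inventory entries, not three
```

This also shows why onboarding sensors to different zones keeps recurring IP addresses from being merged: the zone is part of the identity key.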
dns | Dns Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-overview.md | Azure DNS enables multiple scenarios, including: * To learn about Public DNS zones and records, see [DNS zones and records overview](dns-zones-records.md). * To learn about Private DNS zones, see [What is an Azure Private DNS zone](private-dns-privatednszone.md). * To learn about private resolver endpoints and rulesets, see [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md).-* For frequently asked questions about Azure DNS, see [Azure DNS FAQ](dns-faq-private.yml). -* For frequently asked questions about Azure Private DNS, see [Azure Private DNS FAQ](dns-faq.yml). +* For frequently asked questions about Azure DNS, see [Azure DNS FAQ](dns-faq.yml). +* For frequently asked questions about Azure Private DNS, see [Azure Private DNS FAQ](dns-faq-private.yml). * For frequently asked questions about Traffic Manager, see [Traffic Manager routing methods](/azure/traffic-manager/traffic-manager-faqs) * Also see [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns). |
event-hubs | Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/geo-replication.md | These features shouldn't be confused with Availability Zones. Both geographic re > - This feature is currently in public preview, and as such shouldn't be used in production scenarios. > - The following regions are currently supported in the public preview. >-> | US | Europe | -> ||| -> | Central US EUAP | Italy North | -> | | Spain Central | -> | | Norway East | +> | US | Europe | APAC | +> |||| +> | Central US EUAP | Italy North | Australia Central | +> | Canada Central | Spain Central | Australia East | +> | Canada East | Norway East || ## Metadata disaster recovery vs. Geo-replication of metadata and data Clients that use the Event Hubs SDK need to upgrade to the April 2024 version of Event Hubs dedicated clusters are priced independently of geo-replication. Use of geo-replication with Event Hubs dedicated requires you to have at least two dedicated clusters in separate regions. The dedicated clusters used as secondary instances for geo-replication can be used for other workloads. There's a charge for geo-replication based on the published bandwidth * the number of secondary regions. The geo-replication charge is waived in early public preview. ## Related content-To learn how to use the Geo-replication feature, see [Use Geo-replication](use-geo-replication.md). +To learn how to use the Geo-replication feature, see [Use Geo-replication](use-geo-replication.md). |
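The pricing rule stated in the row above (the published bandwidth charge multiplied by the number of secondary regions, waived during early public preview) can be sketched as a tiny helper; the rate value below is a placeholder, not a published price:

```python
def geo_replication_charge(bandwidth_charge: float, secondary_regions: int,
                           early_preview: bool = False) -> float:
    """Published bandwidth charge times the number of secondary regions;
    the charge is waived entirely during the early public preview."""
    if early_preview:
        return 0.0
    return bandwidth_charge * secondary_regions

print(geo_replication_charge(100.0, 2))                      # 200.0
print(geo_replication_charge(100.0, 2, early_preview=True))  # 0.0
```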
healthcare-apis | Export Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md | The following information can help you resolve problems with exporting FHIR data In some situations, there's a potential for a job to be stuck in a bad state while the FHIR service is attempting to export data. This can occur especially if the Data Lake Storage Gen2 account permissions haven't been set up correctly. -One way to check the status of your `$export` operation is to go to your storage account's *storage browser* and see whether any `.ndjson` files are present in the export container. If the files aren't present and no other `$export` jobs are running, it's possible that the current job is stuck in a bad state. In this case, you can cancel the `$export` job by calling the FHIR service API with a `DELETE` request. Later, you can requeue the `$export` job and try again. --For more information about canceling an `$export` operation, see the [Bulk Data Delete Request](https://www.hl7.org/fhir/uv/bulkdata/) documentation from HL7. +One way to check the status of your `$export` operation is to go to your storage account's *storage browser* and see whether any `.ndjson` files are present in the export container. If the files aren't present and no other `$export` jobs are running, it's possible that the current job is stuck in a bad state. In this case, you can cancel the `$export` job by sending a `DELETE` request to the URL provided in the `Content-Location` header. > [!NOTE] > In the FHIR service, the default time for an `$export` operation to idle in a bad state is 10 minutes before the service stops the operation and moves to a new job. |
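The cancellation described in the row above can be sketched with a stdlib-only Python helper; the polling URL and token below are hypothetical placeholders, not values from the article:

```python
import urllib.request

def build_cancel_request(content_location: str, token: str) -> urllib.request.Request:
    """Build a DELETE request for the polling URL returned in the
    Content-Location header of the original $export response."""
    return urllib.request.Request(
        content_location,
        method="DELETE",
        headers={"Authorization": f"Bearer {token}"},
    )

# Hypothetical polling URL; a real one is returned by the FHIR service.
req = build_cancel_request(
    "https://example.azurehealthcareapis.com/_operations/export/1234",
    "<access-token>",
)
print(req.get_method())  # DELETE
```

Sending the request (for example with `urllib.request.urlopen(req)`) would cancel the stuck job, after which it can be requeued.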
healthcare-apis | Release Notes 2024 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2024.md | +## September 2024 ++### Azure Health Data Services ++### FHIR service ++#### Enhanced Export Efficiency +The export functionality has been improved to optimize memory usage. With this change, the export process now pushes data to blob storage one resource at a time, reducing memory consumption. + ## August 2024 ### Azure Health Data Services |
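The memory optimization noted in the release above, writing one resource at a time rather than buffering the full export, can be illustrated with a hypothetical NDJSON serializer sketch (the resource data here is made up):

```python
import json
from typing import Iterable, Iterator

def to_ndjson(resources: Iterable[dict]) -> Iterator[str]:
    """Yield one NDJSON line per FHIR resource, so the caller can stream
    lines to blob storage without holding the full export in memory."""
    for resource in resources:
        yield json.dumps(resource, separators=(",", ":")) + "\n"

# A generator keeps only one resource in memory at a time.
patients = ({"resourceType": "Patient", "id": str(i)} for i in range(3))
for line in to_ndjson(patients):
    print(line, end="")
```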
load-balancer | Load Balancer Ipv6 For Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-for-linux.md | For Ubuntu versions 17.10 or higher, follow these steps: # [Debian](#tab/debian) +All supported Debian images in Azure have been preconfigured with DHCPv6. No other changes are required when you use these images. If you have a VM based on an older or custom Debian image, follow these steps: + 1. Edit the */etc/dhcp/dhclient6.conf* file, and add the following line: ```config |
load-balancer | Load Balancer Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-overview.md | An **[internal (or private) load balancer](./components.md#frontend-ip-configura :::image type="content" source="media/load-balancer-overview/load-balancer.png" alt-text="Diagram depicts a load balancer directing traffic."::: -*Figure: Balancing multi-tier applications by using both public and internal Load Balancer* - For more information on the individual load balancer components, see [Azure Load Balancer components](./components.md). ## Why use Azure Load Balancer? |
logic-apps | Logic Apps Limits And Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md | For Azure Logic Apps to receive incoming communication through your firewall, yo | Azure Government region | Azure Logic Apps IP | |-||-| US Gov Arizona | 52.244.67.164, 52.244.67.64, 52.244.66.82, 52.126.52.254, 52.126.53.145, 52.182.49.105, 52.182.49.175 | +| US Gov Arizona | 52.244.67.164, 52.244.67.64, 52.244.66.82, 52.126.52.254, 52.126.53.145, 52.244.187.241, 52.244.17.238, 52.244.23.110, 52.244.20.213, 52.244.16.162, 52.244.15.25, 52.244.16.141, 52.244.15.26 | | US Gov Texas | 52.238.119.104, 52.238.112.96, 52.238.119.145, 52.245.171.151, 52.245.163.42 |-| US Gov Virginia | 52.227.159.157, 52.227.152.90, 23.97.4.36, 13.77.239.182, 13.77.239.190 | -| US DoD Central | 52.182.49.204, 52.182.52.106 | +| US Gov Virginia | 52.227.159.157, 52.227.152.90, 23.97.4.36, 13.77.239.182, 13.77.239.190, 20.159.220.127, 62.10.96.217, 62.10.102.236, 62.10.102.136, 62.10.111.137, 62.10.111.152, 62.10.111.128, 62.10.111.123 | +| US DoD Central | 52.182.49.204, 52.182.52.106, 52.182.49.105, 52.182.49.175, 52.180.225.24, 52.180.225.43, 52.180.225.50, 52.180.252.28, 52.180.225.29, 52.180.231.56, 52.180.231.50, 52.180.231.65 | <a name="outbound"></a> This section lists the outbound IP addresses that Azure Logic Apps requires in y | Region | Azure Logic Apps IP | |--||-| US DoD Central | 52.182.48.215, 52.182.92.143, 52.182.53.147, 52.182.52.212, 52.182.49.162, 52.182.49.151 | -| US Gov Arizona | 52.244.67.143, 52.244.65.66, 52.244.65.190, 52.126.50.197, 52.126.49.223, 52.126.53.144, 52.126.36.100 | +| US DoD Central | 52.182.48.215, 52.182.92.143, 52.182.53.147, 52.182.52.212, 52.182.49.162, 52.182.49.151, 52.180.225.0, 52.180.251.16, 52.180.250.135, 52.180.251.20, 52.180.231.89, 52.180.224.251, 52.180.252.222, 52.180.225.21 | +| US Gov Arizona | 52.244.67.143, 52.244.65.66, 52.244.65.190, 52.126.50.197, 52.126.49.223, 
52.126.53.144, 52.126.36.100, 52.244.187.5, 52.244.19.121, 52.244.18.105, 52.244.51.113, 52.244.17.113, 52.244.26.122, 52.244.22.195, 52.244.19.137 | | US Gov Texas | 52.238.114.217, 52.238.115.245, 52.238.117.119, 20.141.120.209, 52.245.171.152, 20.141.123.226, 52.245.163.1 |-| US Gov Virginia | 13.72.54.205, 52.227.138.30, 52.227.152.44, 13.77.239.177, 13.77.239.140, 13.77.239.187, 13.77.239.184 | +| US Gov Virginia | 13.72.54.205, 52.227.138.30, 52.227.152.44, 13.77.239.177, 13.77.239.140, 13.77.239.187, 13.77.239.184, 20.159.219.180, 62.10.96.177, 62.10.102.138, 62.10.102.94, 62.10.111.134, 62.10.111.151, 62.10.110.102, 62.10.109.190 | ## Next steps |
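When maintaining firewall rules from address tables like the ones above, a small stdlib check can confirm whether an observed address appears in the documented list; the two addresses below are taken from the US Gov Texas inbound row, and the helper itself is illustrative:

```python
import ipaddress

# Subset of the US Gov Texas inbound addresses documented above.
ALLOWED = {ipaddress.ip_address(a) for a in ("52.238.119.104", "52.238.112.96")}

def is_allowed(addr: str) -> bool:
    """Return True if addr matches one of the documented Logic Apps IPs."""
    return ipaddress.ip_address(addr) in ALLOWED

print(is_allowed("52.238.119.104"))  # True
print(is_allowed("10.0.0.1"))        # False
```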
logic-apps | Logic Apps Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-pricing.md | In single-tenant Azure Logic Apps, a logic app and its workflows follow the [**S When you create or deploy logic apps with the **Logic App (Standard)** resource type, and you select any Azure region for deployment, you'll also select a Workflow Standard hosting plan. However, if you select an existing **App Service Environment v3** resource for your deployment location, you must then select an [App Service Plan](../app-service/overview-hosting-plans.md). > [!IMPORTANT]-> The following plans and resources are no longer available or supported with the public release of the **Logic App (Standard)** resource type in Azure regions: -> Functions Premium plan, App Service Environment v1, and App Service Environment v2. Except with ASEv3, the App Service Plan is unavailable and unsupported. +> The following plans and resources are no longer available or supported with the public release of Standard +> logic app workflows in single-tenant Azure Logic Apps: Functions Premium plan, App Service Environment v1, +> and App Service Environment v2. The App Service Plan is available and supported only with App Service Environment v3 (ASE v3). The following table summarizes how the Standard model handles metering and billing for the following components when used with a logic app and a workflow in single-tenant Azure Logic Apps: |
migrate | Common Questions Server Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md | While you can create assessments for multiple regions in an Azure Migrate project. Yes, you can migrate to multiple subscriptions (same Azure tenant) in the same target region for an Azure Migrate project. You can select the target subscription while enabling replication for a machine or a set of machines. The target region is locked post first replication for agentless VMware migrations and during the replication appliance and Hyper-V provider installation for agent-based migrations and agentless Hyper-V migrations respectively. +### Does Azure Migrate support Azure Resource Graph? +Currently, Azure Migrate isn't integrated with Azure Resource Graph. It doesn't support performing ARG-related queries. + ### How is the data transmitted from on-premises environment to Azure? Is it encrypted before transmission? The Azure Migrate appliance in the agentless replication case compresses and encrypts data before uploading. Data is transmitted over a secure communication channel over HTTPS and uses TLS 1.2 or later. Additionally, Azure Storage automatically encrypts your data when it's persisted to the cloud (encryption-at-rest). The Migration and modernization tool is application agnostic and works for most ### Can I upgrade my OS while migrating? -The Migration and modernization tool only supports like-for-like migrations currently. The tool doesn't support upgrading the OS version during migration. The migrated machine will have the same OS as the source machine. +The Migration and modernization tool now supports Windows OS upgrade during migration. This option isn't currently available for Linux. For more information, see [Windows OS upgrade](how-to-upgrade-windows.md). ### Do I need VMware vCenter to migrate VMware VMs? 
To [migrate VMware VMs](server-migrate-overview.md) using VMware agent-based or ### Can I consolidate multiple source VMs into one VM while migrating? -Migration and modernization capabilities support like-for-like migrations currently. We don't support consolidating servers or upgrading the operating system as part of the migration. +Migration and modernization capabilities support like-for-like migrations currently. We don't support consolidating servers as part of the migration. ### Will Windows Server 2008 and 2008 R2 be supported in Azure after migration? |
network-watcher | Nsg Flow Logs Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-overview.md | Currently, these Azure services don't support NSG flow logs: > App services deployed under an Azure App Service plan don't support NSG flow logs. To learn more, see [How virtual network integration works](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works). ### Incompatible virtual machines+ NSG flow logs aren't supported on the following virtual machine sizes:-- D family v6 series-- E family v6 series-- F family v6 series -We recommend that you use virtual network flow logs [Virtual network flow logs](vnet-flow-logs-overview.md) for these virtual machine sizes. +- [D family v6 series](/azure/virtual-machines/sizes/general-purpose/d-family) +- [E family v6 series](/azure/virtual-machines/sizes/memory-optimized/e-family) +- [F family v6 series](/azure/virtual-machines/sizes/compute-optimized/f-family) ++We recommend that you use [Virtual network flow logs](vnet-flow-logs-overview.md) for these virtual machine sizes. > [!NOTE]-> Virtual Machines that run heavy networking traffic might encounter flow logging failures. We recommend that you migrate NSG flow logs to virtual network flow logs [Virtual network flow logs](vnet-flow-logs-overview.md) for these types of workloads. +> Virtual machines that run heavy networking traffic might encounter flow logging failures. We recommend that you [migrate NSG flow logs](nsg-flow-logs-migrate.md) to [Virtual network flow logs](vnet-flow-logs-overview.md) for these types of workloads. ## Best practices |
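The size-family restriction in the row above can be encoded as a small heuristic; the parsing of Azure VM size names here is a simplification for illustration, not an official rule:

```python
import re

UNSUPPORTED_FAMILIES = {"D", "E", "F"}  # v6 series listed above

def nsg_flow_logs_supported(vm_size: str) -> bool:
    """Rough check: NSG flow logs aren't supported on D/E/F family v6 sizes."""
    match = re.fullmatch(r"Standard_([A-Z])[A-Za-z0-9]*_v(\d+)", vm_size)
    if not match:
        return True  # unrecognized name: assume supported
    family, version = match.group(1), int(match.group(2))
    return not (family in UNSUPPORTED_FAMILIES and version == 6)

print(nsg_flow_logs_supported("Standard_D16s_v6"))  # False
print(nsg_flow_logs_supported("Standard_D4s_v5"))   # True
```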
notification-hubs | Firebase Migration Update Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/firebase-migration-update-sdk.md | + + Title: Updating Azure notification hub with FCMv1 credentials +description: Describes how Azure Notification Hubs can be updated using FCMv1 credentials ++++ Last updated : 09/12/2024+++ms.lastreviewed: 09/12/2024 +++# Updating Azure Notification Hub with FCMv1 Credentials ++This guide explains how to update an Azure notification hub with FCMv1 credentials using the Azure Management SDK for .NET. This is essential for enabling push notifications to Android devices via Firebase Cloud Messaging (FCMv1). ++## Prerequisites +- An existing Azure Notification Hub within a namespace. +- FCMv1 credentials including `clientEmail`, `privateKey`, and `projectId`. ++### Step 1: Set up and retrieve the Notification Hub +Before you can update the Notification Hub, ensure that you have set up the `ArmClient` and retrieved the relevant Notification Hub resource. ++```csharp +ArmClient client = new ArmClient(new DefaultAzureCredential()); +SubscriptionResource subscription = client.GetSubscriptionResource(new ResourceIdentifier($"/subscriptions/{subscriptionId}")); +ResourceGroupResource resourceGroup = subscription.GetResourceGroups().Get(resourceGroupName); +NotificationHubNamespaceResource notificationHubNamespaceResource = resourceGroup.GetNotificationHubNamespaces().Get(namespaceName); +NotificationHubResource notificationHubResource = notificationHubNamespaceResource.GetNotificationHubs().Get(notificationHubName); +``` ++### Step 2: Define and update FCMv1 credentials +Next, create an `FcmV1Credential` object with your FCMv1 details and use it to update the Notification Hub. 
++```csharp +NotificationHubUpdateContent updateContent = new() +{ + FcmV1Credential = new FcmV1Credential("clientEmail", "privateKey", "projectid") +}; ++NotificationHubResource updatedNotificationHub = await notificationHubResource.UpdateAsync(updateContent); +Console.WriteLine($"Notification Hub '{notificationHubName}' updated successfully with FCMv1 credentials."); +``` ++### Step 3: Verify the update +After updating, you can verify the credentials by retrieving and printing them. ++```csharp +var notificationHubCredentials = updatedNotificationHub.GetPnsCredentials().Value; +Console.WriteLine($"FCMv1 Credentials Email: '{notificationHubCredentials.FcmV1Credential.ClientEmail}'"); +``` ++This step confirms that the Notification Hub has been updated with the correct FCMv1 credentials. ++## Complete code example +Below is the complete code example that includes the setup, creation, update, and verification of the Notification Hub. ++```csharp +using Azure; +using Azure.Core; +using Azure.Identity; +using Azure.ResourceManager; +using Azure.ResourceManager.NotificationHubs; +using Azure.ResourceManager.NotificationHubs.Models; +using Azure.ResourceManager.Resources; ++class Program +{ + static async Task Main(string[] args) + { + string subscriptionId = "<Replace with your subscriptionid>"; + string resourceGroupName = "<Replace with your resourcegroupname>"; + string location = "<Replace with your location>"; + string namespaceName = "<Replace with your notificationhubnamespacename>"; + string notificationHubName = "<Replace with your notificationhubname>"; ++ Console.WriteLine("Started Program"); + ArmClient client = new ArmClient(new DefaultAzureCredential()); + SubscriptionResource subscription = client.GetSubscriptionResource(new ResourceIdentifier($"/subscriptions/{subscriptionId}")); ++ // Create or get the resource group + ResourceGroupCollection resourceGroups = subscription.GetResourceGroups(); + ResourceGroupResource? 
resourceGroup = null; + bool resourceGroupExists = resourceGroups.Exists(resourceGroupName); + if (!resourceGroupExists) + { + ArmOperation<ResourceGroupResource> operation = await resourceGroups.CreateOrUpdateAsync(WaitUntil.Completed, resourceGroupName, new ResourceGroupData(location)); + resourceGroup = operation.Value; + Console.WriteLine($"ResourceGroup '{resourceGroupName}' created successfully."); + } + else + { + resourceGroup = resourceGroups.Get(resourceGroupName); + Console.WriteLine($"ResourceGroup '{resourceGroupName}' already exists."); + } ++ // Create or get a Notification Hub namespace with the required SKU + NotificationHubNamespaceData namespaceData = new NotificationHubNamespaceData(location) + { + Sku = new NotificationHubSku(NotificationHubSkuName.Standard) + }; ++ NotificationHubNamespaceCollection notificationHubNamespaces = resourceGroup.GetNotificationHubNamespaces(); + NotificationHubNamespaceResource? notificationHubNamespaceResource = null; + bool notificationHubNamespaceResourceExists = notificationHubNamespaces.Exists(namespaceName); + if (!notificationHubNamespaceResourceExists) + { + ArmOperation<NotificationHubNamespaceResource> namespaceOperation = await notificationHubNamespaces.CreateOrUpdateAsync(WaitUntil.Completed, namespaceName, namespaceData); + notificationHubNamespaceResource = namespaceOperation.Value; + Console.WriteLine($"Notification Hub Namespace '{namespaceName}' created successfully."); + } + else + { + notificationHubNamespaceResource = notificationHubNamespaces.Get(namespaceName); + Console.WriteLine($"NotificationHubNamespace '{namespaceName}' already exists."); + } ++ // Create or get a Notification Hub in the namespace + NotificationHubCollection notificationHubs = notificationHubNamespaceResource.GetNotificationHubs(); + NotificationHubResource? 
notificationHubResource = null; + bool notificationHubResourceExists = notificationHubs.Exists(notificationHubName); + if (!notificationHubResourceExists) + { + ArmOperation<NotificationHubResource> hubOperation = await notificationHubs.CreateOrUpdateAsync(WaitUntil.Completed, notificationHubName, new NotificationHubData(location)); + notificationHubResource = hubOperation.Value; + Console.WriteLine($"Notification Hub '{notificationHubName}' created successfully in Namespace '{namespaceName}'."); + } + else + { + notificationHubResource = notificationHubs.Get(notificationHubName); + Console.WriteLine($"NotificationHub '{notificationHubName}' already exists."); + } ++ // Update the Notification Hub with FCMv1 credentials + NotificationHubUpdateContent updateContent = new() + { + FcmV1Credential = new FcmV1Credential("<Replace with your clientEmail>", "<Replace with your privateKey>", "<Replace with your projectid>") + }; ++ NotificationHubResource updatedNotificationHub = await notificationHubResource.UpdateAsync(updateContent); + Console.WriteLine($"Notification Hub '{notificationHubName}' updated successfully with FCMv1 credentials."); ++ // Get Notification Hub Credentials + var notificationHubCredentials = updatedNotificationHub.GetPnsCredentials().Value; + Console.WriteLine($"FCMv1 Credentials Email '{notificationHubCredentials.FcmV1Credential.ClientEmail}'"); + } +} +``` |
openshift | Howto Large Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-large-clusters.md | When deploying a large cluster, you must start with at most 50 worker nodes at creation. > While you can define up to 50 worker nodes at creation time, it's best to start with a small cluster (for example, three worker nodes) and then scale out to the desired number of worker nodes after the cluster is installed. > -Follow the steps provided in [Create an Azure Red Hat OpenShift cluster](https://learn.microsoft.com/azure/openshift/create-cluster?tabs=azure-cli) until the "Create the cluster" steps, then continue as instructed: +Follow the steps provided in [Create an Azure Red Hat OpenShift cluster](create-cluster.md?tabs=azure-cli) until the "Create the cluster" steps, then continue as instructed: The following sample Azure CLI command deploys a cluster with Standard_D32s_v5 as the control plane nodes, requesting three public IP addresses, and defining nine worker nodes: |
operator-service-manager | Manage Network Function Operator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/manage-network-function-operator.md | + + Title: Manage the Azure Operator Service Manager cluster extension +description: Command reference syntax and examples guiding management of the Azure Operator Service Manager network function operator extension. ++ Last updated : 09/16/2024+++++# Manage network function operator extension +This article guides users through management of the Azure Operator Service Manager (AOSM) network function operator (NFO) extension. This Kubernetes cluster extension is part of the AOSM service offering and is used to manage container-based workloads hosted by the Azure Operator Nexus platform. ++## Overview +These commands are executed after making the NAKS cluster ready for the add-on extension and presume prior installation of the Azure CLI and authentication into the target subscription. ++## Create network function extension +The Azure CLI command 'az k8s-extension create' is executed to install the NFO extension. 
++### Command +```bash +az k8s-extension create --cluster-name + --cluster-type {connectedClusters} + --extension-type {Microsoft.Azure.HybridNetwork} + --name + --resource-group + --scope {cluster} + --release-namespace {azurehybridnetwork} + --release-train {preview, stable} + --config Microsoft.CustomLocation.ServiceAccount=azurehybridnetwork-networkfunction-operator + [--auto-upgrade {false, true}] + [--config global.networkfunctionextension.enableClusterRegistry={false, true}] + [--config global.networkfunctionextension.enableLocalRegistry={false, true}] + [--config global.networkfunctionextension.enableEarlyLoading={false,true}] + [--config global.networkfunctionextension.clusterRegistry.highAvailability.enabled={true, false}] + [--config global.networkfunctionextension.clusterRegistry.autoScaling.enabled={true, false}] + [--config global.networkfunctionextension.webhook.highAvailability.enabled={true, false}] + [--config global.networkfunctionextension.webhook.autoScaling.enabled={true, false}] + [--config global.networkfunctionextension.clusterRegistry.storageClassName=] + [--config global.networkfunctionextension.clusterRegistry.storageSize=] + [--config global.networkfunctionextension.webhook.pod.mutation.matchConditionExpression=] + [--version] +``` ++### Required Parameters +`--cluster-name -c` +* Name of the Kubernetes cluster. ++`--cluster-type -t` +* Specify Arc clusters or Azure kubernetes service (AKS) managed clusters or Arc appliances or provisionedClusters. +* Accepted values: connectedClusters. ++`--extension-type` +* Name of the extension type. +* Accepted values: Microsoft.Azure.HybridNetwork. ++`--name -n` +* Name of the extension instance. ++`--resource-group -g` +* Name of resource group. You can configure the default group using 'az configure --defaults group=groupname'. + +`--config Microsoft.CustomLocation.ServiceAccount=azurehybridnetwork-networkfunction-operator` +* This configuration must be provided. 
++### Optional Parameters +`--auto-upgrade` +* Automatically upgrade minor version of the extension instance. +* Accepted values: false, true. +* Default value: true. ++`--release-train` +* Specify the release train for the extension type. +* Accepted values: preview, stable. +* Default value: stable. ++`--version` +* Specify the explicit version to install for the extension instance if '--auto-upgrade-minor-version' isn't enabled. ++### Optional feature specific configurations ++#### Pod Mutating Webhook +`--config global.networkfunctionextension.webhook.pod.mutation.matchConditionExpression=` +* This configuration is an optional parameter. It comes into play only when container network functions (CNFs) are installed in the corresponding release namespace. +* This configuration provides more granular control on top of rules and namespaceSelectors. +* Default value: + ```bash + "((object.metadata.namespace != \"kube-system\") || (object.metadata.namespace == \"kube-system\" && has(object.metadata.labels) && (has(object.metadata.labels.app) && (object.metadata.labels.app == \"commissioning\") || (has(object.metadata.labels.name) && object.metadata.labels.name == \"cert-exporter\") || (has(object.metadata.labels.app) && object.metadata.labels.app == \"descheduler\"))))" + ``` +The referenced matchCondition implies that the pods getting accepted in the kube-system namespace are mutated only if they have at least one of the following labels: app == "commissioning", app == "descheduler", or name == "cert-exporter." Otherwise, they aren't mutated and continue to be pulled from the original source as per the helm chart of CNF/Component/Application. +* Accepted value: Any valid CEL expression. +* This parameter can be set or updated during either network function (NF) extension installation or update. +* This condition comes into play only when the CNF/Component/Application is installed into the namespace as per the rules and namespaceSelectors. 
If more pods are spun up in that namespace, this condition is applied. ++#### Cluster registry +`--config global.networkfunctionextension.enableClusterRegistry=` +* This configuration provisions a registry in the cluster to locally cache artifacts. +* Default values enable lazy loading mode unless global.networkfunctionextension.enableEarlyLoading=true. +* Accepted values: false, true. +* Default value: false. ++`--config global.networkfunctionextension.clusterRegistry.highAvailability.enabled=` +* This configuration provisions the cluster registry in high availability mode if cluster registry is enabled. +* The default value is true, which uses the Nexus Azure Kubernetes Service (NAKS) nexus-shared volume; on AKS, the recommendation is to set this value to false. +* Accepted values: true, false. +* Default value: true. ++`--config global.networkfunctionextension.clusterRegistry.autoScaling.enabled=` +* This configuration provisions the cluster registry pods with horizontal auto scaling. +* Accepted values: true, false. +* Default value: true. ++`--config global.networkfunctionextension.webhook.highAvailability.enabled=` +* This configuration provisions multiple replicas of webhook for high availability. +* Accepted values: true, false. +* Default value: true. ++`--config global.networkfunctionextension.webhook.autoScaling.enabled=` +* This configuration provisions the webhook pods with horizontal auto scaling. +* Accepted values: true, false. +* Default value: true. ++`--config global.networkfunctionextension.enableEarlyLoading=` +* This configuration enables early loading of artifacts into the cluster registry before helm installation or upgrade. +* This configuration can only be enabled when global.networkfunctionextension.enableClusterRegistry=true. +* Accepted values: false, true. +* Default value: false. ++`--config global.networkfunctionextension.clusterRegistry.storageClassName=` +* This configuration must be provided when global.networkfunctionextension.enableClusterRegistry=true. 
+* NetworkFunctionExtension provisions a PVC to locally cache artifacts from this storage class. +* Platform specific values + * AKS: managed-csi + * NAKS(Default): nexus-shared + * NAKS(Non-HA): nexus-volume + * Azure Stack Edge (ASE): managed-premium +* Default value: nexus-shared. ++`--config global.networkfunctionextension.clusterRegistry.storageSize=` +* This configuration must be provided when global.networkfunctionextension.enableClusterRegistry=true. +* This configuration specifies the size reserved for the cluster registry. +* This configuration uses Gi and Ti units for sizing. +* Default value: 100Gi ++#### Side loading ++`--config global.networkfunctionextension.enableLocalRegistry=` +* This configuration allows artifacts to be delivered to the edge via a hardware drive. +* Accepted values: false, true. +* Default value: false. ++### Recommended NFO config for AKS ++The default NFO config enables HA on NAKS, but none of the disk drives on AKS support the ReadWriteMany access mode. Where HA needs to be disabled, use the following config options: ++``` --config global.networkfunctionextension.clusterRegistry.highAvailability.enabled=false``` ++``` --config global.networkfunctionextension.webhook.highAvailability.enabled=false``` ++(optional) ++``` --config global.networkfunctionextension.clusterRegistry.storageClassName=managed-csi``` ++## Update network function extension +The Azure CLI command 'az k8s-extension update' is executed to update the NFO extension. ++## Delete network function extension +The Azure CLI command 'az k8s-extension delete' is executed to delete the NFO extension. ++## Examples +Create a network function extension with auto upgrade. 
+```bash +az k8s-extension create --resource-group myresourcegroup --cluster-name mycluster --name myextension --cluster-type connectedClusters --extension-type Microsoft.Azure.HybridNetwork --scope cluster --config Microsoft.CustomLocation.ServiceAccount=azurehybridnetwork-networkfunction-operator --release-namespace azurehybridnetwork +``` ++Create a network function extension with a pinned version. +```bash +az k8s-extension create --resource-group myresourcegroup --cluster-name mycluster --name myextension --cluster-type connectedClusters --extension-type Microsoft.Azure.HybridNetwork --auto-upgrade-minor-version false --scope cluster --config Microsoft.CustomLocation.ServiceAccount=azurehybridnetwork-networkfunction-operator --release-namespace azurehybridnetwork --version 1.0.2711-7 +``` ++Create a network function extension with cluster registry (default lazy loading mode) feature enabled on NAKS. +```bash +az k8s-extension create --resource-group myresourcegroup --cluster-name mycluster --name myextension --cluster-type connectedClusters --extension-type Microsoft.Azure.HybridNetwork --scope cluster --config Microsoft.CustomLocation.ServiceAccount=azurehybridnetwork-networkfunction-operator --release-namespace azurehybridnetwork --config global.networkfunctionextension.enableClusterRegistry=true --config global.networkfunctionextension.clusterRegistry.storageSize=100Gi +``` ++Create a network function extension with cluster registry (default lazy loading mode) feature enabled on AKS. 
+```bash +az k8s-extension create --resource-group myresourcegroup --cluster-name mycluster --name myextension --cluster-type connectedClusters --extension-type Microsoft.Azure.HybridNetwork --scope cluster --config Microsoft.CustomLocation.ServiceAccount=azurehybridnetwork-networkfunction-operator --release-namespace azurehybridnetwork --config global.networkfunctionextension.enableClusterRegistry=true --config global.networkfunctionextension.clusterRegistry.highAvailability.enabled=false --config global.networkfunctionextension.clusterRegistry.storageClassName=managed-csi --config global.networkfunctionextension.clusterRegistry.storageSize=100Gi +``` ++Create a network function extension with cluster registry (early loading) feature enabled. +```bash +az k8s-extension create --resource-group myresourcegroup --cluster-name mycluster --name myextension --cluster-type connectedClusters --extension-type Microsoft.Azure.HybridNetwork --scope cluster --config Microsoft.CustomLocation.ServiceAccount=azurehybridnetwork-networkfunction-operator --release-namespace azurehybridnetwork --config global.networkfunctionextension.enableClusterRegistry=true --config global.networkfunctionextension.enableEarlyLoading=true --config global.networkfunctionextension.clusterRegistry.storageClassName=managed-csi --config global.networkfunctionextension.clusterRegistry.storageSize=100Gi +``` ++Create a network function extension with side loading feature enabled. +```bash +az k8s-extension create --resource-group myresourcegroup --cluster-name mycluster --name myextension --cluster-type connectedClusters --extension-type Microsoft.Azure.HybridNetwork --scope cluster --config Microsoft.CustomLocation.ServiceAccount=azurehybridnetwork-networkfunction-operator --release-namespace azurehybridnetwork --config global.networkfunctionextension.enableLocalRegistry=true +``` |
partner-solutions | Dynatrace How To Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md | Title: Manage your Azure Native Dynatrace Service integration description: This article describes how to manage Dynatrace on the Azure portal. Previously updated : 02/02/2023 Last updated : 08/28/2024 The column **Logs to Dynatrace** indicates whether the resource is sending logs - _Logs not configured_ - Only Azure resources that have the appropriate resource tags are configured to send logs to Dynatrace. - _Agent not configured_ - Virtual machines without the Dynatrace OneAgent installed don't emit logs to Dynatrace. +## Use one Dynatrace resource with multiple subscriptions ++You can now monitor all your subscriptions through a single Dynatrace resource using **Monitored Subscriptions**. Your experience is simplified because you don't have to set up a Dynatrace resource in every subscription that you intend to monitor. You can monitor multiple subscriptions by linking them to a single Dynatrace resource that is tied to a Dynatrace environment. This provides a single-pane view of all resources across multiple subscriptions. ++To manage multiple subscriptions that you want to monitor, select **Monitored Subscriptions** in the **Dynatrace environment configurations** section of the Resource menu. +++From **Monitored Subscriptions** in the Resource menu, select **Add Subscriptions**. The **Add Subscriptions** experience opens, showing the subscriptions for which you have the *Owner* role and any Dynatrace resources created in those subscriptions that are already linked to the same Dynatrace environment as the current resource. ++If the subscription you want to monitor has a resource already linked to the same Dynatrace org, we recommend that you delete the Dynatrace resources to avoid shipping duplicate data and incurring double the charges.
++Select the subscriptions you want to monitor through the Dynatrace resource and select **Add**. +++If the list isn't updated automatically, select **Refresh** to view the subscriptions and their monitoring status. You might see an intermediate status of *In Progress* while a subscription is being added. When the subscription is successfully added, the status updates to **Active**. If a subscription fails to be added, **Monitoring Status** shows as **Failed**. +++The set of tag rules for metrics and logs defined for the Dynatrace resource applies to all subscriptions that are added for monitoring. Setting separate tag rules for different subscriptions isn't supported. Diagnostics settings are automatically added to resources in the added subscriptions that match the tag rules defined for the Dynatrace resource. ++If you have existing Dynatrace resources that are linked to the account for monitoring, you can end up with duplicate logs and added charges. Ensure you delete redundant Dynatrace resources that are already linked to the account. You can view the list of connected resources and delete the redundant ones. We recommend consolidating subscriptions into the same Dynatrace resource where possible. + ## Monitor virtual machines using Dynatrace OneAgent You can install Dynatrace OneAgent on virtual machines as an extension. Select **Virtual Machines** under **Dynatrace environment config** in the Resource menu. In the working pane, you see a list of all virtual machines in the subscription. |
route-server | About Dual Homed Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/about-dual-homed-network.md | You can build a dual-homed network that involves two or more ExpressRoute connec * Create a route server in each hub VNet that has an ExpressRoute gateway. * Configure BGP peering between the NVA and the route server in the hub VNet.-* [Enable route exchange](quickstart-configure-route-server-portal.md#configure-route-exchange) between the ExpressRoute gateway and the route server in the hub VNet. +* Enable [route exchange](configure-route-server.md#configure-route-exchange) between the ExpressRoute gateway and the route server in the hub VNet. * Make sure “Use Remote Gateway or Remote Route Server” is **disabled** in the spoke virtual network VNet peering configuration. :::image type="content" source="./media/about-dual-homed-network/dual-homed-topology-expressroute.png" alt-text="Diagram of Route Server in a dual-homed topology with ExpressRoute."::: ### How does it work? -In the control plane, the NVA in the hub VNet will learn about on-premises routes from the ExpressRoute gateway through [route exchange](quickstart-configure-route-server-portal.md#configure-route-exchange) with the route server in the hub. In return, the NVA will send the spoke VNet addresses to the ExpressRoute gateway using the same route server. The route server in both the spoke and hub VNets will then program the on-premises network addresses to the virtual machines in their respective virtual network. +In the control plane, the NVA in the hub VNet will learn about on-premises routes from the ExpressRoute gateway through [route exchange](configure-route-server.md#configure-route-exchange) with the route server in the hub. In return, the NVA will send the spoke VNet addresses to the ExpressRoute gateway using the same route server.
The route server in both the spoke and hub VNets will then program the on-premises network addresses to the virtual machines in their respective virtual network. > [!IMPORTANT] > BGP prevents a loop by verifying the AS number in the AS Path. If the receiving route server sees its own AS number in the AS Path of a received BGP packet, it will drop the packet. In this example, both route servers have the same AS number, 65515. To prevent each route server from dropping the routes from the other route server, the NVA must apply **as-override** BGP policy when peering with each route server. |
route-server | Expressroute Vpn Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md | -Azure Route Server supports not only third-party network virtual appliances (NVA) in Azure but also seamlessly integrates with ExpressRoute and Azure VPN gateways. You don’t need to configure or manage the BGP peering between the gateway and Azure Route Server. You can enable route exchange between the gateways and Azure Route Server by enabling [branch-to-branch](quickstart-configure-route-server-portal.md#configure-route-exchange) in Azure portal. If you prefer, you can use [Azure PowerShell](quickstart-configure-route-server-powershell.md#route-exchange) or [Azure CLI](quickstart-configure-route-server-cli.md#configure-route-exchange) to enable the route exchange with the Route Server. +Azure Route Server supports not only third-party network virtual appliances (NVA) in Azure but also seamlessly integrates with ExpressRoute and Azure VPN gateways. You don’t need to configure or manage the BGP peering between the gateway and Azure Route Server. You can enable route exchange between the gateways and Azure Route Server by enabling [branch-to-branch](configure-route-server.md?tabs=portal#configure-route-exchange) in Azure portal. If you prefer, you can use [Azure PowerShell](configure-route-server.md?tabs=powershell#configure-route-exchange) or [Azure CLI](configure-route-server.md?tabs=cli#configure-route-exchange) to enable the route exchange with the Route Server. [!INCLUDE [downtime note](../../includes/route-server-note-vng-downtime.md)] The following diagram shows an example of using Route Server to exchange routes You can also replace the SDWAN appliance with Azure VPN gateway. Since Azure VPN and ExpressRoute gateways are fully managed, you only need to enable the route exchange for the two on-premises networks to talk to each other.
The Azure VPN and ExpressRoute gateway must be deployed in the same virtual network as Route Server in order for BGP peering to be successfully established. -If you enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes dynamically over BGP. For more information, see [How to configure BGP for Azure VPN Gateway](../vpn-gateway/bgp-howto.md). If you don’t enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes that are defined in the local network gateway of *On-premises 1*. For more information, see [Create a local network gateway](../vpn-gateway/tutorial-site-to-site-portal.md#LocalNetworkGateway). Whether you enable BGP on the VPN gateway or not, the gateway advertises the routes it learns to the Route Server if route exchange is enabled. For more information, see [Configure route exchange](quickstart-configure-route-server-portal.md#configure-route-exchange). +If you enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes dynamically over BGP. For more information, see [How to configure BGP for Azure VPN Gateway](../vpn-gateway/bgp-howto.md). If you don’t enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes that are defined in the local network gateway of *On-premises 1*. For more information, see [Create a local network gateway](../vpn-gateway/tutorial-site-to-site-portal.md#LocalNetworkGateway). Whether you enable BGP on the VPN gateway or not, the gateway advertises the routes it learns to the Route Server if route exchange is enabled. For more information, see [Configure route exchange](configure-route-server.md?tabs=portal#configure-route-exchange).
[!INCLUDE [active-active vpn gateway](../../includes/route-server-note-vpn-gateway.md)] :::image type="content" source="./media/expressroute-vpn-support/expressroute-and-vpn-with-route-server.png" alt-text="Diagram showing ExpressRoute and VPN gateways exchanging routes through Azure Route Server."::: ## Considerations-* When the same route is learned over ExpressRoute, Azure VPN or an SDWAN appliance, the ExpressRoute network will be preferred by default. You can configure routing preference to influence Route Server route selection. For more information, see [Routing preference (preview)](hub-routing-preference.md). +* When the same route is learned over ExpressRoute, Azure VPN or an SDWAN appliance, the ExpressRoute network will be preferred by default. You can configure routing preference to influence Route Server route selection. For more information, see [Routing preference](hub-routing-preference.md). * If **branch-to-branch** is enabled and your on-premises advertises a route with Azure BGP community 65517:65517, then the ExpressRoute gateway will drop this route. ## Related content |
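The routing preference described above can also be changed after deployment from the command line. A sketch, assuming an existing route server named myRouteServer in a resource group myRG (both names are placeholders):

```bash
# Switch the route selection preference on an existing route server.
# Accepted values are ExpressRoute (the default), VpnGateway, and ASPath.
az network routeserver update \
  --name myRouteServer \
  --resource-group myRG \
  --hub-routing-preference ASPath
```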
route-server | Quickstart Configure Route Server Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-portal.md | Title: 'Quickstart: Create and configure Route Server - Azure portal' -description: In this quickstart, you learn how to create and configure an Azure Route Server using the Azure portal. + Title: 'Quickstart: Create Azure Route Server - Azure portal' +description: In this quickstart, you learn how to create an Azure Route Server using the Azure portal. Previously updated : 08/13/2024 Last updated : 09/17/2024 -# Quickstart: Create and configure Route Server using the Azure portal +# Quickstart: Create Azure Route Server using the Azure portal -In this quickstart, you configure Azure Route Server to peer with a Network Virtual Appliance (NVA) in your virtual network using the Azure portal. Azure Route Server will learn routes from the NVA and program them on the virtual machines in the virtual network. Azure Route Server will also advertise the virtual network routes to the NVA. For more information, read [Azure Route Server](overview.md). +In this quickstart, you learn how to create an Azure Route Server to peer with a Network Virtual Appliance (NVA) in your virtual network using the Azure portal. :::image type="content" source="media/quickstart-configure-route-server-portal/environment-diagram.png" alt-text="Diagram of Route Server deployment environment using the Azure portal." lightbox="media/quickstart-configure-route-server-portal/environment-diagram.png"::: +If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. + [!INCLUDE [route server preview note](../../includes/route-server-note-preview-date.md)] ## Prerequisites -* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). 
-* Review the [service limits for Azure Route Server](route-server-faq.md#limitations). +- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- Review the [service limits for Azure Route Server](route-server-faq.md#limitations). ## Create a Route Server -### Sign in to your Azure account and select your subscription --From a browser, navigate to the [Azure portal](https://portal.azure.com) and sign in with your Azure account. --### Create a Route Server +In this section, you create a route server. 1. Sign in to [Azure portal](https://portal.azure.com), search and select **Route Server**. -1. Select **+ Create new route server**. +1. In the search box at the top of the portal, enter ***route server***, and select **Route Server** from the search results. ++ :::image type="content" source="./media/quickstart-configure-route-server-portal/portal-search.png" alt-text="Screenshot of searching for Route Server in the Azure portal." lightbox="./media/quickstart-configure-route-server-portal/portal-search.png"::: - :::image type="content" source="./media/quickstart-configure-route-server-portal/route-server-landing-page.png" alt-text="Screenshot of Route Server landing page."::: +1. On the **Route Servers** page, select **+ Create**. -1. On the **Create a Route Server** page, enter, or select the required information. +1. On the **Basics** tab of **Create a Route Server**, enter, or select the following information: - :::image type="content" source="./media/quickstart-configure-route-server-portal/create-route-server-page.png" alt-text="Screenshot of create Route Server page."::: | Settings | Value | |-|-|- | Subscription | Select the Azure subscription you want to use to deploy the Route Server. | - | Resource group | Select a resource group to create the Route Server in. If you don't have an existing resource group, you can create a new one. | - | Name | Enter a name for the Route Server. 
| - | Region | Select the region the Route Server will be created in. Select the same region as the virtual network you created previously to see the virtual network in the drop-down. | - | Virtual Network | Select the virtual network in which the Route Server will be created. You can create a new virtual network or use an existing virtual network. If you're using an existing virtual network, make sure the existing virtual network has enough space for a minimum of a /27 subnet to accommodate the Route Server subnet requirement. If you don't see your virtual network from the dropdown, make sure you've selected the correct Resource Group or region. | - | Subnet | Once you've created or select a virtual network, the subnet field will appear. This subnet is dedicated to Route Server only. Select **Manage subnet configuration** and create the Azure Route Server subnet. Select **+ Subnet** and create a subnet using the following guidelines:</br><br>- The subnet must be named *RouteServerSubnet*.</br><br>- The subnet must be a minimum of /27 or larger.</br> | - | Public IP address | Create a new or select an existing Standard public IP resource to assign to the Route Server. To ensure connectivity to the backend service that manages the Route Server configuration, a public IP address is required. | --1. Select **Review + create**, review the summary, and then select **Create**. + | **Project details** | | + | Subscription | Select the Azure subscription that you want to use to deploy the route server. | + | Resource group | Select **Create new**. <br>In **Name**, enter ***RouteServerRG***. <br>Select **OK**. | + | **Instance details** | | + | Name | Enter ***myRouteServer***. | + | Region | Select **West US** or any region you prefer to create the route server in. | + | Routing Preference | Select **ExpressRoute**. Other available options: **VPN** and **ASPath**. | + | **Configure virtual networks** | | + | Virtual network | Select **Create new**. 
<br>In **Name**, enter ***myRouteServerVNet***. <br>In **Address range**, enter ***10.0.0.0/16***. <br>In **Subnet name** and **Address range**, enter ***RouteServerSubnet*** and ***10.0.1.0/27*** respectively. <br>Select **OK**. | + | Subnet | Once you've created the virtual network and subnet, the **RouteServerSubnet** will populate. <br>- The subnet must be named *RouteServerSubnet*.<br>- The subnet must be a minimum of /27 or larger. | + | **Public IP address** | | + | Public IP address | Select **Create new**, or select an existing Standard public IP resource to assign to the Route Server. To ensure connectivity to the backend service that manages the Route Server configuration, a public IP address is required. | + | Public IP address name | Enter ***myRouteServerVNet-ip***. A Standard public IP address is required to ensure connectivity to the backend service that manages the route server. | ++ :::image type="content" source="./media/quickstart-configure-route-server-portal/create-route-server.png" alt-text="Screenshot that shows the Basics tab of creating a route server." lightbox="./media/quickstart-configure-route-server-portal/create-route-server.png"::: ++1. Select **Review + create** and then select **Create** after the validation passes. [!INCLUDE [Deployment note](../../includes/route-server-note-creation-time.md)] ## Set up peering with NVA -The section will help you configure BGP peering with your NVA. --1. Go to [Route Server](./overview.md) in the Azure portal and select the Route Server you want to configure. +In this section, you learn how to configure BGP peering with a network virtual appliance (NVA). - :::image type="content" source="./media/quickstart-configure-route-server-portal/select-route-server.png" alt-text="Screenshot of Route Server list."::: +1. Once the deployment is complete, select **Go to resource** to go to **myRouteServer**. -1. Select **Peers** under *Settings* in the left navigation panel.
Then select **+ Add** to add a new peer. +1. Under **Settings**, select **Peers**. - :::image type="content" source="./media/quickstart-configure-route-server-portal/peers-landing-page.png" alt-text="Screenshot of peers landing page."::: +1. Select **+ Add** to add a peer. -1. Enter the following information about your NVA peer. +1. On the **Add Peer** page, enter the following information: - :::image type="content" source="./media/quickstart-configure-route-server-portal/add-peer-page.png" alt-text="Screenshot of add peer page."::: + | Setting | Value | + | - | -- | + | Name | A name to identify the peer. This example uses **myNVA**. | + | ASN | The Autonomous System Number (ASN) of the NVA. For more information, see [What Autonomous System Numbers (ASNs) can I use?](route-server-faq.md#what-autonomous-system-numbers-asns-can-i-use) | + | IPv4 Address | The private IP address of the NVA that **myRouteServer** will communicate with to establish BGP. | - | Settings | Value | - |-|-| - | Name | Give a name for the peering between your Route Server and the NVA. | - | ASN | Enter the Autonomous Systems Number (ASN) of your NVA. | - | IPv4 Address | Enter the IP address of the NVA the Route Server will communicate with to establish BGP. | +1. Select **Add** to add the peer. -1. Select **Add** to add this peer. + :::image type="content" source="./media/quickstart-configure-route-server-portal/add-peer.png" alt-text="Screenshot that shows how to add the NVA to the route server as a peer." lightbox="./media/quickstart-configure-route-server-portal/add-peer.png"::: ## Complete the configuration on the NVA -You'll need the Azure Route Server's peer IPs and ASN to complete the configuration on your NVA to establish a BGP session. You can obtain this information from the overview page your Route Server. +To complete the peering setup, you must configure the NVA to establish a BGP session with the route server's peer IPs and ASN. 
You can find the peer IPs and ASN of **myRouteServer** in the **Overview** page: [!INCLUDE [NVA peering note](../../includes/route-server-note-nva-peering.md)] -## Configure route exchange --If you have a virtual network gateway (ExpressRoute or VPN) in the same virtual network, you can enable *branch-to-branch* traffic to exchange routes between the gateway and the Route Server. ----1. Go to the Route Server that you want to configure. --1. Select **Configuration** under **Settings** in the left navigation panel. --1. Select **Enable** for the **Branch-to-Branch** setting and then select **Save**. +## Clean up resources - :::image type="content" source="./media/quickstart-configure-route-server-portal/enable-route-exchange.png" alt-text="Screenshot of how to enable route exchange."::: +When no longer needed, you can delete all resources created in this quickstart by deleting **RouteServerRG** resource group: -## Clean up resources +1. In the search box at the top of the portal, enter ***RouteServerRG***. Select **RouteServerRG** from the search results. -If you no longer need the Azure Route Server, select **Delete** from the overview page to deprovision the Route Server. +1. Select **Delete resource group**. +1. In **Delete a resource group**, enter ***RouteServerRG***, and then select **Delete**. -## Next steps +1. Select **Delete** to confirm the deletion of the resource group and all its resources. -After you create the Azure Route Server, continue to learn about how Azure Route Server interacts with ExpressRoute and VPN Gateways: +## Next step > [!div class="nextstepaction"]-> [Azure ExpressRoute and Azure VPN support](expressroute-vpn-support.md) +> [Configure and manage Azure Route Server](configure-route-server.md) |
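For readers who prefer scripting, the portal steps in this quickstart roughly map to a handful of Azure CLI calls. This is a sketch using the same resource names as above; the NVA's ASN (65001) and private IP address (10.0.2.4) are placeholders for your own appliance:

```bash
# Resource group, VNet with the dedicated RouteServerSubnet, and a Standard public IP
az group create --name RouteServerRG --location westus
az network vnet create --resource-group RouteServerRG --name myRouteServerVNet \
  --address-prefix 10.0.0.0/16 --subnet-name RouteServerSubnet --subnet-prefix 10.0.1.0/27
az network public-ip create --resource-group RouteServerRG --name myRouteServerVNet-ip --sku Standard

# Create the route server in the RouteServerSubnet
subnet_id=$(az network vnet subnet show --resource-group RouteServerRG \
  --vnet-name myRouteServerVNet --name RouteServerSubnet --query id --output tsv)
az network routeserver create --resource-group RouteServerRG --name myRouteServer \
  --hosted-subnet "$subnet_id" --public-ip-address myRouteServerVNet-ip

# Add the NVA as a BGP peer
az network routeserver peering create --resource-group RouteServerRG \
  --routeserver myRouteServer --name myNVA --peer-asn 65001 --peer-ip 10.0.2.4
```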
sentinel | Billing Pre Purchase Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing-pre-purchase-plan.md | Purchase Microsoft Sentinel pre-purchase plans in the [Azure portal reservations 1. Go to the [Azure portal](https://portal.azure.com) 1. Navigate to the **Reservations** service.-1. On the **Purchase reservations page**, select **Microsoft Sentinel Pre-Purchase Plan**. +1. On the **Purchase reservations page**, select **Microsoft Sentinel Pre-Purchase Plan**. + :::image type="content" source="media/sentinel-plan.png" alt-text="Screenshot showing Microsoft Sentinel pre-purchase plan." lightbox="media/sentinel-plan.png"::: 1. On the **Select the product you want to purchase** page, select a subscription. Use the **Subscription** list to select the subscription used to pay for the reserved capacity. The payment method of the subscription is charged the upfront costs for the reserved capacity. Charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. 1. Select a scope. - **Single resource group scope** - Applies the reservation discount to the matching resources in the selected resource group only. Purchase Microsoft Sentinel pre-purchase plans in the [Azure portal reservations - **Management group** - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope. 1. Select how many Microsoft Sentinel commit units you want to purchase. - `Need Sentinel screenshot here` :::image type="content" source="media/sentinel-pre-purchase-plan.png" alt-text="Screenshot showing Microsoft Sentinel pre-purchase plan discount tiers and their term lengths." lightbox="media/sentinel-pre-purchase-plan.png"::: 1. Choose to automatically renew the pre-purchase reservation. *The setting is configured to renew automatically by default*. 
For more information, see [Renew a reservation](../cost-management-billing/reservations/reservation-renew.md). |
sentinel | Troubleshooting Cef Syslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/troubleshooting-cef-syslog.md | -# Troubleshoot your CEF or Syslog data connector +# [Deprecated] Troubleshoot your CEF or Syslog data connector + > [!CAUTION] > This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life). |
storage | Storage Srp Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-srp-overview.md | The following table shows the Azure Storage client library for resource manageme | - | | - | | | **Azure.ResourceManager.Storage** | [Reference](/dotnet/api/azure.resourcemanager.storage) | [NuGet](https://www.nuget.org/packages/Azure.ResourceManager.Storage/) | [GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/storage/Azure.ResourceManager.Storage) | -To learn more about using the Azure Storage management library for specific resource management scenarios, see the [Azure Storage management library developer guide for .NET](../blobs/storage-blob-dotnet-get-started.md). +To learn more about using the Azure Storage management library for specific resource management scenarios, see the [Azure Storage management library developer guide for .NET](storage-srp-dotnet-get-started.md). ## [Java](#tab/java) The following table shows the Azure Storage client libraries for resource manage - [TypeScript](../blobs/storage-blob-typescript-get-started.md) - [Python](../blobs/storage-blob-python-get-started.md) - [Go](../blobs/storage-blob-go-get-started.md)-- To learn more about using the Azure Storage management library for specific resource management scenarios, see [Get started with Azure Storage management library for .NET](storage-srp-dotnet-get-started.md).+- To learn more about using the Azure Storage management library for specific resource management scenarios, see [Get started with Azure Storage management library for .NET](storage-srp-dotnet-get-started.md). |
synapse-analytics | Get Started Analyze Sql On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-on-demand.md | In this tutorial, you'll learn how to analyze data with serverless SQL pool. Serverless SQL pools let you use SQL without having to reserve capacity. Billing for a serverless SQL pool is based on the amount of data processed to run the query and not the number of nodes used to run the query. -Every workspace comes with a pre-configured serverless SQL pool called **Built-in**. +Every workspace comes with a preconfigured serverless SQL pool called **Built-in**. ## Analyze NYC Taxi data with a serverless SQL pool > [!NOTE] > Make sure you have [placed the sample data into the primary storage account](get-started-create-workspace.md#place-sample-data-into-the-primary-storage-account) -1. In Synapse Studio, go to the **Develop** hub +1. In the Synapse Studio, go to the **Develop** hub 1. Create a new SQL script.-1. Paste the following code into the script. +1. Paste the following code into the script. (Update `contosolake` to the name of your storage account and `users` with the name of your container.) ```sql SELECT Every workspace comes with a pre-configured serverless SQL pool called **Built-i FORMAT='PARQUET' ) AS [result] ```-1. Select **Run**. ++1. Select **Run**. Data exploration is just a simplified scenario where you can understand the basic characteristics of your data. Learn more about data exploration and analysis in this [tutorial](sql/tutorial-data-analyst.md). However, as you continue data exploration, you might want to create some utility - Database users with the permissions to access some data sources or database objects. - Utility views, procedures, and functions that you can use in the queries. -1. Use the `master` database to create a separate database for custom database objects. Custom database objects, cannot be created in the `master` database. +1. 
Use the `master` database to create a separate database for custom database objects. Custom database objects can't be created in the `master` database. ```sql CREATE DATABASE DataExplorationDB However, as you continue data exploration, you might want to create some utility ``` > [!IMPORTANT]- > Use a collation with `_UTF8` suffix to ensure that UTF-8 text is properly converted to `VARCHAR` columns. `Latin1_General_100_BIN2_UTF8` provides the best performance in the queries that read data from Parquet files and Azure Cosmos DB containers. For more information on changing collations, refer to [Collation types supported for Synapse SQL](sql/reference-collation-types.md). + > Use a collation with `_UTF8` suffix to ensure that UTF-8 text is properly converted to `VARCHAR` columns. `Latin1_General_100_BIN2_UTF8` provides the best performance in the queries that read data from Parquet files and Azure Cosmos DB containers. For more information on changing collations, see [Collation types supported for Synapse SQL](sql/reference-collation-types.md). 1. Switch the database context from `master` to `DataExplorationDB` using the following command. You can also use the UI control **use database** to switch your current database: However, as you continue data exploration, you might want to create some utility USE DataExplorationDB ``` -1. From `DataExplorationDB`, create utility objects such as credentials and data sources. +1. From `DataExplorationDB` create utility objects such as credentials and data sources. ```sql CREATE EXTERNAL DATA SOURCE ContosoLake |
synapse-analytics | Synapse Workspace Synapse Rbac Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md | -The article describes the built-in Synapse RBAC (role-based access control) roles, the permissions they grant, and the scopes at which they can be used. +The article describes the built-in Synapse RBAC (role-based access control) roles, the permissions they grant, and the scopes at which they can be used. For more information on reviewing and assigning Synapse role memberships, see [how to review Synapse RBAC role assignments](./how-to-review-synapse-rbac-role-assignments.md) and [how to assign Synapse RBAC roles](./how-to-manage-synapse-rbac-role-assignments.md). The following table describes the built-in roles and the scopes at which they ca |Role |Permissions|Scopes| |||--|-|Synapse Administrator |Full Synapse access to serverless and dedicated SQL pools, Data Explorer pools, Apache Spark pools, and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts. Includes Compute Operator, Linked Data Manager, and Credential User permissions on the workspace system identity credential. Includes assigning Synapse RBAC roles. In addition to Synapse Administrator, Azure Owners can also assign Synapse RBAC roles. Azure permissions are required to create, delete, and manage compute resources. Synapse RBAC roles can be assigned even when the associated subscription is disabled.</br></br>_Can read and write artifacts</br> Can do all actions on Spark activities.</br> Can view Spark pool logs</br> Can view saved notebook and pipeline output </br> Can use the secrets stored by linked services or credentials</br>Can assign and revoke Synapse RBAC roles at current scope_|Workspace </br> Spark pool<br/>Integration runtime </br>Linked service</br>Credential | -|Synapse Apache Spark Administrator</br>|Full Synapse access to Apache Spark Pools. 
Create, read, update, and delete access to published Spark job definitions, notebooks and their outputs, and to libraries, linked services, and credentials.  Includes read access to all other published code artifacts. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can do all actions on Spark artifacts</br>Can do all actions on Spark activities_|Workspace</br>Spark pool| -|Synapse SQL Administrator|Full Synapse access to serverless SQL pools. Create, read, update, and delete access to published SQL scripts, credentials, and linked services.  Includes read access to all other published code artifacts.  Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>*Can do all actions on SQL scripts<br/>Can connect to SQL serverless endpoints with SQL `db_datareader`, `db_datawriter`, `connect`, and `grant` permissions*|Workspace| -|Synapse Contributor|Full Synapse access to Apache Spark pools and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts and their outputs, including scheduled pipelines, credentials and linked services.  Includes compute operator permissions. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can read and write artifacts</br>Can view saved notebook and pipeline output</br>Can do all actions on Spark activities</br>Can view Spark pool logs_|Workspace </br> Spark pool<br/> Integration runtime| +|Synapse Administrator |Full Synapse access to serverless and dedicated SQL pools, Data Explorer pools, Apache Spark pools, and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts. Includes Compute Operator, Linked Data Manager, and Credential User permissions on the workspace system identity credential. Includes assigning Synapse RBAC roles. 
In addition to Synapse Administrator, Azure Owners can also assign Synapse RBAC roles. Azure permissions are required to create, delete, and manage compute resources. Synapse RBAC roles can be assigned even when the associated subscription is disabled.</br></br>_Can read and write artifacts</br> Can do all actions on Spark activities.</br> Can view Spark pool logs</br> Can view saved notebook and pipeline output </br> Can use the secrets stored by linked services or credentials</br>Can assign and revoke Synapse RBAC roles at current scope_|Workspace </br> Spark pool<br/>Integration runtime </br>Linked service</br>Credential | +|Synapse Apache Spark Administrator</br>|Full Synapse access to Apache Spark Pools. Create, read, update, and delete access to published Spark job definitions, notebooks, and their outputs, and to libraries, linked services, and credentials. Includes read access to all other published code artifacts. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can do all actions on Spark artifacts</br>Can do all actions on Spark activities_|Workspace</br>Spark pool| +|Synapse SQL Administrator|Full Synapse access to serverless SQL pools. Create, read, update, and delete access to published SQL scripts, credentials, and linked services. Includes read access to all other published code artifacts. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>*Can do all actions on SQL scripts<br/>Can connect to SQL serverless endpoints with SQL `db_datareader`, `db_datawriter`, `connect`, and `grant` permissions*|Workspace| +|Synapse Contributor|Full Synapse access to Apache Spark pools and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts and their outputs, including scheduled pipelines, credentials, and linked services. Includes compute operator permissions. 
Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can read and write artifacts</br>Can view saved notebook and pipeline output</br>Can do all actions on Spark activities</br>Can view Spark pool logs_|Workspace </br> Spark pool<br/> Integration runtime| |Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs, including scheduled pipelines. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace-|Synapse Artifact User|Read access to published code artifacts and their outputs. Can create new artifacts but can't publish changes or run code without additional permissions.|Workspace -|Synapse Compute Operator |Submit Spark jobs and notebooks and view logs.  Includes canceling Spark jobs submitted by any user. Requires additional use credential permissions on the workspace system identity to run pipelines, view pipeline runs and outputs. </br></br>_Can submit and cancel jobs, including jobs submitted by others</br>Can view Spark pool logs_|Workspace</br>Spark pool</br>Integration runtime| -|Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for pipeline runs and completed notebooks. Includes ability to list and view details of Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires additional permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace | +|Synapse Artifact User|Read access to published code artifacts and their outputs. Can create new artifacts but can't publish changes or run code without more permissions.|Workspace +|Synapse Compute Operator |Submit Spark jobs and notebooks and view logs. Includes canceling Spark jobs submitted by any user. 
Requires other use credential permissions on the workspace system identity to run pipelines, view pipeline runs and outputs. </br></br>_Can submit and cancel jobs, including jobs submitted by others</br>Can view Spark pool logs_|Workspace</br>Spark pool</br>Integration runtime| +|Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for pipeline runs and completed notebooks. Includes ability to list and view details of Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires other permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace | |Synapse Credential User|Runtime and configuration-time use of secrets within credentials and linked services in activities like pipeline runs. To run pipelines, this role is required, scoped to the workspace system identity. </br></br>_Scoped to a credential, permits access to data via a linked service that is protected by the credential (may also require compute use permission) </br>Allows execution of pipelines protected by the workspace system identity credential_|Workspace </br>Linked Service</br>Credential |Synapse Linked Data Manager|Creation and management of managed private endpoints, linked services, and credentials. Can create managed private endpoints that use linked services protected by credentials|Workspace|-|Synapse User|List and view details of SQL pools, Apache Spark pools, Integration runtimes, and published linked services and credentials. Doesn't include other published code artifacts.  Can create new artifacts but can't run or publish without additional permissions. </br></br>_Can list and read Spark pools, Integration runtimes._|Workspace, Spark pool</br>Linked service </br>Credential| +|Synapse User|List and view details of SQL pools, Apache Spark pools, Integration runtimes, and published linked services and credentials. Doesn't include other published code artifacts. 
Can create new artifacts but can't run or publish without more permissions. </br></br>_Can list and read Spark pools, Integration runtimes._|Workspace, Spark pool</br>Linked service </br>Credential| ## Synapse RBAC roles and the actions they permit > [!NOTE] >- All actions listed in the tables below are prefixed, "Microsoft.Synapse/..."</br>->- All artifact read, write, and delete actions are with respect to published artifacts in the live service. These permissions do not affect access to artifacts in a connected Git repo. +>- All artifact read, write, and delete actions are with respect to published artifacts in the live service. These permissions do not affect access to artifacts in a connected Git repo. The following table lists the built-in roles and the actions/permissions that each support. -Role|Actions |---Synapse Administrator|workspaces/read</br>workspaces/roleAssignments/write, delete</br>workspaces/managedPrivateEndpoint/write, delete</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/kqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action</br>workspaces/linkedServices/useSecret/action</br>workspaces/credentials/useSecret/action</br>workspaces/linkConnections/read</br>workspaces/linkConnections/write</br>workspaces/linkConnections/delete</br>workspaces/linkConnections/useCompute/action| +|Role|Actions| +|--|--| 
+|Synapse Administrator|workspaces/read</br>workspaces/roleAssignments/write, delete</br>workspaces/managedPrivateEndpoint/write, delete</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/kqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action</br>workspaces/linkedServices/useSecret/action</br>workspaces/credentials/useSecret/action</br>workspaces/linkConnections/read</br>workspaces/linkConnections/write</br>workspaces/linkConnections/delete</br>workspaces/linkConnections/useCompute/action| |Synapse Apache Spark Administrator|workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/notebooks/viewOutputs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete| |Synapse SQL Administrator|workspaces/read</br>workspaces/artifacts/read</br>workspaces/sqlScripts/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete| |Synapse 
Contributor|workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/kqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action</br>workspaces/linkConnections/read</br>workspaces/linkConnections/write</br>workspaces/linkConnections/delete</br>workspaces/linkConnections/useCompute/action| Synapse Administrator|workspaces/read</br>workspaces/roleAssignments/write, dele The following table lists Synapse actions and the built-in roles that permit these actions: -Action|Role |---workspaces/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator </br>Synapse Monitoring Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User -workspaces/roleAssignments/write, delete|Synapse Administrator -workspaces/managedPrivateEndpoint/write, delete|Synapse Administrator</br>Synapse Linked Data Manager -workspaces/bigDataPools/useCompute/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator </br>Synapse Monitoring Operator -workspaces/bigDataPools/viewLogs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator 
-workspaces/integrationRuntimes/useCompute/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator</br>Synapse Monitoring Operator -workspaces/integrationRuntimes/viewLogs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator</br>Synapse Monitoring Operator -workspaces/linkConnections/read|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator -workspaces/linkConnections/useCompute/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator -workspaces/artifacts/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User -workspaces/notebooks/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher -workspaces/sparkJobDefinitions/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher -workspaces/sqlScripts/write, delete|Synapse Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher -workspaces/kqlScripts/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher -workspaces/dataFlows/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher -workspaces/pipelines/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher -workspaces/linkConnections/write, delete|Synapse Administrator</br>Synapse Contributor -workspaces/triggers/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher -workspaces/datasets/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher -workspaces/libraries/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact 
Publisher -workspaces/linkedServices/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Linked Data Manager -workspaces/credentials/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Linked Data Manager -workspaces/notebooks/viewOutputs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User -workspaces/pipelines/viewOutputs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User -workspaces/linkedServices/useSecret/action|Synapse Administrator</br>Synapse Credential User -workspaces/credentials/useSecret/action|Synapse Administrator</br>Synapse Credential User +|Action|Role| +|--|--| +|workspaces/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator </br>Synapse Monitoring Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User| +|workspaces/roleAssignments/write, delete|Synapse Administrator| +|workspaces/managedPrivateEndpoint/write, delete|Synapse Administrator</br>Synapse Linked Data Manager| +|workspaces/bigDataPools/useCompute/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator </br>Synapse Monitoring Operator| +|workspaces/bigDataPools/viewLogs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator| +|workspaces/integrationRuntimes/useCompute/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator</br>Synapse Monitoring Operator| 
+|workspaces/integrationRuntimes/viewLogs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator</br>Synapse Monitoring Operator| +|workspaces/linkConnections/read|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator| +|workspaces/linkConnections/useCompute/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator| +|workspaces/artifacts/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User| +|workspaces/notebooks/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher| +|workspaces/sparkJobDefinitions/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher| +|workspaces/sqlScripts/write, delete|Synapse Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher| +|workspaces/kqlScripts/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher| +|workspaces/dataFlows/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher| +|workspaces/pipelines/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher| +|workspaces/linkConnections/write, delete|Synapse Administrator</br>Synapse Contributor| +|workspaces/triggers/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher| +|workspaces/datasets/write, delete|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher| +|workspaces/libraries/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher| +|workspaces/linkedServices/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL 
Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Linked Data Manager| +|workspaces/credentials/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Linked Data Manager| +|workspaces/notebooks/viewOutputs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User| +|workspaces/pipelines/viewOutputs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User| +|workspaces/linkedServices/useSecret/action|Synapse Administrator</br>Synapse Credential User| +|workspaces/credentials/useSecret/action|Synapse Administrator</br>Synapse Credential User| ## Synapse RBAC scopes and their supported roles |
update-manager | Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md | Following is the list of supported images and no other marketplace images releas | **Publisher**| **Offer** | **Plan**|**Unsupported image(s)** | |-|-|--| |-| | cis-windows-server-2012-r2-v2-2-1-l2 | cis-ws2012-r2-l2 || -| | cis-windows-server-2016-v1-0-0-l1 | cis--l1 || -| | cis-windows-server-2016-v1-0-0-l2 | cis-ws2016-l2 || -| | cis-windows-server-2019-v1-0-0-l1 | cis-ws2019-l1 || -| | cis-windows-server-2019-v1-0-0-l2 | cis-ws2019-l2 || -| | cis-windows-server-2022-l1| cis-windows-server-2022-l1 </br> cis-windows-server-2022-l1-gen2 | | -| | cis-windows-server-2022-l2 | cis-windows-server-2022-l2 </br> cis-windows-server-2022-l2-gen2 | | -| | cis-windows-server| cis-windows-server2016-l1-gen1 </br> cis-windows-server2019-l1-gen1 </br> cis-windows-server2019-l1-gen2 </br> cis-windows-server2019-l2-gen1 </br> cis-windows-server2022-l1-gen2 </br> cis-windows-server2022-l2-gen2 </br> cis-windows-server2022-l1-gen1 | | +|center-for-internet-security-inc | cis-windows-server-2012-r2-v2-2-1-l2 | cis-ws2012-r2-l2 || +|center-for-internet-security-inc | cis-windows-server-2016-v1-0-0-l1 | cis--l1 || +| center-for-internet-security-inc| cis-windows-server-2016-v1-0-0-l2 | cis-ws2016-l2 || +|center-for-internet-security-inc | cis-windows-server-2019-v1-0-0-l1 | cis-ws2019-l1 || +|center-for-internet-security-inc | cis-windows-server-2019-v1-0-0-l2 | cis-ws2019-l2 || +|center-for-internet-security-inc| cis-windows-server-2022-l1| cis-windows-server-2022-l1 </br> cis-windows-server-2022-l1-gen2 | | +|center-for-internet-security-inc | cis-windows-server-2022-l2 | cis-windows-server-2022-l2 </br> cis-windows-server-2022-l2-gen2 | | +|center-for-internet-security-inc | cis-windows-server| cis-windows-server2016-l1-gen1 </br> cis-windows-server2019-l1-gen1 </br> cis-windows-server2019-l1-gen2 </br> cis-windows-server2019-l2-gen1 </br> 
cis-windows-server2022-l1-gen2 </br> cis-windows-server2022-l2-gen2 </br> cis-windows-server2022-l1-gen1 | | | | hpc2019-windows-server-2019| hpc2019-windows-server-2019|| | | sql2016sp2-ws2016 | standard| | | sql2017-ws2016 | enterprise | Following is the list of supported images and no other marketplace images releas | |centos-hpc | 7.1, 7.3, 7.4 | | |centos-lvm | 7-lvm-gen2 | | |centos-lvm | 7-lvm, 8-lvm |-| |cis-oracle-linux-8-l1 | cis-oracle8-l1|| -| |cis-rhel | cis-redhat7-l1-gen1 </br> cis-redhat8-l1-gen1 </br> cis-redhat8-l2-gen1 </br> cis-redhat9-l1-gen1 </br> cis-redhat9-l1-gen2| | -| |cis-rhel-7-l2 | cis-rhel7-l2 | | -| |cis-rhel-8-l1 | | | -| |cis-rhel-8-l2 | cis-rhel8-l2 | | -| |cis-rhel9-l1 | cis-rhel9-l1 </br> cis-rhel9-l1-gen2 || -| |cis-ubuntu | cis-ubuntu1804-l1 </br> cis-ubuntulinux2004-l1-gen1 </br> cis-ubuntulinux2204-l1-gen1 </br> cis-ubuntulinux2204-l1-gen2 || +|center-for-internet-security-inc |cis-oracle-linux-8-l1 | cis-oracle8-l1|| +| center-for-internet-security-inc |cis-rhel | cis-redhat7-l1-gen1 </br> cis-redhat8-l1-gen1 </br> cis-redhat8-l2-gen1 </br> cis-redhat9-l1-gen1 </br> cis-redhat9-l1-gen2| | +|center-for-internet-security-inc |cis-rhel-7-l2 | cis-rhel7-l2 | | +|center-for-internet-security-inc |cis-rhel-8-l1 | | | +|center-for-internet-security-inc |cis-rhel-8-l2 | cis-rhel8-l2 | | +| center-for-internet-security-inc |cis-rhel9-l1 | cis-rhel9-l1 </br> cis-rhel9-l1-gen2 || +|center-for-internet-security-inc |cis-ubuntu | cis-ubuntu1804-l1 </br> cis-ubuntulinux2004-l1-gen1 </br> cis-ubuntulinux2204-l1-gen1 </br> cis-ubuntulinux2204-l1-gen2 || | |cis-ubuntu-linux-1804-l1| cis-ubuntu1804-l1|| | |cis-ubuntu-linux-2004-l1 | cis-ubuntu2004-l1 </br> cis-ubuntu-linux-2204-l1-gen2||-| |cis-ubuntu-linux-2004-l1| cis-ubuntu2004-l1|| -| |cis-ubuntu-linux-2204-l1 | cis-ubuntu-linux-2204-l1 </br> cis-ubuntu-linux-2204-l1-gen2 | | +|center-for-internet-security-inc |cis-ubuntu-linux-2004-l1| cis-ubuntu2004-l1|| +|
center-for-internet-security-inc|cis-ubuntu-linux-2204-l1 | cis-ubuntu-linux-2204-l1 </br> cis-ubuntu-linux-2204-l1-gen2 | | | |debian-10-daily | 10, 10-gen2,</br> 10-backports,</br> 10-backports-gen2| | |debian-11 | 11, 11-gen2,</br> 11-backports, </br> 11-backports-gen2 | | |debian-11-daily | 11, 11-gen2,</br> 11-backports, </br> 11-backports-gen2 | Following is the list of supported images and no other marketplace images releas |esri |pro-byol| pro-byol-29|| |esri|arcgis-enterprise | byol-108 </br> byol-109 </br> byol-111 </br> byol-1081 </br> byol-1091| |esri|arcgis-enterprise-106| byol-1061||+|erockyenterprisesoftwarefoundationinc1653071250513 | rockylinux | free | +|erockyenterprisesoftwarefoundationinc1653071250513 | rockylinux-9 | rockylinux-9 | |microsoft-aks |aks |aks-engine-ubuntu-1804-202112 | | |microsoft-dsvm |aml-workstation | ubuntu-20, ubuntu-20-gen2 | | |microsoft-dsvm |aml-workstation | ubuntu | The following table lists the operating systems supported on [Azure Arc-enabled |**Operating system**| |-|- | Amazon Linux 2023 | - | Windows Server 2012 R2 and higher (including Server Core) | - | Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS | - | SUSE Linux Enterprise Server (SLES) 12 and 15 (x64) | - | Red Hat Enterprise Linux (RHEL) 7, 8, 9 (x64) | + | Alma Linux 9 | | Amazon Linux 2 (x64) |- | Oracle 7.x, 8.x| + | Amazon Linux 2023 | | Debian 10 and 11|- | Rocky Linux 8| + | Oracle 7.x, 8.x| + | Oracle Linux 9 | + | Red Hat Enterprise Linux (RHEL) 7, 8, 9 (x64) | + | Rocky Linux 8, 9| + | SUSE Linux Enterprise Server (SLES) 12 and 15 (x64) | + | Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS | + | Windows Server 2012 R2 and higher (including Server Core) | # [Windows IoT Enterprise on Arc enabled servers (preview)](#tab/winio-arc) |
update-manager | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshoot.md | If the extension is already present on the machine but the extension status is n #### Issue -The property [AllowExtensionOperations](https://learn.microsoft.com/dotnet/api/microsoft.azure.management.compute.models.osprofile.allowextensionoperations?view=azure-dotnet-legacy) is set to false in the machine OSProfile. +The property [AllowExtensionOperations](/dotnet/api/microsoft.azure.management.compute.models.osprofile.allowextensionoperations) is set to false in the machine OSProfile. #### Resolution The property should be set to true to allow extensions to work properly. Proxy is configured on Windows or Linux machines that may block access to endpoi #### Resolution -For Windows, see [issues related to proxy](https://learn.microsoft.com/troubleshoot/windows-client/installing-updates-features-roles/windows-update-issues-troubleshooting?toc=%2Fwindows%2Fdeployment%2Ftoc.json&bc=%2Fwindows%2Fdeployment%2Fbreadcrumb%2Ftoc.json#issues-related-to-httpproxy). +For Windows, see [issues related to proxy](/troubleshoot/windows-client/installing-updates-features-roles/windows-update-issues-troubleshooting#issues-related-to-httpproxy). For Linux, ensure proxy setup doesn't block access to repositories that are required for downloading and installing updates. TLS 1.0 and TLS 1.1 are deprecated. Use TLS 1.2 or higher. -For Windows, see [Protocols in TLS/SSL Schannel SSP](https://learn.microsoft.com/windows/win32/secauthn/protocols-in-tls-ssl--schannel-ssp-). +For Windows, see [Protocols in TLS/SSL Schannel SSP](/windows/win32/secauthn/protocols-in-tls-ssl--schannel-ssp-). For Linux, execute the following command to see the supported versions of TLS for your distro. `nmap --script ssl-enum-ciphers -p 443 www.azure.com` |
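Beyond probing a server with `nmap`, the same TLS floor can be enforced on the client side; a short sketch using Python's standard `ssl` module (not from the article, shown only to illustrate the "TLS 1.2 or higher" requirement):

```python
import ssl

# Build a client-side TLS context that refuses the deprecated
# TLS 1.0/1.1 protocols, mirroring the guidance to use TLS 1.2 or higher.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any handshake made through this context now fails against servers
# that only offer deprecated protocol versions.
print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```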
update-manager | Troubleshooter Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshooter-known-issues.md | + + Title: Azure Update Manager Troubleshooter +description: Identify common issues using the Troubleshooter in Azure Update Manager. + Last updated : 09/17/2024++++++# Troubleshoot issues in Azure Update Manager ++This article describes how to use the Troubleshooter in Azure Update Manager to identify common issues and how to resolve them. ++The Troubleshooter option is enabled when checking for history of only **Failed** operations in Azure Update Manager. +++The Troubleshooter can also be seen when checking history of **Failed** operations in the machine's updates history tab. ++++## Prerequisites ++- For Azure machines, ensure that the guest agent version on the VM is 2.4.0.2 or higher and agent status is **Ready**. ++- For Arc machines, ensure that the Arc agent version on the machine is 1.39 or higher and agent status is **Connected**. ++- The Troubleshooter isn't applicable if the operation type is Azure Managed Safe Deployment, that is, Auto Patching. ++- Ensure that the machine is present and running. ++- For executing RUN commands, you must have the following permissions: ++ - Microsoft.Compute/virtualMachines/runCommand/write permission (for Azure) - The [Virtual Machine Contributor](/azure/role-based-access-control/built-in-roles#virtual-machine-contributor) role and higher levels have this permission. + - Microsoft.HybridCompute/machines/runCommands/write permission (for Arc) - The [Azure Connected Machine Resource Administrator](/azure/role-based-access-control/built-in-roles) role and higher levels have this permission. ++- Ensure the machine can access [user content](https://raw.githubusercontent.com/) as it needs to retrieve the [scripts](https://github.com/Azure/AzureUpdateManager) during the execution of the Troubleshooter. +++## What does the Troubleshooter do?
++The Troubleshooter performs two types of checks for both Azure and Arc machines. ++- For the machine under troubleshooting, the Troubleshooter runs Resource Graph queries to obtain details about the machine's current state, assessment mode settings, patch mode settings, and the status of various services running on it. For example, for Azure machines it gets details about the guest agent, while for Arc machines it gets details about the Arc agent and its status. +- The Troubleshooter executes Managed RUN commands on the machine to run scripts that fetch information about the update-related services and configurations on the machine. *The script doesn't make any modifications to your machine.* ++You can find the [scripts](https://github.com/Azure/AzureUpdateManager/tree/main/Troubleshooter) here. ++After performing the checks, the Troubleshooter suggests possible mitigations for the checks that failed. Follow the mitigation links and take appropriate actions. ++++## What are Managed RUN commands? ++- Managed RUN commands use the guest agent for Azure machines and the Arc agent for Arc machines to remotely and securely execute commands or scripts inside your machine. ++- Managed RUN commands don't require any additional extension to be installed on your machines. ++- Managed RUN commands are generally available for Azure, while they are in preview for Arc. ++- Learn more about [Managed RUN command for Azure](/azure/virtual-machines/run-command-overview) and [Managed RUN command for Arc](/azure/azure-arc/servers/run-command). ++## Next steps +* To learn more about Update Manager, see the [Overview](overview.md). +* To view logged results from all your machines, see [Querying logs and results from Update Manager](query-logs.md). + |
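For context, a Managed RUN command of the kind the Troubleshooter relies on can also be invoked manually against an Azure VM; a sketch using Azure CLI (the resource group and VM names are placeholders, and an authenticated `az` session is required):

```azurecli
# Run a harmless diagnostic script inside an Azure VM via Managed RUN command;
# no extension install is needed on the target machine.
az vm run-command invoke \
  --resource-group myResourceGroup \
  --name myVM \
  --command-id RunShellScript \
  --scripts "systemctl status walinuxagent --no-pager"
```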
virtual-desktop | Configure Session Lock Behavior | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-session-lock-behavior.md | description: Learn how to configure session lock behavior for Azure Virtual Desk Previously updated : 09/02/2024 Last updated : 09/17/2024 # Configure the session lock behavior for Azure Virtual Desktop -You can choose whether the session is disconnected or the remote lock screen shown when a remote session is locked, either by the user or by policy. When the session lock behavior is set to disconnect, a dialog is shown to let users know they were disconnected. Users can choose the **Reconnect** option from the dialog when they're ready to connect again. +You can choose whether the session is disconnected or the remote lock screen is shown when a remote session is locked, either by the user or by policy. When the session lock behavior is set to disconnect, a dialog is shown to let users know they were disconnected. Users can choose the **Reconnect** option from the dialog when they're ready to connect again. When used with single sign-on using Microsoft Entra ID, disconnecting the session provides the following benefits: When used with single sign-on using Microsoft Entra ID, disconnecting the sessio - You can require multifactor authentication to return to the session and prevent users from unlocking with a simple username and password. -For scenarios that rely on legacy authentication, including NTLM, CredSSP, RDSTLS, TLS, and RDP basic authentication protocols, users are prompted to re-enter their credentials. +For scenarios that rely on legacy authentication, including NTLM, CredSSP, RDSTLS, TLS, and RDP basic authentication protocols, users are prompted to re-enter their credentials when they reconnect or start a new connection. The default session lock behavior is different depending on whether you're using single sign-on with Microsoft Entra ID or legacy authentication. 
The following table shows the default configuration for each scenario: To configure the session lock experience using Intune: To configure the session lock experience using Group Policy, follow these steps. -1. The Group Policy settings are only available the operating systems listed in [Prerequisites](#prerequisites). To make them available on other versions of Windows Server, you need to copy the administrative template files `C:\Windows\PolicyDefinitions\terminalserver.admx` and `C:\Windows\PolicyDefinitions\en-US\terminalserver.adml` from a session host to the same location on your domain controllers or the [Group Policy Central Store](/troubleshoot/windows-client/group-policy/create-and-manage-central-store), depending on your environment. In the file path for `terminalserver.adml` replace `en-US` with the appropriate language code if you're using a different language. +1. The Group Policy settings are only available on the operating systems listed in [Prerequisites](#prerequisites). To make them available on other versions of Windows Server, you need to copy the administrative template files `C:\Windows\PolicyDefinitions\terminalserver.admx` and `C:\Windows\PolicyDefinitions\en-US\terminalserver.adml` from a session host to the same location on your domain controllers or the [Group Policy Central Store](/troubleshoot/windows-client/group-policy/create-and-manage-central-store), depending on your environment. In the file path for `terminalserver.adml` replace `en-US` with the appropriate language code if you're using a different language. -1. Open the **Group Policy Management** console on device you use to manage the Active Directory domain. +1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain. 1. Create or edit a policy that targets the computers providing a remote session you want to configure. To configure the session lock experience using Group Policy, follow these steps. 1. 
Double-click **Disconnect remote session on lock for legacy authentication** to open it. - - To disconnect the remote session when the session locks, select **Enabled** or **Not configured**. + - To disconnect the remote session when the session locks, select **Enabled**. - - To show the remote lock screen when the session locks, select **Disabled**. + - To show the remote lock screen when the session locks, select **Disabled** or **Not configured**. 1. Select **OK**. To configure the session lock experience using Group Policy, follow these steps. ## Related content - Learn how to [Configure single sign-on for Azure Virtual Desktop using Microsoft Entra ID](configure-single-sign-on.md).--- Check out [In-session passwordless authentication](authentication.md#in-session-passwordless-authentication) to learn how to enable passwordless authentication.--- For more information about Microsoft Entra Kerberos, see [Deep dive: How Microsoft Entra Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889) |
virtual-desktop | Configure Single Sign On | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-single-sign-on.md | description: Learn how to configure single sign-on for an Azure Virtual Desktop Previously updated : 09/02/2024 Last updated : 09/17/2024 # Configure single sign-on for Azure Virtual Desktop using Microsoft Entra ID To enable single sign-on using Microsoft Entra ID authentication, there are five 1. Enable Microsoft Entra authentication for Remote Desktop Protocol (RDP). -1. Configure the target device groups. +1. Hide the consent prompt dialog. 1. Create a *Kerberos Server object*, if Active Directory Domain Services is part of your environment. More information on the criteria is included in its section. Before you enable single sign-on, review the following information for using it ### Session lock behavior -When single sign-on using Microsoft Entra ID is enabled and the remote session is locked, either by the user or by policy, you can choose whether the session is disconnected or the remote lock screen shown. The default behavior is to disconnect the session when it locks. +When single sign-on using Microsoft Entra ID is enabled and the remote session is locked, either by the user or by policy, you can choose whether the session is disconnected or the remote lock screen is shown. The default behavior is to disconnect the session when it locks. -When the session lock behavior is set to disconnect, and a dialog is shown to let users know they were disconnected. Users can choose the **Reconnect** option from the dialog when they're ready to connect again. This behavior is done for security reasons and to ensure full support of passwordless authentication. Disconnecting the session provides the following benefits: +When the session lock behavior is set to disconnect, a dialog is shown to let users know they were disconnected. 
Users can choose the **Reconnect** option from the dialog when they're ready to connect again. This behavior is done for security reasons and to ensure full support of passwordless authentication. Disconnecting the session provides the following benefits: - Consistent sign-in experience through Microsoft Entra ID when needed. Before you can enable single sign-on, you must meet the following prerequisites: - [Android client](users/connect-android-chrome-os.md), version 10.0.16 or later. -- To configure allowing Active Directory domain administrator account to connect when single sign-on is enabled, you need an account that is a member of the **Domain Admins** security group.- ## Enable Microsoft Entra authentication for RDP You must first allow Microsoft Entra authentication for Windows in your Microsoft Entra tenant, which enables issuing RDP access tokens allowing users to sign in to your Azure Virtual Desktop session hosts. You set the `isRemoteDesktopProtocolEnabled` property to true on the service principal's `remoteDesktopSecurityConfiguration` object for the following Microsoft Entra applications: |
virtual-desktop | Redirection Configure Webauthn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-configure-webauthn.md | To allow or disable WebAuthn redirection using Microsoft Intune: 1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/). -1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Templates** profile type, and the **Administrative templates** template. +1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type. -1. In the **Configuration settings** tab, browse to **Computer configuration** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**, then select **Do not allow WebAuthn redirection**. +1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**. - :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune-template-webauthn.png" alt-text="A screenshot showing the device and resource redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune-template-webauthn.png"::: + :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png" alt-text="A screenshot showing the device and resource redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png"::: -1. Select **Do not allow WebAuthn redirection**. In the pane separate pane that opens: +1. 
Check the box for **Do not allow WebAuthn redirection**, then close the settings picker. - - To allow WebAuthn redirection, select **Disabled** or **Not configured**, then select **OK**. +1. Expand the **Administrative templates** category, then toggle the switch for **Do not allow WebAuthn redirection** to **Enabled** or **Disabled**, depending on your requirements: - - To disable WebAuthn redirection, select **Enabled**, then select **OK**. + - To allow WebAuthn redirection, toggle the switch to **Disabled**, then select **OK**. ++ - To disable WebAuthn redirection, toggle the switch to **Enabled**, then select **OK**. 1. Select **Next**. |
virtual-desktop | Required Fqdn Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/required-fqdn-endpoint.md | The following table is the list of FQDNs and endpoints your session host VMs nee | `oneocsp.microsoft.com` | TCP | 80 | Certificates | N/A | | `www.microsoft.com` | TCP | 80 | Certificates | N/A | + The following table lists optional FQDNs and endpoints that your session host virtual machines might also need to access for other | Address | Protocol | Outbound port | Purpose | Service tag | Select the relevant tab based on which cloud you're using. | `learn.microsoft.com` | TCP | 443 | Documentation | All | | `privacy.microsoft.com` | TCP | 443 | Privacy statement | All | | `query.prod.cms.rt.microsoft.com` | TCP | 443 | Download an MSI to update the client. Required for automatic updates. | [Windows Desktop](users/connect-windows.md) |+| `graph.microsoft.com` | TCP | 443 | Service traffic | All | +| `windows.cloud.microsoft.com` | TCP | 443 | Connection center | All | +| `windows365.microsoft.com` | TCP | 443 | Service traffic | All | # [Azure for US Government](#tab/azure-for-us-government) Select the relevant tab based on which cloud you're using. | `learn.microsoft.com` | TCP | 443 | Documentation | All | | `privacy.microsoft.com` | TCP | 443 | Privacy statement | All | | `query.prod.cms.rt.microsoft.com` | TCP | 443 | Download an MSI to update the client. Required for automatic updates. | [Windows Desktop](users/connect-windows.md) |+| `graph.microsoft.com` | TCP | 443 | Service traffic | All | +| `windows.cloud.microsoft.com` | TCP | 443 | Connection center | All | +| `windows365.microsoft.com` | TCP | 443 | Service traffic | All | These FQDNs and endpoints only correspond to client sites and resources. This list doesn't include FQDNs and endpoints for other services such as Microsoft Entra ID or Office 365. 
Microsoft Entra FQDNs and endpoints can be found under ID *56*, *59* and *125* in [Office 365 URLs and IP address ranges](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). +If you're on a closed network with restricted internet access, you may also need to allow the FQDNs listed here for certificate checks: [Azure Certificate Authority details | Microsoft Learn](../security/fundamentals/azure-CA-details.md#certificate-downloads-and-revocation-lists). + ## Next steps - [Check access to required FQDNs and endpoints for Azure Virtual Desktop](check-access-validate-required-fqdn-endpoint.md). |
virtual-network | Create Peering Different Deployment Models Subscriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-deployment-models-subscriptions.md | The steps to create a virtual network peering are different, depending on whethe |[Both Resource Manager](create-peering-different-subscriptions.md) |Different| |[One Resource Manager, one classic](create-peering-different-deployment-models.md) |Same| -A virtual network peering cannot be created between two virtual networks deployed through the classic deployment model. This tutorial uses virtual networks that exist in the same region. This tutorial peers virtual networks in the same region. You can also peer virtual networks in different [supported regions](virtual-network-manage-peering.md#cross-region). It's recommended that you familiarize yourself with the [peering requirements and constraints](virtual-network-manage-peering.md#requirements-and-constraints) before peering virtual networks. +A virtual network peering cannot be created between two virtual networks deployed through the classic deployment model. This tutorial peers virtual networks in the same region. You can also peer virtual networks in different [supported regions](virtual-network-manage-peering.md#cross-region). It's recommended that you familiarize yourself with the [peering requirements and constraints](virtual-network-manage-peering.md#requirements-and-constraints) before peering virtual networks. -When creating a virtual network peering between virtual networks that exist in different subscriptions, the subscriptions can associated to the same Microsoft Entra tenant. If you don't already have a Microsoft Entra tenant, you can quickly [create one](../active-directory/develop/quickstart-create-new-tenant.md?toc=%2fazure%2fvirtual-network%2ftoc.json#create-a-new-azure-ad-tenant). 
+When creating a virtual network peering between virtual networks that exist in different subscriptions, the subscriptions can be associated to the same Microsoft Entra tenant. If you don't already have a Microsoft Entra tenant, you can quickly [create one](../active-directory/develop/quickstart-create-new-tenant.md?toc=%2fazure%2fvirtual-network%2ftoc.json#create-a-new-azure-ad-tenant). You can use the [Azure portal](#portal), the [Azure CLI](#cli), or [Azure PowerShell](#powershell) to create a virtual network peering. Click any of the previous tool links to go directly to the steps for creating a virtual network peering using your tool of choice. |
virtual-wan | Virtual Wan Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md | The current behavior is to prefer the ExpressRoute circuit path over hub-to-hub For new deployments, this connectivity is blocked by default. To allow this connectivity, you can enable these [ExpressRoute gateway toggles](virtual-wan-expressroute-portal.md#enable-or-disable-vnet-to-virtual-wan-traffic-over-expressroute) in the "Edit virtual hub" blade and "Virtual network gateway" blade in Portal. However, it is recommended to keep these toggles disabled and instead [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect standalone VNets to a Virtual WAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN hub router, which offers better performance than the ExpressRoute path. The ExpressRoute path includes the ExpressRoute gateway, which has lower bandwidth limits than the hub router, as well as the Microsoft Enterprise Edge routers/MSEE, which is an extra hop in the datapath. -In the diagram below, both toggles need to be enabled to allow connectivity between the standalone VNet 4 and the VNets directly connected to hub 2 (VNet 2 and VNet 3): **Allow traffic from remote Virtual WAN networks** for the virtual network gateway and **Allow traffic from non Virtual WAN networks** for the virtual hub's ExpressRoute gateway. If an Azure Route Server is deployed in standalone VNet 4, and the Route Server has [branch-to-branch](../route-server/quickstart-configure-route-server-portal.md#configure-route-exchange) enabled, then connectivity will be blocked between VNet 1 and standalone VNet 4. 
+In the diagram below, both toggles need to be enabled to allow connectivity between the standalone VNet 4 and the VNets directly connected to hub 2 (VNet 2 and VNet 3): **Allow traffic from remote Virtual WAN networks** for the virtual network gateway and **Allow traffic from non Virtual WAN networks** for the virtual hub's ExpressRoute gateway. If an Azure Route Server is deployed in standalone VNet 4, and the Route Server has [branch-to-branch](../route-server/configure-route-server.md#configure-route-exchange) enabled, then connectivity will be blocked between VNet 1 and standalone VNet 4. Enabling or disabling the toggle will only affect the following traffic flow: traffic flowing between the Virtual WAN hub and standalone VNet(s) via the ExpressRoute circuit. Enabling or disabling the toggle will **not** incur downtime for all other traffic flows (Ex: on-premises site to spoke VNet 2 won't be impacted, VNet 2 to VNet 3 won't be impacted, etc.). |
vpn-gateway | Feedback Hub Azure Vpn Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/feedback-hub-azure-vpn-client.md | In this section, you add diagnostic and other details. * If your feedback is a **Suggestion**, the app takes you directly to the **Attachments (optional)** section. * If your feedback is a **Problem** and you feel the problem merits more urgent attention, you can specify this problem as a high priority or blocking issue. -In the **Attachments (optional)** section, you should supply as much comprehensive information as possible. If a screen doesn't look right, [attach a screenshot](#attach-a-screenshot). If you're reporting a problem other than a screen issue, it's best to follow the steps in [Recreate my problem](#recreate-my-problem) and then use the steps in [Attach a file](#attach-a-file) to attach the log files. +In the **Attachments (optional)** section, you should supply as much comprehensive information as possible. If a screen doesn't look right, [attach a screenshot](#attach-a-screenshot). If you're reporting a problem other than a screen issue, it's best to follow the steps in [Recreate my problem](#recreate-my-problem). ### Attach a screenshot The **Recreate my problem** option provides us with crucial information. This op :::image type="content" source="./media/feedback-hub-azure-vpn-client/stop-recording.png" alt-text="Screenshot showing the Azure VPN Client recording." lightbox="./media/feedback-hub-azure-vpn-client/stop-recording.png"::: -### Attach a file --Attach the Azure VPN Client **log files**. It's best if you attach the file after you recreate the problem to make sure the problem is included in the log file. To attach the client log files: --1. Select **Attach a file** and locate the log files for the Azure VPN client. Log files are stored locally in the following folder: **%localappdata%\Packages\Microsoft.AzureVpn_8wekyb3d8bbwe\LocalState\LogFiles**. --1. 
From the LogFiles folder, select **Azure VPNClient.log** and **Azure VpnCxn.log**. -- :::image type="content" source="./media/feedback-hub-azure-vpn-client/locate-files.png" alt-text="Screenshot showing the Azure VPN client log files." lightbox="./media/feedback-hub-azure-vpn-client/locate-files.png"::: ### Submit the problem If you need immediate attention for your issue, open an Azure Support ticket and ## Next steps -For more information about FeedBack Hub, see [Send feedback to Microsoft with the Feedback Hub app](https://support.microsoft.com/windows/send-feedback-to-microsoft-with-the-feedback-hub-app-f59187f8-8739-22d6-ba93-f66612949332). +For more information about FeedBack Hub, see [Send feedback to Microsoft with the Feedback Hub app](https://support.microsoft.com/windows/send-feedback-to-microsoft-with-the-feedback-hub-app-f59187f8-8739-22d6-ba93-f66612949332). |
vpn-gateway | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/whats-new.md | You can also find the latest VPN Gateway updates and subscribe to the RSS feed [ | Type | Area | Name | Description | Date added | Limitations | ||||||| | P2S VPN | P2S | [Azure VPN Client for Linux](#linux)| [Certificate](point-to-site-certificate-client-linux-azure-vpn-client.md) authentication, [Microsoft Entra ID ](point-to-site-entra-vpn-client-linux.md) authentication.| May 2024 | N/A|-| P2S VPN | P2S | [Azure VPN Client for macOS](#macos) | Microsoft Entra ID authentication updates, additional features. | May 2024 | N/A| +| P2S VPN | P2S | [Azure VPN Client for macOS](#macos) | Microsoft Entra ID authentication updates, additional features. | Sept 2024 | N/A| | P2S VPN | P2S | [Azure VPN Client for Windows](#windows) | Microsoft Entra ID authentication updates, additional features. | May 2024 | N/A| |SKU deprecation | N/A | [Standard/High performance VPN gateway SKU](vpn-gateway-about-skus-legacy.md#sku-deprecation) | Legacy SKUs (Standard and HighPerformance) will be deprecated on 30 Sep 2025. View the announcement [here](https://go.microsoft.com/fwlink/?linkid=2255127). | Nov 2023 | N/A | |Feature | All | [Customer-controlled gateway maintenance](customer-controlled-gateway-maintenance.md) |Customers can schedule maintenance (Guest OS and Service updates) during a time of the day that best suits their business needs. | Nov 2023 (Public preview)| See the [FAQ](vpn-gateway-vpn-faq.md#customer-controlled). |
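The required-FQDN changes above add endpoints such as `graph.microsoft.com`, `windows.cloud.microsoft.com`, and `windows365.microsoft.com` on TCP 443. A minimal sketch of how a session host's outbound access to those endpoints could be spot-checked; the endpoint list is taken from the table above, and the script only tests the TCP handshake (no proxy handling, no TLS or HTTP validation):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # DNS failure, refusal, or timeout all count as unreachable.
        return False

# Optional endpoints from the Azure Virtual Desktop FQDN table above.
ENDPOINTS = [
    ("graph.microsoft.com", 443),
    ("windows.cloud.microsoft.com", 443),
    ("windows365.microsoft.com", 443),
    ("oneocsp.microsoft.com", 80),
]
```

Iterating over `ENDPOINTS` with `can_connect` from a session host flags any endpoint a firewall or proxy rule still blocks; a full check should also verify TLS inspection isn't rewriting certificates.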
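The session lock article above describes copying `terminalserver.admx` and `terminalserver.adml` from a session host to the Group Policy Central Store. A minimal sketch of that copy step, assuming a hypothetical `contoso.com` domain and the `en-US` language (both placeholders to substitute, per the article's note about other language codes):

```python
import shutil
from pathlib import Path

# Default paths from the article; the domain name is a placeholder.
SOURCE = Path(r"C:\Windows\PolicyDefinitions")
CENTRAL_STORE = Path(r"\\contoso.com\SYSVOL\contoso.com\Policies\PolicyDefinitions")

def copy_rds_templates(source: Path = SOURCE,
                       central_store: Path = CENTRAL_STORE,
                       language: str = "en-US") -> None:
    """Copy the Remote Desktop Services admin templates to the Central Store."""
    # Ensure the language subfolder (and the store itself) exists.
    (central_store / language).mkdir(parents=True, exist_ok=True)
    shutil.copy2(source / "terminalserver.admx", central_store)
    shutil.copy2(source / language / "terminalserver.adml",
                 central_store / language)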