Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md | App Service Environment v3 is available in the following regions: An App Service Environment will only store customer data, including app content, settings, and secrets, within the region where it's deployed. All data is guaranteed to remain in the region. For more information, see [Data residency in Azure](https://azure.microsoft.com/explore/global-infrastructure/data-residency/#overview). -## App Service Environment v2 --App Service Environment has three versions: App Service Environment v1, App Service Environment v2, and App Service Environment v3. The information in this article is based on App Service Environment v3. To learn more about App Service Environment v2, see [App Service Environment v2 introduction](./intro.md). - ## Next steps > [!div class="nextstepaction"] |
application-gateway | How To Frontend Mtls Gateway Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-frontend-mtls-gateway-api.md | See the following figure: The valid client certificate flow shows a client presenting a certificate to the frontend of Application Gateway for Containers. Application Gateway for Containers determines the certificate is valid and proxies the request to the backend target. The response is ultimately returned to the client. -The revoked client certificate flow shows a client presenting a revoked certificate to the frontend of Application Gateway for Containers. Application Gateway for Containers determines the certificate is not valid and prevents the request from being proxied to the client. The client will receive an HTTP 400 bad request and corresponding reason. +The revoked client certificate flow shows a client presenting a revoked certificate to the frontend of Application Gateway for Containers. Application Gateway for Containers determines the certificate isn't valid and prevents the request from being proxied to the client. The client will receive an HTTP 400 bad request and corresponding reason. ## Prerequisites The revoked client certificate flow shows a client presenting a revoked certific ### Generate certificate(s) -For this example, we will create a root certificate and issue a client certificate from the root. If you already have a root certificate and client certificate, you may skip these steps. +For this example, we'll create a root certificate and issue a client certificate from the root. If you already have a root certificate and client certificate, you may skip these steps. #### Generate a private key for the root certificate spec: certificateRefs: - kind : Secret group: ""- name: contoso.com + name: listener-tls-secret EOF ``` EOF certificateRefs: - kind : Secret group: ""- name: contoso.com + name: listener-tls-secret addresses: - type: alb.networking.azure.io/alb-frontend value: $FRONTEND_NAME spec: - name: gateway-01 rules: - backendRefs:- - name: mtls-app - port: 443 + - name: echo + port: 80 EOF ``` status: namespace: test-infra ``` +Create a Kubernetes secret using kubectl that contains the certificate chain to the client certificate. ++```bash +kubectl create secret generic ca.bundle -n test-infra --from-file=ca.crt=root.crt +``` + Create a FrontendTLSPolicy ```bash spec: group: "" kind: Secret namespace: test-infra- subjectAltName: "contoso-client" EOF ``` Now we're ready to send some traffic to our sample application, via the FQDN ass fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}') ``` -Curling this FQDN should return responses from the backend as configured on the HTTPRoute. +Curling the FQDN of your frontend without the client certificate. ++```bash +curl --insecure https://$fqdn/ +``` ++Note the response alerts a certificate is required. ++``` +curl: (56) OpenSSL SSL_read: OpenSSL/1.1.1k: error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required, errno 0 +``` ++Curl the FQDN presenting the client certificate generated. ```bash curl --cert client.crt --key client.key --insecure https://$fqdn/ ``` -Congratulations, you have installed ALB Controller, deployed a backend application, authenticated via client certificate, and routed traffic to the application via the gateway on Application Gateway for Containers. +Note the response is from the backend service behind Application Gateway for Containers. 
++Congratulations, you installed ALB Controller, deployed a backend application, authenticated via client certificate, and returned traffic from your backend service via Application Gateway for Containers. |
automation | Automation Managed Identity Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-managed-identity-faq.md | - Title: Azure Automation migration to managed identities FAQ -description: This article gives answers to frequently asked questions when you're migrating from a Run As account to a managed identity. --- Previously updated : 09/09/2024--#Customer intent: As an implementer, I want answers to various questions. ----# FAQ for migrating from a Run As account to managed identities --The following FAQ can help you migrate from a Run As account to a Managed identity in Azure Automation. If you have any other questions about the capabilities, post them on the [discussion forum](https://aka.ms/retirement-announcement-automation-runbook-start-using-managed-identities). When a question is frequently asked, we add it to this article so that it benefits everyone. --## How long will you support a Run As account? - -Automation Run As accounts will be supported until *30 September 2023*. Moreover, starting 01 April 2023, creation of **new** Run As accounts in Azure Automation will not be possible. Renewal of certificates for existing Run As accounts is possible only until the end of support. --## Will existing runbooks that use the Run As account be able to authenticate? -Yes, they'll be able to authenticate. There will be no impact on existing runbooks that use a Run As account. After 30 September 2023, all runbook executions using Run As accounts, including Classic Run As accounts, won't be supported. Hence, you must migrate all runbooks to use Managed identities before that date. --## My Run As account will expire soon. How can I renew it? -If your Run As account certificate is going to expire soon, it's a good time to start using Managed identities for authentication instead of renewing the certificate. However, if you still want to renew it, you can do so through the portal only until 30 September 2023. --## Can I create new Run As accounts? -From 1 April 2023, creation of new Run As accounts won't be possible. We strongly recommend that you start using Managed identities for authentication instead of creating new Run As accounts. - -## Will runbooks that still use the Run As account be able to authenticate after September 30, 2023? -Yes, the runbooks will be able to authenticate until the Run As account certificate expires. After 30 September 2023, all runbook executions using Run As accounts won't be supported. --## Are Connections and Credentials assets retiring on 30th Sep 2023? --Automation Run As accounts will not be supported after **30 September 2023**. Connections and Credentials assets don't come under the purview of this retirement. For a more secure way of authentication, we recommend that you use [Managed Identities](automation-security-overview.md#managed-identities). ---## What is a managed identity? -Applications use managed identities in Microsoft Entra ID when they're connecting to resources that support Microsoft Entra authentication. Applications can use managed identities to obtain Microsoft Entra tokens without managing credentials, secrets, certificates, or keys. --For more information about managed identities in Microsoft Entra ID, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). --## What can I do with a managed identity in Automation accounts? 
-An Azure Automation managed identity from Microsoft Entra ID allows your runbook to access other Microsoft Entra protected resources easily. This identity is managed by the Azure platform and doesn't require you to provision or rotate any secrets. --Key benefits are: -- You can use managed identities to authenticate to any Azure service that supports Microsoft Entra authentication.-- Managed identities eliminate the overhead associated with managing Run As accounts in your runbook code. You can access resources via a managed identity of an Automation account from a runbook without worrying about creating the service principal, Run As certificate, Run As connection, and so on.-- You don't have to renew the certificate that the Automation Run As account uses.- -## Are managed identities more secure than a Run As account? -A Run As account creates a Microsoft Entra app that's used to manage the resources within the subscription through a certificate that has contributor access at the subscription level by default. A malicious user could use this certificate to perform a privileged operation against resources in the subscription, leading to potential vulnerabilities. --Run As accounts also have a management overhead that involves creating a service principal, Run As certificate, Run As connection, certificate renewal, and so on. Managed identities eliminate this overhead by providing a secure method for users to authenticate and access resources that support Microsoft Entra authentication without worrying about any certificate or credential management. --## Can a managed identity be used for both cloud and hybrid jobs? -Azure Automation supports [system-assigned managed identities](./automation-security-overview.md#managed-identities) for both cloud and hybrid jobs. Currently, Azure Automation [user-assigned managed identities](./automation-security-overview.md) can be used for cloud jobs only and can't be used for jobs that run on a hybrid worker. --## How can I migrate from an existing Run As account to a managed identity? -Follow the steps in [Migrate an existing Run As account to a managed identity](./migrate-run-as-accounts-managed-identity.md). --## How do I see the runbooks that are using a Run As account and know what permissions are assigned to that account? -Use [this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/Check-AutomationRunAsAccountRoleAssignments.ps1) to find out which Automation accounts are using a Run As account. If your Azure Automation accounts contain a Run As account, it will have the built-in contributor role assigned to it by default. You can use the script to check the Azure Automation Run As accounts and determine if their role assignment is the default one or if it has been changed to a different role definition. --## Next steps --If your question isn't answered here, you can refer to the following sources for more questions and answers: --- [Azure Automation](/answers/topics/azure-automation.html)-- [Feedback forum](https://feedback.azure.com/d365community/forum/721a322e-bd25-ec11-b6e6-000d3a4f0f1c) |
automation | Automation Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/automation-account.md | - Title: Troubleshoot Azure Automation account issues -description: This article tells how to troubleshoot and resolve issues with an Azure Automation account. - Previously updated : 03/28/2022----# Troubleshoot Azure Automation account issues --This article discusses solutions to problems that you might encounter when you use an Azure Automation account. For general information about Automation accounts, see [Azure Automation account authentication overview](../automation-security-overview.md). --## Scenario: Unable to create an Automation account when GUID is used as account name --### Issue --When you create an Automation account with a GUID as an account name, you encounter an error. --### Cause --An *accountid* is a unique identifier across all Automation accounts in a region. When the account name is a GUID, both the Automation *accountid* and *name* are kept as that GUID. In this scenario, when you create a new Automation account and specify a GUID as the account name, you encounter an error if the GUID conflicts with any existing Automation *accountid*. --For example, if you try to create an Automation account with the name *8a2f48c1-9e99-472c-be1b-dcc11429c9ff* and that GUID already exists as an Automation *accountid* in that region, the account creation fails and you see the following error: -- ```error - { -- "code": "BadRequest", -- "message": Automation account already exists with this account id. AccountId: 8a2f48c1-9e99-472c-be1b-dcc11429c9ff. -- } --``` - ### Resolution --Ensure that you create an Automation account with a new name. --## <a name="rp-register"></a>Scenario: Unable to register Automation Resource Provider for subscriptions --### Issue --When you work with management features, for example, Update Management, in your Automation account, you encounter the following error: --```error -Error details: Unable to register Automation Resource Provider for subscriptions: -``` --### Cause --The Automation Resource Provider isn't registered in the subscription. --### Resolution --To register the Automation Resource Provider, follow these steps in the Azure portal: --1. From your browser, go to the [Azure portal](https://portal.azure.com). --2. Go to **Subscriptions**, and select your subscription. --3. Under **Settings**, select **Resource Providers**. --4. From the list of resource providers, verify that the **Microsoft.Automation** resource provider is registered. --5. If the provider isn't listed, register it as described in [Resolve errors for resource provider registration](../../azure-resource-manager/templates/error-register-resource-provider.md). --## Next steps --If this article doesn't resolve your issue, try one of the following channels for additional support: --* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/). -* Connect with [@AzureSupport](https://x.com/azuresupport). This is the official Microsoft Azure account for connecting the Azure community to the right resources: answers, support, and experts. -* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**. |
azure-app-configuration | Feature Management Dotnet Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/feature-management-dotnet-reference.md | Allocation logic is similar to the [Microsoft.Targeting](#microsofttargeting) fe ### Overriding Enabled State with a Variant -You can use variants to override the enabled state of a feature flag. This gives variants an opportunity to extend the evaluation of a feature flag. If a caller is checking whether a flag that has variants is enabled, the feature manager will check if the variant assigned to the current user is set up to override the result. This is done using the optional variant property `status_override`. By default, this property is set to `None`, which means the variant doesn't affect whether the flag is considered enabled or disabled. Setting `status_override` to `Enabled` allows the variant, when chosen, to override a flag to be enabled. Setting `status_override` to `Disabled` provides the opposite functionality, therefore disabling the flag when the variant is chosen. A feature with an `enabled` state of `false` can't be overridden. +You can use variants to override the enabled state of a feature flag. This gives variants an opportunity to extend the evaluation of a feature flag. When calling `IsEnabled` on a flag with variants, the feature manager will check if the variant assigned to the current user is configured to override the result. This is done using the optional variant property `status_override`. By default, this property is set to `None`, which means the variant doesn't affect whether the flag is considered enabled or disabled. Setting `status_override` to `Enabled` allows the variant, when chosen, to override a flag to be enabled. Setting `status_override` to `Disabled` provides the opposite functionality, therefore disabling the flag when the variant is chosen. A feature with an `enabled` state of `false` can't be overridden. If you're using a feature flag with binary variants, the `status_override` property can be very helpful. It allows you to continue using APIs like `IsEnabledAsync` and `FeatureGateAttribute` in your application, all while benefiting from the new features that come with variants, such as percentile allocation and seed. To learn how to use feature filters, continue to the following tutorials. To learn how to run experiments with variant feature flags, continue to the following tutorial. > [!div class="nextstepaction"]-> [Run experiments with variant feature flags](./howto-feature-filters.md) +> [Run experiments with variant feature flags](./howto-feature-filters.md) |
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | Azure Functions currently can be used with the following "Preview" or "Go-live" | Operating system | .NET preview version | | - | - |-| Linux | .NET 9 Preview 7<sup>1</sup> | +| Linux | .NET 9 Preview 7<sup>1, 2</sup> | <sup>1</sup> To successfully target .NET 9, your project needs to reference the [2.x versions of the core packages](#version-2x-preview). If using Visual Studio, .NET 9 requires version 17.12 or later. +<sup>2</sup> .NET 9 is not yet supported on the Flex Consumption SKU. + See [Supported versions][supported-versions] for a list of generally available releases that you can use. ### Using a preview .NET SDK |
azure-functions | Functions Dotnet Class Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md | The following is an example of a minimal `project` file with these changes: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net8.0</TargetFramework>- <AzureFunctionsVersion>V4</AzureFunctionsVersion> + <AzureFunctionsVersion>v4</AzureFunctionsVersion> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.4.0" /> |
azure-functions | Functions Move Across Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-move-across-regions.md | - Title: Move your function app between regions in Azure Functions -description: Learn how to move Azure Functions resources from one region to another by creating a copy of your existing Azure Function resources in the target region. ---- Previously updated : 11/09/2021---#Customer intent: As an Azure service administrator, I want to move my Azure Functions resources to another Azure region. ---# Move your function app between regions in Azure Functions --This article describes how to move Azure Functions resources to a different Azure region. You might move your resources to another region for one of the following reasons: - + Take advantage of a new Azure region - + Deploy features or services that are available only in specific regions - + Meet internal policy and governance requirements - + Respond to capacity planning requirements --Azure Functions resources are region-specific and can't be moved across regions. You must create a copy of your existing function app resources in the target region, then redeploy your functions code over to the new app. --If minimal downtime is a requirement, consider running your function app in both regions to implement a disaster recovery architecture: -+ [Azure Functions geo-disaster recovery](functions-geo-disaster-recovery.md) -+ [Disaster recovery and geo-distribution in Azure Durable Functions](durable/durable-functions-disaster-recovery-geo-distribution.md) --## Prerequisites --+ Make sure that the target region supports Azure Functions and any related service whose resources you want to move -+ Have access to the original source code for the functions you're migrating --## Prepare --Identify all the function app resources used in the source region, which may include the following: --+ Function app -+ [Hosting plan](functions-scale.md#overview-of-plans) -+ [Deployment slots](functions-deployment-slots.md) -+ [Custom domains purchased in Azure](../app-service/manage-custom-dns-buy-domain.md) -+ [TLS/SSL certificates and settings](../app-service/configure-ssl-certificate.md) -+ [Configured networking options](functions-networking-options.md) -+ [Managed identities](../app-service/overview-managed-identity.md) -+ [Configured application settings](functions-how-to-use-azure-function-app-settings.md) - users with enough access can copy all the source application settings by using the Advanced Edit feature in the portal -+ [Scaling configurations](functions-scale.md#scale) --Your functions may connect to other resources by using triggers or bindings. For information on how to move those resources across regions, see the documentation for the respective services. --You should also be able to [export a template from existing resources](../azure-resource-manager/templates/export-template-portal.md). --## Move --Deploy the function app to the target region and review the configured resources. --### Redeploy function app --If you have access to the deployment and automation resources that created the function app in the source region, re-run the same deployment steps in the target region to create and redeploy your app. 
--If you only have access to the source code, but not the deployment and automation resources, you can deploy and configure the function app in the target region using any of the available [deployment technologies](functions-deployment-technologies.md) or using one of the [continuous deployment methods](functions-continuous-deployment.md). --### Review configured resources --Review and configure the resources identified in the [Prepare](#prepare) step above in the target region if they weren't configured during the deployment. --### Move considerations -+ If your deployment resources and automation don't create a function app, [create an app of the same type in a new hosting plan](functions-scale.md#overview-of-plans) in the target region -+ Function app names are globally unique in Azure, so the app in the target region can't have the same name as the one in the source region -+ References and application settings that connect your function app to dependencies need to be reviewed and, when needed, updated. For example, when you move a database that your functions call, you must also update the application settings or configuration to connect to the database in the target region. Some application settings, such as the Application Insights instrumentation key or the Azure storage account used by the function app, might already be configured in the target region and don't need to be updated -+ Remember to verify your configuration and test your functions in the target region -+ If you had a custom domain configured, [remap the domain name](../app-service/manage-custom-dns-migrate-domain.md#4-remap-the-active-dns-name) -+ For functions running on Dedicated plans, also review the [App Service Migration Plan](../app-service/manage-move-across-regions.md) in case the plan is shared with web apps --## Clean up source resources --After the move is complete, delete the function app and hosting plan from the source region. You pay for function apps in Premium or Dedicated plans, even when the app itself isn't running. --## Next steps --+ Review the [Azure Architecture Center](/azure/architecture/browse/?expanded=azure&products=azure-functions) for examples of Azure Functions running in multiple regions as part of more advanced solution architectures |
azure-maps | Map Add Heat Map Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer.md | The `zoom` expression can only be used in `step` and `interpolate` expressions. ```json [- `'interpolate', + 'interpolate', ['exponential', 2], ['zoom'], 0, ['*', radiusMeters, 0.000012776039596366526],- 24, [`'*', radiusMeters, 214.34637593279402] + 24, ['*', radiusMeters, 214.34637593279402] ] ``` |
azure-netapp-files | Backup Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md | Azure NetApp Files backup is supported for the following regions: * South Central US * South India * Southeast Asia+* Spain Central * Sweden Central * Switzerland North * Switzerland West |
azure-netapp-files | Reservations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/reservations.md | + + Title: Reserved capacity for Azure NetApp Files +description: Learn how to optimize total cost of ownership (TCO) with Azure NetApp Files reservations. ++documentationcenter: '' +++editor: '' ++ms.assetid: +++ na + Last updated : 09/17/2024+++# Reserved capacity for Azure NetApp Files ++You can save money on the storage costs for Azure NetApp Files with reservations. Azure NetApp Files reservations offer a discount on capacity for storage costs when you commit to a reservation for one or three years, optimizing your total cost of ownership (TCO). A reservation provides a fixed amount of storage capacity for the term of the reservation. ++Azure NetApp Files reservations can significantly reduce your capacity costs for storing data in your Azure NetApp Files volumes. How much you save depends on the total capacity you choose to reserve, and the [service level](azure-netapp-files-service-levels.md) chosen. ++For pricing information about reservations in Azure NetApp Files, see [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/). ++## Reservation terms for Azure NetApp Files ++This section describes the terms of an Azure NetApp Files reservation. ++>[!NOTE] +>Azure NetApp Files reservations cover matching capacity pools in the selected service level and region. When using capacity pools configured with [cool access](manage-cool-access.md), only "hot" tier consumption is covered by the reservation benefit. ++### Reservation quantity ++You can purchase a reservation in 100-TiB and 1-PiB units per month for a one- or three-year term for a particular service level within a region. ++### Reservation scope ++A reservation applies to your usage within the purchased scope. It can't be limited to a specific NetApp account, capacity pool, container, or object within the subscription. ++When you purchase the reservation, you choose the subscription scope. You can change the scope after the purchase. The scope options are: ++- **Single resource group scope:** The reservation discount applies to the selected resource group only. +- **Single subscription scope:** The reservation discount applies to the matching resources in the selected subscription. +- **Shared scope:** The reservation discount applies to matching resources in eligible subscriptions in the billing context. If a subscription moves to a different billing context, the benefit no longer applies to the subscription. The benefit continues to apply to other subscriptions in the billing context. + - If you're an enterprise customer, the billing context is the EA enrollment. The reservation shared scope includes multiple Microsoft Entra tenants in an enrollment. + - If you're a Microsoft Customer Agreement customer, the billing scope is the billing profile. + - If you're a pay-as-you-go customer, the shared scope is all pay-as-you-go subscriptions created by the account administrator. +- **Management group:** The reservation discount applies to the matching resource in the list of subscriptions that are a part of both the management group and billing scope. The management group scope applies to all subscriptions throughout the entire management group hierarchy. To buy a reservation for a management group, you must have read permission on the management group and be a reservation owner or reservation purchaser on the billing subscription. 
++Any reservation for Azure NetApp Files covers only the capacity pools within the service level selected. Add-on features such as cross-region replication and backup aren't included in the reservation. As soon as you buy a reservation, the capacity charges that match the reservation attributes are charged at the discount rates instead of the pay-as-you-go rates. ++For more information on Azure reservations, see [What are Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md). ++### Supported service level options ++Azure NetApp Files reservations are available for Standard, Premium, and Ultra service levels in units of 100 TiB and 1 PiB. ++### Requirements for purchase ++To purchase a reservation: +* You must be in the **Owner** role for at least one Enterprise or individual subscription with pay-as-you-go rates. +* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the EA portal. Or, if that setting is disabled, you must be an EA Admin on the subscription. +* For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can buy Azure NetApp Files reservations. ++## Determine required capacity before purchase ++When you purchase an Azure NetApp Files reservation, you must choose the region and tier for the reservation. Your reservation is valid only for data stored in that region and tier. For example, suppose you purchase a reservation for Azure NetApp Files *Premium* service level in US East. That reservation applies to neither *Premium* capacity pools for that subscription in US West nor capacity pools for other service levels (for example, *Ultra* service level in US East). Additional reservations can be purchased. ++Reservations are available in 100-TiB or 1-PiB increments; higher discounts are available for 1-PiB increments. When you purchase a reservation in the Azure portal, Microsoft might provide you with recommendations based on your previous usage to help determine which reservation you should purchase. ++Purchasing an Azure NetApp Files reservation doesn't automatically increase your regional capacity. Azure NetApp Files reservations aren't an on-demand capacity guarantee. If your reservation requires a quota increase, it's recommended that you complete that before making the reservation. For more information, see [Regional capacity in Azure NetApp Files](regional-capacity-quota.md). ++## Purchase an Azure NetApp Files reservation ++You can purchase an Azure NetApp Files reservation through the [Azure portal](https://portal.azure.com/). You can pay for the reservation up front or with monthly payments. For more information about purchasing with monthly payments, see [Purchase Azure reservations with up front or monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md). ++To purchase a reservation: ++1. Sign in to the Azure portal. +1. To buy a new reservation, select **All services** > **Reservations** then **Azure NetApp Files**. ++ :::image type="content" source="./media/reservations/reservations-services.png" alt-text="Screenshot of the reservations services menu." lightbox="./media/reservations/reservations-services.png"::: + +1. Select a subscription. Use the subscription list to choose the subscription used to pay for the reservation. The payment method of the subscription is charged the cost of the reservation. 
The subscription type must be an enterprise agreement (offer numbers MS-AZR-0017P or MS-AZR-0148P), Microsoft Customer Agreement, or pay-as-you-go (offer numbers MS-AZR-0003P or MS-AZR-0023P). + 1. For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously known as monetary commitment) balance or charged as overage. + 1. For a pay-as-you-go subscription, the charges are billed to the credit card or invoice payment method on the subscription. +1. Select a scope. +1. Select the Azure region to be covered by the reservation. ++ :::image type="content" source="./media/reservations/subscription-scope.png" alt-text="Screenshot of the subscription choices." lightbox="./media/reservations/subscription-scope.png"::: ++1. Select **Add to cart**. +1. In the cart, choose the quantity of provisioned throughput units you want to purchase. For example, choosing 64 covers up to 64 deployed provisioned throughput units every hour. +1. Select **Next: Review + Buy** to review your purchase choices and their prices. +1. Select **Buy now**. +1. After purchase, you can select **View this reservation** to review your purchase status. ++After you purchase a reservation, it's automatically applied to any existing [Azure NetApp Files capacity pools](azure-netapp-files-set-up-capacity-pool.md) that match the terms of the reservation. If you haven't created any Azure NetApp Files capacity pools, the reservation applies when you create a resource that matches the terms of the reservation. In either case, the term of the reservation begins immediately after a successful purchase. ++## Exchange or refund a reservation ++You can exchange or refund a reservation, with certain limitations. For more information about Azure Reservations policies, see [Self-service exchanges and refunds for Azure Reservations](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md). ++## Expiration of a reservation ++When a reservation expires, any Azure NetApp Files capacity you're using under that reservation is billed at the pay-as-you-go rate. By default, reservations are set to renew automatically at the time of purchase. You can modify the renewal option at the time of purchase or after. ++An email notification is sent 30 days prior to the expiration of the reservation, and again on the expiration date. To continue taking advantage of the cost savings that a reservation provides, renew it no later than the expiration date. ++## Need help? Contact us ++If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458). ++## Next steps ++* [What are Azure Reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md) +* [Understand how reservation discounts are applied to Azure storage services](../cost-management-billing/reservations/understand-storage-charges.md) |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | -## September 2024 +## September 2024 ++* [Reserved capacity](reservations.md) is now generally available (GA) ++ Pay-as-you-go pricing is the most convenient way to purchase cloud storage when your workloads are dynamic or changing over time. However, some workloads are more predictable with stable capacity usage over an extended period. These workloads can benefit from savings in exchange for a longer-term commitment. With a one-year or three-year commitment of an Azure NetApp Files reservation, you can save up to 34% on sustained usage of Azure NetApp Files. Reservations are available in stackable increments of 100 TiB and 1 PiB on Standard, Premium, and Ultra service levels in a given region. Azure NetApp Files reservations benefits are automatically applied to existing Azure NetApp Files capacity pools in the matching region and service level. Azure NetApp Files reservations provide cost savings and financial predictability and stability, allowing for more effective budgeting. Additional usage is conveniently billed at the regular pay-as-you-go rate. ++ For more details, see [Azure NetApp Files reserved capacity](reserved-capacity.md) or see reservations in the Azure portal. * [Access-based enumeration](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) is now generally available (GA) Azure NetApp Files is updated regularly. This article provides a summary about t Volume encryption with customer-managed keys with managed HSM extends the [customer-managed keys](configure-customer-managed-keys.md), enabling you to store your keys in a more secure FIPS 140-2 Level 3 HSM service instead of the FIPS 140-2 Level 1 or 2 encryption offered with Azure Key Vault. -* [Volume enhancement: Azure NetApp Files now supports 50 GiB minimum volume sizes](azure-netapp-files-resource-limits.md) (preview) +* [Volume enhancement: Azure NetApp Files now supports 50 GiB minimum volume sizes](azure-netapp-files-create-volumes.md#50-gib) (preview) - You can now create an Azure NetApp Files volume as small as 50 GiB--a reduction from the initial minimum size of 100 GiB. 50 GiB volumes save costs for workloads that require volumes smaller than 100 GiB, allowing you to appropriately size storage volumes. 50 GiB volumes are supported for all protocols with Azure NetApp Files: [NFS](azure-netapp-files-create-volumes.md#50-gib), [SMB](azure-netapp-files-create-volumes-smb.md#50-gib), and [dual-protocol](create-volumes-dual-protocol.md#50-gib). You must register for the feature before creating a volume smaller than 100 GiB. + You can now create an Azure NetApp Files volume as small as [50 GiB](azure-netapp-files-resource-limits.md)--a reduction from the initial minimum size of 100 GiB. 50 GiB volumes save costs for workloads that require volumes smaller than 100 GiB, allowing you to appropriately size storage volumes. 50 GiB volumes are supported for all protocols with Azure NetApp Files: [NFS](azure-netapp-files-create-volumes.md#50-gib), [SMB](azure-netapp-files-create-volumes-smb.md#50-gib), and [dual-protocol](create-volumes-dual-protocol.md#50-gib). You must register for the feature before creating a volume smaller than 100 GiB. * [Azure NetApp Files double encryption at rest](double-encryption-at-rest.md) is now generally available (GA). |
azure-resource-manager | Azure Subscription Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md | For more information, see [VM Applications](/azure/virtual-machines/vm-applicati #### Disk encryption sets -There's a limitation of 1000 disk encryption sets per region, per subscription. For more +There's a limitation of 5000 disk encryption sets per region, per subscription. For more information, see the encryption documentation for [Linux](/azure/virtual-machines/disk-encryption#restrictions) or [Windows](/azure/virtual-machines/disk-encryption#restrictions) virtual machines. If you |
azure-resource-manager | App Service Move Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/app-service-move-limitations.md | Title: Move Azure App Service resources across resource groups or subscriptions description: Use Azure Resource Manager to move App Service resources to a new resource group or subscription. Previously updated : 06/20/2024 Last updated : 09/13/2024 # Move App Service resources to a new resource group or subscription -This article describes the steps to move App Service resources between resource groups or Azure subscriptions. There are specific requirements for moving App Service resources to a new subscription. +This article describes the steps to move App Service resources between resource groups or Azure subscriptions. There are specific requirements for moving App Service resources to a new subscription. Unless otherwise noted, these steps apply to both App Service Web Apps and Azure Functions. -If you want to move App Services to a new region, see [Move an App Service resource to another region](../../../app-service/manage-move-across-regions.md). +If you want to move your app to a new region, see the Relocate to another region guidance for either [App Service](/azure/operational-excellence/relocation-app-service) or [Azure Functions](/azure/operational-excellence/relocation-functions). You can move App Service resources to a new resource group or subscription but you need to delete and upload its TLS/SSL certificates to the new resource group or subscription. Also, you can't move a free App Service managed certificate. For that scenario, see [Move with free managed certificates](#move-with-free-managed-certificates). ## Move across subscriptions -When you move a Web App across subscriptions, the following guidance applies: +When you move an app across subscriptions, the following guidance applies: - Moving a resource to a new resource group or subscription is a metadata change that shouldn't affect anything about how the resource functions. For example, the inbound IP address for an app service doesn't change when moving the app service. - The destination resource group must not have any existing App Service resources. App Service resources include:- - Web Apps + - Web apps - App Service plans - Uploaded or imported TLS/SSL certificates - App Service Environments - All App Service resources in the resource group must be moved together. - App Service Environments can't be moved to a new resource group or subscription.- - You can move a Web App and App Service plan hosted on an App Service Environment to a new subscription without moving the App Service Environment. The Web App and App Service plan that you move will always be associated with your initial App Service Environment. You can't move a Web App/App Service plan to a different App Service Environment. - - If you need to move a Web App and App Service plan to a new App Service Environment, you'll need to recreate these resources in your new App Service Environment. Consider using the [backup and restore feature](../../../app-service/manage-backup.md) as way of recreating your resources in a different App Service Environment. -- App Service apps with private endpoints cannot be moved. Delete the private endpoint(s) and recreate it after the move.-- App Service apps with virtual network integration cannot be moved. 
Remove the virtual network integration and reconnect it after the move.+ - You can move an app and plan hosted on an App Service Environment to a new subscription without moving the App Service Environment. The app and plan that you move are always associated with your initial App Service Environment. You can't move an app/plan to a different App Service Environment. + - If you need to move an app and plan to a new App Service Environment, you'll need to recreate these resources in your new App Service Environment. Consider using the [backup and restore feature](../../../app-service/manage-backup.md) as way of recreating your resources in a different App Service Environment. +- Apps with private endpoints cannot be moved. Delete the private endpoint(s) and recreate it after the move. +- Apps with virtual network integration can't be moved. Remove the virtual network integration and reconnect it after the move. - App Service resources can only be moved from the resource group in which they were originally created. If an App Service resource is no longer in its original resource group, move it back to its original resource group. Then, move the resource across subscriptions. For help with finding the original resource group, see the next section.-- When you move a Web App to a different subscription, the location of the Web App remains the same, but its policy is changed. For example, if your Web App is in Subscription1 located in Central US and has Policy1, and Subscription2 is in the UK South and has Policy2. If you move the Web App to Subscription2, the location of the Web App remains the same (Central US); however, it will be under the new policy which is policy2.+- When you move an app to a different resource group or subscription, the location of the app remains the same, but its policy is changed. For example, consider a case where your app runs in `Subscription1` (Central US) and has `Policy1` and `Subscription2` (UK South) that has `Policy2`. If you move your app to Subscription2, the location of the app remains the same (Central US); however, it falls under the new policy `Policy2`. ## Find original resource group -If you don't remember the original resource group, you can find it through diagnostics. For your web app, select **Diagnose and solve problems**. Then, select **Configuration and Management**. +If you don't remember the original resource group, you can find it through diagnostics. In your app page in the [Azure portal](https://portal.azure.com), select **Diagnose and solve problems**. Then, select **Configuration and Management**. :::image type="content" source="./media/app-service-move-limitations/select-diagnostics.png" alt-text="Screenshot of the Diagnose and solve problems section with the Configuration and Management option highlighted."::: Select **Migration Options**. :::image type="content" source="./media/app-service-move-limitations/select-migration.png" alt-text="Screenshot of the Migration Options section in the Configuration and Management menu."::: -Select the option for recommended steps to move the web app. +Select the option for recommended steps to move the app. :::image type="content" source="./media/app-service-move-limitations/recommended-steps.png" alt-text="Screenshot of the Recommended Steps option in the Migration Options section."::: |
azure-signalr | Signalr Tutorial Build Blazor Server Chat App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-build-blazor-server-chat-app.md | description: In this tutorial, you learn how to build and modify a Blazor Server Previously updated : 05/22/2022 Last updated : 08/28/2024 ms.devlang: csharp # Tutorial: Build a Blazor Server chat app -This tutorial shows you how to build and modify a Blazor Server app. You'll learn how to: +This tutorial shows you how to build and modify a Blazor Server app. You learn how to: > [!div class="checklist"] > * Build a simple chat room with the Blazor Server app template. > * Work with Razor components. Ready to start? ## Build a local chat room in Blazor Server app -Beginning in Visual Studio 2019 version 16.2.0, Azure SignalR Service is built into the web application publish process to make managing the dependencies between the web app and SignalR service much more convenient. You can work in a local SignalR instance in a local development environment and work in Azure SignalR Service for Azure App Service at the same time without any code changes. +Beginning in Visual Studio 2019 version 16.2.0, Azure SignalR Service is built into the web application publish process to make managing the dependencies between the web app and SignalR service much more convenient. You can work without any code changes at the same time: ++* in a local SignalR instance, in a local development environment. +* in Azure SignalR Service for Azure App Service. 1. Create a Blazor chat app: 1. In Visual Studio, choose **Create a new project**. Beginning in Visual Studio 2019 version 16.2.0, Azure SignalR Service is built i dotnet add package Microsoft.AspNetCore.SignalR.Client --version 3.1.7 ``` -1. Create a new [Razor component](/aspnet/core/blazor/components/) called `ChatRoom.razor` under the `Pages` folder to implement the SignalR client. Follow the steps below or use the [ChatRoom.razor](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BlazorChat/Pages/ChatRoom.razor) file. +1. To implement the SignalR client, create a new [Razor component](/aspnet/core/blazor/components/) called `ChatRoom.razor` under the `Pages` folder. Use the [ChatRoom.razor](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BlazorChat/Pages/ChatRoom.razor) file or perform the following steps: 1. Add the [`@page`](/aspnet/core/mvc/views/razor#page) directive and the using statements. Use the [`@inject`](/aspnet/core/mvc/views/razor#inject) directive to inject the [`NavigationManager`](/aspnet/core/blazor/fundamentals/routing#uri-and-navigation-state-helpers) service. Beginning in Visual Studio 2019 version 16.2.0, Azure SignalR Service is built i } ``` -1. Press <kbd>F5</kbd> to run the app. Now, you can initiate the chat: +1. To run the app, press <kbd>F5</kbd>. Now, you can initiate the chat: [ ![An animated chat between Bob and Alice is shown. Alice says Hello, Bob says Hi.](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat.gif) ](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat.gif#lightbox) Beginning in Visual Studio 2019 version 16.2.0, Azure SignalR Service is built i When you deploy the Blazor app to Azure App Service, we recommend that you use [Azure SignalR Service](/aspnet/core/signalr/scale#azure-signalr-service). Azure SignalR Service allows for scaling up a Blazor Server app to a large number of concurrent SignalR connections. 
In addition, the SignalR service's global reach and high-performance datacenters significantly aid in reducing latency due to geography. > [!IMPORTANT]-> In a Blazor Server app, UI states are maintained on the server side, which means a sticky server session is required to preserve state. If there is a single app server, sticky sessions are ensured by design. However, if there are multiple app servers, there are chances that the client negotiation and connection may go to different servers which may lead to an inconsistent UI state management in a Blazor app. Hence, it is recommended to enable sticky server sessions as shown below in *appsettings.json*: +> In a Blazor Server app, UI states are maintained on the server side, which means a sticky server session is required to preserve state. If there is a single app server, sticky sessions are ensured by design. However, if multiple app servers are in use, the client negotiation and connection may be redirected to different servers, which may lead to an inconsistent UI state management in a Blazor app. Hence, it is recommended to enable sticky server sessions as shown in *appsettings.json*: > > ```json > "Azure:SignalR:ServerStickyMode": "Required" When you deploy the Blazor app to Azure App Service, we recommend that you use [ [ ![On Publish, the link to Configure is highlighted.](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-dependency.png) ](media/signalr-tutorial-build-blazor-server-chat-app/blazor-chat-dependency.png#lightbox) - The service dependency will carry out the following activities to enable your app to automatically switch to Azure SignalR Service when on Azure: + The service dependency carries out the following activities to enable your app to automatically switch to Azure SignalR Service when on Azure: * Update [`HostingStartupAssembly`](/aspnet/core/fundamentals/host/platform-specific-configuration) to use Azure SignalR Service. * Add the Azure SignalR Service NuGet package reference. When you deploy the Blazor app to Azure App Service, we recommend that you use [ dotnet add package Microsoft.Azure.SignalR ``` -1. Add a call to `AddAzureSignalR()` in `Startup.ConfigureServices()` as demonstrated below. +1. Add a call to `AddAzureSignalR()` in `Startup.ConfigureServices()` as shown in the following example: ```cs public void ConfigureServices(IServiceCollection services) When you deploy the Blazor app to Azure App Service, we recommend that you use [ > "Azure": { > "SignalR": { > "Enabled": true,-> "ConnectionString": <your-connection-string> +> "ConnectionString": <your-connection-string> > } > } > |
backup | Backup Mabs Unattended Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-unattended-install.md | Title: Silent installation of Azure Backup Server V4 description: Use a PowerShell script to silently install Azure Backup Server V4. This kind of installation is also called an unattended installation. Previously updated : 04/18/2024 Last updated : 09/18/2024 To install the Backup Server, run the following command: 3. On the server that hosts Azure Backup Server V4 or later, create a text file. (You can create the file in Notepad or in another text editor.) Save the file as MABSSetup.ini. 4. Paste the following code in the MABSSetup.ini file. Replace the text inside the brackets (\< \>) with values from your environment. The following text is an example: + >[!Caution] + >Microsoft recommends that you use the most secure authentication flow available. The authentication flow described in this procedure requires a very high degree of trust in the application, and carries risks that are not present in other flows. Ensure that you delete the **MABSSetup.ini** file once the installation is complete. + ```text [OPTIONS] UserName=administrator |
communication-services | Chat App Teams Embed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/chat-app-teams-embed.md | + + Title: Embed Azure Communication Services Chat in a custom Microsoft Teams app +description: Learn how to use an Azure Communication Services Chat app from within Teams, without using interoperability linkage. +++++ Last updated : 09/16/2024++++++# Embed chat in a Microsoft Teams custom app ++This article describes how to create a Microsoft Teams custom app to interact with an Azure Communication Services instance. This chat app enables interworking between the two systems while maintaining separated backend environments and identity configurations. ++## Use cases ++- **Point-of-sales and post-sales support** + + Consumer websites can provide quick access to a chat channel, either to automated bots, sales associates, or both. We recommend using an isolated Azure Communication Services instance. ++ Similarly, post-sales support and coordination benefit from the ability to use chat independently while incorporated within Teams. ++- **Secured remote consultation services** + + For medical telepresence applications, banking services, and other privacy-sensitive scenarios, the encryption and security provided by Azure Communication Services enables these use cases without needing the remote participants to use Teams. You can brand the solution as needed and employees of your organization can access the consults from their existing Teams installation. ++- **Data separation requirement scenarios** + + For some areas, you might need to ensure geographical quarantining of data to specific jurisdictions. A company's legal data storage area might be different from the location required to store customer data. You can configure such a scenario using the Azure Communication Services storage location at instance creation. The location can be different from the [Teams' storage location](/microsoftteams/privacy/location-of-data-in-teams). ++ :::image type="content" source="media/chat-app-teams-instance-details.png" alt-text="Screen capture of selecting an Azure Communication Services storage location at instance creation." lightbox="media/chat-app-teams-instance-details.png"::: ++## Architecture ++The following diagram shows the overall view of the Teams extensibility chat app. +++1. An Azure Communication Services instance enables the solution. ++2. The Web API provides server-side functionality for the solution, for both internal and external facing applications. ++3. Contoso customers use client (web or mobile) applications to interface with employees. This example shows a web app used to host the content. ++4. Contoso employees use the Teams app within their Teams client. The Teams client is a web app hosted within a Teams custom app and deployed to Teams through an iframe inside the Teams client. ++ The Azure Communication Services instance isn't directly connected to the Teams environment. The Communication Services environment is surfaced through the Teams custom app. ++ The Teams custom app gains Teams' single sign-on (SSO), which provides the Teams user ID to the app. The custom messaging app uses the Teams user ID to map to the Communication Services user ID. ++## Build the solution ++You need the following components to create the chat app. ++1. An Azure Communication Services instance. 
+ - For more information, see [Quickstart: Create and manage Communication Services resources](../quickstarts/create-communication-resource.md). + - Configure the storage area for the instance during creation. ++2. An API server to host the backend components. The API server provides the backend logic required. Common use cases include user authentication and chat thread management APIs. ++3. A Web app instance to host the frontend components. The frontend components serve both the customer-facing web app and, potentially, drive the Teams custom app layout via an embedded iframe. ++4. A Teams custom app, configured in Teams to enable this app installation. To provide the experience for employees, set up the Teams custom app to use web content driven from a web app or through a Teams custom app deployment. ++## Design the app ++You can design the customer-facing portal site as needed to meet your business needs. A simple call / chat web app usually requires two features: ++- Authentication (sign up / sign in) +- Primary chat (and call) user interface ++Teams single sign-on (SSO) provides authentication in the employee-facing Teams custom app. In this case, the employee needs to see a further list of customers before the main chat (and call) experience. ++Some other considerations for the design work within Teams include guidelines to ensure a cohesive, inclusive, and accessible experience. For more information, see [Designing your Microsoft Teams app](/microsoftteams/platform/concepts/design/design-teams-app-overview). ++## Implement the Teams custom app ++Start your dedicated journey at [Get started > Build your first Teams app](/microsoftteams/platform/get-started/get-started-overview#build-your-first-teams-app). ++To get the development toolkit for Visual Studio Code, a quick reference to learning materials, code samples, and project templates, see [Microsoft Teams Toolkit Overview](/microsoftteams/platform/toolkit/teams-toolkit-fundamentals). ++In the Microsoft Teams Toolkit, select **New Project** > **Tab**. +++A Tab app provides the simplest framework, which you can further refine to use React with Fluent UI. +++You can quickly create an app skeleton and try it locally in Teams using a Microsoft 365 developer account. Use the React with Fluent UI capability and follow the basic install in Visual Studio Code. +++This project has a templated API implementation through Azure Functions. At this point, you need to create the complete backend implementation for a chat platform. The Basic Tab option provides the web app frontend structure. It also enables SSO for the app as described in [Develop single sign on experience in Teams | GitHub](https://github.com/OfficeDev/teams-toolkit/wiki/Develop-single-sign-on-experience-in-Teams). ++### Other ways to implement the Teams custom app ++You can create a Tab app that links each of the tabs through to an external app using the Teams app `manifest.json` file. For more information, see [Sample app manifest](/microsoftteams/platform/resources/schema/manifest-schema#sample-app-manifest). ++You can also use an existing Microsoft Entra app, as described in [Use existing Microsoft Entra app in TeamsFx project](/microsoftteams/platform/toolkit/use-existing-aad-app). ++For more information about Tabs capabilities, see [Configure Tab capability within your Teams app | GitHub](https://github.com/OfficeDev/teams-toolkit/wiki/How-to-configure-Tab-capability-within-your-Teams-app). 
++## Build the chat app ++To build a fully featured chat app, you need some key functions. ++You need an Azure Communication Services instance to host the chats and provide the functions to send and receive messages (and other communication types). Within this system, each communication ID represents a user, provided by the API service for the app. The user receives a communication ID once the user authentication flow is complete. ++### Authentication flow ++Azure Communication Services endpoints require authentication, provided in the form of a bearer token. The authentication service provides one token per communication ID. ++Depending upon your requirements, you might need to provide a means for users to sign up, sign in, or use some other form of one-time authentication link. ++You need to create identities and issue authentication tokens within a backend service for security. For more information, see [Quickstart: Create and manage access tokens](../quickstarts/identity/access-tokens.md). ++### Chat UI ++The quickest method to provide a chat pane with message entry for the web UI is to use the React Web UI Library composites, from the Azure communication-react package. The Storybook documentation explains how to integrate these composites and also how to use them directly within the Storybook environment. +++### Chat composite with the participants list ++The chat composite component requires the user identifier and token from the authentication flow, the Communication Services endpoint, and the thread ID to which it must be attached. ++Thread IDs represent conversations between groups of communication identifiers. Before a conversation, you need to create the thread and add users to that thread. You can automate this procedure or provide the function from a Tab in the Teams app for the employees to configure. ++### Chat bots ++You can add bots to your chat app. For more information, see [Quickstart: Add a bot to your chat app](../quickstarts/chat/quickstart-botframework-integration.md). ++## Distribute the Teams app ++To use a Teams app in an organization, the Teams admin must approve it. You can [build a Teams custom app](/microsoftteams/platform/concepts/build-and-test/apps-package) and install the app package via the [Teams admin center](https://admin.teams.microsoft.com/). For more information, see [Manage custom apps in Microsoft Teams admin center](/microsoftteams/teams-custom-app-policies-and-settings). ++## Next steps ++- [Quickstart: Add a bot to your chat app](../quickstarts/chat/quickstart-botframework-integration.md) +- [Enable file sharing using UI Library in Teams Interoperability Chat](../tutorials/file-sharing-tutorial-interop-chat.md) ++## Related articles ++- For more information about building an app with Teams interop, see [Contact center](./contact-center.md). |
cost-management-billing | Tutorial Acm Create Budgets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md | In this tutorial, you learn how to: Budgets are supported for the following Azure account types and scopes: - Azure role-based access control (Azure RBAC) scopes- - Management groups - - Subscription + - Management group + + - Subscription + - Resource group + - Enterprise Agreement scopes - Billing account - Department |
cost-management-billing | Save Compute Costs Reservations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/save-compute-costs-reservations.md | For more information, see [Self-service exchanges and refunds for Azure Reservat - **Azure Dedicated Host** - Only the compute costs are included with the Dedicated host. - **Azure Disk Storage reservations** - A reservation only covers premium SSDs of P30 size or greater. It doesn't cover any other disk types or sizes smaller than P30. - **Azure Backup Storage reserved capacity** - A capacity reservation lowers storage costs of backup data in a Recovery Services Vault.+- **Azure NetApp Files** - A capacity reservation covers matching capacity pools in the selected service level and region. When using capacity pools configured with [cool access](../../azure-netapp-files/manage-cool-access.md), only "hot" tier consumption is covered by the reservation benefit. Software plans: For Windows virtual machines and SQL Database, the reservation discount doesn't ## Need help? Contact us. -If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458). +If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458). ## Next steps If you have questions or need help, [create a support request](https://go.micro - [Azure Cosmos DB resources with Azure Cosmos DB reserved capacity](/azure/cosmos-db/cosmos-db-reserved-capacity) - [SQL Database compute resources with Azure SQL Database reserved capacity](/azure/azure-sql/database/reserved-capacity-overview) - [Azure Cache for Redis resources with Azure Cache for Redis reserved capacity](../../azure-cache-for-redis/cache-reserved-pricing.md)-Learn more about reservations for software plans: ++- Learn more about reservations for software plans: - [Red Hat software plans from Azure Reservations](/azure/virtual-machines/linux/prepay-suse-software-charges) - [SUSE software plans from Azure Reservations](/azure/virtual-machines/linux/prepay-suse-software-charges) |
cost-management-billing | Permission Buy Savings Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/permission-buy-savings-plan.md | Savings plan purchasing for Enterprise Agreement customers is limited to: - Enterprise Agreement admins with write permissions can purchase savings plans from **Cost Management + Billing** > **Savings plan**. No subscription-specific permissions are needed. - Users with subscription owner or savings plan purchaser roles in at least one subscription in the enrollment account can purchase savings plans from **Home** > **Savings plan**. -Enterprise Agreement customers can limit savings plan purchases to only Enterprise Agreement admins by disabling the **Add Savings Plan** option in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). To change settings, go to the **Policies** menu. +Enterprise Agreement customers can limit savings plan purchases to only Enterprise Agreement admins by disabling the **Add Savings Plan** option in the [Azure portal](https://portal.azure.com). To change settings, go to the **Policies** menu. ### Microsoft Customer Agreement customers Savings plan purchasing for Microsoft Customer Agreement customers is limited to: Savings plan purchasing for Microsoft Customer Agreement customers is limited to - Users with billing profile contributor permissions or higher can purchase savings plans from **Cost Management + Billing** > **Savings plan** experience. No subscription-specific permissions are needed. - Users with subscription owner or savings plan purchaser roles in at least one subscription in the billing profile can purchase savings plans from **Home** > **Savings plan**. -If the **Add Savings Plan** option is disabled in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts), then no user can purchase the Savings Plan. Go to the **Policies** menu to change settings to purchase the Savings Plan. +If the **Add Savings Plan** option is disabled in the [Azure portal](https://portal.azure.com), then no user can purchase a savings plan. To allow savings plan purchases, change the setting in the **Policies** menu. ### Microsoft Partner Agreement partners |
data-factory | Connector Deprecation Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-deprecation-plan.md | This article describes future deprecations for some connectors of Azure Data Fac > [!NOTE] > "Deprecated" means we intend to remove the connector from a future release. Unless they are in *Preview*, connectors remain fully supported until they are officially deprecated. This deprecation notification can span a few months or longer. After removal, the connector will no longer work. This notice is to allow you sufficient time to plan and update your code before the connector is deprecated. +## Overview ++| Connector |Release stage |End of Support Date |Disabled Date | +|:-- |:-- |:-- | :-- | +| [Google BigQuery (legacy)](connector-google-bigquery-legacy.md) | End of support announced and new version available | October 31, 2024 | January 10, 2024 | +| [MariaDB (legacy driver version)](connector-mariadb.md) | End of support announced and new version available | October 31, 2024 | January 10, 2024 | +| [MySQL (legacy driver version)](connector-mysql.md) | End of support announced and new version available | October 31, 2024| January 10, 2024| +| [Salesforce (legacy)](connector-salesforce-legacy.md) | End of support announced and new version available | October 11, 2024 | January 10, 2024 | +| [Salesforce Service Cloud (legacy)](connector-salesforce-service-cloud-legacy.md) | End of support announced and new version available | October 11, 2024 |January 10, 2024 | +| [PostgreSQL (legacy)](connector-postgresql-legacy.md) | End of support announced and new version available |October 31, 2024 | January 10, 2024 | +| [Snowflake (legacy)](connector-snowflake-legacy.md) | End of support announced and new version available | October 31, 2024 | January 10, 2024 | +| [Azure Database for MariaDB](connector-azure-database-for-mariadb.md) | End of support announced |December 31, 2024 | December 31, 2024 | +| [Concur (Preview)](connector-concur.md) | End of support announced | December 31, 2024 | December 31, 2024 | +| [Drill](connector-drill.md) | End of support announced | December 31, 2024 | December 31, 2024 | +| [HBase](connector-hbase.md) | End of support announced | December 31, 2024 | December 31, 2024 | +| [Magento (Preview)](connector-magento.md) | End of support announced | December 31, 2024 | December 31, 2024 | +| [Marketo (Preview)](connector-marketo.md) | End of support announced | December 31, 2024| December 31, 2024 | +| [Oracle Responsys (Preview)](connector-oracle-responsys.md) | End of support announced | December 31, 2024 | December 31, 2024 | +| [PayPal (Preview)](connector-paypal.md) | End of support announced |December 31, 2024 | December 31, 2024| +| [Phoenix](connector-phoenix.md) | End of support announced | December 31, 2024 | December 31, 2024 | +| [Amazon Marketplace Web Service](connector-amazon-marketplace-web-service.md)| Disabled |/ |/ | +++## Release stages and support ++This section describes the different release stages and support for each stage.
++| Release stage |Notes | +|:-- |:-- | +| End of Support announcement | Before the end of the lifecycle at any stage, an end of support announcement is made.<br><br>Support Service Level Agreements (SLAs) are applicable for End of Support announced connectors, but all customers must upgrade to a new version of the connector no later than the End of Support date.<br><br>During this stage, the existing connectors function as expected, but objects such as linked services can be created only on the new version of the connector. | +| End of Support | At this stage, the connector is considered deprecated and no longer supported.<br> • No plan to fix bugs. <br> • No plan to add any new features. <br><br> If necessary due to outstanding security issues, or other factors, **Microsoft might expedite moving into the final disabled stage at any time, at Microsoft's discretion**.| +|Disabled |All pipelines running on legacy connector versions will no longer be able to execute.| + ## Legacy connectors with updated connectors or drivers available now The following legacy connectors or legacy driver versions will be deprecated, but new updated versions are available in Azure Data Factory. You can update existing data sources to use the new connectors moving forward. -- [Google Ads/Adwords](connector-google-adwords.md#upgrade-the-google-ads-driver-version) - [Google BigQuery](connector-google-bigquery.md#upgrade-the-google-bigquery-linked-service) - [MariaDB](connector-mariadb.md#upgrade-the-mariadb-driver-version)-- [MongoDB](connector-mongodb.md#upgrade-the-mongodb-linked-service) - [MySQL](connector-mysql.md#upgrade-the-mysql-driver-version)+- [PostgreSQL](connector-postgresql.md#upgrade-the-postgresql-linked-service) - [Salesforce](connector-salesforce.md#upgrade-the-salesforce-linked-service) - [Salesforce Service Cloud](connector-salesforce-service-cloud.md#upgrade-the-salesforce-service-cloud-linked-service) - [ServiceNow](connector-servicenow.md#upgrade-your-servicenow-linked-service) |
databox | Data Box Disk Deploy Copy Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-copy-data.md | Review the following considerations before you copy the data to the disks: - Always copy the VHDs to one of the precreated folders. VHDs placed outside of these folders or in a folder that you created are uploaded to Azure Storage accounts as page blobs instead of managed disks. - Only fixed VHDs can be uploaded to create managed disks. Dynamic VHDs, differencing VHDs, and VHDX files aren't supported. - The Data Box Disk Split Copy and Validation tools, `DataBoxDiskSplitCopy.exe` and `DataBoxDiskValidation.cmd`, report failures when long paths are processed. These failures are common when long paths aren't enabled on the client, and your data copy's paths and file names exceed 256 characters. To avoid these failures, follow the guidance within the [enable long paths on your Windows client](/windows/win32/fileio/maximum-file-path-limitation?tabs=cmd#enable-long-paths-in-windows-10-version-1607-and-later) article.+ > [!IMPORTANT] + > PowerShell ISE isn't supported for the Data Box Disk tools. Perform the following steps to connect and copy data from your computer to the Data Box Disk. |
databox | Data Box Disk Deploy Set Up | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-set-up.md | Perform the following steps to connect and unlock your disks. 1. In the Azure portal, navigate to your Data Box Disk Order. Search for it by navigating to **General > All resources**, then select your Data Box Disk Order. 2. Download the Data Box Disk toolset corresponding to the Windows client. This toolset contains three tools: the Data Box Disk Unlock tool, the Data Box Disk Validation tool, and the Data Box Disk Split Copy tool.-+ + > [!NOTE] + > PowerShell ISE isn't supported for the Data Box Disk tools. + This procedure requires only the Data Box Disk Unlock tool. The remaining tools are used in subsequent steps. > [!div class="nextstepaction"] |
defender-for-iot | Cli Ot Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md | Use the following commands to restart the OT sensor appliance. |User |Command |Full command syntax | |||| |**admin** | `system reboot` | No attributes |-|**cyberx** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `sudo reboot` | No attributes | |**cyberx_host**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `sudo reboot` | No attributes | For example, for the *admin* user: Use the following commands to shut down the OT sensor appliance. |User |Command |Full command syntax | |||| |**admin** | `system shutdown` | No attributes |-|**cyberx** , or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `sudo shutdown -r now` | No attributes | |**cyberx_host**, or **admin** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-an-admin-user) | `sudo shutdown -r now` | No attributes | For example, for the *admin* user: |
event-grid | Communication Services Chat Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-chat-events.md | This section contains an example of what that data would look like for each even "key": "value", "description": "A map of data associated with the message" },- "senderId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724", "senderCommunicationIdentifier": { "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724", "communicationUser": { This section contains an example of what that data would look like for each even "composeTime": "2021-02-19T00:25:58.927Z", "type": "Text", "version": 1613694358927,- "recipientId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d05-83fe-084822000f6d", "recipientCommunicationIdentifier": { "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d05-83fe-084822000f6d", "communicationUser": { This section contains an example of what that data would look like for each even "key": "value", "description": "A map of data associated with the message" },- "senderId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724", "senderCommunicationIdentifier": { "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724", "communicationUser": { This section contains an example of what that data would look like for each even "composeTime": "2021-02-19T00:25:57.917Z", "type": "Text", "version": 1613694500784,- "recipientId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f", "recipientCommunicationIdentifier": { "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f", "communicationUser": { This section contains an example of what that data would look like for each even "data": { "deleteTime": "2021-02-19T00:43:10.14Z", "messageId": "1613695388152",- "senderId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e", "senderCommunicationIdentifier": { "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e", "communicationUser": { This section contains an example of what that data would look like for each even "composeTime": "2021-02-19T00:43:08.152Z", "type": "Text", "version": 1613695390361,- "recipientId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f", "recipientCommunicationIdentifier": { "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d60-83fe-084822000f6f", "communicationUser": { This section contains an example of what that data would look like for each even "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}", "subject": "thread/{thread-id}/createdBy/rawId/recipient/rawId", "data": {- "createdBy": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9", "createdByCommunicationIdentifier": { "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9", "communicationUser": { This section contains an example of what that data would look like for each even "properties": { "topic": "Chat about new communication services" },- "members": [ - { - "displayName": "Bob", - "memberId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286d-e1fe-0848220013b9" - }, - { - "displayName": "John", - "memberId": 
"8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-289b-07fd-0848220015ea" - } - ], "participants": [ { "displayName": "Bob", This section contains an example of what that data would look like for each even ], "createTime": "2021-02-18T23:47:26.91Z", "version": 1613692046910,- "recipientId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286e-84f5-08482200181c", "recipientCommunicationIdentifier": { "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-576c-286e-84f5-08482200181c", "communicationUser": { This section contains an example of what that data would look like for each even "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}", "subject": "thread/{thread-id}/deletedBy/{rawId}/recipient/{rawId}", "data": {- "deletedBy": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-6473-83fe-084822000e21", "deletedByCommunicationIdentifier": { "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-6473-83fe-084822000e21", "communicationUser": { This section contains an example of what that data would look like for each even "deleteTime": "2021-02-18T23:57:51.5987591Z", "createTime": "2021-02-18T23:54:15.683Z", "version": 1613692578672,- "recipientId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-647b-e1fe-084822001416", "recipientCommunicationIdentifier": { "rawId": "8:acs:3d703c91-9657-4b3f-b19c-ef9d53f99710_00000008-5772-647b-e1fe-084822001416", "communicationUser": { This section contains an example of what that data would look like for each even "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}", "subject": "thread/{thread-id}/editedBy/{rawId}/recipient/{rawId}", "data": {- "editedBy": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e", "editedByCommunicationIdentifier": { "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7d07-83fe-084822000f6e", "communicationUser": { This section contains an example of what that data would look like for each even }, "createTime": "2021-02-19T00:28:25.864Z", "version": 1613694508719,- "recipientId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724", "recipientCommunicationIdentifier": { "rawId": "8:acs:109f0644-b956-4cd9-87b1-71024f6e2f44_00000008-578d-7caf-07fd-084822001724", "communicationUser": { |
expressroute | Cross Connections Api Development | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/cross-connections-api-development.md | Once you receive the ExpressRoute service key from the target customer, follow t }, { "name": "9ee700ad-50b2-4b98-a63a-4e52f855ac24",- "id": "/subscriptions/8030cec9-2c0c-4361-9949-1655c6e4b0fa/resourceGroups/CrossConnection-EUAPTest/providers/Microsoft.Network/expressRouteCrossConnections/<ProviderManagementSubscription>", + "id": "/subscriptions/00001111-aaaa-2222-bbbb-3333cccc4444/resourceGroups/CrossConnection-EUAPTest/providers/Microsoft.Network/expressRouteCrossConnections/<ProviderManagementSubscription>", "etag": "W/\"f07a267f-4a5c-4538-83e5-de1fcb183801\"", "type": "Microsoft.Network/expressRouteCrossConnections", "location": "eastus2euap", Once you receive the ExpressRoute service key from the target customer, follow t Pragma: no-cache Retry-After: 10 x-ms-request-id: 0a8d458b-8fe3-44e6-89c9-1b156b946693- Azure-AsyncOperation: https://management.azure.com/subscriptions/8030cec9-2c0c-4361-9949-1655c6e4b0fa/providers/Microsoft.Network/locations/eastus2euap/operations/0a8d458b-8fe3-44e6-89c9-1b156b946693?api-version=2018-02-01 + Azure-AsyncOperation: https://management.azure.com/subscriptions/00001111-aaaa-2222-bbbb-3333cccc4444/providers/Microsoft.Network/locations/eastus2euap/operations/0a8d458b-8fe3-44e6-89c9-1b156b946693?api-version=2018-02-01 Strict-Transport-Security: max-age=31536000; includeSubDomains Cache-Control: no-cache Server: Microsoft-HTTPAPI/2.0; Microsoft-HTTPAPI/2.0 Once you receive the ExpressRoute service key from the target customer, follow t } } - C:\Users\kaanan\Documents\Expressroute\Partner APIs\ARMClient-master\ARMClient-master>armclient get https://management.azure.com/subscriptions/<ProviderManagementSubscription>/providers/Microsoft.Network/locations/eastus2euap/operations/0a8d458b-8fe3-44e6-89c9-1b156b946693?api-version=2018-02-01 + C:\Users\Admin\Documents\Expressroute\Partner APIs\ARMClient-master\ARMClient-master>armclient get https://management.azure.com/subscriptions/<ProviderManagementSubscription>/providers/Microsoft.Network/locations/eastus2euap/operations/0a8d458b-8fe3-44e6-89c9-1b156b946693?api-version=2018-02-01 { "status": "Succeeded" } Once you receive the ExpressRoute service key from the target customer, follow t "properties": { "peeringType": "MicrosoftPeering", "peerASN": 900,- "primaryPeerAddressPrefix": "123.0.0.0/30", - "secondaryPeerAddressPrefix": "123.0.0.4/30", + "primaryPeerAddressPrefix": "203.0.113.0/30", + "secondaryPeerAddressPrefix": "203.0.113.4/30", "vlanId": 300, "microsoftPeeringConfig": { "advertisedPublicPrefixes": [- "123.1.0.0/24" + "203.0.113.128/25" ], "customerASN": 45, "routingRegistryName": "ARIN" Once you receive the ExpressRoute service key from the target customer, follow t Pragma: no-cache Retry-After: 10 x-ms-request-id: e3aa0bbd-4709-4092-a1f1-aa78080929d0- Azure-AsyncOperation: https://management.azure.com/subscriptions/8030cec9-2c0c-4361-9949-1655c6e4b0fa/providers/Microsoft.Network/locations/eastus2euap/operations/e3aa0bbd-4709-4092-a1f1-aa78080929d0?api-version=2018-02-01 + Azure-AsyncOperation: https://management.azure.com/subscriptions/00001111-aaaa-2222-bbbb-3333cccc4444/providers/Microsoft.Network/locations/eastus2euap/operations/e3aa0bbd-4709-4092-a1f1-aa78080929d0?api-version=2018-02-01 Strict-Transport-Security: max-age=31536000; includeSubDomains Cache-Control: no-cache Server: Microsoft-HTTPAPI/2.0; Microsoft-HTTPAPI/2.0 Once 
you receive the ExpressRoute service key from the target customer, follow t "peeringType": "MicrosoftPeering", "azureASN": 0, "peerASN": 900,- "primaryPeerAddressPrefix": "123.0.0.0/30", - "secondaryPeerAddressPrefix": "123.0.0.4/30", + "primaryPeerAddressPrefix": "203.0.113.0/30", + "secondaryPeerAddressPrefix": "203.0.113.4/30", "state": "Disabled", "vlanId": 300, "lastModifiedBy": "", "microsoftPeeringConfig": { "advertisedPublicPrefixes": [- "123.1.0.0/24" + "203.0.113.128/25" ], "advertisedPublicPrefixesState": "NotConfigured", "customerASN": 45, Once you receive the ExpressRoute service key from the target customer, follow t } } - C:\Users\kaanan\Documents\Expressroute\Partner APIs\ARMClient-master\ARMClient-master>armclient get https://management.azure.com/subscriptions/<ProviderManagementSubscription>/providers/Microsoft.Network/locations/eastus2euap/operations/e3aa0bbd-4709-4092-a1f1-aa78080929d0?api-version=2018-02-01 + C:\Users\Admin\Documents\Expressroute\Partner APIs\ARMClient-master\ARMClient-master>armclient get https://management.azure.com/subscriptions/<ProviderManagementSubscription>/providers/Microsoft.Network/locations/eastus2euap/operations/e3aa0bbd-4709-4092-a1f1-aa78080929d0?api-version=2018-02-01 { "status": "Succeeded" } |
expressroute | Expressroute Locations Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md | The following table shows connectivity locations and the service providers for e | **Portland** | [EdgeConnex POR01](https://www.edgeconnex.com/locations/north-america/portland-or/) | 1 | West US 2 | ✓ | | | **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India | ✓ | Airtel<br/>Lightstorm<br/>Tata Communications | | **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | ✓ | Bell Canada<br/>Equinix<br/>Megaport<br/>RISQ<br/>Telus |-| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | ✗ | ✓ | Cirion Technologies<br/>Equinix<br/>MCM Telecom<br/>Megaport<br/>Transtelco | +| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | ✗ | ✓ | Cirion Technologies<br/>Equinix<br/>KIO<br/>MCM Telecom<br/>Megaport<br/>Transtelco | | **Quincy** | Sabey Datacenter - Building A | 1 | West US 2 | ✓ | | |
expressroute | Expressroute Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md | The following table shows locations by service provider. If you want to view ava | **[Jisc](https://www.jisc.ac.uk/microsoft-azure-expressroute)** | ✓ | ✓ | London<br/>London2<br/>Newport(Wales) | | **KDDI** | ✓ | ✓ | Osaka<br/>Tokyo<br/>Tokyo2 | | **[KINX](https://www.kinx.net/service/cloudhub/clouds/microsoft_azure_expressroute/?lang=en)** | ✓ | ✓ | Seoul |+| **[KIO](https://www.kio.tech/es-mx/microsoft-expressroute)** | ✓ | ✓ | Queretaro (Mexico) | | **[Kordia](https://www.kordia.co.nz/cloudconnect)** | ✓ | ✓ | Auckland<br/>Sydney | | **[KPN](https://www.kpn.com/zakelijk/cloud/connect.htm)** | ✓ | ✓ | Amsterdam<br/>Dublin2| | **[KT](https://cloud.kt.com/)** | ✓ | ✓ | Seoul<br/>Seoul2 | |
kubernetes-fleet | Cluster Resource Override | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/cluster-resource-override.md | - Title: "Customize cluster scoped resources in Azure Kubernetes Fleet Manager with cluster resource overrides" -description: This article provides an overview of how to use the Fleet ClusterResourceOverride API to override cluster scoped resources in Azure Kubernetes Fleet Manager. - Previously updated : 05/10/2024----- - build-2024 ---# Customize cluster scoped resources in Azure Kubernetes Fleet Manager with cluster resource overrides (preview) --This article provides an overview of how to use the Fleet `ClusterResourceOverride` API to customize cluster scoped resources in Azure Kubernetes Fleet Manager. ---## Cluster resource override overview --The cluster resource override feature allows you to modify or override specific attributes across cluster-wide resources. With `ClusterResourceOverride`, you can define rules based on cluster labels, specifying changes to be applied to various cluster-wide resources such as namespaces, cluster roles, cluster role bindings, or custom resource definitions (CRDs). These modifications might include updates to permissions, configurations, or other parameters, ensuring consistent management and enforcement of configurations across your Fleet-managed Kubernetes clusters. --## API components --The `ClusterResourceOverride` API consists of the following components: --* **`clusterResourceSelectors`**: Specifies the set of cluster resources selected for overriding. -* **`policy`**: Specifies the set of rules to apply to the selected cluster resources. --### Cluster resource selectors --A `ClusterResourceOverride` object can include one or more cluster resource selectors to specify which resources to override. The `ClusterResourceSelector` object supports the following fields: --> [!NOTE] -> If you select a namespace in the `ClusterResourceSelector`, the override will apply to all resources in the namespace. --* `group`: The API group of the resource. -* `version`: The API version of the resource. -* `kind`: The kind of the resource. -* `name`: The name of the resource. --To add a cluster resource selector to a `ClusterResourceOverride` object, use the `clusterResourceSelectors` field with the following YAML format: --```yaml -apiVersion: placement.kubernetes-fleet.io/v1alpha1 -kind: ClusterResourceOverride -metadata: - name: example-cro -spec: - clusterResourceSelectors: - - group: rbac.authorization.k8s.io - kind: ClusterRole - version: v1 - name: secret-reader -``` --This example selects a `ClusterRole` named `secret-reader` from the `rbac.authorization.k8s.io/v1` API group, as shown below, for overriding. --```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: secret-reader -rules: -- apiGroups: [""]- resources: ["secrets"] - verbs: ["get", "watch", "list"] -``` --## Policy --A `Policy` object consists of a set of rules, `overrideRules`, that specify the changes to apply to the selected cluster resources. Each `overrideRule` object supports the following fields: --* `clusterSelector`: Specifies the set of clusters to which the override rule applies. -* `jsonPatchOverrides`: Specifies the changes to apply to the selected resources. 
--To add an override rule to a `ClusterResourceOverride` object, use the `policy` field with the following YAML format: --```yaml -apiVersion: placement.kubernetes-fleet.io/v1alpha1 -kind: ClusterResourceOverride -metadata: - name: example-cro -spec: - clusterResourceSelectors: - - group: rbac.authorization.k8s.io - kind: ClusterRole - version: v1 - name: secret-reader - policy: - overrideRules: - - clusterSelector: - clusterSelectorTerms: - - labelSelector: - matchLabels: - env: prod - jsonPatchOverrides: - - op: remove - path: /rules/0/verbs/2 -``` --This example removes the verb "list" in the `ClusterRole` named `secret-reader` on clusters with the label `env: prod`. --### Cluster selector --You can use the `clusterSelector` field in the `overrideRule` object to specify the clusters to which the override rule applies. The `ClusterSelector` object supports the following field: --* `clusterSelectorTerms`: A list of terms that specify the criteria for selecting clusters. Each term includes a `labelSelector` field that defines a set of labels to match. --> [!IMPORTANT] -> Only `labelSelector` is supported in the `clusterSelectorTerms` field. --### JSON patch overrides --You can use `jsonPatchOverrides` in the `overrideRule` object to specify the changes to apply to the selected resources. The `JsonPatch` object supports the following fields: --* `op`: The operation to perform. - * Supported operations include `add`, `remove`, and `replace`. - * `add`: Adds a new value to the specified path. - * `remove`: Removes the value at the specified path. - * `replace`: Replaces the value at the specified path. -* `path`: The path to the field to modify. - * Guidance on specifying paths includes: - * Must start with a `/` character. - * Can't be empty or contain an empty string. - * Can't be a `TypeMeta` field ("/kind" or "/apiVersion"). - * Can't be a `Metadata` field ("/metadata/name" or "/metadata/namespace") except the fields "/metadata/labels" and "/metadata/annotations". - * Can't be any field in the status of the resource. - * Examples of valid paths include: - * `/metadata/labels/new-label` - * `/metadata/annotations/new-annotation` - * `/spec/template/spec/containers/0/resources/limits/cpu` - * `/spec/template/spec/containers/0/resources/requests/memory` -* `value`: The value to add, remove, or replace. - * If the `op` is `remove`, you can't specify a `value`. --### Use multiple override patches --You can add multiple `jsonPatchOverrides` to an `overrideRule` to apply multiple changes to the selected cluster resources, as shown in the following example: --```yaml -apiVersion: placement.kubernetes-fleet.io/v1alpha1 -kind: ClusterResourceOverride -metadata: - name: cro-1 -spec: - clusterResourceSelectors: - - group: rbac.authorization.k8s.io - kind: ClusterRole - version: v1 - name: secret-reader - policy: - overrideRules: - - clusterSelector: - clusterSelectorTerms: - - labelSelector: - matchLabels: - env: prod - jsonPatchOverrides: - - op: remove - path: /rules/0/verbs/2 - - op: remove - path: /rules/0/verbs/1 -``` --This example removes the verbs "list" and "watch" in the `ClusterRole` named `secret-reader` on clusters with the label `env: prod`, as shown below. --```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: secret-reader -rules: -- apiGroups: [""]- resources: ["secrets"] - verbs: ["get", "watch", "list"] -``` --`jsonPatchOverrides` apply a JSON patch on the selected resources following [RFC 6902](https://datatracker.ietf.org/doc/html/rfc6902).
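The examples above only remove values. As a complementary sketch (not from the original article), the following override uses an `add` operation to attach a hypothetical `audit: enabled` label to the selected `ClusterRole` on clusters labeled `env: prod`; the override name and label values are illustrative.

```yaml
apiVersion: placement.kubernetes-fleet.io/v1alpha1
kind: ClusterResourceOverride
metadata:
  name: cro-add-label
spec:
  clusterResourceSelectors:
    - group: rbac.authorization.k8s.io
      kind: ClusterRole
      version: v1
      name: secret-reader
  policy:
    overrideRules:
      - clusterSelector:
          clusterSelectorTerms:
            - labelSelector:
                matchLabels:
                  env: prod
        jsonPatchOverrides:
          # Hypothetical label; "add" follows RFC 6902 semantics and
          # targets a path allowed by the rules above (/metadata/labels/...).
          - op: add
            path: /metadata/labels/audit
            value: "enabled"
```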
--## Apply the cluster resource placement --### [Azure CLI](#tab/azure-cli) --1. Create a `ClusterResourcePlacement` resource to specify the placement rules for distributing the cluster resource overrides across the cluster infrastructure, as shown in the following example. Make sure you select the appropriate resource. -- ```yaml - apiVersion: placement.kubernetes-fleet.io/v1beta1 - kind: ClusterResourcePlacement - metadata: - name: crp - spec: - resourceSelectors: - - group: rbac.authorization.k8s.io - kind: ClusterRole - version: v1 - name: secret-reader - policy: - placementType: PickAll - affinity: - clusterAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - clusterSelectorTerms: - - labelSelector: - matchLabels: - env: prod - ``` -- This example distributes resources across all clusters labeled with `env: prod`. As the changes are implemented, the corresponding `ClusterResourceOverride` configurations will be applied to the designated clusters, triggered by the selection of matching cluster role resource, `secret-reader`. --2. Apply the `ClusterResourcePlacement` using the `kubectl apply` command. -- ```bash - kubectl apply -f cluster-resource-placement.yaml - ``` --3. Verify the `ClusterResourceOverride` object applied to the selected resources by checking the status of the `ClusterResourcePlacement` resource using the `kubectl describe` command. -- ```bash - kubectl describe clusterresourceplacement crp - ``` -- Your output should resemble the following example output: -- ```output - Status: - Conditions: - ... - Last Transition Time: 2024-04-27T04:18:00Z - Message: The selected resources are successfully overridden in the 10 clusters - Observed Generation: 1 - Reason: OverriddenSucceeded - Status: True - Type: ClusterResourcePlacementOverridden - ... - Observed Resource Index: 0 - Placement Statuses: - Applicable Cluster Resource Overrides: - example-cro-0 - Cluster Name: member-50 - Conditions: - ... - Message: Successfully applied the override rules on the resources - Observed Generation: 1 - Reason: OverriddenSucceeded - Status: True - Type: Overridden - ... - ``` - - The `ClusterResourcePlacementOverridden` condition indicates whether the resource override was successfully applied to the selected resources in the clusters. Each cluster maintains its own `Applicable Cluster Resource Overrides` list, which contains the cluster resource override snapshot if relevant. Individual status messages for each cluster indicate whether the override rules were successfully applied. --### [Portal](#tab/azure-portal) --1. On the Azure portal overview page for your Fleet resource, in the **Fleet Resources** section, select **Resource placements**. --1. Select **Create**. --1. Create a `ClusterResourcePlacement` resource to specify the placement rules for distributing the cluster resource overrides across the cluster infrastructure, as shown in the following example. Make sure you select the appropriate resource. Replace the default template with the YAML example below, and select **Add**. 
-- :::image type="content" source="./media/cluster-resource-override/crp-create-inline.png" lightbox="./media/cluster-resource-override/crp-create.png" alt-text="A screenshot of the Azure portal page for creating a resource placement, showing the YAML template with placeholder values."::: -- ```yaml - apiVersion: placement.kubernetes-fleet.io/v1beta1 - kind: ClusterResourcePlacement - metadata: - name: crp - spec: - resourceSelectors: - - group: rbac.authorization.k8s.io - kind: ClusterRole - version: v1 - name: secret-reader - policy: - placementType: PickAll - affinity: - clusterAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - clusterSelectorTerms: - - labelSelector: - matchLabels: - env: prod - ``` -- This example distributes resources across all clusters labeled with `env: prod`. As the changes are implemented, the corresponding `ClusterResourceOverride` configurations will be applied to the designated clusters, triggered by the selection of matching cluster role resource, `secret-reader`. ---1. Verify that the cluster resource placement is created successfully. -- :::image type="content" source="./media/cluster-resource-override/crp-success-inline.png" lightbox="./media/cluster-resource-override/crp-success.png" alt-text="A screenshot of the Azure portal page for cluster resource placements, showing a successfully created cluster resource placement."::: --1. Verify the cluster resource placement applied to the selected resources by selecting the resource from the list and checking the status. ----## Next steps --To learn more about Fleet, see the following resources: --* [Upstream Fleet documentation](https://github.com/Azure/fleet/tree/main/docs) -* [Azure Kubernetes Fleet Manager overview](./overview.md) |
kubernetes-fleet | Concepts Choosing Fleet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/concepts-choosing-fleet.md | - Title: "Choose an Azure Kubernetes Fleet Manager option" -description: This article provides a conceptual overview of the various Azure Kubernetes Fleet Manager options and why you may choose a specific configuration. Previously updated : 05/01/2024----- - build-2024 ----# Choosing an Azure Kubernetes Fleet Manager option --This article provides an overview of the various Azure Kubernetes Fleet Manager (Fleet) options and the considerations you should use to guide your selection of a specific configuration. --## Fleet types --A Kubernetes Fleet resource can be created with or without a hub cluster. A hub cluster is a managed Azure Kubernetes Service (AKS) cluster that acts as a hub to store and propagate Kubernetes resources. --The following table compares the scenarios enabled by the hub cluster: --| Capability | Kubernetes Fleet resource without hub cluster | Kubernetes Fleet resource with hub cluster | -|-|-|-| -|**Hub cluster hosting**|<span class='red-x'>❌</span>|<span class='green-check'>✅</span>| -|**Update orchestration**|<span class='green-check'>✅</span>|<span class='green-check'>✅</span>| -|**Workload orchestration**|<span class='red-x'>❌</span>|<span class='green-check'>✅</span>| -|**Layer 4 load balancing**|<span class='red-x'>❌</span>|<span class='green-check'>✅</span>| -|**Billing considerations**|No cost|You pay the cost associated with the hub, which is a standard-tier AKS cluster.| -|**Converting fleet types**|Can be upgraded to a Kubernetes Fleet resource with a hub cluster.|Can't be downgraded to a Kubernetes Fleet resource without a hub cluster.| --## Kubernetes Fleet resource without hub clusters --Without a hub cluster, Kubernetes Fleet acts solely as a grouping entity in Azure Resource Manager (ARM). Certain scenarios, such as update runs, don't require a Kubernetes API and thus don't require a hub cluster. To take full advantage of all the features available, you need a Kubernetes Fleet resource with a hub cluster. --For more information, see [Create a Kubernetes Fleet resource without a hub cluster][create-fleet-without-hub]. --## Kubernetes Fleet resource with hub clusters --A Kubernetes Fleet resource with a hub cluster has an associated AKS-managed cluster, which hosts the open-source [fleet manager][fleet-github] and [fleet network manager][fleet-networking-github] solutions for workload orchestration and layer-4 load balancing. --Upon the creation of a Kubernetes Fleet resource with a hub cluster, a hub AKS cluster is automatically created in the same subscription under a managed resource group that begins with `FL_`. To improve reliability, hub clusters are locked down by denying any user-initiated mutations to the corresponding AKS clusters (under the Fleet-managed resource group `FL_`) and their underlying Azure resources (under the AKS-managed resource group `MC_FL_*`), such as virtual machines (VMs), via Azure deny assignments. Control plane operations, such as changing the hub cluster's configuration through Azure Resource Manager (ARM) or deleting the cluster entirely, are denied. Data plane operations, such as connecting to the hub cluster's Kubernetes API server in order to configure workload orchestration, aren't denied. --Hub clusters are exempted from [Azure policies][azure-policy-overview] to avoid undesirable policy effects upon hub clusters.
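As a quick sketch of how this choice surfaces at creation time, the following Azure CLI commands (using the `fleet` CLI extension) show one way to create each fleet type. The resource group and fleet names are placeholders, and you should verify the flags against your installed CLI version.

```azurecli-interactive
# Install or update the fleet CLI extension (assumed prerequisite).
az extension add --name fleet --upgrade

# Kubernetes Fleet resource without a hub cluster: grouping and update orchestration only.
az fleet create --resource-group $GROUP --name $FLEET --location eastus

# Kubernetes Fleet resource with a hub cluster: also enables workload
# orchestration and layer-4 load balancing. Can't be downgraded later.
az fleet create --resource-group $GROUP --name $FLEET_HUB --location eastus --enable-hub
```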
--### Network access modes for hub cluster --For a Kubernetes Fleet resource with a hub cluster, there are two network access modes: --- **Public hub clusters** expose the hub cluster to the internet. This means that with the right credentials, anyone on the internet can connect to the hub cluster. This configuration can be useful during the development and testing phase, but represents a security concern, which is largely undesirable in production.--For more information, see [Create a Kubernetes Fleet resource with a public hub cluster][create-public-hub-cluster]. --- **Private hub clusters** use a [private AKS cluster][aks-private-cluster] as the hub, which prevents open access over the internet. All considerations for a private AKS cluster apply, so review the prerequisites and limitations to determine whether a Kubernetes Fleet resource with a private hub cluster meets your needs.--Some other details to consider: --- Whether you choose a public or private hub, the type can't be changed after creation.-- When using an AKS private cluster, you have the ability to configure fully qualified domain names (FQDNs) and FQDN subdomains. This functionality doesn't apply to the private hub cluster of the Kubernetes Fleet resource.-- When you connect to a private hub cluster, you can use the same methods that you would use to [connect to any private AKS cluster][aks-private-cluster-connect]. However, connecting using AKS command invoke and private endpoints aren't currently supported.-- When you use private hub clusters, you're required to specify the subnet in which the Kubernetes Fleet hub cluster's node VMs reside. This process differs slightly from the AKS private cluster equivalent. For more information, see [create a Kubernetes Fleet resource with a private hub cluster][create-private-hub-cluster].---## Next steps --Now that you understand the different types of Kubernetes fleet resources, see [Create an Azure Kubernetes Fleet Manager resource and join member clusters][quickstart-create-fleet]. --<!-- LINKS --> -[aks-private-cluster]: /azure/aks/private-clusters -[aks-private-cluster-connect]: /azure/aks/private-clusters?tabs=azure-portal#options-for-connecting-to-the-private-cluster -[azure-policy-overview]: /azure/governance/policy/overview -[quickstart-create-fleet]: quickstart-create-fleet-and-members.md -[create-fleet-without-hub]: quickstart-create-fleet-and-members.md?tabs=without-hub-cluster#create-a-fleet-resource -[create-public-hub-cluster]: quickstart-create-fleet-and-members.md?tabs=with-hub-cluster#public-hub-cluster -[create-private-hub-cluster]: quickstart-create-fleet-and-members.md?tabs=with-hub-cluster#private-hub-cluster --<!-- LINKS - external --> -[fleet-github]: https://github.com/Azure/fleet -[fleet-networking-github]: https://github.com/Azure/fleet-networking |
kubernetes-fleet | Concepts Fleet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/concepts-fleet.md | - Title: "Azure Kubernetes Fleet Manager and member clusters" -description: This article provides a conceptual overview of Azure Kubernetes Fleet Manager and member clusters. Previously updated : 04/23/2024----- - build-2024 ----# Azure Kubernetes Fleet Manager and member clusters --This article provides a conceptual overview of fleets, member clusters, and hub clusters in Azure Kubernetes Fleet Manager (Fleet). --## What are fleets? --A fleet resource acts as a grouping entity for multiple AKS clusters. You can use it to manage multiple AKS clusters as a single entity, orchestrate updates across multiple clusters, propagate Kubernetes resources across multiple clusters, and provide a single pane of glass for managing multiple clusters. You can create a fleet with or without a [hub cluster](concepts-choosing-fleet.md). --A fleet consists of the following components: ---* **fleet-hub-agent**: A Kubernetes controller that creates and reconciles all the fleet-related custom resources (CRs) in the hub cluster. -* **fleet-member-agent**: A Kubernetes controller that creates and reconciles all the fleet-related CRs in the member clusters. This controller pulls the latest CRs from the hub cluster and consistently reconciles the member clusters to match the desired state. --## What are member clusters? --The `MemberCluster` is a cluster-scoped API established within the hub cluster that represents a cluster within the fleet. This API offers a dependable, uniform, and automated approach for multi-cluster applications to identify registered clusters within a fleet. It also lets applications query the list of clusters managed by the fleet or observe cluster statuses for subsequent actions. --You can join Azure Kubernetes Service (AKS) clusters to a fleet as member clusters. Member clusters must reside in the same Microsoft Entra tenant as the fleet, but they can be in different regions, different resource groups, and/or different subscriptions. --### Taints --Member clusters support the specification of taints, which apply to the `MemberCluster` resource. Each taint object consists of the following fields: --* `key`: The key of the taint. -* `value`: The value of the taint. -* `effect`: The effect of the taint, such as `NoSchedule`. --Once a `MemberCluster` is tainted, it lets the [scheduler](./concepts-scheduler-scheduling-framework.md) know that the cluster shouldn't receive resources as part of the [resource propagation](./concepts-resource-propagation.md) from the hub cluster. The `NoSchedule` effect is a signal to the scheduler to avoid scheduling resources from a [`ClusterResourcePlacement`](./concepts-resource-propagation.md#what-is-a-clusterresourceplacement) to the `MemberCluster`. --For more information, see [the upstream Fleet documentation](https://github.com/Azure/fleet/blob/main/docs/concepts/MemberCluster/README.md). --## Next steps --* [Create a fleet and join member clusters](./quickstart-create-fleet-and-members.md). |
kubernetes-fleet | Concepts L4 Load Balancing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/concepts-l4-load-balancing.md | - Title: "Multi-cluster layer-4 load balancing (preview)" -description: This article describes the concept of multi-cluster layer-4 load balancing. Previously updated : 03/04/2024-------# Multi-cluster layer-4 load balancing (preview) ---Azure Kubernetes Fleet Manager (Fleet) can be used to set up layer 4 multi-cluster load balancing across workloads deployed in member clusters. --[ ![Diagram that shows how multi-cluster load balancing works.](./media/conceptual-load-balancing.png) ](./media/conceptual-load-balancing.png#lightbox) --For multi-cluster load balancing, Fleet requires target clusters to be using [Azure CNI networking](/azure/aks/configure-azure-cni). Azure CNI networking enables pod IPs to be directly addressable on the Azure virtual network so that they can be routed to from the Azure Load Balancer. --To expose a service across the fleet, you create a `ServiceExport` resource for it. The `ServiceExport` can be propagated from the fleet cluster to a member cluster using the Kubernetes resource propagation feature, or it can be created directly on the member cluster. Once this `ServiceExport` resource is created, a `ServiceImport` is created on the fleet cluster and all other member clusters to make them aware of the service. --The user can then create a `MultiClusterService` custom resource to indicate that they want to set up layer 4 multi-cluster load balancing. This `MultiClusterService` results in the Azure Load Balancer mapped to the member clusters being configured to load balance incoming traffic across the endpoints of this service on multiple member clusters. --## Next steps --* [Set up multi-cluster layer-4 load balancing](./l4-load-balancing.md). |
kubernetes-fleet | Concepts Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/concepts-rbac.md | - Title: "Grant access to Azure Kubernetes Fleet Manager resources with Azure role-based access control" -description: This article provides an overview of the Azure role-based access control roles that can be used to access Azure Kubernetes Fleet Manager resources. Previously updated : 04/29/2024--------# Grant access to Azure Kubernetes Fleet Manager resources with Azure role-based access control --[Azure role-based access control (Azure RBAC)][azure-rbac-overview] is an authorization system built on Azure Resource Manager that provides fine-grained access management to Azure resources. --This article provides an overview of the various built-in Azure RBAC roles that you can use to access Azure Kubernetes Fleet Manager (Kubernetes Fleet) resources. --## Control plane --This role grants access to Azure Resource Manager (ARM) Fleet resources and subresources, and is applicable to Kubernetes Fleet resources both with and without a hub cluster. --|Role name|Description|Usage| -||--|--| -|[Azure Kubernetes Fleet Manager Contributor][azure-rbac-fleet-manager-contributor-role]|This role grants read and write access to Azure resources provided by Azure Kubernetes Fleet Manager, including fleets, fleet members, fleet update strategies, fleet update runs, and more.|You can use this role to grant Contributor permissions that apply solely to Kubernetes Fleet resources and subresources. For example, this role can be given to an Azure administrator tasked with defining and maintaining Fleet resources.| --## Data plane --These roles grant access to Fleet hub Kubernetes objects, and are therefore only applicable to Kubernetes Fleet resources with a hub cluster. --You can assign data plane roles at the Fleet hub cluster scope, or at an individual Kubernetes namespace scope by appending `/namespaces/<namespace>` to the role assignment scope. --|Role name|Description|Usage| -||--|--| -|[Azure Kubernetes Fleet Manager RBAC Reader][azure-rbac-fleet-manager-rbac-reader]|Grants read-only access to most Kubernetes resources within a namespace in the fleet-managed hub cluster. It doesn't allow viewing roles or role bindings. This role doesn't allow viewing Secrets, since reading the contents of Secrets enables access to `ServiceAccount` credentials in the namespace, which would allow API access as any `ServiceAccount` in the namespace (a form of privilege escalation). Applying this role at cluster scope gives access across all namespaces.|You can use this role to grant the capability to read selected nonsensitive Kubernetes objects at either namespace or cluster scope. For example, you can grant this role for review purposes.| -|[Azure Kubernetes Fleet Manager RBAC Writer][azure-rbac-fleet-manager-rbac-writer]|Grants read and write access to most Kubernetes resources within a namespace in the fleet-managed hub cluster. This role doesn't allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any `ServiceAccount` in the namespace, so it can be used to gain the API access levels of any `ServiceAccount` in the namespace. Applying this role at cluster scope gives access across all namespaces.|You can use this role to grant the capability to write selected Kubernetes objects at either namespace or cluster scope. 
For example, for use by a project team responsible for objects in a given namespace.| -|[Azure Kubernetes Fleet Manager RBAC Admin][azure-rbac-fleet-manager-rbac-admin]|Grants read and write access to Kubernetes resources within a namespace in the fleet-managed hub cluster. Provides write permissions on most objects within a namespace, except for `ResourceQuota` object and the namespace object itself. Applying this role at cluster scope gives access across all namespaces.|You can use this role to grant the capability to administer selected Kubernetes objects (including roles and role bindings) at either namespace or cluster scope. For example, for use by a project team responsible for objects in a given namespace.| -|[Azure Kubernetes Fleet Manager RBAC Cluster Admin][azure-rbac-fleet-manager-rbac-cluster-admin]|Grants read/write access to all Kubernetes resources in the fleet-managed hub cluster.|You can use this role to grant access to all Kubernetes objects (including CRDs) at either namespace or cluster scope.| --## Example role assignments --You can grant Azure RBAC roles using the [Azure CLI][azure-cli-overview]. For example, to create a role assignment at the Kubernetes Fleet hub cluster scope: --```azurecli-interactive -IDENTITY=$(az ad signed-in-user show --output tsv --query id) -FLEET_ID=$(az fleet show --resource-group $GROUP --name $FLEET --output tsv --query id) --az role assignment create --role 'Azure Kubernetes Fleet Manager RBAC Reader' --assignee "$IDENTITY" --scope "$FLEET_ID" -``` --You can also scope role assignments to an individual Kubernetes namespace. For example, to create a role assignment for a Kubernetes Fleet hub's default Kubernetes namespace: --```azurecli-interactive -IDENTITY=$(az ad signed-in-user show --output tsv --query id) -FLEET_ID=$(az fleet show --resource-group $GROUP --name $FLEET --output tsv --query id) --az role assignment create --role 'Azure Kubernetes Fleet Manager RBAC Reader' --assignee "$IDENTITY" --scope "$FLEET_ID/namespaces/default" -``` --<!-- LINKS --> -[azure-cli-overview]: /cli/azure/what-is-azure-cli -[azure-rbac-overview]: /azure/role-based-access-control/overview -[azure-rbac-fleet-manager-contributor-role]: /azure/role-based-access-control/built-in-roles/containers#azure-kubernetes-fleet-manager-contributor-role -[azure-rbac-fleet-manager-rbac-reader]: /azure/role-based-access-control/built-in-roles/containers#azure-kubernetes-fleet-manager-rbac-reader -[azure-rbac-fleet-manager-rbac-writer]: /azure/role-based-access-control/built-in-roles/containers#azure-kubernetes-fleet-manager-rbac-writer -[azure-rbac-fleet-manager-rbac-admin]: /azure/role-based-access-control/built-in-roles/containers#azure-kubernetes-fleet-manager-rbac-admin -[azure-rbac-fleet-manager-rbac-cluster-admin]: /azure/role-based-access-control/built-in-roles/containers#azure-kubernetes-fleet-manager-rbac-cluster-admin |
kubernetes-fleet | Concepts Resource Propagation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/concepts-resource-propagation.md | - Title: "Kubernetes resource propagation from hub cluster to member clusters" -description: This article describes the concept of Kubernetes resource propagation from hub cluster to member clusters. Previously updated : 03/04/2024----- - build-2024 ----# Kubernetes resource propagation from hub cluster to member clusters --This article describes the concept of Kubernetes resource propagation from hub clusters to member clusters using Azure Kubernetes Fleet Manager (Fleet). --Platform admins often need to deploy Kubernetes resources into multiple clusters for various reasons, for example: --* Managing access control using roles and role bindings across multiple clusters. -* Running infrastructure applications, such as Prometheus or Flux, that need to be on all clusters. --Application developers often need to deploy Kubernetes resources into multiple clusters for various reasons, for example: --* Deploying a video serving application into multiple clusters in different regions for a low latency watching experience. -* Deploying a shopping cart application into two paired regions for customers to continue to shop during a single region outage. -* Deploying a batch compute application into clusters with inexpensive spot node pools available. --It's tedious to create, update, and track these Kubernetes resources across multiple clusters manually. Fleet provides Kubernetes resource propagation to enable at-scale management of Kubernetes resources. With Fleet, you can create Kubernetes resources in the hub cluster and propagate them to selected member clusters via Kubernetes Custom Resources: `MemberCluster` and `ClusterResourcePlacement`. Fleet supports these custom resources based on an [open-source cloud-native multi-cluster solution][fleet-github]. For more information, see the [upstream Fleet documentation][fleet-github]. ---## Resource propagation workflow --[![Diagram that shows how Kubernetes resources are propagated to member clusters.](./media/conceptual-resource-propagation.png)](./media/conceptual-resource-propagation.png#lightbox) --## What is a `MemberCluster`? --Once a cluster joins a fleet, a corresponding `MemberCluster` custom resource is created on the hub cluster. You can use this custom resource to select target clusters in resource propagation. --The following labels can be used for target cluster selection in resource propagation and are automatically added to all member clusters: --* `fleet.azure.com/location` -* `fleet.azure.com/resource-group` -* `fleet.azure.com/subscription-id` --For more information, see the [MemberCluster API reference][membercluster-api]. --## What is a `ClusterResourcePlacement`? --A `ClusterResourcePlacement` object is used to tell the Fleet scheduler how to place a given set of cluster-scoped objects from the hub cluster into member clusters. Namespace-scoped objects like Deployments, StatefulSets, DaemonSets, ConfigMaps, Secrets, and PersistentVolumeClaims are included when their containing namespace is selected. --With `ClusterResourcePlacement`, you can: --* Select which cluster-scoped Kubernetes resources to propagate to member clusters. -* Specify placement policies to manually or automatically select a subset or all of the member clusters as target clusters. 
-* Specify rollout strategies to safely roll out any updates of the selected Kubernetes resources to multiple target clusters. -* View the propagation progress towards each target cluster. --The `ClusterResourcePlacement` object supports [using ConfigMap to envelope the object][envelope-object] to help propagate to member clusters without any unintended side effects. Selection methods include: --* **Group, version, and kind**: Select and place all resources of the given type. -* **Group, version, kind, and name**: Select and place one particular resource of a given type. -* **Group, version, kind, and labels**: Select and place all resources of a given type that match the labels supplied. --For more information, see the [`ClusterResourcePlacement` API reference][clusterresourceplacement-api]. --When creating the `ClusterResourcePlacement`, the following affinity types can be specified: --- **requiredDuringSchedulingIgnoredDuringExecution**: As this affinity is of the required type during scheduling, it **filters** the clusters based on their properties.-- **preferredDuringSchedulingIgnoredDuringExecution**: As this affinity is only of the preferred type, but isn't required during scheduling, it provides preferential ranking to clusters based on properties specified by you, such as cost or resource availability.--Multiple placement types are available for controlling the number of clusters to which the Kubernetes resource needs to be propagated: --* `PickAll` places the resources into all available member clusters. This policy is useful for placing infrastructure workloads, like cluster monitoring or reporting applications. -* `PickFixed` places the resources into a specific list of member clusters by name. -* `PickN` is the most flexible placement option. It allows for selection of clusters based on affinity or topology spread constraints, and is useful when you want to spread workloads across multiple appropriate clusters to ensure availability. --### `PickAll` placement policy --You can use a `PickAll` placement policy to deploy a workload across all member clusters in the fleet (optionally matching a set of criteria). --The following example shows how to deploy a `prod-deployment` namespace and all of its objects across all clusters labeled with `environment: production`: --```yaml -apiVersion: placement.kubernetes-fleet.io/v1beta1 -kind: ClusterResourcePlacement -metadata: - name: crp-1 -spec: - policy: - placementType: PickAll - affinity: - clusterAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - clusterSelectorTerms: - - labelSelector: - matchLabels: - environment: production - resourceSelectors: - - group: "" - kind: Namespace - name: prod-deployment - version: v1 -``` --This simple policy takes the `prod-deployment` namespace and all resources contained within it and deploys them to all member clusters in the fleet with the given `environment` label. If all clusters are desired, you can remove the `affinity` term entirely. --### `PickFixed` placement policy --If you want to deploy a workload into a known set of member clusters, you can use a `PickFixed` placement policy to select the clusters by name. 
--The following example shows how to deploy the `test-deployment` namespace into member clusters `cluster1` and `cluster2`: --```yaml -apiVersion: placement.kubernetes-fleet.io/v1beta1 -kind: ClusterResourcePlacement -metadata: - name: crp-2 -spec: - policy: - placementType: PickFixed - clusterNames: - - cluster1 - - cluster2 - resourceSelectors: - - group: "" - kind: Namespace - name: test-deployment - version: v1 -``` --### `PickN` placement policy --The `PickN` placement policy is the most flexible option and allows for placement of resources into a configurable number of clusters based on both affinities and topology spread constraints. --#### `PickN` with affinities --Using affinities with a `PickN` placement policy functions similarly to using affinities with pod scheduling. You can set both required and preferred affinities. Required affinities prevent placement to clusters that don't match the specified affinities, and preferred affinities allow for ordering the set of valid clusters when a placement decision is being made. --The following example shows how to deploy a workload into three clusters. Only clusters with the `critical-allowed: "true"` label are valid placement targets, and preference is given to clusters with the label `critical-level: 1`: --```yaml -apiVersion: placement.kubernetes-fleet.io/v1beta1 -kind: ClusterResourcePlacement -metadata: - name: crp -spec: - resourceSelectors: - - ... - policy: - placementType: PickN - numberOfClusters: 3 - affinity: - clusterAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - weight: 20 - preference: - - labelSelector: - matchLabels: - critical-level: 1 - requiredDuringSchedulingIgnoredDuringExecution: - clusterSelectorTerms: - - labelSelector: - matchLabels: - critical-allowed: "true" -``` --#### `PickN` with topology spread constraints --You can use topology spread constraints to force the division of the cluster placements across topology boundaries to satisfy availability requirements, for example, splitting placements across regions or update rings. You can also configure topology spread constraints to prevent scheduling if the constraint can't be met (`whenUnsatisfiable: DoNotSchedule`) or schedule as best possible (`whenUnsatisfiable: ScheduleAnyway`). --The following example shows how to spread a given set of resources out across multiple regions and attempts to schedule across member clusters with different update days: --```yaml -apiVersion: placement.kubernetes-fleet.io/v1beta1 -kind: ClusterResourcePlacement -metadata: - name: crp -spec: - resourceSelectors: - - ... - policy: - placementType: PickN - topologySpreadConstraints: - - maxSkew: 2 - topologyKey: region - whenUnsatisfiable: DoNotSchedule - - maxSkew: 2 - topologyKey: updateDay - whenUnsatisfiable: ScheduleAnyway -``` --For more information, see the [upstream topology spread constraints Fleet documentation][crp-topo]. --## Update strategy --Fleet uses a rolling update strategy to control how updates are rolled out across multiple cluster placements. --The following example shows how to configure a rolling update strategy using the default settings: --```yaml -apiVersion: placement.kubernetes-fleet.io/v1beta1 -kind: ClusterResourcePlacement -metadata: - name: crp -spec: - resourceSelectors: - - ... - policy: - ... 
- strategy: - type: RollingUpdate - rollingUpdate: - maxUnavailable: 25% - maxSurge: 25% - unavailablePeriodSeconds: 60 -``` --The scheduler rolls out updates to each cluster sequentially, waiting at least `unavailablePeriodSeconds` between clusters. Rollout status is considered successful if all resources were correctly applied to the cluster. Rollout status checking doesn't cascade to child resources, for example, it doesn't confirm that pods created by a deployment become ready. --For more information, see the [upstream rollout strategy Fleet documentation][fleet-rollout]. --## Placement status --The Fleet scheduler updates details and status on placement decisions onto the `ClusterResourcePlacement` object. You can view this information using the `kubectl describe crp <name>` command. The output includes the following information: --* The conditions that currently apply to the placement, which include if the placement was successfully completed. -* A placement status section for each member cluster, which shows the status of deployment to that cluster. --The following example shows a `ClusterResourcePlacement` that deployed the `test` namespace and the `test-1` ConfigMap into two member clusters using `PickN`. The placement was successfully completed and the resources were placed into the `aks-member-1` and `aks-member-2` clusters. --``` -Name: crp-1 -Namespace: -Labels: <none> -Annotations: <none> -API Version: placement.kubernetes-fleet.io/v1beta1 -Kind: ClusterResourcePlacement -Metadata: - ... -Spec: - Policy: - Number Of Clusters: 2 - Placement Type: PickN - Resource Selectors: - Group: - Kind: Namespace - Name: test - Version: v1 - Revision History Limit: 10 -Status: - Conditions: - Last Transition Time: 2023-11-10T08:14:52Z - Message: found all the clusters needed as specified by the scheduling policy - Observed Generation: 5 - Reason: SchedulingPolicyFulfilled - Status: True - Type: ClusterResourcePlacementScheduled - Last Transition Time: 2023-11-10T08:23:43Z - Message: All 2 cluster(s) are synchronized to the latest resources on the hub cluster - Observed Generation: 5 - Reason: SynchronizeSucceeded - Status: True - Type: ClusterResourcePlacementSynchronized - Last Transition Time: 2023-11-10T08:23:43Z - Message: Successfully applied resources to 2 member clusters - Observed Generation: 5 - Reason: ApplySucceeded - Status: True - Type: ClusterResourcePlacementApplied - Placement Statuses: - Cluster Name: aks-member-1 - Conditions: - Last Transition Time: 2023-11-10T08:14:52Z - Message: Successfully scheduled resources for placement in aks-member-1 (affinity score: 0, topology spread score: 0): picked by scheduling policy - Observed Generation: 5 - Reason: ScheduleSucceeded - Status: True - Type: ResourceScheduled - Last Transition Time: 2023-11-10T08:23:43Z - Message: Successfully Synchronized work(s) for placement - Observed Generation: 5 - Reason: WorkSynchronizeSucceeded - Status: True - Type: WorkSynchronized - Last Transition Time: 2023-11-10T08:23:43Z - Message: Successfully applied resources - Observed Generation: 5 - Reason: ApplySucceeded - Status: True - Type: ResourceApplied - Cluster Name: aks-member-2 - Conditions: - Last Transition Time: 2023-11-10T08:14:52Z - Message: Successfully scheduled resources for placement in aks-member-2 (affinity score: 0, topology spread score: 0): picked by scheduling policy - Observed Generation: 5 - Reason: ScheduleSucceeded - Status: True - Type: ResourceScheduled - Last Transition Time: 2023-11-10T08:23:43Z - Message: 
Successfully Synchronized work(s) for placement - Observed Generation: 5 - Reason: WorkSynchronizeSucceeded - Status: True - Type: WorkSynchronized - Last Transition Time: 2023-11-10T08:23:43Z - Message: Successfully applied resources - Observed Generation: 5 - Reason: ApplySucceeded - Status: True - Type: ResourceApplied - Selected Resources: - Kind: Namespace - Name: test - Version: v1 - Kind: ConfigMap - Name: test-1 - Namespace: test - Version: v1 -Events: - Type Reason Age From Message - - - - - - Normal PlacementScheduleSuccess 12m (x5 over 3d22h) cluster-resource-placement-controller Successfully scheduled the placement - Normal PlacementSyncSuccess 3m28s (x7 over 3d22h) cluster-resource-placement-controller Successfully synchronized the placement - Normal PlacementRolloutCompleted 3m28s (x7 over 3d22h) cluster-resource-placement-controller Resources have been applied to the selected clusters -``` --## Placement changes --The Fleet scheduler prioritizes the stability of existing workload placements. This prioritization can limit the number of changes that cause a workload to be removed and rescheduled. The following scenarios can trigger placement changes: --* Placement policy changes in the `ClusterResourcePlacement` object can trigger removal and rescheduling of a workload. - * Scale out operations (increasing `numberOfClusters` with no other changes) place workloads only on new clusters and don't affect existing placements. -* Cluster changes, including: - * A new cluster becoming eligible might trigger placement if it meets the placement policy, for example, a `PickAll` policy. - * When a cluster with a placement is removed from the fleet, Fleet attempts to replace all affected workloads without affecting their other placements. --Resource-only changes (updating the resources or updating the `ResourceSelector` in the `ClusterResourcePlacement` object) roll out gradually in existing placements but do **not** trigger rescheduling of the workload. --## Tolerations --`ClusterResourcePlacement` objects support the specification of tolerations, which apply to the `ClusterResourcePlacement` object. Each toleration object consists of the following fields: --* `key`: The key of the toleration. -* `value`: The value of the toleration. -* `effect`: The effect of the toleration, such as `NoSchedule`. -* `operator`: The operator of the toleration, such as `Exists` or `Equal`. --Each toleration is used to tolerate one or more specific taints applied on a `MemberCluster`. Once all taints on a [`MemberCluster`](./concepts-fleet.md#what-are-member-clusters) are tolerated, the scheduler can then propagate resources to the cluster. You can't update or remove tolerations from a `ClusterResourcePlacement` object once it's created. A minimal example appears at the end of this article. --For more information, see [the upstream Fleet documentation](https://github.com/Azure/fleet/blob/main/docs/concepts/ClusterResourcePlacement/README.md#tolerations). --## Access the Kubernetes API of the Fleet resource cluster --If you created an Azure Kubernetes Fleet Manager resource with the hub cluster enabled, you can use it to centrally control scenarios like Kubernetes object propagation. To access the Kubernetes API of the Fleet resource cluster, follow the steps in [Access the Kubernetes API of the Fleet resource cluster with Azure Kubernetes Fleet Manager](./quickstart-access-fleet-kubernetes-api.md). --## Next steps --[Set up Kubernetes resource propagation from hub cluster to member clusters](./quickstart-resource-propagation.md). 
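To make the tolerations description above concrete, the following is a minimal sketch of a `ClusterResourcePlacement` that tolerates a hypothetical `env=test:NoSchedule` taint on a `MemberCluster`. The taint key and value are illustrative assumptions, not values taken from this article:

```yaml
apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ClusterResourcePlacement
metadata:
  name: crp-tolerate-test
spec:
  resourceSelectors:
    - group: ""
      kind: Namespace
      name: test-deployment
      version: v1
  policy:
    placementType: PickAll
    # Tolerate the assumed env=test:NoSchedule taint so tainted member
    # clusters remain eligible placement targets.
    tolerations:
      - key: env
        operator: Equal
        value: test
        effect: NoSchedule
```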
--<!-- LINKS - external --> -[fleet-github]: https://github.com/Azure/fleet -[membercluster-api]: https://github.com/Azure/fleet/blob/main/docs/api-references.md#membercluster -[clusterresourceplacement-api]: https://github.com/Azure/fleet/blob/main/docs/api-references.md#clusterresourceplacement -[envelope-object]: https://github.com/Azure/fleet/blob/main/docs/concepts/ClusterResourcePlacement/README.md#envelope-object -[crp-topo]: https://github.com/Azure/fleet/blob/main/docs/howtos/topology-spread-constraints.md -[fleet-rollout]: https://github.com/Azure/fleet/blob/main/docs/howtos/crp.md#rollout-strategy |
kubernetes-fleet | Concepts Scheduler Scheduling Framework | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/concepts-scheduler-scheduling-framework.md | - Title: "Azure Kubernetes Fleet Manager scheduler and scheduling framework" -description: This article provides a conceptual overview of the Azure Kubernetes Fleet Manager scheduler and scheduling framework. Previously updated : 04/01/2024-------# Azure Kubernetes Fleet Manager scheduler and scheduling framework --This article provides a conceptual overview of the scheduler and scheduling framework in Azure Kubernetes Fleet Manager (Fleet). --## What is the scheduler? --The scheduler is a core component in the fleet workload with the primary responsibility of determining scheduling decisions for a bundle of resources based on the latest `ClusterSchedulingPolicySnapshot` generated by the [`ClusterResourcePlacement`](./concepts-resource-propagation.md). --By default, the scheduler operates in *batch mode*, which enhances performance. In this mode, it binds a `ClusterResourceBinding` from a `ClusterResourcePlacement` to multiple clusters whenever possible. --### Batch mode --Scheduling resources within a `ClusterResourcePlacement` involves more dependencies compared to scheduling pods within a Kubernetes Deployment. There are two notable distinctions: --* In a `ClusterResourcePlacement`, multiple replicas of resources can't be scheduled on the same cluster. -* The `ClusterResourcePlacement` supports different placement types within a single object. --For more information, see [the upstream Fleet Scheduler documentation](https://github.com/Azure/fleet/blob/main/docs/concepts/Scheduler/README.md). --## What is the scheduling framework? --The fleet scheduling framework closely aligns with the native [Kubernetes scheduling framework](https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/), incorporating several modifications and tailored functionalities to support the fleet workload. ---The primary advantage of this framework is its capability to compile plugins directly into the scheduler. Its API facilitates the implementation of diverse scheduling features as plugins, ensuring a lightweight and maintainable core. --The fleet scheduler integrates the following fundamental built-in plugins: --* **Topology spread plugin**: Supports the `TopologySpreadConstraints` in the placement policy. -* **Cluster affinity plugin**: Facilitates the affinity clause in the placement policy. -* **Same placement affinity plugin**: Designed specifically for fleet and prevents multiple replicas from being placed within the same cluster. -* **Cluster eligibility plugin**: Enables cluster selection based on specific status criteria. -* **Taint & toleration plugin**: Enables cluster selection based on [taints on the cluster](./concepts-fleet.md#taints) and [tolerations on the `ClusterResourcePlacement`](./concepts-resource-propagation.md#tolerations). --For more information, see the [upstream Fleet Scheduling Framework documentation](https://github.com/Azure/fleet/blob/main/docs/concepts/Scheduling-Framework/README.md). --## Next steps --* [Create a fleet and join member clusters](./quickstart-create-fleet-and-members.md). |
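If you want to observe the scheduler at work on a hub cluster, you can list its inputs and outputs. The resource names below are assumptions based on the upstream Fleet API (`ClusterSchedulingPolicySnapshot` and `ClusterResourceBinding`), not commands taken from this article:

```bash
# Policy snapshots are the scheduler's input; bindings are its output.
kubectl get clusterschedulingpolicysnapshots
kubectl get clusterresourcebindings
```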
kubernetes-fleet | Concepts Update Orchestration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/concepts-update-orchestration.md | - Title: "Update orchestration across multiple member clusters" -description: This article describes the concept of update orchestration across multiple clusters. Previously updated : 03/04/2024----- - build-2024 ----# Update orchestration across multiple member clusters --Platform admins managing a large number of clusters often have problems with staging the updates of multiple clusters (for example, upgrading node OS image versions, upgrading Kubernetes versions) in a safe and predictable way. To address this pain point, Azure Kubernetes Fleet Manager (Fleet) allows you to orchestrate updates across multiple clusters using update runs, stages, groups, and strategies. ---* **Update run**: An update run represents an update being applied to a collection of AKS clusters, consisting of the update goal and sequence. The update goal describes the desired updates (for example, upgrading to Kubernetes version 1.28.3). The update sequence describes the exact order to apply the updates to multiple member clusters, expressed using stages and groups. If unspecified, all the member clusters are updated one by one sequentially. An update run can be stopped and started. -* **Update stage**: Update runs are divided into stages, which are applied sequentially. For example, a first update stage might update test environment member clusters, and a second update stage would then later update production environment member clusters. A wait time can be specified to delay between the application of subsequent update stages. -* **Update group**: Each update stage contains one or more update groups, which are used to select the member clusters to be updated. Update groups are also used to order the application of updates to member clusters. Within an update stage, updates are applied to all the different update groups in parallel; within an update group, member clusters update sequentially. Each member cluster of the fleet can only be a part of one update group. -* **Update strategy**: An update strategy describes the update sequence with stages and groups. You can reuse a strategy in your update runs instead of defining the sequence repeatedly in each run. --Currently, the supported update operations on the cluster are upgrades. There are three types of upgrades you can choose from: --- Upgrade Kubernetes versions for the Kubernetes control plane and the nodes (which includes upgrading the node images).-- Upgrade Kubernetes versions for only the control planes of the clusters.-- Upgrade only the node images.--You can specify the target Kubernetes version to upgrade to, but you can't specify the exact target node image versions as the latest available node image versions may vary depending on the region of the cluster (see the [release tracker](/azure/aks/release-tracker) for more information). -The target node image versions are automatically selected for you based on your preferences: --- `Latest`: Use the latest node images available in the region of a cluster when the upgrade of the cluster starts. 
As a result, different image versions could be used depending on which region a cluster is in and when its upgrade actually starts.-- `Consistent`: When the update run starts, it picks the **latest common** image versions across the regions of the member clusters in this run, such that the same, consistent image versions are used across clusters.--You should choose `Latest` to use fresher image versions and minimize security risks, and choose `Consistent` to improve reliability by using and verifying those images in clusters in earlier stages before using them in later clusters. --## Planned maintenance --Update runs honor [planned maintenance windows](/azure/aks/planned-maintenance) that you set at the Azure Kubernetes Service (AKS) cluster level. --Within an update run (for both [One by one](./update-orchestration.md#update-all-clusters-one-by-one) and [Stages](./update-orchestration.md#update-clusters-in-a-specific-order) type update runs), the update run prioritizes upgrading the clusters in the following order: - 1. Member with an open ongoing maintenance window. - 1. Member with maintenance window opening in the next four hours. - 1. Member with no maintenance window. - 1. Member with a closed maintenance window. --## Update run states --An update run can be in one of the following states: --- **NotStarted**: State of the update run before it's started.-- **Running**: Upgrade is in progress for at least one of the clusters in the update run.-- **Pending**: - - **Member cluster**: A member cluster can be in the pending state for any of the following reasons, which are surfaced under the message field. - - Maintenance window isn't open. Message indicates next opening time. - - Target Kubernetes version isn't yet available in the region of the member. Message links to the release tracker so that you can check status of the release across regions. - - Target node image version isn't yet available in the region of the member. Message links to the release tracker so that you can check status of the release across regions. - - **Group**: A group is in `Pending` state if all members in the group are in `Pending` state or not started. When a member moves to `Pending`, the update run will attempt to upgrade the next member in the group. If all members are in `Pending` status, the group moves to `Pending` state. All groups must be in a terminal state before moving to the next stage. That is, if a group is in `Pending` state, the update run waits for it to complete before moving on to the next stage for execution. - - **Stage**: A stage is in `Pending` if all groups under that stage are in `Pending` state or not started. - - **Run**: A run is in `Pending` state if the current stage that should be running is in `Pending` state. -- **Skipped**: All levels of an update run can be skipped, and this can either be system-detected or user-initiated.- - **Member**: - - You skipped the upgrade for a member or one of its parents. - - Member cluster is already at the target Kubernetes version (if update run mode is `Full` or `ControlPlaneOnly`). - - Member cluster is already at the target Kubernetes version and all node pools are at the target node image version. - - When a consistent node image is chosen for an update run, if it's not possible to find the target image version for one of the node pools, then upgrade is skipped for that cluster. An example situation for this is when a new node pool with a new VM SKU is added after an update run has started. 
- - **Group**: - - All member clusters were detected as `Skipped` by the system. - - You initiated a skip at the group level. - - **Stage**: - - All groups in the stage were detected as `Skipped` by the system. - - You initiated a skip at the stage level. - - **Run**: - - All stages were detected as `Skipped` by the system. --- **Stopped**: All levels of an update run can be stopped. There are two possibilities for entering a stopped state:- - You stop the update run, at which point the update run stops tracking all operations. If an operation was already initiated by the update run (for example, a cluster upgrade is in progress), then that operation isn't aborted for that individual cluster. - - If a failure is encountered during the update run (for example, an upgrade failed on one of the clusters), the entire update run enters a stopped state and operations aren't attempted for any subsequent cluster in the update run. --- **Failed**: A failure to upgrade a cluster results in the following actions:- - Marks the `MemberUpdateStatus` as `Failed` on the member cluster. - - Marks all parents (group -> stage -> run) as `Failed` with a summary error message. - - Stops the update run from progressing any further. --## Next steps --* [Orchestrate updates across multiple member clusters](./update-orchestration.md). |
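Although this is a concepts article, a minimal sketch of starting an update run with the Azure CLI may help tie the pieces together. The resource names and target version are placeholders, and the flags shown are assumptions based on the `az fleet updaterun` command group:

```azurecli-interactive
# Create an update run that upgrades control planes and nodes to the target
# version, using the latest node image available in each cluster's region.
az fleet updaterun create \
    --resource-group $GROUP \
    --fleet-name $FLEET \
    --name run-1 \
    --upgrade-type Full \
    --kubernetes-version 1.28.3 \
    --node-image-selection Latest

# Start the run. Clusters are upgraded one by one unless stages are defined.
az fleet updaterun start --resource-group $GROUP --fleet-name $FLEET --name run-1
```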
kubernetes-fleet | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/faq.md | - Title: "Frequently asked questions - Azure Kubernetes Fleet Manager" -description: This article covers the frequently asked questions for Azure Kubernetes Fleet Manager Previously updated : 10/03/2022-------# Frequently Asked Questions - Azure Kubernetes Fleet Manager --This article covers the frequently asked questions for Azure Kubernetes Fleet Manager. --## Relationship to Azure Kubernetes Service clusters --Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance. Since the Kubernetes control plane is managed by Azure, you only manage and maintain the agent nodes. You run your actual workloads on the AKS clusters. --Azure Kubernetes Fleet Manager (Fleet) helps you address at-scale and multi-cluster scenarios for Azure Kubernetes Service clusters. Azure Kubernetes Fleet Manager provides a group representation for your AKS clusters and helps users with orchestrating cluster updates, Kubernetes resource propagation, and multi-cluster load balancing. User workloads can't be run on the fleet cluster itself. --## Creation of AKS clusters from fleet resource --Today, Azure Kubernetes Fleet Manager supports joining existing AKS clusters as fleet members. Creation and lifecycle management of new AKS clusters from the fleet cluster is in the [roadmap](https://aka.ms/fleet/roadmap). --## Number of clusters --Fleets support joining up to 100 AKS clusters, regardless of whether they have a hub cluster or not. --## AKS clusters that can be joined as members --Fleet supports joining the following types of AKS clusters as member clusters: --* AKS clusters across the same or different resource groups within the same subscription -* AKS clusters across different subscriptions of the same Microsoft Entra tenant -* AKS clusters from different regions but within the same tenant --## Relationship to Azure Arc-enabled Kubernetes --Today, Azure Kubernetes Fleet Manager supports joining AKS clusters as member clusters. Support for joining Azure Arc-enabled Kubernetes clusters as member clusters is in the [roadmap](https://aka.ms/fleet/roadmap). --## Regional or global --The Azure Kubernetes Fleet Manager resource is a regional resource. Support for region failover for disaster recovery use cases is in the [roadmap](https://aka.ms/fleet/roadmap). --## What happens when the user changes the cluster identity of a joined cluster? -Changing the identity of a member AKS cluster will break the communication between fleet and that member cluster. While the member agent will use the new identity to communicate with the fleet cluster, fleet still needs to be made aware of this new identity. To achieve this, run the following command: --```azurecli -az fleet member create \ - --resource-group ${GROUP} \ - --fleet-name ${FLEET} \ - --name ${MEMBER_NAME} \ - --member-cluster-id ${MEMBER_CLUSTER_ID} -``` --## Roadmap --The roadmap for the Azure Kubernetes Fleet Manager resource is available [here](https://aka.ms/fleet/roadmap). --## Next steps --* [Create a fleet and join member clusters](./quickstart-create-fleet-and-members.md). |
kubernetes-fleet | Intelligent Resource Placement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/intelligent-resource-placement.md | - Title: "Intelligent cross-cluster Kubernetes resource placement using Azure Kubernetes Fleet Manager (Preview)" -description: Learn how to use Kubernetes Fleet to intelligently place your workloads on target member clusters based on cost and resource availability. - Previously updated : 05/13/2024----- - build-2024 ---# Intelligent cross-cluster Kubernetes resource placement using Azure Kubernetes Fleet Manager (Preview) --Application developers often need to deploy Kubernetes resources into multiple clusters. Fleet operators often need to pick the best clusters for placing the workloads based on heuristics such as cost of compute in the clusters or available resources such as memory and CPU. It's tedious to create, update, and track these Kubernetes resources across multiple clusters manually. This article covers how Azure Kubernetes Fleet Manager (Kubernetes Fleet) allows you to address these scenarios using the intelligent Kubernetes resource placement feature. --## Overview --Kubernetes Fleet provides a resource placement capability that can make scheduling decisions based on the following properties: -- Node count-- Cost of compute in target member clusters-- Resource (CPU/Memory) availability in target member clusters---This article discusses creating cluster resource placements, which can be done via Azure CLI or the Azure portal. For more information, see [Propagate resources from a Fleet hub cluster to member clusters](./quickstart-resource-propagation.md). --## Prerequisites --* Read the [resource propagation conceptual overview](./concepts-resource-propagation.md) to understand the concepts and terminology used in this article. -* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* You need a Fleet resource with a hub cluster and member clusters. If you don't have one, see [Create an Azure Kubernetes Fleet Manager resource and join member clusters using Azure CLI](quickstart-create-fleet-and-members.md). -* You need access to the Kubernetes API of the hub cluster. If you don't have access, see [Access the Kubernetes API of the Fleet resource with Azure Kubernetes Fleet Manager](./quickstart-access-fleet-kubernetes-api.md). ---## Filter clusters at the time of scheduling based on member cluster properties --The **requiredDuringSchedulingIgnoredDuringExecution** affinity type allows for **filtering** the member clusters eligible for placement using property selectors. A property selector is an array of expression conditions against cluster properties. --In each condition, you specify: --* **Name**: Name of the property, which should be in the following format: -- ``` - resources.kubernetes-fleet.io/<CAPACITY-TYPE>-<RESOURCE-NAME> - ``` -- `<CAPACITY-TYPE>` is one of `total`, `allocatable`, or `available`, depending on which capacity (usage information) you would like to check against, and `<RESOURCE-NAME>` is the name of the resource (CPU/memory). -- For example, if you would like to select clusters based on the available CPU capacity of a cluster, the name used in the property selector should be `resources.kubernetes-fleet.io/available-cpu`. For allocatable memory capacity, you can use `resources.kubernetes-fleet.io/allocatable-memory`. --* A list of values, which are possible values of the property. 
-* An operator used to express the condition between the constraint/desired value and the observed value on the cluster. The following operators are currently supported: -- * `Gt` (Greater than): a cluster's observed value of the given property must be greater than the value in the condition before it can be picked for resource placement. - * `Ge` (Greater than or equal to): a cluster's observed value of the given property must be greater than or equal to the value in the condition before it can be picked for resource placement. - * `Lt` (Less than): a cluster's observed value of the given property must be less than the value in the condition before it can be picked for resource placement. - * `Le` (Less than or equal to): a cluster's observed value of the given property must be less than or equal to the value in the condition before it can be picked for resource placement. - * `Eq` (Equal to): a cluster's observed value of the given property must be equal to the value in the condition before it can be picked for resource placement. - * `Ne` (Not equal to): a cluster's observed value of the given property must not be equal to the value in the condition before it can be picked for resource placement. -- If you use the operator `Gt`, `Ge`, `Lt`, `Le`, `Eq`, or `Ne`, the list of values in the condition should have exactly one value. --Fleet evaluates each cluster based on the properties specified in the condition. Failure to satisfy conditions listed under `requiredDuringSchedulingIgnoredDuringExecution` excludes the member cluster from resource placement. --> [!NOTE] -> If a member cluster doesn't possess the property expressed in the condition, it automatically fails the condition. --The following example placement policy selects only clusters with five or more nodes for resource placement: --```yaml -apiVersion: placement.kubernetes-fleet.io/v1beta1 -kind: ClusterResourcePlacement -metadata: - name: crp -spec: - resourceSelectors: - - ... - policy: - placementType: PickAll - affinity: - clusterAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - clusterSelectorTerms: - - propertySelector: - matchExpressions: - - name: "kubernetes.azure.com/node-count" - operator: Ge - values: - - "5" -``` --You can use both label and property selectors under the `requiredDuringSchedulingIgnoredDuringExecution` affinity term to filter the eligible member clusters on both of these constraints. --```yaml -apiVersion: placement.kubernetes-fleet.io/v1beta1 -kind: ClusterResourcePlacement -metadata: - name: crp -spec: - resourceSelectors: - - ... - policy: - placementType: PickAll - affinity: - clusterAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - clusterSelectorTerms: - - labelSelector: - matchLabels: - region: east - propertySelector: - matchExpressions: - - name: "kubernetes.azure.com/node-count" - operator: Ge - values: - - "5" -``` --In this example, Kubernetes Fleet only considers a cluster for resource placement if it has the `region=east` label and a node count greater than or equal to five. --## Rank order clusters at the time of scheduling based on member cluster properties --When `preferredDuringSchedulingIgnoredDuringExecution` is used, a property sorter ranks all the clusters in the fleet based on their values in ascending or descending order. The weights are calculated based on the weight value specified under `preferredDuringSchedulingIgnoredDuringExecution`. 
--A property sorter consists of: --* **Name**: The name of the property, in the format covered in the previous section. -* **Sort order**: Sort order can be either `Ascending` or `Descending`. When `Ascending` order is used, Kubernetes Fleet prefers member clusters with lower observed values. When `Descending` order is used, member clusters with higher observed values are preferred. --For sort order `Descending`, the proportional weight is calculated using the formula: --``` -((Observed Value - Minimum observed value) / (Maximum observed value - Minimum observed value)) * Weight -``` --For example, let's say you want to rank clusters based on the property of available CPU capacity in descending order and that you have a fleet of three clusters with the following available CPU: --| Cluster | Available CPU capacity | -| -- | - | -| `bravelion` | 100 | -| `smartfish` | 20 | -| `jumpingcat` | 10 | --In this case, the sorter computes the following weights: --| Cluster | Available CPU capacity | Weight | -| -- | - | - | -| `bravelion` | 100 | (100 - 10) / (100 - 10) = 100% of the weight | -| `smartfish` | 20 | (20 - 10) / (100 - 10) = 11.11% of the weight | -| `jumpingcat` | 10 | (10 - 10) / (100 - 10) = 0% of the weight | ---For sort order `Ascending`, the proportional weight is calculated using the formula: --``` -(1 - ((Observed Value - Minimum observed value) / (Maximum observed value - Minimum observed value))) * Weight -``` --For example, let's say you want to rank clusters based on their per-CPU-core-cost in ascending order and that you have a fleet of three clusters with the following CPU core costs: --| Cluster | Per-CPU core cost | -| -- | - | -| `bravelion` | 1 | -| `smartfish` | 0.2 | -| `jumpingcat` | 0.1 | --In this case, the sorter computes the following weights: --| Cluster | Per-CPU core cost | Weight | -| -- | - | - | -| `bravelion` | 1 | 1 - ((1 - 0.1) / (1 - 0.1)) = 0% of the weight | -| `smartfish` | 0.2 | 1 - ((0.2 - 0.1) / (1 - 0.1)) = 88.89% of the weight | -| `jumpingcat` | 0.1 | 1 - ((0.1 - 0.1) / (1 - 0.1)) = 100% of the weight | --The example below showcases a property sorter using the `Descending` order: --```yaml -apiVersion: placement.kubernetes-fleet.io/v1beta1 -kind: ClusterResourcePlacement -metadata: - name: crp -spec: - resourceSelectors: - - ... - policy: - placementType: PickN - numberOfClusters: 10 - affinity: - clusterAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 20 - preference: - metricSorter: - name: kubernetes.azure.com/node-count - sortOrder: Descending -``` --In this example, Fleet prefers clusters with higher node counts. The cluster with the highest node count would receive a weight of 20, and the cluster with the lowest would receive 0. Other clusters receive proportional weights calculated using the weight calculation formula. --You can use both a label selector and a property sorter under the `preferredDuringSchedulingIgnoredDuringExecution` affinity. A member cluster that fails the label selector won't receive any weight. Member clusters that satisfy the label selector receive proportional weights as specified under the property sorter. --```yaml -apiVersion: placement.kubernetes-fleet.io/v1beta1 -kind: ClusterResourcePlacement -metadata: - name: crp -spec: - resourceSelectors: - - ... 
- policy: - placementType: PickN - numberOfClusters: 10 - affinity: - clusterAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 20 - preference: - labelSelector: - matchLabels: - env: prod - metricSorter: - name: resources.kubernetes-fleet.io/total-cpu - sortOrder: Descending -``` --In this example, a cluster would only receive extra weight if it has the label `env=prod`. If it satisfies that label-based constraint, then the cluster is given proportional weight based on the amount of total CPU in that member cluster. --## Clean up resources --### [Azure CLI](#tab/azure-cli) --If you no longer wish to use the `ClusterResourcePlacement` object, you can delete it using the `kubectl delete` command. The following example deletes the `ClusterResourcePlacement` object named `crp`: --```bash -kubectl delete clusterresourceplacement crp -``` --### [Portal](#tab/azure-portal) --If you no longer wish to use your cluster resource placement, you can delete it from the Azure portal: --1. On the Azure portal overview page for your Fleet resource, in the **Fleet Resources** section, select **Resource placements**. --1. Select the cluster resource placement objects you want to delete, then select **Delete**. --1. In the **Delete** tab, verify the correct objects are chosen. Once you're ready, select **Confirm delete** and **Delete**. ----## Next steps --To learn more about resource propagation, see the following resources: --* [Intelligent cross-cluster Kubernetes resource placement based on member cluster properties](./intelligent-resource-placement.md) -* [Upstream Fleet documentation](https://github.com/Azure/fleet/blob/main/docs/concepts/ClusterResourcePlacement/README.md) |
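The article only shows a `Descending` sorter; for completeness, here's a hedged sketch of an `Ascending` sorter that prefers cheaper clusters. The cost property name follows the naming pattern described earlier but is an assumption, not a value taken from this article:

```yaml
apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ClusterResourcePlacement
metadata:
  name: crp-cheapest
spec:
  resourceSelectors:
    - group: ""
      kind: Namespace
      name: test-deployment
      version: v1
  policy:
    placementType: PickN
    numberOfClusters: 3
    affinity:
      clusterAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 20
          preference:
            metricSorter:
              # Assumed property name for per-CPU-core cost.
              name: kubernetes.azure.com/per-cpu-core-cost
              sortOrder: Ascending
```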
kubernetes-fleet | L4 Load Balancing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/l4-load-balancing.md | - Title: "How to set up multi-cluster Layer 4 load balancing across Azure Kubernetes Fleet Manager member clusters (preview)" -description: Learn how to use Azure Kubernetes Fleet Manager to set up multi-cluster Layer 4 load balancing across workloads deployed on multiple member clusters. - Previously updated : 03/20/2024----- - devx-track-azurecli ---# Set up multi-cluster layer 4 load balancing across Azure Kubernetes Fleet Manager member clusters (preview) --For applications deployed across multiple clusters, admins often want to route incoming traffic to them across clusters. --You can follow this document to set up layer 4 load balancing for such multi-cluster applications. ---## Prerequisites ---* Read the [conceptual overview of this feature](./concepts-l4-load-balancing.md), which provides an explanation of `ServiceExport` and `MultiClusterService` objects referenced in this document. --* You must have a Fleet resource with a hub cluster and member clusters. If you don't have this resource, follow [Quickstart: Create a Fleet resource and join member clusters](quickstart-create-fleet-and-members.md). --* The target Azure Kubernetes Service (AKS) clusters on which the workloads are deployed need to be present on either the same [virtual network](../virtual-network/virtual-networks-overview.md) or on [peered virtual networks](../virtual-network/virtual-network-peering-overview.md). -- * These target clusters have to be [added as member clusters to the Fleet resource](./quickstart-create-fleet-and-members.md#join-member-clusters). - * These target clusters should be using [Azure CNI (Container Networking Interface) networking](/azure/aks/configure-azure-cni). --* You must gain access to the Kubernetes API of the hub cluster by following the steps in [Access the Kubernetes API of the Fleet resource](./quickstart-access-fleet-kubernetes-api.md). --* Set the following environment variables and obtain the kubeconfigs for the fleet and all member clusters: -- ```bash - export GROUP=<resource-group> - export FLEET=<fleet-name> - export MEMBER_CLUSTER_1=aks-member-1 - export MEMBER_CLUSTER_2=aks-member-2 -- az fleet get-credentials --resource-group ${GROUP} --name ${FLEET} --file fleet -- az aks get-credentials --resource-group ${GROUP} --name ${MEMBER_CLUSTER_1} --file aks-member-1 - az aks get-credentials --resource-group ${GROUP} --name ${MEMBER_CLUSTER_2} --file aks-member-2 - ``` ---## Deploy a workload across member clusters of the Fleet resource --> [!NOTE] -> -> * The steps in this how-to guide refer to a sample application for demonstration purposes only. You can substitute this workload with any of your own existing Deployment and Service objects. -> -> * These steps deploy the sample workload from the Fleet cluster to member clusters using Kubernetes configuration propagation. Alternatively, you can choose to deploy these Kubernetes configurations to each member cluster separately, one at a time. --1. Create a namespace on the fleet cluster: -- ```bash - KUBECONFIG=fleet kubectl create namespace kuard-demo - ``` -- Output looks similar to the following example: -- ```console - namespace/kuard-demo created - ``` --1. Apply the Deployment, Service, and ServiceExport objects: -- ```bash - KUBECONFIG=fleet kubectl apply -f https://raw.githubusercontent.com/Azure/AKS/master/examples/fleet/kuard/kuard-export-service.yaml - ``` -- The `ServiceExport` specification in the above file allows you to export a service from member clusters to the Fleet resource. 
Once successfully exported, the service and all its endpoints are synced to the fleet cluster and can then be used to set up multi-cluster load balancing across these endpoints. The output looks similar to the following example: -- ```console - deployment.apps/kuard created - service/kuard created - serviceexport.networking.fleet.azure.com/kuard created - ``` --1. Create the following `ClusterResourcePlacement` in a file called `crp-2.yaml`. Notice we're selecting clusters in the `eastus` region: -- ```yaml - apiVersion: placement.kubernetes-fleet.io/v1beta1 - kind: ClusterResourcePlacement - metadata: - name: kuard-demo - spec: - resourceSelectors: - - group: "" - version: v1 - kind: Namespace - name: kuard-demo - policy: - affinity: - clusterAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - clusterSelectorTerms: - - labelSelector: - matchLabels: - fleet.azure.com/location: eastus - ``` --1. Apply the `ClusterResourcePlacement`: -- ```bash - KUBECONFIG=fleet kubectl apply -f crp-2.yaml - ``` -- If successful, the output looks similar to the following example: -- ```console - clusterresourceplacement.placement.kubernetes-fleet.io/kuard-demo created - ``` --1. Check the status of the `ClusterResourcePlacement`: --- ```bash - KUBECONFIG=fleet kubectl get clusterresourceplacements - ``` -- If successful, the output looks similar to the following example: -- ```console - NAME GEN SCHEDULED SCHEDULEDGEN APPLIED APPLIEDGEN AGE - kuard-demo 1 True 1 True 1 20s - ``` --## Create a MultiClusterService to load balance across the service endpoints in multiple member clusters --1. Check whether the service is successfully exported for the member clusters in the `eastus` region: -- ```bash - KUBECONFIG=aks-member-1 kubectl get serviceexport kuard --namespace kuard-demo - ``` -- Output looks similar to the following example: -- ```console - NAME IS-VALID IS-CONFLICTED AGE - kuard True False 25s - ``` -- ```bash - KUBECONFIG=aks-member-2 kubectl get serviceexport kuard --namespace kuard-demo - ``` -- Output looks similar to the following example: -- ```console - NAME IS-VALID IS-CONFLICTED AGE - kuard True False 55s - ``` -- You should see that the service is valid for export (`IS-VALID` field is `true`) and has no conflicts with other exports (`IS-CONFLICTED` is `false`). -- > [!NOTE] - > It may take a minute or two for the ServiceExport to be propagated. --1. Create a `MultiClusterService` on one member cluster to load balance across the service endpoints in these clusters: -- ```bash - KUBECONFIG=aks-member-1 kubectl apply -f https://raw.githubusercontent.com/Azure/AKS/master/examples/fleet/kuard/kuard-mcs.yaml - ``` -- > [!NOTE] - > To expose the service via an internal IP instead of a public one, add the following annotation to the MultiClusterService: - > - > ```yaml - > apiVersion: networking.fleet.azure.com/v1alpha1 - > kind: MultiClusterService - > metadata: - > name: kuard - > namespace: kuard-demo - > annotations: - > service.beta.kubernetes.io/azure-load-balancer-internal: "true" - > ... - > ``` -- Output looks similar to the following example: -- ```console - multiclusterservice.networking.fleet.azure.com/kuard created - ``` --1. 
Verify that the `MultiClusterService` is valid by running the following command: -- ```bash - KUBECONFIG=aks-member-1 kubectl get multiclusterservice kuard --namespace kuard-demo - ``` -- The output should look similar to the following example: -- ```console - NAME SERVICE-IMPORT EXTERNAL-IP IS-VALID AGE - kuard kuard <a.b.c.d> True 40s - ``` -- The `IS-VALID` field should be `true` in the output. Check out the external load balancer IP address (`EXTERNAL-IP`) in the output. It may take a while before the import is fully processed and the IP address becomes available. --1. Run the following command multiple times using the external load balancer IP address: -- ```bash - curl <a.b.c.d>:8080 | grep addrs - ``` -- Notice that the IPs of the pods serving the requests change and that these pods are from member clusters `aks-member-1` and `aks-member-2` in the `eastus` region. You can verify the pod IPs by running the following commands on the clusters in the `eastus` region: -- ```bash - KUBECONFIG=aks-member-1 kubectl get pods -n kuard-demo -o wide - ``` -- ```bash - KUBECONFIG=aks-member-2 kubectl get pods -n kuard-demo -o wide - ``` |
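When you're done with the sample, you can tear it down with the same kubeconfig files used above; the resource names match the kuard example in this article:

```bash
# Remove the multi-cluster service from the member cluster, then remove the
# placement and the namespace from the hub cluster.
KUBECONFIG=aks-member-1 kubectl delete multiclusterservice kuard --namespace kuard-demo
KUBECONFIG=fleet kubectl delete clusterresourceplacement kuard-demo
KUBECONFIG=fleet kubectl delete namespace kuard-demo
```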
kubernetes-fleet | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/overview.md | - Title: "Overview of Azure Kubernetes Fleet Manager"--- - - ignite-2023 Previously updated : 11/06/2023----description: "This article provides an overview of Azure Kubernetes Fleet Manager." -keywords: "Kubernetes, Azure, multi-cluster, multi, containers" ---# What is Azure Kubernetes Fleet Manager? --Azure Kubernetes Fleet Manager (Fleet) enables at-scale management of multiple Azure Kubernetes Service (AKS) clusters. Fleet supports the following scenarios: --* Create a Fleet resource and join AKS clusters across regions and subscriptions as member clusters. --* Orchestrate Kubernetes version upgrades and node image upgrades across multiple clusters by using update runs, stages, and groups. --* Create Kubernetes resource objects on the Fleet resource's hub cluster and control their propagation to member clusters. --* Export and import services between member clusters, and load balance incoming layer-4 traffic across service endpoints on multiple clusters (preview). --## Next steps --* [Conceptual overview of Fleets and member clusters](./concepts-fleet.md). -* [Conceptual overview of Update orchestration across multiple member clusters](./concepts-update-orchestration.md). -* [Conceptual overview of Kubernetes resource propagation from hub cluster to member clusters](./concepts-resource-propagation.md). -* [Conceptual overview of Multi-cluster layer-4 load balancing](./concepts-l4-load-balancing.md). -* [Create a fleet and join member clusters](./quickstart-create-fleet-and-members.md). |
kubernetes-fleet | Quickstart Access Fleet Kubernetes Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-access-fleet-kubernetes-api.md | - Title: "Quickstart: Access the Kubernetes API of the Fleet resource" -description: Learn how to access the Kubernetes API of the Fleet resource with Azure Kubernetes Fleet Manager. - Previously updated : 04/01/2024------# Quickstart: Access the Kubernetes API of the Fleet resource --If your Azure Kubernetes Fleet Manager resource was created with the hub cluster enabled, then it can be used to centrally control scenarios like Kubernetes resource propagation. In this article, you learn how to access the Kubernetes API of the hub cluster managed by the Fleet resource. --## Prerequisites ---* You need a Fleet resource with a hub cluster and member clusters. If you don't have one, see [Create an Azure Kubernetes Fleet Manager resource and join member clusters using Azure CLI](quickstart-create-fleet-and-members.md). -* The identity (user or service principal) you're using needs to have the Microsoft.ContainerService/fleets/listCredentials/action permission on the Fleet resource. --## Access the Kubernetes API of the Fleet resource --1. Set the following environment variables for your subscription ID, resource group, and Fleet resource: -- ```azurecli-interactive - export SUBSCRIPTION_ID=<subscription-id> - export GROUP=<resource-group-name> - export FLEET=<fleet-name> - ``` --2. Set the default Azure subscription by using the [`az account set`][az-account-set] command. -- ```azurecli-interactive - az account set --subscription ${SUBSCRIPTION_ID} - ``` --3. Get the kubeconfig file of the hub cluster Fleet resource using the [`az fleet get-credentials`][az-fleet-get-credentials] command. -- ```azurecli-interactive - az fleet get-credentials --resource-group ${GROUP} --name ${FLEET} - ``` -- Your output should look similar to the following example output: -- ```output - Merged "hub" as current context in /home/fleet/.kube/config - ``` --4. Set the following environment variable for the `id` of the hub cluster Fleet resource: -- ```azurecli-interactive - export FLEET_ID=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/fleets/${FLEET} - ``` --5. 
Authorize your identity to the hub cluster Fleet resource's Kubernetes API server using the following commands: -- For the `ROLE` environment variable, you can use one of the following four built-in role definitions as the value: -- * Azure Kubernetes Fleet Manager RBAC Reader - * Azure Kubernetes Fleet Manager RBAC Writer - * Azure Kubernetes Fleet Manager RBAC Admin - * Azure Kubernetes Fleet Manager RBAC Cluster Admin -- ```azurecli-interactive - export IDENTITY=$(az ad signed-in-user show --query "id" --output tsv) - export ROLE="Azure Kubernetes Fleet Manager RBAC Cluster Admin" - az role assignment create --role "${ROLE}" --assignee ${IDENTITY} --scope ${FLEET_ID} - ``` -- Your output should look similar to the following example output: -- ```output - { - "canDelegate": null, - "condition": null, - "conditionVersion": null, - "description": null, - "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/fleets/<FLEET>/providers/Microsoft.Authorization/roleAssignments/<assignment>", - "name": "<name>", - "principalId": "<id>", - "principalType": "User", - "resourceGroup": "<GROUP>", - "roleDefinitionId": "/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Authorization/roleDefinitions/18ab4d3d-a1bf-4477-8ad9-8359bc988f69", - "scope": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/fleets/<FLEET>", - "type": "Microsoft.Authorization/roleAssignments" - } - ``` --6. Verify you can access the API server using the `kubectl get memberclusters` command. -- ```bash - kubectl get memberclusters - ``` -- If successful, your output should look similar to the following example output: -- ```output - NAME JOINED AGE - aks-member-1 True 2m - aks-member-2 True 2m - aks-member-3 True 2m - ``` --## Next steps --* [Propagate resources from a Fleet hub cluster to member clusters](./quickstart-resource-propagation.md). --<!-- LINKS > -[fleet-apispec]: https://github.com/Azure/fleet/blob/main/docs/api-references.md -[troubleshooting-guide]: https://github.com/Azure/fleet/blob/main/docs/troubleshooting/README.md -[az-fleet-get-credentials]: /cli/azure/fleet#az-fleet-get-credentials -[az-account-set]: /cli/azure/account#az-account-set |
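If you'd rather not grant access across all hub cluster namespaces, the same assignment can be scoped to a single namespace by appending it to the scope. A minimal sketch using the `default` namespace as an example:

```azurecli-interactive
az role assignment create \
    --role "Azure Kubernetes Fleet Manager RBAC Reader" \
    --assignee ${IDENTITY} \
    --scope "${FLEET_ID}/namespaces/default"
```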
kubernetes-fleet | Quickstart Create Fleet And Members Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-create-fleet-and-members-portal.md | - Title: "Quickstart: Create an Azure Kubernetes Fleet Manager resource and join member clusters using Azure portal" -description: In this quickstart, you learn how to create an Azure Kubernetes Fleet Manager resource and join member clusters using Azure portal. Previously updated : 03/20/2024--------# Quickstart: Create an Azure Kubernetes Fleet Manager resource and join member clusters using Azure portal --Get started with Azure Kubernetes Fleet Manager (Fleet) by using the Azure portal to create a Fleet resource and later connect Azure Kubernetes Service (AKS) clusters as member clusters. --## Prerequisites ---* Read the [conceptual overview of this feature](./concepts-fleet.md), which provides an explanation of fleets and member clusters referenced in this document. -* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* An identity (user or service principal) with the following permissions on the Fleet and AKS resource types for completing the steps listed in this quickstart: -- * Microsoft.ContainerService/fleets/read - * Microsoft.ContainerService/fleets/write - * Microsoft.ContainerService/fleets/members/read - * Microsoft.ContainerService/fleets/members/write - * Microsoft.ContainerService/fleetMemberships/read - * Microsoft.ContainerService/fleetMemberships/write - * Microsoft.ContainerService/managedClusters/read - * Microsoft.ContainerService/managedClusters/write - * Microsoft.ContainerService/managedClusters/listClusterUserCredential/action --* The AKS clusters that you want to join as member clusters to the Fleet resource need to be within the supported versions of AKS. Learn more about AKS version support policy [here](/azure/aks/supported-kubernetes-versions#kubernetes-version-support-policy). --## Create a Fleet resource --1. Sign in to the [Azure portal](https://portal.azure.com/). -2. On the Azure portal home page, select **Create a resource**. -3. In the search box, enter **Kubernetes Fleet Manager** and select **Create > Kubernetes Fleet Manager** from the search results. -4. On the **Basics** tab, configure the following options: -- * Under **Project details**: - * **Subscription**: Select the Azure subscription that you want to use. - * **Resource group**: Select an existing resource group or select **Create new** to create a new resource group. - * Under **Fleet details**: - * **Name**: Enter a unique name for the Fleet resource. - * **Region**: Select the region where you want to create the Fleet resource. - * **Hub cluster mode**: Select **Without hub cluster** if you want to use Fleet only for update orchestration. Select **With hub cluster (preview)** if you want to use Fleet for Kubernetes object propagation and multi-cluster load balancing in addition to update orchestration. -- ![Create Fleet resource](./media/quickstart-create-fleet-and-members-portal-basics.png) --5. Select **Next: Member clusters**. -6. On the **Member clusters** tab, select **Add** to add an existing AKS cluster as a member cluster to the Fleet resource. You can add multiple member clusters to the Fleet resource. -- ![Add member clusters](./media/quickstart-create-fleet-and-members-portal-members.png) --7. Select **Review + create** > **Create** to create the Fleet resource. 
-- It takes a few minutes to create the Fleet resource. When your deployment is complete, you can navigate to your resource by selecting **Go to resource**. --## Next steps --* [Access the Kubernetes API of the Fleet resource](./quickstart-access-fleet-kubernetes-api.md). |
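Although this quickstart uses the Azure portal, you can optionally confirm the deployment from a terminal. The following is a minimal sketch, assuming the Azure CLI and the **fleet** extension are installed; the angle-bracket values are placeholders for your own names:

```azurecli-interactive
# Confirm the Fleet resource provisioned successfully
az fleet show --resource-group <resource-group-name> --name <fleet-name> --output table

# List the member clusters you added in the portal
az fleet member list --resource-group <resource-group-name> --fleet-name <fleet-name> --output table
```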
kubernetes-fleet | Quickstart Create Fleet And Members | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-create-fleet-and-members.md | - Title: "Quickstart: Create an Azure Kubernetes Fleet Manager resource and join member clusters using Azure CLI" -description: In this quickstart, you learn how to create an Azure Kubernetes Fleet Manager resource and join member clusters using Azure CLI. Previously updated : 03/18/2024--------# Quickstart: Create an Azure Kubernetes Fleet Manager resource and join member clusters using Azure CLI --Get started with Azure Kubernetes Fleet Manager (Fleet) by using the Azure CLI to create a Fleet resource and later connect Azure Kubernetes Service (AKS) clusters as member clusters. --## Prerequisites ---* Read the [conceptual overview of this feature](./concepts-fleet.md), which provides an explanation of fleets and member clusters referenced in this document. -* Read the [conceptual overview of fleet types](./concepts-choosing-fleet.md), which provides a comparison of different fleet configuration options. -* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* An identity (user or service principal) that can be used to [log in to Azure CLI](/cli/azure/authenticate-azure-cli). This identity needs the following permissions on the Fleet and AKS resource types to complete the steps listed in this quickstart: -- * Microsoft.ContainerService/fleets/read - * Microsoft.ContainerService/fleets/write - * Microsoft.ContainerService/fleets/members/read - * Microsoft.ContainerService/fleets/members/write - * Microsoft.ContainerService/fleetMemberships/read - * Microsoft.ContainerService/fleetMemberships/write - * Microsoft.ContainerService/managedClusters/read - * Microsoft.ContainerService/managedClusters/write --* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version `2.53.1` or later. --* Install the **fleet** Azure CLI extension using the [`az extension add`][az-extension-add] command, and make sure your version is at least `1.0.0`. -- ```azurecli-interactive - az extension add --name fleet - ``` --* Set the following environment variables: -- ```azurecli - export SUBSCRIPTION_ID=<subscription_id> - export GROUP=<your_resource_group_name> - export FLEET=<your_fleet_name> - ``` --* Install `kubectl` and `kubelogin` using the [`az aks install-cli`][az-aks-install-cli] command. -- ```azurecli-interactive - az aks install-cli - ``` --* The AKS clusters that you want to join as member clusters to the Fleet resource need to be within the supported versions of AKS. Learn more about the AKS version support policy [here](/azure/aks/supported-kubernetes-versions#kubernetes-version-support-policy). --## Create a resource group --An [Azure resource group](../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another location during resource creation. --Set the Azure subscription and create a resource group using the [`az group create`][az-group-create] command. 
--```azurecli-interactive
-az account set -s ${SUBSCRIPTION_ID}
-az group create --name ${GROUP} --location eastus
-```
--Your output should look similar to the following example output:
--```output
-{
- "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/fleet-demo",
- "location": "eastus",
- "managedBy": null,
- "name": "fleet-demo",
- "properties": {
- "provisioningState": "Succeeded"
- },
- "tags": null,
- "type": "Microsoft.Resources/resourceGroups"
-}
-```
--## Create a Fleet resource --You can create a Fleet resource to later group your AKS clusters as member clusters. When created via Azure CLI, by default, this resource enables member cluster grouping and update orchestration. If the Fleet hub is enabled, other preview features are enabled, such as Kubernetes object propagation to member clusters and L4 service load balancing across multiple member clusters. For more information, see the [conceptual overview of fleet types](./concepts-choosing-fleet.md), which provides a comparison of different fleet configurations. --> [!IMPORTANT] -> After a Kubernetes Fleet resource has been created, you can upgrade a Kubernetes Fleet resource without a hub cluster to one with a hub cluster. For Kubernetes Fleet resources with a hub cluster, once private or public access has been selected, it can't be changed. ---### [Kubernetes Fleet resource without hub cluster](#tab/without-hub-cluster) --If you want to use Fleet only for update orchestration, which is the default experience when creating a new Fleet resource via Azure CLI, you can create a Fleet resource without the hub cluster using the [`az fleet create`][az-fleet-create] command. --```azurecli-interactive
-az fleet create --resource-group ${GROUP} --name ${FLEET} --location eastus
-```
--Your output should look similar to the following example output:
--```output
-{
- "etag": "...",
- "hubProfile": null,
- "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/fleet-demo/providers/Microsoft.ContainerService/fleets/fleet-demo",
- "identity": {
- "principalId": null,
- "tenantId": null,
- "type": "None",
- "userAssignedIdentities": null
- },
- "location": "eastus",
- "name": "fleet-demo",
- "provisioningState": "Succeeded",
- "resourceGroup": "fleet-demo",
- "systemData": {
- "createdAt": "2023-11-03T17:15:19.610149+00:00",
- "createdBy": "<user>",
- "createdByType": "User",
- "lastModifiedAt": "2023-11-03T17:15:19.610149+00:00",
- "lastModifiedBy": "<user>",
- "lastModifiedByType": "User"
- },
- "tags": null,
- "type": "Microsoft.ContainerService/fleets"
-}
-```
--### [Kubernetes Fleet resource with hub cluster](#tab/with-hub-cluster) --If you want to use Fleet for Kubernetes object propagation and multi-cluster load balancing in addition to update orchestration, then you need to create the Fleet resource with the hub cluster enabled by specifying the `--enable-hub` parameter with the [`az fleet create`][az-fleet-create] command. --Kubernetes Fleet clusters with a hub cluster support both public and private modes for network access. For more information, see [Choose an Azure Kubernetes Fleet Manager option](./concepts-choosing-fleet.md#network-access-modes-for-hub-cluster). --> [!NOTE] -> By default, Kubernetes Fleet resources with hub clusters are public, and Fleet will choose the VM SKU used for the hub node (at this time, it tries "Standard_D4s_v4", "Standard_D4s_v3", "Standard_D4s_v5", "Standard_Ds3_v2", "Standard_E4as_v4" in order). 
If none of these options are acceptable or available, you can select a VM SKU by setting `--vm-size <SKU>`. --#### Public hub cluster --To create a public Kubernetes Fleet resource with a hub cluster, use the `az fleet create` command with the `--enable-hub` flag set. --```azurecli-interactive
-az fleet create --resource-group ${GROUP} --name ${FLEET} --location eastus --enable-hub
-```
--Your output should look similar to the following example output:
--```output
-{
- "etag": "...",
- "hubProfile": null,
- "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/fleet-demo/providers/Microsoft.ContainerService/fleets/fleet-demo",
- "identity": {
- "principalId": null,
- "tenantId": null,
- "type": "None",
- "userAssignedIdentities": null
- },
- "location": "eastus",
- "name": "fleet-demo",
- "provisioningState": "Succeeded",
- "resourceGroup": "fleet-demo",
- "systemData": {
- "createdAt": "2023-11-03T17:15:19.610149+00:00",
- "createdBy": "<user>",
- "createdByType": "User",
- "lastModifiedAt": "2023-11-03T17:15:19.610149+00:00",
- "lastModifiedBy": "<user>",
- "lastModifiedByType": "User"
- },
- "tags": null,
- "type": "Microsoft.ContainerService/fleets"
-}
-```
--#### Private hub cluster --When creating a private access mode Kubernetes Fleet resource with a hub cluster, some extra considerations apply: -- Fleet requires you to provide the subnet on which the Fleet hub cluster's node VMs will be placed. You can specify this at creation time by setting `--agent-subnet-id <subnet>`. This differs from working directly with a private AKS cluster, where the subnet argument is optional; for Fleet, it's required.-- The address prefix of the vnet whose subnet is passed via `--vnet-subnet-id` must not overlap with the AKS default service range of `10.0.0.0/16`.-- When using an AKS private cluster, you can configure fully qualified domain names (FQDNs) and FQDN subdomains. This functionality doesn't apply to the private access mode type hub cluster.--First, create a virtual network and subnet for your hub cluster's node VMs using the `az network vnet create` and `az network vnet subnet create` commands. --```azurecli-interactive
-az network vnet create --resource-group ${GROUP} --name vnet --address-prefixes 192.168.0.0/16
-az network vnet subnet create --resource-group ${GROUP} --vnet-name vnet --name subnet --address-prefixes 192.168.0.0/24
--SUBNET_ID=$(az network vnet subnet show --resource-group ${GROUP} --vnet-name vnet -n subnet -o tsv --query id)
-```
--To create a private access mode Kubernetes Fleet resource, use the `az fleet create` command with the `--enable-private-cluster` flag and provide the subnet ID obtained in the previous step to the `--agent-subnet-id <subnet>` argument. --```azurecli-interactive
-az fleet create --resource-group ${GROUP} --name ${FLEET} --enable-hub --enable-private-cluster --agent-subnet-id "${SUBNET_ID}"
-```
----## Join member clusters --Fleet currently supports joining existing AKS clusters as member clusters. --1. Set the following environment variables for member clusters: -- ```azurecli-interactive - export MEMBER_NAME_1=aks-member-1 - export MEMBER_CLUSTER_ID_1=/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/managedClusters/${MEMBER_NAME_1} - ``` --2. Join your existing AKS clusters to the Fleet resource using the [`az fleet member create`][az-fleet-member-create] command. 
-- ```azurecli-interactive - # Join the first member cluster - az fleet member create --resource-group ${GROUP} --fleet-name ${FLEET} --name ${MEMBER_NAME_1} --member-cluster-id ${MEMBER_CLUSTER_ID_1} - ``` -- Your output should look similar to the following example output: -- ```output - { - "clusterResourceId": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/managedClusters/aks-member-x", - "etag": "...", - "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/fleets/<FLEET>/members/aks-member-x", - "name": "aks-member-1", - "provisioningState": "Succeeded", - "resourceGroup": "<GROUP>", - "systemData": { - "createdAt": "2022-10-04T19:04:56.455813+00:00", - "createdBy": "<user>", - "createdByType": "User", - "lastModifiedAt": "2022-10-04T19:04:56.455813+00:00", - "lastModifiedBy": "<user>", - "lastModifiedByType": "User" - }, - "type": "Microsoft.ContainerService/fleets/members" - } - ``` --3. Verify that the member clusters successfully joined the Fleet resource using the [`az fleet member list`][az-fleet-member-list] command. -- ```azurecli-interactive - az fleet member list --resource-group ${GROUP} --fleet-name ${FLEET} -o table - ``` -- If successful, your output should look similar to the following example output: -- ```output - ClusterResourceId Name ProvisioningState ResourceGroup - -- - - /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/managedClusters/aks-member-1 aks-member-1 Succeeded <GROUP> - /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/managedClusters/aks-member-2 aks-member-2 Succeeded <GROUP> - /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<GROUP>/providers/Microsoft.ContainerService/managedClusters/aks-member-3 aks-member-3 Succeeded <GROUP> - ``` --## Next steps --* [Access the Kubernetes API of the Fleet resource](./quickstart-access-fleet-kubernetes-api.md). --<!-- INTERNAL LINKS --> -[az-extension-add]: /cli/azure/extension#az-extension-add -[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli -[az-group-create]: /cli/azure/group#az-group-create -[az-fleet-create]: /cli/azure/fleet#az-fleet-create -[az-fleet-member-create]: /cli/azure/fleet/member#az-fleet-member-create -[az-fleet-member-list]: /cli/azure/fleet/member#az-fleet-member-list |
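If you have several AKS clusters to join, you can script the repeated `az fleet member create` calls. The following bash sketch assumes three hypothetical clusters named `aks-member-1` through `aks-member-3` in the same resource group; adjust the names and count to match your environment:

```azurecli-interactive
for i in 1 2 3; do
    MEMBER_NAME="aks-member-${i}"
    MEMBER_CLUSTER_ID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${GROUP}/providers/Microsoft.ContainerService/managedClusters/${MEMBER_NAME}"
    # Join each cluster as a member of the fleet
    az fleet member create --resource-group ${GROUP} --fleet-name ${FLEET} --name ${MEMBER_NAME} --member-cluster-id ${MEMBER_CLUSTER_ID}
done
```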
kubernetes-fleet | Quickstart Resource Propagation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-resource-propagation.md | - Title: "Quickstart: Propagate resources from an Azure Kubernetes Fleet Manager (Fleet) hub cluster to member clusters (Preview)" -description: In this quickstart, you learn how to propagate resources from an Azure Kubernetes Fleet Manager (Fleet) hub cluster to member clusters. Previously updated : 03/28/2024----- - build-2024 ----# Quickstart: Propagate resources from an Azure Kubernetes Fleet Manager (Fleet) hub cluster to member clusters --In this quickstart, you learn how to propagate resources from an Azure Kubernetes Fleet Manager (Fleet) hub cluster to member clusters. --## Prerequisites ---* Read the [resource propagation conceptual overview](./concepts-resource-propagation.md) to understand the concepts and terminology used in this quickstart. -* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* You need a Fleet resource with a hub cluster and member clusters. If you don't have one, see [Create an Azure Kubernetes Fleet Manager resource and join member clusters using Azure CLI](quickstart-create-fleet-and-members.md). -* Member clusters must be labeled appropriately in the hub cluster to match the desired selection criteria. Example labels include region, environment, team, availability zones, node availability, or anything else desired. -* You need access to the Kubernetes API of the hub cluster. If you don't have access, see [Access the Kubernetes API of the Fleet resource with Azure Kubernetes Fleet Manager](./quickstart-access-fleet-kubernetes-api.md). --## Use the `ClusterResourcePlacement` API to propagate resources to member clusters --The `ClusterResourcePlacement` API object, which you create in the hub cluster, propagates resources from the hub cluster to member clusters. It specifies the resources to propagate and the placement policy to use when selecting member clusters. This example demonstrates how to propagate a namespace to member clusters using the `ClusterResourcePlacement` API object with a `PickAll` placement policy. --For more information, see [Kubernetes resource propagation from hub cluster to member clusters (Preview)](./concepts-resource-propagation.md) and the [upstream Fleet documentation](https://github.com/Azure/fleet/blob/main/docs/concepts/ClusterResourcePlacement/README.md). --### [Azure CLI](#tab/azure-cli) --1. Create a namespace to place onto the member clusters using the `kubectl create namespace` command. The following example creates a namespace named `my-namespace`: -- ```bash - kubectl create namespace my-namespace - ``` --2. Create a `ClusterResourcePlacement` API object in the hub cluster to propagate the namespace to the member clusters and deploy it using the `kubectl apply -f` command. 
The following example `ClusterResourcePlacement` creates an object named `crp` and uses the `my-namespace` namespace with a `PickAll` placement policy to propagate the namespace to all member clusters: -- ```bash - kubectl apply -f - <<EOF - apiVersion: placement.kubernetes-fleet.io/v1beta1 - kind: ClusterResourcePlacement - metadata: - name: crp - spec: - resourceSelectors: - - group: "" - kind: Namespace - version: v1 - name: my-namespace - policy: - placementType: PickAll - EOF - ``` --3. Check the progress of the resource propagation using the `kubectl get clusterresourceplacement` command. The following example checks the status of the `ClusterResourcePlacement` object named `crp`: -- ```bash - kubectl get clusterresourceplacement crp - ``` -- Your output should look similar to the following example output: -- ```output - NAME GEN SCHEDULED SCHEDULEDGEN APPLIED APPLIEDGEN AGE - crp 2 True 2 True 2 10s - ``` --4. View the details of the `crp` object using the `kubectl describe crp` command. The following example describes the `ClusterResourcePlacement` object named `crp`: -- ```bash - kubectl describe clusterresourceplacement crp - ``` -- Your output should look similar to the following example output: -- ```output - Name: crp - Namespace: - Labels: <none> - Annotations: <none> - API Version: placement.kubernetes-fleet.io/v1beta1 - Kind: ClusterResourcePlacement - Metadata: - Creation Timestamp: 2024-04-01T18:55:31Z - Finalizers: - kubernetes-fleet.io/crp-cleanup - kubernetes-fleet.io/scheduler-cleanup - Generation: 2 - Resource Version: 6949 - UID: 815b1d81-61ae-4fb1-a2b1-06794be3f986 - Spec: - Policy: - Placement Type: PickAll - Resource Selectors: - Group: - Kind: Namespace - Name: my-namespace - Version: v1 - Revision History Limit: 10 - Strategy: - Type: RollingUpdate - Status: - Conditions: - Last Transition Time: 2024-04-01T18:55:31Z - Message: found all the clusters needed as specified by the scheduling policy - Observed Generation: 2 - Reason: SchedulingPolicyFulfilled - Status: True - Type: ClusterResourcePlacementScheduled - Last Transition Time: 2024-04-01T18:55:36Z - Message: All 3 cluster(s) are synchronized to the latest resources on the hub cluster - Observed Generation: 2 - Reason: SynchronizeSucceeded - Status: True - Type: ClusterResourcePlacementSynchronized - Last Transition Time: 2024-04-01T18:55:36Z - Message: Successfully applied resources to 3 member clusters - Observed Generation: 2 - Reason: ApplySucceeded - Status: True - Type: ClusterResourcePlacementApplied - Observed Resource Index: 0 - Placement Statuses: - Cluster Name: membercluster1 - Conditions: - Last Transition Time: 2024-04-01T18:55:31Z - Message: Successfully scheduled resources for placement in membercluster1 (affinity score: 0, topology spread score: 0): picked by scheduling policy - Observed Generation: 2 - Reason: ScheduleSucceeded - Status: True - Type: ResourceScheduled - Last Transition Time: 2024-04-01T18:55:36Z - Message: Successfully Synchronized work(s) for placement - Observed Generation: 2 - Reason: WorkSynchronizeSucceeded - Status: True - Type: WorkSynchronized - Last Transition Time: 2024-04-01T18:55:36Z - Message: Successfully applied resources - Observed Generation: 2 - Reason: ApplySucceeded - Status: True - Type: ResourceApplied - Cluster Name: membercluster2 - Conditions: - Last Transition Time: 2024-04-01T18:55:31Z - Message: Successfully scheduled resources for placement in membercluster2 (affinity score: 0, topology spread score: 0): picked by scheduling policy - Observed 
Generation: 2 - Reason: ScheduleSucceeded - Status: True - Type: ResourceScheduled - Last Transition Time: 2024-04-01T18:55:36Z - Message: Successfully Synchronized work(s) for placement - Observed Generation: 2 - Reason: WorkSynchronizeSucceeded - Status: True - Type: WorkSynchronized - Last Transition Time: 2024-04-01T18:55:36Z - Message: Successfully applied resources - Observed Generation: 2 - Reason: ApplySucceeded - Status: True - Type: ResourceApplied - Cluster Name: membercluster3 - Conditions: - Last Transition Time: 2024-04-01T18:55:31Z - Message: Successfully scheduled resources for placement in membercluster3 (affinity score: 0, topology spread score: 0): picked by scheduling policy - Observed Generation: 2 - Reason: ScheduleSucceeded - Status: True - Type: ResourceScheduled - Last Transition Time: 2024-04-01T18:55:36Z - Message: Successfully Synchronized work(s) for placement - Observed Generation: 2 - Reason: WorkSynchronizeSucceeded - Status: True - Type: WorkSynchronized - Last Transition Time: 2024-04-01T18:55:36Z - Message: Successfully applied resources - Observed Generation: 2 - Reason: ApplySucceeded - Status: True - Type: ResourceApplied - Selected Resources: - Kind: Namespace - Name: my-namespace - Version: v1 - Events: - Type Reason Age From Message - - - - - - Normal PlacementScheduleSuccess 108s cluster-resource-placement-controller Successfully scheduled the placement - Normal PlacementSyncSuccess 103s cluster-resource-placement-controller Successfully synchronized the placement - Normal PlacementRolloutCompleted 103s cluster-resource-placement-controller Resources have been applied to the selected clusters - ```` --### [Portal](#tab/azure-portal) --1. Sign in to the Azure portal. --1. On the Azure portal overview page for your Fleet resource, in the **Fleet Resources** section, select **Resource placements**. --1. Select **Create**. --1. Replace the placeholder values with the following YAML, and select **Add**. -- :::image type="content" source="./media/quickstart-resource-propagation/create-crp-inline.png" lightbox="./media/quickstart-resource-propagation/create-crp.png" alt-text="A screenshot of the Azure portal page for creating a resource placement, showing the YAML template with placeholder values."::: -- ```yml - apiVersion: placement.kubernetes-fleet.io/v1beta1 - kind: ClusterResourcePlacement - metadata: - name: crp - spec: - resourceSelectors: - - group: "" - kind: Namespace - version: v1 - name: my-namespace - policy: - placementType: PickAll - ``` - --1. Verify that the cluster resource placement is created successfully. -- :::image type="content" source="./media/quickstart-resource-propagation/crp-success-inline.png" lightbox="./media/quickstart-resource-propagation/crp-success.png" alt-text="A screenshot of the Azure portal page for cluster resource placements, showing a successfully created cluster resource placement."::: --1. To see more details on an individual cluster resource placement, select it from the list. -- :::image type="content" source="./media/quickstart-resource-propagation/crp-details-inline.png" lightbox="./media/quickstart-resource-propagation/crp-details.png" alt-text="A screenshot of the Azure portal overview page for an individual cluster resource placement, showing events and details."::: --1. You can view additional details on the cluster resource placement's snapshots, bindings, works, and scheduling policy snapshots using the individual tabs. For example, select the **Cluster Resources Snapshots** tab. 
-- :::image type="content" source="./media/quickstart-resource-propagation/crp-snapshot-inline.png" lightbox="./media/quickstart-resource-propagation/crp-snapshot.png" alt-text="A screenshot of the Azure portal page for a cluster resource placement, with the cluster resources snapshots tab selected."::: ----## Clean up resources --### [Azure CLI](#tab/azure-cli) --If you no longer wish to use the `ClusterResourcePlacement` object, you can delete it using the `kubectl delete` command. The following example deletes the `ClusterResourcePlacement` object named `crp`: --```bash -kubectl delete clusterresourceplacement crp -``` --### [Portal](#tab/azure-portal) --If you no longer wish to use your cluster resource placement, you can delete it from the Azure portal: --1. On the Azure portal overview page for your Fleet resource, in the **Fleet Resources** section, select **Resource placements**. --1. Select the cluster resource placement objects you want to delete, then select **Delete**. --1. In the **Delete** tab, verify the correct objects are chosen. Once you're ready, select **Confirm delete** and **Delete**. ----## Next steps --To learn more about resource propagation, see the following resources: --* [Intelligent cross-cluster Kubernetes resource placement based on member clusters properties](./intelligent-resource-placement.md) -* [Upstream Fleet documentation](https://github.com/Azure/fleet/blob/main/docs/concepts/ClusterResourcePlacement/README.md) |
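The `PickAll` policy used in this quickstart places the namespace on every member cluster. If you only want a labeled subset, the same API supports cluster affinity. The following is a hedged sketch, assuming your member clusters carry a hypothetical `env: prod` label in the hub cluster:

```bash
kubectl apply -f - <<EOF
apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ClusterResourcePlacement
metadata:
  name: crp-prod-only
spec:
  resourceSelectors:
    - group: ""
      kind: Namespace
      version: v1
      name: my-namespace
  policy:
    placementType: PickAll
    affinity:
      clusterAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          clusterSelectorTerms:
            - labelSelector:
                matchLabels:
                  env: prod
EOF
```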
kubernetes-fleet | Resource Override | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/resource-override.md | - Title: "Customize namespace scoped resources in Azure Kubernetes Fleet Manager with resource overrides" -description: This article provides an overview of how to use the Fleet ResourceOverride API to override namespace scoped resources in Azure Kubernetes Fleet Manager. - Previously updated : 05/10/2024----- - build-2024 ---# Customize namespace scoped resources in Azure Kubernetes Fleet Manager with resource overrides (preview) --This article provides an overview of how to use the Fleet `ResourceOverride` API to override namespace scoped resources in Azure Kubernetes Fleet Manager. ---## Resource override overview --The resource override feature allows you to modify or override specific attributes of existing resources within a namespace. With `ResourceOverride`, you can define rules based on cluster labels, specifying changes to be applied to resources such as Deployments, StatefulSets, ConfigMaps, or Secrets. These changes can include updates to container images, environment variables, resource limits, or any other configurable parameters, ensuring consistent management and enforcement of configurations across your Fleet-managed Kubernetes clusters. --## API components --The `ResourceOverride` API consists of the following components: --* `resourceSelectors`: Specifies the set of resources selected for overriding. -* `policy`: Specifies the set of rules to apply to the selected resources. --### Resource selectors --A `ResourceOverride` object can include one or more resource selectors to specify which resources to override. The `ResourceSelector` object includes the following fields: --> [!NOTE] -> If you select a namespace in the `ResourceSelector`, the override will apply to all resources in the namespace. --* `group`: The API group of the resource. -* `version`: The API version of the resource. -* `kind`: The kind of the resource. -* `namespace`: The namespace of the resource. --To add a resource selector to a `ResourceOverride` object, use the `resourceSelectors` field with the following YAML format: --> [!IMPORTANT] -> The `ResourceOverride` needs to be in the same namespace as the resource you want to override. --```yaml -apiVersion: placement.kubernetes-fleet.io/v1alpha1 -kind: ResourceOverride -metadata: - name: example-resource-override - namespace: test-namespace -spec: - resourceSelectors: - - group: apps - kind: Deployment - version: v1 - name: test-nginx -``` --This example selects a `Deployment` named `test-nginx` from the `test-namespace` namespace for overriding. --## Policy --A `Policy` object consists of a set of rules, `overrideRules`, that specify the changes to apply to the selected resources. Each `overrideRule` object supports the following fields: --* `clusterSelector`: Specifies the set of clusters to which the override rule applies. -* `jsonPatchOverrides`: Specifies the changes to apply to the selected resources. 
--To add an override rule to a `ResourceOverride` object, use the `policy` field with the following YAML format: --```yaml
-apiVersion: placement.kubernetes-fleet.io/v1alpha1
-kind: ResourceOverride
-metadata:
- name: example-resource-override
- namespace: test-namespace
-spec:
- resourceSelectors:
- - group: apps
- kind: Deployment
- version: v1
- name: test-nginx
- policy:
- overrideRules:
- - clusterSelector:
- clusterSelectorTerms:
- - labelSelector:
- matchLabels:
- env: prod
- jsonPatchOverrides:
- - op: replace
- path: /spec/template/spec/containers/0/image
- value: "nginx:1.20.0"
-```
--This example replaces the container image in the `Deployment` with the `nginx:1.20.0` image for clusters with the `env: prod` label. --### Cluster selector --You can use the `clusterSelector` field in the `overrideRule` object to specify the clusters to which the override rule applies. The `ClusterSelector` object supports the following field: --* `clusterSelectorTerms`: A list of terms that specify the criteria for selecting clusters. Each term includes a `labelSelector` field that defines a set of labels to match. --### JSON patch overrides --You can use `jsonPatchOverrides` in the `overrideRule` object to specify the changes to apply to the selected resources. The `JsonPatch` object supports the following fields: --* `op`: The operation to perform. - * Supported operations include `add`, `remove`, and `replace`. - * `add`: Adds a new value to the specified path. - * `remove`: Removes the value at the specified path. - * `replace`: Replaces the value at the specified path. -* `path`: The path to the field to modify. - * Guidance on specifying paths includes: - * Must start with a `/` character. - * Can't be empty or contain an empty string. - * Can't be a `TypeMeta` field ("/kind" or "/apiVersion"). - * Can't be a `Metadata` field ("/metadata/name" or "/metadata/namespace") except the fields "/metadata/labels" and "/metadata/annotations". - * Can't be any field in the status of the resource. - * Examples of valid paths include: - * `/metadata/labels/new-label` - * `/metadata/annotations/new-annotation` - * `/spec/template/spec/containers/0/resources/limits/cpu` - * `/spec/template/spec/containers/0/resources/requests/memory` -* `value`: The value to add, remove, or replace. - * If the `op` is `remove`, you can't specify a `value`. --`jsonPatchOverrides` apply a JSON patch on the selected resources following [RFC 6902](https://datatracker.ietf.org/doc/html/rfc6902). --### Use multiple override rules --You can add multiple `overrideRules` to a `policy` to apply multiple changes to the selected resources, as shown in the following example: --```yaml
-apiVersion: placement.kubernetes-fleet.io/v1alpha1
-kind: ResourceOverride
-metadata:
- name: ro-1
- namespace: test
-spec:
- resourceSelectors:
- - group: apps
- kind: Deployment
- version: v1
- name: test-nginx
- policy:
- overrideRules:
- - clusterSelector:
- clusterSelectorTerms:
- - labelSelector:
- matchLabels:
- env: prod
- jsonPatchOverrides:
- - op: replace
- path: /spec/template/spec/containers/0/image
- value: "nginx:1.20.0"
- - clusterSelector:
- clusterSelectorTerms:
- - labelSelector:
- matchLabels:
- env: test
- jsonPatchOverrides:
- - op: replace
- path: /spec/template/spec/containers/0/image
- value: "nginx:latest"
-```
--This example replaces the container image in the `Deployment` with the `nginx:1.20.0` image for clusters with the `env: prod` label and the `nginx:latest` image for clusters with the `env: test` label. 
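A `ResourceOverride` takes effect only when a `ClusterResourcePlacement` selects the namespace containing the overridden resource, but you can apply and inspect it ahead of time. A minimal sketch, assuming the multi-rule example above is saved to a hypothetical file named `resource-override.yaml`:

```bash
# Apply the ResourceOverride in the same namespace as the Deployment it targets
kubectl apply -f resource-override.yaml

# Confirm the override object was admitted on the hub cluster
kubectl get resourceoverride ro-1 -n test -o yaml
```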
--## Apply the cluster resource placement --### [Azure CLI](#tab/azure-cli) --1. Create a `ClusterResourcePlacement` resource to specify the placement rules for distributing the resource overrides across the cluster infrastructure, as shown in the following example. Make sure you select the appropriate namespaces. -- ```yaml - apiVersion: placement.kubernetes-fleet.io/v1beta1 - kind: ClusterResourcePlacement - metadata: - name: crp-example - spec: - resourceSelectors: - - group: "" - kind: Namespace - name: test-namespace - version: v1 - policy: - placementType: PickAll - affinity: - clusterAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - clusterSelectorTerms: - - labelSelector: - matchLabels: - env: prod - - labelSelector: - matchLabels: - env: test - ``` -- This example distributes resources within the `test-namespace` across all clusters labeled with `env:prod` and `env:test`. As the changes are implemented, the corresponding `ResourceOverride` configurations will be applied to the designated resources, triggered by the selection of matching deployment resource, `my-deployment`. --2. Apply the `ClusterResourcePlacement` using the `kubectl apply` command. -- ```bash - kubectl apply -f cluster-resource-placement.yaml - ``` --3. Verify the `ResourceOverride` object applied to the selected resources by checking the status of the `ClusterResourcePlacement` resource using the `kubectl describe` command. -- ```bash - kubectl describe clusterresourceplacement crp-example - ``` -- Your output should resemble the following example output: -- ```output - Status: - Conditions: - ... - Message: The selected resources are successfully overridden in the 10 clusters - Observed Generation: 1 - Reason: OverriddenSucceeded - Status: True - Type: ClusterResourcePlacementOverridden - ... - Observed Resource Index: 0 - Placement Statuses: - Applicable Resource Overrides: - Name: ro-1-0 - Namespace: test-namespace - Cluster Name: member-50 - Conditions: - ... - Last Transition Time: 2024-04-26T22:57:14Z - Message: Successfully applied the override rules on the resources - Observed Generation: 1 - Reason: OverriddenSucceeded - Status: True - Type: Overridden - ... - ``` -- The `ClusterResourcePlacementOverridden` condition indicates whether the resource override was successfully applied to the selected resources. Each cluster maintains its own `Applicable Resource Overrides` list, which contains the resource override snapshot if relevant. Individual status messages for each cluster indicate whether the override rules were successfully applied. --### [Portal](#tab/azure-portal) --1. On the Azure portal overview page for your Fleet resource, in the **Fleet Resources** section, select **Resource placements**. --1. Select **Create**. --1. Create a `ClusterResourcePlacement` resource to specify the placement rules for distributing the resource overrides across the cluster infrastructure, as shown in the following example. Make sure you select the appropriate namespaces. When you're ready, select **Add**. 
-- ```yaml - apiVersion: placement.kubernetes-fleet.io/v1beta1 - kind: ClusterResourcePlacement - metadata: - name: crp-example - spec: - resourceSelectors: - - group: "" - kind: Namespace - name: test-namespace - version: v1 - policy: - placementType: PickAll - affinity: - clusterAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - clusterSelectorTerms: - - labelSelector: - matchLabels: - env: prod - - labelSelector: - matchLabels: - env: test - ``` -- This example distributes resources within the `test-namespace` across all clusters labeled with `env:prod` and `env:test`. As the changes are implemented, the corresponding `ResourceOverride` configurations will be applied to the designated resources, triggered by the selection of matching deployment resource, `my-deployment`. -- :::image type="content" source="./media/quickstart-resource-propagation/create-resource-propagation-inline.png" lightbox="./media/quickstart-resource-propagation/create-resource-propagation.png" alt-text="A screenshot of the Azure portal page for creating a resource placement, showing the YAML template with placeholder values."::: --1. Verify that the cluster resource placement is created successfully. -- :::image type="content" source="./media/quickstart-resource-propagation/overview-cluster-resource-inline.png" lightbox="./media/quickstart-resource-propagation/overview-cluster-resource.png" alt-text="A screenshot of the Azure portal page for cluster resource placements, showing a successfully created cluster resource placement."::: --1. Verify the cluster resource placement applied to the selected resources by selecting the resource from the list and checking the status. ----## Next steps --To learn more about Fleet, see the following resources: --* [Upstream Fleet documentation](https://github.com/Azure/fleet/tree/main/docs) -* [Azure Kubernetes Fleet Manager overview](./overview.md) |
kubernetes-fleet | Update Orchestration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/update-orchestration.md | - Title: "Orchestrate updates across multiple clusters by using Azure Kubernetes Fleet Manager" -description: Learn how to orchestrate updates across multiple clusters by using Azure Kubernetes Fleet Manager. - Previously updated : 11/06/2023----- - devx-track-azurecli - - ignite-2023 - - build-2024 ---# Orchestrate updates across multiple clusters by using Azure Kubernetes Fleet Manager --Platform admins managing Kubernetes fleets with a large number of clusters often have problems with staging their updates in a safe and predictable way across multiple clusters. To address this pain point, Kubernetes Fleet Manager (Fleet) allows you to orchestrate updates across multiple clusters using update runs, stages, groups, and strategies. ---## Prerequisites --* Read the [conceptual overview of this feature](./concepts-update-orchestration.md), which provides an explanation of update strategies, runs, stages, and groups referenced in this document. --* You must have a fleet resource with one or more member clusters. If not, follow the [quickstart][fleet-quickstart] to create a Fleet resource and join Azure Kubernetes Service (AKS) clusters as members. This walkthrough demonstrates a fleet resource with five AKS member clusters as an example. --* Set the following environment variables: -- ```bash - export GROUP=<resource-group> - export FLEET=<fleet-name> - ``` --* If you're following the Azure CLI instructions in this article, you need Azure CLI version 2.53.1 or later installed. To install or upgrade, see [Install the Azure CLI][azure-cli-install]. --* You also need the `fleet` Azure CLI extension, which you can install by running the following command: -- ```azurecli-interactive - az extension add --name fleet - ``` -- Run the following command to update to the latest version of the extension released: -- ```azurecli-interactive - az extension update --name fleet - ``` --> [!NOTE] -> Update runs honor [planned maintenance windows](/azure/aks/planned-maintenance) that you set at the AKS cluster level. For more information, see [planned maintenance across multiple member clusters](./concepts-update-orchestration.md#planned-maintenance), which explains how update runs handle member clusters that have been configured with planned maintenance windows. ---An update run supports two options for the sequence in which the clusters are upgraded: --- **One-by-one**: If you don't care about controlling the sequence in which the clusters are upgraded, `one-by-one` provides a simple approach to upgrade all member clusters of the fleet in sequence, one by one.-- **Control sequence of clusters using update groups and stages**: If you want to control the sequence in which the clusters are upgraded, you can structure member clusters in update groups and update stages. Further, this sequence can be stored as a template in the form of an update strategy. Update runs can later be created from update strategies instead of defining the sequence every time you need to create a stage-based update run.--## Update all clusters one by one --### [Azure portal](#tab/azure-portal) --1. On the page for your Azure Kubernetes Fleet Manager resource, go to the **Multi-cluster update** menu and select **Create**. --1. Choosing **One by one** upgrades all member clusters of the fleet in sequence, one by one. 
-- :::image type="content" source="./media/update-orchestration/update-run-one-by-one.png" alt-text="Screenshot of the Azure portal pane for creating update runs that update clusters one by one in Azure Kubernetes Fleet Manager." lightbox="./media/update-orchestration/update-run-one-by-one.png"::: --1. For **upgrade scope**, you can choose one of these three options: -- - Kubernetes version for both control plane and node pools - - Kubernetes version for only control plane of the cluster - - Node image version only -- :::image type="content" source="./media/update-orchestration/upgrade-scope.png" alt-text="Screenshot of the Azure portal pane for creating update runs. The upgrade scope section is shown." lightbox="./media/update-orchestration/upgrade-scope.png"::: -- For the node image, the following options are available: - - **Latest**: Updates every AKS cluster in the update run to the latest image available for that cluster in its region. - - **Consistent**: As it's possible for an update run to have AKS clusters across multiple regions where the latest available node images can be different (check [release tracker](/azure/aks/release-tracker) for more information). The update run picks the **latest common** image across all these regions to achieve consistency. --### [Azure CLI](#tab/cli) --**Creating an update run**: --- Run the following command to update the Kubernetes version and the node image version for all clusters of the fleet one by one:-- ```azurecli-interactive - az fleet updaterun create --resource-group $GROUP --fleet-name $FLEET --name run-1 --upgrade-type Full --kubernetes-version 1.26.0 - ``` --- Run the following command to update the Kubernetes version for only the control plane of all member clusters of the fleet one by one:-- ```azurecli-interactive - az fleet updaterun create --resource-group $GROUP --fleet-name $FLEET --name run-2 --upgrade-type ControlPlaneOnly --kubernetes-version 1.26.0 - ``` --- Run the following command to update only the node image versions for all clusters of the fleet one by one:-- ```azurecli-interactive - az fleet updaterun create --resource-group $GROUP --fleet-name $FLEET --name run-3 --upgrade-type NodeImageOnly - ``` --When creating an update run, you have the ability to control the scope of the update run. The `--upgrade-type` flag supports the following values: -- `ControlPlaneOnly` only upgrades the Kubernetes version for the control plane of the cluster. -- `Full` upgrades Kubernetes version for control plane and node pools along with the node images.-- `NodeImageOnly` only upgrades the node images.--Also, `--node-image-selection` flag supports the following values: -- **Latest**: Updates every AKS cluster in the update run to the latest image available for that cluster in its region.-- **Consistent**: As it's possible for an update run to have AKS clusters across multiple regions where the latest available node images can be different (check [release tracker](/azure/aks/release-tracker) for more information). The update run picks the **latest common** image across all these regions to achieve consistency.---**Starting an update run**: --To start update runs, run the following command: --```azurecli-interactive -az fleet updaterun start --resource-group $GROUP --fleet-name $FLEET --name <run-name> -``` ----## Update clusters in a specific order --Update groups and stages provide more control over the sequence that update runs follow when you're updating the clusters. 
Within an update stage, updates are applied to all the different update groups in parallel; within an update group, member clusters update sequentially. --### Assign a cluster to an update group --You can assign a member cluster to a specific update group in one of two ways. --* Assign to group when adding member cluster to the fleet. For example: --#### [Azure portal](#tab/azure-portal) --1. On the page for your Azure Kubernetes Fleet Manager resource, go to **Member clusters**. -- :::image type="content" source="./media/update-orchestration/add-members-inline.png" alt-text="Screenshot of the Azure portal page for Azure Kubernetes Fleet Manager member clusters." lightbox="./media/update-orchestration/add-members.png"::: --1. Specify the update group that the member cluster should belong to. -- :::image type="content" source="./media/update-orchestration/add-members-assign-group-inline.png" alt-text="Screenshot of the Azure portal page for adding member clusters to Azure Kubernetes Fleet Manager and assigning them to groups." lightbox="./media/update-orchestration/add-members-assign-group.png"::: --#### [Azure CLI](#tab/cli) --```azurecli-interactive -az fleet member create --resource-group $GROUP --fleet-name $FLEET --name member1 --member-cluster-id $AKS_CLUSTER_ID --update-group group-1a -``` ----* The second method is to assign an existing fleet member to an update group. For example: --#### [Azure portal](#tab/azure-portal) --1. On the page for your Azure Kubernetes Fleet Manager resource, navigate to **Member clusters**. Choose the member clusters that you want, and then select **Assign update group**. -- :::image type="content" source="./media/update-orchestration/existing-members-assign-group-inline.png" alt-text="Screenshot of the Azure portal page for assigning existing member clusters to a group." lightbox="./media/update-orchestration/existing-members-assign-group.png"::: --1. Specify the group name, and then select **Assign**. -- :::image type="content" source="./media/update-orchestration/group-name-inline.png" alt-text="Screenshot of the Azure portal page for member clusters that shows the form for updating a member cluster's group." lightbox="./media/update-orchestration/group-name.png"::: --#### [Azure CLI](#tab/cli) --```azurecli-interactive -az fleet member update --resource-group $GROUP --fleet-name $FLEET --name member1 --update-group group-1a -``` ----> [!NOTE] -> Any fleet member can only be a part of one update group, but an update group can have multiple fleet members inside it. -> An update group itself is not a separate resource type. Update groups are only strings representing references from the fleet members. So, if all fleet members with references to a common update group are deleted, that specific update group will cease to exist as well. --### Define an update run and stages --You can define an update run using update stages in order to sequentially order the application of updates to different update groups. For example, a first update stage might update test environment member clusters, and a second update stage would then update production environment member clusters. You can also specify a wait time between the update stages. --#### [Azure portal](#tab/azure-portal) --1. On the page for your Azure Kubernetes Fleet Manager resource, navigate to **Multi-cluster update**. Under the **Runs** tab, select **Create**. --1. Provide a name for your update run and then select 'Stages' for update sequence type. 
-- :::image type="content" source="./media/update-orchestration/update-run-stages-inline.png" alt-text="Screenshot of the Azure portal page for choosing stages mode within update run." lightbox="./media/update-orchestration/update-run-stages-lightbox.png"::: --1. Choose **Create Stage**. You can now specify the stage name and the duration to wait after each stage. -- :::image type="content" source="./media/update-orchestration/create-stage-basics-inline.png" alt-text="Screenshot of the Azure portal page for creating a stage and defining wait time." lightbox="./media/update-orchestration/create-stage-basics.png"::: --1. Choose the update groups that you want to include in this stage. -- :::image type="content" source="./media/update-orchestration/create-stage-choose-groups-inline.png" alt-text="Screenshot of the Azure portal page for stage creation that shows the selection of upgrade groups." lightbox="./media/update-orchestration/create-stage-choose-groups.png"::: --1. After you define all your stages, you can order them by using the **Move up** and **Move down** controls. --1. For **upgrade scope**, you can choose one of these three options: -- - Kubernetes version for both control plane and node pools - - Kubernetes version for only control plane of the cluster - - Node image version only -- :::image type="content" source="./media/update-orchestration/upgrade-scope.png" alt-text="Screenshot of the Azure portal pane for creating update runs. The upgrade scope section is shown." lightbox="./media/update-orchestration/upgrade-scope.png"::: -- For the node image, the following options are available: - - **Latest**: Updates every AKS cluster in the update run to the latest image available for that cluster in its region. - - **Consistent**: As it's possible for an update run to have AKS clusters across multiple regions where the latest available node images can be different (check [release tracker](/azure/aks/release-tracker) for more information). The update run picks the **latest common** image across all these regions to achieve consistency. ---1. Click on **Create** at the bottom of the page to create the update run. Specifying stages and their order every time when creating an update run can get repetitive and cumbersome. Update strategies simplify this process by allowing you to store templates for update runs. For more information, see [update strategy creation and usage](#create-an-update-run-using-update-strategies). --1. In the **Multi-cluster update** menu, choose the update run and select **Start**. --#### [Azure CLI](#tab/cli) --1. Run the following command to create the update run: -- ```azurecli-interactive - az fleet updaterun create --resource-group $GROUP --fleet-name $FLEET --name run-4 --upgrade-type Full --kubernetes-version 1.26.0 --stages example-stages.json - ``` -- Here's an example of input from the stages file (*example-stages.json*): -- ```json - { - "stages": [ - { - "name": "stage1", - "groups": [ - { - "name": "group-1a" - }, - { - "name": "group-1b" - }, - { - "name": "group-1c" - } - ], - "afterStageWaitInSeconds": 3600 - }, - { - "name": "stage2", - "groups": [ - { - "name": "group-2a" - }, - { - "name": "group-2b" - }, - { - "name": "group-2c" - } - ] - } - ] - } - ``` -- When creating an update run, you have the ability to control the scope of the update run. The `--upgrade-type` flag supports the following values: - - `ControlPlaneOnly` only upgrades the Kubernetes version for the control plane of the cluster. 
- - `Full` upgrades Kubernetes version for control plane and node pools along with the node images. - - `NodeImageOnly` only upgrades the node images. -- Also, `--node-image-selection` flag supports the following values: - - **Latest**: Updates every AKS cluster in the update run to the latest image available for that cluster in its region. - - **Consistent**: As it's possible for an update run to have AKS clusters across multiple regions where the latest available node images can be different (check [release tracker](/azure/aks/release-tracker) for more information). The update run picks the **latest common** image across all these regions to achieve consistency. --1. Run the following command to start this update run: -- ```azurecli-interactive - az fleet updaterun start --resource-group $GROUP --fleet-name $FLEET --name run-4 - ``` ----### Create an update run using update strategies --Creating an update run required the stages, groups, and their order to be specified each time. Update strategies simplify this process by allowing you to store templates for update runs. --> [!NOTE] -> It is possible to create multiple update runs with unique names from the same update strategy. --#### [Azure portal](#tab/azure-portal) --**Create an update strategy**: There are two ways to create an update strategy: --- **Approach 1**: You can save an update strategy while creating an update run.-- :::image type="content" source="./media/update-orchestration/update-strategy-creation-from-run-inline.png" alt-text="A screenshot of the Azure portal showing update run stages being saved as an update strategy." lightbox="./media/update-orchestration/update-strategy-creation-from-run-lightbox.png"::: --- **Approach 2**: You can navigate to **Multi-cluster update** and choose **Create** under the **Strategy** tab.-- :::image type="content" source="./media/update-orchestration/create-strategy-inline.png" alt-text="A screenshot of the Azure portal showing creation of update strategy." lightbox="./media/update-orchestration/create-strategy-lightbox.png"::: --**Use an update strategy to create update run**: The update strategy you created can later be referenced when creating new subsequent update runs: ---#### [Azure CLI](#tab/cli) --1. Run the following command to create a new update strategy: -- ```azurecli-interactive - az fleet updatestrategy create --resource-group $GROUP --fleet-name $FLEET --name strategy-1 --stages example-stages.json - ``` --1. Run the following command to create an update run referencing this strategy: -- ```azurecli-interactive - az fleet updaterun create --resource-group $GROUP --fleet-name $FLEET --name run-5 --update-strategy-name strategy-1 --upgrade-type NodeImageOnly --node-image-selection Consistent - ``` ---### Manage an Update run --There are a few options to manage update runs: --#### [Azure portal](#tab/azure-portal) --- Under **Multi-cluster update** tab of the fleet resource, you can **Start** an update run that is either in **Not started** or **Failed** state.-- :::image type="content" source="./media/update-orchestration/run-start.png" alt-text="A screenshot of the Azure portal showing how to start an update run in the 'Not started' state." 
lightbox="./media/update-orchestration/run-start.png"::: --- Under **Multi-cluster update** tab of the fleet resource, you can **Stop** a currently **Running** update run.-- :::image type="content" source="./media/update-orchestration/run-stop.png" alt-text="A screenshot of the Azure portal showing how to stop an update run in the 'Running' state." lightbox="./media/update-orchestration/run-stop.png"::: --- Within any update run in **Not Started**, **Failed**, or **Running** state, you can select any **Stage** and **Skip** the upgrade.-- :::image type="content" source="./media/update-orchestration/skip-stage.png" alt-text="A screenshot of the Azure portal showing how to skip upgrade for a specific stage in an update run." lightbox="./media/update-orchestration/skip-stage.png"::: -- You can similarly skip the upgrade at the update group or member cluster level too. -- For more information, see [conceptual overview on the update run states and skip behavior](concepts-update-orchestration.md#update-run-states) on runs/stages/groups. --#### [Azure CLI](#tab/cli) --- You can **Start** an update run that is either in **Not started** or **Failed** state:-- ```azurecli-interactive - az fleet updaterun start --resource-group $GROUP --fleet-name $FLEET --name <run-name> - ``` --- You can **Stop** a currently **Running** update run:-- ```azurecli-interactive - az fleet updaterun stop --resource-group $GROUP --fleet-name $FLEET --name <run-name> - ``` --- You can skip update stages or groups by specifying them under targets of the skip command:-- ```azurecli-interactive - az fleet updaterun skip --resource-group $GROUP --fleet-name $FLEET --name <run-name> --targets Group:my-group-name Stage:my-stage-name - ``` -- For more information, see [conceptual overview on the update run states and skip behavior](concepts-update-orchestration.md#update-run-states) on runs/stages/groups. ----[fleet-quickstart]: quickstart-create-fleet-and-members.md -[azure-cli-install]: /cli/azure/install-azure-cli |
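To follow a run's progress from a terminal rather than the portal, you can poll the update run resource. A hedged sketch, reusing the environment variables from the prerequisites and a hypothetical run named `run-4`:

```azurecli-interactive
# Show the update run; its status reports per-stage and per-group progress
az fleet updaterun show --resource-group $GROUP --fleet-name $FLEET --name run-4 --output table
```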
kubernetes-fleet | Upgrade Hub Cluster Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/upgrade-hub-cluster-type.md | - Title: "How to upgrade an Azure Kubernetes Fleet Manager resource between hub types" -description: Learn how to upgrade an Azure Kubernetes Fleet Manager resource from hubless to hubful. - Previously updated : 05/02/2024----- - build-2024 ---# Upgrade hub cluster type for Azure Kubernetes Fleet Manager resource --In this article, you learn how to upgrade an Azure Kubernetes Fleet Manager (Kubernetes Fleet) resource without a hub cluster to a Kubernetes Fleet resource that has a hub cluster. When a Kubernetes Fleet resource is created without a hub cluster, a central Azure Kubernetes Service (AKS) cluster isn't created for the Kubernetes Fleet resource. When a Kubernetes Fleet resource with a hub cluster is created, a central and managed AKS cluster is created to enable scenarios such as workload orchestration and layer-4 load balancing. --For more information, see [Choosing an Azure Kubernetes Fleet Manager option][concepts-choose-fleet]. --## Prerequisites and limitations --- [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to the latest version.-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- You must have an existing Kubernetes Fleet resource without a hub cluster. The steps in this article show you how to create a Kubernetes Fleet resource without a hub cluster. If you already have one, you can skip the initial setup and begin at [Upgrade hub cluster type for the Kubernetes Fleet resource](#upgrade-hub-cluster-type-for-the-kubernetes-fleet-resource).-- This article also includes steps on joining member clusters. If you plan to follow along, you need at least one AKS cluster.---> [!IMPORTANT] -> Kubernetes Fleet resources without a hub cluster can be upgraded to a Kubernetes Fleet resource with a hub cluster. However, a Kubernetes Fleet resource that already has a hub cluster can't be downgraded to a Kubernetes Fleet resource without a hub cluster. -> All configuration options and settings associated with Kubernetes Fleet resource that has a hub cluster are immutable and can't be changed after creation or upgrade time. -> Upgrading from a Kubernetes Fleet resource without a hub cluster to one with a hub cluster can only be done through the Azure CLI. Currently there's no equivalent Azure portal experience. --## Initial setup --To begin, create a resource group and a Kubernetes Fleet resource without a hub cluster, and join your existing AKS cluster as a member. You'll need to repeat the `az fleet member create` command for each individual member cluster you want to associate with the fleet resource. 
--```azurecli-interactive -RG=myResourceGroup -LOCATION=eastus -FLEET=myKubernetesFleet -FLEET_MEMBER=<name-identifying-member-cluster> -SUBSCRIPTION_ID=<your-subscription-id> -CLUSTER=<your-aks-cluster-name> --# Create resource group -az group create -n $RG -l $LOCATION --# Create a hubless fleet resource -az fleet create -g $RG -n $FLEET --# Join member cluster to hubless fleet resource -az fleet member create --name $FLEET_MEMBER --fleet-name $FLEET --resource-group $RG --member-cluster-id /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RG/providers/Microsoft.ContainerService/managedClusters/$CLUSTER -``` --## Upgrade hub cluster type for the Kubernetes Fleet resource --To upgrade the hub cluster type for the Kubernetes Fleet resource, use the `az fleet create` command with the `--enable-hub` flag set. Be sure to include any other relevant configuration options, as the fleet resource will become immutable after this operation is complete. --```azurecli-interactive -# Upgrade the Kubernetes Fleet resource without a hub cluster to one with a hub cluster -az fleet create --name $FLEET --resource-group $RG --enable-hub --``` --## Validate the upgrade --After running the `az fleet create` command to upgrade the fleet resource, verify that the upgrade succeeded by viewing the output. The `provisioningState` should read `Succeeded` and the `hubProfile` field should exist. For example, see the following output: --```json -{ - ... - "hubProfile": { - "agentProfile": { - "subnetId": null, - "vmSize": null - }, - "apiServerAccessProfile": { - "enablePrivateCluster": false, - "enableVnetIntegration": false, - "subnetId": null - }, - "dnsPrefix": "contoso-user-xxxx-xxxxxxx", - "fqdn": "contoso-user-flth-xxxxxx-xxxxxxxx.hcp.eastus.azmk8s.io", - "kubernetesVersion": "1.28.5", - "portalFqdn": "contoso-user-flth-xxxxxxx-xxxxxxxx.portal.hcp.eastus.azmk8s.io" - }, - "provisioningState": "Succeeded" - ... -} -``` --## Rejoin member clusters --To rejoin member clusters to the newly upgraded fleet resource, use the `az fleet member reconcile` command for each individual member cluster. --```azurecli-interactive -az fleet member reconcile -g $RG -f $FLEET -n $FLEET_MEMBER -``` --> [!NOTE] -> Any AKS clusters that you join to the fleet resource for the first time after the upgrade don't need to be reconciled using `az fleet member reconcile`. --## Verify member clusters joined successfully --For each member cluster that you rejoin to the newly upgraded fleet, view the output and verify that `provisioningState` reads `Succeeded`. For example: --```json -{ - ... - "provisioningState": "Succeeded" - ... -} -``` --## Verify functionality --You need access to the Kubernetes API of the hub cluster. If you don't have access, see [Access the Kubernetes API of the Fleet resource with Azure Kubernetes Fleet Manager](./quickstart-access-fleet-kubernetes-api.md). --To verify that your newly upgraded Kubernetes Fleet resource is functioning properly and that the member clusters joined successfully, confirm that you're able to access the hub cluster's API server using the `kubectl get memberclusters` command. --If successful, your output should look similar to the following example output: --```output -NAME JOINED AGE -aks-member-1 True 2m -aks-member-2 True 2m -aks-member-3 True 2m -``` --## Clean up resources --Once you're finished, you can remove the fleet resource and related resources by deleting the resource group.
Keep in mind that this operation won't remove your AKS clusters if they reside in a different resource group. --```azurecli-interactive -az group delete -n $RG -``` --## Next steps --Now that your Kubernetes Fleet resource is upgraded to have a hub cluster, you can take advantage of features that were previously unavailable to you. For example, see: --> [!div class="nextstepaction"] -> [Layer-4 load balancing across Fleet member clusters](l4-load-balancing.md) --<!-- LINKS --> -[concepts-choose-fleet]: concepts-choosing-fleet.md -[quickstart-create-fleet]: quickstart-create-fleet-and-members.md?tabs=hubless -[workload-orchestration]: /azure/kubernetes-fleet/concepts-resource-propagation |
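As a quicker alternative to scanning the full JSON output during the validation step above, the two fields called out there can be queried directly. A minimal sketch, assuming the same `$RG` and `$FLEET` variables from the setup steps:

```azurecli-interactive
# provisioningState should read "Succeeded" and hubProfile should be populated.
az fleet show --resource-group $RG --name $FLEET \
  --query "{state:provisioningState, hubFqdn:hubProfile.fqdn}" --output table
```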
kubernetes-fleet | Use Taints Tolerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/use-taints-tolerations.md | - Title: "Use taints on member clusters and tolerations on cluster resource placements in Azure Kubernetes Fleet Manager" -description: Learn how to use taints on `MemberCluster` resources and tolerations on `ClusterResourcePlacement` resources in Azure Kubernetes Fleet Manager. - Previously updated : 04/23/2024------# Use taints on member clusters and tolerations on cluster resource placements --This article explains how to add/remove taints on `MemberCluster` resources and tolerations on `ClusterResourcePlacement` resources in Azure Kubernetes Fleet Manager. --Taints and tolerations work together to ensure member clusters only receive specified resources during resource propagation. Taints are applied to `MemberCluster` resources to prevent resources from being propagated to the member cluster. Tolerations are applied to `ClusterResourcePlacement` resources to allow resources to be propagated to the member cluster, even if the member cluster has a taint. --## Prerequisites --* [!INCLUDE [free trial note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] -* Read the conceptual overviews for [taints](./concepts-fleet.md#taints) and [tolerations](./concepts-resource-propagation.md#tolerations). -* You must have a Fleet resource with a hub cluster and member clusters. If you don't have this resource, follow [Quickstart: Create a Fleet resource and join member clusters](quickstart-create-fleet-and-members.md). -* You must gain access to the Kubernetes API of the hub cluster by following the steps in [Access the Kubernetes API of the Fleet resource](./quickstart-access-fleet-kubernetes-api.md). --## Add taints to a member cluster --In this example, we add a taint to a `MemberCluster` resource, then try to propagate resources to the member cluster using a `ClusterResourcePlacement` with a `PickAll` placement policy. The resources shouldn't be propagated to the member cluster because of the taint. --1. Create a namespace to propagate to the member cluster using the `kubectl create ns` command. -- ```bash - kubectl create ns test-ns - ``` --2. Create a taint on the `MemberCluster` resource using the following example code: -- ```yml - apiVersion: cluster.kubernetes-fleet.io/v1beta1 - kind: MemberCluster - metadata: - name: kind-cluster-1 - spec: - identity: - name: fleet-member-agent-cluster-1 - kind: ServiceAccount - namespace: fleet-system - apiGroup: "" - taints: # Add taint to the member cluster - - key: test-key1 - value: test-value1 - effect: NoSchedule - ``` - -3. Apply the taint to the `MemberCluster` resource using the `kubectl apply` command. Make sure you replace the file name with the name of your file. -- ```bash - kubectl apply -f member-cluster-taint.yml - ``` --4. Create a `PickAll` placement policy on the `ClusterResourcePlacement` resource using the following example code: -- ```yml - resourceSelectors: - - group: "" - kind: Namespace - version: v1 - name: test-ns - policy: - placementType: PickAll - ``` --5. Apply the `ClusterResourcePlacement` resource using the `kubectl apply` command. Make sure you replace the file name with the name of your file. -- ```bash - kubectl apply -f cluster-resource-placement-pick-all.yml - ``` --6. 
Verify that the resources weren't propagated to the member cluster by checking the details of the `ClusterResourcePlacement` resource using the `kubectl describe` command. -- ```bash - kubectl describe clusterresourceplacement test-ns - ``` -- Your output should look similar to the following example output: -- ```output - status: - conditions: - - lastTransitionTime: "2024-04-16T19:03:17Z" - message: found all the clusters needed as specified by the scheduling policy - observedGeneration: 2 - reason: SchedulingPolicyFulfilled - status: "True" - type: ClusterResourcePlacementScheduled - - lastTransitionTime: "2024-04-16T19:03:17Z" - message: All 0 cluster(s) are synchronized to the latest resources on the hub - cluster - observedGeneration: 2 - reason: SynchronizeSucceeded - status: "True" - type: ClusterResourcePlacementSynchronized - - lastTransitionTime: "2024-04-16T19:03:17Z" - message: There are no clusters selected to place the resources - observedGeneration: 2 - reason: ApplySucceeded - status: "True" - type: ClusterResourcePlacementApplied - observedResourceIndex: "0" - selectedResources: - - kind: Namespace - name: test-ns - version: v1 - ``` --## Remove taints from a member cluster --In this example, we remove the taint we created in [add taints to a member cluster](#add-taints-to-a-member-cluster). This should automatically trigger the Fleet scheduler to propagate the resources to the member cluster. --1. Open your `MemberCluster` YAML file and remove the taint section. -2. Apply the changes to the `MemberCluster` resource using the `kubectl apply` command. Make sure you replace the file name with the name of your file. -- ```bash - kubectl apply -f member-cluster-taint.yml - ``` --3. Verify that the resources were propagated to the member cluster by checking the details of the `ClusterResourcePlacement` resource using the `kubectl describe` command. 
-- ```bash - kubectl describe clusterresourceplacement test-ns - ``` -- Your output should look similar to the following example output: -- ```output - status: - conditions: - - lastTransitionTime: "2024-04-16T20:00:03Z" - message: found all the clusters needed as specified by the scheduling policy - observedGeneration: 2 - reason: SchedulingPolicyFulfilled - status: "True" - type: ClusterResourcePlacementScheduled - - lastTransitionTime: "2024-04-16T20:02:57Z" - message: All 1 cluster(s) are synchronized to the latest resources on the hub - cluster - observedGeneration: 2 - reason: SynchronizeSucceeded - status: "True" - type: ClusterResourcePlacementSynchronized - - lastTransitionTime: "2024-04-16T20:02:57Z" - message: Successfully applied resources to 1 member clusters - observedGeneration: 2 - reason: ApplySucceeded - status: "True" - type: ClusterResourcePlacementApplied - observedResourceIndex: "0" - placementStatuses: - - clusterName: kind-cluster-1 - conditions: - - lastTransitionTime: "2024-04-16T20:02:52Z" - message: 'Successfully scheduled resources for placement in kind-cluster-1 (affinity - score: 0, topology spread score: 0): picked by scheduling policy' - observedGeneration: 2 - reason: ScheduleSucceeded - status: "True" - type: Scheduled - - lastTransitionTime: "2024-04-16T20:02:57Z" - message: Successfully Synchronized work(s) for placement - observedGeneration: 2 - reason: WorkSynchronizeSucceeded - status: "True" - type: WorkSynchronized - - lastTransitionTime: "2024-04-16T20:02:57Z" - message: Successfully applied resources - observedGeneration: 2 - reason: ApplySucceeded - status: "True" - type: Applied - selectedResources: - - kind: Namespace - name: test-ns - version: v1 - ``` --## Add tolerations to a cluster resource placement --In this example, we add a toleration to a `ClusterResourcePlacement` resource to propagate resources to a member cluster that has a taint. The toleration allows the resources to be propagated to the member cluster. --1. Create a namespace to propagate to the member cluster using the `kubectl create ns` command. -- ```bash - kubectl create ns test-ns - ``` --2. Create a taint on the `MemberCluster` resource using the following example code: -- ```yml - apiVersion: cluster.kubernetes-fleet.io/v1beta1 - kind: MemberCluster - metadata: - name: kind-cluster-1 - spec: - identity: - name: fleet-member-agent-cluster-1 - kind: ServiceAccount - namespace: fleet-system - apiGroup: "" - taints: # Add taint to the member cluster - - key: test-key1 - value: test-value1 - effect: NoSchedule - ``` - -3. Apply the taint to the `MemberCluster` resource using the `kubectl apply` command. Make sure you replace the file name with the name of your file. -- ```bash - kubectl apply -f member-cluster-taint.yml - ``` --4. Create a toleration on the `ClusterResourcePlacement` resource using the following example code: -- ```yml - spec: - policy: - placementType: PickAll - tolerations: - - key: test-key1 - operator: Exists - resourceSelectors: - - group: "" - kind: Namespace - name: test-ns - version: v1 - revisionHistoryLimit: 10 - strategy: - type: RollingUpdate - ``` --5. Apply the `ClusterResourcePlacement` resource using the `kubectl apply` command. Make sure you replace the file name with the name of your file. -- ```bash - kubectl apply -f cluster-resource-placement-toleration.yml - ``` --6. Verify that the resources were propagated to the member cluster by checking the details of the `ClusterResourcePlacement` resource using the `kubectl describe` command. 
-- ```bash - kubectl describe clusterresourceplacement test-ns - ``` -- Your output should look similar to the following example output: -- ```output - status: - conditions: - - lastTransitionTime: "2024-04-16T20:16:10Z" - message: found all the clusters needed as specified by the scheduling policy - observedGeneration: 3 - reason: SchedulingPolicyFulfilled - status: "True" - type: ClusterResourcePlacementScheduled - - lastTransitionTime: "2024-04-16T20:16:15Z" - message: All 1 cluster(s) are synchronized to the latest resources on the hub - cluster - observedGeneration: 3 - reason: SynchronizeSucceeded - status: "True" - type: ClusterResourcePlacementSynchronized - - lastTransitionTime: "2024-04-16T20:16:15Z" - message: Successfully applied resources to 1 member clusters - observedGeneration: 3 - reason: ApplySucceeded - status: "True" - type: ClusterResourcePlacementApplied - observedResourceIndex: "0" - placementStatuses: - - clusterName: kind-cluster-1 - conditions: - - lastTransitionTime: "2024-04-16T20:16:10Z" - message: 'Successfully scheduled resources for placement in kind-cluster-1 (affinity - score: 0, topology spread score: 0): picked by scheduling policy' - observedGeneration: 3 - reason: ScheduleSucceeded - status: "True" - type: Scheduled - - lastTransitionTime: "2024-04-16T20:16:15Z" - message: Successfully Synchronized work(s) for placement - observedGeneration: 3 - reason: WorkSynchronizeSucceeded - status: "True" - type: WorkSynchronized - - lastTransitionTime: "2024-04-16T20:16:15Z" - message: Successfully applied resources - observedGeneration: 3 - reason: ApplySucceeded - status: "True" - type: Applied - selectedResources: - - kind: Namespace - name: test-ns - version: v1 - ``` --## Next steps --For more information on Azure Kubernetes Fleet Manager, see the [upstream Fleet documentation](https://github.com/Azure/fleet/tree/main/docs). |
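The YAML snippets in the steps above are spec-level excerpts. For reference, a complete manifest for the toleration example that can be applied directly might look like the following sketch; the `apiVersion`/`kind` wrapper is an assumption based on the fleet `v1beta1` APIs, and the placement is named `test-ns` to match the `kubectl describe clusterresourceplacement test-ns` commands:

```bash
kubectl apply -f - <<'EOF'
apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ClusterResourcePlacement
metadata:
  name: test-ns
spec:
  resourceSelectors:
    - group: ""
      kind: Namespace
      name: test-ns
      version: v1
  policy:
    placementType: PickAll
    tolerations:   # tolerate the NoSchedule taint added earlier
      - key: test-key1
        operator: Exists
EOF
```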
load-balancer | Load Balancer Nat Pool Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-nat-pool-migration.md | az network lb inbound-nat-rule delete -g MyResourceGroup --lb-name MyLoadBalance az network nic ip-config inbound-nat-rule remove -g MyResourceGroup --nic-name MyNic -n MyIpConfig --inbound-nat-rule MyNatRule -az network lb inbound-nat-rule create -g MyResourceGroup --lb-name MyLoadBalancer -n MyNatRule --protocol Tcp --frontend-port-range-start 201 --frontend-port-range-end 500 --backend-port 22 +az network lb inbound-nat-rule create -g MyResourceGroup --lb-name MyLoadBalancer -n MyNatRule --protocol Tcp --frontend-port-range-start 201 --frontend-port-range-end 500 --backend-port 22 --backend-address-pool MybackendPool ``` az vmss update -g MyResourceGroup -n MyVMScaleSet --remove virtualMachineProfile az vmss update-instances --instance-ids '*' --resource-group MyResourceGroup --name MyVMScaleSet -az network lb inbound-nat-rule create -g MyResourceGroup --lb-name MyLoadBalancer -n MyNatRule --protocol Tcp --frontend-port-range-start 201 --frontend-port-range-end 500 --backend-port 22 +az network lb inbound-nat-rule create -g MyResourceGroup --lb-name MyLoadBalancer -n MyNatRule --protocol Tcp --frontend-port-range-start 201 --frontend-port-range-end 500 --backend-port 22 --backend-address-pool MybackendPool ``` |
logic-apps | Logic Apps Limits And Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md | For Azure Logic Apps to receive incoming communication through your firewall, yo | Switzerland West | 51.107.225.180, 51.107.225.167, 51.107.225.163, 51.107.239.66, 51.107.235.139,51.107.227.18, 20.199.218.139, 20.199.219.180, 20.199.216.255, 20.199.217.34, 20.208.231.200, 20.199.217.39, 20.199.216.16, 20.199.216.98 | | UAE Central | 20.45.75.193, 20.45.64.29, 20.45.64.87, 20.45.71.213, 40.126.212.77, 40.126.209.97, 40.125.29.71, 40.125.28.162, 40.125.25.83, 40.125.24.49, 40.125.3.59, 40.125.3.137, 40.125.2.220, 40.125.3.139 | | UAE North | 20.46.42.220, 40.123.224.227, 40.123.224.143, 20.46.46.173, 20.74.255.147, 20.74.255.37, 20.233.241.162, 20.233.241.99, 20.174.64.131, 20.233.241.184, 20.174.48.155, 20.233.241.200, 20.174.56.89, 20.174.41.1 |-| UK South | 51.140.79.109, 51.140.78.71, 51.140.84.39, 51.140.155.81, 20.108.102.180, 20.90.204.232, 20.108.148.173, 20.254.10.157, 4.159.25.35, 4.159.25.50, 4.250.87.43, 4.158.106.183, 4.250.53.153, 4.159.26.160, 4.159.25.103, 4.159.59.224, 4.158.138.59, 85.210.163.36, 85.210.34.209, 85.210.36.40 | +| UK South | 51.140.79.109, 51.140.78.71, 51.140.84.39, 51.140.155.81, 20.108.102.180, 20.90.204.232, 20.108.148.173, 20.254.10.157, 4.159.25.35, 4.159.25.50, 4.250.87.43, 4.158.106.183, 4.250.53.153, 4.159.26.160, 4.159.25.103, 4.159.59.224, 4.158.138.59, 85.210.163.36, 85.210.34.209, 85.210.36.40, 85.210.185.43 | | UK West | 51.141.48.98, 51.141.51.145, 51.141.53.164, 51.141.119.150, 51.104.62.166, 51.141.123.161, 20.162.86.241, 20.162.87.200, 51.141.80.175, 20.162.87.253, 20.254.244.41, 20.254.244.108, 20.254.241.7, 20.254.245.81 | | West Central US | 52.161.26.172, 52.161.8.128, 52.161.19.82, 13.78.137.247, 52.161.64.217, 52.161.91.215, 20.165.255.229, 4.255.162.134, 20.165.228.184, 4.255.178.108, 20.165.225.209, 4.255.145.22, 20.165.245.151, 20.165.232.221 | | West Europe | 13.95.155.53, 52.174.54.218, 52.174.49.6, 20.103.21.113, 20.103.18.84, 20.103.57.210, 20.101.174.52, 20.93.236.81, 20.103.94.255, 20.82.87.229, 20.76.171.34, 20.103.84.61, 98.64.193.78, 98.64.194.143, 98.64.198.223, 98.64.198.203, 98.64.208.186, 98.64.209.52, 172.211.196.189, 172.211.195.251, 98.64.154.66, 98.64.156.81, 98.64.156.180, 98.64.156.68, 20.238.229.165, 20.8.128.2, 20.238.230.113, 108.141.139.111, 108.142.111.162, 108.142.111.174, 108.142.111.178, 108.142.111.183, 108.142.111.152, 108.142.111.156, 108.142.111.179, 108.142.111.169, 98.64.203.30, 98.64.156.172, 20.56.202.157, 20.56.203.30, 57.153.19.33, 57.153.59.202, 108.141.95.140, 20.61.147.216, 57.153.83.52, 57.153.38.174, 57.153.3.13, 57.153.1.223, 108.142.29.55, 108.142.31.220, 108.142.31.202, 20.61.153.22, 57.153.7.252, 108.141.83.61 | This section lists the outbound IP addresses that Azure Logic Apps requires in y | Switzerland West | 51.107.239.66, 51.107.231.86, 51.107.239.112, 51.107.239.123, 51.107.225.190, 51.107.225.179, 51.107.225.186, 51.107.225.151, 51.107.239.83, 51.107.232.61, 51.107.234.254, 51.107.226.253, 20.199.193.249, 20.199.217.37, 20.199.219.154, 20.199.216.246, 20.199.219.21, 20.208.230.30, 20.199.216.63, 20.199.218.36, 20.199.216.44 | | UAE Central | 20.45.75.200, 20.45.72.72, 20.45.75.236, 20.45.79.239, 20.45.67.170, 20.45.72.54, 20.45.67.134, 20.45.67.135, 40.126.210.93, 40.126.209.151, 40.126.208.156, 40.126.214.92, 40.125.28.217, 40.125.28.159, 40.125.25.44, 40.125.29.66, 40.125.3.49, 40.125.3.66, 40.125.3.111, 40.125.3.63| | UAE 
North | 40.123.230.45, 40.123.231.179, 40.123.231.186, 40.119.166.152, 40.123.228.182, 40.123.217.165, 40.123.216.73, 40.123.212.104, 20.74.255.28, 20.74.250.247, 20.216.16.75, 20.74.251.30, 20.233.241.106, 20.233.241.102, 20.233.241.85, 20.233.241.25, 20.174.64.128, 20.174.64.55, 20.233.240.41, 20.233.241.206, 20.174.48.149, 20.174.48.147, 20.233.241.187, 20.233.241.165, 20.174.56.83, 20.174.56.74, 20.174.40.222, 20.174.40.91 |-| UK South | 51.140.74.14, 51.140.73.85, 51.140.78.44, 51.140.137.190, 51.140.153.135, 51.140.28.225, 51.140.142.28, 51.140.158.24, 20.108.102.142, 20.108.102.123, 20.90.204.228, 20.90.204.188, 20.108.146.132, 20.90.223.4, 20.26.15.70, 20.26.13.151, 4.159.24.241, 4.250.55.134, 4.159.24.255, 4.250.55.217, 172.165.88.82, 4.250.82.111, 4.158.106.101, 4.158.105.106, 4.250.51.127, 4.250.49.230, 4.159.26.128, 172.166.86.30, 4.159.26.151, 4.159.26.77, 4.159.59.140, 4.159.59.13, 85.210.65.206, 85.210.120.102, 4.159.57.40, 85.210.66.97 | +| UK South | 51.140.74.14, 51.140.73.85, 51.140.78.44, 51.140.137.190, 51.140.153.135, 51.140.28.225, 51.140.142.28, 51.140.158.24, 20.108.102.142, 20.108.102.123, 20.90.204.228, 20.90.204.188, 20.108.146.132, 20.90.223.4, 20.26.15.70, 20.26.13.151, 4.159.24.241, 4.250.55.134, 4.159.24.255, 4.250.55.217, 172.165.88.82, 4.250.82.111, 4.158.106.101, 4.158.105.106, 4.250.51.127, 4.250.49.230, 4.159.26.128, 172.166.86.30, 4.159.26.151, 4.159.26.77, 4.159.59.140, 4.159.59.13, 85.210.65.206, 85.210.120.102, 4.159.57.40, 85.210.66.97, 20.117.192.192 | | UK West | 51.141.54.185, 51.141.45.238, 51.141.47.136, 51.141.114.77, 51.141.112.112, 51.141.113.36, 51.141.118.119, 51.141.119.63, 51.104.58.40, 51.104.57.160, 51.141.121.72, 51.141.121.220, 20.162.84.125, 20.162.86.120, 51.141.86.225, 20.162.80.198, 20.254.242.187, 20.254.242.213, 20.254.244.189, 20.254.245.102 | | West Central US | 52.161.27.190, 52.161.18.218, 52.161.9.108, 13.78.151.161, 13.78.137.179, 13.78.148.140, 13.78.129.20, 13.78.141.75, 13.71.199.128 - 13.71.199.159, 13.78.212.163, 13.77.220.134, 13.78.200.233, 13.77.219.128, 52.150.226.148, 4.255.161.16, 4.255.195.186, 4.255.168.251, 4.255.219.152, 20.165.235.148, 20.165.249.200, 20.165.232.68 | | West Europe | 40.68.222.65, 40.68.209.23, 13.95.147.65, 23.97.218.130, 51.144.182.201, 23.97.211.179, 104.45.9.52, 23.97.210.126, 13.69.71.160, 13.69.71.161, 13.69.71.162, 13.69.71.163, 13.69.71.164, 13.69.71.165, 13.69.71.166, 13.69.71.167, 20.103.21.81, 20.103.17.247, 20.103.17.223, 20.103.16.47, 20.103.58.116, 20.103.57.29, 20.101.174.49, 20.101.174.23, 20.93.236.26, 20.93.235.107, 20.103.94.250, 20.76.174.72, 20.82.87.192, 20.82.87.16, 20.76.170.145, 20.103.91.39, 20.103.84.41, 20.76.161.156, 98.64.193.64, 98.64.194.135, 98.64.198.219, 98.64.198.194, 98.64.208.46, 98.64.209.43, 172.211.196.188, 172.211.195.181, 98.64.157.37, 98.64.156.69, 98.64.156.152, 98.64.156.62, 20.238.229.108, 108.141.139.225, 20.238.230.87, 108.141.139.80, 108.142.111.161, 108.142.111.173, 108.142.111.175, 108.142.111.182, 108.142.111.151, 108.142.111.155, 108.142.111.157, 108.142.111.167, 98.64.203.5, 98.64.156.150, 20.56.202.134, 20.56.202.244, 57.153.19.27, 57.153.59.193, 108.141.95.129, 20.61.147.200, 57.153.83.40, 57.153.38.60, 57.153.2.162, 57.153.1.215, 108.142.24.182, 108.142.31.170, 108.142.31.143, 20.61.152.226, 57.153.7.245, 108.141.83.46 | |
migrate | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/azure-monitor-agent-migration.md | + + Title: Migrate to Azure Monitor Agent from Log Analytics agent +description: Procedure to migrate to Azure Monitor Agent from MMA +++ Last updated : 09/18/2024++# Customer intent: As an azure administrator, I want to understand the process of migrating from the MMA agent to the AMA agent. ++++# Agent-based dependency analysis using Azure monitor agent (AMA) ++Dependency analysis helps you to identify and understand dependencies across servers that you want to assess and migrate to Azure. We currently perform agent-based dependency analysis by downloading the [MMA agent and associating a Log Analytics workspace](concepts-dependency-visualization.md) with the Azure Migrate project. ++[Azure Monitor Agent (AMA)](/azure/azure-monitor/agents/azure-monitor-agent-overview) replaces the Log Analytics agent, also known as Microsoft Monitor Agent (MMA) and OMS, for Windows and Linux machines, in Azure and non-Azure environments, on-premises, and other clouds. ++This article describes the impact on agent-based dependency analysis because of Azure Monitor Agent (AMA) replacing the Log Analytics agent (also known as Microsoft Monitor agent (MMA)) and provides guidance to migrate from the Log Analytics agent to Azure Monitor Agent. ++> [!IMPORTANT] +> The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). You can expect the following when you use the MMA or OMS agent after this date. +> - **Data upload:** Cloud ingestion services will gradually reduce support for MMA agents, which may result in decreased support and potential compatibility issues for MMA agents over time. Ingestion for MMA will be unchanged until February 1 2025. +> - **Installation:** The ability to install the legacy agents will be removed from the Azure portal and installation policies for legacy agents will be removed. You can still install the MMA agents extension as well as perform offline installations. +> - **Customer Support:** You will not be able to get support for legacy agent issues. +> - **OS Support:** Support for new Linux or Windows distros, including service packs, won't be added after the deprecation of the legacy agents. ++> [!Note] +> Starting July 1, 2024, [Standard Log Analytics charges](https://go.microsoft.com/fwlink/?linkid=2278207) are applicable for Agent-based dependency visualization. We suggest moving to [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md) for a seamless experience. ++> [!Note] +> The pricing estimation has been covered in [Estimate the price change](#estimate-the-price-change) section. ++## Migrate from Log analytics agent (MMA) to Azure Monitor agent (AMA) ++If you already set up MMA and the associated Log Analytics workspace with your Azure Migrate project, you can migrate from the existing Log analytics agent to Azure Monitor agent without breaking/changing the association of the Log Analytics workspace with the Azure Migrate project by following these steps. ++1. To deploy the Azure Monitor agent, it's recommended to first clean up the existing Service Map to avoid duplicates.ΓÇ»[Learn more](/azure/azure-monitor/vm/vminsights-migrate-from-service-map#remove-the-service-map-solution-from-the-workspace). ++1. 
Review the [prerequisites](/azure/azure-monitor/agents/azure-monitor-agent-manage#prerequisites) to install the Azure Monitor Agent. ++1. Download and run the script on the host machine as detailed in [Installation options](/azure/azure-monitor/agents/azure-monitor-agent-manage?tabs=azure-portal#installation-options). To get the Azure Monitor agent and the Dependency agent deployed on the guest machine, create the [Data collection rule (DCR)](/azure/azure-monitor/agents/azure-monitor-agent-data-collection) that maps to the Log Analytics workspace ID. ++In the transition scenario, the Log Analytics workspace is the same one that was configured for the Service Map agent. The DCR allows you to enable the collection of processes and dependencies; this collection is disabled by default. ++## Estimate the price change ++You'll now be charged for associating a Log Analytics workspace with an Azure Migrate project. Previously, this was free for the first 180 days. +Under the pricing change, you're billed for the volume of data gathered by the AMA agent and transmitted to the workspace. To review the volume of data you're gathering, follow these steps: ++1. Sign in to the Log Analytics workspace. +1. Navigate to the **Logs** section and run the following query: + + ``` + let AzureMigrateDataTables = dynamic(["ServiceMapProcess_CL","ServiceMapComputer_CL","VMBoundPort","VMConnection","VMComputer","VMProcess","InsightsMetrics"]); Usage ++ | where StartTime >= startofday(ago(30d)) and StartTime < startofday(now()) ++ | where DataType in (AzureMigrateDataTables) ++ | summarize AzureMigrateGBperMonth=sum(Quantity)/1000 + ``` ++## Support for Azure Monitor agent in Azure Migrate ++Install and manage the Azure Monitor agent as described [here](/azure/azure-monitor/agents/azure-monitor-agent-manage?tabs=azure-portal). Currently, you can download the Log Analytics agent through the Azure Migrate portal. ++## Next steps +[Learn](how-to-create-group-machine-dependencies.md) how to create dependencies for a group. |
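The volume query above can also be run from the command line rather than the portal **Logs** blade. A sketch, assuming the `log-analytics` Azure CLI extension is installed and `<workspace-guid>` is your workspace ID (not its name):

```azurecli-interactive
az monitor log-analytics query --workspace <workspace-guid> \
  --analytics-query "let AzureMigrateDataTables = dynamic(['ServiceMapProcess_CL','ServiceMapComputer_CL','VMBoundPort','VMConnection','VMComputer','VMProcess','InsightsMetrics']); Usage | where StartTime >= startofday(ago(30d)) and StartTime < startofday(now()) | where DataType in (AzureMigrateDataTables) | summarize AzureMigrateGBperMonth=sum(Quantity)/1000" \
  --output table
```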
openshift | Support Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-lifecycle.md | See the following guide for the [past Red Hat OpenShift Container Platform (upst |4.9|November 2021| February 1 2022|March 2 2023| |4.10|March 2022| June 21 2022|August 19 2023| |4.11|August 2022| March 2 2023|February 10 2024|-|4.12|January 2023| August 19 2023|October 17 2024| +|4.12|January 2023| August 19 2023|January 17 2025| |4.13|May 2023| December 15 2023|November 17 2024| |4.14|October 2023| April 25 2024|May 1 2025| |4.15|February 2024| September 4 2024|June 27 2025| |
operational-excellence | Relocation Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-functions.md | This article describes how to move an Azure Functions-hosted function app to ano [!INCLUDE [relocate-reasons](./includes/service-relocation-reason-include.md)] -The Azure resources that host your function app are region-specific and can't be moved across regions. Instead, you must create a copy of your existing function app resources in the target region, and then redeploy your functions code over to the new app. +The Azure resources that host your function app are region-specific and can't be moved across regions. Instead, you must create a copy of your existing function app resources in the target region, and then redeploy your functions code over to the new app. ++You can move these same resources to another resource group or subscription, as long as they remain in the same region. For more information, see [Move App Service resources to a new resource group or subscription](/azure/azure-resource-manager/management/move-limitations/app-service-move-limitations). ## Prerequisites |
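The copy-and-redeploy flow described above can be scripted end to end. A minimal sketch for a consumption-plan app; every name, the target region, and the runtime are placeholders that depend on your existing app's configuration:

```azurecli-interactive
# Create the target-region copy (hypothetical names and region).
az group create --name my-target-rg --location westus3
az storage account create --name mytargetstorage001 --resource-group my-target-rg
az functionapp create --name my-target-func-app --resource-group my-target-rg \
  --storage-account mytargetstorage001 --consumption-plan-location westus3 \
  --runtime dotnet-isolated --functions-version 4

# Redeploy the existing code package to the new app.
az functionapp deployment source config-zip --name my-target-func-app \
  --resource-group my-target-rg --src app.zip
```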
operator-service-manager | Azure Operator Service Manager Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/azure-operator-service-manager-overview.md | Title: What is Azure Operator Service Manager? -description: Learn about Azure Operator Service Manager, an Azure Service for the management of Network Services for telecom operators. -+description: Learn about Azure Operator Service Manager, an Azure orchestration service used to managed network service in large scale operator environments. + Last updated 10/18/2023 -Azure Operator Service Manager is an Azure service specifically designed to assist telecom operators in deploying and operating complex network services. It provides management capabilities for multi-vendor applications across Azure Operator Nexus sites and Azure regions. Azure Operator Service Manager caters to the needs of telecom operators, simplifying migration of workloads to Azure cloud and edge environments while accelerating both service innovation and service monetization. +Azure Operator Service Manager is a cloud orchestration service designed to simplify management of complex edge network services hosted on the Azure Operator Nexus platform. It provides persona-based capabilities to onboard, compose, deploy and update multi-vendor applications across one-to-many Azure sites and regions. Azure Operator Service Manager caters to the needs of large scale operator environments, helping to accelerate the migration of workloads to Azure cloud, while utilizing trusted Azure safe practices, to ensure service resiliency and reliability. :::image type="content" source="media/overview-unified-service.png" alt-text="Diagram that shows unified service orchestration across Azure domains." lightbox="media/overview-unified-service-lightbox.png"::: ## Technical overview -Managing complex network services efficiently and reliably can be a challenge. Azure Operator Service ManagerΓÇÖs unique role-based approach introduces curated experiences for publishers, designers, and operators. The following diagram illustrates the Azure Operator Service Manager (AOSM) deployment workflow. +Managing complex network services efficiently and reliably can be a challenge. Azure Operator Service ManagerΓÇÖs unique approach introduces curated experiences for publishers, designers, and operators. The following diagram illustrates the Azure Operator Service Manager (AOSM) deployment workflow. :::image type="content" source="media/overview-deployment-workflows.png" alt-text="Illustration that shows the Azure Operator Service Manager (AOSM) deployment workflow." lightbox="media/overview-deployment-workflows-lightbox.png"::: Managing complex network services efficiently and reliably can be a challenge. A ### Unified service orchestration -Consolidate software management tasks into a single set of end-to-end Azure operations to seamlessly compose, deploy, and update complex multi-vendor multi-region services. Model network services using Azure Resource Manager (ARM), just like other Azure resources and drive run-time operations via any Azure interfaces, such as portal, CLI, API, or SDK. +Consolidate service management tasks into a single set of end-to-end Azure operations to seamlessly manage infrastructure, software and configuration for complex multi-vendor multi-region services. 
Model network services using Azure Resource Manager (ARM), just like other Azure resources, and drive run-time operations via any Azure interfaces, such as portal, CLI, API, or SDK. -### Operations at scale +### Hybrid operations at scale Azure Operator Service Manager is built to scale using the same underlying databases, analytics, and cloud-native foundational engineering as Azure itself. Core and edge workloads can be managed together, as complete network deployments ranging in scale from private 5G to national networks. Alternatively, use Azure subscription or role-based separation of different parts of your network to segregate management by operational team or service type. -### Telco-grade security +### Operator-grade security Azure Operator Service Manager works seamlessly with Azure security features including Private Link, embedded registries, artifact stores, and secret management options to ensure that the operator has confidence that what is deployed matches what was onboarded by each publisher. Accelerate your journey towards autonomous operations using Azure Operator Service Manager. ## Conclusion -By unifying service management, facilitating deployments at national scale and ensuring service consistency, operators can achieve accelerated service velocity, improved service reliability, and optimize service cost. Harness the power of Microsoft Azure through Azure Operator Service Manager to drive network services forward. +By unifying service management, facilitating deployments at fleet-wide scale, and ensuring service consistency, operators can achieve accelerated service velocity, improved service reliability, and optimized service cost. Harness the power of Microsoft Azure through Azure Operator Service Manager to drive network services forward. ## Service Level Agreement |
operator-service-manager | Manage Network Function Operator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/manage-network-function-operator.md | Title: Manage the Azure Operator Service Manager cluster extension -description: Command reference syntax and examples guiding management of the Azure Operator Service Manager network function operator extension. +description: AOSM NFO extension command reference and examples. Last updated 09/16/2024 az k8s-extension create --cluster-name ### Optional feature specific configurations +#### Side Loading ++`--config global.networkfunctionextension.enableLocalRegistry=` +* This configuration allows artifacts to be delivered to edge via hardware drive. +* Accepted values: false, true. +* Default value: false. + #### Pod Mutating Webhook `--config global.networkfunctionextension.webhook.pod.mutation.matchConditionExpression=` * This configuration is an optional parameter. It comes into play only when container network functions (CNFs) are installed in the corresponding release namespace. The referenced matchCondition implies that the pods getting accepted in kube-sys * This parameter can be set or updated during either network function (NF) extension installation or update. * This condition comes into play only when the CNF/Component/Application are getting installed into the namespace as per the rules and namespaceSelectors. If there are more pods getting spin up in that namespace, this condition is applied. -#### Cluster registry +#### Cluster Registry `--config global.networkfunctionextension.enableClusterRegistry=` * This configuration provisions a registry in the cluster to locally cache artifacts. * Default values enable lazy loading mode unless global.networkfunctionextension.enableEarlyLoading=true. The referenced matchCondition implies that the pods getting accepted in kube-sys * This configuration uses unit as Gi and Ti for sizing. * Default value: 100Gi -#### Side loading --`--config global.networkfunctionextension.enableLocalRegistry=` -* This configuration allows artifacts to be delivered to edge via hardware drive. -* Accepted values: false, true. -* Default value: false. --### Recommended NFO config for AKS --The default NFO config configures HA on NAKS but none of the disk drives on AKS support ReadWriteX access mode. Where HA needs to be disabled, use the following config options; --``` --config global.networkfunctionextension.clusterRegistry.highAvailability.enabled=false``` --``` --config global.networkfunctionextension.webhook.highAvailability.enabled=false``` --(optional) --``` --config global.networkfunctionextension.clusterRegistry.storageClassName=managed-csi``` +> [!NOTE] +> * When managing a NAKS cluster with AOSM, the default parameter values enable HA as the recommended configuration. +> * When managing a AKS cluster with AOSM, HA must be disabled using the following configuration options: +> +>``` +> --config global.networkfunctionextension.clusterRegistry.highAvailability.enabled=false +> --config global.networkfunctionextension.webhook.highAvailability.enabled=false +> --config global.networkfunctionextension.clusterRegistry.storageClassName=managed-csi +>``` ## Update network function extension The Azure CLI command 'az k8s-extension update' is executed to update the NFO extension. |
operator-service-manager | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/release-notes.md | The following release notes are generally available (GA): * Release Notes for Version 2.0.2783-134 * Release Notes for Version 2.0.2788-135 * Release Notes for Version 2.0.2804-137-+* Release Notes for Version 2.0.2810-144 + ### Release Attestation These releases are produced compliant with MicrosoftΓÇÖs Secure Development Lifecycle. This lifecycle includes processes for authorizing software changes, antimalware scanning, and scanning and mitigating security bugs and vulnerabilities. The following bug fixes, or other defect resolutions, are delivered with this re #### Security Related Updates * CVE - A total of one CVE is addressed in this release.++* ## Release 2.0.2810-144 ++Document Revision 1.1 ++### Release Summary +Azure Operator Service Manager is a cloud orchestration service that enables automation of operator network-intensive workloads, and mission critical applications hosted on Azure Operator Nexus. Azure Operator Service Manager unifies infrastructure, software, and configuration management with a common model into a single interface, both based on trusted Azure industry standards. This Septemer 13, 2024 Azure Operator Service Manager release includes updating the NFO version to 2.0.2810-144, the details of which are further outlined in the remainder of this document. ++### Release Details +* Release Version: Version 2.0.2810-144 +* Release Date: September 13, 2024 +* Is NFO update required: YES, Update only +* Dependency Versions: Go/1.22.4 - Helm/3.15.2 ++### Release Installation +This release can be installed with as an update on top of release 2.0.2788-144. Please see the following [learn documentation](manage-network-function-operator.md) for additional installation guidance. ++### Issues Resolved in This Release ++#### Bugfix Related Updates +The following bug fixes, or other defect resolutions, are delivered with this release, for either Network Function Operator (NFO) or resource provider (RP) components. ++* NFO - Prevent the cluster registry certificate from being invalidated during Arc extension controller reconciliation. ++#### Security Related Updates ++None |
role-based-access-control | Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles/containers.md | Grants access to read and write Azure Kubernetes Service clusters > [!div class="mx-tableFixed"] > | Actions | Description | > | | |-> | [Microsoft.ContainerService](../permissions/containers.md#microsoftcontainerservice)/managedClusters/read | Get a managed cluster | -> | [Microsoft.ContainerService](../permissions/containers.md#microsoftcontainerservice)/managedClusters/write | Creates a new managed cluster or updates an existing one | +> | [Microsoft.Authorization](../permissions/management-and-governance.md#microsoftauthorization)/*/read | Read roles and role assignments | +> | [Microsoft.ContainerService](../permissions/containers.md#microsoftcontainerservice)/locations/* | Read locations available to ContainerService resources | +> | [Microsoft.ContainerService](../permissions/containers.md#microsoftcontainerservice)/managedClusters/* | Create and manage a managed cluster | +> | [Microsoft.ContainerService](../permissions/containers.md#microsoftcontainerservice)/managedclustersnapshots/* | Create and manage a managed cluster snapshot | +> | [Microsoft.ContainerService](../permissions/containers.md#microsoftcontainerservice)/snapshots/* | Create and manage a snapshot | +> | [Microsoft.Insights](../permissions/monitor.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert | > | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/deployments/* | Create and manage a deployment |+> | [Microsoft.Resources](../permissions/management-and-governance.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. | > | **NotActions** | | > | *none* | | > | **DataActions** | | Grants access to read and write Azure Kubernetes Service clusters "permissions": [ { "actions": [- "Microsoft.ContainerService/managedClusters/read", - "Microsoft.ContainerService/managedClusters/write", - "Microsoft.Resources/deployments/*" + "Microsoft.Authorization/*/read", + "Microsoft.ContainerService/locations/*", + "Microsoft.ContainerService/managedClusters/*", + "Microsoft.ContainerService/managedclustersnapshots/*", + "Microsoft.ContainerService/snapshots/*", + "Microsoft.Insights/alertRules/*", + "Microsoft.Resources/deployments/*", + "Microsoft.Resources/subscriptions/resourceGroups/read" ], "notActions": [], "dataActions": [], |
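To put the role definition above to use, it can be assigned at a chosen scope. A sketch, assuming the built-in role name is `Azure Kubernetes Service Contributor Role`, and substituting your own principal and scope:

```azurecli-interactive
az role assignment create --assignee "user@contoso.com" \
  --role "Azure Kubernetes Service Contributor Role" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```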
route-server | Configure Route Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/configure-route-server.md | -# Configure Azure Route Server +# Configure and manage Azure Route Server In this article, you learn how to configure and manage Azure Route Server using the Azure portal, PowerShell, or Azure CLI. |
sentinel | Data Connectors Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md | For more information about the codeless connector platform, see [Create a codele ## Akamai -- [[Deprecated] Akamai Security Events via Legacy Agent](data-connectors/deprecated-akamai-security-events-via-legacy-agent.md) - [[Recommended] Akamai Security Events via AMA](data-connectors/recommended-akamai-security-events-via-ama.md) ## AliCloud For more information about the codeless connector platform, see [Create a codele - [Amazon Web Services](data-connectors/amazon-web-services.md) - [Amazon Web Services S3](data-connectors/amazon-web-services-s3.md) -## Apache --- [Apache Tomcat](data-connectors/apache-tomcat.md)- ## Apache Software Foundation - [Apache HTTP Server](data-connectors/apache-http-server.md) For more information about the codeless connector platform, see [Create a codele - [ARGOS Cloud Security](data-connectors/argos-cloud-security.md) -## Arista Networks --- [Awake Security](data-connectors/awake-security.md)- ## Armis, Inc. - [Armis Activities (using Azure Functions)](data-connectors/armis-activities.md) For more information about the codeless connector platform, see [Create a codele ## Aruba -- [[Deprecated] Aruba ClearPass via Legacy Agent](data-connectors/deprecated-aruba-clearpass-via-legacy-agent.md) - [[Recommended] Aruba ClearPass via AMA](data-connectors/recommended-aruba-clearpass-via-ama.md) ## Atlassian For more information about the codeless connector platform, see [Create a codele - [Bitsight data connector (using Azure Functions)](data-connectors/bitsight-data-connector.md) -## Blackberry --- [Blackberry CylancePROTECT](data-connectors/blackberry-cylanceprotect.md)- ## Bosch Global Software Technologies Pvt Ltd - [AIShield](data-connectors/aishield.md) For more information about the codeless connector platform, see [Create a codele ## Broadcom -- [[Deprecated] Broadcom Symantec DLP via Legacy Agent](data-connectors/deprecated-broadcom-symantec-dlp-via-legacy-agent.md) - [[Recommended] Broadcom Symantec DLP via AMA](data-connectors/recommended-broadcom-symantec-dlp-via-ama.md) ## Cisco -- [Cisco Application Centric Infrastructure](data-connectors/cisco-application-centric-infrastructure.md) - [Cisco AS) - [Cisco Duo Security (using Azure Functions)](data-connectors/cisco-duo-security.md) - [Cisco Identity Services Engine](data-connectors/cisco-identity-services-engine.md) For more information about the codeless connector platform, see [Create a codele ## Claroty -- [[Deprecated] Claroty via Legacy Agent](data-connectors/deprecated-claroty-via-legacy-agent.md) - [[Recommended] Claroty via AMA](data-connectors/recommended-claroty-via-ama.md) - [Claroty xDome](data-connectors/claroty-xdome.md) For more information about the codeless connector platform, see [Create a codele ## Crowdstrike -- [[Deprecated] CrowdStrike Falcon Endpoint Protection via Legacy Agent](data-connectors/deprecated-crowdstrike-falcon-endpoint-protection-via-legacy-agent.md) - [CrowdStrike Falcon Adversary Intelligence (using Azure Functions)](data-connectors/crowdstrike-falcon-adversary-intelligence.md) - [Crowdstrike Falcon Data Replicator (using Azure Functions)](data-connectors/crowdstrike-falcon-data-replicator.md) - [Crowdstrike Falcon Data Replicator V2 (using Azure Functions)](data-connectors/crowdstrike-falcon-data-replicator-v2.md) For more information about the codeless connector platform, see [Create a codele ## Fireeye -- [[Deprecated] 
FireEye Network Security (NX) via Legacy Agent](data-connectors/deprecated-fireeye-network-security-nx-via-legacy-agent.md) - [[Recommended] FireEye Network Security (NX) via AMA](data-connectors/recommended-fireeye-network-security-nx-via-ama.md) ## Flare Systems For more information about the codeless connector platform, see [Create a codele ## Fortinet -- [[Deprecated] Fortinet via Legacy Agent](data-connectors/deprecated-fortinet-via-legacy-agent.md) - [Fortinet FortiNDR Cloud (using Azure Functions)](data-connectors/fortinet-fortindr-cloud.md)-- [[Deprecated] Fortinet FortiWeb Web Application Firewall via Legacy Agent](data-connectors/deprecated-fortinet-fortiweb-web-application-firewall-via-legacy-agent.md) ## Gigamon, Inc For more information about the codeless connector platform, see [Create a codele ## Illumio -- [[Deprecated] Illumio Core via Legacy Agent](data-connectors/deprecated-illumio-core-via-legacy-agent.md) - [[Recommended] Illumio Core via AMA](data-connectors/recommended-illumio-core-via-ama.md) ## Imperva - [Imperva Cloud WAF (using Azure Functions)](data-connectors/imperva-cloud-waf.md) -## Infoblox --- [Infoblox NIOS](data-connectors/infoblox-nios.md)- ## Infosec Global - [InfoSecGlobal Data Connector](data-connectors/infosecglobal-data-connector.md) For more information about the codeless connector platform, see [Create a codele - [Juniper IDP](data-connectors/juniper-idp.md) - [Juniper SRX](data-connectors/juniper-srx.md) -## Kaspersky --- [[Deprecated] Kaspersky Security Center via Legacy Agent](data-connectors/deprecated-kaspersky-security-center-via-legacy-agent.md)-- [[Recommended] Kaspersky Security Center via AMA](data-connectors/recommended-kaspersky-security-center-via-ama.md)- ## Linux - [Microsoft Sysmon For Linux](data-connectors/microsoft-sysmon-for-linux.md) For more information about the codeless connector platform, see [Create a codele ## Microsoft Sentinel Community, Microsoft Corporation -- [[Deprecated] Forcepoint CASB via Legacy Agent](data-connectors/deprecated-forcepoint-casb-via-legacy-agent.md)-- [[Deprecated] Forcepoint CSG via Legacy Agent](data-connectors/deprecated-forcepoint-csg-via-legacy-agent.md)-- [[Deprecated] Forcepoint NGFW via Legacy Agent](data-connectors/deprecated-forcepoint-ngfw-via-legacy-agent.md) - [[Recommended] Forcepoint CASB via AMA](data-connectors/recommended-forcepoint-casb-via-ama.md) - [[Recommended] Forcepoint CSG via AMA](data-connectors/recommended-forcepoint-csg-via-ama.md) - [[Recommended] Forcepoint NGFW via AMA](data-connectors/recommended-forcepoint-ngfw-via-ama.md) For more information about the codeless connector platform, see [Create a codele ## Netwrix -- [[Deprecated] Netwrix Auditor via Legacy Agent](data-connectors/deprecated-netwrix-auditor-via-legacy-agent.md) - [[Recommended] Netwrix Auditor via AMA](data-connectors/recommended-netwrix-auditor-via-ama.md) ## Nginx For more information about the codeless connector platform, see [Create a codele ## Nozomi Networks -- [[Deprecated] Nozomi Networks N2OS via Legacy Agent](data-connectors/deprecated-nozomi-networks-n2os-via-legacy-agent.md) - [[Recommended] Nozomi Networks N2OS via AMA](data-connectors/recommended-nozomi-networks-n2os-via-ama.md) ## NXLog Ltd. 
For more information about the codeless connector platform, see [Create a codele ## OSSEC -- [[Deprecated] OSSEC via Legacy Agent](data-connectors/deprecated-ossec-via-legacy-agent.md) - [[Recommended] OSSEC via AMA](data-connectors/recommended-ossec-via-ama.md) ## Palo Alto Networks -- [[Deprecated] Palo Alto Networks Cortex Data Lake (CDL) via Legacy Agent](data-connectors/deprecated-palo-alto-networks-cortex-data-lake-cdl-via-legacy-agent.md) - [[Recommended] Palo Alto Networks Cortex Data Lake (CDL) via AMA](data-connectors/recommended-palo-alto-networks-cortex-data-lake-cdl-via-ama.md) - [Palo Alto Prisma Cloud CSPM (using Azure Functions)](data-connectors/palo-alto-prisma-cloud-cspm.md) For more information about the codeless connector platform, see [Create a codele ## Ping Identity -- [[Deprecated] PingFederate via Legacy Agent](data-connectors/deprecated-pingfederate-via-legacy-agent.md) - [[Recommended] PingFederate via AMA](data-connectors/recommended-pingfederate-via-ama.md) ## PostgreSQL For more information about the codeless connector platform, see [Create a codele ## TrendMicro -- [[Deprecated] Trend Micro Apex One via Legacy Agent](data-connectors/deprecated-trend-micro-apex-one-via-legacy-agent.md) - [[Recommended] Trend Micro Apex One via AMA](data-connectors/recommended-trend-micro-apex-one-via-ama.md) ## Ubiquiti |
sentinel | Apache Tomcat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/apache-tomcat.md | - Title: "Apache Tomcat connector for Microsoft Sentinel" -description: "Learn how to install the connector Apache Tomcat to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Apache Tomcat connector for Microsoft Sentinel --The Apache Tomcat solution provides the capability to ingest [Apache Tomcat](http://tomcat.apache.org/) events into Microsoft Sentinel. Refer to [Apache Tomcat documentation](http://tomcat.apache.org/tomcat-10.0-doc/logging.html) for more information. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Tomcat_CL<br/> | -| **Data collection rules support** | Not currently supported | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Clients (Source IP)** -- ```kusto -TomcatEvent - - | summarize count() by SrcIpAddr - - | top 10 by count_ - ``` ----## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias TomcatEvent and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Tomcat/Parsers/TomcatEvent.txt).The function usually takes 10-15 minutes to activate after solution installation/update. ---> [!NOTE] - > This data connector has been developed using Apache Tomcat version 10.0.4 --1. Install and onboard the agent for Linux or Windows --Install the agent on the Apache Tomcat Server where the logs are generated. --> Logs from Apache Tomcat Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents. -----2. Configure the logs to be collected --Configure the custom log directory to be collected ----1. Select the link above to open your workspace advanced settings -2. From the left pane, select **Data**, select **Custom Logs** and click **Add+** -3. Click **Browse** to upload a sample of a Tomcat log file (e.g. access.log or error.log). Then, click **Next >** -4. Select **New line** as the record delimiter and click **Next >** -5. Select **Windows** or **Linux** and enter the path to Tomcat logs based on your configuration. Example: -6. After entering the path, click the '+' symbol to apply, then click **Next >** -7. Add **Tomcat_CL** as the custom log Name and click **Done** ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-apachetomcat?tab=Overview) in the Azure Marketplace. |
sentinel | Blackberry Cylanceprotect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/blackberry-cylanceprotect.md | - Title: "Blackberry CylancePROTECT connector for Microsoft Sentinel" -description: "Learn how to install the connector Blackberry CylancePROTECT to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Blackberry CylancePROTECT connector for Microsoft Sentinel --The [Blackberry CylancePROTECT](https://www.blackberry.com/us/en/products/blackberry-protect) connector allows you to easily connect your CylancePROTECT logs with Microsoft Sentinel. This gives you more insight into your organization's network and improves your security operation capabilities. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (CylancePROTECT)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Event Types** -- ```kusto -CylancePROTECT - - | summarize count() by EventName - - | top 10 by count_ - ``` --**Top 10 Triggered Policies** -- ```kusto -CylancePROTECT - - | where EventType == "Threat" - - | summarize count() by PolicyName - - | top 10 by count_ - ``` ----## Prerequisites --To integrate with Blackberry CylancePROTECT, make sure you have: --- **CylancePROTECT**: must be configured to export logs via Syslog.---## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias CylancePROTECT and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Blackberry%20CylancePROTECT/Parsers/CylancePROTECT.txt), on the second line of the query, enter the hostname(s) of your CylancePROTECT device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. --1. Install and onboard the agent for Linux --Typically, you should install the agent on a different computer from the one on which the logs are generated. --> Syslog logs are collected only from **Linux** agents. ---2. Configure the logs to be collected --Configure the facilities you want to collect and their severities. --1. Select the link below to open your workspace **agents configuration**, and select the **Syslog** tab. -2. Select **Add facility** and choose from the drop-down list of facilities. Repeat for all the facilities you want to add. -3. Mark the check boxes for the desired severities for each facility. -4. Click **Apply**. ---3. Configure and connect the CylancePROTECT --[Follow these instructions](https://docs.blackberry.com/) to configure the CylancePROTECT to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-blackberrycylanceprotect?tab=Overview) in the Azure Marketplace. |
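Building on the query samples above, a time-bound variant can show how threat activity trends per policy. This is an illustrative sketch using the CylancePROTECT parser function and the EventType/PolicyName fields shown in the samples; the seven-day window is arbitrary:

```kusto
// Daily trend of CylancePROTECT threat events per policy
CylancePROTECT
| where TimeGenerated > ago(7d)
| where EventType == "Threat"
| summarize Threats = count() by PolicyName, bin(TimeGenerated, 1d)
| sort by TimeGenerated desc
```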
sentinel | Cisco Application Centric Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-application-centric-infrastructure.md | - Title: "Cisco Application Centric Infrastructure connector for Microsoft Sentinel" -description: "Learn how to install the connector Cisco Application Centric Infrastructure to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Cisco Application Centric Infrastructure connector for Microsoft Sentinel --[Cisco Application Centric Infrastructure (ACI)](https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-741487.html) data connector provides the capability to ingest [Cisco ACI logs](https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/all/syslog/guide/b_ACI_System_Messages_Guide/m-aci-system-messages-reference.html) into Microsoft Sentinel. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (CiscoACIEvent)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Resources (DstResourceId)** -- ```kusto -CiscoACIEvent - - | where notempty(DstResourceId) - - | summarize count() by DstResourceId - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**CiscoACIEvent**](https://aka.ms/sentinel-CiscoACI-parser) which is deployed with the Microsoft Sentinel Solution. ---> [!NOTE] - > This data connector has been developed using Cisco ACI Release 1.x --1. Configure Cisco ACI system sending logs via Syslog to remote server where you will install the agent. --[Follow these steps](https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/basic-config/b_ACI_Config_Guide/b_ACI_Config_Guide_chapter_010.html#d2933e4611a1635) to configure Syslog Destination, Destination Group, and Syslog Source. --2. Install and onboard the agent for Linux or Windows --Install the agent on the Server to which the logs will be forwarded. --> Logs on Linux or Windows servers are collected by **Linux** or **Windows** agents. -----3. Check logs in Microsoft Sentinel --Open Log Analytics to check if the logs are received using the Syslog schema. -->**NOTE:** It may take up to 15 minutes before new logs will appear in Syslog table. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoaci?tab=Overview) in the Azure Marketplace. |
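While waiting for the CiscoACIEvent parser to populate, a raw Syslog check can confirm the forwarder is delivering data at all. A minimal sketch (the 15-minute window mirrors the note above):

```kusto
// Verify raw syslog is arriving from the forwarder before querying CiscoACIEvent
Syslog
| where TimeGenerated > ago(15m)
| summarize Events = count() by Computer, Facility
| sort by Events desc
```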
sentinel | Deprecated Akamai Security Events Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-akamai-security-events-via-legacy-agent.md | - Title: "[Deprecated] Akamai Security Events via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Akamai Security Events via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] Akamai Security Events via Legacy Agent connector for Microsoft Sentinel ---Akamai Solution for Microsoft Sentinel provides the capability to ingest [Akamai Security Events](https://www.akamai.com/us/en/products/security/) into Microsoft Sentinel. Refer to [Akamai SIEM Integration documentation](https://developer.akamai.com/tools/integrations/siem) for more information. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (AkamaiSecurityEvents)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Countries** - ```kusto -AkamaiSIEMEvent - - | summarize count() by SrcGeoCountry - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Akamai Security Events and load the function code or click [here](https://aka.ms/sentinel-akamaisecurityevents-parser), on the second line of the query, enter the hostname(s) of your Akamai Security Events device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python -version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Forward Common Event Format (CEF) logs to Syslog agent --[Follow these steps](https://developer.akamai.com/tools/integrations/siem) to configure Akamai CEF connector to send Syslog messages in CEF format to the proxy machine. 
Make sure to send the logs to port 514 TCP on the machine's IP address. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python --version -->2. You must have elevated permissions (sudo) on your machine -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-akamai?tab=Overview) in the Azure Marketplace. |
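As a generic validation step for this CEF pipeline, you can also inspect CommonSecurityLog directly before the Akamai Security Events parser activates. This sketch is vendor-agnostic on purpose; check which DeviceVendor/DeviceProduct values your Akamai SIEM integration actually emits before filtering further:

```kusto
// Generic CEF arrival check (the 20-minute window mirrors the note above)
CommonSecurityLog
| where TimeGenerated > ago(20m)
| summarize Events = count() by DeviceVendor, DeviceProduct
| sort by Events desc
```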
sentinel | Deprecated Aruba Clearpass Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-aruba-clearpass-via-legacy-agent.md | - Title: "[Deprecated] Aruba ClearPass via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Aruba ClearPass via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] Aruba ClearPass via Legacy Agent connector for Microsoft Sentinel --The [Aruba ClearPass](https://www.arubanetworks.com/products/security/network-access-control/secure-access/) connector allows you to easily connect your Aruba ClearPass with Microsoft Sentinel, to create custom dashboards, alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (ArubaClearPass)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | --## Query samples --**Top 10 Events by Username** - ```kusto -ArubaClearPass - - | summarize count() by UserName -- | top 10 by count_ - ``` --**Top 10 Error Codes** - ```kusto -ArubaClearPass - - | summarize count() by ErrorCode -- | top 10 by count_ - ``` ----## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias ArubaClearPass and load the function code or click [here](https://aka.ms/sentinel-arubaclearpass-parser). The function usually takes 10-15 minutes to activate after solution installation/update. --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python --version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Forward Aruba ClearPass logs to a Syslog agent --Configure Aruba ClearPass to forward Syslog messages in CEF format to your Microsoft Sentinel workspace via the Syslog agent. -1.
[Follow these instructions](https://www.arubanetworks.com/techdocs/ClearPass/6.7/PolicyManager/Content/CPPM_UserGuide/Admin/syslogExportFilters_add_syslog_filter_general.htm) to configure the Aruba ClearPass to forward syslog. -2. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python -version -->2. You must have elevated permissions (sudo) on your machine -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-arubaclearpass?tab=Overview) in the Azure Marketplace. |
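To combine the two query samples above into a single view, something along these lines works; the field names come from the ArubaClearPass parser samples, and the one-day window is illustrative:

```kusto
// Events per user and error code over the last day (ArubaClearPass parser from the solution)
ArubaClearPass
| where TimeGenerated > ago(1d)
| summarize Events = count() by UserName, ErrorCode
| top 10 by Events
```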
sentinel | Deprecated Broadcom Symantec Dlp Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-broadcom-symantec-dlp-via-legacy-agent.md | - Title: "[Deprecated] Broadcom Symantec DLP via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Broadcom Symantec DLP via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] Broadcom Symantec DLP via Legacy Agent connector for Microsoft Sentinel --The [Broadcom Symantec Data Loss Prevention (DLP)](https://www.broadcom.com/products/cyber-security/information-protection/data-loss-prevention) connector allows you to easily connect your Symantec DLP with Microsoft Sentinel, to create custom dashboards, alerts, and improve investigation. This gives you more insight into your organization's information, where it travels, and improves your security operation capabilities. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (SymantecDLP)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Triggered Activities** - ```kusto -SymantecDLP - - | summarize count() by Activity -- | top 10 by count_ - ``` --**Top 10 Filenames** - ```kusto -SymantecDLP - - | summarize count() by FileName -- | top 10 by count_ - ``` ----## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias SymantecDLP and load the function code or click [here](https://aka.ms/sentinel-symantecdlp-parser). The function usually takes 10-15 minutes to activate after solution installation/update. --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python --version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Forward Symantec DLP logs to a Syslog agent --Configure Symantec DLP to forward Syslog messages in CEF format to your Microsoft Sentinel workspace via the Syslog agent. -1.
[Follow these instructions](https://knowledge.broadcom.com/external/article/159509/generating-syslog-messages-from-data-los.html) to configure the Symantec DLP to forward syslog -2. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python --version -->2. You must have elevated permissions (sudo) on your machine -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-broadcomsymantecdlp?tab=Overview) in the Azure Marketplace. |
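For hunting, the parsed fields shown in the samples above can be combined; the file-extension filter here is purely an example to adapt to your environment:

```kusto
// DLP activities for a sample file-name pattern (SymantecDLP parser from the solution)
SymantecDLP
| where TimeGenerated > ago(7d)
| where FileName endswith ".xlsx"   // illustrative filter; adjust as needed
| summarize Events = count() by Activity, FileName
| top 10 by Events
```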
sentinel | Deprecated Cisco Secure Email Gateway Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-cisco-secure-email-gateway-via-legacy-agent.md | - Title: "[Deprecated] Cisco Secure Email Gateway via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Cisco Secure Email Gateway via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] Cisco Secure Email Gateway via Legacy Agent connector for Microsoft Sentinel --The [Cisco Secure Email Gateway (SEG)](https://www.cisco.com/c/en/us/products/security/email-security/https://docsupdatetracker.net/index.html) data connector provides the capability to ingest [Cisco SEG Consolidated Event Logs](https://www.cisco.com/c/en/us/td/docs/security/esa/esa14-0/user_guide/b_ESA_Admin_Guide_14-0/b_ESA_Admin_Guide_12_1_chapter_0100111.html#con_1061902) into Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (CiscoSEG)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Senders** - ```kusto -CiscoSEGEvent - - | where isnotempty(SrcUserName) - - | summarize count() by SrcUserName - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**CiscoSEGEvent**](https://aka.ms/sentinel-CiscoSEG-parser) which is deployed with the Microsoft Sentinel Solution. ---> [!NOTE] - > This data connector has been developed using AsyncOS 14.0 for Cisco Secure Email Gateway --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python -version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Forward Common Event Format (CEF) logs to Syslog agent --Follow these steps to configure Cisco Secure Email Gateway to forward logs via syslog: --2.1. Configure [Log Subscription](https://www.cisco.com/c/en/us/td/docs/security/esa/esa14-0/user_guide/b_ESA_Admin_Guide_14-0/b_ESA_Admin_Guide_12_1_chapter_0100111.html#con_1134718) -->**NOTE:** Select **Consolidated Event Logs** in Log Type field. --3. 
Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python -version -->2. You must have elevated permissions (sudo) on your machine -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoseg?tab=Overview) in the Azure Marketplace. |
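After the Consolidated Event Logs subscription is configured, a time-bound variant of the query sample above can confirm parsed mail flow; a minimal sketch:

```kusto
// Messages per sender over the last day (CiscoSEGEvent parser from the solution)
CiscoSEGEvent
| where TimeGenerated > ago(1d)
| where isnotempty(SrcUserName)
| summarize Messages = count() by SrcUserName
| sort by Messages desc
```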
sentinel | Deprecated Claroty Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-claroty-via-legacy-agent.md | - Title: "[Deprecated] Claroty via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Claroty via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] Claroty via Legacy Agent connector for Microsoft Sentinel --The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/resources/datasheets/continuous-threat-detection) and [Secure Remote Access](https://claroty.com/industrial-cybersecurity/sra) events into Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (Claroty)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Destinations** - ```kusto -ClarotyEvent - - | where isnotempty(DstIpAddr) - - | summarize count() by DstIpAddr - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**ClarotyEvent**](https://aka.ms/sentinel-claroty-parser) which is deployed with the Microsoft Sentinel Solution. --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python -version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Configure Claroty to send logs using CEF --Configure log forwarding using CEF: --1. Navigate to the **Syslog** section of the Configuration menu. --2. Select **+Add**. --3. In the **Add New Syslog Dialog** specify Remote Server **IP**, **Port**, **Protocol** and select **Message Format** - **CEF**. --4. Choose **Save** to exit the **Add Syslog dialog**. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. 
Make sure that you have Python on your machine using the following command: python -version -->2. You must have elevated permissions (sudo) on your machine -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-claroty?tab=Overview) in the Azure Marketplace. |
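To watch trends rather than totals, the query sample above can be binned by time; an illustrative sketch using the ClarotyEvent parser:

```kusto
// Hourly count of events per destination host (ClarotyEvent parser from the solution)
ClarotyEvent
| where isnotempty(DstIpAddr)
| summarize Events = count() by DstIpAddr, bin(TimeGenerated, 1h)
| sort by TimeGenerated desc
```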
sentinel | Deprecated Crowdstrike Falcon Endpoint Protection Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-crowdstrike-falcon-endpoint-protection-via-legacy-agent.md | - Title: "[Deprecated] CrowdStrike Falcon Endpoint Protection via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] CrowdStrike Falcon Endpoint Protection via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 05/30/2024------# [Deprecated] CrowdStrike Falcon Endpoint Protection via Legacy Agent connector for Microsoft Sentinel --The [CrowdStrike Falcon Endpoint Protection](https://www.crowdstrike.com/endpoint-security-products/) connector allows you to easily connect your CrowdStrike Falcon Event Stream with Microsoft Sentinel, to create custom dashboards, alerts, and improve investigation. This gives you more insight into your organization's endpoints and improves your security operation capabilities. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (CrowdStrikeFalconEventStream)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Hosts with Detections** -- ```kusto -CrowdStrikeFalconEventStream - - | where EventType == "DetectionSummaryEvent" -- | summarize count() by DstHostName - - | top 10 by count_ - ``` --**Top 10 Users with Detections** -- ```kusto -CrowdStrikeFalconEventStream - - | where EventType == "DetectionSummaryEvent" -- | summarize count() by DstUserName - - | top 10 by count_ - ``` ----## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Crowd Strike Falcon Endpoint Protection and load the function code or click [here](https://aka.ms/sentinel-crowdstrikefalconendpointprotection-parser), on the second line of the query, enter the hostname(s) of your CrowdStrikeFalcon device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. --1.0 Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --1. Make sure that you have Python on your machine using the following command: python -version. --2. 
You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Forward CrowdStrike Falcon Event Stream logs to a Syslog agent --Deploy the CrowdStrike Falcon SIEM Collector to forward Syslog messages in CEF format to your Microsoft Sentinel workspace via the Syslog agent. -1. [Follow these instructions](https://www.crowdstrike.com/blog/tech-center/integrate-with-your-siem/) to deploy the SIEM Collector and forward syslog -2. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. --It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --1. Make sure that you have Python on your machine using the following command: python -version. --2. You must have elevated permissions (sudo) on your machine -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-crowdstrikefalconep?tab=Overview) in the Azure Marketplace. |
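The two detection samples above can also be merged into one hourly view; the field names and the DetectionSummaryEvent value are taken from those samples, and the 24-hour window is illustrative:

```kusto
// Hourly detections per host and user (CrowdStrikeFalconEventStream parser)
CrowdStrikeFalconEventStream
| where TimeGenerated > ago(24h)
| where EventType == "DetectionSummaryEvent"
| summarize Detections = count() by DstHostName, DstUserName, bin(TimeGenerated, 1h)
| sort by Detections desc
```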
sentinel | Deprecated Fireeye Network Security Nx Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-fireeye-network-security-nx-via-legacy-agent.md | - Title: "[Deprecated] FireEye Network Security (NX) via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] FireEye Network Security (NX) via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] FireEye Network Security (NX) via Legacy Agent connector for Microsoft Sentinel --The [FireEye Network Security (NX)](https://www.fireeye.com/products/network-security.html) data connector provides the capability to ingest FireEye Network Security logs into Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (FireEyeNX)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Sources** - ```kusto -FireEyeNXEvent - - | where isnotempty(SrcIpAddr) - - | summarize count() by SrcIpAddr - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**FireEyeNXEvent**](https://aka.ms/sentinel-FireEyeNX-parser) which is deployed with the Microsoft Sentinel Solution. ---> [!NOTE] - > This data connector has been developed using FEOS release v9.0 --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python -version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Configure FireEye NX to send logs using CEF --Complete the following steps to send data using CEF: --2.1. Log into the FireEye appliance with an administrator account --2.2. Click **Settings** --2.3. Click **Notifications** --Click **rsyslog** --2.4. Check the **Event type** check box --2.5. Make sure Rsyslog settings are: --- Default format: CEF--- Default delivery: Per event--- Default send as: Alert--3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. 
--If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python -version -->2. You must have elevated permissions (sudo) on your machine -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-fireeyenx?tab=Overview) in the Azure Marketplace. |
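Once rsyslog notifications are enabled on the appliance, a short arrival check against the parser output can confirm the pipeline end to end; a minimal sketch (the 20-minute window mirrors the note above):

```kusto
// Confirm FireEyeNXEvent is populating after enabling rsyslog notifications
FireEyeNXEvent
| where TimeGenerated > ago(20m)
| where isnotempty(SrcIpAddr)
| summarize Events = count() by SrcIpAddr
| top 10 by Events
```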
sentinel | Deprecated Forcepoint Casb Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-forcepoint-casb-via-legacy-agent.md | - Title: "[Deprecated] Forcepoint CASB via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Forcepoint CASB via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] Forcepoint CASB via Legacy Agent connector for Microsoft Sentinel --The Forcepoint CASB (Cloud Access Security Broker) Connector allows you to automatically export CASB logs and events into Microsoft Sentinel in real-time. This enriches visibility into user activities across locations and cloud applications, enables further correlation with data from Azure workloads and other feeds, and improves monitoring capability with Workbooks inside Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (ForcepointCASB)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) | --## Query samples --**Top 5 Users With The Highest Number Of Logs** - ```kusto -CommonSecurityLog -- | summarize Count = count() by DestinationUserName -- | top 5 by DestinationUserName -- | render barchart - ``` --**Top 5 Users by Number of Failed Attempts ** - ```kusto -CommonSecurityLog -- | extend outcome = coalesce(column_ifexists("EventOutcome", ""), tostring(split(split(AdditionalExtensions, ";", 2)[0], "=", 1)[0]), "") -- | extend reason = coalesce(column_ifexists("Reason", ""), tostring(split(split(AdditionalExtensions, ";", 3)[0], "=", 1)[0]), "") -- | where outcome =="Failure" -- | summarize Count= count() by DestinationUserName -- | render barchart - ``` ----## Vendor installation instructions --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python -version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Forward Common Event Format (CEF) logs to Syslog agent --Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure you to send the logs to port 514 TCP on the machine's IP address. --3. 
Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python -version - ->2. You must have elevated permissions (sudo) on your machine -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) --5. Forcepoint integration installation guide --To complete the installation of this Forcepoint product integration, follow the guide linked below. --[Installation Guide >](https://frcpnt.com/casb-sentinel) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-casb?tab=Overview) in the Azure Marketplace. |
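Because this connector writes to CommonSecurityLog directly, a time-binned variant of the first query sample above can show per-user volume over time; an illustrative sketch (add a filter for your CASB device identifiers if the workspace receives other CEF sources):

```kusto
// Daily event volume per user (CommonSecurityLog, as in the samples above)
CommonSecurityLog
| where TimeGenerated > ago(7d)
| summarize Events = count() by DestinationUserName, bin(TimeGenerated, 1d)
| sort by TimeGenerated desc
```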
sentinel | Deprecated Forcepoint Csg Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-forcepoint-csg-via-legacy-agent.md | - Title: "[Deprecated] Forcepoint CSG via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Forcepoint CSG via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 11/29/2023------# [Deprecated] Forcepoint CSG via Legacy Agent connector for Microsoft Sentinel --Forcepoint Cloud Security Gateway is a converged cloud security service that provides visibility, control, and threat protection for users and data, wherever they are. For more information visit: https://www.forcepoint.com/product/cloud-security-gateway --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (Forcepoint CSG)<br/> CommonSecurityLog (Forcepoint CSG)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) | --## Query samples --**Top 5 Web requested Domains with log severity equal to 6 (Medium)** - ```kusto -CommonSecurityLog -- | where TimeGenerated <= ago(0m) -- | where DeviceVendor == "Forcepoint CSG" -- | where DeviceProduct == "Web" -- | where LogSeverity == 6 -- | where DeviceCustomString2 != "" -- | summarize Count=count() by DeviceCustomString2 -- | top 5 by Count -- | render piechart - ``` --**Top 5 Web Users with 'Action' equal to 'Blocked'** - ```kusto -CommonSecurityLog -- | where TimeGenerated <= ago(0m) -- | where DeviceVendor == "Forcepoint CSG" -- | where DeviceProduct == "Web" -- | where Activity == "Blocked" -- | where SourceUserID != "Not available" -- | summarize Count=count() by SourceUserID -- | top 5 by Count -- | render piechart - ``` --**Top 5 Sender Email Addresses Where Spam Score Greater Than 10.0** - ```kusto -CommonSecurityLog -- | where TimeGenerated <= ago(0m) -- | where DeviceVendor == "Forcepoint CSG" -- | where DeviceProduct == "Email" -- | where DeviceCustomFloatingPoint1 > 10.0 -- | summarize Count=count() by SourceUserName -- | top 5 by Count -- | render barchart - ``` ----## Vendor installation instructions --1. Linux Syslog agent configuration --This integration requires the Linux Syslog agent to collect your Forcepoint Cloud Security Gateway Web/Email logs on port 514 TCP as Common Event Format (CEF) and forward them to Microsoft Sentinel. -- Your Data Connector Syslog Agent Installation Command is: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Implementation options --The integration is made available with two implementations options. --2.1 Docker Implementation --Leverages docker images where the integration component is already installed with all necessary dependencies. --Follow the instructions provided in the Integration Guide linked below. --[Integration Guide >](https://frcpnt.com/csg-sentinel) --2.2 Traditional Implementation --Requires the manual deployment of the integration component inside a clean Linux machine. --Follow the instructions provided in the Integration Guide linked below. --[Integration Guide >](https://frcpnt.com/csg-sentinel) --3. 
Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python -version - ->2. You must have elevated permissions (sudo) on your machine -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF). ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-csg?tab=Overview) in the Azure Marketplace. |
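The same filters used in the query samples above can be combined to track blocked web activity per domain; all field and value names below come from those samples, and the one-day window is illustrative:

```kusto
// Top blocked web domains over the last day (same DeviceVendor/DeviceProduct values as above)
CommonSecurityLog
| where TimeGenerated > ago(1d)
| where DeviceVendor == "Forcepoint CSG"
| where DeviceProduct == "Web"
| where Activity == "Blocked"
| where DeviceCustomString2 != ""
| summarize Count = count() by DeviceCustomString2
| top 5 by Count
```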
sentinel | Deprecated Forcepoint Ngfw Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-forcepoint-ngfw-via-legacy-agent.md | - Title: "[Deprecated] Forcepoint NGFW via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Forcepoint NGFW via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] Forcepoint NGFW via Legacy Agent connector for Microsoft Sentinel --The Forcepoint NGFW (Next Generation Firewall) connector allows you to automatically export user-defined Forcepoint NGFW logs into Microsoft Sentinel in real-time. This enriches visibility into user activities recorded by NGFW, enables further correlation with data from Azure workloads and other feeds, and improves monitoring capability with Workbooks inside Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (ForcePointNGFW)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Community](https://github.com/Azure/Azure-Sentinel/issues) | --## Query samples --**Show all terminated actions from the Forcepoint NGFW** - ```kusto --CommonSecurityLog -- | where DeviceVendor == "Forcepoint" -- | where DeviceProduct == "NGFW" -- | where DeviceAction == "Terminate" -- ``` --**Show all Forcepoint NGFW with suspected compromise behaviour** - ```kusto --CommonSecurityLog -- | where DeviceVendor == "Forcepoint" -- | where DeviceProduct == "NGFW" -- | where Activity contains "compromise" -- ``` --**Show chart grouping all Forcepoint NGFW events by Activity type** - ```kusto --CommonSecurityLog -- | where DeviceVendor == "Forcepoint" -- | where DeviceProduct == "NGFW" -- | summarize count=count() by Activity - - | render barchart -- ``` ----## Vendor installation instructions --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python -version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Forward Common Event Format (CEF) logs to Syslog agent --Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure you to send the logs to port 514 TCP on the machine's IP address. --3. 
Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python - version -->2. You must have elevated permissions (sudo) on your machine -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) --5. Forcepoint integration installation guide --To complete the installation of this Forcepoint product integration, follow the guide linked below. --[Installation Guide >](https://frcpnt.com/ngfw-sentinel) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-forcepoint-ngfw?tab=Overview) in the Azure Marketplace. |
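As a follow-on to the query samples above, a time chart of terminated connections can make spikes easier to spot; the filter values are taken verbatim from those samples:

```kusto
// Hourly trend of terminated connections (Forcepoint NGFW)
CommonSecurityLog
| where DeviceVendor == "Forcepoint"
| where DeviceProduct == "NGFW"
| where DeviceAction == "Terminate"
| summarize Count = count() by bin(TimeGenerated, 1h)
| render timechart
```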
sentinel | Deprecated Fortinet Fortiweb Web Application Firewall Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-fortinet-fortiweb-web-application-firewall-via-legacy-agent.md | - Title: "[Deprecated] Fortinet FortiWeb Web Application Firewall via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Fortinet FortiWeb Web Application Firewall via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 05/30/2024------# [Deprecated] Fortinet FortiWeb Web Application Firewall via Legacy Agent connector for Microsoft Sentinel --The [fortiweb](https://www.fortinet.com/products/web-application-firewall/fortiweb) data connector provides the capability to ingest Threat Analytics and events into Microsoft Sentinel. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (Fortiweb)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | --## Query samples --**Top 10 Threats** -- ```kusto -Fortiweb - - | where isnotempty(EventType) - - | summarize count() by EventType - - | top 10 by count_ - ``` ----## Vendor installation instructions --1.0 Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --1. Make sure that you have Python on your machine using the following command: python -version. --2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Forward Common Event Format (CEF) logs to Syslog agent --Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure you to send the logs to port 514 TCP on the machine's IP address. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. --It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --1. Make sure that you have Python on your machine using the following command: python -version --2. 
You must have elevated permissions (sudo) on your machine. -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py && sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy. ---[Learn more](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-fortiwebcloud?tab=Overview) in the Azure Marketplace. |
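For the FortiWeb connector above, a minimal sketch like the following can confirm recent ingestion; it assumes the **Fortiweb** Kusto function deployed with the solution has already activated.

```kusto
// Sketch: check that FortiWeb events have arrived recently.
// Assumes the Fortiweb parser function from the solution is deployed and active.
Fortiweb
| where TimeGenerated > ago(30m)
| summarize Events = count() by bin(TimeGenerated, 5m)
| sort by TimeGenerated desc
```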
sentinel | Deprecated Fortinet Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-fortinet-via-legacy-agent.md | - Title: "[Deprecated] Fortinet via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Fortinet via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 05/30/2024------# [Deprecated] Fortinet via Legacy Agent connector for Microsoft Sentinel --The Fortinet firewall connector allows you to easily connect your Fortinet logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (Fortinet)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**All logs** -- ```kusto --CommonSecurityLog -- | where DeviceVendor == "Fortinet" -- | where DeviceProduct startswith "Fortigate" -- - | sort by TimeGenerated - ``` --**Summarize by destination IP and port** -- ```kusto --CommonSecurityLog -- | where DeviceVendor == "Fortinet" -- | where DeviceProduct startswith "Fortigate" -- - | summarize count() by DestinationIP, DestinationPort, TimeGenerated - - | sort by TimeGenerated - ``` ----## Vendor installation instructions --1.0 Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --Notice that the data from all regions will be stored in the selected workspace. --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --1. Make sure that you have Python on your machine using the following command: python --version. --2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py && sudo python cef_installer.py {0} {1}` --2. Forward Fortinet logs to Syslog agent --Set your Fortinet to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address. 
---Copy the CLI commands below and: -- Replace "server <ip address>" with the Syslog agent's IP address.-- Set the "<facility_name>" to use the facility you configured in the Syslog agent (by default, the agent sets this to local4).-- Set the Syslog port to 514, the port your agent uses.-- To enable CEF format in early FortiOS versions, you may need to run the command "set csv disable".--For more information, go to the [Fortinet Document Library](https://aka.ms/asi-syslog-fortinet-fortinetdocumentlibrary), choose your version, and use the "Handbook" and "Log Message Reference" PDFs. --[Learn more](https://aka.ms/CEF-Fortinet) -- Set up the connection using the CLI by running the following commands: -- -```bash -config log syslogd setting - set status enable -set format cef -set port 514 -set server <ip_address_of_Receiver> -end -``` ---1.3 Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. --It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --1. Make sure that you have Python on your machine using the following command: python --version --2. You must have elevated permissions (sudo) on your machine. -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py && sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy. ---[Learn more](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-fortinetfortigate?tab=Overview) in the Azure Marketplace. |
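After applying the FortiOS CLI settings above, a verification sketch reusing this article's own filters can show whether FortiGate CEF events are arriving and how fresh they are:

```kusto
// Sketch: verify the forwarder is delivering FortiGate CEF events, using the
// same DeviceVendor/DeviceProduct filters as the query samples above.
CommonSecurityLog
| where DeviceVendor == "Fortinet"
| where DeviceProduct startswith "Fortigate"
| summarize LastEvent = max(TimeGenerated), Events = count()
```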
sentinel | Deprecated Illumio Core Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-illumio-core-via-legacy-agent.md | - Title: "[Deprecated] Illumio Core via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Illumio Core via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] Illumio Core via Legacy Agent connector for Microsoft Sentinel --The [Illumio Core](https://www.illumio.com/products/) data connector provides the capability to ingest Illumio Core logs into Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (IllumioCore)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft](https://support.microsoft.com) | --## Query samples --**Top 10 Event Types** - ```kusto -IllumioCoreEvent - - | where isnotempty(EventType) - - | summarize count() by EventType - - | top 10 by count_ - ``` ----## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function, deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias IllumioCoreEvent and load the function code, or click [here](https://aka.ms/sentinel-IllumioCore-parser). The function usually takes 10-15 minutes to activate after solution installation/update and maps Illumio Core events to Microsoft Sentinel Information Model (ASIM). --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace. --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python --version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py && sudo python cef_installer.py {0} {1}` --2. Configure Illumio Core to send logs using CEF --2.1 Configure Event Format -- 1. From the PCE web console menu, choose **Settings > Event Settings** to view your current settings. -- 2. Click **Edit** to change the settings. -- 3. Set **Event Format** to CEF. -- 4. (Optional) Configure **Event Severity** and **Retention Period**. --2.2 Configure event forwarding to an external syslog server -- 1. From the PCE web console menu, choose **Settings > Event Settings**. -- 2. Click **Add**. -- 3. Click **Add Repository**. -- 4. 
Complete the **Add Repository** dialog. -- 5. Click **OK** to save the event forwarding configuration. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python --version -->2. You must have elevated permissions (sudo) on your machine. -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py && sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy. ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-illumiocore?tab=Overview) in the Azure Marketplace. |
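To confirm the Illumio Core setup above, a sketch such as the following can be run once the **IllumioCoreEvent** parser deployed with the solution has activated (typically within 10-15 minutes):

```kusto
// Sketch: confirm the IllumioCoreEvent parser returns data after setup.
// Assumes the parser function deployed with the solution is active.
IllumioCoreEvent
| where TimeGenerated > ago(1h)
| summarize Events = count() by EventType
| sort by Events desc
```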
sentinel | Deprecated Kaspersky Security Center Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-kaspersky-security-center-via-legacy-agent.md | - Title: "[Deprecated] Kaspersky Security Center via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Kaspersky Security Center via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] Kaspersky Security Center via Legacy Agent connector for Microsoft Sentinel --The [Kaspersky Security Center](https://support.kaspersky.com/KSC/13/en-US/3396.htm) data connector provides the capability to ingest [Kaspersky Security Center logs](https://support.kaspersky.com/KSC/13/en-US/151336.htm) into Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (KasperskySC)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Destinations** - ```kusto -KasperskySCEvent - - | where isnotempty(DstIpAddr) - - | summarize count() by DstIpAddr - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function, **KasperskySCEvent**, to work as expected; the parser is deployed with the Microsoft Sentinel Solution. ---> [!NOTE] - > This data connector has been developed using Kaspersky Security Center 13.1. --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace. --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python --version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py && sudo python cef_installer.py {0} {1}` --2. Configure Kaspersky Security Center to send logs using CEF --[Follow the instructions](https://support.kaspersky.com/KSC/13/en-US/89277.htm) to configure event export from Kaspersky Security Center. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. 
Make sure that you have Python on your machine using the following command: python --version -->2. You must have elevated permissions (sudo) on your machine. -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py && sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy. ---[Learn more >](https://aka.ms/SecureCEF) |
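A small follow-up query can verify that Kaspersky Security Center events are flowing after the steps above; it assumes the **KasperskySCEvent** parser from the solution is active.

```kusto
// Sketch: spot-check recent Kaspersky Security Center events via the
// KasperskySCEvent parser deployed with the solution.
KasperskySCEvent
| where TimeGenerated > ago(1h)
| where isnotempty(DstIpAddr)
| summarize Events = count() by DstIpAddr
| top 10 by Events
```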
sentinel | Deprecated Netwrix Auditor Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-netwrix-auditor-via-legacy-agent.md | - Title: "[Deprecated] Netwrix Auditor via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Netwrix Auditor via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] Netwrix Auditor via Legacy Agent connector for Microsoft Sentinel --Netwrix Auditor data connector provides the capability to ingest [Netwrix Auditor (formerly Stealthbits Privileged Activity Manager)](https://www.netwrix.com/auditor.html) events into Microsoft Sentinel. Refer to [Netwrix documentation](https://helpcenter.netwrix.com/) for more information. --## Connector attributes --| Connector attribute | Description | -| | | -| **Kusto function alias** | NetwrixAuditor | -| **Kusto function url** | https://aka.ms/sentinel-netwrixauditor-parser | -| **Log Analytics table(s)** | CommonSecurityLog<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Netwrix Auditor Events - All Activities.** - ```kusto -NetwrixAuditor - - | sort by TimeGenerated desc - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on NetwrixAuditor parser based on a Kusto Function to work as expected. This parser is installed along with solution installation. --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python -version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Configure Netwrix Auditor to send logs using CEF --[Follow the instructions](https://www.netwrix.com/download/QuickStart/Netwrix_Auditor_Add-on_for_HPE_ArcSight_Quick_Start_Guide.pdf) to configure event export from Netwrix Auditor. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python -version -->2. 
You must have elevated permissions (sudo) on your machine. -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py && sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy. ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-netwrixauditor?tab=Overview) in the Azure Marketplace. |
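To verify the Netwrix Auditor connector above end to end, a sketch like this uses the **NetwrixAuditor** function named in the connector attributes:

```kusto
// Sketch: confirm Netwrix Auditor activity is being parsed, using the
// NetwrixAuditor function referenced in the connector attributes above.
NetwrixAuditor
| where TimeGenerated > ago(1d)
| summarize Events = count() by bin(TimeGenerated, 1h)
| sort by TimeGenerated desc
```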
sentinel | Deprecated Nozomi Networks N2os Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-nozomi-networks-n2os-via-legacy-agent.md | - Title: "[Deprecated] Nozomi Networks N2OS via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Nozomi Networks N2OS via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] Nozomi Networks N2OS via Legacy Agent connector for Microsoft Sentinel --The [Nozomi Networks](https://www.nozominetworks.com/) data connector provides the capability to ingest Nozomi Networks Events into Microsoft Sentinel. Refer to the Nozomi Networks [PDF documentation](https://www.nozominetworks.com/resources/data-sheets-brochures-learning-guides/) for more information. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (NozomiNetworks)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Devices** - ```kusto -NozomiNetworksEvents - - | summarize count() by DvcHostname - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**NozomiNetworksEvents**](https://aka.ms/sentinel-NozomiNetworks-parser) which is deployed with the Microsoft Sentinel Solution. --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python -version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Forward Common Event Format (CEF) logs to Syslog agent --Follow these steps to configure Nozomi Networks device for sending Alerts, Audit Logs, Health Logs log via syslog in CEF format: --> 1. Log in to the Guardian console. --> 2. Navigate to Administration->Data Integration, press +Add and select the Common Event Format (CEF) from the drop down --> 3. Create New Endpoint using the appropriate host information and enable Alerts, Audit Logs, Health Logs for sending. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. 
-->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python --version -->2. You must have elevated permissions (sudo) on your machine. -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py && sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy. ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-nozominetworks?tab=Overview) in the Azure Marketplace. |
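For the Nozomi Networks connector above, a quick sketch can list which Guardian sensors reported recently; it assumes the **NozomiNetworksEvents** parser deployed with the solution is active.

```kusto
// Sketch: list the Nozomi Networks sensors that reported events recently.
// Assumes the NozomiNetworksEvents parser from the solution is deployed.
NozomiNetworksEvents
| where TimeGenerated > ago(1h)
| summarize Events = count() by DvcHostname
| sort by Events desc
```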
sentinel | Deprecated Ossec Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-ossec-via-legacy-agent.md | - Title: "[Deprecated] OSSEC via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] OSSEC via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] OSSEC via Legacy Agent connector for Microsoft Sentinel --OSSEC data connector provides the capability to ingest [OSSEC](https://www.ossec.net/) events into Microsoft Sentinel. Refer to [OSSEC documentation](https://www.ossec.net/docs) for more information. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (OSSEC)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | --## Query samples --**Top 10 Rules** - ```kusto -OSSECEvent - - | summarize count() by RuleName - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias OSSEC and load the function code or click [here](https://aka.ms/sentinel-OSSECEvent-parser), on the second line of the query, enter the hostname(s) of your OSSEC device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python -version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Forward Common Event Format (CEF) logs to Syslog agent --[Follow these steps](https://www.ossec.net/docs/docs/manual/output/syslog-output.html) to configure OSSEC sending alerts via syslog. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. 
--If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python --version -->2. You must have elevated permissions (sudo) on your machine. -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py && sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy. ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ossec?tab=Overview) in the Azure Marketplace. |
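A short check for the OSSEC connector above, mirroring the article's own query sample; it assumes the **OSSECEvent** parser has been edited with your device hostnames as described in the note above:

```kusto
// Sketch: confirm OSSEC alerts are arriving and which rules fire most,
// following the query sample shown in the article above.
OSSECEvent
| where TimeGenerated > ago(1h)
| summarize Events = count() by RuleName
| top 10 by Events
```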
sentinel | Deprecated Palo Alto Networks Cortex Data Lake Cdl Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-palo-alto-networks-cortex-data-lake-cdl-via-legacy-agent.md | - Title: "[Deprecated] Palo Alto Networks Cortex Data Lake (CDL) via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Palo Alto Networks Cortex Data Lake (CDL) via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] Palo Alto Networks Cortex Data Lake (CDL) via Legacy Agent connector for Microsoft Sentinel --The [Palo Alto Networks CDL](https://www.paloaltonetworks.com/cortex/cortex-data-lake) data connector provides the capability to ingest [CDL logs](https://docs.paloaltonetworks.com/strata-logging-service/log-reference/log-forwarding-schema-overview) into Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (PaloAltoNetworksCDL)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Destinations** - ```kusto -PaloAltoCDLEvent - - | where isnotempty(DstIpAddr) - - | summarize count() by DstIpAddr - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**PaloAltoCDLEvent**](https://aka.ms/sentinel-paloaltocdl-parser) which is deployed with the Microsoft Sentinel Solution. --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python -version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Configure Cortex Data Lake to forward logs to a Syslog Server using CEF --[Follow the instructions](https://docs.paloaltonetworks.com/cortex/cortex-data-lake/cortex-data-lake-getting-started/get-started-with-log-forwarding-app/forward-logs-from-logging-service-to-syslog-server.html) to configure logs forwarding from Cortex Data Lake to a Syslog Server. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. 
-->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python --version -->2. You must have elevated permissions (sudo) on your machine. -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py && sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy. ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltocdl?tab=Overview) in the Azure Marketplace. |
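To confirm Cortex Data Lake forwarding for the connector above, a minimal freshness check through the **PaloAltoCDLEvent** parser might look like this (assuming the parser deployed with the solution is active):

```kusto
// Sketch: verify Cortex Data Lake forwarding by checking for recent events
// through the PaloAltoCDLEvent parser.
PaloAltoCDLEvent
| where TimeGenerated > ago(1h)
| summarize LastEvent = max(TimeGenerated), Events = count()
```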
sentinel | Deprecated Pingfederate Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-pingfederate-via-legacy-agent.md | - Title: "[Deprecated] PingFederate via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] PingFederate via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Deprecated] PingFederate via Legacy Agent connector for Microsoft Sentinel --The [PingFederate](https://www.pingidentity.com/en/software/pingfederate.html) data connector provides the capability to ingest [PingFederate events](https://docs.pingidentity.com/bundle/pingfederate-102/page/lly1564002980532.html) into Microsoft Sentinel. Refer to [PingFederate documentation](https://docs.pingidentity.com/bundle/pingfederate-102/page/tle1564002955874.html) for more information. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (PingFederate)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Devices** - ```kusto -PingFederateEvent - - | summarize count() by DvcHostname - - | top 10 by count_ - ``` ----## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected [**PingFederateEvent**](https://aka.ms/sentinel-PingFederate-parser) which is deployed with the Microsoft Sentinel Solution. --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python -version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Forward Common Event Format (CEF) logs to Syslog agent --[Follow these steps](https://docs.pingidentity.com/bundle/pingfederate-102/page/gsn1564002980953.html) to configure PingFederate sending audit log via syslog in CEF format. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. --If the logs are not received, run the following connectivity validation script: --> 1. 
Make sure that you have Python on your machine using the following command: python --version -->2. You must have elevated permissions (sudo) on your machine. -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py && sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy. ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-pingfederate?tab=Overview) in the Azure Marketplace. |
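For the PingFederate connector above, a per-device sketch can confirm audit events are being parsed; it assumes the **PingFederateEvent** function deployed with the solution is active.

```kusto
// Sketch: confirm PingFederate audit events are being parsed per device.
// Assumes the PingFederateEvent function from the solution is active.
PingFederateEvent
| where TimeGenerated > ago(1h)
| summarize Events = count() by DvcHostname
| sort by Events desc
```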
sentinel | Deprecated Trend Micro Apex One Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-trend-micro-apex-one-via-legacy-agent.md | - Title: "[Deprecated] Trend Micro Apex One via Legacy Agent connector for Microsoft Sentinel" -description: "Learn how to install the connector [Deprecated] Trend Micro Apex One via Legacy Agent to connect your data source to Microsoft Sentinel." -- Previously updated : 11/29/2023------# [Deprecated] Trend Micro Apex One via Legacy Agent connector for Microsoft Sentinel --The [Trend Micro Apex One](https://www.trendmicro.com/en_us/business/products/user-protection/sps/endpoint.html) data connector provides the capability to ingest [Trend Micro Apex One events](https://aka.ms/sentinel-TrendMicroApex-OneEvents) into Microsoft Sentinel. Refer to [Trend Micro Apex Central](https://aka.ms/sentinel-TrendMicroApex-OneCentral) for more information. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (TrendMicroApexOne)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**All logs** - ```kusto --TMApexOneEvent -- | sort by TimeGenerated - ``` ----## Vendor installation instructions --->This data connector depends on a parser based on a Kusto Function to work as expected [**TMApexOneEvent**](https://aka.ms/sentinel-TMApexOneEvent-parser) which is deployed with the Microsoft Sentinel Solution. ---> [!NOTE] - > This data connector has been developed using Trend Micro Apex Central 2019 --1. Linux Syslog agent configuration --Install and configure the Linux agent to collect your Common Event Format (CEF) Syslog messages and forward them to Microsoft Sentinel. --> Notice that the data from all regions will be stored in the selected workspace --1.1 Select or create a Linux machine --Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. --1.2 Install the CEF collector on the Linux machine --Install the Microsoft Monitoring Agent on your Linux machine and configure the machine to listen on the necessary port and forward messages to your Microsoft Sentinel workspace. The CEF collector collects CEF messages on port 514 TCP. --> 1. Make sure that you have Python on your machine using the following command: python -version. --> 2. You must have elevated permissions (sudo) on your machine. -- Run the following command to install and apply the CEF collector: -- `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}` --2. Forward Common Event Format (CEF) logs to Syslog agent --[Follow these steps](https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-central-2019-online-help/detections/logs_001/syslog-forwarding.aspx) to configure Apex Central sending alerts via syslog. While configuring, on step 6, select the log format **CEF**. --3. Validate connection --Follow the instructions to validate your connectivity: --Open Log Analytics to check if the logs are received using the CommonSecurityLog schema. -->It may take about 20 minutes until the connection streams data to your workspace. 
--If the logs are not received, run the following connectivity validation script: --> 1. Make sure that you have Python on your machine using the following command: python --version -->2. You must have elevated permissions (sudo) on your machine. -- Run the following command to validate your connectivity: -- `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py && sudo python cef_troubleshoot.py {0}` --4. Secure your machine --Make sure to configure the machine's security according to your organization's security policy. ---[Learn more >](https://aka.ms/SecureCEF) ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-trendmicroapexone?tab=Overview) in the Azure Marketplace. |
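A quick ingestion check for the Trend Micro Apex One connector above, assuming the **TMApexOneEvent** parser deployed with the solution has activated:

```kusto
// Sketch: check recent Apex One events via the TMApexOneEvent parser.
TMApexOneEvent
| where TimeGenerated > ago(1h)
| summarize Events = count() by bin(TimeGenerated, 10m)
| sort by TimeGenerated desc
```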
sentinel | Infoblox Nios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/infoblox-nios.md | - Title: "Infoblox NIOS connector for Microsoft Sentinel" -description: "Learn how to install the connector Infoblox NIOS to connect your data source to Microsoft Sentinel." -- Previously updated : 04/26/2024------# Infoblox NIOS connector for Microsoft Sentinel --The [Infoblox Network Identity Operating System (NIOS)](https://www.infoblox.com/glossary/network-identity-operating-system-nios/) connector allows you to easily connect your Infoblox NIOS logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities. --This is autogenerated content. For changes, contact the solution provider. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | Syslog (InfobloxNIOS)<br/> | -| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Total Count by DHCP Request Message Types** -- ```kusto -union isfuzzy=true - Infoblox_dhcpdiscover,Infoblox_dhcprequest,Infoblox_dhcpinform -- | summarize count() by Log_Type - ``` --**Top 10 source IP addresses** -- ```kusto -Infoblox_dnsclient - - | summarize count() by SrcIpAddr - - | top 10 by count_ desc - ``` ----## Prerequisites --To integrate with Infoblox NIOS, make sure you have: --- **Infoblox NIOS**: must be configured to export logs via Syslog---## Vendor installation instructions ---**NOTE:** This data connector depends on a parser based on a Kusto Function, deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Infoblox and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Infoblox%20NIOS/Parser/Infoblox.txt). On the second line of the query, enter the hostname(s) of your Infoblox device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. --1. Install and onboard the agent for Linux --Typically, you should install the agent on a different computer from the one on which the logs are generated. --> Syslog logs are collected only from **Linux** agents. ---2. Configure the logs to be collected --Configure the facilities you want to collect and their severities. - 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**. - 2. Select **Apply below configuration to my machines** and select the facilities and severities. - 3. Click **Save**. ---3. Configure and connect the Infoblox NIOS --[Follow these instructions](https://www.infoblox.com/wp-content/uploads/infoblox-deployment-guide-slog-and-snmp-configuration-for-nios.pdf) to enable syslog forwarding of Infoblox NIOS Logs. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. ----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-infobloxnios?tab=Overview) in the Azure Marketplace. |
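To validate the Infoblox NIOS setup above, a sketch using one of the solution's parser functions (here `Infoblox_dnsclient`, shown in the query samples) can confirm DNS client logs are arriving; remember the parser must first be edited with your device hostnames, as the note above describes.

```kusto
// Sketch: confirm the Infoblox parser functions return data once syslog
// forwarding is enabled and the parser is customized with your hostnames.
Infoblox_dnsclient
| where TimeGenerated > ago(1h)
| summarize Requests = count() by SrcIpAddr
| top 10 by Requests
```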
sentinel | Recommended Kaspersky Security Center Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-kaspersky-security-center-via-ama.md | - Title: "[Recommended] Kaspersky Security Center via AMA connector for Microsoft Sentinel" -description: "Learn how to install the connector [Recommended] Kaspersky Security Center via AMA to connect your data source to Microsoft Sentinel." -- Previously updated : 10/23/2023------# [Recommended] Kaspersky Security Center via AMA connector for Microsoft Sentinel --The [Kaspersky Security Center](https://support.kaspersky.com/KSC/13/en-US/3396.htm) data connector provides the capability to ingest [Kaspersky Security Center logs](https://support.kaspersky.com/KSC/13/en-US/151336.htm) into Microsoft Sentinel. --## Connector attributes --| Connector attribute | Description | -| | | -| **Log Analytics table(s)** | CommonSecurityLog (KasperskySC)<br/> | -| **Data collection rules support** | [Azure Monitor Agent DCR](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) | -| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | --## Query samples --**Top 10 Destinations** - ```kusto -KasperskySCEvent - - | where isnotempty(DstIpAddr) - - | summarize count() by DstIpAddr - - | top 10 by count_ - ``` ----## Prerequisites --To integrate with [Recommended] Kaspersky Security Center via AMA make sure you have: --- ****: To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc)-- ****: Common Event Format (CEF) via AMA and Syslog via AMA data connectors must be installed [Learn more](/azure/sentinel/connect-cef-ama#open-the-connector-page-and-create-the-dcr)---## Vendor installation instructions ---> [!NOTE] - > This data connector depends on a parser based on a Kusto Function to work as expected **KasperskySCEvent** which is deployed with the Microsoft Sentinel Solution. ---2. Secure your machine --Make sure to configure the machine's security according to your organization's security policy ---[Learn more >](https://aka.ms/SecureCEF) |
sentinel | Sentinel Security Copilot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-security-copilot.md | Last updated 07/04/2024 #Customer intent: As a SOC administrator or analyst, understand how to use Microsoft Sentinel data with Copilot for Security. -# Investigate Microsoft Sentinel incidents in Copilot for Security +# Microsoft Sentinel incidents in Copilot for Security Microsoft Copilot for Security is a platform that helps you defend your organization at machine speed and scale. Microsoft Sentinel provides a plugin for Copilot to help analyze incidents and generate hunting queries. |
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | The listed features were released in the last three months. For information abou [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] -## September 2024 +## September 2024 - [Azure reservations now have pre-purchase plans available for Microsoft Sentinel](#pre-purchase-plans-now-available-for-microsoft-sentinel) For more information, see as [Microsoft Sentinel feature support for Azure comme ## August 2024 +- [Log Analytics agent retirement](#log-analytics-agent-retirement) - [Export and import automation rules (Preview)](#export-and-import-automation-rules-preview) - [Microsoft Sentinel support in Microsoft Defender multitenant management (Preview)](#microsoft-sentinel-support-in-microsoft-defender-multitenant-management-preview) - [Premium Microsoft Defender Threat Intelligence data connector (Preview)](#premium-microsoft-defender-threat-intelligence-data-connector-preview) For more information, see as [Microsoft Sentinel feature support for Azure comme - [New Auxiliary logs retention plan (Preview)](#new-auxiliary-logs-retention-plan-preview) - [Create summary rules for large sets of data (Preview)](#create-summary-rules-in-microsoft-sentinel-for-large-sets-of-data-preview) +### Log Analytics agent retirement ++As of August 31, 2024, the [Log Analytics Agent (MMA/OMS) is retired](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). ++Log collection from many appliances and devices is now supported by the Common Event Format (CEF) via AMA, Syslog via AMA, or Custom Logs via AMA data connector in Microsoft Sentinel. If you've been using the Log Analytics agent in your Microsoft Sentinel deployment, we recommend that you migrate to the Azure Monitor Agent (AMA). ++For more information, see: ++- [Find your Microsoft Sentinel data connector](data-connectors-reference.md) +- [Migrate to Azure Monitor Agent from Log Analytics agent](/azure/azure-monitor/agents/azure-monitor-agent-migration) +- [AMA migration for Microsoft Sentinel](ama-migrate.md) +- Blogs: ++ - [Revolutionizing log collection with Azure Monitor Agent](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/revolutionizing-log-collection-with-azure-monitor-agent/ba-p/4218129) + - [The power of Data Collection Rules: Collecting events for advanced use cases in Microsoft USOP](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/the-power-of-data-collection-rules-collecting-events-for/ba-p/4236486) + ### Export and import automation rules (Preview) Manage your Microsoft Sentinel automation rules as code! You can now export your automation rules to Azure Resource Manager (ARM) template files, and import rules from these files, as part of your program to manage and control your Microsoft Sentinel deployments as code. The export action will create a JSON file in your browser's downloads location, that you can then rename, move, and otherwise handle like any other file. |
service-bus-messaging | Authenticate Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/authenticate-application.md | Azure Service Bus supports using Microsoft Entra ID to authorize requests to Ser ## Overview When a security principal (a user, group, or application) attempts to access a Service Bus entity, the request must be authorized. With Microsoft Entra ID, access to a resource is a two-step process. - 1. First, the security principal's identity is authenticated, and an OAuth 2.0 token is returned. The resource name to request a token is `https://servicebus.azure.net`. +1. First, the security principal's identity is authenticated, and an OAuth 2.0 token is returned. The resource name to request a token is `https://servicebus.azure.net`. 1. Next, the token is passed as part of a request to the Service Bus service to authorize access to the specified resource. The authentication step requires that an application request contains an OAuth 2.0 access token at runtime. If an application is running within an Azure entity such as an Azure VM, a Virtual Machine Scale Set, or an Azure Function app, it can use a managed identity to access the resources. To learn how to authenticate requests made by a managed identity to the Service Bus service, see [Authenticate access to Azure Service Bus resources with Microsoft Entra ID and managed identities for Azure Resources](service-bus-managed-service-identity.md). Native applications and web applications that make requests to Service Bus can a Microsoft Entra authorizes access rights to secured resources through [Azure RBAC](../role-based-access-control/overview.md). Azure Service Bus defines a set of Azure built-in roles that encompass common sets of permissions used to access Service Bus entities, and you can also define custom roles for accessing the data. -When an Azure role is assigned to a Microsoft Entra security principal, Azure grants access to those resources for that security principal. Access can be scoped to the level of subscription, the resource group, or the Service Bus namespace. A Microsoft Entra security principal can be a user, a group, an application service principal, or a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). +When an Azure role is assigned to a Microsoft Entra security principal, Azure grants access to those resources for that security principal. Access can be scoped to the level of subscription, the resource group, the Service Bus namespace, or an entity (queue, topic, or subscription). A Microsoft Entra security principal can be a user, a group, an application service principal, or a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). For Azure Service Bus, the management of namespaces and all related resources through the Azure portal and the Azure resource management API is already protected using the Azure RBAC model. Azure provides the following built-in roles for authorizing access to a Service Bus namespace: Before you assign an Azure role to a security principal, determine the scope of The following list describes the levels at which you can scope access to Service Bus resources, starting with the narrowest scope: -- **Queue**, **topic**, or **subscription**: Role assignment applies to the specific Service Bus entity. 
Currently, the Azure portal doesn't support assigning users/groups/managed identities to Service Bus Azure roles at the subscription level. -- **Service Bus namespace**: Role assignment spans the entire topology of Service Bus under the namespace and to the consumer group associated with it.+- **Queue**, **topic**, or **subscription**: Role assignment applies to the specific Service Bus entity. Currently, the Azure portal doesn't support assigning users/groups/managed identities to Service Bus Azure roles at the topic subscription level. ++- **Service Bus namespace**: Role assignment spans the entire topology of Service Bus under the namespace and to the queue or topic subscription associated with it. + - **Resource group**: Role assignment applies to all the Service Bus resources under the resource group.-- **Subscription**: Role assignment applies to all the Service Bus resources in all of the resource groups in the subscription.+- **Azure Subscription**: Role assignment applies to all the Service Bus resources in all of the resource groups in the subscription. > [!NOTE] > Keep in mind that Azure role assignments may take up to five minutes to propagate. The application needs a client secret to prove its identity when requesting a to If your application is a console application, you must register a native application and add API permissions for **Microsoft.ServiceBus** to the **required permissions** set. Native applications also need a **redirect-uri** in Microsoft Entra ID, which serves as an identifier; the URI doesn't need to be a network destination. Use `https://servicebus.microsoft.com` for this example, because the sample code already uses that URI. ## Assign Azure roles using the Azure portal -Assign one of the [Service Bus roles](#azure-built-in-roles-for-azure-service-bus) to the application's service principal at the desired scope (Service Bus namespace, resource group, subscription). For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). +Assign one of the [Service Bus roles](#azure-built-in-roles-for-azure-service-bus) to the application's service principal at the desired scope (entity, Service Bus namespace, resource group, Azure subscription). For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml). Once you define the role and its scope, you can test this behavior with the [sample on GitHub](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample00_AuthenticateClient.md#authenticate-with-azureidentity). |
service-bus-messaging | Service Bus Premium Messaging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-premium-messaging.md | The following network security features are available only in the premium tier. Configuring IP firewall using the Azure portal is available only for the premium tier namespaces. However, you can configure IP firewall rules for other tiers using Azure Resource Manager templates, CLI, PowerShell, or REST API. For more information, see [Configure IP firewall](service-bus-ip-filtering.md). ## Encryption of data at rest-Azure Service Bus Premium provides encryption of data at rest with Azure Storage Service Encryption (Azure SSE). Service Bus Premium uses Azure Storage to store the data. All the data stored with Azure Storage is encrypted using Microsoft-managed keys. If you use your own key (also referred to as customer managed key), the data is still encrypted using the Microsoft-managed key, but in addition the Microsoft-managed key is encrypted using the customer-managed key. This feature enables you to create, rotate, disable, and revoke access to customer-managed keys that are used for encrypting Microsoft-managed keys. Enabling the customer-managed key feature is a one time setup process on your namespace. For more information, see [Encrypting Azure Service Bus data at rest](configure-customer-managed-key.md). +All the data stored in the storage subsystem is encrypted using Microsoft-managed keys. If you use your own key (also referred to as a customer-managed key), the data is still encrypted using the Microsoft-managed key, but in addition, the Microsoft-managed key is encrypted using the customer-managed key. This feature enables you to create, rotate, disable, and revoke access to customer-managed keys that are used for encrypting Microsoft-managed keys. Enabling the customer-managed key feature is a one-time setup process on your namespace. For more information, see [Encrypting Azure Service Bus data at rest](configure-customer-managed-key.md). ## Partitioning There are some differences between the standard and premium tiers when it comes to partitioning. |
site-recovery | Azure Stack Site Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-stack-site-recovery.md | Title: Replicate Azure Stack VMs to Azure using Azure Site Recovery description: Learn how to set up disaster recovery to Azure for Azure Stack VMs with the Azure Site Recovery service.- Previously updated : 02/20/2024+ Last updated : 09/11/2024 |
site-recovery | Azure To Azure About Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-about-networking.md | Title: About networking in Azure VM disaster recovery with Azure Site Recovery description: Provides an overview of networking for replication of Azure VMs using Azure Site Recovery. - - Previously updated : 11/21/2021+ Last updated : 09/11/2024 -# About networking in Azure VM disaster recovery +# About networking in Azure virtual machine disaster recovery -This article provides networking guidance for platform connectivity when you're replicating Azure VMs from one region to another, using [Azure Site Recovery](site-recovery-overview.md). +This article provides networking guidance for platform connectivity when you're replicating Azure virtual machines from one region to another, using [Azure Site Recovery](site-recovery-overview.md). ## Before you start Learn how Site Recovery provides disaster recovery for [this scenario](azure-to- ## Typical network infrastructure -The following diagram depicts a typical Azure environment, for applications running on Azure VMs: +The following diagram depicts a typical Azure environment, for applications running on Azure virtual machines: -![Diagram that depicts a typical Azure environment for applications running on Azure VMs.](./media/site-recovery-azure-to-azure-architecture/source-environment.png) +![Diagram that depicts a typical Azure environment for applications running on Azure virtual machines.](./media/site-recovery-azure-to-azure-architecture/source-environment.png) If you're using Azure ExpressRoute or a VPN connection from your on-premises network to Azure, the environment is as follows: If you're using a URL-based firewall proxy to control outbound connectivity, all **URL** | **Details** | -*.blob.core.windows.net | Required so that data can be written to the cache storage account in the source region from the VM. If you know all the cache storage accounts for your VMs, you can allow access to the specific storage account URLs (Ex: cache1.blob.core.windows.net and cache2.blob.core.windows.net) instead of *.blob.core.windows.net +*.blob.core.windows.net | Required so that data can be written to the cache storage account in the source region from the virtual machine. If you know all the cache storage accounts for your virtual machines, you can allow access to the specific storage account URLs (Ex: cache1.blob.core.windows.net and cache2.blob.core.windows.net) instead of *.blob.core.windows.net login.microsoftonline.com | Required for authorization and authentication to the Site Recovery service URLs.-*.hypervrecoverymanager.windowsazure.com | Required so that the Site Recovery service communication can occur from the VM. -*.servicebus.windows.net | Required so that the Site Recovery monitoring and diagnostics data can be written from the VM. +*.hypervrecoverymanager.windowsazure.com | Required so that the Site Recovery service communication can occur from the virtual machine. +*.servicebus.windows.net | Required so that the Site Recovery monitoring and diagnostics data can be written from the virtual machine. *.vault.azure.net | Allows access to enable replication for ADE-enabled virtual machines via portal *.automation.ext.azure.com | Allows enabling autoupgrade of mobility agent for a replicated item via portal login.microsoftonline.com | Required for authorization and authentication to the Apart from controlling URLs, you can also use service tags to control connectivity. 
To do so, you must first create a [Network Security Group](../virtual-network/network-security-group-how-it-works.md) in Azure. Once created, you need to use our existing service tags and create an NSG rule to allow access to Azure Site Recovery services. -The advantages of using service tags to control connectivity, when compared to controlling connectivity using IP addresses, is that there is no hard dependency on a particular IP address to stay connected to our services. In such a scenario, if the IP address of one of our services changes, then the ongoing replication is not impacted for your machines. Whereas, a dependency on hard coded IP addresses causes the replication status to become critical and put your systems at risk. Moreover, service tags ensure better security, stability and resiliency than hard coded IP addresses. +The advantage of using service tags to control connectivity, compared to controlling connectivity using IP addresses, is that there's no hard dependency on a particular IP address to stay connected to our services. If the IP address of one of our services changes, the ongoing replication isn't impacted for your machines. In contrast, a dependency on hard-coded IP addresses causes the replication status to become critical and puts your systems at risk. Moreover, service tags ensure better security, stability, and resiliency than hard-coded IP addresses. While using NSG to control outbound connectivity, these service tags need to be allowed. - For the storage accounts in source region: - Create a [Storage service tag](../virtual-network/network-security-groups-overview.md#service-tags) based NSG rule for the source region.- - Allow these addresses so that data can be written to the cache storage account, from the VM. + - Allow these addresses so that data can be written to the cache storage account, from the virtual machine. - Create a [Microsoft Entra service tag](../virtual-network/network-security-groups-overview.md#service-tags) based NSG rule for allowing access to all IP addresses corresponding to Microsoft Entra ID - Create an EventsHub service tag-based NSG rule for the target region, allowing access to Site Recovery monitoring. - Create an Azure Site Recovery service tag-based NSG rule for allowing access to Site Recovery service in any region. While using NSG to control outbound connectivity, these service tags need to be ## Example NSG configuration -This example shows how to configure NSG rules for a VM to replicate. - If you're using NSG rules to control outbound connectivity, use "Allow HTTPS outbound" rules to port:443 for all the required IP address ranges.-- The example presumes that the VM source location is "East US" and the target location is "Central US".+- The example presumes that the virtual machine source location is "East US" and the target location is "Central US". ### NSG rules - East US These rules are required so that replication can be enabled from the target regi ## Network virtual appliance configuration -If you're using network virtual appliances (NVAs) to control outbound network traffic from VMs, the appliance might get throttled if all the replication traffic passes through the NVA. We recommend creating a network service endpoint in your virtual network for "Storage" so that the replication traffic doesn't go to the NVA. 
+If you're using network virtual appliances (NVAs) to control outbound network traffic from virtual machines, the appliance might get throttled if all the replication traffic passes through the NVA. We recommend creating a network service endpoint in your virtual network for "Storage" so that the replication traffic doesn't go to the NVA. ### Create network service endpoint for Storage You can create a network service endpoint in your virtual network for "Storage" so that the replication traffic doesn't leave the Azure boundary. -- Select your Azure virtual network and click on 'Service endpoints'+- Select your Azure virtual network and select **Service endpoints**. ![storage-endpoint](./media/azure-to-azure-about-networking/storage-service-endpoint.png) -- Click 'Add' and 'Add service endpoints' tab opens-- Select 'Microsoft.Storage' under 'Service' and the required subnets under 'Subnets' field and click 'Add'+- Select **Add**; the **Add service endpoints** tab opens. +- Select **Microsoft.Storage** under **Service**, select the required subnets under the **Subnets** field, and then select **Add**. >[!NOTE] >If you're using a firewall-enabled cache storage account or target storage account, ensure you ['Allow trusted Microsoft services'](../storage/common/storage-network-security.md). Also, ensure that you allow access to at least one subnet of the source VNet. |
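As a hedged Azure CLI sketch of the two approaches above (a service tag-based NSG rule and a Storage service endpoint), with illustrative resource names:

```azurecli
# Outbound NSG rule that allows HTTPS to the regional Storage service tag (source region East US).
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --name AllowStorageOutbound \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443 \
  --destination-address-prefixes Storage.EastUS

# Enable the Microsoft.Storage service endpoint on a subnet so replication traffic bypasses the NVA.
az network vnet subnet update \
  --resource-group <resource-group> \
  --vnet-name <vnet-name> \
  --name <subnet-name> \
  --service-endpoints Microsoft.Storage
```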
site-recovery | Azure To Azure Common Questions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md | description: This article answers common questions about Azure virtual machine d Previously updated : 08/30/2024 Last updated : 09/16/2024 Yes. By default, when you enable disaster recovery for Azure virtual machines, S Site Recovery tries to provide the IP address at the time of failover. If another virtual machine uses that address, Site Recovery sets the next available IP address as the target. -[Learn more about](azure-to-azure-network-mapping.md#set-up-ip-addressing-for-target-vms) setting up network mapping and IP addressing for virtual networks. +[Learn more about](azure-to-azure-network-mapping.md#set-up-ip-addressing-for-target-virtual-machines) setting up network mapping and IP addressing for virtual networks. ### What's the *Latest* recovery point? |
site-recovery | Azure To Azure Network Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-network-mapping.md | Title: Map virtual networks between two regions in Azure Site Recovery -description: Learn about mapping virtual networks between two Azure regions for Azure VM disaster recovery with Azure Site Recovery. +description: Learn about mapping virtual networks between two Azure regions for Azure virtual machine disaster recovery with Azure Site Recovery. - - Previously updated : 08/31/2023+ Last updated : 09/11/2024 Map networks as follows: :::image type="content" source="./media/site-recovery-network-mapping-azure-to-azure/network-mapping1.png" alt-text="Screenshot of Create a network mapping." lightbox="./media/site-recovery-network-mapping-azure-to-azure/network-mapping1.png"::: -3. In **Add network mapping**, select the source and target locations. In our example, the source VM is running in the East Asia region, and replicates to the Southeast Asia region. +3. In **Add network mapping**, select the source and target locations. In our example, the source virtual machine is running in the East Asia region, and replicates to the Southeast Asia region. :::image type="content" source="./media/site-recovery-network-mapping-azure-to-azure/network-mapping2.png" alt-text="Screenshot of Select source and target." lightbox="./media/site-recovery-network-mapping-azure-to-azure/network-mapping2.png":::-3. Now create a network mapping in the opposite direction. In our example, the source will now be Southeast Asia, and the target will be East Asia. +3. Now create a network mapping in the opposite direction. In our example, the source is now Southeast Asia, and the target is East Asia. :::image type="content" source="./media/site-recovery-network-mapping-azure-to-azure/network-mapping3.png" alt-text="Screenshot of Add network mapping pane - Select source and target locations for the target network." lightbox="./media/site-recovery-network-mapping-azure-to-azure/network-mapping3.png"::: ## Map networks when you enable replication -If you haven't prepared network mapping before you configure disaster recovery for Azure VMs, you can specify a target network when you [set up and enable replication](azure-to-azure-how-to-enable-replication.md). When you do this the following happens: +If you haven't prepared network mapping before you configure disaster recovery for Azure virtual machines, you can specify a target network when you [set up and enable replication](azure-to-azure-how-to-enable-replication.md). When you do this, the following happens: - Based on the target you select, Site Recovery automatically creates network mappings from the source to target region, and from the target to source region. - By default, Site Recovery creates a network in the target region that's identical to the source network. Site Recovery adds **-asr** as a suffix to the name of the target network. You can customize the target network. For example, if the source network name was *contoso-vnet*, then the target network is named *contoso-vnet-asr*. -So, if the source network name was "contoso-vnet", then the target network name will be "contoso-vnet-asr". Source network's name will not be edited by ASR. -- If network mapping has already occurred for a source network, the mapped target network will always be the default at the time of enabling replications for more VMs. 
You can choose to change the target virtual network by choosing other available options from the dropdown.+So, if the source network name was "contoso-vnet", then the target network name is `contoso-vnet-asr`. Source network's name won't be edited by Azure Site Recovery. +- If network mapping has already occurred for a source network, the mapped target network is always the default at the time of enabling replications for more virtual machines. You can choose to change the target virtual network by choosing other available options from the dropdown. - To change the default target virtual network for new replications, you need to modify the existing network mapping. - If you wish to modify a network mapping from region A to region B, ensure that you first delete the network mapping from region B to region A. After reverse mapping deletion, modify the network mapping from region A to region B and then create the relevant reverse mapping. >[!NOTE]->* Modifying the network mapping only changes the defaults for new VM replications. It does not impact the target virtual network selections for existing replications. +>* Modifying the network mapping only changes the defaults for new virtual machine replications. It does not impact the target virtual network selections for existing replications. >* If you wish to modify the target network for an existing replication, go to **Network** Settings of the replicated item. ## Specify a subnet -The subnet of the target VM is selected based on the name of the subnet of the source VM. +The subnet of the target virtual machine is selected based on the name of the subnet of the source virtual machine. -- If a subnet with the same name as the source VM subnet is available in the target network, that subnet is set for the target VM.+- If a subnet with the same name as the source virtual machine subnet is available in the target network, that subnet is set for the target virtual machine. - If a subnet with the same name doesn't exist in the target network, the first subnet in the alphabetical order is set as the target subnet.-- You can modify the target subnet in the **Network** settings for the VM.+- You can modify the target subnet in the **Network** settings for the virtual machine. :::image type="content" source="./media/site-recovery-network-mapping-azure-to-azure/modify-subnet.png" alt-text="Screenshot of Network compute properties window." lightbox="./media/site-recovery-network-mapping-azure-to-azure/modify-subnet.png"::: -## Set up IP addressing for target VMs +## Set up IP addressing for target virtual machines The IP address for each NIC on a target virtual machine is configured as follows: -- **DHCP**: If the NIC of the source VM uses DHCP, the NIC of the target VM is also set to use DHCP.-- **Static IP address**: If the NIC of the source VM uses static IP addressing, the target VM NIC will also use a static IP address.+- **DHCP**: If the NIC of the source virtual machine uses DHCP, the NIC of the target virtual machine is also set to use DHCP. +- **Static IP address**: If the NIC of the source virtual machine uses static IP addressing, the target virtual machine NIC also uses a static IP address. The same holds for the Secondary IP Configurations as well. ## IP address assignment during failover >[!Note]->The following approach is used to assign IP address to the target VM, irrespective of the NIC settings. +>The following approach is used to assign IP address to the target virtual machine, irrespective of the NIC settings. 
**Source and target subnets** | **Details** | -Same address space | IP address of the source VM NIC is set as the target VM NIC IP address.<br/><br/> If the address isn't available, the next available IP address is set as the target. -Different address space | The next available IP address in the target subnet is set as the target VM NIC address. +Same address space | IP address of the source virtual machine NIC is set as the target virtual machine NIC IP address.<br/><br/> If the address isn't available, the next available IP address is set as the target. +Different address space | The next available IP address in the target subnet is set as the target virtual machine NIC address. Different address space | The next available IP address in the target subnet is **Target network** | **Details** | -Target network is the failover VNet | - Target IP address will be static with the same IP address. <br/><br/> - If the same IP address is already assigned, then the IP address is the next one available at the end of the subnet range. For example: If the source IP address is 10.0.0.19 and failover network uses range 10.0.0.0/24, then the next IP address assigned to the target VM is 10.0.0.254. -Target network isn't the failover VNet | - Target IP address will be static with the same IP address, only if it is available in the target virtual network. <br/><br/> - If the same IP address is already assigned, then the IP address is the next one available at the end of the subnet range.<br/><br/> For example: If the source static IP address is 10.0.0.19 and failover is on a network that isn't the failover network, with the range 10.0.0.0/24, then the target static IP address will be 10.0.0.19 if available, and otherwise it will be 10.0.0.254. +Target network is the failover VNet | - Target IP address is static with the same IP address. <br/><br/> - If the same IP address is already assigned, then the IP address is the next one available at the end of the subnet range. For example: If the source IP address is `10.0.0.19` and failover network uses range `10.0.0.0/24`, then the next IP address assigned to the target virtual machine is `10.0.0.254`. +Target network isn't the failover VNet | - Target IP address is static with the same IP address, only if it's available in the target virtual network. <br/><br/> - If the same IP address is already assigned, then the IP address is the next one available at the end of the subnet range.<br/><br/> For example: If the source static IP address is `10.0.0.19` and failover is on a network that isn't the failover network, with the range `10.0.0.0/24`, then the target static IP address is `10.0.0.19` if available. Otherwise it is `10.0.0.254`. - The failover VNet is the target network that you select when you set up disaster recovery.-- We recommend that you always use a non-production network for test failover.-- You can modify the target IP address in the **Network** settings of the VM.+- We recommend that you always use a nonproduction network for test failover. +- You can modify the target IP address in the **Network** settings of the virtual machine. ## Next steps -- Review [networking guidance](./azure-to-azure-about-networking.md) for Azure VM disaster recovery.+- Review [networking guidance](./azure-to-azure-about-networking.md) for Azure virtual machine disaster recovery. - [Learn more](site-recovery-retain-ip-azure-vm-failover.md) about retaining IP addresses after failover. |
site-recovery | Site Recovery Ipconfig Cmdlet Parameter Deprecation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-ipconfig-cmdlet-parameter-deprecation.md | Title: Deprecation of IPConfig parameters for the cmdlet New-AzRecoveryServicesAsrVMNicConfig | Microsoft Docs description: Details about deprecation of IPConfig parameters of the cmdlet New-AzRecoveryServicesAsrVMNicConfig and information about the use of new cmdlet New-AzRecoveryServicesAsrVMNicIPConfig - - Previously updated : 04/30/2021+ Last updated : 09/11/2024 # Deprecation of IP Config parameters for the cmdlet New-AzRecoveryServicesAsrVMNicConfig |
site-recovery | Site Recovery Retain Ip Azure Vm Failover | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-retain-ip-azure-vm-failover.md | +- By default, when you enable disaster recovery for Azure VMs, Site Recovery creates target resources based on source resource settings. For Azure VMs configured with static IP addresses, Site Recovery tries to provision the same IP address for the target VM, if it's not in use. For a full explanation of how Site Recovery handles addressing, [review this article](azure-to-azure-network-mapping.md#set-up-ip-addressing-for-target-virtual-machines). - For simple applications, the default configuration is sufficient. For more complex apps, you might need to provision additional resources to make sure that connectivity works as expected after failover. |
site-recovery | Vmware Azure Multi Tenant Csp Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-multi-tenant-csp-disaster-recovery.md | Title: Set up VMware disaster recovery to Azure in a multi-tenancy environment using Site Recovery and the Cloud Solution Provider (CSP) program | Microsoft Docs -description: Describes how to set up VMware disaster recovery in a multi-tenant environment with Azure Site Recovery. +description: Describes how to set up VMware disaster recovery in a multitenant environment with Azure Site Recovery. - - Previously updated : 04/03/2022+ Last updated : 09/11/2024 The [CSP program](https://partner.microsoft.com/cloud-solution-provider) fosters With [Azure Site Recovery](site-recovery-overview.md), as partners you can manage disaster recovery for customers directly through CSP. Alternately, you can use CSP to set up Site Recovery environments, and let customers manage their own disaster recovery needs in a self-service manner. In both scenarios, partners are the liaison between Site Recovery and their customers. Partners service the customer relationship, and bill customers for Site Recovery usage. -This article describes how you as a partner can create and manage tenant subscriptions through CSP, for a multi-tenant VMware replication scenario. +This article describes how you as a partner can create and manage tenant subscriptions through CSP, for a multitenant VMware replication scenario. ## Prerequisites To set up VMware replication, you need to do the following: - [Prepare](tutorial-prepare-azure.md) Azure resources, including an Azure subscription, an Azure virtual network, and a storage account. - [Prepare](vmware-azure-tutorial-prepare-on-premises.md) on-premises VMware servers and VMs.-- For each tenant, create a separate management server that can communicate with the tenant VMs, and your vCenter servers. Only you as a partner should have access rights to this management server. Learn more about [multi-tenant environments](vmware-azure-multi-tenant-overview.md).+- For each tenant, create a separate management server that can communicate with the tenant VMs, and your vCenter servers. Only you as a partner should have access rights to this management server. Learn more about [multitenant environments](vmware-azure-multi-tenant-overview.md). ## Create a tenant account The following steps describe how to assign a role to a user. For detailed steps, 1. On the **Review + assign** tab, select **Review + assign** to assign the role. -## Multi-tenant environments +## Multitenant environments -There are three major multi-tenant models: +There are three major multitenant models: * **Shared Hosting Services Provider (HSP)**: The partner owns the physical infrastructure, and uses shared resources (vCenter, datacenters, physical storage, and so on) to host multiple tenant VMs on the same infrastructure. The partner can provide disaster-recovery management as a managed service, or the tenant can own disaster recovery as a self-service solution. There are three major multi-tenant models: * **Managed Services Provider (MSP)**: The customer owns the physical infrastructure that hosts the VMs, and the partner provides disaster-recovery enablement and management. -By setting up tenant subscriptions as described in this article, you can quickly start enabling customers in any of the relevant multi-tenant models. 
You can learn more about the different multi-tenant models and enabling on-premises access controls [here](vmware-azure-multi-tenant-overview.md). +By setting up tenant subscriptions as described in this article, you can quickly start enabling customers in any of the relevant multitenant models. You can learn more about the different multitenant models and enabling on-premises access controls [here](vmware-azure-multi-tenant-overview.md). ## Next steps - Learn more about [Azure role-based access control (Azure RBAC)](site-recovery-role-based-linked-access-control.md) to manage Azure Site Recovery deployments. - Learn more about VMware to Azure [replication architecture](vmware-azure-architecture.md). - [Review the tutorial](vmware-azure-tutorial.md) for replicating VMware VMs to Azure.-Learn more about [multi-tenant environments](vmware-azure-multi-tenant-overview.md) for replicating VMware VMs to Azure. +Learn more about [multitenant environments](vmware-azure-multi-tenant-overview.md) for replicating VMware VMs to Azure. |
site-recovery | Vmware Azure Multi Tenant Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-multi-tenant-overview.md | Title: VMware VM multi-tenant disaster recovery with Azure Site Recovery description: Provides an overview of Azure Site Recovery support for VMWare disaster recovery to Azure in a multi-tenant environment (CSP) program. - Last updated 09/06/2024 |
static-web-apps | Deploy Web Framework | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-web-framework.md | On most systems, you can select the URL of the site to open it in your default b * [Authentication and authorization](./authentication-authorization.yml) * [Database connections](./database-overview.md) * [Custom Domains](./custom-domain.md)+* [Video series: Deploy websites to the cloud with Azure Static Web Apps](https://aka.ms/azure/beginnervideos/learn/swa) <!-- Links --> [1]: https://azure.microsoft.com/free |
static-web-apps | Get Started Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/get-started-cli.md | Go to https://github.com/login/device and enter the code you get from GitHub to If you're not going to continue to use this application, delete the resource group and the static web app using the [az group delete](/cli/azure/group#az-group-delete) command. +## Related content ++* [Video series: Deploy websites to the cloud with Azure Static Web Apps](https://aka.ms/azure/beginnervideos/learn/swa) + ## Next steps > [!div class="nextstepaction"] |
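For the cleanup step mentioned above, a minimal sketch (the resource group name is a placeholder):

```azurecli
# Delete the resource group and every resource in it, skipping the confirmation prompt.
az group delete --name <resource-group> --yes
```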
static-web-apps | Get Started Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/get-started-portal.md | description: Learn to deploy a static site to Azure Static Web Apps with the Azu Previously updated : 05/17/2024 Last updated : 09/18/2024 zone_pivot_groups: devops-or-github If you're not going to continue to use this application, you can delete the Azur 1. Select **Delete**. 1. Select **Yes** to confirm the delete action (this action may take a few moments to complete). +## Related content ++* [Video series: Deploy websites to the cloud with Azure Static Web Apps](https://aka.ms/azure/beginnervideos/learn/swa) + ## Next steps > [!div class="nextstepaction"] |
static-web-apps | Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/getting-started.md | If you're not going to continue to use this application, you can delete the Azur In the Visual Studio Code Azure window, return to the _Resources_ section and under _Static Web Apps_, right-click **my-first-static-web-app** and select **Delete**. +## Related content ++* [Video series: Deploy websites to the cloud with Azure Static Web Apps](https://aka.ms/azure/beginnervideos/learn/swa) + ## Next steps > [!div class="nextstepaction"] |
storage | Client Side Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/client-side-encryption.md | Decryption via the envelope technique works as follows: ### Encryption/decryption on blob upload/download -The Blob Storage client library supports encryption of whole blobs only on upload. For downloads, both complete and range downloads are supported. Client-side encryption v2 chunks data into 4MB buffered authenticated encryption blocks which can only be transformed whole. +The Blob Storage client library supports encryption of whole blobs only on upload. For downloads, both complete and range downloads are supported. Client-side encryption v2 chunks data into 4MB buffered authenticated encryption blocks which can only be transformed whole. To adjust the chunk size, ensure you are using the most recent version of the SDK that supports client-side encryption v2.1. The region length is configurable from 16 bytes up to 1 GiB. During encryption, the client library generates a random initialization vector (IV) of 16 bytes and a random CEK of 32 bytes, and performs envelope encryption of the blob data using this information. The wrapped CEK and some additional encryption metadata are then stored as blob metadata along with the encrypted blob. |
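As a conceptual sketch of the envelope technique described above, the following OpenSSL commands generate a random CEK and wrap it with a key encryption key; the file names are hypothetical, and the Blob SDK's actual format (AES-GCM blocks plus its own metadata layout) differs:

```bash
# Conceptual illustration only; not the Blob Storage client library's wire format.
openssl rand -out cek.bin 32                       # random 32-byte content encryption key (CEK)
openssl pkeyutl -encrypt -pubin -inkey kek_public.pem \
  -in cek.bin -out cek.wrapped                     # wrap the CEK with the key encryption key (KEK)
# The blob data is encrypted with the CEK; the wrapped CEK travels alongside the ciphertext.
```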
storage | Container Storage Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-release-notes.md | + + Title: Release notes for Azure Container Storage +description: Release notes for Azure Container Storage ++++ Last updated : 09/15/2024+++# Release notes for Azure Container Storage +This article provides the release notes for Azure Container Storage. It's important to note that minor releases introduce new functionalities in a backward-compatible manner (for example, 1.1.0 GA). Patch releases focus on bug fixes, security updates, and smaller improvements (for example, 1.1.1). ++## Supported versions ++The following Azure Container Storage versions are supported: ++| Milestone | Status | +|-|-| +|1.1.1- Hotfix | Supported | +|1.1.0- General Availability| Supported | ++## Unsupported versions +The following Azure Container Storage versions are no longer supported: 1.0.6-preview, 1.0.3-preview, 1.0.2-preview, 1.0.1-preview, 1.0.0-preview. Please refer to the section "Upgrade a preview installation to GA" for upgrading guidance. ++## Minor vs. patch versions +Minor versions introduce small improvements, performance enhancements, or minor new features without breaking existing functionality. For example, version 1.1.0 would move to 1.2.0. Patch versions are released more frequently than minor versions. They focus solely on bug fixes and security updates. For example, version 1.1.1 would be updated to 1.1.2. ++## Version 1.1.1 ++### Improvements and issues that are fixed +- This hotfix release addresses specific issues that some customers experienced during the creation of Azure Elastic SAN storage pools. It resolves exceptions that were causing disruptions in the setup process, ensuring smoother and more reliable storage pool creation. +- We've also made improvements to cluster restart scenarios. Previously, some corner-case situations caused cluster restarts to fail. This update ensures that cluster restarts are more reliable and resilient. ++## Version 1.1.0 ++### Improvements and issues that are fixed +- **Security Enhancements**: This update addresses vulnerabilities in container environments, enhancing security enforcement to better protect workloads. +- **Data plane stability**: We've also improved the stability of data-plane components, ensuring more reliable access to Azure Container Storage volumes and storage pools. This also enhances the management of data replication between storage nodes. +- **Volume management improvements**: The update resolves issues with volume detachment during node drain scenarios, ensuring that volumes are safely and correctly detached, and allowing workloads to migrate smoothly without interruptions or data access issues. ++## Upgrade a preview installation to GA ++If you already have a preview instance of Azure Container Storage running on your cluster, we recommend updating to the latest generally available (GA) version by running the following command: +```azurecli-interactive +az k8s-extension update --cluster-type managedClusters --cluster-name <cluster-name> --resource-group <resource-group> --name azurecontainerstorage --version <version> --release-train stable +``` ++Remember to replace `<cluster-name>` and `<resource-group>` with your own values and `<version>` with the desired supported version. 
++Please note that preview versions are no longer supported, and customers should promptly upgrade to the GA versions to ensure continued stability and access to the latest features and fixes. If you're installing Azure Container Storage for the first time on the cluster, proceed instead to [Install Azure Container Storage and create a storage pool](container-storage-aks-quickstart.md#install-azure-container-storage-and-create-a-storage-pool). You can also [Install Azure Container Storage on specific node pools](container-storage-aks-quickstart.md#install-azure-container-storage-on-specific-node-pools). ++## Auto-upgrade policy ++To receive the latest features and fixes for Azure Container Storage in future versions, you can enable auto-upgrade. However, please note that this may result in a brief interruption in the I/O operations of applications using PVs with Azure Container Storage during the upgrade process. To minimize potential impact, we recommend setting the auto-upgrade window to a time period with low activity or traffic, ensuring that upgrades occur during less critical times. ++To enable auto-upgrade, run the following command: +```azurecli-interactive +az k8s-extension update --cluster-name <cluster name> --resource-group <resource-group> --cluster-type managedClusters --auto-upgrade-minor-version true -n azurecontainerstorage +``` ++If you would like to disable auto-upgrades, run the following command: +```azurecli-interactive +az k8s-extension update --cluster-name <cluster name> --resource-group <resource-group> --cluster-type managedClusters --auto-upgrade-minor-version false -n azurecontainerstorage +``` |
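To confirm which version is installed and whether auto-upgrade is enabled, a sketch using `az k8s-extension show` (the queried property names assume the current extension schema):

```azurecli-interactive
az k8s-extension show --cluster-type managedClusters --cluster-name <cluster-name> --resource-group <resource-group> --name azurecontainerstorage --query "{version: version, autoUpgrade: autoUpgradeMinorVersion}" --output table
```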
storage | Elastic San Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md | Deploying a private endpoint for an Elastic SAN Volume group using PowerShell in 1. Create the private endpoint using the subnet and the private link service connection as input. 1. **(Optional)** *If you're using the two-step process (creation, then approval)*: The Elastic SAN Network Admin approves the connection. -Use this sample code to create a private endpoint for your Elastic SAN volume group with PowerShell. Replace the values of `RgName`, `VnetName`, `SubnetName`, `EsanName`, `EsanVgName`, `PLSvcConnectionName`, `EndpointName`, and `Location` with your own values: +Use this sample code to create a private endpoint for your Elastic SAN volume group with PowerShell. Replace the values of `RgName`, `VnetName`, `SubnetName`, `EsanName`, `EsanVgName`, `PLSvcConnectionName`, `EndpointName`, and `Location` (region) with your own values: ```powershell # Set the resource group name. You can manage virtual network rules for volume groups through the Azure portal, ```azurepowershell $rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $Subnet.Id -Action Allow - Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $RgName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule + Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $RgName -ElasticSanName $EsanName -VolumeGroupName $EsanVgName -NetworkAclsVirtualNetworkRule $rule ``` > [!TIP] |
synapse-analytics | Develop Openrowset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-openrowset.md | |
trusted-signing | How To Device Guard Signing Service Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/how-to-device-guard-signing-service-migration.md | If isolation is desired, deploy a new CI policy by following steps outlined in S - [Understand Windows Defender Application Control (WDAC) policy rules and file rules](/windows/security/threat-protection/windows-defender-application-control/select-types-of-rules-to-create). - [Deploy catalog files to support Windows Defender Application Control (Windows 10) - Windows security](/windows/security/threat-protection/windows-defender-application-control/deploy-catalog-files-to-support-windows-defender-application-control). - [Example Windows Defender Application Control (WDAC) base policies (Windows 10) - Windows security | Microsoft Docs](/windows/security/threat-protection/windows-defender-application-control/example-wdac-base-policies)-- [Use multiple Windows Defender Application Control Policies (Windows 10)](/windows/security/threat-protection/windows-defender-application-control/deploy-multiple-windows-defender-application-control-policies#deploying-multiple-policies-locally)+- [Use multiple Windows Defender Application Control Policies (Windows 10)](/windows/security/threat-protection/windows-defender-application-control/deploy-multiple-windows-defender-application-control-policies#deploying-multiple-policies-locally) +- Need help with the migration? Contact us via: + - Support + troubleshooting (in the Azure portal) + - [Microsoft Q&A](https://learn.microsoft.com/answers/tags/509/trusted-signing) (use the tag **Azure Trusted Signing**) + - [Stack Overflow](https://stackoverflow.com/questions/tagged/trusted-signing) (use the tag **trusted-signing**). |
virtual-desktop | Whats New Client Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md | |
virtual-wan | How To Routing Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md | Consider the following configuration where Hub 1 (Normal) and Hub 2 (Secured) ar * Encrypted ExpressRoute (Site-to-site VPN tunnels running over ExpressRoute circuits) is supported in hubs where routing intent is configured if Azure Firewall is configured to allow traffic between VPN tunnel endpoints (Site-to-site VPN Gateway private IP and on-premises VPN device private IP). For more information on the required configurations, see [Encrypted ExpressRoute with routing intent](#encryptedER). * The following connectivity use cases are **not** supported with Routing Intent: * Static routes in the defaultRouteTable that point to a Virtual Network connection can't be used in conjunction with routing intent. However, you can use the [BGP peering feature](scenario-bgp-peering-hub.md).+ * Static routes on the Virtual Network connection with "static route propagation" are not applied to the next-hop resource specified in private routing policies. Support for applying static routes on Virtual Network connections to private routing policy next-hop is on the roadmap. * The ability to deploy both an SD-WAN connectivity NVA and a separate Firewall NVA or SaaS solution in the **same** Virtual WAN hub is currently on the roadmap. Once routing intent is configured with next hop SaaS solution or Firewall NVA, connectivity between the SD-WAN NVA and Azure is impacted. Instead, deploy the SD-WAN NVA and Firewall NVA or SaaS solution in different Virtual Hubs. Alternatively, you can also deploy the SD-WAN NVA in a spoke Virtual Network connected to the hub and leverage the virtual hub [BGP peering](scenario-bgp-peering-hub.md) capability. * Network Virtual Appliances (NVAs) can only be specified as the next hop resource for routing intent if they're Next-Generation Firewall or dual-role Next-Generation Firewall and SD-WAN NVAs. Currently, **checkpoint**, **fortinet-ngfw** and **fortinet-ngfw-and-sdwan** are the only NVAs eligible to be configured to be the next hop for routing intent. If you attempt to specify another NVA, Routing Intent creation fails. You can check the type of the NVA by navigating to your Virtual Hub -> Network Virtual Appliances and then looking at the **Vendor** field. [**Palo Alto Networks Cloud NGFW**](how-to-palo-alto-cloud-ngfw.md) is also supported as the next hop for Routing Intent, but is considered a next hop of type **SaaS solution**. * Routing Intent users who want to connect multiple ExpressRoute circuits to Virtual WAN and want to send traffic between them via a security solution deployed in the hub can open a support case to enable this use case. Reference [enabling connectivity across ExpressRoute circuits](#expressroute) for more information. Before enabling routing intent, consider the following: * You may open a support case to enable connectivity across ExpressRoute circuits via a Firewall appliance in the hub. Enabling this connectivity pattern modifies the prefixes advertised to ExpressRoute circuits. See [About ExpressRoute](#expressroute) for more information. * Routing intent is the only mechanism in Virtual WAN to enable inter-hub traffic inspection via security appliances deployed in the hub. Inter-hub traffic inspection also requires routing intent to be enabled on all hubs to ensure traffic is routed symmetrically between security appliances deployed in Virtual WAN hubs. 
* Routing intent sends Virtual Network and on-premises traffic to the next hop resource specified in the routing policy. Virtual WAN programs the underlying Azure platform to route your on-premises and Virtual Network traffic in accordance with the configured routing policy and does not process the traffic through the Virtual Hub router. Because packets routed via routing intent are not processed by the router, you do not need to allocate additional [routing infrastructure units](hub-settings.md#capacity) for data-plane packet forwarding on hubs configured with routing intent. However, you may need to allocate additional routing infrastructure units based on the number of Virtual Machines in Virtual Networks connected to the Virtual WAN Hub. -+* Routing intent allows you to configure different next-hop resources for private and internet routing policies. For example, you can set the next hop for private routing policies to Azure Firewall in the hub and the next hop for internet routing policy to an NVA or SaaS solution in the hub. Because SaaS solutions and Firewall NVAs are deployed in the same subnet in the Virtual WAN hub, deploying SaaS solutions with a Firewall NVA in the same hub can impact the horizontal scalability of the SaaS solutions as there are fewer IP addresses available for horizontal scale-out. Additionally, you can have at most one SaaS solution deployed in each Virtual WAN hub. ### <a name="prereq"></a> Prerequisites To enable routing intent and policies, your Virtual Hub must meet the following prerequisites: |
virtual-wan | Point To Site Ipsec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/point-to-site-ipsec.md | The following table shows the default IPsec parameters for Point-to-site VPN con | Phase 1 IKE Integrity | SHA256 | | DH Group | DHGroup24 | | Phase 2 IPsec Encryption | GCMAES256|-| Phase 2 IPsec Integrity | GCMAES25 | +| Phase 2 IPsec Integrity | GCMAES256 | | PFS Group |PFS24| ## Custom IPsec policies |
virtual-wan | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md | The following features are currently in gated public preview. After working with |4| ExpressRoute ECMP Support | Today, ExpressRoute ECMP is not enabled by default for virtual hub deployments. When multiple ExpressRoute circuits are connected to a Virtual WAN hub, ECMP enables traffic from spoke virtual networks to on-premises over ExpressRoute to be distributed across all ExpressRoute circuits advertising the same on-premises routes. | | To enable ECMP for your Virtual WAN hub, please reach out to virtual-wan-ecmp@microsoft.com. | | 5| Virtual WAN hub address prefixes are not advertised to other Virtual WAN hubs in the same Virtual WAN.| You can't leverage Virtual WAN hub-to-hub full mesh routing capabilities to provide connectivity between NVA orchestration software deployed in a VNET or on-premises connected to a Virtual WAN hub to an Integrated NVA or SaaS solution deployed in a different Virtual WAN hub. | | If your NVA or SaaS orchestrator is deployed on-premises, connect that on-premises site to all Virtual WAN hubs with NVAs or SaaS solutions deployed in them. If your orchestrator is in an Azure VNET, manage NVAs or SaaS solutions using public IP. Support for Azure VNET orchestrators is on the roadmap.| |6| Configuring routing intent to route between connectivity and firewall NVAs in the same Virtual WAN Hub| Virtual WAN routing intent private routing policy does not support routing between an SD-WAN NVA and a Firewall NVA (or SaaS solution) deployed in the same Virtual hub.| | Deploy the connectivity and firewall integrated NVAs in two different hubs in the same Azure region. Alternatively, deploy the connectivity NVA to a spoke Virtual Network connected to your Virtual WAN Hub and leverage the [BGP peering](scenario-bgp-peering-hub.md).|+| 7| BGP between the Virtual WAN hub router and NVAs deployed in the Virtual WAN hub does not come up if the ASN used for BGP peering is updated post-deployment.| | Delete and recreate the NVA with the correct ASN. | ## Next steps |
vpn-gateway | Point To Site About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-about.md | -A Point-to-Site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client computer. This solution is useful for telecommuters who want to connect to Azure VNets from a remote location, such as from home or a conference. P2S VPN is also a useful solution to use instead of S2S VPN when you have only a few clients that need to connect to a VNet. Point-to-site configurations require a **route-based** VPN type. --This article applies to the current deployment model. See [P2S - Classic](vpn-gateway-howto-point-to-site-classic-azure-portal.md) for legacy deployments. +A Point-to-Site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client computer. This solution is useful for telecommuters who want to connect to Azure virtual networks from a remote location, such as from home or a conference. P2S VPN is also a useful solution to use instead of site-to-site (S2S) VPN when you have only a few clients that need to connect to a virtual network. Point-to-site configurations require a **route-based** VPN type. ## <a name="protocol"></a>What protocol does P2S use? Point-to-site VPN can use one of the following protocols: Before Azure accepts a P2S VPN connection, the user has to be authenticated first. There are three authentication types that you can select when you configure your P2S gateway. The options are: -* Azure certificate -* Microsoft Entra ID -* RADIUS and Active Directory Domain Server +* [Certificate](#certificate) +* [Microsoft Entra ID](#entra-id) +* [RADIUS and Active Directory Domain Server](#active-directory) You can select multiple authentication types for your P2S gateway configuration. If you select multiple authentication types, the VPN client you use must be supported by at least one authentication type and corresponding tunnel type. For example, if you select "IKEv2 and OpenVPN" for tunnel types, and "Microsoft Entra ID and Radius" or "Microsoft Entra ID and Azure Certificate" for authentication type, Microsoft Entra ID will only use the OpenVPN tunnel type since it's not supported by IKEv2. The following table shows authentication mechanisms that are compatible with sel [!INCLUDE [All client articles](../../includes/vpn-gateway-vpn-multiauth-tunnel-mapping.md)] -### Certificate authentication +### <a name="certificate"></a>Certificate authentication When you configure your P2S gateway for certificate authentication, you upload the trusted root certificate public key to the Azure gateway. You can use a root certificate that was generated using an Enterprise solution, or you can generate a self-signed certificate. To authenticate, each client that connects must have an installed client certificate that's generated from the trusted root certificate. This is in addition to VPN client software. The validation of the client certificate is performed by the VPN gateway and happens during establishment of the P2S VPN connection. 
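As a sketch of the certificate generation step, a self-signed root certificate and a client certificate issued from it can be produced with OpenSSL; the subject names and validity period are illustrative, and an enterprise PKI works equally well (platform-specific details such as key export format aren't shown):

```bash
# Self-signed root certificate (illustrative names).
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 365 \
  -subj "/CN=P2SRootCert" -out rootCA.crt

# Client certificate issued from the root; connecting clients present this certificate.
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=P2SChildCert" -out client.csr
openssl x509 -req -in client.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -out client.crt -days 365 -sha256
```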
-#### <a name='certificate-workflow'></a>Certificate Workflow +#### <a name='certificate-workflow'></a>Certificate authentication workflow At a high level, you need to perform the following steps to configure Certificate authentication: You can configure your P2S gateway to allow VPN users to authenticate using Micr [!INCLUDE [entra app id descriptions](../../includes/vpn-gateway-entra-app-id-descriptions.md)] -#### <a name='entra-workflow'></a>Microsoft Entra ID Workflow +#### <a name='entra-workflow'></a>Microsoft Entra ID authentication workflow At a high level, you need to perform the following steps to configure Microsoft Entra ID authentication: -1. If using manual app registration, perform the necessary steps on the Entra tenant. +1. If using manual app registration, perform the necessary steps on the Microsoft Entra tenant. 1. Enable Microsoft Entra ID authentication on the P2S gateway, along with the additional required settings (client address pool, etc.). 1. Generate and download VPN client profile configuration files (profile configuration package). 1. Download, install, and configure the Azure VPN Client on the client computer. 1. Connect. -### Active Directory (AD) Domain Server +### <a name='active-directory'></a>RADIUS - Active Directory (AD) Domain Server authentication AD Domain authentication allows users to connect to Azure using their organization domain credentials. It requires a RADIUS server that integrates with the AD server. Organizations can also use their existing RADIUS deployment. -The RADIUS server could be deployed on-premises or in your Azure VNet. During authentication, the Azure VPN Gateway acts as a pass through and forwards authentication messages back and forth between the RADIUS server and the connecting device. So Gateway reachability to the RADIUS server is important. If the RADIUS server is present on-premises, then a VPN S2S connection from Azure to the on-premises site is required for reachability. +The RADIUS server could be deployed on-premises or in your Azure virtual network. During authentication, the Azure VPN Gateway acts as a pass-through and forwards authentication messages back and forth between the RADIUS server and the connecting device. So gateway reachability to the RADIUS server is important. If the RADIUS server is present on-premises, then a VPN S2S connection from Azure to the on-premises site is required for reachability. The RADIUS server can also integrate with AD certificate services. This lets you use the RADIUS server and your enterprise certificate deployment for P2S certificate authentication as an alternative to Azure certificate authentication. The advantage is that you don't need to upload root certificates and revoked certificates to Azure. The client configuration requirements vary, based on the VPN client that you use [!INCLUDE [All client articles](../../includes/vpn-gateway-vpn-client-install-articles.md)] +## What versions of the Azure VPN Client are available? ++For information about available Azure VPN Client versions, release dates, and what's new in each release, see [Azure VPN Client versions](azure-vpn-client-versions.md). + ## <a name="gwsku"></a>Which gateway SKUs support P2S VPN? The following table shows gateway SKUs by tunnel, connection, and throughput. For more information, see [About gateway SKUs](about-gateway-skus.md). There are multiple FAQ entries for point-to-site. 
See the [VPN Gateway FAQ](vpn- * [Configure a P2S connection - Azure certificate authentication](vpn-gateway-howto-point-to-site-resource-manager-portal.md) * [Configure a P2S connection - Microsoft Entra ID authentication](point-to-site-entra-gateway.md)+ **"OpenVPN" is a trademark of OpenVPN Inc.** |
vpn-gateway | Tutorial Site To Site Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md | You can configure more settings for your connection, if necessary. Otherwise, sk ## Optional steps -### <a name="resize"></a>Resize a gateway SKU --There are specific rules about resizing versus changing a gateway SKU. In this section, you resize the SKU. For more information, see [Resize or change gateway SKUs](about-gateway-skus.md#resizechange). -- ### <a name="reset"></a>Reset a gateway Resetting an Azure VPN gateway is helpful if you lose cross-premises VPN connectivity on one or more site-to-site VPN tunnels. In this situation, your on-premises VPN devices are all working correctly but aren't able to establish IPsec tunnels with the Azure VPN gateways. If you need to reset an active-active gateway, you can reset both instances using the portal. You can also use PowerShell or CLI to reset each gateway instance separately using instance VIPs. For more information, see [Reset a connection or a gateway](reset-gateway.md#reset-a-gateway). A gateway can have multiple connections. If you want to configure connections to ### Update a connection shared key -You can specify a different shared key for your connection. In the portal, go to the connection. Change the shared key on the **Authentication** page. +You can specify a different shared key for your connection. ++1. In the portal, go to the connection. +1. Change the shared key on the **Authentication** page. +1. Save your changes. +1. Update your VPN device with the new shared key as necessary. ++### <a name="resize"></a>Resize or change a gateway SKU ++You can resize a gateway SKU, or you can change the gateway SKU. There are specific rules regarding which option is available, depending on the SKU your gateway is currently using. For more information, see [Resize or change gateway SKUs](about-gateway-skus.md#resizechange). ### <a name="additional"></a>More configuration considerations |
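For the shared key update described above, a hedged Azure CLI alternative to the portal (names are placeholders; the on-premises VPN device must then be updated with the same key):

```azurecli
az network vpn-connection shared-key update \
  --resource-group <resource-group> \
  --connection-name <connection-name> \
  --value <new-shared-key>
```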
vpn-gateway | Vpn Gateway About Forced Tunneling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-forced-tunneling.md | - Title: 'Configure forced tunneling - Site-to-Site connections: classic'- -description: Learn how to configure forced tunneling for virtual networks created using the classic deployment model. ---- Previously updated : 06/09/2023---# Configure forced tunneling using the classic deployment model --Forced tunneling lets you redirect or "force" all Internet-bound traffic back to your on-premises location via a Site-to-Site VPN tunnel for inspection and auditing. This is a critical security requirement for most enterprise IT policies. Without forced tunneling, Internet-bound traffic from your VMs in Azure will always traverse from Azure network infrastructure directly out to the Internet, without the option to allow you to inspect or audit the traffic. Unauthorized Internet access can potentially lead to information disclosure or other types of security breaches. --The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-forced-tunneling-rm.md). ---## Requirements and considerations --Forced tunneling in Azure is configured via virtual network user-defined routes (UDR). Redirecting traffic to an on-premises site is expressed as a Default Route to the Azure VPN gateway. The following section lists the current limitation of the routing table and routes for an Azure Virtual Network: --* Each virtual network subnet has a built-in, system routing table. The system routing table has the following three groups of routes: -- * **Local VNet routes:** Directly to the destination VMs in the same virtual network. - * **On-premises routes:** To the Azure VPN gateway. - * **Default route:** Directly to the Internet. Packets destined to the private IP addresses not covered by the previous two routes will be dropped. -* With the release of user-defined routes, you can create a routing table to add a default route, and then associate the routing table to your VNet subnet(s) to enable forced tunneling on those subnets. -* You need to set a "default site" among the cross-premises local sites connected to the virtual network. -* Forced tunneling must be associated with a VNet that has a dynamic routing VPN gateway (not a static gateway). -* ExpressRoute forced tunneling isn't configured via this mechanism, but instead, is enabled by advertising a default route via the ExpressRoute BGP peering sessions. For more information, see the [What is ExpressRoute?](../expressroute/expressroute-introduction.md) --## Configuration overview --In the following example, the Frontend subnet isn't forced tunneled. The workloads in the Frontend subnet can continue to accept and respond to customer requests from the Internet directly. The Mid-tier and Backend subnets are forced tunneled. Any outbound connections from these two subnets to the Internet are forced or redirected back to an on-premises site via one of the S2S VPN tunnels. --This allows you to restrict and inspect Internet access from your virtual machines or cloud services in Azure, while continuing to enable your multi-tier service architecture required. 
You can also apply forced tunneling to entire virtual networks if there are no Internet-facing workloads in your virtual networks. ---## Prerequisites --Verify that you have the following items before beginning configuration: --* An Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/). -* A configured virtual network. -* [!INCLUDE [vpn-gateway-classic-powershell](../../includes/vpn-gateway-powershell-classic-locally.md)] --## Configure forced tunneling --The following procedure helps you specify forced tunneling for a virtual network. The configuration steps correspond to the VNet network configuration file. In this example, the virtual network 'MultiTier-VNet' has three subnets: Frontend, Midtier, and Backend, with four cross-premises connections: 'DefaultSiteHQ' and three branches. ---```xml
-<VirtualNetworkSite name="MultiTier-VNet" Location="North Europe">
-  <AddressSpace>
-    <AddressPrefix>10.1.0.0/16</AddressPrefix>
-  </AddressSpace>
-  <Subnets>
-    <Subnet name="Frontend">
-      <AddressPrefix>10.1.0.0/24</AddressPrefix>
-    </Subnet>
-    <Subnet name="Midtier">
-      <AddressPrefix>10.1.1.0/24</AddressPrefix>
-    </Subnet>
-    <Subnet name="Backend">
-      <AddressPrefix>10.1.2.0/23</AddressPrefix>
-    </Subnet>
-    <Subnet name="GatewaySubnet">
-      <AddressPrefix>10.1.200.0/28</AddressPrefix>
-    </Subnet>
-  </Subnets>
-  <Gateway>
-    <ConnectionsToLocalNetwork>
-      <LocalNetworkSiteRef name="DefaultSiteHQ">
-        <Connection type="IPsec" />
-      </LocalNetworkSiteRef>
-      <LocalNetworkSiteRef name="Branch1">
-        <Connection type="IPsec" />
-      </LocalNetworkSiteRef>
-      <LocalNetworkSiteRef name="Branch2">
-        <Connection type="IPsec" />
-      </LocalNetworkSiteRef>
-      <LocalNetworkSiteRef name="Branch3">
-        <Connection type="IPsec" />
-      </LocalNetworkSiteRef>
-    </ConnectionsToLocalNetwork>
-  </Gateway>
-</VirtualNetworkSite>
-``` --The following steps set the 'DefaultSiteHQ' as the default site connection for forced tunneling, and configure the Midtier and Backend subnets to use forced tunneling. --1. Open your PowerShell console with elevated rights. Connect to your account using the following example: --    ```powershell
-    Add-AzureAccount
-    ``` --1. Create a routing table. Use the following cmdlet to create your route table. --    ```powershell
-    New-AzureRouteTable -Name "MyRouteTable" -Label "Routing Table for Forced Tunneling" -Location "North Europe"
-    ``` --1. Add a default route to the routing table. --    The following example adds a default route to the routing table created in the previous step. The only route supported is the destination prefix of "0.0.0.0/0" to the "VPNGateway" NextHop. --    ```powershell
-    Get-AzureRouteTable -Name "MyRouteTable" | Set-AzureRoute -RouteTable "MyRouteTable" -RouteName "DefaultRoute" -AddressPrefix "0.0.0.0/0" -NextHopType VPNGateway
-    ``` --1. Associate the routing table with the subnets. --    After a routing table is created and a route added, use the following example to add or associate the route table with a VNet subnet. The example adds the route table "MyRouteTable" to the Midtier and Backend subnets of VNet MultiTier-VNet. 
-
-    ```powershell
-    Set-AzureSubnetRouteTable -VirtualNetworkName "MultiTier-VNet" -SubnetName "Midtier" -RouteTableName "MyRouteTable"
-    Set-AzureSubnetRouteTable -VirtualNetworkName "MultiTier-VNet" -SubnetName "Backend" -RouteTableName "MyRouteTable"
-    ```
-
-1. Assign a default site for forced tunneling.
-
-    In the preceding steps, the sample cmdlets created the routing table and associated it with two of the VNet subnets. The remaining step is to select a local site among the multi-site connections of the virtual network as the default site or tunnel.
-
-    ```powershell
-    $DefaultSite = @("DefaultSiteHQ")
-    Set-AzureVNetGatewayDefaultSite -VNetName "MultiTier-VNet" -DefaultSite "DefaultSiteHQ"
-    ```
-
-## Additional PowerShell cmdlets
-
-### To delete a route table
-
-```powershell
-Remove-AzureRouteTable -Name <routeTableName>
-```
-
-### To list a route table
-
-```powershell
-Get-AzureRouteTable [-Name <routeTableName> [-DetailLevel <detailLevel>]]
-```
-
-### To delete a route from a route table
-
-```powershell
-Remove-AzureRoute -RouteTable <routeTableName> -RouteName <routeName>
-```
-
-### To remove a route table from a subnet
-
-```powershell
-Remove-AzureSubnetRouteTable -VirtualNetworkName <virtualNetworkName> -SubnetName <subnetName>
-```
-
-### To list the route table associated with a subnet
-
-```powershell
-Get-AzureSubnetRouteTable -VirtualNetworkName <virtualNetworkName> -SubnetName <subnetName>
-```
-
-### To remove a default site from a VNet VPN gateway
-
-```powershell
-Remove-AzureVnetGatewayDefaultSite -VNetName <virtualNetworkName>
-``` |
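Since the classic article above directs readers to its Resource Manager replacement, the following is a minimal sketch of the equivalent forced-tunneling setup using Az PowerShell cmdlets. It reuses the example's VNet, subnet, and default site names where possible; the resource group `TestRG1` and gateway name `VNet1GW` are placeholder assumptions.

```powershell
# Create a route table and add a default route (0.0.0.0/0) that sends
# Internet-bound traffic to the virtual network gateway.
# "TestRG1" is a placeholder resource group name.
$rt = New-AzRouteTable -Name "MyRouteTable" -ResourceGroupName "TestRG1" -Location "northeurope"
Add-AzRouteConfig -Name "DefaultRoute" -AddressPrefix "0.0.0.0/0" `
    -NextHopType VirtualNetworkGateway -RouteTable $rt | Set-AzRouteTable

# Associate the route table with the subnets that should be forced tunneled.
$vnet = Get-AzVirtualNetwork -Name "MultiTier-VNet" -ResourceGroupName "TestRG1"
Set-AzVirtualNetworkSubnetConfig -Name "Midtier" -VirtualNetwork $vnet `
    -AddressPrefix "10.1.1.0/24" -RouteTable $rt | Out-Null
Set-AzVirtualNetworkSubnetConfig -Name "Backend" -VirtualNetwork $vnet `
    -AddressPrefix "10.1.2.0/23" -RouteTable $rt | Out-Null
Set-AzVirtualNetwork -VirtualNetwork $vnet

# Set the default site: the local network gateway that traffic is tunneled back to.
# "VNet1GW" is a placeholder gateway name.
$gw  = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
$lng = Get-AzLocalNetworkGateway -Name "DefaultSiteHQ" -ResourceGroupName "TestRG1"
Set-AzVirtualNetworkGatewayDefaultSite -VirtualNetworkGateway $gw -GatewayDefaultSite $lng
```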