Updates from: 02/27/2023 02:07:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c App Registrations Training Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/app-registrations-training-guide.md
In the legacy experience, apps were always created as customer-facing applications
> [!NOTE]
> This option is required to be able to run Azure AD B2C user flows to authenticate users for this application. Learn [how to register an application for use with user flows.](tutorial-register-applications.md)
-You can also use this option to use Azure AD B2C as a SAML service provider. [Learn more](identity-provider-adfs.md).
+You can also use this option to use Azure AD B2C as a SAML service provider. [Learn more](saml-service-provider.md).
## Applications for DevOps scenarios
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The What's new in Azure Active Directory? release notes provide information about
- Deprecated functionality
- Plans for changes
+## August 2022
+
+### General Availability - Ability to force reauthentication on Intune enrollment, risky sign-ins, and risky users
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+Customers can now require fresh authentication each time a user performs a specific action. Forced reauthentication supports requiring a user to reauthenticate during Intune device enrollment, during password change for risky users, and during risky sign-ins.
+
+For more information, see: [Configure authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md#require-reauthentication-every-time)
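As an illustrative sketch (not part of the release note), requiring reauthentication on every sign-in can be configured on an existing Conditional Access policy through the Microsoft Graph beta API; the policy ID is a placeholder, and you should verify the `signInFrequency` property names against the linked documentation before use.

```bash
# Hedged sketch: set sign-in frequency to "every time" on an existing
# Conditional Access policy. <policyId> is a placeholder to replace.
az rest --method PATCH \
  --url "https://graph.microsoft.com/beta/identity/conditionalAccess/policies/<policyId>" \
  --body '{
    "sessionControls": {
      "signInFrequency": {
        "isEnabled": true,
        "frequencyInterval": "everyTime",
        "authenticationType": "primaryAndSecondaryAuthentication"
      }
    }
  }'
```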
+### General Availability - Multi-Stage Access Reviews
+
+**Type:** Changed feature
+**Service category:** Access Reviews
+**Product capability:** Identity Governance
+
+Customers can now meet their complex audit and recertification requirements through multiple stages of reviews. For more information, see: [Create a multi-stage access review](../governance/create-access-review.md#create-a-multi-stage-access-review).
+### Public Preview - External user leave settings
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** B2B/B2C
+
+Currently, users can leave an organization on their own, without any visibility for their IT administrators. Some organizations may want more control over this self-service process.
+
+With this feature, IT administrators can now allow or prevent external identities from leaving an organization, using Microsoft-provided self-service controls in Azure Active Directory in the Microsoft Entra portal. To restrict users from leaving an organization, customers need to include the "Global privacy contact" and "Privacy statement URL" under tenant properties.
+
+A new policy API is available for administrators to control the tenant-wide policy:
+[externalIdentitiesPolicy resource type](/graph/api/resources/externalidentitiespolicy?view=graph-rest-beta&preserve-view=true)
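As a hedged sketch (not from the release note), updating that beta singleton with `az rest` might look like the following; `allowExternalIdentitiesToLeave` is the relevant beta property, but confirm it against the linked reference, since beta APIs can change.

```bash
# Hedged sketch: block external users from leaving the tenant on their own.
# Assumes the signed-in principal has the required Policy.* Graph permission.
az rest --method PATCH \
  --url "https://graph.microsoft.com/beta/policies/externalIdentitiesPolicy" \
  --body '{"allowExternalIdentitiesToLeave": false}'
```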
+
+ For more information, see:
+
+- [Leave an organization as an external user](../external-identities/leave-the-organization.md)
+- [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md)
+### Public Preview - Restrict self-service BitLocker for devices
+
+**Type:** New feature
+**Service category:** Device Registration and Management
+**Product capability:** Access Control
+
+In some situations, you may want to restrict the ability of end users to self-service BitLocker keys. With this new functionality, you can now turn off self-service of BitLocker keys, so that only specific individuals with the right privileges can recover a BitLocker key.
+
+For more information, see: [Block users from viewing their BitLocker keys (preview)](../devices/device-management-azure-portal.md#block-users-from-viewing-their-bitlocker-keys-preview)
+### Public Preview - Identity Protection Alerts in Microsoft 365 Defender
+
+**Type:** New feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+Identity Protection risk detections (alerts) are now also available in Microsoft 365 Defender to provide a unified investigation experience for security professionals. For more information, see: [Investigate alerts in Microsoft 365 Defender](/microsoft-365/security/defender/investigate-alerts?view=o365-worldwide#alert-sources&preserve-view=true)
+### New Federated Apps available in Azure AD Application gallery - August 2022
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In August 2022, we added the following 40 new applications to our App gallery with Federation support:
+
+[Albourne Castle](https://village.albourne.com/castle), [Adra by Trintech](../saas-apps/adra-by-trintech-tutorial.md), [workhub](../saas-apps/workhub-tutorial.md), [4DX](../saas-apps/4dx-tutorial.md), [Ecospend IAM V1](https://iamapi.sb.ecospend.com/account/login), [TigerGraph](../saas-apps/tigergraph-tutorial.md), [Sketch](../saas-apps/sketch-tutorial.md), [Lattice](../saas-apps/lattice-tutorial.md), [snapADDY Single Sign On](https://app.snapaddy.com/login), [RELAYTO Content Experience Platform](https://relayto.com/signin), [oVice](https://tour.ovice.in/login), [Arena](../saas-apps/arena-tutorial.md), [QReserve](../saas-apps/qreserve-tutorial.md), [Curator](../saas-apps/curator-tutorial.md), [NetMotion Mobility](../saas-apps/netmotion-mobility-tutorial.md), [HackNotice](../saas-apps/hacknotice-tutorial.md), [ERA_EHS_CORE](../saas-apps/era-ehs-core-tutorial.md), [AnyClip Teams Connector](https://videomanager.anyclip.com/login), [Wiz SSO](../saas-apps/wiz-sso-tutorial.md), [Tango Reserve by AgilQuest (EU Instance)](../saas-apps/tango-reserve-tutorial.md), [valid8Me](../saas-apps/valid8me-tutorial.md), [Ahrtemis](../saas-apps/ahrtemis-tutorial.md), [KPMG Leasing Tool](../saas-apps/kpmg-tool-tutorial.md) [Mist Cloud Admin SSO](../saas-apps/mist-cloud-admin-tutorial.md), [Work-Happy](https://live.work-happy.com/?azure=true), [Ediwin SaaS EDI](../saas-apps/ediwin-saas-edi-tutorial.md), [LUSID](../saas-apps/lusid-tutorial.md), [Next Gen Math](https://nextgenmath.com/), [Total ID](https://www.tokyo-shoseki.co.jp/ict/), [Cheetah For Benelux](../saas-apps/cheetah-for-benelux-tutorial.md), [Live Center Australia](https://au.livecenter.com/), [Shop Floor Insight](https://www.dmsiworks.com/apps/shop-floor-insight), [Warehouse Insight](https://www.dmsiworks.com/apps/warehouse-insight), [myAOS](../saas-apps/myaos-tutorial.md), [Hero](https://admin.linc-ed.com/), [FigBytes](../saas-apps/figbytes-tutorial.md), [VerosoftDesign](https://verosoft-design.vercel.app/), [ViewpointOne - UK](https://identity-uk.team.viewpoint.com/), [EyeRate Reviews](https://azure-login.eyeratereviews.com/), [Lytx DriveCam](../saas-apps/lytx-drivecam-tutorial.md)
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest.
+### Public preview - New provisioning connectors in the Azure AD Application Gallery - August 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Ideagen Cloud](../saas-apps/ideagen-cloud-provisioning-tutorial.md)
+- [Lucid (All Products)](../saas-apps/lucid-all-products-provisioning-tutorial.md)
+- [Palo Alto Networks Cloud Identity Engine - Cloud Authentication Service](../saas-apps/palo-alto-networks-cloud-identity-engine-provisioning-tutorial.md)
+- [SuccessFactors Writeback](../saas-apps/sap-successfactors-writeback-tutorial.md)
+- [Tableau Cloud](../saas-apps/tableau-online-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+### General Availability - Workload Identity Federation with App Registrations is available now
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** Developer Experience
+
+Entra Workload Identity Federation allows developers to exchange tokens issued by another identity provider for Azure AD tokens, without needing secrets. It eliminates the need to store and manage credentials inside code or secret stores to access Azure AD protected resources such as Azure and Microsoft Graph. By removing the secrets required to access Azure AD protected resources, workload identity federation can improve the security posture of your organization. This feature also reduces the burden of secret management and minimizes the risk of service downtime due to expired credentials.
+
+For more information on this capability and supported scenarios, see [Workload identity federation](../develop/workload-identity-federation.md).
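For illustration only (not from the release note), creating a federated credential on an app registration with the Azure CLI might look like the following; the GitHub repository, branch, and credential name are placeholders.

```bash
# Hedged sketch: let a GitHub Actions workflow exchange its OIDC token for
# Azure AD tokens without a client secret. All names are placeholders.
cat > credential.json <<'EOF'
{
  "name": "github-deploy",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:contoso/my-repo:ref:refs/heads/main",
  "audiences": ["api://AzureADTokenExchange"]
}
EOF
az ad app federated-credential create --id <appObjectId> --parameters credential.json
```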
+### Public Preview - Entitlement management automatic assignment policies
+
+**Type:** Changed feature
+**Service category:** Entitlement Management
+**Product capability:** Identity Governance
+
+In Azure AD entitlement management, a new form of access package assignment policy is being added. The automatic assignment policy includes a filter rule, similar to a dynamic group rule, that specifies the users in the tenant who should have assignments. When users come into scope of the filter rule criteria, an assignment is automatically created, and when they no longer match, the assignment is removed.
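As an illustrative example, a filter rule uses the same expression syntax as dynamic group membership rules; the attributes chosen here are hypothetical:

```
(user.department -eq "Sales") and (user.country -eq "US")
```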
+
+ For more information, see: [Configure an automatic assignment policy for an access package in Azure AD entitlement management (Preview)](../governance/entitlement-management-access-package-auto-assignment-policy.md).
## July 2022
For more information, see:
- [Get started with Azure Active Directory Identity Protection and Microsoft Graph](../identity-protection/howto-identity-protection-graph-api.md)
-## December 2017
-
-### Terms of use in the Access Panel
-
-**Type:** New feature
-**Service category:** Terms of use
-**Product capability:** Compliance
-
-You now can go to the Access Panel and view the terms of use that you previously accepted.
-
-Follow these steps:
-
-1. Go to the [MyApps portal](https://myapps.microsoft.com), and sign in.
-
-2. In the upper-right corner, select your name, and then select **Profile** from the list.
-
-3. On your **Profile**, select **Review terms of use**.
-
-4. Now you can review the terms of use you accepted.
-
-For more information, see the [Azure AD terms of use feature (preview)](../conditional-access/terms-of-use.md).
---
-### New Azure AD sign-in experience
-
-**Type:** New feature
-**Service category:** Azure AD
-**Product capability:** User authentication
-
-The Azure AD and Microsoft account identity system UIs were redesigned so that they have a consistent look and feel. In addition, the Azure AD sign-in page collects the user name first, followed by the credential on a second screen.
-
-For more information, see [The new Azure AD sign-in experience is now in public preview](https://cloudblogs.microsoft.com/enterprisemobility/2017/08/02/the-new-azure-ad-signin-experience-is-now-in-public-preview/).
---
-### Fewer sign-in prompts: A new "keep me signed in" experience for Azure AD sign-in
-
-**Type:** New feature
-**Service category:** Azure AD
-**Product capability:** User authentication
-
-The **Keep me signed in** check box on the Azure AD sign-in page was replaced with a new prompt that shows up after you successfully authenticate.
-
-If you respond **Yes** to this prompt, the service gives you a persistent refresh token. This behavior is the same as when you selected the **Keep me signed in** check box in the old experience. For federated tenants, this prompt shows after you successfully authenticate with the federated service.
-
-For more information, see [Fewer sign-in prompts: The new "keep me signed in" experience for Azure AD is in preview](https://cloudblogs.microsoft.com/enterprisemobility/2017/09/19/fewer-login-prompts-the-new-keep-me-signed-in-experience-for-azure-ad-is-in-preview/).
---
-### Add configuration to require the terms of use to be expanded prior to accepting
-
-**Type:** New feature
-**Service category:** Terms of use
-**Product capability:** Compliance
-
-Administrators now have an option to require their users to expand the terms of use prior to accepting them.
-
-Select either **On** or **Off** to require users to expand the terms of use. The **On** setting requires users to view the terms of use prior to accepting them.
-
-For more information, see the [Azure AD terms of use feature (preview)](../conditional-access/terms-of-use.md).
---
-### Scoped activation for eligible role assignments
-
-**Type:** New feature
-**Service category:** Privileged Identity Management
-**Product capability:** Privileged Identity Management
-
-You can use scoped activation to activate eligible Azure resource role assignments with less autonomy than the original assignment defaults. An example is if you're assigned as the owner of a subscription in your tenant. With scoped activation, you can activate the owner role for up to five resources contained within the subscription (such as resource groups and virtual machines). Scoping your activation might reduce the possibility of executing unwanted changes to critical Azure resources.
-
-For more information, see [What is Azure AD Privileged Identity Management?](../privileged-identity-management/pim-configure.md).
---
-### New federated apps in the Azure AD app gallery
-
-**Type:** New feature
-**Service category:** Enterprise apps
-**Product capability:** 3rd Party Integration
-
-In December 2017, we've added these new apps with Federation support to our app gallery:
-
-[Accredible](../saas-apps/accredible-tutorial.md), Adobe Experience Manager, [EFI Digital StoreFront](../saas-apps/efidigitalstorefront-tutorial.md), [Communifire](../saas-apps/communifire-tutorial.md)
-CybSafe, [FactSet](../saas-apps/factset-tutorial.md), [IMAGE WORKS](../saas-apps/imageworks-tutorial.md), [MOBI](../saas-apps/mobi-tutorial.md), [MobileIron Azure AD integration](../saas-apps/mobileiron-tutorial.md), [Reflektive](../saas-apps/reflektive-tutorial.md), [SAML SSO for Bamboo by resolution GmbH](../saas-apps/bamboo-tutorial.md), [SAML SSO for Bitbucket by resolution GmbH](../saas-apps/bitbucket-tutorial.md), [Vodeclic](../saas-apps/vodeclic-tutorial.md), WebHR, Zenegy Azure AD Integration.
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md).
-
-For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### Approval workflows for Azure AD directory roles
-
-**Type:** Changed feature
-**Service category:** Privileged Identity Management
-**Product capability:** Privileged Identity Management
-
-Approval workflow for Azure AD directory roles is generally available.
-
-With approval workflow, privileged-role administrators can require eligible-role members to request role activation before they can use the privileged role. Multiple users and groups can be delegated approval responsibilities. Eligible role members receive notifications when approval is finished and their role is active.
---
-### Pass-through authentication: Skype for Business support
-
-**Type:** Changed feature
-**Service category:** Authentications (Logins)
-**Product capability:** User authentication
-
-Pass-through authentication now supports user sign-ins to Skype for Business client applications that support modern authentication, which includes online and hybrid topologies.
-
-For more information, see [Skype for Business topologies supported with modern authentication](/skypeforbusiness/plan-your-deployment/modern-authentication/topologies-supported).
---
-### Updates to Azure AD Privileged Identity Management for Azure RBAC (preview)
-
-**Type:** Changed feature
-**Service category:** Privileged Identity Management
-**Product capability:** Privileged Identity Management
-
-With the public preview refresh of Azure AD Privileged Identity Management (PIM) for Azure role-based access control (Azure RBAC), you can now:
-
-* Use Just Enough Administration.
-* Require approval to activate resource roles.
-* Schedule a future activation of a role that requires approval for both Azure AD and Azure roles.
-
-For more information, see [Privileged Identity Management for Azure resources (preview)](../privileged-identity-management/azure-pim-resource-rbac.md).
---
-## November 2017
-
-### Access Control service retirement
-
-**Type:** Plan for change
-**Service category:** Access Control service
-**Product capability:** Access Control service
-
-Azure Active Directory Access Control (also known as the Access Control service) will be retired in late 2018. More information that includes a detailed schedule and high-level migration guidance will be provided in the next few weeks. You can leave comments on this page with any questions about the Access Control service, and a team member will answer them.
---
-### Restrict browser access to the Intune Managed Browser
-
-**Type:** Plan for change
-**Service category:** Conditional Access
-**Product capability:** Identity security and protection
-
-You can restrict browser access to Office 365 and other Azure AD-connected cloud apps by using the Intune Managed Browser as an approved app.
-
-You now can configure the following condition for application-based Conditional Access:
-
-**Client apps:** Browser
-
-**What is the effect of the change?**
-
-Today, access is blocked when you use this condition. When the preview is available, all access will require the use of the managed browser application.
-
-Look for this capability and more information in upcoming blogs and release notes.
-
-For more information, see [Conditional Access in Azure AD](../conditional-access/overview.md).
---
-### New approved client apps for Azure AD app-based Conditional Access
-
-**Type:** Plan for change
-**Service category:** Conditional Access
-**Product capability:** Identity security and protection
-
-The following apps are on the list of [approved client apps](../conditional-access/concept-conditional-access-conditions.md#client-apps):
-- [Microsoft Kaizala](https://www.microsoft.com/garage/profiles/kaizala/)
-- Microsoft StaffHub
-For more information, see:
-- [Approved client app requirement](../conditional-access/concept-conditional-access-conditions.md#client-apps)
-- [Azure AD app-based Conditional Access](../conditional-access/app-based-conditional-access.md)
---
-### Terms-of-use support for multiple languages
-
-**Type:** New feature
-**Service category:** Terms of use
-**Product capability:** Compliance
-
-Administrators now can create new terms of use that contain multiple PDF documents. You can tag these PDF documents with a corresponding language. Users are shown the PDF with the matching language based on their preferences. If there is no match, the default language is shown.
---
-### Real-time password writeback client status
-
-**Type:** New feature
-**Service category:** Self-service password reset
-**Product capability:** User authentication
-
-You now can review the status of your on-premises password writeback client. This option is available in the **On-premises integration** section of the [Password reset](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/PasswordReset) page.
-
-If there are issues with your connection to your on-premises writeback client, you see an error message that provides you with:
-- Information on why you can't connect to your on-premises writeback client.
-- A link to documentation that assists you in resolving the issue.
-For more information, see [on-premises integration](../authentication/concept-sspr-howitworks.md#on-premises-integration).
---
-### Azure AD app-based Conditional Access
-
-**Type:** New feature
-**Service category:** Azure AD
-**Product capability:** Identity security and protection
-
-You now can restrict access to Office 365 and other Azure AD-connected cloud apps to [approved client apps](../conditional-access/concept-conditional-access-conditions.md#client-apps) that support Intune app protection policies by using [Azure AD app-based Conditional Access](../conditional-access/app-based-conditional-access.md). Intune app protection policies are used to configure and protect company data on these client applications.
-
-By combining [app-based](../conditional-access/app-based-conditional-access.md) with [device-based](../conditional-access/require-managed-devices.md) Conditional Access policies, you have the flexibility to protect data for personal and company devices.
-
-The following conditions and controls are now available for use with app-based Conditional Access:
-
-**Supported platform condition**
-- iOS
-- Android
-**Client apps condition**
-- Mobile apps and desktop clients
-**Access control**
-- Require approved client app
-For more information, see [Azure AD app-based Conditional Access](../conditional-access/app-based-conditional-access.md).
---
-### Manage Azure AD devices in the Azure portal
-
-**Type:** New feature
-**Service category:** Device registration and management
-**Product capability:** Identity security and protection
-
-You now can find all your devices connected to Azure AD and the device-related activities in one place. There is a new administration experience to manage all your device identities and settings in the Azure portal. In this release, you can:
-- View all your devices that are available for Conditional Access in Azure AD.
-- View properties, which include your hybrid Azure AD-joined devices.
-- Find BitLocker keys for your Azure AD-joined devices, manage your device with Intune, and more.
-- Manage Azure AD device-related settings.
-For more information, see [Manage devices by using the Azure portal](../devices/device-management-azure-portal.md).
---
-### Support for macOS as a device platform for Azure AD Conditional Access
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Identity security and protection
-
-You now can include (or exclude) macOS as a device platform condition in your Azure AD Conditional Access policy. With the addition of macOS to the supported device platforms, you can:
-- **Enroll and manage macOS devices by using Intune.** Similar to other platforms like iOS and Android, a company portal application is available for macOS to do unified enrollments. You can use the new company portal app for macOS to enroll a device with Intune and register it with Azure AD.
-- **Ensure macOS devices adhere to your organization's compliance policies defined in Intune.** In Intune on the Azure portal, you now can set up compliance policies for macOS devices.
-- **Restrict access to applications in Azure AD to only compliant macOS devices.** Conditional Access policy authoring has macOS as a separate device platform option. Now you can author macOS-specific Conditional Access policies for the targeted application set in Azure.
-For more information, see:
-- [Create a device compliance policy for macOS devices with Intune](/mem/intune/protect/compliance-policy-create-mac-os)
-- [Conditional Access in Azure AD](../conditional-access/overview.md)
---
-### Network Policy Server extension for Azure AD Multi-Factor Authentication
-
-**Type:** New feature
-**Service category:** Multifactor authentication
-**Product capability:** User authentication
-
-The Network Policy Server extension for Azure Active Directory (Azure AD) Multi-Factor Authentication adds cloud-based multifactor authentication capabilities to your authentication infrastructure by using your existing servers. With the Network Policy Server extension, you can add phone call, text message, or phone app verification to your existing authentication flow. You don't have to install, configure, and maintain new servers.
-
-This extension was created for organizations that want to protect virtual private network connections without deploying the Azure Active Directory Multi-Factor Authentication Server. The Network Policy Server extension acts as an adapter between RADIUS and cloud-based Azure AD Multi-Factor Authentication to provide a second factor of authentication for federated or synced users.
-
-For more information, see [Integrate your existing Network Policy Server infrastructure with Azure AD Multi-Factor Authentication](../authentication/howto-mfa-nps-extension.md).
---
-### Restore or permanently remove deleted users
-
-**Type:** New feature
-**Service category:** User management
-**Product capability:** Directory
-
-In the Azure AD admin center, you can now:
-- Restore a deleted user.
-- Permanently delete a user.
-**To try it out:**
-
-1. In the Azure AD admin center, select [All users](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/UserManagementMenuBlade/All) in the **Manage** section.
-
-2. From the **Show** list, select **Recently deleted users**.
-
-3. Select one or more recently deleted users, and then either restore them or permanently delete them.
---
-### New approved client apps for Azure AD app-based Conditional Access
-
-**Type:** Changed feature
-**Service category:** Conditional Access
-**Product capability:** Identity security and protection
-
-The following apps were added to the list of [approved client apps](../conditional-access/concept-conditional-access-conditions.md#client-apps):
-- Microsoft Planner
-- Azure Information Protection
-For more information, see:
-- [Approved client app requirement](../conditional-access/concept-conditional-access-conditions.md#client-apps)
-- [Azure AD app-based Conditional Access](../conditional-access/app-based-conditional-access.md)
---
-### Use "OR" between controls in a Conditional Access policy
-
-**Type:** Changed feature
-**Service category:** Conditional Access
-**Product capability:** Identity security and protection
-
-You now can use "OR" (require one of the selected controls) for Conditional Access controls. You can use this feature to create policies with "OR" between access controls. For example, you can use this feature to create a policy that requires a user to sign in by using multifactor authentication "OR" to be on a compliant device.
-
-For more information, see [Controls in Azure AD Conditional Access](../conditional-access/controls.md).
---
-### Aggregation of real-time risk detections
-
-**Type:** Changed feature
-**Service category:** Identity protection
-**Product capability:** Identity security and protection
-
-In Azure AD Identity Protection, all real-time risk detections that originated from the same IP address on a given day are now aggregated for each risk detection type. This change limits the volume of risk detections shown without any change in user security.
-
-The underlying real-time detection works each time the user signs in. If you have a sign-in risk security policy set up to require multifactor authentication or block access, it is still triggered during each risky sign-in.
active-directory How To Use Vm Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sdk.md
This article provides a list of SDK samples, which demonstrate use of their resp
| .NET Core | [Call Azure services from a Linux VM using managed identities for Azure resources](https://github.com/Azure-Samples/linuxvm-msi-keyvault-arm-dotnet/) |
| Go | [Azure identity client module for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ManagedIdentityCredential) |
| Node.js | [Manage resources using managed identities for Azure resources](https://github.com/Azure-Samples/resources-node-manage-resources-with-msi) |
-| Python | [Use managed identities for Azure resources to authenticate simply from inside a VM](https://azure.microsoft.com/resources/samples/resource-manager-python-manage-resources-with-msi/) |
+| Python | Use managed identities for Azure resources to authenticate simply from inside a VM |
| Ruby | [Manage resources from a VM with managed identities for Azure resources enabled](https://github.com/Azure-Samples/resources-ruby-manage-resources-with-msi/) |

## Next steps

- See [Azure SDKs](https://azure.microsoft.com/downloads/) for the full list of Azure SDK resources, including library downloads, documentation, and more.
- To enable managed identities for Azure resources on an Azure VM, see [Configure managed identities for Azure resources on a VM using the Azure portal](qs-configure-portal-windows-vm.md).
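As a small addition (not one of the listed SDK samples), every sample above ultimately relies on the VM's Instance Metadata Service; a minimal bash request for an Azure Resource Manager token from inside the VM looks like this:

```bash
# Request an access token for Azure Resource Manager from inside the VM
# via the Instance Metadata Service (IMDS); no credentials are stored.
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
```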
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
Title: Web Application Routing add-on on Azure Kubernetes Service (AKS) (Preview)
description: Use the Web Application Routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS).
Last updated 05/13/2021
# Web Application Routing (Preview)
-The Web Application Routing add-on configures an [Ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in your Azure Kubernetes Service (AKS) cluster with SSL termination through certificates stored in Azure Key Vault. Optionally, it also integrates with Open Service Mesh (OSM) for end-to-end encryption of inter cluster communication using mutual TLS (mTLS). As applications are deployed, the add-on creates publicly accessible DNS names for endpoints.
+The Web Application Routing add-on configures an [Ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in your Azure Kubernetes Service (AKS) cluster with SSL termination through certificates stored in Azure Key Vault. Optionally, it also integrates with Open Service Mesh (OSM) for end-to-end encryption of inter cluster communication using mutual TLS (mTLS). When you deploy ingresses, the add-on creates publicly accessible DNS names for endpoints on an Azure DNS zone.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-## Limitations
-
-- Web Application Routing currently doesn't support named ports in ingress backend.

## Web Application Routing add-on overview

The add-on deploys the following components:

- **[nginx ingress controller][nginx]**: The ingress controller exposed to the internet.
-- **[external-dns controller][external-dns]**: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone. Note that this is only deployed when you pass in the `--dns-zone-resource-id` argument.
+- **[external-dns controller][external-dns]**: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone. This controller is only deployed when you pass in the `--dns-zone-resource-id` argument.
## Prerequisites
az extension update --name aks-preview
### Create and export a self-signed SSL certificate (if you don't already own one)
-If you already have an SSL certificate, you can skip this step, otherwise you can use these commands to create a self-signed SSL certificate to use with the Ingress. You will need to replace *`<Hostname>`* with the DNS name that you will be using.
+If you already have an SSL certificate, you can skip this step, otherwise you can use these commands to create a self-signed SSL certificate to use with the Ingress. You need to replace *`<Hostname>`* with the DNS name that you are using.
```bash # Create a self-signed SSL certificate
az network dns zone create -g <ResourceGroupName> -n <ZoneName>
## Enable Web Application Routing via the Azure CLI
-The Web Application Routing routing add-on can be enabled with the Azure CLI when deploying an AKS cluster. To do so, use the [az aks create][az-aks-create] command with the `--enable-addons` argument. You can also enable Web Application Routing on an existing AKS cluster using the [az aks enable-addons][az-aks-enable-addons] command.
+The Web Application Routing add-on can be enabled with the Azure CLI when deploying an AKS cluster. To do so, use the [az aks create][az-aks-create] command with the `--enable-addons` argument. You can also enable Web Application Routing on an existing AKS cluster using the [az aks enable-addons][az-aks-enable-addons] command.
+
+# [Without Open Service Mesh (OSM)](#tab/without-osm)
+
+The following extra add-on is required:
+* **azure-keyvault-secrets-provider**: The Secret Store CSI provider for Azure Key Vault is required to retrieve the certificates from Azure Key Vault.
+
+> [!IMPORTANT]
+> To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature](./csi-secrets-store-driver.md#enable-and-disable-autorotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you can define. The default rotation poll interval is 2 minutes.
+
+```azurecli-interactive
+az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons azure-keyvault-secrets-provider,web_application_routing --generate-ssh-keys --enable-secret-rotation
+```
+
+To enable Web Application Routing on an existing cluster, add the `--addons` parameter and specify *web_application_routing* as shown in the following example:
+
+```azurecli-interactive
+az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyvault-secrets-provider,web_application_routing --enable-secret-rotation
+```
# [With Open Service Mesh (OSM)](#tab/with-osm)
-The following additional add-ons are required:
+The following extra add-ons are required:
* **azure-keyvault-secrets-provider**: The Secret Store CSI provider for Azure Key Vault is required to retrieve the certificates from Azure Key Vault.
* **open-service-mesh**: If you require encrypted intra cluster traffic (recommended) between the nginx ingress and your services, the Open Service Mesh add-on is required which provides mutual TLS (mTLS).
az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyv
> [!NOTE]
> To use the add-on with Open Service Mesh, you should install the `osm` command-line tool. This command-line tool contains everything needed to configure and manage Open Service Mesh. The latest binaries are available on the [OSM GitHub releases page][osm-release].
+# [With service annotations (retired)](#tab/service-annotations)
-# [Without Open Service Mesh (OSM)](#tab/without-osm)
+> [!WARNING]
+> Configuring ingresses by adding annotations on the Service object is retired. Please consider [configuring via an Ingress object](?tabs=without-osm).
-The following additional add-on is required:
+The following extra add-on is required:
* **azure-keyvault-secrets-provider**: The Secret Store CSI provider for Azure Key Vault is required to retrieve the certificates from Azure Key Vault.

> [!IMPORTANT]
az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-add
To enable Web Application Routing on an existing cluster, add the `--addons` parameter and specify *web_application_routing* as shown in the following example:

```azurecli-interactive
-az aks enable-addons-g <ResourceGroupName> -n <ClusterName> --addons azure-keyvault-secrets-provider,web_application_routing --enable-secret-rotation
+az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyvault-secrets-provider,web_application_routing --enable-secret-rotation
```

## Retrieve the add-on's managed identity object ID
-Retrieve user managed identity object ID for the add-on. This will be used in the next steps to grant permissions against the Azure DNS zone and the Azure Key Vault. Provide your *`<ResourceGroupName>`*, *`<ClusterName>`*, and *`<Location>`* in the script below which will retrieve the managed identity's object ID.
+Retrieve user managed identity object ID for the add-on. This identity is used in the next steps to grant permissions to manage the Azure DNS zone and retrieve certificates from the Azure Key Vault. Provide your *`<ResourceGroupName>`*, *`<ClusterName>`*, and *`<Location>`* in the script to retrieve the managed identity's object ID.
```azurecli-interactive # Provide values for your environment
MANAGEDIDENTITY_OBJECTID=$(az resource show --id $USERMANAGEDIDENTITY_RESOURCEID
## Configure the add-on to use Azure DNS to manage creating DNS zones
-If you are going to use Azure DNS, update the add-on to pass in the `--dns-zone-resource-id`.
+If you're going to use Azure DNS, update the add-on to pass in the `--dns-zone-resource-id`.
Retrieve the resource ID for the DNS zone.
Grant **DNS Zone Contributor** permissions on the DNS zone to the add-on's manag
az role assignment create --role "DNS Zone Contributor" --assignee $MANAGEDIDENTITY_OBJECTID --scope $ZONEID ```
-Update the add-on to enable the integration with Azure DNS. This will create the **external-dns** controller.
+Update the add-on to enable the integration with Azure DNS. This command installs the **external-dns** controller.
```azurecli-interactive az aks addon update -g <ResourceGroupName> -n <ClusterName> --addon web_application_routing --dns-zone-resource-id=$ZONEID
az aks addon update -g <ResourceGroupName> -n <ClusterName> --addon web_applicat
## Grant the add-on permissions to retrieve certificates from Azure Key Vault
-The Web Application Routing add-on creates a user created managed identity in the cluster resource group. This managed identity will need to be granted permissions to retrieve SSL certificates from the Azure Key Vault.
+The Web Application Routing add-on creates a user created managed identity in the cluster resource group. This managed identity needs to be granted permissions to retrieve SSL certificates from the Azure Key Vault.
Grant `GET` permissions for the Web Application Routing add-on to retrieve certificates from Azure Key Vault: ```azurecli-interactive
az aks get-credentials -g <ResourceGroupName> -n <ClusterName>
## Deploy an application

Web Application Routing uses annotations on Kubernetes Ingress objects to create the appropriate resources, create records on Azure DNS (when configured), and retrieve the SSL certificates from Azure Key Vault.
+# [Without Open Service Mesh (OSM)](#tab/without-osm)
+
+### Create the application namespace
+
+For the sample application environment, let's first create a namespace called `hello-web-app-routing` to run the example pods:
+
+```bash
+kubectl create namespace hello-web-app-routing
+```
+
+### Create the deployment
+
+Create a file named **deployment.yaml** and copy in the following YAML.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aks-helloworld
+ template:
+ metadata:
+ labels:
+ app: aks-helloworld
+ spec:
+ containers:
+ - name: aks-helloworld
+ image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: TITLE
+ value: "Welcome to Azure Kubernetes Service (AKS)"
+```
+
+### Create the service
+
+Create a file named **service.yaml** and copy in the following YAML.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+spec:
+ type: ClusterIP
+ ports:
+ - port: 80
+ selector:
+ app: aks-helloworld
+```
+
+### Create the ingress
+
+The Web Application Routing add-on creates an Ingress class on the cluster called `webapprouting.kubernetes.azure.com`. When you create an ingress object with this class, it activates the add-on. To obtain the certificate URI to use in the Ingress from Azure Key Vault, run the following command.
+
+```azurecli-interactive
+az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
+```
+
+Create a file named **ingress.yaml** and copy in the following YAML.
+
+> [!NOTE]
+> Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. `secretName` is the name of the secret that's going to be generated to store the certificate. This is the certificate that's going to be presented in the browser.
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ annotations:
+ kubernetes.azure.com/tls-cert-keyvault-uri: <KeyVaultCertificateUri>
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+spec:
+ ingressClassName: webapprouting.kubernetes.azure.com
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - backend:
+ service:
+ name: aks-helloworld
+ port:
+ number: 80
+ path: /
+ pathType: Prefix
+ tls:
+ - hosts:
+ - <Hostname>
+ secretName: keyvault-aks-helloworld
+```
+
+### Create the resources on the cluster
+
+Use the [kubectl apply][kubectl-apply] command to create the resources.
+
+```bash
+kubectl apply -f deployment.yaml -n hello-web-app-routing
+kubectl apply -f service.yaml -n hello-web-app-routing
+kubectl apply -f ingress.yaml -n hello-web-app-routing
+```
+
+The following example output shows the created resources:
+
+```bash
+deployment.apps/aks-helloworld created
+service/aks-helloworld created
+ingress.networking.k8s.io/aks-helloworld created
+```
+ # [With Open Service Mesh (OSM)](#tab/with-osm)
spec:
### Create the ingress
-The Web Application Routing add-on creates an Ingress class on the cluster called `webapprouting.kubernetes.azure.com `. When you create an ingress object with this class, this will activate the add-on. To obtain the certificate URI to use in the Ingress from Azure Key Vault, run the following command.
+The Web Application Routing add-on creates an Ingress class on the cluster called `webapprouting.kubernetes.azure.com`. When you create an ingress object with this class, it activates the add-on. To obtain the certificate URI to use in the Ingress from Azure Key Vault, run the following command.
```azurecli-interactive az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
spec:
### Create the ingress backend
-Open Service Mesh (OSM) leverages its [IngressBackend API](https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/ingress/#ingressbackend-api) to configure a backend service to accept ingress traffic from trusted sources. To proxy connections to HTTPS backends, we will configure the Ingress and IngressBackend configurations to use https as the backend protocol, and have OSM issue a certificate that Nginx will use as the client certificate to proxy HTTPS connections to TLS backends. The client certificate and CA certificate will be stored in a Kubernetes secret that Nginx will use to authenticate service mesh backends. For more information, refer to [Open Service Mesh: Ingress with Kubernetes Nginx Ingress Controller](https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/).
+Open Service Mesh (OSM) uses its [IngressBackend API](https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/ingress/#ingressbackend-api) to configure a backend service to accept ingress traffic from trusted sources. To proxy connections to HTTPS backends, you configure the Ingress and IngressBackend configurations to use https as the backend protocol. OSM issues a certificate that Nginx will use as the client certificate to proxy HTTPS connections to TLS backends. The client certificate and CA certificate are stored in a Kubernetes secret that Nginx will use to authenticate service mesh backends. For more information, see [Open Service Mesh: Ingress with Kubernetes Nginx Ingress Controller](https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/).
Create a file named **ingressbackend.yaml** and copy in the following YAML.
ingress.networking.k8s.io/aks-helloworld created
ingressbackend.policy.openservicemesh.io/aks-helloworld created ```
-# [Without Open Service Mesh (OSM)](#tab/without-osm)
+# [With service annotations (retired)](#tab/service-annotations)
+
+> [!WARNING]
+> Configuring ingresses by adding annotations on the Service object is retired. Please consider [configuring via an Ingress object](?tabs=without-osm).
### Create the application namespace
apiVersion: apps/v1
kind: Deployment metadata: name: aks-helloworld
+ namespace: hello-web-app-routing
spec: replicas: 1 selector:
spec:
value: "Welcome to Azure Kubernetes Service (AKS)" ```
-### Create the service
+### Create the service with the annotations (retired)
Create a file named **service.yaml** and copy in the following YAML.
+> [!NOTE]
+> Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. This is the certificate that's going to be presented in the browser.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld
+ namespace: hello-web-app-routing
+ annotations:
+ kubernetes.azure.com/ingress-host: <Hostname>
+ kubernetes.azure.com/tls-cert-keyvault-uri: <KeyVaultCertificateUri>
spec: type: ClusterIP ports:
spec:
app: aks-helloworld ```
-### Create the ingress
-
-The Web Application Routing add-on creates an Ingress class on the cluster called `webapprouting.kubernetes.azure.com `. When you create an ingress object with this class, this will activate the add-on. To obtain the certificate URI to use in the Ingress from Azure Key Vault, run the following command.
-
-```azurecli-interactive
-az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
-```
-
-Create a file named **ingress.yaml** and copy in the following YAML.
-
-> [!NOTE]
-> Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. `secretName` is the name of the secret that going to be generated to store the certificate. This is the certificate that's going to be presented in the browser.
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
- annotations:
- kubernetes.azure.com/tls-cert-keyvault-uri: <KeyVaultCertificateUri>
- name: aks-helloworld
- namespace: hello-web-app-routing
-spec:
- ingressClassName: webapprouting.kubernetes.azure.com
- rules:
- - host: <Hostname>
- http:
- paths:
- - backend:
- service:
- name: aks-helloworld
- port:
- number: 80
- path: /
- pathType: Prefix
- tls:
- - hosts:
- - <Hostname>
- secretName: keyvault-aks-helloworld
-```
### Create the resources on the cluster
Use the [kubectl apply][kubectl-apply] command to create the resources.
```bash
kubectl apply -f deployment.yaml -n hello-web-app-routing
kubectl apply -f service.yaml -n hello-web-app-routing
-kubectl apply -f ingress.yaml -n hello-web-app-routing
```
The following example output shows the created resources:
```bash
deployment.apps/aks-helloworld created
service/aks-helloworld created
-ingress.networking.k8s.io/aks-helloworld created
```
aks-helloworld webapprouting.kubernetes.azure.com myapp.contoso.com 20.51.
## Accessing the endpoint over a DNS hostname
-If you have not configured Azure DNS integration, you will need to configure your own DNS provider with an **A record** pointing to the ingress IP address and the host name you configured for the ingress, for example *myapp.contoso.com*.
+If you haven't configured Azure DNS integration, you need to configure your own DNS provider with an **A record** pointing to the ingress IP address and the host name you configured for the ingress, for example *myapp.contoso.com*.
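If your zone happens to be hosted in Azure DNS but isn't integrated with the add-on, a hedged sketch of creating that A record with the Azure CLI might look like the following; the resource group, zone name, record name, and ingress name are placeholders.

```bash
# Look up the ingress public IP, then create an A record pointing at it.
INGRESS_IP=$(kubectl get ingress aks-helloworld -n hello-web-app-routing \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
az network dns record-set a add-record -g <ResourceGroupName> \
  -z contoso.com -n myapp --ipv4-address "$INGRESS_IP"
```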
## Remove Web Application Routing
First, remove the associated namespace:
kubectl delete namespace hello-web-app-routing ```
-The Web Application Routing add-on can be removed using the Azure CLI. To do so run the following command, substituting your AKS cluster and resource group name. Be careful if you already have some of the other add-ons (open-service-mesh or azure-keyvault-secrets-provider) enabled on your cluster so that you don't accidentally disable them.
+You can remove the Web Application Routing add-on using the Azure CLI. To do so run the following command, substituting your AKS cluster and resource group name. Be careful if you already have some of the other add-ons (open-service-mesh or azure-keyvault-secrets-provider) enabled on your cluster so that you don't accidentally disable them.
```azurecli az aks disable-addons --addons web_application_routing --name myAKSCluster --resource-group myResourceGroup
api-management Api Management Application Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-application-templates.md
Last updated 11/04/2019
# Application templates in Azure API Management
-Azure API Management provides you the ability to customize the content of developer portal pages using a set of templates that configure their content. Using [DotLiquid](http://dotliquidmarkup.org/) syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit using these templates.
+Azure API Management provides you the ability to customize the content of developer portal pages using a set of templates that configure their content. Using DotLiquid syntax and the editor of your choice, such as [DotLiquid for Designers](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers), and a provided set of localized [String resources](api-management-template-resources.md#strings), [Glyph resources](api-management-template-resources.md#glyphs), and [Page controls](api-management-page-controls.md), you have great flexibility to configure the content of the pages as you see fit using these templates.
The templates in this section allow you to customize the content of the Application pages in the developer portal.
api-management Api Management Howto Add Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-add-products.md
After you publish a product, developers can access the APIs. Depending on how th
When a client makes an API request without a subscription key:
- * API Management checks whether the API is associated with an open product.
+ * API Management checks whether the API is associated with an open product. An API can be associated with at most one open product.
* If the open product exists, it then processes the request in the context of that open product. Policies and access control rules configured for the open product can be applied.
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
When API Management receives an API request from a client with a subscription ke
When API Management receives an API request from a client without a subscription key, it handles the request according to these rules:
-1. Check first for the existence of a product that includes the API but doesn't require a subscription (an *open* product). If the open product exists, handle the request in the context of the APIs, policies, and access rules configured for the product.
+1. Check first for the existence of a product that includes the API but doesn't require a subscription (an *open* product). If the open product exists, handle the request in the context of the APIs, policies, and access rules configured for the product. An API can be associated with at most one open product.
1. If an open product including the API isn't found, check whether the API requires a subscription. If a subscription isn't required, handle the request in the context of that API and operation.
1. If no configured product or API is found, then access is denied (401 Access denied error).
The following table summarizes how the gateway handles API requests with or with
|❌<sup>1</sup> | ✔️ | Access allowed:<br/><br/>• Product-scoped key<br/>• API-scoped key<br/>• All APIs-scoped key<br/>• Service-scoped key<br/><br/>Access denied:<br/><br/>• Other key not scoped to applicable product or API | Access allowed (open product context) | • Protected API access with API-scoped subscription<br/><br/>• Anonymous access to API. If anonymous access isn’t intended, configure with product policies to enforce authentication and authorization |
|❌<sup>1</sup> | ❌ | Access allowed:<br/><br/>• Product-scoped key<br/>• API-scoped key<br/>• All APIs-scoped key<br/>• Service-scoped key<br/><br/>Access denied:<br/><br/>• Other key not scoped to applicable product or API | Access allowed (open product context) | Anonymous access to API. If anonymous access isn’t intended, configure with product policies to enforce authentication and authorization |
-<sup>1</sup> An open product exists.
+<sup>1</sup> An open product exists that's associated with the API.
### Considerations
app-service Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/language-support-policy.md
This document describes the App Service language runtime support policy for updating existing stacks and the retirement process for upcoming end-of-life stacks. This policy is to clarify existing practices and doesn't represent a change to customer commitments.

## Updates to existing stacks
-App Service will update existing stacks after they become available from each community. App Service will update major versions of stacks but can't guarantee any specific patch versions. Patch versions are controlled by the platform, and it is not possible for App Service to pin a specific patch version. For example, Python 3.10 will be updated by App Service, but a specific Python 3.10.x version won't be guaranteed. If you need a specific patch version, use a [custom container](quickstart-custom-container.md).
+App Service will update existing stacks after they become available from each community. App Service will update major versions of stacks but can't guarantee any specific patch versions. Patch versions are controlled by the platform, and it isn't possible for App Service to pin a specific patch version. For example, Python 3.10 will be updated by App Service, but a specific Python 3.10.x version won't be guaranteed. If you need a specific patch version, use a [custom container](quickstart-custom-container.md).
## Retirements
-App Service follows community support timelines for the lifecycle of the runtime. Once community support for a given language reaches end-of-life, your applications will continue to run unchanged. However, App Service cannot provide security patches or related customer support for that runtime version past its end-of-life date. If your application has any issues past the end-of-life date for that version, you should move up to a supported version to receive the latest security patches and features.
+App Service follows community support timelines for the lifecycle of the runtime. Once community support for a given language reaches end-of-life, your applications will continue to run unchanged. However, App Service can't provide security patches or related customer support for that runtime version past its end-of-life date. If your application has any issues past the end-of-life date for that version, you should move up to a supported version to receive the latest security patches and features.
> [!IMPORTANT]
> You're encouraged to upgrade the language version of your affected apps to a supported version. If you're running apps using an unsupported language version, you'll be required to upgrade before receiving support for your app.
App Service follows community support timelines for the lifecycle of the runtime
## Notifications

End-of-life dates for runtime versions are determined independently by their respective stacks and are outside the control of App Service. App Service will send reminder notifications to subscription owners for upcoming end-of-life runtime versions 12 months prior to the end-of-life date.
-Those who receive notifications include account administrators, service administrators, and co-administrators. Contributors, readers, or other roles won't directly receive notifications, unless they opt-in to receive notification emails, using [Service Health Alerts](/service-health/alerts-activity-log-service-notifications-portal.md).
+Those who receive notifications include account administrators, service administrators, and co-administrators. Contributors, readers, and other roles won't directly receive notifications unless they opt in to notification emails by using [Service Health Alerts](/azure/service-health/alerts-activity-log-service-notifications-portal).
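+
+As a sketch of one way to opt in, the following Azure CLI command creates an activity log alert scoped to service health events. The alert name, resource group, and action group ID here are placeholders, not values from this article:
+
+```azurecli-interactive
+# Hypothetical names; replace with your own resource group and action group resource ID
+az monitor activity-log alert create \
+    --name runtime-eol-service-health \
+    --resource-group myresourcegroup \
+    --condition category=ServiceHealth \
+    --action-group "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/microsoft.insights/actionGroups/myActionGroup"
+```
+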
## Language runtime version support timelines
To learn more about specific language support policy timelines, visit the following resources:
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
The following environment variables are related to the app environment in genera
| `WEBSITE_CONTENTSHARE` | When you specify a custom storage account with `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`, App Service creates a file share in that storage account for your app. To use a custom name, set this variable to the name you want. If a file share with the specified name doesn't exist, App Service creates it for you. | `myapp123` |
| `WEBSITE_SCM_ALWAYS_ON_ENABLED` | Read-only. Shows whether Always On is enabled (`1`) or not (`0`). ||
| `WEBSITE_SCM_SEPARATE_STATUS` | Read-only. Shows whether the Kudu app is running in a separate process (`1`) or not (`0`). ||
+| `WEBSITE_DNS_ATTEMPTS` | Number of times to attempt name resolution. ||
+| `WEBSITE_DNS_TIMEOUT` | Number of seconds to wait for name resolution. ||
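+
+These DNS settings are regular app settings, so you can apply them with the Azure CLI. A minimal sketch with assumed values; tune the numbers for your environment:
+
+```azurecli-interactive
+# Example: retry name resolution up to 3 times, waiting 10 seconds each attempt
+az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITE_DNS_ATTEMPTS=3 WEBSITE_DNS_TIMEOUT=10
+```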
<!-- WEBSITE_PROACTIVE_STACKTRACING_ENABLED WEBSITE_CLOUD_NAME
app-service Tutorial Secure Ntier App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-secure-ntier-app.md
+
+ Title: 'Tutorial: Create a secure N-tier web app'
+description: Learn how to securely deploy your N-tier web app to Azure App Service.
+ Last updated: 2/25/2023
+# Tutorial: Create a secure N-tier app in Azure App Service
+
+Many applications have more than a single component. For example, you may have a front end that is publicly accessible and connects to a back-end database, storage account, key vault, another VM, or a combination of these resources. This architecture makes up an N-tier application. It's important that applications like this are architected to protect back-end resources to the greatest extent possible.
+
+In this tutorial, you learn how to deploy a secure N-tier application, with a front-end web app that connects to another network-isolated web app. All traffic is isolated within your Azure Virtual Network using [Virtual Network integration](overview-vnet-integration.md) and [private endpoints](networking/private-endpoint.md). For more comprehensive guidance that includes other scenarios, see:
+
+- [Multi-region N-tier application](/azure/architecture/reference-architectures/n-tier/multi-region-sql-server)
+- [Reliable web app pattern planning (.NET)](/azure/architecture/reference-architectures/reliable-web-app/dotnet/pattern-overview)
+
+## Scenario architecture
+
+The following diagram shows the architecture you'll create during this tutorial.
++
+- **Virtual network** Contains two subnets: one integrated with the front-end web app, and one with a private endpoint for the back-end web app. The virtual network blocks all inbound network traffic, except for the front-end app that's integrated with it.
+- **Front-end web app** Integrated into the virtual network and accessible from the public internet.
+- **Back-end web app** Accessible only through the private endpoint in the virtual network.
+- **Private endpoint** Integrates with the back-end web app and makes the web app accessible with a private IP address.
+- **Private DNS zone** Lets you resolve a DNS name to the private endpoint's IP address.
+
+> [!NOTE]
+> Virtual network integration and private endpoints are available all the way down to the **Basic** tier in App Service. The **Free** tier doesn't support these features.
+
+With this architecture:
+
+- Public traffic to the back-end app is blocked.
+- Outbound traffic from App Service is routed to the virtual network and can reach the back-end app.
+- App Service is able to perform DNS resolution to the back-end app.
+
+This scenario shows one of the possible N-tier scenarios in App Service. You can use the concepts covered in this tutorial to build more complex N-tier apps.
+
+What you'll learn:
+
+> [!div class="checklist"]
+> * Create a virtual network and subnets for App Service virtual network integration.
+> * Create private DNS zones.
+> * Create private endpoints.
+> * Configure virtual network integration in App Service.
+> * Disable basic auth in App Service.
+> * Continuously deploy to a locked-down back-end web app.
+
+## Prerequisites
+
+The tutorial uses two sample Node.js apps that are hosted on GitHub. If you don't already have a GitHub account, [create an account for free](https://github.com/).
++
+To complete this tutorial:
++
+## 1. Create two instances of a web app
+
+You need two instances of a web app: one for the front end and one for the back end. You need to use at least the **Basic** tier in order to use virtual network integration and private endpoints. You'll configure virtual network integration and other settings later on.
+
+1. Create a resource group to manage all of the resources you're creating in this tutorial.
+
+ ```azurecli-interactive
+ # Save resource group name and region as variables for convenience
+ groupName=myresourcegroup
+ region=eastus
+ az group create --name $groupName --location $region
+ ```
+
+1. Create an App Service plan. Replace `<app-service-plan-name>` with a unique name. Modify the `--sku` parameter if you need to use a different SKU. Ensure that you aren't using the free tier since that SKU doesn't support the required networking features.
+
+ ```azurecli-interactive
+ # Save App Service plan name as a variable for convenience
+ aspName=<app-service-plan-name>
+ az appservice plan create --name $aspName --resource-group $groupName --is-linux --location $region --sku P1V3
+ ```
+
+1. Create the web apps. Replace `<frontend-app-name>` and `<backend-app-name>` with two globally unique names (valid characters are `a-z`, `0-9`, and `-`). For this tutorial, you're provided with sample Node.js apps. If you'd like to use your own apps, change the `--runtime` parameter accordingly. Run `az webapp list-runtimes` for the list of available runtimes.
+
+ ```azurecli-interactive
+ az webapp create --name <frontend-app-name> --resource-group $groupName --plan $aspName --runtime "NODE:18-lts"
+ az webapp create --name <backend-app-name> --resource-group $groupName --plan $aspName --runtime "NODE:18-lts"
+ ```
+
+## 2. Create network infrastructure
+
+You'll create the following network resources:
+
+- A virtual network.
+- A subnet for the App Service virtual network integration.
+- A subnet for the private endpoint.
+- A private DNS zone.
+- A private endpoint.
+
+1. Create a *virtual network*. Replace `<virtual-network-name>` with a unique name.
+
+ ```azurecli-interactive
+ # Save vnet name as variable for convenience
+ vnetName=<virtual-network-name>
+ az network vnet create --resource-group $groupName --location $region --name $vnetName --address-prefixes 10.0.0.0/16
+ ```
+
+1. Create a *subnet for the App Service virtual network integration*.
+
+ ```azurecli-interactive
+ az network vnet subnet create --resource-group $groupName --vnet-name $vnetName --name vnet-integration-subnet --address-prefixes 10.0.0.0/24 --delegations Microsoft.Web/serverfarms --disable-private-endpoint-network-policies false
+ ```
+
+ For App Service, the virtual network integration subnet is recommended to [have a CIDR block of `/26` at a minimum](overview-vnet-integration.md#subnet-requirements). `/24` is more than sufficient. `--delegations Microsoft.Web/serverfarms` specifies that the subnet is [delegated for App Service virtual network integration](../virtual-network/subnet-delegation-overview.md).
+
+1. Create another *subnet for the private endpoints*.
+
+ ```azurecli-interactive
+ az network vnet subnet create --resource-group $groupName --vnet-name $vnetName --name private-endpoint-subnet --address-prefixes 10.0.1.0/24 --disable-private-endpoint-network-policies true
+ ```
+
+ For private endpoint subnets, you must [disable private endpoint network policies](../private-link/disable-private-endpoint-network-policy.md) by setting `--disable-private-endpoint-network-policies` to `true`.
+
+1. Create the *private DNS zone*.
+
+ ```azurecli-interactive
+ az network private-dns zone create --resource-group $groupName --name privatelink.azurewebsites.net
+ ```
+
+ For more information on these settings, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration).
+
+ > [!NOTE]
+ > If you create the private endpoint using the portal, a private DNS zone is created automatically for you and you don't need to create it separately. For consistency with this tutorial, you create the private DNS zone and private endpoint separately using the Azure CLI.
+
+1. Link the private DNS zone to the virtual network.
+
+ ```azurecli-interactive
+ az network private-dns link vnet create --resource-group $groupName --name myDnsLink --zone-name privatelink.azurewebsites.net --virtual-network $vnetName --registration-enabled False
+ ```
+
+1. In the private endpoint subnet of your virtual network, create a *private endpoint* for your backend web app. Replace `<backend-app-name>` with your backend web app name.
+
+ ```azurecli-interactive
+ # Get backend web app resource ID
+ resourceId=$(az webapp show --resource-group $groupName --name <backend-app-name> --query id --output tsv)
+ az network private-endpoint create --resource-group $groupName --name myPrivateEndpoint --location $region --connection-name myConnection --private-connection-resource-id $resourceId --group-id sites --vnet-name $vnetName --subnet private-endpoint-subnet
+ ```
+
+1. Link the private endpoint to the private DNS zone with a DNS zone group for the back-end web app private endpoint. The DNS zone group auto-updates the private DNS zone when there's an update to the private endpoint.
+
+ ```azurecli-interactive
+ az network private-endpoint dns-zone-group create --resource-group $groupName --endpoint-name myPrivateEndpoint --name myZoneGroup --private-dns-zone privatelink.azurewebsites.net --zone-name privatelink.azurewebsites.net
+ ```
+
+1. When you create a private endpoint for an App Service app, public access is implicitly disabled. If you try to access your back-end web app through its default URL, access is denied. From a browser, navigate to `<backend-app-name>.azurewebsites.net` to confirm this behavior, or use the `curl` check that follows these steps.
+
+ :::image type="content" source="./media/tutorial-secure-ntier-app/backend-app-service-forbidden.png" alt-text="Screenshot of 403 error when trying to access backend web app directly.":::
+
+ For more information on App Service access restrictions with private endpoints, see [Azure App Service access restrictions](overview-access-restrictions.md#app-access).
+
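+You can also confirm the blocked access from a terminal rather than a browser. This is a quick check using the same app name placeholder; the exact response body can vary:
+
+```bash
+# Expect an HTTP 403 (Forbidden) because the private endpoint disables public access
+curl -I https://<backend-app-name>.azurewebsites.net
+```
+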
+## 3. Configure virtual network integration in your frontend web app
+
+Enable virtual network integration on your app. Replace `<frontend-app-name>` with your frontend web app name.
+
+```azurecli-interactive
+az webapp vnet-integration add --resource-group $groupName --name <frontend-app-name> --vnet $vnetName --subnet vnet-integration-subnet
+```
+
+Virtual network integration allows outbound traffic to flow directly into the virtual network. By default, only local IP traffic defined in [RFC-1918](https://tools.ietf.org/html/rfc1918#section-3) is routed to the virtual network, which is what you need for the private endpoints. To route all your traffic to the virtual network, see [Manage virtual network integration routing](configure-vnet-integration-routing.md). Routing all traffic is also useful if you want to send internet-bound traffic through your virtual network, for example through an [Azure Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) or an [Azure Firewall](../firewall/overview.md).
+
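+If you do want all outbound traffic routed into the virtual network, one way to enable it from the CLI is sketched below. This step is optional and isn't required for this tutorial; the front-end app name is a placeholder:
+
+```azurecli-interactive
+# Optional: route all outbound traffic (not just RFC-1918 ranges) through the virtual network
+az webapp config set --resource-group $groupName --name <frontend-app-name> --vnet-route-all-enabled true
+```
+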
+## 4. Enable deployment to back-end web app from internet
+
+Since your backend web app isn't publicly accessible, you must let your continuous deployment tool reach your app by [making the SCM site publicly accessible](app-service-ip-restrictions.md#restrict-access-to-an-scm-site). The main web app itself can continue to deny all traffic.
+
+1. Enable public access for the back-end web app.
+
+ ```azurecli-interactive
+ az webapp update --resource-group $groupName --name <backend-app-name> --set publicNetworkAccess=Enabled
+ ```
+
+1. Set the unmatched rule action for the main web app to deny all traffic. This setting denies public access to the main web app even though the general app access setting is set to allow public access.
+
+ ```azurecli-interactive
+ az resource update --resource-group $groupName --name <backend-app-name> --namespace Microsoft.Web --resource-type sites --set properties.siteConfig.ipSecurityRestrictionsDefaultAction=Deny
+ ```
+
+1. Set the unmatched rule action for the SCM site to allow all traffic.
+
+ ```azurecli-interactive
+ az resource update --resource-group $groupName --name <backend-app-name> --namespace Microsoft.Web --resource-type sites --set properties.siteConfig.scmIpSecurityRestrictionsDefaultAction=Allow
+ ```
+
+## 5. Lock down FTP and SCM access
+
+Now that the back-end SCM site is publicly accessible, you need to lock it down by disabling less secure access methods, such as FTP and basic auth.
+
+1. Disable FTP access for both the front-end and back-end web apps. Replace `<frontend-app-name>` and `<backend-app-name>` with your app names.
+
+ ```azurecli-interactive
+ az resource update --resource-group $groupName --name ftp --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<frontend-app-name> --set properties.allow=false
+ az resource update --resource-group $groupName --name ftp --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<backend-app-name> --set properties.allow=false
+ ```
+
+1. Disable basic auth access to the WebDeploy ports and SCM/advanced tool sites for both web apps. Replace `<frontend-app-name>` and `<backend-app-name>` with your app names.
+
+ ```azurecli-interactive
+ az resource update --resource-group $groupName --name scm --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<frontend-app-name> --set properties.allow=false
+ az resource update --resource-group $groupName --name scm --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<backend-app-name> --set properties.allow=false
+ ```
+
+Disabling basic auth limits access to the FTP and SCM endpoints to users that are backed by Azure Active Directory, which further secures your apps. For more information on disabling basic auth, including how to test and monitor sign-ins, see [Disabling basic auth on App Service](https://azure.github.io/AppService/2020/08/10/securing-data-plane-access.html).
+
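+To verify that the policies took effect, you can read them back. A minimal check using the same placeholders as above; `properties.allow` should return `false` for both the `ftp` and `scm` policies:
+
+```azurecli-interactive
+az resource show --resource-group $groupName --name scm --namespace Microsoft.Web --resource-type basicPublishingCredentialsPolicies --parent sites/<backend-app-name> --query properties.allow
+```
+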
+## 6. Configure continuous deployment using GitHub Actions
+
+1. Navigate to the [Node.js backend sample app](https://github.com/seligj95/nodejs-backend). This app is a simple Hello World app.
+1. Select the **Fork** button in the upper right on the GitHub page.
+1. Select the **Owner** and leave the default Repository name.
+1. Select **Create fork**.
+1. Repeat the same process for the [Node.js frontend sample app](https://github.com/seligj95/nodejs-frontend). This app is a basic web app that accesses a remote URL.
+
+1. Create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object). Replace `<subscription-id>`, `<frontend-app-name>`, and `<backend-app-name>` with your values.
+
+ ```azurecli-interactive
+ az ad sp create-for-rbac --name "myApp" --role contributor --scopes /subscriptions/<subscription-id>/resourceGroups/$groupName/providers/Microsoft.Web/sites/<frontend-app-name> /subscriptions/<subscription-id>/resourceGroups/$groupName/providers/Microsoft.Web/sites/<backend-app-name> --sdk-auth
+ ```
+
+ The output is a JSON object with the role assignment credentials that provide access to your App Service apps. Copy this JSON object for the next step. It includes your client secret, which is only visible at this time. It's always a good practice to grant minimum access. The scope in this example is limited to just the apps, not the entire resource group.
+
+1. To store the service principal's credentials as GitHub secrets, go to one of the forked sample repositories in GitHub and go to **Settings** > **Security** > **Secrets and variables** > **Actions**.
+1. Select **New repository secret** and create a secret for each of the following values. You can find the values in the JSON output you copied earlier. (If you prefer the command line, see the GitHub CLI sketch at the end of this section.)
+
+ |Name |Value |
+ |--|--|
+ |AZURE_APP_ID |`<application/client-id>` |
+ |AZURE_PASSWORD |`<client-secret>` |
+ |AZURE_TENANT_ID |`<tenant-id>` |
+ |AZURE_SUBSCRIPTION_ID |`<subscription-id>` |
+
+1. Repeat this process for the other forked sample repository.
+
+1. To set up continuous deployment with GitHub Actions, sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to the **Overview** page for your front-end web app.
+1. In the left pane, select **Deployment Center**. Then select **Settings**.
+1. In the **Source** box, select **GitHub** from the CI/CD options.
+
+ :::image type="content" source="media/app-service-continuous-deployment/choose-source.png" alt-text="Screenshot that shows how to choose the deployment source.":::
+
+1. If you're deploying from GitHub for the first time, select **Authorize** and follow the authorization prompts. If you want to deploy from a different user's repository, select **Change Account**.
+
+1. If you're using the Node.js sample app that was forked as part of the prerequisites, use the following settings for **Organization**, **Repository**, and **Branch**.
+
+ |Setting |Value |
+ |--|--|
+ |Organization |`<your-GitHub-organization>` |
+ |Repository |nodejs-frontend |
+ |Branch |main |
+
+1. Select **Save**.
+
+1. Repeat the same steps for your back-end web app. The Deployment Center settings are given in the following table.
+
+ |Setting |Value |
+ |--|--|
+ |Organization |`<your-GitHub-organization>` |
+ |Repository |nodejs-backend |
+ |Branch |main |
+
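+As an alternative to creating the repository secrets through the GitHub UI earlier in this section, you can use the GitHub CLI. This is a sketch that assumes you've installed and authenticated `gh`; repeat it for each secret and for both forked repositories:
+
+```bash
+# Hypothetical example for one secret in the front-end repository
+gh secret set AZURE_APP_ID --repo <your-GitHub-organization>/nodejs-frontend --body "<application/client-id>"
+```
+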
+## 7. Use a service principal for GitHub Actions deployment
+
+Your Deployment Center configuration has created a default workflow file in each of your sample repositories, but it uses a publish profile by default, which uses basic auth. Since you've disabled basic auth, if you check the **Logs** tab in Deployment Center, you'll see that the automatically triggered deployment results in an error. You must modify the workflow file to use the service principal to authenticate with App Service. For sample workflows, see [Deploy to App Service](deploy-github-actions.md?tabs=userlevel#deploy-to-app-service).
+
+1. Open one of your forked GitHub repositories and go to the `<repo-name>/.github/workflows/` directory.
+
+1. Select the auto-generated workflow file, and then select the pencil button in the upper right to edit the file. Replace the contents with the following text, which assumes you created the GitHub secrets earlier for your credentials. Update the placeholder for `<web-app-name>` under the `env` section, and then commit directly to the main branch. This commit triggers the GitHub Action to run again and deploy your code, this time using the service principal to authenticate.
+
+ ```yml
+ name: Build and deploy Node.js app to Azure Web App
+
+ on:
+ push:
+ branches:
+ - main
+ workflow_dispatch:
+
+ env:
+ AZURE_WEBAPP_NAME: <web-app-name> # set this to your application's name
+ NODE_VERSION: '18.x' # set this to the node version to use
+ AZURE_WEBAPP_PACKAGE_PATH: '.' # set this to the path to your web app project, defaults to the repository root
+
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+
+ steps:
+ - uses: actions/checkout@v2
+
+ - name: Set up Node.js version
+ uses: actions/setup-node@v1
+ with:
+ node-version: ${{ env.NODE_VERSION }}
+
+ - name: npm install, build
+ run: |
+ npm install
+ npm run build --if-present
+
+ - name: Upload artifact for deployment job
+ uses: actions/upload-artifact@v2
+ with:
+ name: node-app
+ path: .
+
+ deploy:
+ runs-on: ubuntu-latest
+ needs: build
+ environment:
+   name: 'production'   # an environment name is required when you specify a url; adjust to your repo
+   url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
+
+ steps:
+ - name: Download artifact from build job
+ uses: actions/download-artifact@v2
+ with:
+ name: node-app
+ - uses: azure/login@v1
+ with:
+ creds: |
+ {
+ "clientId": "${{ secrets.AZURE_APP_ID }}",
+ "clientSecret": "${{ secrets.AZURE_PASSWORD }}",
+ "subscriptionId": "${{ secrets.AZURE_SUBSCRIPTION_ID }}",
+ "tenantId": "${{ secrets.AZURE_TENANT_ID }}"
+ }
+
+ - name: 'Deploy to Azure Web App'
+ id: deploy-to-webapp
+ uses: azure/webapps-deploy@v2
+ with:
+ app-name: ${{ env.AZURE_WEBAPP_NAME }}
+ package: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
+
+ - name: logout
+ run: |
+ az logout
+ ```
+
+1. Repeat this process for the workflow file in your other forked GitHub repository.
+
+The new GitHub commits trigger another deployment for each of your apps. This time, deployment should succeed since the workflow uses the service principal to authenticate with the apps' SCM sites.
+
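+One way to spot-check the runs without opening the browser is the GitHub CLI; a sketch, assuming `gh` is installed and authenticated and that you forked the samples under your organization:
+
+```bash
+# List the most recent workflow runs for the front-end repository
+gh run list --repo <your-GitHub-organization>/nodejs-frontend --limit 3
+```
+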
+For detailed guidance on how to configure continuous deployment with providers such as GitHub Actions, see [Continuous deployment to Azure App Service](deploy-continuous-deployment.md).
+
+## 8. Validate connections and app access
+
+1. Browse to the front-end web app with its URL: `https://<frontend-app-name>.azurewebsites.net`.
+
+1. In the textbox, input the URL for your backend web app: `https://<backend-app-name>.azurewebsites.net`. If you set up the connections properly, you should get the message "Hello from the backend web app!", which is the entire content of the backend web app. All *outbound* traffic from the front-end web app is routed through the virtual network. Your frontend web app is securely connecting to your backend web app through the private endpoint. If something is wrong with your connections, your frontend web app crashes.
+
+1. Try navigating directly to your backend web app with its URL: `https://<backend-app-name>.azurewebsites.net`. You should see the message `Web App - Unavailable`. If you can reach the app, ensure you've configured the private endpoint and that the access restrictions for your app are set to deny all traffic for the main web app.
+
+1. To further validate that the frontend web app is reaching the backend web app over private link, SSH to one of your front end's instances. To SSH, run the following command, which establishes an SSH session to the web container of your app and opens a remote shell in your browser.
+
+ ```azurecli-interactive
+ az webapp ssh --resource-group $groupName --name <frontend-app-name>
+ ```
+
+1. When the shell opens in your browser, run `nslookup` to confirm your back-end web app is being reached using the private IP of your backend web app. You can also run `curl` to validate the site content again. Replace `<backend-app-name>` with your backend web app name.
+
+ ```bash
+ nslookup <backend-app-name>.azurewebsites.net
+ curl https://<backend-app-name>.azurewebsites.net
+ ```
+
+ :::image type="content" source="./media/tutorial-secure-ntier-app/frontend-ssh-validation.png" alt-text="Screenshot of SSH session showing how to validate app connections.":::
+
+ The `nslookup` should resolve to the private IP address of your back-end web app. The private IP address should be an address from your virtual network. To confirm your private IP, go to the **Networking** page for your back-end web app.
+
+ :::image type="content" source="./media/tutorial-secure-ntier-app/backend-app-service-inbound-ip.png" alt-text="Screenshot of App Service Networking page showing the inbound IP of the app.":::
+
+1. Repeat the same `nslookup` and `curl` commands from another terminal (one that isn't an SSH session on your front end's instances).
+
+ :::image type="content" source="./media/tutorial-secure-ntier-app/frontend-ssh-external.png" alt-text="Screenshot of an external terminal doing a nslookup and curl of the back-end web app.":::
+
+ The `nslookup` returns the public IP for the back-end web app. Since public access to the back-end web app is disabled, if you try to reach the public IP, you get an access denied error. This error means the site isn't accessible from the public internet, which is the intended behavior. The `nslookup` doesn't resolve to the private IP because the private IP can only be resolved from within the virtual network, through the private DNS zone, and only the front-end web app is within the virtual network. If you run `curl` on the back-end web app from the external terminal, the returned HTML contains `Web App - Unavailable`. This is the error page you saw earlier when you tried navigating to the back-end web app in your browser.
+
+## 9. Clean up resources
+
+In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell.
+
+```azurecli-interactive
+az group delete --name myresourcegroup
+```
+
+This command may take a few minutes to run.
+
+## Frequently asked questions
+
+- [Is there an alternative to deployment using a service principal?](#is-there-an-alternative-to-deployment-using-a-service-principal)
+- [What happens when I configure GitHub Actions deployment in App Service?](#what-happens-when-i-configure-github-actions-deployment-in-app-service)
+- [Is it safe to leave the back-end SCM publicly accessible?](#is-it-safe-to-leave-the-back-end-scm-publicly-accessible)
+- [Is there a way to deploy without opening up the back-end SCM site at all?](#is-there-a-way-to-deploy-without-opening-up-the-back-end-scm-site-at-all)
+- [How can I deploy this architecture with ARM/Bicep?](#how-can-i-deploy-this-architecture-with-armbicep)
+
+#### Is there an alternative to deployment using a service principal?
+
+Since you've [disabled basic auth](#5-lock-down-ftp-and-scm-access) in this tutorial, you can't authenticate with the back-end SCM site by using a username and password, nor with a publish profile. Instead of a service principal, you can also use [OpenID Connect](deploy-github-actions.md?tabs=openid#deploy-to-app-service).
+
+#### What happens when I configure GitHub Actions deployment in App Service?
+
+Azure auto-generates a workflow file in your repository. New commits in the selected repository and branch now deploy continuously into your App Service app. You can track the commits and deployments on the **Logs** tab.
+
+A default workflow file that uses a publish profile to authenticate to App Service is added to your GitHub repository. You can view this file by going to the `<repo-name>/.github/workflows/` directory.
+
+#### Is it safe to leave the back-end SCM publicly accessible?
+
+When you [lock down FTP and SCM access](#5-lock-down-ftp-and-scm-access), only principals backed by Azure AD can access the SCM endpoint, even though it's publicly accessible. This restriction keeps your back-end web app secure.
+
+#### Is there a way to deploy without opening up the back-end SCM site at all?
+
+If you're concerned about enabling public access to the SCM site, or you're restricted by policy, consider other App Service deployment options like [running from a ZIP package](deploy-run-package.md).
+
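+As a rough sketch, a one-shot ZIP deployment with the Azure CLI could look like the following; `app.zip` is a placeholder for your packaged app:
+
+```azurecli-interactive
+az webapp deploy --resource-group $groupName --name <backend-app-name> --src-path app.zip --type zip
+```
+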
+#### How can I deploy this architecture with ARM/Bicep?
+
+The resources you created in this tutorial can be deployed using an ARM/Bicep template. The [App connected to a backend web app Bicep template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-privateendpoint-vnet-injection) allows you to create a secure N-tier app solution.
+
+To learn how to deploy ARM/Bicep templates, see [How to deploy resources with Bicep and Azure CLI](../azure-resource-manager/bicep/deploy-cli.md).
+
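+The general shape of a template deployment with the Azure CLI is sketched below; `main.bicep` is a placeholder for the template file you download from the quickstart repository:
+
+```azurecli-interactive
+az deployment group create --resource-group $groupName --template-file main.bicep
+```
+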
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Integrate your app with an Azure virtual network](overview-vnet-integration.md)
+
+> [!div class="nextstepaction"]
+> [App Service networking features](networking-features.md)
+
+> [!div class="nextstepaction"]
+> [Reliable web app pattern planning (.NET)](/azure/architecture/reference-architectures/reliable-web-app/dotnet/pattern-overview)
application-gateway Application Gateway Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-probe-overview.md
By default, an HTTP(S) response with status code between 200 and 399 is consider
The following are matching criteria:
- **HTTP response status code match** - Probe matching criterion for accepting user specified http response code or response code ranges. Individual comma-separated response status codes or a range of status code is supported.
-- **HTTP response body match** - Probe matching criterion that looks at HTTP response body and matches with a user specified string. The match only looks for presence of user specified string in response body and isn't a full regular expression match.
+- **HTTP response body match** - Probe matching criterion that looks at the HTTP response body and matches it against a user-specified string. The match only looks for the presence of the user-specified string in the response body and isn't a full regular expression match. The specified match must be 4090 characters or less.
Match criteria can be specified using the `New-AzApplicationGatewayProbeHealthResponseMatch` cmdlet.
azure-app-configuration Howto Feature Filters Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-feature-filters-aspnet-core.md
The `Microsoft.FeatureManagement` library includes three feature filters:
- `TimeWindowFilter` enables the feature flag during a specified window of time.
- `TargetingFilter` enables the feature flag for specified users and groups.
-You can also create your own feature filter that implements the [Microsoft.FeatureManagement.IFeatureFilter interface](/dotnet/api/microsoft.featuremanagement.ifeaturefilter).
+You can also create your own feature filter that implements the Microsoft.FeatureManagement.IFeatureFilter interface.
## Registering a feature filter
azure-app-configuration Quickstart Feature Flag Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-azure-functions-csharp.md
This project will use [dependency injection in .NET Azure Functions](../azure-fu
## Next steps
-In this quickstart, you created a feature flag and used it with an Azure Functions app via the [Microsoft.FeatureManagement](/dotnet/api/microsoft.featuremanagement) library.
+In this quickstart, you created a feature flag and used it with an Azure Functions app via the [Microsoft.FeatureManagement](https://www.nuget.org/packages/Microsoft.FeatureManagement/) library.
- Learn more about [feature management](./concept-feature-management.md) - [Manage feature flags](./manage-feature-flags.md)
azure-app-configuration Use Feature Flags Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-dotnet-core.md
The Feature Management libraries also manage feature flag lifecycles behind the
The [Add feature flags to an ASP.NET Core app Quickstart](./quickstart-feature-flag-aspnet-core.md) shows a simple example of how to use feature flags in an ASP.NET Core application. This tutorial shows additional setup options and capabilities of the Feature Management libraries. You can use the sample app created in the quickstart to try out the sample code shown in this tutorial.
-For the ASP.NET Core feature management API reference documentation, see [Microsoft.FeatureManagement Namespace](/dotnet/api/microsoft.featuremanagement).
+For the ASP.NET Core feature management API reference documentation, see [Microsoft.FeatureManagement Namespace](https://www.nuget.org/packages/Microsoft.FeatureManagement/).
In this tutorial, you will learn how to:
To access the .NET Core feature manager, your app must have references to the `M
The .NET Core feature manager is configured from the framework's native configuration system. As a result, you can define your application's feature flag settings by using any configuration source that .NET Core supports, including the local *appsettings.json* file or environment variables.
-By default, the feature manager retrieves feature flag configuration from the `"FeatureManagement"` section of the .NET Core configuration data. To use the default configuration location, call the [AddFeatureManagement](/dotnet/api/microsoft.featuremanagement.servicecollectionextensions.addfeaturemanagement) method of the **IServiceCollection** passed into the **ConfigureServices** method of the **Startup** class.
+By default, the feature manager retrieves feature flag configuration from the `"FeatureManagement"` section of the .NET Core configuration data. To use the default configuration location, call the AddFeatureManagement method of the **IServiceCollection** passed into the **ConfigureServices** method of the **Startup** class.
```csharp
public class Startup
```
-If you use filters in your feature flags, you must include the [Microsoft.FeatureManagement.FeatureFilters](/dotnet/api/microsoft.featuremanagement.featurefilters) namespace and add a call to [AddFeatureFilters](/dotnet/api/microsoft.featuremanagement.ifeaturemanagementbuilder.addfeaturefilter) specifying the type name of the filter you want to use as the generic type of the method. For more information on using feature filters to dynamically enable and disable functionality, see [Enable staged rollout of features for targeted audiences](./howto-targetingfilter-aspnet-core.md).
+If you use filters in your feature flags, you must include the Microsoft.FeatureManagement.FeatureFilters namespace and add a call to AddFeatureFilter, specifying the type name of the filter you want to use as the generic type of the method. For more information on using feature filters to dynamically enable and disable functionality, see [Enable staged rollout of features for targeted audiences](./howto-targetingfilter-aspnet-core.md).
The following example shows how to use a built-in feature filter called `PercentageFilter`:
By convention, the `FeatureManagement` section of this JSON document is used for
## Use dependency injection to access IFeatureManager
-For some operations, such as manually checking feature flag values, you need to get an instance of [IFeatureManager](/dotnet/api/microsoft.featuremanagement.ifeaturemanager?preserve-view=true&view=azure-dotnet-preview). In ASP.NET Core MVC, you can access the feature manager `IFeatureManager` through dependency injection. In the following example, an argument of type `IFeatureManager` is added to the signature of the constructor for a controller. The runtime automatically resolves the reference and provides an of the interface when calling the constructor. If you're using an application template in which the controller already has one or more dependency injection arguments in the constructor, such as `ILogger`, you can just add `IFeatureManager` as an additional argument:
+For some operations, such as manually checking feature flag values, you need to get an instance of IFeatureManager. In ASP.NET Core MVC, you can access the feature manager `IFeatureManager` through dependency injection. In the following example, an argument of type `IFeatureManager` is added to the signature of the constructor for a controller. The runtime automatically resolves the reference and provides an implementation of the interface when calling the constructor. If you're using an application template in which the controller already has one or more dependency injection arguments in the constructor, such as `ILogger`, you can just add `IFeatureManager` as an additional argument:
### [.NET 5.x](#tab/core5x)
public IActionResult Index()
} ```
-When an MVC controller or action is blocked because the controlling feature flag is *off*, a registered [IDisabledFeaturesHandler](/dotnet/api/microsoft.featuremanagement.mvc.idisabledfeatureshandler?preserve-view=true&view=azure-dotnet-preview) interface is called. The default `IDisabledFeaturesHandler` interface returns a 404 status code to the client with no response body.
+When an MVC controller or action is blocked because the controlling feature flag is *off*, a registered IDisabledFeaturesHandler interface is called. The default `IDisabledFeaturesHandler` interface returns a 404 status code to the client with no response body.
## MVC views
app.UseForFeature(featureName, appBuilder => {
In this tutorial, you learned how to implement feature flags in your ASP.NET Core application by using the `Microsoft.FeatureManagement` libraries. For more information about feature management support in ASP.NET Core and App Configuration, see the following resources: * [ASP.NET Core feature flag sample code](./quickstart-feature-flag-aspnet-core.md)
-* [Microsoft.FeatureManagement documentation](/dotnet/api/microsoft.featuremanagement)
-* [Manage feature flags](./manage-feature-flags.md)
+* [Microsoft.FeatureManagement documentation](https://www.nuget.org/packages/Microsoft.FeatureManagement/)
+* [Manage feature flags](./manage-feature-flags.md)
azure-arc Tutorial Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-workload-management.md
The PR merging event starts a GitHub workflow `checkpromote` in the `control pla
Once the `checkpromote` workflow is successful, it starts the `cd` workflow that promotes the change (application registration) to the `Stage` environment. For better visibility, it also updates the git commit status in the `control plane` repository:
-![Git commit status deploying to dev](media/tutorial-workload-management/dev-git-commit-status.png)
- :::image type="content" source="media/tutorial-workload-management/dev-git-commit-status.png" alt-text="Screenshot showing git commit status deploying to dev.":::
> [!NOTE] > If the `drone` cluster fails to reconcile the assignment manifests for any reason, the promotion flow will fail. The commit status will be marked as failed, and the application registration will not be promoted to the `Stage` environment.
azure-monitor Javascript Sdk Load Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-load-failure.md
- Title: Troubleshooting SDK load failure for JavaScript web applications - Azure Application Insights
-description: How to troubleshoot SDK load failure for JavaScript web applications
- Previously updated : 06/05/2020----
-# Troubleshooting SDK load failure for JavaScript web apps
-
-<!--
-Editor Note: This link name above "SDK Load Failure" has a direct references by https://go.microsoft.com/fwlink/?linkid=2128109 which is embedded in the snippet and from the JavaScript Page. If you change this text you MUST also update these references.
>-
-The SDK load failure exception is created and reported by the JavaScript snippet (v3 or later) when it detects that the SDK script failed to download or initialize. Simplistically, your end users' client (browser) was unable to download the Application Insights SDK, or initialize from the identified hosting page and therefore no telemetry or events will be reported.
-
-![Azure portal browser failure overview](./media/javascript-sdk-load-failure/overview.png)
-
-> [!NOTE]
-> This exception is supported on all major browsers that support the fetch() API or XMLHttpRequest. This excludes IE 8 and below, so you will not get this type of exception reported from those browsers (unless your environment includes a fetch polyfill).
-
-![browser exception detail](./media/javascript-sdk-load-failure/exception.png)
-
-The stack details include the basic information with the URLs being used by the end user.
-
-| Name | Description |
-||--|
-| &lt;CDN&nbsp;Endpoint&gt; | The URL that was used (and failed) to download the SDK. |
-| &lt;Help&nbsp;Link&gt; | A URL that links to troubleshooting documentation (this page). |
-| &lt;Host&nbsp;URL&gt; | The complete URL of the page that the end user was using. |
-| &lt;Endpoint&nbsp;URL&gt; | The URL that was used to report the exception, this value may be helpful in identifying whether the hosting page was accessed from the public internet or a private cloud.
-
-The most common reasons for this exception to occur:
-
-- Intermittent network connectivity failure.
-- Application Insights CDN outage.
-- SDK failed to initialize after loading the script.
-- Application Insights JavaScript CDN has been blocked.
-
-[Intermittent network connectivity failure](#intermittent-network-connectivity-failure) is the most common reason for this exception, especially in mobile roaming scenarios where the users lose network connectivity intermittently.
-
-The following sections will describe how to troubleshoot each potential root cause of this error.
-
-> [!NOTE]
-> Several of the troubleshooting steps assume that your application has direct control of the Snippet &lt;script /&gt; tag and its configuration that is returned as part of the hosting HTML page. If you don't then those identified steps will not apply for your scenario.
-
-## Intermittent network connectivity failure
-
-**If the user is experiencing intermittent network connectivity failures, then there are fewer possible solutions and they will generally resolve themselves over a short period of time.** For example, if the user reloads your site (refreshes the page) the files will (eventually) get downloaded and cached locally until an updated version is released.
-
-> [!NOTE]
-> If this exception is persistent and is occurring across many of your users (diagnosed by a rapid and sustained level of this exception being reported) along with a reduction in normal client telemetry, then intermittent network connectivity issues is _not-likely_ to be the true cause of the problem and you should continue diagnosing with the other known possible issues.
-
-With this situation hosting the SDK on your own CDN is unlikely to provide or reduce the occurrences of this exception, as your own CDN will be affected by the same issue.
-
-The same is also true when using the SDK via the NPM packages solution. However, from the end user's perspective, when this occurs your entire application fails to load/initialize (rather than _just_ the telemetry SDK - which they don't see visually), so they will most likely refresh your site until it loads completely.
-
-You can also try to use [NPM packages](#use-npm-packages-to-embed-the-application-insight-sdk) to embed the Application Insights SDK.
-
-To minimize intermittent network connectivity failure, we have implemented Cache-Control headers on all of the CDN files so that once the end user's browser has downloaded the current version of the SDK it will not need to download again and the browser will reuse the previously obtained copy (see [how caching works](../../cdn/cdn-how-caching-works.md)). If the caching check fails or there has been a new release, then your end user's browser will need to download the updated version. So you may see a background level of _"noise"_ in the check failure scenario or a temporary spike when a new release occurs and is made generally available (deployed to the CDN).
-
-## Application Insights CDN outage
-
-You can confirm if there is an Application Insights CDN outage by attempting to access the CDN endpoint directly from the browser (for example, https://js.monitor.azure.com/scripts/b/ai.2.min.js) from a different location than your end users' probably from your own development machine (assuming that your organization has not blocked this domain).
-
-If you confirm there is an outage, you can [create a new support ticket](https://azure.microsoft.com/support/create-ticket/).
-
-## SDK failed to initialize after loading the script
-
-When the SDK fails to initialize, the &lt;script /&gt; was successfully downloaded from the CDN but it fails during initialization. This can be because of missing or invalid dependencies or some form of JavaScript exception.
-
-The first thing to check is whether the SDK was successfully downloaded, if the script was NOT downloaded then this scenario is __not__ the cause of the SDK load failure exception.
-
-Quick check: Using a browser that supports developer tools (F12), validate on the network tab that the script defined in the ```src``` snippet configuration was downloaded with a response code of 200 (success) or a 304 (not changed). You could also use a tool like fiddler to review the network traffic.
-
-The sections below includes different reporting options, it will recommend either creating a support ticket or raising an issue on GitHub.
-
- Basic reporting rules:
-
-- If the issue is only affecting a small number of users and a specific or subset of browser version(s) (check the details on the reported exception), then it's likely an end user or environment only issue, which will probably require your application to provide additional `polyfill` implementations. For these, [file an issue on GitHub](https://github.com/Microsoft/ApplicationInsights-JS/issues).
-- If this issue is affecting your entire application and all of your users are affected, then it's probably a release related issue and therefore you should [create a new support ticket](https://azure.microsoft.com/support/create-ticket/).
-
-### JavaScript exceptions
-
-First let's check for JavaScript exceptions. Using a browser that supports developer tools (F12), load the page and review whether any exceptions occurred.
-
-If there are exceptions being reported in the SDK script (for example ai.2.min.js), then this may indicate that the configuration passed into the SDK contains unexpected or missing required configuration or a faulty release has been deployed to the CDN.
-
-To check for faulty configuration, change the configuration passed into the snippet (if not already) so that it only includes your instrumentation key as a string value.
--
-```js
-src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",
-cfg: {
- instrumentationKey: "INSTRUMENTATION_KEY"
-}});
-```
-
-If when using this minimal configuration you are still seeing a JavaScript exception in the SDK script, [create a new support ticket](https://azure.microsoft.com/support/create-ticket/) as this will require the faulty build to be rolled back as it's probably an issue with a newly deployed version.
-
-If the exception disappears, then the issue is likely caused by a type mismatch or unexpected value. Start adding your configuration options back one-by-one and test until the exception occurs again. Then check the documentation for the item causing the issue. If the documentation is unclear or you need assistance, [file an issue on GitHub](https://github.com/Microsoft/ApplicationInsights-JS/issues).
-
-If your configuration was previously deployed and working but just started reporting this exception, then it may be an issue with a newly deployed version, check whether it is affecting only a small set of your users / browser and either [file an issue on GitHub](https://github.com/Microsoft/ApplicationInsights-JS/issues) or [create a new support ticket](https://azure.microsoft.com/support/create-ticket/).
-
-### Enable console debugging
-
-Assuming there are no exceptions being thrown, the next step is to enable console debugging by adding the `loggingLevelConsole` setting to the configuration. This setting sends all initialization errors and warnings to the browser's console (normally available via the developer tools (F12)). Any reported errors should be self-explanatory, and if you need further assistance [file an issue on GitHub](https://github.com/Microsoft/ApplicationInsights-JS/issues).
-
-```js
-cfg: {
- instrumentationKey: "INSTRUMENTATION_KEY",
- loggingLevelConsole: 2
-}});
-```
-
-> [!NOTE]
-> During initialization the SDK performs some basic checks for known major dependencies. If these are not provided by the current runtime it will report the failures out as warning messages to the console, but only if the `loggingLevelConsole` is greater than zero.
-
-If it still fails to initialize, try enabling the ```enableDebug``` configuration setting. This will cause all internal errors to be thrown as an exception (which will cause telemetry to be lost). As this is a developer only setting it will probably get noisy with exceptions getting thrown as part of some internal checks, so you will need to review each exception to determine which issue is causing the SDK to fail. Use the non-minified version of the script (note the extension below is ".js" and not ".min.js") otherwise the exceptions will be unreadable.
-
-> [!WARNING]
-> This is a developer only setting and should NEVER be enabled in a full production environment as you will lose telemetry.
-
-```js
-src: "https://js.monitor.azure.com/scripts/b/ai.2.js",
-cfg:{
- instrumentationKey: "INSTRUMENTATION_KEY",
- enableDebug: true
-}});
-```
-
-If this still does not provide any insights, you should [file an issue on GitHub](https://github.com/Microsoft/ApplicationInsights-JS/issues) with the details and an example site if you have one. Include the browser version, operating system, and JS framework details to help identify the issue.
-
-## The Application Insights JavaScript CDN has been blocked
-
-The CDN being blocked is possible when an Application Insights JavaScript SDK CDN endpoint has been reported and/or identified as being unsafe. When this occurs, it will cause the endpoint to be publicly block-listed and consumers of these lists will begin to block all access.
-
-To resolve, it requires the owner of the CDN endpoint to work with the block-listing entity that has marked the endpoint as unsafe so that it can be removed from the relevant list.
-
-Check if the CDN endpoint has been identified as unsafe.
-- [Google Transparency Report](https://transparencyreport.google.com/safe-browsing/search)
-- [VirusTotal](https://www.virustotal.com/gui/home/url)
-- [Sucuri SiteCheck](https://sitecheck.sucuri.net/)
-
-Depending on the frequency that the application, firewall, or environment update their local copies of these lists, it may take a considerable amount of time and/or require manual intervention by end users or corporate IT departments to force an update or explicitly allow the CDN endpoints to resolve the issue.
-
-If the CDN endpoint is identified as unsafe, [create a support ticket](https://azure.microsoft.com/support/create-ticket/) to ensure that the issue is resolved as soon as possible.
-
-### Application Insights JavaScript CDN is blocked (by end user - blocked by browser; installed blocker; personal firewall)
-
-Check if your end users have:
-
-- Installed a browser plug-in (typically some form of ad, malware, or popup blocker).
-- Blocked (or not allowed) the Application Insights CDN endpoints in their browser or proxy.
-- Configured a firewall rule that is causing the CDN domain for the SDK to be blocked (or the DNS entry to not be resolved).
-
-If they have configured any of these options, you will need to work with them (or provide documentation) to allow the CDN endpoints.
-
-It is possible that the plug-in they have installed is using the public blocklist. If that is not the case, then it's most likely some other manually configured solution or it's using a private domain blocklist.
-
-#### Add exceptions for CDN endpoints
-
-Work with your end users or provide documentation informing them that they should allow scripts from the Application Insights CDN endpoints to be downloaded by including them in their browser's plug-in or firewall rule exception list (will vary based on the end user's environment).
-
-Here is an example of how to configure [Chrome to allow or block access to websites.](https://support.google.com/chrome/a/answer/7532419?hl=en)
-
-### Application Insights CDN is blocked (by corporate firewall)
-
-If your end users are on a corporate network, then it's most likely their firewall solution and that their IT department has implemented some form of internet filtering system. In this case, you will need to work with them to allow the necessary rules for your end users.
-
-#### Add exceptions for CDN endpoints for corporations
-
- This is similar to [adding exceptions for end users](#add-exceptions-for-cdn-endpoints), but you will need to work with the company's IT department to have them configure the Application Insights CDN endpoints to be downloaded by including (or removing) them in any domain block-listing or allow-listing services.
-
- > [!WARNING]
- > If your corporate user is using a [private cloud](https://azure.microsoft.com/overview/what-is-a-private-cloud/) and they cannot enable any form of exception to provide their internal users with public access for the CDN endpoints then you will need to use the [Application Insights NPM packages](https://www.npmjs.com/package/applicationinsights) or host the Application Insights SDK on your own CDN.
-
-### Additional troubleshooting for blocked CDN
-
-> [!NOTE]
-> If your users are using a [private cloud](https://azure.microsoft.com/overview/what-is-a-private-cloud/) and they do not have access to the public internet you will need to host the SDK on your own CDN or use NPM packages.
-
-#### Host the SDK on your own CDN
-
- Rather than your end users downloading the Application Insights SDK from the public CDN you could host the Application Insights SDK from your own CDN endpoint. It is recommended that you use a specific version (ai.2.#.#.min.js) so that it's easier to identify which version you are using. Also update it on a regular basis to the current version (ai.2.min.js) so you can leverage any bug fixes and new features that become available.
-
-#### Use NPM packages to embed the Application Insight SDK
-
-Rather than using the snippet and public CDN endpoints you could use the [NPM packages](https://www.npmjs.com/package/applicationinsights) to include the SDK as part of your own JavaScript files. The SDK will become just another package within your own scripts.
-
-> [!NOTE]
-> It is recommended that when using NPM packages you should also use some form of [JavaScript bundler](https://www.bing.com/search?q=javascript+bundler) to assist with code splitting and minification.
-
-As with the snippet, it is also possible that your own scripts (with or without using the SDK NPM packages) could be affected by the same blocking issues listed here, so depending on your application, your users, and your framework you may want to consider implementing something similar to the logic in the snippet to detect and report these issues.
--
-## Next steps
-- [Get additional help by filing an issue on GitHub](https://github.com/Microsoft/ApplicationInsights-JS/issues)
-- [Monitor web page usage](javascript.md)
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
This version of the snippet detects and reports failures when the SDK is loaded
- Missing telemetry on how your users are using your site. - Missing JavaScript errors that could potentially be blocking your users from successfully using your site.
-For information on this exception, see the [SDK load failure](javascript-sdk-load-failure.md) troubleshooting page.
+For information on this exception, see the [SDK load failure](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting) troubleshooting page.
Reporting of this failure as an exception to the portal doesn't use the configuration option ```disableExceptionTracking``` from the Application Insights configuration. For this reason, if this failure occurs, it will always be reported by the snippet, even when `window.onerror` support is disabled.
If the SDK reports correlation recursively, enable the configuration setting of
* [Track usage](usage-overview.md)
* [Custom events and metrics](api-custom-events-metrics.md)
* [Build-measure-learn](usage-overview.md)
-* [Troubleshoot SDK load failure](javascript-sdk-load-failure.md)
+* [Troubleshoot SDK load failure](https://learn.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting)
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
public class Program
#### [Java](#tab/java)
-Coming soon.
+```java
+import io.opentelemetry.api.GlobalOpenTelemetry;
+import io.opentelemetry.api.metrics.DoubleHistogram;
+import io.opentelemetry.api.metrics.Meter;
+
+public class Program {
+
+ public static void main(String[] args) {
+ Meter meter = GlobalOpenTelemetry.getMeter("OTEL.AzureMonitor.Demo");
+ DoubleHistogram histogram = meter.histogramBuilder("histogram").build();
+ histogram.record(1.0);
+ histogram.record(100.0);
+ histogram.record(30.0);
+ }
+}
+```
#### [Node.js](#tab/nodejs)
public class Program
#### [Java](#tab/java)
-Coming soon.
+```Java
+import io.opentelemetry.api.GlobalOpenTelemetry;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.common.Attributes;
+import io.opentelemetry.api.metrics.LongCounter;
+import io.opentelemetry.api.metrics.Meter;
+
+public class Program {
+
+ public static void main(String[] args) {
+ Meter meter = GlobalOpenTelemetry.getMeter("OTEL.AzureMonitor.Demo");
+
+ LongCounter myFruitCounter = meter
+ .counterBuilder("MyFruitCounter")
+ .build();
+
+ myFruitCounter.add(1, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "red"));
+ myFruitCounter.add(2, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
+ myFruitCounter.add(1, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
+ myFruitCounter.add(2, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "green"));
+ myFruitCounter.add(5, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "red"));
+ myFruitCounter.add(4, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
+ }
+}
+```
#### [Node.js](#tab/nodejs)
public class Program
#### [Java](#tab/java)
-Coming soon.
+```Java
+import io.opentelemetry.api.GlobalOpenTelemetry;
+import io.opentelemetry.api.common.AttributeKey;
+import io.opentelemetry.api.common.Attributes;
+import io.opentelemetry.api.metrics.Meter;
+
+public class Program {
+
+ public static void main(String[] args) {
+ Meter meter = GlobalOpenTelemetry.getMeter("OTEL.AzureMonitor.Demo");
+
+ meter.gaugeBuilder("gauge")
+ .buildWithCallback(
+ observableMeasurement -> {
+ double randomNumber = Math.floor(Math.random() * 100);
+ observableMeasurement.record(randomNumber, Attributes.of(AttributeKey.stringKey("testKey"), "testValue"));
+ });
+ }
+}
+```
#### [Node.js](#tab/nodejs)
azure-monitor Container Insights Enable Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md
Title: Monitor an Azure Kubernetes Service (AKS) cluster deployed
-description: Learn how to enable monitoring of an Azure Kubernetes Service (AKS) cluster with Container insights already deployed in your subscription.
+ Title: Enable Container insights for Azure Kubernetes Service (AKS) cluster
+description: Learn how to enable Container insights on an Azure Kubernetes Service (AKS) cluster.
Last updated 01/09/2023
AKS clusters with system-assigned identity must first disable monitoring and the
```

## Private link
+Use one of the following procedures to enable network isolation by connecting your cluster to the Log Analytics workspace by using [Azure Private Link](../logs/private-link-security.md).
-To enable network isolation by connecting your cluster to the Log Analytics workspace by using [Azure Private Link](../logs/private-link-security.md), your cluster must be using managed identity authentication with Azure Monitor Agent.
+### Managed identity authentication
+Use the following procedure if your cluster is using managed identity authentication with Azure Monitor Agent.
1. Follow the steps in [Enable network isolation for the Azure Monitor agent](../agents/azure-monitor-agent-data-collection-endpoint.md) to create a data collection endpoint and add it to your Azure Monitor private link service.
To enable network isolation by connecting your cluster to the Log Analytics work
1. Enable monitoring with the managed identity authentication option by using the steps in [Migrate to managed identity authentication](#migrate-to-managed-identity-authentication).
+### Without managed identity authentication
+Use the following procedure if you're not using managed identity authentication. This requires a [private AKS cluster](../../aks/private-clusters.md).
+
+1. Create a private AKS cluster following the guidance in [Create a private Azure Kubernetes Service cluster](../../aks/private-clusters.md).
+
+2. Disable public Ingestion on your Log Analytics workspace.
+
+ Use the following command to disable public ingestion on an existing workspace.
+
+ ```azurecli
+ az monitor log-analytics workspace update --resource-group <azureLogAnalyticsWorkspaceResourceGroup> --workspace-name <azureLogAnalyticsWorkspaceName> --ingestion-access Disabled
+ ```
+
+ Use the following command to create a new workspace with public ingestion disabled.
+
+ ```azurecli
+ az monitor log-analytics workspace create --resource-group <azureLogAnalyticsWorkspaceResourceGroup> --workspace-name <azureLogAnalyticsWorkspaceName> --ingestion-access Disabled
+ ```
+
+3. Configure private link by following the instructions in [Configure your private link](../logs/private-link-configure.md). Set ingestion access to public, and then set it to private after the private endpoint is created but before monitoring is enabled. The private link resource region must be the same as the AKS cluster region.
++
+4. Enable monitoring for the AKS cluster.
+
+ ```azurecli
+ az aks enable-addons -a monitoring --resource-group <AKSClusterResourceGroup> --name <AKSClusterName> --workspace-resource-id <workspace-resource-id>
+ ```
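To confirm that the add-on was enabled, you could inspect the cluster's add-on profiles (a sketch; `omsagent` is the profile name used by the monitoring add-on, and the resource names are placeholders):

```azurecli
az aks show --resource-group <AKSClusterResourceGroup> --name <AKSClusterName> --query "addonProfiles.omsagent.enabled"
```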
++
## Limitations

- Enabling managed identity authentication (preview) isn't currently supported by using Terraform or Azure Policy.
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md
The output is similar to the following:
### Prerequisites

- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
-- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor Workspaces subscription, then please register the Azure Monitor Workspace subscription with the `Microsoft.Dashboard` resource provider following this [documentation](/azure-resource-manager/management/resource-providers-and-types#register-resource-provider.md#register-resource-provider).
+- If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor Workspaces subscription, then please register the Azure Monitor Workspace subscription with the `Microsoft.Dashboard` resource provider following this [documentation](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.
- The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace.
- Users with the 'User Access Administrator' role in the subscription of the AKS cluster can enable the 'Monitoring Data Reader' role directly by deploying the template.
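As a quick sanity check before deploying the template (a sketch using the standard Azure CLI feature commands), you can verify the registration state and then refresh the resource provider:

```azurecli
# Check the registration state of the preview feature
az feature show --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview --query properties.state

# After it reports "Registered", refresh the resource provider
az provider register --namespace Microsoft.ContainerService
```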
azure-monitor Access Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/access-api.md
You can submit a query request to a workspace using the Azure Monitor Log Analytics endpoint `https://api.loganalytics.azure.com`. To access the endpoint, you must authenticate through Azure Active Directory (Azure AD).

>[!Note]
-> The `api.loganalytics.io` endpoint is being replaced by `api.loganalytics.azure.com`. `api.loganalytics.io` will continue to be be supported for the forseeable future.
+> The `api.loganalytics.io` endpoint is being replaced by `api.loganalytics.azure.com`. `api.loganalytics.io` will continue to be supported for the foreseeable future.
## Authenticating with a demo API key

To quickly explore the API without Azure Active Directory authentication, use the demonstration workspace with sample data, which supports API key authentication.
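A minimal sketch of such a request (assuming the documented demo values `DEMO_WORKSPACE` and `DEMO_KEY`, and an illustrative query):

```bash
curl -X POST "https://api.loganalytics.azure.com/v1/workspaces/DEMO_WORKSPACE/query" \
  -H "X-Api-Key: DEMO_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "AzureActivity | summarize count() by Category"}'
```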
azure-monitor Query Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-optimization.md
To efficiently execute a query, it's partitioned and distributed to compute node
Query behaviors that can reduce parallelism include:

-- Use of serialization and window functions, such as the [serialize operator](/azure/kusto/query/serializeoperator), [next()](/azure/kusto/query/nextfunction), [prev()](/azure/kusto/query/prevfunction), and the [row](/azure/kusto/query/rowcumsumfunction) functions. Time series and user analytics functions can be used in some of these cases. Inefficient serialization might also happen if the following operators aren't used at the end of the query: [range](/azure/kusto/query/rangeoperator), [sort](/azure/kusto/query/sortoperator), [order](/azure/kusto/query/orderoperator), [top](/azure/kusto/query/topoperator), [top-hitters](/azure/kusto/query/tophittersoperator), and [getschema](/azure/kusto/query/getschemaoperator).
+- Use of serialization and window functions, such as the [serialize operator](/azure/kusto/query/serializeoperator), [next()](/azure/kusto/query/nextfunction), [prev()](/azure/kusto/query/prevfunction), and the [row](/azure/kusto/query/rowcumsumfunction) functions. Time series and user analytics functions can be used in some of these cases. Inefficient serialization might also happen if the following operators aren't used at the end of the query: [range](/azure/kusto/query/rangeoperator), [sort](/azure/data-explorer/kusto/query/sort-operator), [order](/azure/kusto/query/orderoperator), [top](/azure/kusto/query/topoperator), [top-hitters](/azure/kusto/query/tophittersoperator), and [getschema](/azure/kusto/query/getschemaoperator).
- Use of the [dcount()](/azure/kusto/query/dcount-aggfunction) aggregation function forces the system to have a central copy of the distinct values. When the scale of data is high, consider using the `dcount` function optional parameters to reduce accuracy.
- In many cases, the [join](/azure/kusto/query/joinoperator?pivots=azuremonitor) operator lowers overall parallelism. Examine `shuffle join` as an alternative when performance is problematic.
- In resource-scope queries, the pre-execution Kubernetes role-based access control (RBAC) or Azure RBAC checks might linger in situations where there's a large number of Azure role assignments. This situation might lead to longer checks that would result in lower parallelism. For example, a query might be executed on a subscription where there are thousands of resources and each resource has many role assignments on the resource level, not on the subscription or resource group.
azure-monitor Roles Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/roles-permissions-security.md
This article shows how to quickly apply a built-in monitoring role to a user in
## Built-in monitoring roles
-Built-in roles in Azure Monitor help limit access to resources in a subscription while still enabling staff who monitor infrastructure to obtain and configure the data they need. Azure Monitor provides two out-of-the-box roles: Monitoring Reader and Monitoring Contributor.
+Built-in roles in Azure Monitor help limit access to resources in a subscription while still enabling staff who monitor infrastructure to obtain and configure the data they need. Azure Monitor provides two out-of-the-box roles: Monitoring Reader and Monitoring Contributor. Azure Monitor Logs also provides built-in roles for managing access to data in a Log Analytics workspace, as described in [Manage access to Log Analytics workspaces](./logs/manage-access.md).
### Monitoring Reader
azure-monitor View Designer Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/view-designer-filters.md
The following table describes the settings for a filter.
| Setting | Description |
|:--|:--|
| Field Name | Name of the field used for filtering. This field must match the summarize field in **Query for Values**. |
-| Query for Values | Query to run to populate the **Filter** dropdown for the user. This query must use either [summarize](/azure/kusto/query/summarizeoperator) or [distinct](/azure/kusto/query/distinctoperator) to provide unique values for a particular field. It must match the **Field Name**. You can use [sort](/azure/kusto/query/sortoperator) to sort the values that are displayed to the user. |
+| Query for Values | Query to run to populate the **Filter** dropdown for the user. This query must use either [summarize](/azure/kusto/query/summarizeoperator) or [distinct](/azure/kusto/query/distinctoperator) to provide unique values for a particular field. It must match the **Field Name**. You can use [sort](/azure/data-explorer/kusto/query/sort-operator) to sort the values that are displayed to the user. |
| Tag | Name for the field that's used in queries supporting the filter and is also displayed to the user. |

### Examples
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Application-Insights|[Live Metrics: Monitor and diagnose with 1-second latency](
Application-Insights|[Application Insights for Azure VMs and Virtual Machine Scale Sets](app/azure-vm-vmss-apps.md)|Easily monitor your IIS-hosted .NET Framework and .NET Core applications running on Azure VMs and Virtual Machine Scale Sets using a new App Insights Extension.|
Application-Insights|[Sampling in Application Insights](app/sampling.md)|We've added embedded links to assist with looking up type definitions. (Dependency, Event, Exception, PageView, Request, Trace)|
Application-Insights|[Configuration options: Azure Monitor Application Insights for Java](app/java-standalone-config.md)|Instructions are now available on how to set the http proxy using an environment variable, which overrides the JSON configuration. We've also provided a sample to configure connection string at runtime.|
-Application-Insights|[Application Insights for Java 2.x](app/deprecated-java-2x.md)|The Java 2.x retirement notice is available at https://azure.microsoft.com/updates/application-insights-java-2x-retirement.|
+Application-Insights|[Application Insights for Java 2.x](app/deprecated-java-2x.md)|The Java 2.x retirement notice is available at https://azure.microsoft.com/updates/application-insights-java-2x-retirement .|
Autoscale|[Diagnostic settings in Autoscale](autoscale/autoscale-diagnostics.md)|Updated and expanded content|
Autoscale|[Overview of common autoscale patterns](autoscale/autoscale-common-scale-patterns.md)|Clarification of weekend profiles|
Autoscale|[Autoscale with multiple profiles](autoscale/autoscale-multiprofile.md)|Added clarifications for profile end times|
azure-portal Capture Browser Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/capture-browser-trace.md
Title: Capture a browser trace for troubleshooting description: Capture network information from a browser trace to help troubleshoot issues with the Azure portal. Previously updated : 09/01/2022 Last updated : 02/24/2023
If you're troubleshooting an issue with the Azure portal, and you need to contac
> [!IMPORTANT]
> Microsoft support uses these traces for troubleshooting purposes only. Please be mindful of who you share your traces with, as they may contain sensitive information about your environment.
-You can capture this information any [supported browser](azure-portal-supported-browsers-devices.md): Google Chrome, Microsoft Edge, Safari (on Mac), or Firefox. Steps for each browser are shown below.
+You can capture this information in any [supported browser](azure-portal-supported-browsers-devices.md): Microsoft Edge, Google Chrome, Safari (on Mac), or Firefox. Steps for each browser are shown below.
-## Google Chrome and Microsoft Edge
+## Microsoft Edge
-Google Chrome and Microsoft Edge are both based on the [Chromium open source project](https://www.chromium.org/Home). The developer tools experience is very similar in the two browsers. The screenshots included here show the Google Chrome experience, but you'll see the same options in Microsoft Edge. For more information, see [Chrome DevTools](https://developers.google.com/web/tools/chrome-devtools) and [Microsoft Edge DevTools](/microsoft-edge/devtools-guide-chromium).
+The following steps show how to use the developer tools in Microsoft Edge. For more information, see [Microsoft Edge DevTools](/microsoft-edge/devtools-guide-chromium).
1. Sign in to the [Azure portal](https://portal.azure.com). It's important to sign in _before_ you start the trace so that the trace doesn't contain sensitive information related to your account.
Google Chrome and Microsoft Edge are both based on the [Chromium open source pro
1. By default, the browser keeps trace information only for the page that's currently loaded. Set the following options so the browser keeps all trace information, even if your repro steps require going to more than one page:
+ 1. Select the **Console** tab, select **Console settings**, then select **Preserve Log**.
+
+ :::image type="content" source="media/capture-browser-trace/edge-console-preserve-log.png" alt-text="Screenshot that highlights the Preserve log option on the Console tab in Edge.":::
+ 1. Select the **Network** tab, then select **Preserve log**.
- ![Screenshot that highlights the Preserve log option on the Network tab.](media/capture-browser-trace/chromium-network-preserve-log.png)
+ :::image type="content" source="media/capture-browser-trace/edge-network-preserve-log.png" alt-text="Screenshot that highlights the Preserve log option on the Network tab in Edge.":::
- 1. Select the **Console** tab, select **Console settings**, then select **Preserve Log**.
+1. On the **Network** tab, select **Stop recording network log** and **Clear**.
+
+ :::image type="content" source="media/capture-browser-trace/edge-stop-clear-session.png" alt-text="Screenshot showing the Stop recording network log and Clear options on the Network tab in Edge.":::
+
+1. Select **Record network log**, then reproduce the issue in the portal.
- ![Screenshot that highlights the Preserve log option on the Console tab.](media/capture-browser-trace/chromium-console-preserve-log.png)
+ :::image type="content" source="media/capture-browser-trace/edge-start-session.png" alt-text="Screenshot showing how to record the network log in Edge.":::
-1. Select the **Network** tab, then select **Stop recording network log** and **Clear**.
+ You'll see session output similar to the following image.
+
+ :::image type="content" source="media/capture-browser-trace/edge-browser-trace-results.png" alt-text="Screenshot showing session output in Edge.":::
+
+1. After you have reproduced the unexpected portal behavior, select **Stop recording network log**, then select **Export HAR** and save the file.
+
+ :::image type="content" source="media/capture-browser-trace/edge-network-export-har.png" alt-text="Screenshot showing how to Export HAR on the Network tab in Edge.":::
+
+1. Stop the Steps Recorder and save the recording.
- ![Screenshot of "Stop recording network log" and "Clear" on the Network tab.](media/capture-browser-trace/chromium-stop-clear-session.png)
+1. Back in the browser developer tools pane, select the **Console** tab. Right-click one of the messages, then select **Save as...**, and save the console output to a text file.
+
+ :::image type="content" source="media/capture-browser-trace/edge-console-select.png" alt-text="Screenshot showing how to save the console output in Edge.":::
+
+1. Package the browser trace HAR file, console output, and screen recording files in a compressed format such as .zip.
+
+1. Share the compressed file with Microsoft support by [using the **File upload** option in your support request](supportability/how-to-manage-azure-support-request.md#upload-files).
+
+## Google Chrome
+
+The following steps show how to use the developer tools in Google Chrome. For more information, see [Chrome DevTools](https://developers.google.com/web/tools/chrome-devtools).
+
+1. Sign in to the [Azure portal](https://portal.azure.com). It's important to sign in _before_ you start the trace so that the trace doesn't contain sensitive information related to your account.
+
+1. Start recording the steps you take in the portal, using [Steps Recorder](https://support.microsoft.com/windows/record-steps-to-reproduce-a-problem-46582a9b-620f-2e36-00c9-04e25d784e47).
+
+1. In the portal, navigate to the step prior to where the issue occurs.
+
+1. Press F12 to launch the developer tools. You can also launch the tools from the toolbar menu under **More tools** > **Developer tools**.
+
+1. By default, the browser keeps trace information only for the page that's currently loaded. Set the following options so the browser keeps all trace information, even if your repro steps require going to more than one page:
+
+ 1. Select the **Console** tab, select **Console settings**, then select **Preserve Log**.
+
+ ![Screenshot that highlights the Preserve log option on the Console tab in Chrome.](media/capture-browser-trace/chromium-console-preserve-log.png)
+
+ 1. Select the **Network** tab, then select **Preserve log**.
+
+ ![Screenshot that highlights the Preserve log option on the Network tab in Chrome.](media/capture-browser-trace/chromium-network-preserve-log.png)
+
+1. On the **Network** tab, select **Stop recording network log** and **Clear**.
+
+ ![Screenshot of "Stop recording network log" and "Clear" on the Network tab in Chrome.](media/capture-browser-trace/chromium-stop-clear-session.png)
1. Select **Record network log**, then reproduce the issue in the portal.
- ![Screenshot that shows how to record the network log.](media/capture-browser-trace/chromium-start-session.png)
+ ![Screenshot that shows how to record the network log in Chrome.](media/capture-browser-trace/chromium-start-session.png)
- You will see session output similar to the following image.
+ You'll see session output similar to the following image.
- ![Screenshot that shows the session output.](media/capture-browser-trace/chromium-browser-trace-results.png)
+ ![Screenshot that shows the session output in Chrome.](media/capture-browser-trace/chromium-browser-trace-results.png)
1. After you have reproduced the unexpected portal behavior, select **Stop recording network log**, then select **Export HAR** and save the file.
- ![Screenshot that shows how to Export HAR on the Network tab.](media/capture-browser-trace/chromium-network-export-har.png)
+ ![Screenshot that shows how to Export HAR on the Network tab in Chrome.](media/capture-browser-trace/chromium-network-export-har.png)
1. Stop the Steps Recorder and save the recording. 1. Back in the browser developer tools pane, select the **Console** tab. Right-click one of the messages, then select **Save as...**, and save the console output to a text file.
- ![Screenshot that shows how to save the console output.](media/capture-browser-trace/chromium-console-select.png)
+ ![Screenshot that shows how to save the console output in Chrome.](media/capture-browser-trace/chromium-console-select.png)
1. Package the browser trace HAR file, console output, and screen recording files in a compressed format such as .zip.
The following steps show how to use the developer tools in Apple Safari on Mac.
1. Select **Develop**, then select **Show Web Inspector**.
- ![Screenshot of the "Show Web Inspector" command.](media/capture-browser-trace/safari-show-web-inspector.png)
+ ![Screenshot of the "Show Web Inspector" command.](media/capture-browser-trace/safari-show-web-inspector.png)
1. By default, the browser keeps trace information only for the page that's currently loaded. Set the following options so the browser keeps all trace information, even if your repro steps require going to more than one page:
- 1. Select the **Network** tab, then select **Preserve Log**.
+ 1. Select the **Console** tab, then select **Preserve Log**.
- ![Screenshot that shows the Preserve Log option on the Network tab.](media/capture-browser-trace/safari-network-preserve-log.png)
+ ![Screenshot that shows the Preserve Log on the Console tab.](media/capture-browser-trace/safari-console-preserve-log.png)
- 1. Select the **Console** tab, then select **Preserve Log**.
+ 1. Select the **Network** tab, then select **Preserve Log**.
- ![Screenshot that shows the Preserve Log on the Console tab.](media/capture-browser-trace/safari-console-preserve-log.png)
+ ![Screenshot that shows the Preserve Log option on the Network tab.](media/capture-browser-trace/safari-network-preserve-log.png)
-1. Select the **Network** tab, then select **Clear Network Items**.
+1. On the **Network** tab, select **Clear Network Items**.
![Screenshot of "Clear Network Items" on the Network tab.](media/capture-browser-trace/safari-clear-session.png)
-1. Reproduce the issue in the portal. You will see session output similar to the following image.
+1. Reproduce the issue in the portal. You'll see session output similar to the following image.
![Screenshot that shows the output after you've reproduced the issue.](media/capture-browser-trace/safari-browser-trace-results.png)
The following steps show how to use the developer tools in Firefox. For more inf
1. By default, the browser keeps trace information only for the page that's currently loaded. Set the following options so the browser keeps all trace information, even if your repro steps require going to more than one page:
- 1. Select the **Network** tab, select the **Settings** icon, and then select **Persist Logs**.
-
- :::image type="content" source="media/capture-browser-trace/firefox-network-persist-logs.png" alt-text="Screenshot of the Network setting for Persist Logs.":::
- 1. Select the **Console** tab, select the **Settings** icon, and then select **Persist Logs**.

    :::image type="content" source="media/capture-browser-trace/firefox-console-persist-logs.png" alt-text="Screenshot of the Console setting for Persist Logs.":::
-1. Select the **Network** tab, then select **Clear**.
+ 1. Select the **Network** tab, select the **Settings** icon, and then select **Persist Logs**.
+
+ :::image type="content" source="media/capture-browser-trace/firefox-network-persist-logs.png" alt-text="Screenshot of the Network setting for Persist Logs.":::
+
+1. On the **Network** tab, select **Clear**.
![Screenshot of the "Clear" option on the Network tab.](media/capture-browser-trace/firefox-clear-session.png)
-1. Reproduce the issue in the portal. You will see session output similar to the following image.
+1. Reproduce the issue in the portal. You'll see session output similar to the following image.
![Screenshot showing example browser trace results.](media/capture-browser-trace/firefox-browser-trace-results.png)
azure-video-analyzer Monitor Log Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/monitor-log-edge.md
Using [Prometheus endpoint](https://prometheus.io/docs/practices/naming/) along
[![Diagram that shows the metrics collection using Log Analytics.](./media/telemetry-schema/log-analytics.svg)](./media/telemetry-schema/log-analytics.svg#lightbox)

1. Learn how to [collect metrics](https://github.com/Azure/iotedge/blob/main/test/modules/TestMetricsCollector/Program.cs)
-1. Use Docker CLI commands to build the [Docker file](https://github.com/Azure/iotedge/blob/main/edge-hub/docker/linux/amd64/Dockerfile) and publish the image to your Azure container registry.
+1. Use Docker CLI commands to build the [Docker file](https://github.com/Azure/iotedge/blob/main/edge-hub/docker/linux/Dockerfile) and publish the image to your Azure container registry.
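The flow is roughly as follows (a sketch; the registry and image names are placeholders):

```bash
# Sign in to your Azure container registry
az acr login --name <yourRegistry>

# Build the image from the Dockerfile and tag it for your registry
docker build -t <yourRegistry>.azurecr.io/metrics-collector:1.0 .

# Push the image so the deployment manifest can reference it
docker push <yourRegistry>.azurecr.io/metrics-collector:1.0
```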
For more information about using the Docker CLI to push to a container registry, see [Push and pull Docker images](../../../container-registry/container-registry-get-started-docker-cli.md). For other information about Azure Container Registry, see the [documentation](../../../container-registry/index.yml).

1. After the push to Azure Container Registry is complete, the following is inserted into the deployment manifest:
cloudfoundry Cloudfoundry Deploy Your First App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloudfoundry/cloudfoundry-deploy-your-first-app.md
-
Title: Deploy your first app to Cloud Foundry on Microsoft Azure
-description: Deploy an application to Cloud Foundry on Azure
---- Previously updated : 06/14/2017---
-# Deploy your first app to Cloud Foundry on Microsoft Azure
-
-[Cloud Foundry](https://cloudfoundry.org) is a popular open-source application platform available on Microsoft Azure. In this article, we show how to deploy and manage an application on Cloud Foundry in an Azure environment.
-
-## Create a Cloud Foundry environment
-
-There are several options for creating a Cloud Foundry environment on Azure:
-- Use the Pivotal Cloud Foundry offer in the Azure Marketplace to create a standard environment that includes PCF Ops Manager and the Azure Service Broker. You can find [complete instructions][pcf-azuremarketplace-pivotaldocs] for deploying the marketplace offer in the Pivotal documentation.
-- Create a customized environment by [deploying Pivotal Cloud Foundry manually][pcf-custom].
-- [Deploy the open-source Cloud Foundry packages directly][oss-cf-bosh] by setting up a [BOSH](https://bosh.io) director, a VM that coordinates the deployment of the Cloud Foundry environment.
-
-> [!IMPORTANT]
-> If you are deploying PCF from the Azure Marketplace, make a note of the SYSTEMDOMAINURL and the admin credentials required to access the Pivotal Apps Manager, both of which are described in the marketplace deployment guide. They are needed to complete this tutorial. For marketplace deployments, the SYSTEMDOMAINURL is in the form `https://system.*ip-address*.cf.pcfazure.com`.
-
-## Connect to the Cloud Controller
-
-The Cloud Controller is the primary entry point to a Cloud Foundry environment for deploying and managing applications. The core Cloud Controller API (CCAPI) is a REST API, but it is accessible through various tools. In this case, we interact with it through the [Cloud Foundry CLI][cf-cli]. You can install the CLI on Linux, macOS, or Windows, but if you'd prefer not to install it at all, it is available pre-installed in the [Azure Cloud Shell][cloudshell-docs].
-
-To log in, prepend `api` to the SYSTEMDOMAINURL that you obtained from the marketplace deployment. Since the default deployment uses a self-signed certificate, you should also include the `skip-ssl-validation` switch.
-
-```bash
-cf login -a https://api.SYSTEMDOMAINURL --skip-ssl-validation
-```
-
-You are prompted to log in to the Cloud Controller. Use the admin account credentials that you acquired from the marketplace deployment steps.
-
-Cloud Foundry provides *orgs* and *spaces* as namespaces to isolate the teams and environments within a shared deployment. The PCF marketplace deployment includes the default *system* org and a set of spaces created to contain the base components, like the autoscaling service and the Azure service broker. For now, choose the *system* space.
--
-## Create an org and space
-
-If you type `cf apps`, you see a set of system applications that have been deployed in the system space within the system org.
-
-You should keep the *system* org reserved for system applications, so create an org and space to house our sample application.
-
-```bash
-cf create-org myorg
-cf create-space dev -o myorg
-```
-
-Use the target command to switch to the new org and space:
-
-```bash
-cf target -o myorg -s dev
-```
-
-Now, when you deploy an application, it is automatically created in the new org and space. To confirm that there are currently no apps in the new org/space, type `cf apps` again.
-
-> [!NOTE]
-> For more information about orgs and spaces and how they can be used for Cloud Foundry role-based access control (Cloud Foundry RBAC), see the [Cloud Foundry documentation][cf-orgs-spaces-docs].
-
-## Deploy an application
-
-Let's use a sample Cloud Foundry application called Hello Spring Cloud, which is written in Java and based on the [Spring Framework](https://spring.io) and [Spring Boot](https://spring.io/projects/spring-boot).
-
-### Clone the Hello Spring Cloud repository
-
-The Hello Spring Cloud sample application is available on GitHub. Clone it to your environment and change into the new directory:
-
-```bash
-git clone https://github.com/cloudfoundry-samples/hello-spring-cloud
-cd hello-spring-cloud
-```
-
-### Build the application
-
-Build the app using [Apache Maven](https://maven.apache.org).
-
-```bash
-mvn clean package
-```
-
-### Deploy the application with cf push
-
-You can deploy most applications to Cloud Foundry using the `push` command:
-
-```bash
-cf push
-```
-
-When you *push* an application, Cloud Foundry detects the type of application (in this case, a Java app) and identifies its dependencies (in this case, the Spring framework). It then packages everything required to run your code into a standalone container image, known as a *droplet*. Finally, Cloud Foundry schedules the application on one of the available machines in your environment and creates a URL where you can reach it, which is available in the output of the command.
-
-![Output from cf push command][cf-push-output]
-
-To see the hello-spring-cloud application, open the provided URL in your browser:
-
-![Default UI for Hello Spring Cloud][hello-spring-cloud-basic]
-
-> [!NOTE]
-> To learn more about what happens during `cf push`, see [How Applications Are Staged][cf-push-docs] in the Cloud Foundry documentation.
-
-## View application logs
-
-You can use the Cloud Foundry CLI to view logs for an application by its name:
-
-```bash
-cf logs hello-spring-cloud
-```
-
-By default, the logs command uses *tail*, which shows new logs as they are written. To see new logs appear, refresh the hello-spring-cloud app in the browser.
-
-To view logs that have already been written, add the `recent` switch:
-
-```bash
-cf logs --recent hello-spring-cloud
-```
-
-## Scale the application
-
-By default, `cf push` only creates a single instance of your application. To ensure high availability and enable scale out for higher throughput, you generally want to run more than one instance of your applications. You can easily scale out already deployed applications using the `scale` command:
-
-```bash
-cf scale -i 2 hello-spring-cloud
-```
-
-Running the `cf app` command on the application shows that Cloud Foundry is creating another instance of the application. Once the application has started, Cloud Foundry automatically starts load balancing traffic to it.
--
-## Next steps
-- [Read the Cloud Foundry documentation][cloudfoundry-docs]
-- [Set up the Azure DevOps Services plugin for Cloud Foundry][vsts-plugin]
-- [Configure the Microsoft Log Analytics Nozzle for Cloud Foundry][loganalytics-nozzle]
-
-<!-- LINKS -->
-
-[pcf-custom]: https://docs.pivotal.io/pivotalcf/1-10/customizing/azure.html
-[oss-cf-bosh]: https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs
-[pcf-azuremarketplace-pivotaldocs]: https://docs.pivotal.io/ops-manager/2-10/install/pcf_azure.html
-[cf-cli]: https://github.com/cloudfoundry/cli
-[cloudshell-docs]: ../cloud-shell/overview.md
-[cf-orgs-spaces-docs]: https://docs.cloudfoundry.org/concepts/roles.html
-[spring-boot]: https://spring.io/projects/spring-boot
-[spring-framework]: https://spring.io
-[cf-push-docs]: https://docs.cloudfoundry.org/concepts/how-applications-are-staged.html
-[cloudfoundry-docs]: https://docs.cloudfoundry.org
-[vsts-plugin]: https://github.com/Microsoft/vsts-cloudfoundry
-[loganalytics-nozzle]: https://github.com/Azure/oms-log-analytics-firehose-nozzle
-
-<!-- IMAGES -->
-[cf-push-output]: ./media/cloudfoundry-deploy-your-first-app/cf-push-output.png
-[hello-spring-cloud-basic]: ./media/cloudfoundry-deploy-your-first-app/hello-spring-cloud-basic.png
cloudfoundry Cloudfoundry Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloudfoundry/cloudfoundry-get-started.md
- Title: Getting Started with Cloud Foundry on Microsoft Azure
-description: Run OSS or Pivotal Cloud Foundry on Microsoft Azure
---- Previously updated : 01/19/2017---
-# Cloud Foundry on Azure
-
-Cloud Foundry is an open-source platform-as-a-service (PaaS) for building, deploying, and operating 12-factor applications developed in various languages and frameworks. This document describes the options you have for running Cloud Foundry on Azure and how you can get started.
-
-## Cloud Foundry offerings
-
-There are two forms of Cloud Foundry available to run on Azure: open-source Cloud Foundry (OSS CF) and Pivotal Cloud Foundry (PCF). OSS CF is an entirely [open-source](https://github.com/cloudfoundry) version of Cloud Foundry managed by the Cloud Foundry Foundation. Pivotal Cloud Foundry is an enterprise distribution of Cloud Foundry from Pivotal Software Inc. We look at some of the differences between the two offerings.
-
-### Open-source Cloud Foundry
-
-You can deploy OSS Cloud Foundry on Azure by first deploying a BOSH director and then deploying Cloud Foundry, using the [instructions provided on GitHub](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/blob/master/docs/guidance.md). To learn more about using OSS CF, see the [documentation](https://docs.cloudfoundry.org/) provided by the Cloud Foundry Foundation.
-
-Microsoft provides best-effort support for OSS CF through the following community channels:
-- #bosh-azure-cpi channel on [Cloud Foundry Slack](https://slack.cloudfoundry.org/)
-- [cf-bosh mailing list](https://lists.cloudfoundry.org/pipermail/cf-bosh)
-- GitHub issues for the [CPI](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/issues) and [service broker](https://github.com/Azure/meta-azure-service-broker/issues)
-
->[!NOTE]
-> The level of support for your Azure resources, such as the virtual machines where you run Cloud Foundry, is based on your Azure support agreement. Best-effort community support only applies to the Cloud Foundry-specific components.
-
-### Pivotal Cloud Foundry
-
-Pivotal Cloud Foundry includes the same core platform as the OSS distribution, along with a set of proprietary management tools and enterprise support. To run PCF on Azure, you must acquire a license from Pivotal. The PCF offer from the Azure marketplace includes a 90-day trial license.
-
-The tools include [Pivotal Operations Manager](https://docs.pivotal.io/ops-manager/2-10/install/), a web application that simplifies deployment and management of a Cloud Foundry foundation, and [Pivotal Apps Manager](https://docs.pivotal.io/application-service/2-7/console/index.html), a web application for managing users and applications.
-
-In addition to the support channels listed for OSS CF above, a PCF license entitles you to contact Pivotal for support. Microsoft and Pivotal have also enabled support workflows that allow you to contact either party for assistance and have your inquiry routed appropriately depending on where the issue lies.
-
-## Azure Service Broker
-
-Cloud Foundry encourages the ["twelve-factor app"](https://12factor.net/) methodology, which promotes a clean separation of stateless application processes and stateful backing services. [Service brokers](https://docs.cloudfoundry.org/services/api.html) offer a consistent way to provision and bind backing services to applications. The [Azure service broker](https://github.com/Azure/meta-azure-service-broker) provides some of the key Azure services through this channel, including Azure storage and Azure SQL.
-
-If you are using Pivotal Cloud Foundry, the service broker is also [available as a tile](https://docs.pivotal.io/azure-sb/installing.html) from the Pivotal Network.
-
-## Related resources
-
-### Azure DevOps Services plugin
-
-Cloud Foundry is well suited to agile software development, including the use of continuous integration (CI) and continuous delivery (CD). If you use Azure DevOps Services to manage your projects and would like to set up a CI/CD pipeline targeting Cloud Foundry, you can use the [Azure DevOps Services Cloud Foundry build extension](https://marketplace.visualstudio.com/items?itemName=ms-vsts.cloud-foundry-build-extension). The plugin makes it simple to configure and automate deployments to Cloud Foundry, whether running in Azure or another environment.
-
-## Next steps
--- [Deploy an app to Cloud Foundry in Azure](./cloudfoundry-deploy-your-first-app.md)
cloudfoundry Cloudfoundry Oms Nozzle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloudfoundry/cloudfoundry-oms-nozzle.md
- Title: Deploy Azure Log Analytics Nozzle for Cloud Foundry monitoring
-description: Step-by-step guidance on deploying the Cloud Foundry loggregator Nozzle for Azure Log Analytics. Use the Nozzle to monitor the Cloud Foundry system health and performance metrics.
--
-tags: Cloud-Foundry
--- Previously updated : 07/22/2017---
-# Deploy Azure Log Analytics Nozzle for Cloud Foundry system monitoring
-
-[Azure Monitor](https://azure.microsoft.com/services/log-analytics/) is a service in Azure. It helps you collect and analyze data that is generated from your cloud and on-premises environments.
-
-The Log Analytics Nozzle (the Nozzle) is a Cloud Foundry (CF) component, which forwards metrics from the [Cloud Foundry loggregator](https://docs.cloudfoundry.org/loggregator/architecture.html) firehose to Azure Monitor logs. With the Nozzle, you can collect, view, and analyze your CF system health and performance metrics, across multiple deployments.
-
-In this document, you learn how to deploy the Nozzle to your CF environment, and then access the data from the Azure Monitor logs console.
--
-## Prerequisites
-
-The following steps are prerequisites for deploying the Nozzle.
-
-### 1. Deploy a CF or Pivotal Cloud Foundry environment in Azure
-
-You can use the Nozzle with either an open source CF deployment or a Pivotal Cloud Foundry (PCF) deployment.
-
-* [Deploy Cloud Foundry on Azure](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/blob/master/docs/guidance.md)
-
-* [Deploy Pivotal Cloud Foundry on Azure](https://docs.pivotal.io/pivotalcf/1-11/customizing/azure.html)
-
-### 2. Install the CF command-line tools for deploying the Nozzle
-
-The Nozzle runs as an application in CF environment. You need CF CLI to deploy the application.
-
-The Nozzle also needs access permission to the loggregator firehose and the Cloud Controller. To create and configure the user, you need the User Account and Authentication (UAA) service.
-
-* [Install Cloud Foundry CLI](https://docs.cloudfoundry.org/cf-cli/install-go-cli.html)
-
-* [Install Cloud Foundry UAA command-line client](https://github.com/cloudfoundry/cf-uaac/blob/master/README.md)
-
-Before setting up the UAA command-line client, ensure that RubyGems is installed.
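With RubyGems in place, installing the client is typically a one-liner (a sketch; `cf-uaac` is the gem name used by the linked README):

```bash
gem install cf-uaac
```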
-
-### 3. Create a Log Analytics workspace in Azure
-
-You can create the Log Analytics workspace manually or by using a template. The template will deploy a setup of pre-configured KPI views and alerts for the Azure Monitor logs console.
-
-#### To create the workspace manually:
-
-1. In the Azure portal, search the list of services in the Azure Marketplace, and then select Log Analytics workspaces.
-2. Select **Create**, and then select choices for the following items:
-
- * **Log Analytics workspace**: Type a name for your workspace.
- * **Subscription**: If you have multiple subscriptions, choose the one that is the same as your CF deployment.
- * **Resource group**: You can create a new resource group, or use the same one with your CF deployment.
- * **Location**: Enter the location.
- * **Pricing tier**: Select **OK** to complete.
-
-For more information, see [Get started with Azure Monitor logs](../azure-monitor/overview.md).
-
-#### To create the Log Analytics workspace through the monitoring template from Azure Marketplace:
-
-1. Open Azure portal.
-1. Click the "+" sign, or "Create a resource" on the top left corner.
-1. Type "Cloud Foundry" in the search window, select "Cloud Foundry Monitoring Solution".
-1. The Cloud Foundry monitoring solution template front page is loaded, click "Create" to launch the template blade.
-1. Enter the required parameters:
- * **Subscription**: Select an Azure subscription for the Log Analytics workspace, usually the same with Cloud Foundry deployment.
- * **Resource group**: Select an existing resource group or create a new one for the Log Analytics workspace.
- * **Resource Group Location**: Select the location of the resource group.
- * **OMS_Workspace_Name**: Enter a workspace name, if the workspace does not exist, the template will create a new one.
- * **OMS_Workspace_Region**: Select the location for the workspace.
- * **OMS_Workspace_Pricing_Tier**: Select the Log Analytics workspace SKU. See the [pricing guidance](https://azure.microsoft.com/pricing/details/log-analytics/) for reference.
- * **Legal terms**: Click Legal terms, then click "Create" to accept the legal terms.
-1. After all parameters are specified, click "Create" to deploy the template. When the deployment is complete, the status shows up on the notification tab.
--
-## Deploy the Nozzle
-
-There are a couple of different ways to deploy the Nozzle: as a PCF tile or as a CF application.
-
-### Deploy the Nozzle as a PCF Ops Manager tile
-
-Follow the steps to [install and configure the Azure Log Analytics Nozzle for PCF](https://docs.pivotal.io/partners/azure-log-analytics-nozzle/installing.html). This is the simplified approach; the PCF Ops Manager tile automatically configures and pushes the Nozzle.
-
-### Deploy the Nozzle manually as a CF application
-
-If you are not using PCF Ops Manager, deploy the Nozzle as an application. The following sections describe this process.
-
-#### Sign in to your CF deployment as an admin through CF CLI
-
-Run the following command:
-```
-cf login -a https://api.${SYSTEM_DOMAIN} -u ${CF_USER} --skip-ssl-validation
-```
-
-"SYSTEM_DOMAIN" is your CF domain name. You can retrieve it by searching the "SYSTEM_DOMAIN" in your CF deployment manifest file.
-
-"CF_User" is the CF admin name. You can retrieve the name and password by searching the "scim" section, looking for the name and the "cf_admin_password" in your CF deployment manifest file.
-
-#### Create a CF user and grant required privileges
-
-Run the following commands:
-```
-uaac target https://uaa.${SYSTEM_DOMAIN} --skip-ssl-validation
-uaac token client get admin
-cf create-user ${FIREHOSE_USER} ${FIREHOSE_USER_PASSWORD}
-uaac member add cloud_controller.admin ${FIREHOSE_USER}
-uaac member add doppler.firehose ${FIREHOSE_USER}
-```
-
-"SYSTEM_DOMAIN" is your CF domain name. You can retrieve it by searching the "SYSTEM_DOMAIN" in your CF deployment manifest file.
-
-#### Download the latest Log Analytics Nozzle release
-
-Run the following command:
-```
-git clone https://github.com/Azure/oms-log-analytics-firehose-nozzle.git
-cd oms-log-analytics-firehose-nozzle
-```
-
-#### Set environment variables
-
-Now you can set environment variables in the manifest.yml file in your current directory. The following describes the environment variables in the app manifest for the Nozzle. Replace the values with your specific Log Analytics workspace information.
-
-```
-OMS_WORKSPACE : Log Analytics workspace ID: Open your Log Analytics workspace in the Azure portal, select **Advanced settings**, select **Connected Sources**, and select **Windows Servers**.
-OMS_KEY : OMS key: Open your Log Analytics workspace in the Azure portal, select **Advanced settings**, select **Connected Sources**, and select **Windows Servers**.
-OMS_POST_TIMEOUT : HTTP post timeout for sending events to Azure Monitor logs. The default is 10 seconds.
-OMS_BATCH_TIME : Interval for posting a batch to Azure Monitor logs. The default is 10 seconds.
-OMS_MAX_MSG_NUM_PER_BATCH : The maximum number of messages in a batch to Azure Monitor logs. The default is 1000.
-API_ADDR : The API URL of the CF environment. For more information, see the preceding section, "Sign in to your CF deployment as an admin through CF CLI."
-DOPPLER_ADDR : Loggregator's traffic controller URL. For more information, see the preceding section, "Sign in to your CF deployment as an admin through CF CLI."
-FIREHOSE_USER : CF user you created in the preceding section, "Create a CF user and grant required privileges." This user has firehose and Cloud Controller admin access.
-FIREHOSE_USER_PASSWORD : Password of the CF user above.
-EVENT_FILTER : Event types to be filtered out. The format is a comma-separated list. Valid event types are METRIC, LOG, and HTTP.
-SKIP_SSL_VALIDATION : If true, allows insecure connections to the UAA and the traffic controller.
-CF_ENVIRONMENT : Enter any string value for identifying logs and metrics from different CF environments.
-IDLE_TIMEOUT : The Keep Alive duration for the firehose consumer. The default is 60 seconds.
-LOG_LEVEL : The logging level of the Nozzle. Valid levels are DEBUG, INFO, and ERROR.
-LOG_EVENT_COUNT : If true, the total count of events that the Nozzle has received and sent are logged to Azure Monitor logs as CounterEvents.
-LOG_EVENT_COUNT_INTERVAL : The time interval of the logging event count to Azure Monitor logs. The default is 60 seconds.
-```
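Alternatively (a sketch; the app name is a placeholder), you could set or override individual values with `cf set-env` after the first push, and then restage the app so they take effect:

```bash
cf set-env <nozzle-app-name> OMS_WORKSPACE <your-workspace-id>
cf set-env <nozzle-app-name> OMS_KEY <your-workspace-key>
cf restage <nozzle-app-name>
```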
-
-### Push the application from your development computer
-
-Ensure that you are under the oms-log-analytics-firehose-nozzle folder. Run the following command:
-```
-cf push
-```
-
-## Validate the Nozzle installation
-
-### From Apps Manager (for PCF)
-
-1. Sign in to Ops Manager, and make sure the tile is displayed on the installation dashboard.
-2. Sign in to Apps Manager, make sure the space you have created for the Nozzle is listed on the usage report, and confirm that the status is normal.
-
-### From your development computer
-
-In the CF CLI window, type:
-```
-cf apps
-```
-Make sure the OMS Nozzle application is running.
-
-## View the data in the Azure portal
-
-If you have deployed the monitoring solution through the marketplace template, go to the Azure portal and locate the solution. You can find the solution in the resource group you specified in the template. Click the solution and browse to the Log Analytics console. The pre-configured views are listed, with top Cloud Foundry system KPIs, application data, alerts, and VM health metrics.
-
-If you have created the Log Analytics workspace manually, follow steps below to create the views and alerts:
-
-### 1. Import the OMS view
-
-From the OMS portal, browse to **View Designer** > **Import** > **Browse**, and select one of the omsview files. For example, select *Cloud Foundry.omsview*, and save the view. Now a tile is displayed on the **Overview** page. Select it to see visualized metrics.
-
-You can customize these views or create new views through **View Designer**.
-
-The *"Cloud Foundry.omsview"* is a preview version of the Cloud Foundry OMS view template. This is a fully configured, default template. If you have suggestions or feedback about the template, send them to the [issue section](https://github.com/Azure/oms-log-analytics-firehose-nozzle/issues).
-
-### 2. Create alert rules
-
-You can [create the alerts](../azure-monitor/alerts/alerts-overview.md), and customize the queries and threshold values as needed. The following are recommended alerts:
-
-| Search query | Generate alert based on | Description |
-| -- | -- | |
-| Type=CF_ValueMetric_CL Origin_s=bbs Name_s="Domain.cf-apps" | Number of results < 1 | **bbs.Domain.cf-apps** indicates if the cf-apps Domain is up-to-date. This means that CF App requests from Cloud Controller are synchronized to bbs.LRPsDesired (Diego-desired AIs) for execution. No data received means cf-apps Domain is not up-to-date in the specified time window. |
-| Type=CF_ValueMetric_CL Origin_s=rep Name_s=UnhealthyCell Value_d>1 | Number of results > 0 | For Diego cells, 0 means healthy, and 1 means unhealthy. Set the alert if multiple unhealthy Diego cells are detected in the specified time window. |
-| Type=CF_ValueMetric_CL Origin_s="bosh-hm-forwarder" Name_s="system.healthy" Value_d=0 | Number of results > 0 | 1 means the system is healthy, and 0 means the system is not healthy. |
-| Type=CF_ValueMetric_CL Origin_s=route_emitter Name_s=ConsulDownMode Value_d>0 | Number of results > 0 | Consul emits its health status periodically. 0 means the system is healthy, and 1 means that the route emitter detects that Consul is down. |
-| Type=CF_CounterEvent_CL Origin_s=DopplerServer (Name_s="TruncatingBuffer.DroppedMessages" or Name_s="doppler.shedEnvelopes") Delta_d>0 | Number of results > 0 | The delta number of messages intentionally dropped by Doppler due to back pressure. |
-| Type=CF_LogMessage_CL SourceType_s=LGR MessageType_s=ERR | Number of results > 0 | Loggregator emits **LGR** to indicate problems with the logging process. An example of such a problem is when the log message output is too high. |
-| Type=CF_ValueMetric_CL Name_s=slowConsumerAlert | Number of results > 0 | When the Nozzle receives a slow consumer alert from loggregator, it sends the **slowConsumerAlert** ValueMetric to Azure Monitor logs. |
-| Type=CF_CounterEvent_CL Job_s=nozzle Name_s=eventsLost Delta_d>0 | Number of results > 0 | If the delta number of lost events reaches a threshold, it means the Nozzle might have a problem running. |
-
-## Scale
-
-You can scale the Nozzle and the loggregator.
-
-### Scale the Nozzle
-
-You should start with at least two instances of the Nozzle. The firehose distributes the workload across all instances of the Nozzle.
-To make sure the Nozzle can keep up with the data traffic from the firehose, set up the **slowConsumerAlert** alert (listed in the preceding section, "Create alert rules"). After you have been alerted, follow the [guidance for slow Nozzle](https://docs.pivotal.io/pivotalcf/1-11/loggregator/log-ops-guide.html#slow-noz) to determine whether scaling is needed.
-To scale up the Nozzle, use Apps Manager or the CF CLI to increase the instance numbers or the memory or disk resources for the Nozzle.
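For example (a sketch; the app name and sizes are placeholders), scaling out to two instances with more memory per instance could look like:

```bash
cf scale <nozzle-app-name> -i 2 -m 512M
```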
-
-### Scale the loggregator
-
-Loggregator sends an **LGR** log message to indicate problems with the logging process. You can monitor the alert to determine whether the loggregator needs to be scaled up.
-To scale up the loggregator, either increase the Doppler buffer size, or add additional Doppler server instances in the CF manifest. For more information, see [the guidance for scaling the loggregator](https://docs.cloudfoundry.org/running/managing-cf/logging-config.html#scaling).
-
-## Update
-
-To update the Nozzle with a newer version, download the new Nozzle release, follow the steps in the preceding "Deploy the Nozzle" section, and push the application again.
-
-### Remove the Nozzle from Ops Manager
-
-1. Sign in to Ops Manager.
-2. Locate the **Microsoft Azure Log Analytics Nozzle for PCF** tile.
-3. Select the garbage icon, and confirm the deletion.
-
-### Remove the Nozzle from your development computer
-
-In your CF CLI window, type:
-```
-cf delete <App Name> -r
-```
-
-If you remove the Nozzle, the data in OMS portal is not automatically removed. It expires based on your Azure Monitor logs retention setting.
-
-## Support and feedback
-
-Azure Log Analytics Nozzle is open sourced. Send your questions and feedback to the [GitHub section](https://github.com/Azure/oms-log-analytics-firehose-nozzle/issues).
-To open an Azure support request, choose "Virtual Machine running Cloud Foundry" as the service category.
-
-## Next step
-
-Starting with PCF 2.0, VM performance metrics are transferred to the Azure Log Analytics Nozzle by the System Metrics Forwarder and integrated into the Log Analytics workspace. You no longer need the Log Analytics agent for VM performance metrics.
-However, you can still use the Log Analytics agent to collect Syslog information. The Log Analytics agent is installed as a BOSH add-on to your CF VMs.
-
-For details, see [Deploy Log Analytics agent to your Cloud Foundry deployment](https://github.com/Azure/oms-agent-for-linux-boshrelease).
cloudfoundry Create Cloud Foundry On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloudfoundry/create-cloud-foundry-on-azure.md
- Title: Create a Pivotal Cloud Foundry cluster on Azure
-description: Learn how to set up the parameters needed to provision a Pivotal Cloud Foundry (PCF) cluster on Azure
-Previously updated: 09/13/2018
-# Create a Pivotal Cloud Foundry cluster on Azure
-
-This tutorial provides quick steps to create and generate the parameters you need to provision a Pivotal Cloud Foundry (PCF) cluster on Azure. To find the Pivotal Cloud Foundry solution, perform a search in the Azure Marketplace.
-
-![Search Pivotal Cloud Foundry in Azure](media/deploy/pcf-marketplace.png)
-
-## Generate an SSH public key
-
-There are several ways to generate a secure shell (SSH) public key on Windows, Mac, or Linux. For example:
-
-```Bash
-ssh-keygen -t rsa -b 2048
-```
-
-For more information, see [Use SSH keys with Windows on Azure](../virtual-machines/linux/ssh-from-windows.md).
-
-## Create a service principal
-
-> [!NOTE]
->
-> To create a service principal, you need owner account permission. You can also write a script to automate creating the service principal. For example, you can use the Azure CLI [az ad sp create-for-rbac](/cli/azure/ad/sp).
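->
-> For example, this one-step sketch (the display name and scope are placeholders) creates the service principal and assigns the Contributor role:
->
-> ```azurecli
-> az ad sp create-for-rbac --name "svc-principal-opsmanager" --role Contributor --scopes /subscriptions/{subscription-id}
-> ```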
-
-1. Sign in to your Azure account.
-
- ```azurecli
- az login
- ```
-
- ![Azure CLI login](media/deploy/az-login-output.png )
-
- Copy the "id" value as your **subscription ID**, and copy the "tenantId" value to use later.
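-
-   Alternatively, as a sketch that uses the CLI's built-in JMESPath queries, you can print just these two values:
-
-   ```azurecli
-   az account show --query "{subscriptionId: id, tenantId: tenantId}"
-   ```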
-
-2. Set your default subscription for this configuration.
-
- ```azurecli
- az account set -s {id}
- ```
-
-3. Create an Azure Active Directory application for your PCF. Specify a unique alphanumeric password. Store the password as your **clientSecret** to use later.
-
- ```azurecli
- az ad app create --display-name "Svc Principal for OpsManager" --password {enter-your-password} --homepage "{enter-your-homepage}" --identifier-uris {enter-your-homepage}
- ```
-
- Copy the "appId" value in the output as your **clientID** to use later.
-
- > [!NOTE]
- >
- > Choose your own application home page and identifier URI, for example, http\://www\.contoso.com.
-
-4. Create a service principal with your new app ID.
-
- ```azurecli
- az ad sp create --id {appId}
- ```
-
-5. Set the permission role of your service principal as a Contributor.
-
- ```azurecli
- az role assignment create --assignee "{enter-your-homepage}" --role "Contributor" --scope /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}
- ```
-
-   Or, you can also use:
-
- ```azurecli
- az role assignment create --assignee {service-principal-name} --role "Contributor" --scope /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}
- ```
-
- ![Service principal role assignment](media/deploy/svc-princ.png )
-
-6. Verify that you can successfully sign in to your service principal by using the app ID, password, and tenant ID.
-
- ```azurecli
- az login --service-principal -u {appId} -p {your-password} --tenant {tenantId}
- ```
-
-7. Create a .json file in the following format. Use the **subscription ID**, **tenantID**, **clientID**, and **clientSecret** values you copied previously. Save the file.
-
- ```json
- {
- "subscriptionID": "{enter-your-subscription-Id-here}",
- "tenantID": "{enter-your-tenant-Id-here}",
- "clientID": "{enter-your-app-Id-here}",
- "clientSecret": "{enter-your-key-here}"
- }
- ```
-
-## Get the Pivotal Network token
-
-1. Register or sign in to your [Pivotal Network](https://network.pivotal.io) account.
-2. Select your profile name in the upper-right corner of the page. Select **Edit Profile**.
-3. Scroll to the bottom of the page, and copy the **LEGACY API TOKEN** value. This value is your **Pivotal Network Token** value that you use later.
-
-## Provision your Cloud Foundry cluster on Azure
-
-Now you have all the parameters you need to provision your Pivotal Cloud Foundry cluster on Azure.
-Enter the parameters, and create your PCF cluster.
-
-## Verify the deployment, and sign in to the Pivotal Ops Manager
-
-1. Your PCF cluster shows a deployment status.
-
- ![Azure deployment status](media/deploy/deployment.png )
-
-2. Select the **Deployments** link in the navigation on the left to get credentials for your PCF Ops Manager. Select the **Deployment Name** on the next page.
-3. In the navigation on the left, select the **Outputs** link to display the URL, username, and password for the PCF Ops Manager. The "OPSMAN-FQDN" value is the URL.
-
- ![Cloud Foundry deployment output](media/deploy/deploy-outputs.png )
-
-4. Open the URL in a web browser. Enter the credentials from the previous step to sign in.
-
- ![Pivotal sign-in page](media/deploy/pivotal-login.png )
-
- > [!NOTE]
- >
-   > If the Internet Explorer browser blocks the page with a "Site not secure" warning message, select **More information** and go to the webpage. For Firefox, select **Advanced** and add the certificate exception to proceed.
-
-5. Your PCF Ops Manager displays the deployed Azure instances. Now you can deploy and manage your applications here.
-
- ![Deployed Azure instance in Pivotal](media/deploy/ops-mgr.png )
cloudfoundry How Cloud Foundry Integrates With Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloudfoundry/how-cloud-foundry-integrates-with-azure.md
- Title: How Cloud Foundry Integrates with Azure | Microsoft Docs
-description: Describes how Cloud Foundry can use Azure services to enhance the Enterprise experience
-
-tags: Cloud-Foundry
-Previously updated: 11/14/2022
-# Integrate Cloud Foundry with Azure
-
-[Cloud Foundry](https://docs.cloudfoundry.org/) is a PaaS platform that runs on top of cloud providers' IaaS platforms. It offers a consistent application deployment experience across cloud providers. It can also integrate with various Azure services, with enterprise-grade HA, scalability, and cost savings.
-There are [six subsystems of Cloud Foundry](https://docs.cloudfoundry.org/concepts/architecture/) that can be flexibly scaled online: Routing, Authentication, Application life-cycle management, Service management, Messaging, and Monitoring. For each of these subsystems, you can configure Cloud Foundry to use the corresponding Azure service.
-
-![Cloud Foundry on Azure Integration Architecture](media/CFOnAzureEcosystem-colored.png)
-
-## 1. High Availability and Scalability
-### Managed Disk
-BOSH uses the Azure CPI (Cloud Provider Interface) for disk creation and deletion routines. By default, unmanaged disks are used, which requires customers to manually create storage accounts and then configure them in the CF manifest files, because of the limit on the number of disks per storage account.
-Now that [Managed Disks](https://azure.microsoft.com/services/managed-disks/) are available, offering managed, secure, and reliable disk storage for virtual machines, customers no longer need to deal with storage accounts for scale and HA. Azure arranges the disks automatically.
-Whether it's a new or an existing deployment, the Azure CPI handles the creation or migration of managed disks during a CF deployment. Managed disks are supported with PCF 1.11. You can also explore the open-source Cloud Foundry [Managed Disk guidance](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/managed-disks) for reference.
-### Availability Zone *
-As a cloud-native application platform, Cloud Foundry is designed with [four levels of high availability](https://docs.pivotal.io/pivotalcf/2-1/concepts/high-availability.html). While the first
-three levels of software failure can be handled by the CF system itself, platform fault tolerance is provided by the cloud provider. The key CF components should be protected with a cloud provider's platform HA solution. These include the GoRouters, Diego Brains, the CF database, and service tiles. By default, [Azure Availability Sets](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/deploy-cloudfoundry-with-availability-sets) are used for fault tolerance between clusters in a data center.
-The good news is that [Azure Availability Zones](../availability-zones/az-overview.md) are now released, bringing fault tolerance to the next level with low-latency redundancy across data centers.
-Azure Availability Zones achieve HA by placing sets of VMs into two or more data centers; each set of VMs is redundant to the other sets. If one zone goes down, the other sets are still live, isolated from the disaster.
-> [!NOTE]
-> Azure Availability Zones aren't offered in all regions yet; check the latest [announcement for the list of supported regions](../availability-zones/az-overview.md). For Open Source Cloud Foundry, check the [Azure Availability Zone for open source Cloud Foundry guidance](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/availability-zone).
-
-## 2. Network Routing
-By default, the Azure basic load balancer is used for incoming CF API/apps requests, forwarding them to the GoRouters. CF components like Diego Brain, MySQL, and ERT can also use the load balancer to balance traffic for HA. Azure also provides a set of fully managed load-balancing solutions. If you're looking for TLS/SSL termination ("SSL offload") or per-request application-layer processing of HTTP/HTTPS traffic, consider Application Gateway. For high availability and scalability of layer 4 load balancing, consider the standard load balancer.
-### Azure Application Gateway *
-[Azure Application Gateway](../application-gateway/overview.md) offers various layer 7 load-balancing capabilities, including SSL offloading, end-to-end TLS, Web Application Firewall, cookie-based session affinity, and more. You can [configure Application Gateway in Open Source Cloud Foundry](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/application-gateway). For PCF, check the [PCF 2.1 release notes](https://docs.pivotal.io/pivotalcf/2-1/pcf-release-notes/opsmanager-rn.html#azure-application-gateway) for the POC test.
-
-### Azure Standard Load Balancer *
-Azure Load Balancer is a Layer 4 load balancer. It's used to distribute traffic among instances of services in a load-balanced set. The standard version provides [advanced features](../load-balancer/load-balancer-overview.md) on top of the basic version: the backend pool limit is raised from 100 to 1,000 VMs, endpoints support multiple availability sets instead of a single availability set, and additional features such as HA ports and richer monitoring data are available. If you're moving to Azure Availability Zones, the standard load balancer is required. For a new deployment, we recommend starting with the Azure Standard Load Balancer.
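-
-Outside of the CF tooling, a minimal Azure CLI sketch for creating a Standard SKU load balancer looks like this (the resource group and load balancer names are hypothetical):
-
-```azurecli
-az network lb create --resource-group my-cf-rg --name my-cf-lb --sku Standard
-```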
-
-## 3. Authentication
-[Cloud Foundry User Account and Authentication](https://docs.cloudfoundry.org/concepts/architecture/uaa.html) (UAA) is the central identity management service for CF and its various components. [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) is Microsoft's multi-tenant, cloud-based directory and identity management service.
-By default, UAA is used for Cloud Foundry authentication. As an advanced option, UAA also supports Azure AD as an external user store. Azure AD users can access Cloud Foundry with their LDAP identities, without a Cloud Foundry account. Follow these steps to [configure Azure AD for UAA in PCF](https://docs.pivotal.io/p-identity/1-6/azure/index.html).
-
-## 4. Data storage for Cloud Foundry Runtime System
-Cloud Foundry offers great extensibility, letting you use the Azure blobstore or the Azure MySQL/PostgreSQL services for application runtime system storage.
-### Azure Blobstore for Cloud Foundry Cloud Controller blobstore
-The Cloud Controller blobstore is a critical data store for buildpacks, droplets, packages, and resource pools. By default, an NFS server is used for the Cloud Controller blobstore.
-To avoid a single point of failure, use Azure Blob Storage as the external store. Check out the [Cloud Foundry documentation](https://docs.cloudfoundry.org/deploying/common/cc-blobstore-config.html) for background, and the [options in Pivotal Cloud Foundry](https://docs.pivotal.io/pivotalcf/2-0/customizing/azure.html).
-
-### MySQL/PostgreSQL as Cloud Foundry Elastic Run Time Database *
-CF Elastic Runtime requires two major system databases:
-#### CCDB
-The Cloud Controller database. The Cloud Controller provides REST API endpoints for clients to access the system. The CCDB stores tables for orgs, spaces, services, user roles, and more for the Cloud Controller.
-#### UAADB
-The database for User Account and Authentication. It stores user authentication-related data, for example encrypted user names and passwords.
-
-By default, a local system database (MySQL) can be used. For HA and scale, use the managed Azure MySQL or PostgreSQL services.
-Here are the instructions for [enabling Azure MySQL/PostgreSQL for CCDB, UAADB, and other system databases with Open Source Cloud Foundry](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/configure-cf-external-databases-using-azure-mysql-postgres-service).
-
-## 5. Open Service Broker
-The Azure service broker offers a consistent interface to manage an application's access to Azure services. The new [Open Service Broker for Azure project](https://github.com/Azure/open-service-broker-azure) provides a single, simple way to deliver services to applications across Cloud Foundry, OpenShift, and Kubernetes. See the [Azure Open Service Broker for PCF tile](https://docs.pivotal.io/tiledev/2-2/service-brokers.html) for deployment instructions on PCF.
-
-## 6. Metrics and Logging
-The Azure Log Analytics Nozzle is a Cloud Foundry component that forwards metrics from the [Cloud Foundry loggregator firehose](https://docs.cloudfoundry.org/loggregator/architecture.html) to [Azure Monitor logs](https://azure.microsoft.com/services/log-analytics/). With the Nozzle, you can collect, view, and analyze your CF system health and performance metrics across multiple deployments.
-See [Deploy the Azure Log Analytics Nozzle](./cloudfoundry-oms-nozzle.md) to learn how to deploy it to both Open Source and Pivotal Cloud Foundry environments, and then access the data from the Azure Monitor logs console.
-> [!NOTE]
-> From PCF 2.0, BOSH health metrics for VMs are forwarded to the Loggregator Firehose by default, and are integrated into Azure Monitor logs console.
-
-## 7. Cost Saving
-### Cost Saving for Dev/Test Environments
-#### B-Series: *
-While the F and D VM series were commonly recommended for Pivotal Cloud Foundry production environments, the new "burstable" [B-series](https://azure.microsoft.com/blog/introducing-b-series-our-new-burstable-vm-size/) brings new options. The B-series burstable VMs are ideal for workloads that don't need the full performance of the CPU continuously, like web servers, small databases, and development and test environments. These workloads typically have burstable performance requirements. A B1 costs $0.012/hour compared to $0.05/hour for an F1; see the full list of [VM sizes](../virtual-machines/sizes-general.md) and [prices](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) for details.
-#### Managed Standard Disk:
-Premium disks were recommended for reliable performance in production. With [Managed Disks](https://azure.microsoft.com/services/managed-disks/), standard storage can also deliver similar reliability, with different performance. For workloads that aren't performance-sensitive, like dev/test or non-critical environments, managed standard disks offer an alternative option at lower cost.
-### Cost saving in General
-#### Significant VM Cost Saving with Azure reservations:
-Today, all CF VMs are billed using "on-demand" pricing, even though the environments typically stay up indefinitely. Now you can reserve VM capacity on a one- or three-year term and gain discounts of 45-65%. Discounts are applied in the billing system, with no changes to your environment. For details, see [How Azure reservations work](https://azure.microsoft.com/pricing/reserved-vm-instances/).
-#### Managed Premium Disk with Smaller Sizes:
-Managed disks support smaller disk sizes, for example P4 (32 GB) and P6 (64 GB), for both premium and standard disks. If you have small workloads, you can save cost when migrating from standard premium disks to smaller managed premium disks.
-#### Use Azure first-party services
-Taking advantage of Azure's first-party services lowers long-term administration costs, in addition to providing the HA and reliability mentioned in the sections above.
-
-Pivotal has launched a [Small Footprint ERT](https://docs.pivotal.io/pivotalcf/2-0/customizing/small-footprint.html) for PCF customers, in which the components are co-located into just four VMs, running up to 2,500 application instances. The trial version is now available through the Azure Marketplace.
-
-## Next Steps
-Azure integration features are first available with [Open Source Cloud Foundry](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/) before they're available on Pivotal Cloud Foundry. Features marked with * are still not available through PCF. Cloud Foundry integration with Azure Stack isn't covered in this document either.
-For PCF support of the features marked with *, or for Cloud Foundry integration with Azure Stack, contact your Pivotal and Microsoft account managers for the latest status.
cognitive-services Speech Container Howto On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto-on-premises.md
For more details on installing applications with Helm in Azure Kubernetes Servic
[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [helm-test]: https://v2.helm.sh/docs/helm/#helm-test
-[ms-helm-hub]: https://hub.helm.sh/charts/microsoft
+[ms-helm-hub]: https://artifacthub.io/packages/search?repo=microsoft
[ms-helm-hub-speech-chart]: https://hub.helm.sh/charts/microsoft/cognitive-services-speech-onpremise <!-- LINKS - internal -->
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
zone_pivot_groups: acs-azcli-js-csharp-java-python
> [!IMPORTANT] > SMS capabilities depend on the phone number you use and the country that you're operating within as determined by your Azure billing address. For more information, visit the [Subscription eligibility](../../concepts/numbers/sub-eligibility-number-capability.md) documentation.
->
-> Currently, SMS messages can only be sent to and received from United States phone numbers. For more information, see [Phone number types](../../concepts/telephony/plan-solution.md).
<br/>
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated: 01/23/2023 | Last updated: 02/25/2023

# Product updates for Azure Cosmos DB for PostgreSQL
Updates that change cluster internals, such as installing a [new minor PostgreSQ
* General availability: 4 TiB, 8 TiB, and 16 TiB storage per node is now supported for [multi-node configurations](resources-compute.md#multi-node-cluster) in addition to previously supported 0.5 TiB, 1 TiB, and 2 TiB storage sizes. * See cost details for your region in 'Multi-node' section of [the Azure Cosmos DB for PostgreSQL pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/postgresql/).
+* General availability: [Latest minor PostgreSQL version updates](reference-versions.md#postgresql-versions) (11.19, 12.14, 13.10, 14.7, and 15.2) are now available in all supported regions.
+ * Existing clusters will get the minor Postgres version update with [the next maintenance](concepts-maintenance.md).
+ * Major Postgres and minor Citus [version upgrades](concepts-upgrade.md) can be performed in-place.
+ ### January 2023
cosmos-db Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md
Previously updated: 02/15/2023 | Last updated: 02/25/2023

# PostgreSQL extensions in Azure Cosmos DB for PostgreSQL
The versions of each extension installed in a cluster sometimes differ based on
> [!div class="mx-tableFixed"] > | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | > ||||||
-> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.11 | 10.0.7 | 10.2.8 | 11.1.4 | 11.1.4 |
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.11 | 10.0.7 | 10.2.8 | 11.1.5 | 11.2.0 |
### Data types extensions
cosmos-db Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md
Previously updated: 11/20/2022 | Last updated: 02/25/2023

# Supported database versions in Azure Cosmos DB for PostgreSQL
versions](https://www.postgresql.org/docs/release/):
### PostgreSQL version 15
-The current minor release is 15.1. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/15.1/) to
+The current minor release is 15.2. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/15.2/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 14
-The current minor release is 14.6. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/14.6/) to
+The current minor release is 14.7. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/14.7/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 13
-The current minor release is 13.9. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/13.9/) to
+The current minor release is 13.10. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/13.10/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 12
-The current minor release is 12.13. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/12.13/) to
+The current minor release is 12.14. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/12.14/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 11
-The current minor release is 11.18. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/11.17/) to
+The current minor release is 11.19. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/11.19/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 10 and older
data-factory Compute Optimized Data Flow Retire https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compute-optimized-data-flow-retire.md
From now through 31 August 2024, your Compute Optimized data flows will continue
* [Visit the Azure Data Factory pricing page for the latest updated pricing available for General Purpose and Memory Optimized data flows](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/) * [Find more detailed information at the data flows FAQ here](./frequently-asked-questions.yml#mapping-data-flows)
-* [Post questions and find answers on data flows on Microsoft Q&A](/answers/questions/topics/azure-data-factory.html)
+* [Post questions and find answers on data flows on Microsoft Q&A](/azure/data-factory/frequently-asked-questions)
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
|--|--|--|--|--|--|--|--| | Compliance | Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Vulnerability Assessment <sup>[2](#footnote2)</sup> | Registry scan - OS packages | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Vulnerability Assessment <sup>[3](#footnote3)</sup> | Registry scan - language specific packages | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds |
+| Vulnerability Assessment <sup>[3](#footnote3)</sup> | Registry scan - language specific packages | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Vulnerability Assessment | View vulnerabilities for running images | AKS | GA | Preview | Defender profile | Defender for Containers | Commercial clouds | | Hardening | Control plane recommendations | ACR, AKS | GA | Preview | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Hardening | Kubernetes data plane recommendations | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
defender-for-cloud Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/zero-trust.md
+
+ Title: Zero trust infrastructure and integrations
+description: Independent software vendors (ISVs) can integrate their solutions with Microsoft Defender for Cloud to help customers adopt a Zero Trust model and keep their organizations secure.
Last updated: 02/26/2023
+# Zero trust infrastructure and integrations
+
+Infrastructure comprises the hardware, software, micro-services, networking infrastructure, and facilities required to support IT services for an organization. Zero Trust infrastructure solutions assess, monitor, and prevent security threats to these services.
+
+Zero Trust infrastructure solutions support the principles of Zero Trust by ensuring that access to infrastructure resources is verified explicitly, that access is granted using principles of least-privilege access, and that mechanisms are in place that assume breach and look for and remediate security threats in infrastructure.
+
+This guidance is for software providers and technology partners who want to enhance their infrastructure security solutions by integrating with Microsoft products.
+
+## Zero Trust integration for Infrastructure guide
+
+This integration guide includes strategy and instructions for integrating with [Microsoft Defender for Cloud](defender-for-cloud-introduction.md) and its integrated cloud workload protection platform (CWPP).
+
+The guidance includes integrations with the most popular Security Information and Event Management (SIEM), Security Orchestration Automated Response (SOAR), Endpoint Detection and Response (EDR), and IT Service Management (ITSM) solutions.
+
+### Zero Trust and Defender for Cloud
+
+Our [Zero Trust infrastructure deployment guidance](/security/zero-trust/deploy/infrastructure) provides the key stages of the Zero Trust strategy for infrastructure, which are:
+
+1. [Assess compliance with chosen standards and policies](update-regulatory-compliance-packages.md)
+1. [Harden configuration](recommendations-reference.md) wherever gaps are found
+1. Employ other hardening tools such as [just-in-time (JIT)](just-in-time-access-usage.md) VM access
+1. Set up [threat detection and protections](/azure/azure-sql/database/threat-detection-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&view=azuresql)
+1. Automatically block and flag risky behavior and take protective actions
+
+There's a clear mapping from the goals we've described in the [infrastructure deployment guidance](/security/zero-trust/deploy/infrastructure) to the core aspects of Defender for Cloud.
+
+|Zero Trust goal | Defender for Cloud feature |
+|||
+|Assess compliance | In Defender for Cloud, every subscription automatically has the [Microsoft cloud security benchmark (MCSB) security initiative assigned](security-policy-concept.md).<br>Using the [secure score tools](secure-score-security-controls.md) and the [regulatory compliance dashboard](update-regulatory-compliance-packages.md) you can get a deep understanding of your customer's security posture. |
+| Harden configuration | [Review your security recommendations](review-security-recommendations.md) and [track your secure score improvement over time](secure-score-access-and-track.md). You can also prioritize which recommendations to remediate based on potential attack paths, by leveraging the [attack path](how-to-manage-attack-path.md) feature. |
+|Employ hardening mechanisms | Least privilege access is one of the three principles of Zero Trust. Defender for Cloud can assist you in hardening VMs and networks according to this principle by using features such as:<br>[Just-in-time (JIT) virtual machine (VM) access](just-in-time-access-overview.md)<br>[Adaptive network hardening](adaptive-network-hardening.md)<br>[Adaptive application controls](adaptive-application-controls.md). |
+|Set up threat detection | Defender for Cloud offers an integrated cloud workload protection platform (CWPP) that provides advanced, intelligent protection of Azure and hybrid resources and workloads.<br>One of the Microsoft Defender plans, Microsoft Defender for Servers, includes a native integration with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/).<br>Learn more in [Introduction to Microsoft Defender for Cloud](/azure/security-center/azure-defender). |
+|Automatically block suspicious behavior | Many of the hardening recommendations in Defender for Cloud offer a *deny* option. This feature lets you prevent the creation of resources that don't satisfy defined hardening criteria. Learn more in [Prevent misconfigurations with Enforce/Deny recommendations](/azure/defender-for-cloud/prevent-misconfigurations). |
+|Automatically flag suspicious behavior | Microsoft Defender for Cloud's security alerts are triggered by advanced detections. Defender for Cloud prioritizes and lists the alerts, along with the information needed for you to quickly investigate the problem. Defender for Cloud also provides detailed steps to help you remediate attacks. For a full list of the available alerts, see [Security alerts - a reference guide](alerts-reference.md).|
+
+### Protect your Azure PaaS services with Defender for Cloud
+
+With Defender for Cloud enabled on your subscription, and its workload protections enabled for all available resource types, you'll have a layer of intelligent threat protection - powered by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) - protecting resources in Azure Key Vault, Azure Storage, Azure DNS, and other Azure PaaS services. For a full list, see [What resource types can Microsoft Defender for Cloud secure?](defender-for-cloud-introduction.md).
+
+### Azure Logic Apps
+Use [Azure Logic Apps](/azure/logic-apps/) to build automated scalable workflows, business processes, and enterprise orchestrations to integrate your apps and data across cloud services and on-premises systems.
+
+Defender for Cloud's [workflow automation](workflow-automation.md) feature lets you automate responses to Defender for Cloud triggers.
+
+This is a great way to define and respond in an automated, consistent manner when threats are discovered. For example, you can notify relevant stakeholders, launch a change management process, and apply specific remediation steps when a threat is detected.
+
+### Integrate Defender for Cloud with your SIEM, SOAR, and ITSM solutions
+
+Microsoft Defender for Cloud can stream your security alerts into the most popular Security Information and Event Management (SIEM), Security Orchestration Automated Response (SOAR), and IT Service Management (ITSM) solutions.
+
+There are Azure-native tools for ensuring you can view your alert data in all of the most popular solutions in use today, including:
+
+- Microsoft Sentinel
+- Splunk Enterprise and Splunk Cloud
+- IBM's QRadar
+- ServiceNow
+- ArcSight
+- Power BI
+- Palo Alto Networks
+
+#### Microsoft Sentinel
+
+Defender for Cloud natively integrates with [Microsoft Sentinel](/azure/sentinel/overview), Microsoft's cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution.
+
+There are two approaches to ensuring your Defender for Cloud data is represented in Microsoft Sentinel:
+
+- **Sentinel connectors** - Microsoft Sentinel includes built-in connectors for Microsoft Defender for Cloud at the subscription and tenant levels:
+
+ - [Stream alerts to Microsoft Sentinel at the subscription level](/azure/sentinel/connect-azure-security-center)
+ - [Connect all subscriptions in your tenant to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-security-center-auto-connect-to-sentinel/ba-p/1387539)
+
+ > [!TIP]
+ > Learn more in [Connect security alerts from Microsoft Defender for Cloud](../sentinel/connect-defender-for-cloud.md).
+
+- **Stream your audit logs** - An alternative way to investigate Defender for Cloud alerts in Microsoft Sentinel is to stream your audit logs into Microsoft Sentinel:
+
+ - [Connect Windows security events](/azure/sentinel/connect-windows-security-events)
+ - [Collect data from Linux-based sources using Syslog](/azure/sentinel/connect-syslog)
+ - [Connect data from Azure Activity log](/azure/sentinel/connect-azure-activity)
+
+#### Stream alerts with Microsoft Graph Security API
+
+Defender for Cloud has out-of-the-box integration with Microsoft Graph Security API. No configuration is required and there are no extra costs.
+
+You can use this API to stream alerts from the **entire tenant** (and data from many other Microsoft Security products) into third-party SIEMs and other popular platforms:
+
+- **Splunk Enterprise and Splunk Cloud** - [Use the Microsoft Graph Security API Add-On for Splunk](https://splunkbase.splunk.com/app/4564/)
+- **Power BI** - [Connect to the Microsoft Graph Security API in Power BI Desktop](/power-bi/connect-data/desktop-connect-graph-security)
+- **ServiceNow** - [Follow the instructions to install and configure the Microsoft Graph Security API application from the ServiceNow Store](https://docs.servicenow.com/bundle/orlando-security-management/page/product/secops-integration-sir/secops-integration-ms-graph/task/ms-graph-install.html)
+- **QRadar** - [IBM's Device Support Module for Microsoft Defender for Cloud via Microsoft Graph API](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/com.ibm.dsm.doc/c_dsm_guide_ms_azure_security_center_overview.html)
+- **Palo Alto Networks**, **Anomali**, **Lookout**, **InSpark**, and more - [Microsoft Graph Security API](https://www.microsoft.com/security/business/graph-security-api#office-MultiFeatureCarousel-09jr2ji)
+
+[Learn more about Microsoft Graph Security API](https://www.microsoft.com/security/business/graph-security-api).
+
+#### Stream alerts with Azure Monitor
+
+Use Defender for Cloud's [continuous export](/azure/security-center/continuous-export) feature to connect Defender for Cloud with Azure monitor via Azure Event Hubs and stream alerts into **ArcSight**, **SumoLogic**, Syslog servers, **LogRhythm**, **Logz.io Cloud Observability Platform**, and other monitoring solutions.
+
+Learn more in [Stream alerts with Azure Monitor](/azure/security-center/export-to-siem#stream-alerts-with-azure-monitor).
+
+This can also be done at the management group level by using Azure Policy. See [Create continuous export automation configurations at scale](continuous-export.md#configure-continuous-export-from-the-defender-for-cloud-pages-in-azure-portal).
+
+> [!TIP]
+> To view the event schemas of the exported data types, visit the [Event Hub event schemas](https://aka.ms/ASCAutomationSchemas).
+
+### Integrate Defender for Cloud with an Endpoint Detection and Response (EDR) solution
+
+#### Microsoft Defender for Endpoint
+
+[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) is a holistic, cloud-delivered endpoint security solution.
+
+Defender for Cloud's integrated CWPP for machines, [Microsoft Defender for servers](plan-defender-for-servers.md), includes an integrated license for [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities. For more information, see [Protect your endpoints](/azure/security-center/security-center-wdatp?tabs=linux).
+
+When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown in Defender for Cloud. From Defender for Cloud, you can also pivot to the Defender for Endpoint console and perform a detailed investigation to uncover the scope of the attack. Learn more about [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint).
+
+#### Other EDR solutions
+
+Defender for Cloud provides hardening recommendations to ensure you're securing your organization's resources according to the guidance of [Azure Security Benchmark](/security/benchmark/azure/introduction). One of the controls in the benchmark relates to endpoint security: [ES-1: Use Endpoint Detection and Response (EDR)](/security/benchmark/azure/security-controls-v2-endpoint-security).
+
+There are two recommendations in Defender for Cloud to ensure you've enabled endpoint protection and that it's running well. These recommendations check for the presence and operational health of EDR solutions from:
+
+- Trend Micro
+- Symantec
+- McAfee
+- Sophos
+
+Learn more in [Endpoint protection assessment and recommendations in Microsoft Defender for Cloud](endpoint-protection-recommendations-technical.md).
+
+### Apply your Zero Trust strategy to hybrid and multicloud scenarios
+
+With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same.
+
+Microsoft Defender for Cloud protects workloads wherever they're running: in Azure, on-premises, Amazon Web Services (AWS), or Google Cloud Platform (GCP).
+
+#### Integrate Defender for Cloud with on-premises machines
+
+To secure hybrid cloud workloads, you can extend Defender for Cloud's protections by connecting on-premises machines to [Azure Arc enabled servers](/azure/azure-arc/servers/overview).
+
+Learn about how to connect machines in [Connect your non-Azure machines to Defender for Cloud](quickstart-onboard-machines.md).
+
+#### Integrate Defender for Cloud with other cloud environments
+
+To view the security posture of **Amazon Web Services** machines in Defender for Cloud, onboard AWS accounts into Defender for Cloud. This integrates AWS Security Hub and Microsoft Defender for Cloud for a unified view of Defender for Cloud recommendations and AWS Security Hub findings and provides a range of benefits as described in [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md).
+
+To view the security posture of **Google Cloud Platform** machines in Defender for Cloud, onboard GCP accounts into Defender for Cloud. This integrates GCP Security Command Center and Microsoft Defender for Cloud for a unified view of Defender for Cloud recommendations and GCP Security Command Center findings and provides a range of benefits as described in [Connect your GCP accounts to Microsoft Defender for Cloud](quickstart-onboard-gcp.md).
+
+## Next steps
+
+To learn more about Microsoft Defender for Cloud, see the complete [Defender for Cloud documentation](index.yml).
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md
Sign back in to the primary appliance after redirection.
Before you perform the procedures in this article, verify that you've met the following prerequisites:

-- Make sure that you have an [on-premises management console installed](/ot-deploy/install-software-on-premises-management-console.md) on both a primary appliance and a secondary appliance.
+- Make sure that you have an [on-premises management console installed](./ot-deploy/install-software-on-premises-management-console.md) on both a primary appliance and a secondary appliance.
- Both your primary and secondary on-premises management console appliances must be running identical hardware models and software versions.
- You must be able to access both the primary and secondary on-premises management consoles as a [privileged user](references-work-with-defender-for-iot-cli-commands.md), for running CLI commands. For more information, see [On-premises users and roles for OT monitoring](roles-on-premises.md).
defender-for-iot Arcsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/arcsight.md
To configure your ArcSight server settings so that it can receive Defender for I
1. Sign in to your ArcSight server. 1. Configure your receiver type as a **CEF UDP Receiver**.
-For more information, see the [ArcSight SmartConnectors Documentation](https://www.microfocus.com/documentation/arcsight/arcsight-smartconnectors/#gsc.tab=0).
+For more information, see the [ArcSight SmartConnectors Documentation](https://www.microfocus.com/documentation/arcsight/arcsight-smartconnectors-8.4/#gsc.tab=0).
## Create a Defender for IoT forwarding rule
defender-for-iot Iot Advanced Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-advanced-threat-monitoring.md
The **Microsoft Defender for IoT** solution includes a more detailed set of out-
## Investigate Defender for IoT incidents
-After you've [configured your Defender for IoT data to trigger new incidents in Microsoft Sentinel](#detect-threats-out-of-the-box-with-defender-for-iot-data), start investigating those incidents in Microsoft Sentinel [as you would other incidents](/sentinel/investigate-cases).
+After you've [configured your Defender for IoT data to trigger new incidents in Microsoft Sentinel](#detect-threats-out-of-the-box-with-defender-for-iot-data), start investigating those incidents in Microsoft Sentinel [as you would other incidents](../../sentinel/investigate-cases.md).
**To investigate Microsoft Defender for IoT incidents**:
This playbook updates the incident severity according to the importance level of
> [!div class="nextstepaction"] > [Use playbooks with automation rules](../../sentinel/tutorial-respond-threats-playbook.md)
-For more information, see our blog: [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
+For more information, see our blog: [Defending Critical Infrastructure with the Microsoft Sentinel: IT/OT Threat Monitoring Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/defending-critical-infrastructure-with-the-microsoft-sentinel-it/ba-p/3061184)
deployment-environments Quickstart Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md
You can access and manage your environments in the Microsoft Developer portal.
1. Sign in to the [developer portal](https://devportal.microsoft.com). 1. You'll be able to view all of your existing environments. To access the specific resources created as part of an Environment, select the **Environment Resources** link.+ :::image type="content" source="media/quickstart-create-access-environments/environment-resources.png" alt-text="Screenshot showing an environment card, with the environment resources link highlighted."::: 1. You'll be able to view the resources in your environment listed in the Azure portal.
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md
result = iothub_job_manager.create_import_export_job(JobProperties(
## SDK samples - [.NET SDK sample](https://aka.ms/iothubmsicsharpsample)-- [Java SDK sample](https://aka.ms/iothubmsijavasample)
+- [Java SDK sample](https://github.com/Azure/azure-iot-sdk-java/tree/main/provisioning/provisioning-device-client/src/main/java/com/microsoft/azure/sdk/iot)
- [Python SDK sample](https://github.com/Azure/azure-iot-hub-python/tree/main/samples) ## Next steps
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-node.md
This code uses the following [Key Vault Certificate classes and methods](/javasc
* [beginDeleteCertificate](/javascript/api/@azure/keyvault-certificates/certificateclient#@azure-keyvault-certificates-certificateclient-begindeletecertificate) * [PollerLike interface](/javascript/api/@azure/core-lro/pollerlike) * [getResult](/javascript/api/@azure/core-lro/pollerlike#@azure-core-lro-pollerlike-getresult)
- * [pollUntilDone](/javascript/api/@azure/core-lro/pollerlike@azure-core-lro-pollerlike-polluntildone)
### Set up the app framework
In this quickstart, you created a key vault, stored a certificate, and retrieved
- See an [Access Key Vault from App Service Application Tutorial](../general/tutorial-net-create-vault-azure-web-app.md) - See an [Access Key Vault from Virtual Machine Tutorial](../general/tutorial-net-virtual-machine.md) - See the [Azure Key Vault developer's guide](../general/developers-guide.md)-- Review the [Key Vault security overview](../general/security-features.md)
+- Review the [Key Vault security overview](../general/security-features.md)
key-vault About Keys Secrets Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/about-keys-secrets-certificates.md
Where:
| Element | Description | |-|-|
-|`vault-name` or `hsm-name`|The name for a vault or a Managed HSM pool in the Microsoft Azure Key Vault service.<br /><br />Vault names and Managed HSM pool names are selected by the user and are globally unique.<br /><br />Vault name and Managed HSM pool name must be a 3-24 character string, containing only 0-9, a-z, A-Z, and -.|
+|`vault-name` or `hsm-name`|The name for a vault or a Managed HSM pool in the Microsoft Azure Key Vault service.<br /><br />Vault names and Managed HSM pool names are selected by the user and are globally unique.<br /><br />Vault name and Managed HSM pool name must be a 3-24 character string, containing only 0-9, a-z, A-Z, and hyphens (-), with no consecutive hyphens.|
|`object-type`|The type of the object, "keys", "secrets", or 'certificates'.| |`object-name`|An `object-name` is a user provided name for and must be unique within a Key Vault. The name must be a 1-127 character string, starting with a letter and containing only 0-9, a-z, A-Z, and -.| |`object-version`|An `object-version` is a system-generated, 32 character string identifier that is optionally used to address a unique version of an object.|
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-node.md
In this code, the name of your key vault is used to create the key vault URI, in
The code samples below will show you how to create a client, set a secret, retrieve a secret, and delete a secret.
-This code uses the following [Key Vault Secret classes and methods](/javascript/api/overview/azure/keyvault-secretss-readme):
+This code uses the following [Key Vault Secret classes and methods](/javascript/api/overview/azure/keyvault-secrets-readme):
* [DefaultAzureCredential](/javascript/api/@azure/identity/#@azure-identity-getdefaultazurecredential) * [SecretClient class](/javascript/api/@azure/keyvault-secrets/secretclient)
machine-learning Concept Automl Forecasting Deep Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-deep-learning.md
+
+ Title: Deep learning with AutoML forecasting
+
+description: Learn how Azure Machine Learning's AutoML uses deep learning to forecast time series values
+Last updated: 02/24/2023
+show_latex: true
+
+# Deep learning with AutoML forecasting
+
+This article focuses on the deep learning methods for time series forecasting in AutoML. Instructions and examples for training forecasting models in AutoML can be found in our [set up AutoML for time series forecasting](./how-to-auto-train-forecast.md) article.
+
+Deep learning has made a major impact in fields ranging from [language modeling](../cognitive-services/openai/concepts/models.md) to [protein folding](https://www.deepmind.com/research/highlighted-research/alphafold), among many others. Time series forecasting has likewise benefitted from recent advances in deep learning technology. For example, deep neural network (DNN) models feature prominently in the top performing models from the [fourth](https://www.uber.com/blog/m4-forecasting-competition/) and [fifth](https://www.sciencedirect.com/science/article/pii/S0169207021001874) iterations of the high-profile Makridakis forecasting competition.
+
+In this article, we'll describe the structure and operation of the TCNForecaster model in AutoML to help you best apply the model to your scenario.
+
+## Introduction to TCNForecaster
+
+TCNForecaster is a [temporal convolutional network](https://arxiv.org/abs/1803.01271), or TCN, which has a DNN architecture specifically designed for time series data. The model uses historical data for a target quantity, along with related features, to make probabilistic forecasts of the target up to a specified forecast horizon. The following image shows the major components of the TCNForecaster architecture:
+
+*(Figure: the major components of the TCNForecaster architecture.)*
+
+TCNForecaster has the following main components:
+
+* A **pre-mix** layer that mixes the input time series and feature data into an array of signal **channels** that the convolutional stack will process.
+* A stack of **dilated convolution** layers that processes the channel array sequentially; each layer in the stack processes the output of the previous layer to produce a new channel array. Each channel in this output contains a mixture of convolution-filtered signals from the input channels.
+* A collection of **forecast head** units that coalesce the output signals from the convolution layers and generate forecasts of the target quantity from this latent representation. Each head unit produces forecasts up to the horizon for a quantile of the prediction distribution.
+
+### Dilated causal convolution
+
+The central operation of a TCN is a dilated, causal [convolution](https://en.wikipedia.org/wiki/Cross-correlation) along the time dimension of an input signal. Intuitively, convolution mixes together values from nearby time points in the input. The proportions in the mixture are the **kernel**, or the weights, of the convolution, while the separation between points in the mixture is the **dilation**. The output signal is generated from the input by sliding the kernel in time along the input and accumulating the mixture at each position. A **causal** convolution is one in which the kernel only mixes input values in the past relative to each output point, preventing the output from "looking" into the future.
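+
+In symbols, a causal convolution of an input signal $x$ with a kernel $w$ of length $K$ and dilation $d$ produces the output
+
+$y_{t} = \sum_{k=0}^{K-1} w_{k} \, x_{t - kd},$
+
+so each output value $y_{t}$ mixes only the current and past input values $x_{t}, x_{t-d}, \ldots, x_{t-(K-1)d}$.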
+
+Stacking dilated convolutions gives the TCN the ability to model correlations over long durations in input signals with relatively few kernel weights. For example, the following image shows three stacked layers with a two-weight kernel in each layer and exponentially increasing dilation factors:
+
+*(Figure: three stacked convolution layers with a two-weight kernel and exponentially increasing dilation factors.)*
+
+The dashed lines show paths through the network that end on the output at a time $t$. These paths cover the last eight points in the input, illustrating that each output point is a function of the eight most recent points in the input. The length of history, or "look back," that a convolutional network uses to make predictions is called the **receptive field**, and it's determined completely by the TCN architecture.
+
+### TCNForecaster architecture
+
+The core of the TCNForecaster architecture is the stack of convolutional layers between the pre-mix and the forecast heads. The stack is logically divided into repeating units called **blocks** that are, in turn, composed of **residual cells**. A residual cell applies causal convolutions at a set dilation along with normalization and nonlinear activation. Importantly, each residual cell adds its output to its input using a so-called residual connection. These connections [have been shown to benefit DNN training](https://arxiv.org/abs/1512.03385), perhaps because they facilitate more efficient information flow through the network. The following image shows the architecture of the convolutional layers for an example network with two blocks and three residual cells in each block:
+
+*(Figure: the convolutional layers for an example network with two blocks and three residual cells per block.)*
+
+The number of blocks and cells, along with the number of signal channels in each layer, control the size of the network. The architectural parameters of TCNForecaster are summarized in the following table:
+
+|Parameter|Description|
+|--|--|
+|$n_{b}$|Number of blocks in the network; also called the _depth_|
+|$n_{c}$|Number of cells in each block|
+|$n_{\text{ch}}$|Number of channels in the hidden layers|
+
+The **receptive field** depends on the depth parameters and is given by the formula,
+
+$t_{\text{rf}} = 4n_{b}\left(2^{n_{c}} - 1\right) + 1.$
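+
+For example, a network with $n_{b} = 2$ blocks and $n_{c} = 3$ cells per block has a receptive field of
+
+$t_{\text{rf}} = 4 \cdot 2 \left(2^{3} - 1\right) + 1 = 57,$
+
+so each forecast depends on the 57 most recent points in the input.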
+
+We can give a more precise definition of the TCNForecaster architecture in terms of formulas. Let $X$ be an input array where each row contains feature values from the input data. We can divide $X$ into numeric and categorical feature arrays, $X_{\text{num}}$ and $X_{\text{cat}}$. Then, the TCNForecaster is given by the formulas,
+
+*(Figure: the TCNForecaster model equations.)*
+
+where $W_{e}$ is an [embedding](https://huggingface.co/blog/getting-started-with-embeddings) matrix for the categorical features, $n_{l} = n_{b}n_{c}$ is the total number of residual cells, the $H_{k}$ denote hidden layer outputs, and the $f_{q}$ are forecast outputs for given quantiles of the prediction distribution. To aid understanding, the dimensions of these variables are in the following table:
+
+|Variable|Description|Dimensions|
+|--|--|--|
+|$X$|Input array|$n_{\text{input}} \times t_{\text{rf}}$|
+|$H_{i}$|Hidden layer output for $i=0,1,\ldots,n_{l}$|$n_{\text{ch}} \times t_{\text{rf}}$|
+|$f_{q}$|Forecast output for quantile $q$|$h$|
+
+In the table, $n_{\text{input}} = n_{\text{features}} + 1$, the number of predictor/feature variables plus the target quantity. The forecast heads generate all forecasts up to the maximum horizon, $h$, in a single pass, so TCNForecaster is a [direct forecaster](./concept-automl-forecasting-methods.md).
+
+## TCNForecaster in AutoML
+
+TCNForecaster is an optional model in AutoML. To learn how to use it, see [enable deep learning](./how-to-auto-train-forecast.md#enable-deep-learning).
+
+In this section, we'll describe how AutoML builds TCNForecaster models with your data, including explanations of data preprocessing, training, and model search.
+
+### Data preprocessing steps
+
+AutoML executes several preprocessing steps on your data to prepare for model training. The following table describes these steps in the order they're performed:
+
+|Step|Description|
+|--|--|
+|Fill missing data|[Impute missing values and observation gaps](./concept-automl-forecasting-methods.md#missing-data-handling) and optionally [pad or drop short time series](./how-to-auto-train-forecast.md#short-series-handling)|
+|Create calendar features|Augment the input data with [features derived from the calendar](./concept-automl-forecasting-calendar-features.md) like day of the week and, optionally, holidays for a specific region or country.|
+|Encode categorical data|[Label encode](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) strings and other categorical types; this includes all [time series ID columns](./how-to-auto-train-forecast.md#configuration-settings).|
+|Target transform|Optionally apply the natural logarithm function to the target depending on the results of certain statistical tests.|
+|Normalization|[Z-score normalize](https://en.wikipedia.org/wiki/Standard_score) all numeric data; normalization is performed per feature and per time series group, as defined by the [time series ID columns](./how-to-auto-train-forecast.md#configuration-settings).|
+
+These steps are included in AutoML's transform pipelines, so they are automatically applied when needed at inference time. In some cases, the inverse operation to a step is included in the inference pipeline. For example, if AutoML applied a $\log$ transform to the target during training, the raw forecasts are exponentiated in the inference pipeline.
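+As a hedged illustration of that last point (toy data, not AutoML's internal code), the inverse operations are applied in reverse order at inference time:

```python
import numpy as np

y = np.array([10.0, 12.0, 9.0, 15.0])   # toy target series

# Training-time transforms: optional log, then per-series z-score.
y_log = np.log(y)
mu, sigma = y_log.mean(), y_log.std()
y_norm = (y_log - mu) / sigma

# The inference pipeline inverts each step in reverse order:
# un-normalize, then exponentiate the raw forecasts.
y_restored = np.exp(y_norm * sigma + mu)
assert np.allclose(y_restored, y)
```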
+
+### Training
+
+TCNForecaster training follows best practices common to other DNN applications in imaging and language. AutoML divides preprocessed training data into **examples** that are shuffled and combined into **batches**. The network processes the batches sequentially, using backpropagation and stochastic gradient descent to optimize the network weights with respect to a **loss function**. Training can require many passes through the full training data; each pass is called an **epoch**.
+
+The following table lists and describes input settings and parameters for TCNForecaster training:
+
+|Training input|Description|Value|
+|--|--|--|
+|Validation data|A portion of data that is held out from training to guide the network optimization and mitigate overfitting.|[Provided by the user](./how-to-auto-train-forecast.md#training-and-validation-data) or automatically created from training data if not provided.|
+|Primary metric|Metric computed from median-value forecasts on the validation data at the end of each training epoch; used for early stopping and model selection.|[Chosen by the user](./how-to-auto-train-forecast.md#configure-experiment); normalized root mean squared error or normalized mean absolute error.|
+|Training epochs|Maximum number of epochs to run for network weight optimization.|100; automated early stopping logic may terminate training at a smaller number of epochs.|
+|Early stopping patience|Number of epochs to wait for primary metric improvement before training is stopped.|20|
+|Loss function|The objective function for network weight optimization.|[Quantile loss](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_pinball_loss.html) averaged over 10th, 25th, 50th, 75th, and 90th percentile forecasts.|
+|Batch size|Number of examples in a batch. Each example has dimensions $n_{\text{input}} \times t_{\text{rf}}$ for input and $h$ for output.|Determined automatically from the total number of examples in the training data; maximum value of 1024.|
+|Embedding dimensions|Dimensions of the embedding spaces for categorical features.|Automatically set to the fourth root of the number of distinct values in each feature, rounded up to the nearest integer. Thresholds are applied at a minimum value of 3 and maximum value of 100 (see the sketch after this table).|
+|Network architecture*|Parameters that control the size and shape of the network: depth, number of cells, and number of channels.|Determined by [model search](#model-search).|
+|Network weights|Parameters controlling signal mixtures, categorical embeddings, convolution kernel weights, and mappings to forecast values.|Randomly initialized, then optimized with respect to the loss function.|
+|Learning rate*|Controls how much the network weights can be adjusted in each iteration of gradient descent; [dynamically reduced](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html) near convergence.|Determined by model search.|
+|Dropout ratio*|Controls the degree of [dropout regularization](https://en.wikipedia.org/wiki/Dilution_(neural_networks)) applied to the network weights.|Determined by model search.|
+
+Inputs marked with an asterisk (*) are determined by a hyper-parameter search that is described in the next section.
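+To make two of these defaults concrete, here's a minimal sketch of the embedding-dimension rule and the quantile (pinball) loss described in the table. The function names are illustrative, not AutoML APIs:

```python
import math
import numpy as np

def embedding_dim(n_distinct: int) -> int:
    # Fourth root of the feature's cardinality, rounded up to the
    # nearest integer, clamped to the [3, 100] thresholds from the table.
    return min(100, max(3, math.ceil(n_distinct ** 0.25)))

def pinball_loss(y_true, y_pred, q):
    # Quantile loss for a single quantile q.
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1.0) * diff))

print(embedding_dim(10_000))  # 10

y_true = np.array([10.0, 12.0, 9.0])
y_pred = np.array([11.0, 11.0, 9.5])
# Training averages the loss over the five forecast quantiles.
loss = np.mean([pinball_loss(y_true, y_pred, q)
                for q in (0.1, 0.25, 0.5, 0.75, 0.9)])
```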
+
+### Model search
+
+AutoML uses model search methods to find values for the following hyper-parameters:
+
+* Network depth, or the number of [convolutional blocks](#tcnforecaster-architecture),
+* Number of cells per block,
+* Number of channels in each hidden layer,
+* Dropout ratio for network regularization,
+* Learning rate.
+
+Optimal values for these parameters can vary significantly depending on the problem scenario and training data, so AutoML trains several different models within the space of hyper-parameter values and picks the best one according to the primary metric score on the validation data.
+
+The model search has two phases:
+
+1. AutoML performs a search over 12 "landmark" models. The landmark models are static and chosen to reasonably span the hyper-parameter space.
+2. AutoML continues searching through the hyper-parameter space using a random search.
+
+The search terminates when stopping criteria are met. The stopping criteria depend on the [forecast training job configuration](./how-to-auto-train-forecast.md#configure-experiment), but some examples include time limits, limits on number of search trials to perform, and early stopping logic when the validation metric is not improving.
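+The sketch below shows what a single random-search draw might look like. The parameter ranges are invented for illustration; the actual landmark models and search ranges are internal to AutoML:

```python
import random

# Hypothetical search space over the hyper-parameters listed above.
space = {
    "n_blocks": [1, 2, 3, 4],
    "n_cells": [1, 2, 3],
    "n_channels": [64, 128, 256],
    "dropout": [0.0, 0.1, 0.25, 0.5],
    "learning_rate": [1e-4, 5e-4, 1e-3, 5e-3],
}

def sample_trial(rng: random.Random) -> dict:
    return {name: rng.choice(values) for name, values in space.items()}

rng = random.Random(0)
trial = sample_trial(rng)
# Each trial trains a TCNForecaster with this configuration and is
# scored by the primary metric on the validation data.
print(trial)
```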
+
+## Next steps
+
+* Learn how to [set up AutoML to train a time-series forecasting model](./how-to-auto-train-forecast.md).
+* Learn about [forecasting methodology in AutoML](./concept-automl-forecasting-methods.md).
+* Browse [frequently asked questions about forecasting in AutoML](./how-to-automl-forecasting-faq.md).
+
machine-learning Concept Automl Forecasting Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-methods.md
The following table lists the forecasting models implemented in AutoML and what
Time Series Models | Regression Models
-| --
-[Naive, Seasonal Naive, Average, Seasonal Average](https://otexts.com/fpp3/simple-methods.html), [ARIMA(X)](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html), [Exponential Smoothing](https://www.statsmodels.org/dev/generated/statsmodels.tsa.holtwinters.ExponentialSmoothing.html) | [Linear SGD](https://scikit-learn.org/stable/modules/linear_model.html#stochastic-gradient-descent-sgd), [LARS LASSO](https://scikit-learn.org/stable/modules/linear_model.html#lars-lasso), [Elastic Net](https://scikit-learn.org/stable/modules/linear_model.html#elastic-net), [Prophet](https://facebook.github.io/prophet/), [K Nearest Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-regression), [Decision Tree](https://scikit-learn.org/stable/modules/tree.html#regression), [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests), [Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees), [Gradient Boosted Trees](https://scikit-learn.org/stable/modules/ensemble.html#regression), [LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html), [XGBoost](https://xgboost.readthedocs.io/en/latest/parameter.html), [ForecastTCN](./how-to-auto-train-forecast.md#enable-deep-learning)
+[Naive, Seasonal Naive, Average, Seasonal Average](https://otexts.com/fpp3/simple-methods.html), [ARIMA(X)](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html), [Exponential Smoothing](https://www.statsmodels.org/dev/generated/statsmodels.tsa.holtwinters.ExponentialSmoothing.html) | [Linear SGD](https://scikit-learn.org/stable/modules/linear_model.html#stochastic-gradient-descent-sgd), [LARS LASSO](https://scikit-learn.org/stable/modules/linear_model.html#lars-lasso), [Elastic Net](https://scikit-learn.org/stable/modules/linear_model.html#elastic-net), [Prophet](https://facebook.github.io/prophet/), [K Nearest Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-regression), [Decision Tree](https://scikit-learn.org/stable/modules/tree.html#regression), [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests), [Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees), [Gradient Boosted Trees](https://scikit-learn.org/stable/modules/ensemble.html#regression), [LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html), [XGBoost](https://xgboost.readthedocs.io/en/latest/parameter.html), [TCNForecaster](./concept-automl-forecasting-deep-learning.md#introduction-to-tcnforecaster)
-The models in each category are listed roughly in order of the complexity of patterns they're able to incorporate, also known as the **model capacity**. A Naive model, which simply forecasts the last observed value, has low capacity while the Temporal Convolutional Network (ForecastTCN), a deep neural network with potentially millions of tunable parameters, has high capacity.
+The models in each category are listed roughly in order of the complexity of patterns they're able to incorporate, also known as the **model capacity**. A Naive model, which simply forecasts the last observed value, has low capacity while the Temporal Convolutional Network (TCNForecaster), a deep neural network with potentially millions of tunable parameters, has high capacity.
Importantly, AutoML also includes **ensemble** models that create weighted combinations of the best performing models to further improve accuracy. For forecasting, we use a [soft voting ensemble](https://scikit-learn.org/stable/modules/ensemble.html#voting-regressor) where composition and weights are found via the [Caruana Ensemble Selection Algorithm](http://www.niculescu-mizil.org/papers/shotgun.icml04.revised.rev2.pdf).
Lags of feature columns | Optional
Rolling window aggregations (for example, rolling average) of target quantity | Optional
Seasonal decomposition ([STL](https://otexts.com/fpp3/stl.html)) | Optional
-You can configure featurization from the AutoML SDK via the [ForecastingJob](/python/api/azure-ai-ml/azure.ai.ml.automl.forecastingjob#azure-ai-ml-automl-forecastingjob-set-forecast-settings) class or from the [Azure Machine Learning Studio web interface](how-to-use-automated-ml-for-ml-models.md#customize-featurization).
+You can configure featurization from the AutoML SDK via the [ForecastingJob](/python/api/azure-ai-ml/azure.ai.ml.automl.forecastingjob#azure-ai-ml-automl-forecastingjob-set-forecast-settings) class or from the [Azure Machine Learning studio web interface](how-to-use-automated-ml-for-ml-models.md#customize-featurization).
### Non-stationary time series detection and handling
When a dataset contains more than one time series, as in the given data example,
Each Series in Own Group (1:1) | All Series in Single Group (N:1)
-| --
-Naive, Seasonal Naive, Average, Seasonal Average, Exponential Smoothing, ARIMA, ARIMAX, Prophet | Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost, ForecastTCN
+Naive, Seasonal Naive, Average, Seasonal Average, Exponential Smoothing, ARIMA, ARIMAX, Prophet | Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost, TCNForecaster
More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) and [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb).

## Next steps
+* Learn about [deep learning models](./concept-automl-forecasting-deep-learning.md) for forecasting in AutoML.
* Learn more about [model sweeping and selection](./concept-automl-forecasting-sweeping.md) for forecasting in AutoML.
* Learn about how AutoML creates [features from the calendar](./concept-automl-forecasting-calendar-features.md).
* Learn about how AutoML creates [lag features](./concept-automl-forecasting-lags.md).
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
Now, the job searches over all model classes _except_ Prophet. For a list of for
#### Enable deep learning
-AutoML ships with a custom deep neural network (DNN) model called `ForecastTCN`. This model is a [temporal convolutional network](https://arxiv.org/abs/1803.01271), or TCN, that applies common imaging task methods to time series modeling. Namely, one-dimensional "causal" convolutions form the backbone of the network and enable the model to learn complex patterns over long durations in the training history.
+AutoML ships with a custom deep neural network (DNN) model called `TCNForecaster`. This model is a [temporal convolutional network](https://arxiv.org/abs/1803.01271), or TCN, that applies common imaging task methods to time series modeling. Namely, one-dimensional "causal" convolutions form the backbone of the network and enable the model to learn complex patterns over long durations in the training history. For more details, see our [TCNForecaster article](./concept-automl-forecasting-deep-learning.md#introduction-to-tcnforecaster).
-The ForecastTCN often achieves higher accuracy than standard time series models when there are thousands or more observations in the training history. However, it also takes longer to train and sweep over ForecastTCN models due to their higher capacity.
+The TCNForecaster often achieves higher accuracy than standard time series models when there are thousands or more observations in the training history. However, it also takes longer to train and sweep over TCNForecaster models due to their higher capacity.
-You can enable the ForecastTCN in AutoML by setting the `enable_dnn_training` flag in the set_training() method as follows:
+You can enable the TCNForecaster in AutoML by setting the `enable_dnn_training` flag in the set_training() method as follows:
```python
-# Include ForecastTCN models in the model search
+# Include TCNForecaster models in the model search
forecasting_job.set_training(enable_dnn_training=True)
```
machine-learning How To Automl Forecasting Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-automl-forecasting-faq.md
There are four basic configurations supported by AutoML forecasting:
|Configuration|Scenario|Pros|Cons|
|--|--|--|--|
-|**Default AutoML**|Recommended if the dataset has a small number of time series that have roughly similar historic behavior.|- Simple to configure from code/SDK or Azure Machine Learning Studio <br><br> - AutoML has the chance to cross-learn across different time series since the regression models pool all series together in training. See the [model grouping](./concept-automl-forecasting-methods.md#model-grouping) section for more information.|- Regression models may be less accurate if the time series in the training data have divergent behavior <br> <br> - Time series models may take a long time to train if there are a large number of series in the training data. See the ["why is AutoML slow on my data"](#why-is-automl-slow-on-my-data) answer for more information.|
-|**AutoML with deep learning**|Recommended for datasets with more than 1000 observations and, potentially, numerous time series exhibiting complex patterns. When enabled, AutoML will sweep over temporal convolutional neural network (TCN) models during training. See the [enable deep learning](./how-to-auto-train-forecast.md#enable-deep-learning) section for more information.|- Simple to configure from code/SDK or Azure Machine Learning Studio <br> <br> - Cross-learning opportunities since the TCN pools data over all series <br> <br> - Potentially higher accuracy due to the large capacity of DNN models. See the [forecasting models in AutoML](./concept-automl-forecasting-methods.md#forecasting-models-in-automl) section for more information.|- Training can take much longer due to the complexity of DNN models <br> <br> - Series with small amounts of history are unlikely to benefit from these models.|
-|**Many Models**|Recommended if you need to train and manage a large number of forecasting models in a scalable way. See the [forecasting at scale](./how-to-auto-train-forecast.md#forecasting-at-scale) section for more information.|- Scalable <br> <br> - Potentially higher accuracy when time series have divergent behavior from one another.|- No cross-learning across time series <br> <br> - You can't configure or launch Many Models jobs from Azure Machine Learning Studio, only the code/SDK experience is currently available.|
+|**Default AutoML**|Recommended if the dataset has a small number of time series that have roughly similar historic behavior.|- Simple to configure from code/SDK or Azure Machine Learning studio <br><br> - AutoML has the chance to cross-learn across different time series since the regression models pool all series together in training. See the [model grouping](./concept-automl-forecasting-methods.md#model-grouping) section for more information.|- Regression models may be less accurate if the time series in the training data have divergent behavior <br> <br> - Time series models may take a long time to train if there are a large number of series in the training data. See the ["why is AutoML slow on my data"](#why-is-automl-slow-on-my-data) answer for more information.|
+|**AutoML with deep learning**|Recommended for datasets with more than 1000 observations and, potentially, numerous time series exhibiting complex patterns. When enabled, AutoML will sweep over [temporal convolutional neural network (TCN) models](./concept-automl-forecasting-deep-learning.md#introduction-to-tcnforecaster) during training. See the [enable deep learning](./how-to-auto-train-forecast.md#enable-deep-learning) section for more information.|- Simple to configure from code/SDK or Azure Machine Learning studio <br> <br> - Cross-learning opportunities since the TCN pools data over all series <br> <br> - Potentially higher accuracy due to the large capacity of DNN models. See the [forecasting models in AutoML](./concept-automl-forecasting-methods.md#forecasting-models-in-automl) section for more information.|- Training can take much longer due to the complexity of DNN models <br> <br> - Series with small amounts of history are unlikely to benefit from these models.|
+|**Many Models**|Recommended if you need to train and manage a large number of forecasting models in a scalable way. See the [forecasting at scale](./how-to-auto-train-forecast.md#forecasting-at-scale) section for more information.|- Scalable <br> <br> - Potentially higher accuracy when time series have divergent behavior from one another.|- No cross-learning across time series <br> <br> - You can't configure or launch Many Models jobs from Azure Machine Learning studio, only the code/SDK experience is currently available.|
|**Hierarchical Time Series**|HTS is recommended if the series in your data have nested, hierarchical structure and you need to train or make forecasts at aggregated levels of the hierarchy. See the [hierarchical time series forecasting](how-to-auto-train-forecast.md#hierarchical-time-series-forecasting) section for more information.|- Training at aggregated levels can reduce noise in the leaf node time series and potentially lead to higher accuracy models. <br> <br> - Forecasts can be retrieved for any level of the hierarchy by aggregating or dis-aggregating forecasts from the training level.|- You need to provide the aggregation level for training. AutoML doesn't currently have an algorithm to find an optimal level.|

> [!NOTE]
machine-learning How To R Modify Script For Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-modify-script-for-production.md
library(ggplot2)
myplot <- ggplot(...)
ggsave(myplot,
- filename = "./outputs/myplot.png")
+ filename = file.path(args$output,"forecast-plot.png"))
# save an rds serialized object
-saveRDS(myobject, file = "./outputs/myobject.rds")
+saveRDS(myobject, file = file.path(args$output,"myobject.rds"))
```
managed-grafana Find Help Open Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/find-help-open-support-ticket.md
In the page below, find out how you can get technical information about Azure Ma
Before creating a support ticket, check out the following resources for answers and information.
-* [Technical documentation for Azure Managed Grafana](/index.yml): find content such as how-to guides, tutorials and the [troubleshooting guide](troubleshoot-managed-grafana.md) for Azure Managed Grafana.
+* [Technical documentation for Azure Managed Grafana](/azure/managed-grafana): find content such as how-to guides, tutorials, and the [troubleshooting guide](troubleshoot-managed-grafana.md) for Azure Managed Grafana.
* [Microsoft Q&A](/answers/tags/249/azure-managed-grafana): browse existing questions and answers, and ask your questions around Azure Managed Grafana. * [Microsoft Technical Community](https://techcommunity.microsoft.com/) is the place for IT professionals and customers to collaborate, share, and learn. The website contains [Grafana-related content](https://techcommunity.microsoft.com/t5/forums/searchpage/tab/message?q=grafana).
marketplace Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics.md
description: Access analytic reports to monitor sales, evaluate performance, and
-- Previously updated : 06/21/2022++ Last updated : 02/20/2023
-# Access analytic reports for the commercial marketplace in Partner Center
+# Access insights for the commercial marketplace in Partner Center
-Learn how to access analytic reports in Microsoft Partner Center to monitor sales, evaluate performance, and optimize your offers in the marketplace. As a partner, you can monitor your offer listings using the data visualization and insight graphs supported by Partner Center and find ways to maximize your sales. The improved analytics tools enable you to act on performance results and maintain better relationships with your customers and resellers.
+Partner Center provides dashboards that help you analyze data related to your offers, customers, transactions, and other activities on the marketplace. If you want to do further analysis, you can also download the reports through either the UI or the API.
+Users with the **Developer** or **Manager** role have access to the marketplace insights dashboards and reports.
+
+## Access the commercial marketplace insights dashboard
+
+1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
+1. On the Home page, select the **Insights** tile.
+
+ [ ![Illustrates the Insights tile on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png) ](./media/workspaces/partner-center-insights-tile.png#lightbox)
+
+1. In the left menu, select **Summary**.
+
+ :::image type="content" source="./media/summary-dashboard/summary-left-nav.png" alt-text="Screenshot of the link for the Summary dashboard in the left nav.":::
+
+## Elements of the commercial marketplace insights dashboard
+
+The following sections describe how to use the insights dashboard and how to read the data.
+
+> [!NOTE]
+> The Customers dashboard is shown as an example. These elements remain the same across all the dashboards.
+
+### Download
+
+To download the data for this dashboard, select **Download data** from the **Download** list. You can also download a snapshot of the dashboard by selecting **Download as PDF**.
++
+Alternatively, you can go to the [Downloads dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/downloads) to download the report.
+
+### Share
+
+To share the dashboard widgets data via email, in the top menu, select **Share**.
++
+In the dialog box that appears, provide the recipient email address and message. To share the report URL, select the **Copy link** or **Share to Teams** button. To take a snapshot of the charts data, select the **Copy as image** button.
++
+### What's new
+
+To learn about changes and enhancements that were made to the dashboard, select **What's new**.
++
+### About data refresh
+
+To view the data source and the data refresh details, such as the frequency of the data refresh, select the ellipsis (three dots) and then select **Data refresh details**.
++
+### Got feedback?
+
+To provide instant feedback about the report/dashboard, select the ellipsis (three dots), and then select the **Got feedback?** link.
++
+Provide your feedback in the dialog box that appears.
++
+### Month range
+
+You can find a month range selection at the top-right corner of each page. Customize the output of the **Summary** page graphs by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range is six months.
+ ## Partner Center analytics tools
To access the Partner Center analytics tools, go to the **[Summary](https://go.m
## Next steps

- For graphs, trends, and values of aggregate data that summarize marketplace activity for your offer, see [Summary dashboard in commercial marketplace analytics](summary-dashboard.md).
+- For information about revenue across customers, billing information, and offer plans, see [Revenue dashboard in commercial marketplace analytics](revenue-dashboard.md).
- For information about your orders in a graphical and downloadable format, see [Orders dashboard in commercial marketplace analytics](orders-dashboard.md).
-- For Virtual Machine (VM) offers usage and metered billing metrics, see [Usage dashboard in commercial marketplace analytics](usage-dashboard.md).
+- For usage and metered billing metrics of Virtual Machine (VM) and container offers, see [Usage dashboard in commercial marketplace analytics](usage-dashboard.md).
- For detailed information about your customers, including growth trends, see [Customer dashboard in commercial marketplace analytics](customer-dashboard.md).
+- For information about your offer's performance on the marketplace, such as page visits and CTA clicks, see [Marketplace insights dashboard](insights-dashboard.md).
- For information about your licenses, see [License dashboard in commercial marketplace analytics](license-dashboard.md).
-- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](downloads-dashboard.md).
+- For information about the deployment quality of all your offers, see [Quality of Service dashboard in commercial marketplace analytics](quality-of-service-dashboard.md).
- To see a consolidated view of customer feedback for offers on Azure Marketplace and AppSource, see [Ratings and reviews dashboard in commercial marketplace analytics](ratings-reviews.md).
-- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Frequently asked questions and terminology for commercial marketplace analytics](analytics-faq.yml).
+- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](downloads-dashboard.md).
+- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Frequently asked questions and terminology for commercial marketplace analytics](analytics-faq.yml).
marketplace Customer Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/customer-dashboard.md
Title: Customers dashboard in Microsoft commercial marketplace analytics on Partner Center
+ Title: Customers dashboard in Microsoft commercial marketplace analytics on Partner Center
description: Learn how to access information about your customers, including growth trends, using the Customers dashboard in commercial marketplace analytics.
Previously updated : 12/23/2022 Last updated : 02/20/2023 # Customers dashboard in commercial marketplace analytics
The [Customers dashboard](https://go.microsoft.com/fwlink/?linkid=2166011) displ
> [!NOTE] > The maximum latency between customer acquisition and reporting in Partner Center is 48 hours.
-## Access the Customers dashboard
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. On the Home page, select the **Insights** tile.
-
- ![Screenshot showing the Insights tile on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png)
-
-1. In the left-nav menu, select **[Customers](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/customer)**.
-
- :::image type="content" source="media/customer-dashboard/menu-customer.png" alt-text="Screenshot showing the Customer option in the left-nav menu.":::
-
-## Elements of the Customers dashboard
-
-The following sections describe how to use the Customers dashboard and how to read the data.
-
-### Download
--
-To download a snapshot of the dashboard, select **Download as PDF**. Alternatively, go to the [Downloads](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/downloads) dashboard and download the report.
-
-### Share
--
-To email dashboard widgets data, select **Share** and provide the email information. Share report URLs using **Copy link** and **Share to Teams**, or **Copy as image** to send a snapshot of chart data.
--
-### What's new
--
-Use this to check on changes and enhancements.
-
-### About data refresh
--
-View the data source and the data refresh details, such as frequency of the data refresh.
-
-### Got feedback?
--
-Submit feedback about the report/dashboard along with an optional screenshot.
--
-### Month range
-
-A month range selection is at the top-right corner of each page.
--
- Customize the output of graphs by selecting a month range based on the last **six** or **12** months, or by selecting a **custom** month range with a maximum duration of 12 months. The default month range is six months.
### Customer page dashboard filters

:::image type="content" source="media/customer-dashboard/button-filters.png" alt-text="Screenshot showing the Filters button on the Insights screen of the Customers dashboard.":::
The page has dashboard-level filters for the following:
- Sales Channel - Marketplace Subscription ID-- Customer Id
+- Customer ID
- Customer Name - Customer Company Name-- Country
+- Country/region
Each filter is expandable with multiple options that you can select. Filter options are dynamic and based on the selected date range.
Select the ellipsis (...) to copy the widget image, download aggregated widget d
### Customer details table
+> [!IMPORTANT]
+> To download the data as a CSV file, use the **Download data** option at the top of the page.
+ The **Customer details** table displays a numbered list of the top 1,000 customers sorted by the date they first acquired one of your offers. You can expand a section by selecting the expansion icon in the details ribbon. ![Screenshot showing the Customer Details table on the Insights screen of the Customers dashboard.](./media/customer-dashboard/customer-details-table.png)

- Customer personal information will only be available if the customer has provided consent. You can only view this information if you've signed in with an owner role level of permissions.
- Each column in the grid is sortable.
-- The data can be extracted to a .CSV or .TSV file if the count of the records is less than 1,000.
-- If records number is more than 1,000, exported data will be asynchronously placed in a downloads page for the next 30 days.
- Apply filters to the table to display only the data you're interested in. Filter data by Company name, Customer ID, Marketplace Subscription ID, Azure License Type, Date Acquired, Date Lost, Customer Email, Customer Country/Region/State/City/Zip, Customer Language, and so on.
- When an offer is purchased by a protected customer, information in **Customer Detailed Data** will be masked (************).
+- Customer dimension details such as Company Name, Customer Name, and Customer Email are at an organization ID level, not at Azure Marketplace or the Microsoft commercial marketplace transaction level.
-Select the ellipsis (...) to copy the widget image, download aggregated widget data as .csv file, or download the image as a PDF.
+Select the ellipsis (...) to copy the widget image, or download the image as a PDF.
_**Table 1: Dictionary of data terms**_
_**Table 1: Dictionary of data terms**_
| Marketplace Subscription ID | Marketplace Subscription ID | The unique identifier associated with the Azure subscription the customer used to purchase your commercial marketplace offer. For infrastructure offers, this is the customer's Azure subscription GUID. For SaaS offers, this is shown as zeros since SaaS purchases don't require an Azure subscription. | MarketplaceSubscriptionId | | DateAcquired | Date Acquired | The first date the customer purchased any offer you published. | DateAcquired | | DateLost | Date Lost | The last date the customer canceled the last of all previously purchased offers. | DateLost |
-| Provider Name | Provider Name | The name of the provider involved in the relationship between Microsoft and the customer. If the customer is an Enterprise through Reseller, this will be the reseller. If a Cloud Solution Provider (CSP) is involved, this will be the CSP. | ProviderName |
-| Provider Email | Provider Email | The email address of the provider involved in the relationship between Microsoft and the customer. If the customer is an Enterprise through Reseller, this will be the reseller. If a Cloud Solution Provider (CSP) is involved, this will be the CSP. | ProviderEmail |
+| Provider Name | Provider Name | The name of the provider involved in the relationship between Microsoft and the customer. If the customer is an Enterprise through Reseller, this is the reseller. If a Cloud Solution Provider (CSP) is involved, this is the CSP. | ProviderName |
+| Provider Email | Provider Email | The email address of the provider involved in the relationship between Microsoft and the customer. If the customer is an Enterprise through Reseller, this is the reseller. If a Cloud Solution Provider (CSP) is involved, this is the CSP. | ProviderEmail |
| FirstName | Customer First Name | The first name provided by the customer. Name could be different than the name provided in a customer's Azure subscription. | FirstName | | LastName | Customer Last Name | The last name provided by the customer. Name could be different than the name provided in a customer's Azure subscription. | LastName | | Email | Customer Email | The e-mail address provided by the end customer. Email could be different than the e-mail address in a customer's Azure subscription. | Email |
_**Table 1: Dictionary of data terms**_
| CustomerCommunicationCulture | Customer Communication Language | The language preferred by the customer for communication. | CustomerCommunicationCulture | | CustomerCountryRegion | Customer Country/Region | The country/region name provided by the customer. Country/region could be different than the country/region in a customer's Azure subscription. | CustomerCountryRegion | | AzureLicenseType | Azure License Type | The type of licensing agreement used by customers to purchase Azure. Also known as the _channel_. The possible values are:<br>- Cloud Solution Provider<br>- Enterprise<br>- Enterprise through Reseller<br>- Pay as You Go | AzureLicenseType |
-| PromotionalCustomers | Is Promotional Contact Opt In | The value will let you know if the customer proactively opted in for promotional contact from publishers. At this time, we aren't presenting the option to customers, so we've indicated "No" across the board. After this feature is deployed, we'll start updating accordingly. | PromotionalCustomers |
+| PromotionalCustomers | Is Promotional Contact Opt In | The value lets you know if the customer proactively opted in for promotional contact from publishers. At this time, we aren't presenting the option to customers, so we've indicated "No" across the board. After this feature is deployed, we'll start updating accordingly. | PromotionalCustomers |
| CustomerState | Customer State | The state of residence provided by the customer. State could be different than the state provided in a customer's Azure subscription. | CustomerState |
| CommerceRootCustomer | Commerce Root Customer | One Billing Account ID can be associated with multiple Customer IDs.<br>One combination of a Billing Account ID and a Customer ID can be associated with multiple commercial marketplace subscriptions.<br>The Commerce Root Customer signifies the name of the subscription's customer. | CommerceRootCustomer |
| Customer ID | Customer ID | The unique identifier assigned to a customer. A customer may have zero or more Azure Marketplace subscriptions. | CustomerId |
_**Table 1: Dictionary of data terms**_
| SKU | SKU | The plan associated with the offer | SKU | | N/A | lastModifiedAt | The latest timestamp for customer purchases. Use this field, via programmatic API access, to pull the latest snapshot of all customer purchase transactions since a specific date | lastModifiedAt | | N/A | AddressLine1 | The Address Line 1 section of customerΓÇÖs street address | AddressLine1 |
-| N/A | Billing Id | The Billing Id of the enterprise customer | Billing Id |
-| N/A | Private Offer Id | The Id to identify a private marketplace offer | Private Offer Id |
+| N/A | Billing ID | The Billing ID of the enterprise customer | Billing ID |
+| N/A | Private Offer ID | The ID to identify a private marketplace offer | Private Offer ID |
| N/A | Private Offer Name | The name provided during private offer creation | Private Offer Name |
| N/A | Purchaser Email | Email of the entity purchasing or provisioning an offer. This could be the same as or different from the email of the "bill-to" entity | Purchaser Email |
| N/A | ReferenceId | A unique identifier to indicate provisioned instances or purchased offers by the customer. This key can be used to link with the Orders and Usage reports | ReferenceId |
-| Offer Id | Offer Id | The Id to identify a marketplace offer | OfferId |
+| Offer ID | Offer ID | The ID to identify a marketplace offer | OfferId |
| Is Private Plan | Is Private Plan | Indicates whether a marketplace offer is a private plan. <li>A value of 0 indicates false</li> <li>A value of 1 indicates true</li> | IsPrivatePlan |

## Next steps
_**Table 1: Dictionary of data terms**_
- For information about your orders in a graphical and downloadable format, see [Orders dashboard in commercial marketplace analytics](./orders-dashboard.md). - For virtual machine (VM) offers usage and metered billing metrics, see [Usage Dashboard in commercial marketplace analytics](./usage-dashboard.md). - For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](downloads-dashboard.md).-- To see a consolidated view of customer feedback for offers on Azure Marketplace and Microsoft AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](ratings-reviews.md).
+- To see a consolidated view of customer feedback for offers on Azure Marketplace and the Microsoft commercial marketplace, see [Ratings & Reviews analytics dashboard in Partner Center](ratings-reviews.md).
- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.yml).
marketplace Downloads Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/downloads-dashboard.md
description: Learn how to access download requests for your marketplace offers.
-- Previously updated : 09/27/2021++ Last updated : 02/20/2023 # Downloads dashboard in commercial marketplace analytics
The [Downloads dashboard](https://go.microsoft.com/fwlink/?linkid=2165766) displ
You will receive a pop-up notification containing a link to the **Downloads** dashboard whenever you request a download with over 1000 rows of data. These data downloads will be available for a 30-day period and then removed.
-## Access the Downloads dashboard
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. On the Home page, select the **Insights** tile.
-
- [ ![Illustrates the Insights tile on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png) ](./media/workspaces/partner-center-insights-tile.png#lightbox)
-
-1. In the left menu, select **Downloads**.
## Lifetime export of commercial marketplace Analytics reports

On the Downloads page, end users can do the following:
A user can schedule asynchronous downloads of reports from the Downloads dashboa
- For an overview of analytics reports available in the Partner Center commercial marketplace, see [Analytics for the commercial marketplace in Partner Center](analytics.md).
- For graphs, trends, and values of aggregate data that summarize marketplace activity for your offer, see [Summary Dashboard in commercial marketplace analytics](summary-dashboard.md).
-- For information about your orders in a graphical and downloadable format, see [Orders Dashboard in commercial marketplace analytics](orders-dashboard.md).
-- For Virtual Machine (VM) offers usage and metered billing metrics, see [Usage Dashboard in commercial marketplace analytics](usage-dashboard.md).
-- For detailed information about your customers, including growth trends, see [Customer Dashboard in commercial marketplace analytics](customer-dashboard.md).
-- To see a consolidated view of customer feedback for offers on Azure Marketplace and AppSource, see [Ratings and reviews dashboard in commercial marketplace analytics](ratings-reviews.md).
-- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Frequently asked questions and terminology for commercial marketplace analytics](analytics-faq.yml).
+- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Frequently asked questions and terminology for commercial marketplace analytics](analytics-faq.yml).
marketplace Insights Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/insights-dashboard.md
description: Access a summary of marketplace web analytics in Partner Center, wh
--++ Previously updated : 08/09/2022 Last updated : 02/20/2023 # Marketplace Insights dashboard in commercial marketplace analytics
The Marketplace Insights dashboard provides clickstream data, which shouldn't be
> [!NOTE] > The maximum latency between users visiting offers on Azure Marketplace or AppSource and reporting in Partner Center is 48 hours.
-## Access the Marketplace insights dashboard
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. On the Home page, select the **Insights** tile.
-
- [ ![Illustrates the Insights tile on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png) ](./media/workspaces/partner-center-insights-tile.png#lightbox)
-
-1. In the left menu, select **[Marketplace insights](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/marketplaceinsights)**.
-
- [ ![Screenshot of the Marketplace insights link in the left-nav.](./media/insights-dashboard/marketplace-insights.png) ](./media/insights-dashboard/marketplace-insights.png#lightbox)
-
-## Elements of the Marketplace Insights dashboard
-
-The following sections describe how to use the Marketplace Insights dashboard and how to read the data.
-
-### Download
-
-To download data for this dashboard, select **Download as PDF** from the **Download** list.
-
-[ ![Screenshot of the Download as PDF link in the Download list.](./media/insights-dashboard/download-as-pdf.png) ](./media/insights-dashboard/download-as-pdf.png#lightbox)
-
-Alternatively, you can go to the [Downloads dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/downloads) to download the report.
-
-### Share
-
-To share the dashboard widgets data via email, in the top menu, select **Share**.
-
-In the dialog box that appears, provide the recipient email address and message. To share the report URL, select the **Copy link** or **Share to Teams** button. To take a snapshot of charts data, select the **Copy as image** button.
-
-[ ![Screenshot of the Share button in the top menu.](./media/insights-dashboard/share.png) ](./media/insights-dashboard/share.png#lightbox)
-
-### What's new
-
-To learn about changes and enhancements that were made to the dashboard, select **What's new**. The _What's new_ side panel appears.
-
-[ ![Screenshot of the What's new button in the top menu.](./media/insights-dashboard/whats-new.png) ](./media/insights-dashboard/whats-new.png#lightbox)
-
-### About data refresh
-
-To view the data source and the data refresh details, such as the frequency of the data refresh, select the ellipsis (three dots) and then select **Data refresh details**.
-
-[ ![Screenshot of the Data refresh details option in the ellipsis menu.](./media/insights-dashboard/data-refresh-details.png) ](./media/insights-dashboard/data-refresh-details.png#lightbox)
-
-### Got feedback?
-
-To provide instant feedback about the report/dashboard, select the ellipsis (three dots), and then select the **Got feedback?** link.
-
-[ ![Screenshot of the Got feedback link in the ellipsis menu.](./media/insights-dashboard/got-feedback.png) ](./media/insights-dashboard/got-feedback.png#lightbox)
-
-### Month range
-
-You can find a month range selection at the top-right corner of each page. Customize the output of the *Marketplace Insights* page graphs by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range is six months.
-
-[ ![Screenshot of the month filters on the Marketplace Insights dashboard.](./media/insights-dashboard/dashboard-time-range.png) ](./media/insights-dashboard/dashboard-time-range.png#lightbox)
- ### Marketplace Insights dashboard filters Filter the data by offer names. Filter options are dynamic and based on the selected date range. To select the filters, in the top-right of the page, select **Filters**. In the panel that appears on the right, select the offer names you want, and then select **Apply**.
Note the following:
### Marketplace Insights details table
+> [!IMPORTANT]
+> To download the data as a CSV file, use the **Download data** option at the top of the page.
+ This table provides a list view of the page visits and the calls to action of your selected offers' pages, sorted by date.

-- The data can be extracted to a .TSV or .CSV file if the count of records is less than 1,000.
- If the count of records is over 1,000, exported data will be asynchronously placed in a downloads page for the next 30 days.
- Filter data by Offer names and Campaign names to display the data you are interested in.
marketplace License Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/license-dashboard.md
description: Learn how to access information about your licenses using the Licen
--++ Previously updated : 04/26/2022 Last updated : 02/20/2023 # License dashboard in commercial marketplace analytics
This article provides information about the License dashboard in the commercial
- Number of licenses purchased and deployed by the customer - Distribution of licenses across countries and regions
-## Check license usage
-
-To check license usage of ISV apps in Partner Center, do the following:
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. On the Home page, select the **Insights** tile.
-
- [ ![Illustrates the Insights tile on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png) ](./media/workspaces/partner-center-insights-tile.png#lightbox)
-
-1. In the left menu, select **[License](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/license)**.
-
- [ ![Screenshot of the License dashboard in Partner Center.](./media/license-dashboard/license-dashboard-workspaces.png) ](./media/license-dashboard/license-dashboard-workspaces.png#lightbox)
-
-## Elements of the License dashboard
-
-The following sections describe how to use the License dashboard and how to read the data.
-
-### Download
--
-To download a snapshot of the dashboard, select **Download as PDF**. Alternatively, go to the [Downloads](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/downloads) dashboard and download the report.
-
-### Share
--
-To email dashboard widgets data, select **Share** and provide the email information. Share report URLs using **Copy link** and **Share to Teams**, or **Copy as image** to send a snapshot of chart data.
--
-### What's new
--
-Use this to check on changes and enhancements.
-
-### Data refresh details
--
-View the data source and the data refresh details, such as frequency of the data refresh.
-
-### Got feedback?
--
-Submit feedback about the report/dashboard along with an optional screenshot.
--
-## Month range filter
--
-A month range selection is at the top-right corner of each page. Customize the output of the graphs by selecting a month range based on the past **six** or **12** months, or by selecting a **custom** month range with a maximum duration of 12 months. The default month range is six months.
- ### License page dashboard filters :::image type="content" source="./media/license-dashboard/button-filters.png" alt-text="Screenshot of filter selections on the Insights screen of the License dashboard.":::
Select the ellipsis (...) to copy the widget image, download aggregated widget d
## Data terms in License report downloads
-You can use the download icon in the upper-right corner of any widget to download the data.
+> [!IMPORTANT]
+> To download the data as a CSV file, use the **Download data** option at the top of the page.
| Attribute name | Definition | | | - |
marketplace Orders Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/orders-dashboard.md
Title: Partner Center Orders dashboard in Commercial Marketplace analytics | Microsoft AppSource and Azure Marketplace
+ Title: Partner Center Orders dashboard in Commercial Marketplace analytics | Microsoft commercial marketplace and Azure Marketplace
description: Learn how to access analytic reports about your commercial marketplace offer orders in a graphical and downloadable format.
Previously updated : 12/23/2022 Last updated : 02/20/2023 # Orders dashboard in commercial marketplace analytics
The [Orders dashboard](https://partner.microsoft.com/dashboard/insights/commerci
> [!NOTE] > The maximum latency between customer acquisition and reporting in Partner Center is 48 hours.
-## Access the Orders dashboard
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. On the Home page, select the **Insights** tile.
-
- [ ![Illustrates the Insights tile on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png) ](./media/workspaces/partner-center-insights-tile.png#lightbox)
-
-1. In the left menu, select **Orders**.
-
-## Elements of the Orders dashboard
-
-The following sections describe how to use the Orders dashboard and how to read the data.
-
-### Download
-
-To download of the data for this dashboard, select **Download as PDF** from the **Download** list.
--
-Alternatively, you can go to the [Downloads dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/downloads) to download the report.
-
-### Share
-
-To share the dashboard widgets data via email, in the top menu, select **Share**.
--
-In the dialog box that appears, provide the recipient email address and message. To share the report URL, select the **Copy link** or **Share to teams** button. To take a snapshot of the charts data, select the **Copy as image** button.
-
-### What's new
-
-To learn about changes and enhancements that were made to the dashboard, select **What's new**. The _What's new_ side panel appears.
--
-### Data refresh details
-
-To view the data source and the data refresh details, such as the frequency of the data refresh, select the ellipsis (three dots) and then select **Data refresh details**.
--
-### Got feedback?
-
-To provide instant feedback about the report/dashboard, select the ellipsis (three dots), and then select the **Got feedback?** link.
--
-Provide your feedback in the dialog box that appears.
-
-> [!NOTE]
-> A screenshot is automatically sent to us with your feedback.
-
-### Month range
-
-A month range selection is at the top-right corner of each page. Customize the output of the **Orders** page graphs by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range is six months.
-
-[ ![Illustrates the month filters on the Orders dashboard.](./media/orders-dashboard/time-range.png) ](./media/orders-dashboard/time-range.png#lightbox)
- ### Orders dashboard filters The page has different dashboard-level filters you can use to filter the data based on the following:
The page has different dashboard-level filters you can use to filter the data ba
- Is Free Trial - Subscription Status - Marketplace License Type-- Marketplace Subscription Id-- Customer Id
+- Marketplace Subscription ID
+- Customer ID
- Customer Company Name - Country - Offer Name
Each filter is expandable with multiple options that you can select. Filter opti
### Public and Private offer
-You can choose to view subscription and order details of public offers, private offers, or both by selecting the **Public offer** sub-tab, **Private offer** sub-tab, and the **All** sub-tab respectively.
+You can choose to view subscription and order details of public offers, private offers, or both by selecting the **Public offer**, **Private offer**, or **All** subtab, respectively.
[ ![Illustrates other filters on the Orders dashboard.](./media/orders-dashboard/offer-tabs.png) ](./media/orders-dashboard/offer-tabs.png#lightbox)
You can create multiple plans to configure different price points based on the n
Select the ellipsis (three dots) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
-For more details on seat, site, and metered-based billing, see [How to plan a SaaS offer for the commercial marketplace](plan-saas-offer.md) and [Changing prices in active commercial marketplace offers](price-changes.md).
+For more information on seat, site, and metered-based billing, see [How to plan a SaaS offer for the commercial marketplace](plan-saas-offer.md) and [Changing prices in active commercial marketplace offers](price-changes.md).
### Orders by offers
-The Orders by offers widget shows information about offers and SKUs (also known as plans). This widget shows the measures and trends of all purchased orders. Orders are categorized under different statuses: New, Convert, Renewed, Canceled.
+The Orders by offers widget shows information about offers and SKUs (also known as plans). This widget shows the measures and trends of all purchased orders. Orders are categorized under different statuses: New, Convert, Renewed, Canceled.
For different statuses, the _Orders_ tab provides information about the count of purchased orders, the _Quantity_ tab provides information about the number of seats added and/or removed by customers for existing active subscriptions, and the _Revenue_ tab provides information about the billed revenue of orders for the selected month range. Each order is categorized with one of the following statuses:
Note the following:
### Orders details table
+> [!IMPORTANT]
+> To download the data as a CSV file, use the **Download data** option at the top of the page.
+ This table displays a numbered list of the 500 top orders sorted by date of acquisition.

- Each column in the grid is sortable.
-- The data can be extracted to a .CSV or .TSV file if the count of the records is less than 500.
-- If records number over 500, exported data will be asynchronously placed in a downloads page for the next 30 days.
+- If records number over 500, exported data is asynchronously placed in a downloads page for the next 30 days.
- Apply filters to the **Order details** table to display only the data you're interested in. Filter by Country/Region, Azure license type, commercial marketplace license type, Offer type, Order status, Free trials, commercial marketplace subscription ID, Customer ID, and Company name.
- When an order is purchased by a protected customer, information in **Orders Detailed Data** is masked (************).
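If you work with the exported order details offline, a short script can reproduce the dashboard's status breakdown. The sketch below is illustrative only: the file name is hypothetical, and it assumes the export carries the attribute names from the data dictionary that follows (`OrderStatus`, `IsNewCustomer`, `CustomerId`).

```python
import pandas as pd

# Hypothetical file name for an export produced by the Download data option;
# column names follow the attribute names in the dictionary of data terms.
orders = pd.read_csv("orders_export.csv")

# Rows for protected customers are masked with asterisks; exclude them from
# any aggregation that depends on customer identity.
unmasked = orders[orders["CustomerId"] != "************"]

# Count of orders per status (Active, Canceled, Expired, and so on).
print(unmasked["OrderStatus"].value_counts())

# Share of orders placed by first-time customers in the reported month.
new_share = (unmasked["IsNewCustomer"] == "Yes").mean()
print(f"New-customer share: {new_share:.1%}")
```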
| Marketplace License Type | Marketplace License Type | The billing method of the commercial marketplace offer. The possible values are:<ul><li>Billed through Azure</li><li>Bring Your Own License</li><li>Free</li><li>Microsoft as Reseller</li></ul> | MarketplaceLicenseType |
| SKU | SKU | The plan associated with the offer | SKU |
| Customer Country | Customer Country/Region | The country/region name provided by the customer. Country/region could be different than the country/region in a customer's Azure subscription. | CustomerCountry |
-| Is Preview SKU | Is Preview SKU | The value will let you know if you tagged the SKU as "preview". Value will be "Yes" if the SKU has been tagged accordingly, and only Azure subscriptions authorized by you can deploy and use this image. Value will be "No" if the SKU hasn't been identified as "preview". | IsPreviewSKU |
+| Is Preview SKU | Is Preview SKU | The value lets you know if you tagged the SKU as "preview". Value is "Yes" if the SKU has been tagged accordingly, and only Azure subscriptions authorized by you can deploy and use this image. Value is "No" if the SKU hasn't been identified as "preview". | IsPreviewSKU |
| Asset ID | Asset ID | The unique identifier of the customer order for your commercial marketplace service. Virtual Machine usage-based offers aren't associated with an order. | AssetId |
| Quantity | Quantity | Number of assets associated with the order ID for active orders | OrderQuantity |
| Cloud Instance Name | Cloud Instance Name | The Microsoft Cloud in which a VM deployment occurred. | CloudInstanceName |
-| Is New Customer | Is New Customer | The value identifies whether a new customer acquired one or more of your offers for the first time. Value will be "Yes" if within the same calendar month for "Date Acquired". Value will be "No" if the customer has purchased any of your offers prior to the calendar month reported. | IsNewCustomer |
+| Is New Customer | Is New Customer | The value identifies whether a new customer acquired one or more of your offers for the first time. Value is "Yes" if within the same calendar month for "Date Acquired". Value is "No" if the customer has purchased any of your offers prior to the calendar month reported. | IsNewCustomer |
| Order Status | Order Status | The status of a commercial marketplace order at the time the data was last refreshed. Possible values are: <ul><li>**Active**: Subscription asset is active and used by customer</li><li>**Canceled**: Subscription of an asset is canceled by customer</li><li>**Expired**: Subscription for an offer expired in the system automatically post trial period</li><li>**Abandoned**: Indicates a system error during offer creation or subscription fulfillment wasn't completed</li><li>**Warning**: Subscription order is still active but customer has defaulted on payments</li></ul> | OrderStatus |
| Order Cancel Date | Order Cancel Date | The date the commercial marketplace order was canceled. | OrderCancelDate |
| Customer Company Name | Customer Company Name | The company name provided by the customer. Name could be different than the name in a customer's Azure subscription. | CustomerCompanyName |
| Is Trial | IsTrial | Represents whether an offer SKU is in trial period | IsTrial |
| Order Action | Order Action | Indicates the customer action for an offer subscription. Possible values are: <ul><li>**Purchase**: Order was purchased</li><li>**Renewed**: Order was renewed</li><li>**Canceled**: Order was canceled</li></ul> | OrderAction |
| Quantity changed | Quantity changed | The net change in seats added and seats removed for existing subscription orders. The same applies for the sites (flat rate) pricing model | QuantityChanged |
-| Trial End Date | Trial End Date | The date the trial period for this order will end or has ended. | TrialEndDate |
+| Trial End Date | Trial End Date | The date the trial period for this order ends or has ended. | TrialEndDate |
| Customer ID | Customer ID | The unique identifier assigned to a customer. A customer may have zero or more Azure Marketplace subscriptions. | CustomerID |
| Billing Account ID | Billing Account ID | The identifier of the account on which billing is generated. Map **Billing Account ID** to **customerID** to connect your Payout Transaction Report with the Customer, Order, and Usage Reports. | BillingAccountId |
| Reference ID | ReferenceId | A key to link orders having usage details in usage report. Map this field value with the value for Reference ID key in usage report. This is applicable for SaaS with custom meters and VM software reservation offer types | ReferenceId |
-| PlanId | PlanId | The display name of the plan entered when the offer was created in Partner Center. Note that PlanId was originally a numeric number. | PlanId |
-| Auto Renew | Auto Renew | Indicates whether a subscription is due for an automatic renewal. Possible values are:<br><ul><li>TRUE: Indicates that on the TermEnd the subscription will renew automatically.</li><li>FALSE: Indicates that on the TermEnd the subscription will expire.</li><li>NULL: The product doesn't support renewals. Indicates that on the TermEnd the subscription will expire. This is displayed "-" on the UI</li></ul> | AutoRenew |
+| PlanId | PlanId | The display name of the plan entered when the offer was created in Partner Center. PlanId was originally a numeric value. | PlanId |
+| Auto Renew | Auto Renew | Indicates whether a subscription is due for an automatic renewal. Possible values are:<br><ul><li>TRUE: Indicates that on the TermEnd the subscription renews automatically.</li><li>FALSE: Indicates that on the TermEnd the subscription expires.</li><li>NULL: The product doesn't support renewals. Indicates that on the TermEnd the subscription expires. This is displayed as "-" in the UI</li></ul> | AutoRenew |
| Not available | Event Timestamp | Indicates the timestamp of an order management event, such as an order purchase, cancelation, renewal, and so on | EventTimestamp |
| Not available | OrderVersion | A key to indicate updated versions of an order purchase. The highest value indicates the latest key | OrderVersion |
| Not available | List Price(USD) | The publicly listed price of the offer plan in U.S. dollars | ListPriceUSD |
| Not available | Private Offer Name | The name provided during private offer creation | PrivateOfferName |
| Not available | Billing ID | The Billing ID of the enterprise customer | BillingId |
-### Orders page filters
-
-These filters are applied at the Orders page level. You can select one or multiple filters to render the chart for the criteria you choose to view and the data you want to see in 'Detailed orders data' grid / export. Filters are applied on the data extracted for the month range that you've selected on the top-right corner of the orders page.
-
-> [!TIP]
-> You can use the download icon in the upper-right corner of any widget to download the data. You can provide feedback on each of the widgets by clicking on the "thumbs up" or "thumbs down" icon.
## Next steps

- For an overview of analytics reports available in the commercial marketplace, see [Access analytic reports for the commercial marketplace in Partner Center](analytics.md).
marketplace Quality Of Service Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/quality-of-service-dashboard.md
description: Shows different reports available for quality of service in Partner Center.
-- Previously updated : 04/29/2022
++ Last updated : 02/20/2023

# Quality of service (QoS) dashboard
Additionally, view [offer deployment details](#detailed-data) in tabular form.
> [!IMPORTANT]
> This dashboard is currently only available for **Azure application** offers available to all (not private offers).
-This feature is currently applicable to all partners performing deployment of Azure application offers using Azure Resource Manager (ARM) templates (but not for private offers). This report will not show data for other marketplace offers.
-
-## Access the Quality of service dashboard
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home)
-
-1. On the Home page, select the **Insights** tile.
-
- [ ![Illustrates the Insights tile in Partner Center.](./media/workspaces/partner-center-insights-tile.png) ](./media/workspaces/partner-center-insights-tile.png#lightbox)
-
-1. In the left menu, select **[Quality of Service](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/qos)**.
-
- [ ![Illustrates the Quality of service dashboard.](./media/quality-of-service/quality-of-service-dashboard.png) ](./media/quality-of-service/quality-of-service-dashboard.png#lightbox)
-
-## Elements of the Quality of service dashboard
-
-The following sections describe how to use the Quality-of-Service (QoS) dashboard and how to read the data.
-
-### Download
-
-To download the data for this dashboard, select **Download as PDF** from the **Download** list.
--
-Alternatively, you can go to the [Downloads dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/downloads) to download the report.
-
-### Share
-
-To share the dashboard widgets data via email, in the top menu, select **Share**.
--
-In the dialog box that appears, provide the recipient email address and message. To share the report URL, select the **Copy link** or **Share to Teams** button. To take a snapshot of the charts data, select the **Copy as image** button.
-
-### What's new
-
-To learn about changes and enhancements that were made to the dashboard, select **What's new**. The _What's new_ side panel appears.
--
-### Data refresh details
-
-To view the data source and the data refresh details, such as the frequency of the data refresh, select the ellipsis (three dots) and then select **Data refresh details**.
--
-### Got feedback?
-
-To provide instant feedback about the report/dashboard, select the ellipsis (three dots), and then select the **Got feedback?** link.
--
-Provide your feedback in the dialog box that appears.
-
-> [!NOTE]
-> A screenshot is automatically sent to us with your feedback.
+This feature is currently applicable to all partners performing deployment of Azure application offers using Azure Resource Manager (ARM) templates (but not for private offers). This report won't show data for other marketplace offers.
### Quality of service page dashboard filters

The page has different dashboard-level filters you can use to filter the Quality of service data based on the following:

- Application type
-- Azure subscription Id
-- Customer Id
-- Offer Id
+- Azure subscription ID
+- Customer ID
+- Offer ID
- Pricing
- Deployment location
In the panel that appears on the right, select the filters you want, and then select **Apply**.
:::image type="content" source="./media/quality-of-service/filters-panel.png" alt-text="Screenshot of the Filters panel.":::
-### Total deployments
+## Total deployments
This graph shows the total deployment of offers. Metrics and growth trends are represented by a line chart. View the value for each month by hovering over the line chart.
About this graph:
- Change in percentage of offer deployments during the selected date range.
- Month over month trend of total count for offer deployments.
-### Deployments by status
+## Deployments by status
This graph shows the metric and trend of successful and failed offer deployments by customers for the selected month range. Offer deployments can have two statuses: **Successful** or **Failed**.
About this graph:
- Change in percentage of successful and failed offer deployments for the selected date range.
- Month over month trend of successful and failed offer deployment counts.
-### Quality by offers
+## Quality by offers
This graph shows quality-of-service by offers and their corresponding SKUs, also called plans. It provides metrics and trends for **Total**, **Successful**, and **Failed** offer deployments monthly. The bar chart represents the number of deployments.
About this graph:
- When viewing a month-over-month trend for an offer, select a maximum of three SKUs of that offer.
- The line chart represents the same percentage changes as noted for the prior graph.
-### Deployment errors codes and resources
+## Deployment errors codes and resources
This graph shows metrics and trends of offer deployments by error code and resource. The tabular section can be pivoted on error codes and resources. The first subtab provides analytics by error code, description, and error count. The second provides analytics by deployed resource. The line chart provides the total error count by error code and resource.
About these graphs:
- When viewing a month-over-month trend by error codes or resources, select a maximum of three items in the table.
- Sort error codes and resources for deployment failures by error count in the table.
-### Deployment errors by offer plan
+## Deployment errors by offer plan
On this graph, the Y-axis represents deployment error count and the X-axis represents the percentile of top offer plans (by error count).
About this graph:
- The bar chart represents the deployment error counts for the selected month range.
- The values on the line chart represent the cumulative error percentages by offer plan.
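The line on this chart is a cumulative (Pareto-style) curve. As a rough illustration of the arithmetic, with made-up plan names and error counts, the cumulative percentage is simply a running sum of per-plan errors divided by the total:

```python
import pandas as pd

# Made-up per-plan error counts; in practice these would be aggregated from
# the detailed data export.
errors = pd.Series({"plan-a": 120, "plan-b": 45, "plan-c": 20, "plan-d": 5})

# Rank plans by error count, then compute the cumulative percentage that
# the line chart plots over the bar chart of raw counts.
ranked = errors.sort_values(ascending=False)
cumulative_pct = (ranked.cumsum() / ranked.sum() * 100).round(1)
print(pd.DataFrame({"errors": ranked, "cumulative %": cumulative_pct}))
```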
-### Quality by deployment duration
+## Quality by deployment duration
This graph shows the metric and trend for the average time duration for successful and failed deployments. View the metrics by selecting an offer in the drop-down menu. Select a SKU in the tabular view or enter it in the search bar. The following list shows different mean deployment durations (in minutes):

- **Success duration** – Mean time of deployment duration with offer deployment status marked as Success. This aggregated metric is calculated using the time duration between start and end timestamps of deployments marked with successful status.
- **Failure duration** – Mean time of deployment duration with offer deployment status marked as Failure. This aggregated metric is calculated using the time duration between start and end timestamps of deployments marked with Failure status.
+- **First Successful deployment duration** – Mean time of deployment duration with offer deployment status marked as Success. This aggregated metric is calculated using the time duration between the start timestamp of the first deployment and the end timestamp of the final deployment marked with Successful status. It's calculated for each deployment marked for a specific Offer SKU and Customer.
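To re-derive these means from the detailed data export, you can group the rows by status. This is a non-authoritative sketch: the file name and column headers (including the status column) are assumptions, and it converts the millisecond durations the export reports into the minutes the dashboard shows.

```python
import pandas as pd

# Assumed export of the QoS detailed data; file and column names are
# illustrative stand-ins for whatever the export actually uses.
qos = pd.read_csv("qos_deployments.csv")

# Deployment Duration is reported in milliseconds; the graphs use minutes.
qos["DurationMinutes"] = qos["Deployment Duration"] / 60_000

# Mean duration per deployment status, mirroring the Success duration and
# Failure duration metrics described above.
print(qos.groupby("Deployment Status")["DurationMinutes"].mean())
```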
Select the ellipsis (three dots) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
About this graph:
- The line graph presents the mean duration of deployments marked as successful, failed, and successful deployments with failed prior attempts.
- Mean time for first deployment factors in the time spent on failed attempts before the deployment is marked as successful.
-### Geographical spread
+## Geographical spread
This graph shows the geographical spread heat map for successful and failed deployment counts for the selected month range. It also shows failure percentage against each region. The Green to Red color scale represents low to high value of failure rates. Select a record in the table to zoom in on a deployment region.
About this graph:
- Red regions indicate higher failure rates and green regions indicate lower ones.
- Search and select a country/region in the grid to zoom to the location in the map. Revert to the original view with the **Home** icon.
-### Detailed data
+## Detailed data
+
+> [!IMPORTANT]
+> To download the data in CSV, use the **Download data** option at the top of the page.
This table shows all offer deployment details available. Download the report to view the raw data on offer deployments.
-Select the ellipsis (three dots) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
+Select the ellipsis (three dots) to copy the widget image and download the image as a .PDF.
:::image type="content" source="media/quality-of-service/deployment-details.png" alt-text="Shows a deployment details table.":::
About this table:
- Expand the control and export the table.
- The detail view is paginated. Select other pages at the bottom.
-### Dictionary of data terms
+## Dictionary of data terms
| Column name | Attribute Name | Definition |
| --- | --- | --- |
About this table:
| Subscription ID | Subscription ID | The Subscription ID of the customer |
| Customer Tenant ID | Customer Tenant ID | The Tenant ID of the customer |
| Customer Name | Customer Name | The name of the customer |
-| Template Type | Template Type | Type of Azure App deployed. It can be either Managed App or Solution Templates and it cannot be private. |
+| Template Type | Template Type | Type of Azure App deployed. It can be either Managed App or Solution Templates and it can't be private. |
| Deployment Start Time | Deployment Start Time | The start time of the deployment |
| Deployment End Time | Deployment End Time | The end time of the deployment |
-| Deployment Duration: | Deployment Duration: | The total time duration of offer deployment in milliseconds. It is shown in minutes in the graph. |
+| Deployment Duration: | Deployment Duration: | The total time duration of offer deployment in milliseconds. It's shown in minutes in the graph. |
| Deployment Region | Deployment Region | The location of the Azure App deployment |
| Resource Provider | Resource Provider | The resource provider for the particular deployed resource |
| Resource Uri | Resource Uri | The URI of the deployed resource |
About this table:
- For information about deployment errors, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](../azure-resource-manager/templates/common-deployment-errors.md).
- For information about resource providers, see [Resource providers for Azure services](../azure-resource-manager/management/azure-services-resource-providers.md).
- For graphs, trends, and values of aggregate data that summarize marketplace activity for your offer, see [Summary dashboard in commercial marketplace analytics](./summary-dashboard.md).
-- For information about your orders in a graphical and downloadable format, see [Orders dashboard in commercial marketplace analytics](./orders-dashboard.md).
-- For virtual machine (VM) offers usage and metered billing metrics, see [Usage dashboard in commercial marketplace analytics](./usage-dashboard.md).
-- For detailed information about your customers, including growth trends, see [Customers dashboard in commercial marketplace analytics](./customer-dashboard.md).
-- For information about your licenses, see [License dashboard in commercial marketplace analytics](./license-dashboard.md).
- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](./downloads-dashboard.md).
-- To see a consolidated view of customer feedback for offers on Microsoft AppSource and Azure Marketplace, see [Ratings and Reviews dashboard in commercial marketplace analytics](./ratings-reviews.md).
- For FAQs about commercial marketplace analytics and a comprehensive dictionary of data terms, see [Commercial marketplace analytics common questions](./analytics-faq.yml).
marketplace Ratings Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/ratings-reviews.md
description: Learn how to access a consolidated view of customer feedback for your offers.
--- Previously updated : 04/26/2022
++ Last updated : 02/20/2023

# Ratings and Reviews dashboard in commercial marketplace analytics
This article provides information on the Ratings and Reviews dashboard in Partner Center.
>[!NOTE]
> For detailed definitions of analytics terminology, see [Frequently asked questions and terminology for commercial marketplace analytics](analytics-faq.yml).
-## Access the Ratings & reviews dashboard
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. On the Home page, select the **Insights** tile.
-
- ![Screenshot showing the Workspaces home page with the Insights tile highlighted.](media/ratings-reviews/menu-home.png)
-
-1. In the left-nav menu, select **[Ratings & reviews](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/ratingsandreviews)**.
-
- :::image type="content" source="media/ratings-reviews/ratings-reviews-main.png" alt-text="Screenshot showing the main Ratings and Reviews page of the Insights section." lightbox="media/ratings-reviews/ratings-reviews-main.png":::
-
-The dashboard displays a graphical representation of the following customer activity:
-- Ratings
-- Review comments
-Use the tabs to view your offer's **Azure Marketplace** and **Microsoft AppSource** metrics separately. To view an offer's specific metrics, select it from the dropdown list.
-
-## Elements of the Ratings and Reviews dashboard
-
-The following sections describe how to use this dashboard.
-
-### Download
--
-To download a snapshot of the dashboard, select **Download as PDF**. Alternatively, go to the [Downloads](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/downloads) dashboard and download the report.
-
-### Share
--
-To email dashboard widgets data, select **Share** and provide the email information. Share report URLs using **Copy link** and **Share to Teams**, or **Copy as image** to send a snapshot of chart data.
--
-### What's new
--
-Use this to check on changes and enhancements.
-
-### About data refresh
--
-View the data source and the data refresh details, such as frequency of the data refresh.
-
-### Got feedback?
--
-Submit feedback about the report/dashboard along with an optional screenshot.
--
-### Month range
--
-A month range selection is at the top-right corner of each page. Customize the output of graphs by selecting a month range based on the last **six** or **12** months, or by selecting a **custom** month range with a maximum duration of 12 months. The default month range is six months.
+### Ratings and Reviews dashboard filters
:::image type="content" source="media/ratings-reviews/button-filters.png" alt-text="Screenshot showing the Filters button on the Insights screen of the Customers dashboard.":::
The page has dashboard-level filters for the following:
Each filter is expandable with multiple options that you can select. Filter options are dynamic and based on the selected date range.
-### Ratings and reviews summary
+## Ratings and reviews summary
The summary section displays the following metrics for a selected date range:
Select the ellipsis (...) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
:::image type="content" source="media/ratings-reviews/menu-elipsis.png" alt-text="Screenshot showing the Filters menu on the Insights screen of the Ratings and Reviews dashboard.":::
-### Review comments
+## Review comments
Reviews appear in chronological order as posted. The default view displays all reviews; filter reviews by star rating using the **rating filter** in the dropdown menu. Additionally, you can search by keywords that appear in the review.

:::image type="content" source="media/ratings-reviews/review-contact.png" alt-text="Screenshot showing a sample review for an app in the commercial marketplace." :::
-### Respond to a review
+## Respond to a review
Your response will be visible on either the AppSource or Azure Marketplace storefront. This applies to the following offer types: Azure Application, Azure Container, Azure virtual machine, Dynamics 365 Business Central, Dynamics 365 apps on Dataverse and Power Apps, Dynamics 365 Operations Apps, IoT Edge Module, Managed service, Power BI app, and Software as a Service.
To respond to a review:
The response will appear under the text of the original review in the product detail page in the storefront:
-##### Microsoft AppSource
+### Microsoft AppSource
:::image type="content" source="media/ratings-reviews/review-appsource.png" alt-text="Screenshot showing a sample review and publisher reply for an offer in Microsoft Appsource." lightbox="media/ratings-reviews/review-appsource.png":::
-##### Azure Marketplace
+### Azure Marketplace
:::image type="content" source="media/ratings-reviews/review-azure.png" alt-text="Screenshot showing a sample review and publisher reply for an offer in Azure Marketplace." lightbox="media/ratings-reviews/review-azure.png":::
marketplace Revenue Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/revenue-dashboard.md
description: The Revenue dashboard shows the summary of your billed sales of all
--- Previously updated : 04/27/2022
++ Last updated : 02/20/2023

# Revenue dashboard in commercial marketplace analytics
The [Revenue dashboard](https://partner.microsoft.com/dashboard/commercial-marke
- Geographical spread
- Details
-## Access the Revenue dashboard
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-
-1. On the Home page, select the **Insights** tile.
-
- [ ![Screenshot of the Insights tile on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png) ](./media/workspaces/partner-center-insights-tile.png#lightbox)
-
-1. In the left menu, under **Marketplace offers**, select **Revenue**.
-
- [ ![Screenshot of the Revenue dashboard.](./media/revenue-dashboard/revenue-dashboard.png) ](./media/revenue-dashboard/revenue-dashboard.png#lightbox)
-
-## Elements of the Revenue dashboard
-
-The following sections describe how to use the Revenue dashboard and how to read the data.
-
-### Download
-
-To download data for this dashboard, select **Download as PDF** from the **Download** list.
-
-[ ![Screenshot of the Download as PDF button.](./media/revenue-dashboard/download-as-pdf.png) ](./media/revenue-dashboard/download-as-pdf.png#lightbox)
-
-Alternatively, you can go to the [Downloads dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/downloads) to download the report.
-
-### Share
-
-To share the dashboard widgets data via email, in the top menu, select Share.
-
-[ ![Screenshot of the Share button.](./media/revenue-dashboard/share.png) ](./media/revenue-dashboard/share.png#lightbox)
-
-In the dialog box that appears, provide the recipient email address and message. To share the report URL, select the **Copy link** or **Share to Teams** button. To take a snapshot of charts data, select the **Copy as image** button.
-
-### What's new?
-
-To learn about changes and enhancements that were made to the dashboard, select **What's new**. The _What's new_ side panel appears.
-
-[ ![Screenshot of the What's new button.](./media/revenue-dashboard/whats-new.png) ](./media/revenue-dashboard/whats-new.png#lightbox)
-
-### About data refresh
-
-To view the data source and the data refresh details, such as the frequency of the data refresh, select the ellipsis (three dots) and then select **Data refresh details**.
-
-[ ![Screenshot of the Data refresh details option.](./media/revenue-dashboard/data-refresh-details.png) ](./media/revenue-dashboard/data-refresh-details.png#lightbox)
-
-### Got feedback
-
-To provide instant feedback about the report/dashboard, select the ellipsis (three dots), and then select the **Got feedback?** link.
-
-[ ![Screenshot of the Got feedback option.](./media/revenue-dashboard/got-feedback.png) ](./media/revenue-dashboard/got-feedback.png#lightbox)
-
-Provide your feedback in the dialog box that appears.
-
-> [!NOTE]
-> A screenshot is automatically sent to us with your feedback.
-
-### Month range
-
-You can find a month range selection at the top-right corner of each page. Customize the output of the Revenue page graphs by selecting a month range based on the past 3, 6, or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range is six months.
-
-[ ![Screenshot of the month range filter on the revenue dashboard.](./media/revenue-dashboard/time-range.png) ](./media/revenue-dashboard/time-range.png#lightbox)
- ### Revenue dashboard filters
-The page has different dashboard-level filters you can use to filter the Revenue data based on the following:
+The page has different dashboard-level filters you can use to filter the Revenue data:
- Offer type
- Offer listing
- Billing model
Each filter is expandable with multiple options that you can select. Filter options are dynamic and based on the selected date range.
### Estimated revenue
-In this section, you will find the _estimated revenue_ information that shows the overall billed sales of a partner for the selected date range and page filters.
+In this section, you find the _estimated revenue_ information that shows the overall billed sales of a partner for the selected date range and page filters.
The _Total revenue_ represents the billed sales of payouts or earnings mapped to different payout statuses: _sent_, _upcoming_, and _unprocessed_.
-The _Others revenue_ represents billed sales with earnings that either are rejected, reprocessed, not eligible, uncollected from the customers, or not reconcilable with transaction amounts in the earnings report.
+The _Others revenue_ represents billed sales of earnings whose status is rejected, reprocessed, not eligible, uncollected from the customers, or not reconcilable with transaction amounts in the earnings report.
The growth rate denotes the percentage change of billed sales between the end and the start of the selected month range.
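For example, with made-up numbers, the growth rate works out as follows:

```python
# Billed sales in the first and last month of the selected range
# (made-up values for illustration).
start, end = 12_000.0, 15_000.0

# Growth rate: change between the end and the start of the range,
# relative to the start.
growth_rate = (end - start) / start
print(f"{growth_rate:.1%}")  # 25.0%
```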
Select the ellipsis (three dots) to copy the widget image and download the image as a .PDF.
### Transactions
-In this section, you will find the _transactions information_ that shows the overall count of the order purchases or offer consumption for a partner for the selected date range and page filters.
+In this section, you find the _transactions information_ that shows the overall count of the order purchases or offer consumption for a partner for the selected date range and page filters.
Each transaction represents a unique combination of purchase record ID and line-item ID in the revenue report. Transaction information is further categorized based on orders (subscriptions) and consumption (usage) based billing models.
Select the ellipsis (three dots) to copy the widget image and download the image as a .PDF.
### Estimated revenue timeline
-In this section, you will find the _estimated revenue timeline_ information that displays the billed sales of the last payout amount, date, and the revenue figures of upcoming payments and their associated timelines. The upcoming revenue values shown are figures based on the current system date.
+In this section, you find the _estimated revenue timeline_ information that displays the billed sales of the last payout amount, date, and the revenue figures of upcoming payments and their associated timelines. The upcoming revenue values shown are figures based on the current system date.
Select the ellipsis (three dots) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
Select the ellipsis (three dots) to copy the widget image, download aggregated widget data as a .CSV file, and download the image as a .PDF.
### Customers leader board
-In this section, you will find the information for top customers who contribute the most to estimated revenue. The "All" row denotes billed sales of all your customers. Up to 500 records can be displayed in this leaderboard table. All figures are reported in the partner preferred currency and can be sorted on different columns. You can select each row of the table and see the corresponding revenue split across different statuses, and the revenue trend for the selected month range. The dotted line in the revenue trend represents revenue figures for the open month.
+In this section, you find the information for top customers who contribute the most to estimated revenue. The "All" row denotes billed sales of all your customers. This table can have up to 500 records. All figures are reported in the partner preferred currency and can be sorted on different columns. You can select each row of the table and see the corresponding revenue split across different statuses, and the revenue trend for the selected month range. The dotted line in the revenue trend represents revenue figures for the open month.
Select the ellipsis (three dots) to copy the widget image and download the image as a .PDF.
Select the ellipsis (three dots) to copy the widget image and download the image as a .PDF.
### Geographical spread
-In this section, you will find the geographic spread the total estimated revenue, estimated revenue for sent, upcoming, and unprocessed payout statuses. You can sort the table on different statuses. Total estimated revenue includes revenue for other statuses as well.
+In this section, you find the geographic spread of the total estimated revenue and the estimated revenue for sent, upcoming, and unprocessed payout statuses. You can sort the table on different statuses. Total estimated revenue includes revenue for other statuses as well.
The light-to-dark colors on the map represent the low to high value of the estimated revenue. Select a record in the table to zoom in on a specific country or region.
Select the ellipsis (three dots) to copy the widget image and download the image
[ ![Screenshot of the geographical spread section of the Revenue dashboard.](./media/revenue-dashboard/revenue-geographical-spread.png) ](./media/revenue-dashboard/revenue-geographical-spread.png#lightbox)
-Note the following:
-- You can move around the map to view an exact location.
-- You can zoom into a specific location.
-- The heatmap has a supplementary grid to view the details of country or region name, total revenue, estimated revenue of sent, unprocessed, and upcoming earnings.
-- You can search and select a country/region in the grid to zoom to the location in the map. To revert to the original view, select the **Home** button in the map.
+> [!NOTE]
+> - You can move around the map to view an exact location.
+> - You can zoom into a specific location.
+> - The heatmap has a supplementary grid to view the details of country or region name, total revenue, estimated revenue of sent, unprocessed, and upcoming earnings.
+> - You can search and select a country/region in the grid to zoom to the location in the map. To revert to the original view, select the **Home** button in the map.
### Details
+> [!IMPORTANT]
+> To download the data in CSV, use the **Download data** option at the top of the page.
The _Revenue details_ table displays a numbered list of the 1,000 top orders sorted by transaction month.

- Each column in the grid is sortable.
-- The data can be extracted to a .CSV or .TSV file if the count of the records is less than 1,000. To download the report, select **Download raw data** (down arrow icon) in the upper right of the widget.
-- If records number over 500, exported data will be asynchronously placed in a downloads page for the next 30 days.
-- Use the expand and collapse widget icon at the rightmost side of each record to view billed sales revenue split across different statuses for a given _purchase order id_ and _line item id_.
+- Use the expand and collapse widget icon at the rightmost side of each record to view billed sales revenue split across different statuses for a given _purchase order ID_ and _line item ID_.
- Apply filters to the revenue details table to display only the data you're interested in. You can filter by order type, offer name, billing model, sales channel, payment instrument type, payout status, and estimated payout instrument.

[ ![Screenshot of the Revenue details section of the Revenue dashboard.](./media/revenue-dashboard/details-widget.png) ](./media/revenue-dashboard/details-widget.png#lightbox)
Details widget with expandable and collapsible view.
Note the following:

-- The revenue is an estimate since it factors the exchange currency rates. It is displayed in transaction currency, US dollar, or partner preferred currency. Values are displayed as per the selected date range and page filters.
+- The revenue is an estimate since it factors the exchange currency rates. It's displayed in transaction currency, US dollar, or partner preferred currency. Values are displayed as per the selected date range and page filters.
- Estimated revenue is tagged with different statuses as explained in the [data dictionary table](#data-dictionary-table).
- Each row in the Details section has estimated revenue that is an aggregate of all revenue figures for a unique combination of purchase record ID and line-item ID.
- Columns for customer attributes may contain empty values.
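To reproduce that per-record status split offline, you can pivot the exported details by payout status. This is a sketch under assumptions: the file name and the exact column headers (`Purchase record ID`, `Line-item ID`, `Payout status`, `Earnings amount (PC)`) are illustrative stand-ins for whatever the export actually uses.

```python
import pandas as pd

# Assumed export of the Revenue details table; headers are illustrative.
revenue = pd.read_csv("revenue_details.csv")

# Each detail row aggregates revenue for one (purchase record ID,
# line-item ID) pair; pivoting by payout status mirrors the expanded view.
pivot = revenue.pivot_table(
    index=["Purchase record ID", "Line-item ID"],
    columns="Payout status",
    values="Earnings amount (PC)",
    aggfunc="sum",
)
print(pivot.head())
```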
-### Providing feedback
-
-In the lower left of most widgets, you'll see a thumbs up and thumbs down icon. Selecting the thumbs down icon displays a dialog box that you can use to submit your feedback on the widget.
- ## Data dictionary table | Column name in user interface | Definition |
| Status - Rejected | The overall revenue for which payments or safe approval were rejected. |
| Status - Not eligible | The overall revenue for which a partner isn't eligible to receive payouts. [Learn more](/partner-center/payout-statement) about eligibility. |
| Status - Reprocessed | The overall revenue under reprocessing due to various reasons. For example, invoice cancellation or safe approval cancelation, and so on. |
-| Status - Unreconciled | The overall revenue for which successful reconciliation with earnings couldn't happen. This can occur for multiple reasons:<ul><li>Estimated revenue is generated but earnings are not yet posted</li><li>Some issues with software systems</li></ul> |
+| Status - Unreconciled | The overall revenue for which successful reconciliation with earnings couldn't happen. This can occur for multiple reasons:<ul><li>Estimated revenue is generated but earnings aren't yet posted</li><li>Some issues with software systems</li></ul> |
| Status - Uncollected | The overall revenue for which an end customer hasn't yet paid or has defaulted. [Learn more](/partner-center/payout-policy-details#process-for-customer-non-payment) about write-offs. For enterprise agreement (EA) customers there may be entries; for non-EA customers there are no entries in the transaction history report. |
-| Transactions | An order purchase or an offer usage event for which a purchase order id and line-item id are generated in the customer invoice. |
-| Purchase record Id | Relates to a customer's invoice. Same as `order id` in the transaction history report. |
-| Line-item Id | Individual line in a customer's invoice. Same as `lineItemId` in the transaction history report. |
+| Transactions | An order purchase or an offer usage event for which a purchase order ID and line-item ID are generated in the customer invoice. |
+| Purchase record ID | Relates to a customer's invoice. Same as `order id` in the transaction history report. |
+| Line-item ID | Individual line in a customer's invoice. Same as `lineItemId` in the transaction history report. |
| Customer name | Name of the customer |
| Customer company name | Name of the customer's company |
-| Customer Id | The unique identifier assigned to a customer. A customer may have zero or more Azure Marketplace subscriptions. Same as `customer id` in the customers report. |
-| Billing account Id | Identifier for the billing account of the customer. Same as `customer id` in the transaction history report. |
-| Asset Id | An identifier for the software assets. Same as the `order id` in the orders report in Partner Center. |
+| Customer ID | The unique identifier assigned to a customer. A customer may have zero or more Azure Marketplace subscriptions. Same as `customer id` in the customers report. |
+| Billing account ID | Identifier for the billing account of the customer. Same as `customer id` in the transaction history report. |
+| Asset ID | An identifier for the software assets. Same as the `order id` in the orders report in Partner Center. |
| Offer type | Type of offer, such as SaaS, VM, and so on. |
| Offer name | Display name of the offer |
| Is Private Offer | Indicates whether a marketplace offer is a private or a public offer.<br><ul><li>0 value indicates false</li><li>1 value indicates true</li></ul> |
| Earnings amount (PC) | Earnings amount in partner preferred payout currency |
| Exchange rate date | The date used to calculate exchange rates for currency conversions |
| Estimated pay out month | The month for receiving your estimated earnings |
-| Sales channel | Represents the sales channel for the customer. It is the same as `Azure license type` in the orders report and usage report. The possible values are:<ul><li>Cloud Solution Provider (CSP)</li><li>Enterprise (EA)</li><li>Enterprise through Reseller</li><li>Pay as You Go</li><li>Go to market (GTM)</li></ul> |
-| PlanId | The display name of the plan entered when the offer was created in Partner Center. Note that PlanId was originally a numeric number. |
+| Sales channel | Represents the sales channel for the customer. It's the same as `Azure license type` in the orders report and usage report. The possible values are:<ul><li>Cloud Solution Provider (CSP)</li><li>Enterprise (EA)</li><li>Enterprise through Reseller</li><li>Pay as You Go</li><li>Go to market (GTM)</li></ul> |
+| PlanId | The display name of the plan entered when the offer was created in Partner Center. Note: PlanId was originally a numeric value. |
| Billing model | Subscription or consumption-based billing model used for calculation of estimated revenue. It can have one of these two values:<ul><li>UsageBased</li><li>SubscriptionBased</li></ul> |
| Customer postal code | The postal code name provided by the bill-to customer |
| Customer city | The city name provided by the bill-to customer |
| Customer country | The country or region name provided by the customer. The country/region could be different than the country/region in a customer's Azure subscription. |
| Customer company | The company name provided by the customer |
| Customer email | The e-mail address provided by the end customer. This address could be different than the e-mail address in a customer's Azure subscription. |
-| Payout currency | The partner preferred currency to receive payout. This is the same as the _lastpaymentcurrency_ column in the transaction history report. |
+| Payout currency | The partner preferred currency to receive payout. It's the same as the _lastpaymentcurrency_ column in the transaction history report. |
| Payment sent date | The date on which payment was sent to the partner |
| Quantity | Indicates billed quantity for transactions. This can represent the seats and site purchase count for subscription-based offers, and usage units for consumption-based offers. |
| Units | The unit quantity. Represents count of purchased seat/site SaaS orders and core hours for VM-based offers. Units will be displayed as NA for offers with custom meters. |
marketplace Summary Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/summary-dashboard.md
The [Summary dashboard](https://partner.microsoft.com/dashboard/insights/commerc
- Customers' usage of the offers
- Customers' page visits in Azure Marketplace and AppSource
-## Access the Summary dashboard
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. On the Home page, select the **Insights** tile.
-
- [ ![Illustrates the Insights tile on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png) ](./media/workspaces/partner-center-insights-tile.png#lightbox)
-
-1. In the left menu, select **Summary**.
-
- :::image type="content" source="./media/summary-dashboard/summary-left-nav.png" alt-text="Screenshot of the link for the Summary dashboard in the left nav.":::
-
-## Elements of the Summary dashboard
-
-The following sections describe how to use the summary dashboard and how to read the data.
-
-### Download
-
-To download the data for this dashboard, select **Download as PDF** from the **Download** list.
--
-Alternatively, you can go to the [Downloads dashboard](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/downloads) to download the report.
-
-### Share
-
-To share the dashboard widgets data via email, in the top menu, select **Share**.
--
-In the dialog box that appears, provide the recipient email address and message. To share the report URL, select the **Copy link** or **Share to teams** button. To take a snapshot of the charts data, select the **Copy as image** button.
-
-### What's new
-
-To learn about changes and enhancements that were made to the dashboard, select **What's new**. The _What's new_ side panel appears.
--
-### About data refresh
-
-To view the data source and the data refresh details, such as the frequency of the data refresh, select the ellipsis (three dots) and then select **Data refresh details**.
--
-### Got feedback?
-
-To provide instant feedback about the report/dashboard, select the ellipsis (three dots), and then select the **Got feedback?** link.
--
-Provide your feedback in the dialog box that appears.
-
-> [!NOTE]
-> A screenshot is automatically sent to us with your feedback.
-
-### Month range
-
-You can find a month range selection at the top-right corner of each page. Customize the output of the **Summary** page graphs by selecting a month range based on the past 6 or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range is six months.
-
-[ ![Illustrates the monthly range options on the summary dashboard.](./media/summary-dashboard/time-range.png) ](./media/summary-dashboard/time-range.png#lightbox)
### Orders widget

The Orders widget on the **Summary** dashboard displays the current orders for all your transact-based offers. The Orders widget displays a count and trend of all purchased orders (excluding canceled orders) for the selected computation period. The percentage value **Orders** represents the amount of growth during the selected computation period.
marketplace Usage Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/usage-dashboard.md
- Previously updated : 12/23/2022
Last updated : 02/20/2023

# Usage dashboard in commercial marketplace analytics
-This article provides information on the Usage dashboard in Partner Center. This dashboard displays all virtual machine (VM) offers normalized usage, raw usage, and metered billing metrics in three separate tabs: VM Normalized usage, VM Raw usage, and metered billing usage.
-
->[!NOTE]
-> For detailed definitions of analytics terminology, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.yml).
-
-## Usage dashboard
-
-The [Usage dashboard](https://go.microsoft.com/fwlink/?linkid=2166106) displays the current orders for all your software as a service (SaaS) offers. You can view graphical representations of the following items:
+This article provides information on the Usage dashboard in Partner Center.
+The [Usage dashboard](https://go.microsoft.com/fwlink/?linkid=2166106) displays usage insights for your virtual machine (VM), software as a service (SaaS), Azure application, and container offers. You can view graphical representations of the following items:
- Usage trend
- Normalized usage by offers
> [!NOTE]
> The maximum latency between usage event generation and reporting in Partner Center is 48 hours.
-## Access the Usage dashboard
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-1. On the Home page, select the **Insights** tile.
-
- ![The screenshot illustrates selecting the Usage option on the Partner Center Home page.](./media/workspaces/partner-center-insights-tile.png)
-
-1. In the left menu, select **Usage**.
-
- [![Screenshot that illustrates the Insights tile on the Partner Center Home page.](./media/usage-dashboard/usage-select.png)](./media/usage-dashboard/usage-select.png#lightbox)
-
-## Elements of the Usage dashboard
-
-The following sections describe how to use the Usage dashboard and how to read the data.
-
-### Download
-
-You can download data for this dashboard by selecting the Download option. To download a snapshot of the dashboard, select the 'Download as PDF' option.
-
-Alternatively, you can also navigate to the [Downloads](https://partner.microsoft.com/dashboard/insights/commercial-marketplace/analytics/downloads) dashboard and download the report.
-
-![The screenshot illustrates the download option of the Insights page.](./media/usage-dashboard/download-dashboard.png)
-
-### Share
-
-You can share the dashboard widgets data via email. Provide the recipient email address and a message in the email body. Share report URLs with the 'Copy link' and 'Share to Teams' options, and share a snapshot of the charts data with the 'Copy as image' option.
-
-![The screenshot illustrates the share option of the Insights page.](./media/usage-dashboard/share-dashboard.png)
-![The screenshot illustrates the share via email option of the Insights page.](./media/usage-dashboard/share-as-email.png)
-
-### What's new
-
-You can view the recent updates and changes.
-
-![The screenshot illustrates the whats new option of the Insights page.](./media/usage-dashboard/dashboard-whats-new.png)
-
-### About data refresh
-
-Use this option to view the Data source and the Data refresh details like the frequency of data refresh.
-### Got feedback
-
-You can give instant feedback about the report/dashboard with a screenshot.
-
-![The screenshot illustrates the feedback option of the Insights page.](./media/usage-dashboard/dashboard-feedback.png)
-
-### Month range
-
-You can find a month range selection at the top-right corner of each page. Customize the output of the **Usage** page graphs by selecting a month range based on the past 3, 6, or 12 months, or by selecting a custom month range with a maximum duration of 12 months. The default month range is six months.
-
-![The screenshot illustrates the Month filters on the Usage dashboard.](./media/usage-dashboard/dashboard-month-range.png)
-![The screenshot illustrates the selection of the time range.](./media/usage-dashboard/dashboard-month-range-select.png)
+>[!NOTE]
+> For detailed definitions of analytics terminology, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.yml).
### Usage page dashboard filters
The Usage page filters are applied at the Usage page level. You can select one or multiple filters to render the chart for the criteria you choose to view.
- Sales Channel
- Is Free Trial
- Marketplace License Type
-- Marketplace Subscription Id
-- Customer Id
+- Marketplace Subscription ID
+- Customer ID
- Customer Company Name
- Country
- Offer Name
Each filter is expandable with multiple options that you can select. Filter options are dynamic and based on the selected range.
-The widgets and export report for VM Raw usage are like VM Normalized usage with the following distinctions:
-- Normalized usage hours are defined as the usage hours normalized to account for the number of VM cores: [number of VM cores] X [hours of raw usage]. VMs designated as "SHAREDCORE" use 1/6th (or 0.1666) the [number of VM cores] multiplier.
-- Raw usage hours are defined as the amount of time VMs have been running in terms of usage units.
-You can choose to analyze VM normalized usage, VM raw usage, Metered usage, and Metered usage anomalies from the dropdown picker at the top of the dashboard.
+You can choose to analyze usage for different offers from the dropdown picker at the top of the dashboard.
[![Screenshot of the dropdown picker on the Usage dashboard.](./media/usage-dashboard/usage-type-picker.png)](./media/usage-dashboard/usage-type-picker.png#lightbox)
+- **Virtual Machine**
+ - VM Normalized usage
+ - VM Raw usage
+- **Custom meters**
+ - Metered usage
+ - Metered usage anomalies
+- **Containers**
+ - Raw usage
+ ### Public and private offer
-You can choose to view subscription and order details of public offers, private offers, or both by selecting the **Public Offers** sub-tab, **Private Offers** sub-tab, and the **All** sub-tab respectively.
+You can choose to view subscription and order details of public offers, private offers, or both by selecting the **Public Offers** subtab, **Private Offers** subtab, and the **All** subtab respectively.
[ ![Screenshot of the three tabs on the Usage dashboard.](./media/usage-dashboard/usage-dashboard-tabs.png) ](./media/usage-dashboard/usage-dashboard-tabs.png#lightbox)
+## Virtual Machines (VM)
+In this section, you'll find information about the widgets for **VM Normalized usage** and **VM Raw usage**.
+
+> [!NOTE]
+> These widgets are available in both the **VM Normalized Usage** and **VM Raw Usage** dashboards, and show normalized usage or raw usage respectively.
+ ### Usage trend
-In this section, you'll find total usage hours and trend for all your offers that are consumed by your customers during the selected computation period. Metrics and growth trends are represented by a line chart. Show the value for each month by hovering over the line on the chart. The percentage value below the usage metrics in the widget represents the amount of growth or decline during the selected computation period.
+In this widget, you find total usage hours and trend for your VM offers that are consumed by your customers during the selected computation period. Metrics and growth trends are represented by a line chart. Show the value for each month by hovering over the line on the chart. The percentage value below the usage metrics in the widget represents the amount of growth or decline during the selected computation period.
There are two representations of usage hours: VM normalized usage and VM raw usage.
+- Normalized usage hours are defined as the usage hours normalized to account for the number of VM vCPU ([number of VM vCPU] x [hours of raw usage]). VMs designated as "SHAREDCORE" use 1/6 (or 0.1666) as the [number of VM vCPU] multiplier.
+- Raw usage hours are defined as the amount of time VMs have been running, in hours.
[![Illustrates the normalized usage and raw usage data on the Usage dashboard.](./media/usage-dashboard/normalized-usage.png)](./media/usage-dashboard/normalized-usage.png#lightbox)
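The normalization rule is simple enough to apply yourself, for example when reconciling exported raw usage. A minimal sketch (the function and variable names are ours, not from the export):

```python
# Normalized usage = vCPU count x raw usage hours; "SHAREDCORE" VMs use the
# fixed 1/6 (0.1666) multiplier instead of a vCPU count.
SHARED_CORE_MULTIPLIER = 1 / 6

def normalized_hours(vcpus: int, raw_hours: float, shared_core: bool = False) -> float:
    multiplier = SHARED_CORE_MULTIPLIER if shared_core else vcpus
    return multiplier * raw_hours

# A 4-vCPU VM running 10 hours contributes 40 normalized hours;
# a shared-core VM running 10 hours contributes about 1.67.
print(normalized_hours(4, 10))                    # 40
print(normalized_hours(1, 10, shared_core=True))  # ~1.67
```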
-Click the ellipsis (three dots '...') to copy the widget image, download aggregated widget data as a .csv file, or download the image as a pdf file for sharing purposes.
-
-### Normalized usage by offers
+### Offers
-This section provides the total usage hours and trend for your usage-based offers in Azure Marketplace. The Normalized usage by offers chart is described below.
+This widget provides the total usage hours and trend for your usage-based offers in commercial Marketplace.
-- The **normalized usage by offers** stacked column chart displays a breakdown of normalized usage hours for the top five offers according to the selected computation period. The top five offers are displayed in a graph, while the rest are grouped in the **Rest All** category.
+- The **Offers** stacked column chart displays a breakdown of usage hours for the top five offers according to the selected computation period. The top five offers are displayed in a graph, while the rest are grouped in the **Rest All** category.
- The stacked column chart depicts a month-by-month growth trend for the selected date range. The month columns represent usage hours from the offers with the highest usage hours for the respective month. The line chart depicts the growth percentage trend plotted on the secondary Y-axis.
- You can select specific offers in the legend to display only those offers in the graph.

:::image type="content" source="./media/usage-dashboard/normalized-usage-offers.png" alt-text="Illustrates the normalized usage offers data on the Usage dashboard.":::
-You can select any offer and a maximum of three SKUs of that offer to view the month-over-month usage trend for the offer and the selected SKUs.
+You can select any offer and a maximum of three plans of that offer to view the month-over-month usage trend for the offer and the selected plans.
-### Orders by offers and SKUs
+#### Usage: Other dimensions
-The **Orders by Offers and SKU** chart shows the measures and trends of all offers. Note the following:
+There are three tabs for the dimensions:
+- VM size
+- Sales channels
+- Offer type
-- The top offers are displayed in the graph and the rest of the offers are grouped as **Rest All**.
-- You can select specific offers in the legend to display only those offers in the graph.
-- Hovering over a slice in the graph displays the number of orders and percentage of that offer compared to your total number of orders across all offers.
-- The **orders by offers trend** displays month-by-month growth trends. The month column represents the number of orders by offer name. The line chart displays the growth percentage trend plotted on the z-axis.
+You can see the usage metrics and month-over-month trend against each of these dimensions.
-You can select any offer and a maximum of three SKUs of that offer to view the month-over-month trend for the offer, SKUs, and seats.
+### Geographical spread
+For the selected computation period, the heatmap displays the total usage against the geography dimension. The light-to-dark color on the map represents the low-to-high value of the customer count. Select a record in the table to zoom in on a country/region.
-#### Normalized usage by other dimensions: VM size, Sales channels, and Offer type
+## Containers
+In this section, you'll find information about the widgets available for container raw usage.
-There are three tabs for the dimensions: VM size, Sales channels, and Offer type. You can see the usage metrics and month-over-month trend against each of these dimensions.
+### Raw usage
+This widget shows usage against all the pricing options you've configured for container offers. Possible pricing options are:
+- Per core
+- Per every core in cluster
+- Per node
+- Per every node in cluster
+- Per pod
+- Per cluster
-### Usage by geography
+The chart provides a month-over-month trend of usage for the selected period.
-For the selected computation period, the heatmap displays the total usage against geography dimension. The light to dark color on the map represents the low to high value of the customer count. Select a record in the table to zoom in on a country/region.
+### Offers
+
+This widget provides the total usage hours and trend for container offers based on the **Pricing** selection at the top of the page.
+
+- The **Offers** stacked column chart displays a breakdown of usage hours for the top five offers according to the selected computation period. The top five offers are displayed in a graph, while the rest are grouped in the **Rest All** category.
+- The stacked column chart depicts a month-by-month growth trend for the selected date range. The month columns represent usage hours from the offers with the highest usage hours for the respective month. The line chart depicts the growth percentage trend plotted on the secondary Y-axis.
+- You can select specific offers in the legend to display only those offers in the graph.
+
+You can select any offer and a maximum of three plans of that offer to view the month-over-month usage trend for the offer and the selected plans.
++
+#### Sales channel
+
+You can see the usage metrics and month-over-month trend for container offers against the sales channel. Data in this widget is filtered based on the **Pricing** option selected at the top of the page.
-Note the following:
+### Geographical spread
-- You can move the map to view the exact location.
-- You can zoom into a specific location.
-- The heatmap has a supplementary grid to view the details of customer count, order count, and normalized usage hours in the specific location.
-- You can search and select a country/region in the grid to zoom to the location in the map. Revert to the original view by selecting the **Home** button in the map.
+For the selected computation period, the heatmap displays the total usage against the geography dimension. The light-to-dark color on the map represents the low-to-high value of the customer count. Select a record in the table to zoom in on a country/region. Data in this widget is filtered based on the **Pricing** option selected at the top of the page.
-### Usage page filters
+### Customers
-The Usage page filters are applied at the Orders page level. You can select one or multiple filters to render the chart for the criteria you choose to view the data you want to see in the Usage orders data' grid / export. Filters are applied on the data extracted for the month range that you selected in the upper-right corner of the Usage page.
+This widget shows the usage for customers based on the selected pricing option. You can also filter the data further based on *Offers* and *Plans*.
-The widgets and export report for VM Raw usage are similar to VM Normalized usage with the following distinctions:
-- Normalized usage hours are defined as the usage hours normalized to account for the number of VM cores: [number of VM cores] x [hours of raw usage]. VMs designated as "SHAREDCORE" use 1/6 (or 0.1666) the [number of VM cores] multiplier.
-- Raw usage hours are defined as the amount of time VMs have been running in terms of usage units.
+## Usage details table
-### Usage details table
+> [!IMPORTANT]
+> To download the data as a CSV file, use the **Download data** option at the top of the page.
The **usage details** table displays a numbered list of the top 500 usage records sorted by usage. Note the following:
+- Data in this table is displayed based on the page you've selected.
- Each column in the grid is sortable.
-- The data can be extracted to a '.TSV' or '.CSV' file if the number of the records is less than 500.
-- If records count is over 500, export data will be asynchronously placed on a downloads page that will be available for the next 30 days.
- Apply filters to **detailed usage data** to display only the data you're interested in. Filter data by country/region, sales channel, Marketplace license type, usage type, offer name, offer type, free trials, Marketplace subscription ID, customer ID, and company name.
-Click on the ellipsis (three dots '...') to copy the widget image, download the aggregated widget data as a .csv file, or download the image as a pdf for sharing purposes.
+Click on the ellipsis (three dots '...') to copy the widget image, or download the image as a pdf for sharing purposes.
[![Illustrates the Usage details page.](./media/usage-dashboard/usage-details.png)](./media/usage-dashboard/usage-details.png#lightbox)
-_**Table 1: Dictionary of data terms**_
+**Table 1: Dictionary of data terms**
| Column name in<br>user interface | Attribute name | Definition | Column name in programmatic<br>access reports |
| - | - | - | - |
| Marketplace Subscription ID | Marketplace Subscription ID | The unique identifier associated with the Azure subscription the customer used to purchase your commercial marketplace offer. ID was formerly the Azure Subscription GUID. | MarketplaceSubscriptionId |
| MonthStartDate | Month Start Date | Month Start Date represents the month of Purchase. | MonthStartDate |
| Offer Type | Offer Type | The type of commercial marketplace offering. | OfferType |
-| Azure License Type | Azure License Type | The type of licensing agreement used by customers to purchase Azure. Also known as the Channel. The possible values are:<ul><li>Cloud Solution Provider</li><li>Enterprise</li><li>Enterprise through Reseller</li><li>Pay as You Go</li></ul> | AzureLicenseType |
+| Sales channel | Azure License Type | The type of licensing agreement used by customers to purchase Azure. Also known as the Channel. The possible values are:<ul><li>Cloud Solution Provider</li><li>Enterprise</li><li>Enterprise through Reseller</li><li>Pay as You Go</li></ul> | AzureLicenseType |
| Marketplace License Type | Marketplace License Type | The billing method of the commercial marketplace offer. The possible values are:<ul><li>Billed Through Azure</li><li>Bring Your Own License</li><li>Free</li><li>Microsoft as Reseller</li></ul> | MarketplaceLicenseType |
-| SKU | SKU | The plan associated with the offer. | SKU |
+| Plan | SKU | The plan associated with the offer. | SKU |
| Customer Country | Customer Country/Region | The country/region name provided by the customer. Country/region could be different than the country/region in a customer's Azure subscription. | CustomerCountry |
-| Is Preview SKU | Is Preview SKU | The value shows if you've tagged the SKU as "preview". Value will be "Yes" if the SKU has been tagged accordingly, and only Azure subscriptions authorized by you can deploy and use this image. Value will be "No" if the SKU hasn't been identified as "preview". | IsPreviewSKU |
-| SKU Billing Type | SKU Billing Type | The Billing type associated with each SKU in the offer. The possible values are:<ul><li>Free</li><li>Paid</li></ul> | SKUBillingType |
-| VM Size | Virtual Machine Size | For VM-based offer types, this entity signifies the size of the VM associated with the SKU of the offer. | VMSize |
+| Is Preview SKU | Is Preview SKU | The value shows if you've tagged the plan as "preview". Value is "Yes" if the plan has been tagged accordingly, and only Azure subscriptions authorized by you can deploy and use this image. Value is "No" if the plan hasn't been identified as "preview". | IsPreviewSKU |
+| SKU Billing Type | SKU Billing Type | The Billing type associated with each plan in the offer. The possible values are:<ul><li>Free</li><li>Paid</li></ul> | SKUBillingType |
+| VM Size | Virtual Machine Size | For VM-based offer types, this entity signifies the size of the VM associated with the plan of the offer. | VMSize |
| Cloud Instance Name | Cloud Instance Name | The Microsoft Cloud in which a VM deployment occurred. | CloudInstanceName |
| Offer Name | Offer Name | The name of the commercial marketplace offering. | OfferName |
| Is Private Offer | Is Private Offer | Indicates whether a marketplace offer is a private or a public offer:<br><ul><li>0 value indicates false</li><li>1 value indicates true</li></ul> | IsPrivateOffer |
_**Table 1: Dictionary of data terms**_
| Usage Type | Usage Type | Signifies whether the usage event associated with the offer is one of the following:<ul><li>Normalized usage</li><li>Raw usage</li><li>Metered usage</li></ul> | UsageType |
| Trial End Date | Trial End Date | The date the trial period for this order will end or has ended. | TrialEndDate |
| Customer Currency (CC) | Customer Currency | The currency used by the customer for the commercial marketplace transaction. | CustomerCurrencyCC |
-| Price (CC) | Price | Unit price of the SKU shown in customer currency. | PriceCC |
+| Price (CC) | Price | Unit price of the plan shown in customer currency. | PriceCC |
| Payout Currency (PC) | Payout Currency | Publisher is paid for the usage events associated with the asset in the currency configured by the publisher. | PayoutCurrencyPC |
-| Estimated Price (PC) | Estimated Price | Unit price of the SKU in the currency configured by the publisher. | EstimatedPricePC |
+| Estimated Price (PC) | Estimated Price | Unit price of the plan in the currency configured by the publisher. | EstimatedPricePC |
| Usage Reference | Usage Reference | A concatenated GUID that is used to connect the Usage Report (in commercial marketplace analytics) with the Payout transaction report. Usage Reference is connected with OrderId and LineItemId in the Payout transaction report. | UsageReference |
-| Usage Unit | Usage Unit | Unit of consumption associated with the SKU. | UsageUnit |
+| Usage Unit | Usage Unit | Unit of consumption associated with the plan. | UsageUnit |
| Customer ID | Customer ID | The unique identifier assigned to a customer. A customer may have zero or more Azure Marketplace subscriptions. | CustomerId |
| Billing Account ID | Billing Account ID | The identifier of the account on which billing is generated. Map **Billing Account ID** to **customerID** to connect your Payout Transaction Report with the Customer, Order, and Usage Reports. | BillingAccountId |
| Usage Quantity | Usage Quantity | The total usage units consumed by the asset that is deployed by the customer.<br>This is based on the Usage Type item. For example, if the Usage Type is Normalized usage, then Usage Quantity is for Normalized Usage. | UsageQuantity |
-| NormalizedUsage | Normalized Usage | The total normalized usage units consumed by the asset that is deployed by the customer.<br>Normalized usage hours are defined as the usage hours normalized to account for the number of VM cores ([number of VM cores] x [hours of raw usage]). VMs designated as "SHAREDCORE" use 1/6 (or 0.1666) as the [number of VM cores] multiplier. | NormalizedUsage |
+| NormalizedUsage | Normalized Usage | The total normalized usage units consumed by the asset that is deployed by the customer.<br>Normalized usage hours are defined as the usage hours normalized to account for the number of VM vCPUs ([number of VM vCPUs] x [hours of raw usage]). VMs designated as "SHAREDCORE" use 1/6 (or 0.1666) as the [number of VM vCPUs] multiplier. | NormalizedUsage |
| MeteredUsage | Metered Usage | The total usage units consumed by the meters that are configured with the offer that is deployed by the customer. | MeteredUsage |
-| RawUsage | Raw Usage | The total raw usage units consumed by the asset that is deployed by the customer.<br>Raw usage hours are defined as the amount of time VMs have been running in terms of usage units. | RawUsage |
+| RawUsage | Raw Usage | The total raw usage units consumed by the asset that is deployed by the customer.<br>Raw usage hours are defined as the amount of time VMs have been running, in usage units. | RawUsage |
| Estimated Extended Charge (CC) | Estimated Extended Charge in Customer Currency | Signifies the charges associated with the usage. The column is the product of Price (CC) and Usage Quantity. | EstimatedExtendedChargeCC |
| Estimated Extended Charge (PC) | Estimated Extended Charge in Payout Currency | Signifies the charges associated with the usage. The column is the product of Estimated Price (PC) and Usage Quantity. | EstimatedExtendedChargePC |
-| Meter Id | Meter Id | **Applicable for offers with custom meter dimensions.**<br>Signifies the meter ID for the offer. | MeterId |
+| Meter ID | Meter ID | **Applicable for offers with custom meter dimensions.**<br>Signifies the meter ID for the offer. | MeterId |
| Metered Dimension | Metered Dimension | **Applicable for offers with custom meter dimensions.**<br>Metered dimension of the custom meter. For example, user/device - billing unit | MeterDimension |
| Partner Center Detected Anomaly | Partner Center Detected Anomaly | **Applicable for offers with custom meter dimensions**.<br>Signifies whether the publisher reported overage usage for the offer's custom meter dimension that is flagged as an anomaly by Partner Center. The possible values are: <ul><li>0 (Not an anomaly)</li><li>1 (Anomaly)</li></ul>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic access, then the value will be null._ | PartnerCenterDetectedAnomaly |
| Publisher Marked Anomaly | Publisher Marked Anomaly | **Applicable for offers with custom meter dimensions**.<br>Signifies whether the publisher acknowledged the overage usage by the customer for the offer's custom meter dimension as genuine or false. The possible values are:<ul><li>0 (Publisher has marked it as not an anomaly)</li><li>1 (Publisher has marked it as an anomaly)</li></ul>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic access, then the value will be null._ | PublisherMarkedAnomaly |
_**Table 1: Dictionary of data terms**_
| Action Taken At | Action Taken At | **Applicable for offers with custom meter dimensions**.<br>Specifies the time when the publisher acknowledged the overage usage by the customer for the offer's custom meter dimension as genuine or false.<br>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic access, then the value will be null._ | ActionTakenAt |
| Action Taken By | Action Taken By | **Applicable for offers with custom meter dimensions**.<br>Specifies the person who acknowledged the overage usage by the customer for the offer's custom meter dimension as genuine or false.<br>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic access, then the value will be null._ | ActionTakenBy |
| Estimated Financial Impact (USD) | Estimated Financial Impact in USD | **Applicable for offers with custom meter dimensions**.<br>When Partner Center flags an overage usage by the customer for the offer's custom meter dimension as anomalous, the field specifies the estimated financial impact (in USD) of the anomalous overage usage.<br>_If the publisher doesn't have offers with custom meter dimensions, and exports this column through programmatic means, then the value will be null._ | EstimatedFinancialImpactUSD |
-| Asset Id | Asset Id | **Applicable for offers with custom meter dimensions**.<br>The unique identifier of the customer's order subscription for your commercial marketplace service. Virtual machine usage-based offers aren't associated with an order. | Asset Id |
-| PlanId | PlanID | The display name of the plan entered when the offer was created in Partner Center. Note that PlanId was originally a number. | PlanID |
-| Not available | Reference Id | A key to link transactions of usage-based offers with corresponding transactions in the orders report. For SaaS offers with custom meters, this key represents the AssetId. For VM software reservations, this key can be used for linking orders and usage reports. | ReferenceId |
+| Asset ID | Asset ID | **Applicable for offers with custom meter dimensions**.<br>The unique identifier of the customer's order subscription for your commercial marketplace service. Virtual machine usage-based offers aren't associated with an order. | Asset ID |
+| PlanId | PlanID | The display name of the plan entered when the offer was created in Partner Center. PlanId was originally a number. | PlanID |
+| Not available | Reference ID | A key to link transactions of usage-based offers with corresponding transactions in the orders report. For SaaS offers with custom meters, this key represents the AssetId. For VM software reservations, this key can be used for linking orders and usage reports. | ReferenceId |
| Not available | List Price(USD) | The publicly listed price of the offer plan in U.S. dollars | ListPriceUSD |
| Not available | Discount Price(USD) | The discounted price of the offer plan in U.S. dollars | DiscountPriceUSD |
| Not available | Is Private Plan | Indicates whether an offer plan is a private plan <li> 0 value indicates false </li> <li> 1 value indicates true </li> | IsPrivatePlan |
_**Table 1: Dictionary of data terms**_
- For information about your orders in a graphical and downloadable format, see [Orders Dashboard in commercial marketplace analytics](./orders-dashboard.md).
- For virtual machine (VM) offers usage and metered billing metrics, see [Usage Dashboard in commercial marketplace analytics](usage-dashboard.md).
- For a list of your download requests over the last 30 days, see [Downloads dashboard in commercial marketplace analytics](downloads-dashboard.md).
-- To see a consolidated view of customer feedback for offers on Azure Marketplace and Microsoft AppSource, see [Ratings & Reviews analytics dashboard in Partner Center](ratings-reviews.md).
+- To see a consolidated view of customer feedback for offers on Azure Marketplace and the Microsoft commercial marketplace, see [Ratings & Reviews analytics dashboard in Partner Center](ratings-reviews.md).
- For frequently asked questions about commercial marketplace analytics and for a comprehensive dictionary of data terms, see [Commercial marketplace analytics terminology and common questions](./analytics-faq.yml).
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Title: Introduction to flow logs for NSGs
+ Title: NSG flow logs
-description: This article explains how to use the NSG flow logs feature of Azure Network Watcher.
+description: Learn about the NSG flow logs feature of Azure Network Watcher.
Previously updated : 10/06/2022- Last updated : 02/26/2023 +
-# Introduction to flow logs for network security groups
+# Flow logs for network security groups
-Flow logs are a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a [network security group](../virtual-network/network-security-groups-overview.md#security-rules) (NSG). Flow data is sent to Azure Storage accounts. From there, you can access the data and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice.
+NSG flow logs is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a [network security group (NSG)](../virtual-network/network-security-groups-overview.md). Flow data is sent to Azure Storage accounts. From there, you can access the data and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice.
-![Screenshot of the Azure portal that shows the main page for NSG flow logs in Network Watcher.](./media/network-watcher-nsg-flow-logging-overview/homepage.jpg)
-
-This article shows you how to use, manage, and troubleshoot NSG flow logs.
## Why use flow logs?
Flow logs are the source of truth for all network activity in your cloud environ
## Common use cases
-**Network monitoring**: Identify unknown or undesired traffic. Monitor traffic levels and bandwidth consumption. Filter flow logs by IP and port to understand application behavior. Export flow logs to analytics and visualization tools of your choice to set up monitoring dashboards.
+#### Network monitoring
+- Identify unknown or undesired traffic.
+- Monitor traffic levels and bandwidth consumption.
+- Filter flow logs by IP and port to understand application behavior.
+- Export flow logs to analytics and visualization tools of your choice to set up monitoring dashboards.
-**Usage monitoring and optimization:** Identify top talkers in your network. Combine with GeoIP data to identify cross-region traffic. Understand traffic growth for capacity forecasting. Use data to remove overly restrictive traffic rules.
+#### Usage monitoring and optimization
+- Identify top talkers in your network.
+- Combine with GeoIP data to identify cross-region traffic.
+- Understand traffic growth for capacity forecasting.
+- Use data to remove overly restrictive traffic rules.
-**Compliance**: Use flow data to verify network isolation and compliance with enterprise access rules.
+#### Compliance
+- Use flow data to verify network isolation and compliance with enterprise access rules.
-**Network forensics and security analysis**: Analyze network flows from compromised IPs and network interfaces. Export flow logs to any SIEM or IDS tool of your choice.
+#### Network forensics and security analysis
+- Analyze network flows from compromised IPs and network interfaces.
+- Export flow logs to any SIEM or IDS tool of your choice.
-## How flow logs work
+## How NSG flow logs work
-Key properties of flow logs include:
+Key properties of NSG flow logs include:
-- Flow logs operate at [Layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_Layer) and record all IP flows going in and out of an NSG.
-- Logs are collected at 1-minute intervals through the Azure platform. They don't affect customer resources or network performance in any way.
-- Logs are written in JSON format and show outbound and inbound flows per NSG rule.
+- Flow logs operate at Layer 4 of the Open Systems Interconnection (OSI) model and record all IP flows going in and out of a network security group.
+- Logs are collected at 1-minute intervals through the Azure platform. They don't affect your Azure resources or network performance in any way.
+- Logs are written in JSON format and show outbound and inbound flows per network security group rule.
- Each log record contains the network interface (NIC) that the flow applies to, 5-tuple information, the traffic decision, and (for version 2 only) throughput information.
-- Flow logs have a retention feature that allows automatically deleting the logs up to a year after their creation.
+- NSG flow logs have a retention feature that allows deleting the logs automatically up to a year after their creation.
> [!NOTE]
> Retention is available only if you use [general-purpose v2 storage accounts](../storage/common/storage-account-overview.md#types-of-storage-accounts).

Core concepts for flow logs include:

-- Software-defined networks are organized around virtual networks and subnets. You can manage the security of these virtual networks and subnets by using an NSG.
-- An NSG contains a list of _security rules_ that allow or deny network traffic in resources that it's connected to. NSGs can be associated with each virtual network, subnet, and network interface in a virtual machine (VM). For more information, see [Network security group overview](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
-- All traffic flows in your network are evaluated through the rules in the applicable NSG. The result of these evaluations is NSG flow logs.
-- Flow logs are collected through the Azure platform and don't require any change to customer resources.
-- Rules are of two types: terminating and non-terminating. Each has different logging behaviors:
- - NSG *deny* rules are terminating. The NSG that's denying the traffic will log it in flow logs. Processing in this case stops after any NSG denies traffic.
- - NSG *allow* rules are non-terminating. Even if one NSG allows it, processing continues to the next NSG. The last NSG that allows traffic will log the traffic to flow logs.
-- NSG flow logs are written to storage accounts. You can export, process, analyze, and visualize flow logs by using tools like Network Watcher traffic analytics, Splunk, Grafana, and Stealthwatch.
+- Software-defined networks are organized around virtual networks and subnets. You can manage the security of these virtual networks and subnets by using network security groups.
+- A network security group contains *security rules* that allow or deny network traffic to or from the Azure resources that the network security group is connected to. A network security group can be associated with a subnet or a network interface of a virtual machine (VM). For more information, see [Network security group overview](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
+- All traffic flows in your network are evaluated through the rules in the applicable network security group. The result of these evaluations is NSG flow logs.
+- NSG flow logs are collected through the Azure platform and don't require any change to your Azure resources.
+- There are two types of network security group rules: terminating and non-terminating. Each has different logging behaviors:
+ - *Deny* rules are terminating. The network security group that's denying the traffic will log it in the flow logs. Processing in this case stops after any NSG denies traffic.
+ - *Allow* rules are non-terminating. If the network security group allows the traffic, processing continues to the next network security group. The last network security group that allows traffic will log the traffic to the flow logs.
+- NSG flow logs are written to storage accounts. You can export, process, analyze, and visualize NSG flow logs by using tools like Network Watcher traffic analytics, Splunk, Grafana, and Stealthwatch.
## Log format
-Flow logs include the following properties:
+NSG flow logs include the following properties:
* `time`: Time when the event was logged.
-* `systemId`: System ID of the NSG.
+* `systemId`: System ID of the network security group.
* `category`: Category of the event. The category is always `NetworkSecurityGroupFlowEvent`.
-* `resourceid`: Resource ID of the NSG.
+* `resourceid`: Resource ID of the network security group.
* `operationName`: Always `NetworkSecurityGroupFlowEvents`.
-* `properties`: Collection of properties of the flow.
- * `Version`: Version number of the flow log's event schema
+* `properties`: Collection of properties of the flow:
+ * `Version`: Version number of the flow log's event schema.
* `flows`: Collection of flows. This property has multiple entries for different rules.
  * `rule`: Rule for which the flows are listed.
  * `flows`: Collection of flows.
    * `mac`: MAC address of the NIC for the VM where the flow was collected.
- * `flowTuples`: String that contains multiple properties for the flow tuple, in comma-separated format.
+ * `flowTuples`: String that contains multiple properties for the flow tuple in comma-separated format:
* `Time Stamp`: Time stamp of when the flow occurred, in UNIX epoch format.
* `Source IP`: Source IP address.
* `Destination IP`: Destination IP address.
Flow logs include the following properties:
* `Protocol`: Protocol of the flow. Valid values are `T` for TCP and `U` for UDP.
* `Traffic Flow`: Direction of the traffic flow. Valid values are `I` for inbound and `O` for outbound.
* `Traffic Decision`: Whether traffic was allowed or denied. Valid values are `A` for allowed and `D` for denied.
- * `Flow State - Version 2 Only`: State of the flow. Possible states are: <br><br>`B`: Begin, when a flow is created. Statistics aren't provided. <br>`C`: Continuing for an ongoing flow. Statistics are provided at 5-minute intervals. <br>`E`: End, when a flow is terminated. Statistics are provided.
+ * `Flow State - Version 2 Only`: State of the flow. Possible states are:
+ * `B`: Begin, when a flow is created. Statistics aren't provided.
+ * `C`: Continuing for an ongoing flow. Statistics are provided at 5-minute intervals.
+ * `E`: End, when a flow is terminated. Statistics are provided.
* `Packets - Source to destination - Version 2 Only`: Total number of TCP packets sent from source to destination since the last update.
* `Bytes sent - Source to destination - Version 2 Only`: Total number of TCP packet bytes sent from source to destination since the last update. Packet bytes include the packet header and payload.
* `Packets - Destination to source - Version 2 Only`: Total number of TCP packets sent from destination to source since the last update.
* `Bytes sent - Destination to source - Version 2 Only`: Total number of TCP packet bytes sent from destination to source since the last update. Packet bytes include packet header and payload.

Version 2 of NSG flow logs introduces the concept of flow state. You can configure which version of flow logs you receive. Flow state `B` is recorded when a flow is initiated. Flow state `C` and flow state `E` are states that mark the continuation of a flow and flow termination, respectively. Both `C` and `E` states contain traffic bandwidth information.
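As an informal illustration of how those comma-separated fields line up, the following PowerShell sketch splits a version 2 flow tuple into named fields. The sample tuple is made up, and this isn't an official parser.

```powershell
# Illustrative sketch: split a made-up version 2 flow tuple into named fields.
$tuple  = '1542110377,10.0.0.4,13.67.143.118,44931,443,T,O,A,B,,,,'
$fields = $tuple.Split(',')

[pscustomobject]@{
    TimeStamp       = [DateTimeOffset]::FromUnixTimeSeconds([long]$fields[0])
    SourceIP        = $fields[1]
    DestinationIP   = $fields[2]
    SourcePort      = $fields[3]
    DestinationPort = $fields[4]
    Protocol        = $fields[5]   # T = TCP, U = UDP
    TrafficFlow     = $fields[6]   # I = inbound, O = outbound
    TrafficDecision = $fields[7]   # A = allowed, D = denied
    FlowState       = $fields[8]   # B, C, or E (version 2 only)
    PacketsSrcToDst = $fields[9]   # statistics fields are empty for state B
    BytesSrcToDst   = $fields[10]
    PacketsDstToSrc = $fields[11]
    BytesDstToSrc   = $fields[12]
}
```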
-In the following examples of a flow log, multiple records follow the property list described earlier.
+In the following NSG flow log examples, multiple records follow the property list described earlier.
> [!NOTE] > Values in the `flowTuples` property are a comma-separated list.
+#### Version 1
+ Here's an example format of a version 1 NSG flow log: ```json
Here's an example format of a version 1 NSG flow log:
```
+#### Version 2
+ Here's an example format of a version 2 NSG flow log: ```json
For more information about enabling flow logs, see the following guides:
In the Azure portal:

1. Go to the **NSG flow logs** section in Network Watcher.
-1. Select the name of the NSG.
-1. On the settings pane for the flow log, change the parameters that you want.
+1. Select the name of the network security group.
+1. On the settings pane for the NSG flow log, change the parameters that you want.
1. Select **Save** to deploy the changes.

To update parameters via command-line tools, use the same command that you used to enable flow logs.
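For example, here's a hedged Azure PowerShell sketch of such an update, assuming the Az.Network `Set-AzNetworkWatcherConfigFlowLog` cmdlet; all resource names are placeholders:

```powershell
# Sketch: re-running the enable command with new values updates the flow log settings.
$nw  = Get-AzNetworkWatcher -ResourceGroupName NetworkWatcherRG -Name NetworkWatcher_eastus
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName myResourceGroup -Name myNSG
$sa  = Get-AzStorageAccount -ResourceGroupName myResourceGroup -Name mystorageaccount

Set-AzNetworkWatcherConfigFlowLog -NetworkWatcher $nw -TargetResourceId $nsg.Id `
    -StorageAccountId $sa.Id -EnableFlowLog $true -FormatType Json -FormatVersion 2 `
    -EnableRetention $true -RetentionInDays 30
```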
To update parameters via command-line tools, use the same command that you used
- [Read flow logs by using PowerShell functions](./network-watcher-read-nsg-flow-logs.md) - [Export NSG flow logs to Splunk](https://www.splunk.com/en_us/blog/platform/splunking-azure-nsg-flow-logs.html)
-Although flow logs target NSGs, they're not displayed the same way as the other logs. Flow logs are stored only within a storage account and follow the logging path shown in the following example:
+NSG flow logs target network security groups and aren't displayed the same way as the other logs. NSG flow logs are stored only in a storage account and follow the logging path shown in the following example:
```
https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecuritygroupflowevent/resourceId=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{nsgName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecurity
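For illustration, here's a hedged sketch of downloading one hour's log blob along this path with Azure PowerShell; the account name, subscription ID, resource group, NSG name, MAC address, and date are all placeholders:

```powershell
# Sketch: fetch one hour of flow logs following the path convention above.
$ctx  = New-AzStorageContext -StorageAccountName mystorageaccount -UseConnectedAccount
$blob = 'resourceId=/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/' +
        'MYRESOURCEGROUP/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/MYNSG/' +
        'y=2023/m=02/d=26/h=00/m=00/macAddress=000D3AF87856/PT1H.json'

Get-AzStorageBlobContent -Container insights-logs-networksecuritygroupflowevent `
    -Blob $blob -Destination . -Context $ctx
```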
### Visualize flow logs

-- [Visualize NSG flow logs by using Azure Network Watcher traffic analytics](./traffic-analytics.md)
-- [Visualize NSG flow logs by using Power BI](./network-watcher-visualize-nsg-flow-logs-power-bi.md)
-- [Visualize NSG flow logs by using Elastic Stack](./network-watcher-visualize-nsg-flow-logs-open-source-tools.md)
-- [Manage and analyze NSG flow logs by using Grafana](./network-watcher-nsg-grafana.md)
-- [Manage and analyze NSG flow logs by using Graylog](./network-watcher-analyze-nsg-flow-logs-graylog.md)
+- [Visualize NSG flow logs using Network Watcher traffic analytics](./traffic-analytics.md)
+- [Visualize NSG flow logs using Power BI](./network-watcher-visualize-nsg-flow-logs-power-bi.md)
+- [Visualize NSG flow logs using Elastic Stack](./network-watcher-visualize-nsg-flow-logs-open-source-tools.md)
+- [Manage and analyze NSG flow logs using Grafana](./network-watcher-nsg-grafana.md)
+- [Manage and analyze NSG flow logs using Graylog](./network-watcher-analyze-nsg-flow-logs-graylog.md)
### Disable flow logs
-When you disable a flow log, you stop the flow logging for the associated NSG. But the flow log continues to exist as a resource, with all its settings and associations. You can enable it anytime to begin flow logging on the configured NSG.
+When you disable an NSG flow log, you stop the flow logging for the associated network security group. But the flow log continues to exist as a resource, with all its settings and associations. You can enable it anytime to begin flow logging on the configured network security group.
-For steps to disable and enable a flow logs, see [this how-to guide](./network-watcher-nsg-flow-logging-powershell.md).
+For steps to disable and enable NSG flow logs, see [Configure NSG flow logs](./network-watcher-nsg-flow-logging-powershell.md).
### Delete flow logs
-When you delete a flow log, you not only stop the flow logging for the associated NSG but also delete the flow log resource (with all its settings and associations). To begin flow logging again, you must create a new flow log resource for that NSG.
+When you delete an NSG flow log, you not only stop the flow logging for the associated network security group but also delete the flow log resource (with all its settings and associations). To begin flow logging again, you must create a new flow log resource for that network security group.
-You can delete a flow log by using [PowerShell](/powershell/module/az.network/remove-aznetworkwatcherflowlog), the [Azure CLI](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-delete), or the [REST API](/rest/api/network-watcher/flowlogs/delete). At this time, you can't delete flow logs from the Azure portal.
+You can delete a flow log using [PowerShell](/powershell/module/az.network/remove-aznetworkwatcherflowlog), the [Azure CLI](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-delete), or the [REST API](/rest/api/network-watcher/flowlogs/delete). At this time, you can't delete flow logs from the Azure portal.
-Also, when you delete an NSG, the associated flow log resource is deleted by default.
+When you delete a network security group, the associated flow log resource is deleted by default.
> [!NOTE]
-> To move an NSG to a different resource group or subscription, you must delete the associated flow logs. Just disabling the flow logs won't work. After you migrate an NSG, you must re-create the flow logs to enable flow logging on it.
+> To move a network security group to a different resource group or subscription, you must delete the associated flow logs. Just disabling the flow logs won't work. After you migrate a network security group, you must re-create the flow logs to enable flow logging on it.
## Considerations for NSG flow logs

### Storage account

-- **Location**: The storage account used must be in the same region as the NSG.
+- **Location**: The storage account used must be in the same region as the network security group.
- **Performance tier**: Currently, only standard-tier storage accounts are supported.
-- **Self-managed key rotation**: If you change or rotate the access keys to your storage account, NSG flow logs will stop working. To fix this problem, you must disable and then re-enable NSG flow logs.
+- **Self-managed key rotation**: If you change or rotate the access keys to your storage account, NSG flow logs stop working. To fix this problem, you must disable and then re-enable NSG flow logs.
### Cost

NSG flow logging is billed on the volume of logs produced. High traffic volume can result in a large flow-log volume and the associated costs.
-Pricing of NSG flow logs does not include the underlying costs of storage. Using the retention policy feature with NSG flow logs means incurring separate storage costs for extended periods of time.
+Pricing of NSG flow logs doesn't include the underlying costs of storage. Using the retention policy feature with NSG flow logs means incurring separate storage costs for extended periods of time.
If you want to retain data forever and don't want to apply any retention policy, set retention days to 0. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).

### User-defined inbound TCP rules
-NSGs are implemented as a [stateful firewall](https://en.wikipedia.org/wiki/Stateful_firewall?oldformat=true). But because of current platform limitations, user-defined rules that affect inbound TCP flows are implemented in a stateless way.
+Network security groups are implemented as [stateful firewalls](https://en.wikipedia.org/wiki/Stateful_firewall?oldformat=true). However, because of current platform limitations, user-defined rules that affect inbound TCP flows are implemented in a stateless way.
-Flows that user-defined inbound rules affect become non-terminating. Additionally, byte and packet counts are not recorded for these flows. Because of those factors, the number of bytes and packets reported in NSG flow logs (and Network Watcher traffic analytics) could be different from actual numbers.
+Flows that user-defined inbound rules affect become non-terminating. Additionally, byte and packet counts aren't recorded for these flows. Because of those factors, the number of bytes and packets reported in NSG flow logs (and Network Watcher traffic analytics) could be different from actual numbers.
-You can resolve this difference by setting the [FlowTimeoutInMinutes](/powershell/module/az.network/set-azvirtualnetwork) property on the associated virtual networks to a non-null value. You can achieve default stateful behavior by setting `FlowTimeoutInMinutes` to 4 minutes. For long-running connections where you don't want flows to disconnect from a service or destination, you can set `FlowTimeoutInMinutes` to a value of up to 30 minutes.
+You can resolve this difference by setting the `FlowTimeoutInMinutes` property on the associated virtual networks to a non-null value. You can achieve default stateful behavior by setting `FlowTimeoutInMinutes` to 4 minutes. For long-running connections where you don't want flows to disconnect from a service or destination, you can set `FlowTimeoutInMinutes` to a value of up to 30 minutes. Use [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to set the `FlowTimeoutInMinutes` property:
```powershell
-$virtualNetwork = Get-AzVirtualNetwork -Name VnetName -ResourceGroupName RgName
+$virtualNetwork = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
$virtualNetwork.FlowTimeoutInMinutes = 4
$virtualNetwork | Set-AzVirtualNetwork
```

### Inbound flows logged from internet IPs to VMs without public IPs
-VMs that don't have a public IP address associated with the NIC as an instance-level public IP, or that are part of a basic load balancer back-end pool, use [default SNAT](../load-balancer/load-balancer-outbound-connections.md). Azure assigns an IP address to those VMs to facilitate outbound connectivity. As a result, you might see flow log entries for flows from internet IP addresses, if the flow is destined to a port in the range of ports that are assigned for SNAT.
+Virtual machines (VMs) that don't have a public IP address associated with the NIC as an instance-level public IP, or that are part of a basic load balancer back-end pool, use [default SNAT](../load-balancer/load-balancer-outbound-connections.md). Azure assigns an IP address to those VMs to facilitate outbound connectivity. As a result, you might see flow log entries for flows from internet IP addresses, if the flow is destined to a port in the range of ports that are assigned for SNAT.
-Although Azure won't allow these flows to the VM, the attempt is logged and appears in the Network Watcher NSG flow log by design. We recommend that you explicitly block unwanted inbound internet traffic with an NSG.
+Although Azure doesn't allow these flows to the VM, the attempt is logged and appears in the Network Watcher NSG flow log by design. We recommend that you explicitly block unwanted inbound internet traffic with a network security group.
-### NSG on an ExpressRoute gateway subnet
+### Network security group on an ExpressRoute gateway subnet
We don't recommend that you log flows on an Azure ExpressRoute gateway subnet because traffic can bypass that type of gateway (for example, [FastPath](../expressroute/about-fastpath.md)). If an NSG is linked to an ExpressRoute gateway subnet and NSG flow logs are enabled, then outbound flows to virtual machines might not be captured. Such flows must be captured at the subnet or NIC of the VM.

### Traffic across a private link
-To log traffic while accessing platform as a service (PaaS) resources via private link, enable NSG flow logs on a subnet NSG that contains the private link. Because of platform limitations, only the traffic at all the source VMs can be captured. Traffic at the destination PaaS resource can't be captured.
+To log traffic while accessing platform as a service (PaaS) resources via private link, enable NSG flow logs on the network security group of the subnet that contains the private link. Because of platform limitations, only traffic at the source VMs can be captured. Traffic at the destination PaaS resource can't be captured.
-### Support for the Application Gateway V2 subnet NSG
+### Support for network security groups associated with the Application Gateway v2 subnet
-NSG flow logs on the Azure Application Gateway V2 subnet NSG are currently [not supported](../application-gateway/application-gateway-faq.yml#are-nsg-flow-logs-supported-on-nsgs-associated-to-application-gateway-v2-subnet). NSG flow logs on Application Gateway V1 are supported.
+NSG flow logs for network security groups associated with the Azure Application Gateway v2 subnet are currently [not supported](../application-gateway/application-gateway-faq.yml#are-nsg-flow-logs-supported-on-nsgs-associated-to-application-gateway-v2-subnet). NSG flow logs for network security groups associated with the Application Gateway v1 subnet are supported.
### Incompatible services
-Because of current platform limitations, a few Azure services don't support NSG flow logs. The current list of incompatible services is:
+Currently, these Azure services don't support NSG flow logs:
- [Azure Container Instances](https://azure.microsoft.com/services/container-instances/)
- [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)
- [Azure Functions](https://azure.microsoft.com/services/functions/)

> [!NOTE]
-> App services deployed under an Azure App Service plan don't support NSG flow logs. [Learn more](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works).
+> App services deployed under an Azure App Service plan don't support NSG flow logs. To learn more, see [How virtual network integration works](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works).
## Best practices

-- **Enable flow logs on critical subnets**: Flow logs should be enabled on all critical subnets in your subscription as an auditing and security best practice.
+- **Enable NSG flow logs on critical subnets**: Flow logs should be enabled on all critical subnets in your subscription as an auditing and security best practice.
-- **Enable flow logs on all NSGs attached to a resource**: Flow logs in Azure are configured on the NSG resources. A flow will be associated with only one NSG rule. In scenarios where you use multiple NSGs, we recommend enabling NSG flow logs on all NSGs applied at the resource's subnet or network interface to ensure that all traffic is recorded. For more information, see [How network security groups filter network traffic](../virtual-network/network-security-group-how-it-works.md).
+- **Enable NSG flow logs on all network security groups attached to a resource**: NSG flow logs are configured on network security groups. A flow is associated with only one network security group rule. In scenarios where you use multiple network security groups, we recommend enabling NSG flow logs on all network security groups applied at the resource's subnet and network interface (NIC) to ensure that all traffic is recorded. For more information, see [How network security groups filter network traffic](../virtual-network/network-security-group-how-it-works.md).
Here are a few common scenarios:
- - **Multiple NICs at a VM**: If multiple NICs are attached to a virtual machine, you must enable flow logs on all of them.
- - **NSG at both the NIC and subnet levels**: If an NSG is configured at the NIC level and the subnet level, you must enable flow logs at both NSGs. The exact sequence of rule processing by NSGs at NIC and subnet levels is platform dependent and varies from case to case. Traffic flows will be logged against the NSG that's processed last. The platform state changes the processing order. You have to check both of the flow logs.
- - **Azure Kubernetes Service (AKS) cluster subnet**: AKS adds a default NSG at the cluster subnet. You must enable flow logs on this default NSG.
+ - **Multiple NICs at a virtual machine**: If multiple NICs are attached to a virtual machine, you must enable flow logs on all of them.
+ - **Network security group at both the NIC and subnet levels**: If a network security group is configured at the NIC level and the subnet level, you must enable flow logs at both network security groups. The exact sequence of rule processing by network security groups at NIC and subnet levels is platform dependent and varies from case to case. Traffic flows are logged against the network security group that's processed last. The platform state changes the processing order. You have to check both of the flow logs.
+ - **Azure Kubernetes Service (AKS) cluster subnet**: AKS adds a default network security group at the cluster subnet. You must enable NSG flow logs on this network security group.
- **Storage provisioning**: Provision storage in tune with the expected volume of flow logs.
-- **Naming**: The NSG name must be up to 80 characters, and NSG rule names must be up to 65 characters. If the names exceed their character limits, they might be truncated during logging.
+- **Naming**: The network security group name can be up to 80 characters, and a network security group rule name can be up to 65 characters. If the names exceed their character limits, they might be truncated during logging.
## Troubleshooting common problems
-### I couldn't enable NSG flow logs
+### I can't enable NSG flow logs
-If you get an "AuthorizationFailed" or "GatewayAuthenticationFailed" error, you might not have enabled the **Microsoft.Insights** resource provider on your subscription. [Follow the instructions](./network-watcher-nsg-flow-logging-portal.md#register-insights-provider) to enable it.
+If you get an "AuthorizationFailed" or "GatewayAuthenticationFailed" error, you might not have enabled the **Microsoft.Insights** resource provider on your subscription. For more information, see [Register Insights provider](./network-watcher-nsg-flow-logging-portal.md#register-insights-provider).
### I enabled NSG flow logs but don't see data in my storage account

This problem might be related to:

-- **Setup time**: NSG flow logs can take up to 5 minutes to appear in your storage account (if they're configured correctly). A *PT1H.json* file will appear. You can access that file as described in [this article](./network-watcher-nsg-flow-logging-portal.md#download-flow-log).
+- **Setup time**: NSG flow logs can take up to 5 minutes to appear in your storage account (if they're configured correctly). A *PT1H.json* file appears. For more information, see [Download flow log](./network-watcher-nsg-flow-logging-portal.md#download-flow-log).
-- **Lack of traffic on your NSGs**: Sometimes you won't see logs because your VMs aren't active, or because upstream filters at Application Gateway or other devices are blocking traffic to your NSGs.
+- **Lack of traffic on your network security groups**: Sometimes you don't see logs because your virtual machines aren't active, or because upstream filters at Application Gateway or other devices are blocking traffic to your network security groups.
### I want to automate NSG flow logs
-Support for automation via Azure Resource Manager templates (ARM templates) is now available for NSG flow logs. For more information, read the [feature announcement](https://azure.microsoft.com/updates/arm-template-support-for-nsg-flow-logs/) and the [quickstart for configuring NSG flow logs by using an ARM template](quickstart-configure-network-security-group-flow-logs-from-arm-template.md).
+Support for automation via Azure Resource Manager templates (ARM templates) is now available for NSG flow logs. For more information, see [Configure network security group flow logs using an Azure Resource Manager (ARM) template](quickstart-configure-network-security-group-flow-logs-from-arm-template.md).
-## FAQ
+## Frequently asked questions (FAQ)
### What do NSG flow logs do?
-You can combine and manage Azure network resources through [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md). NSG flow logs enable you to log 5-tuple flow information about all traffic through your NSGs. The raw flow logs are written to an Azure Storage account. From there, you can further process, analyze, query, or export them as needed.
+NSG flow logs enable you to log 5-tuple flow information about all traffic passing through your network security groups. The raw flow logs are written to an Azure Storage account. From there, you can further process, analyze, query, or export them as needed.
-### Does using flow logs affect my network latency or performance?
+### Do flow logs affect my network latency or performance?
Flow log data is collected outside the path of your network traffic, so it doesn't affect network throughput or latency. You can create or delete flow logs without any risk of impact to network performance.
Flow log data is collected outside the path of your network traffic, so it doesn
To use a storage account behind a firewall, you have to provide an exception for trusted Microsoft services to access the storage account:
-1. Go to the storage account by typing the account's name in the global search on the portal or from the [Storage accounts page](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Storage%2FStorageAccounts).
+1. Go to the storage account by entering the account's name in the portal search.
1. In the **Networking** section, select **Firewalls and virtual networks** at the top of the page. Then make sure that the following items are configured:

   - For **Public network access**, select **Enabled from selected virtual networks and IP addresses**.
To use a storage account behind a firewall, you have to provide an exception for
- For **Exceptions**, select **Allow Azure services on the trusted services list to access this storage account**.
-1. Find your target NSG on the [overview page for NSG flow logs](https://portal.azure.com/#blade/Microsoft_Azure_Network/NetworkWatcherMenuBlade/flowLogs), and then enable NSG flow logs by using the previously configured storage account.
+1. On the NSG flow logs page, find your target network security group and then enable flow logging using the previously configured storage account.
Check the storage logs after a few minutes. You should see an updated time stamp or a new JSON file created.

### How do I use NSG flow logs with a storage account behind a service endpoint?
-NSG flow logs are compatible with service endpoints without requiring any extra configuration. For more information, see the [tutorial on enabling service endpoints in your virtual network](../virtual-network/tutorial-restrict-network-access-to-resources.md#enable-a-service-endpoint).
+NSG flow logs are compatible with service endpoints without requiring any extra configuration. For more information, see [Enable a service endpoint](../virtual-network/tutorial-restrict-network-access-to-resources.md#enable-a-service-endpoint).
### What's the difference between versions 1 and 2 of flow logs?
Version 2 of flow logs introduces the concept of *flow state* and stores informa
## Pricing
-NSG flow logs are charged per gigabyte of logs collected and come with a free tier of 5 GB/month per subscription. For the current pricing in your region, see the [Network Watcher pricing page](https://azure.microsoft.com/pricing/details/network-watcher/).
+NSG flow logs are charged per gigabyte of logs collected and come with a free tier of 5 GB/month per subscription. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+
-Storage of logs is charged separately. For relevant prices, see the [Pricing page for Azure Storage block blobs](https://azure.microsoft.com/pricing/details/storage/blobs/).
+Storage of logs is charged separately. For relevant prices, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
+
+## Next steps
+
+- Learn how to [Log network traffic to and from a virtual machine using the Azure portal](./network-watcher-nsg-flow-logging-portal.md)
+- Learn how to [Read NSG flow logs](./network-watcher-read-nsg-flow-logs.md)
openshift Howto Create Private Cluster 4X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-4x.md
In this article, you'll prepare your environment to create Azure Red Hat OpenShift clusters running OpenShift 4, and you'll learn how to:
> * Set up the prerequisites and create the required virtual network and subnets
> * Deploy a cluster with a private API server endpoint and a private ingress controller
-If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.30.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+If you choose to install and use the CLI locally, this tutorial requires that you're running the Azure CLI version 2.30.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Before you begin
A Red Hat pull secret enables your cluster to access Red Hat container registries along with additional content.
1. **[Go to your Red Hat OpenShift cluster manager portal](https://cloud.redhat.com/openshift/install/azure/aro-provisioned) and log in.**
- You will need to log in to your Red Hat account or create a new Red Hat account with your business email and accept the terms and conditions.
+ You'll need to log in to your Red Hat account or create a new Red Hat account with your business email and accept the terms and conditions.
2. **Click Download pull secret.**
Keep the saved `pull-secret.txt` file somewhere safe - it will be used in each cluster creation.
When running the `az aro create` command, you can reference your pull secret using the `--pull-secret @pull-secret.txt` parameter. Execute `az aro create` from the directory where you stored your `pull-secret.txt` file. Otherwise, replace `@pull-secret.txt` with `@<path-to-my-pull-secret-file>`.
-If you are copying your pull secret or referencing it in other scripts, your pull secret should be formatted as a valid JSON string.
+If you're copying your pull secret or referencing it in other scripts, your pull secret should be formatted as a valid JSON string.
### Create a virtual network containing two empty subnets
-Next, you will create a virtual network containing two empty subnets.
+Next, you'll create a virtual network containing two empty subnets.
1. **Set the following variables.**
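
   The variable values themselves are elided in this digest. A minimal sketch consistent with the variables used later in this article ($RESOURCEGROUP, $LOCATION, $CLUSTER); all values are hypothetical:

   ```azurecli-interactive
   RESOURCEGROUP=aro-rg          # resource group to hold the cluster resources
   LOCATION=eastus               # Azure region
   CLUSTER=aro-cluster           # name of the ARO cluster to create
   ```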
1. **Create a resource group**
- An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you are asked to specify a location. This location is where resource group metadata is stored, it is also where your resources run in Azure if you don't specify another region during resource creation. Create a resource group using the [az group create][az-group-create] command.
+ An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're asked to specify a location. This location is where resource group metadata is stored. It's also where your resources run in Azure if you don't specify another region during resource creation. Create a resource group using the [az group create][az-group-create] command.
```azurecli-interactive
az group create --name $RESOURCEGROUP --location $LOCATION
```
Run the following command to create a cluster. Optionally, you can [pass your Red Hat pull secret](#get-a-red-hat-pull-secret-optional), which enables your cluster to access Red Hat container registries along with additional content.

>[!NOTE]
-> If you are copy/pasting commands and using one of the optional parameters, be sure delete the initial hashtags and the trailing comment text. As well, close the argument on the preceding line of the command with a trailing backslash.
+> If you're copy/pasting commands and using one of the optional parameters, be sure to delete the initial hashtags and the trailing comment text. Also, close the argument on the preceding line of the command with a trailing backslash.
```azurecli-interactive
az aro create \
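    --resource-group $RESOURCEGROUP \
    --name $CLUSTER \
    --vnet aro-vnet \
    --master-subnet master-subnet \
    --worker-subnet worker-subnet \
    --apiserver-visibility Private \
    --ingress-visibility Private
# NOTE: The original argument list is elided in this digest. The flags above are
# a sketch of a typical private-cluster invocation; the VNet and subnet names
# (aro-vnet, master-subnet, worker-subnet) are assumptions, not original values.
```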
After you execute the `az aro create` command, it normally takes about 35 minutes to create a cluster.

> [!NOTE]
-> When attempting to create a cluster, if you receive an error message saying that your resource quota has been exceeded, see [Adding Quota to ARO account](https://mobb.ninja/docs/quickstart-aro.html#adding-quota-to-aro-account) to learn how to proceed.
+> When attempting to create a cluster, if you receive an error message saying that your resource quota has been exceeded, see [Adding Quota to ARO account](https://mobb.ninja/docs/quickstart-aro/#adding-quota-to-aro-account) to learn how to proceed.
>[!IMPORTANT]
> If you choose to specify a custom domain, for example **foo.example.com**, the OpenShift console will be available at a URL such as `https://console-openshift-console.apps.foo.example.com`, instead of the built-in domain `https://console-openshift-console.apps.<random>.<location>.aroapp.io`.
>
-> By default OpenShift uses self-signed certificates for all of the routes created on `*.apps.<random>.<location>.aroapp.io`. If you choose Custom DNS, after connecting to the cluster, you will need to follow the OpenShift documentation to [configure a custom certificate for your ingress controller](https://docs.openshift.com/container-platform/4.8/security/certificates/replacing-default-ingress-certificate.html) and [custom certificate for your API server](https://docs.openshift.com/container-platform/4.8/security/certificates/api-server.html).
+> By default OpenShift uses self-signed certificates for all of the routes created on `*.apps.<random>.<location>.aroapp.io`. If you choose Custom DNS, after connecting to the cluster, you'll need to follow the OpenShift documentation to [configure a custom certificate for your ingress controller](https://docs.openshift.com/container-platform/4.8/security/certificates/replacing-default-ingress-certificate.html) and [custom certificate for your API server](https://docs.openshift.com/container-platform/4.8/security/certificates/api-server.html).
### Create a private cluster without a public IP address (preview)
You can find the cluster console URL by running the following command.

>[!IMPORTANT]
-> In order to connect to a private Azure Red Hat OpenShift cluster, you will need to perform the following step from a host that is either in the Virtual Network you created or in a Virtual Network that is [peered](../virtual-network/virtual-network-peering-overview.md) with the Virtual Network the cluster was deployed to.
+> In order to connect to a private Azure Red Hat OpenShift cluster, you'll need to perform the following step from a host that is either in the Virtual Network you created or in a Virtual Network that is [peered](../virtual-network/virtual-network-peering-overview.md) with the Virtual Network the cluster was deployed to.
Launch the console URL in a browser and log in using the `kubeadmin` credentials.
```azurecli-interactive
apiServer=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv)
```

>[!IMPORTANT]
-> In order to connect to a private Azure Red Hat OpenShift cluster, you will need to perform the following step from a host that is either in the Virtual Network you created or in a Virtual Network that is [peered](../virtual-network/virtual-network-peering-overview.md) with the Virtual Network the cluster was deployed to.
+> In order to connect to a private Azure Red Hat OpenShift cluster, you'll need to perform the following step from a host that is either in the Virtual Network you created or in a Virtual Network that is [peered](../virtual-network/virtual-network-peering-overview.md) with the Virtual Network the cluster was deployed to.
Log in to the OpenShift cluster's API server using the following command. Replace **\<kubeadmin password>** with the password you just retrieved.
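
The command itself is elided in this digest; it typically takes the following form (a sketch):

```azurecli-interactive
oc login $apiServer -u kubeadmin -p <kubeadmin password>
```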
operator-nexus Concepts Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-observability.md
+
+ Title: "Azure Operator Nexus: observability using Azure Monitor"
+description: Operator Nexus uses Azure Monitor and collects and aggregates data in Azure Log Analytics workspace. The analysis, visualization, and alerting is performed on this collected data.
+ Last updated: 01/31/2023
+# Azure Operator Nexus observability
+
+The Operator Nexus observability framework provides operational insights into your on-premises instances.
+The framework supports logging, monitoring, and alerting (LMA), analytics, and visualization of operational (platform and workloads) data and metrics.
+
+Figure: Operator Nexus Logging, Monitoring and Alerting (LMA) Framework
+
+The key highlights of the Operator Nexus observability framework are:
+
+* **Centralized data collection**: The Operator Nexus observability solution collects all monitoring data in a central place, where you can observe data from all of your on-premises instances.
+* **Well-defined and tested tooling**: The solution relies on Azure Monitor, which collects, analyzes, and acts on telemetry data from your cloud and on-premises instances.
+* **Easy to learn and use**: The solution makes it easy for you to analyze and debug problems with the ability to search the data from within or across all of your cloud and on-premises instances.
+* **Visualization tools**: You create customized dashboards and workbooks per your needs.
+* **Integrated Alert tooling**: You create alerts based on custom thresholds. You can create and reuse alert templates across all of your instances.
+
+This article helps you understand the Operator Nexus observability framework, which consists of a stack of components:
+
+- Azure Monitor collects and aggregates logging data from the Operator Nexus components
+- Azure Log Analytics workspace collects and aggregates logging data from multiple Azure subscriptions and tenants
+- Analysis, visualization, and alerting are performed on the aggregated log data.
+
+## Platform Monitoring
+
+Operator Nexus gives you visibility into the performance of your deployments
+that consist of [infrastructure resources](./concepts-resource-types.md#platform-components).
+You need the logs and metrics from these platform resources to be collected and analyzed.
+You gain more valuable insights from the centralized collection and aggregation of data from all sources than from dis-aggregated data.
+
+These logs and metrics are used to observe the state of the platform. You can assess performance, analyze what's wrong, and determine what caused an issue. Visualization helps you decide which alerts to configure and under what conditions. For example, you can configure alerts to be generated when resources behave abnormally or when thresholds are reached. You can use the collected logs and analytics to debug any problems in the environment.
+
+### Monitoring Data
+
+Operator Nexus observability allows you to collect the same kind of data as other Azure
+resources. The data collected from each of your instances can be viewed in your Log Analytics workspace (LAW).
+
+For more information about monitoring data from Azure resources, see [Monitor Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data).
+
+### Collection and Routing
+
+Operator Nexus observability allows you to collect data for each infrastructure resource.
+The set of infrastructure components includes:
+
+* Network fabric that includes CEs, TORs, NPBs, management switches, and the terminal server.
+* Compute that includes Bare Metal Servers.
+* Undercloud control plane (the Kubernetes cluster responsible for deploying and managing the lifecycle of the overall platform).
+
+Collection of log data from these layers is enabled by default during the creation of your Operator Nexus
+instance. These collected logs are routed to your Azure Monitor Log
+Analytics Workspace.
+
+You can also collect data from the tenant layers
+created for running Containerized and Virtualized Network Functions. The log data that can be collected includes:
+
+* Collection of syslog from Virtual Machines (used for either VNFs or CNF workloads).
+* Collection of logs from AKS-Hybrid clusters and the applications deployed on top.
+
+You'll need to enable the collection of the logs from the tenant AKS-Hybrid clusters and Virtual Machines.
+Follow the steps to deploy the [Azure monitoring agents](/azure/azure-monitor/agents/agents-overview#install-the-agent-and-configure-data-collection). The data is collected in your Azure Log
+Analytics Workspace.
+
+### Operator Nexus Logs storage
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set
+of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields; see the [common schema](/azure/azure-monitor/essentials/resource-logs-schema#top-level-common-schema).
+
+The logs from Operator Nexus platform are stored in the following tables:
+
+| Table | Description |
+| - | -- |
+| Syslog | Syslog events on Linux computers using the Log Analytics agent |
+| ContainerInventory | Details and current state of each container. |
+| ContainerLog | Log lines collected from stdout and stderr streams for containers |
+| ContainerNodeInventory | Details of nodes that serve as container hosts. |
+| InsightMetrics | Metrics collected from Server, K8s, Containers. |
+| KubeEvents | Kubernetes events and their properties. |
+| KubeMonAgentEvents | Events logged by Azure Monitor Kubernetes agent for errors and warnings. |
+| KubeNodeInventory | Details for nodes that are part of Kubernetes cluster |
+| KubePodInventory | Kubernetes pods and their properties |
+| KubePVInventory | Kubernetes persistent volumes and their properties. |
+| KubeServices | Kubernetes services and their properties |
+| Heartbeat | Records logged by Log Analytics agents once per minute to report on agent health |
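+
+As an illustration, you can query these tables from the Azure CLI with `az monitor log-analytics query` (part of the `log-analytics` CLI extension); the workspace GUID below is a placeholder:
+
+```azurecli
+az monitor log-analytics query \
+  --workspace "<Log Analytics workspace GUID>" \
+  --analytics-query "Syslog | where TimeGenerated > ago(1h) | take 10" \
+  --output table
+```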
+
+#### Operator Nexus metrics
+
+The 'InsightMetrics' table in the Logs section contains the metrics collected from Bare Metal Machines and the undercloud Kubernetes cluster. In addition, a few selected metrics collected from the undercloud can be observed by opening the Metrics tab from the Azure Monitor menu.
+
+
+Figure: Azure Monitor Metrics Selection
+
+See **[Getting Started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)** for details on using this tool.
+
+#### Workbooks
+
+Workbooks combine text, log queries, metrics, and parameters for data analysis and the creation of multiple kinds of rich visualizations.
+You can use the sample Azure Resource Manager workbook templates for [Operator Nexus Logging and Monitoring](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services) to deploy Azure Workbooks within your Azure Log Analytics Workspace.
+
+#### Alerts
+
+You can use the sample Azure Resource Manager alarm templates for [Operator Nexus alerting rules](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services#alert-rules). You should specify thresholds and conditions for the alerts. You can then deploy these alert templates on your on-premises environment.
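+
+As a sketch, deploying one of these templates with the Azure CLI might look like the following; the template and parameter file names are placeholders, not values from the linked repository:
+
+```azurecli
+az deployment group create \
+  --resource-group "<resource-group>" \
+  --template-file "<alert-rule-template>.json" \
+  --parameters "@<alert-rule-parameters>.json"
+```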
+
+## Log Analytics workspace
+
+A [Log Analytics workspace (LAW)](/azure/azure-monitor/logs/log-analytics-workspace-overview)
+is a unique environment for log data from Azure Monitor and
+other Azure services. Each workspace has its own data repository and configuration but may
+combine data from multiple services. Each workspace consists of multiple data tables.
+
+You can create a single Log Analytics workspace to collect all relevant data, or multiple workspaces, depending on your requirements.
operator-nexus Concepts Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-resource-types.md
+
+ Title: Azure Operator Nexus resource types
+description: Operator Nexus platform and tenant resource types
+ Last updated: 01/25/2023
+# Azure Operator Nexus resource types
+
+This article introduces you to the Operator Nexus components represented as Azure resources in Azure Resource Manager.
+
+
+Figure: Resource model
+
+## Platform components
+
+Your Operator Nexus Cluster (or simply instance) platform components include the infrastructure resources and the platform software resources used to manage these infrastructure resources.
+
+### Network fabric controller
+
+The Network fabric Controller (NFC) is a resource that automates the life cycle management of all network devices (including storage appliance) deployed in an Operator Nexus instance.
+The NFC resource is created in the Resource group specified by you in your Azure subscription.
+NFC is hosted in a [Microsoft Azure Virtual Network](/azure/virtual-network/virtual-networks-overview) in an Azure region.
+The region should be connected to your on-premises network via [Microsoft Azure ExpressRoute](/azure/expressroute/expressroute-introduction).
+An NFC can manage the network fabric of many (subject to limits) Operator Nexus on-premises instances.
+
+### Network fabric
+
+The Network fabric resource models a collection of network devices, compute servers, and storage appliances, and their interconnections. The network fabric resource also includes the networking required for your Network Functions and workloads. Each Operator Nexus instance has one Network fabric.
+
+The Network fabric Controller (NFC) performs the lifecycle management of the network fabric.
+It configures and bootstraps the network fabric resources.
+
+### Cluster manager
+
+A Cluster Manager (CM) is hosted on Azure and manages the lifecycle of all on-premises clusters.
+Like NFC, a CM can manage multiple Operator Nexus instances.
+The CM and the NFC are hosted in the same Azure subscription.
+
+### Operator Nexus cluster
+
+An Operator Nexus cluster models a collection of racks, bare metal machines, storage, and networking.
+Each cluster (sometimes also referred to as an Operator Nexus instance) is mapped to the on-premises Network fabric. An Operator Nexus cluster provides a holistic view of the deployed capacity.
+Cluster capacity examples include the number of vCPUs, the amount of memory, and the amount of storage space. An Operator Nexus cluster is also the basic unit for compute and storage upgrades.
+
+### Network rack
+
+The Network rack consists of Consumer Edge (CE) routers, Top of Rack switches (ToRs), storage appliance, Network Packet Broker (NPB), and the Terminal Server.
+The rack also models the connectivity to the operator's Physical Edge switches (PEs) and the ToRs on the other racks.
+
+### Rack
+
+The Rack (or a compute rack) resource represents the compute servers (Bare Metal Machines), management servers, management switch and ToRs. The Rack is created, updated or deleted as part of the Cluster lifecycle management.
+
+### Storage appliance
+
+Storage Appliances represent storage arrays used for persistent data storage in the Operator Nexus instance. All user and consumer data is stored in these appliances local to your premises. This local storage complies with some of the most stringent local data storage requirements.
+
+### Bare Metal Machine
+
+Bare Metal Machines represent the physical servers in a rack. Their lifecycle is managed by the Cluster Manager.
+Bare Metal Machines are used by workloads to host Virtual Machines and AKS-Hybrid clusters.
+
+## Workload components
+
+Workload components are resources that you use in hosting your workloads.
+
+### Network resources
+
+The Network resources represent the virtual networking in support of your workloads hosted on VMs or AKS-Hybrid clusters.
+There are five Network resource types that represent a network attachment to an underlying isolation-domain.
+
+- **Cloud Services Network Resource**: provides VMs/AKS-Hybrid clusters access to cloud services such as DNS, NTP, and user-specified Azure PaaS services. You must create at least one Cloud Services Network in each of your Operator Nexus instances. Each Cloud Service Network can be reused by many VMs and/or AKS-Hybrid clusters.
+
+- **Default CNI Network Resource**: supports configuring of the AKS-Hybrid cluster network resources.
+
+- **Layer 2 Network Resource**: enables "East-West" communication between VMs or AKS-Hybrid clusters.
+
+- **Layer 3 Network Resource**: facilitates "North-South" communication between your VMs/AKS-Hybrid clusters and the external network.
+
+- **Trunked Network Resource**: provides a VM or an AKS-Hybrid cluster access to multiple layer 3 networks and/or multiple layer 2 networks.
+
+### Virtual machine
+
+You can use VMs to host your Virtualized Network Function (VNF) workloads.
+
+### AKS-hybrid cluster
+
+An AKS-Hybrid cluster is an Azure Kubernetes Service cluster modified to run on your on-premises Operator Nexus instance. The AKS-Hybrid cluster is designed to host your Containerized Network Function (CNF) workloads.
operator-nexus Howto Baremetal Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-functions.md
+
+ Title: "Azure Operator Nexus: Platform Functions for Bare Metal Machines"
+description: Learn how to manage Bare Metal Machines (BMM).
+ Last updated: 02/01/2023
+# Manage lifecycle of Bare Metal Machines
+
+This article describes how to perform lifecycle management operations on Bare Metal Machines (BMM). The commands to manage the lifecycle of the BMM include:
+
+- power-off
+- start the BMM
+- make the BMM unschedulable or schedulable
+- reinstall the BMM image
+
+When you want to reinstall or update the image, or replace the BMM, first make the BMM unschedulable; in these cases, you also need to evacuate existing workloads. If you only need to prevent new workloads from being scheduled on a BMM, make it unschedulable without evacuating the workloads.
+
+Make your BMM schedulable for it to be used.
+
+## Prerequisites
+
+1. Install the latest version of the
+ [appropriate CLI extensions](./howto-install-cli-extensions.md)
+1. Ensure that the target bare metal machine (server) is `powered-on` and has its `readyState` set to True
+1. Get the name of the Resource group that you created for the `network cloud cluster resource`
+
+## Power-off bare metal machines
+
+This command will `power-off` the specified `bareMetalMachineName`.
+
+```azurecli
+ az networkcloud baremetalmachine power-off --name "bareMetalMachineName" \
+ --resource-group "resourceGroupName"
+```
+
+## Start bare metal machine
+
+This command will `start` the specified `bareMetalMachineName`.
+
+```azurecli
+ az networkcloud baremetalmachine start --name "bareMetalMachineName" \
+ --resource-group "resourceGroupName"
+```
+
+## Make a BMM unschedulable (cordon)
+
+You can make a BMM unschedulable by executing the [`cordon`](#make-a-bmm-unschedulable-cordon) command.
+On execution of the `cordon` command,
+Kubernetes won't schedule any new pods on the BMM; any attempt to create a pod on a `cordoned`
+BMM results in the pod being set to `pending` state. Existing pods continue to run.
+The `cordon` command supports an `evacuate` parameter, which defaults to `false`.
+On executing the `cordon` command with the value `true` for the `evacuate`
+parameter, the pods currently running on the BMM are stopped and set to `pending` state.
+
+```azurecli
+ az networkcloud baremetalmachine cordon \
+ --evacuate "True" \
+ --name "bareMetalMachineName" \
+ --resource-group "resourceGroupName"
+```
+
+`evacuate "True"` removes pods from that node, while `evacuate "False"` only prevents the scheduling of new pods.
+
+## Make a BMM schedulable (uncordon)
+
+You can make a BMM `schedulable` (usable) by executing the [`uncordon`](#make-a-bmm-schedulable-uncordon) command. All pods in `pending`
+state on the BMM are restarted when the BMM is `uncordoned`.
+
+```azurecli
+ az networkcloud baremetalmachine uncordon \
+ --name "bareMetalMachineName" \
+ --resource-group "resourceGroupName"
+```
+
+## Reimage a BMM (reinstall a BMM image)
+
+The `reimage` command **reinstalls** the existing BMM image; it doesn't install a new image.
+Make sure the BMM's workloads are drained using the [`cordon`](#make-a-bmm-unschedulable-cordon)
+command, with `evacuate "True"`, prior to executing the `reimage` command.
+
+```azurecli
+az networkcloud baremetalmachine reimage --name "bareMetalMachineName" \
+ --resource-group "resourceGroupName"
+```
+
+You should [uncordon](#make-a-bmm-schedulable-uncordon) the BMM on completion of the `reimage` command.
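+
+Putting these operations together, a typical reimage sequence is:
+
+```azurecli
+# Drain workloads, reinstall the image, then return the machine to service.
+az networkcloud baremetalmachine cordon --evacuate "True" \
+    --name "bareMetalMachineName" --resource-group "resourceGroupName"
+az networkcloud baremetalmachine reimage \
+    --name "bareMetalMachineName" --resource-group "resourceGroupName"
+az networkcloud baremetalmachine uncordon \
+    --name "bareMetalMachineName" --resource-group "resourceGroupName"
+```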
operator-nexus Howto Cluster Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-cluster-manager.md
+
+ Title: How to guide for running commands for Cluster Manager on Azure Operator Nexus
+description: Learn to create, view, list, update, delete commands for Cluster Manager on Operator Nexus
+ Last updated: 01/23/2023
+# Cluster Manager
+
+You may need to upgrade or patch a Cluster Manager, or examine or change its properties. When all Operator Nexus instances have been deleted from your sites, you may need to delete the Cluster Manager.
+In this how-to guide you'll learn to manage a Cluster Manager. The `networkcloud` resource provider supports `create`, `list` (also known as read), `show`, `update` and `delete` operations on the `clustermanager` resource.
+
+## Before you begin
+
+You'll need:
+
+- **Azure subscription ID** - The Azure subscription ID where the Cluster Manager is created or already exists (this should be the same subscription ID as the Network fabric Controller).
+- **Fabric Controller ID** - Network fabric Controller and Cluster Manager have a 1:1 association. You'll need the resource ID of the Network fabric Controller associated with the Cluster Manager. Both the Cluster Manager and Fabric Controller need to be in your same Resource group.
+- **analytics workspace ID** - The resource ID of the Log Analytics workspace used for the logs collection.
+- **Azure Region** - The Cluster Manager should be created in the same region as the Network fabric Controller.
+This Azure region should be used in the `Location` field of the Cluster Manager and all associated Operator Nexus instances.
+
+## Prerequisites: install CLI extensions
+
+Install the latest version of the [appropriate CLI extensions](./howto-install-cli-extensions.md)
+
+## Global arguments
+
+Some arguments are available for every Azure CLI command:
+
+- **--debug** - prints even more information about CLI operations, used for debugging purposes. If you find a bug, provide output generated with the `--debug` flag on when submitting a bug report.
+- **--help -h** - prints CLI reference information about commands and their arguments and lists available subgroups and commands.
+- **--only-show-errors** - Only show errors, suppressing warnings.
+- **--output -o** - specifies the output format. The available output formats are json, jsonc (colorized JSON), tsv (tab-separated values), table (human-readable ASCII tables), and yaml. By default, the CLI outputs json.
+- **--query** - uses the JMESPath query language to filter the output returned from Azure services.
+- **--verbose** - prints information about resources created in Azure during an operation, and other useful information
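+
+For example, the following invocation combines several of these arguments to list only the names of your Cluster Managers as a table (the resource group name is a placeholder):
+
+```azurecli
+az networkcloud clustermanager list \
+ --resource-group "<Resource Group Name>" \
+ --query "[].name" \
+ --output table
+```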
+
+## Cluster manager elements
+
+| Elements | Description |
+| | - |
+| Name, ID, location, tags, type | Name: User friendly name <br> ID: < Resource ID > <br> Location: Azure region where the Cluster Manager is created. Values from: `az account list-locations`.<br> Tags: Resource tags <br> Type: Microsoft.NetworkCloud/clusterManagers |
+| managerExtendedLocation | The ExtendedLocation associated with the Cluster Manager |
+| managedResourceGroupConfiguration | Information about the Managed Resource Group |
+| fabricControllerId | A reference to the fabric controller that is 1:1 with this Cluster Manager |
+| analyticsWorkspaceId | The workspace where any logs relevant to the customer are relayed |
+| clusterVersions[] | List of ClusterAvailableVersions objects. <br> Cluster versions that the manager supports. Used as an input in the cluster's clusterVersion property. |
+| provisioningState | Succeeded, Failed, Canceled, Provisioning, Accepted, Updating |
+| detailedStatus | Detailed statuses that provide additional information about the status of the Cluster Manager. |
+| detailedStatusMessage | Descriptive message about the current detailedStatus. |
+
+## Create a cluster manager
+
+Use the `az networkcloud clustermanager create` command to create a Cluster Manager. This command creates a new Cluster Manager or updates the properties of the Cluster Manager if it exists. If you have multiple Azure subscriptions, select the appropriate subscription ID using the [az account set](/cli/azure/account#az-account-set) command.
+
+```azurecli
+az networkcloud clustermanager create \
+ --name <Cluster Manager name> \
+ --location <region> \
+ --analytics-workspace-id <log analytics workspace ID> \
+ --fabric-controller-id <Fabric controller ID associated with this Cluster Manager> \
+ --managed-resource-group-configuration <name=<Managed Resource group Name> location=<Managed Resource group location>> \
+ --tags <key=value key=value> \
+ --resource-group <Resource Group Name> \
+ --subscription <subscription ID>
+```
+
+The command returns 200 OK on successful creation or update. 201 Created is returned when the operation is performed asynchronously.
+
+- **Arguments**
+ - **--name -n [Required]** - The name of the Cluster Manager.
+ - **--fabric-controller-id [Required]** - The resource ID of the Network fabric Controller that is associated with the Cluster Manager.
+ - **--resource-group -g [Required]** - Name of resource group. You can configure the default resource group using `az configure --defaults group=<name>`.
+ - **--analytics-workspace-id** - The resource ID of the Log Analytics workspace that is used for the logs collection
+ - **--location -l** - Location. Azure region where the Cluster Manager is created. Values from: `az account list-locations`. You can configure the default location using `az configure --defaults location=<location>`.
+ - **--managed-resource-group-configuration** - The configuration of the managed resource group associated with the resource.
+ - Usage: --managed-resource-group-configuration location=XX name=XX
+ - location: The region of the managed resource group. If not specified, the region of the
+ parent resource is chosen.
+ - name: The name for the managed resource group. If not specified, a unique name is
+ automatically generated.
+ - **--no-wait** - Don't wait for the long-running operation to finish.
+ - **--tags** - Space-separated tags: key[=value] [key[=value]...]. Use '' to clear existing tags
+ - **--subscription** - Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
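+
+As an illustration, a filled-in create command might look like this sketch; every name and ID below is hypothetical:
+
+```azurecli
+az networkcloud clustermanager create \
+ --name "clusterManagerName" \
+ --location "eastus" \
+ --fabric-controller-id "/subscriptions/<subscription ID>/resourceGroups/<Resource Group Name>/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/<NFC name>" \
+ --resource-group "<Resource Group Name>"
+```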
+
+## List/show cluster manager(s)
+
+List and show commands are used to get a list of existing Cluster Managers or the properties of a specific Cluster Manager.
+
+### List cluster managers in resource group
+
+This command lists the Cluster Managers in the specified Resource group.
+
+```azurecli
+az networkcloud clustermanager list --resource-group <Azure Resource group>
+```
+
+### List cluster managers in subscription
+
+This command lists the Cluster Managers in the specified subscription.
+
+```azurecli
+az networkcloud clustermanager list --subscription <subscription ID>
+```
+
+### Show cluster manager properties
+
+This command lists the properties of the specified Cluster Manager.
+
+```azurecli
+az networkcloud clustermanager show \
+ --name <Cluster Manager name> \
+ --resource-group <Resource group Name> \
+ --subscription <subscription ID>
+```
+
+### List/show command arguments
+
+- **--name -n** - The name of the Cluster Manager.
+- **--ids** - One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource ID' arguments.
+- **--resource-group -g** - Name of resource group. You can configure the default group using `az configure --defaults group=<name>`.
+- **--subscription** - Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+
+## Update cluster manager
+
+This command is used to patch properties of the provided Cluster Manager, or update the tags assigned to it. Properties and tag updates can be done independently.
+
+```azurecli
+az networkcloud clustermanager update \
+ --name <Cluster Manager name> \
+ --tags <key1=value1 key2=value2> \
+ --resource-group <Resource group Name> \
+ --subscription <subscription ID>
+```
+
+- **Arguments**
+ - **--tags** - Space-separated tags: key[=value] [key[=value] ...]. Use '' to clear existing tags.
+ - **--name -n** - The name of the Cluster Manager.
+ - **--ids** - One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource ID' arguments.
+ - **--resource-group -g** - Name of resource group. You can configure the default group using `az configure --defaults group=<name>`.
+ - **--subscription** - Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
+
+## Delete cluster manager
+
+This command is used to delete the provided Cluster Manager.
+A Cluster Manager can't be deleted if it still has an associated Network fabric Controller or if any Clusters reference it.
+
+```azurecli
+az networkcloud clustermanager delete \
+ --name <Cluster Manager name> \
+ --resource-group <Resource Group Name> \
+ --subscription <subscription ID>
+```
+
+- **Arguments**
+ - **--no-wait** - Don't wait for the long-running operation to complete.
+ - **--yes -y** - Don't prompt for confirmation.
+ - **--name -n** - The name of the Cluster Manager.
+ - **--ids** - One or more resource IDs (space-delimited). It should be a complete resource ID containing all information of 'Resource ID' arguments.
+ - **--resource-group -g** - Name of resource group. You can configure the default group using `az configure --defaults group=<name>`.
+ - **--subscription** - Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`.
operator-nexus Howto Configure Isolation Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-isolation-domain.md
+
+ Title: "Azure Operator Nexus: How to configure the L2 and L3 isolation-domains in Operator Nexus instances"
+description: Learn the create, view, list, update, and delete commands for Layer 2 and Layer 3 isolation-domains in Operator Nexus instances
+ Last updated: 02/02/2023
+# Configure L2 and L3 isolation-domains using managed network fabric services
+
+The isolation-domains enable communication between workloads hosted in the same rack (intra-rack communication) or different racks (inter-rack communication).
+This how-to describes how you can manage your Layer 2 and Layer 3 isolation-domains using the Azure CLI. You can create, update, delete, and check the status of Layer 2 and Layer 3 isolation-domains.
+
+## Prerequisites
+
+1. Ensure that a Network fabric Controller and a Network fabric have been created.
+1. Install the latest version of the
+[necessary CLI extensions](./howto-install-cli-extensions.md).
+
+### Sign-in to your Azure account
+
+Sign in to your Azure account and set the subscription to your Azure subscription ID. This ID should be the same subscription ID used across all Operator Nexus resources.
+
+```azurecli
+ az login
+ az account set --subscription ********-****-****-****-*********
+```
+
+### Register providers for managed network fabric
+
+1. In Azure CLI, enter the command: `az provider register --namespace Microsoft.ManagedNetworkFabric`
+1. Monitor the registration process. Registration may take up to 10 minutes: `az provider show -n Microsoft.ManagedNetworkFabric -o table`
+1. Once registered, you should see the `RegistrationState` change to `Registered`: `az provider show -n Microsoft.ManagedNetworkFabric -o table`.
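+
+For reference, the same registration commands as a single block:
+
+```azurecli
+az provider register --namespace Microsoft.ManagedNetworkFabric
+az provider show -n Microsoft.ManagedNetworkFabric -o table
+```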
+
+You'll create isolation-domains to enable layer 2 and layer 3 connectivity between workloads hosted on an Operator Nexus instance.
+
+> [!NOTE]
+> Operator Nexus reserves VLANs <=500 for Platform use, and therefore VLANs in this range can't be used for your (tenant) workload networks. You should use VLAN values between 501 and 4095.
+
+## Parameters for isolation-domain management
+
+| Parameter| Description|
+| :--| :--|
+| vlan-id | VLAN identifier value. VLANs 1-500 are reserved and can't be used. The VLAN identifier value can't be changed once specified. The isolation-domain must be deleted and recreated if the VLAN identifier value needs to be modified. |
+| administrativeState | Indicate administrative state of the isolation-domain |
+| provisioningState | Indicates provisioning state |
+| subscriptionId | Your Azure subscriptionId for your Operator Nexus instance. |
+| resourceGroupName | Use the corresponding NFC resource group name |
+| resource-name | Resource Name of the isolation-domain |
+| nf-id | ARM ID of the Network fabric |
+| location | Azure region where the resource is being created |
+
+## L2 isolation-domain
+
+You use an L2 isolation-domain to establish layer 2 connectivity between workloads running on Operator Nexus compute nodes.
+
+### Create L2 isolation-domain
+
+Create an L2 isolation-domain:
+
+```azurecli
+az nf l2domain create \
+--resource-group "NFresourcegroupname" \
+--resource-name "example-l2domain" \
+--location "eastus" \
+--nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/NetworkFabrics/NFname" \
+--vlan-id 501 \
+--mtu 1500
+```
+
+Expected output:
+
+```json
+{
+ "administrativeState": "Disabled",
+ "annotation": null,
+ "disabledOnResources": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCresourcegroupname/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
+ "location": "eastus2euap",
+ "mtu": 1500,
+ "name": "example-l2domain",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFresourcegroupname",
+ "systemData": {
+ "createdAt": "2022-11-02T05:59:00.534027+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-02T05:59:00.534027+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/l2isolationdomains",
+ "vlanId": 501
+}
+```
+
+### Show L2 isolation-domains
+
+This command shows the details and administrative state of an L2 isolation-domain.
+
+```azurecli
+az nf l2domain show --resource-group "resourcegroupname" --resource-name "example-l2domain"
+```
+
+Expected Output
+
+```json
+{
+ "administrativeState": "Disabled",
+ "annotation": null,
+ "disabledOnResources": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCresourcegroupname/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
+ "location": "eastus2euap",
+ "mtu": 1500,
+ "name": "example-l2domain",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFCresourcegroupname",
+ "systemData": {
+ "createdAt": "2022-11-02T05:59:00.534027+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-02T05:59:00.534027+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/l2isolationdomains",
+ "vlanId": 2026
+}
+```
+
+### List all L2 isolation-domains
+
+This command lists all L2 isolation-domains available in the resource group.
+
+```azurecli
+az nf l2domain list --resource-group "resourcegroupname"
+```
+
+Expected Output
+
+```json
+[
+ {
+ "administrativeState": "Enabled",
+ "annotation": null,
+ "disabledOnResources": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCresourcegroupname/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
+ "location": "eastus",
+ "mtu": 1500,
+ "name": "example-l2domain",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFCresourcegroupname",
+ "systemData": {
+ "createdAt": "2022-10-24T22:26:33.065672+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-10-26T14:46:45.753165+00:00",
+ "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "lastModifiedByType": "Application"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/l2isolationdomains",
+ "vlanId": 501
+ },
+ {
+ "administrativeState": "Enabled",
+ "annotation": null,
+ "disabledOnResources": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCresourcegroupname/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
+ "location": "eastus",
+ "mtu": 1500,
+ "name": "example-l2domain",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFCresourcegroupname",
+ "systemData": {
+ "createdAt": "2022-10-27T03:03:15.099007+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-10-27T03:45:31.864152+00:00",
+ "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "lastModifiedByType": "Application"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/l2isolationdomains",
+ "vlanId": 501
+ }
+]
+```
+
+### Enable/disable L2 isolation-domain
+
+This command is used to change the administrative state of the isolation-domain.
+
+**Note:**
+The layer 2 isolation-domain configuration is pushed to the Network fabric devices only after the isolation-domain is enabled.
+
+```azurecli
+az nf l2domain update-admin-state --resource-group "NFCresourcegroupname" --resource-name "example-l2domain" --state Enable/Disable
+```
+
+Expected Output
+
+```json
+{
+ "administrativeState": "Enabled",
+ "annotation": null,
+ "disabledOnResources": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCresourcegroupname/providers/Microsoft.ManagedNetworkFabric/l2IsolationDomains/example-l2domain",
+ "location": "eastus2euap",
+ "mtu": 1500,
+ "name": "example-l2domain",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFCresourcegroupname",
+ "systemData": {
+ "createdAt": "2022-11-02T05:59:00.534027+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-02T06:01:03.552772+00:00",
+ "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "lastModifiedByType": "Application"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/l2isolationdomains",
+ "vlanId": 501
+}
+```
+
+### Delete L2 isolation-domain
+
+This command is used to delete an L2 isolation-domain.
+
+```azurecli
+az nf l2domain delete --resource-group "resourcegroupname" --resource-name "example-l2domain"
+```
+
+Expected output:
+
+```output
+Please use show or list command to validate that isolation-domain is deleted. Deleted resources will not appear in result
+```
+
+## L3 isolation-domain
+
+Layer 3 isolation-domain enables layer 3 connectivity between workloads running on Operator Nexus compute nodes.
+The L3 isolation-domain enables the workloads to exchange layer 3 information with Network fabric devices.
+
+A Layer 3 isolation-domain has two components: internal and external networks.
+At least one internal network is required.
+The internal networks define layer 3 connectivity between NFs running in Operator Nexus compute nodes and an optional external network.
+The external network provides connectivity between the internet and the internal networks via your PEs.
+
+L3 isolation-domain enables deploying workloads that advertise service IPs to the fabric via BGP.
+Fabric ASN refers to the ASN of the network devices on the Fabric. The Fabric ASN was specified while creating the Network fabric.
+Peer ASN refers to ASN of the Network Functions in Operator Nexus, and it can't be the same as Fabric ASN.
+
+The workflow for a successful provisioning of an L3 isolation-domain is as follows:
+ - Create an L3 isolation-domain
+ - Create one or more internal networks
+ - Enable the L3 isolation-domain
+
+To make changes to the L3 isolation-domain, first disable it (set its administrative state to Disabled). Re-enable the L3 isolation-domain once the changes are complete (a command sketch follows this list):
+ - Disable the L3 isolation-domain
+ - Make changes to the L3 isolation-domain
+ - Re-enable the L3 isolation-domain
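+
+The enable/disable command for L3 isolation-domains isn't shown in this excerpt; by analogy with the L2 command shown earlier, it's a sketch along these lines (the exact subcommand is an assumption):
+
+```azurecli
+# Assumed to mirror `az nf l2domain update-admin-state`; verify against your CLI extension version.
+az nf l3domain update-admin-state \
+--resource-group "NFCresourcegroupname" \
+--resource-name "example-l3domain" \
+--state Enable
+```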
+
+The procedure to show, enable/disable, and delete IPv6-based isolation-domains is the same as that used for IPv4.
+
+### Create L3 isolation-domain
+
+You can create the L3 isolation-domain:
+
+```azurecli
+az nf l3domain create \
+--resource-group "NFCresourcegroupname" \
+--resource-name "example-l3domain" \
+--location "eastus" \
+--nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/NetworkFabrics/NFName" \
+--external '{"optionBConfiguration": {"importRouteTargets": ["1234:1235"], "exportRouteTargets": ["1234:1234"]}}'
+```
+
+> [!NOTE]
+> For MPLS Option 10 (B) connectivity to external networks via PE devices, you can specify option (B) parameters while creating an isolation-domain.
+
+Expected Output
+
+```json
+{
+ "administrativeState": "Disabled",
+ "annotation": null,
+ "description": null,
+ "disabledOnResources": null,
+ "external": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
+ "internal": null,
+ "location": "eastus2euap",
+ "name": "example-l3domain",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "optionBDisabledOnResources": null,
+ "provisioningState": "Accepted",
+ "resourceGroup": "resourcegroupname",
+ "systemData": {
+ "createdAt": "2022-11-02T06:23:43.372461+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-02T06:23:43.372461+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/l3isolationdomains"
+}
+```
+
+### Show L3 isolation-domains
+
+You can get the L3 isolation-domains details and administrative state.
+
+```azurecli
+az nf l3domain show --resource-group "resourcegroupname" --resource-name "example-l3domain"
+```
+
+Expected Output
+
+```json
+{
+ "administrativeState": "Disabled",
+ "annotation": null,
+ "description": null,
+ "disabledOnResources": null,
+ "external": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
+ "internal": null,
+ "location": "eastus2euap",
+ "name": "example-l3domain",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "optionBDisabledOnResources": null,
+ "provisioningState": "Accepted",
+ "resourceGroup": "resourcegroupname",
+ "systemData": {
+ "createdAt": "2022-11-02T06:23:43.372461+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-02T06:23:43.372461+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/l3isolationdomains"
+}
+```
+
+### List all L3 isolation-domains
+
+You can get a list of all L3 isolation-domains available in a resource group.
+
+```azurecli
+az nf l3domain list --resource-group "resourcegroupname"
+```
+
+Expected Output
+
+```json
+{
+ "administrativeState": "Disabled",
+ "annotation": null,
+ "description": null,
+ "disabledOnResources": null,
+ "external": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
+ "internal": null,
+ "location": "eastus2euap",
+ "name": "example-l3domain",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "optionBDisabledOnResources": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "resourcegroupname",
+ "systemData": {
+ "createdAt": "2022-11-02T06:23:43.372461+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-02T06:23:43.372461+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/l3isolationdomains"
+}
+```
+
+Once the isolation-domain is created successfully, the next step is to create an internal network.
+
+## Internal network creation
+
+| Parameter | Description | Example | Required | type |
+| : | :-- | :- | :- | :-- |
+| vlanId | VLAN identifier | 1001 | True | string |
+| connectedIPv4Subnets/Prefix | IP subnet used by the HAKS cluster's workloads | 10.0.0.0/24 | True | string |
+| connectedIPv4Subnets/gateway | IPv4 subnet gateway used by the HAKS cluster's workloads | 10.0.0.1 | True | string |
+| staticIPv4Routes/Prefix | IPv4 Prefix of the static route | NA | | |
+| staticIPv4Routes/nexthop | IPv4 next hop address | NA | | |
+| defaultRouteOriginate | Enables the default route to be originated when advertising routes via BGP | True/False | | |
+| fabricASN | ASN of Network fabric | 65048 | True | string |
+| peerASN | Peer ASN of Network Function | 65047 | True | string |
+| IPv4Prefix | IPv4 Prefix of NFs for BGP peering (range).<br />The maximum length of the prefix is /28. For example, in 10.1.0.0/28, 10.1.0.0 to 10.1.0.7 are reserved and can't be used by workloads. 10.1.0.1 is assigned as VIP on both CEs. 10.1.0.2 is assigned on CE1 and 10.1.0.3 is assigned on CE2.<br />Workloads must peer to CE1 and CE2. The IP addresses of workloads can start from 10.1.0.8.<br />When only the prefix is configured, and `ipv4NeighborAddresses` isn't specified, the fabric configures the valid addresses in the prefix as part of the listen range. If `ipv4NeighborAddresses` is specified, the fabric configures the specified addresses as neighbors.<br />A smaller subnet (a longer prefix, for example /29 or /30) can also be configured | NA | |
+
+This command creates an internal network with BGP configuration and specified peering address.
+
+**Note:** You need to create an internal network before you enable an L3 isolation-domain.
+
+```azurecli
+az nf internalnetwork create \
+--resource-group "resourcegroupname" \
+--l3-isolation-domain-name "example-l3domain" \
+--resource-name "example-internalnetwork" \
+--location "eastus"
+--vlan-id 1001 \
+--connected-ipv4-subnets '[{"prefix":"10.0.0.0/24", "gateway":"10.0.0.1"}]' \
+--mtu 1500 \
+--bgp-configuration '{"fabricASN": 65048, "defaultRouteOriginate":true, "peerASN": 65047 ,"ipv4NeighborAddress":[{"address": "10.0.0.11"}]}'
+
+```
+
+Expected Output
+
+```json
+{
+ "administrativeState": "Enabled",
+ "annotation": null,
+ "bfdDisabledOnResources": null,
+ "bfdForStaticRoutesDisabledOnResources": null,
+ "bgpConfiguration": {
+ "annotation": null,
+ "bfdConfiguration": null,
+ "defaultRouteOriginate": false,
+ "fabricAsn": 65048,
+ "ipv4NeighborAddress": [
+ {
+ "address": "10.0.0.11",
+ "operationalState": null
+ }
+ ],
+ "ipv4Prefix": null,
+ "ipv6NeighborAddress": null,
+ "ipv6Prefix": null,
+ "peerAsn": 65047
+ },
+ "bgpDisabledOnResources": null,
+ "connectedIPv4Subnets": [
+ {
+ "annotation": null,
+ "gateway": null,
+ "prefix": "10.0.0.0/24"
+ }
+ ],
+ "connectedIPv6Subnets": null,
+ "disabledOnResources": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain/internalNetworks/example-internalnetwork",
+ "mtu": 1500,
+ "name": "example-internalnetwork",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "resourcegroupname",
+ "staticRouteConfiguration": null,
+ "systemData": {
+ "createdAt": "2022-11-02T06:25:05.983557+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-02T06:25:05.983557+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "type": "microsoft.managednetworkfabric/l3isolationdomains/internalnetworks",
+ "vlanId": 1001
+}
+```
+
+**Note:** This command creates an internal network where the BGP speakers of the NFs will be in the range 10.0.0.8 through 10.0.0.15.
+
+```azurecli
+az nf internalnetwork create \
+--resource-group "resourcegroupname" \
+--l3-isolation-domain-name "example-l3domain" \
+--name "example-internalnetwork" \
+--vlan-id 1000 \
+--connected-ipv4-subnets '[{"prefix":"10.0.0.0/24", "gateway":"10.0.0.1"}]' \
+--mtu 1500 \
+--bgp-configuration '{"fabricASN": 65048, "defaultRouteOriginate":true, "peerASN": 5001 ,"ipv4Prefix": "10.0.0.0/28"}'
+
+```
+
+### Internal network creation using IPv6
+
+```azurecli
+az nf internalnetwork create \
+--resource-group "resourcegroupname" \
+--l3-isolation-domain-name "example-l3domain" \
+--resource-name "example-internalipv6network" \
+--location "eastus"
+--vlan-id 1090 \
+--connected-ipv6-subnets '[{"prefix":"10:101:1::0/64", "gateway":"10:101:1::1"}]'
+ --mtu 1500
+ --bgp-configuration '{"fabricASN": 65048, "defaultRouteOriginate":true, "peerASN": 65020 ,"ipv6NeighborAddress":[{"address": "10:101:1::11"}]}
+```
+
+Expected Output
+
+```json
+{
+ "administrativeState": "Enabled",
+ "annotation": null,
+ "bfdDisabledOnResources": null,
+ "bfdForStaticRoutesDisabledOnResources": null,
+ "bgpConfiguration": {
+ "annotation": null,
+ "bfdConfiguration": null,
+ "defaultRouteOriginate": true,
+ "fabricAsn": 65048,
+ "ipv4NeighborAddress": null,
+ "ipv4Prefix": null,
+ "ipv6NeighborAddress": [
+ {
+ "address": "10:101:1::11",
+ "operationalState": null
+ }
+ ],
+ "ipv6Prefix": null,
+ "peerAsn": 65020
+ },
+ "bgpDisabledOnResources": null,
+ "connectedIPv4Subnets": null,
+ "connectedIPv6Subnets": [
+ {
+ "annotation": null,
+ "gateway": "10:101:1::1",
+ "prefix": "10:101:1::0/64"
+ }
+ ],
+ "disabledOnResources": null,
+ "exportRoutePolicyId": null,
+ "id": "/subscriptions//xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx7/resourceGroups/fab1nfrg121322/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/fab1-l3domain121822/internalNetworks/fab1-internalnetworkv16",
+ "importRoutePolicyId": null,
+ "mtu": 1500,
+ "name": "example-internalipv6network",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "resourcegroupname",
+ "staticRouteConfiguration": null,
+ "systemData": {
+ "createdAt": "2022-12-15T12:10:34.364393+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-12-15T12:10:34.364393+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "type": "microsoft.managednetworkfabric/l3isolationdomains/internalnetworks",
+ "vlanId": 1090
+}
+```
+
+## External network creation
+
+This command creates an External network using Azure CLI.
+
+**Note:** For Option A, you need to create an external network before you enable the L3 isolation-domain.
+An external network depends on an internal network, so an external network can't be enabled without an internal network.
+The vlan-id value should be between 501 and 4095.
+
+```azurecli
+az nf externalnetwork create \
+--resource-group "resourcegroupname" \
+--l3-isolation-domain-name "example-l3domain" \
+--name "example-externalnetwork" \
+--location "eastus" \
+--vlan-id 515 \
+--fabric-asn 65025 \
+--peer-asn 65026 \
+--primary-ipv4-prefix "10.1.1.0/30" \
+--secondary-ipv4-prefix "10.1.1.4/30"
+```
+
+Expected Output
+
+```json
+{
+ "administrativeState": null,
+ "annotation": null,
+ "bfdConfiguration": null,
+ "bfdDisabledOnResources": null,
+ "bgpDisabledOnResources": null,
+ "disabledOnResources": null,
+ "fabricAsn": 65025,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/l3domainNEWS3/externalNetworks/example-l3domain",
+ "mtu": 0,
+ "name": "example-l3domain",
+ "peerAsn": 65026,
+ "primaryIpv4Prefix": "10.1.1.0/30",
+ "primaryIpv6Prefix": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "resourcegroupname",
+ "secondaryIpv4Prefix": "10.1.1.4/30",
+ "secondaryIpv6Prefix": null,
+ "systemData": {
+ "createdAt": "2022-10-29T17:24:32.077026+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-07T09:28:18.873754+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "type": "microsoft.managednetworkfabric/l3isolationdomains/externalnetworks",
+ "vlanId": 515
+}
+```
+
+### External network creation using IPv6
+
+```azurecli
+az nf externalnetwork create \
+--resource-group "resourcegroupname" \
+--l3-isolation-domain-name "example-l3domain" \
+--resource-name "example-externalipv6network" \
+--location "westus3" \
+--vlan-id 516 \
+--fabric-asn 65048 \
+--peer-asn 65022 \
+--primary-ipv4-prefix "10:101:2::0/127" \
+--secondary-ipv6-prefix "10:101:3::0/127"
+```
+
+**Note:** Only a /127 prefix length is supported for the primary and secondary IPv6 prefixes in this release.
+
+Expected Output
+
+```json
+{
+ "administrativeState": null,
+ "annotation": null,
+ "bfdConfiguration": null,
+ "bfdDisabledOnResources": null,
+ "bgpDisabledOnResources": null,
+ "disabledOnResources": null,
+ "exportRoutePolicyId": null,
+ "fabricAsn": 65048,
+ "id": "/subscriptions//xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/fab1nfrg121322/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/fab1-l3domain121822/externalNetworks/fab1-externalnetworkv6",
+ "importRoutePolicyId": null,
+ "mtu": 1500,
+ "name": "example-externalipv6network",
+ "peerAsn": 65022,
+ "primaryIpv4Prefix": "10:101:2::0/127",
+ "primaryIpv6Prefix": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "resourcegroupname",
+ "secondaryIpv4Prefix": null,
+ "secondaryIpv6Prefix": "10:101:3::0/127",
+ "systemData": {
+ "createdAt": "2022-12-16T07:52:26.366069+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-12-16T07:52:26.366069+00:00",
+ "lastModifiedBy": "",
+ "lastModifiedByType": "User"
+ },
+ "type": "microsoft.managednetworkfabric/l3isolationdomains/externalnetworks",
+ "vlanId": 516
+}
+```
+
+### Enable/disable L3 isolation-domains
+
+This command is used to change the administrative state of an L3 isolation-domain. Run the `show` command to verify whether the administrative state has changed to `Enabled`.
+
+```azurecli
+az nf l3domain update-admin-state --resource-group "resourcegroupname" --resource-name "example-l3domain" --state Enable/Disable
+```
+
+Expected Output
+
+```json
+{
+ "administrativeState": "Enabled",
+ "annotation": null,
+ "description": null,
+ "disabledOnResources": null,
+ "external": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/l3IsolationDomains/example-l3domain",
+ "internal": null,
+ "location": "eastus2euap",
+ "name": "example-l3domain",
+ "networkFabricId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/NFresourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "optionBDisabledOnResources": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "resourcegroupname",
+ "systemData": {
+ "createdAt": "2022-11-02T06:23:43.372461+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-02T06:25:53.240975+00:00",
+ "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "lastModifiedByType": "Application"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/l3isolationdomains"
+}
+```
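+
+To confirm the new state, you can query the `administrativeState` field directly (a sketch; this assumes `az nf l3domain show` follows the same pattern as the other `l3domain` subcommands shown in this article):
+
+```azurecli
+az nf l3domain show --resource-group "resourcegroupname" --resource-name "example-l3domain" --query "administrativeState"
+```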
+
+### Delete L3 isolation-domains
+
+This command is used to delete an L3 isolation-domain:
+
+```azurecli
+az nf l3domain delete --resource-group "fab1-nf" --resource-name "example-l3domain"
+```
+
+Use the `show` or `list` commands to validate that the isolation-domain has been deleted.
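+
+For example, a quick check with the `list` command (assuming `az nf l3domain list` mirrors the other `l3domain` subcommands) should no longer include the deleted isolation-domain:
+
+```azurecli
+az nf l3domain list --resource-group "fab1-nf" -o table
+```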
+
+### Create networks in L3 isolation-domain
+
+Internal networks enable layer 3 inter-rack and intra-rack communication between workloads by exchanging routes with the fabric.
+An L3 isolation-domain can support multiple internal networks, each on a separate VLAN.
+
+External networks enable workloads to have layer 3 connectivity with your provider edge.
+They also allow workloads to interact with external services like firewalls and DNS.
+The Fabric ASN (created during network fabric creation) is needed for creating external networks.
+
+## An example of network creation for a network function
+
+The diagram represents an example Network Function with three different internal networks: Trust, Untrust and Management (`Mgmt`).
+Each of the internal networks is created in its own L3 isolation-domain (`L3 ISD`).
+
+Figure: Network Function networking diagram
+
+### Create required L3 isolation-domains
+
+**Create an L3 isolation-domain `l3untrust`**
+
+```azurecli
+az nf l3domain create --resource-group "resourcegroupname" --resource-name "l3untrust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
+```
+
+**Create an L3 isolation-domain `l3trust`**
+
+```azurecli
+az nf l3domain create --resource-group "resourcegroupname" --resource-name "l3trust" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
+```
+
+**Create an L3 isolation-domain `l3mgmt`**
+
+```azurecli
+az nf l3domain create --resource-group "resourcegroupname" --resource-name "l3mgmt" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName"
+```
+
+### Create required internal networks
+
+Now that the required L3 isolation-domains have been created, you can create the three Internal Networks. The IPv4 prefixes for these networks are:
+
+- Trusted network: 10.151.1.11/24
+- Management network: 10.151.2.11/24
+- Untrusted network: 10.151.3.11/24
+
+**Create Internal Network in `l3untrust` L3 isolation-domain**
+
+```azurecli
+az nf internalnetwork create --resource-group "resourcegroupname" --l3-isolation-domain-name l3untrust --resource-name untrustnetwork --location "eastus" --vlan-id 502 --fabric-asn 65048 --peer-asn 65047 --connected-i-pv4-subnets prefix="10.151.3.11/24" --mtu 1500
+```
+
+**Create Internal Network in `l3trust` L3 isolation-domain**
+
+```azurecli
+az nf internalnetwork create --resource-group "resourcegroupname" --l3-isolation-domain-name l3trust --resource-name trustnetwork --location "eastus" --vlan-id 503 --fabric-asn 65048 --peer-asn 65047 --connected-i-pv4-subnets prefix="10.151.1.11/24" --mtu 1500
+```
+
+**Create Internal Network in `l3mgmt` L3 isolation-domain**
+
+```azurecli
+az nf internalnetwork create --resource-group "resourcegroupname" --l3-isolation-domain-name l3mgmt --resource-name mgmtnetwork --location "eastus" --vlan-id 504 --fabric-asn 65048 --peer-asn 65047 --connected-i-pv4-subnets prefix="10.151.2.11/24" --mtu 1500
+```
+
+### Enable the L3 isolation-domains
+
+You've created the required L3 isolation-domains and the associated internal networks. You now need to enable these isolation-domains.
+
+**Enable L3 isolation-domain `l3untrust`**
+
+```azurecli
+az nf l3domain update-admin-state --resource-group "resourcegroupname" --resource-name "l3untrust" --state Enable
+```
+
+**Enable L3 isolation-domain `l3trust`**
+
+```azurecli
+az nf l3domain update-admin-state --resource-group "resourcegroupname" --resource-name "l3trust" --state Enable
+```
+
+**Enable L3 isolation-domain `l3mgmt`**
+
+```azurecli
+az nf l3domain update-admin-state --resource-group "resourcegroupname" --resource-name "l3mgmt" --state Enable
+```
+
+## Example L2 isolation-domain creation for a workload
+
+First, you need to create the `l2HAnetwork` L2 isolation-domain and then enable it.
+
+**Create `l2HAnetwork` L2 isolation-domain**
+
+```azurecli
+az nf l2domain create --resource-group "resourcegroupname" --resource-name "l2HAnetwork" --location "eastus" --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName" --vlan-id 505 --mtu 1500
+```
+
+**Enable `l2HAnetwork` L2 isolation-domain**
+
+```azurecli
+az nf l2domain update-administrative-state --resource-group "resourcegroupname" --resource-name "l2HAnetwork" --state Enable
+```
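+
+As with the L3 case, you can verify the result by checking the `administrativeState` field (a sketch; it assumes an `az nf l2domain show` subcommand analogous to the `l3domain` one):
+
+```azurecli
+az nf l2domain show --resource-group "resourcegroupname" --resource-name "l2HAnetwork" --query "administrativeState"
+```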
operator-nexus Howto Configure Network Fabric Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric-controller.md
+
+ Title: "Azure Operator Nexus: How to configure Network fabric Controller"
+description: How to configure Network fabric Controller
+ Last updated: 02/06/2023
+# Create and modify a network fabric Controller using Azure CLI
+
+This article describes how to create a Network fabric Controller by using the Azure Command-Line Interface (Azure CLI).
+This document also shows you how to check the status, or delete a Network fabric Controller.
+
+## Prerequisites
+
+- Make sure all the prerequisites are met before moving on to the next steps.
+- Before starting the NFC deployment, validate that the ExpressRoute circuit has the right connectivity (circuit ID and authorization key); otherwise, the NFC provisioning fails.
+
+### Install CLI extensions
+
+Install the latest version of the
+[necessary CLI extensions](./howto-install-cli-extensions.md).
+
+### Sign in to your Azure account and select your subscription
+
+To begin your configuration, sign in to your Azure account. You can use the following examples to connect:
+
+```azurecli
+az login
+```
+
+Check the subscriptions for the account.
+
+```azurecli
+az account list
+```
+
+Select the subscription for which you want to create a Network fabric Controller. This subscription will be used across all Operator Nexus resources.
+
+```azurecli
+az account set --subscription "<subscription ID>"
+```
+
+## Register providers for managed network fabric
+
+You can skip this step if your subscription is already registered with the Microsoft.ManagedNetworkFabric Resource Provider. Otherwise, proceed with the following steps:
+
+In Azure CLI, enter the following commands:
+
+```azurecli
+az provider register --namespace Microsoft.ManagedNetworkFabric
+```
+
+Monitor the registration process. Registration may take up to 10 minutes.
+
+```azurecli
+az provider show -n Microsoft.ManagedNetworkFabric -o table
+```
+
+Once registered, you should see the `RegistrationState` for the namespace change to `Registered`.
+
+If you've already registered, you can verify using the `show` command.
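+
+For example, to print only the registration state (using the standard `--query` and `--output` options of the Azure CLI):
+
+```azurecli
+az provider show --namespace Microsoft.ManagedNetworkFabric --query registrationState -o tsv
+```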
+
+## Create a network fabric controller
+
+If you don't have a resource group already, you must create one before you create your Network fabric Controller.
+
+**Note**: Create a separate Resource Group for the Network fabric Controller (NFC) and a separate one for the Network fabric (NF). The underscore character (\_) isn't supported in any of the names, for example, the Resource Name or Resource Group.
+You can create the resource groups by running the following commands:
+
+```azurecli
+az group create -n NFCResourceGroupName -l "East US"
+```
+
+```azurecli
+az group create -n NFResourceGroupName -l "East US"
+```
+
+## Attributes for NFC creation
+
+| Parameter | Description | Values | Example | Required | Type |
+| --- | --- | --- | --- | --- | --- |
+| Resource-Group | A resource group is a container that holds related resources for an Azure solution. | NFCResourceGroupName | ATTNFCResourceGroupName | True | String |
+| Location | The Azure Region is mandatory to provision your deployment. | eastus, westus3 | eastus | True | String |
+| Resource-Name | The Resource-name will be the name of the Network fabric Controller | nfcname | ATTnfcname | True | String |
+| NFC IP Block | This Block is the NFC IP subnet, the default subnet block is 10.0.0.0/19, and it also shouldn't overlap with any of the Express Route Circuits. | 10.0.0.0/19 | 10.0.0.0/19 | Not Required | String |
+| Express Route Circuits | The ExpressRoute circuit is a dedicated 10G link that connects Azure and on-premises. You need to know the ExpressRoute Circuit ID and Auth key for an NFC to successfully provision. There are two Express Route Circuits, one for the Infrastructure services and the other for Workload (Tenant) services | --workload-er-connections '[{"expressRouteCircuitId": "xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx", "expressRouteAuthorizationKey": "xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx"}]' <br /><br /> --infra-er-connections '[{"expressRouteCircuitId": "xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx", "expressRouteAuthorizationKey": "xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx"}]' | subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ER-Dedicated-WUS2-AFO-Circuits/providers/Microsoft.Network/expressRouteCircuits/MSFT-ER-Dedicated-PvtPeering-WestUS2-AFO-Ckt-01", "expressRouteAuthorizationKey": "xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx"}] | True | string |
+
+Here is an example of how you can create a Network fabric Controller using the Azure CLI.
+For more information, see [attributes section](#attributes-for-nfc-creation).
+
+```azurecli
+
+az nf controller create \
+--resource-group "NFCResourceGroupName" \
+--location "eastus" \
+--resource-name "nfcname" \
+--ipv4-address-space "10.0.0.0/19" \
+--infra-er-connections '[{"expressRouteCircuitId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ER-Dedicated-WUS2-AFO-Circuits/providers/Microsoft.Network/expressRouteCircuits/MSFT-ER-Dedicated-PvtPeering-WestUS2-AFO-Ckt-01", "expressRouteAuthorizationKey": "<auth-key>"}]' \
+--workload-er-connections '[{"expressRouteCircuitId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ER-Dedicated-WUS2-AFO-Circuits/providers/Microsoft.Network/expressRouteCircuits/MSFT-ER-Dedicated-PvtPeering-WestUS2-AFO-Ckt-01", "expressRouteAuthorizationKey": "<auth-key>"}]'
+```
+
+**Note:** The NFC creation takes 30-45 minutes. Use the `show` command to monitor the progress of the NFC creation. You'll see different provisioning states, such as Accepted, Updating, and Succeeded/Failed.
+
+Expected output:
+
+```json
+ "annotation": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/nfcname",
+ "infrastructureExpressRouteConnections": [
+ {
+ "expressRouteAuthorizationKey": null,
+ "expressRouteCircuitId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ER-Dedicated-WUS2-AFO-Circuits/providers/Microsoft.Network/expressRouteCircuits/MSFT-ER-Dedicated-PvtPeering-WestUS2-AFO-Ckt-01"
+ }
+ ],
+ "infrastructureServices": null,
+ "ipv4AddressSpace": "10.0.0.0/19",
+ "ipv6AddressSpace": null,
+ "location": "eastus",
+ "managedResourceGroupConfiguration": {
+ "location": "eastus2euap",
+ "name": "nfcname-HostedResources-7DE8EEC1"
+ },
+ "name": "nfcname",
+ "networkFabricIds": null,
+ "operationalState": null,
+ "provisioningState": "Accepted",
+ "resourceGroup": "NFCresourcegroupname",
+ "systemData": {
+ "createdAt": "2022-10-31T10:47:08.072025+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-10-31T10:47:08.072025+00:00",
+ "lastModifiedBy": "email@address.com",
+```
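+
+While waiting, you can poll just the provisioning state (the same `show` command as in the next section, filtered with `--query`):
+
+```azurecli
+az nf controller show --resource-group "NFCResourceGroupName" --resource-name "nfcname" --query "provisioningState" -o tsv
+```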
+
+## Get network fabric controller
+
+```azurecli
+az nf controller show --resource-group "NFCResourceGroupName" --resource-name "nfcname"
+```
+
+Expected output:
+
+```json
+{
+ "annotation": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/nfcname",
+ "infrastructureExpressRouteConnections": [
+ {
+ "expressRouteAuthorizationKey": null,
+ "expressRouteCircuitId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ER-Dedicated-WUS2-AFO-Circuits/providers/Microsoft.Network/expressRouteCircuits/MSFT-ER-Dedicated-PvtPeering-WestUS2-AFO-Ckt-02"
+ }
+ ],
+ "infrastructureServices": {
+ "ipv4AddressSpaces": ["10.0.0.0/21"],
+ "ipv6AddressSpaces": []
+ },
+ "ipv4AddressSpace": "10.0.0.0/19",
+ "ipv6AddressSpace": null,
+ "location": "eastus",
+ "managedResourceGroupConfiguration": {
+ "location": "eastus",
+ "name": "nfcname-HostedResources-XXXXXXXX"
+ },
+ "name": "nfcname",
+ "networkFabricIds": [],
+ "operationalState": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFCResourceGroupName",
+ "systemData": {
+ "createdAt": "2022-10-27T16:02:13.618823+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-10-27T17:13:18.278423+00:00",
+ "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "lastModifiedByType": "Application"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/networkfabriccontrollers",
+ "workloadExpressRouteConnections": [
+ {
+ "expressRouteAuthorizationKey": null,
+ "expressRouteCircuitId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ER-Dedicated-WUS2-AFO-Circuits/providers/Microsoft.Network/expressRouteCircuits/MSFT-ER-Dedicated-PvtPeering-WestUS2-AFO-Ckt-03"
+ }
+ ],
+ "workloadManagementNetwork": true,
+ "workloadServices": {
+ "ipv4AddressSpaces": ["10.0.28.0/22"],
+ "ipv6AddressSpaces": []
+ }
+}
+```
+
+## Delete network fabric controller
+
+```azurecli
+az nf controller delete --resource-group "NFCResourceGroupName" --resource-name "nfcname"
+```
+
+**Note:** If an NF has been created, make sure the NF is deleted before you delete the NFC.
+
+Expected output:
+
+```json
+"name": "nfcname",
+ "networkFabricIds": [],
+ "operationalState": null,
+ "provisioningState": "succeeded",
+ "resourceGroup": "NFCResourceGroupName",
+ "systemData": {
+ "createdAt": "2022-10-31T10:47:08.072025+00:00",
+```
+
+**Note:** It takes about 30 minutes to delete the NFC. In the Azure portal, verify whether the hosted resources have been deleted. Delete and re-create the NFC if you run into an NFC provisioning issue (Failed).
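+
+You can also confirm the deletion from the CLI (a sketch; it assumes an `az nf controller list` subcommand analogous to `show`), which should no longer return the deleted NFC:
+
+```azurecli
+az nf controller list --resource-group "NFCResourceGroupName" -o table
+```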
+
+## Next steps
+
+Once you've successfully created a Network fabric Controller, the next step is to create a [Cluster Manager](./howto-cluster-manager.md).
operator-nexus Howto Configure Network Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric.md
+
+ Title: "Azure Operator Nexus: How to configure the Network fabric"
+description: Learn to create, view, list, update, delete commands for Network fabric
+ Last updated: 02/06/2023
+# Create and provision a network fabric using Azure CLI
+
+This article describes how to create a Network fabric by using the Azure Command-Line Interface (Azure CLI). This document also shows you how to check the status, update, or delete a Network fabric.
+
+## Prerequisites
+
+* A Network fabric Controller has been created in your Azure account.
+ * A Network fabric Controller instance in Azure can be used for multiple Network-fabrics.
+ * You can reuse a pre-existing Network fabric Controller.
+* Install the latest version of the [necessary CLI extensions](#install-cli-extensions).
+* Physical infrastructure has been installed and cabled as per BoM.
+* ExpressRoute connectivity has been established between the Azure region and your WAN (your networking).
+* The needed VLANs, Route-Targets and IP addresses have been configured in your network.
+* Terminal server has been [installed and configured](./quickstarts-platform-prerequisites.md#set-up-terminal-server)
+
+### Install CLI extensions
+
+Install latest version of the [necessary CLI extensions](./howto-install-cli-extensions.md).
+
+## Parameters needed to create network fabric
+
+| Parameter | Description | Example | Required | Type |
+| --- | --- | --- | --- | --- |
+| resource-group | Name of the resource group | "NFResourceGroup" |True | String |
+| location | Location of Azure region | "eastus" |True | String |
+| resource-name | Name of the FabricResource | Austin-Fabric |True | String |
+| nf-sku |Fabric SKU ID, should be based on the SKU of the BoM that was ordered. Contact AFO team for specific SKU value for the BoM | att |True | String|
+| nfc-id |Network fabric Controller ARM resource ID| |True | String |
+||
+|**managed-network-config**| Details of management network ||True ||
+|ipv4Prefix|IPv4 Prefix of the management network. This Prefix should be unique across all Network-fabrics in a Network fabric Controller. Prefix length should be at least 19 (/20 isn't allowed, /18 and lower are allowed) | 10.246.0.0/19|True | String |
+||
+|**managementVpnConfiguration**| Details of management VPN connection between Network fabric and infrastructure services in Network fabric Controller||True ||
+|*optionBProperties*| Details of MPLS option 10B that is used for connectivity between Network fabric and Network fabric Controller||True ||
+|importRouteTargets|Values of import route targets to be configured on CEs for exchanging routes between CE & PE via MPLS option 10B| 65048:10039|True(If OptionB enabled)|Integer |
+|exportRouteTargets|Values of export route targets to be configured on CEs for exchanging routes between CE & PE via MPLS option 10B| 65048:10039|True(If OptionB enabled)|Integer |
+||
+|**workloadVpnConfiguration**| Details of workload VPN connection between Network fabric and workload services in Network fabric Controller||||
+|*optionBProperties*| Details of MPLS option 10B that is used for connectivity between Network fabric and Network fabric Controller||||
+|importRouteTargets|Values of import route targets to be configured on CEs for exchanging routes between CE & PE via MPLS option 10B|for example, 65048:10050|True(If OptionB enabled)|Integer |
+|exportRouteTargets|Values of export route targets to be configured on CEs for exchanging routes between CE & PE via MPLS option 10B|for example, 65048:10050|True(If OptionB enabled)|Integer |
+||
+|**ts-config**| Terminal Server Configuration Details||True ||
+|primaryIpv4Prefix| IPv4 Prefix for connectivity between TS and PE1. The terminal server Net1 interface should be assigned the first usable IP from the prefix and the corresponding interface on PE should be assigned the second usable address|20.0.10.0/30, TS Net1 interface should be assigned 20.0.10.1 and PE interface 20.0.10.2|True|String |
+|secondaryIpv4Prefix|IPv4 Prefix for connectivity between TS and PE2. The terminal server Net2 interface should be assigned the first usable IP from the prefix and the corresponding interface on PE should be assigned the second usable address|20.0.10.4/30, TS Net2 interface should be assigned 20.0.10.5 and PE interface 20.0.10.6|True|String |
+|username| Username configured on the terminal server that the services use to configure TS||True|String|
+|password| Password configured on the terminal server that the services use to configure TS||True|String|
+||
+|**nni-config**| Network to Network Inter-connectivity configuration between CEs and PEs||True||
+|*layer2Configuration*| Layer 2 configuration ||||
+|portCount| Number of ports that are part of the port-channel. Maximum value is based on Fabric SKU|2||Integer|
+|mtu| Maximum transmission unit between CE and PE. |1500||Integer|
+|*layer3Configuration*| Layer 3 configuration between CEs and PEs||True||
+|primaryIpv4Prefix|IPv4 Prefix for connectivity between CE1 and PE1. CE1 port-channel interface is assigned the first usable IP from the prefix and the corresponding interface on PE1 should be assigned the second usable address|10.246.0.124/31, CE1 port-channel interface is assigned 10.246.0.125 and PE1 port-channel interface should be assigned 10.246.0.126||String|
+|secondaryIpv4Prefix|IPv4 Prefix for connectivity between CE2 and PE2. CE2 port-channel interface is assigned the first usable IP from the prefix and the corresponding interface on PE2 should be assigned the second usable address|10.246.0.128/31, CE2 port-channel interface should be assigned 10.246.0.129 and PE2 port-channel interface 10.246.0.130||String|
+|fabricAsn|ASN number assigned on CE for BGP peering with PE|65048||Integer|
+|peerAsn|ASN number assigned on PE for BGP peering with CE. For iBGP between PE/CE, the value should be same as fabricAsn, for eBGP the value should be different from fabricAsn |65048|True|Integer|
+|vlan-id| VLAN identifier used for connectivity between PE/CE. The value should be between 10 and 20| 10-20||Integer|
+||
+
+## Create a network fabric
+
+A resource group must be created before creating the Network fabric. We recommend creating a separate resource group for each Network fabric. You can create a resource group with the following command:
+
+```azurecli
+az group create -n NFResourceGroup -l "East US"
+```
+
+Run the following command to create the Network fabric:
+
+```azurecli
+az nf fabric create \
+--resource-group "NFResourceGroupName" \
+--location "eastus" \
+--resource-name "NFName" \
+--nf-sku "NFSKU" \
+--nfc-id ""/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName" \" \
+ --nni-config '{"layer3Configuration":{"primaryIpv4Prefix":"10.246.0.124/30", "secondaryIpv4Prefix": "10.246.0.128/30", "fabricAsn":65048, "peerAsn":65048, "vlanId": 20}}' \
+ --ts-config '{"primaryIpv4Prefix":"20.0.10.0/30", "secondaryIpv4Prefix": "20.0.10.4/30","username":"****", "password": "*****"}' \
+ --managed-network-config '{"ipv4Prefix":"10.246.0.0/19",
+ "managementVpnConfiguration":{"optionBProperties":{"importRouteTargets":["65048:10039"], "exportRouteTargets":["65048:10039"]}},
+ "workloadVpnConfiguration":{"optionBProperties":{"importRouteTargets":["65048:10050"], "exportRouteTargets":["65048:10050"]}}}'
+
+```
+
+Expected output:
+
+```json
+{
+ "annotation": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "l2IsolationDomains": null,
+ "l3IsolationDomains": null,
+ "location": "eastus",
+ "managementNetworkConfiguration": {
+ "ipv4Prefix": "10.246.0.0/19",
+ "ipv6Prefix": null,
+ "managementVpnConfiguration": {
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65048:10039"
+ ],
+ "importRouteTargets": [
+ "65048:10039"
+ ]
+ },
+ "peeringOption": "OptionA",
+ "state": "Enabled"
+ },
+ "workloadVpnConfiguration": {
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65048:10050"
+ ],
+ "importRouteTargets": [
+ "65048:10050"
+ ]
+ },
+ "peeringOption": "OptionA",
+ "state": "Enabled"
+ }
+ },
+ "name": "NFName",
+ "networkFabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName",
+ "networkFabricSku": "NFSKU",
+ "networkToNetworkInterconnect": {
+ "layer2Configuration": null,
+ "layer3Configuration": {
+ "fabricAsn": 65048,
+ "peerAsn": 65048,
+ "primaryIpv4Prefix": "10.246.0.124/30",
+ "primaryIpv6Prefix": null,
+ "routerId": null,
+ "secondaryIpv4Prefix": "10.246.0.128/30",
+ "secondaryIpv6Prefix": null,
+ "vlanId": 20
+ }
+ },
+ "operationalState": null,
+ "provisioningState": "Accepted",
+ "racks": null,
+ "resourceGroup": "NFResourceGroup",
+ "systemData": {
+ "createdAt": "2022-11-02T06:56:05.019873+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-02T06:56:05.019873+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "terminalServerConfiguration": {
+ "networkDeviceId": null,
+ "password": null,
+ "primaryIpv4Prefix": "20.0.10.0/30",
+ "primaryIpv6Prefix": null,
+ "secondaryIpv4Prefix": "20.0.10.4/30",
+ "secondaryIpv6Prefix": null,
+ "****": "root"
+ },
+ "type": "microsoft.managednetworkfabric/networkfabrics"
+}
+```
+
+## List or get network fabric
+
+```azurecli
+az nf fabric list --resource-group "NFResourceGroup"
+```
+
+Expected output:
+
+```json
+[
+ {
+ "annotation": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "l2IsolationDomains": null,
+ "l3IsolationDomains": null,
+ "location": "eastus",
+ "managementNetworkConfiguration": {
+ "ipv4Prefix": "10.246.0.0/19",
+ "ipv6Prefix": null,
+ "managementVpnConfiguration": {
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65048:10039"
+ ],
+ "importRouteTargets": [
+ "65048:10039"
+ ]
+ },
+ "peeringOption": "OptionA",
+ "state": "Enabled"
+ },
+ "workloadVpnConfiguration": {
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65048:10050"
+ ],
+ "importRouteTargets": [
+ "65048:10050"
+ ]
+ },
+ "peeringOption": "OptionA",
+ "state": "Enabled"
+ }
+ },
+ "name": "NFName",
+ "networkFabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName",
+ "networkFabricSku": "NFSKU",
+ "networkToNetworkInterconnect": {
+ "layer2Configuration": null,
+ "layer3Configuration": {
+ "fabricAsn": 65048,
+ "peerAsn": 65048,
+ "primaryIpv4Prefix": "10.246.0.124/30",
+ "primaryIpv6Prefix": null,
+ "routerId": null,
+ "secondaryIpv4Prefix": "10.246.0.128/30",
+ "secondaryIpv6Prefix": null,
+ "vlanId": 20
+ }
+ },
+ "operationalState": null,
+ "provisioningState": "Failed",
+ "racks": null,
+ "resourceGroup": "NFResourceGroup",
+ "systemData": {
+ "createdAt": "2022-11-02T06:56:05.019873+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-02T06:56:05.019873+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "terminalServerConfiguration": {
+ "networkDeviceId": null,
+ "password": null,
+ "primaryIpv4Prefix": "20.0.10.0/30",
+ "primaryIpv6Prefix": null,
+ "secondaryIpv4Prefix": "20.0.10.4/30",
+ "secondaryIpv6Prefix": null,
+ "****": "****"
+ },
+ "type": "microsoft.managednetworkfabric/networkfabrics"
+ }
+]
+```
+
+## Add racks
+
+After creating the Network fabric, add one aggregate rack and two or more compute racks to it. The number of racks should match the physical racks in the Operator Nexus instance.
+
+### Add aggregate rack
+
+```azurecli
+az nf rack create \
+--resource-group "NFResourceGroup" \
+--location "eastus" \
+--network-rack-sku "att" \
+--nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName" \
+--resource-name "AR1"
+```
+
+Expected output:
+
+```json
+{
+ "annotation": null,
+ "id": "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkRacks/AR1",
+ "location": "eastus",
+ "name": "AR1",
+ "networkDevices": [
+ "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AR1-CE1",
+ "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AR1-CE2",
+ "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AR1-TOR17",
+ "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AR1-TOR18",
+ "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AR1-MgmtSwitch1",
+ "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AR1-MgmtSwitch2",
+ "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AR1-NPB1",
+ "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-AR1-NPB2"
+ ],
+ "networkFabricId": "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "networkRackSku": "att",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroupName",
+ "systemData": {
+ "createdAt": "2022-11-01T17:04:18.908946+00:00",
+ "createdBy": "email@adress.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-01T17:04:18.908946+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/networkracks"
+}
+```
+
+### Add compute rack 1
+
+```azurecli
+az nf rack create \
+--resource-group "NFResourceGroup" \
+--location "eastus" \
+--network-rack-sku "att" \
+--nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName" \
+--resource-name "CR1"
+```
+
+Expected output:
+
+```json
+{
+ "annotation": null,
+ "id": "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkRacks/CR1",
+ "location": "eastus",
+ "name": "CR1",
+ "networkDevices": [
+ "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-CR1-TOR1",
+ "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-CR1-TOR2",
+ "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-CR1-MgmtSwitch"
+ ],
+ "networkFabricId": "/subscriptions/8a0c9a74-a831-4363-8590-49bbdd2ea39e/resourceGroups/OP1lab2-fabric/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "networkRackSku": "att",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroupName",
+ "systemData": {
+ "createdAt": "2022-11-01T17:05:21.219619+00:00",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-01T17:05:21.219619+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/networkracks"
+}
+```
+
+### Add compute rack 2
+
+```azurecli
+az nf rack create \
+--resource-group "NFResourceGroup" \
+--location "eastus" \
+--network-rack-sku "att" \
+--nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName" \
+--resource-name "CR2"
+```
+
+Once all the racks are added, NFA creates the corresponding networkDevice resources.
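+
+You can list the generated device resources with the `list` command described later in this article, for example:
+
+```azurecli
+az nf device list --resource-group "NFResourceGroup" -o table
+```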
+
+## Next Steps
+
+* Update the serial number in the networkDevice resource with the actual serial number on the device. The device sends the serial number as part of its DHCP request.
+* Configure the terminal server (which also hosts the DHCP server) with the serial numbers of all the devices.
+* Provision the network devices via zero-touch provisioning mode. Based on the serial number in the DHCP request, the DHCP server responds with the boot configuration file for the corresponding device.
+
+## Update network fabric devices
+
+Run the following command to update Network fabric Devices:
+
+```azurecli
+az nf device update \
+--resource-group "NFResourceGroup" \
+--location "eastus" \
+--resource-name "network-device-name" \
+--network-device-sku "DeviceSku" \
+--network-device-role "CE" \
+--device-name "NFName-CR2-TOR1" \
+--serial-number "12345"
+```
+
+Expected output:
+
+```json
+{
+ "annotation": null,
+ "deviceName": "NFName-CR2-TOR1",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/rgName/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-CR2-TOR1",
+ "location": "eastus",
+ "name": "networkDevice1",
+ "networkDeviceRole": "TOR1",
+ "networkDeviceSku": "DeviceSku",
+ "networkRackId": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroupName",
+ "serialNumber": "Arista;DCS-7010TX-48;12.00;JPE12345678",
+ "systemData": {
+ "createdAt": "2022-10-26T09:30:14.424546+00:00",
+ "createdBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "createdByType": "Application",
+ "lastModifiedAt": "2022-10-31T15:45:24.320290+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/networkdevices",
+ "version": null
+}
+```
+
+## List or get network fabric devices
+
+Run the following command to list the Network fabric devices:
+
+```azurecli
+az nf device list --resource-group "NFResourceGroup"
+```
+
+Expected output:
+
+```json
+[
+ {
+ "annotation": null,
+ "deviceName": "NFName-CR1-TOR1",
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/rgName/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-CR1-TOR1",
+ "location": "eastus",
+ "name": "networkDevice1",
+ "networkDeviceRole": "TOR1",
+ "networkDeviceSku": "DeviceSku",
+ "networkRackId": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroupName",
+ "serialNumber": "Arista;DCS-7280DR3-24;12.05;JPE12345678",
+ "systemData": {
+ "createdAt": "2022-10-20T17:23:49.203745+00:00",
+ "createdBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "createdByType": "Application",
+ "lastModifiedAt": "2022-10-27T17:38:57.438007+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/networkdevices",
+ "version": null
+ },
+ {
+ "annotation": null,
+ "deviceName": "NFName-CR1-MgmtSwitch",
+ "id": "subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/rgName/providers/Microsoft.ManagedNetworkFabric/networkDevices/NFName-CR1-MgmtSwitch",
+ "location": "eastus",
+ "name": "Network device",
+ "networkDeviceRole": "MgmtSwitch",
+ "networkDeviceSku": "DeviceSku",
+ "networkRackId": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroupName",
+ "serialNumber": "Arista;DCS-7010TX-48;12.02;JPE12345678",
+ "systemData": {
+ "createdAt": "2022-10-27T17:23:53.581927+00:00",
+ "createdBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "createdByType": "Application",
+ "lastModifiedAt": "2022-10-27T17:38:59.922499+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/networkdevices",
+ "version": null
+ }
+]
+```
+
+Run the following command to show the details of a Network fabric device:
+
+```azurecli
+az nf device show --resource-group "example-rg" --resource-name "example-device"
+```
+
+Expected output:
+
+```json
+{
+ "annotation": null,
+ "deviceName": "NFName-CR1-TOR1",
+ "id": "subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/rgName/providers/Microsoft.ManagedNetworkFabric/networkDevices/networkDevice1",
+ "location": "eastus",
+ "name": "networkDevice1",
+ "networkDeviceRole": "TOR1",
+ "networkDeviceSku": "DeviceSku",
+ "networkRackId": null,
+ "provisioningState": "Succeeded",
+ "resourceGroup": "NFResourceGroupName",
+ "serialNumber": "Arista;DCS-7280DR3-24;12.05;JPE12345678",
+ "systemData": {
+ "createdAt": "2022-10-27T17:23:49.203745+00:00",
+ "createdBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "createdByType": "Application",
+ "lastModifiedAt": "2022-10-27T17:38:57.438007+00:00",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "type": "microsoft.managednetworkfabric/networkdevices",
+ "version": null
+}
+```
+
+## Provision fabric
+
+Once the device serial number is updated, provision the fabric by running the following command:
+
+```azurecli
+az nf fabric provision --resource-group "NFResourceGroup" --resource-name "NFName"
+```
+
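+You can monitor the provisioning progress with the `show` command:
+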
+```azurecli
+az nf fabric show --resource-group "NFResourceGroup" --resource-name "NFName"
+```
+
+Expected output:
+
+```json
+{
+ "annotation": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "l2IsolationDomains": null,
+ "l3IsolationDomains": null,
+ "location": "eastus",
+ "managementNetworkConfiguration": {
+ "ipv4Prefix": "10.246.0.0/19",
+ "ipv6Prefix": null,
+ "managementVpnConfiguration": {
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65048:10039"
+ ],
+ "importRouteTargets": [
+ "65048:10039"
+ ]
+ },
+ "peeringOption": "OptionA",
+ "state": "Enabled"
+ },
+ "workloadVpnConfiguration": {
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65048:10050"
+ ],
+ "importRouteTargets": [
+ "65048:10050"
+ ]
+ },
+ "peeringOption": "OptionA",
+ "state": "Enabled"
+ }
+ },
+ "name": "NFName",
+ "networkFabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName",
+ "networkFabricSku": "NFSKU",
+ "networkToNetworkInterconnect": {
+ "layer2Configuration": null,
+ "layer3Configuration": {
+ "fabricAsn": 65048,
+ "peerAsn": 65048,
+ "primaryIpv4Prefix": "10.246.0.124/30",
+ "primaryIpv6Prefix": null,
+ "routerId": null,
+ "secondaryIpv4Prefix": "10.246.0.128/30",
+ "secondaryIpv6Prefix": null,
+ "vlanId": 20
+ }
+ },
+ "operationalState": "Provisioned",
+ "provisioningState": "Succeeded",
+ "racks": [
+ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkRacks/AttAggRack"
+ ],
+ "resourceGroup": "NFResourceGroup",
+ "systemData": {
+ "createdAt": "2022-11-02T06:56:05.019873+00:00",
+ "createdBy": "email@adddress.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-02T09:12:58.889552+00:00",
+ "lastModifiedBy": "d1bd24c7-b27f-477e-86dd-939e107873d7",
+ "lastModifiedByType": "Application"
+ },
+ "tags": null,
+ "terminalServerConfiguration": {
+ "networkDeviceId": null,
+ "password": null,
+ "primaryIpv4Prefix": "20.0.10.0/30",
+ "primaryIpv6Prefix": null,
+ "secondaryIpv4Prefix": "20.0.10.4/30",
+ "secondaryIpv6Prefix": null,
+ "****": "****"
+ },
+ "type": "microsoft.managednetworkfabric/networkfabrics"
+}
+```
+
+## Delete fabric
+
+To delete the fabric, the operational state of the fabric shouldn't be "Provisioned". To change the operational state from Provisioned, run the same command you used to create the fabric. Ensure there are no racks associated before deleting the fabric.
+
+```azurecli
+az nf fabric create \
+--resource-group "NFResourceGroup" \
+--location "eastus" \
+--resource-name "NFName" \
+--nf-sku "NFSKU" \
+--nfc-id ""/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/ResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName" \" \
+ --nni-config '{"layer3Configuration":{"primaryIpv4Prefix":"10.246.0.124/30", "secondaryIpv4Prefix": "10.246.0.128/30", "fabricAsn":65048, "peerAsn":65048, "vlanId": 20}}' \
+ --ts-config '{"primaryIpv4Prefix":"20.0.10.0/30", "secondaryIpv4Prefix": "20.0.10.4/30","username":"****", "password": "*****"}' \
+ --managed-network-config '{"ipv4Prefix":"10.246.0.0/19",
+ "managementVpnConfiguration":{"optionBProperties":{"importRouteTargets":["65048:10039"], "exportRouteTargets":["65048:10039"]}},
+ "workloadVpnConfiguration":{"optionBProperties":{"importRouteTargets":["65048:10050"], "exportRouteTargets":["65048:10050"]}}}'
+
+```
+
+Expected output:
+
+```json
+{
+ "annotation": null,
+ "id": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkFabrics/NFName",
+ "l2IsolationDomains": null,
+ "l3IsolationDomains": null,
+ "location": "eastus",
+ "managementNetworkConfiguration": {
+ "ipv4Prefix": "10.246.0.0/19",
+ "ipv6Prefix": null,
+ "managementVpnConfiguration": {
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65048:10039"
+ ],
+ "importRouteTargets": [
+ "65048:10039"
+ ]
+ },
+ "peeringOption": "OptionA",
+ "state": "Enabled"
+ },
+ "workloadVpnConfiguration": {
+ "optionAProperties": null,
+ "optionBProperties": {
+ "exportRouteTargets": [
+ "65048:10050"
+ ],
+ "importRouteTargets": [
+ "65048:10050"
+ ]
+ },
+ "peeringOption": "OptionA",
+ "state": "Enabled"
+ }
+ },
+ "name": "NFName",
+ "networkFabricControllerId": "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFCResourceGroupName/providers/Microsoft.ManagedNetworkFabric/networkFabricControllers/NFCName",
+ "networkFabricSku": "NFSKU",
+ "networkToNetworkInterconnect": {
+ "layer2Configuration": null,
+ "layer3Configuration": {
+ "fabricAsn": 65048,
+ "peerAsn": 65048,
+ "primaryIpv4Prefix": "10.246.0.124/30",
+ "primaryIpv6Prefix": null,
+ "routerId": null,
+ "secondaryIpv4Prefix": "10.246.0.128/30",
+ "secondaryIpv6Prefix": null,
+ "vlanId": 20
+ }
+ },
+ "operationalState": null,
+ "provisioningState": "Accepted",
+ "racks":["/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkRacks/AttAggRack".
+ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkRacks/AttCompRack1,
+ "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroup/providers/Microsoft.ManagedNetworkFabric/networkRacks/AttCompRack2]
+ "resourceGroup": "NFResourceGroup",
+ "systemData": {
+ "createdAt": "2022-11-02T06:56:05.019873+00:00",
+ "createdBy": "email@adddress.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2022-11-02T06:56:05.019873+00:00",
+ "lastModifiedBy": "email@adddress.com",
+ "lastModifiedByType": "User"
+ },
+ "tags": null,
+ "terminalServerConfiguration": {
+ "networkDeviceId": null,
+ "password": null,
+ "primaryIpv4Prefix": "20.0.10.0/30",
+ "primaryIpv6Prefix": null,
+ "secondaryIpv4Prefix": "20.0.10.4/30",
+ "secondaryIpv6Prefix": null,
+ "****": "root"
+ },
+ "type": "microsoft.managednetworkfabric/networkfabrics"
+}
+```
+
+After the operationalState is no longer "Provisioned", delete all the racks one by one:
+
+```azurecli
+az nf rack delete --resource-group "NFResourceGroup" --resource-name "RackName"
+```
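+
+If there are several racks, a short shell loop saves repetition (a sketch; the rack names here are the ones used earlier in this article):
+
+```azurecli
+for rack in AR1 CR1 CR2; do
+  az nf rack delete --resource-group "NFResourceGroup" --resource-name "$rack"
+done
+```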
+
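+Then confirm that the `racks` list on the fabric is empty:
+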
+```azurecli
+az nf fabric show --resource-group "NFResourceGroup" --resource-name "NFName"
+```
operator-nexus Howto Hybrid Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-hybrid-aks.md
+
+ Title: "Azure Operator Nexus: Interact with AKS-Hybrid Cluster"
+description: Learn how to manage (view, list, update, delete) AKS-Hybrid clusters.
+ Last updated: 02/02/2023
+# How to interact with an AKS-Hybrid cluster
+
+This document shows how to manage an AKS-Hybrid cluster that you use for your CNF workloads.
+
+## Before you begin
+
+You'll need:
+
+1. You should have created an [AKS-Hybrid Cluster](./quickstarts-tenant-workload-deployment.md#section-k-how-to-create-aks-hybrid-cluster-for-deploying-cnf-workloads)
+2. <`YourAKS-HybridClusterName`>: the name of your previously created AKS-Hybrid cluster
+3. <`YourSubscription`>: your subscription name or ID where the AKS-Hybrid cluster was created
+4. <`YourResourceGroupName`>: the name of the Resource group where the AKS-Hybrid cluster was created
+5. <`tags`>: space-separated key=value tags: key[=value] [key[=value] ...]. Use '' to clear existing tags
+6. You should install the latest version of the
+[necessary CLI extensions](./howto-install-cli-extensions.md).
+
+## List command
+
+To get a list of AKS-Hybrid clusters in your Resource group:
+
+```azurecli
+ az hybridaks list -o table \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>"
+```
+
+## Show command
+
+To see the properties of the AKS-Hybrid cluster named `YourAKS-HybridClusterName`:
+
+```azurecli
+ az hybridaks show --name "<YourAKS-HybridClusterName>" \
+ --resource-group "< YourResourceGroupName >" \
+ --subscription "< YourSubscription >"
+```
+
+## Update command
+
+To update the properties of your AKS-Hybrid cluster:
+
+```azurecli
+ az hybridaks update --name "<YourAKS-HybridClusterName>" \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>" \
+ --tags "<YourAKS-HybridClusterTags>"
+```
+
+## Delete command
+
+To delete the AKS-Hybrid cluster named `YourAKS-HybridClusterName`:
+
+```azurecli
+ az hybridaks delete --name "<YourAKS-HybridClusterName>" \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>"
+```
operator-nexus Howto Install Cli Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md
+
+ Title: "Azure Operator Nexus: Install CLI extensions"
+description: Learn to install the needed Azure CLI extensions for Operator Nexus
+ Last updated: 01/27/2023
+# Install Azure CLI extensions
+
+Install the following CLI extensions:
+
+- `networkcloud` (for Microsoft.NetworkCloud APIs)
+- `managednetworkfabric` (for Microsoft.ManagedNetworkFabric APIs)
+- `hybridaks` (for AKS-Hybrid APIs)
+
+- If you haven't already installed Azure CLI: [Install Azure CLI][installation-instruction]. The aka.ms links download the latest available version of the extension.
+
+- Install `networkcloud` CLI extension
+
+- Remove any previously installed version of the extension
+
+ ```azurecli
+ az extension remove --name networkcloud
+ ```
+
+- Download the `networkcloud` python wheel
+
+# [Linux / macOS / WSL](#tab/linux+macos+wsl)
+
+```sh
+ curl -L "https://aka.ms/nexus-nc-cli" --output "networkcloud-0.0.0-py3-none-any.whl"
+```
+
+# [PowerShell](#tab/powershell)
+
+```ps
+ curl "https://aka.ms/nexus-nc-cli" -OutFile "networkcloud-0.0.0-py3-none-any.whl"
+```
+---
+- Install and test the `networkcloud` CLI extension
+
+ ```azurecli
+ az extension add --source networkcloud-0.0.0-py3-none-any.whl
+ az networkcloud --help
+ ```
+
+- Install `managednetworkfabric` CLI extension
+
+- Remove any previously installed version of the extension
+
+ ```azurecli
+ az extension remove --name managednetworkfabric
+ ```
+
+- Download the `managednetworkfabric` python wheel
+
+# [Linux / macOS / WSL](#tab/linux+macos+wsl)
+
+```sh
+ curl -L "https://aka.ms/nexus-nf-cli" --output "managednetworkfabric-0.0.0-py3-none-any.whl"
+```
+
+# [PowerShell](#tab/powershell)
+
+```ps
+ curl "https://aka.ms/nexus-nf-cli" -OutFile "managednetworkfabric-0.0.0-py3-none-any.whl"
+```
+---
+- Install and test the `managednetworkfabric` CLI extension
+
+ ```azurecli
+ az extension add --source managednetworkfabric-0.0.0-py3-none-any.whl
+ az nf --help
+ ```
+
+- Install AKS-Hybrid (`hybridaks`) CLI extension
+
+- Remove any previously installed version of the extension
+
+ ```azurecli
+ az extension remove --name hybridaks
+ ```
+
+- Download the `hybridaks` python wheel
+
+# [Linux / macOS / WSL](#tab/linux+macos+wsl)
+
+```sh
+ curl -L "https://aka.ms/nexus-hybridaks-cli" --output "hybridaks-0.0.0-py3-none-any.whl"
+```
+
+# [PowerShell](#tab/powershell)
+
+```ps
+ curl "https://aka.ms/nexus-hybridaks-cli" -OutFile "hybridaks-0.0.0-py3-none-any.whl"
+```
+---
+- Install and test the `hybridaks` CLI extension
+
+ ```azurecli
+ az extension add --source hybridaks-0.0.0-py3-none-any.whl
+ az hybridaks --help
+ ```
+
+- Install other needed extensions
+
+ ```azurecli
+ az extension add --yes --upgrade --name customlocation
+ az extension add --yes --upgrade --name k8s-extension
+ az extension add --yes --upgrade --name k8s-configuration
+ az extension add --yes --upgrade --name arcappliance
+ az extension add --yes --upgrade --name connectedmachine
+ az extension add --yes --upgrade --name monitor-control-service --version 0.2.0
+ az extension add --yes --upgrade --name ssh
+ az extension add --yes --upgrade --name connectedk8s
+ ```
+
+- List installed CLI extensions and versions
+
+List the installed extensions and their versions:
+
+```azurecli
+az extension list --query "[].{Name:name,Version:version}" -o table
+```
+
+Example output:
+
+```output
+Name                     Version
+-----------------------  -------------
+arcappliance             0.2.29
+monitor-control-service  0.2.0
+connectedmachine         0.5.1
+connectedk8s             1.3.8
+k8s-extension            1.3.7
+networkcloud             0.1.6.post209
+k8s-configuration        1.7.0
+managednetworkfabric     0.1.0.post24
+customlocation           0.1.3
+hybridaks                0.1.6
+ssh                      1.1.3
+```
+
+<!-- LINKS - Internal -->
+[howto-configure-network-fabric]: ./howto-configure-network-fabric.md
+[quickstarts-tenant-workload-deployment]: ./quickstarts-tenant-workload-deployment.md
+
+<!-- LINKS - External -->
+[installation-instruction]: https://aka.ms/azcli
operator-nexus Howto Monitoring Aks H Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-monitoring-aks-h-cluster.md
+
+ Title: "Azure Operator Nexus: Monitoring of AKS-Hybrid cluster"
+description: How-to guide for setting up monitoring of AKS-Hybrid cluster on Operator Nexus.
+ Last updated: 01/26/2023
+# Monitoring AKS-hybrid cluster
+
+Each AKS-Hybrid cluster consists of multiple layers:
+
+- Virtual Machines (VMs)
+- Kubernetes layer
+- Application pods
+
+Figure: Sample AKS-Hybrid Stack
+
+AKS-Hybrid clusters, on an Operator Nexus instance, are delivered with an _optional_ observability solution, [Container Insights](/azure/azure-monitor/containers/container-insights-overview).
+Container Insights captures the logs and metrics from AKS-Hybrid clusters and workloads.
+It's at your discretion whether to enable this tooling or deploy your own telemetry stack.
+
+An AKS-Hybrid cluster with the Azure monitoring tools looks like:
+
+Figure: AKS-Hybrid with Monitoring Tools
+
+## Extension onboarding with CLI using managed identity auth
+
+When enabling monitoring agents on AKS-Hybrid clusters using the CLI, ensure the appropriate versions of the CLI components are installed (you can check them as shown after this list):
+
+- azure-cli: 2.39.0+
+- azure-cli-core: 2.39.0+
+- k8s-extension: 1.3.2+
+- AKS-Hybrid (for provisioned cluster operation, optional): 0.1.3+
+- Resource-graph: 2.1.0+
+
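+You can check the installed versions locally, for example (standard `az` commands; the JMESPath queries print a single value):
+
+```azurecli
+az version --query '"azure-cli"' -o tsv
+az extension list --query "[?name=='k8s-extension'].version" -o tsv
+```
+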
+Documentation for starting with [Azure CLI](/cli/azure/get-started-with-azure-cli), how to install it across [multiple operating systems](/cli/azure/install-azure-cli), and how to install [CLI extensions](/cli/azure/azure-cli-extensions-overview).
+
+Install latest version of the
+[necessary CLI extensions](./howto-install-cli-extensions.md).
+
+## Monitoring AKS-Hybrid - VM layer
+
+This how-to guide provides steps and utility scripts to [Arc connect](/azure/azure-arc/servers/overview)
+the AKS-Hybrid Virtual Machines to Azure and to enable monitoring agents that collect system logs from these VMs using the [Azure Monitoring Agent](/azure/azure-monitor/agents/agents-overview).
+The instructions also capture details on how to set up log data collection into a Log Analytics workspace.
+
+To support these steps, the following resources are provided:
+
+- `arc-connect.env` template file that can be used to create environment variables needed by included scripts
+- `dcr.sh` - script used to create a Data Collection Rule (DCR) that can be used to configure syslog collection
+- `assign.sh` - script used to create a policy that will associate the DCR to all Arc-enabled servers in a resource group
+- `install.sh` - script used to Arc-enable AKS-Hybrid VMs and install Azure Monitoring Agent on each
+
+### Prerequisites-VM
+
+- Cluster administrator access to the AKS-Hybrid cluster. See [documentation](/azure-stack/aks-hci/create-aks-hybrid-preview-cli#connect-to-the-aks-hybrid-cluster) on
+ connecting to the AKS-Hybrid cluster.
+
+- To use Azure Arc-enabled servers, the following Azure resource providers must be registered in the subscription:
+ - Microsoft.HybridCompute
+ - Microsoft.GuestConfiguration
+ - Microsoft.HybridConnectivity
+
+If these resource providers aren't already registered, register them using the following commands:
+
+```azurecli
+az account set --subscription "{the Subscription Name}"
+az provider register --namespace 'Microsoft.HybridCompute'
+az provider register --namespace 'Microsoft.GuestConfiguration'
+az provider register --namespace 'Microsoft.HybridConnectivity'
+```
+
+- An Azure service principal assigned to the following Azure built-in roles, as needed, on the Azure resource group in which the machines will be connected:
+
+| Role | Needed to |
+| --- | --- |
+| [Azure Connected Machine Resource Administrator](/azure/role-based-access-control/built-in-roles#azure-connected-machine-resource-administrator) or [Contributor](/azure/role-based-access-control/built-in-roles#contributor) | Connect the Arc-enabled AKS VM servers in the resource group and install the Azure Monitoring Agent (AMA) |
+| [Monitoring Contributor](/azure/role-based-access-control/built-in-roles#monitoring-contributor) or [Contributor](/azure/role-based-access-control/built-in-roles#contributor) | Create a [Data Collection Rule (DCR)](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent) in the resource group and associate Arc-enabled servers to it |
+| [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator), and [Resource Policy Contributor](/azure/role-based-access-control/built-in-roles#resource-policy-contributor) or [Contributor](/azure/role-based-access-control/built-in-roles#contributor) | Needed to use Azure policy assignment(s) to [ensure Arc-enabled machines are associated to a DCR](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd5c37ce1-5f52-4523-b949-f19bf945b73a) |
+| [Kubernetes Extension Contributor](/azure/role-based-access-control/built-in-roles#kubernetes-extension-contributor) | Needed to deploy the K8s extension for Container Insights |
+
+### Environment setup
+
+The scripts included with these instructions can be copied to and run from
+[Azure Cloud Shell](/azure/cloud-shell/overview) in the Azure portal, or from any Linux command
+prompt where the Kubernetes command-line tool (kubectl) and the Azure CLI are installed.
+See these [instructions](/azure-stack/aks-hci/create-aks-hybrid-preview-cli#connect-to-the-aks-hybrid-cluster)
+for connecting to the AKS-Hybrid cluster.
+
+Prior to running any of the included scripts, the following environment variables must be properly defined:
+
+| Environment Variable | Description |
+| --- | --- |
+| SUBSCRIPTION_ID | The ID of the Azure subscription that contains the resource group |
+| RESOURCE_GROUP | The resource group name where Arc-enabled server and associated resources will be created |
+| LOCATION | The Azure Region where the Arc-enabled servers and associated resources will be created |
+| SERVICE_PRINCIPAL_ID | The appId of the Azure service principal with appropriate role assignment(s) |
+| SERVICE_PRINCIPAL_SECRET | The authentication password for the Azure service principal |
+| TENANT_ID | The ID of the tenant directory where the service principal exists |
+
+For convenience, the template file `arc-connect.env` can be modified and used to set the environment variable values.
+
+```bash
+# Source the file so the modified values apply to the current shell
+. ./arc-connect.env
+```
+
+### Adding a data collection rule (DCR)
+
+Arc-enabled servers must be associated with a DCR to enable the collection of log data into a Log Analytics workspace.
+The DCR may be created via the Azure portal or the CLI. More information on creating a DCR to collect data from the VMs can be found [here](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent).
+
+For convenience, a **`dcr.sh`** script is included; it creates a DCR in the specified resource group and configures log collection.
+
+1. Ensure proper [environment setup](#environment-setup) and role [prerequisites](#prerequisites-vm) for the service principal. The DCR will be created in the specified resource group.
+
+2. Create or identify a Log Analytics workspace where log data will be ingested per the DCR.
+Set an environment variable, `LAW_RESOURCE_ID`, to its resource ID.
+If the Log Analytics workspace name within the resource group is known, the resource ID can be found using the following command:
+
+```bash
+export LAW_RESOURCE_ID=$(az monitor log-analytics workspace show -g "${RESOURCE_GROUP}" -n <law name> --query id -o tsv)
+```
+
+3. Run the `dcr.sh` script. It creates a DCR in the specified resource group named `${RESOURCE_GROUP}-syslog-dcr`:
+
+```bash
+./dcr.sh
+```
+
+The DCR may now be viewed and managed from the Azure portal or [CLI](/azure/monitor/data-collection/rule).
+By default, the Linux Syslog log level is set to "INFO" but may be updated as needed.
+
+**Note:** If the policy assignment is set up after the Arc-enabled servers are connected, the existing server can be added manually to the DCR or via a policy [remediation task](/azure/governance/policy/how-to/remediate-resources#create-a-remediation-task).
+
+### Associate Arc-enabled server resources to DCR
+
+Once a DCR is created, associate the Arc-enabled server resources to the DCR for logs to flow to the Log Analytics workspace.
+There are two options for making the association:
+
+#### Use Azure portal or CLI to associate selected Arc-enabled servers to DCR
+
+In the Azure portal, add Arc-enabled server resources to the DCR by using its **Resources** section.
+
+Use this [link](/cli/azure/monitor/data-collection/rule/association#az-monitor-data-collection-rule-association-create)
+for information about associating the resources via the Azure CLI.
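+
+As a sketch, a single Arc-enabled server can be associated with the DCR created earlier; the association name here is illustrative, and the resource IDs reuse the environment variables from [Environment setup](#environment-setup):
+
+```azurecli
+# Associate one Arc-enabled server with the ${RESOURCE_GROUP}-syslog-dcr rule
+az monitor data-collection rule association create \
+  --name "syslog-dcr-association" \
+  --rule-id "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Insights/dataCollectionRules/${RESOURCE_GROUP}-syslog-dcr" \
+  --resource "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.HybridCompute/machines/<arc-server-name>"
+```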
+
+#### Use Azure policy to manage DCR associations
+
+Another mechanism to associate all Arc-enabled servers within a resource group to the same DCR is to assign a policy to the resource group that enforces the association.
+There's a built-in policy definition, [Configure Linux Arc Machines to be associated with a Data Collection Rule](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd5c37ce1-5f52-4523-b949-f19bf945b73a), which can be assigned to the resource group with a specified DCR as a parameter.
+It ensures that all Arc-enabled servers within the resource group are associated with the same DCR.
+
+This association can be done in the Azure portal by selecting the Assign button from the [policy definition](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd5c37ce1-5f52-4523-b949-f19bf945b73a) page.
+
+For convenience, the **`assign.sh`** script is provided, which will assign the built-in policy to the specified resource group and DCR created with the **`dcr.sh`** script.
+
+1. Ensure proper [environment setup](#environment-setup) and role [prerequisites](#prerequisites-vm) for the service principal to do policy and role assignments.
+2. Ensure the DCR is created in the resource group using the **`dcr.sh`** script as described in the [Adding a data collection rule](#adding-a-data-collection-rule-dcr) section.
+3. Run the **`assign.sh`** script. It creates the policy assignment and the necessary role assignments.
+
+```bash
+./assign.sh
+```
+
+### Connecting Arc-enabled servers and installing the Azure Monitoring Agent
+
+The included **`install.sh`** script can be used to Arc-enroll all server VMs that represent the nodes of the AKS-Hybrid cluster.
+This script creates a Kubernetes daemonSet on the AKS-Hybrid cluster.
+It deploys a pod to each cluster node, connecting each VM to Arc-enabled servers and installing the Azure Monitoring Agent.
+The daemonSet also includes a liveness probe that monitors the server connection and AMA processes.
+
+1. Ensure proper environment setup as specified in [Environment Setup](#environment-setup). The current `kubeconfig` context must be set for the AKS-Hybrid cluster for which VMs are to be connected.
+2. Kubectl access may be required to the provisioned cluster when configuring the extension. `KUBECONFIG` can be retrieved by running the Azure CLI command:
+
+```azurecli
+az hybridaks proxy --resource-group <AKS-Hybrid Cluster Resource Group> --name <AKS-Hybrid Cluster Name> --file <kube-config-filename> &
+```
+
+3. Set the `kubeconfig` file for using kubectl:
+
+```bash
+export KUBECONFIG=<path-to-kube-config-file>
+```
+
+4. Run the **`install.sh`** script from the command prompt with kubectl access to the AKS-Hybrid cluster.
+
+The script will deploy the daemonSet to the cluster. Connection progress can be monitored as follows:
+
+```bash
+# Run the install script and observe results
+./install.sh
+kubectl get pod --selector='name=haks-vm-telemetry'
+kubectl logs <podname>
+```
+
+Once complete, the message "Server monitoring configured successfully" will appear in the logs.
+At that point the Arc-enabled servers will appear as resources within the selected resource group.
+
+**Note:** Associate these connected servers to the [DCR](#associate-arc-enabled-server-resources-to-dcr). If a policy assignment was configured, there may be a delay to see logs.
+
+## Monitoring AKS-Hybrid – K8s layer
+
+#### Prerequisites-Kubernetes
+
+The operator should ensure certain prerequisites are met before configuring the monitoring tools on AKS-Hybrid clusters.
+
+Container Insights stores its data in a [Log Analytics workspace](/azure/azure-monitor/logs/log-analytics-workspace-overview).
+A Log Analytics workspace resource ID is either provided during the [cluster extension installation](#installing-the-cluster-extension),
+or data funnels into a default Log Analytics workspace in the Azure subscription's associated default resource group (based on the Azure location).
+
+An example for East US may look like the following:
+
+- Log Analytics workspace Name: DefaultWorkspace-\<GUID>-EUS
+- Resource group name: DefaultResourceGroup-EUS
+
+A pre-existing _Log Analytics workspace Resource ID_ can be found by running the following Azure CLI commands:
+
+```azurecli
+az login
+
+az account set --subscription "<Subscription Name or ID the Log Analytics workspace is in>"
+
+az monitor log-analytics workspace show --workspace-name "<Log Analytics workspace Name>" \
+ --resource-group "<Log Analytics workspace Resource Group>" \
+ -o tsv --query id
+```
+
+If the account running the commands doesn't have the "Contributor" role assignment,
+assign the following roles, as needed, so the account (mechanized or not) can deploy
+Container Insights and view data in the applicable Log Analytics workspace
+(instructions on how to assign roles can be found
+[here](/azure/role-based-access-control/role-assignments-steps#step-5-assign-role)); a CLI sketch follows the list:
+
+- [Log Analytics Contributor](/azure/azure-monitor/logs/manage-access?tabs=portal#azure-rbac) role: necessary permissions to enable container monitoring on a CNF (provisioned) cluster.
+- [Log Analytics Reader](/azure/azure-monitor/logs/manage-access?tabs=portal#azure-rbac) role: non-members of the Log Analytics Contributor role receive permissions to view data in the Log Analytics workspace once container monitoring is enabled.
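+
+A minimal sketch of assigning one of these roles with the Azure CLI; the assignee and scope are illustrative placeholders:
+
+```azurecli
+# Grant Log Analytics Contributor on the workspace (placeholder values)
+az role assignment create \
+  --assignee "<user-or-service-principal-object-id>" \
+  --role "Log Analytics Contributor" \
+  --scope "<Log Analytics workspace Resource ID>"
+```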
+
+#### Installing the cluster extension
+
+Sign in to the [Azure Cloud Shell](/azure/cloud-shell/overview) to access the cluster:
+
+```azurecli
+az login
+
+az account set --subscription "<Subscription Name or ID the Provisioned Cluster is in>"
+```
+
+Once the subscription is set, deploy the Container Insights extension on a provisioned AKS-Hybrid cluster using either of the next two commands:
+
+#### With a customer pre-created Log Analytics workspace
+
+```azurecli
+az k8s-extension create --name azuremonitor-containers \
+ --cluster-name "<Provisioned Cluster Name>" \
+ --resource-group "<Provisioned Cluster Resource Group>" \
+ --cluster-type provisionedclusters \
+ --cluster-resource-provider "microsoft.hybridcontainerservice" \
+ --extension-type Microsoft.AzureMonitor.Containers \
+ --release-train preview \
+ --configuration-settings logAnalyticsWorkspaceResourceID="<Log Analytics workspace Resource ID>" \
+ amalogsagent.useAADAuth=true
+```
+
+#### Using the default Log Analytics workspace
+
+```azurecli
+az k8s-extension create --name azuremonitor-containers \
+ --cluster-name "<Provisioned Cluster Name>" \
+ --resource-group "<Provisioned Cluster Resource Group>" \
+ --cluster-type provisionedclusters \
+ --cluster-resource-provider "microsoft.hybridcontainerservice" \
+ --extension-type Microsoft.AzureMonitor.Containers \
+ --release-train preview \
+ --configuration-settings amalogsagent.useAADAuth=true
+```
+
+#### Cluster extension validation
+
+The successful enablement of the monitoring agents on AKS-Hybrid clusters can be validated using the following command:
+
+```azurecli
+az k8s-extension show --name azuremonitor-containers \
+ --cluster-name "<Provisioned Cluster Name>" \
+ --resource-group "<Provisioned Cluster Resource Group>" \
+ --cluster-type provisionedclusters \
+ --cluster-resource-provider "microsoft.hybridcontainerservice"
+```
+
+Look for a provisioning state of "Succeeded" for the extension. The `az k8s-extension create` command may have also returned the status.
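+
+To check just that field, the same command can be filtered with a JMESPath query; a sketch, assuming `provisioningState` as the standard ARM property name:
+
+```azurecli
+# Print only the provisioning state of the Container Insights extension
+az k8s-extension show --name azuremonitor-containers \
+  --cluster-name "<Provisioned Cluster Name>" \
+  --resource-group "<Provisioned Cluster Resource Group>" \
+  --cluster-type provisionedclusters \
+  --cluster-resource-provider "microsoft.hybridcontainerservice" \
+  --query provisioningState -o tsv
+```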
+
+#### Customize logs & metrics collection
+
+Container Insights provides end users the ability to fine-tune the collection of logs and metrics from AKS-Hybrid clusters. See [Configure Container insights agent data collection](/azure/azure-monitor/containers/container-insights-agent-config).
+
+## Additional resources
+
+- Review the [workbooks documentation](/azure/azure-monitor/visualize/workbooks-overview), and then use the [sample Operator Nexus workbooks](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services) for Operator Nexus telemetry.
+- Review [Azure Monitor Alerts](/azure/azure-monitor/alerts/alerts-overview), how to create [Azure Monitor Alert rules](/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=metric), and use the [sample Operator Nexus Alert templates](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services).
operator-nexus Howto Monitoring Virtualized Network Functions Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-monitoring-virtualized-network-functions-virtual-machines.md
+
+ Title: "Azure Operator Nexus: Monitoring of Virtualized Network Function Virtual Machines"
+description: How-to guide for setting up monitoring of Virtualized Network Function Virtual Machines on Operator Nexus.
+Last updated: 02/01/2023
+# Monitoring virtual machines (for virtualized network function)
+
+This section discusses the optional tooling available for telecom operators to monitor Virtualized Network Function (VNF) workloads. With the Azure Monitoring Agent (AMA), logs and performance metrics can be collected from the virtual machines (VMs) running VNFs. One of the prerequisites for AMA is Arc connectivity back to Azure (using Azure Arc for servers).
+
+## Extension onboarding with CLI using managed identity auth
+
+When enabling the monitoring agents on VMs via the CLI, make sure the appropriate versions of the following components are installed:
+
+- azure-cli: 2.39.0+
+- azure-cli-core: 2.39.0+
+- Resource-graph: 2.1.0+
+
+See the documentation for getting started with the [Azure CLI](/cli/azure/get-started-with-azure-cli), installing it on [multiple operating systems](/cli/azure/install-azure-cli), and installing [CLI extensions](/cli/azure/azure-cli-extensions-overview).
+
+## Arc connectivity
+
+Azure Arc-enabled servers let you manage Linux physical servers and virtual machines hosted outside of Azure, such as in an on-premises cloud environment like Operator Nexus. A hybrid machine is any machine not running in Azure. When a hybrid machine is connected to Azure, it becomes a connected machine and is treated as a resource in Azure. Each connected machine has a resource ID enabling the machine to be included in a resource group.
+
+### Prerequisites
+
+Before you start, be sure to review the [prerequisites](/azure/azure-arc/servers/prerequisites) and verify that your subscription and resources meet the requirements.
+Some of the prerequisites are:
+
+- Your VNF VM is connected to CloudServicesNetwork (the network that the VM uses to communicate with Operator Nexus services).
+- You have SSH access to your VNF VM.
+- Proxies & wget install:
+ - Ensure wget is installed.
+ - To set the proxy as an environment variable run:
+
+```azurecli
+echo "http\_proxy=http://169.254.0.11:3128" \>\> /etc/environment
+echo "https\_proxy=http://169.254.0.11:3128" \>\> /etc/environment
+```
+
+- You have appropriate permissions on the VNF VM to run scripts, install package dependencies, and so on. For more information, see [required permissions](/azure/azure-arc/servers/prerequisites#required-permissions).
+- To use Azure Arc-enabled servers, the following Azure resource providers must be registered in your subscription:
+ - Microsoft.HybridCompute
+ - Microsoft.GuestConfiguration
+ - Microsoft.HybridConnectivity
+
+If these resource providers aren't already registered, you can register them using the following commands:
+
+```azurecli
+az account set --subscription "{Your Subscription Name}"
+
+az provider register --namespace 'Microsoft.HybridCompute'
+
+az provider register --namespace 'Microsoft.GuestConfiguration'
+
+az provider register --namespace 'Microsoft.HybridConnectivity'
+```
+
+### Deployment
+
+You can Arc connect servers in your environment by performing a set of steps manually, or you can use an automated method: running a template script that automates the download and installation of the agent and connects the VNF VM to Azure.
+
+This method requires that you have administrator permissions on the machine to install and configure the agent. On a Linux machine, you deploy the required agentry by using the root account.
+
+The script to automate the download and installation, and to establish the connection with Azure Arc, is available from the Azure portal. To complete the process, take the following steps:
+
+1. From your browser, go to the [Azure portal](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/HybridVmAddBlade).
+2. On the **Select a method** page, select the **Add a single server** tile, and then select **Generate script**.
+3. On the **Prerequisites** page, select **Next**.
+4. On the **Resource details** page, provide the following information:
+
+   - In the **Subscription** drop-down list, select the subscription the machine will be managed in.
+   - In the **Resource group** drop-down list, select the resource group the machine will be managed from.
+   - In the **Region** drop-down list, select the Azure region to store the server's metadata.
+   - In the **Operating system** drop-down list, select the operating system of your VNF VM.
+   - If the machine is communicating through a proxy server to connect to the internet, specify the proxy server IP address. If a name and port number are used, specify that information.
+
+5. Select **Next: Tags**.
+6. On the **Tags** page, review the default **Physical location tags** suggested and enter a value, or specify one or more **Custom tags** to support your standards.
+7. Select **Next: Download and run script**.
+8. On the **Download and run script** page, review the summary information, and then select **Download**. If you still need to make changes, select **Previous**.
+
+**Note:**
+
+1. Set the exit-on-error flag at the top of the script so that it fails fast rather than reporting false success at the end. For example, in a shell script, add `set -e` at the top.
+2. Add `export http_proxy=<PROXY_URL>` and `export https_proxy=<PROXY_URL>` to the script along with the other export statements in the Arc connectivity script (proxy IP: `169.254.0.11:3128`).
+
+To deploy the `azcmagent` on the server, sign in to the server with an account that has root access. Change to the folder that you copied the script to, and execute it on the server by running `./OnboardingScript.sh`.
+
+If the agent fails to start after setup is finished, check the logs for detailed error information. The log directory is `/var/opt/azcmagent/log`.
+
+After you install the agent and configure it to connect to Azure Arc-enabled servers, verify that the server is successfully connected at [Azure portal](https://aka.ms/hybridmachineportal).
+
+<!-- IMG ![Sample Arc-Enrolled VM](Docs/media/sample-arc-enrolled-vm.png) -->
+
+Figure: Sample Arc-Enrolled VM
+
+### Troubleshooting
+
+**Note:** If you see errors while running the script, fix the errors and rerun the script before moving to the next steps.
+
+Some common reasons for errors:
+
+1. You don't have the required permissions on the VM.
+2. The wget package isn't installed on the VM.
+3. The script fails to install package dependencies because the proxy doesn't have the required domains added to its allowed URLs. For example, on Ubuntu, apt fails to install dependencies because it can't reach ".ubuntu.com". Add the required egress endpoints to the proxy.
+
+## Azure monitor agent
+
+The Azure Monitor Agent is implemented as an [Azure VM extension](/azure/virtual-machines/extensions/overview)
+on Arc-connected machines. The linked documentation also lists the options to create [associations with Data Collection Rules](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent)
+that define which data the agent should collect. Installing, upgrading, or uninstalling the Azure Monitor Agent
+doesn't require you to restart your server.
+
+Ensure that you configure collection of logs and metrics using the Data Collection Rule.
+
+<!-- IMG ![DCR adding source](Docs/media/data-collection-rules-adding-source.png) -->
+
+Figure: DCR adding source
+
+**Note:** The metrics configured with the DCR should have their destination set to a Log Analytics workspace, because sending them to Azure Monitor Metrics isn't supported yet.
+
+<!-- IMG ![DCR adding destination](Docs/media/data-collection-rules-adding-destination.png) -->
+
+Figure: DCR adding destination
+
+### Prerequisites
+
+The following prerequisites must be met prior to installing the Azure Monitor Agent:
+
+- **Permissions**: For methods other than using the Azure portal, you must have the following role assignments to install the agent:
+
+| **Built-in role** | **Scopes** | **Reason** |
+| --- | --- | --- |
+| [Virtual Machine Contributor](/azure/role-based-access-control/built-in-roles#virtual-machine-contributor), [Azure Connected Machine Resource Administrator](/azure/role-based-access-control/built-in-roles#azure-connected-machine-resource-administrator) | Azure Arc-enabled servers | To deploy the agent |
+| Any role that includes the action _Microsoft.Resources/deployments/_\* | Subscription and/or resource group | To deploy Azure Resource Manager templates |
+
+### Installing Azure Monitoring Agent
+
+Once the virtual machines are Arc-connected, create a local file named `settings.json` from your [Azure Cloud Shell](/azure/cloud-shell/overview) to provide the proxy information:
+
+<!-- IMG ![Settings.json file](Docs/media/azure-monitor-agent-settings.png) -->
+
+Figure: settings.json file
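+
+Because the screenshot isn't reproduced here, the following is a minimal sketch of creating such a file, assuming the Azure Monitor Agent application-proxy settings format and the same proxy address used earlier in this guide; verify the schema against the AMA proxy documentation:
+
+```bash
+# Hypothetical settings.json; the proxy address matches the Operator Nexus proxy above
+cat > settings.json <<'EOF'
+{
+  "proxy": {
+    "mode": "application",
+    "address": "http://169.254.0.11:3128",
+    "auth": "false"
+  }
+}
+EOF
+```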
+
+Then use the following command to install the Azure Monitoring agent on these Azure Arc-enabled servers:
+
+```azurecli
+az connectedmachine extension create --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --type-handler-version "1.21.1" --settings settings.json
+```
+
+To collect data from virtual machines by using Azure Monitor Agent, you'll need to:
+
+1. Create [Data Collection Rules (DCRs)](/azure/azure-monitor/essentials/data-collection-rule-overview) that define which data Azure Monitor Agent sends to which destinations.
+
+2. Associate the Data Collection Rule to specific Virtual Machines.
+
+#### Data Collection Rule via Portal
+
+The steps to create a DCR and associate it to a Log Analytics Workspace can be found [here](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent?tabs=portal#create-data-collection-rule-and-association).
+
+Lastly, verify that you're getting the logs in the specified Log Analytics workspace.
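+
+One way to spot-check from the CLI (a sketch; syslog collected by the agent lands in the `Syslog` table, and the workspace is identified here by its workspace GUID, not its resource ID):
+
+```azurecli
+# Pull a few recent Syslog records from the workspace
+az monitor log-analytics query \
+  --workspace "<Log Analytics workspace GUID>" \
+  --analytics-query "Syslog | take 10" -o table
+```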
+
+#### Data collection rule via CLI
+
+The following commands create a DCR and associate it to enable the collection of logs and metrics from these virtual machines.
+
+**Create DCR:**
+
+```azurecli
+az monitor data-collection rule create --name \<name-for-dcr\> --resource-group \<resource-group-name\> --location \<location-for-dcr\> --rule-file \<rules-file\> [--description] [--tags]
+```
+
+An example rules-file:
+
+<!-- IMG ![Sample DCR rule file](Docs/media/sample-data-collection-rule.png) -->
+
+Figure: Sample DCR rule file
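+
+Because the sample file isn't reproduced here, the following sketch shows the general shape of a rules file with a syslog source flowing to a Log Analytics workspace; the property values are illustrative, so validate them against the data collection rule schema:
+
+```bash
+# Hypothetical rules file for --rule-file (all values are placeholders)
+cat > rules.json <<'EOF'
+{
+  "properties": {
+    "dataSources": {
+      "syslog": [
+        {
+          "name": "syslogDataSource",
+          "streams": ["Microsoft-Syslog"],
+          "facilityNames": ["auth", "syslog", "daemon"],
+          "logLevels": ["Info", "Notice", "Warning", "Error", "Critical", "Alert", "Emergency"]
+        }
+      ]
+    },
+    "destinations": {
+      "logAnalytics": [
+        {
+          "name": "centralWorkspace",
+          "workspaceResourceId": "<Log Analytics workspace Resource ID>"
+        }
+      ]
+    },
+    "dataFlows": [
+      {
+        "streams": ["Microsoft-Syslog"],
+        "destinations": ["centralWorkspace"]
+      }
+    ]
+  }
+}
+EOF
+```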
+
+For more information, see [az monitor data-collection rule create](/azure/monitor/data-collection/rule#az-monitor-data-collection-rule-create).
+
+**Associate DCR:**
+
+```azurecli
+az monitor data-collection rule association create --name \<name-for-dcr-association\> --resource \<connected-machine-resource-id\> --rule-id \<dcr-resource-id\> [--description]
+```
+
+For more information, see [az monitor data-collection rule association create](/azure/monitor/data-collection/rule/association#az-monitor-data-collection-rule-association-create).
+
+## Additional resources
+
+- Review the [workbooks documentation](/azure/azure-monitor/visualize/workbooks-overview), and then use the [sample Operator Nexus workbooks](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services) for Operator Nexus telemetry.
+- Review [Azure Monitor Alerts](/azure/azure-monitor/alerts/alerts-overview), how to create [Azure Monitor Alert rules](/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=metric), and use the [sample Operator Nexus Alert templates](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Operator%20Distributed%20Services).
operator-nexus Howto Precertification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-precertification.md
+
+ Title: How to pre-certify network functions on Azure Operator Nexus
+description: Overview of the pre-certification process for network function workloads on Operator Nexus
+Last updated: 01/30/2023
+# How to pre-certify network functions on Azure Operator Nexus
+
+The pre-certification of network functions accelerates deployment of network services.
+The pre-certified network functions can be managed like any other Azure resource.
+The lifecycle of these pre-certified network functions is managed by Azure Network Function Manager (ANFM) or an NFM of your choice.
+
+In this section, we'll describe the process and the steps for network function pre-certification.
+
+## Pre-certification of network function for operators
+
+The goal is to make available a catalog of network functions that
+conform to the Operator Nexus specifications. NF
+partners onboarding to the pre-certification program and the ANFM service won't be
+required to change their commercial licensing arrangements with operators.
+
+## Pre-certification process
+
+This section outlines the pre-certification process for Network Function deployment.
+Microsoft uses this process with Network Equipment Providers (NEP) that
+provide network function(s). This process guides the partner through
+onboarding the network function onto Operator Nexus and
+certifies the network function deployment methods using Azure deployment
+services. The goal of this program is to ensure that the partner's network function
+deployment process is predictable and repeatable on the Operator Nexus platform.
+Microsoft provides a pre-certification environment for the partners to validate
+the deployment of their network function. As a result, the partners'
+network functions will be published in the Microsoft catalog of
+network functions. This catalog will be available to operators using the Operator Nexus platform.
+
+If the NF partner is interested in listing their offer in the Azure Marketplace,
+Microsoft will work with the partner to enable this offering in the marketplace.
+
+### Azure Network Function Manager
+
+The Azure [Network Function Manager (ANFM)](/azure/network-function-manager/overview)
+provides a cloud native orchestration and managed experience for
+pre-certified network functions (from the Azure Marketplace). The ANFM
+provides consistent Azure managed applications experience for network functions.
+
+### Pre-certification steps
+
+Here are the steps of the NF deployment pre-certification:
+
+<!-- IMG ![Pre-Certification Process](Docs/media/network-function-manager-precert-process.png) -->
+
+Figure: Pre-Certification (precert) Process
+
+## Prerequisites and process for partner on-boarding to the pre-cert lab
+
+To ensure an efficient and effective onboarding process for the partner, there are prerequisites to pre-certification lab entry.
+
+1. The partners start the Azure Marketplace agreement and [create a partner
+ center account](/azure/marketplace/create-account).
+ The partner can then publish the network function
+ offers in the marketplace. The marketplace agreement doesn't have to be
+ completed prior to precert lab entry. However, it's an important step before the
+ helm charts and images on-boarded to Azure Network Function Manager (ANFM)
+ service are added to the pre-certified catalog.
+
+2. Microsoft will conduct several sessions on key topics with the partner:
+
+ a. Technical discussions describing the Operator Nexus architecture with focus on run time specification:
+
+ - Compute dimensions for Kubernetes master and worker nodes, memory, storage requirements, and compute capabilities
+ - NUMA alignment
+ - huge page support
+ - hyperthreading
+ - VM Networking/Kubernetes networking requirements:
+ - SR-IOV
+ - DPDK
+ - Review `cloudinit` support for VM based network functions
+ - Microsoft AKS-Hybrid support for tenant workloads, CNI versions for Calico and Multus
+
+ b. The Operator Nexus platform includes a managed fabric automation service. With an agreement from the partner regarding the network function requirements, Microsoft will engage with the partner and review:
+
+ - the network fabric architecture
+ - fabric automation APIs for the creation of L2/L3 isolation-domains
+ - L3 route policies that extend the network connectivity from the node to the TOR/CE router.
+
+ The fabric deep dive sessions will identify the peering requirements, route policies, and filters that need to be configured in the fabric for testing the network function.
+
+ c. Microsoft will work with the NEPs to onboard the helm charts and container images (CNFs) or VM images (VNFs) to the Azure Network Function Manager service (ANFM). Microsoft will consult with the partner to validate the supported versions of the helm charts for deployment using the ANFM service.
+
+ d. Microsoft will work with the partner to identify test tool requirements for the specific network function and prepare the lab prior to entry. Microsoft will provide basic traffic simulation tools such as Developing Solutions' dsTest or Spirent Landslide in the precert lab. The partners can also deploy other test tools of their choice during the precert testing.
+
+3. On reviewing the requirements (2a-2d) and prior to smoke testing, Microsoft will work with the partners to define Azure Resource Manager (ARM) templates:
+
+ - for network fabric automation components: L2/L3 isolation-domains and, if any, L3 route policies for the end-to-end test set-up
+ - for AKS-Hybrid (CNF), VM instance (VNF), workload networks and management networks that describe subcluster networking
+ - onboard the helm charts and images to ANFM service and create a vendor image version for deployment into precert lab
+ - for deployment of NFs using ANFM, with the user data configuration properties for container pods/VMs
+ - for deployment of NEP specific config management application
+ - for deployment of other CNF/VNF test tools.
+
+### Operator Nexus technical specification documents provided to NF partners
+
+As a part of technical engagement, the documents that Microsoft will provide to the partner are:
+
+- Operator Nexus runtime specification document and the Azure Resource Manager (ARM) API specification document for creating AKS-Hybrid clusters (CNFs) and VM instances (VNFs) on an Operator Nexus cluster
+- Azure Resource Manager API specification document for creating fabric automation components that are required for tenant networking.
+- ARM API specification document for onboarding to ANFM service. The specification will also define the customer facing APIs for deployment of network functions using the ANFM service.
+
+### Scheduling/partner engagement
+
+The Network Function deployment pre-certification lab will be a shared
+infrastructure where multiple partners will be testing simultaneously. Based
+on the available capacity in the lab, Microsoft will allocate resources for
+various partners to complete the Network Function deployment pre-certification activities in a timely and limited window.
+
+If the partner has a dedicated Operator Nexus environment in their facility, Microsoft will work with them
+to enable an updated version of the Operator Nexus software to complete Network Function deployment pre-certification.
+
+#### Deployment testing
+
+##### Scope of testing in the pre-certification lab
+
+With the ARM templates defined in the previous section, Microsoft will work
+with the partners to identify the appropriate lab environment to perform the
+testing. Microsoft will enable the appropriate Subscription ID and Resource
+Groups so that the partners can deploy all the resources from the Azure
+Portal/CLI into the target lab environment. Microsoft will also enable jump box
+access for the partners to perform troubleshooting or remote connectivity to
+the test tools/config management tools. The following verification will be
+performed by the partners and results reviewed by Microsoft:
+
+1. Verify that the resources in the ARM template, corresponding to the tenant
+ cluster definition, include:
+
+ - AKS-Hybrid cluster/ VM instance
+ - managed fabric resources for isolation-domain and L3 route policies
+ - workload networks and management network resources for Kubernetes networks/ VM networks
+
+2. Verify the basic network connectivity between all the end points of test
+ set-up is working.
+3. After the tenant cluster set-up is validated, verify that the
+ ANFM network function is working as designed. Verify that the
+ container pods are in a running state.
+4. Verify that NF works with supported versions of Kubernetes, CNI versions.
+5. Verify that the helm-based upgrade operations on the NF application using
+ ANFM service is working as designed.
+6. Perform interface testing after the NF has been deployed and configured. This testing will validate that the networking is working as designed. And it validates Kubernetes cluster's internal network and connection to a source/sink (simulator).
+7. Perform a low-volume traffic simulation on the application.
+To validate the internal routing,
+ deploy the test tool application inside the Operator Nexus cluster.
+To test the routing across the CE, based on the application characteristics,
+ deploy the test tool application outside the Operator Nexus cluster.
+8. Azure PaaS integration (optional): Microsoft precert environment will be
+ connected to an Azure region using ExpressRoute. The NEP partner can also integrate
+ Azure PaaS services with their application. They can verify that the PaaS functionality is working as designed.
+9. After testing is complete, delete NF resources
+   in the ANFM service and verify that the container pods are removed from the
+   subcluster.
+10. Verify all the tenant cluster and fabric components are deleted. Validate that deleting the network fabric
+    resources removes the corresponding configuration on the network
+    devices.
+
+##### Testing tools in pre-certification lab
+
+Microsoft will provide basic traffic simulation tools such as Developing Solutions' dsTest or Spirent Landslide in the precert lab. These tools can be used to validate packet flow patterns for the network function. The partners can also deploy other test tools of their choice during the precert testing.
+
+#### Test results from deployment precertification testing
+
+Microsoft will review the test results, provided by the partner, for the
+application being precertified. The
+objective of the Network Function Deployment Precertification process is to
+ensure that the test cases defined and the test results produced comprehensively
+validate the deployment of the application on the Operator Nexus platform. For interface
+testing using a test tool such as Spirent, Microsoft will work with the
+partners to identify the test scenarios. After the deployment validation and smoke
+testing are completed in the precertification lab, Microsoft will review
+the test results with the NEPs to confirm that the test results meet the scope
+of deployment pre-certification.
+Microsoft will then work with the partner to graduate the NF application to the
+ANFM service catalog. The partners are free to share the results of the
+pre-certification/re-certification testing.
+
+#### Recertification
+
+Microsoft will enable the preview/update versions of the Operator Nexus platform and ANFM
+service releases in precert lab. Microsoft will coordinate with partners to
+recertify.
operator-nexus List Of Metrics Collected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/list-of-metrics-collected.md
+
+ Title: List of Metrics Collected in Azure Operator Nexus.
+description: List of metrics collected in Azure Operator Nexus.
+Last updated: 02/03/2023
+# List of metrics collected in Azure Operator Nexus
+
+This section provides the list of metrics collected from the different components.
+
+- [API server](#api-server)
+- [coreDNS](#coredns)
+- [Containers](#containers)
+- [etcd](#etcd)
+- [Felix](#felix)
+- [Kubernetes Services](#kubernetes-services)
+- [kubevirt](#kubevirt)
+- [Node (servers)](#node-servers)
+- [Pure Storage](#pure-storage)
+- [Typha](#typha)
+
+## API server
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
+|---|---|---|---|---|---|---|
+| apiserver_audit_requests_rejected_total | Apiserver | Count | Average | Counter of apiserver requests rejected due to an error in audit logging backend. | Cluster, Node | Yes |
+| apiserver_client_certificate_expiration_seconds_sum | Apiserver | Second | Sum | Distribution of the remaining lifetime on the certificate used to authenticate a request. | Cluster, Node | Yes |
+| apiserver_storage_data_key_generation_failures_total | Apiserver | Count | Average | Total number of failed data encryption key(DEK) generation operations. | Cluster, Node | Yes |
+| apiserver_tls_handshake_errors_total | Apiserver | Count | Average | Number of requests dropped with 'TLS handshake error from' error | Cluster, Node | Yes |
+
+## coreDNS
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
+|---|---|---|---|---|---|---|
+| coredns_dns_requests_total | DNS Requests | Count | Average | total query count | Cluster, Node, Protocol | Yes |
+| coredns_dns_responses_total | DNS response/errors | Count | Average | response per zone, rcode and plugin. | Cluster, Node, Rcode | Yes |
+| coredns_health_request_failures_total | DNS Health Request Failures | Count | Average | The number of times the internal health check loop failed to query | Cluster, Node | Yes |
+| coredns_panics_total | DNS panic | Count | Average | total number of panics | Cluster, Node | Yes |
+
+## Containers
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
+|---|---|---|---|---|---|---|
+| container_fs_io_time_seconds_total | Containers - Filesystem | Second | Average | Cumulative count of seconds spent doing I/Os | Cluster, Node, Pod+Container+Interface | Yes |
+| container_memory_failcnt | Containers - Memory | Count | Average | Number of memory usage hits limits | Cluster, Node, Pod+Container+Interface | Yes |
+| container_memory_usage_bytes | Containers - Memory | Byte | Average | Current memory usage, including all memory regardless of when it was accessed | Cluster, Node, Pod+Container+Interface | Yes |
+| container_oom_events_total | Container OOM Events | Count | Average | Count of out of memory events observed for the container | Cluster, Node, Pod+Container | Yes |
+| container_start_time_seconds | Containers - Start Time | Second | Average | Start time of the container since unix epoch | Cluster, Node, Pod+Container+Interface | Yes |
+| container_tasks_state | Containers - Task state | Labels | Average | Number of tasks in given state | Cluster, Node, Pod+Container+Interface, State | Yes |
+
+## etcd
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
+|---|---|---|---|---|---|---|
+| etcd_disk_backend_commit_duration_seconds_sum | Etcd Disk | Second | Average | The latency distributions of commit called by backend. | Cluster, Pod | Yes |
+| etcd_disk_wal_fsync_duration_seconds_sum | Etcd Disk | Second | Average | The latency distributions of fsync called by wal | Cluster, Pod | Yes |
+| etcd_server_is_leader | Etcd Server | Labels | Average | Whether node is leader | Cluster, Pod | Yes |
+| etcd_server_is_learner | Etcd Server | Labels | Average | Whether node is learner | Cluster, Pod | Yes |
+| etcd_server_leader_changes_seen_total | Etcd Server | Count | Average | The number of leader changes seen. | Cluster, Pod, Tier | Yes |
+| etcd_server_proposals_committed_total | Etcd Server | Count | Average | The total number of consensus proposals committed. | Cluster, Pod, Tier | Yes |
+| etcd_server_proposals_applied_total | Etcd Server | Count | Average | The total number of consensus proposals applied. | Cluster, Pod, Tier | Yes |
+| etcd_server_proposals_failed_total | Etcd Server | Count | Average | The total number of failed proposals seen. | Cluster, Pod, Tier | Yes |
+
+## Felix
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
+|---|---|---|---|---|---|---|
+| felix_ipsets_calico | Felix | Count | Average | Number of active Calico IP sets. | Cluster, Node | Yes |
+| felix_cluster_num_host_endpoints | Felix | Count | Average | Total number of host endpoints cluster-wide. | Cluster, Node | Yes |
+| felix_active_local_endpoints | Felix | Count | Average | Number of active endpoints on this host. | Cluster, Node | Yes |
+| felix_cluster_num_hosts | Felix | Count | Average | Total number of Calico hosts in the cluster. | Cluster, Node | Yes |
+| felix_cluster_num_workload_endpoints | Felix | Count | Average | Total number of workload endpoints cluster-wide. | Cluster, Node | Yes |
+| felix_int_dataplane_failures | Felix | Count | Average | Number of times dataplane updates failed and will be retried. | Cluster, Node | Yes |
+| felix_ipset_errors | Felix | Count | Average | Number of ipset command failures. | Cluster, Node | Yes |
+| felix_iptables_restore_errors | Felix | Count | Average | Number of iptables-restore errors. | Cluster, Node | Yes |
+| felix_iptables_save_errors | Felix | Count | Average | Number of iptables-save errors. | Cluster, Node | Yes |
+| felix_resyncs_started | Felix | Count | Average | Number of times Felix has started resyncing with the datastore. | Cluster, Node | Yes |
+| felix_resync_state | Felix | Count | Average | Current datastore state. | Cluster, Node | Yes |
+
+## Kubernetes services
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
+|---|---|---|---|---|---|---|
+| kube_daemonset_status_current_number_scheduled | Kube Daemonset | Count | Average | Number of Daemonsets scheduled | Cluster | Yes |
+| kube_daemonset_status_desired_number_scheduled | Kube Daemonset | Count | Average | Number of daemonset replicas desired | Cluster | Yes |
+| kube_deployment_status_replicas_ready | Kube Deployment | Count | Average | Number of deployment replicas present | Cluster | Yes |
+| kube_deployment_status_replicas_available | Kube Deployment | Count | Average | Number of deployment replicas available | Cluster | Yes |
+| kube_job_status_active | Kube job - Active | Labels | Average | Number of actively running jobs | Cluster, Job | Yes |
+| kube_job_status_failed | Kube job - Failed | Labels | Average | Number of failed jobs | Cluster, Job | Yes |
+| kube_job_status_succeeded | Kube job - Succeeded | Labels | Average | Number of successful jobs | Cluster, Job | Yes |
+| kube_node_status_allocatable | Node - Allocatable | Labels | Average | The amount of resources allocatable for pods | Cluster, Node, Resource | Yes |
+| kube_node_status_capacity | Node - Capacity | Labels | Average | The total amount of resources available for a node | Cluster, Node, Resource | Yes |
+| kube_node_status_condition | Kubenode status | Labels | Average | The condition of a cluster node | Cluster, Node, Condition, Status | Yes |
+| kube_pod_container_resource_limits | Pod container - Limits | Count | Average | The number of requested limit resource by a container. | Cluster, Node, Resource, Pod | Yes |
+| kube_pod_container_resource_requests | Pod container - Requests | Count | Average | The number of requested request resource by a container. | Cluster, Node, Resource, Pod | Yes |
+| kube_pod_container_state_started | Pod container - state | Second | Average | Start time in unix timestamp for a pod container | Cluster, Node, Container | Yes |
+| kube_pod_container_status_last_terminated_reason | Pod container - state | Labels | Average | Describes the last reason the container was in terminated state | Cluster, Node, Container, Reason | Yes |
+| kube_pod_container_status_ready | Container State | Labels | Average | Describes whether the containers readiness check succeeded | Cluster, Node, Container | Yes |
+| kube_pod_container_status_restarts_total | Container State | Count | Average | The number of container restarts per container | Cluster, Node, Container | Yes |
+| kube_pod_container_status_running | Container State | Labels | Average | Describes whether the container is currently in running state | Cluster, Node, Container | Yes |
+| kube_pod_container_status_terminated | Container State | Labels | Average | Describes whether the container is currently in terminated state | Cluster, Node, Container | Yes |
+| kube_pod_container_status_terminated_reason | Container State | Labels | Average | Describes the reason the container is currently in terminated state | Cluster, Node, Container, Reason | Yes |
+| kube_pod_container_status_waiting | Container State | Labels | Average | Describes whether the container is currently in waiting state | Cluster, Node, Container | Yes |
+| kube_pod_container_status_waiting_reason | Container State | Labels | Average | Describes the reason the container is currently in waiting state | Cluster, Node, Container, Reason | Yes |
+| kube_pod_deletion_timestamp | Pod Deletion Timestamp | Timestamp | NA | Unix deletion timestamp | Cluster, Pod | Yes |
+| kube_pod_init_container_status_ready | Init Container State | Labels | Average | Describes whether the init containers readiness check succeeded | Cluster, Node, Container | Yes |
+| kube_pod_init_container_status_restarts_total | Init Container State | Count | Average | The number of restarts for the init container | Cluster, Container | Yes |
+| kube_pod_init_container_status_running | Init Container State | Labels | Average | Describes whether the init container is currently in running state | Cluster, Node, Container | Yes |
+| kube_pod_init_container_status_terminated | Init Container State | Labels | Average | Describes whether the init container is currently in terminated state | Cluster, Node, Container | Yes |
+| kube_pod_init_container_status_terminated_reason | Init Container State | Labels | Average | Describes the reason the init container is currently in terminated state | Cluster, Node, Container, Reason | Yes |
+| kube_pod_init_container_status_waiting | Init Container State | Labels | Average | Describes whether the init container is currently in waiting state | Cluster, Node, Container | Yes |
+| kube_pod_init_container_status_waiting_reason | Init Container State | Labels | Average | Describes the reason the init container is currently in waiting state | Cluster, Node, Container, Reason | Yes |
+| kube_pod_status_phase | Pod Status | Labels | Average | The pods current phase | Cluster, Node, Container, Phase | Yes |
+| kube_pod_status_ready | Pod Status Ready | Count | Average | Describe whether the pod is ready to serve requests. | Cluster, Pod | Yes |
+| kube_pod_status_reason | Pod Status Reason | Labels | Average | The pod status reasons | Cluster, Node, Container, Reason | Yes |
+| kube_statefulset_replicas | Statefulset # of replicas | Count | Average | The number of desired pods for a statefulset | Cluster, Stateful Set | Yes |
+| kube_statefulset_status_replicas | Statefulset replicas status | Count | Average | The number of replicas per statefulsets | Cluster, Stateful Set | Yes |
+| controller_runtime_reconcile_errors_total | Kube Controller | Count | Average | Total number of reconciliation errors per controller | Cluster, Node, Controller | Yes |
+| controller_runtime_reconcile_total | Kube Controller | Count | Average | Total number of reconciliation per controller | Cluster, Node, Controller | Yes |
+| kubelet_running_containers | Containers - # of running | Labels | Average | Number of containers currently running | Cluster, node, Container State | Yes |
+| kubelet_running_pods | Pods - # of running | Count | Average | Number of pods that have a running pod sandbox | Cluster, Node | Yes |
+| kubelet_runtime_operations_errors_total | Kubelet Runtime Op Errors | Count | Average | Cumulative number of runtime operation errors by operation type. | Cluster, Node | Yes |
+| kubelet_volume_stats_available_bytes | Pods - Storage - Available | Byte | Average | Number of available bytes in the volume | Cluster, Node, Persistent Volume Claim | Yes |
+| kubelet_volume_stats_capacity_bytes | Pods - Storage - Capacity | Byte | Average | Capacity in bytes of the volume | Cluster, Node, Persistent Volume Claim | Yes |
+| kubelet_volume_stats_used_bytes | Pods - Storage - Used | Byte | Average | Number of used bytes in the volume | Cluster, Node, Persistent Volume Claim | Yes |
+
+## kubevirt
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
+|---|---|---|---|---|---|---|
+| kubevirt_info | Host | Labels | NA | Version information. | Cluster, Node | Yes |
+| kubevirt_virt_controller_leading | Kubevirt Controller | Labels | Average | Indication for an operating virt-controller. | Cluster, Pod | Yes |
+| kubevirt_virt_operator_ready | Kubevirt Operator | Labels | Average | Indication for a virt operator being ready | Cluster, Pod | Yes |
+| kubevirt_vmi_cpu_affinity | VM-CPU | Labels | Average | Details the cpu pinning map via boolean labels in the form of vcpu_X_cpu_Y. | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_actual_balloon_bytes | VM-Memory | Byte | Average | Current balloon size in bytes. | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_domain_total_bytes | VM-Memory | Byte | Average | The amount of memory in bytes allocated to the domain. The memory value in domain xml file | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_swap_in_traffic_bytes_total | VM-Memory | Byte | Average | The total amount of data read from swap space of the guest in bytes. | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_swap_out_traffic_bytes_total | VM-Memory | Byte | Average | The total amount of memory written out to swap space of the guest in bytes. | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_available_bytes | VM-Memory | Byte | Average | Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS does not initialize all assigned pages | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_unused_bytes | VM-Memory | Byte | Average | The amount of memory left completely unused by the system. Memory that is available but used for reclaimable caches should NOT be reported as free | Cluster, Node, VM | Yes |
+| kubevirt_vmi_network_receive_packets_total | VM-Network | Count | Average | Total network traffic received packets. | Cluster, Node, VM, Interface | Yes |
+| kubevirt_vmi_network_transmit_packets_total | VM-Network | Count | Average | Total network traffic transmitted packets. | Cluster, Node, VM, Interface | Yes |
+| kubevirt_vmi_network_transmit_packets_dropped_total | VM-Network | Count | Average | The total number of tx packets dropped on vNIC interfaces. | Cluster, Node, VM, Interface | Yes |
+| kubevirt_vmi_outdated_count | VMI | Count | Average | Indication for the total number of VirtualMachineInstance workloads that are not running within the most up-to-date version of the virt-launcher environment. | Cluster, Node, VM, Phase | Yes |
+| kubevirt_vmi_phase_count | VMI | Count | Average | Sum of VMIs per phase and node. | Cluster, Node, VM, Phase | Yes |
+| kubevirt_vmi_storage_iops_read_total | VM-Storage | Count | Average | Total number of I/O read operations. | Cluster, Node, VM, Drive | Yes |
+| kubevirt_vmi_storage_iops_write_total | VM-Storage | Count | Average | Total number of I/O write operations. | Cluster, Node, VM, Drive | Yes |
+| kubevirt_vmi_storage_read_times_ms_total | VM-Storage | Millisecond | Average | Total time (ms) spent on read operations. | Cluster, Node, VM, Drive | Yes |
+| kubevirt_vmi_storage_write_times_ms_total | VM-Storage | Millisecond | Average | Total time (ms) spent on write operations. | Cluster, Node, VM, Drive | Yes |
+| kubevirt_virt_controller_ready | Kubevirt Controller | Labels | Average | Indication for a virt-controller that is ready to take the lead. | Cluster, Pod | Yes |
+
+## Node (servers)
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
+|---|---|---|---|---|---|---|
+| node_boot_time_seconds | Node - Boot time | Second | Average | Unix time of last boot | Cluster, Node | Yes |
+| node_cpu_seconds_total | Node - CPU | Second | Average | CPU usage | Cluster, Node, CPU, Mode | Yes |
+| node_disk_read_time_seconds_total | Node - Disk - Read Time | Second | Average | Disk read time | Cluster, Node, Device | Yes |
+| node_disk_reads_completed_total | Node - Disk - Read Completed | Count | Average | Disk reads completed | Cluster, Node, Device | Yes |
+| node_disk_write_time_seconds_total | Node - Disk - Write Time | Second | Average | Disk write time | Cluster, Node, Device | Yes |
+| node_disk_writes_completed_total | Node - Disk - Write Completed | Count | Average | Disk writes completed | Cluster, Node, Device | Yes |
+| node_entropy_available_bits | Node - Entropy Available | Bits | Average | Available node entropy | Cluster, Node | Yes |
+| node_filesystem_avail_bytes | Node - Disk - Available (TBD) | Byte | Average | Available filesystem size | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_free_bytes | Node - Disk - Free (TBD) | Byte | Average | Free filesystem size | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_size_bytes | Node - Disk - Size | Byte | Average | Filesystem size | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_files | Node - Disk - Files | Count | Average | Total number of permitted inodes | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_files_free | Node - Disk - Files Free | Count | Average | Total number of free inodes | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_device_error | Node - Disk - FS Device error | Count | Average | Indicates whether there was a problem getting information for the filesystem | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_readonly | Node - Disk - Files Readonly | Count | Average | Indicates whether the filesystem is read-only | Cluster, Node, Mountpoint | Yes |
+| node_hwmon_temp_celsius | Node - temperature (TBD) | Celsius | Average | Hardware monitor for temperature | Cluster, Node, Chip, Sensor | Yes |
+| node_hwmon_temp_max_celsius | Node - temperature (TBD) | Celsius | Average | Hardware monitor for maximum temperature | Cluster, Node, Chip, Sensor | Yes |
+| node_load1 | Node - Memory | Second | Average | 1m load average. | Cluster, Node | Yes |
+| node_load15 | Node - Memory | Second | Average | 15m load average. | Cluster, Node | Yes |
+| node_load5 | Node - Memory | Second | Average | 5m load average. | Cluster, Node | Yes |
+| node_memory_HardwareCorrupted_bytes | Node - Memory | Byte | Average | Memory information field HardwareCorrupted_bytes. | Cluster, Node | Yes |
+| node_memory_MemAvailable_bytes | Node - Memory | Byte | Average | Memory information field MemAvailable_bytes. | Cluster, Node | Yes |
+| node_memory_MemFree_bytes | Node - Memory | Byte | Average | Memory information field MemFree_bytes. | Cluster, Node | Yes |
+| node_memory_MemTotal_bytes | Node - Memory | Byte | Average | Memory information field MemTotal_bytes. | Cluster, Node | Yes |
+| node_memory_numa_HugePages_Free | Node - Memory | Byte | Average | Free hugepages | Cluster, Node, NUMA | Yes |
+| node_memory_numa_HugePages_Total | Node - Memory | Byte | Average | Total hugepages | Cluster, Node, NUMA | Yes |
+| node_memory_numa_MemFree | Node - Memory | Byte | Average | NUMA memory free | Cluster, Node, NUMA | Yes |
+| node_memory_numa_MemTotal | Node - Memory | Byte | Average | Total NUMA memory | Cluster, Node, NUMA | Yes |
+| node_memory_numa_MemUsed | Node - Memory | Byte | Average | NUMA memory used | Cluster, Node, NUMA | Yes |
+| node_memory_numa_Shmem | Node - Memory | Byte | Average | Shared memory | Cluster, Node | Yes |
+| node_os_info | Node - OS Info | Labels | Average | OS details | Cluster, Node | Yes |
+| node_network_carrier_changes_total | Node Network - Carrier changes | Count | Average | carrier_changes_total value of `/sys/class/net/<iface>`. | Cluster, Node, Device | Yes |
+| node_network_receive_packets_total | Node Network - receive packets | Count | Average | Network device statistic receive_packets. | Cluster, Node, Device | Yes |
+| node_network_transmit_packets_total | Node Network - transmit packets | Count | Average | Network device statistic transmit_packets. | Cluster, Node, Device | Yes |
+| node_network_up | Node Network - Interface state | Labels | Average | Value is 1 if operstate is 'up', 0 otherwise. | Cluster, Node, Device | Yes |
+| node_network_mtu_bytes | Network Interface - MTU | Byte | Average | mtu_bytes value of `/sys/class/net/<iface>`. | Cluster, Node, Device | Yes |
+| node_network_receive_errs_total | Network Interface - Error totals | Count | Average | Network device statistic receive_errs. | Cluster, Node, Device | Yes |
+| node_network_receive_multicast_total | Network Interface - Multicast | Count | Average | Network device statistic receive_multicast. | Cluster, Node, Device | Yes |
+| node_network_speed_bytes | Network Interface - Speed | Byte | Average | speed_bytes value of `/sys/class/net/<iface>`. | Cluster, Node, Device | Yes |
+| node_network_transmit_errs_total | Network Interface - Error totals | Count | Average | Network device statistic transmit_errs. | Cluster, Node, Device | Yes |
+| node_timex_sync_status | Node Timex | Labels | Average | Is clock synchronized to a reliable server (1 = yes, 0 = no). | Cluster, Node | Yes |
+| node_timex_maxerror_seconds | Node Timex | Second | Average | Maximum error in seconds. | Cluster, Node | Yes |
+| node_timex_offset_seconds | Node Timex | Second | Average | Time offset in between local system and reference clock. | Cluster, Node | Yes |
+| node_vmstat_oom_kill | Node VM Stat | Count | Average | /proc/vmstat information field oom_kill. | Cluster, Node | Yes |
+| node_vmstat_pswpin | Node VM Stat | Count | Average | /proc/vmstat information field pswpin. | Cluster, Node | Yes |
+| node_vmstat_pswpout | Node VM Stat | Count | Average | /proc/vmstat information field pswpout | Cluster, Node | Yes |
+| node_dmi_info | Node Bios Information | Labels | Average | Node environment information | Cluster, Node | Yes |
+| node_time_seconds | Node - Time | Second | NA | System time in seconds since epoch (1970) | Cluster, Node | Yes |
+| idrac_power_input_watts | Node - Power | Watt | Average | Power Input | Cluster, Node, PSU | Yes |
+| idrac_power_output_watts | Node - Power | Watt | Average | Power Output | Cluster, Node, PSU | Yes |
+| idrac_power_capacity_watts | Node - Power | Watt | Average | Power Capacity | Cluster, Node, PSU | Yes |
+| idrac_sensors_temperature | Node - Temperature | Celsius | Average | iDRAC sensor temperature | Cluster, Node, Name | Yes |
+| idrac_power_on | Node - Power | Labels | Average | iDRAC power-on status | Cluster, Node | Yes |
+
+## Pure storage
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via <br/>Diagnostic Settings? |
+|-|-|--|-|-|-|-|
+| purefa_hardware_component_health | FlashArray | Labels | NA | FlashArray hardware component health status | Cluster, Appliance, Controller+Component+Index | Yes |
+| purefa_hardware_power_volts | FlashArray | Volt | Average | FlashArray hardware power supply voltage | Cluster, Power Supply, Appliance | Yes |
+| purefa_volume_performance_throughput_bytes | Volume | Byte | Average | FlashArray volume throughput | Cluster, Volume, Dimension, Appliance | Yes |
+| purefa_volume_space_datareduction_ratio | Volume | Count | Average | FlashArray volumes data reduction ratio | Cluster, Volume, Appliance | Yes |
+| purefa_hardware_temperature_celsius | FlashArray | Celsius | Average | FlashArray hardware temperature sensors | Cluster, Controller, Sensor, Appliance | Yes |
+| purefa_alerts_total | FlashArray | Count | Average | Number of alert events | Cluster, Severity | Yes |
+| purefa_array_performance_iops | FlashArray | Count | Average | FlashArray IOPS | Cluster, Dimension, Appliance | Yes |
+| purefa_array_performance_qdepth | FlashArray | Count | Average | FlashArray queue depth | Cluster, Appliance | Yes |
+| purefa_info | FlashArray | Labels | NA | FlashArray host volumes connections | Cluster, Array | Yes |
+| purefa_volume_performance_latency_usec | Volume | MicroSecond | Average | FlashArray volume IO latency | Cluster, Volume, Dimension, Appliance | Yes |
+| purefa_volume_space_bytes | Volume | Byte | Average | FlashArray allocated space | Cluster, Volume, Dimension, Appliance | Yes |
+| purefa_volume_performance_iops | Volume | Count | Average | FlashArray volume IOPS | Cluster, Volume, Dimension, Appliance | Yes |
+| purefa_volume_space_size_bytes | Volume | Byte | Average | FlashArray volumes size | Cluster, Volume, Appliance | Yes |
+| purefa_array_performance_latency_usec | FlashArray | MicroSecond | Average | FlashArray latency | Cluster, Dimension, Appliance | Yes |
+| purefa_array_space_used_bytes | FlashArray | Byte | Average | FlashArray overall used space | Cluster, Dimension, Appliance | Yes |
+| purefa_array_performance_bandwidth_bytes | FlashArray | Byte | Average | FlashArray bandwidth | Cluster, Dimension, Appliance | Yes |
+| purefa_array_performance_avg_block_bytes | FlashArray | Byte | Average | FlashArray avg block size | Cluster, Dimension, Appliance | Yes |
+| purefa_array_space_datareduction_ratio | FlashArray | Count | Average | FlashArray overall data reduction | Cluster, Appliance | Yes |
+| purefa_array_space_capacity_bytes | FlashArray | Byte | Average | FlashArray overall space capacity | Cluster, Appliance | Yes |
+| purefa_array_space_provisioned_bytes | FlashArray | Byte | Average | FlashArray overall provisioned space | Cluster, Appliance | Yes |
+| purefa_host_space_datareduction_ratio | Host | Count | Average | FlashArray host volumes data reduction ratio | Cluster, Node, Appliance | Yes |
+| purefa_host_space_size_bytes | Host | Byte | Average | FlashArray host volumes size | Cluster, Node, Appliance | Yes |
+| purefa_host_performance_latency_usec | Host | MicroSecond | Average | FlashArray host IO latency | Cluster, Node, Dimension, Appliance | Yes |
+| purefa_host_performance_bandwidth_bytes | Host | Byte | Average | FlashArray host bandwidth | Cluster, Node, Dimension, Appliance | Yes |
+| purefa_host_space_bytes | Host | Byte | Average | FlashArray host volumes allocated space | Cluster, Node, Dimension, Appliance | Yes |
+| purefa_host_performance_iops | Host | Count | Average | FlashArray host IOPS | Cluster, Node, Dimension, Appliance | Yes |
+
+## Typha
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
+|-|-|--|-|-|-|-|
+| typha_connections_accepted | Typha | Count | Average | Total number of connections accepted over time. | Cluster, Node | Yes |
+| typha_connections_dropped | Typha | Count | Average | Total number of connections dropped due to rebalancing. | Cluster, Node | Yes |
+| typha_ping_latency_count | Typha | Count | Average | Round-trip ping latency to client. | Cluster, Node | Yes |
+
+## Collected set of metrics
+
+| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
+| - | -- | -- | - | - | - | -- |
+| purefa_hardware_component_health | FlashArray | Labels | NA | FlashArray hardware component health status | Cluster, Appliance, Controller+Component+Index | Yes |
+| purefa_hardware_power_volts | FlashArray | Volt | Average | FlashArray hardware power supply voltage | Cluster, Power Supply, Appliance | Yes |
+| purefa_volume_performance_throughput_bytes | Volume | Byte | Average | FlashArray volume throughput | Cluster, Volume, Dimension, Appliance | Yes |
+| purefa_volume_space_datareduction_ratio | Volume | Count | Average | FlashArray volumes data reduction ratio | Cluster, Volume, Appliance | Yes |
+| purefa_hardware_temperature_celsius | FlashArray | Celsius | Average | FlashArray hardware temperature sensors | Cluster, Controller, Sensor, Appliance | Yes |
+| purefa_alerts_total | FlashArray | Count | Average | Number of alert events | Cluster, Severity | Yes |
+| purefa_array_performance_iops | FlashArray | Count | Average | FlashArray IOPS | Cluster, Dimension, Appliance | Yes |
+| purefa_array_performance_qdepth | FlashArray | Count | Average | FlashArray queue depth | Cluster, Appliance | Yes |
+| purefa_info | FlashArray | Labels | NA | FlashArray host volumes connections | Cluster, Array | Yes |
+| purefa_volume_performance_latency_usec | Volume | MicroSecond | Average | FlashArray volume IO latency | Cluster, Volume, Dimension, Appliance | Yes |
+| purefa_volume_space_bytes | Volume | Byte | Average | FlashArray allocated space | Cluster, Volume, Dimension, Appliance | Yes |
+| purefa_volume_performance_iops | Volume | Count | Average | FlashArray volume IOPS | Cluster, Volume, Dimension, Appliance | Yes |
+| purefa_volume_space_size_bytes | Volume | Byte | Average | FlashArray volumes size | Cluster, Volume, Appliance | Yes |
+| purefa_array_performance_latency_usec | FlashArray | MicroSecond | Average | FlashArray latency | Cluster, Dimension, Appliance | Yes |
+| purefa_array_space_used_bytes | FlashArray | Byte | Average | FlashArray overall used space | Cluster, Dimension, Appliance | Yes |
+| purefa_array_performance_bandwidth_bytes | FlashArray | Byte | Average | FlashArray bandwidth | Cluster, Dimension, Appliance | Yes |
+| purefa_array_performance_avg_block_bytes | FlashArray | Byte | Average | FlashArray avg block size | Cluster, Dimension, Appliance | Yes |
+| purefa_array_space_datareduction_ratio | FlashArray | Count | Average | FlashArray overall data reduction | Cluster, Appliance | Yes |
+| purefa_array_space_capacity_bytes | FlashArray | Byte | Average | FlashArray overall space capacity | Cluster, Appliance | Yes |
+| purefa_array_space_provisioned_bytes | FlashArray | Byte | Average | FlashArray overall provisioned space | Cluster, Appliance | Yes |
+| purefa_host_space_datareduction_ratio | Host | Count | Average | FlashArray host volumes data reduction ratio | Cluster, Node, Appliance | Yes |
+| purefa_host_space_size_bytes | Host | Byte | Average | FlashArray host volumes size | Cluster, Node, Appliance | Yes |
+| purefa_host_performance_latency_usec | Host | MicroSecond | Average | FlashArray host IO latency | Cluster, Node, Dimension, Appliance | Yes |
+| purefa_host_performance_bandwidth_bytes | Host | Byte | Average | FlashArray host bandwidth | Cluster, Node, Dimension, Appliance | Yes |
+| purefa_host_space_bytes | Host | Byte | Average | FlashArray host volumes allocated space | Cluster, Node, Dimension, Appliance | Yes |
+| purefa_host_performance_iops | Host | Count | Average | FlashArray host IOPS | Cluster, Node, Dimension, Appliance | Yes |
+| kubevirt_info | Host | Labels | NA | Version information. | Cluster, Node | Yes |
+| kubevirt_virt_controller_leading | Kubevirt Controller | Labels | Average | Indication for an operating virt-controller. | Cluster, Pod | Yes |
+| kubevirt_virt_operator_ready | Kubevirt Operator | Labels | Average | Indication for a virt operator being ready | Cluster, Pod | Yes |
+| kubevirt_vmi_cpu_affinity | VM-CPU | Labels | Average | Details the cpu pinning map via boolean labels in the form of vcpu_X_cpu_Y. | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_actual_balloon_bytes | VM-Memory | Byte | Average | Current balloon size in bytes. | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_domain_total_bytes | VM-Memory | Byte | Average | The amount of memory in bytes allocated to the domain. The memory value in domain xml file | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_swap_in_traffic_bytes_total | VM-Memory | Byte | Average | The total amount of data read from swap space of the guest in bytes. | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_swap_out_traffic_bytes_total | VM-Memory | Byte | Average | The total amount of memory written out to swap space of the guest in bytes. | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_available_bytes | VM-Memory | Byte | Average | Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS does not initialize all assigned pages | Cluster, Node, VM | Yes |
+| kubevirt_vmi_memory_unused_bytes | VM-Memory | Byte | Average | The amount of memory left completely unused by the system. Memory that is available but used for reclaimable caches should NOT be reported as free | Cluster, Node, VM | Yes |
+| kubevirt_vmi_network_receive_packets_total | VM-Network | Count | Average | Total network traffic received packets. | Cluster, Node, VM, Interface | Yes |
+| kubevirt_vmi_network_transmit_packets_total | VM-Network | Count | Average | Total network traffic transmitted packets. | Cluster, Node, VM, Interface | Yes |
+| kubevirt_vmi_network_transmit_packets_dropped_total | VM-Network | Count | Average | The total number of tx packets dropped on vNIC interfaces. | Cluster, Node, VM, Interface | Yes |
+| kubevirt_vmi_outdated_count | VMI | Count | Average | Indication for the total number of VirtualMachineInstance workloads that are not running within the most up-to-date version of the virt-launcher environment. | Cluster, Node, VM, Phase | Yes |
+| kubevirt_vmi_phase_count | VMI | Count | Average | Sum of VMIs per phase and node. | Cluster, Node, VM, Phase | Yes |
+| kubevirt_vmi_storage_iops_read_total | VM-Storage | Count | Average | Total number of I/O read operations. | Cluster, Node, VM, Drive | Yes |
+| kubevirt_vmi_storage_iops_write_total | VM-Storage | Count | Average | Total number of I/O write operations. | Cluster, Node, VM, Drive | Yes |
+| kubevirt_vmi_storage_read_times_ms_total | VM-Storage | Millisecond | Average | Total time (ms) spent on read operations. | Cluster, Node, VM, Drive | Yes |
+| kubevirt_vmi_storage_write_times_ms_total | VM-Storage | Millisecond | Average | Total time (ms) spent on write operations. | Cluster, Node, VM, Drive | Yes |
+| kubevirt_virt_controller_ready | Kubevirt Controller | Labels | Average | Indication for a virt-controller that is ready to take the lead. | Cluster, Pod | Yes |
+| coredns_dns_requests_total | DNS Requests | Count | Average | total query count | Cluster, Node, Protocol | Yes |
+| coredns_dns_responses_total | DNS response/errors | Count | Average | response per zone, rcode and plugin. | Cluster, Node, Rcode | Yes |
+| coredns_health_request_failures_total | DNS Health Request Failures | Count | Average | The number of times the internal health check loop failed to query | Cluster, Node | Yes |
+| coredns_panics_total | DNS panic | Count | Average | total number of panics | Cluster, Node | Yes |
+| kube_daemonset_status_current_number_scheduled | Kube Daemonset | Count | Average | Number of Daemonsets scheduled | Cluster | Yes |
+| kube_daemonset_status_desired_number_scheduled | Kube Daemonset | Count | Average | Number of daemonset replicas desired | Cluster | Yes |
+| kube_deployment_status_replicas_ready | Kube Deployment | Count | Average | Number of deployment replicas present | Cluster | Yes |
+| kube_deployment_status_replicas_available | Kube Deployment | Count | Average | Number of deployment replicas available | Cluster | Yes |
+| kube_job_status_active | Kube job - Active | Labels | Average | Number of actively running jobs | Cluster, Job | Yes |
+| kube_job_status_failed | Kube job - Failed | Labels | Average | Number of failed jobs | Cluster, Job | Yes |
+| kube_job_status_succeeded | Kube job - Succeeded | Labels | Average | Number of successful jobs | Cluster, Job | Yes |
+| kube_node_status_allocatable | Node - Allocatable | Labels | Average | The amount of resources allocatable for pods | Cluster, Node, Resource | Yes |
+| kube_node_status_capacity | Node - Capacity | Labels | Average | The total amount of resources available for a node | Cluster, Node, Resource | Yes |
+| kube_node_status_condition | Kubenode status | Labels | Average | The condition of a cluster node | Cluster, Node, Condition, Status | Yes |
+| kube_pod_container_resource_limits | Pod container - Limits | Count | Average | The number of requested limit resource by a container. | Cluster, Node, Resource, Pod | Yes |
+| kube_pod_container_resource_requests | Pod container - Requests | Count | Average | The number of requested request resource by a container. | Cluster, Node, Resource, Pod | Yes |
+| kube_pod_container_state_started | Pod container - state | Second | Average | Start time in unix timestamp for a pod container | Cluster, Node, Container | Yes |
+| kube_pod_container_status_last_terminated_reason | Pod container - state | Labels | Average | Describes the last reason the container was in terminated state | Cluster, Node, Container, Reason | Yes |
+| kube_pod_container_status_ready | Container State | Labels | Average | Describes whether the container's readiness check succeeded | Cluster, Node, Container | Yes |
+| kube_pod_container_status_restarts_total | Container State | Count | Average | The number of container restarts per container | Cluster, Node, Container | Yes |
+| kube_pod_container_status_running | Container State | Labels | Average | Describes whether the container is currently in running state | Cluster, Node, Container | Yes |
+| kube_pod_container_status_terminated | Container State | Labels | Average | Describes whether the container is currently in terminated state | Cluster, Node, Container | Yes |
+| kube_pod_container_status_terminated_reason | Container State | Labels | Average | Describes the reason the container is currently in terminated state | Cluster, Node, Container, Reason | Yes |
+| kube_pod_container_status_waiting | Container State | Labels | Average | Describes whether the container is currently in waiting state | Cluster, Node, Container | Yes |
+| kube_pod_container_status_waiting_reason | Container State | Labels | Average | Describes the reason the container is currently in waiting state | Cluster, Node, Container, Reason | Yes |
+| kube_pod_deletion_timestamp | Pod Deletion Timestamp | Timestamp | NA | Unix deletion timestamp | Cluster, Pod | Yes |
+| kube_pod_init_container_status_ready | Init Container State | Labels | Average | Describes whether the init container's readiness check succeeded | Cluster, Node, Container | Yes |
+| kube_pod_init_container_status_restarts_total | Init Container State | Count | Average | The number of restarts for the init container | Cluster, Container | Yes |
+| kube_pod_init_container_status_running | Init Container State | Labels | Average | Describes whether the init container is currently in running state | Cluster, Node, Container | Yes |
+| kube_pod_init_container_status_terminated | Init Container State | Labels | Average | Describes whether the init container is currently in terminated state | Cluster, Node, Container | Yes |
+| kube_pod_init_container_status_terminated_reason | Init Container State | Labels | Average | Describes the reason the init container is currently in terminated state | Cluster, Node, Container, Reason | Yes |
+| kube_pod_init_container_status_waiting | Init Container State | Labels | Average | Describes whether the init container is currently in waiting state | Cluster, Node, Container | Yes |
+| kube_pod_init_container_status_waiting_reason | Init Container State | Labels | Average | Describes the reason the init container is currently in waiting state | Cluster, Node, Container, Reason | Yes |
+| kube_pod_status_phase | Pod Status | Labels | Average | The pod's current phase | Cluster, Node, Container, Phase | Yes |
+| kube_pod_status_ready | Pod Status Ready | Count | Average | Describes whether the pod is ready to serve requests. | Cluster, Pod | Yes |
+| kube_pod_status_reason | Pod Status Reason | Labels | Average | The pod status reasons | Cluster, Node, Container, Reason | Yes |
+| kube_statefulset_replicas | Statefulset # of replicas | Count | Average | The number of desired pods for a statefulset | Cluster, Stateful Set | Yes |
+| kube_statefulset_status_replicas | Statefulset replicas status | Count | Average | The number of replicas per statefulset | Cluster, Stateful Set | Yes |
+| controller_runtime_reconcile_errors_total | Kube Controller | Count | Average | Total number of reconciliation errors per controller | Cluster, Node, Controller | Yes |
+| controller_runtime_reconcile_total | Kube Controller | Count | Average | Total number of reconciliations per controller | Cluster, Node, Controller | Yes |
+| node_boot_time_seconds | Node - Boot time | Second | Average | Unix time of last boot | Cluster, Node | Yes |
+| node_cpu_seconds_total | Node - CPU | Second | Average | CPU usage | Cluster, Node, CPU, Mode | Yes |
+| node_disk_read_time_seconds_total | Node - Disk - Read Time | Second | Average | Disk read time | Cluster, Node, Device | Yes |
+| node_disk_reads_completed_total | Node - Disk - Read Completed | Count | Average | Disk reads completed | Cluster, Node, Device | Yes |
+| node_disk_write_time_seconds_total | Node - Disk - Write Time | Second | Average | Disk write time | Cluster, Node, Device | Yes |
+| node_disk_writes_completed_total | Node - Disk - Write Completed | Count | Average | Disk writes completed | Cluster, Node, Device | Yes |
+| node_entropy_available_bits | Node - Entropy Available | Bits | Average | Available node entropy | Cluster, Node | Yes |
+| node_filesystem_avail_bytes | Node - Disk - Available (TBD) | Byte | Average | Available filesystem size | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_free_bytes | Node - Disk - Free (TBD) | Byte | Average | Free filesystem size | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_size_bytes | Node - Disk - Size | Byte | Average | Filesystem size | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_files | Node - Disk - Files | Count | Average | Total number of permitted inodes | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_files_free | Node - Disk - Files Free | Count | Average | Total number of free inodes | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_device_error | Node - Disk - FS Device error | Count | Average | Indicates whether there was a problem getting information for the filesystem | Cluster, Node, Mountpoint | Yes |
+| node_filesystem_readonly | Node - Disk - Files Readonly | Count | Average | Indicates whether the filesystem is read-only | Cluster, Node, Mountpoint | Yes |
+| node_hwmon_temp_celsius | Node - temperature (TBD) | Celsius | Average | Hardware monitor for temperature | Cluster, Node, Chip, Sensor | Yes |
+| node_hwmon_temp_max_celsius | Node - temperature (TBD) | Celsius | Average | Hardware monitor for maximum temperature | Cluster, Node, Chip, Sensor | Yes |
+| node_load1 | Node - Memory | Second | Average | 1m load average. | Cluster, Node | Yes |
+| node_load15 | Node - Memory | Second | Average | 15m load average. | Cluster, Node | Yes |
+| node_load5 | Node - Memory | Second | Average | 5m load average. | Cluster, Node | Yes |
+| node_memory_HardwareCorrupted_bytes | Node - Memory | Byte | Average | Memory information field HardwareCorrupted_bytes. | Cluster, Node | Yes |
+| node_memory_MemAvailable_bytes | Node - Memory | Byte | Average | Memory information field MemAvailable_bytes. | Cluster, Node | Yes |
+| node_memory_MemFree_bytes | Node - Memory | Byte | Average | Memory information field MemFree_bytes. | Cluster, Node | Yes |
+| node_memory_MemTotal_bytes | Node - Memory | Byte | Average | Memory information field MemTotal_bytes. | Cluster, Node | Yes |
+| node_memory_numa_HugePages_Free | Node - Memory | Byte | Average | Free hugepages | Cluster, Node, NUMA | Yes |
+| node_memory_numa_HugePages_Total | Node - Memory | Byte | Average | Total hugepages | Cluster, Node, NUMA | Yes |
+| node_memory_numa_MemFree | Node - Memory | Byte | Average | NUMA memory free | Cluster, Node, NUMA | Yes |
+| node_memory_numa_MemTotal | Node - Memory | Byte | Average | Total NUMA memory | Cluster, Node, NUMA | Yes |
+| node_memory_numa_MemUsed | Node - Memory | Byte | Average | NUMA memory used | Cluster, Node, NUMA | Yes |
+| node_memory_numa_Shmem | Node - Memory | Byte | Average | Shared memory | Cluster, Node | Yes |
+| node_os_info | Node - OS Info | Labels | Average | OS details | Cluster, Node | Yes |
+| node_network_carrier_changes_total | Node Network - Carrier changes | Count | Average | carrier_changes_total value of `/sys/class/net/<iface>`. | Cluster, Node, Device | Yes |
+| node_network_receive_packets_total | Node Network - receive packets | Count | Average | Network device statistic receive_packets. | Cluster, Node, Device | Yes |
+| node_network_transmit_packets_total | Node Network - transmit packets | Count | Average | Network device statistic transmit_packets. | Cluster, Node, Device | Yes |
+| node_network_up | Node Network - Interface state | Labels | Average | Value is 1 if operstate is 'up', 0 otherwise. | Cluster, Node, Device | Yes |
+| node_network_mtu_bytes | Network Interface - MTU | Byte | Average | mtu_bytes value of `/sys/class/net/<iface>`. | Cluster, Node, Device | Yes |
+| node_network_receive_errs_total | Network Interface - Error totals | Count | Average | Network device statistic receive_errs. | Cluster, Node, Device | Yes |
+| node_network_receive_multicast_total | Network Interface - Multicast | Count | Average | Network device statistic receive_multicast. | Cluster, Node, Device | Yes |
+| node_network_speed_bytes | Network Interface - Speed | Byte | Average | speed_bytes value of `/sys/class/net/<iface>`. | Cluster, Node, Device | Yes |
+| node_network_transmit_errs_total | Network Interface - Error totals | Count | Average | Network device statistic transmit_errs. | Cluster, Node, Device | Yes |
+| node_timex_sync_status | Node Timex | Labels | Average | Is clock synchronized to a reliable server (1 = yes, 0 = no). | Cluster, Node | Yes |
+| node_timex_maxerror_seconds | Node Timex | Second | Average | Maximum error in seconds. | Cluster, Node | Yes |
+| node_timex_offset_seconds | Node Timex | Second | Average | Time offset in between local system and reference clock. | Cluster, Node | Yes |
+| node_vmstat_oom_kill | Node VM Stat | Count | Average | /proc/vmstat information field oom_kill. | Cluster, Node | Yes |
+| node_vmstat_pswpin | Node VM Stat | Count | Average | /proc/vmstat information field pswpin. | Cluster, Node | Yes |
+| node_vmstat_pswpout | Node VM Stat | Count | Average | /proc/vmstat information field pswpout | Cluster, Node | Yes |
+| node_dmi_info | Node Bios Information | Labels | Average | Node environment information | Cluster, Node | Yes |
+| node_time_seconds | Node - Time | Second | NA | System time in seconds since epoch (1970) | Cluster, Node | Yes |
+| container_fs_io_time_seconds_total | Containers - Filesystem | Second | Average | Cumulative count of seconds spent doing I/Os | Cluster, Node, Pod+Container+Interface | Yes |
+| container_memory_failcnt | Containers - Memory | Count | Average | Number of times memory usage hit limits | Cluster, Node, Pod+Container+Interface | Yes |
+| container_memory_usage_bytes | Containers - Memory | Byte | Average | Current memory usage, including all memory regardless of when it was accessed | Cluster, Node, Pod+Container+Interface | Yes |
+| container_oom_events_total | Container OOM Events | Count | Average | Count of out of memory events observed for the container | Cluster, Node, Pod+Container | Yes |
+| container_start_time_seconds | Containers - Start Time | Second | Average | Start time of the container since unix epoch | Cluster, Node, Pod+Container+Interface | Yes |
+| container_tasks_state | Containers - Task state | Labels | Average | Number of tasks in given state | Cluster, Node, Pod+Container+Interface, State | Yes |
+| kubelet_running_containers | Containers - # of running | Labels | Average | Number of containers currently running | Cluster, node, Container State | Yes |
+| kubelet_running_pods | Pods - # of running | Count | Average | Number of pods that have a running pod sandbox | Cluster, Node | Yes |
+| kubelet_runtime_operations_errors_total | Kubelet Runtime Op Errors | Count | Average | Cumulative number of runtime operation errors by operation type. | Cluster, Node | Yes |
+| kubelet_volume_stats_available_bytes | Pods - Storage - Available | Byte | Average | Number of available bytes in the volume | Cluster, Node, Persistent Volume Claim | Yes |
+| kubelet_volume_stats_capacity_bytes | Pods - Storage - Capacity | Byte | Average | Capacity in bytes of the volume | Cluster, Node, Persistent Volume Claim | Yes |
+| kubelet_volume_stats_used_bytes | Pods - Storage - Used | Byte | Average | Number of used bytes in the volume | Cluster, Node, Persistent Volume Claim | Yes |
+| idrac_power_input_watts | Node - Power | Watt | Average | Power Input | Cluster, Node, PSU | Yes |
+| idrac_power_output_watts | Node - Power | Watt | Average | Power Output | Cluster, Node, PSU | Yes |
+| idrac_power_capacity_watts | Node - Power | Watt | Average | Power Capacity | Cluster, Node, PSU | Yes |
+| idrac_sensors_temperature | Node - Temperature | Celsius | Average | iDRAC sensor temperature | Cluster, Node, Name | Yes |
+| idrac_power_on | Node - Power | Labels | Average | iDRAC power-on status | Cluster, Node | Yes |
+| felix_ipsets_calico | Felix | Count | Average | Number of active Calico IP sets. | Cluster, Node | Yes |
+| felix_cluster_num_host_endpoints | Felix | Count | Average | Total number of host endpoints cluster-wide. | Cluster, Node | Yes |
+| felix_active_local_endpoints | Felix | Count | Average | Number of active endpoints on this host. | Cluster, Node | Yes |
+| felix_cluster_num_hosts | Felix | Count | Average | Total number of Calico hosts in the cluster. | Cluster, Node | Yes |
+| felix_cluster_num_workload_endpoints | Felix | Count | Average | Total number of workload endpoints cluster-wide. | Cluster, Node | Yes |
+| felix_int_dataplane_failures | Felix | Count | Average | Number of times dataplane updates failed and will be retried. | Cluster, Node | Yes |
+| felix_ipset_errors | Felix | Count | Average | Number of ipset command failures. | Cluster, Node | Yes |
+| felix_iptables_restore_errors | Felix | Count | Average | Number of iptables-restore errors. | Cluster, Node | Yes |
+| felix_iptables_save_errors | Felix | Count | Average | Number of iptables-save errors. | Cluster, Node | Yes |
+| felix_resyncs_started | Felix | Count | Average | Number of times Felix has started resyncing with the datastore. | Cluster, Node | Yes |
+| felix_resync_state | Felix | Count | Average | Current datastore state. | Cluster, Node | Yes |
+| typha_connections_accepted | Typha | Count | Average | Total number of connections accepted over time. | Cluster, Node | Yes |
+| typha_connections_dropped | Typha | Count | Average | Total number of connections dropped due to rebalancing. | Cluster, Node | Yes |
+| typha_ping_latency_count | Typha | Count | Average | Round-trip ping latency to client. | Cluster, Node | Yes |
+| etcd_disk_backend_commit_duration_seconds_sum | Etcd Disk | Second | Average | The latency distributions of commit called by backend. | Cluster, Pod | Yes |
+| etcd_disk_wal_fsync_duration_seconds_sum | Etcd Disk | Second | Average | The latency distributions of fsync called by wal | Cluster, Pod | Yes |
+| etcd_server_is_leader | Etcd Server | Labels | Average | Whether node is leader | Cluster, Pod | Yes |
+| etcd_server_is_learner | Etcd Server | Labels | Average | Whether node is learner | Cluster, Pod | Yes |
+| etcd_server_leader_changes_seen_total | Etcd Server | Count | Average | The number of leader changes seen. | Cluster, Pod, Tier | Yes |
+| etcd_server_proposals_committed_total | Etcd Server | Count | Average | The total number of consensus proposals committed. | Cluster, Pod, Tier | Yes |
+| etcd_server_proposals_applied_total | Etcd Server | Count | Average | The total number of consensus proposals applied. | Cluster, Pod, Tier | Yes |
+| etcd_server_proposals_failed_total | Etcd Server | Count | Average | The total number of failed proposals seen. | Cluster, Pod, Tier | Yes |
+| apiserver_audit_requests_rejected_total | Apiserver | Count | Average | Counter of apiserver requests rejected due to an error in audit logging backend. | Cluster, Node | Yes |
+| apiserver_client_certificate_expiration_seconds_sum | Apiserver | Second | Sum | Distribution of the remaining lifetime on the certificate used to authenticate a request. | Cluster, Node | Yes |
+| apiserver_storage_data_key_generation_failures_total | Apiserver | Count | Average | Total number of failed data encryption key(DEK) generation operations. | Cluster, Node | Yes |
+| apiserver_tls_handshake_errors_total | Apiserver | Count | Average | Number of requests dropped with 'TLS handshake error from' error | Cluster, Node | Yes |
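+
+Once these metrics are flowing, they can be retrieved like any other Azure Monitor platform metric. The following is a minimal sketch, assuming a hypothetical cluster resource ID and using one of the metric names from the tables above:
+
+```azurecli
+# Hypothetical resource ID; substitute the ID of your own Operator Nexus cluster.
+CLUSTER_ID="/subscriptions/<subscription>/resourceGroups/<rg>/providers/Microsoft.NetworkCloud/clusters/<cluster>"
+
+# Query one hour of the node CPU metric at five-minute granularity.
+az monitor metrics list \
+  --resource "$CLUSTER_ID" \
+  --metric "node_cpu_seconds_total" \
+  --interval PT5M \
+  --aggregation Average
+```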
operator-nexus Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/overview.md
+
+ Title: Introduction to Operator Nexus
+description: High-level information about the Operator Nexus product.
+ Last updated: 01/30/2023
+# What is Azure Operator Nexus?
+
+Azure Operator Nexus is the next-generation hybrid cloud platform for telecommunication operators.
+Operator Nexus is purpose-built for operators' network-intensive workloads and mission-critical applications.
+Operator Nexus supports both our first-party network functions and a wide variety of third-party virtualized or containerized telco network functions.
+The platform automates lifecycle management of the network fabric, bare metal hosts, storage appliances, and both infrastructure and tenant Kubernetes clusters.
+Operator Nexus meets operators' security, resiliency, observability, and performance requirements to achieve meaningful business results.
+The platform seamlessly integrates compute, network, and storage.
+Users can deploy and operate the platform end to end via the Azure portal, CLI, or APIs.
+
+<!-- IMG ![Operator Nexus HL overview diagram](Docs/media/hl-architecture.png) IMG -->
+
+Figure: Operator Nexus Overview
+
+## Key benefits
+
+Operator Nexus includes the following benefits for operating secure carrier-grade network functions at scale:
+
+* **Reduced operational complexity and costs** – Operators decide in which Azure regions to deploy Operator Nexus.
+One set of Operator Nexus controllers can scale automatically to support multiple instances of on-premises Operator Nexus deployment.
+Operators can use the same APIs or automation to operationalize their on-premises services and their cloud-native services.
+* **Integrated platform for compute, network, and storage** – Operators no longer need to provision compute, network, and storage separately, because Operator Nexus integrates the stacks.
+For example, the elastic network fabric is designed to let compute and storage scale up or down.
+The solution simplifies operators' capacity planning and deployment.
+* **Expanding Network Function (NF) ecosystem** – Operator Nexus supports a wide variety of Microsoft's own NFs and third-party partners' NFs via an NF certification program.
+These NFs are tested for deployment and lifecycle management on Operator Nexus before they're made available in Azure Marketplace.
+* **Access to key Azure services** – Because Operator Nexus is connected to Azure, operators can seamlessly access most Azure services through the same connection as the on-premises network.
+For example, operators can provision and manage Operator Nexus through the Azure portal or Azure CLI.
+Operators can monitor logs and metrics via Azure Monitor, and analyze telemetry data using Log Analytics or the Azure AI/Machine Learning framework.
+* **Unified governance and compliance** – As an Azure service, Operator Nexus extends Azure management and services to operators' premises.
+Operators can unify data governance and enforce security and compliance policies through [Azure role-based access control](/azure/role-based-access-control/overview) and [Azure Policy](/azure/governance/policy/overview), as sketched after this list.
+
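+As a minimal sketch of that governance model, a standard Azure RBAC assignment scoped to an Operator Nexus resource group might look like the following (the assignee and scope are hypothetical placeholders):
+
+```azurecli
+# Illustrative only: grant an operations group read access to the
+# resource group that holds Operator Nexus resources.
+az role assignment create \
+  --assignee "ops-team@contoso.com" \
+  --role "Reader" \
+  --scope "/subscriptions/<subscription>/resourceGroups/<nexus-rg>"
+```
+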
+## How Operator Nexus works
+
+Operator Nexus requires curated hardware Bill of Materials. It is comprised of commercially available off-the-shelf servers, network switches, and storage arrays. The infrastructure is deployed in operator's on-premises data center. Operators or System Integrators must make sure they [meet the prerequisites and follow the guidance](quickstarts-platform-deployment.md).
+
+The service that manages the Operator Nexus infrastructure is hosted in Azure. Operators can choose an Azure region that supports Operator Nexus for any on-premises Operator Nexus infrastructure or deployment. The diagram illustrates the architecture of the Operator Nexus service.
+
+<!-- IMG ![How Operator Nexus works diagram](Docs/media/architecture-overview.png) IMG -->
+
+Figure: How Operator Nexus works
+
+1. The management layer of Operator Nexus is built on Azure Resource Manager (ARM), which provides a consistent user experience in the Azure portal and APIs.
+2. Azure Resource Providers provide modeling and lifecycle management of [Operator Nexus resources](./concepts-resource-types.md) such as bare metal machines, clusters, and network devices.
+3. Operator Nexus controllers, that is, the Cluster Manager and Network fabric Controller, are deployed in a managed Virtual Network (VNet) connected to the operator's on-premises network. The controllers enable functions such as infrastructure bootstrapping, configuration, and service upgrades.
+4. Operator Nexus is integrated with many Azure services, such as Azure Monitor, Azure Container Registry, and Azure Kubernetes Service.
+5. Azure Arc enables a seamless integration of Azure cloud services and on-premises environments, translating between the ARM models and the Kubernetes resource definitions.
+6. ExpressRoute is a network connectivity service that bridges Azure regions and operators' locations.
+
+## Key features
+
+Here are some of the key features of Operator Nexus.
+
+### CBL-Mariner
+
+Operator Nexus runs Microsoft's own Linux distribution, "CBL-Mariner," on the bare metal hosts in the operator's facilities.
+The same Linux distribution supports Azure cloud infrastructure and edge services.
+It includes a small set of core packages by default, whereas each service running on top of it can install more packages.
+[CBL-Mariner](https://microsoft.github.io/CBL-Mariner/docs/) is a lightweight OS that consumes limited system resources and is engineered to be efficient.
+For example, it has a fast boot time, and its small footprint with locked-down packages also means a minimal attack surface.
+When a security vulnerability is identified, the CBL-Mariner team makes the latest security patches and fixes available with the goal of a fast turnaround time. Running the infrastructure on Linux aligns with Network Function needs, telecommunication industry trends, and relevant open-source communities. Operator Nexus supports both virtualized network functions (VNFs) and containerized network functions (CNFs).
+
+### Bare metal and cluster management
+
+Operator Nexus includes a service that manages the bare metal hosts in operators' premises.
+Operators can operate the bare metal hosts using Azure APIs for tasks such as "restart a host" or "reimage a host", as sketched below.
+One important component of the service is the Cluster Manager.
+[Cluster Manager](./howto-cluster-manager.md) provides the lifecycle management of Kubernetes clusters that are built from the bare metal hosts.
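+
+A minimal sketch, assuming the `networkcloud` CLI extension exposes the host actions as commands and using placeholder names:
+
+```azurecli
+# Illustrative only: restart one bare metal host managed by Operator Nexus.
+az networkcloud baremetalmachine restart \
+  --name "<bare-metal-machine>" \
+  --resource-group "<resource-group>"
+```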
+
+### Network fabric automation
+
+Operator Nexus goes beyond compute and includes Network fabric Automation (NFA). The [NFA](./howto-configure-network-fabric.md) service enables operators to build, operate, and manage carrier-grade network fabric. The reliable and distributed cloud services model supports the operators' telco network functions. For example, to bootstrap network devices in Operator Nexus, operators just need to call an Azure API to trigger the Zero Touch Provisioning (ZTP) process. ZTP downloads the configuration templates from a terminal server, which is built into the Operator Nexus design, to all the network devices and provisions them to their initial known state.
+
+### Network packet broker
+
+Network Packet Broker (NPB) is an integral part of the network fabric in Operator Nexus. NPB enables multiple scenarios from network performance monitoring to security intrusion detection. Operators can monitor every single packet in Operator Nexus and replicate it. They can apply packet filters dynamically and send filtered packets to multiple destinations for further processing.
+
+### Azure Hybrid Kubernetes Service
+
+Azure Kubernetes Service (AKS) is a managed Kubernetes service on Azure. It lets users focus on developing and deploying their workloads while letting Azure handle the management and operational overheads. In [AKS-Hybrid](/azure/aks/hybrid/) the Kubernetes cluster is deployed on-premises. Azure still performs the traditional operational management activities such as updates, certificate rotations, etc.
+
+### Network functions virtualization infrastructure capabilities
+
+As a platform, Operator Nexus is designed for telco network functions and optimized for carrier-grade performance and resiliency. It has many built-in Network Functions Virtualization Infrastructure (NFVI) capabilities:
+
+* Compute: NUMA-aligned VMs with dedicated cores (both SMT siblings), backed by huge pages, ensure consistent performance with no impact from other workloads running on the same hypervisor host.
+* Networking: SR-IOV and DPDK provide low latency and high throughput. Highly available VFs to VMs with redundant physical paths provide links to all workloads. APIs are used to control access and trunk port consumption in both VNFs and CNFs.
+* Storage: Filesystem storage for CNFs backed by high-performance storage arrays.
+
+### Network function management
+
+Azure Network Function Manager (ANFM) is a service that allows Network Equipment Providers (NEPs) to publish their NFs in Azure Marketplace. Operators can deploy them using familiar Azure APIs. ANFM provides a framework for NEPs and Microsoft to test and validate the basic functionality of the NFs. The validation includes lifecycle management of an NF on Operator Nexus.
+
+### Observability
+
+After bootstrap, Operator Nexus automatically streams metrics and logs from the operator's premises to Azure Monitor and a Log Analytics workspace for:
+
+* the infrastructure stack (compute, network, and storage), and
+* the workload stacks (for example, AKS-Hybrid).
+
+Log Analytics has a rich analytical toolset that operators can use for troubleshooting or for correlating operational insights, and Azure Monitor lets operators specify alerts.
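+
+As a minimal sketch, logs streamed to the workspace can be queried from the CLI; the workspace GUID is a placeholder, and `Heartbeat` stands in for whichever table you want to inspect:
+
+```azurecli
+# Illustrative only: query the Log Analytics workspace that receives
+# Operator Nexus logs (requires the log-analytics extension on older CLI versions).
+az monitor log-analytics query \
+  --workspace "<workspace-guid>" \
+  --analytics-query "Heartbeat | summarize LastSeen=max(TimeGenerated) by Computer" \
+  --output table
+```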
+
+## Next steps
+
+* Learn more about Operator Nexus [resource models](./concepts-resource-types.md)
+* Review [Operator Nexus deployment prerequisites and steps](./quickstarts-platform-prerequisites.md)
operator-nexus Quickstart Network Fabric Controller Cluster Manager Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstart-network-fabric-controller-cluster-manager-create.md
+
+ Title: "Network fabric Controller and Cluster Manger creation"
+description: Learn the steps for create the Azure Operator Nexus Network fabric Controller and Cluster Manger.
++++ Last updated : 02/08/2023 #Required; mm/dd/yyyy format.+++
+# Create network fabric controller and cluster manager in an Azure region
+
+You need to create a Network fabric Controller (NFC) and then a (Network Cloud) Cluster Manager (CM)
+in your target Azure region. This Azure region will be connected to your on-premises sites.
+Repeat the process for every other Azure region that will be connected to your on-premises sites.
+
+Each NFC is associated with a CM in the same Azure region and your subscription.
+The NFC/CM pair manages the lifecycle of up to 32 Azure Operator Nexus instances deployed in your sites connected to this Azure region.
+Each Operator Nexus instance consists of network fabric, compute, and storage infrastructure.
+
+## Prerequisites
+
+- Ensure that the Azure subscription for Operator Nexus resources has been granted access to the
+  necessary Azure resource providers (a registration sketch follows this list):
+ - Microsoft.NetworkCloud
+ - Microsoft.ManagedNetworkFabric
+ - Microsoft.HybridContainerService
+ - Microsoft.HybridNetwork
+- Establish [ExpressRoute](/azure/expressroute/expressroute-introduction) connectivity
+  from your on-premises network to an Azure region:
+  - ExpressRoute circuit [creation and verification](/azure/expressroute/expressroute-howto-circuit-portal-resource-manager)
+ can be performed via the Azure portal
+  - In the ExpressRoute blade, Circuit status indicates the status of the
+    circuit on the Microsoft side, and Provider status indicates whether
+    the circuit has been provisioned on the service-provider side. For an
+    ExpressRoute circuit to be operational, Circuit status must be Enabled
+    and Provider status must be Provisioned.
+- Set up a Key Vault to store encryption and security tokens, service principals,
+  passwords, certificates, and API keys
+- Set up a Log Analytics workspace (LAW) to store logs and analytics data for
+  Operator Nexus subcomponents (Fabric, Cluster, etc.)
+- Set up an Azure Storage account to store Operator Nexus data objects:
+  - Azure Storage supports blobs and files accessible from anywhere in the world over HTTP or HTTPS
+  - This storage isn't for user/consumer data.
+
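+A minimal registration sketch for the providers listed above (idempotent; assumes your default subscription is the one used for Operator Nexus):
+
+```azurecli
+# Register each required resource provider, then confirm one of them.
+for ns in Microsoft.NetworkCloud Microsoft.ManagedNetworkFabric \
+          Microsoft.HybridContainerService Microsoft.HybridNetwork; do
+  az provider register --namespace "$ns"
+done
+az provider show --namespace Microsoft.NetworkCloud --query "registrationState" -o tsv
+```
+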
+### Install CLI extensions
+
+Install the latest version of the
+[necessary CLI extensions](./howto-install-cli-extensions.md).
+
+## Create steps
+
+- Step 1: Create Network fabric Controller
+- Step 2: Create Cluster Manager
+
+## Step 1: Create a network fabric controller
+
+Operators sign in to their subscription to create a Network fabric
+Controller (NFC) in an Azure region. Bootstrapping
+and management of network fabric instances are performed from the NFC.
+
+You'll create a Network fabric Controller (NFC) before the first deployment
+of an on-premises Operator Nexus instance. Each NFC can manage up to 32 Operator Nexus instances,
+so you don't need to create another NFC for subsequent network fabric deployments
+managed by the same fabric controller. After 32 Operator Nexus instances
+have been deployed, another NFC will need to be created.
+
+An NFC manages the network fabric of Operator Nexus instances deployed in an Azure region.
+You'll need to create an NFC in every Azure region in which you'll deploy
+Operator Nexus instances.
+
+Create the NFC:
+
+```azurecli
+az nf controller create \
+--resource-group "$NFC_RESOURCE_GROUP" \
+--location "$LOCATION" \
+--resource-name "$NFC_RESOURCE_NAME" \
+--ipv4-address-space "$NFC_MANAGEMENT_CLUSTER_IPV4" \
+--ipv6-address-space "$NFC_MANAGEMENT_CLUSTER_IPV6" \
+--infra-er-connections '[{"expressRouteCircuitId": "$INFRA_ER_CIRCUIT1_ID", \
+ "expressRouteAuthorizationKey": "$INFRA_ER_CIRCUIT1_AUTH"}]'
+--workload-er-connections '[{"expressRouteCircuitId": "$WORKLOAD_ER_CIRCUIT1_ID", \
+ "expressRouteAuthorizationKey": "$WORKLOAD_ER_CIRCUIT1_AUTH"}]'
+```
+
+### Parameters required for network fabric controller operations
+
+| Parameter name | Description |
+| | - |
+| NFC_RESOURCE_GROUP | The resource group name |
+| LOCATION | The Azure Region where the NFC will be deployed (for example, `eastus`) |
+| NFC_RESOURCE_NAME | Resource Name of the Network fabric Controller |
+| NFC_MANAGEMENT_CLUSTER_IPV4 | Optional IPv4 prefix for the NFC VNet. Can be specified at creation time. If unspecified, the default value of `10.0.0.0/19` is assigned. The prefix should be at least a `/19` |
+| NFC_MANAGEMENT_CLUSTER_IPV6 | Optional IPv6 prefix for the NFC VNet. Can be specified at creation time. If unspecified, it remains undefined. The prefix should be at least a `/59` |
+| INFRA_ER_CIRCUIT1_ID | The name of express route circuit for infrastructure must be of type `Microsoft.Network/expressRouteCircuits/circuitName` |
+| INFRA_ER_CIRCUIT1_AUTH | Authorization key for the circuit for infrastructure must be of type `Microsoft.Network/expressRouteCircuits/authorizations` |
+| WORKLOAD_ER_CIRCUIT1_ID | The name of express route circuit for workloads must be of type `Microsoft.Network/expressRouteCircuits/circuitName` |
+| WORKLOAD_ER_CIRCUIT1_AUTH | Authorization key for the circuit for workloads must be of type `Microsoft.Network/expressRouteCircuits/authorizations` |
+
+The Network fabric Controller is created within the resource group in your Azure Subscription.
+
+The Network fabric Controller ID will be needed in the next steps to create
+the Cluster Manager and Network fabric resources; a capture sketch follows. The IPv4 and IPv6 address
+space is a large private subnet, recommended to be a `/16` in multi-rack
+environments, which the NFC uses to allocate IPs to all devices in all instances under the NFC and Cluster Manager domain.
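+
+A minimal capture sketch, reusing the `az nf controller show` command from the validation step below:
+
+```azurecli
+# Capture the NFC resource ID for use as $NFC_ID when creating the Cluster Manager.
+NFC_ID=$(az nf controller show \
+  --resource-group "$NFC_RESOURCE_GROUP" \
+  --resource-name "$NFC_RESOURCE_NAME" \
+  --query "id" --output tsv)
+```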
+
+### NFC validation
+
+The NFC and a few other hosted resources will be created in the NFC hosted resource groups.
+The other resources include:
+
+- ExpressRoute Gateway,
+- Infrastructure vnet,
+- Tenant vnet,
+- Infrastructure Proxy/DNS/NTP VM,
+- storage account,
+- Key Vault,
+- SAW restricted jumpbox VM,
+- hosted AKS,
+- resources for each cluster, and
+- Kubernetes clusters for the controller, infrastructure, and tenant.
+
+View the status of the NFC:
+
+```azurecli
+az nf controller show --resource-group "$NFC_RESOURCE_GROUP" --resource-name "$NFC_RESOURCE_NAME"
+```
+
+The NFC deployment is complete when the `provisioningState` of the resource shows: `"provisioningState": "Succeeded"`
+
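+To poll just that field, a `--query` filter can be added to the same command (a small convenience sketch):
+
+```azurecli
+az nf controller show --resource-group "$NFC_RESOURCE_GROUP" \
+  --resource-name "$NFC_RESOURCE_NAME" \
+  --query "provisioningState" --output tsv
+```
+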
+### NFC logging
+
+NFC creation logs can be viewed in:
+
+1. Azure portal Resource/ResourceGroup Activity logs.
+2. Azure CLI with the `--debug` flag on the command line.
+3. Resource provider logs based on subscription or correlation ID in debug output.
+
+## Step 2: Create a cluster manager
+
+A Cluster Manager (CM) represents the control plane that manages one or more of your
+on-premises Operator Nexus clusters (instances).
+The Cluster Manager is served by a User Resource Provider (RP) that
+resides in an AKS cluster within your subscription. The Cluster Manager
+is responsible for the lifecycle management of your Operator Nexus Clusters.
+The CM appears in your subscription as a resource.
+
+A Fabric Controller is required before the Cluster Manager can be created.
+There's a one-to-one dependency between the Network fabric Controller and
+Cluster Manager. You'll need to create a Cluster Manager every time another
+NFC is created.
+
+You need to create a CM before the first deployment of an Operator Nexus instance.
+You don't need to create a CM for subsequent Operator Nexus on-premises deployments
+that will be managed by the same Cluster Manager.
+
+Create the Cluster Manager:
+
+```azurecli
+az networkcloud clustermanager create --name "$CM_RESOURCE_NAME" \
+ --location "$LOCATION" --analytics-workspace-id "$LAW_ID" \
+ --availability-zones "$AVAILABILITY_ZONES" --fabric-controller-id "$NFC_ID" \
+ --managed-resource-group-configuration name="$CM_MRG_RG" \
+ --tags $TAG1="$VALUE1" $TAG2="$VALUE2" \
+ --resource-group "$CM_RESOURCE_GROUP"
+
+az networkcloud clustermanager wait --created --name "$CM_RESOURCE_NAME" \
+ --resource-group "$CM_RESOURCE_GROUP"
+```
+
+You can also create a Cluster Manager by using ARM template/parameter files in the
+[ARM Template Editor](https://portal.azure.com/#create/Microsoft.Template).
+
+### Parameters for use in cluster manager operations
+
+| Parameter name | Description |
+| | -- |
+| CM_RESOURCE_NAME | Resource name of the Cluster Manager |
+| LAW_ID | Log Analytics Workspace ID for the CM |
+| LOCATION | The Azure region where the CM will be deployed (for example, `eastus`) |
+| AVAILABILITY_ZONES | List of targeted availability zones, recommended "1" "2" "3" |
+| CM_RESOURCE_GROUP | The resource group name |
+| NFC_ID | ID for NFC integrated with this Cluster Manager from `az nf controller show` output |
+| CM_MRG_RG | The resource group name for the Cluster Manager managed resource group |
+| TAG/VALUE | Custom tag/value pairs to pass to Cluster Manager |
+
+The Cluster Manager is created within the resource group in your Azure Subscription.
+
+The CM Custom Location will be needed to create the Cluster; a lookup sketch follows.
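+
+A lookup sketch, assuming the custom location is created in the CM managed resource group and that the `customlocation` CLI extension is installed:
+
+```azurecli
+# Illustrative only: list custom location IDs in the managed resource group.
+az customlocation list \
+  --resource-group "$CM_MRG_RG" \
+  --query "[].id" --output tsv
+```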
+
+### CM validation
+
+The CM creation will also create other resources in the CM hosted resource groups.
+These other resources include a storage account, Key Vault, AKS cluster,
+managed identity, and a custom location.
+
+You can view the status of the CM:
+
+```azurecli
+az networkcloud clustermanager show --resource-group "$CM_RESOURCE_GROUP" \
+ --name $CM_RESOURCE_NAME"
+```
+
+The CM deployment is complete when the `provisioningState` of the resource shows: `"provisioningState": "Succeeded"`
+
+### CM logging
+
+CM creation logs can be viewed in:
+
+1. Azure portal Resource/ResourceGroup Activity logs.
+2. Azure CLI with the `--debug` flag passed on the command line.
+3. Resource provider logs based on subscription or correlation ID in debug output.
operator-nexus Quickstarts Platform Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-platform-deployment.md
+
+ Title: "Operator Nexus: Platform deployment"
+description: Learn the steps for deploying the Operator Nexus platform software.
++++ Last updated : 01/26/2023 #Required; mm/dd/yyyy format.+++
+# Platform software deployment
+
+In this quickstart, you'll learn the step-by-step process to deploy the Azure Operator
+Nexus platform software.
+
+- Step 1: Create Network fabric
+- Step 2: Create a Cluster
+- Step 3: Provision the Network fabric
+- Step 4: Provision the Cluster
+
+These steps use commands and parameters that are detailed in the API documents.
+
+## Prerequisites
+
+- Verify that the Network fabric Controller and Cluster Manager exist in your Azure region
+- Complete the [prerequisite steps](./quickstarts-platform-prerequisites.md).
+
+## API guide and metrics
+
+The [API guide](/rest/api/azure/azure-operator-distributed-services) provides
+information on the resource providers and resource models, and the APIs.
+
+The metrics generated from the logging data are available in [Azure Monitor metrics](/azure/azure-monitor/essentials/data-platform-metrics).
+
+## Step 1: create network fabric
+
+The network fabric instance (NF) is a collection of all network devices
+described in the previous section, associated with a single Operator Nexus instance. The NF
+instance interconnects compute servers and storage instances within an Operator Nexus
+instance. The NF facilitates connectivity to and from your network to
+the Operator Nexus instance.
+
+Create the Network fabric:
+
+```azurecli
+az nf fabric create --resource-group "$FABRIC_RG" --location "$LOCATION" \
+ --resource-name "$FABRIC_RESOURCE_NAME" --nf-sku "$NF_SKU" \
+ --nfc-id "$NFC_ID" \
+ --nni-config '{"layer2Configuration": null, "layer3Configuration": {"primaryIpv4Prefix": "$L3_IPV4_PREFIX1", "secondaryIpv4Prefix": "$L3_IPV4_PREFIX2", "fabricAsn": $NNI_FABRIC_ASN, "peerAsn": $NNI_PEER_ASN}}' \
+ --ts-config '{"primaryIpv4Prefix": "$TS_IPV4_PREFIX1", "secondaryIpv4Prefix": "$TS_IPV4_PREFIX2", "username": "$TS_USER", "password": "$TS_PASS"}' \
+ --managed-network-config '{"ipv4Prefix": "{ManagedNetworkIPV4Prefix}", "managementVpnConfiguration": {"optionBProperties": {"importRouteTargets": ["$IR_TARGETS"], "exportRouteTargets": ["$ER_TARGETS"]}}, "workloadVpnConfiguration": {"optionBProperties": {"importRouteTargets": ["$WL_IR_TARGETS"], "exportRouteTargets": ["$WL_ER_TARGETS"]}}}'
+
+az nf fabric show --resource-group "$FABRIC_RG" \
+ --resource-name "$FABRIC_RESOURCE_NAME"
+```
+
+Create the Network fabric Racks (Aggregate and Compute Racks).
+Repeat for each rack in the SKU.
+
+```azurecli
+az nf rack create \
+--resource-group "$FABRIC_RG" \
+--location "$LOCATION" \
+--network-rack-sku "$RACK_SKU" \
+--nf-id "$FABRIC_ID" \
+--resource-name "$RACK_RESOURCE_NAME"
+```
+
+Update the Network fabric Device names and Serial Numbers for all devices.
+Repeat for each device in the SKU.
+
+```azurecli
+az nf device update --resource-group "$FABRIC_RG" \
+ --location "$LOCATION" \
+ --resource-name "$DEVICE_RESOURCE_NAME" \
+ --device-name "$DEVICE_NAME" \
+ --network-device-role "$DEVICE_ROLE" --serial-number "$DEVICE_SN"
+```
+
+### Parameters for network fabric operations
+
+| Parameter name | Description |
+| -- | - |
+| FABRIC_RESOURCE_NAME | Resource name of the Network fabric |
+| LOCATION | The Azure region where the Network fabric will be deployed (for example, `eastus`) |
+| FABRIC_RG | The resource group name |
+| NF_SKU | SKU of the Network fabric that needs to be created |
+| NFC_ID | Reference to Network fabric Controller |
+| NNI_FABRIC_ASN | ASN of PE devices |
+| NNI_PEER_ASN | Peer ASN used for MP-BGP between PE and CE |
+| L3_IPV4_PREFIX1 | L3 IPV4 Primary Prefix |
+| L3_IPV4_PREFIX2 | L3 IPV4 Secondary Prefix |
+| TS_IPV4_PREFIX1 | Primary TS IPV4 Prefix |
+| TS_IPV4_PREFIX2 | Secondary TS IPV4 Prefix |
+| TS_USER | Username of Terminal Server |
+| TS_PASS | Password for Terminal Server username |
+| IR_TARGETS | Import route targets of management VPN (MP-BGP) to NFC via PE devices and express route. |
+| ER_TARGETS | Export route targets of management VPN (MP-BGP) to NFC via PE devices and express route. |
+| WL_IR_TARGETS | Import route targets of workload VPN (MP-BGP) to NFC via PE devices and express route. |
+| WL_ER_TARGETS | Export route targets of workload VPN (MP-BGP) to NFC via PE devices and express route. |
+| RACK_SKU | SKU of the Network fabric Rack that needs to be created |
+| RACK_RESOURCE_NAME | RACK resource name |
+| DEVICE_RESOURCE_NAME | Device resource name |
+| DEVICE_NAME | Device customer name |
+| DEVICE_ROLE | Device Type (CE/NPB/MGMT/TOR) |
+| DEVICE_SN | Device serial number for DHCP using format `*VENDOR*;*DEVICE_MODEL*;*DEVICE_HW_VER*;*DEVICE_SN*` (for example, `Arista;DCS-7280DR3K-24;12.04;JPE22113317`) |
+
+### NF validation
+
+The Network fabric creation results in the Fabric resource and other hosted resources being created in the Fabric hosted resource groups. These include racks and devices, along with configuration data such as MTU size and IP address prefixes.
+
+View the status of the Fabric:
+
+```azurecli
+az nf fabric show --resource-group "$FABRIC_RG" \
+ --resource-name "$FABRIC_RESOURCE_NAME"
+```
+
+The Fabric deployment is complete when the `provisioningState` of the resource shows: `"provisioningState": "Succeeded"`
+
+### NF logging
+
+Fabric create logs can be viewed in the following locations:
+
+1. Azure portal Resource/ResourceGroup Activity logs.
+2. Azure CLI with `--debug` flag passed on command-line.
+
+## Step 2: create a cluster
+
+The Cluster resource represents an on-premises deployment of the platform
+within the Cluster Manager. All other platform-specific resources are
+dependent upon it for their lifecycle.
+
+You should have successfully created the Network fabric for this on-premises deployment.
+Each Operator Nexus on-premises instance has a one-to-one association
+with a Network fabric.
+
+Create the Cluster:
+
+```azurecli
+az networkcloud cluster create --name "$CLUSTER_NAME" --location "$LOCATION" \
+ --extended-location name="$CL_NAME" type="CustomLocation" \
+ --resource-group "$CLUSTER_RG" \
+ --analytics-workspace-id "$LAW_ID" \
+ --cluster-location "$CLUSTER_LOCATION" \
+ --network-rack-id "$AGGR_RACK_RESOURCE_ID" \
+ --rack-sku-id "$AGGR_RACK_SKU" \
+ --rack-serial-number "$AGGR_RACK_SN" \
+ --rack-location "$AGGR_RACK_LOCATION" \
+ --bare-metal-machine-configuration-data "["$AGGR_RACK_BMM"]" \
+ --storage-appliance-configuration-data '[{"adminCredentials":{"password":"$SA_PASS","username":"$SA_USER"},"rackSlot":1,"serialNumber":"$SA_SN","storageApplianceName":"$SA_NAME"}]' \
+ --compute-rack-definitions '[{"networkRackId": "$COMPX_RACK_RESOURCE_ID", "rackSkuId": "$COMPX_RACK_SKU", "rackSerialNumber": "$COMPX_RACK_SN", "rackLocation": "$COMPX_RACK_LOCATION", "storageApplianceConfigurationData": [], "bareMetalMachineConfigurationData":[{"bmcCredentials": {"password":"$COMPX_SVRY_BMC_PASS", "username":"$COMPX_SVRY_BMC_USER"}, "bmcMacAddress":"$COMPX_SVRY_BMC_MAC", "bootMacAddress":"$COMPX_SVRY_BOOT_MAC", "machineDetails":"$COMPX_SVRY_SERVER_DETAILS", "machineName":"$COMPX_SVRY_SERVER_NAME"}]}]'\
+ --managed-resource-group-configuration name="$MRG_NAME" location="$MRG_LOCATION" \
+ --network-fabric-id "$NFC_ID" \
+ --cluster-service-principal application-id="$SP_APP_ID" \
+ password="$SP_PASS" principal-id="$SP_ID" tenant-id="$TENANT_ID" \
+ --cluster-type "$CLUSTER_TYPE" --cluster-version "$CLUSTER_VERSION" \
+ --tags $TAG_KEY1="$TAG_VALUE1" $TAG_KEY2="$TAG_VALUE2"
+
+az networkcloud cluster wait --created --name "$CLUSTER_NAME" \
+ --resource-group "$CLUSTER_RG"
+```
+
+You can instead create a Cluster with ARM template/parameter files in
+[ARM Template Editor](https://portal.azure.com/#create/Microsoft.Template):
+
+### Parameters for cluster operations
+
+| Parameter name | Description |
+| -- | -- |
+| CLUSTER_NAME | Resource Name of the Cluster |
+| LOCATION | The Azure Region where the Cluster will be deployed |
+| CL_NAME | The Cluster Manager Custom Location from Azure portal |
+| CLUSTER_RG | The cluster resource group name |
+| LAW_ID | Log Analytics Workspace ID for the Cluster |
+| CLUSTER_LOCATION | The local name of the Cluster |
+| AGGR_RACK_RESOURCE_ID | RackID for Aggregator Rack |
+| AGGR_RACK_SKU | Rack SKU for Aggregator Rack |
+| AGGR_RACK_SN | Rack Serial Number for Aggregator Rack |
+| AGGR_RACK_LOCATION | Rack physical location for Aggregator Rack |
+| AGGR_RACK_BMM | Used for single rack deployment only, empty for multi-rack |
+| SA_NAME | Storage Appliance Device name |
+| SA_PASS | Storage Appliance admin password |
+| SA_USER | Storage Appliance admin user |
+| SA_SN | Storage Appliance Serial Number |
+| COMPX_RACK_RESOURCE_ID | RackID for CompX Rack, repeat for each rack in compute-rack-definitions |
+| COMPX_RACK_SKU | Rack SKU for CompX Rack, repeat for each rack in compute-rack-definitions |
+| COMPX_RACK_SN | Rack Serial Number for CompX Rack, repeat for each rack in compute-rack-definitions |
+| COMPX_RACK_LOCATION | Rack physical location for CompX Rack, repeat for each rack in compute-rack-definitions |
+| COMPX_SVRY_BMC_PASS | CompX Rack ServerY BMC password, repeat for each rack in compute-rack-definitions and for each server in rack |
+| COMPX_SVRY_BMC_USER | CompX Rack ServerY BMC user, repeat for each rack in compute-rack-definitions and for each server in rack |
+| COMPX_SVRY_BMC_MAC | CompX Rack ServerY BMC MAC address, repeat for each rack in compute-rack-definitions and for each server in rack |
+| COMPX_SVRY_BOOT_MAC | CompX Rack ServerY boot NIC MAC address, repeat for each rack in compute-rack-definitions and for each server in rack |
+| COMPX_SVRY_SERVER_DETAILS | CompX Rack ServerY details, repeat for each rack in compute-rack-definitions and for each server in rack |
+| COMPX_SVRY_SERVER_NAME | CompX Rack ServerY name, repeat for each rack in compute-rack-definitions and for each server in rack |
+| MRG_NAME | Cluster managed resource group name |
+| MRG_LOCATION | Cluster Azure region |
+| NFC_ID | Reference to Network fabric Controller |
+| SP_APP_ID | Service Principal App ID |
+| SP_PASS | Service Principal Password |
+| SP_ID | Service Principal ID |
+| TENANT_ID | Subscription tenant ID |
+| CLUSTER_TYPE | Type of cluster, Single or MultiRack |
+| CLUSTER_VERSION | NC Version of cluster |
+| TAG_KEY1 | Optional tag1 to pass to Cluster Create |
+| TAG_VALUE1 | Optional tag1 value to pass to Cluster Create |
+| TAG_KEY2 | Optional tag2 to pass to Cluster Create |
+| TAG_VALUE2 | Optional tag2 value to pass to Cluster Create |
+
+### Cluster validation
+
+A successful Operator Nexus Cluster creation will result in the creation of an AKS cluster
+inside your subscription. The cluster ID, cluster provisioning state and
+deployment state are returned as a result of a successful `cluster create`.
+
+View the status of the Cluster:
+
+```azurecli
+az networkcloud cluster show --resource-group "$CLUSTER_RG" \
+ --name "$CLUSTER_NAME"
+```
+
+The Cluster deployment is complete when the `provisioningState` of the resource
+shows: `"provisioningState": "Succeeded"`
+
+#### Cluster logging
+
+Cluster create logs can be viewed in the following locations:
+
+1. Azure portal Resource/ResourceGroup Activity logs.
+2. Azure CLI with `--debug` flag passed on command-line.
+
+## Step 3: provision network fabric
+
+The network fabric instance (NF) is a collection of all network devices
+associated with a single Operator Nexus instance.
+
+Provision the Network fabric:
+
+```azurecli
+az nf fabric provision --resource-group "$FABRIC_RG" \
+ --resource-name "$FABRIC_RESOURCE_NAME"
+```
+
+### NF provisioning validation
+
+Provisioning the fabric creates the Fabric rack and device resources in
+the Fabric hosted resource groups. A successful provisioning returns
+data such as racks, MTU size, and IP address prefixes.
+
+View the status of the fabric:
+
+```azurecli
+az nf fabric show --resource-group "$FABRIC_RG" \
+ --resource-name "$FABRIC_RESOURCE_NAME"
+```
+
+The Fabric provisioning is complete when the `provisioningState` of the resource
+shows: `"provisioningState": "Succeeded"`
+
+### NF provisioning logging
+
+Fabric provisioning logs can be viewed in the following locations:
+
+1. Azure portal Resource/ResourceGroup Activity logs.
+2. Azure CLI with `--debug` flag passed on command-line.
+
+## Step 4: Deploy cluster
+
+Once a Cluster has been created and the Rack Manifests have been added, the
+deploy cluster action is triggered. The deploy cluster action creates the
+bootstrap image and deploys the cluster.
+
+Deploy Cluster will cause a sequence of events to occur in the Cluster Manager:
+
+1. Validation of the cluster/rack manifests for completeness.
+2. Generation of a bootable image for the ephemeral bootstrap cluster
+ (validation of infrastructure).
+3. Interaction with the IPMI interface of the targeted bootstrap machine.
+4. Hardware validation checks.
+5. Monitoring of the Cluster deployment process.
+
+Deploy the on-premises Cluster:
+
+```azurecli
+az networkcloud cluster deploy \
+ --name "$CLUSTER_NAME" \
+ --resource-group "$CLUSTER_RESOURCE_GROUP" \
+ --subscription "$SUBSCRIPTION_ID"
+```
+
+### Cluster deployment validation
+
+View the status of the cluster:
+
+```azurecli
+az networkcloud cluster show --resource-group "$CLUSTER_RG" \
+ --name "$CLUSTER_NAME"
+```
+
+The Cluster deployment is complete when the `provisioningState` of the resource
+shows: `"provisioningState": "Succeeded"`
+
+### Cluster deployment logging
+
+Cluster deploy logs can be viewed in the following locations:
+
+1. Azure portal Resource/ResourceGroup Activity logs.
+2. Azure CLI with `--debug` flag passed on command-line.
operator-nexus Quickstarts Platform Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-platform-prerequisites.md
+
+ Title: "Operator Nexus: Platform deployment pre-requisites"
+description: Learn the prerequisite steps for deploying the Operator Nexus platform software.
++++ Last updated : 01/26/2023 #Required; mm/dd/yyyy format.+++
+# Quickstart: deploy Operator Nexus platform software prerequisites
+
+You'll need to complete the prerequisites before you can deploy the
+Operator Nexus platform software. Some of these steps may take
+weeks or months, so it's worth reviewing them early.
+
+In subsequent deployments of Operator Nexus instances, you can skip to creating the on-premises
+network fabric and the cluster. An instance of Network fabric Controller can support up to 32
+Operator Nexus instances.
+
+## Prerequisites
+
+You need to be familiar with the Operator Nexus [key features](./overview.md#key-features)
+and [platform components](./concepts-resource-types.md).
+
+The prerequisite activities are split between tasks you'll perform in Azure and tasks
+you'll perform on your premises; the latter may require some data gathering.
+
+### Azure prerequisites
+
+- When deploying Operator Nexus for the first time or in a new region,
+you'll first need to create a Network fabric Controller and then a (Network Cloud) Cluster Manager as specified [here](./quickstart-network-fabric-controller-cluster-manager-create.md).
+- Set up users, policies, permissions, and RBAC
+- Set up Resource Groups to place and group resources in a logical manner
+ that will be created for Operator Nexus platform.
+- Set up Key Vault to store encryption and security tokens, service principals,
+ passwords, certificates, and API keys
+- Set up Log Analytics Workspace (LAW) to store logs and analytics data for
+ Operator Nexus sub-components (Fabric, Cluster, etc.)
+- Set up Azure Storage account to store Operator Nexus data objects:
+ - Azure Storage supports blobs and files accessible from anywhere in the world over HTTP or HTTPS
+ - this storage is not for user/consumer data.
+
+### On your premises prerequisites
+
+- Purchase and install hardware:
+ - Purchase the hardware as specified in the BOM provided to you
+ - Perform the physical installation (EF&I)
+ - Cable as per the BOM including the cabling to your WAN via a pair of PE devices.
+- All network fabric devices (except for the Terminal Server (TS)) are set to ZTP mode
+- Servers and Storage devices have default factory settings
+- Establish ExpressRoute connectivity from your WAN to an Azure Region
+- Terminal Server has been [deployed and configured](#set-up-terminal-server)
+ - Terminal Server is configured for Out-of-Band management
+ - Authentication credentials have been set up
+ - DHCP client is enabled on the out-of-band management port, and
+ - HTTP access is enabled
+ - Terminal Server Interface is connected to your on-premises Provider Edge routers (PEs) and configured with the IP addresses and credentials
+ - Terminal Server is accessible from the management VPN
+- For the [Network fabric configuration](./quickstarts-platform-deployment.md#step-1-create-network-fabric) (to be performed later)
+ you'll need to provide:
+ - ExpressRoute credentials and information
+ - Terminal Server IPs and credentials
+ - [optional] IP prefix for the Network
+ Fabric Controller (NFC) subnet during its creation; the default IPv4 and IPv6
+ prefixes are `10.0.0.0/19` and `FC00::/59`, respectively
+ - [optional] IP prefix for the Operator Nexus
+ Management plane during NFC creation.
+ By default, the `10.1.0.0/19` IPv4 and `FC00:0000:0000:100::/59` IPv6
+ prefixes are used for subnets in the management plane for the first
+ Operator Nexus instance. The prefix ranges `10.1.0.0/19` to `10.4.224.0/19` and
+ `FC00:0000:0000:100::/59` to `FC00:0000:0000:4e0::/59` are used for
+ the 32 Operator Nexus instances supported per NFC instance (see the sketch after this list).
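+
+A quick way to enumerate those 32 management-plane IPv4 prefixes, as referenced in the list above (a sketch, assuming simple consecutive /19 allocation):
+
+```bash
+# Print the 32 consecutive /19 IPv4 management-plane prefixes,
+# starting at 10.1.0.0/19 and ending at 10.4.224.0/19.
+for i in $(seq 0 31); do
+  second=$((1 + i / 8))      # second octet: 10.1 through 10.4
+  third=$(( (i % 8) * 32 ))  # third octet: 0, 32, 64, ..., 224
+  echo "10.${second}.${third}.0/19"
+done
+```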
+
+## Set up terminal server
+
+1. Set up the hostname:
+ [CLI Reference](https://opengear.zendesk.com/hc/articles/360044253292-Using-the-configuration-CLI-ogcli-)
+
+ ```bash
+ sudo ogcli update system/hostname hostname=\"$TS_HOSTNAME\"
+ ```
+
+ | Parameter name | Description |
+ | -- | - |
+ | TS_HOSTNAME | The terminal server hostname |
+
+2. Set up the network:
+
+ ```bash
+ sudo ogcli create conn << END
+ description="PE1 to TS NET1"
+ mode="static"
+ ipv4_static_settings.address="$TS_NET1_IP"
+ ipv4_static_settings.netmask="$TS_NET1_NETMASK"
+ ipv4_static_settings.gateway="$TS_NET1_GW"
+ physif="net1"
+ END
+
+ sudo ogcli create conn << END
+ description="PE2 to TS NET2"
+ mode="static"
+ ipv4_static_settings.address="$TS_NET2_IP"
+ ipv4_static_settings.netmask="$TS_NET2_NETMASK"
+ ipv4_static_settings.gateway="$TS_NET2_GW"
+ physif="net1"
+ END
+ ```
+
+ | Parameter name | Description |
+ | -- | -- |
+ | TS_NET1_IP | The terminal server PE1 to TS NET1 IP |
+ | TS_NET1_NETMASK | The terminal server PE1 to TS NET1 netmask |
+ | TS_NET1_GW | The terminal server PE1 to TS NET1 gateway |
+ | TS_NET2_IP | The terminal server PE2 to TS NET2 IP |
+ | TS_NET2_NETMASK | The terminal server PE2 to TS NET2 netmask |
+ | TS_NET2_GW | The terminal server PE2 to TS NET2 gateway |
+
+3. Set up the support admin user:
+
+ For each port
+
+ ```bash
+ ogcli create user << END
+ description="Support Admin User"
+ enabled=true
+ groups[0]="admin"
+ groups[1]="netgrp"
+ hashed_password="$HASHED_SUPPORT_PWD"
+ username="$SUPPORT_USER"
+ END
+ ```
+
+ | Parameter name | Description |
+ | -- | -- |
+ | SUPPORT_USER | Support User |
+ | HASHED_SUPPORT_PWD | Encoded Support Admin user password |
+
+4. Verify settings:
+
+```bash
+ ping $PE1_IP -c 3 # Ping test to PE1
+ ping $PE2_IP -c 3 # Ping test to PE2
+ ogcli get conns # verify NET1, NET2
+ ogcli get users # verify Support Admin User
+ ogcli get static_routes # There should be no static routes
+ ip r # verify only interface routes
+ ip a # verify loopback, NET1, NET2
+```
+
+## Set up Pure storage
+
+1. The operator installs the Pure hardware as specified by the BOM and rack elevation within the Aggregation Rack.
+2. The operator provides the Pure technician with the information required to configure the appliance on-site.
+3. Required location-specific data that will be shared with Pure Technician:
+ - Customer Name:
+ - Physical Inspection Date:
+ - Chassis Serial Number:
+ - Pure Array Hostname:
+ - CLLI code (Common Language location identifier):
+ - Installation Address:
+ - FIC/Rack/Grid Location:
+4. Data provided to the Operator and shared with Pure Technician, which will be common to all installations:
+ - Purity Code Level: 6.1.14
+ - Array Time zone: UTC
+ - DNS Server IP Address: 172.27.255.201
+ - DNS Domain Suffix: not set by Operator during setup
+ - NTP Server IP Address or FQDN: 172.27.255.212
+ - Syslog Primary: 172.27.255.210
+ - Syslog Secondary: 172.27.255.211
+ - SMTP Gateway IP address or FQDN: not set by Operator during setup
+ - Email Sender Domain Name: not set by Operator during setup
+ - Email Address(es) to be alerted: not set by Operator during setup
+ - Proxy Server and Port: not set by Operator during setup
+ - Management: Virtual Interface
+ - IP Address: 172.27.255.200
+ - Gateway: 172.27.255.1
+ - Subnet Mask: 255.255.255.0
+ - MTU: 1500
+ - Bond: not set by Operator during setup
+ - Management: Controller 0
+ - IP Address: 172.27.255.254
+ - Gateway: 172.27.255.1
+ - Subnet Mask: 255.255.255.0
+ - MTU: 1500
+ - Bond: not set by Operator during setup
+ - Management: Controller 1
+ - IP Address: 172.27.255.253
+ - Gateway: 172.27.255.1
+ - Subnet Mask: 255.255.255.0
+ - MTU: 1500
+ - Bond: not set by Operator during setup
+ - VLAN Number / Prefix: 43
+ - ct0.eth10: not set by Operator during setup
+ - ct0.eth11: not set by Operator during setup
+ - ct0.eth18: not set by Operator during setup
+ - ct0.eth19: not set by Operator during setup
+ - ct1.eth10: not set by Operator during setup
+ - ct1.eth11: not set by Operator during setup
+ - ct1.eth18: not set by Operator during setup
+ - ct1.eth19: not set by Operator during setup
+
+## Install CLI extensions and sign in to your Azure subscription
+
+Install the latest version of the
+[necessary CLI extensions](./howto-install-cli-extensions.md).
+
+### Azure subscription sign-in
+
+```azurecli
+ az login
+ az account set --subscription $SUBSCRIPTION_ID
+ az account show
+```
+
+>[!NOTE]
+>The account must have permissions to read/write/publish in the subscription.
operator-nexus Quickstarts Tenant Workload Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-deployment.md
+
+ Title: How to deploy tenant workloads
+description: Learn the steps for creating VMs for VNF workloads and for creating AKS-Hybrid clusters for CNF workloads
+ Last updated: 01/25/2023
+# How-to deploy tenant workloads
+
+This how-to guide explains the steps for deploying VNF and CNF workloads. Section V (for VM-based deployments) covers creating VMs to deploy VNF workloads. Section K (for Kubernetes-based deployments) covers creating AKS-Hybrid clusters for deploying CNF workloads.
+
+These examples don't specify all required parameters and, thus, shouldn't be used verbatim.
+
+## Before you begin
+
+You should complete the prerequisites specified [here](./quickstarts-tenant-workload-prerequisites.md).
+
+**Capacity note:**
+
+Suppose each server has two CPU chips and each chip has 28 cores. With hyperthreading enabled (the default), each chip supports 56 vCPUs. Of these, 10 vCPUs are reserved for infrastructure (OS, agents, emulator thread, etc.), leaving 46 vCPUs available for your workloads (the maximum VM size).
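+
+The arithmetic as a runnable sketch (numbers match the example above):
+
+```bash
+cores_per_chip=28
+threads_per_core=2                             # hyperthreading enabled (default)
+vcpus=$((cores_per_chip * threads_per_core))   # 56 vCPUs per chip
+reserved=10                                    # OS, agents, emulator thread, etc.
+echo "Maximum VM size: $((vcpus - reserved)) vCPUs"   # prints 46
+```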
+
+## Section V: how to create VMs for deploying VNF workloads
+
+Step-V1: [Create isolation-domains for VMs](#step-v1-create-isolation-domain-for-vm-workloads)
+
+Step-V2: [Create Networks for VM](#step-v2-create-networks-for-vm-workloads)
+
+Step-V3: [Create Virtual Machines](#step-v3-create-a-vm)
+
+## Deploy VMs for VNF workloads
+
+This section explains steps to create VMs for VNF workloads
+
+### Step V1: create isolation domain for VM workloads
+
+Isolation-domains enable layer 2 and layer 3 connectivity between network functions running on Operator Nexus and the network fabric.
+This connectivity enables inter-rack and intra-rack communication between the workloads.
+You can create as many L2 and L3 isolation-domains as needed.
+
+You should have the following information already:
+
+- VLAN/subnet info for each of the layer 3 network(s)
+- Which network(s) need to talk to each other (remember to put VLANs/subnets that need to
+ talk to each other into the same L3 isolation-domain)
+- VLAN/subnet info for your `defaultcninetwork` for AKS-Hybrid cluster
+- BGP peering and network policies information for your L3 isolation-domain(s)
+- VLANs for all your layer 2 network(s)
+- VLANs for all your trunked network(s)
+- MTU values for your network.
+
+#### L2 isolation domain
++
+#### L3 isolation domain
++
+### Step V2: create networks for VM workloads
+
+This section describes how to create the following networks for VM Workloads:
+
+- Layer 2 network
+- Layer 3 network
+- Trunked network
+- Cloud services network
+
+#### Create an L2 network
+
+You'll need to create an L2 network if your VM requires one. Repeat the instructions for each L2 network required.
+
+You'll need the resource ID of the L2 isolation-domain you [created](#l2-isolation-domain) that configures the VLAN for this network.
+
+Example CLI command:
+
+```azurecli
+ az networkcloud l2network create --name "<YourL2NetworkName>" \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>" \
+ --extended-location name="<ClusterCustomLocationId>" type="CustomLocation" \
+ --location "<ClusterAzureRegion>" \
+ --l2-isolation-domain-id "<YourL2IsolationDomainId>"
+```
+
+#### Create an L3 network
+
+You'll need to create an L3 network if your VM requires one. Repeat the instructions for each L3 network required.
+
+You'll need:
+
+- resource ID of the L3 isolation-domain you [created](#l3-isolation-domain) that configures the VLAN for this network.
+- The ipv4-connected-prefix must match the ipv4-connected-prefix of the L3 isolation-domain
+- The ipv6-connected-prefix must match the ipv6-connected-prefix of the L3 isolation-domain
+- The ip-allocation-type can be either "IPv4", "IPv6", or "DualStack" (default)
+- The VLAN value must match what is in the L3 isolation-domain
+- The MTU of the network doesn't need to be specified here; the network inherits the MTU specified during isolation-domain creation
+
+```azurecli
+ az networkcloud l3network create --name "<YourL3NetworkName>" \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>" \
+ --extended-location name="<ClusterCustomLocationId>" type="CustomLocation" \
+ --location "<ClusterAzureRegion>" \
+ --ip-allocation-type "<YourNetworkIpAllocation>" \
+ --ipv4-connected-prefix "<YourNetworkIpv4Prefix>" \
+ --ipv6-connected-prefix "<YourNetworkIpv6Prefix>" \
+ --l3-isolation-domain-id "<YourL3IsolationDomainId>" \
+ --vlan <YourNetworkVlan>
+```
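+
+Optionally, confirm the network reached the `Succeeded` state before moving on (a minimal check using the generic `--query` option, assuming the resource reports a `provisioningState` field as described elsewhere in this guide):
+
+```azurecli
+ az networkcloud l3network show --name "<YourL3NetworkName>" \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>" \
+ -o tsv --query provisioningState
+```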
+
+#### Create a trunked network
+
+You'll need to create a trunked network if your VM requires one. Repeat the instructions for each trunked network required.
+
+You'll need to gather the resourceId(s) of the L2 and L3 isolation-domains you created earlier to configure the VLAN(s) for this network.
+You can include as many L2 and L3 isolation-domains as needed.
+
+```azurecli
+ az networkcloud trunkednetwork create --name "<YourTrunkedNetworkName>" \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>" \
+ --extended-location name="<ClusterCustomLocationId>" type="CustomLocation" \
+ --location "<ClusterAzureRegion>" \
+ --interface-name "<YourNetworkInterfaceName>" \
+ --isolation-domain-ids \
+ "<YourL3IsolationDomainId1>" \
+ "<YourL3IsolationDomainId2>" \
+ "<YourL2IsolationDomainId1>" \
+ "<YourL2IsolationDomainId2>" \
+ "<YourL3IsolationDomainId3>" \
+ --vlans <YourVlanList>
+```
+
+### Create cloud services network
+
+Your VM requires one Cloud Services Network. You'll need the list of egress endpoints to add to the proxy so that your VM can access them.
+
+```azurecli
+ az networkcloud cloudservicesnetwork create --name "<YourCloudServicesNetworkName>" \
+ --resource-group "<YourResourceGroupName >" \
+ --subscription "<YourSubscription>" \
+ --extended-location name="<ClusterCustomLocationId >" type="CustomLocation" \
+ --location "<ClusterAzureRegion>" \
+ --additional-egress-endpoints "[{\"category\":\"<YourCategory >\",\"endpoints\":[{\"<domainName1 >\":\"< endpoint1 >\",\"port\":<portnumber1 >}]}]"
+```
+
+### Step V3: create a VM
+
+Operator Nexus virtual machines (VMs) can be used to host VNFs within a telco network.
+Operator Nexus provides the `az networkcloud virtualmachine create` command to create a customized
+VM. A VM created on your cluster is [Microsoft Azure Arc-enrolled](/azure/azure-arc/servers/overview),
+which provides a way to SSH to it via the Azure CLI.
+
+#### Parameters
+
+- The `subscription`, `resource group`, `location`, and `customlocation` of the Operator Nexus cluster for deployment
+ - **SUBSCRIPTION**=
+ - **RESOURCE_GROUP**=
+ - **LOCATION**=
+ - **CUSTOM_LOCATION**=
+- A service principal configured with proper access
+ - **SERVICE_PRINCIPAL_ID**=
+ - **SERVICE_PRINCIPAL_SECRET**=
+- A tenant ID
+ - **TENANT_ID**=
+- If the VM image is hosted in a managed ACR, a generated token for access
+ - **ACR_URL**=
+ - **ACR_USERNAME**=
+ - **ACR_TOKEN**=
+ - **IMAGE_URL**=
+- SSH public/private keypair
+ - **SSH_PUBLIC_KEY**=
+ - **SSH_PRIVATE_KEY**=
+- Azure CLI and extensions installed and available
+- A customized `cloudinit userdata` file (provided)
+ - **USERDATA**=
+- The resource ID of the earlier created [cloud service network](#create-cloud-services-network) and [L3 networks](#create-an-l3-network) to configure VM connectivity
+
+#### 1. Update user data file
+
+Update the values listed in the _USERDATA_ file with the proper information
+
+- service principal ID
+- service principal secret
+- tenant ID
+- location (Azure Region)
+- custom location
+
+Locate the following line in the _USERDATA_ (toward the end) and update appropriately:
+
+```bash
+azcmagent connect --service-principal-id _SERVICE_PRINCIPAL_ID_ --service-principal-secret _SERVICE_PRINCIPAL_SECRET_ --tenant-id _TENANT_ID_ --subscription-id _SUBSCRIPTION_ --resource-group _RESOURCE_GROUP_ --location _LOCATION_
+```
+
+Encode the user data
+
+```bash
+ENCODED_USERDATA=$(base64 -w0 USERDATA)
+```
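+
+To sanity-check the encoding, you can decode it back and compare it with the original (a simple sketch; no output means the round-trip matches):
+
+```bash
+echo "$ENCODED_USERDATA" | base64 -d | diff - USERDATA
+```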
+
+#### 2. Create the VM with the encoded data
+
+Update the VM template with proper information:
+
+- name (_VMNAME_)
+- location (_LOCATION_)
+- custom location (_CUSTOM_LOCATION_)
+- adminUsername (_ADMINUSER_)
+- cloudServicesNetworkAttachment
+- cpuCores
+- memorySizeGB
+- networkAttachments (set your layer3 network as default gateway)
+- sshPublicKeys (_SSH_PUBLIC_KEY_)
+- diskSizeGB
+- userData (_ENCODED_USERDATA_)
+- vmImageRepositoryCredentials (_ACR_URL_, _ACR_USERNAME_, _ACR_TOKEN_)
+- vmImage (_IMAGE_URL_)
+
+Run this command, updating it with your resource group and subscription info:
+
+- subscription
+- resource group
+- deployment name
+- layer 3 network template
+
+```azurecli
+az deployment group create --resource-group _RESOURCE_GROUP_ --subscription=_SUBSCRIPTION_ --name _DEPLOYMENT_NAME_ --template-file _VM_TEMPLATE_
+```
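+
+Optionally, you can preview the changes first with the generic `what-if` operation (a sketch using the same placeholder values; not required for the deployment):
+
+```azurecli
+az deployment group what-if --resource-group _RESOURCE_GROUP_ --subscription=_SUBSCRIPTION_ --name _DEPLOYMENT_NAME_ --template-file _VM_TEMPLATE_
+```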
+
+#### 3. SSH to the VM
+
+It takes a few minutes for the VM to be created and then Arc-connected, so if the command fails at first, try again after a short wait.
+
+```azurecli
+az ssh vm -n _VMNAME_ -g _RESOURCE_GROUP_ --subscription _SUBSCRIPTION_ --private-key _SSH_PRIVATE_KEY_ --local-user _ADMINUSER_
+```
+
+Here's some information you'll need for the full VM create command:
+
+- The `resourceId` of the `cloudservicesnetwork`
+- The `resourceId(s)` for each of the L2/L3/Trunked networks
+- Determine which network will serve as your default gateway (can only choose 1)
+- If you want to specify `networkAttachmentName` (interface name) for any of your networks
+- Determine the `ipAllocationMethod` for each of your L3 networks (static/dynamic)
+- The dimensions of your VM
+ - number of cpuCores
+ - RAM (memorySizeGB)
+ - DiskSize
+ - emulatorThread support (if needed)
+- Boot method (UEFI/BIOS)
+- vmImage reference and credentials needed to download this image
+- sshKey(s)
+- placement information
+
+The sample command contains the information about the VM requirements covering
+compute/network/storage.
+
+Sample Command:
+
+```azurecli
+az networkcloud virtualmachine create --name "<YourVirtualMachineName>" \
+--resource-group "<YourResourceGroup>" \
+--subscription "<YourSubscription" \
+--extended-location name="<ClusterCustomLocationId>" type="CustomLocation" \
+--location "<ClusterAzureRegion>" \
+--admin-username "<AdminUserForYourVm>" \
+--csn attached-network-id="<CloudServicesNetworkResourceId>" \
+--cpu-cores <NumOfCpuCores> \
+--memory-size <AmountOfMemoryInGB> \
+--network-attachments '[{"attachedNetworkId":"<L3NetworkResourceId>","ipAllocationMethod":"<YourIpAllocationMethod>","defaultGateway":"True","networkAttachmentName":"<YourNetworkInterfaceName>"},
+ {"attachedNetworkId":"<L2NetworkResourceId>","ipAllocationMethod":"Disabled","networkAttachmentName":"<YourNetworkInterfaceName>"},
+ {"attachedNetworkId":"<TrunkedNetworkResourceId>","ipAllocationMethod":"Disabled","networkAttachmentName":"<YourNetworkInterfaceName>"}]' \
+--storage-profile create-option="Ephemeral" delete-option="Delete" disk-size="<YourVmDiskSize>" \
+--vm-image "<vmImageRef>" \
+--ssh-key-values "<YourSshKey1>" "<YourSshKey2>" \
+--placement-hints '[{<YourPlacementHint1>},
+ {<YourPlacementHint2>}]' \
+--vm-image-repository-credentials registry-url="<YourAcrUrl>" username="<YourAcrUsername>" password="<YourAcrPassword>"
+```
+
+You've created the VMs with your custom image, and they're now ready to use for VNF workloads.
+
+## Section K: how to create AKS-Hybrid cluster for deploying CNF workloads
+
+Step-K1: [Create isolation-domains for AKS-Hybrid Cluster](#step-k1-create-isolation-domain-for-aks-hybrid-cluster)
+
+Step-K2: [Create Networks for AKS-Hybrid Cluster](#step-k2-create-aks-hybrid-networks)
+
+Step-K3: [Create AKS-Hybrid Cluster](#step-k3-create-an-aks-hybrid-cluster)
+
+Step-K4: [Provision Tenant workloads (CNFs)](#step-k4-provision-tenant-workloads-cnfs)
+
+**Commands shown below are examples and should not be copied or used verbatim.**
+
+## Create AKS-Hybrid clusters for CNF workloads
+
+This section explains steps to create AKS-Hybrid clusters for CNF workloads
+
+### Step K1: create isolation domain for AKS-Hybrid cluster
+
+You should have the following information already:
+
+- VLAN/subnet info for each of the layer 3 network(s). List of networks
+ that need to talk to each other (remember to put VLAN/subnets that needs to
+ talk to each other into the same L3 isolation-domain)
+- VLAN/subnet info for your `defaultcninetwork` for aks-hybrid cluster
+- BGP peering and network policies information for your L3 isolation-domain(s)
+- VLANs for all your layer 2 network(s)
+- VLANs for all your trunked network(s)
+- MTU needs to be passed during creation of isolation-domain, due to a known issue. The issue will be fixed with the 11/15 release.
+
+#### L2 isolation domain
++
+#### L3 isolation domain
++
+### Step K2: create AKS-Hybrid networks
+
+This section describes how to create networks and vNETs for your AKS-Hybrid cluster.
+
+#### Step K2a: create tenant networks for AKS-Hybrid cluster
+
+This section describes how to create the following networks:
+
+- Layer 2 network
+- Layer 3 network
+- Trunked network
+- Default CNI network
+- Cloud services network
+
+At a minimum, you need to create a "Default CNI network" and a "Cloud Services network".
+
+##### Create an L2 network for AKS-Hybrid cluster
+
+You'll need the resourceId of the [L2 isolation-domain](#l2-isolation-domain-1) you created earlier that configures the VLAN for this network.
+
+For your network, the valid values for
+`hybrid-aks-plugin-type` are `OSDevice`, `SR-IOV`, `DPDK`; the default value is `SR-IOV`.
+
+```azurecli
+ az networkcloud l2network create --name "<YourL2NetworkName>" \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>" \
+ --extended-location name="<ClusterCustomLocation>" type="CustomLocation" \
+ --location "< ClusterAzureRegion>" \
+ --l2-isolation-domain-id "<YourL2IsolationDomainId>" \
+ --hybrid-aks-plugin-type "<YourHaksPluginType>"
+```
+
+##### Create an L3 network for AKS-Hybrid cluster
+
+You'll need the following information:
+
+- The `resourceId` of the [L3 isolation-domain](#l3-isolation-domain) domain you created earlier that configures the VLAN for this network.
+- The `ipv4-connected-prefix` must match the ipv4-connected-prefix of the L3 isolation-domain
+- The `ipv6-connected-prefix` must match the ipv6-connected-prefix of the L3 isolation-domain
+- The `ip-allocation-type` can be either "IPv4", "IPv6", or "DualStack" (default)
+- The VLAN value must match what is in the L3 isolation-domain
+- The MTU of the network doesn't need to be specified here as the network will be configured with the MTU specified during isolation-domain creation
+
+You'll also need to configure the following information for your aks-hybrid cluster
+
+- hybrid-aks-ipam-enabled: If you want IPAM enabled for this network within your AKS-hybrid cluster. Default: True
+- hybrid-aks-plugin-type: valid values are `OSDevice`, `SR-IOV`, `DPDK`. Default: `SR-IOV`
+
+```azurecli
+ az networkcloud l3network create --name "<YourL3NetworkName>" \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>" \
+ --extended-location name="<ClusterCustomLocationId>" type="CustomLocation" \
+ --location "< ClusterAzureRegion>" \
+ --ip-allocation-type "<YourNetworkIpAllocation>" \
+ --ipv4-connected-prefix "<YourNetworkIpv4Prefix>" \
+ --ipv6-connected-prefix "<YourNetworkIpv6Prefix>" \
+ --l3-isolation-domain-id "<YourL3IsolationDomainId>" \
+ --vlan <YourNetworkVlan> \
+ --hybrid-aks-ipam-enabled "<YourHaksIpam>" \
+ --hybrid-aks-plugin-type "<YourHaksPluginType>"
+```
+
+##### Create a trunked network for AKS-hybrid cluster
+
+You'll need to gather the resourceId(s) of the L2 and L3 isolation-domains you created earlier that configured the VLAN(s) for this network. You're allowed to include as many L2 and L3 isolation-domains as needed.
+
+You'll also need to configure the following information for your network
+
+- hybrid-aks-plugin-type: valid values are `OSDevice`, `SR-IOV`, `DPDK`. Default: `SR-IOV`
+
+```azurecli
+ az networkcloud trunkednetwork create --name "<YourTrunkedNetworkName>" \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>" \
+ --extended-location name="<ClusterCustomLocationId>" type="CustomLocation" \
+ --location "<ClusterAzureRegion>" \
+ --interface-name "<YourNetworkInterfaceName>" \
+ --isolation-domain-ids \
+ "<YourL3IsolationDomainId1>" \
+ "<YourL3IsolationDomainId2>" \
+ "<YourL2IsolationDomainId1>" \
+ "<YourL2IsolationDomainId2>" \
+ "<YourL3IsolationDomainId3>" \
+ --vlans <YourVlanList> \
+ --hybrid-aks-plugin-type "<YourHaksPluginType>"
+```
+
+##### Create default CNI network for AKS-Hybrid cluster
+
+You'll need the following information:
+
+- `resourceId` of the L3 isolation-domain you created earlier that configures the VLAN for this network.
+- The ipv4-connected-prefix must match the ipv4-connected-prefix of the L3 isolation-domain
+- The ipv6-connected-prefix must match the ipv6-connected-prefix of the L3 isolation-domain
+- The ip-allocation-type can be either "IPv4", "IPv6", or "DualStack" (default)
+- The VLAN value must match what is in the L3 isolation-domain
+- The network MTU doesn't need to be specified here, but the network will be configured with the same MTU information
+
+```azurecli
+ az networkcloud defaultcninetwork create --name "<YourDefaultCniNetworkName>" \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>" \
+ --extended-location name="<ClusterCustomLocationId>" type="CustomLocation" \
+ --location "<ClusterAzureRegion>" \
+ --ip-allocation-type "<YourNetworkIpAllocation>" \
+ --ipv4-connected-prefix "<YourNetworkIpv4Prefix>" \
+ --ipv6-connected-prefix "<YourNetworkIpv6Prefix>" \
+ --l3-isolation-domain-id "<YourL3IsolationDomainId>" \
+ --vlan <YourNetworkVlan>
+```
+
+##### Create cloud services network for AKS-Hybrid cluster
+
+You'll need the following information:
+
+- The egress endpoints you want to add to the proxy for your VM to access.
+
+```azurecli
+ az networkcloud cloudservicesnetwork create --name "<YourCloudServicesNetworkName>" \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>" \
+ --extended-location name="< ClusterCustomLocationId >" type="CustomLocation" \
+ --location "< ClusterAzureRegion >" \
+ --additional-egress-endpoints "[{\"category\":\"< YourCategory >\",\"endpoints\":[{\"< domainName1 >\":\"< endpoint1 >\",\"port\":< portnumber1 >}]}]"
+```
+
+#### Step K2b: create vNET for the tenant networks of AKS-Hybrid cluster
+
+For each previously created tenant network, a corresponding AKS-Hybrid vNET network needs to be created.
+
+You'll need the Azure Resource Manager resource ID for each of the networks you created earlier. You can retrieve the Azure Resource Manager resource IDs as follows:
+
+```azurecli
+az networkcloud cloudservicesnetwork show -g "<YourResourceGroupName>" -n "<YourCloudServicesNetworkName>" --subscription "<YourSubscription>" -o tsv --query id
+
+az networkcloud defaultcninetwork show -g "<YourResourceGroupName>" -n "<YourDefaultCniNetworkName>" --subscription "<YourSubscription>" -o tsv --query id
+
+az networkcloud l2network show -g "<YourResourceGroupName>" -n "<YourL2NetworkName>" --subscription "<YourSubscription>" -o tsv --query id
+
+az networkcloud l3network show -g "<YourResourceGroupName>" -n "<YourL3NetworkName>" --subscription "<YourSubscription>" -o tsv --query id
+
+az networkcloud trunkednetwork show -g "<YourResourceGroupName>" -n "<YourTrunkedNetworkName>" --subscription "<YourSubscription>" -o tsv --query id
+```
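+
+For instance, you can capture an ID directly into a shell variable to pass to the vNET create step (the variable name is illustrative):
+
+```azurecli
+# Capture the cloud services network ARM ID for later use.
+CSN_ARM_ID=$(az networkcloud cloudservicesnetwork show -g "<YourResourceGroupName>" \
+ -n "<YourCloudServicesNetworkName>" --subscription "<YourSubscription>" -o tsv --query id)
+```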
+
+##### To create vNET for each tenant network
+
+```azurecli
+az hybridaks vnet create \
+ --name <YourVnetName> \
+ --resource-group "<YourResourceGroupName>" \
+ --subscription "<YourSubscription>"\
+ --custom-location "<ARM ID of the custom location>" \
+ --aods-vnet-id "<ARM resource ID>"
+```
+
+### Step K3: Create an AKS-Hybrid Cluster
+
+This section describes how to create an AKS-Hybrid Cluster
+
+```azurecli
+ az hybridaks create \
+ -n <aks-hybrid cluster name> \
+ -g <Azure resource group> \
+ --subscription "<YourSubscription>" \
+ --custom-location <ARM ID of the custom location> \
+ --vnet-ids <comma separated list of ARM IDs of all the Azure hybridaks vnets> \
+ --aad-admin-group-object-ids <comma separated list of Azure AD group IDs> \
+ --kubernetes-version v1.21.9 \
+ --load-balancer-sku stacked-kube-vip \
+ --control-plane-count <count> \
+ --location <dc-location> \
+ --node-count <worker node count> \
+ --node-vm-size <Operator Nexus SKU>
+```
+
+After a few minutes, the command completes and returns JSON-formatted information about the cluster.
+
+#### Connect to the AKS-Hybrid cluster
+
+Now that you've created the cluster, connect to your AKS-Hybrid cluster by running the
+`az hybridaks proxy` command from your local machine. Make sure to sign in to Azure before
+running this command. If you have multiple Azure subscriptions, select the appropriate
+subscription ID using the `az account set` command.
+
+This command downloads the kubeconfig of your AKS-Hybrid cluster to your local machine
+and opens a proxy connection channel to your on-premises AKS-Hybrid cluster.
+The channel is open for as long as this command is running. Let this command run for
+as long as you want to access your cluster. If this command times out, close the CLI
+window, open a fresh one and run the command again.
+
+```azurecli
+az hybridaks proxy --name <aks-hybrid cluster name> --resource-group <Azure resource group> --file .\aks-hybrid-kube-config
+```
+
+Expected output:
+
+```output
+Proxy is listening on port 47011
+Merged "aks-workload" as current context in .\aks-hybrid-kube-config
+Start sending kubectl requests on 'aks-workload' context using kubeconfig at .\aks-hybrid-kube-config
+Press CTRL+C to close proxy.
+```
+
+Keep this session running and connect to your AKS-Hybrid cluster from a
+different terminal/command prompt. Verify that you can connect to your
+AKS-Hybrid cluster by running the `kubectl get nodes` command. This command
+returns a list of the cluster nodes.
+
+```bash
+ kubectl get nodes -A --kubeconfig .\aks-hybrid-kube-config
+```
+
+### Step K4: provision tenant workloads (CNFs)
+
+You can now deploy the CNFs either directly via Operator Nexus APIs or via Azure Network Function Manager.
operator-nexus Quickstarts Tenant Workload Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-prerequisites.md
+
+ Title: How to deploy tenant workloads prerequisites
+description: Learn the prerequisites for creating VMs for VNF workloads and for creating AKS-Hybrid clusters for CNF workloads
+ Last updated: 01/25/2023
+# Tenant workloads deployment prerequisites
+
+
+Figure: Tenant Workload Deployment Flow
+
+This guide explains the prerequisites for creating VMs for VNF workloads and AKS-Hybrid clusters for CNF workloads.
+
+## Preparation
+
+You'll need to create various networks based on your workload needs. The following are some
+recommended questions to consider, but this list is by no means exhaustive. Consult with
+the appropriate support team(s) for help:
+
+- What type of network(s) would you need to support your workload?
+ - A layer 3 network requires a VLAN and subnet assignment
+ - The subnet must be large enough to support IP assignment to each of the VMs
+ - Note that the first three usable IP addresses are reserved for internal use by the
+ platform. For instance, to support 6 VMs, the minimum CIDR for
+ your subnet is /28 (14 usable addresses - 3 reserved = 11 addresses available; see the sketch after this list)
+ - A layer 2 network requires only a single VLAN assignment
+ - A trunked network requires the assignment of multiple VLANs
+ - Determine how many networks of each type you'll need
+ - Determine the MTU size of each of your networks (maximum is 9000)
+ - Determine the BGP peering info for each network, and whether they'll need to talk to
+ each other. You should group networks that need to talk to each other into the same L3
+ isolation-domain, as each L3 isolation-domain can support multiple layer 3 networks.
+ - You'll be provided with a proxy to allow your VM to reach other external endpoints.
+ You'll be asked later to create a `cloudservicesnetwork` where you'll need to supply the
+ endpoints to be proxied, so now is a good time to gather that list of endpoints
+ (you can update the list of endpoints after the network is created)
+ - For an AKS-Hybrid cluster, you'll also create a `defaultcninetwork` to support your
+ cluster's CNI networking needs; you'll need another VLAN/subnet
+ assignment, similar to a layer 3 network.
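+
+The subnet-sizing arithmetic from the list above, as a small sketch:
+
+```bash
+prefix=28
+total=$((2 ** (32 - prefix)))   # 16 addresses in a /28
+usable=$((total - 2))           # minus network and broadcast -> 14
+available=$((usable - 3))       # minus the three platform-reserved addresses -> 11
+echo "/$prefix supports up to $available VMs"
+```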
+
+You'll need:
+
+- your Azure account and the subscription ID of Operator Nexus cluster deployment
+- the `custom location` resource ID of your Operator Nexus cluster
+
+### Review Azure container registry
+
+[Azure Container Registry](/azure/container-registry/container-registry-intro) is a managed registry service to store and manage your container images and related artifacts.
+The documentation provides details on Azure Container Registry operations, such as [Push/Pull an image](/azure/container-registry/container-registry-get-started-docker-cli?tabs=azure-cli) and [Push/Pull a Helm chart](/azure/container-registry/container-registry-helm-repos), as well as security and monitoring.
+For more details, see [Azure Container Registry](/azure/container-registry/).
+
+## Install CLI extensions
+
+Install latest version of the
+[necessary CLI extensions](./howto-install-cli-extensions.md).
+
+## Operator Nexus workload images
+
+These images will be used when creating your workload VMs. Make sure each is a
+containerized image in either `qcow2` or `raw` disk format and is uploaded to an Azure Container
+Registry. If your Azure Container Registry is password protected, you can supply this info when creating your VM.
+Refer to [Operator Nexus VM disk image build procedure](#operator-nexus-vm-disk-image-build-procedure) for an example for pulling from an anonymous Azure Container Registry.
+
+### Operator Nexus VM disk image build procedure
+
+This is a paper-exercise example of an anonymous pull of an image from Azure Container Registry.
+It assumes that you already have an existing VM instance image in `qcow2` format and that the image is set up to boot with cloud-init. A working docker build and runtime environment is required.
+
+Create a Dockerfile that copies the `qcow2` image file into the container's `/disk` directory, the expected location, with the correct ownership.
+For example, a Dockerfile named `aods-vm-img-dockerfile`:
+
+```dockerfile
+FROM scratch
+ADD --chown=107:107 your-favorite-image.qcow2 /disk/
+```
+
+Using the docker command, build the image and tag it for a Docker registry (such as Azure Container Registry) that you can push to. The build can take a while depending on how large the `qcow2` file is.
+The docker command assumes the `qcow2` file is in the same directory as your Dockerfile.
+
+```bash
+ docker build -f aods-vm-img-dockerfile -t devtestacr.azurecr.io/your-favorite-image:v1 .
+```
+
+Sign in to the Azure Container Registry if needed, and push the image. Given the size of the Docker image, this push can also take a while.
+
+```azurecli
+az acr login -n devtestacr
+```
+
+The push refers to the repository `devtestacr.azurecr.io/your-favorite-image`:
+
+```bash
+docker push devtestacr.azurecr.io/your-favorite-image:v1
+```
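+
+You can confirm the image and tag landed in the registry (a quick check with a standard ACR command):
+
+```azurecli
+az acr repository show-tags --name devtestacr --repository your-favorite-image -o table
+```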
+
+### Create VM using image
+
+You can now use this image when creating Operator Nexus virtual machines.
+
+```azurecli
+az networkcloud virtualmachine create --name "<YourVirtualMachineName>" \
+--resource-group "<YourResourceGroup>" \
+--subscription "<YourSubscription" \
+--extended-location name="<ClusterCustomLocationId>" type="CustomLocation" \
+--location "<ClusterAzureRegion>" \
+--admin-username "<AdminUserForYourVm>" \
+--csn attached-network-id="<CloudServicesNetworkResourceId>" \
+--cpu-cores <NumOfCpuCores> \
+--memory-size <AmountOfMemoryInGB> \
+--network-attachments '[{"attachedNetworkId":"<L3NetworkResourceId>","ipAllocationMethod":"<YourIpAllocationMethod>","defaultGateway":"True","networkAttachmentName":"<YourNetworkInterfaceName>"},
+ {"attachedNetworkId":"<L2NetworkResourceId>","ipAllocationMethod":"Disabled","networkAttachmentName":"<YourNetworkInterfaceName>"},
+ {"attachedNetworkId":"<TrunkedNetworkResourceId>","ipAllocationMethod":"Disabled","networkAttachmentName":"<YourNetworkInterfaceName>"}]' \
+--storage-profile create-option="Ephemeral" delete-option="Delete" disk-size="<YourVmDiskSize>" \
+--vm-image "<vmImageRef>" \
+--ssh-key-values "<YourSshKey1>" "<YourSshKey2>" \
+--placement-hints '[{<YourPlacementHint1>},
+ {<YourPlacementHint2>}]' \
+--vm-image-repository-credentials registry-url="<YourAcrUrl>" username="<YourAcrUsername>" password="<YourAcrPassword>"
+```
+
+This VM image build procedure is derived from [kubevirt](https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#containerdisk-workflow-example).
+
+## Miscellaneous prerequisites
+
+To deploy your workloads, you'll also need:
+
+- a resource group (newly created or existing) to use for your workloads
+- the network fabric resource ID, which you'll need to create isolation-domains
operator-nexus Reference Customer Edge Provider Edge Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-customer-edge-provider-edge-connectivity.md
+
+ Title: Operator Nexus Operator PE configuration
+description: Interconnectivity parameters to configure PE for PE-CE connectivity in Operator Nexus
+ Last updated: 02/02/2023
+# Configuration options for PE-CE connectivity
+
+## Introduction
+
+Operator Nexus is a two-layer Clos-type architecture with the CEs (Customer Edge routers) acting as the edge devices or boundary routers. All types of traffic (such as management traffic, mobile network control, and user plane traffic) to and from an Operator Nexus instance pass through the CEs.
+
+On your site, the CEs are connected to your PE (Provider Edge) routers. You can configure PE-CE connectivity in multiple ways.
+
+Following are the configuration areas.
+
+### Physical connection
+
+Operator Nexus is designed to reserve multiple ports for physical connectivity between the CE and PE. These ports are added to a port channel. You don't have to connect all ports on day one; you can start with one port and add more on an as-needed basis.
+
+### Port channel
+
+A port channel is required for PE-CE connectivity. All the ports connecting the PE to the CE will be part of this port channel. You can start with one port and later add more ports to it. Based on your design, you'll create subinterfaces from this port channel interface for different types of traffic.
+
+### VLANs
+
+At least one subinterface is required between PE and CE. You can create multiple subinterfaces and assign them to respective VLANs. You should pick a VLAN number above 500.
+
+### IP addresses
+
+The CE supports both IPv4 and IPv6 addresses. You can assign a /31 or /30 IPv4 prefix, or a /127 IPv6 prefix, on the subinterface between the PE and CE. With a /31, for example, the prefix contains exactly two host addresses, one for each end of the link. Based on your BGP design, you can use IPv6 only for option A; for option B, however, you must configure IPv4.
+
+### Protocols
+
+Only BGP is supported between the PE and CE. You can use iBGP or eBGP between the PE and CE. You'll assign the "Fabric ASN" and "Peer ASN" based on your design. All BGP peerings between the PE and CE at a given site use the same "Fabric ASN". To establish some sessions as iBGP and others as eBGP, make the changes on the PE.
+
+#### BGP
+
+You can use standard BGP (option A). You can also use MP-BGP with inter-AS option 10B; in this case, you have to define the option B parameters during network fabric creation.
+
+## Prerequisites
+
+1. Decide how many ports you want to start with.
+2. Find the right optics and cables based on your desired throughput and distance between the PE and CE. For the CE, the optics must conform to the provided bill of materials.
+3. Choose the VLAN numbers for the subinterfaces.
+4. Allocate IP addresses for the PE-CE interfaces.
+5. Select the right BGP design for PE-CE connectivity.
+
+For MP-BGP, make sure you configure matching route targets on both the PE and CE, as in the following example:
+
+```azurecli
+az nf fabric create \
+--resource-group "example-rg" \
+--location "eastus" \
+--resource-name "example-nf" \
+--nf-sku "123" \
+--nfc-id "12333" \
+--nni-config '{"layer3Configuration":{"primaryIpv4Prefix":"10.20.0.0/19", "fabricAsn":10000, "peerAsn":10001, "vlanId": 20}, "layer2Configuration" : {"portCount":4,"mtu":1500} }' \
+--managed-network-config '{"ipv4Prefix":"10.1.0.0/19", "managementVpnConfiguration":{"optionBProperties":{"importRouteTargets":["65531:2001","65532:2001"], "exportRouteTargets":["65531:2001","65532:2001"]}}}'
+```
+
+## PE configuration steps
+
+1. Add selected interface(s) to port channel
+2. Configure subinterfaces and assign corresponding VLANs
+3. Assign IPv4 and/or IPv6 addresses to the interfaces
+4. Configure BGP based on the design
+5. For option B, configure route targets
+
+## Test the integration
+
+1. Run `show lldp neighbors` to verify the physical connections.
+2. Validate connectivity with a ping test.
+3. Check the BGP neighbor status.
+4. Verify that you're exchanging routes with the CE.
operator-nexus Template Cloud Native Network Function Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/template-cloud-native-network-function-deployment.md
+
+ Title: "Operator Nexus: Sample CNF deployment script"
+description: "Sample script to create the resources required for CNF deployment on Operator Nexus. After the resources have been created, the Azure Network Function Manager is used to deploy the CNF."
++ Last updated : 01/24/2023+
+# ms.prod: used for on prem applications
+++
+# Sample: CNF deployment script
+
+This script creates the resources required to deploy a CNF on an Azure Operator
+Distributed Services cluster (instance) on your premises. Once the resources have
+been created, the Azure Network Function Manager is used to deploy the CNF.
+
+The first step is to create the workload networks, followed by the AKS-Hybrid
+vNET, and finally the AKS-Hybrid cluster that will host the CNF.
+
+## Prerequisites
+
+- A fully deployed and configured Operator Nexus cluster
+ (instance)
+- A tenant inter-fabric network (the L2 and L3 isolation-domains) has been created
+
+## Common parameters
+
+```bash
+export myloc="eastus"
+export myrg="****"
+export MSYS_NO_PATHCONV=1
+export mysub="******"
+export mynfid='******'
+export myplatcustloc='******'
+export myhakscustloc='******'
+```
+
+## Initialization
+
+Set `$mysub` as the active subscription for your Operator Nexus instance.
+
+```azurecli
+ az account set --subscription "$mysub"
+```
+
+Get the list of `internalnetworks` in the L3 isolation-domain `$myl3isd`:
+
+```azurecli
+ az nf internalnetwork list --l3domain "$myl3isd" \
+ -g "$myrg" --subscription "$mysub"
+```
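+
+If you need one of these network resource IDs later, a sketch of capturing the first one with a JMESPath query (the variable name `myintnwid` is hypothetical):
+
+```azurecli
+export myintnwid=$(az nf internalnetwork list --l3domain "$myl3isd" \
+ -g "$myrg" --subscription "$mysub" --query "[0].id" -o tsv)
+```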
+
+## Create `cloudservicesnetwork`
+
+```bash
+export mycsn="******"
+```
+
+```azurecli
+az networkcloud cloudservicesnetwork create --name "$mycsn" \
+--resource-group "$myrg" \
+--subscription "$mysub" \
+--extended-location name="$myplatcustloc" type="CustomLocation" \
+--location "$myloc" \
+--additional-egress-endpoints '[{
+ "category": "azure-resource-management",
+ "endpoints": [{
+ "domainName": "https://storageaccountex.blob.core.windows.net",
+ "port": 443
+ }]
+ },
+ {
+ "category": "ubuntu",
+ "endpoints": [{
+ "domainName": ".ubuntu.com",
+ "port": 443
+ },
+ {
+ "domainName": ".ubuntu.com",
+ "port": 80
+ }]
+ },
+ {
+ "category": "google",
+ "endpoints": [{
+ "domainName": ".google.com",
+ "port": 80
+ },
+ {
+ "domainName": ".google.com",
+ "port": 443
+ }]
+ }
+]' \
+--debug
+```
+
+### Validate `cloudservicesnetwork` has been created
+
+```azurecli
+az networkcloud cloudservicesnetwork show --name "$mycsn" --resource-group "$myrg" --subscription "$mysub" -o table
+```
+
+## Create `DefaultCNINetwork` Instance
+
+```bash
+export myl3n="******"
+export myalloctype="IPV4"
+export myvlan=****
+export myipv4sub="******"
+export mymtu="9000"
+export myl3isdarm="******"
+```
+
+```azurecli
+az networkcloud defaultcninetwork create --name "$myl3n" \
+ --resource-group "$myrg" \
+ --subscription "$mysub" \
+ --extended-location name="$myplatcustloc" type="CustomLocation" \
+ --location "$myloc" \
+ --bgp-peers '[]' \
+ --community-advertisements '[{"communities": ["65535:65281", "65535:65282"], "subnetPrefix": "10.244.0.0/16"}]' \
+ --service-external-prefixes '["10.101.65.0/24"]' \
+ --service-load-balancer-prefixes '["10.101.66.0/24"]' \
+ --ip-allocation-type "$myalloctype" \
+ --ipv4-connected-prefix "$myipv4sub" \
+ --l3-isolation-domain-id "$myl3isdarm" \
+ --vlan $myvlan
+```
+
+### Validate `defaultcninetwork` has been created
+
+```azurecli
+az networkcloud defaultcninetwork show --name "$myl3n" \
+ --resource-group "$myrg" --subscription "$mysub" -o table
+```
+
+## Set AKS-Hybrid Extended Location
+
+```bash
+export myhakscustloc="******"
+```
+
+## Create AKS-Hybrid Network cloud services network vNET
+
+The AKS-Hybrid (HAKS) virtual networks are different from the Azure-to-on-premises virtual networks.
+
+```bash
+export myhaksvnetname="******"
+export myncnw="******"
+```
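+
+A sketch of setting `$myncnw` to the ARM ID of the `cloudservicesnetwork` created earlier, assuming that's the network this vNET maps to:
+
+```azurecli
+export myncnw=$(az networkcloud cloudservicesnetwork show --name "$mycsn" \
+ --resource-group "$myrg" --subscription "$mysub" --query id -o tsv)
+```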
+
+```azurecli
+az hybridaks vnet create \
+ --name "$myhaksvnetname" \
+ --resource-group "$myrg" \
+ --subscription "$mysub" \
+ --custom-location "$myhakscustloc" \
+ --aods-vnet-id "$myncnw"
+```
+
+## Create AKS-Hybrid Network default services network vNET
+
+```bash
+export myhaksvnetname="******"
+export myncnw="******"
+```
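+
+Similarly, a sketch of setting `$myncnw` to the ARM ID of the `defaultcninetwork` created earlier (an assumption based on this section's title):
+
+```azurecli
+export myncnw=$(az networkcloud defaultcninetwork show --name "$myl3n" \
+ --resource-group "$myrg" --subscription "$mysub" --query id -o tsv)
+```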
+
+```azurecli
+az hybridaks vnet create \
+ --name "$myhaksvnetname" \
+ --resource-group "$myrg" \
+ --subscription "$mysub" \
+ --custom-location "$myhakscustloc" \
+ --aods-vnet-id "$myncnw"
+```
+
+## Create AKS-Hybrid cluster
+
+The AKS-Hybrid (HAKS) cluster will be used to host the CNF.
+
+```bash
+export myhaksvnet1="******"
+export myhaksvnet2="******"
+export myencodedkey="******"
+export AADID="******"
+export myclustername="******"
+```
+
+```azurecli
+az hybridaks create \
+ --name "$myclustername" \
+ --resource-group "$myrg" \
+ --subscription "$mysub" \
+ --aad-admin-group-object-ids "$AADID" \
+ --custom-location "$myhakscustloc" \
+  --location "$myloc" \
+ --control-plane-vm-size NC_G4_v1 \
+ --node-vm-size NC_H16_v1 \
+ --kubernetes-version v1.22.11 \
+ --load-balancer-sku stacked-kube-vip \
+ --load-balancer-count 0 \
+ --load-balancer-vm-size '' \
+ --vnet-ids "$myhaksvnet1","$myhaksvnet2" \
+ --ssh-key-value "$myencodedkey" \
+ --control-plane-count 3 \
+ --node-count 4
+```
+
+## Next Step
+
+Deploy the CNF on the AKS-Hybrid cluster using Azure Network Function Manager.
operator-nexus Template Virtualized Network Function Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/template-virtualized-network-function-deployment.md
+
+ Title: "Operator Nexus: Sample VNF deployment script"
+description: Sample script to create the environment for VNF deployment on Operator Nexus.
++ Last updated : 01/24/2023+
+# ms.prod: used for on prem applications
+++
+# Sample: VNF deployment script
+
+This script creates the resources required to deploy a VNF on an Operator Nexus cluster (instance) on your premises.
+
+The first step is to create the workload L2 and L3 networks, followed by the creation of the virtual machine for the VNF.
+
+## Prerequisites
+
+- A fully deployed and configured Operator Nexus cluster
+ (instance)
+- A tenant inter-fabric network (the L2 and L3 isolation-domains) has been created
+
+## Common parameters
+
+```bash
+export myloc="eastus"
+export myrg="****"
+export MSYS_NO_PATHCONV=1
+export mysub="******"
+export mynfid='******'
+export myplatcustloc='******'
+export myhakscustloc='******'
+```
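+
+The commands that follow also reference several placeholder variables that aren't set above. A minimal sketch of the extra exports, following the same placeholder convention (set each to your own values):
+
+```bash
+export mycsn="******"       # cloud services network name
+export myl3n="******"       # base name for the L3 networks
+export myl2n="******"       # L2 network name
+export myalloctype="IPV4"   # IP allocation type
+export myipv4sub="******"   # IPv4 connected prefix
+export myvlan=****          # VLAN ID
+export myl3isdarm="******"  # ARM ID of the L3 isolation-domain
+export myl2isdarm="******"  # ARM ID of the L2 isolation-domain
+export myvm="******"        # virtual machine name
+export vmparm="******"      # virtual machine parameters, including the VNF image
+```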
+
+## Initialization
+
+Set `$mysub` as the active subscription for your Operator Nexus instance.
+
+```azurecli
+ az account set --subscription "$mysub"
+```
+
+## Create `cloudservicesnetwork`
+
+```azurecli
+az networkcloud cloudservicesnetwork create --name "$mycsn" \
+--resource-group "$myrg" \
+--subscription "$mysub" \
+--extended-location name="$myplatcustloc" type="CustomLocation" \
+--location "$myloc" \
+--additional-egress-endpoints "[{\"category\":\"azure-resource-management\",\"endpoints\":[{\"domainName\":\"az \",\"port\":443}]}]" \
+--debug
+```
+
+### Validate `cloudservicesnetwork` has been created
+
+```azurecli
+az networkcloud cloudservicesnetwork show --name "$mycsn" --resource-group "$myrg" --subscription "$mysub" -o table
+```
+
+## Create management L3network
+
+```azurecli
+az networkcloud l3network create --name "$myl3n-mgmt" \
+--resource-group "$myrg" \
+--subscription "$mysub" \
+--extended-location name="$myplatcustloc" type="CustomLocation" \
+--location "$myloc" \
+--hybrid-aks-ipam-enabled "False" \
+--hybrid-aks-plugin-type "HostDevice" \
+--ip-allocation-type "$myalloctype" \
+--ipv4-connected-prefix "$myipv4sub" \
+--l3-isolation-domain-id "$myl3isdarm" \
+--vlan $myvlan \
+--debug
+```
+
+### Validate `l3network` has been created
+
+```azurecli
+az networkcloud l3network show --name "$myl3n-mgmt" \
+ --resource-group "$myrg" --subscription "$mysub"
+```
+
+## Create trusted L3network
+
+```azurecli
+az networkcloud l3network create --name "$myl3n-trust" \
+--resource-group "$myrg" \
+--subscription "$mysub" \
+--extended-location name="$myplatcustloc" type="CustomLocation" \
+--location "$myloc" \
+--hybrid-aks-ipam-enabled "False" \
+--hybrid-aks-plugin-type "HostDevice" \
+--ip-allocation-type "$myalloctype" \
+--ipv4-connected-prefix "$myipv4sub" \
+--l3-isolation-domain-id "$myl3isdarm" \
+--vlan $myvlan \
+--debug
+```
+
+### Validate trusted `l3network` has been created
+
+```azurecli
+az networkcloud l3network show --name "$myl3n-trust" \
+ --resource-group "$myrg" --subscription "$mysub"
+```
+
+## Create untrusted L3network
+
+```azurecli
+az networkcloud l3network create --name "$myl3n-untrust" \
+--resource-group "$myrg" \
+--subscription "$mysub" \
+--extended-location name="$myplatcustloc" type="CustomLocation" \
+--location "$myloc" \
+--hybrid-aks-ipam-enabled "False" \
+--hybrid-aks-plugin-type "HostDevice" \
+--ip-allocation-type "$myalloctype" \
+--ipv4-connected-prefix "$myipv4sub" \
+--l3-isolation-domain-id "$myl3isdarm" \
+--vlan $myvlan \
+--debug
+```
+
+### Validate untrusted `l3network` has been created
+
+```azurecli
+az networkcloud l3network show --name "$myl3n-untrust" \
+ --resource-group "$myrg" --subscription "$mysub"
+```
+
+## Create L2network
+
+```azurecli
+az networkcloud l2network create --name "$myl2n" \
+--resource-group "$myrg" \
+--subscription "$mysub" \
+--extended-location name="$myplatcustloc" type="CustomLocation" \
+--location "$myloc" \
+--hybrid-aks-plugin-type "HostDevice" \
+--l2-isolation-domain-id "$myl2isdarm" \
+--debug
+```
+
+### Validate `l2network` has been created
+
+```azurecli
+az networkcloud l2network show --name "$myl2n" --resource-group "$myrg" --subscription "$mysub"
+```
+
+## Create Virtual Machine and deploy VNF
+
+The virtual machine parameters include the VNF image.
+
+```azurecli
+az networkcloud virtualmachine create --name "$myvm" \
+--resource-group "$myrg" --subscription "$mysub" \
+--virtual-machine-parameters "$vmparm" \
+--debug
+```
peering-service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/cli.md
If you decide to install and use Azure CLI locally, this article requires you to
## Register your subscription with the resource provider and feature flag
-Before you proceed to the steps of creating the Peering Service connection, register your subscription with the resource provider and feature flag using [az feature register](/cli/azure/feature.md#az-feature-register) and [az provider register](/cli/azure/provider.md#az-provider-register):
+Before you proceed to the steps of creating the Peering Service connection, register your subscription with the resource provider and feature flag using [az feature register](/cli/azure/feature#az-feature-register) and [az provider register](/cli/azure/provider#az-provider-register):
```azurecli-interactive az feature register --namespace Microsoft.Peering --name AllowPeeringService
az provider register --name Microsoft.Peering
## List Peering Service locations and service providers
-Use [az peering service country list](/cli/azure/peering/service/country.md#az-peering-service-country-list) to list the countries where Peering Service is available and [az peering service location list](/cli/azure/peering/service/location.md#az-peering-service-location-list) to list the available metro locations in a specific country where you can get the Peering Service:
+Use [az peering service country list](/cli/azure/peering/service/country#az-peering-service-country-list) to list the countries where Peering Service is available and [az peering service location list](/cli/azure/peering/service/location#az-peering-service-location-list) to list the available metro locations in a specific country where you can get the Peering Service:
```azurecli-interactive # List the countries available for Peering Service.
az peering service country list --out table
az peering service location list --country "united states" --output table ```
-Use [az peering service provider list](/cli/azure/peering/service/provider.md#az-peering-service-provider-list) to get a list of available [Peering Service providers](location-partners.md):
+Use [az peering service provider list](/cli/azure/peering/service/provider#az-peering-service-provider-list) to get a list of available [Peering Service providers](location-partners.md):
```azurecli-interactive az peering service provider list --output table
az peering service provider list --output table
## Create a Peering Service connection
-Create a Peering Service connection using [az peering service create](/cli/azure/peering/service.md#az-peering-service-create):
+Create a Peering Service connection using [az peering service create](/cli/azure/peering/service#az-peering-service-create):
```azurecli-interactive az peering service create --location "eastus" --peering-service-name "myPeeringService" --resource-group "myResourceGroup" --peering-service-location "Virginia" --peering-service-provider "Contoso"
az peering service create --location "eastus" --peering-service-name "myPeeringS
## Add the Peering Service prefix
-Use [az peering service prefix create](/cli/azure/peering/service/prefix.md#az-peering-service-prefix-create) to add the prefix provided to you by the connectivity provider:
+Use [az peering service prefix create](/cli/azure/peering/service/prefix#az-peering-service-prefix-create) to add the prefix provided to you by the connectivity provider:
```azurecli-interactive az peering service prefix create --peering-service-name "myPeeringService" --prefix-name "myPrefix" --resource-group "myResourceGroup" --peering-service-prefix-key "00000000-0000-0000-0000-000000000000" --prefix "240.0.0.0/32"
az peering service prefix create --peering-service-name "myPeeringService" --pre
## List all Peering Services connections
-To view the list of all Peering Service connections, use [az peering service list](/cli/azure/peering/service.md#az-peering-service-list):
+To view the list of all Peering Service connections, use [az peering service list](/cli/azure/peering/service#az-peering-service-list):
```azurecli-interactive az peering service list --resource-group "myresourcegroup" --output "table"
az peering service list --resource-group "myresourcegroup" --output "table"
## List all Peering Service prefixes
-To view the list of all Peering Service prefixes, use [az peering service prefix list](/cli/azure/peering/service/prefix.md#az-peering-service-prefix-list):
+To view the list of all Peering Service prefixes, use [az peering service prefix list](/cli/azure/peering/service/prefix#az-peering-service-prefix-list):
```azurecli-interactive az peering service prefix list --peering-service-name "myPeeringService" --resource-group "myResourceGroup"
az peering service prefix list --peering-service-name "myPeeringService" --resou
## Remove the Peering Service prefix
-To remove a Peering Service prefix, use [az peering service prefix delete](/cli/azure/peering/service/prefix.md#az-peering-service-prefix-delete):
+To remove a Peering Service prefix, use [az peering service prefix delete](/cli/azure/peering/service/prefix#az-peering-service-prefix-delete):
```azurecli-interactive az peering service prefix delete --peering-service-name "myPeeringService" --prefix-name "myPrefix" --resource-group "myResourceGroup"
az peering service prefix delete --peering-service-name "myPeeringService" --pre
## Delete a Peering Service connection
-To delete a Peering Service connection, use [az peering service delete](/cli/azure/peering/service.md#az-peering-service-delete):
+To delete a Peering Service connection, use [az peering service delete](/cli/azure/peering/service#az-peering-service-delete):
```azurecli-interactive az peering service delete --peering-service-name "myPeeringService" --resource-group "myResourceGroup"
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
Introducing Enhanced Metrics for Azure Database for PostgreSQL Flexible Server t
- There is **50 database** limit on metrics with `database name` dimension. * On **Burstable** SKU - this limit is 10 `database name` dimension-- `database name` dimension limit is applied on OiD column (in other words Order-of-Creation of the database)
+- The `database name` dimension limit is applied on the OID column (in other words, the _Order-of-Creation_ of the database)
- The `database name` in metrics dimension is **case insensitive**. Therefore the metrics for same database names in varying case (_ex. foo, FoO, FOO_) will be merged, and may not show accurate data. ## Autovacuum metrics
Autovaccum metrics can be used to monitor and tune autovaccum performance for Az
* To enable these metrics, please turn ON the server parameter `metrics.autovacuum_diagnostics`. * This parameter is dynamic, hence will not require instance restart.
-#### Autovacuum metrics
+#### List of Autovacuum metrics
|Display Name |Metric ID |Unit |Description |Dimension |Default enabled| |-|-|--|--|||
Autovaccum metrics can be used to monitor and tune autovaccum performance for Az
- There is **30 database** limit on metrics with `database name` dimension. * On **Burstable** SKU - this limit is 10 `database name` dimension-- `database name` dimension limit is applied on OiD column (in other words Order-of-Creation of the database)
+- The `database name` dimension limit is applied on the OID column (in other words, the _Order-of-Creation_ of the database)
+
+## PgBouncer metrics
+
+PgBouncer metrics can be used to monitor the performance of the PgBouncer process, including details for active connections, idle connections, total pooled connections, and the number of connection pools. Each metric is emitted at a **30 minute** frequency and has up to **93 days** of history. Customers can configure alerts on the metrics and can also access the new metrics dimensions to split and filter the metrics data by database name.
+
+#### Enabling PgBouncer metrics
+* PgBouncer metrics are disabled by default.
+* For PgBouncer metrics to work, both the server parameters `pgbouncer.enabled` and `metrics.pgbouncer_diagnostics` have to be enabled (see the sketch after this list).
+ * These parameters are dynamic and don't require an instance restart.
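+
+A minimal sketch of enabling both parameters with the Azure CLI (the server and resource group names are hypothetical, and the exact `--value` strings may differ):
+
+```azurecli
+az postgres flexible-server parameter set --resource-group myresourcegroup \
+ --server-name mydemoserver --name pgbouncer.enabled --value true
+az postgres flexible-server parameter set --resource-group myresourcegroup \
+ --server-name mydemoserver --name metrics.pgbouncer_diagnostics --value on
+```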
+
+#### List of PgBouncer metrics
+
+|Display Name |Metrics ID |Unit |Description |Dimension |Default enabled|
+|-|--|--|-|||
+|**Active client connections** (Preview) |client_connections_active |Count|Connections from clients which are associated with a PostgreSQL connection |DatabaseName|No |
+|**Waiting client connections** (Preview)|client_connections_waiting|Count|Connections from clients that are waiting for a PostgreSQL connection to service them|DatabaseName|No |
+|**Active server connections** (Preview) |server_connections_active |Count|Connections to PostgreSQL that are in use by a client connection |DatabaseName|No |
+|**Idle server connections** (Preview) |server_connections_idle |Count|Connections to PostgreSQL that are idle, ready to service a new client connection |DatabaseName|No |
+|**Total pooled connections** (Preview) |total_pooled_connections |Count|Current number of pooled connections |DatabaseName|No |
+|**Number of connection pools** (Preview)|num_pools |Count|Total number of connection pools |DatabaseName|No |
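+
+Once enabled and emitting, a metric can be queried through Azure Monitor. A sketch (the resource ID is hypothetical):
+
+```azurecli
+az monitor metrics list \
+ --resource "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/flexibleServers/mydemoserver" \
+ --metric client_connections_active --interval PT30M
+```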
+
+#### Considerations when using the PgBouncer metrics
+
+- There's a **30 database** limit on metrics with the `database name` dimension.
+ * On the **Burstable** SKU, this limit is 10 databases.
+- The `database name` dimension limit is applied on the OID column (in other words, the _Order-of-Creation_ of the database)
## Applying filters and splitting on metrics with dimension
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/principles-for-ai-generated-content.md
+
+ Title: Principles for AI generated content
+description: Describes Microsoft's approach for using AI-generated content on Microsoft Learn
++++ Last updated : 02/26/2023++
+# Our principles for using AI-generated content on Microsoft Learn
+
+Microsoft uses [Azure OpenAI Service](/azure/cognitive-services/openai/) to generate some of the text and code examples that we publish on [Microsoft Learn](/). This article describes our approach for using Azure OpenAI to generate technical content that supports our products and services.
+
+At Microsoft, we're working to add articles to Microsoft Learn that contain AI-generated content. Over time, more articles will feature AI-generated text and code samples.
+
+For information about the broader effort at Microsoft to put our AI principles into practice, see [Microsoft's AI principles](https://www.microsoft.com/ai/responsible-ai).
+
+## Our commitment
+
+We're committed to providing you with accurate and comprehensive learning experiences for Microsoft products and services. By using AI-generated content, we can extend the content for your scenarios. We can provide more examples in more programming languages. We can cover solutions in greater detail. We can cover new scenarios more rapidly.
+
+We understand that AI-generated content isn't always accurate. We test and review AI-generated content before we publish it.
+
+## Transparency
+
+We're transparent about articles that contain AI-generated content. All articles that contain any AI-generated content include text acknowledging the role of AI. You'll see this text at the top of the article.
+
+## Augmentation
+
+For articles that contain AI-generated content, our authors use AI to augment their content creation process. For example, an author plans what to cover in the article, and then uses Azure OpenAI to generate part of the content. Or, the author runs a process to convert an existing article from one programming language to another language. The author reviews and revises the AI-generated content. Finally, the author writes any remaining sections.
+
+These articles contain a mix of authored content and AI-generated content and are clearly marked as containing AI-generated content.
+
+## Validation
+
+The author reviews all AI-generated content and revises it as needed. After the author has reviewed the content, the article goes through our standard validation process to check for formatting errors, and to make sure the terms and language are appropriate and inclusive. The article is eligible for publishing only after passing all validation tests.
+
+The author tests all AI-generated code before publishing. The author either manually tests the code or runs it through an automated test process.
+
+## AI models
+
+Currently, we're using large language models from OpenAI accessed through Azure OpenAI Service to generate content. Specifically, we're using the GPT-3 and Codex language models.
+
+We may add other AI services in the future and will update this page periodically to reflect our updated practices.
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/recover-from-identity-compromise.md
Check for updates in the following Microsoft security products, and implement an
- [Microsoft 365 security solutions and services](/microsoft-365/security/) - [Windows 10 Enterprise Security](/windows/security/) - [Microsoft Defender for Cloud Apps ](/cloud-app-security/)-- [Microsoft Defender for IoT](/defender-for-iot/organizations)
+- [Microsoft Defender for IoT](/azure/defender-for-iot/organizations/)
Implementing new updates will help identify any prior campaigns and prevent future campaigns against your system. Keep in mind that lists of IOCs may not be exhaustive, and may expand as investigations continue.
In addition to the recommended actions listed above, we recommend that you consi
> [!IMPORTANT] > If you believe you have been compromised and require assistance through an incident response, open a **Sev A** Microsoft support case.
- >
+ >
sentinel Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md
Use Azure RBAC to create and assign roles within your security operations team t
- [**Microsoft Sentinel Contributor**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) can, in addition to the above, create and edit workbooks, analytics rules, and other Microsoft Sentinel resources. -- **Microsoft Sentinel Playbook Operator** can list, view, and manually run playbooks.
+- [**Microsoft Sentinel Playbook Operator**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-playbook-operator) can list, view, and manually run playbooks.
- [**Microsoft Sentinel Automation Contributor**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-automation-contributor) allows Microsoft Sentinel to add playbooks to automation rules. It isn't meant for user accounts.
Users with particular job requirements may need to be assigned other roles or sp
- **Working with playbooks to automate responses to threats**
- Microsoft Sentinel uses **playbooks** for automated threat response. Playbooks are built on **Azure Logic Apps**, and are a separate Azure resource. For specific members of your security operations team, you might want to assign the ability to use Logic Apps for Security Orchestration, Automation, and Response (SOAR) operations. You can use the **Microsoft Sentinel Playbook Operator** role to assign explicit, limited permission for running playbooks, and the [**Logic App Contributor**](../role-based-access-control/built-in-roles.md#logic-app-contributor) role to create and edit playbooks.
+ Microsoft Sentinel uses **playbooks** for automated threat response. Playbooks are built on **Azure Logic Apps**, and are a separate Azure resource. For specific members of your security operations team, you might want to assign the ability to use Logic Apps for Security Orchestration, Automation, and Response (SOAR) operations. You can use the [**Microsoft Sentinel Playbook Operator**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-playbook-operator) role to assign explicit, limited permission for running playbooks, and the [**Logic App Contributor**](../role-based-access-control/built-in-roles.md#logic-app-contributor) role to create and edit playbooks.
- **Giving Microsoft Sentinel permissions to run playbooks**
After understanding how roles and permissions work in Microsoft Sentinel, you ca
| User type | Role | Resource group | Description | | | | | | | **Security analysts** | [Microsoft Sentinel Responder](../role-based-access-control/built-in-roles.md#microsoft-sentinel-responder) | Microsoft Sentinel's resource group | View data, incidents, workbooks, and other Microsoft Sentinel resources. <br><br>Manage incidents, such as assigning or dismissing incidents. |
-| | Microsoft Sentinel Playbook Operator | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run playbooks. |
+| | [Microsoft Sentinel Playbook Operator](../role-based-access-control/built-in-roles.md#microsoft-sentinel-playbook-operator) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run playbooks. |
|**Security engineers** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) |Microsoft Sentinel's resource group | View data, incidents, workbooks, and other Microsoft Sentinel resources. <br><br>Manage incidents, such as assigning or dismissing incidents. <br><br>Create and edit workbooks, analytics rules, and other Microsoft Sentinel resources. | | | [Logic Apps Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run and modify playbooks. | | **Service Principal** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) | Microsoft Sentinel's resource group | Automated configuration for management tasks |
storage Blob Inventory How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory-how-to.md
Previously updated : 08/16/2021 Last updated : 02/24/2023
synapse-analytics Workspaces Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/workspaces-encryption.md
The data in the following Synapse components is encrypted with the customer-mana
* SQL pools * Dedicated SQL pools * Serverless SQL pools
+* Data Explorer pools
* Apache Spark pools * Azure Data Factory integration runtimes, pipelines, datasets.
virtual-machines Boot Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/boot-diagnostics.md
Previously updated : 11/06/2020 Last updated : 02/25/2023+ # Azure boot diagnostics
Last updated 11/06/2020
Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. ## Boot diagnostics storage account
-When you create a VM in Azure portal, boot diagnostics is enabled by default. The recommended boot diagnostics experience is to use a managed storage account, as it yields significant performance improvements in the time to create an Azure VM. This is because an Azure managed storage account will be used, removing the time it takes to create a new user storage account to store the boot diagnostics data.
+When you create a VM in the Azure portal, boot diagnostics is enabled by default. The recommended boot diagnostics experience is to use a managed storage account, as it yields significant performance improvements in the time to create an Azure VM. An Azure managed storage account is used, removing the time it takes to create a user storage account to store the boot diagnostics data.
> [!IMPORTANT]
-> The boot diagnostics data blobs (which comprise of logs and snapshot images) are stored in a managed storage account. Customers will be charged only on used GiBs by the blobs, not on the disk's provisioned size. The snapshot meters will be used for billing of the managed storage account. Because the managed accounts are created on either Standard LRS or Standard ZRS, customers will be charged at $0.05/GB per month for the size of their diagnostic data blobs only. For more information on this pricing, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/). Customers will see this charge tied to their VM resource URI.
+> The boot diagnostics data blobs (which comprise logs and snapshot images) are stored in a managed storage account. Customers will be charged only on used GiBs by the blobs, not on the disk's provisioned size. The snapshot meters will be used for billing of the managed storage account. Because the managed accounts are created on either Standard LRS or Standard ZRS, customers will be charged at $0.05/GB per month for the size of their diagnostic data blobs only. For more information on this pricing, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/). Customers see this charge tied to their VM resource URI.
An alternative boot diagnostic experience is to use a custom storage account. A user can either create a new storage account or use an existing one. When the storage firewall is enabled on the custom storage account (**Enabled from all networks** option isn't selected), you must:
To configure the storage firewall for Azure Serial Console, see [Use Serial Cons
> The custom storage account associated with boot diagnostics requires the storage account and the associated virtual machines reside in the same region and subscription. ## Boot diagnostics view
-Go to the virtual machine blade in the Azure portal, the boot diagnostics option is under the *Support and Troubleshooting* section in the Azure portal. Selecting boot diagnostics will display a screenshot and serial log information. The serial log contains kernel messaging and the screenshot is a snapshot of your VMs current state. Based on if the VM is running Windows or Linux determines what the expected screenshot would look like. For Windows, users will see a desktop background and for Linux, users will see a login prompt.
+Go to the virtual machine blade in the Azure portal; the boot diagnostics option is under the *Support and Troubleshooting* section. Selecting boot diagnostics displays a screenshot and serial log information. The serial log contains kernel messaging, and the screenshot is a snapshot of your VM's current state. Whether the VM is running Windows or Linux determines what the expected screenshot looks like. For Windows, users see a desktop background, and for Linux, users see a login prompt.
:::image type="content" source="./media/boot-diagnostics/boot-diagnostics-linux.png" alt-text="Screenshot of Linux boot diagnostics"::: :::image type="content" source="./media/boot-diagnostics/boot-diagnostics-windows.png" alt-text="Screenshot of Windows boot diagnostics":::
Go to the virtual machine blade in the Azure portal, the boot diagnostics option
Managed boot diagnostics can be enabled through the Azure portal, CLI and ARM Templates. ### Enable managed boot diagnostics using the Azure portal
-When you create a VM in the Azure portal, the default setting is to have boot diagnostics enabled using a managed storage account. To view this, navigate to the *Management* tab during the VM creation.
+When you create a VM in the Azure portal, the default setting is to have boot diagnostics enabled using a managed storage account. Navigate to the *Management* tab during the VM creation to view it.
:::image type="content" source="./media/boot-diagnostics/boot-diagnostics-enable-portal.png" alt-text="Screenshot enabling managed boot diagnostics during VM creation."::: ### Enable managed boot diagnostics using CLI
-Boot diagnostics with a managed storage account is supported in Azure CLI 2.12.0 and later. If you don't input a name or URI for a storage account, a managed account will be used. For more information and code samples, see the [CLI documentation for boot diagnostics](/cli/azure/vm/boot-diagnostics).
+Boot diagnostics with a managed storage account is supported in Azure CLI 2.12.0 and later. If you don't input a name or URI for a storage account, a managed account is used. For more information and code samples, see the [CLI documentation for boot diagnostics](/cli/azure/vm/boot-diagnostics).
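+
+A minimal sketch, assuming an existing VM named myVM in myResourceGroup (omitting a storage account name or URI keeps the managed account):
+
+```azurecli
+az vm boot-diagnostics enable --name myVM --resource-group myResourceGroup
+```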
### Enable managed boot diagnostics using PowerShell
-Boot diagnostics with a managed storage account is supported in Azure PowerShell 6.6.0 and later. If you don't input a name or URI for a storage account, a managed account will be used. For more information and code samples, see the [PowerShell documentation for boot diagnostics](/powershell/module/az.compute/set-azvmbootdiagnostic).
+Boot diagnostics with a managed storage account is supported in Azure PowerShell 6.6.0 and later. If you don't input a name or URI for a storage account, a managed account is used. For more information and code samples, see the [PowerShell documentation for boot diagnostics](/powershell/module/az.compute/set-azvmbootdiagnostic).
### Enable managed boot diagnostics using Azure Resource Manager (ARM) templates Everything after API version 2020-06-01 supports managed boot diagnostics. For more information, see [boot diagnostics instance view](/rest/api/compute/virtualmachines/createorupdate#bootdiagnostics).
Everything after API version 2020-06-01 supports managed boot diagnostics. For m
## Limitations - Managed boot diagnostics is only available for Azure Resource Manager VMs. - Managed boot diagnostics doesn't support VMs using unmanaged OS disks.-- Boot diagnostics doesn't support premium storage accounts or zone redundant storage accounts. If either of these are used for boot diagnostics users will receive an `StorageAccountTypeNotSupported` error when starting the VM.
+- Boot diagnostics doesn't support premium storage accounts or zone redundant storage accounts. If either of these is used for boot diagnostics, users receive a `StorageAccountTypeNotSupported` error when starting the VM.
- Managed storage accounts are supported in Resource Manager API version "2020-06-01" and later. - Portal only supports the use of boot diagnostics with a managed storage account for single instance VMs.-- Users cannot configure a retention period for Managed Boot Diagnostics. The logs will be overwritten when the total size crosses 1 GB.
+- Users can't configure a retention period for Managed Boot Diagnostics. The logs are overwritten when the total size crosses 1 GB.
## Next steps
virtual-machines Capacity Reservation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overview.md
Previously updated : 11/22/2022- Last updated : 02/24/2023+
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale set :heavy_check_mark: Flexible scale sets
-On-demand Capacity Reservation enables you to reserve Compute capacity in an Azure region or an Availability Zone for any duration of time. Unlike [Reserved Instances](https://azure.microsoft.com/pricing/reserved-vm-instances/), you do not have to sign up for a 1-year or a 3-year term commitment. Create and delete reservations at any time and have full control over how you want to manage your reservations.
+On-demand Capacity Reservation enables you to reserve Compute capacity in an Azure region or an Availability Zone for any duration of time. Unlike [Reserved Instances](https://azure.microsoft.com/pricing/reserved-vm-instances/), you don't have to sign up for a 1-year or a 3-year term commitment. Create and delete reservations at any time and have full control over how you want to manage your reservations.
-Once the Capacity Reservation is created, the capacity is available immediately and is exclusively reserved for your use until the reservation is deleted.
+Once you create the Capacity Reservation, the resources can be used immediately. Capacity is reserved for you until you delete the reservation.
Capacity Reservation has some basic properties that are always defined at the time of creation:
Capacity Reservation has some basic properties that are always defined at the ti
- **Location** - Each reservation is for one location (region). If that location has availability zones, then the reservation can also specify one of the zones. - **Quantity** - Each reservation has a quantity of instances to be reserved.
-To create a Capacity Reservation, these parameters are passed to Azure as a capacity request. If the subscription lacks the required quota or Azure does not have capacity available that meets the specification, the reservation will fail to deploy. To avoid deployment failure, request more quota or try a different VM size, location, or zone combination.
+To create a Capacity Reservation, these parameters are passed to Azure as a capacity request. If Azure doesn't have capacity available that meets the request, the reservation deployment fails. Your deployment fails if you don't have an adequate subscription quota. Request a higher quota or try a different VM size, location, or zone combination.
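+
+As a sketch, you could create a reservation group and a regional reservation with the Azure CLI (the names, size, and quantity are hypothetical):
+
+```azurecli
+az capacity reservation group create -n myCRG -g myResourceGroup
+az capacity reservation create -c myCRG -n myReservation -g myResourceGroup \
+ --sku Standard_D2s_v3 --capacity 5
+```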
-Once Azure accepts a reservation request, it is available to be consumed by VMs of matching configurations. To consume Capacity Reservation, the VM will have to specify the reservation as one of its properties. Otherwise, the Capacity Reservation will remain unused. One benefit of this design is that you can target only critical workloads to reservations and other non-critical workloads can run without reserved capacity.
+Once Azure accepts your reservation request, it's available for VMs with matching configurations. To consume Capacity Reservation, the VM has to specify the reservation in its properties. Otherwise, the Capacity Reservation isn't used. One benefit of this design is that you can target only critical workloads to reservations and other non-critical workloads can run without reserved capacity.
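+
+A minimal sketch of consuming the reservation at VM creation (the image and names are hypothetical):
+
+```azurecli
+az vm create --name myVM --resource-group myResourceGroup \
+ --image Ubuntu2204 --size Standard_D2s_v3 \
+ --capacity-reservation-group myCRG
+```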
## Benefits of Capacity Reservation
Once Azure accepts a reservation request, it is available to be consumed by VMs
Please read the Service Level Agreement details in the [SLA for Capacity Reservation](https://aka.ms/CapacityReservationSLAForVM).
-Any claim against the SLA requires calculating the Minutes Not Available for the reserved capacity. Here is an example of how to calculate Minutes Not Available.
+Any claim against the SLA requires calculating the Minutes Not Available for the reserved capacity. Here's an example of how to calculate Minutes Not Available.
-- A On Demand Capacity Reservation has a total Capacity of 5 Reserved Units. The On Demand Capacity Reservation starts in the Unused Capacity state with 0 Virtual Machines Allocated. -- A Supported Deployment of quantity 5 is allocated to the On Demand Capacity Reservation. 3 Virtual Machines succeed and 2 fail with a Virtual Machine capacity error. Result: 2 Reserved Units begin to accumulate Minutes Not Available. -- No action is taken for 20 minutes. Result: two Reserved Units each accumulate 15 Minutes Not Available.
+- An On Demand Capacity Reservation has a total Capacity of five Reserved Units. The On Demand Capacity Reservation starts in the Unused Capacity state with zero Virtual Machines Allocated.
+- A Supported Deployment of quantity 5 is allocated to the On Demand Capacity Reservation. Three Virtual Machines succeed and two fail with a Virtual Machine capacity error. Result: Two Reserved Units begin to accumulate Minutes Not Available.
+- No action is taken for 20 minutes. Result: Two Reserved Units each accumulate 15 Minutes Not Available.
- At 20 minutes, a Supported Deployment of quantity 2 is attempted. One Virtual Machine succeeds, the other Virtual Machine fails with a Virtual Machine capacity error. Result: One Reserved Unit stays at 15 accumulated Minutes Not Available. Another Reserved Unit resumes accumulating Minutes Not Available. -- Four additional Supported Deployments of quantity 1 are made at 10 minute intervals. On the fourth attempt (60 minutes after the first capacity error), the Virtual Machine is deployed. Result: The last Reserved Unit adds 40 minutes of Minutes Not Available (4 attempts x 10 minutes between attempts) for a total of 55 Minutes Not Available.
+- Four more Supported Deployments of quantity 1 are made at 10-minute intervals. On the fourth attempt (60 minutes after the first capacity error), the Virtual Machine is deployed. Result: The last Reserved Unit adds 40 minutes of Minutes Not Available (four attempts x 10 minutes between attempts) for a total of 55 Minutes Not Available.
-From this example accumulation of Minutes Not Available, here is the calculation of Service Credit.
+From this example accumulation of Minutes Not Available, here's the calculation of Service Credit.
-- One Reserved Unit accumulated 15 minutes of Downtime. The Percentage Uptime is 99.97%. This Reserved Unit does not qualify for Service Credit.
+- One Reserved Unit accumulated 15 minutes of Downtime. The Percentage Uptime is 99.97%. This Reserved Unit doesn't qualify for Service Credit.
- Another Reserved Unit accumulated 55 minutes of Downtime. The Percentage Uptime is 99.87. This Reserved Unit qualifies for Service Credit of 10%. ## Limitations and restrictions
From this example accumulation of Minutes Not Available, here is the calculation
- F series, all versions - Lsv3 (Intel) and Lasv3 (AMD) - At VM deployment, Fault Domain (FD) count of up to 3 may be set as desired using Virtual Machine Scale Sets. A deployment with more than 3 FDs will fail to deploy against a Capacity Reservation. -- Support for additional VM Series isn't currently available:
+- Support for other VM Series isn't currently available:
- Ls and Lsv2 series - M series, any version - NC-series, v3 and newer
From this example accumulation of Minutes Not Available, here is the calculation
- Single VM - Virtual Machine Scale Sets with Uniform Orchestration - Virtual Machine Scale Sets with Flexible Orchestration (preview)-- The following deployment types are not supported:
+- The following deployment types aren't supported:
- Spot VMs - Azure Dedicated Host Nodes or VMs deployed to Dedicated Hosts - Availability Sets -- Other deployment constraints are not supported. For example:
+- Other deployment constraints aren't supported. For example:
- Proximity Placement Group - Update domains - Virtual Machine Scale Sets with single placement group set 'true'
From this example accumulation of Minutes Not Available, here is the calculation
- VMs resuming from hibernation - VMs requiring vnet encryption - Only the subscription that created the reservation can use it. -- Reservations are only available to paid Azure customers. Sponsored accounts such as Free Trial and Azure for Students are not eligible to use this feature.
+- Reservations are only available to paid Azure customers. Sponsored accounts such as Free Trial and Azure for Students aren't eligible to use this feature.
## Pricing and billing
-Capacity Reservations are priced at the same rate as the underlying VM size. For example, if you create a reservation for ten D2s_v3 VMs then you will start getting billed for ten D2s_v3 VMs, even if the reservation is not being used.
+Capacity Reservations are priced at the same rate as the underlying VM size. For example, if you create a reservation for 10 D2s_v3 VMs then you'll start getting billed for 10 D2s_v3 VMs, even if the reservation isn't being used.
-If you then deploy a D2s_v3 VM and specify reservation property, the Capacity Reservation gets used. Once in use, you will only pay for the VM and nothing extra for the Capacity Reservation. LetΓÇÖs say you deploy six D2s_v3 VMs against the previously mentioned Capacity Reservation. You will see a bill for six D2s_v3 VMs and four unused Capacity Reservation, both charged at the same rate as a D2s_v3 VM.
+If you then deploy a D2s_v3 VM and specify the reservation property, the Capacity Reservation gets used. Once in use, you pay only for the VM and nothing extra for the Capacity Reservation. Let's say you deploy six D2s_v3 VMs against the previously mentioned Capacity Reservation. You see a bill for six D2s_v3 VMs and four unused Capacity Reservation instances, both charged at the same rate as a D2s_v3 VM.
-Both used and unused Capacity Reservation are eligible for Reserved Instances term commitment discounts. In the previous example, if you have Reserved Instances for two D2s_v3 VMs in the same Azure region, the billing for two resources (either VM or unused Capacity Reservation) will be zeroed out. The remaining eight D2s_v3 will be billed normally. The term commitment discounts could be applied on either the VM or the unused Capacity Reservation.
+Both used and unused Capacity Reservations are eligible for Reserved Instances term commitment discounts. In the previous example, if you have Reserved Instances for two D2s_v3 VMs in the same Azure region, the billing for two resources (either VM or unused Capacity Reservation) will be zeroed out. The remaining eight D2s_v3 instances are billed normally. The term commitment discounts could be applied on either the VM or the unused Capacity Reservation.
## Difference between On-demand Capacity Reservation and Reserved Instances
Both used and unused Capacity Reservation are eligible for Reserved Instances te
|||| | Term | No term commitment required. Can be created and deleted as per the customer requirement | Fixed term commitment of either one-year or three-years| | Billing discount | Charged at pay-as-you-go rates for the underlying VM size* | Significant cost savings over pay-as-you-go rates |
-| Capacity SLA | Provides capacity guarantee in the specified location (region or availability zone) | Does not provide a capacity guarantee. Customers can choose ΓÇ£capacity priorityΓÇ¥ to gain better access, but that option does not carry an SLA |
+| Capacity SLA | Provides capacity guarantee in the specified location (region or availability zone) | Doesn't provide a capacity guarantee. Customers can choose "capacity priority" to gain better access, but that option doesn't carry an SLA |
| Region vs Availability Zones | Can be deployed per region or per availability zone | Only available at regional level | *Eligible for Reserved Instances discount if purchased separately
Capacity Reservation is created for a specific VM size in an Azure region or an
The group specifies the Azure location: -- The group sets the region in which all reservations will be created. For example, East US, North Europe, or Southeast Asia.
+- The group sets the region in which all reservations are created. For example, East US, North Europe, or Southeast Asia.
- The group sets the eligible zones. For example, AZ1, AZ2, AZ3 in any combination. -- If no zones are specified, Azure will select the placement for the group somewhere in the region. Each reservation will specify the region and may not set a zone.
+- If no zones are specified, Azure selects the placement for the group somewhere in the region. Each reservation specifies the region and may not set a zone.
Each reservation in a group is for one VM size. If eligible zones were selected for the group, the reservation must be for one of the supported zones. A group can have only one reservation per VM size per zone, or just one reservation per VM size if no zones are selected.
-To consume Capacity Reservation, specify Capacity Reservation Group as one of the VM properties. If the group doesnΓÇÖt have a reservation matching the size and location, Azure will return an error message.
+To consume Capacity Reservation, specify Capacity Reservation Group as one of the VM properties. If the group doesn't have a reservation matching the size and location, Azure returns an error message.
-The quantity reserved for reservation can be adjusted after initial deployment by changing the capacity property. Other changes to Capacity Reservation, such as VM size or location, are not permitted. The recommended approach is to create a new reservation, migrate any existing VMs, and then delete the old reservation if no longer needed.
+The quantity reserved for a reservation can be adjusted after initial deployment by changing the capacity property, as shown in the sketch below. Other changes to Capacity Reservation, such as VM size or location, aren't permitted. The recommended approach is to create a new reservation, migrate any existing VMs, and then delete the old reservation if no longer needed.
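+
+For example, a sketch of adjusting the reserved quantity in place (the names are hypothetical):
+
+```azurecli
+az capacity reservation update -c myCRG -n myReservation -g myResourceGroup --capacity 10
+```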
-Capacity Reservation doesnΓÇÖt create limits on the number of VM deployments. Azure supports allocating as many VMs as desired against the reservation. As the reservation itself requires quota, the quota checks are omitted for VM deployment up to the reserved quantity. Allocating VMs beyond the reserved quantity is call overallocating the reservation. Overallocating VMs is not covered by the SLA and the VMs will be subject to quota checks and Azure fulfilling the extra capacity. Once deployed, these extra VM instances can cause the quantity of VMs allocated against the reservation to exceed the reserved quantity. To learn more, go to [Overallocating Capacity Reservation](capacity-reservation-overallocate.md).
+Capacity Reservation doesn't create limits on the number of VM deployments. Azure supports allocating as many VMs as desired against the reservation. As the reservation itself requires quota, the quota checks are omitted for VM deployment up to the reserved quantity. Allocating VMs beyond the reserved quantity is called overallocating the reservation. Overallocating VMs isn't covered by the SLA and the VMs are subject to quota checks and Azure fulfilling the extra capacity. Once deployed, these extra VM instances can cause the quantity of VMs allocated against the reservation to exceed the reserved quantity. To learn more, go to [Overallocating Capacity Reservation](capacity-reservation-overallocate.md).
## Capacity Reservation lifecycle
Track the state of the overall reservation through the following properties:
- `virtualMachinesAllocated` = List of VMs allocated against the Capacity Reservation that count towards consuming the capacity. These VMs are either *Running*, *Stopped* (*Allocated*), or in a transitional state such as *Starting* or *Stopping*. This list doesn't include the VMs that are in deallocated state, referred to as *Stopped* (*deallocated*). - `virtualMachinesAssociated` = List of VMs associated with the Capacity Reservation. This list has all the VMs that have been configured to use the reservation, including the ones that are in deallocated state.
-The previous example will start with `capacity` as 2 and length of `virtualMachinesAllocated` and `virtualMachinesAssociated` as 0.
+The previous example starts with `capacity` as 2 and length of `virtualMachinesAllocated` and `virtualMachinesAssociated` as 0.
-When a VM is then allocated against the Capacity Reservation, it will logically consume one of the reserved capacity instances:
+When a VM is then allocated against the Capacity Reservation, it consumes one of the reserved capacity instances:
![Capacity Reservation image 2.](./media/capacity-reservation-overview/capacity-reservation-2.jpg)
-The status of the Capacity Reservation will now show `capacity` as 2 and length of `virtualMachinesAllocated` and `virtualMachinesAssociated` as 1.
+The status of the Capacity Reservation shows `capacity` as 2 and length of `virtualMachinesAllocated` and `virtualMachinesAssociated` as 1.
-Allocations against the Capacity Reservation will succeed as along as the VMs have matching properties and there is at least one empty capacity instance.
+Allocations against the Capacity Reservation succeed as long as the VMs have matching properties and there is at least one empty capacity instance.
-Using our example, when a third VM is allocated against the Capacity Reservation, the reservation enters the [overallocated](capacity-reservation-overallocate.md) state. This third VM will require unused quota and extra capacity fulfillment from Azure. Once the third VM is allocated, the Capacity Reservation now looks like this:
+Using our example, when a third VM is allocated against the Capacity Reservation, the reservation enters the [overallocated](capacity-reservation-overallocate.md) state. This third VM requires unused quota and extra capacity fulfillment from Azure. Once the third VM is allocated, the Capacity Reservation now looks like this:
![Capacity Reservation image 3.](./media/capacity-reservation-overview/capacity-reservation-3.jpg) The `capacity` is 2 and the length of `virtualMachinesAllocated` and `virtualMachinesAssociated` is 3.
-Now suppose the application scales down to the minimum of two VMs. Since VM 0 needs an update, it is chosen for deallocation. The reservation automatically shifts to this state:
+Now suppose the application scales down to the minimum of two VMs. Since VM 0 needs an update, it's chosen for deallocation. The reservation automatically shifts to this state:
![Capacity Reservation image 4.](./media/capacity-reservation-overview/capacity-reservation-4.jpg) The `capacity` and the length of `virtualMachinesAllocated` are both 2. However, the length for `virtualMachinesAssociated` is still 3 as VM 0, though deallocated, is still associated with the Capacity Reservation. To prevent quota overrun, the deallocated VM 0 still counts against the quota allocated to the reservation. As long as you have enough unused quota, you can deploy new VMs to the Capacity Reservation and receive the SLA from any unused reserved capacity. Or you can delete VM 0 to remove its use of quota.
-The Capacity Reservation will exist until explicitly deleted. To delete a Capacity Reservation, the first step is to dissociate all the VMs in the `virtualMachinesAssociated` property. Once disassociation is complete, the Capacity Reservation should look like this:
+The Capacity Reservation exists until explicitly deleted. To delete a Capacity Reservation, the first step is to dissociate all the VMs in the `virtualMachinesAssociated` property. Once disassociation is complete, the Capacity Reservation should look like this:
![Capacity Reservation image 5.](./media/capacity-reservation-overview/capacity-reservation-5.jpg)
-The status of the Capacity Reservation will now show `capacity` as 2 and length of `virtualMachinesAssociated` and `virtualMachinesAllocated` as 0. From this state, the Capacity Reservation can be deleted. Once deleted, you will not pay for the reservation anymore.
+The status of the Capacity Reservation shows `capacity` as 2 and length of `virtualMachinesAssociated` and `virtualMachinesAllocated` as 0. From this state, the Capacity Reservation can be deleted. Once deleted, you don't pay for the reservation anymore.
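+
+A sketch of dissociating a VM and then deleting the reservation (the names are hypothetical):
+
+```azurecli
+az vm update --name myVM --resource-group myResourceGroup --capacity-reservation-group None
+az capacity reservation delete -c myCRG -n myReservation -g myResourceGroup
+```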
![Capacity Reservation image 6.](./media/capacity-reservation-overview/capacity-reservation-6.jpg) ## Usage and billing
-When a Capacity Reservation is empty, VM usage will be reported for the corresponding VM size and the location. [VM Reserved Instances](https://azure.microsoft.com/pricing/reserved-vm-instances/) can cover some or all of the Capacity Reservation usage even when VMs are not deployed.
+When a Capacity Reservation is empty, VM usage is reported for the corresponding VM size and the location. [VM Reserved Instances](https://azure.microsoft.com/pricing/reserved-vm-instances/) can cover some or all of the Capacity Reservation usage even when VMs aren't deployed.
### Example
For example, let's say a Capacity Reservation with quantity reserved 2 has been c
![Capacity Reservation image 7.](./media/capacity-reservation-overview/capacity-reservation-7.jpg)
-In the previous image, a Reserved VM Instance discount is applied to one of the unused instances and the cost for that instance will be zeroed out. For the other instance, PAYG rate will be charged for the VM size reserved.
+In the previous image, a Reserved VM Instance discount is applied to one of the unused instances, and the cost for that instance is zeroed out. For the other instance, the PAYG rate is charged for the VM size reserved.
-When a VM is allocated against the Capacity Reservation, the other VM components such as disks, network, extensions, and any other requested components must also be allocated. In this state, the VM usage will reflect one allocated VM and one unused capacity instance. The Reserved VM Instance will zero out the cost of either the VM or the unused capacity instance. The other charges for disks, networking, and other components associated with the allocated VM will also appear on the bill.
+When a VM is allocated against the Capacity Reservation, the other VM components such as disks, network, extensions, and any other requested components must also be allocated. In this state, the VM usage reflects one allocated VM and one unused capacity instance. The Reserved VM Instance zeroes out the cost of either the VM or the unused capacity instance. The other charges for disks, networking, and other components associated with the allocated VM also appear on the bill.
![Capacity Reservation image 8.](./media/capacity-reservation-overview/capacity-reservation-8.jpg)
-In the previous image, the VM Reserved Instance discount is applied to VM 0, which will only be charged for other components such as disk and networking. The other unused instance is being charged at PAYG rate for the VM size reserved.
+In the previous image, the VM Reserved Instance discount is applied to VM 0, which is charged only for other components such as disk and networking. The other unused instance is charged at the PAYG rate for the VM size reserved.
## Frequently asked questions
- **What's the price of on-demand Capacity Reservation?**
- The price of your on-demand Capacity Reservation is same as the price of underlying VM size associated with the reservation. When using Capacity Reservation, you will be charged for the VM size you selected at pay-as-you-go rates, whether the VM has been provisioned or not. Visit the [Windows](https://azure.microsoft.com/pricing/details/virtual-machines/windows/) and [Linux](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) VM pricing pages for more details.
+ The price of your on-demand Capacity Reservation is the same as the price of the underlying VM size associated with the reservation. When using Capacity Reservation, you'll be charged for the VM size you selected at pay-as-you-go rates, whether the VM has been provisioned or not. Visit the [Windows](https://azure.microsoft.com/pricing/details/virtual-machines/windows/) and [Linux](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) VM pricing pages for more details.
- **Will I get charged twice, for the cost of on-demand Capacity Reservation and for the actual VM when I finally provision it?**
- No, you will only get charged once for on-demand Capacity Reservation.
+ No, you only get charged once for on-demand Capacity Reservation.
- **Can I apply Reserved Virtual Machine Instance (RI) to on-demand Capacity Reservation to lower my costs?**
- Yes, you can apply existing or future RIs to on-demand capacity reservations and receive RI discounts. Available RIs are applied automatically to Capacity Reservation the same way they are applied to VMs.
+ Yes, you can apply existing or future RIs to on-demand capacity reservations and receive RI discounts. Available RIs are applied automatically to a Capacity Reservation the same way they're applied to VMs.
- **What is the difference between Reserved Virtual Machine Instance (RI) and on-demand Capacity Reservation?**
- Both RIs and on-demand capacity reservations are applicable to Azure VMs. However, RIs provide discounted reservation rates for your VMs compared to pay-as-you-go rates as a result of a 1-year or 3-year term commitment. Conversely, on-demand capacity reservations do not require a commitment. You can create or cancel a Capacity Reservation at any time. However, no discounts are applied, and you will incur charges at pay-as-you-go rates after your Capacity Reservation has been successfully provisioned. Unlike RIs, which prioritize capacity but do not guarantee it, when you purchase an on-demand Capacity Reservation, Azure sets aside compute capacity for your VM and provides an SLA guarantee.
+ Both RIs and on-demand capacity reservations are applicable to Azure VMs. However, RIs provide discounted reservation rates for your VMs compared to pay-as-you-go rates as a result of a 1-year or 3-year term commitment. Conversely, on-demand capacity reservations don't require a commitment. You can create or cancel a Capacity Reservation at any time. However, no discounts are applied, and you'll incur charges at pay-as-you-go rates after your Capacity Reservation has been successfully provisioned. Unlike RIs, which prioritize capacity but don't guarantee it, when you purchase an on-demand Capacity Reservation, Azure sets aside compute capacity for your VM and provides an SLA guarantee.
- **Which scenarios would benefit the most from on-demand capacity reservations?**
Get started reserving Compute capacity. Check out our other related Capacity Res
- [Modify a capacity reservation](capacity-reservation-modify.md)
- [Associate a VM](capacity-reservation-associate-vm.md)
- [Remove a VM](capacity-reservation-remove-vm.md)
-- [Associate a VM scale set - Flexible](capacity-reservation-associate-virtual-machine-scale-set-flex.md)
-- [Associate a VM scale set - Uniform](capacity-reservation-associate-virtual-machine-scale-set.md)
-- [Remove a VM scale set](capacity-reservation-remove-virtual-machine-scale-set.md)
+- [Associate a Virtual Machine Scale Set - Flexible](capacity-reservation-associate-virtual-machine-scale-set-flex.md)
+- [Associate a Virtual Machine Scale Set - Uniform](capacity-reservation-associate-virtual-machine-scale-set.md)
+- [Remove a Virtual Machine Scale Set](capacity-reservation-remove-virtual-machine-scale-set.md)
virtual-machines Create Fqdn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/create-fqdn.md
Previously updated : 05/07/2021 Last updated : 02/25/2023 +
# Create a fully qualified domain name for a VM in the Azure portal
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs
-When you create a virtual machine (VM) in the [Azure portal](https://portal.azure.com), a public IP resource for the virtual machine is automatically created. You use this public IP address to remotely access the VM. Although the portal does not create a [fully qualified domain name](https://en.wikipedia.org/wiki/Fully_qualified_domain_name), or FQDN, you can add one once the VM is created. This article demonstrates the steps to create a DNS name or FQDN. If you create a VM without a public IP address, you can't create a FQDN.
+When you create a virtual machine (VM) in the [Azure portal](https://portal.azure.com), a public IP resource for the virtual machine is automatically created. You use this public IP address to remotely access the VM. Although the portal doesn't create a [fully qualified domain name](https://en.wikipedia.org/wiki/Fully_qualified_domain_name), or FQDN, you can add one once the VM is created. This article demonstrates the steps to create a DNS name or FQDN. If you create a VM without a public IP address, you can't create an FQDN.
-## Create a FQDN
-This article assumes that you have already created a VM. If needed, you can create a [Linux](./linux/quick-create-portal.md) or [Windows](./windows/quick-create-portal.md) VM in the portal. Follow these steps once your VM is up and running:
+## Create an FQDN
+This article assumes that you've already created a VM. If needed, you can create a [Linux](./linux/quick-create-portal.md) or [Windows](./windows/quick-create-portal.md) VM in the portal. Follow these steps once your VM is up and running:
1. Select your VM in the portal.
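If you'd rather script this than click through the portal, the DNS label can also be set on the VM's public IP with the Azure CLI; a sketch with placeholder resource names:

```azurecli
# Set a DNS name label on the VM's public IP; the resulting FQDN is
# <label>.<region>.cloudapp.azure.com.
az network public-ip update \
  --resource-group myResourceGroup \
  --name myPublicIP \
  --dns-name myvm-label

# Confirm the FQDN that was created.
az network public-ip show \
  --resource-group myResourceGroup \
  --name myPublicIP \
  --query dnsSettings.fqdn \
  --output tsv
```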
virtual-machines Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/custom-data.md
- Last updated 03/06/2020
+ Last updated 02/24/2023
+
# Custom data and cloud-init on Azure Virtual Machines
Custom data is made available to the VM during first startup or setup, which is
## Pass custom data to the VM
To use custom data, you must Base64-encode the contents before passing the data to the API--unless you're using a CLI tool that does the conversion for you, such as the Azure CLI. The size can't exceed 64 KB.
-In the CLI, you can pass your custom data as a file, as the following example shows. The file will be converted to Base64.
+In the CLI, you can pass your custom data as a file, as the following example shows. The file is converted to Base64.
```azurecli
# Resource names, image alias, and file path below are illustrative placeholders.
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Ubuntu2204 \
  --custom-data ./custom-data.txt \
  --generate-ssh-keys
```
In Azure Resource Manager, there's a [base64 function](../azure-resource-manager
The provisioning agents installed on the VMs handle communication with the platform and placing data on the file system.
### Windows
-Custom data is placed in *%SYSTEMDRIVE%\AzureData\CustomData.bin* as a binary file, but it isn't processed. If you want to process this file, you'll need to build a custom image and write code to process *CustomData.bin*.
+Custom data is placed in *%SYSTEMDRIVE%\AzureData\CustomData.bin* as a binary file, but it isn't processed. If you want to process this file, you need to build a custom image and write code to process *CustomData.bin*.
### Linux
-On Linux operating systems, custom data is passed to the VM via the *ovf-env.xml* file. That file is copied to the */var/lib/waagent* directory during provisioning. Newer versions of the Linux Agent will also copy the Base64-encoded data to */var/lib/waagent/CustomData* for convenience.
+On Linux operating systems, custom data is passed to the VM via the *ovf-env.xml* file. That file is copied to the */var/lib/waagent* directory during provisioning. Newer versions of the Linux Agent copy the Base64-encoded data to */var/lib/waagent/CustomData* for convenience.
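To verify what actually landed on a Linux VM, you can decode the agent's copy directly (assuming an agent version that writes the file mentioned above):

```bash
# Print the decoded custom data payload the Linux Agent copied at provisioning.
sudo base64 --decode /var/lib/waagent/CustomData
```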
Azure currently supports two provisioning agents:
-* **Linux Agent**. By default, the agent won't process custom data. You need to build a custom image with the data enabled. The [relevant settings](https://github.com/Azure/WALinuxAgent#configuration) are:
+* **Linux Agent**. By default, the agent doesn't process custom data. You need to build a custom image with the data enabled. The [relevant settings](https://github.com/Azure/WALinuxAgent#configuration) are:
* `Provisioning.DecodeCustomData`
* `Provisioning.ExecuteCustomData`
- When you enable custom data and run a script, it will delay the VM reporting that it's ready or that provisioning has succeeded until the script has finished. If the script exceeds the total VM provisioning time allowance of 40 minutes, VM creation will fail.
+ When you enable custom data and run a script, the VM doesn't report successful provisioning until the script has finished executing. If the script exceeds the total VM provisioning time limit of 40 minutes, VM creation fails.
- If the script fails to run, or errors happen during execution, that's not a fatal provisioning failure. You'll need to create a notification path to alert you for the completion state of the script.
+ If the script fails to run, or errors happen during execution, that's not a fatal provisioning failure. You need to create a notification path to alert you to the completion state of the script.
To troubleshoot custom data execution, review */var/log/waagent.log*.
-* **cloud-init**. By default, this agent will process custom data. It accepts [multiple formats](https://cloudinit.readthedocs.io/en/latest/topics/format.html) of custom data, such as cloud-init configuration and scripts.
+* **cloud-init**. By default, this agent processes custom data. It accepts [multiple formats](https://cloudinit.readthedocs.io/en/latest/topics/format.html) of custom data, such as cloud-init configuration and scripts.
- Similar to the Linux Agent, if errors happen during execution of the configuration processing or scripts when cloud-init is processing the custom data, that's not a fatal provisioning failure. You'll need to create a notification path to alert you for the completion state of the script.
+ Similar to the Linux Agent, if errors happen during execution of the configuration processing or scripts when cloud-init is processing the custom data, that's not a fatal provisioning failure. You need to create a notification path to alert you to the completion state of the script.
However, unlike the Linux Agent, cloud-init doesn't wait for custom data configurations from the user to finish before reporting to the platform that the VM is ready. For more information on cloud-init on Azure, including troubleshooting, see [cloud-init support for virtual machines in Azure](./linux/using-cloud-init.md).
## FAQ
### Can I update custom data after the VM has been created?
-For single VMs, you can't update custom data in the VM model. But for virtual machine scale sets, you can update custom data via the [REST API](/rest/api/compute/virtualmachinescalesets/update), the [Azure CLI](/cli/azure/vmss#az-vmss-update), or [Azure PowerShell](/powershell/module/az.compute/update-azvmss). When you update custom data in the model for a virtual machine scale set:
+For single VMs, you can't update custom data in the VM model. But for Virtual Machine Scale Sets, you can update custom data via the [REST API](/rest/api/compute/virtualmachinescalesets/update), the [Azure CLI](/cli/azure/vmss#az-vmss-update), or [Azure PowerShell](/powershell/module/az.compute/update-azvmss). When you update custom data in the model for a Virtual Machine Scale Set (see the CLI sketch after this list):
-* Existing instances in the scale set won't get the updated custom data until they're reimaged.
-* Existing instances in the scale set that are upgraded won't get the updated custom data.
-* New instances will receive the new custom data.
+* Existing instances in the scale set don't get the updated custom data until they're reimaged.
+* Existing instances in the scale set that are upgraded don't get the updated custom data.
+* New instances receive the new custom data.
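As referenced above, updating the model with the Azure CLI can look like this (a sketch; names are placeholders, and the property path assumes the standard scale set model):

```azurecli
# Base64-encode the new custom data (GNU coreutils shown).
encoded=$(base64 -w0 ./custom-data.txt)

# Patch the scale set model; existing instances see it only after reimage.
az vmss update \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --set virtualMachineProfile.osProfile.customData=$encoded
```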
### Can I place sensitive values in custom data?
We advise *not* to store sensitive data in custom data. For more information, see [Azure data security and encryption best practices](../security/fundamentals/data-encryption-best-practices.md).
### Is custom data made available in IMDS?
-Custom data is not available in Azure Instance Metadata Service (IMDS). We suggest using user data in IMDS instead. For more information, see [User data through Azure Instance Metadata Service](./linux/instance-metadata-service.md?tabs=linux#get-user-data).
+Custom data isn't available in Azure Instance Metadata Service (IMDS). We suggest using user data in IMDS instead. For more information, see [User data through Azure Instance Metadata Service](./linux/instance-metadata-service.md?tabs=linux#get-user-data).
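As a quick check from inside a VM, user data can be read from IMDS; a sketch using the documented endpoint (the response is Base64-encoded):

```bash
# Query user data from IMDS; the Metadata header is required.
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text" \
  | base64 --decode
```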
virtual-machines Infrastructure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/infrastructure-automation.md
Previously updated : 07/17/2020 Last updated : 02/25/2023 +
# Use infrastructure automation tools with virtual machines in Azure
To create and manage Azure virtual machines (VMs) in a consistent manner at scal
- Examples include [Azure DevOps Services](#azure-devops-services) and [Jenkins](#jenkins).
## Ansible
-[Ansible](https://www.ansible.com/) is an automation engine for configuration management, VM creation, or application deployment. Ansible uses an agent-less model, typically with SSH keys, to authenticate and manage target machines. Configuration tasks are defined in playbooks, with a number of Ansible modules available to carry out specific tasks. For more information, see [How Ansible works](https://www.ansible.com/how-ansible-works).
+[Ansible](https://www.ansible.com/) is an automation engine for configuration management, VM creation, or application deployment. Ansible uses an agent-less model, typically with SSH keys, to authenticate and manage target machines. Configuration tasks are defined in playbooks, with several Ansible modules available to carry out specific tasks. For more information, see [How Ansible works](https://www.ansible.com/how-ansible-works).
Learn how to:
## Chef
-[Chef](https://www.chef.io/) is an automation platform that helps define how your infrastructure is configured, deployed, and managed. Additional components included Chef Habitat for application lifecycle automation rather than the infrastructure, and Chef InSpec that helps automate compliance with security and policy requirements. Chef Clients are installed on target machines, with one or more central Chef Servers that store and manage the configurations. For more information, see [An Overview of Chef](https://docs.chef.io/chef_overview.html).
+[Chef](https://www.chef.io/) is an automation platform that helps define how your infrastructure is configured, deployed, and managed. Related components include Chef Habitat, which automates the application lifecycle rather than the infrastructure, and Chef InSpec, which helps automate compliance with security and policy requirements. Chef Clients are installed on target machines, with one or more central Chef Servers that store and manage the configurations. For more information, see [An Overview of Chef](https://docs.chef.io/chef_overview.html).
Learn how to:
## Cloud-init
-[Cloud-init](https://cloudinit.readthedocs.io) is a widely used approach to customize a Linux VM as it boots for the first time. You can use cloud-init to install packages and write files, or to configure users and security. Because cloud-init is called during the initial boot process, there are no additional steps or required agents to apply your configuration. For more information on how to properly format your `#cloud-config` files, see the [cloud-init documentation site](https://cloudinit.readthedocs.io/en/latest/topics/format.html#cloud-config-data). `#cloud-config` files are text files encoded in base64.
+[Cloud-init](https://cloudinit.readthedocs.io) is a widely used approach to customize a Linux VM as it boots for the first time. You can use cloud-init to install packages and write files, or to configure users and security. Because cloud-init is called during the initial boot process, there are no extra steps or required agents to apply your configuration. For more information on how to properly format your `#cloud-config` files, see the [cloud-init documentation site](https://cloudinit.readthedocs.io/en/latest/topics/format.html#cloud-config-data). `#cloud-config` files are text files encoded in base64.
Cloud-init also works across distributions. For example, you don't use **apt-get install** or **yum install** to install a package. Instead you can define a list of packages to install. Cloud-init automatically uses the native package management tool for the distro you select.
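For instance, a distribution-neutral package install can be declared in a few lines of `#cloud-config`; a minimal sketch (the package choice is illustrative):

```bash
# Write a minimal #cloud-config; cloud-init resolves the native package
# manager (apt, yum, zypper, and so on) for whichever distro boots it.
cat > cloud-init.txt <<'EOF'
#cloud-config
package_update: true
packages:
  - nginx
EOF
```

You'd then pass *cloud-init.txt* as custom data when creating the VM (for example, `az vm create --custom-data cloud-init.txt ...`).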
-We are actively working with our endorsed Linux distro partners in order to have cloud-init enabled images available in the Azure Marketplace. These images make your cloud-init deployments and configurations work seamlessly with VMs and virtual machine scale sets.
+We're actively working with our endorsed Linux distro partners in order to have cloud-init enabled images available in the Azure Marketplace. These images make your cloud-init deployments and configurations work seamlessly with VMs and Virtual Machine Scale Sets.
Learn more details about cloud-init on Azure: - [Cloud-init support for Linux virtual machines in Azure](./linux/using-cloud-init.md)
Learn how to:
## Azure Custom Script Extension
-The Azure Custom Script Extension for [Linux](./extensions/custom-script-linux.md) or [Windows](./extensions/custom-script-windows.md) downloads and executes scripts on Azure VMs. You can use the extension when you create a VM, or any time after the VM is in use.
+The Azure Custom Script Extension for [Linux](./extensions/custom-script-linux.md) or [Windows](./extensions/custom-script-windows.md) downloads and executes scripts on Azure VMs. You can use the extension when you create a VM, or anytime after the VM is in use.
Scripts can be downloaded from Azure storage or any public location such as a GitHub repository. With the Custom Script Extension, you can write scripts in any language that runs on the source VM. These scripts can be used to install applications or configure the VM as desired. To secure credentials, sensitive information such as passwords can be stored in a protected configuration. These credentials are only decrypted inside the VM.
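For orientation, attaching the extension to an existing Linux VM with the Azure CLI looks roughly like this (the names and script URL are placeholders):

```azurecli
# Attach the Custom Script Extension (v2 for Linux) to an existing VM.
# Put credentials or other secrets in --protected-settings instead of
# --settings so they're encrypted and only decrypted inside the VM.
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name CustomScript \
  --publisher Microsoft.Azure.Extensions \
  --settings '{"fileUris":["https://example.com/setup.sh"],"commandToExecute":"bash setup.sh"}'
```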
Learn how to:
## Azure Automation
-[Azure Automation](https://azure.microsoft.com/services/automation/) uses runbooks to process a set of tasks on the VMs you target. Azure Automation is used to manage existing VMs rather than to create an infrastructure. Azure Automation can run across both Linux and Windows VMs, as well as on-premises virtual or physical machines with a hybrid runbook worker. Runbooks can be stored in a source control repository, such as GitHub. These runbooks can then run manually or on a defined schedule.
+[Azure Automation](https://azure.microsoft.com/services/automation/) uses runbooks to process a set of tasks on the VMs you target. Azure Automation is used to manage existing VMs rather than to create an infrastructure. Azure Automation can run across both Linux and Windows VMs, and on-premises virtual or physical machines with a hybrid runbook worker. Runbooks can be stored in a source control repository, such as GitHub. These runbooks can then run manually or on a defined schedule.
Azure Automation also provides a Desired State Configuration (DSC) service that allows you to create definitions for how a given set of VMs should be configured. DSC then ensures that the required configuration is applied and the VM stays consistent. Azure Automation DSC runs on both Windows and Linux machines.
virtual-machines Openshift Container Platform 3X Marketplace Self Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/openshift-container-platform-3x-marketplace-self-managed.md
- Title: Deploy OpenShift Container Platform 3.11 Self-Managed Marketplace Offer in Azure
-description: Deploy OpenShift Container Platform 3.11 Self-Managed Marketplace Offer in Azure.
------- Previously updated : 10/14/2019---
-# Configure prerequisites
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-Before using the Marketplace offer to deploy a self-managed OpenShift Container Platform 3.11 cluster in Azure, a few prerequisites must be configured. Read the [OpenShift prerequisites](./openshift-container-platform-3x-prerequisites.md) article for instructions to create an ssh key (without a passphrase), Azure key vault, key vault secret, and a service principal.
-
-
-## Deploy using the Marketplace offer
-
-The simplest way to deploy a self-managed OpenShift Container Platform 3.11 cluster into Azure is to use the Azure Marketplace offer.
-
-This option is the simplest, but it also has limited customization capabilities. The Marketplace offer deploys OpenShift Container Platform 3.11.82 and includes the following configuration options:
-- **Master Nodes**: Three (3) Master Nodes with configurable instance type.
-- **Infra Nodes**: Three (3) Infra Nodes with configurable instance type.
-- **Nodes**: The number of Nodes (between 1 and 9) and the instance type are configurable.
-- **Disk Type**: Managed Disks are used.
-- **Networking**: Support for new or existing Network and custom CIDR range.
-- **CNS**: CNS can be enabled.
-- **Metrics**: Hawkular Metrics can be enabled.
-- **Logging**: EFK Logging can be enabled.
-- **Azure Cloud Provider**: Enabled by default, can be disabled.
-
-In the upper left of the Azure portal, click **Create a resource**, enter 'openshift container platform' into the search box and hit Enter.
-
- ![New resource search](media/openshift-marketplace-self-managed/ocp-search.png)
-<br>
-
-The Results page will open with **Red Hat OpenShift Container Platform 3.11 Self-Managed** in the list.
-
- ![New resource search result](media/openshift-marketplace-self-managed/ocp-searchresult.png)
-<br>
-
-Click the offer to view details of the offer. To deploy this offer, click **Create**. The UI to enter necessary parameters will appear. The first screen is the **Basics** blade.
-
- ![Offer title page](media/openshift-marketplace-self-managed/ocp-titlepage.png)
-<br>
-
-**Basics**
-
-To get help on any of the input parameters, hover over the ***i*** next to the parameter name.
-
-Enter values for the input parameters and click **OK**.
-
-| Input Parameter | Parameter Description |
-|--|--|
-| VM Admin User Name | The administrator user to be created on all VM instances |
-| SSH Public Key for Admin User | SSH public key used to log into VM - must not have a passphrase |
-| Subscription | Azure subscription to deploy cluster into |
-| Resource Group | Create a new resource group or select an existing empty resource group for cluster resources |
-| Location | Azure region to deploy cluster into |
-
- ![Offer basics blade](media/openshift-marketplace-self-managed/ocp-basics.png)
-<br>
-
-**Infrastructure Settings**
-
-Enter values for the input parameters and click **OK**.
-
-| Input Parameter | Parameter Description |
-|--|--|
-| OCP Cluster Name Prefix | Cluster Prefix used to configure hostnames for all nodes. Between 1 and 20 characters |
-| Master Node Size | Accept the default VM size or click **Change size** to select a different VM size. Select appropriate VM size for your work load |
-| Infrastructure Node Size | Accept the default VM size or click **Change size** to select a different VM size. Select appropriate VM size for your work load |
-| Number of Application Nodes | Accept the default VM size or click **Change size** to select a different VM size. Select appropriate VM size for your work load |
-| Application Node Size | Accept the default VM size or click **Change size** to select a different VM size. Select appropriate VM size for your work load |
-| Bastion Host Size | Accept the default VM size or click **Change size** to select a different VM size. Select appropriate VM size for your work load |
-| New or Existing Virtual Network | Create a new vNet (Default) or use an existing vNet |
-| Choose Default CIDR Settings or customize IP Range (CIDR) | Accept default CIDR ranges or Select **Custom IP Range** and enter custom CIDR information. Default Settings will create vNet with CIDR of 10.0.0.0/14, master subnet with 10.1.0.0/16, infra subnet with 10.2.0.0/16, and compute and cns subnet with 10.3.0.0/16 |
-| Key Vault Resource Group Name | The name of the Resource Group that contains the Key Vault |
-| Key Vault Name | The name of the Key Vault that contains the secret with the ssh private key. Only alphanumeric characters and dashes are allowed, and be between 3 and 24 characters |
-| Secret Name | The name of the secret that contains the ssh private key. Only alphanumeric characters and dashes are allowed |
-
- ![Offer infrastructure blade](media/openshift-marketplace-self-managed/ocp-inframain.png)
-<br>
-
-**Change size**
-
-To select a different VM size, click ***Change size***. The VM selection window will open. Select the VM size you want and click **Select**.
-
- ![Select VM Size](media/openshift-marketplace-self-managed/ocp-selectvmsize.png)
-<br>
-
-**Existing Virtual Network**
-
-| Input Parameter | Parameter Description |
-|--|--|
-| Existing Virtual Network Name | Name of the existing vNet |
-| Subnet name for master nodes | Name of existing subnet for master nodes. Needs to contain at least 16 IP addresses and follow RFC 1918 |
-| Subnet name for infra nodes | Name of existing subnet for infra nodes. Needs to contain at least 32 IP addresses and follow RFC 1918 |
-| Subnet name for compute and cns nodes | Name of existing subnet for compute and cns nodes. Needs to contain at least 32 IP addresses and follow RFC 1918 |
-| Resource Group for the existing Virtual Network | Name of resource group that contains the existing vNet |
-
- ![Offer infrastructure existing vnet](media/openshift-marketplace-self-managed/ocp-existingvnet.png)
-<br>
-
-**Custom IP Range**
-
-| Input Parameter | Parameter Description |
-|--|--|
-| Address Range for the Virtual Network | Custom CIDR for the vNet |
-| Address Range for the subnet containing the master nodes | Custom CIDR for master subnet |
-| Address Range for the subnet containing the infrastructure nodes | Custom CIDR for infrastructure subnet |
-| Address Range for subnet containing the compute and cns nodes | Custom CIDR for the compute and cns nodes |
-
- ![Offer infrastructure custom IP range](media/openshift-marketplace-self-managed/ocp-customiprange.png)
-<br>
-
-**OpenShift Container Platform 3.11**
-
-Enter values for the Input Parameters and click **OK**
-
-| Input Parameter | Parameter Description |
-|--|--|
-| OpenShift Admin User Password | Password for the initial OpenShift user. This user will also be the cluster admin |
-| Confirm OpenShift Admin User Password | Retype the OpenShift Admin User Password |
-| Red Hat Subscription Manager User Name | User Name to access your Red Hat Subscription or Organization ID. This credential is used to register the RHEL instance to your subscription and will not be stored by Microsoft or Red Hat |
-| Red Hat Subscription Manager User Password | Password to access your Red Hat Subscription or Activation Key. This credential is used to register the RHEL instance to your subscription and will not be stored by Microsoft or Red Hat |
-| Red Hat Subscription Manager OpenShift Pool ID | Pool ID that contains OpenShift Container Platform entitlement. Ensure you have enough entitlements of OpenShift Container Platform for the installation of the cluster |
-| Red Hat Subscription Manager OpenShift Pool ID for Broker / Master Nodes | Pool ID that contains OpenShift Container Platform entitlements for Broker / Master Nodes. Ensure you have enough entitlements of OpenShift Container Platform for the installation of the cluster. If not using broker / master pool ID, enter the pool ID for Application Nodes |
-| Configure Azure Cloud Provider | Configure OpenShift to use Azure Cloud Provider. Necessary if using Azure disk attach for persistent volumes. Default is Yes |
-| Azure AD Service Principal Client ID GUID | Azure AD Service Principal Client ID GUID - also known as AppID. Only needed if Configure Azure Cloud Provider set to **Yes** |
-| Azure AD Service Principal Client ID Secret | Azure AD Service Principal Client ID Secret. Only needed if Configure Azure Cloud Provider set to **Yes** |
-
- ![Offer OpenShift blade](media/openshift-marketplace-self-managed/ocp-ocpmain.png)
-<br>
-
-**Additional Settings**
-
-The Additional Settings blade allows the configuration of CNS for glusterfs storage, Logging, Metrics, and Router Sub domain. The default won't install any of these options and will use nip.io as the router sub domain for testing purposes. Enabling CNS will install three additional compute nodes with three additional attached disks that will host glusterfs pods.
-
-Enter values for the Input Parameters and click **OK**
-
-| Input Parameter | Parameter Description |
-|--|--|
-| Configure Container Native Storage (CNS) | Installs CNS in the OpenShift cluster and enable it as storage. Will be default if Azure Provider is disabled |
-| Configure Cluster Logging | Installs EFK logging functionality into the cluster. Size infra nodes appropriately to host EFK pods |
-| Configure Metrics for the Cluster | Installs Hawkular metrics into the OpenShift cluster. Size infra nodes appropriately to host Hawkular metrics pods |
-| Default Router Sub domain | Select nipio for testing or custom to enter your own sub domain for production |
-
- ![Offer additional blade](media/openshift-marketplace-self-managed/ocp-additionalmain.png)
-<br>
-
-**Additional Settings - Extra Parameters**
-
-| Input Parameter | Parameter Description |
-|--|--|
-| (CNS) Node Size | Accept the default node size or select **Change size** to select a new VM size |
-| Enter your custom subdomain | The custom routing domain to be used for exposing applications via the router on the OpenShift cluster. Be sure to create the appropriate wildcard DNS entry] |
-
- ![Offer additional cns Install](media/openshift-marketplace-self-managed/ocp-additionalcnsall.png)
-<br>
-
-**Summary**
-
-Validation occurs at this stage to check core quota is sufficient to deploy the total number of VMs selected for the cluster. Review all the parameters that were entered. If the inputs are acceptable, click **OK** to continue.
-
- ![Offer summary blade](media/openshift-marketplace-self-managed/ocp-summary.png)
-<br>
-
-**Buy**
-
-Confirm contact information on the Buy page and click **Purchase** to accept the terms of use and start deployment of the OpenShift Container Platform cluster.
-
- ![Offer purchase blade](media/openshift-marketplace-self-managed/ocp-purchase.png)
-<br>
--
-## Connect to the OpenShift cluster
-
-When the deployment finishes, retrieve the connection from the output section of the deployment. Connect to the OpenShift console with your browser by using the **OpenShift Console URL**. You can also SSH to the Bastion host. Following is an example where the admin username is clusteradmin and the bastion public IP DNS FQDN is bastiondns4hawllzaavu6g.eastus.cloudapp.azure.com:
-
-```bash
-$ ssh clusteradmin@bastiondns4hawllzaavu6g.eastus.cloudapp.azure.com
-```
-
-## Clean up resources
-
-Use the [az group delete](/cli/azure/group) command to remove the resource group, OpenShift cluster, and all related resources when they're no longer needed.
-
-```azurecli
-az group delete --name openshiftrg
-```
-
-## Next steps
-- [Post-deployment tasks](./openshift-container-platform-3x-post-deployment.md)
-- [Troubleshoot OpenShift deployment in Azure](./openshift-container-platform-3x-troubleshooting.md)
-- [Getting started with OpenShift Container Platform](https://docs.openshift.com)
-
-
virtual-machines Openshift Container Platform 3X Post Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/openshift-container-platform-3x-post-deployment.md
- Title: OpenShift Container Platform 3.11 in Azure post-deployment tasks
-description: Additional tasks for after an OpenShift Container Platform 3.11 cluster has been deployed.
------- Previously updated : 10/14/2019----
-# Post-deployment tasks
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-After you deploy an OpenShift cluster, you can configure additional items. This article covers:
-- How to configure single sign-on by using Azure Active Directory (Azure AD)
-- How to configure Azure Monitor logs to monitor OpenShift
-- How to configure metrics and logging
-- How to install Open Service Broker for Azure (OSBA)
-
-## Configure single sign-on by using Azure Active Directory
-
-To use Azure Active Directory for authentication, first you need to create an Azure AD app registration. This process involves two steps: creating the app registration, and configuring permissions.
-
-### Create an app registration
-
-These steps use the Azure CLI to create the app registration, and the GUI (portal) to set the permissions. To create the app registration, you need the following five pieces of information:
-- Display name: App registration name (for example, OCPAzureAD)
-- Home page: OpenShift console URL (for example, `https://masterdns343khhde.westus.cloudapp.azure.com/console`)
-- Identifier URI: OpenShift console URL (for example, `https://masterdns343khhde.westus.cloudapp.azure.com/console`)
-- Reply URL: Master public URL and the app registration name (for example, `https://masterdns343khhde.westus.cloudapp.azure.com/oauth2callback/OCPAzureAD`)
-- Password: Secure password (use a strong password)
-
-The following example creates an app registration by using the preceding information:
-
-```azurecli
-az ad app create --display-name OCPAzureAD --homepage https://masterdns343khhde.westus.cloudapp.azure.com/console --reply-urls https://masterdns343khhde.westus.cloudapp.azure.com/oauth2callback/hwocpadint --identifier-uris https://masterdns343khhde.westus.cloudapp.azure.com/console --password {Strong Password}
-```
-
-If the command is successful, you get a JSON output similar to:
-
-```json
-{
- "appId": "12345678-ca3c-427b-9a04-ab12345cd678",
- "appPermissions": null,
- "availableToOtherTenants": false,
- "displayName": "OCPAzureAD",
- "homepage": "https://masterdns343khhde.westus.cloudapp.azure.com/console",
- "identifierUris": [
- "https://masterdns343khhde.westus.cloudapp.azure.com/console"
- ],
- "objectId": "62cd74c9-42bb-4b9f-b2b5-b6ee88991c80",
- "objectType": "Application",
- "replyUrls": [
- "https://masterdns343khhde.westus.cloudapp.azure.com/oauth2callback/OCPAzureAD"
- ]
-}
-```
-
-Take note of the appId property returned from the command for a later step.
-
-In the Azure portal:
-
-1. Select **Azure Active Directory** > **App Registration**.
-2. Search for your app registration (for example, OCPAzureAD).
-3. In the results, click the app registration.
-4. Under **Settings**, select **Required permissions**.
-5. Under **Required Permissions**, select **Add**.
-
- ![App Registration](media/openshift-post-deployment/app-registration.png)
-
-6. Click Step 1: Select API, and then click **Windows Azure Active Directory (Microsoft.Azure.ActiveDirectory)**. Click **Select** at the bottom.
-
- ![App Registration Select API](media/openshift-post-deployment/app-registration-select-api.png)
-
-7. On Step 2: Select Permissions, select **Sign in and read user profile** under **Delegated Permissions**, and then click **Select**.
-
- ![App Registration Access](media/openshift-post-deployment/app-registration-access.png)
-
-8. Select **Done**.
-
-### Configure OpenShift for Azure AD authentication
-
-To configure OpenShift to use Azure AD as an authentication provider, the /etc/origin/master/master-config.yaml file must be edited on all master nodes.
-
-Find the tenant ID by using the following CLI command:
-
-```azurecli
-az account show
-```
-
-In the yaml file, find the following lines:
-
-```yaml
-oauthConfig:
- assetPublicURL: https://masterdns343khhde.westus.cloudapp.azure.com/console/
- grantConfig:
- method: auto
- identityProviders:
- - challenge: true
- login: true
- mappingMethod: claim
- name: htpasswd_auth
- provider:
- apiVersion: v1
- file: /etc/origin/master/htpasswd
- kind: HTPasswdPasswordIdentityProvider
-```
-
-Insert the following lines immediately after the preceding lines:
-
-```yaml
- - name: <App Registration Name>
- challenge: false
- login: true
- mappingMethod: claim
- provider:
- apiVersion: v1
- kind: OpenIDIdentityProvider
- clientID: <appId>
- clientSecret: <Strong Password>
- claims:
- id:
- - sub
- preferredUsername:
- - unique_name
- name:
- - name
- email:
- - email
- urls:
- authorize: https://login.microsoftonline.com/<tenant Id>/oauth2/authorize
- token: https://login.microsoftonline.com/<tenant Id>/oauth2/token
-```
-
-Make sure the text aligns correctly under identityProviders. Find the tenant ID by using the following CLI command: ```az account show```
-
-Restart the OpenShift master services on all master nodes:
-
-```bash
-sudo /usr/local/bin/master-restart api
-sudo /usr/local/bin/master-restart controllers
-```
-
-In the OpenShift console, you now see two options for authentication: htpasswd_auth and [App Registration].
-
-## Monitor OpenShift with Azure Monitor logs
-
-There are three ways to add the Log Analytics agent to OpenShift.
- Install the Log Analytics agent for Linux directly on each OpenShift node
-- Enable Azure Monitor VM Extension on each OpenShift node
-- Install the Log Analytics agent as an OpenShift daemon-set
-
-Read the full [instructions](../../azure-monitor/containers/containers.md#configure-a-log-analytics-agent-for-red-hat-openshift) for more details.
-
-## Configure metrics and logging
-
-Based on the branch, the Azure Resource Manager templates for OpenShift Container Platform and OKD may provide input parameters for enabling metrics and logging as part of the installation.
-
-The OpenShift Container Platform Marketplace offer also provides an option to enable metrics and logging during cluster installation.
-
-If metrics / logging wasn't enabled during the installation of the cluster, they can easily be enabled after the fact.
-
-### Azure Cloud Provider in use
-
-SSH to the bastion node or first master node (based on template and branch in use) using the credentials provided during deployment. Issue the following command:
-
-```bash
-ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml \
-e openshift_metrics_install_metrics=True \
--e openshift_metrics_cassandra_storage_type=dynamic
-
-ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml \
-e openshift_logging_install_logging=True \
--e openshift_logging_es_pvc_dynamic=true
-```
-
-### Azure Cloud Provider not in use
-
-```bash
-ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml \
-e openshift_metrics_install_metrics=True
-
-ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml \
--e openshift_logging_install_logging=True
-```
-
-## Install Open Service Broker for Azure (OSBA)
-
-Open Service Broker for Azure, or OSBA, lets you provision Azure Cloud Services directly from OpenShift. OSBA is an Open Service Broker API implementation for Azure. The Open Service Broker API is a spec that defines a common language for cloud providers that cloud native applications can use to manage cloud services without lock-in.
-
-To install OSBA on OpenShift, follow the instructions located here: https://github.com/Azure/open-service-broker-azure#openshift-project-template.
-> [!NOTE]
-> Only complete the steps in the OpenShift Project Template section and not the entire Installing section.
-
-## Next steps
--- [Getting started with OpenShift Container Platform](https://docs.openshift.com)
virtual-machines Openshift Container Platform 3X Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/openshift-container-platform-3x-prerequisites.md
- Title: OpenShift Container Platform 3.11 in Azure prerequisites
-description: Prerequisites to deploy OpenShift Container Platform 3.11 in Azure.
------- Previously updated : 10/23/2019---
-# Common prerequisites for deploying OpenShift Container Platform 3.11 in Azure
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-This article describes common prerequisites for deploying OpenShift Container Platform or OKD in Azure.
-
-The installation of OpenShift uses Ansible playbooks. Ansible uses Secure Shell (SSH) to connect to all cluster hosts to complete installation steps.
-
-When ansible makes the SSH connection to the remote hosts, it can't enter a password. For this reason, the private key can't have a password (passphrase) associated with it or deployment fails.
-
-Because the virtual machines (VMs) deploy via Azure Resource Manager templates, the same public key is used for access to all VMs. The corresponding private key must be on the VM that executes all the playbooks as well. To perform this action securely, an Azure key vault is used to pass the private key into the VM.
-
-If there's a need for persistent storage for containers, then persistent volumes are required. OpenShift supports Azure virtual hard disks (VHDs) for persistent volumes, but Azure must first be configured as the cloud provider.
-
-In this model, OpenShift:
-- Creates a VHD object in an Azure storage account or a managed disk.
-- Mounts the VHD to a VM and formats the volume.
-- Mounts the volume to the pod.
-
-For this configuration to work, OpenShift needs permissions to perform these tasks in Azure. A service principal is used for this purpose. The service principal is a security account in Azure Active Directory that is granted permissions to resources.
-
-The service principal needs to have access to the storage accounts and VMs that make up the cluster. If all OpenShift cluster resources deploy to a single resource group, the service principal can be granted permissions to that resource group.
-
-This guide describes how to create the artifacts associated with the prerequisites.
-
-> [!div class="checklist"]
-> * Create a key vault to manage SSH keys for the OpenShift cluster.
-> * Create a service principal for use by the Azure Cloud Provider.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Sign in to Azure
-Sign in to your Azure subscription with the [az login](/cli/azure/reference-index) command and follow the on-screen directions, or click **Try it** to use Cloud Shell.
-
-```azurecli
-az login
-```
-
-## Create a resource group
-
-Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. You should use a dedicated resource group to host the key vault. This group is separate from the resource group into which the OpenShift cluster resources deploy.
-
-The following example creates a resource group named *keyvaultrg* in the *eastus* location:
-
-```azurecli
-az group create --name keyvaultrg --location eastus
-```
-
-## Create a key vault
-Create a key vault to store the SSH keys for the cluster with the [az keyvault create](/cli/azure/keyvault) command. The key vault name must be globally unique and must be enabled for template deployment or the deployment will fail with "KeyVaultParameterReferenceSecretRetrieveFailed" error.
-
-The following example creates a key vault named *keyvault* in the *keyvaultrg* resource group:
-
-```azurecli
-az keyvault create --resource-group keyvaultrg --name keyvault \
- --enabled-for-template-deployment true \
- --location eastus
-```
-
-## Create an SSH key
-An SSH key is needed to secure access to the OpenShift cluster. Create an SSH key pair by using the `ssh-keygen` command (on Linux or macOS):
-
-```bash
-ssh-keygen -f ~/.ssh/openshift_rsa -t rsa -N ''
-```
-
-> [!NOTE]
-> Your SSH key pair can't have a password / passphrase.
-
-For more information on SSH keys on Windows, see [How to create SSH keys on Windows](./ssh-from-windows.md). Be sure to export the private key in OpenSSH format.
-
-## Store the SSH private key in Azure Key Vault
-The OpenShift deployment uses the SSH key you created to secure access to the OpenShift master. To enable the deployment to securely retrieve the SSH key, store the key in Key Vault by using the following command:
-
-```azurecli
-az keyvault secret set --vault-name keyvault --name keysecret --file ~/.ssh/openshift_rsa
-```
-
-## Create a service principal
-OpenShift communicates with Azure by using a username and password or a service principal. An Azure service principal is a security identity that you can use with apps, services, and automation tools like OpenShift. You control and define the permissions as to which operations the service principal can perform in Azure. It's best to scope the permissions of the service principal to specific resource groups rather than the entire subscription.
-
-Create a service principal with [az ad sp create-for-rbac](/cli/azure/ad/sp) and output the credentials that OpenShift needs.
-
-The following example creates a service principal and assigns it contributor permissions to a resource group named *openshiftrg*.
-
-First, create the resource group named *openshiftrg*:
-
-```azurecli
-az group create -l eastus -n openshiftrg
-```
-
-Create service principal:
-
-```azurecli
-az group show --name openshiftrg --query id
-```
-
-Save the output of the command and use in place of $scope in next command
-
-```azurecli
-az ad sp create-for-rbac --name openshiftsp \
- --role Contributor --scopes $scope \
-```
-
-Take note of the appId property and password returned from the command:
-
-```json
-{
- "appId": "11111111-abcd-1234-efgh-111111111111",
- "displayName": "openshiftsp",
- "name": "http://openshiftsp",
- "password": {Strong Password},
- "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
-}
-```
-
- > [!WARNING]
- > Be sure to write down the secure password as it will not be possible to retrieve this password again.
-
-For more information on service principals, see [Create an Azure service principal with Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli).
-
-## Prerequisites applicable only to Resource Manager template
-
-Secrets will need to be created for the SSH private key (**sshPrivateKey**), Azure AD client secret (**aadClientSecret**), OpenShift admin password (**openshiftPassword**), and Red Hat Subscription Manager password or activation key (**rhsmPasswordOrActivationKey**). Additionally, if custom TLS/SSL certificates are used, then six additional secrets will need to be created - **routingcafile**, **routingcertfile**, **routingkeyfile**, **mastercafile**, **mastercertfile**, and **masterkeyfile**. These parameters will be explained in more detail.
-
-The template references specific secret names so you **must** use the bolded names listed above (case sensitive).
-
-### Custom Certificates
-
-By default, the template will deploy an OpenShift cluster using self-signed certificates for the OpenShift web console and the routing domain. If you want to use custom TLS/SSL certificates, set 'routingCertType' to 'custom' and 'masterCertType' to 'custom'. You'll need the CA, Cert, and Key files in .pem format for the certificates. It is possible to use custom certificates for one but not the other.
-
-You'll need to store these files in Key Vault secrets. Use the same Key Vault as the one used for the private key. Rather than require 6 additional inputs for the secret names, the template is hard-coded to use specific secret names for each of the TLS/SSL certificate files. Store the certificate data using the information from the following table.
-
-| Secret Name | Certificate file |
-||--|
-| mastercafile | master CA file |
-| mastercertfile | master CERT file |
-| masterkeyfile | master Key file |
-| routingcafile | routing CA file |
-| routingcertfile | routing CERT file |
-| routingkeyfile | routing Key file |
-
-Create the secrets using the Azure CLI. Below is an example.
-
-```azurecli
-az keyvault secret set --vault-name KeyVaultName -n mastercafile --file ~/certificates/masterca.pem
-```
-
-## Next steps
-
-This article covered the following topics:
-> [!div class="checklist"]
-> * Create a key vault to manage SSH keys for the OpenShift cluster.
-> * Create a service principal for use by the Azure Cloud Solution Provider.
-
-Next, deploy an OpenShift cluster:
-- [Deploy OpenShift Container Platform](./openshift-container-platform-3x.md)
-- [Deploy OpenShift Container Platform Self-Managed Marketplace Offer](./openshift-container-platform-3x-marketplace-self-managed.md)
virtual-machines Openshift Container Platform 3X Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/openshift-container-platform-3x-troubleshooting.md
- Title: Troubleshoot OpenShift Container Platform 3.11 deployment in Azure
-description: Troubleshoot OpenShift Container Platform 3.11 deployment in Azure.
------- Previously updated : 10/14/2019----
-# Troubleshoot OpenShift Container Platform 3.11 deployment in Azure
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-If the OpenShift cluster doesn't deploy successfully, the Azure portal will provide error output. The output may be difficult to read which makes it difficult to identify the problem. Quickly scan this output for exit code 3, 4 or 5. The following provides information on these three exit codes:
-- Exit code 3: Your Red Hat Subscription User Name / Password or Organization ID / Activation Key is incorrect
-- Exit code 4: Your Red Hat Pool ID is incorrect or there are no entitlements available
-- Exit code 5: Unable to provision Docker Thin Pool Volume
-
-For all other exit codes, connect to the host(s) via ssh to view the log files.
-
-**OpenShift Container Platform 3.11**
-
-SSH to the ansible playbook host. For the template or the Marketplace offer, use the bastion host. From the bastion, you can SSH to all other nodes in the cluster (master, infra, CNS, compute). You'll need to be root to view the log files. Root is disabled for SSH access by default so don't use root to SSH to other nodes.
-
-**OKD**
-
-SSH to the ansible playbook host. For the OKD template (version 3.9 and earlier), use the master-0 host. For the OKD template (version 3.10 and later), use the bastion host. From the ansible playbook host, you can SSH to all other nodes in the cluster (master, infra, CNS, compute). You'll need to be root (sudo su -) to view the log files. Root is disabled for SSH access by default so don't use root to SSH to other nodes.
-
-## Log files
-
-The log files (stderr and stdout) for the host preparation scripts are located in `/var/lib/waagent/custom-script/download/0` on all hosts. If an error occurred during the preparation of the host, view these log files to determine the error.
-
-If the preparation scripts ran successfully, then the log files in the `/var/lib/waagent/custom-script/download/1` directory of the ansible playbook host will need to be examined. If the error occurred during the actual installation of OpenShift, the stdout file will display the error. Use this information to contact Support for further assistance.
-
-Example output
-
-```json
-TASK [openshift_storage_glusterfs : Load heketi topology] **********************
-fatal: [mycluster-master-0]: FAILED! => {"changed": true, "cmd": ["oc", "--config=/tmp/openshift-glusterfs-ansible-IbhnUM/admin.kubeconfig", "rsh", "--namespace=glusterfs", "deploy-heketi-storage-1-d9xl5", "heketi-cli", "-s", "http://localhost:8080", "--user", "admin", "--secret", "VuoJURT0/96E42Vv8+XHfsFpSS8R20rH1OiMs3OqARQ=", "topology", "load", "--json=/tmp/openshift-glusterfs-ansible-IbhnUM/topology.json", "2>&1"], "delta": "0:00:21.477831", "end": "2018-05-20 02:49:11.912899", "failed": true, "failed_when_result": true, "rc": 0, "start": "2018-05-20 02:48:50.435068", "stderr": "", "stderr_lines": [], "stdout": "Creating cluster ... ID: 794b285745b1c5d7089e1c5729ec7cd2\n\tAllowing file volumes on cluster.\n\tAllowing block volumes on cluster.\n\tCreating node mycluster-cns-0 ... ID: 45f1a3bfc20a4196e59ebb567e0e02b4\n\t\tAdding device /dev/sdd ... OK\n\t\tAdding device /dev/sde ... OK\n\t\tAdding device /dev/sdf ... OK\n\tCreating node mycluster-cns-1 ... ID: 596f80d7bbd78a1ea548930f23135131\n\t\tAdding device /dev/sdc ... Unable to add device: Unable to execute command on glusterfs-storage-4zc42: Device /dev/sdc excluded by a filter.\n\t\tAdding device /dev/sde ... OK\n\t\tAdding device /dev/sdd ... OK\n\tCreating node mycluster-cns-2 ... ID: 42c0170aa2799559747622acceba2e3f\n\t\tAdding device /dev/sde ... OK\n\t\tAdding device /dev/sdf ... OK\n\t\tAdding device /dev/sdd ... OK", "stdout_lines": ["Creating cluster ... ID: 794b285745b1c5d7089e1c5729ec7cd2", "\tAllowing file volumes on cluster.", "\tAllowing block volumes on cluster.", "\tCreating node mycluster-cns-0 ... ID: 45f1a3bfc20a4196e59ebb567e0e02b4", "\t\tAdding device /dev/sdd ... OK", "\t\tAdding device /dev/sde ... OK", "\t\tAdding device /dev/sdf ... OK", "\tCreating node mycluster-cns-1 ... ID: 596f80d7bbd78a1ea548930f23135131", "\t\tAdding device /dev/sdc ... Unable to add device: Unable to execute command on glusterfs-storage-4zc42: Device /dev/sdc excluded by a filter.", "\t\tAdding device /dev/sde ... OK", "\t\tAdding device /dev/sdd ... OK", "\tCreating node mycluster-cns-2 ... ID: 42c0170aa2799559747622acceba2e3f", "\t\tAdding device /dev/sde ... OK", "\t\tAdding device /dev/sdf ... OK", "\t\tAdding device /dev/sdd ... OK"]}
-
-PLAY RECAP *********************************************************************
-mycluster-cns-0 : ok=146 changed=57 unreachable=0 failed=0
-mycluster-cns-1 : ok=146 changed=57 unreachable=0 failed=0
-mycluster-cns-2 : ok=146 changed=57 unreachable=0 failed=0
-mycluster-infra-0 : ok=143 changed=55 unreachable=0 failed=0
-mycluster-infra-1 : ok=143 changed=55 unreachable=0 failed=0
-mycluster-infra-2 : ok=143 changed=55 unreachable=0 failed=0
-mycluster-master-0 : ok=502 changed=198 unreachable=0 failed=1
-mycluster-master-1 : ok=348 changed=140 unreachable=0 failed=0
-mycluster-master-2 : ok=348 changed=140 unreachable=0 failed=0
-mycluster-node-0 : ok=143 changed=55 unreachable=0 failed=0
-mycluster-node-1 : ok=143 changed=55 unreachable=0 failed=0
-localhost : ok=13 changed=0 unreachable=0 failed=0
-
-INSTALLER STATUS ***************************************************************
-Initialization : Complete (0:00:39)
-Health Check : Complete (0:00:24)
-etcd Install : Complete (0:01:24)
-Master Install : Complete (0:14:59)
-Master Additional Install : Complete (0:01:10)
-Node Install : Complete (0:10:58)
-GlusterFS Install : In Progress (0:03:33)
- This phase can be restarted by running: playbooks/openshift-glusterfs/config.yml
-
-Failure summary:
-
- 1. Hosts: mycluster-master-0
- Play: Configure GlusterFS
- Task: Load heketi topology
- Message: Failed without returning a message.
-```
-
-The most common errors during installation are:
-
-1. Private key has passphrase
-2. Key vault secret with private key wasn't created correctly
-3. Service principal credentials were entered incorrectly
-4. Service principal doesn't have contributor access to the resource group
-
-### Private key has a passphrase
-
-You'll see an error that permission was denied for SSH. SSH to the ansible playbook host to check whether the private key has a passphrase.
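One quick check, for example, is to ask `ssh-keygen` to derive the public key from the private key; it prompts for a passphrase only if the key is protected by one:

```bash
# Prompts "Enter passphrase:" only if the key is encrypted with a passphrase
ssh-keygen -y -f ~/.ssh/id_rsa
```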
-
-### Key vault secret with private key wasn't created correctly
-
-The private key is copied to the ansible playbook host as `~/.ssh/id_rsa`. Confirm this file is correct by opening an SSH session to one of the cluster nodes from the ansible playbook host.
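For example (assuming the default key path; `mycluster-master-0` is a placeholder node name):

```bash
# Verify the key file parses; prints the key fingerprint on success
ssh-keygen -l -f ~/.ssh/id_rsa

# Test a connection to a cluster node from the ansible playbook host
ssh mycluster-master-0 hostname
```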
-
-### Service principal credentials were entered incorrectly
-
-This error occurs when incorrect service principal information was provided as input to the template or Marketplace offer. Make sure you use the correct appId (clientId) and password (clientSecret) for the service principal. Verify by issuing the following Azure CLI command.
-
-```azurecli
-az login --service-principal -u <client id> -p <client secret> -t <tenant id>
-```
-
-### Service principal doesn't have contributor access to the resource group
-
-If the Azure cloud provider is enabled, then the service principal used must have contributor access to the resource group. Verify by issuing the following Azure CLI command.
-
-```azurecli
-az group update -g <openshift resource group> --set tags.sptest=test
-```
-
-## Additional tools
-
-For some errors, you can also use the following commands to get more information:
-
-1. `systemctl status <service>`
-2. `journalctl -xe`
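For example, to dig into the container runtime on an OpenShift 3.x node (`docker` here is just an example unit name):

```bash
systemctl status docker
journalctl -xe -u docker --since "1 hour ago"
```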
virtual-machines Openshift Container Platform 3X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/openshift-container-platform-3x.md
- Title: Deploy OpenShift Container Platform 3.11 in Azure
-description: Deploy OpenShift Container Platform 3.11 in Azure.
-Previously updated : 04/05/2020
-# Deploy OpenShift Container Platform 3.11 in Azure
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-You can use one of several methods to deploy OpenShift Container Platform 3.11 in Azure:
-
-- You can manually deploy the necessary Azure infrastructure components and then follow the [OpenShift Container Platform documentation](https://docs.openshift.com/container-platform).
-- You can also use an existing [Resource Manager template](https://github.com/Microsoft/openshift-container-platform/) that simplifies the deployment of the OpenShift Container Platform cluster.
-- Another option is to use the Azure Marketplace offer.
-
-For all options, a Red Hat subscription is required. During the deployment, the Red Hat Enterprise Linux instance is registered to the Red Hat subscription and attached to the Pool ID that contains the entitlements for OpenShift Container Platform.
-Make sure you have a valid Red Hat Subscription Manager (RHSM) username, password, and Pool ID. Alternatively, you can use an Activation Key, Org ID, and Pool ID. You can verify this information by signing in to https://access.redhat.com.
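If you want to check the credentials and Pool ID from a RHEL host before deploying, a sketch using `subscription-manager` (angle-bracket values are placeholders):

```bash
# Register the host with your RHSM credentials
sudo subscription-manager register --username <rhsm-username> --password <rhsm-password>

# List available pools with OpenShift entitlements and confirm your Pool ID appears
sudo subscription-manager list --available --matches '*OpenShift*'
```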
--
-## Deploy using the OpenShift Container Platform Resource Manager 3.11 template
-
-### Private Clusters
-
-Deploying private OpenShift clusters requires more than just not having a public IP associated to the master load balancer (web console) or to the infra load balancer (router). A private cluster generally uses a custom DNS server (not the default Azure DNS), a custom domain name (such as contoso.com), and pre-defined virtual network(s). For private clusters, you need to configure your virtual network with all the appropriate subnets and DNS server settings in advance. Then use **existingMasterSubnetReference**, **existingInfraSubnetReference**, **existingCnsSubnetReference**, and **existingNodeSubnetReference** to specify the existing subnet for use by the cluster.
-
-If private master is selected (**masterClusterType**=private), a static private IP needs to be specified for **masterPrivateClusterIp**. This IP will be assigned to the front end of the master load balancer. The IP must be within the CIDR for the master subnet and not in use. **masterClusterDnsType** must be set to "custom" and the master DNS name must be provided for **masterClusterDns**. The DNS name must map to the static Private IP and will be used to access the console on the master nodes.
-
-If private router is selected (**routerClusterType**=private), a static private IP needs to be specified for **routerPrivateClusterIp**. This IP will be assigned to the front end of the infra load balancer. The IP must be within the CIDR for the infra subnet and not in use. **routingSubDomainType** must be set to "custom" and the wildcard DNS name for routing must be provided for **routingSubDomain**.
-
-If private masters and private router are selected, the custom domain name must also be entered for **domainName**.
-
-After successful deployment, the Bastion Node is the only node with a public IP that you can ssh into. Even if the master nodes are configured for public access, they aren't exposed for ssh access.
-
-To deploy using the Resource Manager template, you use a parameters file to supply the input parameters. To further customize the deployment, fork the GitHub repo and change the appropriate items.
-
-Some common customization options include, but aren't limited to:
-
-- Bastion VM size (variable in azuredeploy.json)
-- Naming conventions (variables in azuredeploy.json)
-- OpenShift cluster specifics, modified via hosts file (deployOpenShift.sh)
-
-### Configure the parameters file
-
-The [OpenShift Container Platform template](https://github.com/Microsoft/openshift-container-platform) has multiple branches available for different versions of OpenShift Container Platform. Based on your needs, you can deploy directly from the repo or you can fork the repo and make custom changes to the templates or scripts before deploying.
-
-Use the `appId` value from the service principal you created earlier for the `aadClientId` parameter.
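If you need to look up the `appId`, one option is the Azure CLI (the display name is a placeholder for whatever you named your service principal):

```azurecli
az ad sp list --display-name <service-principal-name> --query "[].appId" --output tsv
```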
-
-The following example shows a parameters file named azuredeploy.parameters.json with all the required inputs.
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "_artifactsLocation": {
- "value": "https://raw.githubusercontent.com/Microsoft/openshift-container-platform/master"
- },
- "location": {
- "value": "eastus"
- },
- "masterVmSize": {
- "value": "Standard_E2s_v3"
- },
- "infraVmSize": {
- "value": "Standard_D4s_v3"
- },
- "nodeVmSize": {
- "value": "Standard_D4s_v3"
- },
- "cnsVmSize": {
- "value": "Standard_E4s_v3"
- },
- "osImageType": {
- "value": "defaultgallery"
- },
- "marketplaceOsImage": {
- "value": {
- "publisher": "RedHat",
- "offer": "RHEL",
- "sku": "7-RAW",
- "version": "latest"
- }
- },
- "storageKind": {
- "value": "changeme"
- },
- "openshiftClusterPrefix": {
- "value": "changeme"
- },
- "minorVersion": {
- "value": "69"
- },
- "masterInstanceCount": {
- "value": 3
- },
- "infraInstanceCount": {
- "value": 3
- },
- "nodeInstanceCount": {
- "value": 3
- },
- "cnsInstanceCount": {
- "value": 3
- },
- "osDiskSize": {
- "value": 64
- },
- "dataDiskSize": {
- "value": 64
- },
- "cnsGlusterDiskSize": {
- "value": 128
- },
- "adminUsername": {
- "value": "changeme"
- },
- "enableMetrics": {
- "value": "false"
- },
- "enableLogging": {
- "value": "false"
- },
- "enableCNS": {
- "value": "false"
- },
- "rhsmUsernameOrOrgId": {
- "value": "changeme"
- },
- "rhsmPoolId": {
- "value": "changeme"
- },
- "rhsmBrokerPoolId": {
- "value": "changeme"
- },
- "sshPublicKey": {
- "value": "GEN-SSH-PUB-KEY"
- },
- "keyVaultSubscriptionId": {
- "value": "255a325e-8276-4ada-af8f-33af5658eb34"
- },
- "keyVaultResourceGroup": {
- "value": "changeme"
- },
- "keyVaultName": {
- "value": "changeme"
- },
- "enableAzure": {
- "value": "true"
- },
- "aadClientId": {
- "value": "changeme"
- },
- "domainName": {
- "value": "contoso.com"
- },
- "masterClusterDnsType": {
- "value": "default"
- },
- "masterClusterDns": {
- "value": "console.contoso.com"
- },
- "routingSubDomainType": {
- "value": "nipio"
- },
- "routingSubDomain": {
- "value": "apps.contoso.com"
- },
- "virtualNetworkNewOrExisting": {
- "value": "new"
- },
- "virtualNetworkName": {
- "value": "changeme"
- },
- "addressPrefixes": {
- "value": "10.0.0.0/14"
- },
- "masterSubnetName": {
- "value": "changeme"
- },
- "masterSubnetPrefix": {
- "value": "10.1.0.0/16"
- },
- "infraSubnetName": {
- "value": "changeme"
- },
- "infraSubnetPrefix": {
- "value": "10.2.0.0/16"
- },
- "nodeSubnetName": {
- "value": "changeme"
- },
- "nodeSubnetPrefix": {
- "value": "10.3.0.0/16"
- },
- "existingMasterSubnetReference": {
- "value": "/subscriptions/abc686f6-963b-4e64-bff4-99dc369ab1cd/resourceGroups/vnetresourcegroup/providers/Microsoft.Network/virtualNetworks/openshiftvnet/subnets/mastersubnet"
- },
- "existingInfraSubnetReference": {
- "value": "/subscriptions/abc686f6-963b-4e64-bff4-99dc369ab1cd/resourceGroups/vnetresourcegroup/providers/Microsoft.Network/virtualNetworks/openshiftvnet/subnets/infrasubnet"
- },
- "existingCnsSubnetReference": {
- "value": "/subscriptions/abc686f6-963b-4e64-bff4-99dc369ab1cd/resourceGroups/vnetresourcegroup/providers/Microsoft.Network/virtualNetworks/openshiftvnet/subnets/cnssubnet"
- },
- "existingNodeSubnetReference": {
- "value": "/subscriptions/abc686f6-963b-4e64-bff4-99dc369ab1cd/resourceGroups/vnetresourcegroup/providers/Microsoft.Network/virtualNetworks/openshiftvnet/subnets/nodesubnet"
- },
- "masterClusterType": {
- "value": "public"
- },
- "masterPrivateClusterIp": {
- "value": "10.1.0.200"
- },
- "routerClusterType": {
- "value": "public"
- },
- "routerPrivateClusterIp": {
- "value": "10.2.0.200"
- },
- "routingCertType": {
- "value": "selfsigned"
- },
- "masterCertType": {
- "value": "selfsigned"
- }
- }
-}
-```
-
-Replace the parameters with your specific information.
-
-Different releases may have different parameters, so verify the necessary parameters for the branch you use.
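One way to review them, for example, is to check out the branch locally and inspect its template (`<branch-name>` is a placeholder):

```bash
git clone https://github.com/Microsoft/openshift-container-platform.git
cd openshift-container-platform
git branch -r                # list the available release branches
git checkout <branch-name>   # switch to the branch for your version
less azuredeploy.json        # review the parameters that branch expects
```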
-
-### azuredeploy.parameters.json file explained
-
-| Property | Description | Valid Options | Default Value |
-|-|-|-|-|
-| `_artifactsLocation` | URL for artifacts (json, scripts, etc.) | | https:\//raw.githubusercontent.com/Microsoft/openshift-container-platform/master |
-| `location` | Azure region to deploy resources to | | |
-| `masterVmSize` | Size of the Master VM. Select from one of the allowed VM sizes listed in the azuredeploy.json file | | Standard_E2s_v3 |
-| `infraVmSize` | Size of the Infra VM. Select from one of the allowed VM sizes listed in the azuredeploy.json file | | Standard_D4s_v3 |
-| `nodeVmSize` | Size of the App Node VM. Select from one of the allowed VM sizes listed in the azuredeploy.json file | | Standard_D4s_v3 |
-| `cnsVmSize` | Size of the Container Native Storage (CNS) Node VM. Select from one of the allowed VM sizes listed in the azuredeploy.json file | | Standard_E4s_v3 |
-| `osImageType` | The RHEL image to use. defaultgallery: On-Demand; marketplace: third-party image | defaultgallery <br> marketplace | defaultgallery |
-| `marketplaceOsImage` | If `osImageType` is marketplace, then enter the appropriate values for 'publisher', 'offer', 'sku', 'version' of the marketplace offer. This parameter is an object type | | |
-| `storageKind` | The type of storage to be used | managed<br> unmanaged | managed |
-| `openshiftClusterPrefix` | Cluster Prefix used to configure hostnames for all nodes. Between 1 and 20 characters | | mycluster |
-| `minorVersion` | The minor version of OpenShift Container Platform 3.11 to deploy | | 69 |
-| `masterInstanceCount` | Number of Masters nodes to deploy | 1, 3, 5 | 3 |
-| `infraInstanceCount` | Number of infra nodes to deploy | 1, 2, 3 | 3 |
-| `nodeInstanceCount` | Number of Nodes to deploy | 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30 | 2 |
-| `cnsInstanceCount` | Number of CNS nodes to deploy | 3, 4 | 3 |
-| `osDiskSize` | Size of OS disk for the VM (in GB) | 64, 128, 256, 512, 1024, 2048 | 64 |
-| `dataDiskSize` | Size of data disk to attach to nodes for Docker volume (in GB) | 32, 64, 128, 256, 512, 1024, 2048 | 64 |
-| `cnsGlusterDiskSize` | Size of data disk to attach to CNS nodes for use by glusterfs (in GB) | 32, 64, 128, 256, 512, 1024, 2048 | 128 |
-| `adminUsername` | Admin username for both OS (VM) login and initial OpenShift user | | ocpadmin |
-| `enableMetrics` | Enable Metrics. Metrics require more resources so select proper size for Infra VM | true <br> false | false |
-| `enableLogging` | Enable Logging. elasticsearch pod requires 8 GB RAM so select proper size for Infra VM | true <br> false | false |
-| `enableCNS` | Enable Container Native Storage | true <br> false | false |
-| `rhsmUsernameOrOrgId` | Red Hat Subscription Manager Username or Organization ID | | |
-| `rhsmPoolId` | The Red Hat Subscription Manager Pool ID that contains your OpenShift entitlements for compute nodes | | |
-| `rhsmBrokerPoolId` | The Red Hat Subscription Manager Pool ID that contains your OpenShift entitlements for masters and infra nodes. If you don't have different pool IDs, enter same pool ID as 'rhsmPoolId' | | |
-| `sshPublicKey` | Copy your SSH Public Key here | | |
-| `keyVaultSubscriptionId` | The Subscription ID of the subscription that contains the Key Vault | | |
-| `keyVaultResourceGroup` | The name of the Resource Group that contains the Key Vault | | |
-| `keyVaultName` | The name of the Key Vault you created | | |
-| `enableAzure` | Enable Azure Cloud Provider | true <br> false | true |
-| `aadClientId` | Azure Active Directory Client ID also known as Application ID for Service Principal | | |
-| `domainName` | Name of the custom domain name to use (if applicable). Set to "none" if not deploying fully private cluster | | none |
-| `masterClusterDnsType` | Domain type for OpenShift web console. 'default' will use DNS label of master infra public IP. 'custom' allows you to define your own name | default <br> custom | default |
-| `masterClusterDns` | The custom DNS name to use to access the OpenShift web console if you selected 'custom' for `masterClusterDnsType` | | console.contoso.com |
-| `routingSubDomainType` | If set to `nipio`, `routingSubDomain` will use `nip.io`. Use 'custom' if you have your own domain that you want to use for routing | `nipio` <br> custom | `nipio` |
-| `routingSubDomain` | The wildcard DNS name you want to use for routing if you selected 'custom' for `routingSubDomainType` | | apps.contoso.com |
-| `virtualNetworkNewOrExisting` | Select whether to use an existing Virtual Network or create a new Virtual Network | existing <br> new | new |
-| `virtualNetworkResourceGroupName` | Name of the Resource Group for the new Virtual Network if you selected 'new' for `virtualNetworkNewOrExisting` | | resourceGroup().name |
-| `virtualNetworkName` | The name of the new Virtual Network to create if you selected 'new' for `virtualNetworkNewOrExisting` | | openshiftvnet |
-| `addressPrefixes` | Address prefix of the new virtual network | | 10.0.0.0/14 |
-| `masterSubnetName` | The name of the master subnet | | mastersubnet |
-| `masterSubnetPrefix` | CIDR used for the master subnet - needs to be a subset of the addressPrefix | | 10.1.0.0/16 |
-| `infraSubnetName` | The name of the infra subnet | | infrasubnet |
-| `infraSubnetPrefix` | CIDR used for the infra subnet - needs to be a subset of the addressPrefix | | 10.2.0.0/16 |
-| `nodeSubnetName` | The name of the node subnet | | nodesubnet |
-| `nodeSubnetPrefix` | CIDR used for the node subnet - needs to be a subset of the addressPrefix | | 10.3.0.0/16 |
-| `existingMasterSubnetReference` | Full reference to existing subnet for master nodes. Not needed if creating new vNet / Subnet | | |
-| `existingInfraSubnetReference` | Full reference to existing subnet for infra nodes. Not needed if creating new vNet / Subnet | | |
-| `existingCnsSubnetReference` | Full reference to existing subnet for CNS nodes. Not needed if creating new vNet / Subnet | | |
-| `existingNodeSubnetReference` | Full reference to existing subnet for compute nodes. Not needed if creating new vNet / Subnet | | |
-| `masterClusterType` | Specify whether the cluster uses private or public master nodes. If private is chosen, the master nodes won't be exposed to the Internet via a public IP. Instead, it will use the private IP specified in the `masterPrivateClusterIp` | public <br> private | public |
-| `masterPrivateClusterIp` | If private master nodes are selected, then a private IP address must be specified for use by the internal load balancer for master nodes. This static IP must be within the CIDR block for the master subnet and not already in use. If public master nodes are selected, this value won't be used but must still be specified | | 10.1.0.200 |
-| `routerClusterType` | Specify whether the cluster uses private or public infra nodes. If private is chosen, the infra nodes won't be exposed to the Internet via a public IP. Instead, it will use the private IP specified in the `routerPrivateClusterIp` | public <br> private | public |
-| `routerPrivateClusterIp` | If private infra nodes are selected, then a private IP address must be specified for use by the internal load balancer for infra nodes. This static IP must be within the CIDR block for the infra subnet and not already in use. If public infra nodes are selected, this value won't be used but must still be specified | | 10.2.0.200 |
-| `routingCertType` | Use custom certificate for routing domain or the default self-signed certificate - follow instructions in **Custom Certificates** section | selfsigned <br> custom | selfsigned |
-| `masterCertType` | Use custom certificate for master domain or the default self-signed certificate - follow instructions in **Custom Certificates** section | selfsigned <br> custom | selfsigned |
-
-<br>
-
-### Deploy using Azure CLI
-
-> [!NOTE]
-> The following command requires Azure CLI 2.0.8 or later. You can verify the CLI version with the `az --version` command. To update the CLI version, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-The following example deploys the OpenShift cluster and all related resources into a resource group named openshiftrg, with a deployment name of myOpenShiftCluster. The template is referenced directly from the GitHub repo, and a local parameters file named azuredeploy.parameters.json file is used.
-
-```azurecli
-az deployment group create -g openshiftrg --name myOpenShiftCluster \
- --template-uri https://raw.githubusercontent.com/Microsoft/openshift-container-platform/master/azuredeploy.json \
- --parameters @./azuredeploy.parameters.json
-```
-
-The deployment takes at least 60 minutes to complete, based on the total number of nodes deployed and options configured. The Bastion DNS FQDN and URL of the OpenShift console prints to the terminal when the deployment finishes.
-
-```json
-{
- "Bastion DNS FQDN": "bastiondns4hawllzaavu6g.eastus.cloudapp.azure.com",
- "OpenShift Console URL": "http://openshiftlb.eastus.cloudapp.azure.com/console"
-}
-```
-
-If you don't want to tie up the command line waiting for the deployment to complete, add `--no-wait` as one of the options for the group deployment. The output from the deployment can be retrieved from the Azure portal in the deployment section for the resource group.
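For example, a non-blocking variant of the earlier command, followed by a later status check (same resource group and deployment name as above):

```azurecli
az deployment group create -g openshiftrg --name myOpenShiftCluster \
    --template-uri https://raw.githubusercontent.com/Microsoft/openshift-container-platform/master/azuredeploy.json \
    --parameters @./azuredeploy.parameters.json \
    --no-wait

# Later, check the deployment state and retrieve the outputs
az deployment group show -g openshiftrg --name myOpenShiftCluster \
    --query "properties.{state:provisioningState, outputs:outputs}"
```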
-
-## Connect to the OpenShift cluster
-
-When the deployment finishes, retrieve the connection information from the output section of the deployment. Connect to the OpenShift console with your browser by using the **OpenShift Console URL**. You can also SSH to the Bastion host. The following example uses the admin username clusteradmin and the bastion public IP DNS FQDN bastiondns4hawllzaavu6g.eastus.cloudapp.azure.com:
-
-```bash
-$ ssh clusteradmin@bastiondns4hawllzaavu6g.eastus.cloudapp.azure.com
-```
-
-## Clean up resources
-
-Use the [az group delete](/cli/azure/group) command to remove the resource group, OpenShift cluster, and all related resources when they're no longer needed.
-
-```azurecli
-az group delete --name openshiftrg
-```
-
-## Next steps
-
-- [Post-deployment tasks](./openshift-container-platform-3x-post-deployment.md)
-- [Troubleshoot OpenShift deployment in Azure](./openshift-container-platform-3x-troubleshooting.md)
-- [Getting started with OpenShift Container Platform](https://docs.openshift.com)
virtual-machines Tag Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/tag-cli.md
- Title: How to tag an Azure virtual machine using the Azure CLI
-description: Learn about tagging a virtual machine using the Azure CLI.
-Previously updated : 11/11/2020
-# How to tag a VM using the Azure CLI
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-This article describes how to tag a VM using the Azure CLI. Tags are user-defined key/value pairs which can be placed directly on a resource or a resource group. Azure currently supports up to 50 tags per resource and resource group. Tags may be placed on a resource at the time of creation or added to an existing resource. You can also tag a virtual machine using Azure [PowerShell](tag-powershell.md).
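For example, you can also apply tags when the VM is created (the VM name, image alias, and tag values here are examples; `Ubuntu2204` is an alias available in recent CLI versions):

```azurecli-interactive
az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --image Ubuntu2204 \
    --generate-ssh-keys \
    --tags Department=MyDepartment Application=MyApp1
```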
--
-You can view all properties for a given VM, including the tags, using `az vm show`.
-
-```azurecli-interactive
-az vm show --resource-group myResourceGroup --name myVM --query tags
-```
-
-To add a new VM tag through the Azure CLI, you can use the `az vm update` command along with the `--set` parameter:
-
-```azurecli-interactive
-az vm update \
- --resource-group myResourceGroup \
- --name myVM \
- --set tags.myNewTagName1=myNewTagValue1 tags.myNewTagName2=myNewTagValue2
-```
-
-To remove tags, you can use the `--remove` parameter in the `az vm update` command.
-
-```azurecli-interactive
-az vm update \
- --resource-group myResourceGroup \
- --name myVM \
- --remove tags.myNewTagName1
-```
-
-Now that we have applied tags to our resources using the Azure CLI and the portal, let's take a look at the usage details to see the tags in the billing portal.
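Before you open the billing portal, you can also confirm which resources carry a given tag from the CLI, for example:

```azurecli-interactive
az resource list --tag Department=MyDepartment --query "[].{name:name, type:type}" --output table
```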
-
-### Next steps
-
-- To learn more about tagging your Azure resources, see [Azure Resource Manager Overview](../azure-resource-manager/management/overview.md) and [Using Tags to organize your Azure Resources](../azure-resource-manager/management/tag-resources.md).
-- To see how tags can help you manage your use of Azure resources, see [Understanding your Azure Bill](../cost-management-billing/understand/review-individual-bill.md).
virtual-machines Tag Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/tag-portal.md
- Title: How to tag a VM using the Azure portal
-description: Learn about tagging a virtual machine using the Azure portal.
-Previously updated : 11/11/2020
-# Tagging a VM using the portal
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-This article describes how to add tags to a VM using the portal. Tags are user-defined key/value pairs which can be placed directly on a resource or a resource group. Azure currently supports up to 50 tags per resource and resource group. Tags may be placed on a resource at the time of creation or added to an existing resource.
--
-1. Navigate to your VM in the portal.
-1. In **Essentials**, select **Click here to add tags**.
-
- :::image type="content" source="media/tag/azure-portal-tag.png" alt-text="Screenshot of the Essentials section of the VM page.":::
-
-1. Add a value for **Name** and **Value**, and then select **Save**.
-
- :::image type="content" source="media/tag/key-value.png" alt-text="Screenshot of the page for adding a key value pair as a tag.":::
-
-### Next steps
-
-- To learn more about tagging your Azure resources, see [Azure Resource Manager Overview](../azure-resource-manager/management/overview.md) and [Using Tags to organize your Azure Resources](../azure-resource-manager/management/tag-resources.md).
-- To see how tags can help you manage your use of Azure resources, see [Understanding your Azure Bill](../cost-management-billing/understand/review-individual-bill.md).
virtual-machines Tag Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/tag-powershell.md
- Title: How to tag a VM using PowerShell
-description: Learn about tagging a virtual machine using PowerShell
-Previously updated : 11/11/2020
-# How to tag a virtual machine in Azure using PowerShell
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-This article describes how to tag a VM in Azure using PowerShell. Tags are user-defined key/value pairs which can be placed directly on a resource or a resource group. Azure currently supports up to 50 tags per resource and resource group. Tags may be placed on a resource at the time of creation or added to an existing resource. If you want to tag a virtual machine using the Azure CLI, see [How to tag a virtual machine in Azure using the Azure CLI](tag-cli.md).
-
-Use the `Get-AzVM` cmdlet to view the current list of tags for your VM.
-
-```azurepowershell-interactive
-Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM" | Format-List -Property Tags
-```
-
-If your virtual machine already has tags, you'll see them in list format.
-
-To add tags, use the `Set-AzResource` command. When you update tags through PowerShell, the entire set of tags is replaced. If you're adding one tag to a resource that already has tags, include all of the tags that you want on the resource. The following example adds more tags to a resource through PowerShell cmdlets.
-
-Assign all of the current tags for the VM to the `$tags` variable, using the `Get-AzResource` and `Tags` property.
-
-```azurepowershell-interactive
-$tags = (Get-AzResource -ResourceGroupName myResourceGroup -Name myVM).Tags
-```
-
-To see the current tags, type the variable.
-
-```azurepowershell-interactive
-$tags
-```
-
-Here is what the output might look like:
-
-```output
-Key Value
-- --
-Department MyDepartment
-Application MyApp1
-Created By MyName
-Environment Production
-```
-
-In the following example, we add a tag called `Location` with the value `myLocation`. Use `+=` to append the new key/value pair to the `$tags` list.
-
-```azurepowershell-interactive
-$tags += @{Location="myLocation"}
-```
-
-Use `Set-AzResource` to set all of the tags defined in the *$tags* variable on the VM.
-
-```azurepowershell-interactive
-Set-AzResource -ResourceGroupName myResourceGroup -Name myVM -ResourceType "Microsoft.Compute/VirtualMachines" -Tag $tags
-```
-
-Use `Get-AzResource` to display all of the tags on the resource.
-
-```azurepowershell-interactive
-(Get-AzResource -ResourceGroupName myResourceGroup -Name myVM).Tags
-```
-
-The output should look something like the following, which now includes the new tag:
-
-```output
-Key Value
-- --
-Department MyDepartment
-Application MyApp1
-Created By MyName
-Environment Production
-Location myLocation
-```
-
-### Next steps
-
-- To learn more about tagging your Azure resources, see [Azure Resource Manager Overview](../azure-resource-manager/management/overview.md) and [Using Tags to organize your Azure Resources](../azure-resource-manager/management/tag-resources.md).
-- To see how tags can help you manage your use of Azure resources, see [Understanding your Azure Bill](../cost-management-billing/understand/review-individual-bill.md).
virtual-machines Tag Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/tag-template.md
- Title: How to tag a VM using a template
-description: Learn about tagging a virtual machine using a template.
-Previously updated : 10/26/2018
-# Tagging a VM using a template
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-This article describes how to tag a VM in Azure using a Resource Manager template. Tags are user-defined key/value pairs which can be placed directly on a resource or a resource group. Azure currently supports up to 50 tags per resource and resource group. Tags may be placed on a resource at the time of creation or added to an existing resource.
-
-[This template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-tags) places tags on the following resources: Compute (Virtual Machine), Storage (Storage Account), and Network (Public IP Address, Virtual Network, and Network Interface). This template is for a Windows VM but can be adapted for Linux VMs.
-
-Click the **Deploy to Azure** button from the [template link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-tags). This will navigate to the [Azure portal](https://portal.azure.com/) where you can deploy this template.
-
-![Simple deployment with Tags](./media/tag/deploy-to-azure-tags.png)
-
-This template includes the following tags: *Department*, *Application*, and *Created By*. You can add/edit these tags directly in the template if you would like different tag names.
-
-![Azure tags in a template](./media/tag/azure-tags-in-a-template.png)
-
-As you can see, the tags are defined as key/value pairs, separated by a colon (:). The tags must be defined in this format:
-
-```json
-"tags": {
- "Key1" : "Value1",
- "Key2" : "Value2"
-}
-```
-
-Save the template file after you finish editing it with the tags of your choice.
-
-Next, in the **Edit Parameters** section, you can fill out the values for your tags.
-
-![Edit Tags in Azure portal](./media/tag/edit-tags-in-azure-portal.png)
-
-Click **Create** to deploy this template with your tag values.
-
-### Next steps
-
-- To learn more about tagging your Azure resources, see [Azure Resource Manager Overview](../azure-resource-manager/management/overview.md) and [Using Tags to organize your Azure Resources](../azure-resource-manager/management/tag-resources.md).
-- To see how tags can help you manage your use of Azure resources, see [Understanding your Azure Bill](../cost-management-billing/understand/review-individual-bill.md).
virtual-machines Create Vm Specialized Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/create-vm-specialized-portal.md
Previously updated : 01/18/2019
Last updated : 02/24/2023
**Applies to:** :heavy_check_mark: Windows VMs
+
+> [!NOTE]
+> Customers are encouraged to use Azure Compute Gallery because all new features, like ARM64, Trusted Launch, and Confidential VM, are only supported through Azure Compute Gallery. If you have an existing VHD or managed image, you can use it as a source and create an Azure Compute Gallery image. For more information, see [Create an image definition and image version](../image-version.md).
+>
+> Creating an image instead of just attaching a disk means you can create multiple VMs from the same source disk.
+
+
There are several ways to create a virtual machine (VM) in Azure:

- If you already have a virtual hard disk (VHD) to use or you want to copy the VHD from an existing VM to use, you can create a new VM by *attaching* the VHD to the new VM as an OS disk.
There are several ways to create a virtual machine (VM) in Azure:
- You can create an Azure VM from an on-premises VHD by uploading the on-premises VHD and attaching it to a new VM. You use PowerShell or another tool to upload the VHD to a storage account, and then you create a managed disk from the VHD. For more information, see [Upload a specialized VHD](create-vm-specialized.md#option-2-upload-a-specialized-vhd).
+
+
> [!IMPORTANT]
>
> When you use a [specialized](shared-image-galleries.md#generalized-and-specialized-images) disk to create a new VM, the new VM retains the computer name of the original VM. Other computer-specific information (e.g. CMID) is also kept and, in some cases, this duplicate information could cause issues. When copying a VM, be aware of what types of computer-specific information your applications rely on.
We recommend that you limit the number of concurrent deployments to 20 VMs from
Create a snapshot and then create a disk from the snapshot. This strategy allows you to keep the original VHD as a fallback:
-1. From the [Azure portal](https://portal.azure.com), on the left menu, select **All services**.
-2. In the **All services** search box, enter **disks** and then select **Disks** to display the list of available disks.
+1. Open the [Azure portal](https://portal.azure.com).
+2. In the search box, enter **disks** and then select **Disks** to display the list of available disks.
3. Select the disk that you would like to use. The **Disk** page for that disk appears.
4. From the menu at the top, select **Create snapshot**.
5. Choose a **Resource group** for the snapshot. You can use either an existing resource group or create a new one.
6. Enter a **Name** for the snapshot.
-7. For **Snapshot type**, choose either **Full** or **Incremental**.
+7. For **Snapshot type**, choose **Full**.
8. For **Storage type**, choose **Standard HDD**, **Premium SSD**, or **Zone-redundant** storage.
-9. When you're done, select **Create** to create the snapshot.
-10. After the snapshot has been created, select **Create a resource** in the left menu.
+9. When you're done, select **Review + create** to create the snapshot.
+10. After the snapshot has been created, select **Home** > **Create a resource**.
11. In the search box, enter **managed disk** and then select **Managed Disks** from the list.
12. On the **Managed Disks** page, select **Create**.
13. Choose a **Resource group** for the disk. You can use either an existing resource group or create a new one. This selection will also be used as the resource group where you create the VM from the disk.
-14. Enter a **Name** for the disk.
+14. For **Region**, you must select the same region where the snapshot is located.
+15. Enter a **Name** for the disk.
16. In **Source type**, ensure **Snapshot** is selected.
17. In the **Source snapshot** drop-down, select the snapshot you want to use.
-18. For **Size**, choose either **Standard (HDD)** or **Premium (SSD)** storage.
-19. Make any other adjustments as needed and then select **Create** to create the disk.
+18. For **Size**, you can change the storage type and size as needed.
+19. Make any other adjustments as needed and then select **Review + create** to create the disk. Once validation passes, select **Create**.
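If you prefer scripting, a rough Azure CLI equivalent of the snapshot-and-disk steps above looks like the following (all resource names are examples):

```azurecli
# Snapshot the source OS disk, then create a new managed disk from the snapshot.
# The new disk must be created in the same region as the snapshot.
az snapshot create --resource-group myResourceGroup --name mySnapshot --source myOsDisk
az disk create --resource-group myResourceGroup --name myNewDisk --source mySnapshot
```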
## Create a VM from a disk

After you have the managed disk VHD that you want to use, you can create the VM in the portal:
-1. From the [Azure portal](https://portal.azure.com), on the left menu, select **All services**.
-2. In the **All services** search box, enter **disks** and then select **Disks** to display the list of available disks.
+1. In the search box, enter **disks** and then select **Disks** to display the list of available disks.
3. Select the disk that you would like to use. The **Disk** page for that disk opens.
-4. In the **Overview** page, ensure that **DISK STATE** is listed as **Unattached**. If it isn't, you might need to either detach the disk from the VM or delete the VM to free up the disk.
+4. In the **Essentials** section, ensure that **Disk state** is listed as **Unattached**. If it isn't, you might need to either detach the disk from the VM or delete the VM to free up the disk.
4. In the menu at the top of the page, select **Create VM**.
5. On the **Basics** page for the new VM, enter a **Virtual machine name** and either select an existing **Resource group** or create a new one.
6. For **Size**, select **Change size** to access the **Size** page.
-7. Select a VM size row and then choose **Select**.
+7. The disk name should be pre-filled in the **Image** section.
8. On the **Disks** page, you may notice that the "OS Disk Type" cannot be changed. This preselected value is configured at the point of snapshot or VHD creation and carries over to the new VM. If you need to modify the disk type, take a new snapshot from an existing VM or disk.
9. On the **Networking** page, you can either let the portal create all new resources or you can select an existing **Virtual network** and **Network security group**. The portal always creates a new network interface and public IP address for the new VM.
10. On the **Management** page, make any changes to the monitoring options.
After you have the managed disk VHD that you want to use, you can create the VM
## Next steps
-You can also use PowerShell to [upload a VHD to Azure and create a specialized VM](create-vm-specialized.md).
+You can also [create an image definition and image version](../image-version.md) from your VHD.
vpn-gateway Ipsec Ike Policy Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/ipsec-ike-policy-howto.md
The steps to create a VNet-to-VNet connection with an IPsec/IKE policy are simil
### Step 1 - Create the virtual network, VPN gateway, and local network gateway for TestVNet2
-Use the steps in the [Create a VNet-to-VNet connection](/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) article to create TestVNet2 and create a VNet-to-VNet connection to TestVNet1.
+Use the steps in the [Create a VNet-to-VNet connection](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) article to create TestVNet2 and create a VNet-to-VNet connection to TestVNet1.
Example values: