
By: Tomas Fredberg, Cloud Infrastructure Reference Model developer (LFN/CNTT)

Note: Part 1 of this two-part blog series is available here.

Reference Model Basic Concepts

The First concept is the division into Underlay and Overlay Network layers, where the Shared Underlay Networking separates the Virtualization Infrastructure instances from each other and carries the traffic flows inside each Virtualization Infrastructure instance between its allocated HW resources (the servers).

Each Virtualization Infrastructure instance separates its Tenants from each other through the Overlay Networking, typically through a virtual Switch or virtual Router, commonly implemented in software.

Note that Network Functions with extreme performance requirements, e.g., the Packet Core User Plane, often must bypass the Virtualization Infrastructure instance's Overlay encapsulation by using SR-IOV directly on the NIC. These networks must then be encapsulated in the Shared Underlay Network, e.g., through a Virtual Termination End Point, thereby ensuring separation on the underlay network.
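The sketch below illustrates these two layers of separation with hypothetical Python classes; the names and data model are illustrative only and not part of the Reference Model. Tenant networks normally go through the instance's software Overlay, while an SR-IOV network bypasses it and must instead be terminated by the Shared Underlay, e.g., with a VTEP.

```python
# Minimal sketch (hypothetical classes/names, not a CNTT-defined API) of the two
# layers of separation: the Shared Underlay keeps Virtualization Infrastructure (VI)
# instances apart, while each VI's Overlay keeps its own Tenants apart.

from dataclasses import dataclass, field


@dataclass
class TenantNetwork:
    tenant: str
    uses_sriov: bool = False        # True -> bypasses the VI's software Overlay


@dataclass
class VirtualizationInfrastructure:
    name: str
    networks: list[TenantNetwork] = field(default_factory=list)

    def attach(self, net: TenantNetwork) -> str:
        self.networks.append(net)
        if net.uses_sriov:
            # High-performance path: no vSwitch/vRouter encapsulation; the Shared
            # Underlay must terminate it, e.g., with a VTEP on the NIC or fabric.
            return f"{self.name}/{net.tenant}: SR-IOV -> underlay VTEP termination"
        # Normal path: the VI's virtual Switch/Router encapsulates tenant traffic.
        return f"{self.name}/{net.tenant}: encapsulated by the VI Overlay (vSwitch)"


if __name__ == "__main__":
    vi = VirtualizationInfrastructure("VI-A")
    print(vi.attach(TenantNetwork("tenant-1")))
    print(vi.attach(TenantNetwork("upf-tenant", uses_sriov=True)))
```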

The Second concept is the need for a clear separation into a shared, managed HW Infrastructure Layer and a Virtualization Infrastructure Layer with separately managed Virtualization Infrastructure instances. This enables separate Operator organizations to take manageable responsibility for a single shared HW Infrastructure and allows separate administrative domains for the Virtualization Infrastructure instances. It protects each instance from accidental or malicious faulty configurations and limits access to each organization's own responsibility areas.

The diagram shows the Reference Model with an IaaS and a Bare Metal CaaS, where one of the Tenant Virtual Machines is used for a K8s environment.

The Third concept defines a layered Software Defined Networking (SDN) that must be modeled in alignment with the different administrative domains, which form the operational basis for each organization's management responsibility.

Consequently, an Orchestrator for a set of VNFs or CNFs in a specific Virtualization Infrastructure instance, together with its Virtualization Infrastructure Manager, can only be allowed to manage the Overlay Networking allocated to that instance by the Shared Underlay Networking in the HW Infrastructure Layer.

The management functionality that encapsulates and/or maps a Virtualization Infrastructure instance's Tenant Networks onto the provisioned Underlay Networking is modeled as an SDN Overlay (SDNo) instance, private to that Virtualization Infrastructure instance only.

The Shared Underlay Network is provisioned through a HW Infrastructure Orchestrator functionality that ensures separation between the different Virtualization Infrastructure instances, e.g., by assigning disjoint VNI ranges of the VXLAN protocol to each Virtualization Infrastructure instance. The resulting rules are installed and enforced on the Programmable Networking Fabric that implements the Shared Underlay Networking.
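As a minimal sketch of this provisioning step, assuming an illustrative allocator rather than any standardized API, the following Python keeps the per-instance VXLAN VNI ranges disjoint before the corresponding rules would be pushed to the fabric:

```python
# Sketch only: names and ranges are illustrative, not taken from the Reference Model.
# A HW Infrastructure Orchestrator could keep Virtualization Infrastructure instances
# separated on a VXLAN underlay by handing each one a non-overlapping VNI range.

from dataclasses import dataclass

VNI_MIN, VNI_MAX = 1, 2**24 - 1           # VXLAN VNIs are 24-bit identifiers


@dataclass(frozen=True)
class VniRange:
    start: int
    end: int                               # inclusive

    def overlaps(self, other: "VniRange") -> bool:
        return self.start <= other.end and other.start <= self.end


class UnderlayProvisioner:
    """Allocates non-overlapping VNI ranges, one per Virtualization Infrastructure."""

    def __init__(self) -> None:
        self._allocations: dict[str, VniRange] = {}

    def allocate(self, vi_instance: str, requested: VniRange) -> VniRange:
        if not (VNI_MIN <= requested.start <= requested.end <= VNI_MAX):
            raise ValueError("VNI range outside the 24-bit VXLAN space")
        for owner, rng in self._allocations.items():
            if rng.overlaps(requested):
                raise ValueError(f"range collides with allocation for {owner}")
        self._allocations[vi_instance] = requested
        # In a real deployment the matching filtering/forwarding rules would now be
        # pushed to the Programmable Networking Fabric; here we only record them.
        return requested


if __name__ == "__main__":
    prov = UnderlayProvisioner()
    print(prov.allocate("VI-A", VniRange(10_000, 19_999)))
    print(prov.allocate("VI-B", VniRange(20_000, 29_999)))
```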

For high-performance VNFs/CNFs, the SDNo controllers can request the Underlay Networking to encapsulate these networks, e.g., through installation of a Virtual Termination End Point. The Reference Model specifies that the HW Infrastructure Layer instance should provide such an interface for each SDNo instance. The Programmable Networking Fabric can be implemented in switch fabrics and/or SmartNICs, which can provide varying degrees of complex acceleration services for the HW Infrastructure Layer itself or offer them to the Application Layer VNFs/CNFs.
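A hypothetical shape for such a per-SDNo interface is sketched below; the class and method names are assumptions for illustration only, since no such API has been standardized:

```python
# Hypothetical sketch of the per-SDNo interface the HW Infrastructure Layer could
# expose; illustrative names only, no standardized API or data model exists yet.

from dataclasses import dataclass


@dataclass
class VtepRequest:
    sdno_instance: str      # which SDN Overlay instance is asking
    tenant_network: str     # the high-performance (e.g., SR-IOV) network to terminate
    vni: int                # must lie inside the VNI range allocated to this VI


class HwInfraUnderlayApi:
    """One instance of this interface would be offered to each SDNo instance."""

    def __init__(self, allowed_vnis: range) -> None:
        self._allowed_vnis = allowed_vnis

    def install_vtep(self, req: VtepRequest) -> str:
        # Enforce the separation decided by the HW Infrastructure Orchestrator:
        # an SDNo instance may only use VNIs from its own allocation.
        if req.vni not in self._allowed_vnis:
            raise PermissionError("VNI outside this SDNo instance's allocation")
        # A real implementation would program the switch fabric / SmartNIC here.
        return f"VTEP for {req.tenant_network} installed on VNI {req.vni}"


if __name__ == "__main__":
    api = HwInfraUnderlayApi(allowed_vnis=range(10_000, 20_000))
    print(api.install_vtep(VtepRequest("SDNo-A", "upf-user-plane", 10_042)))
```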

CNTT Reference Model Logical Architecture

As we bring the concepts and layers into the Reference Model and map in the ETSI NFV reference points, it becomes clear that some reference points are missing. For instance, the Container Run-Time environment is well defined, but the interfaces for CaaS Secondary networking for the CNFs, and their respective Infrastructure Service management interfaces, are missing. There are proprietary solutions available, but so far no standardized solutions or data models have been agreed upon.

Existing HW Infrastructure management solutions are for the most part vendor-specific for each class of equipment. A number of proprietary implementations of some HW Infrastructure Manager functions exist from HW OEM vendors, e.g., HPE, Dell and Cisco, which address the requirements of their own equipment, and some Virtualization Infrastructure solution vendors, such as VMware and Red Hat, have developed sets of verified HW Reference Designs. The result is that many operators have been forced to start from in-house or integrator-designed solutions based on open source components and scripts, with limited functionality and portability. The new LFN incubation project ODIM (Open Distributed Infrastructure Management) aims to build an open framework for HW Infrastructure Management components.

The newly introduced shared HW Infrastructure Layer is missing the interface from the Virtualization Layer instances; this is where Redfish (in DMTF) is working to extend the higher levels of its NorthBound APIs to also cover multi-client composable HW Resources. It is, however, not yet clear what the interaction, models and automation around networking will look like, or how they can be used to enable the assignment, dynamic provisioning and fulfillment interfaces of the Networking Resources in the Shared Underlay Network.

Both the Redfish and ODIM projects are exploring an intent-based data model where the HW Infrastructure Manager can request, over a Plug-in API, a desired new end state of the Networking Resources from a Switch Fabric controller (in the HW Resource Pool). The data model shall also be able to expose relevant status and telemetry, as well as the physical and logical topologies of the network, to enable Infrastructure automation from the Cloud Infrastructure Management stack.
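To make the idea concrete, here is an illustrative-only sketch of such an intent; the field names are assumptions and do not reflect the actual Redfish or ODIM data models.

```python
# Illustrative sketch of an intent-style request: a desired end state is handed to a
# Switch Fabric controller plug-in, which reports status back for automation.
# Field and class names are assumptions, not Redfish/ODIM schema.

from dataclasses import dataclass, field


@dataclass
class NetworkIntent:
    """Desired end state, not a list of imperative configuration steps."""
    vi_instance: str
    vni_range: tuple[int, int]
    isolation: str = "strict"                  # no traffic across VI instances
    telemetry: list[str] = field(default_factory=lambda: ["port-counters", "drops"])


class FabricControllerPlugin:
    """Stand-in for a Switch Fabric controller reached over a Plug-in API."""

    def apply(self, intent: NetworkIntent) -> dict:
        # A real controller would compute and push the fabric configuration that
        # realizes the intent, then expose status, telemetry and topology upward.
        return {
            "vi_instance": intent.vi_instance,
            "state": "converged",
            "exposed_telemetry": intent.telemetry,
        }


if __name__ == "__main__":
    plugin = FabricControllerPlugin()
    print(plugin.apply(NetworkIntent("VI-A", (10_000, 19_999))))
```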

It shall also be noted that existing SDN Controllers on the market tend to build on the paradigm that the Virtualization Infrastructure owns all the hardware and networking resources it runs on, and that there is no clear separation between underlay and overlay network management. They also normally lack functionality to make a clear separation between multiple Virtualization Infrastructure instances. This separation is something the Operators find highly desirable and that CNTT is addressing going forward.

To wrap it up, there has been much progress in defining the Cloud Infrastructure area, but much standardization and implementation work remains before the efforts can be considered fully interoperable, automated Cloud Infrastructure stacks. If you have a passion for this area, you are most welcome to get involved in the discussions and the relevant areas of development, whether of standards or implementations.

For more background and information on the Cloud iNfrastructure Telecom Taskforce (CNTT) and its Reference Model, I can recommend the blog “Open Cloud Infrastructure for Telecom Operators – Myth or reality?” by Walter Kozlowski and the LFN YouTube Channel webinar “Evolution of the Cloud Infrastructure Reference Model and its Applications”.

Note: CNTT will be merging with the OPNFV project early next year. Stay tuned for that exciting announcement.
