
This Akraino Blueprint, when completed, will be a complete vBBU (virtualized baseband unit) - vCU and vDU (central and distributed unit) design plan.  The blueprint will include hardware components, Wind River Linux OVP real-time low latency kernel, Airship, Middleware components, and ONAP (open network automation platform) for policy-driven service orchestration and automation.

Considering the diagram below, the primary performance criterion of the platform is the round-trip latency of SCTP (stream control transmission protocol) messages.  The fundamental operation of the system is that of a containerized application (running on Kubernetes/Docker) that initiates SCTP connections to eNodeB and gNodeB elements in the RAN (radio access network) and subscribes to what are known as “E2 messages”.  When a gNodeB generates an E2 message, it needs a response back within a short timeframe or the response becomes irrelevant.

Using this blueprint, it should be possible to identify and optimize bottlenecks affecting the total round-trip time of the E2 message and response.  Scaling characteristics to be determined include how many gNodeB elements can connect concurrently and how many E2 messages per unit time can be handled.  Capacity and node count will depend on whether the vDU/vCU are co-resident or separate, and scale will depend on the number of connected radio units.
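The round-trip path described above can be sketched with a small latency probe. The harness below is illustrative only: it uses a TCP echo loop on localhost as a portable stand-in for SCTP (a real vCU/vDU would open SCTP associations, e.g. via `IPPROTO_SCTP` on Linux), and the payload, round count, and port are arbitrary assumptions.

```python
import socket
import threading
import time

def echo_server(srv):
    # Accept one connection and echo each message straight back,
    # emulating an element that answers E2 messages immediately.
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)

def measure_rtt(host, port, payload=b"e2-indication", rounds=100):
    # Send `rounds` request/response pairs; return per-message
    # round-trip times in microseconds.
    samples = []
    with socket.create_connection((host, port)) as c:
        c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(rounds):
            t0 = time.perf_counter()
            c.sendall(payload)
            c.recv(4096)
            samples.append((time.perf_counter() - t0) * 1e6)
    return samples

# Loopback echo endpoint on an ephemeral port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

rtts = sorted(measure_rtt("127.0.0.1", port))
print(f"p50={rtts[len(rtts) // 2]:.0f}us  p99={rtts[int(len(rtts) * 0.99)]:.0f}us")
```

The same loop, pointed at real gNodeB endpoints over SCTP, gives the per-hop numbers needed to attribute round-trip budget to kernel, vswitch, and application stages.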

 The underpinning technology to achieve the desired low latency numbers is Wind River Linux OVP (open virtualization platform):

  • Low Latency - Better latency than enterprise-rt Linux used in REC blueprint.
  • Small Footprint - The Yocto based platform is smaller, and supports customizations that enterprise Linux does not.  
  • Lower CAPEX - Because the vDU count can be in the 100Ks, smaller footprint images can result in significant savings.
  • Improved utilization efficiency - BBU pooling via aggregated vDU/vCU

Airship will be responsible for the deployment, and thus Airship must run on Wind River Linux.

With respect to Akraino CI/CD integration, this blueprint should take advantage of the existing infrastructure.  

vDU will require support for SR-IOV, FPGA cards for hardware acceleration (maybe GPU), v-switch likely, NUMA awareness (including NUMA topology control), and CPU-pinning.
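The vDU requirements above map onto a Kubernetes pod spec roughly as follows. This is a hedged sketch, not a validated manifest: the pod/image names, the Multus network attachment name, and the SR-IOV resource name (`intel.com/sriov_netdevice`) all depend on local device-plugin and CNI configuration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vdu-example                          # hypothetical workload name
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net   # Multus attachment; name is an assumption
spec:
  containers:
  - name: vdu
    image: vdu:latest                        # placeholder image
    resources:
      requests:
        cpu: "8"                             # whole CPUs + Guaranteed QoS enable CPU pinning
        memory: 8Gi
        hugepages-1Gi: 4Gi
        intel.com/sriov_netdevice: "2"       # VF resource name set by device-plugin config
      limits:                                # limits == requests => Guaranteed QoS class
        cpu: "8"
        memory: 8Gi
        hugepages-1Gi: 4Gi
        intel.com/sriov_netdevice: "2"
```

With the CPU Manager static policy and Topology Manager enabled on the node, a spec of this shape gets exclusive, NUMA-aligned cores alongside its SR-IOV VFs.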

Suggested that Ironic is used for bare metal deployment.

Will there be virtualization all the way to the DU?  Assuming the 7.2 split, with the DU and CU together forming the vBBU.

The blueprint will be built from source code; the output will be RPM packages and then an ISO image.

Native Build capability will be an option for ease of development, but can be removed for delivery to achieve small footprint requirements.

What are the footprint requirements?  Builds should take a lean approach and sized only to the validated hardware.

Should the vswitch used for east-west traffic between the containers utilize VPP or OVS?

Currently the assumption is that only containers will be supported, not VMs (but that is open for discussion).


TO DO:

Determine packages that are needed in the blueprint.

Discussion of relevant middleware for DU.   Does Airship require complementary middleware for redundancy, failover and alarms? 

Create a pre-built RPM repository.  /etc/yum.repos.d should point here by default.
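A default repo definition dropped into /etc/yum.repos.d could look like the fragment below. The repo id, name, baseurl, and GPG key location are all placeholders; the eventual repository URL is not yet decided.

```ini
# /etc/yum.repos.d/akraino-rec.repo -- illustrative only; id, name,
# baseurl, and gpgkey are placeholders for the pre-built RPM repository.
[akraino-rec]
name=Akraino REC pre-built RPMs
baseurl=https://example.org/akraino/rec/rpms/
enabled=1
gpgcheck=1
gpgkey=https://example.org/akraino/rec/RPM-GPG-KEY
```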



Case Attributes

Description: Informational

Type: New blueprint for an appliance tuned to meet the requirements of the 5G distributed and centralized user plane

Blueprint Family - Proposed Name: Telco Appliance

Use Case: vRAN - DU, CU applications

Blueprint Proposed Name: Radio Access Cloud

Initial POD Cost (CAPEX): x86 general-purpose servers x 6

Scale & Type: x86/ARM OCP Open Edge x 6, or deep edge class

Applications: 5G vDU and/or vCU

Power Restrictions: Example only: less than 10 kW

Infrastructure Orchestration:

Kubernetes v1.15 or above

Docker 18.06.0-ce or above

OS - Wind River Linux LTS 18 (4.18.31-rt)

Undercloud orchestration - Airship v1.0

Low-latency real-time kernel

SDN: OVS, Calico

Workload Type: Containers

Additional Details:

CPU Manager for Kubernetes (CMK)

Node Feature Discovery (NFD) for Kubernetes

NUMA Topology Manager for Kubernetes

Multus

CNI plugins (support for SR-IOV CNI and Userspace CNI)

Device Plugin for Kubernetes (support for SR-IOV, FPGA, and QAT)

Ironic

Keystone

Helm

runc
