
Introduction

Goal: An innovative architecture for small-size edge clouds using data processors


To meet the rapidly increasing demand for 5G data and to take advantage of the low latency and high bandwidth of 5G technologies, the number of small-size datacenters is growing dramatically. Research projects more than 2.8 million cloudlet datacenters, each with fewer than 200 servers, deployed near 5G towers. In the IEC Type 5 project, we build an innovative architecture for small-size edge cloud computing using the latest data processors.


Fig. IEC and Cloudlet

Release 6

1. Brief Introduction of R6

In R6, we introduce an innovative networking architecture based on a PCIe data fabric to lower both the cost (CAPEX) and the power consumption (OPEX) of small clusters for edge cloud computing. Built on innovative data processors (DPU and XPU), the next-generation networking features:

  • A new networking architecture that lowers the TCO of edge infrastructure
  • TCP/IP compatibility and cloud-native support for DevOps and developers (see the sketch after this list)
  • Green operation to protect the environment for sustainable growth
  • Scalability and composability to meet dynamic workloads
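
To illustrate the TCP/IP compatibility point above, here is a minimal sketch in Python. It assumes the PCIe data fabric is exposed to the operating system as an ordinary Linux network interface with an IP address, so existing socket code runs unchanged; the peer address and port are placeholders, not values defined by this page.

    # Minimal sketch: unmodified TCP/IP client code running over the PCIe data fabric.
    # Assumption: the fabric appears as a normal Linux netdev, so the peer is reachable
    # by a plain IP address. The address and port below are illustrative placeholders.
    import socket

    PEER_ADDR = ("10.0.0.2", 5000)  # hypothetical peer on the other side of the PCIe fabric

    with socket.create_connection(PEER_ADDR, timeout=5) as sock:
        sock.sendall(b"ping over PCIe networking\n")
        reply = sock.recv(1024)
        print("peer replied:", reply.decode(errors="replace"))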


IEC Type 5                  Latest update
System Architecture         R6
Software & Platform API     -
Hardware Design             R6
CD log                      R6


2. System Architecture

2.1 Extending PCIe Transport

To achieve low cost and low power, in R6 we introduce technology to connect two servers directly with PCIe links via innovative data processors. In this scenario, we eliminate the legacy network adapters (NICs, optics, and legacy network switches). In a typical application, PCIe networking reduces the number of connection components by 75%.
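
As a rough illustration of this direct-attach scenario, the following Python sketch assumes a Linux host and simply enumerates the PCIe endpoints visible in sysfs, which is where a fabric-attached data processor shows up as a plain PCI device; no project-specific tooling or vendor IDs are implied.

    # Minimal sketch: list PCIe endpoints on a Linux host to confirm that the data
    # processor link is visible as an ordinary PCI device (no NIC, optics, or external
    # switch involved). Which IDs correspond to the DPU depends on the actual part used.
    from pathlib import Path

    PCI_ROOT = Path("/sys/bus/pci/devices")

    for dev in sorted(PCI_ROOT.iterdir()):
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        pclass = (dev / "class").read_text().strip()
        print(f"{dev.name}  vendor={vendor}  device={device}  class={pclass}")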


2.2 PCIe Networking and Cloud-on-Board

Taking advantage of PCIe networking, we can unify the system-on-board (SoB) connections and the cloud cluster topologies into one single, simple architecture, which we name the Cloud-on-Board (CoB) architecture.


In the CoB architecture, we can connect CPUs directly without additional adapters.

2.3 PCIe Extending DPU Cluster

Since the DPU is a PCIe-compatible device, we can further combine the DPU with PCIe networking. In R6, we introduce a hardware layer, that is, a physical link/fabric layer, between the DPU and the CPUs, as shown below. With this layer, we extend the DPU cluster size while still using the DPU management features.


IEC Type 5 System Architecture
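
As a loose illustration of the extra fabric layer, the following Python sketch models a few CPUs and a DPU attached to one shared PCIe fabric node and checks CPU-to-CPU reachability; the node names and topology are assumptions for illustration only, not a topology defined by this project.

    # Minimal sketch: represent the fabric layer as a graph and verify that every CPU
    # can reach every other CPU through the shared PCIe fabric (which also hosts the DPU).
    from collections import deque

    links = {
        "cpu0": {"fabric"},
        "cpu1": {"fabric"},
        "cpu2": {"fabric"},
        "dpu0": {"fabric"},
        "fabric": {"cpu0", "cpu1", "cpu2", "dpu0"},
    }

    def reachable(src: str, dst: str) -> bool:
        seen, todo = {src}, deque([src])
        while todo:
            node = todo.popleft()
            if node == dst:
                return True
            for nxt in links[node] - seen:
                seen.add(nxt)
                todo.append(nxt)
        return False

    cpus = ("cpu0", "cpu1", "cpu2")
    assert all(reachable(a, b) for a in cpus for b in cpus)
    print("all CPUs reach each other through the shared PCIe fabric")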


2.4 Roadmap

In R6, we introduce a PCIe-based data fabric. In the future, we will include CXL- and UCIe-based fabrics as well.




3. Software & Platform API

No changes from the previous release (R5).

4. Hardware Design

4.1 Cloud-on-Board (CoB) Architecture

For general computing, we introduce the CoB with three major components (a brief model sketch follows the figure below):

1) Edge Computing Module (ECM): CPU+RAM+OS SSD combo,

2) Edge Base Board (EB2): for PCIe Data Fabric,

3) Edge Adapter Module (EAM): a PCIe-compatible device, such as a GPU, NIC, or SSD.


All in PCIe Fabric
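
To make the composition of the three components explicit, here is a minimal Python sketch that models ECM, EB2, and EAM as plain data records; the field names and example values are assumptions for illustration, not a defined platform API.

    # Minimal sketch: the three CoB building blocks as plain data records.
    from dataclasses import dataclass, field

    @dataclass
    class EdgeComputingModule:   # ECM: CPU + RAM + OS SSD combo
        cpu: str
        ram_gb: int
        os_ssd_gb: int

    @dataclass
    class EdgeAdapterModule:     # EAM: any PCIe-compatible device (GPU, NIC, SSD, ...)
        kind: str

    @dataclass
    class EdgeBaseBoard:         # EB2: carries the PCIe data fabric
        ecms: list = field(default_factory=list)
        eams: list = field(default_factory=list)

    board = EdgeBaseBoard(
        ecms=[EdgeComputingModule(cpu="arm64-16core", ram_gb=64, os_ssd_gb=256)],
        eams=[EdgeAdapterModule(kind="GPU"), EdgeAdapterModule(kind="NVMe SSD")],
    )
    print(f"EB2 carries {len(board.ecms)} ECM(s) and {len(board.eams)} EAM(s) on one PCIe fabric")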


4.2 Networking Topology

In the CoB design, we have multiple networks: at least one PCIe network connecting the CPUs, plus optional additional connections such as traditional RJ45 management ports. Hence all CoB hardware is cloud-native compatible from the start (see the sketch after the figure below).


Networking in CoB system for Cloud Native Applications
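
As a small illustration of the mixed data/management networking described above, the Python sketch below lists the Linux network interfaces on a node and separates an assumed RJ45 management port from an assumed PCIe-fabric data interface; the interface-name prefixes are guesses for illustration only.

    # Minimal sketch: classify network interfaces on a CoB node into data and management roles.
    # Assumption: the management port uses a conventional Ethernet name, while the PCIe-fabric
    # netdev uses a fabric-specific prefix; real names depend on the driver actually used.
    from pathlib import Path

    NET = Path("/sys/class/net")
    MGMT_PREFIXES = ("eth", "eno", "enp")      # typical names for the RJ45 management port
    FABRIC_PREFIXES = ("ntb", "pcie", "dpu")   # hypothetical names for the PCIe data fabric netdev

    for iface in sorted(p.name for p in NET.iterdir()):
        if iface.startswith(FABRIC_PREFIXES):
            role = "data (PCIe fabric)"
        elif iface.startswith(MGMT_PREFIXES):
            role = "management (RJ45)"
        else:
            role = "other"
        print(f"{iface:12s} -> {role}")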

4.3 Cloud Native Server Reference Design

Finally, we pack all the components together and give a reference design below.




4.4 Test Environment for CoB Prototype

Please refer to https://dev.socnoc.ai. Send an email to demo@socnoc.ai for a test account.

5. CD log

  • System bringup log

Link:

  • Docker installation log

Link:

  • Docker cluster status log

Link:

  • Web portal screenshot

Link:

  • Demo video

Link:
