IEC Type 4: AR/VR oriented Edge Stack for Integrated Edge Cloud (IEC) Blueprint Family

Project Technical Lead: Bart Dong. Elected and Approved.

Wenping Ying. Elected 5/7/2019. Stepped Down.


Project Manager: Wenhui Zhang


Link to process / TSC review record: Graduation reviews


Blueprint species: 

Use Case Attributes

Description

Informational

Type

New Blueprint for VR/AR on the Network Edge

 

Blueprint Family

Integrated Edge Cloud (IEC)

 

Use Case

Deployment of a generic edge and cloud environment for VR/AR cloud streaming

 

Blueprint proposed Name

IEC Type 4: AR/VR oriented Edge Stack for Integrated Edge Cloud (IEC) Blueprint Family

 

Initial POD Cost (capex)

NVIDIA RTX GPUs, Chelsio T580-CR NICs, AMD Radeon GPUs.

Less than $120k (3 nodes)

 

Scale & Type

Generic blueprint PoC:

  •   One master node and up to 5 worker nodes with mixed Linux and optional Windows OS
  •   Each worker node is an x86/ARM server with NVIDIA RTX GPUs (Titan or GeForce, TBD) or AMD Radeon GPUs.

Large scale deployment:

  •  The number of worker-node servers (x86/ARM server or deep-edge class) is site dependent (footprint)
  •  vGPU- and federation-capable GPU class, e.g. NVIDIA Tesla K80, AMD Radeon GPUs
  •  Chelsio T580-CR NICs

 

Applications

Generic blueprint POD: small-scale cloud AR/VR rendering farm with generic OS.

Production/commercial service:

  1. Consumer applications: high-performance premium gaming, 3D/light-field video for movies, live concerts, events, location-based entertainment (LBE), etc.
  2. Enterprise applications: training/education, product design collaboration, manufacturing, maintenance, data analytics, etc.

 

Power and memory restrictions

N/A

 

Infrastructure orchestration

Docker 18.09.4 or above (19.03 may be needed to run optional Windows containers with NVIDIA or AMD GPU support) and Kubernetes 1.14.1 or above for container orchestration; VMware or OpenStack for VMs

OS - Ubuntu 18.04.2, Windows Server 2019

Undercloud orchestration - Airship v1.0 (TBD)
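The stated version floors can be expressed as a simple check. The sketch below is illustrative only (the version strings are hand-supplied, not probed from a live host), assuming the minimums above: Docker >= 18.09.4 (>= 19.03 when Windows containers with GPU support are needed) and Kubernetes >= 1.14.1.

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '18.09.4' into (18, 9, 4)."""
    return tuple(int(part) for part in v.split("."))

def meets_minimums(docker: str, k8s: str, windows_gpu: bool = False) -> bool:
    """Check the blueprint's stated version floors for Docker and Kubernetes."""
    # Windows containers with NVIDIA/AMD GPU support raise the Docker floor.
    docker_min = "19.03" if windows_gpu else "18.09.4"
    return (parse_version(docker) >= parse_version(docker_min)
            and parse_version(k8s) >= parse_version("1.14.1"))

print(meets_minimums("18.09.4", "1.14.1"))                    # True
print(meets_minimums("18.09.4", "1.14.1", windows_gpu=True))  # False: needs 19.03+
```

Tuple comparison handles mixed-length versions correctly here because the major component differs before the missing patch component matters.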

 

SDN

Calico with K8s, or SR-IOV / OVS-DPDK

 

Workload Type

VR and AR applications with a split-rendering runtime, running inside containers or VMs

 

Additional Details

The test configuration consists of three machines connected via an Ethernet switch: a master and 2 worker nodes, each with a TBD processor clocked at TBD GHz and TBD GB of RAM, running Ubuntu on the master and Windows Server 2019 or later on the workers. An MTU of 1450 B is configured (to compensate for the GTP tunnel header and to avoid fragmentation). Each Windows server is preconfigured with 2-3 VMs with a fixed GPU allocation per VM.
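The 1450 B inner MTU can be checked against the standard encapsulation overhead. The arithmetic below assumes GTP-U over UDP/IPv4 on a 1500 B Ethernet path, using the minimum header sizes from the relevant specifications; the slack beyond the minimum covers GTP-U extension headers or a larger outer header.

```python
# Why an inner MTU of 1450 B avoids fragmentation under GTP-U encapsulation.
PATH_MTU = 1500     # standard Ethernet MTU on the transport network
OUTER_IPV4 = 20     # outer IPv4 header (minimum, no options)
OUTER_UDP = 8       # UDP header carrying GTP-U
GTP_U = 8           # mandatory GTP-U header (up to 16 B with extensions)

overhead = OUTER_IPV4 + OUTER_UDP + GTP_U
max_inner_mtu = PATH_MTU - overhead
print(max_inner_mtu)        # 1464: a 1450 B inner MTU fits with 14 B of slack
assert 1450 <= max_inner_mtu
```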




As per the Akraino Community process and directed by TSC, a blueprint which has only one nominee for Project Technical Lead (PTL) will be the elected lead once at least one committer seconds the nomination after the close of nominations. If there are two or more, an election will take place.

Self-nominations start 23 April and run through 29 April.

Committer

| Committer | Company | Contact Info | Self-Nominate for PTL (Y/N) |
|---|---|---|---|
|  | Arm |  |  |
|  | PSU |  | N |
| Wenping Ying | HTC | wenping_ying@htc.com |  |
| Ryan Anderson | IBM | rranders@us.ibm.com |  |
| Vikram Siwach | MobiledgeX | vikram.siwach@mobiledgex.com |  |
| Kris Chaisanguanthum | Visby | chaisang@visby.io |  |
| Guoxi Wang | UC Irvine | guoxiw1@uci.edu |  |
|  | Arm | trevor.tao@arm.com |  |
|  | Arm | jingzhao.ni@arm.com |  |
|  | Arm | jianlin.lv@arm.com |  |
|  | Tencent | robertqiu@tencent.com | Stepped down |
|  | Juniper | sukhdev@juniper.net |  |
| Margarida Correia | Juniper | mcorreia@juniper.net |  |
| Bart Dong | Tencent | bartdong@tencent.com | Y |
| Christos Kolias | Orange | christos.kolias@orange.com |  |
| thorking | inwinSTACK | thor.c@inwinstack.com |  |
| Tide Wang | Phytium | wanghailong@phytium.com.cn |  |
| Yanning Wang | Phytium | wangyanning@phytium.com.cn |  |

Contributor

| Contributor | Company | Contact Info |
|---|---|---|
| Jianqiang Li | GDCNI | lijq@gdcni.cn |

Meeting Minutes:

  •  June 6th
  •  June 20th