Blueprint overview/Introduction
This document covers both Integrated Edge Cloud Type 1 & 2.
Integrated Edge Cloud (IEC) is an Akraino approved blueprint family and part of the Akraino Edge Stack. It intends to develop a fully integrated edge infrastructure solution and is focused entirely on edge computing. This open source software stack provides the critical infrastructure needed for high performance, reduced latency, improved availability, lower operational overhead, scalability, security, and better fault management. The IEC project addresses multiple edge use cases and industries, not just the telco industry, and intends to develop solutions for and support of carrier, provider, and IoT networks.
...
- The hardware supported by IEC consists of edge servers mainly based on arm64, such as Huawei TaiShan, Marvell ThunderX, and Ampere arm64 servers; at the far edge, the supported end devices are the Marvell MACCHIATObin Double Shot or other Arm-based boxes/devices. The desired network connections are above 10 Gbit/s, which should satisfy the requirements of most current IEC applications.
- The installation scripts deploy a Kubernetes cluster (1 master and 2 worker nodes), the Calico CNI, Helm/Tiller, and related Kubernetes applications/services used for verification. The scripts can be run from the jump server, or manually on the servers that make up the cluster. The installation methods are introduced in the IEC Blueprints Installation Overview; a minimal verification sketch follows this list.
- Currently IEC uses Calico as the main container networking solution, since it offers high performance, a rich network policy model, wide support on Linux systems, and easy installation (the sketch after this list shows such a policy being applied). In the future, Contiv/VPP or OVN-Kubernetes may be used as high performance alternatives, since both can support DPDK-enabled high speed interface access.
- IEC supports the Akraino CI/CD requirements: IEC daily jobs (scheduled to run recurrently) deploy IEC using one of the agreed installers, run the testing suites, collect logs, and publish them.
- Currently IEC supports SDN Enabled Broadband Access (SEBA) as its first use case. The installation scripts for SEBA on Arm and its related source repositories are developed and/or integrated in the IEC source code repository. The SEBA components have been ported to arm64 servers with Helm chart installation support.
- So far IEC has three approved types, Type 1, Type 2, and Type 3, as its supported running types; further types, Type 4 and Type 5, are under review. IEC is still enriching its use cases as development progresses.
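Once the installation scripts above have finished, the result can be sanity-checked from the jump server. The following minimal Python sketch assumes the official `kubernetes` Python client is installed and a kubeconfig for the deployed cluster is available; the policy name and the 3-node count simply restate this overview and are otherwise illustrative, not part of the IEC scripts. It lists the nodes, confirms they are Ready, and applies a basic Kubernetes NetworkPolicy of the kind that Calico enforces:

```python
# Minimal verification sketch (illustrative only, not part of the IEC scripts).
# Assumes: `pip install kubernetes` and a kubeconfig for the deployed cluster.
from kubernetes import client, config

config.load_kube_config()          # e.g. ~/.kube/config copied from the master node
core = client.CoreV1Api()
net = client.NetworkingV1Api()

# 1. Expect one master and two worker nodes, all in Ready state.
nodes = core.list_node().items
for node in nodes:
    ready = next(c.status for c in node.status.conditions if c.type == "Ready")
    print(f"{node.metadata.name}: Ready={ready}")
assert len(nodes) == 3, "expected 1 master + 2 workers"

# 2. Apply a simple ingress NetworkPolicy; Calico is the component that enforces it.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-same-namespace"),   # hypothetical name
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),                   # select all pods
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector())])],
    ),
)
net.create_namespaced_network_policy(namespace="default", body=policy)
print("NetworkPolicy applied: only pods in the same namespace may connect.")
```

The same checks can of course be done with kubectl get nodes and kubectl apply; the client API is used here only to keep the sketch self-contained.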
Platform Architecture
...
The IEC project is committed to openness: it intends to develop a fully integrated edge infrastructure solution and provides a reference implementation of hardware and software to help users build their own projects.
...
Generic Hardware Guidelines
Compute Machines: By observing the actual memory utilization of a ThunderX2, we found that deploying IEC on a single node requires at least 15 GB of memory and 62 GB of disk; such requirements are very demanding for embedded devices. For more realistic deployments, we suggest using at least three machines (preferably identical). The exact characteristics of these machines depend on several factors. At the very minimum, each machine should have a 4-core CPU, 32 GB of RAM, and 60 GB of disk capacity.
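As a quick preflight before installation, these minimums can be checked on a candidate Linux machine with a few lines of Python. This is only an illustrative sketch; the thresholds just restate the figures in the paragraph above:

```python
# Preflight sketch: check a Linux host against the minimums above (illustrative only).
import os
import shutil

MIN_CORES, MIN_RAM_GB, MIN_DISK_GB = 4, 32, 60   # minimums quoted in this section

cores = os.cpu_count()

# Total memory from /proc/meminfo (value is reported in kB).
with open("/proc/meminfo") as f:
    mem_kb = int(next(line for line in f if line.startswith("MemTotal")).split()[1])
ram_gb = mem_kb / 1024 / 1024

disk_gb = shutil.disk_usage("/").total / 1024**3

print(f"cores={cores} ram={ram_gb:.1f}GB disk={disk_gb:.1f}GB")
ok = cores >= MIN_CORES and ram_gb >= MIN_RAM_GB and disk_gb >= MIN_DISK_GB
print("meets IEC minimums" if ok else "below IEC minimums")
```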
Network: The machines have to download a large quantity of software from different sources on the Internet, so they need to be able to reach the Internet. Whatever servers are used, each should have at the very minimum a 1G network interface for management; a 40G NIC is additionally required if performance testing is planned.
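Similarly, outbound connectivity can be verified before the installers start pulling packages and container images. The hostnames in this sketch are merely examples of common download sources, not a fixed IEC list:

```python
# Reachability sketch: confirm outbound Internet access (hosts are examples only).
import socket

HOSTS = ["github.com", "k8s.gcr.io", "hub.docker.com"]

for host in HOSTS:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}: reachable")
    except OSError as err:
        print(f"{host}: NOT reachable ({err})")
```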
Optics and Cabling: Some hardware may be picky about the optics. The optic and cable models tested by the community are listed below.
Recommended Hardware
The following is a list of hardware that members of the IEC community have tested over time in lab trials.
Please note: so far there has been no performance testing of IEC; this is planned as follow-up work.
Type Device 1
| Quantity | Category | Brand | Model | P/N |
|---|---|---|---|---|
| 1 | Compute | Cavium | ThunderX2 | ThunderX2 |
| 4 | Memory | Micron Technology | 9ASF1G72PZ-2G6D1 | 9ASF1G72PZ-2G6D1 8GB*4 |
| 1 | Management switch (L2 with VLAN support) | * | * | * |
| 1 | Network interface card (for mgmt) | Intel | 10-Gigabit X540-AT2 | 10-Gigabit X540-AT2 |
| 1 | Network interface card (for data) | Intel | XL710 40 GbE | XL710 40 GbE |
| 2 | SFP (for mgmt) | Intel | FTLX8571D3BCV-IT | Intel FTLX8571D3BCV-IT Finisar 10Gb/s 850nm Multimode SFP+ SR Transceiver |
| N/A | Fabric switch | N/A | N/A | N/A |
Type Device 2
| Quantity | Category | Brand | Model | P/N |
|---|---|---|---|---|
| 1 | Compute | Ampere | eMAG server | eMAG server |
| 8 | Memory | Samsung | M393A4K40CB2-CTD | M393A4K40CB2-CTD 32GB*8 |
| 1 | Management switch (L2 with VLAN support) | * | * | * |
| 1 | Network interface card (for mgmt) | Mellanox | MT27710 Family | ConnectX-4 Lx |
| 1 | Network interface card (for data) | Intel | XL710 40 GbE | XL710 40 GbE |
| 2 | SFP (for mgmt) | Intel | FTLX8571D3BCV-IT | Intel FTLX8571D3BCV-IT Finisar 10Gb/s 850nm Multimode SFP+ SR Transceiver |
| N/A | Fabric switch | N/A | N/A | N/A |
Type Device 3
| Quantity | Category | Brand | Model | P/N |
|---|---|---|---|---|
| 2 | Compute | Marvell | Marvell ARMADA 8040 | MACCHIATObin Double Shot |
| 1 | Memory | System memory | Marvell ARMADA 8040 | DDR4 DIMM slot with optional ECC and single/dual chip select support, 16GB |
| 1 | Management switch (L2 with VLAN support) | * | * | * |
| 1 | Network interface card (for mgmt) | Marvell | Marvell ARMADA 8040 | Dual 10GbE (1/2.5/10GbE) via copper or SFP, 2.5GbE (1/2.5GbE) via SFP, 1GbE via copper |
| 2 | SFP (for mgmt) | Cisco | Passive Direct Attach Copper Twinax Cable | SFP-H10GB-CU3M compatible 10G SFP+ |
| N/A | Fabric switch | N/A | N/A | N/A |
Software Platform Architecture
The IEC reference software platform architecture is shown in the following figure, and the software components with their versions are listed in the table below:
| Platform Software | Version |
|---|---|
| docker | 18.06.1-ce |
| kubelet | v1.13.0 |
| kubeadm | v1.13.0 |
| kubectl | v1.13.0 |
| calico | v3.3.2 |
| etcd | v3.3.9-arm64 |
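To confirm that a deployed node matches the table, the component CLIs can simply be queried. The sketch below is illustrative and assumes the binaries are on the PATH; the `--short`/`-o short` output flags match the kubectl/kubeadm generation listed here and may differ in newer releases:

```python
# Version check sketch: print installed component versions for comparison with the table.
import subprocess

COMMANDS = [
    ["docker", "--version"],
    ["kubelet", "--version"],
    ["kubeadm", "version", "-o", "short"],
    ["kubectl", "version", "--client", "--short"],
]

for cmd in COMMANDS:
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()
        print(f"{cmd[0]}: {out}")
    except (OSError, subprocess.CalledProcessError) as err:
        print(f"{cmd[0]}: not available ({err})")
```

calico and etcd typically run as containers in the cluster, so their versions are most easily read from the pod image tags in the kube-system namespace.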
APIs
APIs with reference to Architecture and Modules
...