Introduction
The ICN blueprint family addresses deployment of workloads across a large number of edge locations and in public clouds, using Kubernetes (K8s) as the resource orchestrator in each site and EMCO as the service-level orchestrator across sites. ICN also integrates the infrastructure orchestration that is needed to bring up a site using bare-metal servers. Infrastructure orchestration, which is the focus of this page, needs to ensure that the infrastructure software required on edge servers is installed on a per-site basis, but controlled from a central dashboard. Infrastructure orchestration is expected to do the following:
...
The user experience needs to be as simple as possible, so that even a novice user can set up a site.
Use Cases
- SDEWAN Controller with open source based SDWAN CNF and IPsec tunnelling between Edge Distributions
- Openness Edge deployments in ONAP
- SDEWAN HUB to establish IPsec tunnelling between Edge Distributions with Service Function Chaining (SFC)
- Composite vFirewall to showcase Telco and Cable use cases using EMCO (Edge Multi-Cluster Orchestration)
Where on the Edge
Current best practice is to keep the cloud-native control plane close to the workloads to reduce latency, increase performance, and improve fault tolerance. A single lightweight orchestration engine maintains the resources in a cluster of compute nodes, where the customer can deploy multiple network functions such as VNFs, CNFs, microservices, and Functions as a Service (FaaS), and can scale the orchestration infrastructure according to customer demand.
...
Note that the infra-local-controller can run without the infra-global-controller. In the interim release, we expect that only the infra-local-controller is supported; the infra-global-controller is targeted for the final Akraino R4 release. The goal is that any operations done manually on the infra-local-controller in the interim release are automated by the infra-global-controller. Hence the interface provided by the infra-local-controller is flexible enough to support both manual and automated actions.
...
Local Controller: Kubeadm, Metal3, Baremetal Operator, Ironic, Prometheus, EMCO
Global Controller: Kubeadm, KuD, K8S Provisioning Manager, Binary Provisioning Manager, Prometheus, CSM
The R4 release covers only the infra-local-controller:
Baremetal Operator
...
Kubernetes deployment (KUD) is a project that uses Kubespray to bring up a Kubernetes deployment and some addons on a provisioned machine. As it is already part of EMCO, it can be effectively reused to deploy the K8s app components (as shown in fig. II), NFV-specific components and the NFVi SDN controller in the edge cluster. In the R4 release, KUD is used to deploy K8s addons such as Virtlet, OVN, NFD, and Intel device plugins such as SR-IOV in the edge location (as shown in figure I). In a future release, KUD will evolve into an "ICN Operator" to install all K8s addons. For more information on the architecture of KUD please find the information here.
EMCO on K8s
One of the Kubernetes clusters with high availability, which is provisioned and configured by KUD, will be used to deploy EMCO on K8s. The ICN family uses Edge Multi-Cluster Orchestration (EMCO) for service orchestration. EMCO provides a set of helm charts used to run the workloads on a multi-cluster K8s deployment. The ICN family will automate the EMCO installation once a Kubernetes cluster is configured by KUD.
EMCO Block and Modules:
EMCO is the service orchestration engine in the ICN family. It is responsible for VNF life cycle management, tenant management, tenant resource quota allocation, and managing the Resource Orchestration Engine (ROE) to schedule VNF workloads with multi-site scheduler awareness and Hardware Platform Abstraction (HPA). An Akraino dashboard that sits on top of EMCO is required to deploy the VNFs.
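To make this flow concrete, here is a minimal Python sketch of how a client could register a project, the top-level tenancy object, with the EMCO orchestrator over its `/v2` REST API. The endpoint host, port and project name are assumptions for illustration, not values shipped with ICN.

```python
import requests

# Assumed EMCO orchestrator endpoint; adjust host/port to your deployment.
EMCO_ORCHESTRATOR = "http://emco-orchestrator.example.com:9015"

def create_project(name: str, description: str = "") -> None:
    """Register a project, the top-level EMCO tenancy object."""
    body = {"metadata": {"name": name, "description": description}}
    resp = requests.post(f"{EMCO_ORCHESTRATOR}/v2/projects", json=body, timeout=30)
    resp.raise_for_status()
    print(f"Created project {name}: HTTP {resp.status_code}")

if __name__ == "__main__":
    create_project("icn-demo", "example project for composite vFW onboarding")
```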
Kubernetes Block and Modules:
Kubernetes is the resource orchestration engine in the ICN family, managing network, storage and compute resources for the VNF applications. The ICN family uses multiple container runtimes, such as Virtlet, alongside Docker as the de facto container runtime. Each release supports container runtimes focused on its use cases.
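As an illustration of how a VM workload is handed to the Virtlet runtime, the sketch below builds a pod using the upstream Virtlet conventions (the `kubernetes.io/target-runtime: virtlet.cloud` annotation, the `virtlet.cloud/` image prefix, and an `extraRuntime=virtlet` node label); the pod name and image are illustrative.

```python
from kubernetes import client, config

def create_virtlet_vm_pod() -> None:
    """Submit a VM workload as a pod handled by the Virtlet runtime."""
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(
            name="cirros-vm",
            # Upstream Virtlet convention: route this pod to the Virtlet CRI runtime.
            annotations={"kubernetes.io/target-runtime": "virtlet.cloud"},
        ),
        spec=client.V1PodSpec(
            # Virtlet-capable nodes are labelled extraRuntime=virtlet (assumed label).
            node_selector={"extraRuntime": "virtlet"},
            containers=[
                client.V1Container(
                    name="cirros-vm",
                    # The virtlet.cloud/ prefix tells Virtlet to fetch a VM image.
                    image="virtlet.cloud/cirros",
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    create_virtlet_vm_pod()
```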
...
ICN uses the Metal3 project for provisioning servers in the edge locations. The ICN project uses the IPMI protocol to identify the servers in the edge locations, and uses Ironic and Ironic Inspector to provision the OS on them. For the R4 release, the ICN project provisions Ubuntu 18.04.5 on each server, and uses separate networks, a provisioning network and a bare-metal network, for inspection and IPMI provisioning.
The ICN project injects user data into each server to handle network configuration, a GRUB update to enable IOMMU, and remote command execution over SSH, and maintains a common secure mechanism for provisioning all the servers. Each local controller maintains IP address management for its edge location. For more information refer to Metal3 Baremetal Operator in the ICN stack.
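The sketch below shows how a server could be registered with the Baremetal Operator by creating a Metal3 `BareMetalHost` custom resource; the BMC address, MAC address, namespace and secret name are placeholders for illustration.

```python
from kubernetes import client, config

def register_baremetal_host() -> None:
    """Register an edge server with the Baremetal Operator via a BareMetalHost CR."""
    config.load_kube_config()
    host = {
        "apiVersion": "metal3.io/v1alpha1",
        "kind": "BareMetalHost",
        "metadata": {"name": "node1", "namespace": "metal3"},
        "spec": {
            "online": True,
            # Placeholder values: boot MAC and IPMI address of the target server.
            "bootMACAddress": "00:1e:67:fe:f4:19",
            "bmc": {
                "address": "ipmi://10.10.110.11",
                # Pre-created secret holding the IPMI username/password.
                "credentialsName": "node1-bmc-secret",
            },
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="metal3.io",
        version="v1alpha1",
        namespace="metal3",
        plural="baremetalhosts",
        body=host,
    )

if __name__ == "__main__":
    register_baremetal_host()
```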
BPA Operator:
ICN uses the BPA operator to install KUD. It can install KUD either on bare-metal hosts or on virtual machines. The BPA operator is also used to install software on the machines after KUD has been installed successfully.
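As a hedged sketch of how a KUD installation could be triggered through the BPA operator, the example below creates a provisioning custom resource; the group/version, kind and spec layout are assumptions based on the BPA CRDs and may differ between ICN releases.

```python
from kubernetes import client, config

def request_kud_install() -> None:
    """Ask the BPA operator to run KUD against a set of provisioned hosts.

    The group/version, kind and spec layout are assumptions for illustration;
    check the BPA CRDs shipped with your ICN release for the exact schema.
    """
    config.load_kube_config()
    provisioning = {
        "apiVersion": "bpa.akraino.org/v1alpha1",   # assumed group/version
        "kind": "Provisioning",                      # assumed kind
        "metadata": {"name": "edge-cluster-1", "labels": {"cluster": "edge-cluster-1"}},
        "spec": {
            # Hypothetical layout: map cluster roles to the provisioned host MACs.
            "masters": [{"master-1": {"mac-address": "00:1e:67:fe:f4:19"}}],
            "workers": [{"worker-1": {"mac-address": "00:1e:67:fe:f4:1a"}}],
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="bpa.akraino.org",
        version="v1alpha1",
        namespace="default",
        plural="provisionings",
        body=provisioning,
    )

if __name__ == "__main__":
    request_kud_install()
```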
...
More on BPA Restful API can be found at ICN Rest API.
KuD
Kubernetes deployment (KUD) is a project that uses Kubespray to bring up a Kubernetes deployment and some addons on a provisioned machine. As it is already part of EMCO, it can be effectively reused to deploy the K8s app components (as shown in fig. II), NFV-specific components and the NFVi SDN controller in the edge cluster. In the R4 release, KUD is used to deploy K8s addons such as Virtlet, OVN, NFD, CMK (CPU Manager for Kubernetes) and Intel device plugins such as SR-IOV and QAT in the edge location (as shown in figure I). In a future release, KUD will evolve into an "ICN Operator" to install all K8s addons. For more information on the architecture of KUD please find the information here.
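As a small illustration of what KUD leaves behind on an edge cluster, the sketch below lists the NFD feature labels and the addon pods; the `kube-system` namespace and the pod name patterns are assumptions and may differ per deployment.

```python
from kubernetes import client, config

def report_kud_addons() -> None:
    """Print a quick view of the addons KUD installs on an edge cluster."""
    config.load_kube_config()
    core = client.CoreV1Api()

    # NFD publishes hardware features (e.g. SR-IOV, QAT) as node labels.
    for node in core.list_node().items:
        nfd_labels = {k: v for k, v in (node.metadata.labels or {}).items()
                      if k.startswith("feature.node.kubernetes.io/")}
        print(node.metadata.name, nfd_labels)

    # Addon pods (Virtlet, OVN, NFD, device plugins, CMK) typically run in kube-system.
    for pod in core.list_namespaced_pod("kube-system").items:
        name = pod.metadata.name
        if any(addon in name for addon in ("virtlet", "ovn", "nfd", "sriov", "cmk")):
            print(f"{name}: {pod.status.phase}")

if __name__ == "__main__":
    report_kud_addons()
```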
...
EMCO:
EMCO is used for service orchestration in the ICN BP. A lightweight Golang version of EMCO was developed as part of the Multicloud-k8s project in the ONAP community. The ICN BP developed a containerized KUD multi-cluster to install EMCO as a plugin in any cluster provisioned by the BPA operator. EMCO installs the Composite vFW application in any edge location.
...
SDEWAN Configure Agent (also named SDEWAN Controller) is a K8s controller located in each edge location and in the central hub K8s cluster to support configuration of SDEWAN CNF functionalities (e.g. mwan3, firewall, SNAT, DNAT, IPsec, etc.) and to monitor SDEWAN CNF status. It exposes CRDs to support configuration via the K8s API server for unified authentication and authorization. Detailed information can be found at: Sdewan CRD Controller
Openness:
Openness is an open source reference toolkit that makes it easy to move applications from the Cloud to the Network and On-Premise Edge. Some components of Openness Network Edge have been integrated. EAA (Edge Application Agent), which provides application/service registration and authentication in Openness, has been integrated via ONAP4K8S. In addition, we work with the OpenNESS community to ensure that EAA addresses distributed applications that are spread not only across nodes in one K8s cluster, but also across K8s clusters. For platform-related microservices (Multus, SR-IOV CNI, SR-IOV Network Device Plugin, NFD, CMK), Openness test cases have been integrated.
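To illustrate how SDEWAN CNF functionality is configured through CRDs, the hedged sketch below creates a firewall rule custom resource; the group/version, kind, labels and spec fields are assumptions for illustration, so consult the Sdewan CRD Controller documentation for the exact schema.

```python
from kubernetes import client, config

def create_sdewan_firewall_rule() -> None:
    """Push a firewall rule to the SDEWAN CNF through the SDEWAN CRD controller.

    Group/version, kind and spec fields below are assumptions for illustration.
    """
    config.load_kube_config()
    rule = {
        "apiVersion": "batch.sdewan.akraino.org/v1alpha1",  # assumed group/version
        "kind": "FirewallRule",                              # assumed kind
        "metadata": {
            "name": "allow-ike",
            "namespace": "default",
            # Assumed label used by the controller to select the target CNF instance.
            "labels": {"sdewanPurpose": "base"},
        },
        "spec": {
            # Hypothetical rule: allow IKE (UDP/500) traffic on the WAN zone.
            "src": "wan",
            "proto": "udp",
            "dest_port": "500",
            "target": "ACCEPT",
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="batch.sdewan.akraino.org",
        version="v1alpha1",
        namespace="default",
        plural="firewallrules",
        body=rule,
    )

if __name__ == "__main__":
    create_sdewan_firewall_rule()
```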
Cloud Storage:
Cloud Storage (Cloud Storage Design) acts as storage services and plugins, currently divided into two parts:
- Storage Service for the Local Controller: used by the BPA Rest Agent to provide a storage service for image objects (binary, container and operating-system images). There are two solutions, MinIO and GridFS; considering cloud-native fit and data reliability, we propose MinIO, a CNCF project for object storage that is compatible with the Amazon S3 API, provides language SDKs for client applications, and is easy to deploy and scale out in Kubernetes. MinIO also provides the storage service for the HTTP server. Since MinIO needs an exported volume at bootstrap, local-storage is a simple solution but lacks data-safety guarantees; we will switch to reliable volumes provided by Ceph CSI RBD in the next release. (See the upload sketch after this list.)
- Optane Persistent Memory plugin in KUD: provides LVM and direct volumes on Optane PM namespaces. Since Optane PM offers higher performance and lower latency than normal SSD storage devices, it can be used for caches, metadata volumes, or other high-throughput, low-latency scenarios. (See the volume-claim sketch after this list.)
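As referenced in the storage service item above, this is a minimal sketch of uploading an OS image object with the MinIO Python SDK; the endpoint, credentials, bucket and object names are assumptions for illustration.

```python
from minio import Minio

def upload_os_image(path: str) -> None:
    """Store an OS/container/binary image object in the local controller's MinIO."""
    # Assumed endpoint and credentials; in ICN these come from the BPA Rest Agent config.
    minio_client = Minio(
        "minio.kube-system.svc.cluster.local:9000",
        access_key="ICN_ACCESS_KEY",
        secret_key="ICN_SECRET_KEY",
        secure=False,
    )
    bucket = "os-images"
    if not minio_client.bucket_exists(bucket):
        minio_client.make_bucket(bucket)
    # fput_object streams the file in multipart chunks, so large images are fine.
    minio_client.fput_object(bucket, "ubuntu-18.04.5.qcow2", path)

if __name__ == "__main__":
    upload_os_image("./ubuntu-18.04.5.qcow2")
```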
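And as referenced in the Optane item, this sketch claims a persistent-memory backed volume through the Kubernetes API; the storage class name `pmem-csi-sc-lvm` is an assumption, so use the class created by the KUD addon in your deployment.

```python
from kubernetes import client, config

def request_pmem_volume() -> None:
    """Claim a persistent-memory backed volume (storage class name is an assumption)."""
    config.load_kube_config()
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="pmem-cache"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            # "pmem-csi-sc-lvm" is illustrative; use the class created by the KUD addon.
            storage_class_name="pmem-csi-sc-lvm",
            resources=client.V1ResourceRequirements(requests={"storage": "8Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )

if __name__ == "__main__":
    request_pmem_volume()
```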
Software components:
...
...
...
Components | Link | License | Akraino Release target |
---|---|---|---|
ICN | https://github.com/akraino-edge-stack/icn - v0.4.0 | Apache License 2.0 | R4 |
Provision stack - Metal3 | v1.0-icn | Apache License 2.0 | R4 |
Ironic - Ironic IPA downloader | https://github.com/akraino-icn/ironic-ipa-downloader - v1.0-icn | Apache License 2.0 | R4 |
Ironic - Ironic image | https://github.com/akraino-icn/ironic-image - v1.0-icn | Apache License 2.0 | R4 |
Ironic - Ironic Inspector Image | https://github.com/akraino-icn/ironic-inspector-image - v1.0-icn | Apache License 2.0 | R4 |
Host Operating system | Ubuntu 18.04.5 | GNU General Public License | R4 |
NIC drivers | | GNU General Public License Version 2 | R4 |
QAT drivers | Intel® C627 Chipset - https://ark.intel.com/content/www/us/en/ark/products/97343/intel-c627-chipset.html | GNU General Public License Version 2 | R4 |
Intel® Optane™ DC Persistent Memory | Intel® Optane™ DC 256GB Persistent Memory Module - PMDK: Persistent Memory Development Kit - https://github.com/pmem/pmdk/ | SPDX-License-Identifier - BSD-3-Clause | R4 |
EMCO (formerly known as ONAP4K8s) | | Apache License 2.0 | R4 |
Workloads | | | |
SDEWAN CNFs | https://github.com/akraino-edge-stack/icn-sdwan - v1.0, https://hub.docker.com/repository/docker/integratedcloudnative/openwrt - 0.3.1 | GNU General Public License Version 2 | R4 |
KUD | | Apache License 2.0 | R4 |
Kubespray | v2.14.1 | Apache License 2.0 | R4 |
K8s | v1.18.9 | Apache License 2.0 | R4 |
Docker | 19.03.13 | Apache License 2.0 | R4 |
Virtlet | | Apache License 2.0 | R4 |
SDN - OVN | icn/ovn/ - v20.06.0 (mirror repo - https://github.com/ovn-org/ovn) | Apache License 2.0 | R4 |
vSwitch - OVS | v2.14.0 (mirror repo - https://github.com/openvswitch/ovs) | Apache License 2.0 | R4 |
Ansible | | Apache License 2.0 | R4 |
Helm | https://github.com/helm/helm - 3.2.4 | Apache License 2.0 | R4 |
Istio | https://github.com/istio/istio - 1.0.3 | Apache License 2.0 | R4 |
Rook/Ceph | | Apache License 2.0 | R4 |
MetalLB | /releases - v0.7.3 | Apache License 2.0 | R4 |
OVN4NFV-K8s-Plugin | https://github.com/opnfv/ovn4nfv-k8s-plugin - v0.9.0 | Apache License 2.0 | R4 |
SDEWAN controller | https://github.com/akraino-edge-stack/icn-sdwan - v1.0, https://hub.docker.com/repository/docker/integratedcloudnative/sdewan-controller - 0.3.0 | Apache License 2.0 | R4 |
Device Plugins | https://github.com/intel/intel-device-plugins-for-kubernetes - SRIOV | Apache License 2.0 | R4 |
Node Feature Discovery | https://github.com/kubernetes-sigs/node-feature-discovery - 0.4.0 | Apache License 2.0 | R4 |
CNI | https://github.com/coreos/flannel/ - release tag v0.11.0, https://github.com/containernetworking/cni - release tag v0.7.0, https://github.com/containernetworking/plugins - release tag v0.8.1, Multus v3.4.1 | Apache License 2.0 | R4 |
Hardware and Software Management
Software Management
ICN R4 Timelines
Hardware Management
Hostname | CPU Model | Memory | Storage | 1GbE: NIC#, VLAN (connected to Extreme 480 switch) | 10GbE: NIC#, VLAN, Network (connected to IZ1 switch) |
---|---|---|---|---|---|
Jump | 2xE5-2699 | 64GB | 3TB (Sata) | IF0: VLAN 110 (DMZ) | IF2: VLAN 112 (Private) |
node1 | 2xE5-2699 | 64GB | 3TB (Sata) | IF0: VLAN 110 (DMZ) | IF2: VLAN 112 (Private) |
node2 | 2xE5-2699 | 64GB | 3TB (Sata) | IF0: VLAN 110 (DMZ) | IF2: VLAN 112 (Private) |
node3 | 2xE5-2699 | 64GB | 3TB (Sata) | IF0: VLAN 110 (DMZ) | IF2: VLAN 112 (Private) |
node4 | 2xE5-2699 | 64GB | 3TB (Sata) | IF0: VLAN 110 (DMZ) | IF2: VLAN 112 (Private) |
node5 | 2xE5-2699 | 64GB | 3TB (Sata) | IF0: VLAN 110 (DMZ) | IF2: VLAN 112 (Private) |
Licensing
Refer to the Software Components list above.