Overview
The ICN BP family addresses the deployment of workloads across a large number of edge locations and also in public clouds, using Kubernetes (K8s) as the resource orchestrator in each site and ONAP-K8s as the service-level orchestrator across sites. ICN also intends to integrate the infrastructure orchestration that is needed to bring up a site using bare-metal servers. Infrastructure orchestration, which is the focus of this page, needs to ensure that the infrastructure software required on edge servers is installed on a per-site basis, but controlled from a central dashboard. Infrastructure orchestration is expected to do the following:
- Installation : First-time installation of all infrastructure software.
- Keep monitoring for new servers and install the software based on the role of the server machine.
- Patching: Continue to install patches (mainly security related) whenever a new patch release is made for any of the infrastructure software packages.
- May need to work with resource and service orchestrators to ensure that workload functionality does not get impacted.
- Software updates: Updating software due to new releases.
infra-global-controller: If infrastructure provisioning needs to be controlled from a central location, this component is expected to be brought up in one location. This controller communicates with the infra-local-controller (which acts as an agent) that performs the actual software installation/update/patching and provisions the software, BIOS, etc.
infra-local-controller: Sits in each site, typically on a bootstrap machine provided as a bootable USB disk. It works in conjunction with the infra-global-controller. Note that if there is no requirement to manage software provisioning from a central location, the infra-global-controller is brought up along with the infra-local-controller.
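As an illustration of the split between the two controllers, the sketch below shows a site's machine inventory being pushed to the infra-global-controller. The endpoint path, port and payload fields are assumptions made for illustration only; they are not a defined ICN API.

```python
# Illustrative only: the REST endpoint, port and payload schema are assumptions,
# not a defined ICN API. The intent is to show a site's machine inventory and
# per-machine roles being registered with the infra-global-controller.
import requests

# Use the loopback address when the global controller runs alongside the
# local controller (the default option described above).
GLOBAL_CONTROLLER = "https://infra-global.example.com:8443"

inventory = {
    "site": "edge-site-01",
    "machines": [
        {"name": "node-1", "role": "controller", "bmc_address": "10.0.0.11"},
        {"name": "node-2", "role": "worker", "bmc_address": "10.0.0.12"},
    ],
}

resp = requests.post(
    f"{GLOBAL_CONTROLLER}/api/v1/sites/{inventory['site']}/inventory",
    json=inventory,
    timeout=30,
)
resp.raise_for_status()
print("Inventory accepted:", resp.json())
```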
The user experience needs to be as simple as possible, so that even a novice user is able to set up a site.
- 1st-time installation:
- User procures a set of machines or racks.
- User connects them together with switches.
- User uses a USB disk or other mechanism to boot a machine (the bootstrap machine) with the infra-local-controller stack.
- User provides the infra-global-controller FQDN/IP address so the local controller can reach it (note that if the global controller is on the same machine as the local controller, the loopback IP address can be used to communicate; this is the default option).
- User updates the inventory of machines, defines the role of each machine, and informs the central ZTP system (infra-global-controller) or the infra-local-controller (bootstrap) stack via its UI or API.
- System installs the software and verifies the installation of each software component by running tests that are part of ICN-infra.
- User/Entity gets informed when all the machines are installed with the software.
- Addition of a new server:
- User updates the site inventory, including the new server's role, in the infra-global-controller using its UI.
- The system is expected to install the software.
- The system is expected to verify the installation of the software using tests that are part of the bootstrap stack.
- User/entity gets informed when the machine is successfully brought online.
- Deletion of existing server:
- User informs the infra-global-controller to bring down the server.
- System removes the software and cleans up any local disk and other persistent systems
- User/Entity gets informed that server can be taken off from the network and disposed of.
- Patching:
- User or external patch system informs the infra-global-controller that a new patch (or patches) is available for a given software package (or packages).
- User or external patch system also informs whether the patch or patches require a restart of the process or kernel.
- System then takes care of patching every server that has these software packages.
- If the software package impacts the workloads:
- Informs the local workload (resource) orchestrator not to deploy new workloads on the server and to move existing workloads off it. Many resource-level orchestrators provide a way to decommission a server on a temporary basis.
- Ensures that there are no workloads on the system.
- Installs the patch and performs the needed restarts.
- Informs the local orchestrator to put the server back in the pool.
- Updating:
- User or external update system informs the infra-global-controller that a new software version is available for a currently installed software package.
- User or external update system should also indicate whether the update is minor or major. A major update requires completely removing the old version and then installing the new version.
- Based on the nature of the update, the infra-global-controller then takes care of the update as follows:
- Reschedules the existing workloads on the server that requires the update to other servers in the cluster.
- Removes the server from the local orchestrator cluster, updates the software, and rejoins the server to the cluster (see the sketch below for how this cycle might look with Kubernetes as the local orchestrator).
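When Kubernetes is the local resource orchestrator, the patching and updating flows above amount to a cordon/drain/patch/uncordon cycle. The sketch below shows a minimal version of that cycle using the official Kubernetes Python client; the actual patch step is a placeholder, and DaemonSets, PodDisruptionBudgets and error handling are ignored.

```python
# Minimal sketch (Kubernetes as the local orchestrator): temporarily
# decommission a node, make sure no workloads remain on it, apply the patch,
# then return the node to the pool.
from kubernetes import client, config


def patch_node(node_name: str, apply_patch) -> None:
    config.load_kube_config()          # or load_incluster_config() inside the cluster
    v1 = client.CoreV1Api()

    # 1. Cordon: mark the node unschedulable so no new workloads land on it.
    v1.patch_node(node_name, {"spec": {"unschedulable": True}})

    # 2. Drain: delete pods on the node so their controllers reschedule them elsewhere.
    pods = v1.list_pod_for_all_namespaces(field_selector=f"spec.nodeName={node_name}")
    for pod in pods.items:
        v1.delete_namespaced_pod(pod.metadata.name, pod.metadata.namespace)

    # 3. Apply the patch / software update and restart processes or the kernel as needed.
    apply_patch(node_name)

    # 4. Uncordon: put the server back in the pool.
    v1.patch_node(node_name, {"spec": {"unschedulable": False}})
```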
Akraino's "Integrated Cloud Native NFV & App Stack" (ICN) Blueprint is a Cloud Native Compute and Network Framework(CN-CNF) to integrated NFV's application to the de-facto standard and setting a framework to address 5G, IOT and various Linux Foundation edge use case in Cloud Native.
ICN has ONAP as the Service Orchestration Engine(SOE) and the Cloud Native(CN) projects such as Kubernetes for Resource Orchestration Engine(ROE), Prometheus as the monitoring and alerting, OVN as the SDN controller, Container Network Interface(CNI) for Orchestration Networking, provides networking between the clusters, Envoy for Service proxy, Helm and Operators for package management and Rook for storage. The framework stack specifics the best configuration methodology, enables development projects, installation scripts, software package to bind CNCF and LF edge use cases together.
This document break downs the hardware requirements, software ingredient, Testing and benchmarking for the R2 and R3 release for and provides overall picture toward blue print effect in Edge use cases.
Goals
- Generic: Infrastructure orchestration shall be generic. Even though this work is being done on behalf of one BP (MICN), infrastructure orchestration shall be common across all BPs in the ICN family. It shall also be possible to use this component in BPs outside of the ICN family.
- Leverage open source projects:
- Leverage Cluster-API for the infra-global-controller. Identify gaps, provide fixes, and also provide a UI/CLI for a good user experience.
- Leverage Ironic and metal3 for infra-local-controller to do bare-metal provisioning. Identify any gaps to make it work with Cluster-API.
- Leverage KuD in infra-local-controller to do Kubernetes installation. Identify any gaps and fix them.
- Figure out ways to use the bootstrap machine also as a workload machine (not in scope for Akraino R2).
- Flexible and Extensible:
- Adding any new package in future shall be a simple addition.
- Interaction with workload orchestrator shall not be limited to K8S. Shall be able to talk to any workload orchestrator.
- Data Model driven:
- Follow CRD models as much as possible.
- Security:
- The infra-global and infra-local controllers may have privileged access to secrets, keys, etc. These shall be protected by placing them in a HW RoT, or at least it shall be ensured that they are never visible in the clear on HDDs/SSDs.
- Redundancy: The infra-global controller shall be redundant, especially if it is used to manage multiple sites.
- Performance:
- Shall be able to complete the first-time installation or patching across multiple servers in a site within minutes (< 10 minutes for a 10-server site). This may require running jobs in parallel, i.e. multi-threading in the infra-local-controller; see the sketch after this list.
- Shall be able to complete patching across sites in < 10 minutes for 100 sites.
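Meeting the 10-minute target implies that the infra-local-controller runs per-server jobs concurrently rather than serially. The sketch below illustrates that pattern; `install_stack` is a hypothetical placeholder for whatever provisions a single server.

```python
# Sketch of running per-server installation/patching jobs in parallel.
# install_stack() is a hypothetical placeholder for the real per-server work
# (e.g. driving Ironic/metal3 and KuD for one host).
from concurrent.futures import ThreadPoolExecutor, as_completed


def install_stack(server: str) -> str:
    ...  # provision OS, install infrastructure software, run verification tests
    return f"{server}: ok"


servers = [f"node-{i}" for i in range(1, 11)]  # a 10-server site

with ThreadPoolExecutor(max_workers=len(servers)) as pool:
    futures = {pool.submit(install_stack, s): s for s in servers}
    for fut in as_completed(futures):
        print(fut.result())
```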
Architecture:
Blocks and Modules
Global ZTP:
The Global ZTP system is used for infrastructure provisioning and configuration in the ICN family. It is subdivided into three deployments: Cluster-API, KuD and ONAP on K8s.
Cluster-API & Baremetal Operator
One of the major challenges for a cloud administrator managing multiple clusters in different edge locations is coordinating the control-plane configuration of each cluster remotely and managing patches and updates/upgrades across multiple machines. Cluster-API provides declarative APIs to represent clusters and the machines inside a cluster. Cluster-API provides abstractions for the common logic found across cluster providers such as GKE, AWS and vSphere, consolidating it into functions such as grouping machines for upgrades and autoscaling.
In the ICN family stack, the Cluster-API bare-metal provider is the metal3 Baremetal Operator. It is used as a machine actuator that uses Ironic to provide a Kubernetes API for managing the physical servers that also run Kubernetes clusters on bare-metal hosts. Cluster-API manages the Kubernetes control plane through the Cluster CRD, and Kubernetes nodes (host machines) through the Machine, MachineSet and MachineDeployment CRDs. It also has an autoscaler mechanism that works on the MachineSet CRD (analogous to a K8s ReplicaSet) and the MachineDeployment CRD (analogous to a K8s Deployment). MachineDeployment CRDs are used to roll out updates/upgrades of software and drivers across the machines of a cluster.
The Cluster-API provider with the Baremetal Operator is used to provision the physical servers and to initialize the Kubernetes cluster with the user's configuration.
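For illustration, the sketch below inspects the custom resources involved: metal3 BareMetalHost objects (one per physical server) and Cluster-API Machine objects. The API groups, versions and namespaces shown are assumptions that vary across metal3 and Cluster-API releases, so adjust them to the versions actually deployed.

```python
# Minimal sketch of inspecting the CRs that drive bare-metal provisioning.
# The API groups/versions/namespaces below are assumptions; they differ
# between metal3 and Cluster-API releases.
from kubernetes import client, config

config.load_kube_config()
crs = client.CustomObjectsApi()

# metal3 BareMetalHost CRs: one per physical server, managed via Ironic.
hosts = crs.list_namespaced_custom_object(
    group="metal3.io", version="v1alpha1",
    namespace="metal3", plural="baremetalhosts")
for host in hosts["items"]:
    state = host.get("status", {}).get("provisioning", {}).get("state")
    print("host:", host["metadata"]["name"], "provisioning state:", state)

# Cluster-API Machine CRs: the desired Kubernetes nodes backed by those hosts.
machines = crs.list_namespaced_custom_object(
    group="cluster.k8s.io", version="v1alpha1",
    namespace="metal3", plural="machines")
for machine in machines["items"]:
    print("machine:", machine["metadata"]["name"])
```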
KuD
The Kubernetes Deployer (KuD) in ONAP can be reused to deploy the K8s App components (as shown in fig. II), NFV Specific components and the NFVi SDN controller in the edge cluster. In the R2 release, KuD will be used to deploy the K8s addons such as Prometheus, Rook, Virtlet, OVN, NFD, and the Intel device plugins in the edge location (as shown in figure I). In the R3 release, KuD will evolve into the "ICN Operator" to install all K8s addons.
ONAP on K8s
One of the Kubernetes clusters with high availability, provisioned and configured by Cluster-API, will be used to deploy ONAP on K8s. The ICN family uses the ONAP Operations Manager (OOM) to deploy the ONAP installation. OOM provides a set of Helm charts to be used to install ONAP on a K8s cluster. The ICN family will create the OOM installation and automate the ONAP installation once a Kubernetes cluster is configured by Cluster-API.
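A minimal sketch of what that automation could look like is shown below, using Helm 2 syntax (matching the Helm release targeted in the software components table). It assumes the OOM charts have already been built and published to a chart repository named `local`, and the override file path is a placeholder.

```python
# Illustrative automation of the OOM-based ONAP install (Helm 2 syntax).
# Assumes kubectl/helm are pointed at the HA cluster provisioned by
# Cluster-API and that the OOM charts are served from a repo named "local".
import subprocess


def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# Initialise/upgrade Tiller on the target cluster (Helm 2 only).
run(["helm", "init", "--upgrade"])

# Install ONAP from the OOM charts with a site-specific override file (placeholder path).
run(["helm", "install", "local/onap",
     "--name", "dev",
     "--namespace", "onap",
     "-f", "overrides/onap-edge.yaml"])

# Watch the rollout.
run(["kubectl", "get", "pods", "-n", "onap"])
```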
ONAP Block and Modules:
ONAP will be the Service Orchestration Engine in the ICN family and is responsible for VNF life-cycle management, tenant management, tenant resource quota allocation, and managing the Resource Orchestration Engine (ROE) to schedule VNF workloads with multi-site scheduler awareness and Hardware Platform Abstraction (HPA).
Kubernetes Block and Modules:
Kubernetes will be the Resource Orchestration Engine in the ICN family, managing network, storage and compute resources for the VNF applications. The ICN family will use multiple container runtimes, such as Virtlet, Kata Containers, KubeVirt and gVisor. Each release supports different container runtimes, chosen according to the targeted use cases.
The Kubernetes module is divided into three groups - K8s App components, NFV Specific components and NFVi SDN controller components - all of which are installed using KuD addons.
K8s App components: This block has the K8s storage plugins, container runtime, OVN for networking, the service proxy, and Prometheus for monitoring, and is responsible for application management.
NFV Specific components: This block is responsible for K8s compute management to support both software and hardware acceleration (including network acceleration), with CPU pinning and device plugins such as QAT, FPGA, SR-IOV and GPU.
SDN Controller components: This block is responsible for managing the SDN controller and for providing additional features such as Service Function Chaining (SFC) and a network route manager.
Apps/ Use cases:
ICN Infrastructure layout:
- Use the clusterctl command to create the cluster for the cluster-api-provider-baremetal provider.
- Machine CRDs and Cluster CRDs are configured to instantiate 4 clusters, #0 - #3.
- The automation script for OOM deployment is triggered to deploy ONAP on cluster #0.
- The KuD addons script is triggered in all edge locations to deploy the K8s App components, NFV Specific components and the NFVi SDN controller.
- A subscriber or operator requests deployment of a workload through service orchestration.
- ONAP places the workload in an edge location based on multi-site scheduling and K8s HPA. (A rough automation sketch of these steps follows this list.)
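The sketch below ties these steps together in one driver script. All manifest paths, helper script names and kubeconfig locations are hypothetical placeholders; the actual ICN automation may differ.

```python
# Hypothetical end-to-end driver for the layout steps above. Manifest paths,
# helper scripts and kubeconfig locations are placeholders, not ICN artifacts.
import os
import subprocess


def run(cmd, kubeconfig=None):
    env = dict(os.environ)
    if kubeconfig:
        env["KUBECONFIG"] = kubeconfig
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True, env=env)


# Create clusters #0 - #3 by applying Cluster/Machine (and BareMetalHost)
# manifests against the management cluster running the bare-metal provider.
for n in range(4):
    run(["kubectl", "apply", "-f", f"clusters/cluster-{n}/"])

# Deploy ONAP via OOM on cluster #0.
run(["./deploy-oom.sh"], kubeconfig="kubeconfigs/cluster-0")

# Install the KuD addons on the edge clusters #1 - #3.
for n in range(1, 4):
    run(["./kud-addons.sh"], kubeconfig=f"kubeconfigs/cluster-{n}")
```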
Software components
| Components | Link | Akraino Release target |
|---|---|---|
| Cluster-API | | R2 |
| Cluster-API-Provider-bare metal | | R2 |
| Provision stack - Metal3 | | R2 |
| Host Operating system | Ubuntu 18.04 | R2 |
| QuickAssist Technology (QAT) drivers | Intel® C627 Chipset - https://ark.intel.com/content/www/us/en/ark/products/97343/intel-c627-chipset.html | R3 |
| NIC drivers | | R3 |
| ONAP | Latest release 3.0.1-ONAP - https://github.com/onap/integration/ | R2 |
| Workloads | OpenWRT SDWAN - https://openwrt.org/ | R3 |
| KUD | | R2 |
| Kubespray | | R2 |
| K8s | | R2 |
| Docker | https://github.com/docker - 18.09 | R2 |
| Virtlet | | R2 |
| SDN - OVN | 0.3.0 | R2 |
| OpenvSwitch | | R2 |
| Ansible | https://github.com/ansible/ansible - 2.7.10 | R2 |
| Helm | https://github.com/helm/helm - 2.9.1 | R2 |
| Istio | https://github.com/istio/istio - 1.0.3 | R2 |
| Kata container | | R3 |
| Kubevirt | https://github.com/kubevirt/kubevirt/ - v0.18.0 | R3 |
| Collectd | | R2 |
| Rook/Ceph | | R3 |
| MetalLB | | R3 |
| Kube-Prometheus | | R3 |
| OpenNESS | Will be updated soon | R3 |
| Multi-tenancy | | R2 |
| Knative | | R3 |
| Device Plugins | https://github.com/intel/intel-device-plugins-for-kubernetes | R2 |
| Node Feature Discovery | | R2 |
| CNI | https://github.com/coreos/flannel/ - release tag v0.11.0; https://github.com/containernetworking/cni - release tag v0.7.0; https://github.com/containernetworking/plugins - release tag v0.8.1; https://github.com/containernetworking/cni#3rd-party-plugins - Multus v3.3tp, SRIOV CNI v2.0 (with SRIOV Network Device plugin) | R2 |
| Conformance Test for K8s | | R2 |
Gaps (WIP)
| Release | Block | Components | Identified Gaps | Initial thought |
|---|---|---|---|---|
| R2 | ZTP | Cluster-API | Cluster upgrade yet to be supported | |
| | | | No node repair mechanism | |
| R3 | | | | |