...
The ICN blueprint family addresses the deployment of workloads across a large number of edge locations and in public clouds, using K8S as the resource orchestrator in each site and ONAP-K8S as the service-level orchestrator across sites. ICN also intends to integrate the infrastructure orchestration needed to bring up a site using bare-metal servers. Infrastructure orchestration, which is the focus of this page, needs to ensure that the infrastructure software required on edge servers is installed on a per-site basis, but controlled from a central dashboard. Infrastructure orchestration is expected to do the following:
- Installation: First-time installation of all infrastructure software.
- Monitoring: Keep monitoring for new servers and install software based on the role of the server machine (see the sketch after this list).
- Patching: Continue to install patches (mainly security-related) as new patch releases are made for any of the infrastructure software packages.
- Coordination: Work with the resource and service orchestrators as needed to ensure that workload functionality does not get impacted.
- Software updates: Updating software due to new releases.
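As a purely hypothetical illustration of the monitoring step (not the actual ICN implementation), the sketch below watches a site's K8S API for newly registered nodes and triggers a role-based install routine; the role label and the install_for_role() helper are assumptions made only for this example.

```python
# Hypothetical sketch only: watch for new nodes and trigger role-based installs.
# The "icn.akraino.org/role" label and install_for_role() are illustrative
# assumptions, not part of the actual ICN code base.
from kubernetes import client, config, watch

def install_for_role(node_name: str, role: str) -> None:
    # Placeholder for the real provisioning call (e.g. a request to BPA).
    print(f"installing software for role {role!r} on {node_name}")

def monitor_new_servers() -> None:
    config.load_kube_config()              # kubeconfig of the infra-local-controller
    v1 = client.CoreV1Api()
    w = watch.Watch()
    for event in w.stream(v1.list_node):   # stream node ADDED/MODIFIED/DELETED events
        if event["type"] != "ADDED":
            continue
        node = event["object"]
        labels = node.metadata.labels or {}
        role = labels.get("icn.akraino.org/role", "worker")
        install_for_role(node.metadata.name, role)

if __name__ == "__main__":
    monitor_new_servers()
```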
...
- SDWAN, Customer Edge, Edge Clouds - deploy VNFs/CNFs and applications as micro-services (completed in the R2 release using a containerized OpenWRT SDWAN)
- DAaaS - Distributed Analytics as a Service
- CDN - Content Delivery Network
Where on the Edge
Business Drivers
...
There are many definitions of cloud native. Is it a way to move faster or to improve performance? A powerful way to scale? A way to reduce operational costs or capital expenditure? Here we explore its meanings and cut through the noise to identify the right cloud-native approach and strategy for new revenue-generating services.
The current best practice is a single, lightweight orchestration engine that maintains the resources in a cluster of compute nodes and can deploy multiple workload types, such as VNFs, CNFs, micro-services and Functions as a Service (FaaS), and can also deploy K8s inside VMs.
Overall Architecture
On an edge deployment, there may be multiple edges that need to be brought up. Having an administrator go to each location and use the infra-local-controller to bring up application-K8S clusters on the compute nodes of each location is not scalable. Therefore, we have an "infra-global-controller" to control multiple "infra-local-controllers", which in turn control the worker nodes. The infra-global-controller is expected to provide a centralized software provisioning and configuration system; it provides a single pane of glass for administering the edge locations with respect to infrastructure. The worker nodes may be bare-metal servers, or they may be virtual machines resident on the infra-local-controller. So the minimum platform configuration is one global controller and one local controller (although the local controller can be run without a global controller).
Since there are several K8S clusters involved, let us define them:
- infra-global-controller-K8S: This is the K8S cluster where the infra-global-controller related containers are run.
- infra-local-controller-K8S: This is the K8S cluster where the infra-local-controller related containers are run, which bring up the compute nodes.
- application-K8S: These are the K8S clusters on the compute nodes, where application workloads are run.
Flows & Sequence Diagrams
- Use the clusterctl command to create the cluster for the cluster-api-provider-baremetal provider. For this step, KuD is required to provide a cluster and to run the machine controller and cluster controller.
- The user's Machine CRDs and Cluster CRDs are configured to instantiate 4 clusters: #0, #1, #2 and #3 (see the sketch after this list).
- The automation script for OOM deployment is triggered to deploy ONAP on cluster #0.
- The KuD addons script is triggered in all edge locations to deploy the K8s App components, the NFV-specific components and the NFVi SDN controller.
- The subscriber or operator requests deployment of a VNF workload, such as SDWAN, through service orchestration.
- ONAP places the workload in the appropriate edge location based on multi-site scheduling and K8s HPA.
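To make the first two steps more concrete, the sketch below registers one cluster definition through the Kubernetes API. It assumes the Cluster-API 0.1.x CRDs under the cluster.k8s.io/v1alpha1 group listed later in this document and a "metal3" namespace; the cluster name and CIDR blocks are illustrative assumptions.

```python
# Minimal sketch: create one Cluster object for the cluster-api-provider-baremetal
# provider. Group/version follow the Cluster-API 0.1.x CRDs listed in this document;
# the namespace, name and CIDR values are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()                      # kubeconfig of the bootstrap (KuD) cluster
api = client.CustomObjectsApi()

cluster = {
    "apiVersion": "cluster.k8s.io/v1alpha1",
    "kind": "Cluster",
    "metadata": {"name": "edge-cluster-0", "namespace": "metal3"},
    "spec": {
        "clusterNetwork": {
            "services": {"cidrBlocks": ["10.96.0.0/12"]},
            "pods": {"cidrBlocks": ["192.168.0.0/18"]},
            "serviceDomain": "cluster.local",
        }
    },
}

api.create_namespaced_custom_object(
    group="cluster.k8s.io",
    version="v1alpha1",
    namespace="metal3",
    plural="clusters",
    body=cluster,
)
```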
...
Each edge location has an infra-local-controller, which has a bootstrap cluster, which has all the components required to boot up the compute cluster.
Platform Architecture
Infra-global-controller:
Administration involves
- First-time bring-up
- Addition of new compute nodes in locations
- Removal of compute nodes from locations
- Software patching
- Software upgrading
The infra-local-controller will be brought up in each location. The infra-local-controller kubeconfig will be made known to the infra-global-controller. Beyond that, everything else is taken care of by the infra-global-controller, which communicates with the various infra-local-controllers to perform software installation and provisioning (a sketch of this multi-site control pattern follows below).
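As a rough illustration of this pattern, and not the actual ICN provisioning-controller code, the sketch below builds one Kubernetes API client per registered infra-local-controller kubeconfig and queries each site; the site names and kubeconfig paths are assumptions.

```python
# Sketch: the infra-global-controller holding one kubeconfig per infra-local-controller
# and talking to each site. Site names and file paths are illustrative assumptions.
from kubernetes import client, config

SITE_KUBECONFIGS = {
    "site-a": "/etc/icn/kubeconfigs/site-a.conf",
    "site-b": "/etc/icn/kubeconfigs/site-b.conf",
}

def list_site_nodes() -> None:
    for site, kubeconfig in SITE_KUBECONFIGS.items():
        api_client = config.new_client_from_config(config_file=kubeconfig)
        v1 = client.CoreV1Api(api_client)
        nodes = [n.metadata.name for n in v1.list_node().items]
        print(f"{site}: {nodes}")

if __name__ == "__main__":
    list_site_nodes()
```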
The infra-global-controller runs in its own K8S cluster. All of its components are containers. The following components are part of the infra-global-controller:
- Provisioning Controller (PC) micro-services
- Binary Provisioning Manager (BPM) micro-services
- K8S Provisioning Manager (KPM) micro-services
- CSM: Certificate and Secret Management related micro-services
- Cluster-API related micro-services
- MongoDB for storing packages and OS images
- Prometheus: monitoring and alerting
Since we expect the infra-global-controller to be reachable from the Internet, it should be secured using:
- ISTIO and Envoy (for internal communication as well as for external communication)
- Store Citadel private keys using CSM.
- Store secrets using SMS of CSM.
Infra-local-controller:
The "infra-local-controller" runs on the bootstrap machine in each location. The Bootstrap is the one which installs the required software in compute nodes used for future workloads. For example, say a location has 10 servers. 1 server can be used as the bootstrap machine and all other 9 servers can be used as compute nodes for running workloads. The Bootstrap machine not only installs all required software in the compute nodes, but is also expected to patch and update compute nodes with newer patched versions of the software.
As you see above in the picture, the bootstrap machine itself is based on K8S. Note that this K8S is different from the K8S that gets installed in compute nodes. That is, these are two different K8S clusters. In case of bootstrap machine, it itself is a complete K8S cluster with one node that has both master and minion software combined. All the components of the infra-local-controller (such as BPA, Metal3 and Ironic) are containers.
Since we expect infra-local-controller is reachable from outside we expect it to be secured using
- ISTIO and Envoy (for internal communication as well as for external communication)
The infra-local-controller is expected to be brought up in one of the following ways:
- As a USB bootable disk: One should be able to take any bare-metal server, insert the USB disk and restart the server. This means that the USB bootable disk shall have a basic Linux, K8S and all containers coming up without any user actions. It must also have the packages and OS images that are required to provision the actual compute nodes. As in the example above, these binaries, OS images and packages are installed on the 9 compute nodes.
- As individual entities: As a developer, one shall be able to use any machine without inserting a USB disk. In this case, the developer can choose a machine as the bootstrap machine, install a Linux OS, install K8S using kubeadm and then bring up BPA, Metal3 and Ironic. Packages are then uploaded to the system via the REST APIs provided by BPA (see the sketch after this list).
- As a KVM/QEMU virtual machine image: One shall be able to use any VM as a bootstrap machine using this image.
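A possible shape of the BPA package-upload call mentioned above is sketched below; the endpoint path, port and form fields are illustrative assumptions rather than the documented BPA REST API.

```python
# Hypothetical sketch of uploading a binary package to the BPA REST agent.
# The URL, port and form fields are assumptions for illustration; consult the
# BPA REST agent documentation for the real API.
import requests

BPA_URL = "http://bootstrap-machine:9015/v1/binary_images"   # assumed endpoint

def upload_package(path: str, name: str, version: str) -> None:
    with open(path, "rb") as f:
        resp = requests.post(
            BPA_URL,
            files={"file": (name, f)},
            data={"name": name, "version": version},
            timeout=60,
        )
    resp.raise_for_status()
    print("uploaded:", resp.status_code)

if __name__ == "__main__":
    upload_package("/tmp/kubeadm_1.15.deb", "kubeadm", "1.15")
```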
Note that the infra-local-controller can be run without the infra-global-controller. In the interim release, only the infra-local-controller is expected to be supported; the infra-global-controller is targeted for the final Akraino R2 release. The goal is that any operations performed manually on the infra-local-controller in the interim release are automated by the infra-global-controller, and hence the interface provided by the infra-local-controller is flexible enough to support both manual and automated actions.
...
Ironic is expected to bring up Linux on the compute nodes. It is also expected to create SSH keys and an SSH user automatically for each compute node. Usernames and passwords are expected to be stored in the SMS of the infra-local-controller for security reasons. BPA is expected to leverage these authentication credentials when it installs the software packages.
CSM is used for storing secrets and performing crypto operations:
- Use PKCS#11.
- If a TPM is present, Citadel keys are expected to be distributed to the TPM, and the TPM is also used for signing operations.
...
Software Platform Architecture
Local Controller: kubeadm, Metal3, Baremetal Operator, Ironic, Prometheus, CSM, ONAP
Global Controller: kubeadm, KuD, K8S Provisioning Manager, Binary Provisioning Manager, Prometheus, CSM
...
The R2 release covers only the infra-local-controller:
Baremetal Operator
One of the major challenges for a cloud administrator managing multiple clusters in different edge locations is coordinating the control-plane configuration of each cluster remotely and managing patches and updates/upgrades across multiple machines. Cluster-API provides declarative APIs to represent clusters and the machines inside a cluster. It provides an abstraction over the common logic found in various cluster providers, such as GKE, AWS and vSphere, and consolidates that logic into functions such as grouping machines for upgrade and an autoscaling mechanism.
In the ICN stack, the Cluster-API bare-metal provider is the Metal3 Baremetal Operator. The Baremetal Operator from the Metal3 project acts as a machine actuator that uses Ironic to provide a K8s API for managing the physical servers that also run Kubernetes clusters on bare-metal hosts. Cluster-API manages the Kubernetes control plane through the Cluster CRD, and the Kubernetes nodes (host machines) through the Machine, MachineSet and MachineDeployment CRDs. It also has an autoscaler mechanism that watches the MachineSet CRD, which is analogous to a K8s ReplicaSet, while the MachineDeployment CRD is analogous to a K8s Deployment. MachineDeployment CRDs are used to roll out software updates and upgrades to the machines.
The Cluster-API provider, together with the Baremetal Operator, is used to provision the physical servers and initialize the Kubernetes cluster with the user's configuration (see the MachineDeployment sketch below).
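To illustrate the Deployment/MachineDeployment analogy, the sketch below scales a MachineDeployment the same way one would scale a K8s Deployment. It assumes the cluster.k8s.io/v1alpha1 CRDs of the Cluster-API 0.1.x release referenced in this document; the object name and namespace are illustrative assumptions.

```python
# Sketch: scale a MachineDeployment (analogous to scaling a K8s Deployment).
# Group/version follow the Cluster-API 0.1.x CRDs; the object name and namespace
# are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

api.patch_namespaced_custom_object(
    group="cluster.k8s.io",
    version="v1alpha1",
    namespace="metal3",
    plural="machinedeployments",
    name="edge-cluster-0-md",
    body={"spec": {"replicas": 3}},   # add/remove physical worker machines
)
```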
KuD
Kubernetes Deployment (KuD) is a project that uses Kubespray to bring up a Kubernetes deployment and some addons on a provisioned machine. As it is already part of ONAP, it can be effectively reused to deploy the K8s App components (as shown in fig. II), the NFV-specific components and the NFVi SDN controller in the edge cluster. In the R2 release, KuD will be used to deploy the K8s addons such as Virtlet, OVN and NFD, and the Intel device plugins such as SRIOV and QAT, in the edge location (as shown in figure I). In the R3 release, KuD will evolve into an "ICN Operator" that installs all K8s addons. For more information on the architecture of KuD, please find the information here.
ONAP on K8s
One of the Kubernetes clusters with high availability, provisioned and configured by Cluster-API, will be used to deploy ONAP on K8s. The ICN family uses the ONAP Operations Manager (OOM) to deploy the ONAP installation. OOM provides a set of Helm charts used to install ONAP on a K8s cluster. The ICN family will create the OOM installation and automate the ONAP installation once a Kubernetes cluster has been configured by Cluster-API (a rough install sketch follows below).
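As a rough illustration only (OOM documents its own deployment workflow and Helm plugins), an automated OOM chart install could be driven as below, assuming a Helm 2.x client and that the ONAP charts have already been built and served from a chart repository registered as "local"; the release and namespace names are assumptions.

```python
# Rough sketch: drive a Helm 2.x install of the OOM "onap" chart from Python.
# Assumes the OOM charts were already built and are served from a chart repository
# registered as "local"; release and namespace names are illustrative assumptions.
import subprocess

def install_onap(release: str = "dev", namespace: str = "onap") -> None:
    subprocess.run(
        ["helm", "install", "local/onap",
         "--name", release,
         "--namespace", namespace],
        check=True,
    )

if __name__ == "__main__":
    install_onap()
```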
ONAP Block and Modules:
ONAP will be the Service Orchestration Engine in the ICN family and is responsible for VNF life cycle management, tenant management, tenant resource quota allocation, and managing the Resource Orchestration Engine (ROE) to schedule VNF workloads with multi-site scheduler awareness and Hardware Platform Abstraction (HPA). An Akraino dashboard that sits on top of ONAP is required to deploy the VNFs.
Kubernetes Block and Modules:
Kubernetes will be the Resource Orchestration Engine in the ICN family, managing network, storage and compute resources for the VNF applications. The ICN family will use multiple container runtimes, such as Virtlet, Kata Containers, Kubevirt and gVisor. Each release supports different container runtimes, focused on specific use cases.
Kubernetes module
The Kubernetes module is divided into three groups - K8s App components, NFV-specific components and NFVi SDN controller components - all of which are installed using KuD addons.
...
NFV-specific components: This block is responsible for K8s compute management to support both software and hardware acceleration (including network acceleration), with CPU pinning and device plugins such as QAT, FPGA, SRIOV and GPU (a pod-spec sketch follows these component descriptions).
SDN controller components: This block is responsible for managing the SDN controller and for providing additional features such as Service Function Chaining (SFC) and a network route manager.
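As an example of how a workload consumes these accelerators once the device plugins are deployed, a pod requests them as extended resources. The resource names below are common defaults of the Intel SRIOV and QAT device plugins but are configurable per deployment, so treat them and the container image as assumptions.

```python
# Sketch: request SRIOV and QAT extended resources exposed by the Intel device
# plugins. The resource names are plugin-configurable and shown here only as
# common defaults; adjust them to the names advertised on the nodes.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="accel-test"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="ubuntu:18.04",
                command=["sleep", "infinity"],
                resources=client.V1ResourceRequirements(
                    limits={
                        "intel.com/intel_sriov_netdevice": "1",
                        "qat.intel.com/generic": "1",
                    }
                ),
            )
        ]
    ),
)

v1.create_namespaced_pod(namespace="default", body=pod)
```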
...
...
Components | Link | Akraino Release target
Cluster-API | https://github.com/kubernetes-sigs/cluster-api - 0.1.0 | R2
Cluster-API-Provider-bare metal | https://github.com/metal3-io/cluster-api-provider-baremetal | R2
Provision stack - Metal3 | https://github.com/metal3-io/baremetal-operator/ | R2
Host Operating system | Ubuntu 18.04 | R2
Quick Access Technology (QAT) drivers | Intel® C627 Chipset - https://ark.intel.com/content/www/us/en/ark/products/97343/intel-c627-chipset.html | R2
NIC drivers | XL710 - https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xl710-10-40-controller-datasheet.pdf | R2
ONAP | Latest release 3.0.1-ONAP - https://github.com/onap/integration/ | R2
Workloads | OpenWRT SDWAN - https://openwrt.org/, Distributed Analytics as a Service, EdgeXFoundry use case, VR 360 streaming | R3
KUD | https://git.onap.org/multicloud/k8s/ | R2
...
ICN uses the Metal3 project for provisioning servers in the edge locations. The ICN project uses the IPMI protocol to identify the servers in the edge locations, and uses Ironic and Ironic Inspector to provision the OS on them. For the R2 release, the ICN project provisions Ubuntu 18.04 on each server, and uses separate networks, a provisioning network and a bare-metal network, for inspection and IPMI provisioning (see the sketch below).
The ICN project injects user data into each server for network configuration and remote command execution over SSH, and maintains a common secure mechanism for provisioning all the servers. Each local controller maintains IP address management for its edge location. For more information refer to: Metal3 Baremetal Operator in ICN stack
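To make the server-registration step concrete, the sketch below creates a Metal3 BareMetalHost object so that the Baremetal Operator and Ironic can inspect and provision the server over IPMI. The field names follow the Metal3 BareMetalHost CRD, while the host name, namespace, BMC address and credentials secret are illustrative assumptions.

```python
# Sketch: register one physical server as a Metal3 BareMetalHost so that Ironic can
# inspect and provision it over IPMI. The names, BMC address and secret are
# illustrative assumptions; the referenced secret must hold the BMC username/password.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

host = {
    "apiVersion": "metal3.io/v1alpha1",
    "kind": "BareMetalHost",
    "metadata": {"name": "edge-node-1", "namespace": "metal3"},
    "spec": {
        "online": True,
        "bootMACAddress": "00:1a:2b:3c:4d:5e",
        "bmc": {
            "address": "ipmi://10.10.10.11",
            "credentialsName": "edge-node-1-bmc-secret",
        },
    },
}

api.create_namespaced_custom_object(
    group="metal3.io",
    version="v1alpha1",
    namespace="metal3",
    plural="baremetalhosts",
    body=host,
)
```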
BPA Operator: ICN Architecture Document
BPA Rest Agent: ICN Architecture Document
KUD: ICN Architecture Document
ONAP4K8s: ICN Architecture Document
SDWAN: The SDWAN module works as a software-defined router which can be used to define the rules applied when connecting to the external Internet. It is implemented as a CNF instead of a VNF for better performance and more efficient deployment. It leverages OpenWRT (an open source Linux-based OS used on embedded devices to route network traffic) and the mwan3 package (for WAN interface management) to implement its functionality. Detailed information can be found at: SDWAN Module Design
Components | Link | Akraino Release target
Kubespray | https://github.com/kubernetes-sigs/kubespray | R2
K8s | https://github.com/kubernetes/kubeadm - v1.15 | R2
Docker | https://github.com/docker - 18.09 | R2
Virtlet | https://github.com/Mirantis/virtlet - 1.4.4 | R2
SDN - OVN | https://github.com/ovn-org/ovn-kubernetes - 0.3.0 | R2
OpenvSwitch | https://github.com/openvswitch/ovs - 2.10.1 | R2
Ansible | https://github.com/ansible/ansible - 2.7.10 | R2
Helm | https://github.com/helm/helm - 2.9.1 | R2
Istio | https://github.com/istio/istio - 1.0.3 | R2
Kata container | https://github.com/kata-containers/runtime/releases - 1.4.0 | R3
Kubevirt | https://github.com/kubevirt/kubevirt/ - v0.18.0 | R3
Collectd | | R2
Rook/Ceph | https://rook.io/docs/rook/v1.0/helm-operator.html - v1.0 | R2
MetalLB | https://github.com/danderson/metallb/releases - v0.7.3 | R3
Kube-Prometheus | https://github.com/coreos/kube-prometheus - v0.1.0 | R2
OpenNESS | Will be updated soon | R3
Multi-tenancy | https://github.com/kubernetes-sigs/multi-tenancy | R2
Knative | https://github.com/knative | R3
Device Plugins | https://github.com/intel/intel-device-plugins-for-kubernetes - QAT, SRIOV | R2
Device Plugins | https://github.com/intel/intel-device-plugins-for-kubernetes - FPGA, GPU | R3
Node Feature Discovery | | R2
CNI | https://github.com/coreos/flannel/ - release tag v0.11.0, https://github.com/containernetworking/cni - release tag v0.7.0, https://github.com/containernetworking/plugins - release tag v0.8.1, https://github.com/containernetworking/cni#3rd-party-plugins - Multus v3.3tp, SRIOV CNI v2.0 (with SRIOV Network Device plugin) | R2
Conformance Test for K8s | | R2
APIs
APIs with reference to Architecture and Modules
High-level definitions of the APIs are stated here; the full API definitions are available in the API documentation.
Hardware and Software Management
...
Licensing
GNU/common license
...