OpenNESS 19.12 Design
OpenNESS released 19.12 on December 21, 2019. This release removed the (Kubernetes + NTS) deployment mode, so two modes are now supported: Native Deployment Mode (based on pure docker/libvirt) and Infrastructure Deployment Mode (based on kube-ovn). Below is a brief summary of the differences between these two modes:
Functionality | Native Deployment Mode | Infrastructure Deployment Mode |
---|---|---|
Usage Scenarios | On-Premises Edge | Network Edge |
Infrastructure | Virtualization base: docker/libvirt; Orchestration: OpenNESS controller; Network: docker network (container) + NTS (through a newly added KNI interface) | Orchestration: Kubernetes; Network: kube-ovn CNI |
Micro-Services in OpenNESS Controller | Web UI: controller UI; Edge node / edge application lifecycle management; Core network configuration; Telemetry | Core network configuration: configure the access network (e.g., LTE/CUPS, 5G) control plane; Telemetry |
Micro-Services in OpenNESS Node | EAA: application/service registration, authentication, etc.; ELA/EVA/EDA: used by the controller to configure host interfaces, network policy (used by NTS), create/destroy applications, etc.; DNS: for clients to access microservices in the edge node; NTS: traffic steering | EAA: application/service registration, authentication, etc.; EIS (Edge Interface Service): appears to be similar to the provider network implemented in the ovn4nfv k8s CNI; DNS: for clients to access microservices in the edge node |
Application on-boarding | OpenNESS Controller Web UI or RESTful API | Kubernetes (e.g. kubectl apply -f application.yaml). Note: unlike 19.09, no UI is used to on-board applications |
Edge node interface configuration | ELA (Edge Lifecycle Agent, implemented by OpenNESS), configured by the OpenNESS controller | EIS (Edge Interface Service, a kubectl extension to configure the edge node host network adapters), e.g. kubectl interfaceservice attach $NODE_NAME $PCI_ADDRESS |
Traffic policy configuration | EDA (Edge Dataplane Agent, implemented by OpenNESS), configured by the OpenNESS controller | Kubernetes NetworkPolicy CRD, e.g. kubectl apply -f network_policy.yml. Note: unlike 19.09, no UI is used to configure policy |
Dataplane service | NTS (implemented based on DPDK in OpenNESS), which provides an additional KNI interface for containers | kube-ovn + network policy |
Gap Analysis for Integrating OpenNESS with ICN
Network Policy
Network policy and DNS are used for traffic steering. A network policy restricts access among services but does not "proactively" forward traffic, while the OpenNESS DNS service can help "redirect" an external client's traffic to the edge application service.
By default, in a Network Edge environment, all ingress traffic is blocked (services running inside of deployed applications are not reachable) and all egress traffic is enabled (pods are able to reach the internet). The following NetworkPolicy definition is used:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-all-ingress
  namespace: default    # selects default namespace
spec:
  podSelector: {}       # matches all the pods in the default namespace
  policyTypes:
  - Ingress
  ingress: []           # no rules allowing ingress traffic = ingress blocked
```
An admin can enable access to a specific service by applying a NetworkPolicy CRD. For example:
1. To deploy a Network Policy allowing ingress traffic on port 5000 (TCP and UDP) from the 192.168.1.0/24 network to the OpenVINO consumer application pod, create the following specification file for this Network Policy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: openvino-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: openvino-cons-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.1.0/24
    ports:
    - protocol: TCP
      port: 5000
    - protocol: UDP
      port: 5000
```
2. Create the Network Policy:
kubectl apply -f network_policy.yml
DNS
The DNS service can help "redirect" an external client's traffic to the edge application service. This gap analysis investigates whether OpenNESS DNS can be used for ICN traffic steering.
OpenNESS provides a DNS server which resolves a microservice's IP address from its FQDN. OpenNESS extends the kubectl utility with a kubectl edgedns command to set/delete DNS entries. For example:
- Define a file openvino-dns.json with the content below:

```json
{
  "record_type":"A",
  "fqdn":"openvino.openness",
  "addresses":["10.16.0.10"]
}
```

- Then use the command below to add an entry to the OpenNESS DNS server:
kubectl edgedns set <edge_node_host_name> openvino-dns.json
Below are implementation details of the OpenNESS DNS server:
- Runs as an independent process/container in each edge node: ./edgednssvr -port 53 -fwdr=8.8.8.8 -db XXX.db (port: DNS server port; fwdr: forwarder IP used when an FQDN cannot be found in the OpenNESS DNS DB; db: OpenNESS DB file)
- Provides two servers once running:
  - Control server: a gRPC/IP-based API that receives DNS record add/remove requests; the OpenNESS controller can call this interface to add DNS records
  - DNS server: the DNS service is based on https://github.com/miekg/dns
- DNS process flow: after receiving a DNS request, it first tries to find the FQDN in the local OpenNESS DNS DB; if not found, it forwards the request to an external forwarder (default 8.8.8.8, set by the "-fwdr" parameter), as illustrated by the sketch below
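To illustrate the process flow, here is a minimal Go sketch using the same miekg/dns library. The localDB map, the record it contains, and the forwarder address are illustrative placeholders, not the actual OpenNESS edgedns implementation:

```go
package main

import (
	"log"

	"github.com/miekg/dns"
)

// localDB is an illustrative stand-in for the OpenNESS DNS database.
var localDB = map[string]string{
	"openvino.openness.": "10.16.0.10",
}

const forwarder = "8.8.8.8:53" // external forwarder (the -fwdr parameter)

func handle(w dns.ResponseWriter, req *dns.Msg) {
	q := req.Question[0]
	if ip, ok := localDB[q.Name]; ok && q.Qtype == dns.TypeA {
		// FQDN found in the local DB: answer directly.
		m := new(dns.Msg)
		m.SetReply(req)
		if rr, err := dns.NewRR(q.Name + " 3600 IN A " + ip); err == nil {
			m.Answer = append(m.Answer, rr)
		}
		w.WriteMsg(m)
		return
	}
	// Not found locally: forward the query to the external forwarder.
	resp, err := dns.Exchange(req, forwarder)
	if err != nil {
		dns.HandleFailed(w, req)
		return
	}
	w.WriteMsg(resp)
}

func main() {
	dns.HandleFunc(".", handle)
	log.Fatal((&dns.Server{Addr: ":53", Net: "udp"}).ListenAndServe())
}
```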
The OpenNESS DNS service differs from Kubernetes' CoreDNS because they support different usages:
- CoreDNS: provides DNS service within the K8s cluster, e.g. an app running in a container finds a service also running in a container of the same cluster.
- OpenNESS DNS: provides DNS service for an app on an external host (not running in the edge cluster) to find an app in the K8s cluster; that app may not be a K8s service, so its IP may not be recorded in CoreDNS. For example, in the OpenNESS OpenVINO demo the video stream generator runs on a separate host, and the admin must manually point its DNS server IP to the OpenNESS edge node DNS server (by adding a new name server in /etc/resolv.conf) so that it knows where to send the stream.
Cross-Node communication
Edge apps can be divided into producers and consumers. This gap analysis investigates the communication between producers and consumers located on different edge nodes.
Edge applications must introduce themselves to the OpenNESS framework and identify whether they would like to activate new edge services or consume an existing service. The Edge Application Agent (EAA) component is the handler of all the edge applications hosted by the OpenNESS edge node and acts as their point of contact.
OpenNESS-awareness involves (a) authentication, (b) service activation/deactivation, (c) service discovery, (d) service subscription, and (e) Websocket connection establishment. The Websocket connection provides a channel through which EAA forwards notifications to pre-subscribed consumer applications. Notifications are generated by "producer" edge applications and absorbed by "consumer" edge applications.
The sequence of operations for the producer application:
- Authenticate with OpenNESS edge node
- Activate new service and include the list of notifications involved
- Send notifications to OpenNESS edge node according to business logic
The sequence of operations for the consumer application:
- Authenticate with OpenNESS edge node
- Discover the available services on OpenNESS edge platform
- Subscribe to services of interest and listen for notifications
Edge apps access eaa through eaa.openness (name.namespace), which is a Kubernetes service:
https://github.com/open-ness/edgecontroller/blob/master/kube-ovn/openness.yaml#L18
For example, as the following links show, the OpenVINO consumer accesses http://eaa.openness:443/auth for authentication; a simplified sketch of this client-side sequence is shown after the links.
https://github.com/open-ness/edgeapps/blob/master/openvino/consumer/cmd/main.go#L24
https://github.com/open-ness/edgeapps/blob/master/openvino/consumer/cmd/main.go#L66
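As a rough illustration of that sequence, below is a hedged Go sketch of a consumer authenticating against EAA and then discovering services. The TLS handling, the request payload field ("csr"), and the /services path are assumptions for illustration only; refer to the linked OpenVINO consumer source for the actual EAA API usage.

```go
package main

import (
	"bytes"
	"crypto/tls"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// eaaURL is the EAA Kubernetes service (name.namespace); TLS is assumed here.
const eaaURL = "https://eaa.openness:443"

func main() {
	// For this sketch only, skip server certificate verification; a real
	// consumer validates EAA using the CA chain obtained during authentication.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	// 1. Authenticate with the OpenNESS edge node (assumed payload shape).
	identity, _ := json.Marshal(map[string]string{"csr": "<PEM-encoded CSR>"})
	authResp, err := client.Post(eaaURL+"/auth", "application/json", bytes.NewReader(identity))
	if err != nil {
		log.Fatal(err)
	}
	defer authResp.Body.Close()
	fmt.Println("auth status:", authResp.Status)

	// 2. Discover the available services (assumed endpoint path).
	svcResp, err := client.Get(eaaURL + "/services")
	if err != nil {
		log.Fatal(err)
	}
	defer svcResp.Body.Close()
	fmt.Println("service discovery status:", svcResp.Status)

	// 3. Subscribe to services of interest and open the Websocket channel
	//    for notifications (omitted from this sketch).
}
```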
eaa is deployed as a Deployment, and only one eaa instance is deployed:
https://github.com/open-ness/edgecontroller/blob/master/kube-ovn/openness.yaml#L41
Because all edge apps access this single eaa instance, it does not matter that eaa is stateful.
For example, only one eaa is deployed, on node1. producer1 and producer2 activate their new services with eaa, and consumer1 and consumer2 consume the services stored in eaa. Because all the information is stored in a single eaa, there are no consistency issues.

node1: eaa, producer1, consumer1
node2: producer2, consumer2

Because edge apps on different edge nodes can all access the eaa service, a consumer can consume a service provided by a producer on a different node.

For example, producer1 is located on node1 and consumer2 is located on node2. The networking flow is:

producer1 -> service eaa -> pod eaa
consumer2 -> service eaa -> pod eaa

node1: eaa, producer1
node2: consumer2
OS (Ubuntu)
OpenNESS only supports CentOS, but ICN is based on Ubuntu 18.04. This gap analysis investigates how to deploy OpenNESS on Ubuntu 18.04.
By changing the Ansible scripts of OpenNESS, it is possible to deploy OpenNESS on Ubuntu 18.04. The following parts of the Ansible scripts need to change:
1. The following Ansible roles can be removed for the OpenNESS master: grub, cnca, multus, nfd. The grub role can also be removed for the OpenNESS node. The reasons are:
- grub is used to add hugepages to the grub configuration, and hugepages are not needed for integrating OpenNESS with ICN.
- cnca is not required for the integration.
- multus has already been integrated with ICN.
- nfd will be integrated directly with ICN.
2. CentOS uses yum to install packages; we need to use apt for Ubuntu.
3. Some packages installed by the Ansible scripts should be removed or replaced:
- Some CentOS packages do not exist on Ubuntu and should be removed, for example yum-utils and device-mapper-persistent-data.
- Some CentOS package names differ on Ubuntu; for example, python2-pip should be replaced with python-pip, and python-devel with python-dev.
4. SELinux is not used on Ubuntu, so the Ansible scripts configuring SELinux need to be removed.
5. The EPEL repository is for CentOS; Ubuntu does not need it.
6. The proxy is set for yum; the scripts need to be changed to set the proxy for apt instead.
7. Docker installation differs between CentOS and Ubuntu. The scripts need to be changed to follow the installation guide; for example, the Docker package repository is different for CentOS and Ubuntu.
8. auditd is used for Docker. auditd is delivered with CentOS by default, but on Ubuntu it needs to be installed.
9. Kubernetes installation differs between CentOS and Ubuntu. The scripts need to be changed to follow the installation guide; for example, the GPG key is different, and Ubuntu uses a deb repository while CentOS uses an rpm repository.
10. The cgroup driver differs between CentOS (systemd) and Ubuntu (cgroupfs). Since the default driver is cgroupfs, the Ansible scripts which configure the cgroup driver to systemd need to be removed.
11. firewalld is used on CentOS; this needs to change to ufw, which is used by Ubuntu.
12. The packages for installing Open vSwitch and OVN are different: CentOS uses RPMs, while Ubuntu uses openvswitch-switch, ovn-common, ovn-central and ovn-host.
13. Topology Manager and CPU Manager are configured for the edge node's kubelet. There is no need to use Topology Manager, so these configurations can be removed.
OpenNESS Integration Design
OpenNESS Microservices
We are planning to integrate the OpenNESS Infrastructure mode. The following table shows the microservices of OpenNESS Infrastructure mode and lists the microservices that we propose to integrate.
Microservices of OpenNESS Infrastructure mode | Description | Deployment method | Deployed on | Propose to integrate |
---|---|---|---|---|
eaa | application/service registration, authentication etc | deployment | edge node | yes |
edgedns | for client to access microservices in edge node | daemonset (propose to change to deployment) | edge node | yes |
interfaceservice | similar to the provider network implemented in ovn4nfv-k8s-plugin | daemonset | edge node | no, will use ovn4nfv-k8s-plugin's provider network |
cnca | Core Network Configuration: configure the access network (e.g., LTE/CUPS, 5G) control plane | | controller | no |
syslog | log service for openness | daemonset | controller & edge node | no |
multus | enabling attaching multiple network interfaces to pods | daemonset | controller & edge node | no, will be covered by ONAP4K8S |
nfd | node feature discovery | daemonset | controller & edge node | no, will be covered by ONAP4K8S |
sriov | sriov network device plugin & sriov cni | daemonset | controller & edge node | no, will be covered by ONAP4K8S |
topology manager | kubernetes topology manager | Kubelet component | controller & edge node | no, will be covered by ONAP4K8S |
CMK | CPU Manager | part of kubelet | controller & edge node | no, will be covered by ONAP4K8S |
bios | BIOS and firmware configuration using the Intel® System Configuration Utility, a command-line utility that can be used to save and restore BIOS and firmware settings to a file or to set and display individual settings | privileged pod | controller & edge node | |
fpga | The Open Programmable Acceleration Engine (OPAE) package, consisting of a kernel driver and a user space FPGA utils package that enables programming of the FPGA. sriov is used to configure FPGA resources such as Virtual Functions and queues | pod | controller & edge node | |
OpenNESS DNS config agent design
OpenNESS extends the kubectl command line to set edgedns (described in the OpenNESS 19.12 investigation part). To integrate OpenNESS with ICN, we will not use this; instead, we will create a config agent to set the edge DNS. This config agent will monitor the CRD below:
```yaml
apiVersion: openness.akraino.org/v1alpha1
kind: Opennessdns
metadata:
  name: example-dns
spec:
  node: node1
  dns:
  - record_type: A
    fqdn: openvino.openness
    addresses: 10.16.0.10
  - record_type: A
    fqdn: www.google.com
    addresses: 10.16.0.11
```
The config agent behavior
- Monitor the OpenNESS edge DNS CRD
- When a CRD instance is created:
  - Call edgedns on the specified edge node to set the DNS records
- When a CRD instance is deleted:
  - Call edgedns on the specified edge node to delete the DNS records (a minimal sketch of this logic is shown below)
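Below is a minimal Go sketch of the config agent's reconcile logic for the Opennessdns CRD. The OpennessDNS/DNSRecord types and the EdgeDNSClient interface are hypothetical stand-ins; the real agent would use types generated from the CRD and would call the edgedns gRPC control API on the target node.

```go
package main

// DNSRecord mirrors one entry of the CRD spec above (hypothetical Go type).
type DNSRecord struct {
	RecordType string
	FQDN       string
	Addresses  string
}

// OpennessDNS mirrors the Opennessdns CRD spec (node + list of records).
type OpennessDNS struct {
	Name string
	Node string
	DNS  []DNSRecord
}

// EdgeDNSClient abstracts the control API exposed by the edgedns server
// running on each edge node (a gRPC API in OpenNESS).
type EdgeDNSClient interface {
	Set(node string, rec DNSRecord) error
	Delete(node string, rec DNSRecord) error
}

// onCreated handles a newly created CRD instance: push every record in the
// spec to the edgedns server on the specified node.
func onCreated(c EdgeDNSClient, cr OpennessDNS) error {
	for _, rec := range cr.DNS {
		if err := c.Set(cr.Node, rec); err != nil {
			return err
		}
	}
	return nil
}

// onDeleted handles a deleted CRD instance: remove the corresponding records
// from the edgedns server on that node.
func onDeleted(c EdgeDNSClient, cr OpennessDNS) error {
	for _, rec := range cr.DNS {
		if err := c.Delete(cr.Node, rec); err != nil {
			return err
		}
	}
	return nil
}

func main() {} // the real agent would wire these handlers into a CRD watch loop
```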
Task List
- Ansible scripts to build microservice Docker images and push them to a Docker repository
- Use Helm charts to run the microservices in Kubernetes