...

Networking and HW Prerequisites

For a virtual deploy, the minimum hardware requirement is 1 baremetal server (either x86_64 or aarch64) hosting 3 VMs.

For a baremetal deploy, the minimum hardware requirement is 3 baremetal servers.

Networking requirements - TBD

Methods of Installation

To address a large variety of setups, multiple methods of deployment should be supported. Deployment works on both x86_64 and aarch64 hardware.

Method: Manual installation
  Pros (current state):
  • Full control over each step
  • Easy to understand and replicate
  • Already available (see next chapter on this page)
  Cons (current state):
  • Requires user intervention
  • Requires certain prerequisites be met on cluster nodes a priori
  Prerequisites:
  • preinstalled operating system (Ubuntu 16.04/18.04) on all involved nodes

Method: Script-based installation
  Pros (current state):
  • High degree of flexibility via arguments
  • Portable
  • Can be used in CI/CD, assuming baremetal nodes are pre-provisioned, e.g. for shorter test cycles like a patch verify job where we'd want to avoid reinstalling the operating system each time
  Cons (current state):
  • Implementation currently in progress
  • Fixed number of nodes (1 master + 1 worker)
  • Requires certain prerequisites be met on cluster nodes a priori
  Prerequisites:
  • preinstalled operating system (Ubuntu 16.04/18.04) on all involved nodes
  • user with passwordless sudo access already available on the target nodes (see the example below)

Method: OPNFV-based installer(s)
  Pros (current state):
  • Unified and standardized input configuration files (PDF/IDF)
  • Can be used in CI/CD
  • Can handle OS provisioning on its own, for virtual, baremetal or hybrid PODs
  Cons (current state):
  • Not yet implemented
  • Requires hardware descriptor files (PDF/IDF)
  Prerequisites:
  • Jumpserver (installer) node preinstalled
  • XDF (PDF/IDF) available for the target lab

Method: Heat stack
  Pros (current state):
  • Portable
  Cons (current state):
  • Not tested on aarch64 yet
  • Uses VMs rather than baremetal
  Prerequisites:
  • Openstack cloud preinstalled

Method: Other installer solutions (e.g. Airship)
  Pros (current state):
  • Alignment with industry standard installer solutions for K8s
  Cons (current state):
  • Not implemented
  • More complex design and configuration
  • Might be overkill for IEC, at least with the current requirements
  Prerequisites:
  • TBD
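
The manual and script-based methods above assume a user with passwordless sudo already exists on every cluster node. A minimal sketch of preparing such a user on Ubuntu is shown below; the user name and password are only placeholders (the script-based example later on this page happens to use iec/123456), not values required by IEC.

Code Block
languagebash
   # Hypothetical example: create a passwordless-sudo user on each target node
   $ sudo useradd -m -s /bin/bash iec
   $ echo "iec:123456" | sudo chpasswd              # placeholder password
   $ echo "iec ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/iec
   $ sudo chmod 0440 /etc/sudoers.d/iec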


Kubernetes Install for Ubuntu

...

Please use the following commands to install the etcd database.

Code Block
languagebash
   $ wget https://raw.githubusercontent.com/iecedge/iec/master/src/foundation/scripts/cni/calico/etcd.yaml
   $ sed -i "s/10.96.232.136/${CLUSTER_IP}/" ./etcd.yaml
   $ kubectl apply -f etcd.yaml
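
As a quick sanity check after applying the manifest (a sketch, assuming the etcd pod and service are created in the kube-system namespace, as the hosted Calico etcd manifest does):

Code Block
languagebash
   # The etcd pod should be Running and its service should expose ${CLUSTER_IP}
   $ kubectl get pods -n kube-system | grep etcd
   $ kubectl get svc -n kube-system | grep etcd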

Install the RBAC Roles required for Calico

Code Block
languagebash
   $ kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/rbac.yaml
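
To confirm the RBAC objects were created (a quick check; the exact names come from the upstream rbac.yaml and typically include calico-node and calico-kube-controllers):

Code Block
languagebash
   # List the Calico cluster roles and bindings created by rbac.yaml
   $ kubectl get clusterroles | grep calico
   $ kubectl get clusterrolebindings | grep calico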

Install Calico to system

First, we should get the configuration file from the web site and modify the corresponding image from the amd64 to the arm64 version. Then, using kubectl, the Calico pod will be created.

Code Block
languagebash
   $ wget https://raw.githubusercontent.com/iecedge/iec/master/src/foundation/scripts/cni/calico/calico.yaml

Since the "quay.io/calico" image repo does not support multi-arch, we have to replace the "quay.io/calico" image path with "calico", which supports multi-arch. We also need to point Calico at the etcd cluster IP and set the pod network CIDR.

Code Block
languagebash
   $ sed -i "s/quay.io\/calico/calico/" ./calico.yaml
   $ sed -i "s@10.96.232.136@${CLUSTER_IP}@; s@192.168.0.0/16@${POD_NETWORK_CIDR}@" ./calico.yaml
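
Before deploying, the substitutions can be verified with a quick grep (a sketch; etcd_endpoints and CALICO_IPV4POOL_CIDR are the keys used by the upstream manifest):

Code Block
languagebash
   # etcd_endpoints should now reference ${CLUSTER_IP}; the pool should match ${POD_NETWORK_CIDR}
   $ grep -A1 -E "etcd_endpoints|CALICO_IPV4POOL_CIDR" ./calico.yaml
   # No quay.io/calico references should remain after the image path replacement
   $ grep -c "quay.io/calico" ./calico.yaml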

Deploy Calico using the following command:

...

We provide a walk-through shell script (described in the following chapter) to automate the installation of Kubernetes and Calico, but this README is still useful for IEC developers and users.

...

Jianlin Lv: jianlin.lv@arm.com


...

Installation

Script Based Installation

The Akraino IEC repository now provides an automated method, based on shell scripts, that handles all of the steps above.

Prerequisites:

  • 2 nodes (virtual machines or baremetal) with a preinstalled operating system (Ubuntu 16.04/18.04) and a passwordless-sudo capable user on them (password-based login via SSH enabled);

The following snippet will automatically handle all the steps described in the previous chapter:

Code Block
languagebash
     $ git clone https://gerrit.akraino.org/r/iec
     # iec/scripts/startup.sh [master ip] [worker ip] [user] [password]
     $ iec/scripts/startup.sh 10.169.40.171 10.169.41.172 iec 123456
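
Once the script finishes, a quick sanity check from the master node (assuming kubectl has been configured for your user) might look like:

Code Block
languagebash
     # Both nodes should report Ready and the Calico pods should be Running
     $ kubectl get nodes -o wide
     $ kubectl get pods --all-namespaces | grep -i calico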

OPNFV Installers

OPNFV Fuel

The OPNFV Fuel installer can be leveraged to automate the IEC prerequisites setup (e.g. baremetal operating system provisioning for baremetal clusters), as well as the IEC installation itself.

Prerequisites:

  • 1 jumpserver node with a preinstalled operating system (Ubuntu 16.04/18.04 or CentOS7) - will also be used as a hypervisor for the IEC VMs - for single hypervisor deployments;
  • 1 jumpserver node with preinstalled operating system + 3 baremetal nodes for multiple hypervisor deployments;

Supported configurations include, but are not limited to:

  • single hypervisor node running 3 VMs dedicated to IEC;
  • 3 baremetal nodes dedicated to IEC (K8s directly on baremetal);
  • 3 baremetal nodes running a virtual control plane (each baremetal node has 1 VM dedicated to IEC);

Upstream Fuel patch is currently undergoing final stages of review and is expected to be merged soon.

Once the Fuel patch lands upstream, deploying IEC (including handling its prerequisites, like creating the required VMs on the hypervisor) can be done using (e.g. for an AArch64 single-hypervisor POD):

Code Block
languagebash
     $ git clone -b stable/hunter https://github.com/opnfv/fuel
     $ fuel/ci/deploy.sh -l arm -p virtual2 -s k8-nosdn-iec-noha -S /var/lib/opnfv/tmpdir/ -D |& tee deploy.log

Heat Orchestration Templates

Prerequisites:

  • OpenStack Ocata or later

Recommended configuration:

  • 2 or more compute nodes with enough RAM and disk (128 GB of RAM, 2 TB disk space)
  • DPDK is optional but recommended

The scripts and templates can be found in the Akraino iec git repository:

Code Block
languagebash
titleIEC HOT
$ git clone https://gerrit.akraino.org/r/iec
$ cd iec/src/foundation/hot
# [has_dpdk=true] [skip_k8s_net=1] [skip_k8s_master=1] [skip_k8s_slaves=1] external_net=<external_net> ./control.sh <start|stop>
$ has_dpdk=true external_net=external ./control.sh start

More useful information can be found in the README in the same directory.