
This project offers a means for deploying a Kubernetes cluster that satisfies the requirements of ONAP multicloud/k8s plugin.

Its Ansible playbooks allow provisioning a deployment on virtual machines and on baremetal.

KuD facilitates virtual deployment using Vagrant and Baremetal deployment using the All-in-one script.

Figure: installer workflow (installer_workflow.png)

Components

| Name | Description | Source | Status |
| --- | --- | --- | --- |
| Kubernetes | Base Kubernetes deployment | kubespray | Done |
| ovn4nfv | Integrates Opensource Virtual Networking | configure-ovn4nfv.yml | Tested |
| Virtlet | Allows running VMs | configure-virtlet.yml | Tested |
| Multus | Provides multiple network support in a pod | configure-multus.yml | Tested |
| NFD | Node Feature Discovery | configure-nfd.yml | Tested |
| Istio | Service mesh platform | configure-istio.yml | Tested |

Deployment

To deploy KuD independent of ICN please refer to the documents/instructions here.

The items and code blocks in play are listed below for clarity.

To get KuD's offline mode working, the following steps are required:


  1. Get all the dependency packages and resolve the dependencies in the right order
  2. Install the basic components for KuD
  3. Run the installer script
  4. Install Docker
  5. Install Ansible
  6. Get the Kubespray version prescribed by KuD
  7. Get the correct versions of kubeadm, etcd, hyperkube, and CNI
  8. Get the Docker images used by Kubespray
    1. Load them, making sure the right versions are available
  9. We had to override the following defaults in Kubespray:
    1. download_run_once: true
    2. download_localhost: true
    3. skip_downloads: true
    4. strict_dns_check: false
    5. update_cache: false
    6. helm_client_refresh: false
  10. Get the Ansible Galaxy requirements
  11. Get the Galaxy requirements' dependencies
  12. Run the roles
  13. Get the add-ons
  14. Modify the Ansible script to not pull from the web, use the release directory instead, and run the playbook (tested for Multus)
  15. Run all the add-on playbooks
  16. KuD offline is done; run the test cases to verify
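The Kubespray defaults overridden in step 9 can be collected into a single extra-vars file and passed to ansible-playbook with `-e @offline-overrides.yml`. The file name is illustrative; the variable names follow Kubespray's lower-case convention and should be verified against the release-2.8 defaults:

```yaml
# offline-overrides.yml -- sketch of the Kubespray defaults overridden for
# an offline run (names and values from the list above; verify against
# the release-2.8 role defaults before use)
download_run_once: true      # fetch artifacts once rather than on every node
download_localhost: true     # stage downloads on the Ansible host
skip_downloads: true         # skip online downloads when artifacts are pre-staged
strict_dns_check: false      # don't fail DNS checks without an external resolver
update_cache: false          # don't refresh package caches against online mirrors
helm_client_refresh: false   # don't refresh Helm repositories over the network
```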
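Step 1 above (resolving the dependency packages in the right install order) is essentially a topological sort of the package graph. A minimal sketch using Python's standard-library `graphlib`; the package names and edges below are illustrative, not KuD's actual dependency list:

```python
# Topological sort of a package dependency graph: dependencies come out
# before the packages that need them, which is the order an offline
# installer must follow. Package names here are illustrative only.
from graphlib import TopologicalSorter  # Python 3.9+

deps = {
    "docker-ce": ["containerd.io", "docker-ce-cli"],
    "docker-ce-cli": [],
    "containerd.io": [],
    "kubeadm": ["kubelet", "kubectl"],
    "kubelet": [],
    "kubectl": [],
}

# static_order() yields every dependency before its dependents.
install_order = list(TopologicalSorter(deps).static_order())
print(install_order)
```

In practice the dependency map would be built from the apt/yum metadata of the mirrored packages rather than written by hand.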

Challenges

The values listed above are the changes that can be overridden in k8s-cluster.yml. However, some of the changes did not have this option, and in those cases we had to manually change defaults in the Kubespray code.

One of these places is the docker_version check in ubuntu-bionic.yml. The existing version is 18.06.0, while 18.06.1 is expected. We tried to supply the correct/requested version to Kubespray, but the run fails regardless unless the hardcoded version is changed in https://github.com/kubernetes-sigs/kubespray/blob/release-2.8/roles/container-engine/docker/vars/ubuntu-bionic.yml#L6
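A sketch of the kind of in-place edit required; the exact key and package string must be taken from the linked ubuntu-bionic.yml, and the values below are assumptions based on the versions mentioned above:

```yaml
# roles/container-engine/docker/vars/ubuntu-bionic.yml (Kubespray release-2.8)
# Bump the hardcoded pin so apt resolves the package the offline mirror
# actually carries. Package strings below are illustrative.
docker_versioned_pkg:
  '18.06': docker-ce=18.06.1~ce~3-0~ubuntu   # was the 18.06.0 package revision
```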

Another case where we encountered issues is the Ansible task named "ensure docker packages are installed", which sets:

update_cache: "{{ omit if ansible_distribution == 'Fedora' else False }}"

https://github.com/kubernetes-sigs/kubespray/blob/release-2.8/roles/container-engine/docker/tasks/main.yml#L134

Right now we have the Kubernetes cluster set up in offline mode on the client-server machine, replicated in the lab. However, offline support is poor in Kubespray v2.8.2, the version currently used by KuD. Setting up the Galaxy requirements has been a blocker, but we have developed an approach to get the add-ons going.
