User manual update

Objective

This document provides a step-by-step method to install the KNI Blueprint on bare-metal servers. This is a minimal configuration example that uses only three servers. The servers and their roles are listed below.

Server 1 (Installer node): This host is used for remotely installing and configuring the master and worker nodes. It also hosts the bootstrap node on KVM/QEMU using libvirt. Several components, such as HAProxy, a DNS server, DHCP servers for the provisioning and baremetal networks, CoreDNS, Matchbox, Terraform, ipmitool and TFTP boot, are configured on this server. Since the cluster CoreDNS runs from here, this node is also required after installation.

Server 2 (Master node): This is the control plane (master) node of the Kubernetes cluster, which is based on OpenShift 4.1.

Server 3 (Worker node): This is the worker node, which hosts the applications.

Server 4 (Bootstrap node): The bootstrap node runs as a VM on the installer node; it exists only during the installation and is automatically deleted by the installer afterwards.


This installation is based on the KNI UPI Automation Framework.

High Level Connectivity Diagram

The diagram below shows the connectivity requirements for this automation framework.


Key highlights from the diagram above are as follows:

  • Each server should have three Ethernet ports configured; their purposes are listed below. These three ports are in addition to the IPMI port, which is required for remotely triggering PXE boot.


Management interface: Remote root login over this interface is used for the entire setup. This interface needs internet connectivity to download various files. It is shown as 10.19.110.x in the diagram above.

Baremetal interface: This interface is for the baremetal network, also known as the SDN network. It does not need internet connectivity. It is shown as 192.168.111.x.

Provisioning interface: This interface is for PXE boot. It does not need internet connectivity. It is shown as 172.22.0.x.


 

Pre-requisites

  • The OS requirement for each of these nodes is as follows.


Installer: CentOS 7.6 or above
Bootstrap: RHCOS (Red Hat CoreOS)
Master: RHCOS (Red Hat CoreOS)
Worker: RHCOS / RHEL / CentOS / CentOS-rt


  • Configure the required network interfaces as explained earlier. Ensure that the provisioning interface is set up on the first Ethernet port, because PXE boot is enabled by default on the first Ethernet port. This holds in BIOS mode; if you want to enable PXE boot on other Ethernet ports, the boot mode needs to be changed to UEFI so that the per-port network settings become available. However, it is recommended to use BIOS mode rather than UEFI mode.


  • Collect the IPs and MAC addresses of all the nodes; a sample is given below. This information is required to populate the config files.

Role | iDRAC IP / IPMI port IP | Provisioning network IP | Baremetal network IP | Management network IP | Provisioning network port & MAC | Baremetal network port & MAC | Management network port & MAC
Installer | | | | | em1 / 21:02:0E:DC:BC:27 | em2 / 21:02:0E:DC:BC:28 | em3 / 21:02:0E:DC:BC:29
master-0 | | | | | | |
worker-0 | | | | | | |

  • Enable IPMI over LAN for all master and worker nodes. This is required for remote PXE boot from the installer node. Different makes of servers have different ways to enable it. On Dell EMC servers, it is done from the iDRAC console, where the setting appears as shown below.


If this setting is missing, the installer throws errors like the following:

Error: Error running command '          ipmitool -I lanplus -H x.x.x.x -U xxx -P xxxxx chassis bootdev pxe;

          ipmitool -I lanplus -H x.x.x.x -U xxx -P xxxxx power cycle || ipmitool -I lanplus -H x.x.x.x -U xxx -P xxxxx power on;

': exit status 1. Output: Error: Unable to establish IPMI v2 / RMCP+ session

Error: Unable to establish IPMI v2 / RMCP+ session

Error: Unable to establish IPMI v2 / RMCP+ session


After enabling this setting, run the command below to verify that it works as expected. Provide the IP address, username and password.


ipmitool -I lanplus -H x.x.x.x -U xxx -P xxxxx chassis status

(where x.x.x.x is the IPMI port IP of your master/worker node, followed by the root username and password for IPMI, e.g. iDRAC)
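For example, with placeholder values (the IP address and credentials below are illustrative only, not values from this setup), the verification and a related power-state check would look like this:

ipmitool -I lanplus -H 10.19.110.21 -U root -P calvin chassis status

ipmitool -I lanplus -H 10.19.110.21 -U root -P calvin power status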

 


 

High-level steps

The following are the high-level steps; they are taken from the official documentation and shown here for clarity only.


KNI setup is a 7-step process, as shown below.

  1. Installer node OS setup
  2. Prepare Installer node
  3. Kick off make procedure
  4. Kick off bootstrap node and master node creation
  5. Kick off Cluster initialization
  6. Kick off worker node addition
  7. Verify successful setup

Let us go through each of these in detail.

Step 1 Installer node OS setup

There are many ways to install an OS on a given server; you are free to choose whatever works for you. The method given here is based on mounting virtual media and booting from it.

  1. Download the DVD ISO image of CentOS 7.6 from the link below.



  2. On the iDRAC console of the installer node, map this file as virtual media.



  3. In the iDRAC boot controls, change the boot mode to Virtual CD/DVD/ISO.



  4. Perform a cold boot to load the ISO image and boot the server with the CentOS image.

       


              


  5. Modify /etc/sysconfig/network-scripts/ifcfg-em1 (note that your network interface name might differ from em1; change it accordingly) and add the following lines to put this machine on the provisioning network. Provide the appropriate IP address, netmask and gateway.

IPADDR=

NETMASK=

GATEWAY=

BOOTPROTO=static

ONBOOT=yes


Restart network services and check the IP allocation.


systemctl restart network

ifdown em1

ifup em1

ip a |grep em1
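For illustration, a completed ifcfg-em1 for the provisioning network (172.22.0.x in the diagram above) could look like the following; the addresses are placeholders and must match your own addressing plan:

IPADDR=172.22.0.10

NETMASK=255.255.255.0

GATEWAY=172.22.0.1

BOOTPROTO=static

ONBOOT=yes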


  6. Modify /etc/sysconfig/network-scripts/ifcfg-em2 and add the following lines to put this machine on the baremetal network.


IPADDR=

NETMASK=

GATEWAY=

BOOTPROTO=static

ONBOOT=yes


Restart network services and check the IP allocation.


systemctl restart network

ifdown em2

ifup em2

ip a |grep em2



  7. Modify /etc/sysconfig/network-scripts/ifcfg-em3 and add the following lines to put this machine on the management network.


IPADDR=

NETMASK=

GATEWAY=

BOOTPROTO=static

ONBOOT=yes

DNS1=


Restart network services and check the IP allocation.


systemctl restart network

ifdown em3

ifup em3

ip a |grep em3


This interface should have internet connectivity; hence a DNS entry should be added here.


  8. Enable remote root login by modifying the /etc/ssh/sshd_config file and enabling the line below.


PermitRootLogin yes
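After saving the change, restart the SSH daemon so it takes effect:

systemctl restart sshd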


  9. Check that the machine is remotely accessible via the management IP with the root credentials.


  10. Perform a yum update.
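The update itself is a single command:

yum -y update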

This brings up your installer node on CentOS.

 


 

Step 2 Prepare Installer node

Now it is time to take care of the other prerequisites:

  • dnsmasq setup for the baremetal and provisioning networks; its role is to provide DHCP service.
  • Matchbox setup; this is required for iPXE boot.
  • Terraform provider for Matchbox configuration.
  • Libvirt installation and configuration for hosting the bootstrap node.
  • HAProxy setup.
  • CoreDNS setup.
  • Generation of the manifest files, ignition files, tfvars file, kickstart file, etc.

All of this is taken care of by prep_bm_host.sh. This is the most important step in the overall installation, and its success ensures the success of subsequent steps.

Remember to log in through the management IP to perform all the work on the installer node. You will not need to ssh to the bootstrap, master or worker nodes unless some serious troubleshooting is required.

First, clone the repository into /home. Before doing the git clone, install git:


yum install git

git clone https://github.com/redhat-nfvpe/kni-upi-lab.git

 

Sample output of the above commands would be similar to the following.

 

[root@localhost home]# git clone https://github.com/redhat-nfvpe/kni-upi-lab.git

Cloning into 'kni-upi-lab'...

remote: Enumerating objects: 32, done.

remote: Counting objects: 100% (32/32), done.

remote: Compressing objects: 100% (28/28), done.

remote: Total 1193 (delta 12), reused 14 (delta 4), pack-reused 1161

Receiving objects: 100% (1193/1193), 533.58 KiB | 362.00 KiB/s, done.

Resolving deltas: 100% (765/765), done.

[root@localhost home]# pwd

/home


The prep_bm_host.sh script is located in the repo directory; it prepares the host for provisioning. Before running this script, you need to update the files below as per your configuration and requirements.

  • Populate cluster/site-config.yaml

 

  • Populate cluster/install-config.yaml

 

Besides other entries, add the SSH key and pull secret to this file.

 

Generate an SSH key using the commands below and add it to the ssh-agent of the installer node.

ssh-keygen -t rsa -b 4096 -N '' -f  ~/.ssh/id_rsa

eval "$(ssh-agent -s)"

ssh-add ~/.ssh/id_rsa


Copy the contents of /root/.ssh/id_rsa.pub into the install-config.yaml file.
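You can print the public key with:

cat /root/.ssh/id_rsa.pub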


Download the pull secret from the link below. Sign in using your Red Hat account. This should also be added to the install-config.yaml file. Remember that the pull secret contains no newline character, so copy and paste it directly without using any intermediate text file.


https://cloud.redhat.com/openshift/install/metal/user-provisioned
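For reference, the two entries appear as top-level fields in install-config.yaml roughly as follows (the values are truncated placeholders; the real content comes from your own key and pull secret):

sshKey: 'ssh-rsa AAAAB3Nza... root@installer'

pullSecret: '{"auths":{"cloud.redhat.com":{"auth":"...","email":"user@example.com"}}}'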

 

 

  • Edit the common.sh file to choose the version; you can keep it as-is or choose the latest.

 

  • Update the cluster/ha-lab-ipmi-creds file with the base64 values of the username and password for IPMI root login. An example of this conversion is:

 

[root@localhost kni-upi-lab]# echo -n 'root' |base64

cm9vdA==

Similarly, convert your password to base64.

[root@localhost kni-upi-lab]# echo -n 'givepasswordhere' |base64
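To double-check an encoded value, you can decode it back; for example:

echo 'cm9vdA==' | base64 -d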

 

Now you are ready to run the BM host preparation script. The script downloads several files, including some large ones of around 700 MB, so its completion time depends on your network speed.


[root@localhost kni-upi-lab]# pwd

/home/kni-upi-lab

[root@localhost kni-upi-lab]# ./prep_bm_host.sh


Go through the output of this script execution and ensure that there are no failures. In case of a failure, depending on the failure type, you can either run clean_bm_host and then run prep_bm_host again, or just rerun the failed part(s); a retry sequence is sketched below.
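A minimal retry sequence, assuming the cleanup script is clean_bm_host.sh next to prep_bm_host.sh in the repo root, would be:

./clean_bm_host.sh

./prep_bm_host.sh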

If everything completes successfully, move to the next step.


 

Step 3 Kick off make procedure

This step takes care of generating various configuration files. Run the commands below and ensure they complete successfully; move to the next step only when the previous one has succeeded.

make clean

make all

make con-start


Sample output of these commands is shown below.

make clean


[root@localhost kni-upi-lab]# make clean


rm -rf ./build ./coredns ./dnsmasq ./haproxy ./ocp

./scripts/gen_config_prov.sh remove

kni-dnsmasq-prov removed

./scripts/gen_config_bm.sh remove

kni-dnsmasq-bm removed

./scripts/gen_haproxy.sh remove

kni-haproxy removed

./scripts/gen_coredns.sh remove

kni-coredns removed

./scripts/gen_matchbox.sh remove

kni-matchbox removed

[root@localhost kni-upi-lab]#


make all


make con-start


[root@localhost kni-upi-lab]# make con-start


echo "All config files generated and copied into their proper locations..."

All config files generated and copied into their proper locations...

./scripts/gen_config_prov.sh start

Started kni-dnsmasq-prov as a1a6ac2c2a0deb75c0a6617495dba12c3d2126f2e111bf3338a52cfcc72756dc...

./scripts/gen_config_bm.sh start

Started kni-dnsmasq-bm as e74e503d9fd5232e87e4addb97cf98469bc05cb0d1d0fd2e9621ec1d67a92d8e...

./scripts/gen_haproxy.sh start

Started kni-haproxy as 70fdf0782f99d96e639b1e31a04159049bf99f61f62761541e560319a1d86a56...

./scripts/gen_coredns.sh start

Started kni-coredns as 35f6a62ecd0c6bba5f0b743c0276bf3e5995b4cda6707ac188d64d02891da487...

./scripts/gen_matchbox.sh start

Started kni-matchbox as 353fe1bfc889d0dab69d67ce1683f6d8189ce01eb8c4499da915f3cd65e0f190...

[root@localhost kni-upi-lab]#


Remember that the ignition files generated here are valid for 24 hours only, so if you do not finish the installation within 24 hours of generating them, you will get an X.509 certificate error. To resolve this, regenerate the ignition files by running the make commands again.
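That is, rerun the same sequence from the repo root:

make clean

make all

make con-start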



 

Step 4 Kick off bootstrap node and master node creation

cd terraform/cluster

terraform init

terraform apply --auto-approve

openshift-install --dir ocp wait-for bootstrap-complete --log-level debug

(As the sample output below shows, the openshift-install command is run from the repository root, /home/kni-upi-lab, where the ocp directory was generated.)


Sample output of these commands is shown below.

cd terraform/cluster

terraform init


[root@localhost cluster]# terraform init


Initializing modules...

- bootstrap in bootstrap

- masters in masters


Initializing the backend...


Initializing provider plugins...

- Checking for available provider plugins...

- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...

- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...

- Downloading plugin for provider "local" (hashicorp/local) 1.3.0...


The following providers do not have any version constraints in configuration,

so the latest version was installed.


To prevent automatic upgrades to new major versions that may contain breaking

changes, it is recommended to add version = "..." constraints to the

corresponding provider blocks in configuration, with the constraint strings

suggested below.


* provider.local: version = "~> 1.3"

* provider.null: version = "~> 2.1"

* provider.template: version = "~> 2.1"


Terraform has been successfully initialized!


You may now begin working with Terraform. Try running "terraform plan" to see

any changes that are required for your infrastructure. All Terraform commands

should now work.


If you ever set or change modules or backend configuration for Terraform,

rerun this command to reinitialize your working directory. If you forget, other

commands will detect it and remind you to do so if necessary.


terraform apply --auto-approve


[root@localhost cluster]# terraform apply -auto-approve

module.bootstrap.data.template_file.vm_bootstrap: Refreshing state...

module.bootstrap.null_resource.vm_bootstrap_destroy: Refreshing state... [id=6024402973644034124]

module.masters.null_resource.ipmi_master[0]: Creating...

module.masters.null_resource.ipmi_master_cleanup[0]: Creating...

module.masters.null_resource.ipmi_master_cleanup[0]: Creation complete after 0s [id=8830102992367627367]

module.masters.null_resource.ipmi_master[0]: Provisioning with 'local-exec'...

module.masters.null_resource.ipmi_master[0] (local-exec): Executing: ["/bin/sh" "-c" "          ipmitool -I lanplus -H x.x.x.x -U root -P xxxx chassis bootdev pxe;\n          ipmitool -I lanplus -H x.x.x.x -U root -P xxxx power cycle || ipmitool -I lanplus -H x.x.x.x -U root -P xxxx power on;\n"]

module.bootstrap.local_file.vm_bootstrap: Creating...

module.bootstrap.local_file.vm_bootstrap: Creation complete after 0s [id=489ef9df0c85d5cbf1e9a422332c69e9e9c01bcd]

module.bootstrap.null_resource.vm_bootstrap: Creating...

module.bootstrap.null_resource.vm_bootstrap: Provisioning with 'local-exec'...

module.bootstrap.null_resource.vm_bootstrap (local-exec): Executing: ["/bin/sh" "-c" "rm -f /var/lib/libvirt/images/bootstrap.img || true\nqemu-img create -f qcow2 /var/lib/libvirt/images/bootstrap.img 800G\nchown qemu:qemu /var/lib/libvirt/images/bootstrap.img\nvirsh create /tmp/test1-bootstrap-vm.xml\n"]

module.bootstrap.null_resource.vm_bootstrap (local-exec): Formatting '/var/lib/libvirt/images/bootstrap.img', fmt=qcow2 size=858993459200 encryption=off cluster_size=65536 lazy_refcounts=off

matchbox_profile.default: Creating...

module.masters.null_resource.ipmi_master[0] (local-exec): Set Boot Device to pxe

module.masters.matchbox_profile.master[0]: Creating...

matchbox_profile.default: Creation complete after 0s [id=test1]

module.masters.null_resource.ipmi_master[0] (local-exec): Set Chassis Power Control to Cycle failed: Command not supported in present state

module.bootstrap.matchbox_profile.bootstrap: Creating...

module.masters.matchbox_profile.master[0]: Creation complete after 0s [id=test1-master-0]

module.masters.null_resource.ipmi_master[0] (local-exec): Chassis Power Control: Up/On

matchbox_group.default: Creating...

module.masters.null_resource.ipmi_master[0]: Creation complete after 0s [id=7455677445444302993]

module.bootstrap.matchbox_profile.bootstrap: Creation complete after 0s [id=test1-bootstrap]

matchbox_group.default: Creation complete after 0s [id=test1]

module.masters.matchbox_group.master[0]: Creating...

module.masters.matchbox_group.master[0]: Creation complete after 0s [id=test1-master-0]

module.bootstrap.matchbox_group.bootstrap: Creating...

module.bootstrap.matchbox_group.bootstrap: Creation complete after 0s [id=test1-bootstrap]

module.bootstrap.null_resource.vm_bootstrap (local-exec): Domain test1-bootstrap created from /tmp/test1-bootstrap-vm.xml


module.bootstrap.null_resource.vm_bootstrap: Creation complete after 0s [id=3253903187048812410]


Apply complete! Resources: 10 added, 0 changed, 0 destroyed.


Outputs:


bootstrap_ip = x.x.x.x





openshift-install --dir ocp wait-for bootstrap-complete --log-level debug


This command can take up to 30 minutes to complete. Move to the next step only when you see a completion message like the one below.


[root@localhost kni-upi-lab]# openshift-install --dir ocp wait-for bootstrap-complete --log-level debug


DEBUG OpenShift Installer v4.1.0-201905212232-dirty

DEBUG Built from commit 71d8978039726046929729ad15302973e3da18ce

INFO Waiting up to 30m0s for the Kubernetes API at https://api.test1.tt.testing:6443...

INFO API v1.13.4+838b4fa up

INFO Waiting up to 30m0s for bootstrapping to complete...

DEBUG Bootstrap status: complete

INFO It is now safe to remove the bootstrap resources

 

For troubleshooting, you can ssh to the bootstrap node from the installer node and run the command below to see detailed status.


 [root@localhost kni-upi-lab]# ssh core@x.x.x.x

The authenticity of host 'x.x.x.x (x.x.x.x)' can't be established.

ECDSA key fingerprint is SHA256:SMzY9wqbg3vud+2RE6bVrnJFPacVZtm7zfdSNa5fWKs.

ECDSA key fingerprint is MD5:1a:c7:19:a7:8b:24:82:53:5d:a9:b6:42:86:2a:1a:7b.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'x.x.x.x' (ECDSA) to the list of known hosts.

Red Hat Enterprise Linux CoreOS 410.8.20190520.0

WARNING: Direct SSH access to machines is not recommended.

This node has been annotated with machineconfiguration.openshift.io/ssh=accessed


---

This is the bootstrap node; it will be destroyed when the master is fully up.


The primary service is "bootkube.service". To watch its status, run e.g.


  journalctl -b -f -u bootkube.service


[core@test1-bootstrap-0 ~]$ journalctl -b -f -u bootkube.service


-- Logs begin at Fri 2019-09-20 14:47:53 UTC. --

Sep 20 15:07:17 test1-bootstrap-0 bootkube.sh[1453]: [#135] failed to fetch discovery: Get https://localhost:6443/api?timeout=32s: dial tcp [::1]:6443: connect: connection refused


You will see a lot of errors/warnings during this setup, but that is fine as long as the last line says it is complete. Here is an example of the last few lines of the journalctl output.

Sep 20 15:13:02 test1-bootstrap-0 bootkube.sh[1453]: Skipped "secret-loadbalancer-serving-signer.yaml" secrets.v1./loadbalancer-serving-signer -n openshift-kube-apiserver-operator as it already exists

Sep 20 15:13:02 test1-bootstrap-0 bootkube.sh[1453]: Skipped "secret-localhost-serving-signer.yaml" secrets.v1./localhost-serving-signer -n openshift-kube-apiserver-operator as it already exists

Sep 20 15:13:02 test1-bootstrap-0 bootkube.sh[1453]: Skipped "secret-service-network-serving-signer.yaml" secrets.v1./service-network-serving-signer -n openshift-kube-apiserver-operator as it already exists

Sep 20 15:13:02 test1-bootstrap-0 bootkube.sh[1453]: bootkube.service complete

 

For the master node, you can watch the progress on its console. At the end, it will show a login prompt similar to the one below.

          


 

Step 5 Kick off Cluster initialization

cd ..

cd ..

openshift-install --dir ocp wait-for install-complete


This command can take up to 30 minutes to complete. Sample output of the install command is shown below. Since the availability of some operators (listed in the error output below) depends on worker node availability, those operators will not come up during master node setup, and you will not see this command complete successfully. Instead, you will see the cluster initialization status at 93 or 94% completion. This is fine; these operators will come up when you add the worker node(s).

[root@localhost kni-upi-lab]# openshift-install --dir ocp wait-for install-complete


INFO Waiting up to 30m0s for the cluster at https://api.test1.tt.testing:6443 to initialize...

DEBUG Still waiting for the cluster to initialize: Working towards 4.1.0: 93% complete


FATAL failed to initialize the cluster: Multiple errors are preventing progress:

* Cluster operator authentication is still updating: missing version information for oauth-openshift

* Cluster operator console has not yet reported success

* Cluster operator image-registry is still updating

* Cluster operator ingress has not yet reported success

* Cluster operator monitoring is still updating

* Could not update servicemonitor "openshift-apiserver-operator/openshift-apiserver-operator" (346 of 350): the server does not recognize this resource, check extension API servers

* Could not update servicemonitor "openshift-authentication-operator/authentication-operator" (321 of 350): the server does not recognize this resource, check extension API servers

* Could not update servicemonitor "openshift-controller-manager-operator/openshift-controller-manager-operator" (349 of 350): the server does not recognize this resource, check extension API servers

* Could not update servicemonitor "openshift-image-registry/image-registry" (327 of 350): the server does not recognize this resource, check extension API servers

* Could not update servicemonitor "openshift-kube-apiserver-operator/kube-apiserver-operator" (337 of 350): the server does not recognize this resource, check extension API servers

* Could not update servicemonitor "openshift-kube-controller-manager-operator/kube-controller-manager-operator" (340 of 350): the server does not recognize this resource, check extension API servers

* Could not update servicemonitor "openshift-kube-scheduler-operator/kube-scheduler-operator" (343 of 350): the server does not recognize this resource, check extension API servers

* Could not update servicemonitor "openshift-operator-lifecycle-manager/olm-operator" (267 of 350): the server does not recognize this resource, check extension API servers

* Could not update servicemonitor "openshift-service-catalog-apiserver-operator/openshift-service-catalog-apiserver-operator" (330 of 350): the server does not recognize this resource, check extension API servers

* Could not update servicemonitor "openshift-service-catalog-controller-manager-operator/openshift-service-catalog-controller-manager-operator" (333 of 350): the server does not recognize this resource, check extension API servers: timed out waiting for the condition

[root@localhost kni-upi-lab]#

 


 

Step 6 Kick off worker node addition

Worker nodes can run different operating systems, such as RHCOS, RHEL or CentOS, and the worker count can also vary. The scenario below covers the details of adding a single CentOS-based worker node.

First, download the CentOS ISO file on your installer node and mount it. Then copy the files to the matchbox folder as shown below. The target location is $HOME/matchbox-data/var/lib/matchbox/assets/centos7/.

mount -o loop CentOS-7-x86_64-DVD-1810.iso /mnt/

cd /mnt

mkdir -p /home/kni-upi-lab/matchbox-data/var/lib/matchbox/assets/centos7

cp -av * /home/kni-upi-lab/matchbox-data/var/lib/matchbox/assets/centos7/

umount /mnt/


Update the root user password in the centos-worker-kickstart.cfg file (present in $HOME/build) on the line below; the root user password goes after the --plaintext option.

rootpw --plaintext <your-root-password>

Now we are ready to run terraform apply to create the worker node.

cd ../workers

terraform init

terraform apply --auto-approve


Sample output of these commands is shown below.

terraform apply --auto-approve


[root@localhost workers]# terraform apply --auto-approve


null_resource.ipmi_worker_clenup[0]: Creating...

null_resource.ipmi_worker[0]: Creating...

null_resource.ipmi_worker_clenup[0]: Creation complete after 0s [id=8860623875506377578]

null_resource.ipmi_worker[0]: Provisioning with 'local-exec'...

null_resource.ipmi_worker[0] (local-exec): Executing: ["/bin/sh" "-c" "          ipmitool -I lanplus -H x.x.x.x -U root -P xxxx chassis bootdev pxe;\n          ipmitool -I lanplus -H x.x.x.x -U root -P xxxx power cycle || ipmitool -I lanplus -H x.x.x.x -U root -P xxxx power on;\n"]

null_resource.ipmi_worker[0] (local-exec): Set Boot Device to pxe

null_resource.ipmi_worker[0] (local-exec): Set Chassis Power Control to Cycle failed: Command not supported in present state

null_resource.ipmi_worker[0] (local-exec): Chassis Power Control: Up/On

null_resource.ipmi_worker[0]: Creation complete after 0s [id=3564174579995686728]

matchbox_profile.default: Creating...

matchbox_profile.worker[0]: Creating...

matchbox_profile.default: Creation complete after 0s [id=test1]

matchbox_profile.worker[0]: Creation complete after 0s [id=test1-worker-0]

matchbox_group.default: Creating...

matchbox_group.default: Creation complete after 0s [id=test1]

matchbox_group.worker[0]: Creating...

matchbox_group.worker[0]: Creation complete after 0s [id=test1-worker-0]


Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

[root@localhost workers]#


Worker node addition runs several scripts to join this node to the cluster, so it can take up to 30 minutes to complete. You can watch the progress on the console of the worker node. At the end, it will show a login prompt similar to the one below.



 

Step 7 Verify successful setup

Ensure all operators show as available and that none of them are progressing or degraded.
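These checks assume the kubeconfig generated by the installer is exported on the installer node, for example:

export KUBECONFIG=$HOME/ocp/auth/kubeconfig

oc get clusteroperators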

Node status should be Ready.

[root@localhost ~]# oc get nodes

NAME                     STATUS   ROLES    AGE   VERSION

test1-master-0   Ready    master   12d   v1.13.4+cb455d664

test1-worker-0   Ready    worker   9d    v1.13.4+af45cda

[root@localhost ~]#


Most important check: the output of the following command should show the cluster as available.

[root@localhost ~]# oc get clusterversion

NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS

version   4.1.0     True        False         8d      Cluster version is 4.1.0


Check that the console URL is working. You can get the console URL from the command below; the consoleURL field near the end of the output is the console URL.

[root@localhost ~]#  oc edit console.config.openshift.io cluster

apiVersion: config.openshift.io/v1

kind: Console

metadata:

  annotations:

    release.openshift.io/create-only: "true"

  creationTimestamp: "2019-09-27T21:57:32Z"

  generation: 1

  name: cluster

  resourceVersion: "2616368"

  selfLink: /apis/config.openshift.io/v1/consoles/cluster

  uid: ce5225b9-e171-11e9-bc48-52540082683f

spec: {}

status:

  consoleURL: https://console-openshift-console.apps.test1.tt.testing       

~
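Alternatively, to print just the URL without opening an editor, a jsonpath query such as the following should work:

oc get console.config.openshift.io cluster -o jsonpath='{.status.consoleURL}{"\n"}'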

You can log in to the console URL using the kubeadmin username and the password shown at the end of the installation. If you do not remember the password, you can retrieve it from $HOME/ocp/auth/kubeadmin-password.


Congratulations, you have successfully configured KNI on bare metal. You are all set to use it.








 

References

https://github.com/redhat-nfvpe/kni-upi-lab

https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal/installing-bare-metal.html