Overview
This document describes how to deploy blueprints from Akraino's KNI Blueprint Family. It is common to all blueprints in that family, unless otherwise noted.
...
Pre-Requisites for Deploying to Bare Metal
The bare metal UPI install can optionally be automated using knictl (see below). When attempting a manual bare metal UPI install, however, please be sure to read: https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal/installing-bare-metal.html
...
Run the prepare manifests command, using the name of your site as a parameter:

```bash
./knictl prepare_manifests $SITE_NAME
```

For example:

```bash
./knictl prepare_manifests community.baremetal.edge-sites.net
```
Remember that the generated files are valid for 24 hours. If you don't finish the installation in that time, you'll need to re-run this command.
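If you are unsure whether the 24-hour window has passed, a quick freshness check can tell you before you continue. This is a minimal sketch: the site name below is a hypothetical example, and the `final_manifests` directory is the output path used later in this guide.

```shell
# Check whether generated manifests are still within their 24-hour validity.
# SITE_NAME is a hypothetical example; final_manifests is the output
# directory referenced later in this guide.
SITE_NAME="example-site"
MANIFEST_DIR="$HOME/.kni/$SITE_NAME/final_manifests"

# -mmin +1440 matches files modified more than 24 hours (1440 minutes) ago
if [ -d "$MANIFEST_DIR" ] && [ -z "$(find "$MANIFEST_DIR" -mmin +1440 -print -quit)" ]; then
  echo "manifests fresh"
else
  echo "re-run prepare_manifests"
fi
```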
Deploy masters
```bash
./knictl deploy_masters $SITE_NAME
```
This will deploy a bootstrap VM and begin to bring up your master nodes. After this command has successfully executed, monitor your cluster as you normally would while the masters are deploying. Once the masters have reached the ready state, you can then deploy your workers.
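While waiting for the masters, you can script a simple readiness count on top of `oc get nodes` output. This is a hedged sketch: the sample output below is illustrative, not captured from a real cluster; against a live cluster you would pipe the output of `$HOME/.kni/$SITE_NAME/requirements/oc get nodes` into the same awk filter.

```shell
# Count master nodes reporting Ready. The sample output is illustrative;
# on a live cluster, replace it with the output of:
#   $HOME/.kni/$SITE_NAME/requirements/oc get nodes
sample_output='NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   10m   v1.13.4
master-1   Ready      master   10m   v1.13.4
master-2   NotReady   master   2m    v1.13.4'

ready_masters=$(printf '%s\n' "$sample_output" \
  | awk '$2 == "Ready" && $3 == "master" {count++} END {print count+0}')
echo "ready masters: $ready_masters"   # → ready masters: 2
```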
Deploy workers
...
Because workers are not present at this point, some operators could fail. You can check API and node availability with:
```bash
$HOME/.kni/$SITE_NAME/requirements/oc get nodes
```
When all master nodes are shown as Ready, you can start the deployment of your workers.
```bash
./knictl deploy_workers $SITE_NAME
```
This will begin to bring up your worker nodes. Monitor your worker nodes as you normally would during this process. If the deployment doesn't hit any errors, you will then have a working bare metal cluster. After masters and workers are up, you can apply the workloads using the general procedure as shown here
Accessing the Cluster
After the deployment finishes, a kubeconfig file will be placed inside the auth directory:
...
You can monitor the state of the cluster with:
```bash
$HOME/.kni/$SITE_NAME/requirements/openshift-install --dir ocp wait-for install-complete
```
It may happen that the monitoring of this process stops at 93%-94%. This is fine; you can simply launch it again, or just start using the cluster, as most operators will come online over time.
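To see which operators are still settling, the AVAILABLE column of `oc get clusteroperators` can be tallied. A minimal sketch with illustrative sample output (on a live cluster, pipe the output of `$HOME/.kni/$SITE_NAME/requirements/oc get clusteroperators` instead):

```shell
# Tally cluster operators by availability. The sample output is illustrative;
# on a live cluster use the real `oc get clusteroperators` output.
sample_output='NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED
authentication   4.1.0     True        False         False
console          4.1.0     True        False         False
ingress          4.1.0     False       True          False'

available=$(printf '%s\n' "$sample_output" \
  | awk 'NR > 1 && $3 == "True" {n++} END {print n+0}')
echo "available operators: $available"   # → available operators: 2
```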
Prepare to deploy CentOS nodes
The default installation is fully automated for RHCOS. However, it is also possible to deploy CentOS nodes; this requires some specific preparation steps:
- Download the DVD ISO from http://isoredirect.centos.org/centos/7/isos/x86_64/ and place it in /tmp
- Mount it:
```bash
mount -o loop /tmp/CentOS-7-x86_64-DVD-1810.iso /mnt/
mkdir -p $HOME/.kni/$SITE_NAME/baremetal_automation/matchbox-data/var/lib/matchbox/assets/centos7
cp -ar /mnt/* $HOME/.kni/$SITE_NAME/baremetal_automation/matchbox-data/var/lib/matchbox/assets/centos7/
umount /mnt
```
- Prepare a $HOME/settings_upi.env file with the following parameters:
```bash
export CLUSTER_NAME="$CLUSTER_NAME"
export BASE_DOMAIN="$CLUSTER_DOMAIN"
export PULL_SECRET='your_pull_secret'
export KUBECONFIG_PATH=$HOME/.kni/$SITE_NAME/baremetal_automation/ocp/auth/kubeconfig
export OS_INSTALL_ENDPOINT=http://172.22.0.1:8080/assets/centos7
```
- Navigate to the kickstart script generation directory and execute the script, then copy the generated kickstart file:
```bash
cd $HOME/.kni/$SITE_NAME/baremetal_automation/kickstart/
bash add_kickstart_for_centos.sh
cp centos-worker-kickstart.cfg $HOME/.kni/$SITE_NAME/baremetal_automation/matchbox-data/var/lib/matchbox/assets/
```
- After that, you are ready to deploy your CentOS workers with the usual procedure.
After masters and workers are up, you can apply the workloads using the general procedure as shown here
Accessing the Cluster
After the deployment finishes, a kubeconfig file will be placed inside the auth directory:
```bash
export KUBECONFIG=$HOME/.kni/$SITE_NAME/final_manifests/auth/kubeconfig
```
The cluster can then be managed with the kubectl or oc (a drop-in replacement with additional functionality) CLI tools.
To verify a correct setup, you can check the nodes again and see whether masters and workers are Ready:
```bash
$HOME/.kni/$SITE_NAME/requirements/oc get nodes
```
You can also check whether the cluster is available:
```bash
$HOME/.kni/$SITE_NAME/requirements/oc get clusterversion
```
You can also verify that the console is working; the console URL is the following:
```
https://console-openshift-console.apps.$CLUSTER_NAME.$CLUSTER_DOMAIN
```
You can log in to the console as the kubeadmin user with the password shown at the end of the install.
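The console URL is derived directly from the cluster name and base domain. A small sketch, assuming hypothetical example values for `$CLUSTER_NAME` and `$CLUSTER_DOMAIN`:

```shell
# Build the console URL from the cluster name and base domain.
# The values below are hypothetical examples.
CLUSTER_NAME="mycluster"
CLUSTER_DOMAIN="example.com"
CONSOLE_URL="https://console-openshift-console.apps.${CLUSTER_NAME}.${CLUSTER_DOMAIN}"
echo "$CONSOLE_URL"   # → https://console-openshift-console.apps.mycluster.example.com
```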
Destroying the Cluster
Manual
...