Overview
This document describes how to deploy blueprints from Akraino's KNI Blueprint Family. It is common to all blueprints in that family, unless otherwise noted.
Pre-Installation Requirements
Resource Requirements
The resource requirements for deployment depend on the specific blueprint and deployment target. Please refer to the documentation of the specific blueprint.
CLI tool
The current KNI blueprints use the openshift-install tool from the OKD Kubernetes distro to stand up a minimal Kubernetes cluster. All other Day 1 and Day 2 operations are then driven purely through manipulation of declarative Kubernetes manifests. To use this in the context of Akraino KNI blueprints, the project has created a helper CLI tool that needs to be installed first.
If necessary, install the golang binaries (including setting the GOPATH environment variable).
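For example, on a CentOS/RHEL machine this could look roughly as follows (package name and workspace path are only illustrative; any reasonably recent Go release should work):
sudo yum install -y golang
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
mkdir -p $GOPATH/src $GOPATH/bin $GOPATH/pkg
# persist the variables for future shells
echo 'export GOPATH=$HOME/go' >> ~/.bashrc
echo 'export PATH=$PATH:$GOPATH/bin' >> ~/.bashrc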
Next, install the following dependencies:
sudo yum install -y make gcc libvirt-devel
Then build and install knictl:
mkdir -p $GOPATH/src/gerrit.akraino.org/kni
cd $GOPATH/src/gerrit.akraino.org/kni
git clone https://gerrit.akraino.org/r/kni/installer
cd installer
make build
cp knictl $GOPATH/bin/
Secrets
Most secrets (TLS certificates, Kubernetes API keys, etc.) will be auto-generated for you, but you need to provide at least two secrets yourself:
- a public SSH key
- a pull secret
The public SSH key is automatically added to every machine provisioned into the cluster and allows remote access to that machine. If you don't have an existing key or don't want to use one, you can create a new key pair using:
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa
The pull secret is used to download the container images used during cluster deployment. Unfortunately, the OKD Kubernetes distro used by the KNI blueprints does not (yet) provide pre-built container images for all of the deployed components. Instead of going through the hassle of building those from source, we use the ones made available by openshift.com. Therefore, you need to go to https://cloud.redhat.com/openshift/install/metal/user-provisioned, log in (creating a free account, if necessary), and hit "Download Pull Secret".
Create a $HOME/.kni folder and copy the following files:
- id_rsa.pub → needs to contain the public key that you want to use to access your nodes
- pull-secret.json → needs to contain the pull secret downloaded previously
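For example, assuming the key pair generated above and a pull secret saved to your downloads folder (adjust the source paths to your environment):
mkdir -p $HOME/.kni
cp ~/.ssh/id_rsa.pub $HOME/.kni/id_rsa.pub
cp ~/Downloads/pull-secret.json $HOME/.kni/pull-secret.json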
Pre-Requisites for Deploying to AWS
For deploying a KNI blueprint to AWS, you need to
- add a public hosted DNS zone for the cluster to Route53,
- validate your AWS quota in the chosen region is sufficient,
- set up an API user account with the necessary IAM privileges.
Please see the upstream documentation for details.
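As an illustration of the first item, a public hosted zone can also be created with the AWS CLI (the domain below is a placeholder; the AWS console works just as well):
aws route53 create-hosted-zone --name example-cluster-domain.com --caller-reference $(date +%s)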
Store the aws-access-key-id and aws-secret-access-key in a credentials file inside $HOME/.aws, with the following format:
[default]
aws_access_key_id=xxx
aws_secret_access_key=xxx
Pre-Requisites for Deploying to Bare Metal
The baremetal UPI install can be optionally automated when using knictl (see below). When attempting a manual baremetal UPI install, however, please be sure to read: https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal/installing-bare-metal.html
Pre-Requisites for Deploying to Libvirt
For deploying a KNI blueprint to VMs on KVM/libvirt, you need to
- provision a machine with CentOS 1810 to serve as the virthost, and
- prepare the virthost by running
source utils/prep_host.sh
from the kni-installer repo on that host.
Please see the upstream documentation for details.
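In practice, preparing the virthost could look roughly like this (cloning the installer repo onto that host first; paths are illustrative):
sudo yum install -y git
git clone https://gerrit.akraino.org/r/kni/installer
cd installer
source utils/prep_host.sh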
Structure of a site
In order to deploy a blueprint, you need to create a repository with a site. The site configuration is based on kustomize and needs to use our blueprints as its base, referencing them properly. A sample site can be seen at https://github.com/yrobla/kni-site. A site needs to have the following structure:
.
├── 00_install-config
│ ├── install-config.name.patch.yaml
│ ├── install-config.patch.yaml
│ ├── kustomization.yaml
│ └── site-config.yaml
├── 01_cluster-mods
│ ├── kustomization.yaml
│ ├── manifests
│ └── openshift
├── 02_cluster-addons
│ └── kustomization.yaml
└── 03_services
└── kustomization.yaml
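If you are starting a new site repository from scratch, you can create this skeleton manually, for example:
mkdir -p 00_install-config 01_cluster-mods/manifests 01_cluster-mods/openshift 02_cluster-addons 03_services
touch 00_install-config/kustomization.yaml 00_install-config/install-config.patch.yaml
touch 00_install-config/install-config.name.patch.yaml 00_install-config/site-config.yaml
touch 01_cluster-mods/kustomization.yaml 02_cluster-addons/kustomization.yaml 03_services/kustomization.yaml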
00_install-config
This folder will contain the basic settings for the site, including the base blueprint/profile, and the site name/domain. The following files are needed:
- kustomization.yaml: the key file. It contains a link to the blueprint/profile being used and a reference to the patches used to customize the site:
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/00_install-config
patches:
- install-config.patch.yaml
patchesJson6902:
- target:
    version: v1
    kind: InstallConfig
    name: cluster
  path: install-config.name.patch.yaml
transformers:
- site-config.yaml
The entry in bases needs to reference the blueprint being used (in this case blueprint-pae) and the profile install-config folder (in this case production.aws/00_install-config). The other entries can be written literally.
- install-config.patch.yaml: a patch to modify the domain from the base blueprint. Customize it with the domain you want to give to your site:
apiVersion: v1
kind: InstallConfig
metadata:
  name: cluster
baseDomain: devcluster.openshift.com
- install-config.name.patch.yaml: a patch to modify the site name from the base blueprint. Customize it with the name you want to give to your site:
- op: replace
  path: "/metadata/name"
  value: kni-site
- site-config.yaml: the site configuration file. You can add entries under config to override the behaviour of knictl (currently just releaseImageOverride is supported):
apiVersion: kni.akraino.org/v1alpha1
kind: SiteConfig
metadata:
  name: notImportantHere
config:
  releaseImageOverride: registry.svc.ci.openshift.org/origin/release:4.1
NOTE: If you intend to use knictl's baremetal UPI automation (see below), you will need to add a provisioningInfrastructure block to your site-config.yaml for the automation to consume. Below is an example template, with in-line comments describing the various options:
provisioningInfrastructure:
  hosts:
    # interface to use for provisioning on the masters
    masterBootInterface: eno2
    # interface to use for provisioning on the workers
    workerBootInterface: eno2
    # interface to use for baremetal on the masters
    masterSdnInterface: ens1f0
    # interface to use for baremetal on the workers
    workerSdnInterface: ens1f0
  network:
    # The provisioning network's CIDR
    provisioningIpCidr: 172.22.0.0/24
    # PXE boot server IP
    # DHCP range start (usually provHost/interfaces/provisioningIpAddress + 1)
    provisioningDHCPStart: 172.22.0.11
    provisioningDHCPEnd: 172.22.0.51
    # The baremetal network's CIDR
    baremetalIpCidr: 192.168.111.0/24
    # Address map
    # bootstrap: baremetalDHCPStart    i.e. 192.168.111.10
    # master-0: baremetalDHCPStart+1   i.e. 192.168.111.11
    # master-1: baremetalDHCPStart+2   i.e. 192.168.111.12
    # master-2: baremetalDHCPStart+3   i.e. 192.168.111.13
    # worker-0: baremetalDHCPStart+5   i.e. 192.168.111.15
    # worker-N: baremetalDHCPStart+5+N
    baremetalDHCPStart: 192.168.111.10
    baremetalDHCPEnd: 192.168.111.50
    # baremetal network default gateway, set to proper IP if /provHost/services/baremetalGateway == false
    # if /provHost/services/baremetalGateway == true, baremetalGWIP will be located on provHost/interfaces/baremetal
    # and external traffic will be routed through the provisioning host
    baremetalGWIP: 192.168.111.4
    dns:
      # cluster DNS, change to proper IP address if provHost/services/clusterDNS == false
      # if /provHost/services/clusterDNS == true, cluster (IP) will be located on provHost/interfaces/provisioning
      # and DNS functionality will be provided by the provisioning host
      cluster: 192.168.111.3
      # Up to 3 external DNS servers to which non-local queries will be directed
      external1: 10.11.5.19
      # external2: 10.11.5.19
      # external3: 10.11.5.19
  provHost:
    interfaces:
      # Interface on the provisioning host that connects to the provisioning network
      provisioning: eno2
      # Must be in provisioningIpCidr range
      # pxe boot server will be at port 8080 on this address
      provisioningIpAddress: 172.22.0.10
      # Interface on the provisioning host that connects to the baremetal network
      baremetal: ens1f0
      # Must be in baremetalIpCidr range
      baremetalIpAddress: 192.168.111.6
      # Interface on the provisioning host that connects to the internet/external network
      external: eno1
    bridges:
      # These bridges are created on the bastion host
      provisioning: provisioning
      baremetal: baremetal
    services:
      # Does the provisioning host provide DHCP services for the baremetal network?
      baremetalDHCP: true
      # Does the provisioning host provide DNS services for the cluster?
      clusterDNS: true
      # Does the provisioning host provide a default gateway for the baremetal network?
      baremetalGateway: true
01_cluster-mods
This is the directory that will contain all the customizations for the basic cluster deployment. You can create patches for modifying the number of masters/workers, network settings... everything that needs to be modified at cluster deployment time. It needs to have a basic kustomization.yaml file that references the same-level file for the blueprint, and you can create additional patches following kustomize syntax:
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/01_cluster-mods
02_cluster-addons and 03_services
These follow the same structure as 01_cluster-mods, but in this case they are for adding additional workloads after cluster deployment. They also need to have a kustomization.yaml file that references the same-level file for the blueprint, and they can include additional resources and patches.
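For example, a minimal 02_cluster-addons/kustomization.yaml for the same blueprint/profile could look like this (the commented-out resources entry just sketches how an extra workload manifest of your own would be added):
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/02_cluster-addons
# resources:
# - my-extra-workload.yaml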
How to deploy
The whole deployment workflow is based on the knictl CLI tool that this repository provides.
1. Fetch requirements for a site.
You need to have a site repository with the structure described above. The first step is then to fetch the requirements needed for the blueprint that the site references. This is achieved with:
./knictl fetch_requirements github.com/site-repo.git
The first argument references a site repository, following the https://github.com/hashicorp/go-getter syntax. This will download the site repository and create a folder with the site name inside $HOME/.kni. It will also fetch all the binaries needed and store them inside the $HOME/.kni/$SITE_NAME/requirements folder.
2. Prepare manifests for a site
NOTE: Before performing this step, you must copy your OpenShift pull secret into your build path (i.e. to ~/.kni/pull-secret.json).
The next step is to prepare all the manifests for deploying a site. This is achieved by applying kustomize to the site repository, combining it with the base manifests for the blueprint, and merging the result with the manifests generated by the installer at runtime. Run the following command:
./knictl prepare_manifests $SITE_NAME
This will generate a set of manifests ready to apply, stored in the $HOME/.kni/$SITE_NAME/final_manifests folder. Along with the manifests, a profile.env file is also created in the $HOME/.kni/$SITE_NAME folder. It includes environment vars that can be sourced before deploying the cluster. The vars that can currently be exported are:
- OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE: used when a different release image is wanted, instead of the default one
- TF_VAR_libvirt_master_memory, TF_VAR_libvirt_master_vcpu: used in the libvirt case, to define the memory and vCPUs for the VMs.
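As an illustration, a generated profile.env might contain entries similar to the following (the exact contents depend on your site-config.yaml and target platform; the values below are placeholders):
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=registry.svc.ci.openshift.org/origin/release:4.1
export TF_VAR_libvirt_master_memory=8192
export TF_VAR_libvirt_master_vcpu=4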
3. Deploy the cluster
Manual
Before starting the deployment, it is recommended to source the env vars from profile.env. You can do so with:
source $HOME/.kni/$SITE_NAME/profile.env
Then, you need to deploy the cluster using the generated manifests. This can be achieved with:
$HOME/.kni/$SITE_NAME/requirements/openshift-install create cluster --dir=$HOME/.kni/$SITE_NAME/final_manifests
This will deploy a cluster based on the specified manifests. You can learn more about how to manage cluster deployment and how to interact with it on https://docs.openshift.com/container-platform/4.1/welcome/index.html
For deploying to baremetal using UPI, you will need to generate ignition files and use them when provisioning the machines. You can create the ignition files with the following command, instead of create cluster:
$HOME/.kni/$SITE_NAME/requirements/openshift-install create ignition-configs --dir=$HOME/.kni/$SITE_NAME/final_manifests
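This should leave the ignition files produced by openshift-install (bootstrap.ign, master.ign and worker.ign) in the final_manifests directory, ready to be served to the machines during provisioning:
ls $HOME/.kni/$SITE_NAME/final_manifests/*.ign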
Automated (Baremetal UPI only)
knictl offers two commands to automate the deployment of a baremetal UPI cluster (and only baremetal UPI, at this time). As prerequisites to using these commands, you must ensure the following are true:
- You added a proper provisioningInfrastructure block to your site's site-config.yaml (see above).
- You ran:
./knictl fetch_requirements <site repo URI>
...with the aforementioned provisioningInfrastructure present in your site's site-config.yaml.
- You ran:
./knictl prepare_manifests $SITE_NAME
...after the fetch_requirements step above.
- Your install-config.yaml for the site, or the blueprint upon which it is based, contains the following to indicate a baremetal installation:
platform:
  none: {}
Once the aforementioned items have been dealt with, deploy your master nodes like so:
./knictl deploy_masters $SITE_NAME
This will deploy a bootstrap VM and begin to bring up your master nodes. After this command has successfully executed, monitor your cluster as you normally would while the masters are deploying. Once the masters have reached the ready state, you can then deploy your workers with the following command:
./knictl deploy_workers $SITE_NAME
This will begin to bring up your worker nodes. Monitor your worker nodes as you normally would during this process. If the deployment doesn't hit any errors, you will then have a working baremetal cluster.
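One way to monitor the nodes during both phases, sketched below, is to point KUBECONFIG at the generated manifests directory and watch them come up; on UPI installs you may also need to approve pending CSRs before workers join the cluster:
export KUBECONFIG=$HOME/.kni/$SITE_NAME/final_manifests/auth/kubeconfig
oc get nodes -w
# if worker nodes are stuck joining, check for and approve pending certificate signing requests
oc get csr
oc adm certificate approve <csr_name>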
4. Apply workloads
After the cluster has been deployed, the extra workloads that have been specified in the manifests (such as KubeVirt) need to be applied. This can be achieved with:
./knictl apply_workloads $SITE_NAME
This will execute kustomize on the site manifests and apply the output to the cluster. After that, the site deployment can be considered finished.
Accessing the Cluster
After the deployment finishes, a kubeconfig file will be placed inside the auth directory:
export KUBECONFIG=$HOME/.kni/$SITE_NAME/final_manifests/auth/kubeconfig
The cluster can then be managed with the kubectl or oc (a drop-in replacement with advanced functionality) CLI tools. To get the oc client, visit https://cloud.redhat.com/openshift/install/metal/user-provisioned, follow the Download Command-Line Tools link, and download the openshift-client archive that matches your operating system.
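For example, to verify basic access to the cluster:
export KUBECONFIG=$HOME/.kni/$SITE_NAME/final_manifests/auth/kubeconfig
oc get nodes
oc get clusterversion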
Destroying the Cluster
When needed, the site can be destroyed with the openshift-install command, using the following syntax:
$HOME/.kni/$SITE_NAME/requirements/openshift-install destroy cluster --dir $HOME/.kni/$SITE_NAME/final_manifests
Troubleshooting the Cluster
Please see the upstream documentation for details.