Overview

This document describes how to deploy blueprints from Akraino's KNI Blueprint Family. It is common to all blueprints in that family, unless otherwise noted.

...

Pre-Requisites for Deploying to Bare Metal

The baremetal UPI install can optionally be automated using knictl (see below). When attempting a manual baremetal UPI install, however, please be sure to read: https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal/installing-bare-metal.html

...

Pre-Requisites for Deploying to Libvirt

To deploy a KNI blueprint to VMs on KVM/libvirt, you need to

...

source utils/prep_host.sh

...

Please see the upstream documentation for details.

...

The libvirt case is the same as baremetal, but uses vbmc (virtual BMC emulation) to simulate baremetal hosts with virtual machines.

Create site for AWS and GCP

In order to deploy a blueprint, you need to create a repository with a site. The site configuration is based on kustomize, and needs to use our blueprints as a base, referencing them properly. Sample sites for deploying on libvirt, AWS and baremetal can be seen at: https://github.com/akraino-edge-stack/kni-blueprint-pae/tree/master/sites.
A site needs to have the following structure:

...

Follow the same structure as 01_cluster_mods, but in this case it is used for adding additional workloads after cluster deployment. These directories also need a kustomization.yaml file that references the file at the same level in the blueprint, and can include additional resources and patches (see the sketch below).
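As an illustrative sketch only (the directory name, blueprint path and resource file below are placeholders, not taken from a real site), such a kustomization.yaml could be created from the site repository like this:

# Hypothetical example: kustomization.yaml for a post-deployment workloads directory.
# Directory name, blueprint path and resource file are placeholders.
cat > 02_cluster-addons/kustomization.yaml <<'EOF'
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.baremetal/02_cluster-addons
resources:
- my-extra-workload.yaml
EOF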

How to deploy on AWS

...

and GCP

...

The whole deployment workflow is based on the knictl CLI tool that this repository provides.

...

mkdir -p $GOPATH/src/gerrit.akraino.org/kni
cd $GOPATH/src/gerrit.akraino.org/kni
git clone https://gerrit.akraino.org/r/kni/installer
cd installer
make build
mkdir -p $GOPATH/bin/
cp knictl $GOPATH/bin/

# or copy knictl to another directory already on your PATH, such as:
cp knictl /usr/local/go/bin/
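You can confirm the binary is reachable from your PATH, for example:

# Verify the knictl binary resolves from the PATH
which knictl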

Secrets

Most secrets (TLS certificates, Kubernetes API keys, etc.) will be auto-generated for you, but you need to provide at least two secrets yourself:

...
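As a reference, the install-config shown later in this document expects a pull secret (PULL_SECRET) and an SSH public key (SSH_PUB_KEY) to be injected at runtime. Below is a minimal sketch of staging them where knictl can pick them up; the exact file names and locations are assumptions, so verify them against the installer README:

# Hypothetical locations and file names; check the knictl/installer README for the exact ones
mkdir -p $HOME/.kni
cp /path/to/your-pull-secret.json $HOME/.kni/   # OpenShift pull secret (injected as PULL_SECRET)
cp $HOME/.ssh/id_rsa.pub $HOME/.kni/            # SSH public key (injected as SSH_PUB_KEY)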

Configure the system properly to run knictl on it: Install knictl

Fetch requirements

Inside the knictl path (typically $HOME/go/src/gerrit.akraino.org/kni/installer), run the fetch_requirements command, pointing to the GitHub repo of the site you created
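For example (this is the same syntax shown later in the libvirt section):

./knictl fetch_requirements <site repo URI>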

...

You can enter the console with the kubeadmin user and the password that is shown at the end of the install.

...

How to Deploy on libvirt

Minimal hardware footprint needed

Only one server is needed; it will act as a virthost. The master and worker VMs will be created on it.

Server# | Role | Purpose

1 | Installer node | This host is used for remotely installing and configuring the master and worker nodes. It also hosts the bootstrap node on KVM-QEMU using libvirt. Several components, such as HAProxy, DNS server, DHCP server for the provisioning and baremetal networks, CoreDNS, Matchbox, Terraform, IPMItool and TFTPboot, are configured on this server. Since the cluster CoreDNS is running from here, this node will be required later as well.

High level connectivity

Network connectivity will be the same as in the baremetal case, but these can be dummy interfaces, as all the network traffic stays inside the same host:

Interface | Purpose

Management interface | Remote root login from this interface is used for the entire setup. This interface needs internet connectivity to download various files. It can be shared with the external interface. It only needs to be present on the installer node.

External interface | Interface on the installer node that has internet connectivity. All external traffic from masters/workers is redirected to the external interface of the installer node.

Baremetal interface | This interface is for the baremetal network, also known as the SDN network. It does not need internet connectivity.

Provisioning interface | This interface is for PXE boot. It does not need internet connectivity.

Pre-requisites

OS requirements

Node Role | OS requirement

Installer | CentOS 7.6 and above

High level steps

Create site for virtual baremetal

The procedure for virtual baremetal is the same as for the baremetal case, but with extra flags added to indicate that the process is virtual.

The first step to start a virtual baremetal deployment is to have a site defined, with all the network and baremetal settings defined in YAML files. A sample site using this baremetal automation can be seen here.
To define the settings for a site, the first section, 00_install-config, is used.
Start by creating a kustomization file like the following: https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/community.baremetal.edge-sites.net/00_install-config/kustomization.yaml

bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.baremetal/00_install-config

patches:
- install-config.patch.yaml

patchesJson6902:
- target:
    version: v1
    kind: InstallConfig
    name: cluster
  path: install-config.name.patch.yaml

transformers:
- site-config.yaml

In this kustomization file we are patching the default install-config, and also adding some extra files to define networking (site-config.yaml).

install-config.name.patch.yaml: https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/testing.baremetal.edge-sites.net/00_install-config/install-config.name.patch.yaml

- op: replace
  path: "/metadata/name"
  value: testing <- replace with your cluster name here

install-config.patch.yaml : https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/testing.baremetal.edge-sites.net/00_install-config/install-config.patch.yaml

apiVersion: v1
kind: InstallConfig
baseDomain: baremetal.edge-sites.net <- domain for your site
compute:
- name: worker
  replicas: 2
controlPlane:
  name: master
  platform: {}
  replicas: 1
metadata:
  name: cluster
networking:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
  apiVIP: 192.168.111.4
  ingressVIP: 192.168.111.3
  dnsVIP: 192.168.111.2
  hosts: {} <- see it's empty, this will be created automatically as it's virtual
pullSecret: 'PULL_SECRET' <- leave like that, it will be replaced in runtime
sshKey: |
  SSH_PUB_KEY <- leave like that, it will be replaced in runtime

site-config.yaml: https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/testing.baremetal.edge-sites.net/00_install-config/site-config.yaml

apiVersion: kni.akraino.org/v1alpha1
kind: SiteConfig
metadata:
  name: notImportantHere
config:
  virtualizedInstall: "true" <- this will tell the installer to deploy with virtual baremetal
provisioningInfrastructure:
  hosts:
    # interface to use for provisioning on the masters
    masterBootInterface: eno1
    # interface to use for provisioning on the workers
    workerBootInterface: eno1
    # interface to use for baremetal on the masters
    masterSdnInterface: eno2
    # interface to use for baremetal on the workers
    workerSdnInterface: eno2

  network:
    # The provisioning network's CIDR
    provisioningIpCidr: 172.22.0.0/24
    # PXE boot server IP
    # DHCP range start (usually provHost/interfaces/provisioningIpAddress + 1)
    provisioningDHCPStart: 172.22.0.11
    provisioningDHCPEnd: 172.22.0.51

    # The baremetal network's CIDR
    baremetalIpCidr: 192.168.111.0/24
    # Address map
    # bootstrap: baremetalDHCPStart   i.e. 192.168.111.10
    # master-0: baremetalDHCPStart+1  i.e. 192.168.111.11
    # master-1: baremetalDHCPStart+2  i.e. 192.168.111.12
    # master-2: baremetalDHCPStart+3  i.e. 192.168.111.13
    # worker-0: baremetalDHCPStart+5  i.e. 192.168.111.15
    # worker-N: baremetalDHCPStart+5+N
    baremetalDHCPStart: 192.168.111.10
    baremetalDHCPEnd: 192.168.111.50
    # baremetal network default gateway, set to proper IP if /provHost/services/baremetalGateway == false
    # if /provHost/services/baremetalGateway == true, baremetalGWIP will be located on provHost/interfaces/baremetal
    # and external traffic will be routed through the provisioning host
    baremetalGWIP: 192.168.111.4
    dns:
      # cluster DNS, change to proper IP address if provHost/services/clusterDNS == false
      # if /provHost/services/clusterDNS == true, the cluster DNS IP will be located on provHost/interfaces/provisioning
      # and DNS functionality will be provided by the provisioning host
      cluster: 192.168.111.3
      # Up to 3 external DNS servers to which non-local queries will be directed
      external1: 10.10.160.1
      external2: 10.10.160.2

  provHost:
    interfaces:
      # Interface on the provisioning host that connects to the provisioning network
      provisioning: dummy0
      # Must be in provisioningIpCidr range
      # pxe boot server will be at port 8080 on this address
      provisioningIpAddress: 172.22.0.1
      # Interface on the provisioning host that connects to the baremetal network
      baremetal: eno1
      # Must be in baremetalIpCidr range
      baremetalIpAddress: 192.168.111.199
      # Interface on the provisioning host that connects to the internet/external network
      external: eno3
    bridges:
      # These bridges are created on the bastion host
      provisioning: provisioning
      baremetal: baremetal
    services:
      # Does the provisioning host provide DHCP services for the baremetal network?
      baremetalDHCP: true
      # Does the provisioning host provide DNS services for the cluster?
      clusterDNS: true
      # Does the provisioning host provide a default gateway for the baremetal network?
      baremetalGateway: true

Setup installer node

Install the CentOS operating system there. Once you have it, configure your NICs/VLANs properly. You can make use of dummy interfaces if needed, as the network will be fully virtualized (see the sketch below).
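For example, a dummy interface (such as the dummy0 used as the provisioning interface in the sample site-config.yaml above) can be created as follows; the interface name is just an example:

# Load the dummy kernel module and create a dummy interface to back the provisioning network
sudo modprobe dummy
sudo ip link add dummy0 type dummy
sudo ip link set dummy0 up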

Configure the system properly to run knictl on it: Install knictl

Fetch requirements

Inside the knictl path (typically $HOME/go/src/gerrit.akraino.org/kni/installer), run the fetch_requirements command, pointing to the GitHub repo of the site you created

 ./knictl fetch_requirements <site repo URI> 

For example:

./knictl fetch_requirements github.com/akraino-edge-stack/kni-blueprint-pae/tree/master/sites/testing.baremetal.edge-sites.net

Prepare manifests

Run the prepare_manifests command, using the name of your site as a parameter

./knictl prepare_manifests $SITE_NAME

For example:
./knictl prepare_manifests testing.baremetal.edge-sites.net

Remember that the generated files have a validity of 24 hours. If you don't finish the installation within that time, you'll need to re-run this command.

Deploy masters

Code Block
languagebash
./knictl deploy_masters $SITE_NAME

This will deploy a bootstrap VM and begin to bring up your master nodes. Once the masters have reached the Ready state, you can then deploy your workers. You can monitor the progress of the installation with:

Code Block
languagebash
$HOME/.kni/$SITE_NAME/requirements/openshift-install wait-for bootstrap-complete --dir $HOME/.kni/$SITE_NAME/baremetal_automation/ocp/
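
You can also check the state of the master nodes directly, for example with the oc binary fetched into the requirements directory and the kubeconfig produced by the baremetal automation:

Code Block
languagebash
export KUBECONFIG=$HOME/.kni/$SITE_NAME/baremetal_automation/ocp/auth/kubeconfig
$HOME/.kni/$SITE_NAME/requirements/oc get nodes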

When all master nodes are shown as Ready, you can start deploying your workers.

Deploy workers

Code Block
languagebash
./knictl deploy_workers $SITE_NAME

This will begin to bring up your worker nodes. Monitor your worker nodes as you normally would during this process. If the deployment doesn't hit any errors, you will then have a working baremetal cluster.

You can monitor the state of the cluster with:

Code Block
languagebash
$HOME/.kni/$SITE_NAME/requirements/openshift-install wait-for install-complete --dir $HOME/.kni/$SITE_NAME/baremetal_automation/ocp/

After masters and workers are up, you can apply the workloads using the general procedure with:

Code Block
languagebash
./knictl apply_workloads $SITE_NAME --kubeconfig $HOME/.kni/$SITE_NAME/baremetal_automation/ocp/auth/kubeconfig

Accessing the Cluster

After the deployment finishes, a kubeconfig file will be placed inside the auth directory:

export KUBECONFIG=$HOME/.kni/$SITE_NAME/final_manifests/auth/kubeconfig

NOTE: When using automated baremetal deployment, the kubeconfig will be found here instead:

export KUBECONFIG=$HOME/.kni/$SITE_NAME/baremetal_automation/ocp/auth/kubeconfig

The cluster can then be managed with the kubectl or oc (a drop-in replacement with additional functionality) CLI tools.

To verify a correct setup, you can check the nodes again, and see if the masters and workers are Ready:

Code Block
languagebash
$HOME/.kni/$SITE_NAME/requirements/oc get nodes

You can also check if the cluster is available:

Code Block
languagebash
$HOME/.kni/$SITE_NAME/requirements/oc get clusterversion

You can also verify that the console is working; the console URL is the following:

Code Block
languagebash
 https://console-openshift-console.apps.$CLUSTER_NAME.$CLUSTER_DOMAIN

You can enter the console with the kubeadmin user and the password that is shown at the end of the install.
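If you no longer have that output at hand, openshift-install also writes the password next to the kubeconfig in the auth directory (adjust the path for manual vs. automated deployments, as with KUBECONFIG above):

Code Block
languagebash
cat $HOME/.kni/$SITE_NAME/baremetal_automation/ocp/auth/kubeadmin-password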


Destroying the Cluster

Manual

When needed, the site can be destroyed with the openshift-install command, using the following syntax:

Code Block
languagebash
$HOME/.kni/$SITE_NAME/requirements/openshift-install destroy cluster --dir $HOME/.kni/$SITE_NAME/final_manifests

Automated (Baremetal / virtual baremetal only)

A baremetal UPI cluster that was deployed using knictl's automation commands (deploy_masters / deploy_workers) can be destroyed like so:

...