Overview

This document describes how to deploy blueprints from Akraino's KNI Blueprint Family. It is common to all blueprints in that family, unless otherwise noted.

...

Pre-Requisites for Deploying to Bare Metal

The baremetal UPI install can optionally be automated using knictl (see below). When attempting a manual baremetal UPI install, however, please be sure to read: https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal/installing-bare-metal.html

...

Please see the upstream documentation for details.

Structure of a site

In order to deploy a blueprint, you need to create a repository with a site. The site configuration is based on kustomize, and needs to use our blueprints as a base, referencing them properly. Sample sites for deploying on libvirt, AWS and baremetal can be seen at: https://github.com/akraino-edge-stack/kni-blueprint-pae/tree/master/sites. If you intend to use automated baremetal UPI deployment (see below), please consult this sample site instead: https://github.com/abays/kni-site.
Site needs to have this structure:

.
├── 00_install-config
│   ├── install-config.name.patch.yaml
│   ├── install-config.patch.yaml
│   ├── kustomization.yaml
│   └── site-config.yaml
├── 01_cluster-mods
│   ├── kustomization.yaml
│   ├── manifests
│   └── openshift
├── 02_cluster-addons
│   └── kustomization.yaml
└── 03_services
    └── kustomization.yaml

...

Code Block
languageyml
apiVersion: kni.akraino.org/v1alpha1
kind: SiteConfig
metadata:
  name: notImportantHere
  config:
    releaseImageOverride: registry.svc.ci.openshift.org/origin/release:4.1

NOTE: If you intend to use knictl's baremetal UPI automation (see below), you will need to add a provisioningInfrastructure block to your site-config.yaml for the automation to consume.  Below is an example template, with in-line comments describing the various options:

Code Block
languageyml
provisioningInfrastructure:
  hosts:
    # interface to use for provisioning on the masters
    masterBootInterface: eno2
    # interface to use for provisioning on the workers
    workerBootInterface: eno2
    # interface to use for baremetal on the masters
    masterSdnInterface: ens1f0
    # interface to use for baremetal on the workers
    workerSdnInterface: ens1f0

  network:
    # The provisioning network's CIDR
    provisioningIpCidr: 172.22.0.0/24
    # PXE boot server IP
    # DHCP range start (usually provHost/interfaces/provisioningIpAddress + 1)
    provisioningDHCPStart: 172.22.0.11
    provisioningDHCPEnd: 172.22.0.51

    # The baremetal network's CIDR
    baremetalIpCidr: 192.168.111.0/24
    # Address map
    # bootstrap: baremetalDHCPStart   i.e. 192.168.111.10
    # master-0: baremetalDHCPStart+1  i.e. 192.168.111.11
    # master-1: baremetalDHCPStart+2  i.e. 192.168.111.12
    # master-2: baremetalDHCPStart+3  i.e. 192.168.111.13
    # worker-0: baremetalDHCPStart+5  i.e. 192.168.111.15
    # worker-N: baremetalDHCPStart+5+N
    baremetalDHCPStart: 192.168.111.10   
    baremetalDHCPEnd: 192.168.111.50
    # baremetal network default gateway, set to proper IP if /provHost/services/baremetalGateway == false
    # if /provHost/services/baremetalGateway == true, baremetalGWIP will be located on provHost/interfaces/baremetal
    # and external traffic will be routed through the provisioning host
    baremetalGWIP: 192.168.111.4
    dns:
      # cluster DNS, change to proper IP address if provHost/services/clusterDNS == false
      # if /provHost/services/clusterDNS == true, cluster (IP) will be located on provHost/interfaces/provisioning
      # and DNS functionality will be provided by the provisioning host
      cluster: 192.168.111.3
      # Up to 3 external DNS servers to which non-local queries will be directed
      external1: 10.11.5.19 
#     external2: 10.11.5.19 
#     external3: 10.11.5.19 

  provHost:
    interfaces:
      # Interface on the provisioning host that connects to the provisioning network
      provisioning: eno2
      # Must be in provisioningIpCidr range
      # pxe boot server will be at port 8080 on this address
      provisioningIpAddress: 172.22.0.10
      # Interface on the provisioning host that connects to the baremetal network
      baremetal: ens1f0
      # Must be in baremetalIpCidr range
      baremetalIpAddress: 192.168.111.6
      # Interface on the provisioning host that connects to the internet/external network
      external: eno1
    bridges:
      # These bridges are created on the bastion host
      provisioning: provisioning
      baremetal: baremetal
    services:
      # Does the provisioning host provide DHCP services for the baremetal network?
      baremetalDHCP: true
      # Does the provisioning host provide DNS services for the cluster?
      clusterDNS: true
      # Does the provisioning host provide a default gateway for the baremetal network?
      baremetalGateway: true
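
The 00_install-config directory also carries its own kustomization.yaml, tying the install-config patches from the site tree to the blueprint's base. The following is only a minimal sketch: the base URL assumes the production.aws profile used elsewhere in this document, and the patch wiring is an assumption rather than the blueprint's exact layout.

Code Block
languageyml
# sketch of 00_install-config/kustomization.yaml (assumed layout, adjust to your blueprint/profile)
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/00_install-config
patchesStrategicMerge:
- install-config.patch.yaml
- install-config.name.patch.yaml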

01_cluster-mods

This is the directory that will contain all the customizations for the basic cluster deployment. You can create patches to modify the number of masters/workers, network settings, and anything else that needs to be changed at cluster deployment time. It needs a basic kustomization.yaml file that references the file at the same level in the blueprint, and you can create additional patches following kustomize syntax:

Code Block
languageyml
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/01_cluster-mods
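
For example, a site could extend this kustomization with a patch of its own. The patch file name below is hypothetical; the target it modifies depends on the manifests produced for your blueprint:

Code Block
languageyml
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/01_cluster-mods
patchesStrategicMerge:
# hypothetical site-local patch, e.g. tweaking network settings at deployment time
- cluster-network.patch.yaml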

02_cluster-addons and 03_services

These follow the same structure as 01_cluster-mods, but in this case they are used to add extra workloads after cluster deployment. They also need a kustomization.yaml file that references the file at the same level in the blueprint, and they can include additional resources and patches.
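
As an illustration, a 03_services kustomization that pulls in the blueprint base and adds a site-local workload could look like this (the resource file name is hypothetical):

Code Block
languageyml
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/03_services
resources:
# hypothetical additional workload shipped by the site
- my-workload.yaml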

How to deploy

The whole deployment workflow is based on the knictl CLI tool provided by this repository.

1. Fetch requirements for a site.

You need to have a site repository with the structure described above. The first step is then to fetch the requirements needed for the blueprint that the site references. This is achieved with:

Code Block
languagebash
./knictl fetch_requirements github.com/site-repo.git 

Where the first argument references a site repository, following https://github.com/hashicorp/go-getter syntax. This will download the site repository and create a folder with the site name inside $HOME/.kni. It will also fetch all the binaries needed and store them in the $HOME/.kni/$SITE_NAME/requirements folder.
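
Since go-getter syntax is accepted, you can also point at a subdirectory of a repository or pin a specific branch. The repository path below is purely illustrative:

Code Block
languagebash
# fetch a site kept in a subdirectory of a repo, pinned to the master branch (go-getter syntax)
./knictl fetch_requirements "github.com/example-org/example-sites.git//sites/my-site?ref=master"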

2. Prepare manifests for a site

NOTE: Before performing this step, you must copy your OpenShift pull secret into your build path (i.e. to ~/.kni/pull-secret.json).
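
For example (the source path of the downloaded pull secret is just a placeholder):

Code Block
languagebash
# place the OpenShift pull secret where knictl expects it
mkdir -p $HOME/.kni
cp /path/to/downloaded/pull-secret.json $HOME/.kni/pull-secret.json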

The next step is to prepare all the manifests for deploying the site. knictl applies kustomize on the site repository, combines that with the base manifests for the blueprint, and merges the result with the manifests generated by the installer at runtime. This is done with the following command:

Code Block
languagebash
./knictl prepare_manifests $SITE_NAME

This will generate a set of manifests ready to apply and store them in the $HOME/.kni/$SITE_NAME/final_manifests folder. Along with the manifests, a profile.env file is also created in the $HOME/.kni/$SITE_NAME folder. It contains environment variables that can be sourced before deploying the cluster. The variables currently exported are listed below, with an illustrative profile.env after the list:

  • OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE: used when an image other than the default one is wanted.
  • TF_VAR_libvirt_master_memory, TF_VAR_libvirt_master_vcpu: used in the libvirt case to define the memory and vCPUs for the VMs.
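
As an illustration, a profile.env for a libvirt site could look like the following; all values are examples only, with the release image taken from the site-config sample above:

Code Block
languagebash
# illustrative contents of $HOME/.kni/$SITE_NAME/profile.env
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=registry.svc.ci.openshift.org/origin/release:4.1
export TF_VAR_libvirt_master_memory=8192
export TF_VAR_libvirt_master_vcpu=4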

3. Deploy the cluster

Manual

Before starting the deployment, it is recommended to source the environment variables from profile.env. You can do that with:

Code Block
languagebash
source $HOME/.kni/$SITE_NAME/profile.env

Then, you need to deploy the cluster using the generated manifests. This can be achieved with:

Code Block
languagebash
$HOME/.kni/$SITE_NAME/requirements/openshift-install create cluster --dir=$HOME/.kni/$SITE_NAME/final_manifests

This will deploy a cluster based on the specified manifests. You can learn more about how to manage the cluster deployment and how to interact with it at https://docs.openshift.com/container-platform/4.1/welcome/index.html
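
Once the installer finishes, the kubeconfig it writes under the assets directory can be used to interact with the cluster (this is standard openshift-install behaviour):

Code Block
languagebash
export KUBECONFIG=$HOME/.kni/$SITE_NAME/final_manifests/auth/kubeconfig
oc get nodes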

For deploying to baremetal using UPI, you will need to generate ignition files and use them when provisioning the machines. You can create the ignition files with the following command, instead of create cluster:

Code Block
languagebash
$HOME/.kni/$SITE_NAME/requirements/openshift-install create ignition-configs --dir=$HOME/.kni/$SITE_NAME/final_manifests
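
This produces the standard openshift-install ignition assets (bootstrap.ign, master.ign and worker.ign) in the assets directory; serving them to the machines at provisioning time is site specific:

Code Block
languagebash
# ignition files to hand over to your provisioning tooling
ls $HOME/.kni/$SITE_NAME/final_manifests/*.ign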

...

are deploying on baremetal, specific configuration needs to be set. This is covered in a dedicated section below.

Deploying on baremetal

Minimal hardware footprint needed

This is a minimal configuration example where only 3 servers are used. The servers and their roles are given below.

1. Installer node: This host is used for remotely installing and configuring the master and worker nodes. It also hosts the bootstrap node on KVM/QEMU using libvirt. Several components, such as HAProxy, a DNS server, DHCP servers for the provisioning and baremetal networks, CoreDNS, Matchbox, Terraform, IPMItool and TFTP boot, are configured on this server. Since the cluster CoreDNS runs from here, this node will still be required after the installation.

2. Master node: This is the control plane (master) node of the Kubernetes cluster, based on OpenShift 4.x.

3. Worker node: This is the worker node, which hosts the applications.

4. Bootstrap node: The bootstrap node runs as a VM on the installer node; it exists only during the installation and is automatically deleted by the installer afterwards.

High level connectivity

Each server should have 3 Ethernet ports configured; their purpose is listed below. These three are in addition to the IPMI port, which is required for PXE boot.

  • Management interface: Remote root login over this interface is used for the entire setup. It needs internet connectivity to download various files and can be shared with the external interface. It only needs to be present on the installer node.
  • External interface: Interface on the installer node that has internet connectivity. All external traffic from masters/workers is redirected to the external interface of the installer node.
  • Baremetal interface: This interface is for the baremetal network, also known as the SDN network. It does not need internet connectivity.
  • Provisioning interface: This interface is for PXE boot. It does not need internet connectivity.

Pre-requisites

OS requirements

  • Installer: CentOS 7.6 and above
  • Bootstrap: RHCOS (Red Hat CoreOS)
  • Master: RHCOS (Red Hat CoreOS)
  • Worker: RHCOS / RHEL / CentOS / CentOS-rt

Network requirements

  • Configure the required network interfaces as explained earlier. Be sure that each server has its PXE NIC configured properly, matching the interface that you are using for this deployment. You can set this from the BIOS setup, under the NIC configuration menu.
  • Collect the IPs and MAC addresses of all the nodes; a sample is given below. This information will be required to populate the config files:

A sample entry for the installer node is given below; collect the same fields for master-0, worker-0 and any additional nodes:

  • iDRAC IP / IPMI port IP: xx.xx.xx.xx
  • Provisioning network IP: xx.xx.xx.xx
  • Baremetal network IP: xx.xx.xx.xx
  • Management network IP: xx.xx.xx.xx
  • Provisioning network port & MAC: em1 / 21:02:0E:DC:BC:27
  • Baremetal network port & MAC: em2 / 21:02:0E:DC:BC:28
  • Management network port & MAC: em3 / 21:02:0E:DC:BC:29

knictl offers two commands to automate the deployment of a baremetal UPI cluster (and only baremetal UPI, at this time).  As prerequisites to using these commands, you must ensure the following are true:

...

Code Block
languagebash
$HOME/.kni/$SITE_NAME/requirements/openshift-install destroy cluster --dir $HOME/.kni/$SITE_NAME/final_manifests

Automated (Baremetal UPI only)

A baremetal UPI cluster that was deployed using knictl's automation commands (deploy_masters / deploy_workers) can be destroyed like so:

...