Overview

This document describes how to deploy blueprints from Akraino's KNI Blueprint Family. It is common to all blueprints in that family, unless otherwise noted.

...

Pre-Requisites for Deploying to Bare Metal

The baremetal UPI install can optionally be automated using knictl (see below). When attempting a manual baremetal UPI install, however, please be sure to read: https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal/installing-bare-metal.html

...

Specific instructions for baremetal are provided below.

...

4. Apply workloads

After the cluster has been generated, the extra workloads that have been specified in the manifests (such as kubevirt) need to be applied. This can be achieved by:

Code Block
languagebash
./knictl apply_workloads $SITE_NAME

This will execute kustomize on the site manifests and apply the output to the cluster. After that, the site deployment can be considered finished.
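
As a quick sanity check (a sketch, assuming kubevirt was one of the workloads in your site manifests and that oc is already pointed at the new cluster), you can confirm that the extra workloads were applied:

Code Block
languagebash
# Look for the example workload's resources; names and namespaces depend on what your site manifests include
oc get crd | grep -i kubevirt
oc get pods --all-namespaces | grep -i virt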

Deploying on baremetal

Minimal hardware footprint needed

This is a minimal configuration example in which only three physical servers are used. The servers and their roles are listed below.

1. Installer node: This host is used for remotely installing and configuring the master and worker nodes. It also hosts the bootstrap node as a KVM/QEMU VM using libvirt. Several components, such as HAProxy, a DNS server, a DHCP server for the provisioning and baremetal networks, CoreDNS, Matchbox, Terraform, IPMItool and TFTP boot, are configured on this server. Since the cluster CoreDNS runs from here, this node is also required after the installation.

2. Master node: The control plane (master) node of the K8s cluster, which is based on OpenShift 4.x.

3. Worker node: The worker node, which hosts the applications.

4. Bootstrap node: Runs as a VM on the installer node; it exists only during the installation and is automatically deleted by the installer afterwards.
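
Since the bootstrap node runs as a libvirt VM on the installer node, you can watch it come and go during the installation (a sketch; it assumes the libvirt client tools are present on the installer node):

Code Block
languagebash
# On the installer node: the bootstrap VM only shows up here while the install is in progress
sudo virsh list --all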

...

The first step in starting a baremetal deployment is to have a site defined, with all the network and baremetal settings defined in the YAML files. A sample site using this baremetal automation can be seen here.
In order to define the settings for a site, the first section, 00_install-config, needs to be used.
Start by creating a kustomization file like the following: https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/community.baremetal.edge-sites.net/00_install-config/kustomization.yaml

bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.baremetal/00_install-config

patches:
- install-config.patch.yaml

patchesJson6902:
- target:
    version: v1
    kind: InstallConfig
    name: cluster
  path: install-config.name.patch.yaml

transformers:
- site-config.yaml

In this kustomization file we are patching the default install-config, and also adding some extra files to define networking (site-config.yaml).
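
For orientation, the 00_install-config directory of a site ends up containing roughly the following files (illustrative listing; credentials.yaml is created locally and is not committed, as explained below):

Code Block
languagebash
ls sites/community.baremetal.edge-sites.net/00_install-config/
# kustomization.yaml  install-config.patch.yaml  install-config.name.patch.yaml  site-config.yaml
# credentials.yaml is added by you and kept out of the repo, since it contains IPMI credentials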

credentials.yaml:

This file is not shown in the site structure, as it contains private content. It needs to have the following structure:

apiVersion: v1
kind: Secret
metadata:
  name: community-lab-ipmi
stringData:
  username: xxx <- base64 encoded IPMI username
  password: xxx <- base64 encoded IPMI password
type: Opaque
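
To produce the base64 encoded values for this secret, you can encode your IPMI username and password as follows (a minimal sketch; replace the sample strings with your real credentials):

Code Block
languagebash
# Paste the resulting strings into credentials.yaml
echo -n 'ipmi-username' | base64
echo -n 'ipmi-password' | base64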

install-config.name.patch.yaml: https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/community.baremetal.edge-sites.net/00_install-config/install-config.name.patch.yaml

- op: replace
  path: "/metadata/name"
  value: community <- replace with your cluster name here

install-config.patch.yaml: https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/community.baremetal.edge-sites.net/00_install-config/install-config.patch.yaml

apiVersion: v1
kind: InstallConfig
baseDomain: baremetal.edge-sites.net <- domain of your site
compute:
 - name: worker
   replicas: 2 <- number of needed workers
controlPlane:
   name: master
   platform: {}
   replicas: 1 <- number of needed masters (1/3)
metadata:
   name: cluster
networking:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
   none: {}
   apiVIP: 192.168.111.4  <- IP for the Kubernetes API endpoint, needs to be in the range of your baremetal network
   ingressVIP: 192.168.111.3 <- IP for the Kubernetes ingress endpoint, needs to be in the range of your baremetal network
   dnsVIP: 192.168.111.2 <- IP for the Kubernetes DNS endpoint, needs to be in the range of your baremetal network
   hosts:
      # Master nodes are always RHCOS
      -  name: master-0
         role: master
         bmc:
            address: ipmi://10.11.7.12 <- IPMI address for the master
            credentialsName: community-lab-ipmi <- this needs to reference the name of the secret provided in credentials.yaml
         bootMACAddress: 3C:FD:FE:CD:98:C9  <- MAC address of the provisioning interface of your master
         sdnMacAddress: 3C:FD:FE:CD:98:C8   <- MAC address of the baremetal interface of your master
         # sdnIPAddress: 192.168.111.11     <- Optional -- set a static IP on your baremetal network for your master
         hardwareProfile: default
         osProfile:
            # With role == master, the osType is always rhcos
            # And with type rhcos, the following settings are available
            type: rhcos
            pxe: bios         <- PXE boot type, either bios (default if not specified) or uefi
            install_dev: sda  <- where to install the operating system (sda is the default)
      # Worker nodes can be either rhcos (default) || centos (7.x) || rhel (8.x)
      -  name: worker-0
         role: worker
         bmc:
            address: ipmi://10.11.7.13
            credentialsName: community-lab-ipmi
         bootMACAddress: 3C:FD:FE:CD:9E:91
         sdnMacAddress: 3C:FD:FE:CD:9E:90
         hardwareProfile: default
         provisioning_interface: enp134s0f1 <- specify this if the provisioning interface differs from the one given in site-config.yaml below
         baremetal_interface: enp134s0f0 <- specify this if the baremetal interface differs from the one given in site-config.yaml below
         # If an osProfile/type is not defined, the node defaults to RHCOS
         osProfile:
            type: centos7
            # With type: rhcos the following settings are available
            pxe: bios         # PXE boot type, either bios (default if not specified) or uefi
            install_dev: sda  # where to install the operating system (sda is the default)
      -  name: worker-1
         role: worker
         bmc:
            address: ipmi://10.11.7.14
            credentialsName: community-lab-ipmi
         bootMACAddress: 3C:FD:FE:CD:9B:81
         sdnMacAddress: 3C:FD:FE:CD:9B:80
         hardwareProfile: default
         # If an osProfile/type is not defined, the node defaults to RHCOS
         # osProfile:
         #    type: rhcos
         #    # With type: rhcos the following settings are available
         #    pxe: bios|uefi    # PXE boot type, either bios (default if not specified) or uefi
         #    install_dev: sda  # where to install the operating system (sda is the default)
pullSecret: 'PULL_SECRET'
sshKey: |
  SSH_PUB_KEY
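
The PULL_SECRET and SSH_PUB_KEY placeholders must be replaced with real values before deploying (a sketch; it assumes you have a Red Hat account from which you can download an OpenShift pull secret, and it generates a new SSH keypair if you do not already have one):

Code Block
languagebash
# Generate an SSH keypair if needed and print the public key to paste under sshKey
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub
# The pull secret is downloaded from your Red Hat account and pasted as-is into pullSecret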

site-config.yaml: https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/community.baremetal.edge-sites.net/00_install-config/site-config.yaml

apiVersion: kni.akraino.org/v1alpha1
kind: SiteConfig
metadata:
  name: notImportantHere
config: {}
provisioningInfrastructure:
  hosts:
    # interface to use for provisioning on the masters
    masterBootInterface: ens787f1 <- name of the provisioning interface for the masters
    # interface to use for provisioning on the workers
    workerBootInterface: ens787f1 <- name of the provisioning interface for the workers
    # interface to use for baremetal on the masters
    masterSdnInterface: ens787f0 <- name of the baremetal interface for the masters
    # interface to use for baremetal on the workers
    workerSdnInterface: ens787f0 <- name of the baremetal interface for the workers

  network:
    # The provisioning network's CIDR
    provisioningIpCidr: 172.22.0.0/24 <- range of the provisioning network
    # PXE boot server IP
    # DHCP range start (usually provHost/interfaces/provisioningIpAddress + 1)
    provisioningDHCPStart: 172.22.0.11 <- DHCP start range of the provisioning network
    provisioningDHCPEnd: 172.22.0.51 <- DHCP end range

    # The baremetal network's CIDR
    baremetalIpCidr: 192.168.111.0/24 <- range of the baremetal network
    # Address map
    # bootstrap: baremetalDHCPStart   i.e. 192.168.111.10
    # master-0: baremetalDHCPStart+1  i.e. 192.168.111.11
    # master-1: baremetalDHCPStart+2  i.e. 192.168.111.12
    # master-2: baremetalDHCPStart+3  i.e. 192.168.111.13
    # worker-0: baremetalDHCPStart+5  i.e. 192.168.111.15
    # worker-N: baremetalDHCPStart+5+N
    baremetalDHCPStart: 192.168.111.10 <- DHCP start range of the baremetal network. Needs to start with an IP that does not conflict with previous baremetal VIP definitions
    baremetalDHCPEnd: 192.168.111.50 <- DHCP end range
    # baremetal network default gateway, set to proper IP if /provHost/services/baremetalGateway == false
    # if /provHost/services/baremetalGateway == true, baremetalGWIP will be located on provHost/interfaces/baremetal
    # and external traffic will be routed through the provisioning host
    baremetalGWIP: 192.168.111.4
    dns:
      # cluster DNS, change to proper IP address if provHost/services/clusterDNS == false
      # if /provHost/services/clusterDNS == true, the cluster DNS IP will be located on provHost/interfaces/provisioning
      # and DNS functionality will be provided by the provisioning host
      cluster: 192.168.111.3
      # Up to 3 external DNS servers to which non-local queries will be directed
      external1: 8.8.8.8
#     external2: 10.11.5.19
#     external3: 10.11.5.19

  provHost:
    interfaces:
      # Interface on the provisioning host that connects to the provisioning network
      provisioning: enp136s0f1 <- this typically needs to be a NIC, not a VLAN (unless your system supports PXE booting from VLANs)
      # Must be in provisioningIpCidr range
      # pxe boot server will be at port 8080 on this address
      provisioningIpAddress: 172.22.0.1
      # Interface on the provisioning host that connects to the baremetal network
      baremetal: enp136s0f0.3009
      # Must be in baremetalIpCidr range
      baremetalIpAddress: 192.168.111.1
      # Interface on the provisioning host that connects to the internet/external network
      external: enp136s0f0.3008
    bridges:
      # These bridges are created on the bastion host
      provisioning: provisioning <- typically leave these names as they are
      baremetal: baremetal
    services:
      # Does the provisioning host provide DHCP services for the baremetal network?
      baremetalDHCP: true <- set to false only if you have your own DHCP server for the baremetal network
      # Does the provisioning host provide DNS services for the cluster?
      clusterDNS: true <- set to false only if you have your own DNS on the baremetal network and can configure the required names properly
      # Does the provisioning host provide a default gateway for the baremetal network?
      baremetalGateway: true
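
After the automation has configured the provisioning host from this site-config, a quick way to confirm that the expected bridges and addresses exist is shown below (a sketch; the bridge names and the 172.22.0.1 address are the ones used in the example above):

Code Block
languagebash
# On the provisioning/installer host: check the provisioning and baremetal bridges
ip -br link show provisioning
ip -br link show baremetal
# And check that the PXE/provisioning IP from this file is configured on the host
ip -br addr | grep 172.22.0.1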

Setup installer node

Install the CentOS operating system on it. Once installed, configure your NICs/VLANs properly (management/external, provisioning, baremetal, IPMI). Be sure to collect the interface/VLAN information.
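
As an illustration of the VLAN part (a sketch only; the interface name and VLAN ID are taken from the example site-config.yaml above and will differ in your lab), a baremetal VLAN sub-interface can be created like this:

Code Block
languagebash
# Create a VLAN sub-interface for the baremetal network on top of the physical NIC and bring it up
sudo ip link add link enp136s0f0 name enp136s0f0.3009 type vlan id 3009
sudo ip link set enp136s0f0.3009 up
# For a persistent configuration, use your distribution's own tooling (e.g. nmcli or ifcfg files on CentOS) instead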

Configure the system properly to run knictl on it: Install knictl

knictl offers two commands to automate the deployment of a baremetal UPI cluster (and only baremetal UPI, at this time).  As prerequisites to using these commands, you must ensure the following are true:

...

Fetch requirements

Inside the knictl path (typically $HOME/go/src/gerrit.akraino.org/kni/installer), run the fetch_requirements command, pointing to the GitHub repo of the site you created:


 ./knictl fetch_requirements <site repo URI> 

For example:

./knictl fetch_requirements https://github.com/akraino-edge-stack/kni-blueprint-pae/tree/master/sites/community.baremetal.edge-sites.net


Prepare manifests

Run the prepare_manifests command, using the name of your site as a parameter:

 ./knictl prepare_manifests $SITE_NAME 

For example:
./knictl prepare_manifests community.baremetal.edge-sites.net
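
knictl renders the site manifests locally before anything is deployed; they typically land under $HOME/.kni/<site name> (this path is an assumption and may differ depending on your knictl version), so you can inspect them first:

Code Block
languagebash
# Inspect the rendered manifests before deploying (adjust the path if your knictl uses a different working directory)
ls $HOME/.kni/community.baremetal.edge-sites.net/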

Deploy masters

Once the aforementioned items have been dealt with, deploy your master nodes like so:

Code Block
languagebash
./knictl deploy_masters $SITE_NAME

This will deploy a bootstrap VM and begin to bring up your master nodes. After this command has successfully executed, monitor your cluster as you normally would while the masters are deploying. Once the masters have reached the ready state, you can then deploy your workers.

Deploy workers

Code Block
languagebash
./knictl deploy_workers $SITE_NAME

This will begin to bring up your worker nodes. Monitor your worker nodes as you normally would during this process. If the deployment doesn't hit any errors, you will then have a working baremetal cluster.

After masters and workers are up, you can apply the workloads using the general procedure shown in the Apply workloads section above.
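
While the masters and workers are coming up, "monitoring the cluster as you normally would" can look like the following (a sketch; it assumes the kubeconfig generated for the site, see Accessing the Cluster below, and a standard UPI flow where pending worker CSRs may need manual approval):

Code Block
languagebash
export KUBECONFIG=<path-to-site-auth>/kubeconfig   # see "Accessing the Cluster" below
# Watch nodes join and become Ready
oc get nodes -w
# On UPI installs, worker certificate signing requests may need to be approved before workers go Ready
oc get csr
oc adm certificate approve <csr-name>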

Accessing the Cluster

After the deployment finishes, a kubeconfig file will be placed inside the auth directory:
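
A minimal sketch of using it (the exact path depends on where the site artifacts were generated on your installer node):

Code Block
languagebash
export KUBECONFIG=<path-to-auth-dir>/kubeconfig
# Verify the cluster is healthy and find the console URL
oc get clusteroperators
oc whoami --show-console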

...