Introduction
This document describes how to deploy blueprints from Akraino's KNI Blueprint Family. It is common to all blueprints in that family, unless otherwise noted.
License
All our code is released under the Apache License 2.0: https://www.apache.org/licenses/LICENSE-2.0.html
How to use this document
This document describes the generic installation for our KNI blueprint family. Specific documentation is provided for the Provider Access Edge and Industrial Edge blueprints; see the KNI PAE Installation Guide.
Deployment architecture
See KNI PAE Architecture document
Pre-Installation Requirements
...
Pre-Requisites for Deploying to Bare Metal
The baremetal UPI install can be optionally automated when using knictl (see below). When attempting a manual baremetal UPI install, however, please be sure to read: https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal/installing-bare-metal.html
Pre-Requisites for Deploying to Google Cloud Platform
For deploying a KNI blueprint to GCP, you need to:
- enable service APIs
- setup DNS
- ensure sufficient quota
- create an installer service account
Please see the upstream documentation for details. As mentioned in the KNI installer repo, the service account JSON file should be located inside $HOME/.gcp with the name osServiceAccount.json (see the sketch below).
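A rough sketch of these steps using the gcloud CLI is shown below; the project ID (my-kni-project), service account name (kni-installer) and role are illustrative assumptions, and the exact APIs and permissions to enable are listed in the upstream documentation:
Code Block
# Illustrative only: project ID, service account name and role are assumptions
gcloud services enable compute.googleapis.com dns.googleapis.com --project my-kni-project
gcloud iam service-accounts create kni-installer --project my-kni-project
gcloud projects add-iam-policy-binding my-kni-project \
  --member "serviceAccount:kni-installer@my-kni-project.iam.gserviceaccount.com" \
  --role roles/owner
mkdir -p $HOME/.gcp
gcloud iam service-accounts keys create $HOME/.gcp/osServiceAccount.json \
  --iam-account kni-installer@my-kni-project.iam.gserviceaccount.com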
Pre-Requisites for Deploying to Libvirt
The procedure for deploying to libvirt is the same as for baremetal, but uses vbmc (virtual BMC emulation) to simulate baremetal hosts from virtual machines, as sketched below.
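For illustration, this is roughly what the vbmc side looks like: each VM (libvirt domain) gets its own virtual BMC endpoint that ipmitool, and therefore the installer, can talk to. The domain name, port and credentials below are assumptions:
Code Block
# Illustrative: expose a libvirt domain named kni-master-0 as an IPMI endpoint on port 6230
vbmc add kni-master-0 --port 6230 --username admin --password password
vbmc start kni-master-0
# Verify that the virtual BMC answers like a real one
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password chassis status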
Create site for AWS and GCP
In order to deploy a blueprint, you need to create a repository with a site. The site configuration is based in kustomize, and needs to use our blueprints as base, referencing that properly. Sample sites for deploying on libvirt, AWS and baremetal can be seen on: https://github.com/akraino-edge-stack/kni-blueprint-pae/tree/master/sites.
Site needs to have this structure:
.
├── 00_install-config
│ ├── install-config.name.patch.yaml
│ ├── install-config.patch.yaml
│ ├── kustomization.yaml
│ └── site-config.yaml
├── 01_cluster-mods
│ ├── kustomization.yaml
│ ├── manifests
│ └── openshift
├── 02_cluster-addons
│ └── kustomization.yaml
└── 03_services
└── kustomization.yaml
00_install-config
This folder will contain the basic settings for the site, including the base blueprint/profile, and the site name/domain. The following files are needed:
...
Code Block | ||
---|---|---|
| ||
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/00_install-config
patches:
- install-config.patch.yaml
patchesJson6902:
- target:
version: v1
kind: InstallConfig
name: cluster
path: install-config.name.patch.yaml
transformers:
- site-config.yaml |
...
install-config.patch.yaml: is a patch to modify the domain from the base blueprint. You need to customize with the domain you want to give to your site
Code Block | ||
---|---|---|
| ||
apiVersion: v1
kind: InstallConfig
metadata:
name: cluster
baseDomain: devcluster.openshift.com |
...
Code Block | ||
---|---|---|
| ||
- op: replace
path: "/metadata/name"
value: kni-site
|
- site-config.yaml: site configuration file, you can add entries in config to override behaviour of knictl (currently just releaseImageOverride is supported)
Code Block | ||
---|---|---|
| ||
apiVersion: kni.akraino.org/v1alpha1
kind: SiteConfig
metadata:
name: notImportantHere
config:
releaseImageOverride: registry.svc.ci.openshift.org/origin/release:4.1
|
NOTE: If you are deploying on baremetal, specific configuration needs to be set. This is going to be covered in an specific section for it
01_cluster_mods
This is the directory that will contain all the customizations for the basic cluster deployment. You could create patches for modifying number of masters/workers, network settings... everything that needs to be modified on cluster deployment time. It needs to have a basic kustomization.yaml file, that will reference the same level file for the blueprint. And you could create additional patches following kustomize syntax:
Code Block | ||
---|---|---|
| ||
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/01_cluster-mods |
02_cluster_addons and 03_services
Follow same structure as 01_cluster_mods, but in this case is for adding additional workloads after cluster deployment. They also need to have a kustomization.yaml file that references the file of the same level for the blueprint, and can include additional resources and patches.
How to deploy on AWS and GCP
The whole deployment workflow is based on knictl CLI tool that this repository is providing.
...
The current KNI blueprints use the openshift-install
tool from the OKD Kubernetes distro to stand up a minimal Kubernetes cluster. All other Day 1 and Day 2 operations are then driven purely through manipulation of declarative Kubernetes manifests. To use this in the context of Akraino KNI blueprints, the project has created a helper CLI tool that needs to be installed first on Installer Node.
If necessary, install golang binary (incl. GOPATH var) using following steps, you can use latest version instead of the one given below.
...
Next, install the following dependencies:
sudo yum install -y make gcc libvirt-devel
Then install the knictl:
mkdir -p $GOPATH/src/gerrit.akraino.org/kni
cd $GOPATH/src/gerrit.akraino.org/kni
git clone https://gerrit.akraino.org/r/kni/installer
cd installer
make build
mkdir -p $GOPATH/bin/
cp knictl $GOPATH/bin/cp knictl /usr/local/go/bin/
Secrets
Most secrets (TLS certificates, Kubernetes API keys, etc.) will be auto-generated for you, but you need to provide at least two secrets yourself:
- a public SSH key
- a pull secret
The public SSH key is automatically added to every machine provisioned into the cluster and allows remote access to that machine. In case you don't have / want to use an existing key, you can create a new key pair using:
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa
The pull secret is used to download the container images used during cluster deployment. Unfortunately, the OKD Kubernetes distro used by the KNI blueprints does not (yet) provide pre-built container images for all of the deployed components. Instead of going through the hassle of building those from source, we use the ones made available by openshift.com. Therefore, you need to go to https://cloud.redhat.com/openshift/install/metal/user-provisioned, log in (creating a free account, if necessary), and hit "Download Pull Secret".
Create a $HOME/.kni folder and copy the following files:
- id_rsa.pub → needs to contain the public key that you want to use to access your nodes
- pull-secret.json → needs to contain the pull secret previously copied
1. Fetch requirements for a site.
You need to have a site repository with the structure described above. Then, first thing is to fetch the requirements needed for the blueprint that the site references. This is achieved by:
Code Block | ||
---|---|---|
| ||
./knictl fetch_requirements github.com/site-repo.git |
Where the first argument references a site repository, following https://github.com/hashicorp/go-getter syntax. This will download the site repository, and will create a folder with the site name inside $HOME/.kni . It will also fetch all the binaries needed, and will store them inside $HOME/.kni/$SITE_NAME/requirements folder.
2. Prepare manifests for a site
NOTE: Before performing this step, you must copy your OpenShift pull secret into your build path (i.e. to ~/.kni/pull-secret.json).
Next step is to run a procedure to prepare all the manifests for deploying a site. This is achieved by applying kustomize on the site repository, combining that with the base manifests for the blueprint, and doing a merge with the manifests generated by the installer at runtime. This is achieved by the following command:
Code Block | ||
---|---|---|
| ||
./knictl prepare_manifests $SITE_NAME |
This will generate a set of manifests ready to apply, and will be stored on $HOME/.kni/$SITE_NAME/final_manifests folder. Along with manifests, a profile.env file has been created also in $HOME/.kni/$SITE_NAME folder. It includes environment vars that can be sourced before deploying the cluster. Current vars that can be exported are:
- OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE : used when a new image is wanted, instead of the default one
- TF_VAR_libvirt_master_memory, TF_VAR_libvirt_master_vcpu: Used in the libvirt case, to define the memory and CPU for the vms.
3. Deploy the cluster
Manual
Before starting the deployment, it is recommended to source the env vars from profile.env . You can achieve it with:
Code Block | ||
---|---|---|
| ||
source $HOME/.kni/$SITE_NAME/profile.env |
If you are deploying on AWS or libvirt, then you need to deploy the cluster. This can be achieved with:
Code Block | ||
---|---|---|
| ||
$HOME/.kni/$SITE_NAME/requirements/openshift-install create cluster --dir=$HOME/.kni/$SITE_NAME/final_manifests |
This will deploy a cluster based on the specified manifests. You can learn more about how to manage cluster deployment and how to interact with it on https://docs.openshift.com/container-platform/4.1/welcome/index.html
Specific instructions for baremetal are going to be provided later.
...
After the cluster has been generated, the extra workloads that have been specified in manifests (like kubevirt), need to be applied. This can be achieved by:
Code Block | ||
---|---|---|
| ||
./knictl apply_workloads $SITE_NAME |
This will execute kustomize on the site manifests and will apply the output to the cluster. After that, the site deployment can be considered as finished.
How to Deploy on Baremetal
...
Minimum hardware requirements
This is minimal configuration example where only 3 servers are used. Servers and their role are given in below table.
Server# | Role | Purpose |
1 | Installer node | This host is used for remotely installing and configuring the master and worker nodes. It also hosts the bootstrap node on KVM/QEMU using libvirt. Several components, such as HAProxy, a DNS server, DHCP servers for the provisioning and baremetal networks, CoreDNS, Matchbox, Terraform, IPMItool and TFTP boot, are configured on this server. Since the cluster CoreDNS runs from here, this node is also required after installation. |
2 | Master node | This is the control plane (master) node of the K8s cluster, which is based on OpenShift 4.x. |
3 | Worker node | This is a worker node, which hosts the applications. |
4 | Bootstrap node | The bootstrap node runs as a VM on the installer node; it exists only during the installation and is automatically deleted by the installer afterwards. |
Other installation requirements
Network requirements
Each server should have 3 Ethernet ports configured; their purpose is listed below. These are in addition to the IPMI port, which is required for PXE boot.
Interface | Purpose |
Management interface | Remote root login over this interface is used for the entire setup. It needs internet connectivity to download various files and can be shared with the external interface. It only needs to be present on the installer node. |
External interface | Interface on the installer node that has internet network connectivity. All external traffic from masters/workers is redirected to the external interface of the installer node. |
Baremetal interface | This interface is for baremetal network, also known as SDN network. This interface doesn’t need internet connectivity. |
Provisioning interface | This interface is for PXE boot. This interface doesn’t need internet connectivity. |
These can be independent NICs or VLANs.
Configure the required network interfaces as explained earlier. Make sure each server has the NIC for PXE boot configured properly, matching the interface you are using for this deployment. You can set this from the BIOS setup, under the NIC configuration menu.
Collect the IPs and MAC addresses of all the nodes; a sample is given below. This information is required to populate the config files:
Role | iDRAC IP/IPMI port IP | Provisioning network IP | Baremetal network IP | Management network IP | Provisioning network port & mac | Baremetal network port & mac | Management network port & mac |
Installer | xx.xx.xx.xx | xx.xx.xx.xx | xx.xx.xx.xx | xx.xx.xx.xx | em1 / 21:02:0E:DC:BC:27 | em2/ 21:02:0E:DC:BC:28 | em3/ 21:02:0E:DC:BC:29 |
master-0 | |||||||
worker-0 |
Enable IPMI over LAN for all master and worker nodes. This is required for remote PXE boot from the installer node. Different servers have different ways to enable it.
In the absence of this setting, the installer throws errors like the following:
Error: Error running command ' ipmitool -I lanplus -H x.x.x.x -U xxx -P xxxxx chassis bootdev pxe;
ipmitool -I lanplus -H x.x.x.x -U xxx -P xxxxx power cycle || ipmitool -I lanplus -H x.x.x.x -U xxx -P xxxxx power on;
': exit status 1. Output: Error: Unable to establish IPMI v2 / RMCP+ session
Error: Unable to establish IPMI v2 / RMCP+ session
Error: Unable to establish IPMI v2 / RMCP+ session
Depending on the server, the RMCP+ session may need to be enabled in the security settings of the management console.
After enabling this setting, you can run the command below to verify that it works as expected, supplying the IP address, username and password:
ipmitool -I lanplus -H x.x.x.x -U xxx -P xxxxx chassis status
(where x.x.x.x is the IPMI port IP of your master/worker node, followed by the root username and password for IPMI, e.g. iDRAC)
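If you have several nodes, a small loop such as the following (IPs and credentials are placeholders) makes it easy to confirm IPMI access to all of them at once:
Code Block
# Placeholder IPs and credentials; check IPMI reachability for every master/worker
for ip in 10.0.0.11 10.0.0.12; do
  echo "== $ip =="
  ipmitool -I lanplus -H "$ip" -U root -P 'calvin' chassis status || echo "IPMI check failed for $ip"
done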
Bare metal node requirements
Node Role | OS requirement |
Installer | CentOS 7.6 and above |
Bootstrap | RHCOS (Redhat CoreOS) |
Master | RHCOS (Redhat CoreOS) |
Worker | RHCOS/RHEL/CentOS/CentOS-rt |
Pre-Requisites for Deploying to Libvirt
Minimum hardware requirements
Only one server is needed; it acts as a virthost, and the master and worker VMs are created on it.
Server# | Role | Purpose |
1 | Installer node | This host is used for remotely installing and configuring the master and worker nodes. It also hosts the bootstrap node on KVM/QEMU using libvirt. Several components, such as HAProxy, a DNS server, DHCP servers for the provisioning and baremetal networks, CoreDNS, Matchbox, Terraform, IPMItool and TFTP boot, are configured on this server. Since the cluster CoreDNS runs from here, this node is also required after installation. |
Network requirements
Network connectivity is the same as in the baremetal case, but these can be dummy interfaces, since all the network traffic stays inside the same host:
Interface | Purpose |
Management interface | Remote root login over this interface is used for the entire setup. It needs internet connectivity to download various files and can be shared with the external interface. It only needs to be present on the installer node. |
External interface | Interface on the installer node that has internet network connectivity. All external traffic from masters/workers is redirected to the external interface of the installer node. |
Baremetal interface | This interface is for baremetal network, also known as SDN network. This interface doesn’t need internet connectivity. |
Provisioning interface | This interface is for PXE boot. This interface doesn’t need internet connectivity. |
Jump host requirements
Node Role | OS requirement |
Installer | CentOS 7.6 and above |
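Before starting a libvirt deployment, the virthost also needs a working KVM/libvirt stack. The package list below is an assumption based on a standard CentOS 7 virtualization setup, not something mandated by the blueprint:
Code Block
# Assumed baseline for a CentOS 7 virthost; adjust to your environment
sudo yum install -y qemu-kvm libvirt libvirt-client virt-install
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
virsh list --all    # should succeed, even if it lists no domains yet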
Installation high level overview
Virtual deployment guide
Create site for AWS and GCP
In order to deploy a blueprint, you need to create a repository with a site. The site configuration is based on kustomize and needs to use our blueprints as a base, referencing them properly. Sample sites for deploying on libvirt, AWS and baremetal can be seen at https://github.com/akraino-edge-stack/kni-blueprint-pae/tree/master/sites.
The site needs to have the following structure:
.
├── 00_install-config
│ ├── install-config.name.patch.yaml
│ ├── install-config.patch.yaml
│ ├── kustomization.yaml
│ └── site-config.yaml
├── 01_cluster-mods
│ ├── kustomization.yaml
│ ├── manifests
│ └── openshift
├── 02_cluster-addons
│ └── kustomization.yaml
└── 03_services
└── kustomization.yaml
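One way to bootstrap this layout is to copy one of the sample sites from the blueprint repository and rename it; a rough sketch (the target site name is illustrative, and <sample-site> stands for whichever sample is closest to your platform):
Code Block
# Start from a sample site and rename it; the site name below is illustrative
git clone https://github.com/akraino-edge-stack/kni-blueprint-pae
ls kni-blueprint-pae/sites/                                      # pick a sample close to your platform
cp -r kni-blueprint-pae/sites/<sample-site> my-site.example.com
find my-site.example.com -type f                                 # shows the 00..03 structure described above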
00_install-config
This folder will contain the basic settings for the site, including the base blueprint/profile, and the site name/domain. The following files are needed:
- kustomization.yaml: the key file. It contains a link to the blueprint/profile being used and references the patches used to customize the site:
Code Block
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/00_install-config
patches:
- install-config.patch.yaml
patchesJson6902:
- target:
    version: v1
    kind: InstallConfig
    name: cluster
  path: install-config.name.patch.yaml
transformers:
- site-config.yaml
The entry in bases needs to reference the blueprint being used (in this case blueprint-pae) and the profile install-config path (in this case production.aws/00_install-config). The other entries need to be written literally.
- install-config.patch.yaml: a patch to modify the domain from the base blueprint. Customize it with the domain you want to give to your site:
Code Block
apiVersion: v1
kind: InstallConfig
metadata:
  name: cluster
baseDomain: devcluster.openshift.com
- install-config.name.patch.yaml: a patch to modify the site name from the base blueprint. Customize it with the name you want to give to your site:
Code Block
- op: replace
  path: "/metadata/name"
  value: kni-site
- site-config.yaml: the site configuration file. You can add entries under config to override the behaviour of knictl (currently only releaseImageOverride is supported):
Code Block
apiVersion: kni.akraino.org/v1alpha1
kind: SiteConfig
metadata:
  name: notImportantHere
config:
  releaseImageOverride: registry.svc.ci.openshift.org/origin/release:4.1
NOTE: If you are deploying on baremetal, specific configuration needs to be set. This is covered in a dedicated section below.
01_cluster-mods
This directory contains all the customizations for the basic cluster deployment. You can create patches to modify the number of masters/workers, network settings, and anything else that needs to be changed at cluster deployment time. It needs a basic kustomization.yaml file that references the file at the same level in the blueprint, and you can create additional patches following kustomize syntax:
Code Block
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/01_cluster-mods
02_cluster-addons and 03_services
These follow the same structure as 01_cluster-mods, but are used for adding extra workloads after cluster deployment. They also need a kustomization.yaml file that references the file at the same level in the blueprint, and they can include additional resources and patches; an example is sketched below.
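For example, a hypothetical 02_cluster-addons/kustomization.yaml that pulls in the blueprint defaults and adds one extra manifest of your own could look like the sketch below; the base path and the resource file name are assumptions, not taken from a real blueprint:
Code Block
# Hypothetical example, run from your site directory; paths and file names are illustrative
cat > 02_cluster-addons/kustomization.yaml <<'EOF'
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/02_cluster-addons
resources:
- my-extra-workload.yaml
EOF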
How to deploy on AWS and GCP
The whole deployment workflow is based on the knictl CLI tool provided by the KNI installer repository.
CLI tool
The current KNI blueprints use the openshift-install tool from the OKD Kubernetes distro to stand up a minimal Kubernetes cluster. All other Day 1 and Day 2 operations are then driven purely through manipulation of declarative Kubernetes manifests. To use this in the context of Akraino KNI blueprints, the project has created a helper CLI tool, knictl, that needs to be installed first on the installer node.
If necessary, install Go (including setting the GOPATH variable) using the following steps; you can use a later version instead of the one given below.
wget https://golang.org/dl/go1.13.4.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GOPATH=$HOME/go    # any writable directory works as GOPATH
export PATH=$PATH:$GOPATH/bin
Next, install the following dependencies:
sudo yum install -y make gcc libvirt-devel
Then install the knictl:
mkdir -p $GOPATH/src/gerrit.akraino.org/kni
cd $GOPATH/src/gerrit.akraino.org/kni
git clone https://gerrit.akraino.org/r/kni/installer
cd installer
make build
mkdir -p $GOPATH/bin/
cp knictl $GOPATH/bin/
Secrets
Most secrets (TLS certificates, Kubernetes API keys, etc.) will be auto-generated for you, but you need to provide at least two secrets yourself:
- a public SSH key
- a pull secret
The public SSH key is automatically added to every machine provisioned into the cluster and allows remote access to that machine. If you don't have an existing key or don't want to use one, you can create a new key pair using:
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa
The pull secret is used to download the container images used during cluster deployment. Unfortunately, the OKD Kubernetes distro used by the KNI blueprints does not (yet) provide pre-built container images for all of the deployed components. Instead of going through the hassle of building those from source, we use the ones made available by openshift.com. Therefore, you need to go to https://cloud.redhat.com/openshift/install/metal/user-provisioned, log in (creating a free account, if necessary), and hit "Download Pull Secret".
Create a $HOME/.kni folder and copy the following files:
- id_rsa.pub → needs to contain the public key that you want to use to access your nodes
- pull-secret.json → needs to contain the pull secret previously copied
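For example, assuming your public key is at ~/.ssh/id_rsa.pub and you saved the downloaded pull secret as ~/Downloads/pull-secret (both paths are illustrative), the folder can be prepared like this:
Code Block
# Illustrative paths; adjust to wherever your key and pull secret actually live
mkdir -p $HOME/.kni
cp ~/.ssh/id_rsa.pub $HOME/.kni/id_rsa.pub
cp ~/Downloads/pull-secret $HOME/.kni/pull-secret.json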
1. Fetch requirements for a site.
You need to have a site repository with the structure described above. Then, first thing is to fetch the requirements needed for the blueprint that the site references. This is achieved by:
Code Block
./knictl fetch_requirements github.com/site-repo.git
The first argument references a site repository, following https://github.com/hashicorp/go-getter syntax. This will download the site repository and create a folder with the site name inside $HOME/.kni. It will also fetch all the binaries needed and store them inside the $HOME/.kni/$SITE_NAME/requirements folder.
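As an illustration, if the name patch shown earlier sets the site name to kni-site (an assumption; use whatever name your site actually has), you would end up with something like:
Code Block
# Assuming the site is named kni-site
export SITE_NAME=kni-site
ls $HOME/.kni/$SITE_NAME/requirements    # e.g. openshift-install, oc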
2. Prepare manifests for a site
NOTE: Before performing this step, you must copy your OpenShift pull secret into your build path (i.e. to ~/.kni/pull-secret.json).
The next step is to prepare all the manifests for deploying a site. This is done by applying kustomize on the site repository, combining it with the base manifests of the blueprint, and merging the result with the manifests generated by the installer at runtime. Run the following command:
Code Block
./knictl prepare_manifests $SITE_NAME
This will generate a set of manifests ready to apply, stored in the $HOME/.kni/$SITE_NAME/final_manifests folder. Along with the manifests, a profile.env file is created in the $HOME/.kni/$SITE_NAME folder. It includes environment vars that can be sourced before deploying the cluster. The vars that can currently be exported are:
- OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE: used when you want a different release image instead of the default one
- TF_VAR_libvirt_master_memory, TF_VAR_libvirt_master_vcpu: used in the libvirt case to define the memory and CPU of the VMs (see the example below)
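For instance, a minimal sketch of overriding the libvirt VM sizing before deployment (the values are purely illustrative):
Code Block
# Illustrative values; size the master VMs to your host's capacity
export TF_VAR_libvirt_master_memory=8192
export TF_VAR_libvirt_master_vcpu=4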
3. Deploy the cluster
Manual
Before starting the deployment, it is recommended to source the env vars from profile.env. You can do this with:
Code Block
source $HOME/.kni/$SITE_NAME/profile.env
If you are deploying on AWS or libvirt, you can then deploy the cluster with:
Code Block
$HOME/.kni/$SITE_NAME/requirements/openshift-install create cluster --dir=$HOME/.kni/$SITE_NAME/final_manifests
This will deploy a cluster based on the specified manifests. You can learn more about how to manage cluster deployment and how to interact with it on https://docs.openshift.com/container-platform/4.1/welcome/index.html
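The installer can take a while to finish; a sketch of how you might follow progress in more detail and block until the installation completes, assuming the same install directory (wait-for is a standard openshift-install subcommand):
Code Block
# Wait for the installation to complete, with verbose logging
$HOME/.kni/$SITE_NAME/requirements/openshift-install wait-for install-complete \
  --dir=$HOME/.kni/$SITE_NAME/final_manifests --log-level=debug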
Specific instructions for baremetal are going to be provided later.
4. Apply workloads
Anchor | ||||
---|---|---|---|---|
|
After the cluster has been deployed, the extra workloads specified in the manifests (like kubevirt) need to be applied. This is done with:
Code Block
./knictl apply_workloads $SITE_NAME
This will execute kustomize on the site manifests and apply the output to the cluster. After that, the site deployment can be considered finished.
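A quick way to sanity-check that the extra workloads landed, assuming the kubeconfig location described later in this document (the namespaces you should expect depend on the workloads your site actually adds):
Code Block
# List pods across all namespaces and look for the newly applied workloads
export KUBECONFIG=$HOME/.kni/$SITE_NAME/final_manifests/auth/kubeconfig
$HOME/.kni/$SITE_NAME/requirements/oc get pods --all-namespaces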
Bare metal deployment guide
Create site for Baremetal
...
- Download the DVD ISO from http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1908.iso and place it in /tmp
- Mount it:
Code Block
mount -o loop /tmp/CentOS-7-x86_64-DVD-1908.iso /mnt/
mkdir -p $HOME/.kni/$SITE_NAME/baremetal_automation/matchbox-data/var/lib/matchbox/assets/centos7
cp -ar /mnt/. $HOME/.kni/$SITE_NAME/baremetal_automation/matchbox-data/var/lib/matchbox/assets/centos7/
umount /mnt
- Prepare a $HOME/settings_upi.env file with the following parameters:
Code Block
export CLUSTER_NAME="$CLUSTER_NAME"
export BASE_DOMAIN="$CLUSTER_DOMAIN"
export PULL_SECRET='your_pull_secret'
export KUBECONFIG_PATH=$HOME/.kni/$SITE_NAME/baremetal_automation/ocp/auth/kubeconfig
export OS_INSTALL_ENDPOINT=http://<Installer node provisioning IP>:8080/assets/centos7
export ROOT_PASSWORD="pick_something"
- Navigate to the kickstart script generation folder and execute the script, copying the generated kickstart file:
Code Block
cd $HOME/.kni/$SITE_NAME/baremetal_automation/kickstart/
bash add_kickstart_for_centos.sh
cp centos-worker-kickstart.cfg $HOME/.kni/$SITE_NAME/baremetal_automation/matchbox-data/var/lib/matchbox/assets/
- After that, you are ready to deploy your CentOS workers with the usual procedure.
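Before PXE booting the workers, you can sanity-check that matchbox is serving the CentOS assets you copied; this uses the same provisioning endpoint as OS_INSTALL_ENDPOINT above (replace the placeholder with your installer node's provisioning IP):
Code Block
# Expect an HTTP 200 (and not a 404) for the copied CentOS assets
curl -sI http://<Installer node provisioning IP>:8080/assets/centos7/ | head -n 1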
After masters and workers are up, you can apply the workloads using the general procedure with:
Code Block
./knictl apply_workloads $SITE_NAME --kubeconfig $HOME/.kni/$SITE_NAME/baremetal_automation/ocp/auth/kubeconfig
Accessing the Cluster
After the deployment finishes, a kubeconfig file will be placed inside the auth directory:
export KUBECONFIG=$HOME/.kni/$SITE_NAME/final_manifests/auth/kubeconfig
NOTE: When using the automated baremetal deployment, the kubeconfig will be found here instead:
export KUBECONFIG=$HOME/.kni/$SITE_NAME/baremetal_automation/ocp/auth/kubeconfig
The cluster can then be managed with the kubectl or oc (a drop-in replacement with extra functionality) CLI tools.
To verify a correct setup, you can check again the nodes, and see if masters and workers are ready:
Code Block
$HOME/.kni/$SITE_NAME/requirements/oc get nodes
You can also check whether the cluster is available:
Code Block
$HOME/.kni/$SITE_NAME/requirements/oc get clusterversion
You can also verify that the console is working; the console URL is the following:
Code Block
https://console-openshift-console.apps.$CLUSTER_NAME.$CLUSTER_DOMAIN
You can log in to the console with the kubeadmin user and the password that is shown at the end of the install.
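If you no longer have the installer output at hand, openshift-install also writes the kubeadmin password next to the kubeconfig; assuming the same auth directory as above (for the automated baremetal flow, substitute the baremetal_automation/ocp path):
Code Block
# kubeadmin password written by openshift-install into the auth directory
cat $HOME/.kni/$SITE_NAME/final_manifests/auth/kubeadmin-password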
libvirt deployment guide
High level steps
Create site for virtual baremetal
...
Destroying the Cluster
Manual
When needed, the site can be destroyed with the openshift-install command, using the following syntax:
...
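The exact invocation is elided above; as a sketch, with the install directory layout used earlier it typically looks like the following (treat this as an assumption and check the help output of your openshift-install version):
Code Block
# Assumed syntax; destroys the cluster resources created from final_manifests
$HOME/.kni/$SITE_NAME/requirements/openshift-install destroy cluster --dir=$HOME/.kni/$SITE_NAME/final_manifests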