...
The table below shows the recommended minimum specifications for the hardware in the testing installation. It is possible that lower spec hardware could be used for many of the nodes. The sensor node hardware in particular is specific to the testing installation and could be swapped out with any number of other platforms as long as LoRa connectivity was possible using the hardware.
...
| | Master/Deploy/Build/CICD | Edge | Camera |
|---|---|---|---|
| Platform | VM running on a commercial-grade PC | NVIDIA Jetson Nano | H.View HV-500E6A |
| CPU | x86-64, Intel i5 or similar | ARM 64-bit Cortex-A57 | N/A |
| Cores | 2 | 4 | N/A |
| RAM | 4 GB | 2 GB | N/A |
| Storage | 128 GB hard disk space | 32 GB SD card | N/A (SD card optional) |
| Network | 1x Ethernet | 1x Ethernet | 1x Ethernet |
| Other | N/A | LoRa dongle (LRA-1)* (*used in the R6 configuration) | IP camera supporting ONVIF (Profile S, Profile T) |
At a minimum, one node is required to host the master, deploy, build, and CICD roles together, plus at least two edge nodes and two camera nodes. The testing installation contains eight nodes: one deploy node, one master node, one build node, one CICD node, two edge nodes, and two camera nodes.
Network Requirements
All nodes are expected to have IP connectivity to one another during installation and normal operation, with the exception of the camera nodes. In the installation described here, all the nodes are connected to a private wired network operating at 100Mbps or better. However, there are no strict bandwidth or latency requirements.
...
Note that the installation process will install several more software packages through Ansible playbooks. These are listed below for reference. Packages included by default in an install of Ubuntu 20.04 server are not included. The version numbers are those that are available/installed at the time of writing by the Ansible playbooks on Ubuntu 20.04.
- CICD node
- make 4.2.1, build-essential 12.8, python3-pip 20.0.2
- Ansible collections: community.docker, kubernetes.core, community.crypto
- Docker (docker.io) 20.10.12
- Robot Framework 5.0
- default-jre 2:1.11-72
- Jenkins 2.361.2
- Build node
- make 4.2.1, build-essential 12.8, python3-pip 20.0.2, default-jre 2:1.11-72
- Robot Framework 6.0
- Docker (docker.io) 20.10.12
- Go 1.16.10
- Deploy node
- make 4.2.1, build-essential 12.8, python3-pip 20.0.2
- Ansible 2.12.9
- Ansible collections: community.docker, kubernetes.core, community.crypto
- Master node
- Docker (docker.io) 20.10.12
- python3-pip 20.0.2
- Python packages: cryptography and kubernetes
- mosquitto 2.0.15, mosquitto-clients 2.0.15
- Kubernetes (kubectl, kubelet, kubeadm) 1.22.9
- Flannel 0.17.0, flannel-cni-plugin 1.0.1 (Note: These are containers installed via Kubernetes through a config file)
- Edge node
- Docker (docker.io) 20.10.12
- Kubernetes (kubelet, kubeadm) 1.22.9 (kubectl may be installed for debugging purposes)
...
Before running the setup_deploy.yml playbook, two hosts files need to be modified.

Modify the hosts file in the deploy/playbook directory with the host names and IP addresses of the master and edge nodes in your cluster. Also update the entry for the master node's host if it is not the same as the deploy node.
all:
  hosts:
  children:
    deploy:
      hosts:
        localhost:
    master:
      hosts:
        sdt-master: # hostname of master node
    edge_nodes:
      hosts:
        jet03: # hostname of first edge node
          ip: 192.168.2.27 # IP address of first edge node
          lora_id: 1
        jet04: # hostname of second edge node
          ip: 192.168.2.29 # IP address of second edge node
          lora_id: 4
      vars:
        ansible_user: edge
        ansible_ssh_private_key_file: ~/.ssh/edge
In addition, if the master node is not the same as the deploy node, remove the line connection: local wherever it follows hosts: master in the playbooks in the deploy/playbook directory.
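To illustrate the change (a sketch only; the plays' tasks and other keys are omitted here), the edit removes the connection override from any play targeting the master group:

```yaml
# Before: tasks for the master run locally, which only works
# when the master node is the same host as the deploy node
- hosts: master
  connection: local

# After: the master is a separate host, so the override is removed
# and Ansible connects over SSH as usual
- hosts: master
```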
In the file master.yml in the deploy/playbook/group_vars/all directory, set the master_ip value to the IP address of the master node. Note that this is required even if the master node is the same as the deploy node.
master_ip: 192.168.2.16
...
Modify the host names and IP addresses of the master/build/cicd/deploy nodes in the cicd/playbook/hosts file.
all:
  hosts:
    localhost:
    arm-build:
      ansible_host: erc01
      ansible_user: edge
      ansible_ssh_private_key_file: ~/.ssh/edge
      ansible_become_password: password
  children:
    master:
      hosts:
        sdt-master: # hostname of master node
    build:
      hosts:
        sdt-build: # hostname of build node
          ip: 192.168.10.203 # IP address of build node
      vars:
        ansible_user: sdt-admin
        ansible_ssh_private_key_file: ~/.ssh/lfedge_build
    cicd:
      hosts:
        sdt-cicd: # hostname of cicd node
          ip: 192.168.10.200 # IP address of cicd node
      vars:
        ansible_user: sdt-admin
        ansible_ssh_private_key_file: ~/.ssh/lfedge_cicd
    deploy:
      hosts:
        sdt-deploy: # hostname of deploy node
          ip: 192.168.10.231 # IP address of deploy node
In the file master.yml in the deploy/playbook/group_vars/all directory, set the master_ip value to the IP address of the master node. Note that this is required even if the master node is the same as the deploy node.
master_ip: 192.168.2.16
Set Up the Deploy Node
The account that runs the deploy playbooks will need to be able to use sudo to execute some commands with super-user permissions. The following command can be used (by root or another user who already has super-user permissions) to enable the use of sudo for a user:
...
Populating the registry will leave extra copies of the downloaded images on the master node. You can remove these using the following command (the images will remain in the private registry):
ansible-playbook -i ./hosts clean_local_images.yml
Preparing Edge Nodes
Add an administrative account to all the edge nodes. This account will be used by the deploy node when it needs to run commands directly on the edge nodes (e.g. for installing base software, or for joining or leaving the cluster). The following commands, run on each edge node, will add a user account named "edge" and add it to the group of users with sudo privileges.
sudo adduser edge
sudo usermod -aG sudo edge
Note, if you use an administrative account with a different name, change the variable ansible_user in the edge_nodes group in the deploy/playbook/hosts file to match the user name you are using.
The deploy node needs to log in via SSH to the edge nodes using a cryptographic key (rather than a password), so that a password does not need to be provided for every command. Run the following command on the deploy node to create a key called "edge" for the administrative user.
ssh-keygen -t ed25519 -f ~/.ssh/edge
The parameter ~/.ssh/edge is the name and location of the private key file that will be generated. If you use a different name or location, change the ansible_ssh_private_key_file variable for the edge_nodes group in deploy/playbook/hosts to match.
Once the key files have been created, the following command can be run from the deploy node to copy the key to each edge node so a password will not be required for each login. (The administrative user's password will be requested when running this command.)
ssh-copy-id -i ~/.ssh/edge.pub edge@nodename
After the administrative account has been created, the following command will perform initial setup on all edge nodes configured in the deploy/playbook/hosts file:
ansible-playbook -i ./hosts edge_install.yml
The playbook will perform the following initialization tasks:
- Make sure there is an entry for the master node in /etc/hosts
- Install required software packages including Docker and kubelet
- Make sure the user can run Docker commands
- Configure Docker, including adding the certificates to secure access to the private registry
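For reference, after the playbook runs, each edge node's /etc/hosts should contain a line mapping the master node's IP address to its hostname, along the lines of (the values shown are the example hostname and address used elsewhere in this guide):

```
192.168.2.16    sdt-master
```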
Edge Node Kubernetes Requirements
Like the master node, swap should be disabled and the cluster IP address ranges should be excluded from proxy processing if necessary.
Note that the Jetson Nano hardware platform has a service called nvzramconfig that acts as swap and needs to be disabled. Use the following command to disable it:
sudo systemctl disable nvzramconfig.service
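After disabling the service and rebooting, you can confirm that no swap remains active. This quick check is a suggestion, not part of the official playbooks:

```shell
# Lists active swap devices; prints nothing when swap is fully disabled
swapon --show

# The Swap line should report 0B total when swap is off
free -h | grep -i '^Swap'
```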
Preparing the Build Node
In the test installation, the build node is a VM running on an x86 PC with Ubuntu Linux 20.04 installed. In addition, the Ansible tool must be installed. The version of Ansible provided in the Ubuntu software repository is slightly older and needs to be upgraded; refer to the Ansible Installation Guide to install the latest version. Before running any playbooks, a few things need to be configured, as described in the sections below.
The playbooks for use on the build node are stored in the cicd/playbook directory of the source repository. These playbooks refer to other files in the source code, so the entire directory tree should be copied onto the deploy node. The easiest way to do this is by cloning the git repository directly as shown below:
git clone repository-url
Note, using the --depth=1 option can save some disk space if you don't need to modify the source code. In addition, the directory into which the git repository is cloned must be the same as the git repository directory on the deploy node.
The git command will create a directory, named after the repository, in the directory where it is run. Inside the new directory will be the cicd/playbook directory. Unless noted otherwise, the commands below should be run in that directory.
Node Configuration
Before running the setup_build.yml playbook, update the entry for the master node's host if you use a different hostname for the master node.
all:
  hosts:
    localhost:
    arm-build:
      ansible_host: erc01
      ansible_user: edge
      ansible_ssh_private_key_file: ~/.ssh/edge
      ansible_become_password: password
  children:
    master:
      hosts:
        sdt-master: # hostname of master node
Set Up the Build Node
If the build node is not on the same host as the deploy node, the user that runs the deploy playbooks must have an account on the build host under the same name, and that account must have sudo privileges like the account on the deploy node (see above).
The following command will prepare the build node for use:
ansible-playbook -i ./hosts setup_build.yml
The playbook will perform the following initialization tasks:
...
Preparing the Build Node
The deploy node needs to log in via SSH to the build node and the cicd node using a cryptographic key (rather than a password), so that a password does not need to be provided for every command. Run the following command on the deploy node to create a key called "lfedge_build" for the administrative user to log in to the build node.
ssh-keygen -t rsa -b 2048 -f ~/.ssh/lfedge_build
The parameter ~/.ssh/lfedge_build is the name and location of the private key file that will be generated. If you use a different name or location, change the ansible_ssh_private_key_file variable for the build group in cicd/playbook/hosts to match.
Once the key files have been created, the following command can be run from the deploy node to copy the key to the build node so a password will not be required for each login. (The administrative user's password will be requested when running this command.)
ssh-copy-id -i ~/.ssh/lfedge_build.pub sdt-admin@nodename
Note, if you use an administrative account with a different name, change the variable ansible_user in the build group in the cicd/playbook/hosts file to match the user name you are using.
After the private key has been configured, the following command will prepare the build node for use:
ansible-playbook -i ./hosts setup_build.yml --ask-become-pass
The playbook will perform the following initialization tasks:
- Make sure there is an entry for the master node and the deploy node in /etc/hosts
- Install required software packages, including Docker, Go, and Robot Framework
- Make sure the user can run Docker commands
- Configure Docker, including adding the certificates to secure access to the private registry
Preparing the CICD Node
The deploy node needs to log in via SSH to the cicd node using a cryptographic key (rather than a password), so that a password does not need to be provided for every command. Run the following command on the deploy node to create a key called "lfedge_cicd" for the administrative user to log in to the cicd node.
ssh-keygen -t rsa -b 2048 -f ~/.ssh/lfedge_cicd
The parameter ~/.ssh/lfedge_cicd is the name and location of the private key file that will be generated. If you use a different name or location, change the ansible_ssh_private_key_file variable for the cicd group in cicd/playbook/hosts to match.
Once the key files have been created, the following command can be run from the deploy node to copy the key to the cicd node so a password will not be required for each login. (The administrative user's password will be requested when running this command.)
ssh-copy-id -i ~/.ssh/lfedge_cicd.pub sdt-admin@nodename
Note, if you use an administrative account with a different name, change the variable ansible_user
in the cicd
group in the cicd/playbook/hosts
file to match the user name you are using.
Building the Custom Services
...
- CPS: Cyber-Physical System
- MQTT: A lightweight, publish-subscribe network protocol designed for connecting remote devices, especially when there are bandwidth constraints. (MQTT is not an acronym.)