...
Before running the setup_deploy.yml playbook, modify the hosts file in the deploy/playbook directory with the host names and IP addresses of the edge nodes in your cluster. Also update the entry for the master node's host if it is not the same as the deploy node.
all:
  hosts:
  children:
    deploy:
      hosts:
        localhost:
    master:
      hosts:
        localhost:                # IP address or hostname of master node
    edge_nodes:
      hosts:
        ...
        jet03:                    # Name of first edge node
          ip: 192.168.2.27        # IP address of first edge node
          lora_id: 1
        ...
        jet04:                    # Name of second edge node
          ip: 192.168.2.29        # IP address of second edge node
          lora_id: 4
      vars:
        ansible_user: edge
        ansible_ssh_private_key_file: ~/.ssh/edge
In addition, if the master node is not the same as the deploy node, remove the line connection: local wherever it follows hosts: master in the playbooks in deploy/playbook.
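When the deploy node doubles as the master node, the affected plays begin roughly as sketched below (the rest of each play is elided); deleting the connection: local line makes Ansible reach the master node over SSH instead of running the play locally:
- hosts: master
  connection: local    # remove this line when the master node is a separate host
  ...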
In the file master.yml in the deploy/playbook/group_vars/all directory, set the master_ip value to the IP address of the master node. Note that this is required even if the master node is the same as the deploy node.
master_ip: 192.168.2.16
Set Up the Deploy Node
The account which runs the deploy playbooks will need to be able to use sudo to execute some commands with super-user permissions. The following command can be used (by root or another user which already has super-user permissions) to enable the use of sudo for a user:
...
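The exact command is left out above; on Ubuntu-based systems, one common way to grant these rights is to add the account to the sudo group (the user name edge below is only an example):
# As root, or with sudo from a user that already has super-user permissions;
# replace "edge" with the account that will run the deploy playbooks.
sudo usermod -aG sudo edge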
Kubernetes' initialization tool kubeadm requires that swap be disabled on nodes in the cluster. Turn off swap on the master node by editing the /etc/fstab file (using sudo) and commenting out the line with "swap" as the third parameter:
# /swap.img none swap sw 0 0
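Editing /etc/fstab only keeps swap from coming back at the next boot. To turn swap off in the running system as well, so that a reboot is not needed, the following standard commands can be used:
sudo swapoff -a     # disable all active swap devices immediately
swapon --show       # prints nothing when swap is fully disabled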
In addition, if you have proxy settings, kubeadm will warn that you should disable the proxy for cluster IP addresses. The default cluster IP ranges 10.96.0.0/12 and 10.244.0.0/16 should be added to the no_proxy and NO_PROXY variables in /etc/environment if necessary.
no_proxy=localhost,127.0.0.0/8,192.168.2.0/24,10.96.0.0/12,10.244.0.0/16,*.local,*.fujitsu.com
NO_PROXY=localhost,127.0.0.0/8,192.168.2.0/24,10.96.0.0/12,10.244.0.0/16,*.local,*.fujitsu.com
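Because /etc/environment is only read at login, the new values take effect in the next session; to pick them up in the current shell (for example, before running kubeadm by hand) they can be exported manually:
# Apply the proxy exclusions to the current shell session only
export no_proxy=localhost,127.0.0.0/8,192.168.2.0/24,10.96.0.0/12,10.244.0.0/16,*.local,*.fujitsu.com
export NO_PROXY=$no_proxy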
Creating the Docker Registry
...
The playbooks for use on the build node are stored in the cicd/playbook directory of the source repository. These playbooks refer to other files in the source code, so the entire directory tree should be copied onto the deploy node. The easiest way to do this is by cloning the git repository directly as shown below:
git clone repository-url
Note, using the --depth=1 option can save some disk space if you don't need to modify the source code.
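For example, a shallow clone of the repository (with repository-url standing for the actual URL, as above) would look like:
git clone --depth=1 repository-url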
...
Before running the setup_build.yml playbook, update the entry for the master node's host in the hosts file if you use a different hostname for the master node.
all:
  hosts:
    localhost:
    arm-build:
      ansible_host: erc01
      ansible_user: edge
      ansible_ssh_private_key_file: ~/.ssh/edge
      ansible_become_password: password
  children:
    master:
      hosts:
        sdt-master:               # hostname of master node
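As an optional check, not part of the documented procedure, Ansible's ping module can be used to confirm that the deploy node can reach every host in this inventory with the configured user and key:
ansible -i ./hosts all -m ping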
Set Up the Build Node
If the build node is not on the same host as the deploy node, the user that runs the deploy playbooks must have an account on the build host under the same name, and that account must have sudo privileges like the account on the deploy node (see above).
The following command will prepare the build node for use:
ansible-playbook -i ./hosts setup_build.yml --ask-become-pass
The playbook will perform the following initialization tasks:
- Make sure there is an entry for the master node in /etc/hosts
- Install required software packages, including Docker, Go, and Robotwork
- Make sure the user can run Docker commands (a quick check is shown after this list)
- Configure Docker, including adding the certificates to secure access to the private registry
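As a quick sanity check of the Docker portion of this setup, the user can try running a trivial container without sudo; the hello-world image is only an example and needs network access to be pulled:
# Should succeed without sudo once the setup playbook has finished
docker run --rm hello-world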
Building the Custom Services
At this time, images for the four custom services, sync-app, image-app, device-lora, and device-camera, need to be built from source and pushed to the private Docker registry. (In the future these images should be available on Docker Hub or another public registry.) Use the following playbooks from the cicd/playbook directory on the deploy node to do so.
This command will install components that support cross-compiling the microservices for ARM devices:
ansible-playbook -i ./hosts setup_build.yml
This command will build local docker images of the custom microservices:
...
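After the build completes, the presence of the local images can be checked before they are pushed; the exact image names and tags depend on the build configuration, so treat the filter below as an example only:
# List the locally built service images (names and tags may differ)
docker images | grep -E 'sync-app|image-app|device-lora|device-camera'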
With the base software installed and configured on the master and edge nodes, the following command executed on the deploy node will start the cluster:
ansible-playbook -i ./hosts init_cluster.yml --ask-become-pass
...
admin@master:~$ kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   14m   v1.22.7
Adding Edge Nodes to the Cluster
Once the cluster is initialized, the following command executed on the deploy node will add all the configured edge nodes to the cluster:
...
admin@master:~$ kubectl get node
NAME     STATUS   ROLES                  AGE     VERSION
jet03    Ready    <none>                 2m50s   v1.22.7
jet04    Ready    <none>                 2m45s   v1.22.7
master   Ready    control-plane,master   17m     v1.22.7
Starting EdgeX
After adding the edge nodes to the cluster, the following command will start the EdgeX services on the edge nodes:
...
admin@master:~$ kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
edgex-jet03-57859dcdff-k8j6g   20/20   Running   16         1m31s
edgex-jet04-5678d8fbbf-q988v   20/20   Running   16         1m26s
Note, during initialization of the services you may see some containers restart one or more times. This is part of the timeout and retry behavior of the services waiting for other services to complete initialization and does not indicate a problem.
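If you want to follow the services while they come up, an optional watch can be left running on the master node (press Ctrl+C to stop it):
kubectl get pod -w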
Sensor Nodes
In the test installation, sensor nodes have been constructed using Raspberry Pi devices running a Python script as a service to read temperature and humidity from a DHT-1 sensor and forward those readings through an LRA-1 USB dongle to a pre-configured destination.
The Python script is located in sensor/dht2lra.py, and an example service definition file for use with systemd is dht2lra.service in the same directory.
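If you want the script managed by systemd, a typical way to install the provided unit file on the Raspberry Pi is sketched below; the destination path follows standard systemd conventions and is an assumption here, and the unit file may need its paths adjusted to wherever dht2lra.py is installed:
sudo cp dht2lra.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now dht2lra.service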
The destination edge node is configured by connecting to the LRA-1 USB dongle, for example using the tio program (tio needs to be installed using sudo apt-get install tio):
pi@raspi02:~ $ sudo tio /dev/ttyUSB0
[tio 09:31:52] tio v1.32
[tio 09:31:52] Press ctrl-t q to quit
[tio 09:31:52] Connected
i2-ele LRA1
Ver 1.07.b+
OK
>
At the ">" prompt, enter dst=N, where N is the number in the lora_id variable for the edge node in deploy/playbook/hosts. Then enter the ssave command and disconnect from the dongle (using Ctrl+t q in the case of tio). The destination ID will be stored in the dongle's persistent memory (power cycling will not clear the value).
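For example, to direct a sensor node at the edge node with lora_id: 1 (jet03 in the hosts file above), the exchange at the dongle prompt would look roughly like this; the OK responses are assumed based on the banner shown earlier:
> dst=1
OK
> ssave
OK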
Running the script, either directly with python ./dht2lra.py or using the service, will periodically send readings to the edge node.
These readings should appear in the core-data database and can be monitored using the edgex-events-nodename channel. For example, the following command run on the master node should show the readings arriving at an edge node named "jet03":
mosquitto_sub -t edgex-events-jet03 -u edge -P edgemqtt
Camera Nodes
Configuration of the Camera Nodes
Verifying the Setup
...
- CPS: Cyber-Physical System
- MQTT: A lightweight, publish-subscribe network protocol designed for connecting remote devices, especially when there are bandwidth constraints. (MQTT is not an acronym.)