...
The list below shows the required software for each node type prior to beginning the installation process.
- Build node
  - Ubuntu 20.04
  - Ansible 2.12.5
- Deploy node
  - Ubuntu 20.04
  - Ansible 2.12.5
- Master node
  - Ubuntu 20.04
- Edge node
  - Ubuntu 20.04
- Camera node
  - N/A (pre-installed)
...
Note: if you use an administrative account with a different name, change the ansible_user variable in the edge_nodes group in the deploy/playbook/hosts file to match the user name you are using.
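For reference, the relevant part of an Ansible inventory typically looks like the sketch below (jet03 and jet04 are the edge node hostnames used later in this guide; the exact layout of deploy/playbook/hosts in your checkout may differ).
[edge_nodes]
# ansible_user must match the edge node administrative account
jet03 ansible_user=edge
jet04 ansible_user=edge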
In the secret file in the deploy/playbook/group_vars/edge_nodes directory, set the edge node admin user's sudo password.
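The variable name used for the password is defined by the playbooks; as an illustration, a typical Ansible group_vars entry for a sudo (become) password looks like the line below (ansible_become_pass is an assumption here, and the secret file may be kept encrypted with ansible-vault).
ansible_become_pass: "<edge node sudo password>"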
The deploy node needs to log in via SSH to the edge nodes using a cryptographic key (rather than a password), so that a password does not need to be provided for every command. Run the following command on the deploy node to create a key called "edge" for the administrative user.
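A typical key-generation and distribution sequence is sketched below for reference (a sketch only, assuming the key is stored as ~/.ssh/edge, matching the EDGE_KEY path used in the test configuration later, and that the edge node admin account is named edge; the blueprint's exact command follows).
ssh-keygen -t ed25519 -f ~/.ssh/edge
ssh-copy-id -i ~/.ssh/edge.pub edge@jet03
ssh-copy-id -i ~/.ssh/edge.pub edge@jet04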
...
Note: if you use an administrative account with a different name, change the ansible_user variable in the cicd group in the cicd/playbook/hosts file to match the user name you are using.
After the private key is configured, the following command will prepare the cicd node for use:
ansible-playbook -i ./hosts setup_cicd.yml --ask-become-pass
The playbook will perform the following initialization tasks:
- Make sure there is an entry for the build node in /etc/hosts (an example entry is shown after this list)
- Install required software packages including Jenkins
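For example, such an /etc/hosts entry could look like the following (the IP address is illustrative; sdt-build is the build node hostname used elsewhere in this guide).
192.168.1.10    sdt-build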
Building the Custom Services
At this time, images for the four custom services, sync-app, image-app, device-lora, and device-camera, need to be built from source and pushed to the private Docker registry. (In the future these images should be available on Docker Hub or another public registry.) Use the following playbooks from the cicd/playbook directory on the build node to do so.
Note: the image-app service is built on the NVIDIA L4T CUDA base image, which only supports the arm architecture, so the image-app image is likewise arm-only. The other custom services support both the arm64 and amd64 architectures.
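The playbooks take care of the builds, but for background, multi-architecture container images of this kind are commonly produced with Docker buildx along the following lines (illustrative only; the image tags and registry name are placeholders, not the blueprint's actual values).
docker buildx build --platform linux/amd64,linux/arm64 -t registry.example.com/sync-app:latest --push .
docker buildx build --platform linux/arm64 -t registry.example.com/image-app:latest --push .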
The following command, executed on the deploy node, builds local Docker images of the custom microservices:
...
The build can take some time, depending on connection speed and the load on the deploy host, especially for the cross-compiled images.
The following command, executed on the deploy node, pushes the images to the private registry:
...
This command only starts the master node in the Kubernetes cluster. The state of the master node can be confirmed using the kubectl get node command on the master node.
sdt-admin@sdt-master:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 3d5h v1.22.9
...
The kubectl get node command on the master node can be used to confirm the state of the edge nodes.
sdt-admin@sdt-master:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
jet03 Ready <none> 3d5h v1.22.9
jet04 Ready <none> 3d5h v1.22.9
master Ready control-plane,master 3d5h v1.22.9
...
Before starting the EdgeX services, you need to complete the following configuration:
- Modify the edgex.yml file in the deploy/playbook/group_vars/all directory to decide which services will be started: set each service to be started to on and each service not to be started to off.
...
- If the custom service device-camera will be started, set the camera_ip value to the IP address of the camera node in the deploy/playbook/host_vars/jet03.yml file and the deploy/playbook/host_vars/jet04.yml file (see the sketch after this list).
- If you are using different hostnames for the edge nodes, rename the files in the deploy/playbook/host_vars directory accordingly, and change the destination_host value in those files.
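As an illustration, a per-edge-node variables file could end up looking like the sketch below (the IP address is illustrative, and any variables beyond destination_host and camera_ip depend on your checkout).
# deploy/playbook/host_vars/jet03.yml
destination_host: jet03
camera_ip: 192.168.2.15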
After all configurations are completed, the following command executed on the deploy node will start the EdgeX services on the edge nodes:
...
You can confirm the status of the EdgeX microservices using the kubectl get pod command on the master node. (EdgeX microservice containers are grouped into one Kubernetes "pod" per node.)
sdt-admin@sdt-master:~$ kubectl get pod
NAME READY STATUS RESTARTS AGE
edgex-jet03-7f9644bb7d-gklvb 22/22 Running 18 (3d5h ago) 3d5h
edgex-jet04-749647459c-drpzb 22/22 Running 18 (3d5h ago) 3d5h
...
Test cases for verifying the blueprint's operation are provided in the cicd/tests directory. These are Robot Framework scripts which can be executed using the robot tool. In addition, the cicd/playbook directory contains playbooks supporting setup of a Jenkins-based automated testing environment for CI/CD. For more information, consult the README.md files in those directories.
Before using the test scripts in the cicd/tests directory, modify the common.resource file in that directory according to your environment. The content to be changed is annotated with '#' below.
*** Settings ***
Library SSHLibrary
Library String
*** Variables ***
${HOME} /home/sdt-admin # home directory of build and deploy node
${DEPLOY_HOST} sdt-deploy # hostname of deploy node
${DEPLOY_USER} sdt-admin # username of deploy node
${DEPLOY_KEY} ${HOME}/.ssh/lfedge_deploy # private key in build node to access deploy node
${DEPLOY_PWD} password
${PLAYBOOK_PATH} lf-edge/deploy/playbook # playbook path of build and deploy node
${MASTER_HOST} sdt-master # hostname of master node
${BUILD_HOST} sdt-build # hostname of build node
${BUILD_USER} sdt-admin # username of build node
${BUILD_KEY} ${HOME}/.ssh/lfedge_build # private key in build node to access build node
${EDGE_HOST1} jet03 # hostname of edge node#1
${EDGE_HOST2} jet04 # hostname of edge node#2
${EDGE_USER} edge # username of edge node
${EDGE_KEY} ${HOME}/.ssh/edge # private key in deploy node to access edge node
*** Keywords ***
……
……
...
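Once common.resource matches your environment, the tests can be run from the cicd/tests directory with the robot command-line tool, for example (the output directory name is only an illustration).
cd cicd/tests
robot --outputdir results .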
- CPS: Cyber-Physical System
- MQTT: A lightweight, publish-subscribe network protocol designed for connecting remote devices, especially when there are bandwidth constraints. (MQTT is not an acronym.)