...
Note that the build node, the deploy node, and the master node can be the same host or virtual machine.
The camera nodes are not shown in the above diagram as they are not envisioned as being connected to the network, and they are not configured by the playbooks from the deploy node. See the Camera Nodes section of Installation for an example of how camera nodes may be set up.
Pre-Installation Requirements
...
The list below shows the required software for each node type prior to beginning the installation process.
- CICD node
- Ubuntu 20.04
- Jenkins 2.332.3
- Build node
- Ubuntu 20.04
- Ansible 2.12.5
- Deploy node
- Ubuntu 20.04
- Ansible 2.12.5
- Master node
- Ubuntu 20.04
- Edge node
- Ubuntu 20.04
- Camera node
- N/A (pre-installed)
Note that Ansible 2.9.6 is installed from the regular Ubuntu repository on Ubuntu 20.04, but needs to be upgraded from the Ansible repository to support the kubernetes.core collection used by this blueprint. The setup_cicd.yml playbook can be run with Ansible 2.9.6 and will update Ansible to the required version.
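If the upgrade needs to be done manually, one common approach on Ubuntu 20.04 is to install Ansible from the Ansible project's PPA, as sketched below (the setup playbooks perform an equivalent step automatically):
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt-get install --yes ansible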
Additional Installed Software Packages
Note that the installation process will install several more software packages through Ansible playbooks. These are listed below for reference. Packages installed by default with Ubuntu 20.04 server are not listed. The version numbers are those available/installed by the Ansible playbooks on Ubuntu 20.04 at the time of writing.
- CICD node
- make 4.2.1, build-essential 12.8, python3-pip 20.0.2, default-jre 2:1.11-72
- Jenkins 2.332.3
- Build node
- make 4.2.1, build-essential 12.8, python3-pip 20.0.2, default-jre 2:1.11-72
- Robot Framework 6.0
- Docker (docker.io) 20.10.12
- Go 1.16.10
- Deploy node
- make 4.2.1, build-essential 12.8, python3-pip 20.0.2
- Ansible 2.12.9
- Ansible collections community.docker, kubernetes.core, community.crypto
- Master node
- Docker (docker.io) 20.10.12
- python3-pip 20.0.2
- Python packages cryptography and kubernetes
- mosquitto 2.0.15, mosquitto-clients 2.0.15
- Kubernetes (kubectl, kubelet, kubeadm) 1.22.9
- Flannel 0.17.0, flannel-cni-plugin 1.0.1 (Note: These are containers installed via Kubernetes through a config file)
- Edge node
- Docker (docker.io) 20.10.12
- Kubernetes (kubelet, kubeadm) 1.22.9 (kubectl may be installed for debugging purposes)
...
Modify the hosts file in the deploy/playbook directory with the host names and IP addresses of the master, build, cicd and edge nodes.
all:
  hosts:
  children:
    deploy:
      hosts:
        localhost:
    master:
      hosts:
        sdt-master:            # hostname of master node
    edge_nodes:
      hosts:
        jet03:                 # hostname of first edge node
          ip: 192.168.2.27     # IP address of first edge node
          lora_id: 1
        jet04:                 # hostname of second edge node
          ip: 192.168.2.29     # IP address of second edge node
          lora_id: 4
      vars:
        ansible_user: edge
        ansible_ssh_private_key_file: ~/.ssh/edge
Modify the host names and IP addresses of the master/build/cicd/deploy nodes in the cicd/playbook/hosts file.
all:
  children:
    build:
      hosts:
        localhost:
        sdt-build:               # hostname of build node
          ip: 192.168.10.203     # IP address of build node
    cicd:
      hosts:
        sdt-cicd:                # hostname of cicd node
          ip: 192.168.10.200     # IP address of cicd node
Modify the host names and IP addresses of the master/build/cicd/deploy nodes in the cicd/playbook/hosts file.
all:
  hosts:
    localhost:
    arm-build:
      ansible_host: erc01
      ansible_user: edge
      ansible_ssh_private_key_file: ~/.ssh/edge
      ansible_become_password: password
  children:
    master:
      hosts:
        sdt-master:              # hostname of master node
    build:
      hosts:
        sdt-build:               # hostname of build node
          ip: 192.168.10.203     # IP address of build node
      vars:
        ansible_user: sdt-admin
        ansible_ssh_private_key_file: ~/.ssh/lfedge_build
    cicd:
      hosts:
        sdt-cicd:                # hostname of cicd node
          ip: 192.168.10.200     # IP address of cicd node
      vars:
        ansible_user: sdt-admin
        ansible_ssh_private_key_file: ~/.ssh/lfedge_cicd
    deploy:
      hosts:
        sdt-deploy:              # hostname of deploy node
          ip: 192.168.10.231     # IP address of deploy node
...
Once the key files have been created, the following command can be run from the deploy node to copy the key to the build node so a password will not be required for each login. (The administrative user's password will be requested when running this command.)
ssh-copy-id -i ~/.ssh/lfedge_build.pub sdt-admin@nodename
...
ansible-playbook -i ./hosts setup_build.yml --ask-become-pass
The playbook will perform the following initialization tasks:
...
Once the key files have been created, the following command can be run from the deploy node to copy the key to the cicd node so a password will not be required for each login. (The administrative user's password will be requested when running this command.)
ssh-copy-id -i ~/.ssh/lfedge_cicd.pub sdt-admin@nodename
...
ansible-playbook -i ./hosts setup_cicd.yml --ask-become-pass
The playbook will perform the following initialization tasks:
...
- Modify the file edgex.yml in the deploy/playbook/group_vars/all directory to decide which services will be started. For details, please refer to the section Enabling and Disabling Optional Services below.
- If the custom service device-camera will be started, set the camera_ip value to the IP address of the camera node in the deploy/playbook/host_vars/jet03.yml file and the deploy/playbook/host_vars/jet04.yml file (see the sketch after this list).
- If you are using different hostnames for the edge nodes, rename the files in the deploy/playbook/host_vars directory accordingly, and change the destination_host value in those files.
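For illustration, a per-node file such as deploy/playbook/host_vars/jet03.yml might contain entries along the following lines (a sketch only: the IP address is a placeholder, and destination_host is assumed here to name the peer edge node, so check the files shipped with the blueprint):
destination_host: jet04      # edge node that sensor data is forwarded to (assumed meaning)
camera_ip: 192.168.3.101     # IP address of the camera attached to this edge node (placeholder)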
...
*** Settings ***
Library    SSHLibrary
Library    String

*** Variables ***
${HOME}              /home/sdt-admin               # home directory on the build and deploy nodes
${DEPLOY_HOST}       sdt-deploy                    # hostname of deploy node
${DEPLOY_USER}       sdt-admin                     # username of deploy node
${DEPLOY_KEY}        ${HOME}/.ssh/lfedge_deploy    # private key in build node to access deploy node
${DEPLOY_PWD}        password
${PLAYBOOK_PATH}     lf-edge/deploy/playbook       # playbook path of build and deploy node
${EDGE_HOST1}        jet03                         # hostname of edge node #1
${EDGE_HOST2}        jet04                         # hostname of edge node #2
${EDGE_USER}         edge                          # username of edge node
${EDGE_KEY}          ${HOME}/.ssh/edge             # private key in deploy node to access edge node
*** Keywords ***
……
……
...
……
……
Note: if there is no private key (${DEPLOY_KEY}) on the build or deploy node, use the ssh-keygen command to create the key and the ssh-copy-id command to copy it to the destination node. (Please refer to the section Preparing Edge Nodes above for detailed usage of ssh-keygen and ssh-copy-id.)
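For example, assuming the key path and user names from the variables above, the key could be created on the build node and copied to the deploy node with:
ssh-keygen -t rsa -f ~/.ssh/lfedge_deploy                       # create the key pair on the build node
ssh-copy-id -i ~/.ssh/lfedge_deploy.pub sdt-admin@sdt-deploy    # install the public key on the deploy node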
...
Enabling and Disabling Optional Services
Five EdgeX micro-services can be enabled and disabled using variables in the deploy/playbook/group_vars/all/edgex.yml file. Set a variable to true to enable the corresponding micro-service the next time the edgex_start.yml playbook is run; set it to false to disable that micro-service. The controlling variables are listed below, followed by a sketch of the corresponding edgex.yml entries:
- device_virtual: Enable or disable the device-virtual service, provided by EdgeX Foundry, used for testing.
- device_lora: Enable or disable the device-lora service, one of the custom services provided by this blueprint, which provides support for receiving readings and sending commands to remote sensors over LoRa low-power radio links.
- sync_app: Enable or disable the sync-app application service, another custom service provided by this blueprint, which provides a way to forward sensor data to other edge nodes.
- device_camera: Enable or disable the device-camera service, provided by EdgeX Foundry and modified by this blueprint, which provides support for receiving readings and sending commands to remote cameras.
- image_app: Enable or disable the image-app application service, another custom service provided by this blueprint, which provides support for analyzing and comparing images received from the edge nodes.
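For illustration, the corresponding portion of deploy/playbook/group_vars/all/edgex.yml might look like the following (a sketch only: the values are example settings, and the device_camera and image_app variable names are assumed to follow the underscore convention of the other variables, so check the actual file):
device_virtual: false    # EdgeX virtual device service, used for testing
device_lora: true        # custom LoRa device service
sync_app: true           # custom data synchronization application service
device_camera: true      # camera device service (variable name assumed)
image_app: true          # custom image analysis application service (variable name assumed)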
Debugging Failures
Consult the sections under Troubleshooting for commands to debug failures. In particular, the kubectl commands described in Accessing Logs, and the configuration UI described above, which can change the logging level of running services, can be useful.
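For example, a first look at a failing service might use standard kubectl commands such as the following (pod names are placeholders; see Accessing Logs for the exact commands used with this blueprint):
kubectl get pods -o wide             # list the service pods and the nodes they are running on
kubectl describe pod <pod-name>      # show events for a pod that is not starting
kubectl logs <pod-name>              # show the log output of a service pod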
...
This blueprint stores configuration and data in the following places. When uninstalling the software, these folders and files can also be removed, if present, on the master, build, and edge nodes (an example cleanup command for an edge node is sketched after the list below).
- Master node:
- ~/.lfedge
- /opt/lfedge
- /etc/mosquitto/conf.d/edge.conf
- /usr/share/keyrings/kubernetes-archive-keyring.gpg
- Edge node:
- /opt/lfedge
- /etc/docker/certs.d/master:5000/registry.crt
- /usr/local/share/ca-certificates/master.crt
- /etc/docker/daemon.json
- /usr/share/keyrings/kubernetes-archive-keyring.gpg
- Build node:
- /etc/profile.d/go.sh
- /usr/local/go
- ~/edgexfoundry
- /usr/local/go1.16.10.linux-amd64.tar.gz
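For example, the edge node entries above could be cleaned up with commands along these lines (a sketch only; confirm each path before deleting anything):
sudo rm -rf /opt/lfedge
sudo rm -f /etc/docker/certs.d/master:5000/registry.crt /usr/local/share/ca-certificates/master.crt
sudo rm -f /etc/docker/daemon.json /usr/share/keyrings/kubernetes-archive-keyring.gpg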
Troubleshooting
Confirming Node and Service Status
...
Edge nodes can be added and removed by stopping the cluster, editing the deploy/playbook/hosts file, and adding or removing host files in the deploy/playbook/host_vars directory. The master_install.yml and edge_install.yml playbooks need to be run again to update /etc/hosts and certificates on any added nodes.
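For example, after editing the inventory and host_vars files, the playbooks can be re-run from the deploy/playbook directory in the same way as during installation (add any options, such as password prompts, used in the original install):
ansible-playbook -i ./hosts master_install.yml
ansible-playbook -i ./hosts edge_install.yml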
Updating the Software
The setup_deploy.yml, setup_build.yml, setup_cicd.yml, master_install.yml, and edge_install.yml playbooks can be run again to update software packages if necessary. Note that Kubernetes is pinned to version 1.22 to avoid problems that might arise from version instability, but it should be possible to update it if so desired.
...
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
The synchronization application, image application, device camera service and LoRa device service are linked with other Go packages/components when compiled, which are each covered by their own licenses, listed below. Other components downloaded and installed during the blueprint's installation process are covered by their own licenses.
...
- CPS: Cyber-Physical System
- MQTT: A lightweight, publish-subscribe network protocol designed for connecting remote devices, especially when there are bandwidth constraints. (MQTT is not an acronym.)