...
Deployment, as well as other tasks such as starting and stopping the cluster, is coordinated through a set of Ansible playbooks. (Ansible playbooks are files used by the Ansible tool to describe the desired state of a system. In many ways they are similar to shell scripts. For more details see the Ansible documentation.) The playbooks are run from the deploy node and build node, and they execute commands on the deploy node, the master node, the build node, and in some cases the edge nodes. Once the nodes are set up, most activity is carried out by Kubernetes, which is configured by the playbooks and told to start or stop services on the edge nodes. These services run in containers, and the images for these containers are stored in a local Docker registry. There are containers for the Kubernetes components themselves, plus Flannel (a component which provides networking inside the Kubernetes cluster), the EdgeX Foundry services, and four custom services (sync-app, image-app, device-lora, and device-camera) built using the EdgeX SDKs.
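All of the playbooks are invoked the same way: ansible-playbook is run against the hosts inventory file in the corresponding playbook directory, as the later sections of this document show. A minimal sketch, using a placeholder playbook name:
cd deploy/playbook
ansible-playbook -i ./hosts some_playbook.yml   # some_playbook.yml is a placeholder; actual playbook names are given below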
...
- Build node
- make 4.2.1, build-essential 12.8, python3-pip 20.0.2, default-jre 2:1.11-72
- Robot Framework 6.0
- Docker (docker.io) 20.10.12
- Go 1.16.10
- Deploy node
- make 4.2.1, build-essential 12.8, python3-pip 20.0.2
- Ansible 2.12.95
- Ansible collections: community.docker, kubernetes.core, community.crypto
- Master node
- Docker (docker.io) 20.10.12
- python3-pip 20.0.2
- Python packages: cryptography and kubernetes
- mosquitto 2.0.15, mosquitto-clients 2.0.15
- Kubernetes (kubectl, kubelet, kubeadm) 1.22.9
- Flannel 0.17.0, flannel-cni-plugin 1.0.1 (Note: These are containers installed via Kubernetes through a config file)
- Edge node
- Docker (docker.io) 20.10.12
- Kubernetes (kubelet, kubeadm) 1.22.9 (kubectl may be installed for debugging purposes)
...
Modify the hosts file in the deploy/playbook directory with the host names and IP addresses of the master, build, cicd, and edge nodes.
all:
  hosts:
  children:
    deploy:
      hosts:
        localhost:
    master:
      hosts:
        sdt-master:          # hostname of master node
    edge_nodes:
      hosts:
        jet03:               # hostname of first edge node
          ip: 192.168.2.27   # IP address of first edge node
          lora_id: 1
        jet04:               # hostname of second edge node
          ip: 192.168.2.29   # IP address of second edge node
          lora_id: 4
      vars:
        ansible_user: edge
        ansible_ssh_private_key_file: ~/.ssh/edge
    build:
      hosts:
        sdt-build:             # hostname of build node
          ip: 192.168.10.203   # IP address of build node
    cicd:
      hosts:
        sdt-cicd:              # hostname of cicd node
          ip: 192.168.10.200   # IP address of cicd node
Modify the host names and IP addresses of the master, build, cicd, and deploy nodes in the cicd/playbook/hosts file.
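The cicd/playbook/hosts inventory follows the same layout as the deploy inventory shown above. A minimal sketch, assuming the group names referred to later in this document (build and cicd), the sdt-admin administrative user, and placeholder host names and addresses:
all:
  children:
    deploy:
      hosts:
        sdt-deploy:            # hostname of deploy node
          ip: 192.168.10.201   # placeholder IP address of deploy node
    master:
      hosts:
        sdt-master:            # hostname of master node
    build:
      hosts:
        sdt-build:             # hostname of build node
          ip: 192.168.10.203   # IP address of build node
      vars:
        ansible_user: sdt-admin
        ansible_ssh_private_key_file: ~/.ssh/lfedge_build
    cicd:
      hosts:
        sdt-cicd:              # hostname of cicd node
          ip: 192.168.10.200   # IP address of cicd node
      vars:
        ansible_user: sdt-admin
        ansible_ssh_private_key_file: ~/.ssh/lfedge_cicd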
...
- Make sure there are entries for the cicd, build, master, and edge node names in /etc/hosts
- Install required software packages including Docker, Kubernetes, pip, and mosquitto
- Install Python packages used by other playbooks (kubernetes and cryptography)
- Make sure the user can run Docker commands
- Prepare basic configuration for Docker and Kubernetes
- Set up a user name and password for the MQTT service
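The MQTT credentials can be checked on the master node with the mosquitto clients installed above. A minimal sketch, using placeholder user name, password, and topic (replace them with the values configured by the playbook):
mosquitto_sub -h localhost -u mqtt-user -P mqtt-password -t test/topic       # run in one terminal and leave it subscribed
mosquitto_pub -h localhost -u mqtt-user -P mqtt-password -t test/topic -m "hello"   # run in another terminal; the message should appear in the subscriber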
...
Once the key files have been created, the following command can be run from the deploy node to copy the key to the build node so a password will not be required for each login. (The administrative user's password will be requested when running this command.)
ssh-copy-id -i ~/.ssh/lfedge_build.pub sdt-admin@nodename
Note, if you use an administrative account with a different name, change the variable ansible_user in the build group in the cicd/playbook/hosts file to match the user name you are using.
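To confirm that key-based login works before running the playbooks, an SSH connection can be attempted from the deploy node with the new key (nodename is the build node's host name, as above); this is only an illustrative check:
ssh -i ~/.ssh/lfedge_build sdt-admin@nodename hostname   # should print the build node's host name without prompting for a password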
...
The playbook will perform the following initialization tasks:
- Make sure there are entries for the master and deploy nodes in
/etc/hosts
- Install required software packages including Docker, Go, and Robot Framework
- Make sure the user can run Docker commands
- Configure Docker, including adding the certificates to secure access to the private registry
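Two quick, purely illustrative checks on the build node after the playbook completes: the regular user should be able to run Docker commands, and the master node's name should resolve from /etc/hosts:
docker ps                 # should list containers (possibly none) without requiring sudo
getent hosts sdt-master   # should print the entry added to /etc/hosts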
...
Once the key files have been created, the following command can be run from the deploy node to copy the key to the cicd node so a password will not be required for each login. (The administrative user's password will be requested when running this command.)
ssh-copy-id -i ~/.ssh/lfedge_cicd.pub sdt-admin@nodename
Note, if you use an administrative account with a different name, change the variable ansible_user in the cicd group in the cicd/playbook/hosts file to match the user name you are using.
...
At this time, images for the four custom services (sync-app, image-app, device-lora, and device-camera) need to be built from source and pushed to the private Docker registry. (In the future these images should be available on Docker Hub or another public registry.) Use the following playbooks from the cicd/playbook directory on the deploy node to do so.
Note, the custom service image-app is limited by its base image, NVIDIA L4T CUDA, which only supports the arm architecture, so image-app also supports only arm. The other custom services support both arm64 and amd64 architectures.
This command, executed on the deploy node, will build local Docker images of the custom microservices:
...
The build command can take some time, depending on connection speed and the load on the deploy host, especially for the cross-compiled images.
This command executed on the deploy node will push the images to the private registry:
ansible-playbook -i ./hosts push_images.yml
At the time of writing, this step will also create some workaround images required to enable EdgeX security features in this blueprint's Kubernetes configuration. Hopefully, these images will no longer be needed once fixes have been made upstream.
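If you want to confirm that the images have arrived in the private registry, the registry's catalog endpoint can be queried. The registry host name and port below are placeholders and depend on how the registry was configured; depending on the certificate setup, curl may also need to be pointed at the registry's CA certificate with --cacert:
curl https://sdt-master:5000/v2/_catalog   # placeholder registry host name and port; lists repositories stored in the registry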
Starting the Cluster
With the base software installed and configured on the master and edge nodes, the following command executed on the deploy node will start the cluster:
...
This command only starts the master node in the Kubernetes cluster. The state of the master node can be confirmed using the kubectl get node command on the master node.
sdt-admin@sdt-master:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 3d5h v1.22.9
...
ansible-playbook -i ./hosts join_cluster.yml
The kubectl get nodes command on the master node can be used to confirm the state of the edge nodes.
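After both edge nodes have joined, the output should look similar to the following; node names, roles, ages, and versions depend on your configuration:
sdt-admin@sdt-master:~$ kubectl get nodes
NAME     STATUS   ROLES                  AGE    VERSION
jet03    Ready    <none>                 3d5h   v1.22.9
jet04    Ready    <none>                 3d5h   v1.22.9
master   Ready    control-plane,master   3d5h   v1.22.9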
...
- Modify the file edgex.yml in the deploy/playbook/group_vars/all directory to decide which services will be started. For details, please refer to the section Enabling and Disabling Optional Services below.
- If the custom service device-camera will be started, set the camera_ip value to the IP address of the camera node in the deploy/playbook/host_vars/jet03.yml file and the deploy/playbook/host_vars/jet04.yml file (see the sketch below).
- If you are using different host names for the edge nodes, change the names of the files in the deploy/playbook/host_vars directory, and change the destination_host value in the files in that directory.
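A minimal sketch of what one of these per-node files might contain, covering only the two values named above (the camera IP address is illustrative; any other settings in the real files should be left as they are):
# deploy/playbook/host_vars/jet03.yml
destination_host: jet03      # host name of this edge node
camera_ip: 192.168.2.101     # IP address of the camera attached to this node (illustrative value)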
...
Note, during initialization of the services you may see some containers restart one or more times. This is part of the timeout and retry behavior of the services waiting for other services to complete initialization and does not indicate a problem.
Camera Nodes
Configuration of the Camera Nodes (TODO)
Consult the installation instructions for the H.View HV-500E6A hardware.
Readings received from camera nodes should appear in the core-data database and can be monitored using the edgex-events-nodename channel. For example, the following command run on the master node should show the readings arriving at an edge node named "jet03":
...
*** Settings ***
Library SSHLibrary
Library String
*** Variables ***
${HOME} /home/sdt-admin # home directory on the build and deploy nodes
${DEPLOY_HOST} sdt-deploy # hostname of deploy node
${DEPLOY_USER} sdt-admin # username of deploy node
${DEPLOY_KEY} ${HOME}/.ssh/lfedge_deploy # private key in build node to access deploy node
${DEPLOY_PWD} password
${PLAYBOOK_PATH} lf-edge/deploy/playbook # playbook path of build and deploy node
${EDGE_HOST1} jet03 # hostname of edge node#1
${EDGE_HOST2} jet04 # hostname of edge node#2
${EDGE_USER} edge # username of edge node
${EDGE_KEY} ${HOME}/.ssh/edge # private key in deploy node to access edge node
*** Keywords ***
……
……
...
The custom services can be rebuilt by running the build_images.yml playbook in cicd/playbook. After successfully building a new version of a service, use push_images.yml to push the images to the private Docker registry. The source for the services is found in edgex/sync-app, edgex/image-app, edgex/device-camera, and edgex/device-lora.
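Both steps use the same invocation pattern described earlier in this document; a minimal sketch, run from the cicd/playbook directory on the deploy node:
cd cicd/playbook
ansible-playbook -i ./hosts build_images.yml   # rebuild the custom service images
ansible-playbook -i ./hosts push_images.yml    # push the rebuilt images to the private registry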
License
...
LoRa Device Service
The LoRa device service is linked with the following packages when compiled:
Image Application
The image application is linked with the following packages when compiled:
Camera Device Service
The camera device service is linked with the following packages when compiled:
References
- EdgeX Foundry Documentation (release 2.1): https://docs.edgexfoundry.org/2.1/
...
- CPS: Cyber-Physical System
- MQTT: A lightweight, publish-subscribe network protocol designed for connecting remote devices, especially when there are bandwidth constraints. (MQTT is not an acronym.)