...
Deployment, as well as other tasks such as starting and stopping the cluster, is coordinated through a set of Ansible playbooks. (Ansible playbooks are a system used by the Ansible tool for describing the desired state of a system. In many ways they are similar to shell scripts. For more details see the Ansible documentation.) The playbooks are run by the deploy node and build node, and they execute commands on the deploy node, the master node, the build node, and in some cases on the edge nodes. Once the nodes are set up, most activity is carried out by Kubernetes. Kubernetes is configured by the playbooks and told to start or stop services on the edge nodes. These services are run in containers, and the images for these containers are stored in a local Docker registry. There are containers for the Kubernetes components themselves, plus Flannel (a component which provides networking inside the Kubernetes cluster), EdgeX Foundry services, and four custom services (sync-app, image-app, device-lora and device-camera) built using the EdgeX SDKs.
Note that the build, deploy, and master nodes can be the same host or virtual machine.
The camera nodes are not shown in the above diagram as they are not envisioned as being connected to the network, and are not configured by the playbooks from the deploy node. See the Camera Nodes section of Installation for an example of how camera nodes may be set up.
Pre-Installation Requirements
...
The list below shows the required software for each node type prior to beginning the installation process.
- CICD node
  - Ubuntu 20.04
  - Ansible 2.12.5
- Build node
- Deploy node
  - Ubuntu 20.04
  - Ansible 2.12.5
- Master node
- Edge node
- Camera node
Note that Ansible 2.9.6 is installed from the regular Ubuntu repository on Ubuntu 20.04, but needs to be upgraded from the Ansible repository to support the kubernetes.core
collection used by this blueprint. The setup_cicd.yml
playbook can be run with Ansible 2.9.6 and will update Ansible to the required version.
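If you prefer to upgrade Ansible manually before running any playbooks, one way to do it on Ubuntu 20.04 is from the Ansible PPA (a sketch; the setup_cicd.yml playbook performs an equivalent upgrade):
# Add the upstream Ansible repository and install a newer Ansible release
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install -y ansible
# Confirm the installed version
ansible --version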
Additional Installed Software Packages
Note that the installation process will install several more software packages through Ansible playbooks. These are listed below for reference. Packages included by default in an install of Ubuntu 20.04 server are not included. The version numbers are those that are available/installed at the time of writing by the Ansible playbooks on Ubuntu 20.04.
- CICD node
- make 4.2.1, build-essential 12.8, python3-pip 20.0.2, default-jre 2:1.11-72
- Jenkins 2.361.3
- Build node
- make 4.2.1, build-essential 12.8, python3-pip 20.0.2, default-jre 2:1.11-72
- Robot Framework 6.0
- Docker (docker.io) 20.10.12
- Go 1.16.10
- Deploy node
- make 4.2.1, build-essential 12.8, python3-pip 20.0.2
- Ansible 2.12.5
- Ansible collections
community.docker
, kubernetes.core
, community.crypto
- Master node
- Docker (docker.io) 20.10.12
- python3-pip 20.0.2
- Python packages
cryptography
and kubernetes
- mosquitto 2.0.15, mosquitto-clients 2.0.15
- Kubernetes (kubectl, kubelet, kubeadm) 1.22.9
- Flannel 0.17.0, flannel-cni-plugin 1.0.1 (Note: These are containers installed via Kubernetes through a config file)
- Edge node
- Docker (docker.io) 20.10.12
- Kubernetes (kubelet, kubeadm) 1.22.9 (kubectl may be installed for debugging purposes)
...
Modify the hosts
file in the deploy/playbook
directory with the host names and IP addresses of the master and edge nodes.
all:
  hosts:
  children:
    deploy:
      hosts:
        localhost:
    master:
      hosts:
        sdt-master: # hostname of master node
    edge_nodes:
      hosts:
        jet03: # hostname of first edge node
          ip: 192.168.2.27 # IP address of first edge node
          lora_id: 1
        jet04: # hostname of second edge node
          ip: 192.168.2.29 # IP address of second edge node
          lora_id: 4
      vars:
        ansible_user: edge
        ansible_ssh_private_key_file: ~/.ssh/edge
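Before running the playbooks, connectivity to the edge nodes defined in this inventory can be checked with an Ansible ad-hoc ping from the deploy/playbook directory (illustrative; not part of the blueprint's playbooks):
ansible -i ./hosts edge_nodes -m ping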
Modify the host names and IP addresses of the master, build, cicd and deploy nodes in the cicd/playbook/hosts
file.
all:
  hosts:
    localhost:
    arm-build:
      ansible_host: erc01
      ansible_user: edge
      ansible_ssh_private_key_file: ~/.ssh/edge
      ansible_become_password: password
  children:
    master:
      hosts:
        sdt-master: # hostname of master node
    build:
      hosts:
        sdt-build: # hostname of build node
          ip: 192.168.10.203 # IP address of build node
      vars:
        ansible_user: sdt-admin
        ansible_ssh_private_key_file: ~/.ssh/lfedge_build
    cicd:
      hosts:
        sdt-cicd: # hostname of cicd node
          ip: 192.168.10.200 # IP address of cicd node
      vars:
        ansible_user: sdt-admin
        ansible_ssh_private_key_file: ~/.ssh/lfedge_cicd
    deploy:
      hosts:
        sdt-deploy: # hostname of deploy node
          ip: 192.168.10.231 # IP address of deploy node
In the file master.yml
in the deploy/playbook/group_vars/all
directory, set the master_ip
value to the IP address of the master node. Note that this is required even if the master node is the same as the deploy node.
master_ip: 192.168.2.16
Set Up the Deploy Node
The account which runs the deploy playbooks will need to be able to use sudo
to execute some commands with super-user permissions. The following command can be used (by root or another user which already has super-user permissions) to enable the use of sudo for a user:
...
- Make sure there are entries for the cicd, build, master and edge node names in
/etc/hosts
- Install required software packages including Docker, Kubernetes, pip, and mosquitto
- Install Python packages used by other playbooks (kubernetes and cryptography)
- Make sure the user can run Docker commands
- Prepare basic configuration for Docker and Kubernetes
- Set up a user name and password for the MQTT service
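As an optional check after this playbook completes, a test message can be published to the MQTT broker on the master node using the mosquitto clients (the user name and password below are placeholders for whatever the playbook configured):
# Run on the master node; replace <mqtt-user> and <mqtt-password> with the configured credentials
mosquitto_pub -h localhost -u <mqtt-user> -P <mqtt-password> -t test/topic -m hello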
...
Note, if you use an administrative account with a different name, change the variable ansible_user
in the edge_nodes
group in the deploy/playbook/hosts
file to match the user name you are using.
In the file secret
in the deploy/playbook/group_vars/edge_nodes
directory, set the edge node admin user's sudo password.
The deploy node needs to log in via SSH to the edge nodes using a cryptographic key (rather than a password), so that a password does not need to be provided for every command. Run the following command on the deploy node to create a key called "edge" for the administrative user.
...
Once the key files have been created, the following command can be run from the deploy node to copy the key to the build node so a password will not be required for each login. (The administrative user's password will be requested when running this command.)
ssh-copy-id -i ~/.ssh/lfedge_build.pub sdt-admin@nodename
Note, if you use an administrative account with a different name, change the variable ansible_user
in the build
group in the cicd/playbook/hosts
file to match the user name you are using.
...
ansible-playbook -i ./hosts setup_build.yml --ask-become-pass
The playbook will perform the following initialization tasks:
- Make sure there are entries for the master and deploy nodes in
/etc/hosts
- Install required software packages including Docker, Go, and Robot Framework
- Make sure the user can run Docker commands
- Configure Docker, including adding the certificates to secure access to the private registry
...
Once the key files have been created, the following command can be run from the deploy node to copy the key to the cicd node so a password will not be required for each login. (The administrative user's password will be requested when running this command.)
ssh-copy-id -i ~/.ssh/lfedge_cicd.pub sdt-admin@nodename
Note, if you use an administrative account with a different name, change the variable ansible_user
in the cicd
group in the cicd/playbook/hosts
file to match the user name you are using.
After the private key has been configured, the following command will prepare the cicd node for use:
ansible-playbook -i ./hosts setup_cicd.yml --ask-become-pass
The playbook will perform the following initialization tasks:
- Make sure there is an entry for the build node in
/etc/hosts
- Install required software packages including Jenkins
Building the Custom Services
At this time, images for the four custom services, sync-app, image-app, device-lora and device-camera, need to be built from source and pushed to the private Docker registry. (In the future these images should be available on Docker Hub or another public registry.) Use the following playbooks from the cicd/playbook
directory on the deploy node to do so.
Note that the base image NVIDIA L4T CUDA only supports the arm architecture, so the custom service image-app is also only available for arm. The other custom services support both arm64 and amd64 architectures.
This command executed on the deploy node will build local docker images of the custom microservices:
...
The build command can take some time, depending on connection speed and the load on the deploy host, especially for the cross-compiled images.
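After a build, the target architecture of an image can be confirmed with Docker (illustrative; substitute the actual image name and tag produced by the build playbook):
docker image inspect --format '{{.Architecture}}' <image-app-image>:<tag>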
This command executed on the deploy node will push the images to the private registry:
ansible-playbook -i ./hosts push_images.yml
At the time of writing this step will also create some workaround images required to enable EdgeX security features in this blueprint's Kubernetes configuration. Hopefully these images will no longer be needed once fixes have been made upstream.
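As a quick check that the push succeeded, the registry's catalog can be listed through the standard Docker registry API (this assumes the registry is reachable as master:5000, as configured by the install playbooks; add -k or point curl at the registry CA certificate if it does not trust the registry):
curl https://master:5000/v2/_catalog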
Starting the Cluster
With the base software installed and configured on the master and edge nodes, the following command executed on the deploy node will start the cluster:
...
This command only starts the master node in the Kubernetes cluster. The state of the master node can be confirmed using the kubectl get node
command on the master node.
sdt-admin@sdt-master:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 3d5h v1.22.9
...
ansible-playbook -i ./hosts join_cluster.yml
The kubectl get node
command on the master node can be used to confirm the state of the edge nodes.
sdt-admin@sdt-master:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
jet03 Ready <none> 3d5h v1.22.9
jet04 Ready <none> 3d5h v1.22.9
master Ready control-plane,master 3d5h v1.22.9
...
Before starting EdgeX services, you need to do the following configuration first.
- Modify the file edgex.yml in the deploy/playbook/group_vars/all directory, setting the services to be started to on and the services not to be started to off, to decide which services will be started. For details, please refer to the section Enabling and Disabling Optional Services below.
- If the custom service device-camera will be started, set the camera_ip value to the IP address of the camera node in the deploy/playbook/host_vars/jet03.yml file and the deploy/playbook/host_vars/jet04.yml file.
- If you are using different host names for the edge nodes, change the names of the files in the deploy/playbook/host_vars directory, and change the destination_host value in the files in that directory.
After all configurations are completed, the following command executed on the deploy node will start the EdgeX services on the edge nodes:
ansible-playbook -i ./hosts edgex_start.yml
You can confirm the status of the EdgeX microservices using the kubectl get pod
command on the master node. (EdgeX micro-service containers are grouped into one Kubernetes "pod" per node.)
sdt-admin@sdt-master:~$ kubectl get pod
NAME READY STATUS RESTARTS AGE
edgex-jet03-7f9644bb7d-gklvb 22/22 Running 18 (3d5h ago) 3d5h
edgex-jet04-749647459c-drpzb 22/22 Running 18 (3d5h ago) 3d5h
Note, during initialization of the services you may see some containers restart one or more times. This is part of the timeout and retry behavior of the services waiting for other services to complete initialization and does not indicate a problem.
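If a pod does not settle into the Running state, kubectl describe on the master node shows per-container state and restart reasons (the pod name below is the one from the example output above):
kubectl describe pod edgex-jet03-7f9644bb7d-gklvb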
Camera Nodes
Configuration of the Camera Nodes (TODO)
Consult the installation instructions for the H.View HV-500E6A hardware.
Readings received from Camera nodes should appear in the core-data
database and be possible to monitor using the edgex-events-nodename
channel. For example, the following command run on the master node should show the readings arriving at an edge node named "jet03":
...
Test cases for verifying the blueprint's operation are provided in the cicd/tests
directory. These are Robot Framework scripts which can be executed using the robot
tool. In addition, the cicd/playbook
directory contains playbooks supporting setup of a Jenkins-based automated testing environment for CI/CD. For more information, consult the README.md
files in those directories.
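For example, a typical invocation on the build node might look like the following (illustrative; see the README.md under cicd/tests for the exact test names and options):
# Run the Robot Framework tests from the repository root and collect results
robot --outputdir results cicd/tests/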
Before using the test scripts in the cicd/tests
directory, modify the common.resource
file in the cicd/tests/
directory according to your environment. The content to be changed is the part annotated with '#' below.
*** Settings ***
Library SSHLibrary
Library String
*** Variables ***
${HOME}  /home/sdt-admin # home directory on the build and deploy nodes
${DEPLOY_HOST} sdt-deploy # hostname of deploy node
${DEPLOY_USER} sdt-admin # username of deploy node
${DEPLOY_KEY} ${HOME}/.ssh/lfedge_deploy # private key in build node to access deploy node
${DEPLOY_PWD} password
${PLAYBOOK_PATH} lf-edge/deploy/playbook # playbook path of build and deploy node
${EDGE_HOST1}  jet03 # hostname of edge node#1
${EDGE_HOST2}  jet04 # hostname of edge node#2
${EDGE_USER}  edge # username of edge node
${EDGE_KEY}  ${HOME}/.ssh/edge # private key in deploy node to access edge node
*** Keywords ***
……
……
Note, the build node and deploy node must use the same username and playbook path. If you want to use a different username or playbook path, the Robot Framework scripts in the cicd/tests/
directory need to be modified. In addition, if there is no private key on the build or deploy host, use the ssh-keygen
command to create the private key, and use the ssh-copy-id
command to copy the key to the destination node. (Please refer to the section Preparing Edge Nodes
above for detailed usage of ssh-keygen
and ssh-copy-id
.)
...
Enabling and Disabling Optional Services
Five EdgeX micro-services can be enabled and disabled using variables in the deploy/playbook/group_vars/all/edgex.yml
file. Set the variable to true
to enable the micro-service the next time the edgex_start.yml
playbook is run. Set the variable to false
to disable that micro-service. The micro-service controlling variables are listed below:
- device_virtual: Enable or disable the device-virtual service, provided by EdgeX Foundry, used for testing.
- device_lora: Enable or disable the device-lora service, one of the custom services provided by this blueprint, which provides support for receiving readings and sending commands to remote sensors over LoRa low-power radio links.
- sync_app: Enable or disable the sync-app application service, another custom service provided by this blueprint, which provides a way to forward sensor data to other edge nodes.
- device_camera: Enable or disable the device-camera service, provided by EdgeX Foundry and modified by this blueprint, which provides support for receiving readings and sending commands to remote cameras.
- image_app: Enable or disable the image-app application service, another custom service provided by this blueprint, which provides support for analyzing and comparing images received from the edge nodes.
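For example, a configuration that runs the LoRa, camera, synchronization, and image services but not the virtual device service would set the variables in deploy/playbook/group_vars/all/edgex.yml like this (a sketch; only the variable names and true/false values are defined by the blueprint, the surrounding file layout may differ):
device_virtual: false
device_lora: true
device_camera: true
sync_app: true
image_app: true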
Debugging Failures
Consult the sections under Troubleshooting for commands to debug failures. In particular, the kubectl
commands described in Accessing Logs, and the configuration UI described above, which can change the logging level of running services, can be useful.
...
This blueprint stores configuration and data in the following places. When uninstalling the software, these folders and files can also be removed, if present, on the master, build and edge nodes (an example cleanup for an edge node is sketched after the list).
- Master node:
- ~/.lfedge
- /opt/lfedge
- /etc/mosquitto/conf.d/edge.conf
- /usr/share/keyrings/kubernetes-archive-keyring.gpg
- Edge node:
- /opt/lfedge
- /etc/docker/certs.d/master:5000/registry.crt
- /usr/local/share/ca-certificates/master.crt
- /etc/docker/daemon.json
- /usr/share/keyrings/kubernetes-archive-keyring.gpg
- Build node:
- /etc/profile.d/go.sh
- /usr/local/go
- ~/edgexfoundry
- /usr/local/go1.16.10.linux-amd64.tar.gz
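For example, on an edge node the files listed above could be removed as follows (a sketch; run only after the node has been removed from the cluster and the software uninstalled):
# Remove blueprint data and configuration left on an edge node
sudo rm -rf /opt/lfedge
sudo rm -f /etc/docker/certs.d/master:5000/registry.crt /usr/local/share/ca-certificates/master.crt /etc/docker/daemon.json /usr/share/keyrings/kubernetes-archive-keyring.gpg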
Troubleshooting
Confirming Node and Service Status
...
Edge nodes can be added and removed by stopping the cluster, editing the deploy/playbook/hosts
file, and adding or removing host files in the deploy/playbook/host_vars
directory. The master_install.yml
and edge_install.yml
playbooks need to be run again to update /etc/hosts
and certificates on any added nodes.
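For example, from the deploy/playbook directory on the deploy node:
ansible-playbook -i ./hosts master_install.yml
ansible-playbook -i ./hosts edge_install.yml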
...
The setup_deploy.yml, setup_build.yml, setup_cicd.yml, master_install.yml, and edge_install.yml playbooks can be run again to update software packages if necessary. Note that Kubernetes is pinned to version 1.22 to avoid problems that might arise from version instability, but it should be possible to update it if so desired.
...
The custom services can be rebuilt by running the build_images.yml
playbook in cicd/playbook
. After successfully building a new version of a service, use push_images.yml to push the images to the private Docker registry. The source for the services is found in edgex/sync-app, edgex/image-app, edgex/device-camera, and edgex/device-lora.
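For example, from the cicd/playbook directory on the deploy node:
ansible-playbook -i ./hosts build_images.yml
ansible-playbook -i ./hosts push_images.yml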
License
...
The synchronization application, image application, camera device service and LoRa device service are linked with other Go packages/components when compiled, which are each covered by their own licenses, listed below. Other components downloaded and installed during the blueprint's installation process are covered by their own licenses.
...
Image Application
The image application is linked with the following packages when compiled:
The linked packages include the EdgeX application functions SDK and modules (app-functions-sdk-go/v2 and the go-mod-* modules, Apache-2.0) and supporting Go packages such as eclipse/paho.mqtt.golang, diegoholiveira/jsonlogic, gorilla/mux, gorilla/websocket, google/uuid, the hashicorp Consul libraries (MPL-2.0), and the mitchellh, go-playground, and golang.org/x utility packages, under MIT, BSD, Apache-2.0, and MPL-2.0 licenses.
Camera Device Service
The camera device service is linked with the following packages when compiled:
The linked packages include bitbucket.org/bertimus9/systemstat (MIT), the EdgeX device-camera-go base and go-mod-* modules (Apache-2.0), and supporting Go packages including pebbe/zmq4, pelletier/go-toml, x448/float16, and the golang.org/x/crypto, x/net, x/sys, and x/text libraries (BSD-3-Clause).
LoRa Device Service
The LoRa device service is linked with the following packages when compiled:
The linked packages include bitbucket.org/bertimus9/systemstat (MIT), the EdgeX device-sdk-go and go-mod-* modules (Apache-2.0), and supporting Go packages such as eclipse/paho.mqtt.golang, gorilla/mux, gorilla/websocket, google/uuid, the hashicorp Consul libraries (MPL-2.0), pebbe/zmq4, tarm/serial, and the golang.org/x libraries, under MIT, BSD, Apache-2.0, and MPL-2.0 licenses.
References
- EdgeX Foundry Documentation (release 2.1): https://docs.edgexfoundry.org/2.1/
...
- CPS: Cyber-Physical System
- MQTT: A lightweight, publish-subscribe network protocol designed for connecting remote devices, especially when there are bandwidth constraints. (MQTT is not an acronym.)