
Overall Architecture

  • Fully automated deployment on different platforms: AWS, GCP
  • Deployment of a Kubernetes cluster properly configured and tuned for NFV/MEC workloads
  • Enablement of real-time workloads
  • Ability to deploy applications on virtual machines and in containers in parallel

The blueprint is based on the site/profile/base pattern:

...
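The details of the pattern are elided above, but as an illustration only, here is a minimal sketch of how such a base/profile/site layering could look with kustomize; the paths and names are hypothetical, not taken from the actual blueprint repositories:

    # kustomization.yaml for a hypothetical site "edge-site-1".
    # The site layer pulls in a profile, which in turn pulls in the common
    # base, so platform- and site-specific settings override shared defaults.
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - ../../profiles/production.aws   # hypothetical profile layer
    patchesStrategicMerge:
      - site-overrides.yaml             # hypothetical site-specific overrides

Each profile directory would contain its own kustomization.yaml that references the base in the same way, giving three layers of increasingly specific configuration.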


The KNI Industrial Edge blueprint consists of at least two sites:

  • a management hub and
  • one or more factory edge sites.

The management hub consists of a 3-master, 3-worker cluster running Open Cluster Management, which manages the edge site clusters and applies upgrades, policies, etc. to them, as well as OpenDataHub, which allows streaming data from factory edge clusters to be stored in a data lake for re-training of machine learning models. OpenDataHub also deploys Jupyter Notebooks for data scientists to analyse data and work on models. Updated models can be distributed out to the factory via the same GitOps mechanisms used for updates of the clusters and their workloads. The management hub also deploys Tekton pipelines, which will eventually be used for GitOps-based management of edge sites but are not yet used for that in this release.
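As an illustration of the hub/edge registration model, a hedged sketch of the Open Cluster Management ManagedCluster resource that represents an edge cluster on the hub; the cluster name is a hypothetical example:

    # Hypothetical ManagedCluster object on the management hub: accepting the
    # klusterlet's registration brings the edge cluster under hub management.
    apiVersion: cluster.open-cluster-management.io/v1
    kind: ManagedCluster
    metadata:
      name: factory-edge-1          # hypothetical edge cluster name
    spec:
      hubAcceptsClient: true        # hub approves the joining cluster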

A factory edge site consists of a 3-node cluster of schedulable masters. Edge sites are deployed from a minimal blueprint that contains Open Cluster Management's klusterlet agent. When the edge cluster comes up, the klusterlet registers the cluster with the management hub and installs a local ArgoCD instance that references the GitHub repo hosting the definition of the services to be installed. Changes to these definitions are automatically pulled and applied to the edge cluster by ArgoCD, as shown in the sketch below. The edge cluster's services include Apache Camel-K for ingestion and transformation of sensor data, Kafka for streaming data, and MirrorMaker for replicating streaming data to the data lake on the management hub. Edge clusters also include the Seldon runtime for ML models, which they pull from the Quay container registry just like any other container image.
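A hedged sketch of the kind of ArgoCD Application that would implement this pull-based flow; the repository URL, path, and namespaces are hypothetical placeholders, not the blueprint's actual values:

    # Hypothetical ArgoCD Application: the local ArgoCD instance watches a Git
    # repo and keeps the edge services in sync with the definitions it hosts.
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: edge-services                 # hypothetical name
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/example/edge-services.git  # placeholder repo
        targetRevision: HEAD
        path: deploy                      # placeholder path
      destination:
        server: https://kubernetes.default.svc   # the local (edge) cluster
        namespace: edge-services
      syncPolicy:
        automated: {}                     # pull and apply changes automatically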



Platform Architecture

This blueprint runs on GCP and AWS, but is currently only tested against GCP.

Deployments to AWS


Resources used for the management hub cluster:

nodes      | instance type
3x masters | EC2: m4.xlarge, EBS: 120GB GP2
3x workers | EC2: m4.large, EBS: 120GB GP2

Resources used for the factory edge cluster:

nodes      | instance type
3x masters | EC2: m4.xlarge, EBS: 120GB GP2
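To show how this sizing maps onto an installation, a sketch of the relevant parts of an openshift-install install-config.yaml for the management hub on AWS; baseDomain, cluster name, and region are placeholder values:

    # Partial install-config.yaml reflecting the management hub table above.
    apiVersion: v1
    baseDomain: example.com               # placeholder
    metadata:
      name: mgmt-hub                      # placeholder
    controlPlane:
      name: master
      replicas: 3
      platform:
        aws:
          type: m4.xlarge                 # 3x masters
          rootVolume:
            size: 120                     # 120GB GP2 EBS
            type: gp2
    compute:
    - name: worker
      replicas: 3
      platform:
        aws:
          type: m4.large                  # 3x workers
          rootVolume:
            size: 120
            type: gp2
    platform:
      aws:
        region: us-east-1                 # placeholder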

Deployments to GCP

Resources used for the management hub cluster:

nodes      | instance type
3x masters | EC2: m4.xlarge, EBS: 120GB GP2
3x workers | EC2: m4.large, EBS: 120GB GP2

Resources used for the factory edge cluster:

nodes      | instance type
3x masters | EC2: m4.xlarge, EBS: 120GB GP2

Deployments to Bare Metal

Resources used for the factory edge cluster:

nodes      | requirements
3x masters | 12 cores, 16GB RAM, 200GB disk free, 2 SR-IOV-capable NICs (1 provisioning+storage, 1 cluster)
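Since the NICs are SR-IOV-capable, a typical way to hand virtual functions to NFV workloads on OpenShift is the SR-IOV Network Operator; a hedged sketch, with the policy name, resource name, and interface name as assumptions:

    # Hypothetical SriovNetworkNodePolicy: carves 4 virtual functions out of an
    # SR-IOV-capable NIC and exposes them as an allocatable node resource.
    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: edge-sriov-policy                      # hypothetical name
      namespace: openshift-sriov-network-operator
    spec:
      resourceName: edge_sriov_nics                # hypothetical resource name
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true"
      numVfs: 4
      nicSelector:
        pfNames: ["ens2f0"]                        # hypothetical PF name
      deviceType: netdevice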

The blueprint validation lab uses 3 SuperMicro SuperServer 1028R-WTR (Black) servers with the following specs:

Units | Type | Description
2     | CPU  | BDW-EP 12C E5-2650V4 2.2G 30M 9.6GT QPI
8     | Mem  | 16GB DDR4-2400 2RX8 ECC RDIMM
1     | SSD  | Samsung PM863, 480GB, SATA 6Gb/s, VNAND, 2.5" SSD - MZ7LM480HCHP-00005
4     | HDD  | Seagate 2.5" 2TB SATA 6Gb/s 7.2K RPM 128M, 512N (Avenger)
2     | NIC  | Standard LP 40GbE with 2 QSFP ports, Intel XL710

Networking for the machines has to be set up as follows:

[Figure: networking setup for the lab machines]

Deployments to vBaremetal (KVM)

...


Software Platform Architecture

[Figure: deploy on AWS, baremetal, Google Cloud, KVM (libvirt)]

Release 4 components:

  • CoreOS for all nodes (RT workers too): Red Hat Enterprise Linux CoreOS release 4.6
  • CRI-O: xxxxxxx
  • Kubernetes (OKD) version: openshift-install v4.6.6, built from commit db0f93089a64c5fd459d226fc224a2584e8cfb7e
    release image quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39
  • Multus, Cluster/Machine operator, Prometheus: versions provided in the 4.6.6 OpenShift release
  • Ceph: 14.2.8 (2d095e947a02261ce61424021bb43bd3022d35cb) nautilus (stable)
  • Kubevirt: 0.27.2 (see the sketch after this list)
  • Multus-cni: v4.3.3-202002171705, commit d406b4470f58367df1dd79b47e6263582b8fb511
  • Open Cluster Management: v2.0
  • ArgoCD Operator: v0.0.11
  • OpenShift Pipelines Operator: v1.1.1
  • OpenDataHub Operator: v0.6.1
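As referenced in the Kubevirt item above, running VMs and containers in parallel is provided by KubeVirt; a minimal, illustrative VirtualMachine using the upstream cirros demo disk image (name and sizing are examples only):

    # Minimal KubeVirt VirtualMachine: a small cirros VM scheduled side by
    # side with ordinary pods on the same cluster.
    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      name: demo-vm                       # hypothetical name
    spec:
      running: true
      template:
        spec:
          domain:
            devices:
              disks:
              - name: containerdisk
                disk:
                  bus: virtio
            resources:
              requests:
                memory: 128Mi
          volumes:
          - name: containerdisk
            containerDisk:
              image: quay.io/kubevirt/cirros-container-disk-demo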

APIs

No blueprint-specific APIs are involved. The blueprint relies on a Kubernetes cluster, so all APIs used are the standard Kubernetes ones.

...