
Table of Contents

Introduction

This document describes the blueprint test environment for the Smart Data Transaction for CPS blueprint. The test results and logs are posted in the Akraino Nexus at the link below:

https://nexus.akraino.org/content/sites/logs/fujitsu/job/

...

Node Type | Count | Hardware | OS
CI/CD | 1 | Intel i5, 2 cores VM | Ubuntu 20.04
Build | 1 | Intel i5, 2 cores VM | Ubuntu 20.04
Deploy | 1 | Intel i5, 2 cores VM | Ubuntu 20.04
Master | 1 | Intel i5, 2 cores VM | Ubuntu 20.04
Edge | 2 | Jetson Nano, ARM Cortex-A57, 4 cores | Ubuntu 20.04
Camera | 2 | H.View HV-500E6A | N/A (pre-installed)

The Build VM is used to run the BluVal test framework components outside the system under test.

Test Framework

BluVal and additional tests are carried out using Robot Framework.

Traffic Generator

N/A

Test API description

CI/CD Regression Tests: Node Setup

This set of test cases confirms the scripting that changes the default container runtime of the edge nodes.

The Test inputs

The test scripts and data are stored in the source repository's cicd/tests/sdt_step2/install/ directory.

Test Procedure

The test bed is placed in a state where all nodes are prepared with the required software. No EdgeX or Kubernetes services are running.

Execute the test scripts:

robot cicd/tests/sdt_step2/install/

Expected output

The test scripts will change the default runtime of the edge nodes from runc to nvidia.

The robot command should report success for all test cases.
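For a quick manual cross-check outside the Robot suite, the active default runtime can be queried directly on an edge node. A minimal sketch, assuming Docker is the container engine and that the edge node hostname (edge1 here) is illustrative:

# Print the runtime Docker assigns to new containers; after the test
# scripts have run, this should be "nvidia" rather than "runc".
$ ssh edge1 "docker info --format '{{.DefaultRuntime}}'"
nvidia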

Test Results

Nexus URL: 


Pass (1/1 test case)

CI/CD Regression Tests: Images Build & Push

These test cases verify that the images for EdgeX microservices can be constructed and pushed to the private registry.

The Test inputs

The test scripts and data are stored in the source repository's cicd/tests/sdt_step2/build/ directory.

Test Procedure

The test bed is placed in a state where all nodes are prepared with required software and the Docker registry is running.

Execute the test scripts:

robot cicd/tests/sdt_step2/build/

Expected output

The test scripts will build images of the changed services (sync-app/image-app/device-camera) and push the images to the private registry.

The robot command should report success for all test cases.
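The pushed images can also be confirmed manually through the Docker registry HTTP API. A minimal sketch, assuming the private registry listens on the master node at port 5000; the hostname, port, and response shown are illustrative:

# List the repositories held by the private registry; the freshly
# pushed service images should appear in the output.
$ curl http://master:5000/v2/_catalog
{"repositories":["device-camera","image-app","sync-app"]}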

Test Results

Nexus URL: 


Pass (2/2 test cases)

CI/CD Regression Tests: Cluster Setup & Teardown

These test cases verify that the Kubernetes cluster can be initialized, edge nodes added to it and removed, and the cluster torn down.

The Test inputs

The test scripts and data are stored in the source repository's cicd/tests/sdt_step2/cluster/ directory.

Test Procedure

The test bed is placed in a state where all nodes are prepared with required software and the Docker registry is running. The registry must be populated with the Kubernetes and Flannel images from upstream.

Execute the test scripts:

robot cicd/tests/sdt_step2/cluster/

Expected output

The test scripts will start the cluster, add all configured edge nodes, remove the edge nodes, and reset the cluster.

The robot command should report success for all test cases.
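Between the add and remove steps, cluster membership can be spot-checked from the master node. A sketch, assuming kubectl is configured with the cluster's kubeconfig; the node names and versions shown are illustrative:

# Each joined edge node should appear with STATUS "Ready".
$ kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
master    Ready    control-plane,master   20m   v1.22.x
jetson1   Ready    <none>                 3m    v1.22.x
jetson2   Ready    <none>                 3m    v1.22.x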

Test Results

Nexus URL: 


Pass (4/4 test cases)

CI/CD Regression Tests: EdgeX Services

These test cases verify that the EdgeX micro-services can be started and that MQTT messages are passed to the master node from the services.

The Test inputs

The test scripts and data are stored in the source repository's cicd/tests/sdt_step2/edgex/ directory.

Test Procedure

The test bed is placed in a state where the cluster is initialized and all edge nodes have joined. The Docker registry and mosquitto MQTT broker must be running on the master node. The registry must be populated with all upstream images and custom images. Either the device-camera service should be enabled, or device-virtual should be enabled to provide readings.

Execute the test scripts:

robot cicd/tests/sdt_step2/edgex/

Expected output

The test scripts will start the EdgeX micro-services on all edge nodes, confirm that MQTT messages are being delivered from the edge nodes, and stop the EdgeX micro-services.

The robot command should report success for all test cases.
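MQTT delivery can also be watched by hand while the services run. A minimal sketch, assuming the mosquitto broker on the master node listens on the default port 1883; the wildcard subscription avoids having to know the exact EdgeX topic names:

# Print every MQTT message arriving at the master node's broker;
# readings from the edge nodes should appear. Stop with Ctrl-C.
$ mosquitto_sub -h master -p 1883 -t '#' -v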

Test Results

Nexus URL: 


Pass (8/8 test cases)

CI/CD Regression Tests: Camera Device Service

These test cases verify that the device-camera service can get images from the IP camera, the sync-app service can share the images with another edge node, the image-app service can analyze the images, and the support-notification service can receive the "crowded" notification.

The Test inputs

The test steps and data are contained in the scripts in the source repository's cicd/tests/sdt_step2/camera/ directory.

Test Procedure

The test bed is initialized to the point of having all EdgeX services running, with device-camera and image-app enabled.

Execute the test scripts:

robot cicd/tests/sdt_step2/camera/

Expected output

The test cases will check that the MQTT messages and the core-data service contain the data for image acquisition, image sharing, and image analysis, and check that the support-notification service holds the "crowded" notification data after the crowded rule is set.

The Robot Framework should report success for all test cases.
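The stored readings can also be inspected manually through the core-data REST API. A sketch, assuming the EdgeX v2 API on the default core-data port 59880; the port and API version depend on the deployed EdgeX release:

# Fetch the most recent readings from core-data; events from image
# acquisition, image sharing, and image analysis should be present.
$ curl "http://master:59880/api/v2/reading/all?limit=5"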

Test Results

Nexus URL: 


Pass (9/9 test cases)

Feature Project Tests

N/A

BluVal Tests

BluVal tests for Lynis, Vuls, and Kube-Hunter were executed on the test bed.

The Test inputs

Bluval User Guide

Steps To Implement Security Scan Requirements

https://vuls.io/docs/en/tutorial-docker.html

Test Procedure

  1. Copy the folder ~/.kube from the Kubernetes master node to the Build VM
  2. Create an SSH key on the Build VM to access the Kubernetes master node (as sketched below)
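A sketch of the key setup in step 2, assuming the master node user and address shown in config.toml below; adjust both to the actual test bed:

# Generate a key pair on the Build VM and install the public key
# on the Kubernetes master node.
$ ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
$ ssh-copy-id test-user@192.168.51.22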
Vuls

We use Ubuntu 20.04, so we run the Vuls test as follows:

  1. Create directory

    $ mkdir ~/vuls
    $ cd ~/vuls
    $ mkdir go-cve-dictionary-log goval-dictionary-log gost-log

  2. Fetch NVD

    $ docker run --rm -it \
        -v $PWD:/go-cve-dictionary \
        -v $PWD/go-cve-dictionary-log:/var/log/go-cve-dictionary \
        vuls/go-cve-dictionary fetch nvd

  3. Fetch OVAL

    $ docker run --rm -it \
        -v $PWD:/goval-dictionary \
        -v $PWD/goval-dictionary-log:/var/log/goval-dictionary \
        vuls/goval-dictionary fetch ubuntu 16 17 18 19 20

  4. Fetch gost

    $ docker run --rm -i \
        -v $PWD:/gost \
        -v $PWD/gost-log:/var/log/gost \
        vuls/gost fetch ubuntu

  5. Create config.toml (an optional connectivity check is sketched after this list)

    [servers]

    [servers.master]
    host = "192.168.51.22"
    port = "22"
    user = "test-user"
    keyPath = "/root/.ssh/id_rsa" # path to ssh private key in docker
    


  6. Start vuls container to run tests

    $ docker run --rm -it \
        -v ~/.ssh:/root/.ssh:ro \
        -v $PWD:/vuls \
        -v $PWD/vuls-log:/var/log/vuls \
        -v /etc/localtime:/etc/localtime:ro \
        -v /etc/timezone:/etc/timezone:ro \
        vuls/vuls scan \
        -config=./config.toml

  7. Get the report

    $ docker run --rm -it \
        -v ~/.ssh:/root/.ssh:ro \
        -v $PWD:/vuls \
        -v $PWD/vuls-log:/var/log/vuls \
        -v /etc/localtime:/etc/localtime:ro \
        vuls/vuls report \
        -format-list \
        -config=./config.toml
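Before the scan in step 6, the server definition in config.toml can be verified with vuls configtest, which checks SSH connectivity and scan prerequisites without scanning. A minimal sketch using the same mounts as the scan step:

    $ docker run --rm -it \
        -v ~/.ssh:/root/.ssh:ro \
        -v $PWD:/vuls \
        -v $PWD/vuls-log:/var/log/vuls \
        vuls/vuls configtest \
        -config=./config.toml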
    


Lynis/Kube-Hunter

  1. Create ~/validation/bluval/bluval-sdtfc.yaml to customize the Test

    blueprint:
        name: sdtfc
        layers:
            - os
            - k8s
        os: &os
            -
                name: lynis
                what: lynis
                optional: "False"
        k8s: &k8s
            -
                name: kube-hunter
                what: kube-hunter
                optional: "False"

  2. Update ~/validation/bluval/volumes.yaml file

    volumes:
        # location of the ssh key to access the cluster
        ssh_key_dir:
            local: '/home/ubuntu/.ssh'
            target: '/root/.ssh'
        # location of the k8s access files (config file, certificates, keys)
        kube_config_dir:
            local: '/home/ubuntu/kube'
            target: '/root/.kube/'
        # location of the customized variables.yaml
        custom_variables_file:
            local: '/home/ubuntu/validation/tests/variables.yaml'
            target: '/opt/akraino/validation/tests/variables.yaml'
        # location of the bluval-<blueprint>.yaml file
        blueprint_dir:
            local: '/home/ubuntu/validation/bluval'
            target: '/opt/akraino/validation/bluval'
        # location on where to store the results on the local jumpserver
        results_dir:
            local: '/home/ubuntu/results'
            target: '/opt/akraino/results'
        # location on where to store openrc file
        openrc:
            local: ''
            target: '/root/openrc'

    # parameters that will be passed to the container at each layer
    layers:
        # volumes mounted at all layers; volumes specific for a different layer are below
        common:
            - custom_variables_file
            - blueprint_dir
            - results_dir
        hardware:
            - ssh_key_dir
        os:
            - ssh_key_dir
        networking:
            - ssh_key_dir
        docker:
            - ssh_key_dir
        k8s:
            - ssh_key_dir
            - kube_config_dir
        k8s_networking:
            - ssh_key_dir
            - kube_config_dir
        openstack:
            - openrc
        sds:
        sdn:
        vim:

  3. Update ~/validation/tests/variables.yaml file

    ### Input variables
    host: <IP Address>             # cluster's master host address
    username: <username>           # login name to connect to cluster
    password: <password>           # login password to connect to cluster
    ssh_keyfile: /root/.ssh/id_rsa # Identity file for authentication

  4. Run Blucon

    $ bash validation/bluval/blucon.sh sdtfc
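After blucon.sh completes, the reports should be available in the results directory mapped in volumes.yaml. A quick sketch, assuming the local path configured above; the per-layer layout is illustrative:

    # Results land in results_dir (local: /home/ubuntu/results),
    # organized by layer (os for Lynis, k8s for Kube-Hunter).
    $ ls ~/results
    k8s  os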
    

Expected output

BluVal tests should report success for all test cases.

Test Results

Vuls results (manual) Nexus URL: 

Lynis results (manual) Nexus URL: 

Kube-Hunter results Nexus URL: 

Vuls

Nexus URL: 

There are 5 CVEs with a CVSS score >= 9.0. Exceptions for these are requested here:

Release 7: Akraino CVE and KHV Vulnerability Exception Request

CVE-ID | CVSS | NVD | Fix/Notes
CVE-2016-1585 | 9.8 | https://nvd.nist.gov/vuln/detail/CVE-2016-1585 | No fix available; Ubuntu CVE record; TODO: File exception request
CVE-2022-0318 | 9.8 | https://nvd.nist.gov/vuln/detail/CVE-2022-0318 | Fix not yet available; Ubuntu CVE record; TODO: File exception request
CVE-2022-1927 | 9.8 | https://nvd.nist.gov/vuln/detail/CVE-2022-1927 | Fix not yet available; Ubuntu CVE record; TODO: File exception request
CVE-2022-20385 | 9.8 | https://nvd.nist.gov/vuln/detail/CVE-2022-20385 | No fix available; Ubuntu CVE record; TODO: File exception request
CVE-2022-37434 | 9.8 | https://nvd.nist.gov/vuln/detail/CVE-2022-37434 | No fix available (for zlib1g, zlib1g-dev); Ubuntu CVE record; TODO: File exception request

Lynis

Nexus URL (run via Bluval, without fixes): 

...