...

- An appropriate mariadb instance is up and running (look at the Database subsection).
This prerequisite concerns both of the UI modes.
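
A quick way to verify this prerequisite from the host is to issue a trivial query against the instance (a minimal check; the user, password, and container IP follow the deployment described in the Database subsection):

~$ mysql -p<MARIADB_AKRAINO_PASSWORD> -uakraino -h <IP of the mariadb container> -e "show databases;"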

- The available labs and their silos (i.e. which silo is used by a lab in order to store results in Nexus) for blueprint validation execution are defined by the corresponding lab owners (look at the Database subsection). It is their responsibility to publish them. Currently, this data is statically stored in the blueprint validation UI mariadb database. In order for a lab owner to update it, he/she must update the corresponding table entries. This inconvenience will be handled in the future.
This prerequisite concerns only the full control loop mode.
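
For illustration, a lab owner could adjust such an entry directly in the database (a hypothetical statement; the lab table and its columns are the ones described in the Database subsection, and the values are placeholders):

~$ mysql -p<MARIADB_AKRAINO_PASSWORD> -uakraino -h <IP of the mariadb container> -e "use akraino; update lab set silo='<new silo name>' where lab='att';"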


...

- The available timeslots for blueprint validation execution of every lab are defined by the corresponding lab owners (look at the Database subsection). It is their responsibility to publish them. Currently, this data is statically stored in the blueprint validation UI mariadb database. In order for a lab owner to update it, he/she must update the corresponding table entries. This inconvenience will be handled in the future.
This prerequisite concerns only the full control loop mode.

- The data of available blueprints (i.e. blueprint name) is stored in the mariadb database (look at the Database subsection). This data is automatically updated using info from Nexus. If a blueprint owner is not satisfied with this info, he/she must update the corresponding table entries.
This prerequisite concerns only the full control loop mode.

- The data of an available blueprint instance for validation (i.e. version and layer) is stored in the mariadb database (look at the Database subsection). This data is automatically updated using info from Nexus. If a blueprint owner is not satisfied with this info, he/she must update the corresponding table entries.
This prerequisite concerns only the full control loop mode.

- A Jenkins instance exists capable of executing blueprint validation tests on the specified lab and storing the results to the Nexus server (look at the Jenkins configuration subsection).
This prerequisite concerns only the full control loop mode.

- A Nexus server exists where all the blueprint validation results are stored (look at the Nexus subsection).
This prerequisite concerns both of the UI modes.

- The whole installation and deployment of a blueprint and its corresponding blueprint family components (i.e. the appropriate edge cloud stack with its combination of infrastructure hardware components, OS, K8s, software, etc.) are already performed in the appropriate lab.
Recall that multiple labs can be used for a specific blueprint validation. Also, it is the responsibility of the blueprint submitter to ensure that the edge validation and community CI labs can support comprehensive validation of the blueprint and cover all use case characteristics.
This prerequisite concerns both of the UI modes.

Developer's guide

Download the project

~$ git clone "https://gerrit.akraino.org/r/validation"

Prerequisites

Tools

In order to set up the development environment, the following tools are needed:


- JDK 1.8
- Maven
- docker
- MySQL client

Execute the commands below in order to install these tools (note that the <PROXY_IP> and <PROXY_PORT> placeholders must be substituted with the values used by the hosting operating system).

If the host is behind a proxy, define this proxy using the following commands:

~$ sudo touch /etc/apt/apt.conf.d/proxy.conf
~$ sudo sh -c 'echo "Acquire::http::proxy \"http://<PROXY_IP>:<PROXY_PORT>/\";" >> /etc/apt/apt.conf.d/proxy.conf'
~$ sudo sh -c 'echo "Acquire::https::proxy \"https://<PROXY_IP>:<PROXY_PORT>/\";" >> /etc/apt/apt.conf.d/proxy.conf'
~$ sudo sh -c 'echo "Acquire::ftp::proxy \"ftp://<PROXY_IP>:<PROXY_PORT>/\";" >> /etc/apt/apt.conf.d/proxy.conf'
~$ sudo apt-get update
~$ export http_proxy=http://<PROXY_IP>:<PROXY_PORT>
~$ export https_proxy=http://<PROXY_IP>:<PROXY_PORT>
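
Note that the two export commands above only affect the current shell session. If the proxy settings should survive a re-login, they can also be appended to the shell profile (a minimal sketch, assuming a bash environment):

~$ echo "export http_proxy=http://<PROXY_IP>:<PROXY_PORT>" >> ~/.bashrc
~$ echo "export https_proxy=http://<PROXY_IP>:<PROXY_PORT>" >> ~/.bashrc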


Install jdk and maven using the following commands:

~$ sudo apt install default-jdk
~$ sudo apt install maven

If the host is behind a proxy, configure this proxy for maven:

~$ nano ~/.m2/settings.xml

<Paste the following lines>

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">

<proxies>
<proxy>
<active>true</active>
<protocol>http</protocol>
<host><PROXY_IP></host>
<port><PROXY_PORT></port>
<nonProxyHosts>127.0.0.1|localhost</nonProxyHosts>
</proxy>
<proxy>
<id>https</id>
<active>true</active>
<protocol>https</protocol>
<host><PROXY_IP></host>
<port><PROXY_PORT></port>
<nonProxyHosts>127.0.0.1|localhost</nonProxyHosts>
</proxy>
</proxies>
</settings>

<Save and exit from nano>


Install docker using the following commands:

~$ sudo apt install docker.io
~$ sudo groupadd docker
~$ sudo gpasswd -a $USER docker
~$ newgrp docker
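
At this point, it is worth verifying that docker can be used by the current user without sudo (a quick sanity check; the hello-world image is used purely as an example):

~$ docker run hello-world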


If the host is behind a proxy, configure docker to use this proxy:

~$ sudo mkdir -p /etc/systemd/system/docker.service.d
~$ sudo nano /etc/systemd/system/docker.service.d/http-proxy.conf

<Paste the following lines>

[Service]
Environment="HTTP_PROXY=http://<PROXY_IP>:<PROXY_PORT>/"

<Save and exit from nano>

~$ sudo systemctl daemon-reload
~$ sudo systemctl restart docker
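
The proxy configuration can be verified through systemd itself (a quick check; the output should contain the HTTP_PROXY value defined above):

~$ sudo systemctl show --property=Environment docker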


Install mySQL client:

~$ sudo apt install mysql-client
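
The installed tools can be verified with the following commands (the exact version output will vary with the distribution):

~$ java -version
~$ mvn -version
~$ docker --version
~$ mysql --version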

Database

A mariadb database instance is needed for both modes of the UI with the appropriate databases and tables in order for the back-end system to store and retrieve data.

The pom.xml file supports the creation of an appropriate docker image for development purposes. The initialization scripts reside under the db-scripts directory.

Also, a script has been developed, namely validation/docker/mariadb/deploy.sh, which easily deploys the container. This script accepts the following items as input parameters:

- CONTAINER_NAME, the name of the container, default value is akraino-validation-mariadb
- MARIADB_ROOT_PASSWORD, the desired mariadb root user password, this variable is required
- MARIADB_AKRAINO_PASSWORD, the desired mariadb akraino user password, this variable is required
- UI_ADMIN_PASSWORD, the desired Blueprint Validation UI password for the admin user, this variable is required
- UI_AKRAINO_PASSWORD, the desired Blueprint Validation UI password for the akraino user, this variable is required
- REGISTRY, the registry of the mariadb image, default value is akraino
- NAME, the name of the mariadb image, default value is validation
- TAG_PRE, the first part of the image version, default value is mariadb
- TAG_VER, the last part of the image version, default value is latest
- MARIADB_HOST_PORT, the port on which mariadb is exposed on the host, default value is 3307

Currently, two users are supported by the UI, namely admin (full privileges) and akraino (limited privileges). Their passwords must be defined in the database.

In order to build and deploy the image using only the required parameters, the below instructions should be followed:

The mariadb root password, the mariadb akraino user password (currently the UI connects to the database using the akraino user), the UI admin password and the UI akraino password should be configured using the appropriate variables, and the following commands should be executed:

~$ cd validation/ui
~$ mvn docker:build -Ddocker.filter=akraino/validation:dev-mariadb-latest
~$ cd ../docker/mariadb
~$ ./deploy.sh TAG_PRE=dev-mariadb MARIADB_ROOT_PASSWORD=<mariadb root user password> MARIADB_AKRAINO_PASSWORD=<mariadb akraino user password> UI_ADMIN_PASSWORD=<UI admin user password> UI_AKRAINO_PASSWORD=<UI akraino user password>
~$ mysql -p<MARIADB_AKRAINO_PASSWORD> -uakraino -h <IP of the mariadb container> < ../../ui/db-scripts/examples/initialize_db_example.sql
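
After deployment, the status of the container can be checked as follows (a quick verification; the name assumes the default CONTAINER_NAME value):

~$ docker ps --filter name=akraino-validation-mariadb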

In order to retrieve the IP of the mariadb container, the following command should be executed:

~$ docker inspect <name of the mariadb container>
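
As the full inspect output is long, the IP address can also be extracted directly using docker's built-in formatting (a sketch; the container name assumes the default CONTAINER_NAME value):

~$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' akraino-validation-mariadb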

Furthermore, the TAG_PRE variable should be defined because the default value is 'mariadb' (note that 'dev-mariadb' is used for development purposes - look at the pom.xml file).

If the database must be re-deployed (it is assumed that the corresponding mariadb container has been stopped and deleted) while the persistent storage already exists (currently, the 'akraino-validation-mariadb' docker volume is used), a different approach should be used after the image building process.

To this end, another script has been developed, namely validation/docker/mariadb/deploy_with_existing_persistent_storage.sh, which easily deploys the container using the existing persistent storage. This script accepts the following as input parameters:

- CONTAINER_NAME, the name of the container, default value is akraino-validation-mariadb
- REGISTRY, the registry of the mariadb image, default value is akraino
- NAME, the name of the mariadb image, default value is validation
- TAG_PRE, the first part of the image version, default value is mariadb
- TAG_VER, the last part of the image version, default value is latest
- MARIADB_HOST_PORT, the port on which mariadb is exposed on the host, default value is 3307

In order to deploy the image using only the required parameters and the existing persistent storage, the below instructions should be followed:

The mariadb root user password (currently the UI connects to the database using root privileges) should be configured using the appropriate variable and the following commands should be executed:

~$ cd validation/docker/mariadb
~$ ./deploy_with_existing_persistent_storage.sh TAG_PRE=dev-mariadb

Finally, if the database must be re-deployed (it is assumed that the corresponding mariadb container has been stopped and deleted) and the old persistent storage must be deleted, the docker volume in use should be deleted first (note that all of the database's data will be lost).

To this end, after the image build process, the following commands should be executed:

~$ docker volume rm akraino-validation-mariadb
~$ cd validation/docker/mariadb
~$ ./deploy.sh TAG_PRE=dev-mariadb MARIADB_ROOT_PASSWORD=<root user password> MARIADB_AKRAINO_PASSWORD=<mariadb akraino user password> UI_ADMIN_PASSWORD=<UI admin user password> UI_AKRAINO_PASSWORD=<UI akraino user password>
~$ mysql -p<MARIADB_AKRAINO_PASSWORD> -uakraino -h <IP of the mariadb container> < ../../ui/db-scripts/examples/initialize_db_example.sql
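
To confirm that the initialization succeeded, the created tables can be listed (a quick check; 'akraino' is the database name used by the db-scripts):

~$ mysql -p<MARIADB_AKRAINO_PASSWORD> -uakraino -h <IP of the mariadb container> -e "use akraino; show tables;"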


In the context of the full control loop mode, the following tables must be initialized with appropriate data:

- lab (here every lab owner should store the name of the lab and the silo used for storing results in Nexus)
- timeslot (here every lab owner should register the available timeslots that can be used for blueprint validation test execution)
- blueprint_layer (here all the blueprint layers should be registered. These layers will be referenced by the blueprint instances)
- blueprint (here every blueprint owner should register the name of the blueprint)
- blueprint_instance_for_validation (here every blueprint owner should register the blueprint instances for validation, i.e. version and layer)
- blueprint_instance_blueprint_layer (here the many-to-many relationship between blueprint instances and layers is formulated)

As already mentioned, these tables are initialized automatically by the UI by fetching data from Nexus.

However, a user may wish to extend or change this data (for example, a new blueprint has been created and no results have been pushed to Nexus yet). To this end, the following file can be used (this is why the command 'mysql -p<MARIADB_AKRAINO_PASSWORD> -uakraino -h <IP of the mariadb container> < ../../ui/db-scripts/examples/initialize_db_example.sql' has been used previously):

db-scripts/examples/initialize_db_example.sql

Some of this data is illustrated below (refer to 'org.akraino.validation.ui.data' package for more info regarding available values):

Labs:
id: 1, lab: 'att', silo: 'att-blu-val'

Timeslots:
id: 1, start date and time: 'now', duration: null, lab: 1

Blueprint layers:
id: 1, layer: 'hardware'

Blueprints:
id: 2, blueprint_name: 'rec'

Blueprint Instances:
id: 2, blueprint_id: 2 (i.e. rec), version: 'master'

Blueprint instance - blueprint layer relations:
blueprint_instance_id: 2 (i.e. rec), layer_id: 1 (i.e. hardware)

It should be noted that currently the start date and time and the duration of the timeslot are not taken into account by the UI (see the limitations subsection). Therefore, a user should define 'now' and null, respectively, for their content.

Based on this data, the UI enables the user to select an appropriate blueprint instance for validation.
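
For illustration, the blueprint instances offered for selection can be inspected with a query across these tables (a hypothetical query; the join columns are inferred from the table descriptions above and may differ from the actual schema):

~$ mysql -p<MARIADB_AKRAINO_PASSWORD> -uakraino -h <IP of the mariadb container> -e "use akraino; select b.blueprint_name, bi.version, bl.layer from blueprint b, blueprint_instance bi, blueprint_layer bl, blueprint_instance_blueprint_layer bibl where bi.blueprint_id = b.id and bibl.blueprint_instance_id = bi.id and bibl.layer_id = bl.id;"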

For example, if a user wants to define a new lab with the following data:

lab: 'community', silo: 'community'

the following file should be created:

name: dbscript
content:

SET FOREIGN_KEY_CHECKS=1;
use akraino;
insert into lab (id, lab, silo) values(2, 'community', 'community');

2 is the id assigned to the new community lab.

Then, the following command should be executed:

...

For example, if a user wants to define a new timeslot with the following data:

start date and time: 'now', duration: null, lab: community

the following file should be created:

...

SET FOREIGN_KEY_CHECKS=1;
use akraino;
insert into timeslot values(2, 'now', null, 2);

2 is the id of the community lab.

...

Furthermore, if a user wants to define a new blueprint, namely "newBlueprint", and a new instance of this blueprint with the following data:

version: "latestmaster", layer: 2 (i.e. K8s), layer_description: "K8s with High Availability Ingress controller"k8s

the following file should be created:

...

SET FOREIGN_KEY_CHECKS=1;
use akraino;
insert into blueprint (id, blueprint_name) values(3, 'newBlueprint');
insert into blueprint_instance (id, blueprint_id, version) values(3, 3, 'master');
insert into blueprint_layer (id, layer) values(4, 'k8s');
insert into blueprint_instance_blueprint_layer (blueprint_instance_id, layer_id) values(3, 4);


Then, the following command should be executed:

...

Also, currently, the corresponding Jenkins job should accept the following as input parameters: "SUBMISSION_ID", "BLUEPRINT", "VERSION", "LAYER", "OPTIONAL", "LAB" and "UI_IP".
The "SUBMISSION_ID" and "UI_IP" parameters (i.e. IP address of the UI host machine-this is needed by the Jenkins instance in order to send back Job completion notification) are created and provided by the back-end part of the UI.
The "BLUEPRINT", "VERSION", "LAYER" and "LAB" parameters are configured by the UI user. The parameter "OPTIONAL" defines whether the optional test cases should be included or not.

Moreover, as the Jenkins notification plugin (https://wiki.jenkins.io/display/JENKINS/Notification+Plugin) seems to ignore proxy settings, the corresponding Jenkins job must be configured to execute the following commands at the end (Post-build Actions):

...


- The UI has been tested using Chrome and Firefox browsers.
- The back-end part of the UI does not take into account the start date and time and duration of the configured timeslot. It immediately triggers the corresponding Jenkins Job.
- Results data manipulation (filtering, graphical representation, indexing in time order, etc) is not supported.
- The silos, labs, and the available blueprints and timeslots must be manually configured in the mariadb database.