Instructions for installing Airship+Tungsten Fabric using the Regional Controller and the TF Blueprint
Requirements:
- Regional Controller: AWS t2.medium instance, or any virtual or bare-metal node with 2 CPUs and 4 GB of memory (OS: Ubuntu Xenial 16.04)
- Airship+Tungsten Fabric host: AWS m5.4xlarge instance, or any virtual or bare-metal node with 16 CPUs and 64 GB of memory (OS: Ubuntu Xenial 16.04)
Both nodes must be accessible via SSH.
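For example, you can confirm SSH reachability of the Airship+Tungsten Fabric host from the Regional Controller with a quick check such as the one below (the ubuntu user and key path are assumptions for a stock Ubuntu image; adjust them for your nodes):
# Assumed user name and key path; adjust for your environment
ssh -i ~/.ssh/id_rsa ubuntu@<airship-host-ip> 'hostname && lsb_release -d'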
Overview
The Akraino Regional Controller is a necessary part of the Release 2 deployment procedure. It is an Akraino-approved blueprint that is common to all Release 2 blueprints and is used for Edge Site, Blueprint and POD deployment.
After the POD is created, the Regional Controller creates a WORKFLOW that initiates the remote installation of Airship with Tungsten Fabric.
More information about Regional Controller:
- The Regional Controller's Object Model (it helps you understand what a Blueprint, an Edge Site, a POD and a Workflow are)
- How to write Blueprints and Workflows
- How to load objects into the Regional Controller
- Frequently Asked Questions
If you already have a Regional Controller, you can use it for the deployment; if you do not, you can install one by following the instructions below.
Installing the Regional Controller
You can use any machine or VM dedicated to this purpose. See the instructions on how to start the Regional Controller:
https://wiki.akraino.org/display/AK/Starting+the+Regional+Controller
After you have a working Regional Controller, log in to it and follow the steps below.
Steps for manual installation of the Tungsten Fabric Blueprint
All of these steps must be performed on the Regional Controller.
Clone the nc/tf repository using
git clone https://gerrit.akraino.org/r/nc/tf
Set up SSH keys and put them on a web server
The Regional Controller connects to the remote node via SSH, so it needs the SSH private key. The key can be provided as an HTTP URL. (This is not secure for production; it is only acceptable for the demo.)
Put the SSH private key and the deploy.sh script on some web server. The SSH public key must be added to .ssh/authorized_keys on the remote node.
Hint: Python's built-in web server can be used on localhost: python3 -m http.server
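A minimal sketch of this step, assuming a fresh dedicated key pair and that the key and deploy.sh (from the cloned nc/tf repository) are served from a local www directory; the key file name, port and ubuntu user are illustrative:
# Generate a dedicated key pair (no passphrase; demo only)
ssh-keygen -t rsa -b 4096 -N '' -f ./tf_demo_key
# Add the public key to .ssh/authorized_keys on the remote node
ssh-copy-id -i ./tf_demo_key.pub ubuntu@<airship-host-ip>
# Serve the private key and deploy.sh over HTTP (not secure; demo only)
mkdir -p www
cp tf_demo_key deploy.sh www/
cd www && python3 -m http.server 8000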
Edit the file setup-env.sh
Update all the environment variables that define the node IP addresses, the web server base URL, etc. Define where the Regional Controller is located, as well as the login/password to use
(the login/password shown here are the built-in values and do not need to be changed if you have not changed them on the Regional Controller):
export RC_HOST=<IP or FQDN name of Regional Controller>
export RC_USER=admin
export RC_PW=admin123
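The file also carries values such as the remote node address and the base URL of the web server that hosts the SSH key and deploy.sh. The variable names in the sketch below are illustrative only; use the names actually defined in setup-env.sh from the nc/tf repository:
# Illustrative variable names -- check setup-env.sh for the real ones
export NODE_IP=<IP of the Airship+Tungsten Fabric host>    # node the workflow will install to
export BASE_URL=http://<web-server-ip>:8000                # serves the SSH private key and deploy.sh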
Generate yaml files from templates
source setup-env.sh
cat objects.yaml.env | envsubst > objects.yaml
cat TF_blueprint.yaml.env | envsubst > TF_blueprint.yaml
Clone the api-server repository.
This provides the CLI tools used to interact with the Regional Controller. Add the scripts from this repository to your PATH:
git clone https://gerrit.akraino.org/r/regional_controller/api-server
export PATH=$PATH:$PWD/api-server/scripts
Load the objects
Load objects defined in objects.yaml into the Regional Controller using:
rc_loaddata -H $RC_HOST -u $RC_USER -p $RC_PW -A objects.yaml
Load the blueprint
Load the blueprint into the Regional Controller using:
rc_cli -H $RC_HOST -u $RC_USER -p $RC_PW blueprint create TF_blueprint.yaml
Get and export UUIDs
Get the UUIDs of the edgesite and the blueprint from the Regional Controller using:
rc_cli -H $RC_HOST -u $RC_USER -p $RC_PW blueprint list
rc_cli -H $RC_HOST -u $RC_USER -p $RC_PW edgesite list
These are needed to create the POD. You will also see the UUID of the Blueprint displayed when you create the Blueprint in the "Load the blueprint" step above (it is at the tail end of the URL that is printed).
Set and export them as the environment variables ESID and BPID.
export ESID=<UUID of edgesite in the RC>
export BPID=<UUID of blueprint in the RC>
Generate POD.yaml
cat POD.yaml.env | envsubst > POD.yaml
Create the POD
Create the POD using
rc_cli -H $RC_HOST -u $RC_USER -p $RC_PW pod create POD.yaml
This will cause the POD to be created and the deploy.sh workflow script to be
run on the Regional Controller's workflow engine. This in turn will log in to the remote node via SSH
and install the Airship + Tungsten Fabric demo on it.
Checking POD status
If you want to monitor the ongoing progress of the installation, you can issue periodic calls
to monitor the POD with:
rc_cli -H $RC_HOST -u $RC_USER -p $RC_PW pod show $PODID
where $PODID is the UUID of the POD. This will show all the messages logged by the workflow, as well as the current status of the workflow. The status will be WORKFLOW while the workflow is running; it will change to ACTIVE if the workflow completes successfully, or to FAILED if the workflow fails.
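If you prefer not to run this command by hand, a simple polling loop such as the sketch below prints the status periodically (the 60-second interval is arbitrary, and PODID is assumed to be exported already):
# Poll the POD status every 60 seconds; stop with Ctrl-C once it reaches ACTIVE or FAILED
while true; do
    rc_cli -H $RC_HOST -u $RC_USER -p $RC_PW pod show $PODID
    sleep 60
done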