Table of Contents
Instructions for
...
MANUAL installation of Airship+Tungsten Fabric using the Regional Controller and the TF Blueprint
Requirements:
- For the Regional Controller host: an AWS t2.medium instance, or any virtual or bare-metal node with 2 CPUs and 4 GB of memory (OS: Ubuntu Xenial 16.04)
- For the Airship+Tungsten Fabric host: an AWS m5.4xlarge instance, or any virtual or bare-metal node with 16 CPUs and 64 GB of memory (OS: Ubuntu Xenial 16.04)
Both nodes must be accessible via SSH.
This document describes the detailed manual installation procedure. As an option, you can use the automatic deployment with Ansible to get the same environment as the one used for CI/CD validation.
Overview
The Akraino Regional Controller is a necessary part of the Release 2 deployment procedure. It is an Akraino-approved blueprint that is common to all Release 2 blueprints and is used for Edge Site, Blueprint and POD deployment.
...
More information about the Regional Controller:
- The Regional Controller's Object Model (it helps you understand what a Blueprint, an Edge Site, a POD, and a Workflow are)
- How to write Blueprints and Workflows
- How to load objects into the Regional Controller
- Frequently Asked Questions
...
You can use any machine or VM dedicated to this purpose. See the instructions on how to start the Regional Controller:
https://wiki.akraino.org/display/AK/Starting+the+Regional+Controller
After you have a working Regional Controller, log in to it and follow the steps below.
...
Update all the environment variables: define the IP addresses of the nodes, the web server base URL, etc., and define where the Regional Controller is located, as well as the login/password to use. These variables live in setup-env.sh, which is sourced in the next step.
Mandatory variables:
- RC_HOST - Regional Controller IP
- NODE - IP address of the node where airship-in-a-bottle with Tungsten Fabric will be deployed
- BASE_URL - URL used to download the SSH key and the deploy.sh script
(The login/password shown here are the built-in values and do not need to be changed if you have not changed them on the Regional Controller.)
Code Block
# Regional Controller (IP address or FQDN)
export RC_HOST=35.181.44.122
# Regional Controller credentials
export RC_USER=admin
export RC_PW=admin123
# Node for Airship remote deployment (IP address or FQDN)
export NODE=52.47.109.251
# SSH user for Airship remote deployment
export SSH_USER=ubuntu
# File with the private SSH key (the public key must be added to the node for auth)
export SSH_KEY=ssh_key.pem
# Web server for downloading scripts and the SSH key
# (the simplest way is to run "python3 -m http.server" in the current directory)
export BASE_URL=http://172.31.37.160:8000
# Repo URL and branch for treasuremap with Tungsten Fabric
export REPO_URL=https://github.com/progmaticlab/treasuremap.git
export REPO_BRANCH=master
Generate yaml files from templates
Code Block
source setup-env.sh
cat objects.yaml.env | envsubst > objects.yaml
cat TF_blueprint.yaml.env | envsubst > TF_blueprint.yaml
As a result, you get the rendered YAML files objects.yaml and TF_blueprint.yaml:
Code Block
ubuntu@ip-172-31-37-160:/opt/akraino-tf$ cat objects.yaml
hardware:
AWS_instance:
uuid: 5367a004-71d4-11e9-8bda-0017f00dbff7
description: AWS Ubuntu Xenial for the TF Blueprint
yaml:
todo: AWS instance with >=8 VCPU and >=32GB RAM
edgesites:
TF_Edgesite:
description: The demo singlenode TF cluster
nodes: [ node1 ]
regions: [ 00000000-0000-0000-0000-000000000000 ]
nodes:
node1:
hardware: AWS_instance
yaml:
oob_ip: 52.47.109.251
ubuntu@ip-172-31-37-160:/opt/akraino-tf$ cat TF_blueprint.yaml
blueprint: 1.0.0
name: TF Edge Cloud
version: 1.0.0
description: This Blueprint defines an instance of the TF Edge Cloud
yaml:
# Required hardware profiles (can match on either UUID or name)
# Note: UUIDs would likely require a global registry of HW profiles.
hardware_profile:
or:
- { uuid: 5367a004-71d4-11e9-8bda-0017f00dbff7 }
workflow:
# Workflow that is invoked when the POD is created
create:
url: 'http://172.31.37.160:8000/deploy.sh'
components:
# SSH key for remote installation
- 'http://172.31.37.160:8000/ssh_key.pem'
input_schema:
rc_host: { type: string }
ssh_user: {type: string }
node: {type: string }
repo_url: {type: string }
repo_branch: {type: string }
Clone the api-server repository
...
(optional)
(If you are working on the Regional Controller, this repository should already be present in /opt/api-server/scripts.)
This provides the CLI tools used to interact with the Regional Controller. Add the scripts from this repository to your PATH:
...
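If you need to fetch the repository yourself, a minimal sketch of this step is shown below; the clone URL is an assumption (on the Regional Controller the scripts are already under /opt/api-server/scripts), so adapt it to wherever you obtain the api-server sources.
Code Block
# Assumed clone URL; on the Regional Controller itself the repository is
# already present under /opt/api-server.
git clone https://gerrit.akraino.org/r/regional_controller/api-server
# Make the CLI tools (rc_cli) available in the current shell:
export PATH=$PATH:$(pwd)/api-server/scripts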
Code Block
cat POD.yaml.env | envsubst > POD.yaml
As a result, you get the rendered POD.yaml:
Code Block
name: My_TF_Edge_Cloud_POD
description: Put a description of the POD here.
blueprint: 76c27993-1cc3-471d-8d32-45f1c7c7a753
edgesite: 52783249-45e2-4e34-831d-c46ff5170ae5
yaml:
  rc_host: 35.181.44.122
  node: 52.47.109.251
  ssh_user: ubuntu
  repo_url: https://github.com/progmaticlab/treasuremap.git
  repo_branch: master
Please check that POD.yaml contains the correct data.
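As a quick sanity check (a suggestion, not part of the original procedure): envsubst replaces references to unset variables with empty strings, so it is worth confirming that the substituted fields are not blank before creating the POD.
Code Block
# Print the substituted fields from POD.yaml for visual inspection;
# every line should carry a non-empty value taken from setup-env.sh.
grep -E 'rc_host|node|ssh_user|repo_url|repo_branch|blueprint|edgesite' POD.yaml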
Create the POD
Create the POD using
...
where $PODID is the UUID of the POD. This shows all the messages logged by the workflow, as well as the current status of the workflow. The status is WORKFLOW while the workflow is running; it changes to ACTIVE if the workflow completes successfully, or to FAILED if the workflow fails.
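The exact commands are not spelled out above; the sketch below is an assumption that the rc_cli subcommands pod create and pod show follow the same pattern as the pod delete call used in the Uninstall section. Verify against the rc_cli usage text in /opt/api-server/scripts.
Code Block
# Assumed subcommands, modelled on the delete commands shown below.
# Create the POD from the generated POD.yaml; note the UUID it reports.
rc_cli -H $RC_HOST -u $RC_USER -p $RC_PW pod create POD.yaml

# Show the workflow messages and current status (WORKFLOW / ACTIVE / FAILED).
export PODID=<UUID reported by the create command>
rc_cli -H $RC_HOST -u $RC_USER -p $RC_PW pod show $PODID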
Uninstall
Uninstall Regional Controller
As we are using one-time AWS instances, they can simply be removed with the AWS console or whatever tool was used to create them (Ansible, Terraform, etc.).
In other cases, the following commands can be used for the manual cleanup procedure.
Deleting POD from Regional Controller
Code Block
rc_cli -H $RC_HOST -u $RC_USER -p $RC_PW pod delete $PODID
Deleting Blueprint from Regional Controller
Code Block
rc_cli -H $RC_HOST -u $RC_USER -p $RC_PW blueprint delete $BPID
Uninstall Regional Controller itself
Code Block
# Stop and remove all containers and images used by the Regional Controller
sudo docker stop $(sudo docker ps -aq)
sudo docker rm $(sudo docker ps -aq)
sudo docker rmi $(sudo docker images -q)
# Remove the Regional Controller files
sudo rm -rf /opt/api-server/
sudo rm -rf /opt/akraino-tf/
Uninstall Airship
Airship-in-a-bottle does not provide any tools for uninstallation. Moreover, according to the documentation, it is not recommended to reuse a virtual instance after a failed deployment; it is better to remove the failed instance and create a new one before reinstalling.
So the best way to uninstall airship-in-a-bottle is to remove the Airship+Tungsten Fabric host via the AWS console.
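If you prefer the command line over the AWS console, a hypothetical example is shown below; the region and instance ID are placeholders you must replace with your own values.
Code Block
# Terminate the Airship+Tungsten Fabric instance via the AWS CLI
# (placeholder region and instance ID; substitute your own).
aws ec2 terminate-instances --region <your-region> --instance-ids <airship-instance-id>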