Pre-Installation Requirements
In order to use the playbooks, several preconditions must be fulfilled:
Time must be configured on all hosts (refer to "Configuring time")
Hosts for the Edge Controller (Kubernetes master) and Edge Nodes (Kubernetes workers) must have a proper and unique hostname (not localhost). This hostname must be specified in /etc/hosts (refer to "Setup static hostname").
Ansible inventory must be configured (refer to "Configuring inventory").
SSH keys must be exchanged with hosts (refer to "Exchanging SSH keys with hosts").
Proxy must be configured if needed (refer to "Setting proxy").
If a private repository is used, a GitHub token has to be set up (refer to "GitHub Token").
Configuring time
By default, CentOS ships with the chrony NTP client. It uses the default NTP servers listed below, which might not be available in certain networks:
0.centos.pool.ntp.org
1.centos.pool.ntp.org
2.centos.pool.ntp.org
3.centos.pool.ntp.org
Time must be synchronized between all of the nodes and the controller to allow correct certificate verification.
To change the default servers, run the following commands:
# Remove previously set NTP servers
sed -i '/^server /d' /etc/chrony.conf
# Allow significant time difference
# More info: https://chrony.tuxfamily.org/doc/3.4/chrony.conf.html
echo 'maxdistance 999999' >> /etc/chrony.conf
# Add new NTP server(s)
echo 'server <ntp-server-address> iburst' >> /etc/chrony.conf
# Restart chrony service
systemctl restart chronyd
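After running these commands, the NTP-related lines at the end of /etc/chrony.conf should look similar to the following (with <ntp-server-address> replaced by your actual server):
maxdistance 999999
server <ntp-server-address> iburst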
To verify that the time is synchronized correctly, run the following command:
chronyc tracking
Sample output:
Reference ID : 0A800239
Stratum : 3
Ref time (UTC) : Mon Dec 16 09:10:51 2019
System time : 0.000015914 seconds fast of NTP time
Last offset : -0.000002627 seconds
RMS offset : 0.000229037 seconds
Frequency : 4.792 ppm fast
Residual freq : -0.001 ppm
Skew : 0.744 ppm
Root delay : 0.008066391 seconds
Root dispersion : 0.003803928 seconds
Update interval : 130.2 seconds
Leap status : Normal
Setup static hostname
In order to set a custom static hostname, the following command can be used:
hostnamectl set-hostname <host_name>
Make sure the static hostname provided is proper and unique (refer to the K8s naming restrictions). The hostname provided needs to be defined in /etc/hosts as well:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 <host_name>
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 <host_name>
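To confirm that the static hostname has been applied, hostnamectl can be queried, e.g.:
hostnamectl status | grep 'Static hostname'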
Configuring inventory
In order to execute playbooks, inventory.ini must be configured to include the specific hosts to run the playbooks on.
The inventory contains three groups: all, edgenode_group, and controller_group:
all - contains all the hosts (with configuration) used in any playbook
controller_group - contains the host to be set up as a Kubernetes master / Edge Controller
WARNING: Since only one Controller is supported, controller_group can contain only 1 host.
edgenode_group - contains hosts to be set up as Kubernetes workers / Edge Gateways
NOTE: All nodes will be joined to the master specified in controller_group.
In the all group, the user can specify all of the hosts for use in the other groups. An example all group looks like:
[all]
ctrl ansible_ssh_user=root ansible_host=<host_ip_address>
node1 ansible_ssh_user=root ansible_host=<host_ip_address>
node2 ansible_ssh_user=root ansible_host=<host_ip_address>
Then the user can use the specified hosts in edgenode_group and controller_group, e.g.:
[edgenode_group]
node1
node2
[controller_group]
ctrl
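Putting the two fragments together, a complete inventory.ini for one controller and two nodes would look like this (IP addresses are placeholders):
[all]
ctrl ansible_ssh_user=root ansible_host=<host_ip_address>
node1 ansible_ssh_user=root ansible_host=<host_ip_address>
node2 ansible_ssh_user=root ansible_host=<host_ip_address>
[edgenode_group]
node1
node2
[controller_group]
ctrl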
Exchanging SSH keys with hosts
Exchanging SSH keys will allow for password-less SSH from the host running Ansible to the hosts being set up.
First, the host running Ansible must have an SSH key generated. An SSH key can be generated by executing ssh-keygen and following the program's output. Here's an example - the key is located in the standard location (/root/.ssh/id_rsa) and an empty passphrase is used.
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): <ENTER>
Enter passphrase (empty for no passphrase): <ENTER>
Enter same passphrase again: <ENTER>
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:vlcKVU8Tj8nxdDXTW6AHdAgqaM/35s2doon76uYpNA0 root@host
The key's randomart image is:
+---[RSA 2048]----+
|          .oo.==*|
|        . . o=oB*|
|     o . . ..o=.=|
|     . oE. . ... |
|      ooS.       |
|      ooo. .     |
|     . ...oo     |
|    . .*o+.. .   |
|     =O==.o.o    |
+----[SHA256]-----+
Then, the generated key must be copied to every host from the inventory. It is done by running ssh-copy-id, e.g.:
# ssh-copy-id root@host
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '<IP> (<IP>)' can't be established.
ECDSA key fingerprint is SHA256:c7EroVdl44CaLH/IOCBu0K0/MHl8ME5ROMV0AGzs8mY.
ECDSA key fingerprint is MD5:38:c8:03:d6:5a:8e:f7:7d:bd:37:a0:f1:08:15:28:bb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@host's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@host'"
and check to make sure that only the key(s) you wanted were added.
To make sure the key is copied successfully, try to SSH to the host: ssh 'root@host'. It should not ask for a password.
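Optionally, connectivity from the Ansible host to all inventory hosts can also be verified with Ansible's ping module (assuming the inventory.ini configured earlier):
ansible -i inventory.ini all -m ping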
Setting proxy
If a proxy is required in order to connect to the Internet, it can be configured in the group_vars/all.yml file. To enable the proxy, provide values for the proxy_ variables and set proxy_os_enable to true. Also append your network CIDR (e.g. 192.168.0.1/24) to proxy_os_noproxy.
Sample configuration:
proxy_yum_url: "http://proxy.example.com:3128/"
proxy_os_enable: true
proxy_os_remove_old: true
proxy_os_http: "http://proxy.example.com:3128"
proxy_os_https: "http://proxy.example.com:3128"
proxy_os_ftp: "http://proxy.example.com:3128"
proxy_os_noproxy: "localhost,127.0.0.1,10.244.0.0/24,10.96.0.0/12,192.168.0.1/24"
NOTE: Ensure the no_proxy environment variable in your profile is set, e.g.:
export no_proxy="localhost,127.0.0.1,10.244.0.0/24,10.96.0.0/12,192.168.0.1/24"
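If the proxy is also exported directly in the shell profile, a set of variables mirroring the sample configuration above might look like this (values are examples):
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"
export no_proxy="localhost,127.0.0.1,10.244.0.0/24,10.96.0.0/12,192.168.0.1/24"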
Setting Git
GitHub Token
NOTE: Only required when cloning private repositories. Not needed when using github.com/open-ness repositories.
In order to clone private repositories, a GitHub token must be provided.
To generate a GitHub token, refer to GitHub help - Creating a personal access token for the command line.
To provide the token, edit the value of the git_repo_token variable in group_vars/all.yml.
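For illustration, the relevant entry in group_vars/all.yml would then look something like this (placeholder value, not a real token):
git_repo_token: "<your-github-personal-access-token>"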
Customize tag/commit/sha to checkout
A specific tag, commit, or SHA can be checked out by setting the git_repo_branch variable in group_vars/edgenode_group.yml for the Edge Nodes and group_vars/controller_group.yml for the Kubernetes master / Edge Controller.
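For example, to pin the Edge Nodes to a specific tag, group_vars/edgenode_group.yml could contain the following (illustrative value; a branch name or commit SHA works the same way):
git_repo_branch: "v1.0.0"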
Running playbooks
For convenience, playbooks can be executed by running helper deployment scripts.
NOTE: All nodes provided in the inventory may reboot during the installation.
The convention for the scripts is: action_mode.sh [group]. The following scripts are available for Network Edge mode:
deploy_ne.sh [ controller | nodes ]
cleanup_ne.sh [ controller | nodes ]
To deploy only the Edge Nodes or only the Edge Controller, use deploy_ne.sh nodes or deploy_ne.sh controller respectively.
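For example, a complete Network Edge deployment could be run as follows (controller first, then nodes, per the note in the Developer Guide section below; the scripts are assumed to be run from the directory that contains them):
./deploy_ne.sh controller
./deploy_ne.sh nodes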
Developer Guide and Troubleshooting
Developer Guide
Start by following the page Setting Up Your Development Environment; it covers items such as signing up for a Linux Foundation account, configuring git, installing Gerrit, and IDE recommendations.
Clone 5G-MEC-CLOUD-GAMING Code
Visit https://gerrit.akraino.org/r/admin/repos/5g-mec-cloud-gaming to obtain the git clone commands.
Download Submodule
- git submodule update --init --recursive
Setup Environment
Enter the work directory:
- cd ./5g-mec-cloud-gaming
Execute the versify.sh script to set up the build environment:
The versify.sh script first installs Golang and Ginkgo, then installs Docker and docker-compose.
Golang:
wget https://dl.google.com/go/go1.13.4.linux-amd64.tar.gz
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
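The snippet above only downloads the archive and sets the Go environment variables; presumably versify.sh also unpacks the archive and puts the go binary on the PATH. Done manually, the standard steps would be (assuming installation under /usr/local):
tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin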
Ginkgo:
go get github.com/onsi/ginkgo/ginkgo
go get github.com/onsi/gomega/
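A quick way to confirm the Go toolchain and Ginkgo are available afterwards (assuming $GOPATH/bin is on the PATH as exported above):
go version
ginkgo version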
Docker:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
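Installing the packages does not start the daemon; Docker typically also needs to be started and enabled (presumably handled by versify.sh, shown here for completeness):
systemctl enable --now docker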
Docker-compose:
curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
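To verify that both container tools are usable:
docker --version
docker-compose --version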
Running Playbooks
For convenience, playbooks can be executed by running helper deployment scripts.
NOTE: All nodes provided in the inventory may reboot during the installation.
The convention for the scripts is: action_mode.sh [group]. The following scripts are available for Network Edge mode:
- deploy_ne.sh [ controller | nodes ]
To deploy only the Edge Nodes or only the Edge Controller, use deploy_ne.sh nodes or deploy_ne.sh controller respectively.
NOTE: Playbooks for Edge Controller/Kubernetes master must be executed before playbooks for Edge Nodes.
NOTE: Edge Nodes and Edge Controller must be set up on different machines.
Uninstall Guide
The role of the cleanup playbook is to revert changes made by the deploy playbooks. The convention for the scripts is: action_mode.sh [group]. The following script is available for cleanup:
- cleanup_ne.sh [ controller | nodes ]
Teardown is done by going through the steps in reverse order and undoing them. For example, since the playbooks for the Edge Controller/Kubernetes master must be executed before the playbooks for the Edge Nodes, during the uninstall operation the cleanup script for the Edge Nodes should be executed first, followed by the cleanup script for the Edge Controller/Kubernetes master.
Note that there might be some leftovers created by installed software.
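Under these conventions, a full cleanup could be run as follows (nodes first, then controller; the scripts are assumed to be run from the directory that contains them):
./cleanup_ne.sh nodes
./cleanup_ne.sh controller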
Troubleshooting
Proxy issues
For PRC users who have network problems, try the following mirrors.
- Kubernetes
Kubernetes repo URL: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64, as a replacement of https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64.
Kubernetes repo key: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg, as a replacement of https://packages.cloud.google.com/yum/doc/yum-key.gpg.
Kubernetes repo key: https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg, as a replacement of https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg (an illustrative repo file using these mirrors is shown after this list).
- Kubeovn
Kubeovn repo: https://gitee.com/mirrors/Kube-OVN.git, as a replacement of https://github.com/alauda/kube-ovn.git.
Kubeovn raw file repo: https://gitee.com/mirrors/Kube-OVN/raw, as a replacement of https://raw.githubusercontent.com/alauda/kube-ovn.
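As an illustration only (the exact repo file name and fields used by the playbooks may differ), a yum repository definition pointing at the Aliyun mirror could look like this:
# /etc/yum.repos.d/kubernetes.repo (illustrative)
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg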
Useful Commands
To display pods deployed in the default namespace:
kubectl get pods
...
No | Software | version | licence |
---|---|---|---|
1 | openNESS | 20.03 | Apache 2.0 license |
2 | Kubernetes | 1.17 | Apache 2.0 license |
3 | Docker | 19.03 | Apache 2.0 license |
4 | etcd | 3.4.3-0 | Apache 2.0 license |
edge GW (aka edgenode in openNESS)
No | Software | version | licence |
---|---|---|---|
1 | openNESS | 20.03 | Apache 2.0 license |
2 | Kubernetes | 1.17 | Apache 2.0 license |
3 | Docker | 19.03 | Apache 2.0 license |
4 | openvswitch | 2.11.4 | Apache 2.0 license |
5 | kube-ovn | 0.10.2 | Apache 2.0 license |
5GC Emulator
No | Software | version | licence |
---|---|---|---|
1 | openNESS | 20.03 | Apache 2.0 license |