NC Family Documentation - Release 1
This documentation will be archived; please refer to the R1 Network Cloud release documentation.
Prerequisites
- Internal and external network connectivity on all target hardware.
Steps
- Ensure all High Level Requirements (see below) are met.
- Clone and download repositories and packages for the appropriate Akraino release (Linux Foundation credentials required); an illustrative example follows this list.
- Akraino Gerrit: From the list of projects, clone all relevant repositories.
- Akraino Nexus 3: Download all relevant packages.
- Install the Regional Controller Node:
- Bootstrap the bare metal regional server node from the central node.
- Run installation scripts to launch the Portal, Camunda Workflow, and Database components.
- Log in to the Akraino Portal UI.
- Install the Edge Node via the Portal UI:
- Complete the appropriate YAML template according to site requirements (an illustrative sketch follows this list):
- Site name
- Username and ssh key(s) for node access
- Server names and hardware details
- PXE, Storage, Public, and IPMI/iDrac network details
- SR-IOV interface details, including the number of virtual functions and BDF addresses
- Ceph storage configuration
- Choose the site to build, choose the required Blueprint, and select Build.
- Upon successful build, select Deploy. The following scripts will be run, with status conveyed to the UI:
- promgen.sh
- genesis.sh (invokes genesis.sh)
- deploy.sh
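For the clone-and-download step above, a minimal sketch assuming the Akraino Gerrit and Nexus 3 servers as commonly published for the project (confirm the hostnames against the release documentation); the project name and package path are placeholders, not actual repository names:

$ # placeholder project name -- repeat for each repository listed for the release
$ git clone https://gerrit.akraino.org/r/<project-name>
$ # placeholder package path -- repeat for each package listed for the release
$ wget --user <lf-username> --ask-password https://nexus.akraino.org/<path-to-package>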
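For the site YAML step, a minimal sketch of the kind of information the template captures, written as a shell heredoc; every key and value below is illustrative only and is not the schema of the release template:

$ cat > example_edge_site.yaml <<'EOF'
# Illustrative fields only -- use the keys defined by the release template
site_name: example-edge-01
ssh_keys:
  - ssh-rsa AAAA... admin@example.com
servers:
  - name: aknode-01
    oob_ip: 10.0.0.11              # IPMI/iDRAC address
networks:
  pxe:     { cidr: 172.30.1.0/24, vlan: native }
  storage: { cidr: 172.31.1.0/24, vlan: 41 }
  public:  { cidr: 192.168.2.0/24, vlan: 42 }
sriov:
  interface: ens3f0
  num_vfs: 32
  bdf: "0000:02:00.0"
ceph:
  journal_device: /dev/sdb
EOF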
Deployment Components
The following components are deployed in an automated, sequential fashion:
- Genesis Host
- This is the first control node. Genesis serves as the seed node for the control cluster deployed on Edge sites.
- Genesis contains a standalone Kubernetes instance with undercloud components (e.g., Airship) deployed via Armada; a verification sketch follows the component list below.
- Once the Undercloud is deployed, Ceph is deployed via Armada.
- Remaining cluster control nodes are deployed next from bare metal, using MaaS. This requires an available PXE network. The Genesis host will provide a MaaS controller.
- Control Hosts
- Compute Hosts
- Airship
- Apache Traffic Server (VNF)
- Ceph
- Calico
- ONAP
- OpenStack
- SR-IOV
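A minimal sketch for checking the standalone Kubernetes instance on the Genesis host once the undercloud is up; these are standard kubectl commands, and the component names grepped for are assumptions that will vary by release:

$ # on the Genesis host: confirm the node is Ready and undercloud pods are running
$ kubectl get nodes -o wide
$ kubectl get pods --all-namespaces | grep -i -e armada -e ceph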
High Level Requirements
Review requirements in the following order:
Compute Node Details
Here are three ways to gather sufficient hardware details:
$ sudo dmidecode -s system-manufacturer
HP
$ sudo dmidecode -s system-version
Not Specified
$ sudo dmidecode -s system-product-name
ProLiant DL380 Gen9
$ sudo dmidecode | grep -A3 '^System Information'
System Information
        Manufacturer: HP
        Product Name: ProLiant DL380 Gen9
        Version: Not Specified
$ sudo apt-get install -y inxi
[ ... ]
$ sudo inxi -Fx
System:    Host: mtxnjrsv124 Kernel: 4.4.0-101-generic x86_64 (64 bit gcc: 5.4.0)
           Console: tty 10 Distro: Ubuntu 16.04 xenial
Machine:   Mobo: HP model: ProLiant DL380 Gen9 serial: MXQ604036H
           Bios: HP v: P89 date: 07/18/2016
CPU(s):    2 Multi core Intel Xeon E5-2680 v3s (-HT-MCP-SMP-) cache: 61440 KB
           flags: (lm nx sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx) bmips: 119857
           clock speeds: [ ... ]
Graphics:  Card: Failed to Detect Video Card!
           Display Server: X.org 1.18.4 drivers: fbdev (unloaded: vesa)
           tty size: 103x37 Advanced Data: N/A for root out of X
Network:   Card-1: Broadcom NetXtreme BCM5719 Gigabit Ethernet PCIe
           driver: tg3 v: 3.137 bus-ID: 02:00.0
           IF: eno1 state: down mac: 14:02:ec:36:52:c4
           [ ... ]
Drives:    HDD Total Size: 1320.2GB (16.2% used)
           ID-1: /dev/sda model: LOGICAL_VOLUME size: 120.0GB temp: 0C
           ID-2: /dev/sdb model: LOGICAL_VOLUME size: 1200.2GB temp: 0C
Partition: ID-1: / size: 28G used: 17G (66%) fs: ext4 dev: /dev/dm-0
           ID-2: /boot size: 472M used: 155M (35%) fs: ext2 dev: /dev/sda1
           ID-3: /home size: 80G used: 21G (28%) fs: ext4 dev: /dev/dm-2
RAID:      No RAID devices: /proc/mdstat, md_mod kernel module present
Sensors:   System Temperatures: cpu: 48.0C mobo: N/A
           Fan Speeds (in rpm): cpu: N/A
Info:      Processes: 397 Uptime: 39 days Memory: 41943.1/257903.7MB
           Init: systemd runlevel: 5 Gcc sys: 5.4.0
           Client: Shell (sudo) inxi: 2.2.35
SR-IOV
Configure SR-IOV on the NIC and verify it as follows:
$ # update /etc/default/grub with this line, then regenerate the grub config:
$ #   GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt"
$ sudo update-grub
$ sudo reboot now
$ cat /proc/cmdline                      # confirm intel_iommu=on iommu=pt are present
$ # enable 32 virtual functions (a plain 'sudo echo ... >' fails because the
$ # redirect runs as the unprivileged user, so write the sysfs file via tee)
$ echo '32' | sudo tee /sys/class/net/ens3f0/device/sriov_numvfs
$ sudo ip link show ens3f0               # to verify it worked
$ # add the tee line to /etc/rc.local so this is re-applied on reboot
$ echo '32' | sudo tee /sys/class/net/ens3f0/device/sriov_numvfs
BDF Addresses
Intel provides a script to locate the BDF addresses of their NICs. Learn more about Bus:Device:Function (BDF) notation.
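Where the Intel script is not to hand, standard lspci shows the same information; this is a minimal sketch using only stock commands (virtual functions only appear after SR-IOV has been enabled as above):

$ # list NICs with their PCI Bus:Device:Function (BDF) addresses
$ lspci -D | grep -i ethernet
$ # after SR-IOV is enabled, the virtual functions appear with their own BDFs
$ lspci -D | grep -i 'virtual function'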
Network
This Network Cloud blueprint requires:
- A network that can be PXE booted with appropriate network topology and bonding settings (e.g., a dedicated PXE interface on an untagged/native VLAN)
- A segmented VLAN with all nodes bearing routes to the following network types (an illustrative example follows this list):
- Management: Kubernetes (K8s) control channel
- Calico
- Storage
- Overlay
- Public
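A minimal sketch of creating one such tagged sub-interface by hand; bond0, VLAN ID 41, and the address are placeholders for site-specific values, which would normally come from the site YAML described above:

$ # placeholder names: bond0 / VLAN 41 stand in for the site storage network
$ sudo ip link add link bond0 name bond0.41 type vlan id 41
$ sudo ip addr add 172.31.1.10/24 dev bond0.41
$ sudo ip link set bond0.41 up
$ # the dedicated PXE interface itself stays untagged (native VLAN)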
Storage
This Network Cloud blueprint requires the following disk layout (a verification sketch follows this list):
- Control plane server disks:
- Two-disk RAID-1 mirror for the operating system.
- Configure remaining disks as JBOD for Ceph, with Ceph journals preferentially deployed to SSDs where available.
- Data plane server disks:
- Two-disk RAID-1 mirror for the operating system.
- Configure remaining disks per the host profile target for each server (e.g., RAID-6; no Ceph).
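A minimal sketch for confirming the expected disk layout from the operating system; device names and output depend on the RAID controller in use:

$ # list disks with size, rotational flag and model
$ lsblk -d -o NAME,SIZE,ROTA,TYPE,MODEL
$ # the OS RAID-1 mirror typically surfaces as a single logical volume from the
$ # controller (e.g., /dev/sda); the remaining JBOD disks are left for Ceph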
Redfish
This Network Cloud blueprint requires:
- Configuring the BIOS with HTTP boot as the primary boot device (an illustrative Redfish example follows this list).
- Adding the MAC address of the network card to the switch and DHCP server so traffic can flow.
- Creating the pre-seed configuration file on the DHCP server.
- Rebooting the server so it boots from the HTTP device.
- Obtaining an IP address and the associated OS packages to install the operating system.
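A minimal sketch of setting the HTTP boot override and rebooting over the Redfish API with curl; the BMC address, credentials, and system ID are placeholders, and exact resource paths vary by vendor:

$ # placeholders: 10.0.0.11 is the BMC, admin:password its credentials, Systems/1 the system ID
$ # set a one-time boot override to UEFI HTTP boot
$ curl -sk -u admin:password -H "Content-Type: application/json" \
      -X PATCH https://10.0.0.11/redfish/v1/Systems/1 \
      -d '{"Boot": {"BootSourceOverrideEnabled": "Once", "BootSourceOverrideTarget": "UefiHttp"}}'
$ # reboot the server so it boots from the HTTP device
$ curl -sk -u admin:password -H "Content-Type: application/json" \
      -X POST https://10.0.0.11/redfish/v1/Systems/1/Actions/ComputerSystem.Reset \
      -d '{"ResetType": "ForceRestart"}'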