Please do NOT edit this page.
Please see this page for the latest documentation: https://wiki.lfedge.org/display/LE/Ampere+Pod+1
Overview
The Ampere Pod consists of 3 Ampere HR330A servers with the following specifications per server:
Ampere HR330A | |
---|---|
CPU | 1x 32-core 3.3 GHz Ampere ARMv8 64-bit processor |
RAM | 8x 16GB DDR4 DIMM |
Storage | 1x 480GB SSD |
Networking | 2x 25GbE SFP+ (connected), 1x 1GbE BASE-T (not connected), 1x IPMI / lights-out management |
Usage & Users
Blueprint(s) / Description(s): Connected Vehicle Blueprint
Primary Contact: Robert Qiu (robertqiu@tencent.com). This information can now be found on the Akraino dashboard; check the Shared Community Lab page for more information.
Server Access
For the time being, when you request access to the Akraino pods at UNH-IOL, we will ask that you send us your public SSH key so we can add it to the root user on those machines.
If you wish to have your own user account, we can get that set up, but it is much quicker and easier if you just stick with the root user.
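If you are unsure how to do this, a minimal sketch looks like the following; the address used here is the jump host's public network address from the table below, and it is only reachable once you are connected to the VPN:

```
# Generate an ed25519 key pair if you do not already have one
ssh-keygen -t ed25519 -C "you@example.com"

# Send us the PUBLIC key only (never the private key)
cat ~/.ssh/id_ed25519.pub

# Once we have added your key, log in as root
# (10.11.6.14 is the jump host's address from the table below)
ssh root@10.11.6.14
```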
IPMI Access
Once you have successfully connected to the VPN, there is a file located on each machine in /opt called ipmi_info.txt; this file has the username and password for the IPMI interfaces.
IMPORTANT:
Please write down the information or copy the file somewhere safe, because if the machines get reinstalled the file will be gone. We are trying to avoid sending emails with passwords in them. If for some reason you did not have this info before it got wiped, you can email akraino-lab@iol.unh.edu and we will help you by creating a new file on the machines with the IPMI username and password. When booking a pod, the Akraino dashboard will provide IPMI credentials via email.
You can access an IPMI interface like the example below:
ampere1-ampere1-ipmi.akr.iol.unh.edu
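Once you have the credentials from ipmi_info.txt, you can talk to the IPMI interface with a standard tool such as ipmitool. A sketch, with the username and password placeholders to be filled in from the file:

```
# Show the credentials (from an SSH session on the machine)
cat /opt/ipmi_info.txt

# Query the chassis power state over IPMI v2.0 (lanplus)
ipmitool -I lanplus -H ampere1-ampere1-ipmi.akr.iol.unh.edu \
         -U <username> -P <password> chassis status

# Open a serial-over-LAN console (exit with ~.)
ipmitool -I lanplus -H ampere1-ampere1-ipmi.akr.iol.unh.edu \
         -U <username> -P <password> sol activate
```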
...
Server Name | IPMI Address | IPMI VLAN ID | Public Network Address | Public Network VLAN ID | Switch Port(s) | OS Installed |
---|---|---|---|---|---|---|
ampere1-ampere1 | 10.11.5.11 | 3005 | 10.11.6.11 | 3006 | Cisco TOR: Left 25GbE Port 41, Right 25GbE Port 42 | CentOS 7.6 |
ampere1-ampere2 | 10.11.5.12 | 3005 | 10.11.6.12 | 3006 | Cisco TOR: Left 25GbE Port 43, Right 25GbE Port 44 | CentOS 7.6 |
ampere1-ampere3 | 10.11.5.13 | 3005 | 10.11.6.13 | 3006 | Cisco TOR: Left 25GbE Port 45, Right 25GbE Port 46 | CentOS 7.6 |
ampere1-jumphost | 10.11.5.14 | 3005 | 10.11.6.14 | 3006 | Cisco TOR: Left 25GbE Port 33, Right 25GbE Port 34 | |
Usage
This pod is used for the Connected Vehicle Blueprint. Refer to the blueprint documentation for additional blueprint-specific information.
The TARS framework is deployed on Ampere Server 1.
TarsNode is deployed on Ampere Server 2 and Ampere Server 3.
To make the CI/CD work, we set up the Jenkins master on Ampere Server 1 and Jenkins slaves on Ampere Servers 2 and 3.
Beyond that, we created a folder in the CI/CD Gerrit repo (path: ci/ci-management/jjb/connected-vehicle) and wrote a YAML job definition there, letting Jenkins control the TARS framework via that YAML file; a sketch of how to inspect those definitions locally follows below.
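A minimal sketch of inspecting those job definitions locally, assuming the usual Linux Foundation Gerrit layout for the clone URL (the URL is an assumption; the in-repo path is as given above):

```
# Clone the Akraino ci-management repo (clone URL assumed from the standard LF Gerrit layout)
git clone https://gerrit.akraino.org/r/ci-management
cd ci-management

# The Connected Vehicle job definitions live under jjb/connected-vehicle
ls jjb/connected-vehicle

# Optionally, render the YAML into Jenkins job XML with Jenkins Job Builder
jenkins-jobs test jjb/ -o /tmp/jjb-out
```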
In the end, we upload the CI/CD logs to the Akraino community to make sure Connected Vehicle passes the R2 release review. The CI work is recorded in Jira.