Table of Contents
...
In a Regional Controller based deployment, the Regional Controller API is used to upload the SEBA Blueprint YAML (for Akraino Release 2, the SEBA blueprint reuses REC_blueprint.yaml, available from the SEBA repository). The blueprint tells the Regional Controller where to obtain the SEBA ISO images, the SEBA workflows (executable code for creating, modifying and deleting SEBA sites) and the SEBA remote installer component (a container image instantiated by the create workflow). The remote installer in turn invokes the SEBA Deployer, located in the ISO disc image, which conducts the rest of the installation.
The instructions below skip most of this and invoke the SEBA Deployer directly from the BMC, iLO or iDRAC of a physical server. The SEBA Deployer's basic workflow is to copy a base image to the first controller in the cluster and then read a configuration file (typically called user_config.yaml) to deploy the base OS and all additional software to the rest of the nodes in the cluster.
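For orientation, the sketch below shows the general shape of a user_config.yaml. It is an illustrative assumption only, not a working configuration: the section and field names shown here are representative, and the authoritative template ships with the SEBA ISO images.
Code Block title user_config.yaml (illustrative sketch)
---
# Representative structure only; consult the template delivered with the release.
version: 2.0.0
name: seba-pod-example            # assumed pod name
description: Example SEBA cluster configuration
networking:
  infra_external:                 # assumed external network section
    network_domains:
      rack-1:
        cidr: 10.65.1.0/24
        gateway: 10.65.1.1
        ip_range_start: 10.65.1.50
        ip_range_end: 10.65.1.60
hosts:                            # one entry per node in the cluster
  controller-1:
    service_profiles: [ caas_master ]   # assumed profile name
    hwmgmt:
      address: 10.65.1.11         # BMC address used by the deployer
      user: admin
      password: password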
...
SEBA is a fully integrated stack from the hardware up to and including the application, so for best results use one of the tested hardware configurations. Although SEBA is intended to run on a variety of hardware platforms, it includes a hardware detector component that customizes each installation based on the hardware present, and that component will need (possibly minor) changes to run on additional hardware configurations. The primary focus of Akraino Release 2 self-certification testing for the SEBA blueprint is the Nokia Open Edge servers, so some issues may be encountered with other server types.
- Minimum of 3 nodes.
- Total Physical Compute Cores: 60 (120 vCPUs)
- Physical Compute Memory: 192GB minimum per node
- Total SSD-based OS Storage: 2.8 TB (6 x 480GB SSDs)
- Total Application-based Raw Storage: 5.7 TB (6 x 960GB SSDs)
- Networking Per Server: Apps - 2 x 25GbE and DCIM - 2 x 10GbE + 1 x 1GbE (shared)
...
- BIOS set to Legacy (Not UEFI)
- CPU Configuration/Turbo Mode Disabled
- Virtualization Enabled
- IPMI Enabled
- Boot Order set with Hard Disk first in the list (one way to verify these settings remotely with ipmitool is sketched below).
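Where the servers' BMCs are reachable, some of these settings can be checked or set remotely with ipmitool. The sketch below is a generic example under assumed values: the BMC address (10.65.1.11) and credentials are placeholders, and the remaining BIOS options generally still need to be set through the BIOS setup or the vendor's BMC interface.
Code Block
# Placeholders: replace the BMC address and credentials with your own.
ipmitool -I lanplus -H 10.65.1.11 -U admin -P password mc info
# Persistently set the first boot device to the hard disk:
ipmitool -I lanplus -H 10.65.1.11 -U admin -P password chassis bootdev disk options=persistent
# Confirm the configured boot parameters:
ipmitool -I lanplus -H 10.65.1.11 -U admin -P password chassis bootparam get 5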
As of Akraino Release 2, the Telco Appliance blueprint family does not yet include automatic configuration for a pre-boot environment. The following versions were manually loaded on the Open Edge servers in the SEBA Blueprint Validation Lab (note: this may be facilitated with the same script utilized by REC for Akraino Release 1). In the future, automatic configuration of the pre-boot environment is expected to be a function of the Regional Controller under the direction of the SEBA pod create workflow script.
- BIOS1: 3B06
- BMC1: 3.13.00
- BMC2: 3.08.00
- CPLD: 0x01
Network Requirements:
...
You should see the message "Installation complete, Installation Succeeded."
Go to SEBA Blueprint Test Document and follow the steps outlined there to ensure that all nodes and services were properly deployed.
Deployment Failures
Sometimes failures happen, usually due to misconfigurations or incorrect addresses.
...
Enable the legacy APIs by adding the --runtime-config option to the command section of /etc/kubernetes/manifests/apiserver.yml on each node in the cluster. Connect to each node using ssh and edit the file to match the example below.
Code Block
ssh cloudadmin@10.65.1.51
sudo vi /etc/kubernetes/manifests/apiserver.yml
Code Block title /etc/kubernetes/manifests/apiserver.yml collapse true
---
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-apiserver
      image: registry.kube-system.svc.rec.io:5555/caas/hyperkube:1.16.0-5
      securityContext:
        runAsUser: 144
      command:
        - "/kube-apiserver"
        - --admission-control=DefaultStorageClass,LimitRanger,MutatingAdmissionWebhook,NamespaceExists,NamespaceLifecycle,NodeRestriction,PodSecurityPolicy,ResourceQuota,ServiceAccount,ValidatingAdmissionWebhook
        - --advertise-address=192.168.12.51
        - --allow-privileged=true
        - --anonymous-auth=false
        - --apiserver-count=3
        - --audit-policy-file=/var/lib/caas/policies/audit-policy.yaml
        - --audit-log-format=json
        - --audit-log-maxsize=100
        - --audit-log-maxbackup=88
        - --audit-log-path=/var/log/audit/kube_apiserver/kube-apiserver-audit.log
        - --authorization-mode=Node,RBAC
        - --bind-address=192.168.12.51
        - --client-ca-file=/etc/openssl/ca.pem
        - --enable-bootstrap-token-auth=true
        - --etcd-cafile=/etc/etcd/ssl/ca.pem
        - --etcd-certfile=/etc/etcd/ssl/etcd1.pem
        - --etcd-keyfile=/etc/etcd/ssl/etcd1-key.pem
        - --etcd-servers=https://192.168.12.51:4111,https://192.168.12.52:4111,https://192.168.12.53:4111
        - --experimental-encryption-provider-config=/etc/kubernetes/ssl/secrets.conf
        - --feature-gates=SCTPSupport=True,CPUManager=False,TokenRequest=True,DevicePlugins=True
        - --insecure-port=0
        - --kubelet-certificate-authority=/etc/openssl/ca.pem
        - --kubelet-client-certificate=/etc/kubernetes/ssl/kubelet-server.pem
        - --kubelet-client-key=/etc/kubernetes/ssl/kubelet-server-key.pem
        - --kubelet-https=true
        - --max-requests-inflight=1000
        - --proxy-client-cert-file=/etc/kubernetes/ssl/metrics.crt
        - --proxy-client-key-file=/etc/kubernetes/ssl/metrics.key
        - --requestheader-client-ca-file=/etc/openssl/ca.pem
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-username-headers=X-Remote-User
        - --secure-port=6443
        - --service-account-key-file=/etc/kubernetes/ssl/service-account.pem
        - --service-account-lookup=true
        - --service-cluster-ip-range=10.254.0.0/16
        - --tls-cert-file=/etc/kubernetes/ssl/tls-cert.pem
        - --tls-private-key-file=/etc/kubernetes/ssl/apiserver1-key.pem
        - --token-auth-file=/etc/kubernetes/ssl/tokens.csv
        - --runtime-config=apps/v1beta1=true,apps/v1beta2=true,extensions/v1beta1/daemonsets=true,extensions/v1beta1/deployments=true,extensions/v1beta1/replicasets=true,extensions/v1beta1/networkpolicies=true,extensions/v1beta1/podsecuritypolicies=true
      resources:
        requests:
          cpu: "50m"
      volumeMounts:
        - name: time-mount
          mountPath: /etc/localtime
          readOnly: true
        - name: secret-kubernetes
          mountPath: /etc/kubernetes/ssl
          readOnly: true
        - name: secret-root-ca
          mountPath: /etc/openssl/ca.pem
          readOnly: true
        - name: secret-etcd
          mountPath: /etc/etcd/ssl
          readOnly: true
        - name: audit-kube-apiserver
          mountPath: /var/log/audit/kube_apiserver/
          readOnly: false
        - name: audit-policy-dir
          mountPath: /var/lib/caas/policies
          readOnly: true
  volumes:
    - name: time-mount
      hostPath:
        path: /etc/localtime
    - name: secret-kubernetes
      hostPath:
        path: /etc/kubernetes/ssl
    - name: secret-root-ca
      hostPath:
        path: /etc/openssl/ca.pem
    - name: secret-etcd
      hostPath:
        path: /etc/etcd/ssl
    - name: audit-kube-apiserver
      hostPath:
        path: /var/log/audit/kube_apiserver/
    - name: audit-policy-dir
      hostPath:
        path: /var/lib/caas/policies
Connect to the first controller in the cluster to run the remaining commands.
Code Block ssh cloudadmin@10.65.1.51
Delete the kube-apiserver pods and wait for the pods to be recreated.
Code Block
kubectl delete pod -n kube-system kube-apiserver-192.168.12.51
kubectl delete pod -n kube-system kube-apiserver-192.168.12.52
kubectl delete pod -n kube-system kube-apiserver-192.168.12.53
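Because the apiserver runs as a static pod, the kubelet recreates it from the edited manifest. As an optional check (not part of the original procedure), confirm that the pods return to Running status and that the legacy API groups enabled via --runtime-config are now served:
Code Block
# All three kube-apiserver pods should return to Running status.
kubectl get pods -n kube-system | grep kube-apiserver
# The legacy API groups should appear in the served versions.
kubectl api-versions | grep -E 'extensions/v1beta1|apps/v1beta'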
Add cluster-admin rights to the tiller service account.
Code Block kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Add the CORD repository and update the chart indexes.
Code Block
helm repo add cord https://charts.opencord.org
helm repo update
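Optionally, confirm that the charts used in the following steps are now visible. This extra check is not part of the original procedure and assumes the Helm 2 client implied by the tiller step above:
Code Block
# Should list the cord-platform, seba and att-workflow charts.
helm search cord/ | grep -E 'cord-platform|seba|att-workflow'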
Install the CORD platform.
Code Block helm install -n cord-platform --version 6.1.0 cord/cord-platform
Wait until all 3 etcd CRDs are present in Kubernetes; the command below should eventually print 3.
Code Block kubectl get crd | grep -i etcd | wc -l
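If you prefer not to re-run the count by hand, a small poll loop does the waiting. This is a convenience sketch, not part of the original procedure:
Code Block
# Poll every 10 seconds until all 3 etcd CRDs are registered.
until [ "$(kubectl get crd | grep -ci etcd)" -eq 3 ]; do
  sleep 10
done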
Install the SEBA profile.
Code Block helm install -n seba --version 1.0.0 cord/seba
Install the AT&T workflow.
Code Block helm install -n att-workflow --version 1.0.2 cord/att-workflow
Wait for all pods to reach Completed or Running status.
Code Block collapse true
kubectl get pods
Code Block title Example output collapse true
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
att-workflow-att-workflow-driver-6487d77db-rdwgk 1/1 Running 0 2m1s 10.244.0.27 192.168.12.52 <none> <none>
att-workflow-tosca-loader-7btvq 0/1 Completed 4 2m1s 10.244.1.37 192.168.12.51 <none> <none>
cord-platform-etcd-operator-etcd-backup-operator-84dfbc689vqsj9 1/1 Running 0 4m9s 10.244.2.13 192.168.12.53 <none> <none>
cord-platform-etcd-operator-etcd-operator-8b6c64548-nnj2r 1/1 Running 0 4m9s 10.244.2.14 192.168.12.53 <none> <none>
cord-platform-etcd-operator-etcd-restore-operator-7f5f5b95sdxw5 1/1 Running 0 4m9s 10.244.0.13 192.168.12.52 <none> <none>
cord-platform-grafana-74c589b6db-jqnpv 2/2 Running 0 4m9s 10.244.1.24 192.168.12.51 <none> <none>
cord-platform-kafka-0 1/1 Running 1 4m9s 10.244.1.25 192.168.12.51 <none> <none>
cord-platform-kafka-1 1/1 Running 0 2m31s 10.244.0.26 192.168.12.52 <none> <none>
cord-platform-kafka-2 1/1 Running 0 96s 10.244.2.29 192.168.12.53 <none> <none>
cord-platform-kibana-7459967f55-z7sk8 1/1 Running 0 4m9s 10.244.2.18 192.168.12.53 <none> <none>
cord-platform-logstash-0 1/1 Running 0 4m9s 10.244.0.15 192.168.12.52 <none> <none>
cord-platform-onos-5b95b8f489-9s56b 2/2 Running 0 4m8s 10.244.0.19 192.168.12.52 <none> <none>
cord-platform-prometheus-alertmanager-7df4f44f4d-tbfcl 2/2 Running 0 4m9s 10.244.2.15 192.168.12.53 <none> <none>
cord-platform-prometheus-kube-state-metrics-76c8565f87-wslpw 1/1 Running 0 4m9s 10.244.0.14 192.168.12.52 <none> <none>
cord-platform-prometheus-pushgateway-849c597464-pxhrf 1/1 Running 0 4m9s 10.244.1.26 192.168.12.51 <none> <none>
cord-platform-prometheus-server-555b77dcd9-brtfk 2/2 Running 0 4m9s 10.244.2.17 192.168.12.53 <none> <none>
cord-platform-zookeeper-0 1/1 Running 0 4m9s 10.244.0.16 192.168.12.52 <none> <none>
cord-platform-zookeeper-1 1/1 Running 0 3m35s 10.244.1.31 192.168.12.51 <none> <none>
cord-platform-zookeeper-2 1/1 Running 0 2m47s 10.244.2.27 192.168.12.53 <none> <none>
etcd-cluster-4btz528zxt 1/1 Running 0 2m38s 10.244.0.25 192.168.12.52 <none> <none>
etcd-cluster-qpjdpn9wdl 1/1 Running 0 3m2s 10.244.1.35 192.168.12.51 <none> <none>
etcd-cluster-vg7v7rcdtn 1/1 Running 0 2m22s 10.244.2.28 192.168.12.53 <none> <none>
kpi-exporter-9b9f87bd5-7xfcw 1/1 Running 3 4m8s 10.244.2.16 192.168.12.53 <none> <none>
kpi-exporter-9b9f87bd5-gbzpm 1/1 Running 2 4m8s 10.244.0.17 192.168.12.52 <none> <none>
sadis-server-6c6f649bb4-bfg4m 1/1 Running 1 3m2s 10.244.2.21 192.168.12.53 <none> <none>
seba-base-kubernetes-tosca-loader-gsdwx 0/1 Completed 2 3m2s 10.244.2.22 192.168.12.53 <none> <none>
seba-fabric-6879cd6dc9-dd2xt 1/1 Running 0 3m2s 10.244.2.19 192.168.12.53 <none> <none>
seba-fabric-crossconnect-c684c6df5-wvpjp 1/1 Running 0 3m2s 10.244.0.21 192.168.12.52 <none> <none>
seba-kubernetes-bb4fcd749-z4nr8 1/1 Running 0 3m2s 10.244.1.32 192.168.12.51 <none> <none>
seba-onos-service-86697c97bf-sd2gz 1/1 Running 0 3m2s 10.244.0.22 192.168.12.52 <none> <none>
seba-rcord-6975778bf6-brxvb 1/1 Running 0 3m2s 10.244.2.20 192.168.12.53 <none> <none>
seba-seba-services-tosca-loader-ddnkz 0/1 Completed 4 3m2s 10.244.1.34 192.168.12.51 <none> <none>
seba-volt-f6549c677-qqfcg 1/1 Running 0 3m2s 10.244.1.33 192.168.12.51 <none> <none>
xos-chameleon-645f89cb68-5hvld 1/1 Running 0 4m7s 10.244.1.29 192.168.12.51 <none> <none>
xos-core-868868885d-x9tjx 1/1 Running 0 4m7s 10.244.1.30 192.168.12.51 <none> <none>
xos-db-7445f8dcb7-6867w 1/1 Running 0 4m8s 10.244.0.18 192.168.12.52 <none> <none>
xos-gui-858b98bc9f-pc2b5 1/1 Running 0 4m8s 10.244.1.27 192.168.12.51 <none> <none>
xos-tosca-fdbbc894b-2v264 1/1 Running 0 4m7s 10.244.0.20 192.168.12.52 <none> <none>
xos-ws-6c76444b89-kj8q7 1/1 Running 0 4m8s 10.244.1.28 192.168.12.51 <none> <none>
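As a shortcut for the check above, the one-liner below lists only pods that are not yet Running or Completed; empty output means everything is ready. This is a convenience sketch rather than part of the original procedure:
Code Block
# Prints nothing once every pod is Running or Completed.
kubectl get pods --no-headers | grep -vE 'Running|Completed'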
...