This section provides instructions to quickly bring up SEBA.
Note: This Installation Guide assumes that the prerequisite hardware requirements are met and that the software (Akraino Stack with CORD Platform) has already been installed.
In particular, before installing the SEBA profile you will need to wait for the three EtcdCluster CustomResourceDefinitions to appear in Kubernetes:
kubectl get crd | grep etcd | wc -l
Once the CRDs are present, proceed with the seba chart installation.
Overview
This page walks through the sequence of Helm operations needed to bring up the SEBA profile.
Prerequisites
It assumes the Akraino Stack with CORD Platform has already been installed.
Installation
Install components as a whole
Add the CORD repository and update indexes
Code Block |
---|
$ helm repo add cord https://charts.opencord.org
$ helm repo update |
Install the CORD platform
Code Block |
---|
$ helm install -n cord-platform --version 6.1.0 cord/cord-platform |
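Before moving on, it can help to confirm that the platform release deployed cleanly; a minimal check (not part of the official guide, just a convenience), assuming the release name used above:
Code Block |
---|
# Check the release status and watch the platform pods come up
$ helm status cord-platform
$ kubectl get pods --all-namespaces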
Wait until 3 etcd CRDs are present in Kubernetes
Code Block |
---|
$ kubectl get crd | grep -i etcd | wc -l |
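If you prefer to block until all three CRDs are registered instead of re-running the command by hand, a small polling loop works; this is just a convenience sketch, not part of the charts:
Code Block |
---|
# Poll until the three etcd-related CRDs have been registered
$ until [ "$(kubectl get crd | grep -i etcd | wc -l)" -eq 3 ]; do echo "waiting for etcd CRDs..."; sleep 5; done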
Install the SEBA profile
Code Block |
---|
$ helm install -n seba --version 1.0.0 cord/seba |
Install the AT&T workflow
Code Block |
---|
$ helm install -n att-workflow --version 1.0.2 cord/att-workflow |
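At this point all three charts of this path should be deployed; a quick way to confirm, assuming the release names used above:
Code Block |
---|
# Expect the cord-platform, seba and att-workflow releases in DEPLOYED state
$ helm list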
Alternatively, install as separate components
Add the official Kubernetes incubator repository (for Kafka) and update the indexes
Code Block |
---|
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm repo update |
Add the CORD repository and update the indexes
Code Block |
---|
$ helm repo add cord https://charts.opencord.org
$ helm repo update |
Install the CORD platform components
Code Block |
---|
$ helm install -n onos cord/onos
$ helm install -n xos-core cord/xos-core
$ helm install --version 0.13.3 \
     --set configurationOverrides."offsets.topic.replication.factor"=1 \
     --set configurationOverrides."log.retention.hours"=4 \
     --set configurationOverrides."log.message.timestamp.type"="LogAppendTime" \
     --set replicas=1 \
     --set persistence.enabled=false \
     --set zookeeper.replicaCount=1 \
     --set zookeeper.persistence.enabled=false \
     -n cord-kafka incubator/kafka
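Kafka in particular can take a few minutes to settle; a quick sanity check, assuming the release names above (with Helm 2 these pods normally land in the default namespace unless the charts specify otherwise):
Code Block |
---|
# ONOS, XOS core and the cord-kafka broker should all reach Running state
$ kubectl get pods | grep -E 'onos|xos|cord-kafka'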
Optionally, install the logging and monitoring infrastructure components
Code Block |
---|
$ helm install -n nem-monitoring cord/nem-monitoring
$ helm install --set elasticsearch.cluster.env.MINIMUM_MASTER_NODES="1" \
     --set elasticsearch.client.replicas=1 \
     --set elasticsearch.master.replicas=2 \
     --set elasticsearch.master.persistence.enabled=false \
     --set elasticsearch.data.replicas=1 \
     --set elasticsearch.data.persistence.enabled=false \
     -n logging cord/logging
Install etcd-operator and wait until 3 etcd CRDs are present in Kubernetes
Code Block |
---|
$ helm install -n etcd-operator stable/etcd-operator --version 0.8.3
$ kubectl get crd | grep -i etcd | wc -l |
Install the rest of the SEBA profile components
Code Block |
---|
$ helm install -n voltha cord/voltha
$ helm install -n seba-service cord/seba-services
$ helm install -n base-kubernetes cord/base-kubernetes |
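The VOLTHA components run in their own namespace, so watch that namespace separately (this matches the namespace check mentioned in the verification section below):
Code Block |
---|
# VOLTHA pods are deployed in the voltha namespace
$ kubectl get pods -n voltha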
Install the AT&T workflow
Code Block |
---|
$ helm install -n att-workflow --version 1.0.2 cord/att-workflow |
Verify your installation and next steps
Once the installation completes, monitor your setup using kubectl get pods. Wait until all pods are in the Running state and the tosca-loader pods are in the Completed state.
Note: The tosca-loader pods may periodically transition into an error state. This is expected; they will retry and eventually reach the desired state.
Note: Depending on the profile you're installing, you may also need to check other namespaces (for example, if you're installing SEBA, check the voltha namespace with kubectl get pods -n voltha).
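A convenient way to spot anything that has not settled yet is to filter out pods that are already Running or Completed; a simple sketch:
Code Block |
---|
# Show only pods that are not yet Running or Completed, across all namespaces
$ kubectl get pods --all-namespaces | grep -vE 'Running|Completed'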
Your POD is now installed and ready for use.
POD Configuration
Once all the components needed for the SEBA profile are up and running on your POD, you will need to configure it. This is typically done using TOSCA.
This page describes the configuration as a three-step process:
- Fabric Setup
- OLT Provisioning
- Subscriber Provisioning
That split is what makes sense logically, but be aware that all of the configuration can be unified in a single TOSCA file.
This configuration is environment-specific, so you will need to create your own, but the following can serve as a reference:
Fabric Setup
Code Block |
---|
tosca_definitions_version: tosca_simple_yaml_1_0

imports:
  - custom_types/switch.yaml
  - custom_types/switchport.yaml
  - custom_types/portinterface.yaml
  - custom_types/bngportmapping.yaml
  - custom_types/attworkflowdriverwhitelistentry.yaml
  - custom_types/attworkflowdriverservice.yaml
  - custom_types/serviceinstanceattribute.yaml
  - custom_types/onosapp.yaml

description: Configures a full SEBA POD

topology_template:
  node_templates:

    # Fabric configuration
    switch#leaf_1:
      type: tosca.nodes.Switch
      properties:
        driver: ofdpa3
        ipv4Loopback: 192.168.0.201
        ipv4NodeSid: 17
        isEdgeRouter: True
        name: AGG_SWITCH
        ofId: of:0000000000000001
        routerMac: 00:00:02:01:06:01

    # Setup the OLT switch port
    port#olt_port:
      type: tosca.nodes.SwitchPort
      properties:
        portId: 1
        host_learning: false
      requirements:
        - switch:
            node: switch#leaf_1
            relationship: tosca.relationships.BelongsToOne

    # Port connected to the BNG
    port#bng_port:
      type: tosca.nodes.SwitchPort
      properties:
        portId: 31
      requirements:
        - switch:
            node: switch#leaf_1
            relationship: tosca.relationships.BelongsToOne

    # Setup the fabric switch port where the external
    # router is connected to
    bngmapping:
      type: tosca.nodes.BNGPortMapping
      properties:
        s_tag: any
        switch_port: 31

    # DHCP L2 Relay config
    onos_app#dhcpl2relay:
      type: tosca.nodes.ONOSApp
      properties:
        name: dhcpl2relay
        must-exist: true

    dhcpl2relay-config-attr:
      type: tosca.nodes.ServiceInstanceAttribute
      properties:
        name: /onos/v1/network/configuration/apps/org.opencord.dhcpl2relay
        value: >
          {
            "dhcpl2relay" : {
              "useOltUplinkForServerPktInOut" : false,
              "dhcpServerConnectPoints" : [ "of:0000000000000001/31" ]
            }
          }
      requirements:
        - service_instance:
            node: onos_app#dhcpl2relay
            relationship: tosca.relationships.BelongsToOne
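Like any other recipe in this guide, the configuration above can be saved to a file and pushed through the xos-tosca run endpoint described in "How to load a TOSCA recipe in the system" below; a sketch, assuming the file is saved as fabric-setup.yaml (a hypothetical name) and the default tosca-port of 30007:
Code Block |
---|
# Push the fabric configuration to the POD
$ curl -H "xos-username: xosadmin@opencord.org" -H "xos-password: <xos-password>" \
       -X POST --data-binary @fabric-setup.yaml http://<cluster-ip>:30007/run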
...
Code Block |
---|
(voltha) device 00015698e67dc060
(device 00015698e67dc060) show
Device 00015698e67dc060
+------------------------------+------------------+
| field | value |
+------------------------------+------------------+
| id | 00015698e67dc060 |
| type | broadcom_onu |
| root | True |
| parent_id | 0001941bd45e71d8 |
| vendor | Broadcom |
| model | n/a |
| hardware_version | to be filled |
| firmware_version | to be filled |
| images.image | 1 item(s) |
| serial_number | BRCM22222222 |
+------------------------------+------------------+
| adapter | broadcom_onu |
| admin_state | 3 |
| oper_status | 4 |
| connect_status | 2 |
| proxy_address.device_id | 0001941bd45e71d8 |
| proxy_address.onu_id | 1 |
| proxy_address.onu_session_id | 1 |
| parent_port_no | 536870912 |
| vendor_id | BRCM |
| ports | 2 item(s) |
+------------------------------+------------------+
| flows.items | 5 item(s) |
+------------------------------+------------------+ |
Use the VOLTHA CLI output above to find the correct serial number of the ONU.
Push a Subscriber into CORD
Once you have this information, you can create the subscriber by customizing the following TOSCA and passing it into the POD:
Code Block |
---|
tosca_definitions_version: tosca_simple_yaml_1_0
imports:
- custom_types/rcordsubscriber.yaml
description: Create a test subscriber
topology_template:
node_templates:
# A subscriber
my_house:
type: tosca.nodes.RCORDSubscriber
properties:
name: My House
c_tag: 111
s_tag: 222
onu_device: BRCM1234 # Serial Number of the ONU Device to which this subscriber is connected |
Using TOSCA to push to CORD
Once CORD is up and running, a node can be added to a POD using the TOSCA interface by uploading the following recipe:
Code Block |
---|
tosca_definitions_version: tosca_simple_yaml_1_0
description: Load a compute node in XOS
imports:
- custom_types/node.yaml
topology_template:
node_templates:
# A compute node
GratefulVest:
type: tosca.nodes.Node
properties:
name: Grateful Vest |
In TOSCA terminology, the above would be called a TOSCA node template.
Where to find the generated specs?
On any running CORD POD, the TOSCA APIs are accessible at:
Code Block |
---|
$ curl http://<head-node-ip>:<head-node-port>/xos-tosca | python -m json.tool
It will return a list of all the recipes with their related URLs:
{
"image": "/custom_type/image",
"site": "/custom_type/site",
...
}
For example, to see the TOSCA spec of the Site model, you can use the URL:
Code Block |
---|
$ curl http://<head-node-ip>:<head-node-port>/xos-tosca/custom_type/site |
If you have a running xos-tosca container, you can also find generated copies of the specs in /opt/xos-tosca/src/tosca/custom_types.
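For instance, you can list the generated spec files straight out of the running container; a sketch, assuming you look up the pod name first (it differs per installation):
Code Block |
---|
# Find the xos-tosca pod and list the generated custom types inside it
$ kubectl get pods | grep xos-tosca
$ kubectl exec -it <xos-tosca-pod-name> -- ls /opt/xos-tosca/src/tosca/custom_types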
How to load a TOSCA recipe in the system
The xos-tosca container exposes two endpoints:
Code Block |
---|
POST http://<cluster-ip>:<tosca-port>/run
POST http://<cluster-ip>:<tosca-port>/delete |
To load a recipe via curl, you can use this command:
Code Block |
---|
$ curl -H "xos-username: xosadmin@opencord.org" -H "xos-password: <xos-password>" -X POST --data-binary @<path/to/file> http://<cluster-ip>:<tosca-port>/run
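To undo a recipe, the same payload can be posted to the delete endpoint listed above; a sketch, assuming the delete endpoint accepts the same recipe format as run:
Code Block |
---|
# Remove the models created by a previously loaded recipe
$ curl -H "xos-username: xosadmin@opencord.org" -H "xos-password: <xos-password>" -X POST --data-binary @<path/to/file> http://<cluster-ip>:<tosca-port>/delete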
If you installed the xos-core charts without modifications, the tosca-port is 30007.
References
- SEBA installation guide: https://guide.opencord.org/profiles/seba/install.html
- If you have questions about the guide linked above, please join this group and ask them there: https://groups.google.com/a/opennetworking.org/forum/#!forum/seba-dev