
SDEWAN central controller provides central control of SDEWAN overlay networks by automatically configuring the SDEWAN CNFs located in edge location clusters and hub clusters:

  • To create secure overlays, where each overlay connects application clusters and hub clusters together.
  • To allow application connectivity with external entities and with entities of other clusters.

System Architecture

SDEWAN central controller includes the following components, as shown in the diagram below:

  • Web UI: an HTML5-based web UI that provides configuration of Application Cluster Registration, Hub Registration, Overlay, Application/Service Registration, and status tracking.
  • API Server: exposes RESTful APIs for Application Cluster management, Hub management, Overlay management, status monitoring management, and logging.
  • Scheduler Manager: a daemon service which accepts requests from the API server (through RPC) and then generates the relevant K8s CRs for the SDEWAN CNFs of the various hubs and edges to establish the tunnels.
  • SDEWAN Management DB: a database storing information such as edge clusters, hubs, overlays, IP addresses, applications/services, etc.


System Design

Assumption

IP

  • Central Cloud has a public IP, CIP
  • Traffic Hubs have public IPs HIP1, HIP2, ...
  • An Edge Location (Device) may have a public IP on one edge node (EIP1, ...) or no public IP (behind a gateway, EGIP1, ...)

Connections for the control plane (e.g. Central Cloud to k8s API server):

  • Central Cloud to Traffic Hub: direct connection through the Hub's public IP
  • Central Cloud to Edge Location:
    • Edge location has a public IP: direct connection through the Edge Location's public IP
    • Edge location has no public IP: use the Edge Location's owned hub's SDEWAN CNF as a proxy

IPsec tunnel modes for the data plane (data traffic):

  • Edge to Edge: host-to-host
  • Edge to Hub: host (edge) to site (hub), using the edge's subnet as rightsubnet
  • Hub to Hub: host-to-host

Environment Setup (Pre-condition)

Central Cloud:

  • K8s cluster is set up (by Kud)
  • Web UI (optional), API Server, Rsync backend, and DB service are deployed (manually or through EMCO)

Traffic Hub:

  • K8s cluster is set up (by Kud)
  • Hub SDEWAN CRD Controller and CNF are deployed (through EMCO) with an initial configuration (e.g. NAT: enable DNAT for the k8s API service and the Istio Ingress service).

Edge Location (With Public IP):

  • K8s cluster is set up (by Kud)
  • Edge SDEWAN CRD Controller and CNF are deployed (through EMCO) with an initial configuration (e.g. NAT: enable DNAT for the k8s API service and the Istio Ingress service).

Edge Location (With Private IP):

  • K8s cluster is set up (by Kud)
  • Edge SDEWAN CRD Controller and CNF are deployed (through EMCO) with an initial configuration (e.g. NAT: enable DNAT for the k8s API service and the Istio Ingress service; IPsec: as Initiator for the control plane - left: %any, leftsourceip: %config, right: owned Hub's HIP, rightsubnet: 0.0.0.0/0).
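The initiator settings for a private-IP edge can be represented as a plain structure. This is a sketch; the helper name and the example hub IP are illustrative, not part of an actual SDEWAN CR.

```python
# Sketch of the initial control-plane IPsec configuration carried by an
# edge without a public IP (as an initiator towards its owned hub).
# "203.0.113.10" stands in for the owned hub's HIP.
def initiator_conf(owned_hub_ip: str) -> dict:
    return {
        "left": "%any",             # edge has no fixed public IP
        "leftsourceip": "%config",  # request a virtual IP (OIP) from the hub
        "right": owned_hub_ip,      # owned hub's HIP
        "rightsubnet": "0.0.0.0/0", # route all overlay traffic via the hub
    }

conf = initiator_conf("203.0.113.10")
```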

RESTful API definition and Back-End flow

Resource: Overlay
Description: Defines a group of edge location clusters (devices) and hubs. An overlay is usually owned by one customer, and full-mesh connections are set up automatically between hubs and between devices (with public IPs).
URL: /scc/v1/overlays
Fields:

  • name
  • description
  • caid

Back-End flow (Registration):

  • SCC requests a CA from cert-manager; this CA is used as the root CA for the overlay
  • SCC saves the caid in DB
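As an illustration only, a client call against the overlay URL might be built as below. Only the /scc/v1/overlays path comes from the table; the host name, payload shape, and helper function are assumptions.

```python
import json

# Hedged sketch of constructing the overlay registration request.
def build_overlay_request(base_url: str, name: str, description: str, caid: str):
    """Return (url, body) for a POST to /scc/v1/overlays."""
    url = f"{base_url}/scc/v1/overlays"
    body = json.dumps({"name": name, "description": description, "caid": caid})
    return url, body

url, body = build_overlay_request("http://scc.example", "overlay1",
                                  "demo overlay", "ca-123")
```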
Resource: Proposal
Description: Defines proposals which can be used for IPsec tunnels in this overlay.
URL: /scc/v1/overlays/{overlay-name}/proposals
Fields:

  • name
  • description
  • encryption
  • hash
  • dhgroup

Back-End flow (Registration):

  • SCC saves the proposal information in DB
Resource: Hub
Description: Defines a traffic hub in an overlay.
URL: /scc/v1/overlays/{overlay-name}/hubs
Fields:

  • name
  • description
  • publicIps
  • certificateId
  • kubeConfig

Back-End flow (Registration):

  • SCC checks the hub's k8s API server access with kubeConfig for each IP in publicIps
  • For each already-registered hub in this overlay:
    • SCC requests cert-manager to generate a public/private key pair based on the overlay CA
    • SCC generates the IPsec CRs for the new hub and the registered hub, then calls rsync to deploy the CRs to set up a route-based host-host IPsec tunnel (with BGP/OSPF enabled):
      • All proposals in this overlay are used as candidate proposals for the IPsec configuration
      • The public/private key pair generated in the previous step is used as the IPsec cert
  • SCC saves the hub information in DB
Resource: IPRange
Description: Defines the overlay IP range which will be used for the OIPs of devices.
URL: /scc/v1/overlays/{overlay-name}/ipranges
Fields:

  • name
  • description
  • subnet
  • minIp
  • maxIp

Back-End flow (Registration):

  • SCC saves the IP range information in DB
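The "find one available OIP" step used by later flows can be sketched as a scan over [minIp, maxIp] inside the range's subnet. This is purely illustrative; the real SCC allocation logic is not specified here.

```python
import ipaddress

# Sketch: return the first unassigned address within [min_ip, max_ip]
# of the given subnet.
def allocate_oip(subnet: str, min_ip: str, max_ip: str, assigned: set) -> str:
    net = ipaddress.ip_network(subnet)
    lo = ipaddress.ip_address(min_ip)
    hi = ipaddress.ip_address(max_ip)
    for host in net.hosts():
        if lo <= host <= hi and str(host) not in assigned:
            return str(host)
    raise RuntimeError("IP range exhausted")

oip = allocate_oip("10.99.0.0/24", "10.99.0.10", "10.99.0.20", {"10.99.0.10"})
```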
Resource: Device
Description: Defines an edge location device, which may be a CNF, VNF, or PNF.
URL: /scc/v1/overlays/{overlay-name}/devices
Fields:

  • name
  • description
  • publicIps
  • forceHubConnectivity
  • proxyHub
  • proxyHubPort
  • useHub4Internet
  • dedicatedSFC
  • certificateId
  • kubeConfig

Back-End flow (Registration):

  • If the device has publicIps and forceHubConnectivity == false:
    • SCC checks the device's k8s API server access with kubeConfig for each IP in publicIps
    • For each already-registered device of this overlay:
      • SCC requests cert-manager to generate a public/private key pair based on the overlay CA
      • SCC generates the IPsec CRs for the new device and the registered device, then calls rsync to deploy the CRs to set up a host-host IPsec tunnel:
        • All proposals in this overlay are used as candidate proposals for the IPsec configuration
        • The public/private key pair generated in the previous step is used as the IPsec cert
  • Else:
    • (Assumption) Kud configures the device as an Initiator to proxyHub
    • SCC finds one available OIP from the overlay's IPRange, then configures and deploys (through rsync) an IPsec CR for proxyHub as Responder, with the OIP as the only candidate IP for the Initiator
      • Expectation: the IPsec tunnel between proxyHub and the device is set up, and the device gets the assigned OIP
    • SCC creates a DNAT CR (dst: HIP, dst_port: proxyHubPort rewritten to dst: OIP, dst_port: 6443) and deploys it to proxyHub (SCC auto-generates a proxyHubPort if one is not provided)
    • SCC checks the device's k8s API server access with kubeConfig via proxyHub:proxyHubPort
    • For each already-registered device with a public IP and forceHubConnectivity == false:
      • SCC requests cert-manager to generate a public/private key pair based on the overlay CA
      • SCC generates the IPsec CRs for the new device and the registered device (with a public IP), then calls rsync to deploy the CRs to set up a host-host IPsec tunnel:
        • All proposals in this overlay are used as candidate proposals for the IPsec configuration
        • The public/private key pair generated in the previous step is used as the IPsec cert
  • SCC saves the device information in DB
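The branch taken by the device registration flow can be summarized in a small helper. Names and return values are illustrative.

```python
# Sketch of the Device registration decision: direct host-host tunnels when
# the device has a public IP and forceHubConnectivity is false, otherwise
# the device connects as an initiator through its proxy hub.
def device_connectivity(public_ips: list, force_hub_connectivity: bool) -> str:
    if public_ips and not force_hub_connectivity:
        return "direct"   # host-host IPsec tunnels to other public devices
    return "via-hub"      # initiator to proxyHub; OIP assigned by responder

mode = device_connectivity(["198.51.100.7"], False)
```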
Resource: Hub-device connection
Description: Defines a connection between a hub and a device.
URL: /scc/v1/overlays/{overlay-name}/hubs/{hub-name}/devices/{device-name}
Fields:

  • N/A

Back-End flow (Create):

  • SCC finds one available OIP from the overlay's IPRange, then configures and deploys (through rsync) an IPsec CR for the hub as Responder, with the OIP as the only candidate IP for the Initiator
  • SCC configures and deploys an IPsec CR for the device as Initiator to the hub
    • Expectation: the IPsec tunnel between the hub and the device is set up, and the device gets the assigned OIP

Todo: Confirm whether the "ip route" rule for the OIP in this hub and all other hubs is set up automatically, or whether a new CR is needed to execute a Linux shell command on the host

Flow: Hub

Register Hub:

  • Trigger: Admin adds/updates hub information in the Web UI, or a Remote Client Call, with the following information:
    • Name, Description
    • Public IP address list
    • Managed IP ( ? )
    • Shared flag (whether the hub can be shared across overlays)
    • Overlay name
    • CertificateId
    • Kubeconfig
  • Steps:
    • Save in DB
    • Set up a control-plane host-host tunnel with the Central Cloud (e.g. add a new IPsec policy in the Central Cloud CNF with: left: CIP, right: HIP, CertificateId)
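The IPsec policy added in the Central Cloud CNF by the step above might be modeled as a plain dict. This is a sketch, not the actual SDEWAN CR schema; the example addresses are placeholders.

```python
# Sketch of the control-plane policy the hub registration step adds in the
# Central Cloud CNF: left is the Central Cloud's CIP, right is the hub's HIP.
def control_plane_policy(cip: str, hip: str, certificate_id: str) -> dict:
    return {"left": cip, "right": hip, "certificateId": certificate_id}

policy = control_plane_policy("192.0.2.1", "203.0.113.10", "cert-42")
```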

Opens:

  1. In the case of multiple public IPs, do we need to define which HIP (the Managed IP?) should be used for the connection with the Central Cloud? - Yes?

Flow: Edge Location

Register Edge Location:

  • Trigger: Admin adds/updates edge location information in the Web UI, or a Remote Client Call, with the following information:
    • Name, Description
    • External IP address (empty if no public IP)
    • Force-Hub-connectivity flag (valid only if the external public IP is not empty)
    • Use-Hub-for-internet-connectivity flag
    • Dedicated SFC flag
    • Number of overlay IP addresses
    • CertificateId
    • Kubeconfig
    • Owned Hub id
    • Owned Hub port (optional, used as a proxy for the Edge Location's k8s API server)
  • Steps:
    • Save in DB
    • If the public IP is not empty, set up a host-host tunnel with the Central Cloud (e.g. add a new IPsec policy in the Central Cloud CNF with: left: CIP, right: EIP, CertificateId)
    • If the public IP is empty, no further action (the tunnel is assumed to have been set up during edge location setup)
    • If the Owned Hub port is none, auto-assign a port, then set up a DNAT rule (if DesPort: Owned Hub port, change Destination IP: EOIP, DesPort: 443) in the SDEWAN CNF of the Owned Hub
    • Verify the connection to the Edge Location's k8s API server
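The DNAT step above can be sketched as follows. Field names are illustrative, not the real CR schema; the port values mirror the example in the text.

```python
# Sketch of the DNAT rule set up in the owned hub's SDEWAN CNF so the edge's
# k8s API server is reachable through the hub: traffic arriving on the
# assigned "Owned Hub port" is rewritten to the edge's overlay IP (EOIP).
def k8s_api_dnat(owned_hub_port: int, eoip: str, api_port: int = 443) -> dict:
    return {
        "match": {"dest_port": owned_hub_port},
        "rewrite": {"dest_ip": eoip, "dest_port": api_port},
    }

rule = k8s_api_dnat(30643, "10.99.0.11")
```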

Opens:

  1. The OIP for the control plane (with the Central Cloud) is generated by the Hub responder. Shall this OIP also be used for the data plane (e.g. edge1↔hub↔edge2) in the Add-edge-location flow of the overlay, and shall the "Number of overlay IP addresses" be used to block the Add-edge-location flow if exceeded?

Flow: Overlay

Add-basic-information:

  • Trigger: Admin adds/updates overlay information in the Web UI, or a Remote Client Call, with the following information:
    • Name, Description
    • CertificateId
    • Overlay IP ranges
  • Steps:
    • Save in DB

Opens:

  1. Can overlay IP ranges be the same for different overlays? (Suppose "yes", since edges belonging to different overlays will not communicate even if they share the same hub)
  2. How to prevent EMCO from deploying different microservices of one application into different overlays?

Add-hub:

  • Trigger: Admin adds/updates hub overlay information in the Web UI, or a Remote Client Call, with the following information:
    • Overlay name
    • Hub name
    • Hub IP (if the hub has more than one public IP)
    • Hub overlay IP ranges
  • Steps:
    • Save the hub list information in DB
    • Set up the hub-hub tunnel (data plane): e.g. left: HIP1, right: HIP2, overlay CertificateId
    • Set up the hub as the responder of edge-hub tunnels (data plane): e.g. left: HIP, leftsubnet: Hub overlay IP subnet, rightsourceip: Hub overlay IP ranges
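The full mesh implied by the hub-hub tunnels can be sketched as one tunnel per unordered hub pair, i.e. n*(n-1)/2 tunnels for n hubs. Hub names and IPs here are placeholders.

```python
from itertools import combinations

# Sketch: generate one (left, right) hub-hub tunnel per unordered pair.
def hub_mesh(hub_ips: dict) -> list:
    """hub_ips: {hub_name: HIP}. Returns one tunnel spec per hub pair."""
    return [
        {"left": hub_ips[a], "right": hub_ips[b]}
        for a, b in combinations(sorted(hub_ips), 2)
    ]

tunnels = hub_mesh({"hub1": "203.0.113.1",
                    "hub2": "203.0.113.2",
                    "hub3": "203.0.113.3"})
```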

Opens:

  1. Do we need to define overlay IP ranges specific to a hub, or use the overlay's IP range directly?
  2. Can 2 hubs set up 2 channels with different masks/interface ids? (Need to check)
  3. How to keep monitoring and restart an IPsec tunnel if it fails? - Enable IPsec DPD (Dead Peer Detection)


Add-edge-location:

  • Trigger: Admin adds/updates application cluster overlay information in the Web UI, or a Remote Client Call, with the following information:
    • Overlay name
    • Edge location name
    • Connected Hub name(s)
  • Steps:
    • Save the application cluster overlay information in DB
    • Set up an edge-hub tunnel with the first hub (data plane): e.g. as Initiator - left: %any, leftsourceip: %config, right: HIP, rightsubnet: 0.0.0.0/0, overlay CertificateId
    • Get the assigned OIP, save it to DB, and broadcast it to the other hubs (add it to the exclude list of their responders - need to check how to do this)
    • Set up edge-hub tunnels with all hubs (data plane): e.g. as host-host tunnels
      • Edge - left: EOIP, right: HIP, overlay's CertificateId
      • Hub - left: HIP, right: EOIP, overlay's CertificateId
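The symmetric host-host configuration in the last step can be sketched as a mirrored pair of specs, one per side. Field names and addresses are illustrative.

```python
# Sketch of the mirrored edge-hub host-host tunnel once the edge has its
# assigned overlay IP (EOIP): the edge's left is the hub's right and
# vice versa.
def edge_hub_pair(eoip: str, hip: str) -> tuple:
    edge_side = {"left": eoip, "right": hip}
    hub_side = {"left": hip, "right": eoip}
    return edge_side, hub_side

edge_side, hub_side = edge_hub_pair("10.99.0.11", "203.0.113.1")
```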

Opens:

  1. Suppose an edge location can only belong to one overlay at a time? - Yes; and a hub only belongs to one overlay, right?
  2. Can an edge location connect to more than one hub? If yes, can it be assigned multiple OIPs from different hubs? - Yes
  3. For an edge with a public IP, does it need an Initiator-responder tunnel or a host-host tunnel with the hub?
  4. Is configuration needed in the Overlay to configure an edge-edge tunnel (where one edge has a public IP), and in which flow?

Flow: Application Connection

Add-application-rule:

  • Trigger: Admin adds/updates an application rule in the Web UI, or a Remote Client (ONAP4K8s connectivity orchestrator?) Call, with the following information:
    • Application name
    • Deployed edge location name
    • Firewall/SNAT/DNAT
  • Steps:
    • Save the application rule information in DB
    • Set up openwrt rules in the SDEWAN CNF of the edge location

Add-application2application-rule:

  • Trigger: Admin adds/updates an application-to-application rule in the Web UI, or a Remote Client (ONAP4K8s connectivity orchestrator?) Call, with the following information:
    • Application1 name
    • Application2 name, port
    • Edge location1 names for application1
    • Edge location2 names for application2
  • Steps:
    • Save the application2application rule information in DB
    • Set up the openwrt SNAT rule of the SDEWAN CNF for edge location1
    • Set up the openwrt DNAT rule of the SDEWAN CNF for edge location2
    • Set up ip route rules in the hubs between edge location1 and edge location2
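The SNAT/DNAT rule pair from the steps above might be modeled as below. All addresses, ports, and field names are illustrative, not the real openwrt rule schema.

```python
# Sketch of the rule pair for application1 -> application2 traffic:
# SNAT on edge location1 so return traffic comes back through the tunnel,
# DNAT on edge location2 to steer traffic to application2's service port.
def app2app_rules(edge1_oip: str, app2_ip: str, app2_port: int) -> dict:
    return {
        "edge1_snat": {"src_rewrite": edge1_oip},  # masquerade as edge1's OIP
        "edge2_dnat": {"dest_ip": app2_ip, "dest_port": app2_port},
    }

rules = app2app_rules("10.99.0.11", "10.244.1.5", 8080)
```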

Opens:

  1. Is the registration of application/microservice connection information done by the Admin manually, or triggered automatically by EMCO's deployment process (assuming similar information is shared)?

Error handling

DB Schema

Module Design

Task Breakdowns

Tasks and Descriptions (Due, Owner, and Status columns not yet filled in):

  • Scheduler Manager
    • Overlay: Set up tunnels for hubs and edges - Generates relevant K8s CRs of SDEWAN CNFs of various hubs and edges to establish the tunnels
    • IP Address manager - Assigns/frees IP addresses from "overlay IP ranges" and dedicates them to that cluster
    • Application connectivity scheduler - Creates K8s resources required to be pushed into the edges and corresponding traffic hubs to facilitate the connectivity
    • Resource Synchronizer
    • CNF
  • API Server
    • Rest API Backend - Rest API server framework
    • DB Backend - Proxy to DB
    • Application Cluster management
    • Hub management
    • Overlay management
    • Status monitoring management
    • Logging
  • Web UI
    • Web UI framework
    • Application Cluster Registration
    • Hub Registration
    • Overlay
    • Application/Service Registration
    • Status tracking
  • EMCO plugin for SDEWAN
  • E2E Integration - Integration test of overall system