Goal
The Sdewan config agent is the controller of the Sdewan CRDs. With the config agent, we are able to deploy CNFs. This page uses the following terms, defined here:
- CNF Deployment: A deployment running the network function process (openWRT)
- Sdewan rule: A rule defines how the CNF behaves. We have 3 classes of rules: mwan3, firewall and ipsec. Each class includes several kinds of rules. For example, mwan3 has 2 kinds: mwan3_policy and mwan3_rule. Firewall has 5 kinds: firewall_zone, firewall_snat, firewall_dnat, firewall_forwarding and firewall_rule. Ipsec has xx(ruoyu) kinds: xx, xx.
- Sdewan rule CRD: The CRD that defines each kind of Sdewan rule. For each kind of Sdewan rule, we have a Sdewan rule CRD. A Sdewan rule CRD is a namespaced resource.
- Sdewan rule CR: An instance of a Sdewan rule CRD.
- Sdewan controller: The controller that watches Sdewan rule CRs.
- CNF: A network function running in a container.
To deploy a CNF, the user creates one CNF deployment and some Sdewan rule CRs. In a Kubernetes namespace, there can be more than one CNF deployment and many Sdewan rule CRs. We use labels to correlate a CNF with its Sdewan rule CRs. The Sdewan controller watches the Sdewan rule CRs and applies them onto the correlated CNF by calling the CNF REST API.
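The label correlation described above can be sketched as follows (a minimal illustration using plain dicts in place of the real API objects; the actual controller queries the Kubernetes API with a label selector):

```python
def rules_for_cnf(deployment, rule_crs):
    """Return the rule CRs correlated with a CNF deployment.

    A rule CR is correlated when its sdewanPurpose label matches the
    deployment's sdewanPurpose label. Dict shapes are illustrative only.
    """
    purpose = deployment["metadata"]["labels"]["sdewanPurpose"]
    return [
        cr for cr in rule_crs
        if cr["metadata"]["labels"].get("sdewanPurpose") == purpose
    ]

deployment = {"metadata": {"labels": {"sdewanPurpose": "cnf-1"}}}
rule_crs = [
    {"metadata": {"name": "balance1", "labels": {"sdewanPurpose": "cnf-1"}}},
    {"metadata": {"name": "other", "labels": {"sdewanPurpose": "cnf-2"}}},
]
print([cr["metadata"]["name"] for cr in rules_for_cnf(deployment, rule_crs)])
# → ['balance1']
```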
Sdewan Design Principles
- There could be multiple tenants/namespaces in a Kubernetes cluster. Users may deploy multiple CNFs in any one or more tenants.
- A CNF deployment may have more than one replica for active/backup purposes. We should apply rules to all the pods under a CNF deployment. (This release does not implement VRRP between pods.)
- CNF deployment and Sdewan rule CRs can be created/updated/deleted in any order
- The Sdewan controller and the CNF process could crash/restart at any time for various reasons. We need to handle these scenarios
- Each Sdewan rule CR has labels to identify the type it belongs to. 3 types are available at this time: `basic`, `app-intent` and `k8s-service`. We extend the k8s user role permission so that we can set user permission at the type level of Sdewan rule CRs.
- Sdewan rule CR dependencies are checked on creating/updating/deleting. For example, if we create a mwan3_rule CR which uses policy `policy-x`, but no mwan3_policy CR named `policy-x` exists, then we block the request.
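The dependency check can be sketched roughly like this (a hypothetical helper for illustration; in the design, the real check runs in the admission webhook):

```python
def check_rule_dependencies(rule_cr, existing_policies):
    """Return True if the mwan3_rule's referenced policy exists.

    existing_policies is the set of mwan3_policy CR names in the same
    namespace (illustrative shape, not the real API objects).
    """
    return rule_cr["spec"]["policy"] in existing_policies

rule = {"spec": {"policy": "policy-x"}}
print(check_rule_dependencies(rule, {"balance1"}))   # → False: request blocked
print(check_rule_dependencies(rule, {"policy-x"}))   # → True: request allowed
```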
CNF Deployment
In this section we describe what the CNF deployment should be like, as well as the pod under the deployment.
- The CNF pod should have multiple network interfaces attached. We use the multus and ovn4nfv CNIs to enable multiple interfaces. So in the CNF pod yaml, we set the annotations `k8s.v1.cni.cncf.io/networks` and `k8s.plugin.opnfv.org/nfn-network`.
- When a user deploys a CNF, she/he most likely wants to deploy the CNF on a specified node instead of a random node, because some nodes may not have the provider network connected. So we set `spec.nodeSelector` for the pod.
- The CNF pod runs the Sdewan CNF (based on openWRT in ICN). We use the image `integratedcloudnative/openwrt:dev`.
- The CNF pod should be set up with a readiness probe. The Sdewan controller checks pod readiness before calling the CNF REST API.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cnf-1
  namespace: default
  labels:
    sdewanPurpose: cnf-1
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        k8s.plugin.opnfv.org/nfn-network: |-
          { "type": "ovn4nfv", "interface": [
            { "defaultGateway": "false", "interface": "net0", "name": "ovn-priv-net" },
            { "defaultGateway": "false", "interface": "net1", "name": "ovn-provider-net1" },
            { "defaultGateway": "false", "interface": "net2", "name": "ovn-provider-net2" }
          ]}
        k8s.v1.cni.cncf.io/networks: '[{ "name": "ovn-networkobj"}]'
    spec:
      containers:
      - command:
        - /bin/sh
        - /tmp/sdewan/entrypoint.sh
        image: integratedcloudnative/openwrt:dev
        name: sdewan
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /
            port: 80
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        securityContext:
          privileged: true
          procMount: Default
        volumeMounts:
        - mountPath: /tmp/sdewan
          name: example-sdewan
          readOnly: true
      nodeSelector:
        kubernetes.io/hostname: ubuntu18
```
Sdewan rule CRs
A CRD defines all the properties of a resource, but it is not human friendly to read. So we paste Sdewan rule CR samples here instead of the CRDs.
- Each Sdewan rule CR has a label named `sdewanPurpose` to indicate which CNF the rule should be applied onto.
- Each Sdewan rule CR has a `status` field which indicates whether the latest rule is applied and when it was applied.
- `Mwan3Policy.spec.members[].network` should match the networks defined in the CNF pod annotation `k8s.plugin.opnfv.org/nfn-network`, as should `FirewallZone.spec[].network`.
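The network-name consistency check above could be sketched like this (illustrative only; the annotation parsing and validator shape are assumptions, not the actual implementation):

```python
import json

NFN_ANNOTATION = "k8s.plugin.opnfv.org/nfn-network"

def policy_networks_valid(policy_cr, cnf_pod):
    """Check that every member network in a Mwan3Policy CR is one of the
    networks attached to the CNF pod via the nfn-network annotation
    (hypothetical validator, illustrative dict shapes)."""
    nfn = json.loads(cnf_pod["metadata"]["annotations"][NFN_ANNOTATION])
    attached = {iface["name"] for iface in nfn["interface"]}
    return all(m["network"] in attached for m in policy_cr["spec"]["members"])

pod = {"metadata": {"annotations": {NFN_ANNOTATION: json.dumps({
    "type": "ovn4nfv",
    "interface": [{"interface": "net0", "name": "ovn-net1"},
                  {"interface": "net1", "name": "ovn-net2"}]})}}}
policy = {"spec": {"members": [{"network": "ovn-net1"}, {"network": "ovn-net2"}]}}
print(policy_networks_valid(policy, pod))  # → True
```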
CR samples of Mwan3 type:
```yaml
apiVersion: batch.sdewan.akraino.org/v1alpha1
kind: Mwan3Policy
metadata:
  name: balance1
  namespace: default
  labels:
    sdewanPurpose: cnf-1
spec:
  members:
  - network: ovn-net1
    weight: 2
    metric: 2
  - network: ovn-net2
    weight: 3
    metric: 3
status:
  appliedVersion: "2"
  appliedTime: "2020-03-29T04:21:48Z"
  inSync: True
```

```yaml
apiVersion: batch.sdewan.akraino.org/v1alpha1
kind: Mwan3Rule
metadata:
  name: http_rule
  namespace: default
  labels:
    sdewanPurpose: cnf-1
spec:
  policy: balance1
  src_ip: 192.168.1.2
  dest_ip: 0.0.0.0/0
  dest_port: 80
  proto: tcp
status:
  appliedVersion: "2"
  appliedTime: "2020-03-29T04:21:48Z"
  inSync: True
```
CR samples of Firewall type:
```yaml
apiVersion: batch.sdewan.akraino.org/v1alpha1
kind: FirewallZone
metadata:
  name: lan1
  namespace: default
  labels:
    sdewanPurpose: cnf-1
spec:
  network:
  - ovn-net1
  input: ACCEPT
  output: ACCEPT
status:
  appliedVersion: "2"
  appliedTime: "2020-03-29T04:21:48Z"
  inSync: True
```

```yaml
apiVersion: batch.sdewan.akraino.org/v1alpha1
kind: FirewallRule
metadata:
  name: reject_80
  namespace: default
  labels:
    sdewanPurpose: cnf-1
spec:
  src: lan1
  src_ip: 192.168.1.2
  src_port: 80
  proto: tcp
  target: REJECT
status:
  appliedVersion: "2"
  appliedTime: "2020-03-29T04:21:48Z"
  inSync: True
```

```yaml
apiVersion: batch.sdewan.akraino.org/v1alpha1
kind: FirewallSNAT
metadata:
  name: snat_lan1
  namespace: default
  labels:
    sdewanPurpose: cnf-1
spec:
  src: lan1
  src_ip: 192.168.1.2
  src_dip: 1.2.3.4
  dest: wan1
  proto: icmp
status:
  appliedVersion: "2"
  appliedTime: "2020-03-29T04:21:48Z"
  inSync: True
```

```yaml
apiVersion: batch.sdewan.akraino.org/v1alpha1
kind: FirewallDNAT
metadata:
  name: dnat_wan1
  namespace: default
  labels:
    sdewanPurpose: cnf-1
spec:
  src: wan1
  src_dport: 19900
  dest: lan1
  dest_ip: 192.168.1.1
  dest_port: 22
  proto: tcp
status:
  appliedVersion: "2"
  appliedTime: "2020-03-29T04:21:48Z"
  inSync: True
```

```yaml
apiVersion: batch.sdewan.akraino.org/v1alpha1
kind: FirewallForwarding
metadata:
  name: forwarding_lan_to_wan
  namespace: default
  labels:
    sdewanPurpose: cnf-1
spec:
  src: lan1
  dest: wan1
status:
  appliedVersion: "2"
  appliedTime: "2020-03-29T04:21:48Z"
  inSync: True
```
CR samples of IPSec type(ruoyu):
Sdewan rule CRD Reconcile Logic
Although we have many kinds of CRDs, they share almost the same reconcile logic. So we only describe the Mwan3Rule logic here.
Mwan3Rule Reconcile could be triggered by the following cases:
- Create/Update/Delete Mwan3Rule CR
- CNF deployment ready status change (with the predicate feature, we can watch only the CNF deployment readiness status; with EnqueueRequestsFromMapFunc, we can enqueue all Mwan3Rule CRs with the specified `labels.sdewanPurpose` when the CNF deployment's ready status changes):
  - CNF becomes ready after creation
  - CNF becomes ready after restart
  - CNF becomes not-ready after crash
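The readiness-change predicate could look roughly like this (a Python sketch of the idea; the real implementation is a controller-runtime predicate in Go, and the field names here are illustrative):

```python
def readiness_changed(old_deployment, new_deployment):
    """Predicate sketch: fire only when the deployment transitions
    between fully-ready and not-ready, so that routine updates do not
    trigger a reconcile of every matching rule CR."""
    def ready(d):
        # A deployment is "ready" when all desired replicas are ready.
        return d["status"].get("readyReplicas", 0) == d["spec"]["replicas"]
    return ready(old_deployment) != ready(new_deployment)

old = {"spec": {"replicas": 2}, "status": {"readyReplicas": 1}}
new = {"spec": {"replicas": 2}, "status": {"readyReplicas": 2}}
print(readiness_changed(old, new))  # → True (not-ready -> ready)
print(readiness_changed(new, new))  # → False (no transition)
```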
Mwan3Rule Reconcile flow:
```
def Mwan3RuleReconciler.Reconcile(req ctrl.Request):
    rule_cr = k8sClient.get(req.NamespacedName)
    cnf_deployment = k8sClient.get_deployment_with_label(rule_cr.labels.sdewanPurpose)
    if rule_cr DeletionTimestamp exists:
        # The CR is being deleted; there is a finalizer on the CR
        if cnf_deployment exists:
            if cnf_deployment is ready:
                for cnf_pod in cnf_deployment:
                    err = openwrt_client.delete_rule(cnf_pod_ip, rule_cr)
                    if err:
                        return "re-queue req"
                rule_cr.finalizer = nil
                return "ok"
            else:
                return "re-queue req"
        else:
            # Just remove the finalizer, because no CNF pod exists
            rule_cr.finalizer = nil
            return "ok"
    else:
        # The CR is not being deleted
        if cnf_deployment not exist:
            return "ok"
        else:
            if cnf_deployment not ready:
                # set appliedVersion = nil when cnf_deployment gets into not_ready status
                rule_cr.status.appliedVersion = nil
                return "re-queue req"
            else:
                for cnf_pod in cnf_deployment:
                    runtime_cr = openwrt_client.get_rule(cnf_pod_ip)
                    if runtime_cr != rule_cr:
                        err = openwrt_client.add_or_update_rule(cnf_pod_ip, rule_cr)
                        if err:
                            # err could be caused by dependencies not applied, or other reasons
                            return "re-queue req"
                # set appliedVersion only when it's applied for all the cnf pods
                rule_cr.finalizer = cnf_finalizer
                rule_cr.status.appliedVersion = rule_cr.resourceVersion
                rule_cr.status.inSync = True
                return "ok"
```
Unusual Cases
- Controller goes down -> Create CNF Deployment and rule CRs -> Controller goes up
- No reconcile is executed before the controller goes up. The rule CRs have empty status, and no rules are applied to the CNF deployment
- Once the controller goes up, it reconciles every rule CR. In the reconcile function, the rules are applied and each rule CR's status.appliedVersion is updated
- Controller goes down -> delete rule CRs -> Controller goes up
- While the controller is down, rules are not deleted from the CNF deployment. The rule CRs are not deleted from the k8s etcd because of the finalizer.
- Once the controller goes up, it reconciles every rule CR. It calls the CNF API to delete the rules and removes the CR finalizer.
- CNF deployment goes to not-ready -> after some time -> CNF deployment goes to ready status
- As the CNF deployment goes to not-ready, the controller reconciles every CR that matches the CNF deployment and sets status.appliedVersion=nil.
- Once the CNF deployment goes to ready status, the controller receives the event and reconciles every rule CR. It applies the rules and sets status.appliedVersion.
- Controller goes down -> CNF deployment pod restart -> Controller goes up
- While the controller is down, the CNF pod is restarted, and the rules no longer exist in the restarted pod. So all the related rule CRs' status.appliedVersion should be set to nil. But the controller is down, so it can't receive the CNF down/up events.
- When the controller goes up, it reconciles every rule CR, but it doesn't know the CNF has ever restarted. This is a problem, so we can't use CR status.appliedVersion to record whether the rule is applied. Instead, we should always call the CNF API to get the existing rule, compare it, and apply only if the existing rule in the CNF differs from the CR definition.
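The get-compare-apply approach above can be sketched as follows (a minimal sketch; the hypothetical in-memory dict stands in for the openwrt_client get_rule/add_or_update_rule REST calls):

```python
def sync_rule(cnf_rules, rule_cr):
    """Idempotent apply: fetch the rule currently in the CNF, compare it
    with the CR spec, and push only when they differ.

    cnf_rules is an in-memory stub for the CNF REST API (illustration
    only); the status field is intentionally not trusted."""
    name = rule_cr["metadata"]["name"]
    existing = cnf_rules.get(name)          # openwrt_client.get_rule(...)
    if existing == rule_cr["spec"]:
        return "unchanged"
    cnf_rules[name] = rule_cr["spec"]       # openwrt_client.add_or_update_rule(...)
    return "applied"

cnf = {}  # empty, as after a pod restart
cr = {"metadata": {"name": "http_rule"},
      "spec": {"policy": "balance1", "dest_port": 80}}
print(sync_rule(cnf, cr))  # → applied
print(sync_rule(cnf, cr))  # → unchanged
```

Because the comparison is against the CNF's runtime state rather than the CR status, a restart that the controller never observed still converges on the next reconcile.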
Admission Webhook Usage
We use an admission webhook to implement several features:
- Prevent creating more than one CNF with the same label in the same namespace
- Validate CR dependencies. For example, a mwan3 rule depends on a mwan3 policy
- Extend user permissions to control the operations on rule CRs. For example, we can ensure that ONAP can't update/delete rule CRs created by the platform.
Sdewan rule CR type level Permission Implementation
K8s supports permission control at the namespace level. For example, user1 may be able to create/update/delete one kind of resource (e.g. pod) in namespace ns1, but not in namespace ns2. For Sdewan, this can't fit our requirement. We want type-level control of Sdewan rule CRs. For example, user_onap can create/update/delete Mwan3Rule CRs with label `sdewan-bucket-type=app-intent`, but not with label `sdewan-bucket-type=basic`.
Let me first describe the extended permission system and then explain how we implement it. In k8s, a user or serviceAccount can be bound to one or more roles. The roles define the permissions. For example, the following role defines that the `sdewan-test` role can create/update Mwan3Rule CRs in the `default` namespace. The `sdewan-test` role can also get Mwan3Policy CRs.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
  name: sdewan-test
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - mwan3rules
  verbs:
  - create
  - update
- apiGroups:
  - ""
  resources:
  - mwan3policies
  verbs:
  - get
```
We extend the Role with annotations. In the annotation, we can define label-based permissions. For example, the following role extends the `sdewan-test` role permission: `sdewan-test` can only create/update Mwan3Rule CRs with label `sdewan-bucket-type=app-intent` or `sdewan-bucket-type=k8s-service`. Also, it can only get Mwan3Policy CRs with label `sdewan-bucket-type=app-intent`.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    sdewan-bucket-type-permission: |-
      { "mwan3rules": ["app-intent", "k8s-service"],
        "mwan3policies": ["app-intent"] }
  name: sdewan-test
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - mwan3rules
  verbs:
  - create
  - update
- apiGroups:
  - ""
  resources:
  - mwan3policies
  verbs:
  - get
```
We use the admission webhook to implement the type-level permission control. Let me describe the admission webhook in simple words: when the k8s API server receives a request, kube-api calls the webhook API before saving the object into etcd. If the webhook returns `allowed=true`, kube-api continues to persist the object into etcd; otherwise, kube-api rejects the request. The webhook can optionally tell kube-api to update the object together with the returned `allowed=true`. The webhook request body has a field named userInfo, which indicates who is making the k8s API request. With this field, we can implement the extended permission in the webhook.
```
def mwan3rule_webhook_handle_permission(req admission.Request):
    userinfo = req["userInfo"]
    mwan3rule_cr = decode(req)
    roles = k8s_client.get_role_from_user(userinfo)
    for role in roles:
        if mwan3rule_cr.labels.sdewan-bucket-type in role.annotation.sdewan-bucket-type-permission.mwan3rules:
            return {"allowed": True}
    return {"allowed": False}
```
ServiceRule controller (For next release)
We create a controller that watches the services created in the cluster. For each service, it creates a FirewallDNAT CR. On controller startup, it performs a sync-up to remove unused CRs.
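A rough sketch of what such a controller might generate (the CR shape follows the FirewallDNAT sample above; the zone names and the port mapping choice are illustrative assumptions, not decided design):

```python
def dnat_cr_for_service(service, purpose="cnf-1"):
    """Build a FirewallDNAT CR dict for a k8s Service, forwarding the
    service's first port from the wan zone to its cluster IP
    (hypothetical mapping, for illustration only)."""
    port = service["spec"]["ports"][0]
    return {
        "apiVersion": "batch.sdewan.akraino.org/v1alpha1",
        "kind": "FirewallDNAT",
        "metadata": {
            "name": "dnat-" + service["metadata"]["name"],
            "namespace": service["metadata"]["namespace"],
            "labels": {"sdewanPurpose": purpose},
        },
        "spec": {
            "src": "wan1",
            "src_dport": port["port"],
            "dest": "lan1",
            "dest_ip": service["spec"]["clusterIP"],
            "dest_port": port["targetPort"],
            "proto": port.get("protocol", "TCP").lower(),
        },
    }

svc = {"metadata": {"name": "web", "namespace": "default"},
       "spec": {"clusterIP": "10.96.0.10",
                "ports": [{"port": 80, "targetPort": 8080, "protocol": "TCP"}]}}
cr = dnat_cr_for_service(svc)
print(cr["metadata"]["name"], cr["spec"]["dest_ip"])  # → dnat-web 10.96.0.10
```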
References
- https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/doc.go
- https://book.kubebuilder.io/reference/using-finalizers.html
- https://godoc.org/sigs.k8s.io/controller-runtime/pkg/predicate#example-Funcs
- https://godoc.org/sigs.k8s.io/controller-runtime/pkg/handler#example-EnqueueRequestsFromMapFunc