
Motivation

In ICN, ONAP4K8s acts as the service orchestrator and Kubernetes as the resource orchestrator. In an edge deployment, multiple end users share the same edge compute resources. The challenges are to isolate each end user's deployments and to allocate resources according to their demand and quota. This proposal addresses these challenges by creating a "Logical cloud" for a set of users, providing logical isolation and resource quotas.

Goal (In Scope)

Focus on a solution within the service orchestration layer.

Goal (Out of Scope)

Changes to the Kubernetes core or API are clearly out of scope for this document. Solutions exist that provide a separate control plane to each tenant within a cluster, but creating tenants inside a single cluster does not address shared clusters; tenant creation should happen at the service orchestration level instead of the resource orchestration level.

Outline

In this section, we define the Logical cloud in general for the service orchestration engine. A logical cloud can be defined as a bounded and isolated amount of compute, storage, networking and control plane resources in a Kubernetes cluster. It can also be seen as a group of users slicing a bounded set of resources allocated to them. These resources can be as follows:

  • CPU, memory, extended resources
  • Network bandwidth, I/O bandwidth, resource orchestration (Kubernetes) cluster resources
  • Resource reservation to provide Guaranteed QoS in the resource orchestrator (Kubernetes) (see the sketch after this list)
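
The last bullet maps onto the Kubernetes QoS model: when a container's resource requests equal its limits for every resource, the pod is placed in the Guaranteed QoS class and its resources are effectively reserved. A minimal sketch; the pod name and image below are placeholders, not part of this proposal:

Guaranteed QoS reservation (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-qos-example   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx                 # placeholder image
    resources:
      requests:                  # requests == limits for every container => Guaranteed QoS
        cpu: "2"
        memory: 4Gi
      limits:
        cpu: "2"
        memory: 4Gi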

Requirement

  1. For a service provider, a logical cloud is essentially a group of end users sharing the same cluster; we have to make sure that end-user resources are tracked and accounted for their consumption in a cluster
  2. In a few cases, an admin or end-user application is shared among multiple logical clouds; in such cases the application's resources should be tracked across the clusters
  3. A centralized record of resource quotas and allocation limits should be maintained for the admin and for each end user. For example, a single "resource query" API call to the service orchestrator (ONAP4K8s) should display the resource quota and policy for each end user or logical cloud
  4. In the edge use case, the service orchestrator (ICN) should get resource details across multiple clusters from the resource orchestrator, set the resource allocation for each cluster and decide the scheduling mechanism
  5. User credentials should be centralized with the application orchestration

Distributed Cloud Manager

Objectives:

  • User creation
  • Namespace creation
  • Logical cloud creation 
  • Resource isolation


Assumption:

  1. During cluster registration with ONAP4K8s, the ONAP4K8s HPA feature associates each cluster with labels
  2. These labels are used by the DCM to identify the clusters and create the logical cloud (an illustrative example of such labels follows this list)
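
For illustration only, the label data associated with registered clusters might look like the following; the exact schema is not defined by this proposal, and the cluster names and labels here are assumptions drawn from the examples later in this page:

cluster labels (illustrative)
clusters:
- name: c1
  labels: ["abc", "cascade", "sriov", "qat"]
- name: c2
  labels: ["abc", "xyz", "sky-lake", "sriov"]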

DCM Block diagram

DCM Flow:

  1. All components are exposed as DCM microservices and are queried through REST APIs
  2. The DCM user controller microservice API is used to create the users, the logical cloud admin and their associated logical namespace in each cluster selected by the cluster labels
  3. The DCM manager looks for quota information; if no quota information is available, it applies a default quota for memory, CPU and Kubernetes resources (a sketch of such a default follows this list)
  4. The DCM manager microservice queries the database to create the users, namespaces and the security controller root CA
  5. The DCM manager creates the logical cloud's Istio control plane using the namespace and the security controller root CA
  6. The quota for the logical cloud can be tuned even after the logical cloud has been created
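
A minimal sketch of such a default quota, expressed as a plain Kubernetes ResourceQuota in the logical cloud namespace; the object name and values are illustrative defaults, not something this proposal defines:

default resource quota (illustrative)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-logical-cloud-quota   # hypothetical name
  namespace: logical-cloud-1-ns
spec:
  hard:
    requests.cpu: "100"
    requests.memory: 500Gi
    limits.cpu: "100"
    limits.memory: 500Gi
    pods: "100"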

Logical cloud creation (with default resource quota & users)

The following API call creates a logical cloud together with its resource quota and users:

logical cloud creation
URL: /v2/projects/<project-name>/logical-clouds
POST BODY:
{
 "name": "logical-cloud-1",   //unique name for the new logical cloud
 "description": "logical cloud for walmart finance department",  //description for the logical cloud
 "cluster-labels": "abc,xyz",
 "resources": {
    "cpu": "400",
    "memory": "1000Gi",
    "pods": "500",
    "dummy/dummyResource": 100
 },
 "user" : [{
    "name" : "user-1",  //name of user for this cloud
    "type" : "certificate",   //type of authentication credentials used by user (certificate, APIKey, UNPW)
    "certificate" : "/path/to/user1/logical-cloud-1-user1.csr",  //path to user certificate
    "permissions" : {
       "apiGroups" : ["stable.example.com"],
       "resources" : ["secrets", "pods"],
       "verbs" : ["get", "watch", "list", "create"]
     },
    "quota" : {
       "cpu": "100",
       "memory": "500Gi",
       "pods": "100",
       "dummy/dummyResource": 20
    }
 }]
}

curl -d @create_logical_cloud-1.json http://onap4k8s:<multicloud-k8s_NODE_PORT>/v2/projects/<project-name>/logical-clouds \
	--key ./logical-cloud-t1-admin-key.pem \
	--cert ./logical-cloud-t1-admin.pem

Return Status: 201
Return Body:
{
  "name" : "logical-cloud-1",
  "user" : "user-1",
  "Message" : "logical cloud and associated user successfully created"
}

Creating users:

The logical cloud admin key and certificate should be created by the logical cloud admin (the one who issues the curl command). Authentication is required for the curl command: DCM must hold the admin logical cloud information to authenticate it, and unauthorized users cannot create a logical cloud.

How should the user certificate be created?

  • Create a private key for user-1
    • openssl genrsa -out logical-cloud-1-user-1.key 2048
  • Create a certificate signing request logical-cloud-1-user1.csr
    • openssl req -new -key logical-cloud-1-user-1.key -out logical-cloud-1-user1.csr -subj "/CN=user-1/O=logical-cloud-1"

These artifacts should be created before the logical cloud is created and supplied in the logical cloud creation request.

Binding the user certificates with the cluster

The user controller performs the following steps to bind the user certificate with the clusters matching the cluster labels abc and xyz. (TODO: obtain the GET URL from the HPA controller that returns the cluster list for a given set of cluster labels.)

DCM queries the HPA controller for the list of clusters carrying the cluster label abc and gets the cluster list c1 and c2.

Each cluster (c1, c2) has its Kubernetes cluster CA (c1-ca.crt & c1-ca.key). The user controller generates the final certificate logical-cloud-1-user1-c1.crt from logical-cloud-1-user1.csr (and does the same for cluster c2). Once the logical cloud curl command is posted, the user controller performs the following steps through gRPC with the Go client API:

$ openssl x509 -req -in logical-cloud-1-user1.csr -CA CA_LOCATION/c1-ca.crt -CAkey CA_LOCATION/c1-ca.key -CAcreateserial -out logical-cloud-1-user1-c1.crt -days 500

$ kubectl --kubeconfig=/path/to/c1/kubeconfig config set-credentials user-1 --client-certificate=./logical-cloud-1-user1-c1.crt --client-key=./logical-cloud-1-user-1.key


The following API lists the users of a logical cloud:

logical cloud users
GET URL: /v2/projects/<project-name>/logical-clouds/<name>/users
RETURN STATUS: 200
RETURN BODY:
{
  "users" : [{
    "name" : "user-1",  //name of user for this cloud
    "type" : "certificate",   //type of authentication credentials used by user (certificate, APIKey, UNPW)
    "certificate" : "/path/to/user1/logical-cloud-1-user1.csr",  //path to user certificate
    "permissions" : {
       "apiGroups" : ["stable.example.com"],
       "resources" : ["secrets", "pods"],
       "verbs" : ["get", "watch", "list", "create"]
     },
    "quota" : {
       "cpu": "100",
       "memory": "500Gi",
       "pods": "100",
       "dummy/dummyResource": 20
    }
  },
  {
    "name" : "user-2",  //name of user for this cloud
    "type" : "certificate",   //type of authentication credentials used by user (certificate, APIKey, UNPW)
    "certificate" : "/path/to/user2/logical-cloud-1-user2.csr",  //path to user certificate
    "permissions" : {
       "apiGroups" : ["stable.example.com"],
       "resources" : ["secrets", "pods"],
       "verbs" : ["get", "watch", "list", "create"]
     },
    "quota" : {
       "cpu": "100",
       "memory": "500Gi",
       "pods": "100",
       "dummy/dummyResource": 20
    }
  }]
}

DELETE
URL: /v2/projects/<project-name>/logical-clouds/<name>/users
URL: /v2/projects/<project-name>/logical-clouds/<name>/users/<user-name>
 
RETURN STATUS: 204

Creating namespaces:

DCM queries the namespace controller through gRPC to create the namespace "logical-cloud-1-ns" in the clusters with cluster labels abc and xyz. The namespace controller performs the following steps to create the namespace and associate the user with it, through gRPC with the Go client API:

$ kubectl create namespace logical-cloud-1-ns --kubeconfig=/path/to/c1/kubeconfig

$ kubectl config set-context logical-cloud-1-user-1-context --cluster=c1 --namespace=logical-cloud-1-ns --user=user-1 --kubeconfig=/path/to/c1/kubeconfig
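
The per-user permissions block from the logical cloud request could be realised as a namespace-scoped Role and RoleBinding; this proposal does not prescribe the exact objects, so the following is only a sketch with hypothetical object names:

user permissions (illustrative)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-1-role                # hypothetical name
  namespace: logical-cloud-1-ns
rules:
- apiGroups: ["", "stable.example.com"]   # "" is the core API group, needed for secrets and pods
  resources: ["secrets", "pods"]
  verbs: ["get", "watch", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-1-binding             # hypothetical name
  namespace: logical-cloud-1-ns
subjects:
- kind: User
  name: user-1                     # CN of the signed user certificate
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: user-1-role
  apiGroup: rbac.authorization.k8s.io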

The following API lists the namespaces of a logical cloud:

logical cloud namespaces
GET URL: /v2/projects/<project-name>/logical-clouds/<name>/namespaces
RETURN STATUS: 200
RETURN BODY:
{
  "clusters": ["c1", "c2"],
  "namespaces" : {
    "name" : "logical-cloud-1-ns"  //name of the namespace for the logical cloud
  }
}

DELETE
URL: /v2/projects/<project-name>/logical-clouds/<name>/namespaces
 
RETURN STATUS: 204

DCM Database

The DCM database is based on MongoDB.

Creating Logical cloud

  1. The logical cloud controller uses Istio to create the logical cloud control plane, following https://istio.io/docs/setup/install/multicluster/gateways/ .
  2. The DCM manager queries the security controller with /v1/cadist/projects/{project-name}/logicalclouds/{logicalcloud-name}/clusters/{cluster-name} to get the bundle details for the clusters c1 and c2.
  3. The expectation is that the "JSON bundle" provides the path to the root certificate.
  4. The logical cloud controller creates the Istio control plane in clusters c1 and c2 for the namespace logical-cloud-1-ns.
logical cloud control-plane creation
URL: /v2/projects/<project-name>/logical-clouds/control-plane
POST BODY:
{
 "name": "logical-cloud-1",   //unique name for the new logical cloud
 "namespace": "logical-cloud-1-istio-system",
 "ca-cert": "",
 "ca-key": "",
 "root-cert": "",
 "cert-chain": "",
 "user" : {
    "name" : "user-2",  //name of user for this cloud
    "type" : "certificate",   //type of authentication credentials used by user (certificate, APIKey, UNPW)
    "certificate" : "/path/to/user2/logical-cloud-1-user2.csr",  //path to user certificate
    "permissions" : {
       "apiGroups" : ["stable.example.com"],
       "resources" : ["secrets", "pods"],
       "verbs" : ["get", "watch", "list", "create"]
     },
    "quota" : {
       "cpu": "200",
       "memory": "300Gi",
       "pods": "200",
       "dummy/dummyResource": 30
    }
  }
}

curl -d @create_logical_cloud-1-user-2.json http://onap4k8s:<multicloud-k8s_NODE_PORT>/v2/projects/<project-name>/logical-clouds/control-plane \
	--key ./logical-cloud-t1-admin-key.pem \
	--cert ./logical-cloud-t1-admin.pem

Return Status: 201
Return Body:
{
  "name" : "logical-cloud-1",
  "user" : "user-2",
  "Message" : "logical cloud and associated user successfully created"
}
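
Following the Istio multicluster gateways guide referenced above, the CA material carried in this request would typically be installed in each cluster as a cacerts secret in the logical cloud's Istio namespace. This is a sketch under that assumption, with the certificate contents elided:

Istio plug-in CA secret (illustrative)
apiVersion: v1
kind: Secret
metadata:
  name: cacerts
  namespace: logical-cloud-1-istio-system   # Istio namespace from the request above
type: Opaque
stringData:
  ca-cert.pem: "<intermediate CA certificate from the security controller bundle>"
  ca-key.pem: "<intermediate CA private key>"
  root-cert.pem: "<root certificate>"
  cert-chain.pem: "<certificate chain>"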

Creating new users in an already existing Logical cloud

The following API adds a new user to an already existing logical cloud:

logical cloud user creation
URL: /v2/projects/<project-name>/logical-clouds
POST BODY:
{
 "name": "logical-cloud-1",   //unique name of the existing logical cloud
  "user" : {
    "name" : "user-2",  //name of user for this cloud
    "type" : "certificate",   //type of authentication credentials used by user (certificate, APIKey, UNPW)
    "certificate" : "/path/to/user2/logical-cloud-1-user2.csr",  //path to user certificate
    "permissions" : {
       "apiGroups" : ["stable.example.com"],
       "resources" : ["secrets", "pods"],
       "verbs" : ["get", "watch", "list", "create"]
     },
    "quota" : {
       "cpu": "200",
       "memory": "300Gi",
       "pods": "200",
       "dummy/dummyResource": 30
    }
  }
}

curl -d @create_logical_cloud-1-user-2.json http://onap4k8s:<multicloud-k8s_NODE_PORT>/v2/projects/<project-name>/logical-clouds \
	--key ./logical-cloud-t1-admin-key.pem \
	--cert ./logical-cloud-t1-admin.pem

Return Status: 201
Return Body:
{
  "name" : "logical-cloud-1",
  "user" : "user-2",
  "Message" : "logical cloud and associated user successfully created"
}

User creation details:


The Tenant CRD is defined by the following CRD objects:


Tenant-CRD
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tenants.tenants.k8s.io
spec:
  group: tenants.k8s.io
  versions:
  - name: v1alpha1
    served: true
    storage: true
  scope: Cluster
  names:
    plural: tenants
    singular: tenant
    kind: Tenant
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: namespacetemplates.tenants.k8s.io
spec:
  group: tenants.k8s.io
  versions:
  - name: v1alpha1
    served: true
    storage: true
  scope: Cluster
  names:
    plural: namespacetemplates
    singular: namespacetemplate
    kind: NamespaceTemplate
    shortNames:
    - nstpl

Let's run the tenant controller and create a tenant object as follows:

Tenant object
$ kubectl create -f $GOPATH/src/github.com/kubernetes-sigs/multi-tenancy/poc/tenant-controller/data/manifests/crd.yaml
$ kubectl create -f $GOPATH/src/github.com/kubernetes-sigs/multi-tenancy/poc/tenant-controller/data/manifests/rbac.yaml
<<Running tenant controller>>
$ $GOPATH/src/github.com/kubernetes-sigs/multi-tenancy/out/tenant-controller/tenant-ctl -v=99 -kubeconfig=$HOME/.kube/config

$ kubectl get crd
NAME                                             CREATED AT
namespacetemplates.tenants.k8s.io                2019-05-01T16:34:49Z
tenants.tenants.k8s.io                           2019-05-01T16:34:49Z

$ kubectl create -f $GOPATH/src/github.com/kubernetes-sigs/multi-tenancy/poc/tenant-controller/data/manifests/sample-nstemplate.yaml
$ kubectl create -f $GOPATH/src/github.com/kubernetes-sigs/multi-tenancy/poc/tenant-controller/data/manifests/sample-tenant.yaml
$ kubectl get tenants
NAME       AGE
tenant-a   7d

$ kubectl get ns
NAME            STATUS   AGE
default         Active   42d
kube-public     Active   42d
kube-system     Active   42d
tenant-a-ns-1   Active   7d18h
tenant-a-ns-2   Active   7d18h

A closer look at the Tenant object

The tenant object looks like the following:

tenant object
---
apiVersion: tenants.k8s.io/v1alpha1
kind: Tenant
metadata:
  name: tenant-a 
spec:
  namespaces:
      - name: ns-1
      - name: ns-2

The tenant controller takes this tenant spec as a template and creates a namespace for each entry, namely tenant-a-ns-1 and tenant-a-ns-2. In addition, the tenant object can also create an admin object for the tenant, with a user acting as the tenant admin.

The controller also creates a namespace template, which defines the RoleBinding, ClusterRole and NetworkPolicy templates applied to the namespaces tenant-a-ns-1 and tenant-a-ns-2.


namespaceTemplate
$ kubectl get namespacetemplate
NAME         AGE
restricted   7d

$ kubectl get namespacetemplate restricted -o yaml
apiVersion: tenants.k8s.io/v1alpha1
kind: NamespaceTemplate
metadata:
  creationTimestamp: "2019-05-01T17:37:11Z"
  generation: 1
  name: restricted
  resourceVersion: "3628408"
  selfLink: /apis/tenants.k8s.io/v1alpha1/namespacetemplates/restricted
  uid: bffbe9c8-6c37-11e9-91c3-a4bf014c3518
spec:
  templates:
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: multitenancy:podsecuritypolicy
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: multitenancy:use-psp:restricted
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
  - apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: multitenancy-default
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
      - Egress

Resource quota proposal for the tenant CRD

A tenant-based resource quota is required to implement resource tracking in ICN. The proposal here is to reuse the tenant controller work in Kubernetes and introduce a tenant resource quota CRD on top of the tenant controller.

Tenant resource quota
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tenantresourcequotas.tenants.k8s.io
spec:
  group: tenants.k8s.io
  versions:
    - name: v1alpha1
      served: true
      storage: true
  scope: Cluster
  names:
    plural: tenantresourcequotas
    singular: tenantresourcequota
    kind: TenantResourcequota
    shortNames:
    - trq

Before jumping into the definition of the tenant resource quota, refer to the Kubernetes resource quota documentation - https://kubernetes.io/docs/concepts/policy/resource-quotas/ - for more background on resource quotas.

The tenant resource quota schema should look like the following:


tenant resourcequota
// TenantResourceQuota defines the resource quota applied to a tenant.
type TenantResourceQuota struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec TenantResourceQuotaSpec `json:"spec"`
}

// TenantResourceQuotaSpec defines the desired hard limits to enforce for Quota
type TenantResourceQuotaSpec struct {
	// Hard is the set of desired hard limits for each named resource
	// +optional
	Hard v1.ResourceList `json:"hard"`
	// A collection of filters that must match each object tracked by a quota.
	// If not specified, the quota matches all objects.
	// +optional
	Scopes []v1.ResourceQuotaScope `json:"scopes"`
	// ScopeSelector is also a collection of filters like Scopes that must match each object tracked by a quota
	// but expressed using ScopeSelectorOperator in combination with possible values.
	// +optional
	ScopeSelector *v1.ScopeSelector `json:"scopeSelector"`
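	// UserResourcequota lists the names of the ResourceQuota objects
	// grouped under this tenant quota (see the example below)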
    UserResourcequota  []string `json:"userResourcequota"`
}

An example tenant resource quota could look like the following:

tenant resourcequota
---
apiVersion: tenants.k8s.io/v1alpha1
kind: TenantResourcequota
metadata:
  name: tenant-a-resource-quota
spec:
  hard:
    cpu: "400"
    memory: 1000Gi
    pods: "500"
    requests.dummy/dummyResource: 100
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: Class
      values: ["common"]
  userResourcequota: [
    "silver-pool-resourcequota",
    "gold-pool-resourcequota",
    "vfirewall-pool-resourcequota"
  ]
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: silver-pool-resourcequota
  namespace: tenant-a-ns-2
spec:
  hard:
    limits.cpu: "100"
    limits.memory: 250Gi
    pods: "100"
    requests.dummy/dummyResource: 25
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gold-pool-resourcequota
  namespace: tenant-a-ns-1
spec:
  hard:
    limits.cpu: "200"
    limits.memory: 700Gi
    pods: "300"
    requests.dummy/dummyResource: 75
---
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: vfirewall-v1
    namespace: tenant-a-ns-3
  spec:
    hard:
      cpu: "35"
      memory: 20Gi
      pods: "50"
    scopeSelector:
      matchExpressions:
      - operator : In
        scopeName: Firewall
        values: ["v1"]
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: vfirewall-v3
    namespace: tenant-a-ns-3
  spec:
    hard:
      cpu: "30"
      memory: 10Gi
      pods: "15"
    scopeSelector:
      matchExpressions:
      - operator : In
        scopeName: Firewall
        values: ["v3"]

Block diagram representation of tenant resource slicing     

Tenant controller architecture

ICN Requirement and Tenant controller gaps


  • ICN requirement: Multi-cluster tenant controller
    1. Tenant created at the multi-cluster scheduler site (ONAP4K8S)
    Tenant controller: Cluster-level tenant controller

  • ICN requirement: Identifying K8S clusters for this tenant based on cluster labels
    1. Send the tenant details to the K8s clusters
    Tenant controller: Tenant is created with a CR at cluster level [Implemented]

  • ICN requirement: At the K8s cluster level
    1. Creating namespaces
    2. Creating K8S users (tokens, certificates and user/passwords)
    3. Creating K8S roles
    4. Creating permissions for the various roles
    Tenant controller:
    1. Tenant controller at K8s cluster level [Implemented]
      1. A tenant can have multiple namespaces (e.g. Tenant-a with ns1 and ns2, for which it creates Tenant-a-ns1 and Tenant-a-ns2)
    2. Cluster-admin: this entity has full read/write privileges for all resources in the cluster, including resources owned by the various tenants of the cluster [Not implemented].
    3. Cluster-view: this entity has read privileges for all resources in the cluster, including resources owned by the various tenants [Not implemented].
    4. Tenant-admin: this entity has privileges to create a new tenant, read/write resources scoped to that tenant, and update or delete that tenant. This persona has no privileges for resources that are either cluster-scoped or scoped to namespaces not associated with the Tenant object for which it holds Tenant-admin privileges [Implemented].
    5. Tenant-user: this entity has read/write privileges for all resources scoped within a specific tenant (that is, resources scoped within namespaces owned by that tenant) [Not implemented].

  • ICN requirement: Certificate provisioning with the tenant
    • Suggestion to use Istio Citadel
    Tenant controller: Suggestion to bind the tenant with a kubernetes context to see the namespaces associated with it [Not implemented].

  • ICN requirement: Quota at the application level; tenant group support: quota at the tenant group level (multiple namespaces), Istio at the tenant group level
    Tenant controller: Resource quota based on the tenant with multiple namespaces [Not implemented].
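
For illustration, the Tenant-admin persona in the gap analysis above roughly corresponds to RBAC bindings of the following shape, repeated for every namespace owned by the tenant. This is only a sketch; the tenant controller PoC does not define these exact objects, and the names below are hypothetical:

tenant-admin binding (illustrative)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-admin             # hypothetical name
  namespace: tenant-a-ns-1         # one binding per tenant-owned namespace
subjects:
- kind: User
  name: tenant-a-admin-user        # hypothetical tenant admin identity
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                      # built-in "admin" ClusterRole, granted only within this namespace
  apiGroup: rbac.authorization.k8s.io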

Multi-Cluster Tenant controller

<This section is incomplete and a work in progress ... needs rework and further updates ... >


  1. Define CRUD API - add/delete/modify/read MC Tenant.
    1. Clusters from the edge locations are registered to ONAP as follows:
      1. Cluster-100.2-labels: { Cascade, SRIOV, QAT,}
      2. Cluster-101.1-labels: { Sky-lake, SRIOV}
    2. The Tenant object is templated in ONAP with the following fields:
      {
      	"metadata": {
      		"name": "tenant-a",
      		"clusterlabels": "label-A"
      	},
      	"spec": {
      		"users": [
      			{
      				"name": "users-1",
      				"crt": "/path/to/crt"
      			},
      			{
      				"name": "users-2",
      				"crt": "/path/to/crt"
      			}
      		]
      	}
      }
    3. ONAP creates the tenant based on the cluster labels
    4. Find the cluster artifacts (kubeconfig) based on the cluster labels
    5. Call the Multicloud k8s-plugin API to create the tenant from the JSON
      1. curl -d @create_tenant-a.json http://NODE_IP:30280/api/multicloud-k8s/v1/v1/tenant
    6. MC Cluster Tenant API - <This section is incomplete and a work in progress ... needs rework and further updates ... >
      1.  Create 
      2. Update 
      3. Delete 
      4. Get
      5. List 
      6. Watch
      7. Patch
    7. Each corresponding MC Cluster Tenant API will have a K8s Tenant CR API

       

  1. Design notes:
    • How this would be done as a microservice in ONAP.
    • How it interacts with the K8S clusters.
    • How it ensures that all of the configuration is applied (rollbacks, unsuccessful edges).
    • Visibility of the configuration applied on a per-MC-tenant basis.
    • When a new K8S cluster is added with a label of interest, taking care of creating the tenant-specific information in that edge, etc.
    • Extensibility (future K8S clusters having some other features that require configuration for multi-tenancy).

Open Questions:

  1. Slice the tenant with the cluster "--context"
    1. [Kural]
      1. Tenant creation from ONAP4K8s should be pushed down to the clusters in the edge locations
      2. The tenant should have a kubeconfig context sliced to its namespaces alone
  2. How to connect the Istio Citadel certificates with the tenant? How to authenticate from the centralised location (ONAP4K8s) to the multi-cluster locations?
    1. [Kural]
      1. Discussions so far with Istio folks and experts suggest that Citadel certificates are bound to a namespace and are specific to the application level; they are not targeted at K8s users
      2. For the K8s user, the certificates should be generated by an external entity and bound to the service account and the tenant, as shown in this example - https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
  3. Should the tenant user be bound to certificates created by Citadel?
    1. [Kural]
      1. Initial pathfinding shows that Citadel may not be the right candidate for K8s user certificate creation
  4. How are the cluster labels configured in ONAP? How can the MC tenant controller identify them?
    1. [Kural]
      1. Adding KUD and ONAP folks here @Ritu @Kiran
      2. The kubeconfig context should be passed from each KUD cluster to ONAP
      3. KUD should invoke NFD immediately and enable the overall labels, add those labels to the cluster details, and send them back to ONAP
      4. A cluster feature discovery controller should run in each edge location cluster along with KUD, at regular intervals along with NFD

JIRA Story details


Reference

  • Kubernetes Multi-Tenancy Draft Proposal
  • Tenant Concept in Kubernetes
  • Kubernetes Tenant CRD
  • K8s Multi-tenancy WG Plan
