...
Code Block:
GET
URL: /v2/projects/<project-name>/logical-clouds/<name>/namespaces
RETURN STATUS: 200
RETURN BODY:
{
  "clusters": {c1, c2},
  "namespaces" : {
    "name" : "logical-cloud-1-ns", //name of namespace for the logical cloud
  }
}

DELETE
URL: /v2/projects/<project-name>/logical-clouds/<name>/namespaces
RETURN STATUS: 204
...
- The Logical cloud controller uses Istio to create the logical cloud control plane, following https://istio.io/docs/setup/install/multicluster/gateways/ .
- The DCM manager queries the Security controller with /v1/cadist/projects/{project-name}/logicalclouds/{logicalcloud-name}/clusters/{cluster-name} to get the bundle details for the clusters C1 and C2 (an example query is sketched below this list).
- The expectation is that the "JSON bundle" should provide the path to the root cert.
- The Logical cloud controller creates the Istio control plane in clusters C1 and C2 for the namespace logical-cloud-1-ns-istio-system.
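As a rough illustration of the Security controller query in the steps above, the request could look like the following (the host and port are placeholders, and the GET verb is an assumption, not a confirmed part of the API):
Code Block:
# Hypothetical invocation of the Security controller bundle query for cluster c1
curl -X GET http://<security-controller>:<port>/v1/cadist/projects/<project-name>/logicalclouds/<logicalcloud-name>/clusters/c1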
Code Block:
POST
URL: /v2/projects/<project-name>/logical-clouds/control-plane
POST BODY:
{
  "name": "logical-cloud-1", //unique name for the new logical cloud
  "namespace": "logical-cloud-1-istio-system",
  "ca-cert": "/path/to/ca-cert.pem",
  "ca-key": "/path/to/ca-key.pem",
  "root-cert": "/path/to/root-cert.pem",
  "cert-chain": "/path/to/cert-chain.pem",
  "user" : {
    "name" : "user-2", //name of user for this cloud
    "type" : "certificate", //type of authentication credentials used by user (certificate, APIKey, UNPW)
    "certificate" : "/path/to/user2/logical-cloud-1-user2.csr", //path to user certificate
    "permissions" : {
      "apiGroups" : ["stable.example.com"],
      "resources" : ["secrets", "pods"],
      "verbs" : ["get", "watch", "list", "create"]
    },
    "quota" : {
      "cpu": "200",
      "memory": "300Gi",
      "pods": "200",
      "dummy/dummyResource": 30
    }
  }
}

curl -d @create_logical_cloud-1-user-2.json http://onap4k8s:<multicloud-k8s_NODE_PORT>/v2/projects/<project-name>/logical-clouds/control-plane \
  --key ./logical-cloud-t1-admin-key.pem \
  --cert ./logical-cloud-t1-admin.pem

Return Status: 201
Return Body:
{
  "name" : "logical-cloud-1",
  "Message" : "logical cloud 1 control plane is successfully created"
}

GET
URL: /v2/projects/<project-name>/logical-clouds/control-planes
RETURN STATUS: 200
RETURN BODY:
{
  "name" : "logical-cloud-1-ns", //name of namespace for the logical cloud
  "gateways" : "istio-egressgateway",
  "dns": "istiocoredns",
  "clusters": {c1, c2}
}

DELETE
URL: /v2/projects/<project-name>/logical-clouds/control-planes
RETURN STATUS: 204
Creating new users in an already existing Logical cloud
Adding new users to the existing Logical cloud 1:
Code Block:
POST
URL: /v2/projects/<project-name>/logical-clouds
POST BODY:
{
  "name": "logical-cloud-1", //unique name of the existing logical cloud
  "user" : {
    "name" : "user-2", //name of the new user for this cloud
    "type" : "certificate", //type of authentication credentials used by user (certificate, APIKey, UNPW)
    "certificate" : "/path/to/user2/logical-cloud-1-user2.csr", //path to user certificate
    "permissions" : {
      "apiGroups" : ["stable.example.com"],
      "resources" : ["secrets", "pods"],
      "verbs" : ["get", "watch", "list", "create"]
    },
    "quota" : {
      "cpu": "200",
      "memory": "300Gi",
      "pods": "200",
      "dummy/dummyResource": 30
    }
  }
}

curl -d @create_logical_cloud-1-user-2.json http://onap4k8s:<multicloud-k8s_NODE_PORT>/v2/projects/<project-name>/logical-clouds \
  --key ./logical-cloud-t1-admin-key.pem \
  --cert ./logical-cloud-t1-admin.pem

Return Status: 201
Return Body:
{
  "name" : "logical-cloud-1",
  "user" : "user-2",
  "Message" : "logical cloud and associated user successfully created"
}
Tuning Quota for logical cloud
This feature allows the resource quotas of a logical cloud to be tuned:
Code Block:
POST
URL: /v2/projects/<project-name>/logical-clouds
POST BODY:
{
  "name": "logical-cloud-1", //unique name for the new logical cloud
  "cluster-labels": "abc, xyz",
  "resources": {
    "cpu": "400",
    "memory": "1000Gi",
    "pods": "500",
    "dummy/dummyResource": 100
  }
}

curl -d @create_logical_cloud-1.json http://onap4k8s:<multicloud-k8s_NODE_PORT>/v2/projects/<project-name>/logical-clouds \
  --key ./logical-cloud-t1-admin-key.pem \
  --cert ./logical-cloud-t1-admin.pem

Return Status: 201
Return Body:
{
  "name" : "logical-cloud-1",
  "Message" : "logical cloud 1 is successfully tuned"
}

GET
URL: /v2/projects/<project-name>/logical-clouds/<logical-cloud-name>/quotas
RETURN STATUS: 200
RETURN BODY:
{
  "resources": {
    "cpu": "400",
    "memory": "1000Gi",
    "pods": "500",
    "dummy/dummyResource": 100
  }
}

DELETE
URL: /v2/projects/<project-name>/logical-clouds/<logical-cloud-name>/quotas
RETURN STATUS: 204
User creation details:
The Tenant CRD object is defined by the following CRD objects:
Code Block:
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tenants.tenants.k8s.io
spec:
group: tenants.k8s.io
versions:
- name: v1alpha1
served: true
storage: true
scope: Cluster
names:
plural: tenants
singular: tenant
kind: Tenant
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: namespacetemplates.tenants.k8s.io
spec:
group: tenants.k8s.io
versions:
- name: v1alpha1
served: true
storage: true
scope: Cluster
names:
plural: namespacetemplates
singular: namespacetemplate
kind: NamespaceTemplate
shortNames:
- nstpl
...
"name": "logical-cloud-1", //unique name for the new logical cloud
"cluster-labels": "abc, xyz",
"resources": {
"cpu": "400",
"memory": "1000Gi",
"pods": "500",
"dummy/dummyResource": 100,
}
}
curl -d @create_logical_cloud-1.json http://onap4k8s:<multicloud-k8s_NODE_PORT>/v2/projects/<project-name>/logical-clouds \
--key ./logical cloud-t1-admin-key.pem \
--cert ./logical cloud-t1-admin.pem \
Return Status: 201
Return Body:
{
"name" : "logical-cloud-1"
"Message" : "logical cloud 1 is successfully tuned"
}
GET URL: /v2/projects/<project-name>/logical-clouds/<logical-cloud-name>/quotas
RETURN STATUS: 200
RETURN BODY:
{
"resources": {
"cpu": "400",
"memory": "1000Gi",
"pods": "500",
"dummy/dummyResource": 100,
}
}
DELETE
URL: /v2/projects/<project-name>/logical-clouds/<logical-cloud-name>/quotas
RETURN STATUS: 204 |
Running the tenant controller
The following steps explain how to run the tenant controller in Kubernetes:
Code Block:
$ kubectl create -f $GOPATH/src/github.com/kubernetes-sigs/multi-tenancy/poc/tenant-controller/data/manifests/crd.yaml
$ kubectl create -f $GOPATH/src/github.com/kubernetes-sigs/multi-tenancy/poc/tenant-controller/data/manifests/rbac.yaml
<<Running tenant controller>>
$ $GOPATH/src/github.com/kubernetes-sigs/multi-tenancy/out/tenant-controller/tenant-ctl -v=99 -kubeconfig=$HOME/.kube/config
$ kubectl get crd
NAME CREATED AT
namespacetemplates.tenants.k8s.io 2019-05-01T16:34:49Z
tenants.tenants.k8s.io 2019-05-01T16:34:49Z
$ kubectl create -f $GOPATH/src/github.com/kubernetes-sigs/multi-tenancy/poc/tenant-controller/data/manifests/sample-nstemplate.yaml
$ kubectl create -f $GOPATH/src/github.com/kubernetes-sigs/multi-tenancy/poc/tenant-controller/data/manifests/sample-tenant.yaml
$ kubectl get tenants
NAME AGE
tenant-a 7d
$ kubectl get ns
NAME STATUS AGE
default Active 42d
kube-public Active 42d
kube-system Active 42d
tenant-a-ns-1 Active 7d18h
tenant-a-ns-2 Active 7d18h
A closer look at the Tenant object
The Tenant object looks like this:
Code Block:
---
apiVersion: tenants.k8s.io/v1alpha1
kind: Tenant
metadata:
name: tenant-a
spec:
namespaces:
- name: ns-1
  - name: ns-2
The tenant controller takes this Tenant spec as a template and creates a namespace for each entry, i.e. tenant-a-ns-1 and tenant-a-ns-2 (a naming sketch follows below). In addition, the Tenant object can also create an admin object for the tenant, with a user as admin.
It also creates a namespace template: a NamespaceTemplate defines the RoleBinding, ClusterRole, and NetworkPolicy objects for the namespaces tenant-a-ns-1 and tenant-a-ns-2.
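As a minimal sketch of the naming rule just described (an illustrative helper, not code from the tenant controller itself):
Code Block:
package main

import "fmt"

// tenantNamespaceName applies the naming rule described above: the
// namespace created on the cluster is "<tenant>-<namespace>".
func tenantNamespaceName(tenant, ns string) string {
	return fmt.Sprintf("%s-%s", tenant, ns)
}

func main() {
	for _, ns := range []string{"ns-1", "ns-2"} {
		fmt.Println(tenantNamespaceName("tenant-a", ns)) // tenant-a-ns-1, tenant-a-ns-2
	}
}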
Code Block:
$ kubectl get namespacetemplate
NAME AGE
restricted 7d
$ kubectl get namespacetemplate restricted -o yaml
apiVersion: tenants.k8s.io/v1alpha1
kind: NamespaceTemplate
metadata:
creationTimestamp: "2019-05-01T17:37:11Z"
generation: 1
name: restricted
resourceVersion: "3628408"
selfLink: /apis/tenants.k8s.io/v1alpha1/namespacetemplates/restricted
uid: bffbe9c8-6c37-11e9-91c3-a4bf014c3518
spec:
templates:
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: multitenancy:podsecuritypolicy
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: multitenancy:use-psp:restricted
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:serviceaccounts
- apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: multitenancy-default
spec:
podSelector: {}
policyTypes:
- Ingress
  - Egress
Resource quota proposal for the tenant CRD
A tenant-based resource quota is required to implement resource tracking in ICN. The proposal here is to reuse the tenant controller work in Kubernetes and introduce a tenant resource quota CRD on top of the tenant controller.
Code Block:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tenantresourcequotas.tenants.k8s.io
spec:
group: tenants.k8s.io
versions:
- name: v1alpha1
served: true
storage: true
scope: Cluster
names:
plural: tenantresourcequotas
singular: tenantresourcequota
kind: TenantResourcequota
shortNames:
    - trq
Before jumping into the definition of the tenant resource quota, refer to the Kubernetes resource quota documentation - https://kubernetes.io/docs/concepts/policy/resource-quotas/ - for more background on resource quotas.
The tenant resource quota schema should look like the following:
Code Block:
// TenantResourceQuota defines the resource quota to be enforced for a tenant.
type TenantResourceQuota struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec TenantResourceQuotaSpec `json:"spec"`
}
// TenantResourceQuotaSpec defines the desired hard limits to enforce for Quota
type TenantResourceQuotaSpec struct {
// Hard is the set of desired hard limits for each named resource
// +optional
Hard v1.ResourceList `json:"hard"`
// A collection of filters that must match each object tracked by a quota.
// If not specified, the quota matches all objects.
// +optional
Scopes []v1.ResourceQuotaScope `json:"scopes"`
// ScopeSelector is also a collection of filters like Scopes that must match each object tracked by a quota
// but expressed using ScopeSelectorOperator in combination with possible values.
// +optional
ScopeSelector *v1.ScopeSelector `json:"scopeSelector"`
UserResourcequota []string `json:"userResourcequota"`
}
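As a hedged sketch of how the userResourcequota list could be validated against the tenant-level limits (a hypothetical helper assuming the k8s.io/api/core/v1 types above; this is not part of the proposal's actual code):
Code Block:
package quota

import (
	v1 "k8s.io/api/core/v1"
)

// withinTenantQuota checks that the summed hard limits of the per-user
// ResourceQuotas stay within the tenant-level hard limits.
func withinTenantQuota(tenantHard v1.ResourceList, userHards []v1.ResourceList) bool {
	// Sum each named resource across all user-level quotas.
	total := v1.ResourceList{}
	for _, hard := range userHards {
		for name, qty := range hard {
			sum := total[name]
			sum.Add(qty)
			total[name] = sum
		}
	}
	// Reject if any summed resource exceeds the tenant hard limit.
	for name, qty := range total {
		if limit, ok := tenantHard[name]; ok && qty.Cmp(limit) > 0 {
			return false
		}
	}
	return true
}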
An example tenant resource quota looks like this:
Code Block:
---
apiVersion: tenants.k8s.io/v1alpha1
kind: TenantResourcequota
metadata:
  name: tenant-a-resource-quota
spec:
hard:
cpu: "400"
memory: 1000Gi
pods: "500"
requests.dummy/dummyResource: 100
scopeSelector:
matchExpressions:
- operator : In
scopeName: Class
values: ["common"]
  userResourcequota: [
    "silver-pool-resourcequota",
    "gold-pool-resourcequota",
    "vfirewall-pool-resourcequota"
  ]
---
apiVersion: v1
kind: ResourceQuota
metadata:
name: silver-pool-resourcequota
namespace: tenant-a-ns-2
spec:
hard:
limits.cpu: "100"
limits.memory: 250Gi
    pods: 100
requests.dummy/dummyResource: 25
---
apiVersion: v1
kind: ResourceQuota
metadata:
name: gold-pool-resourcequota
namespace: tenant-a-ns-1
spec:
hard:
limits.cpu: "200"
limits.memory: 700Gi
    pods: 300
requests.dummy/dummyResource: 75
---
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: ResourceQuota
metadata:
name: vfirewall-v1
namespace: tenant-a-ns-3
spec:
hard:
cpu: "35"
memory: 20Gi
pods: "50"
scopeSelector:
matchExpressions:
- operator : In
scopeName: Firewall
values: ["v1"]
- apiVersion: v1
kind: ResourceQuota
metadata:
name: vfirewall-v3
namespace: tenant-a-ns-3
spec:
hard:
cpu: "30"
memory: 10Gi
pods: "15"
scopeSelector:
matchExpressions:
- operator : In
scopeName: Firewall
values: ["v3"] |
Block diagram representation of tenant resource slicing
Tenant controller architecture
ICN Requirement and Tenant controller gaps
...
Multi-cluster tenant controller
- Tenant is created at the multi-scheduler site (ONAP4K8S)
...
Identifying K8S clusters for this tenant based on cluster labels
- Send the tenant details to the matching K8s clusters (a label-matching sketch is shown below)
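A minimal sketch of the label matching implied above, assuming each cluster advertises a label set like the Cluster-100.2/Cluster-101.1 examples later on this page (cluster names and labels are illustrative; this is not actual DCM code):
Code Block:
package main

import "fmt"

// clustersForTenant returns the clusters whose label sets contain every
// label requested for the tenant.
func clustersForTenant(clusterLabels map[string][]string, want []string) []string {
	var matched []string
	for cluster, labels := range clusterLabels {
		have := make(map[string]bool, len(labels))
		for _, l := range labels {
			have[l] = true
		}
		ok := true
		for _, w := range want {
			if !have[w] {
				ok = false
				break
			}
		}
		if ok {
			matched = append(matched, cluster)
		}
	}
	return matched
}

func main() {
	clusters := map[string][]string{
		"cluster-100.2": {"Cascade", "SRIOV", "QAT"},
		"cluster-101.1": {"Sky-lake", "SRIOV"},
	}
	// A tenant asking for SRIOV and QAT lands only on cluster-100.2.
	fmt.Println(clustersForTenant(clusters, []string{"SRIOV", "QAT"}))
}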
...
At K8s cluster level
- Creating namespaces
- Creating K8S users (Tokens, Certificates and User/Pwds)
- Creating K8S roles
- Assigning permissions to the various roles.
...
- Tenant controller at K8s cluster level [Implemented]
  - A tenant can have multiple namespaces
    - Tenant-a
      - ns1
      - ns2
  - It creates Tenant-a-ns1 and Tenant-a-ns2
- Cluster-admin: This entity has full read/write privileges for all resources in the cluster, including resources owned by the various tenants of the cluster [Not implemented].
- Cluster-view: This entity has read privileges for all resources in the cluster, including resources owned by the various tenants [Not implemented].
- Tenant-admin: This entity has privileges to create a new tenant, read/write resources scoped to that tenant, and update or delete that tenant. This persona does not have any privileges for accessing resources that are either cluster-scoped or scoped to namespaces that are not associated with the Tenant object for which this persona has Tenant-admin privileges [Implemented].
- Tenant-user: This entity has read/write privileges for all resources scoped within a specific tenant (that is, resources that are scoped within namespaces that are owned by a specific tenant) [Not implemented]. A hedged RBAC sketch for this persona follows below.
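As an illustration of how the Tenant-user persona could be realized with standard Kubernetes RBAC (the namespace, role, and user names are illustrative assumptions, not the tenant controller's actual output):
Code Block:
# Hypothetical RBAC objects scoping user-2 to one of tenant-a's namespaces
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-a-user
  namespace: tenant-a-ns-1
rules:
- apiGroups: [""]
  resources: ["pods", "secrets"]
  verbs: ["get", "watch", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-user-binding
  namespace: tenant-a-ns-1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tenant-a-user
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user-2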
...
Certificate Provisioning with Tenant
- Suggestion to use Istio Citadel
...
- Quota at the application level.
- Tenant group support: Quota at the tenant group level (Multiple namespaces), ISTIO at the tenant group level.
...
- Resource quota based on the tenant with multiple namespaces [Not implemented].
Multi-Cluster Tenant controller
<This section is incomplete and a work in progress ... needs rework and further updates ... >
...
- Cluster-100.2-labels: { Cascade, SRIOV, QAT }
- Cluster-101.1-labels: { Sky-lake, SRIOV }
...
Code Block:
{
"metadata": {
"name": "tenant-a",
"clusterlabels": "label-A"
},
"spec": {
"users": [
{
"name": "users-1",
"crt": "/path/to/crt"
},
{
"name": "users-2",
"crt": "/path/to/crt"
}
]
}
}
...
curl -d @create_tenant-a.json http://NODE_IP:30280/api/multicloud-k8s/v1/v1/tenant
...
- Create
- Update
- Delete
- Get
- List
- Watch
- Patch
...
- Design note:
  - On how this would be done as a micro-service in ONAP.
  - How it interacts with K8S clusters.
  - How it ensures that all of the configuration is applied (rollbacks, unsuccessful edges).
  - Visibility of the configuration applied on a per-MCTenant basis.
  - When a new K8S cluster is added with the label of interest, taking care of creating the tenant-specific information in that edge, etc.
  - Extensibility (future K8S clusters having some other features that require configuration for multi-tenancy).
Open Questions:
...
- [Kural]
  - Tenant creation from ONAP4K8s should be propagated down to the clusters in the edge locations
  - A tenant should have a kubeconfig context scoped to a slice of its namespaces alone
...
- [Kural]
  - Discussions so far with Istio folks and experts suggest that Citadel certificates are bound to a namespace and are specific to the application level; they are not targeted at K8s users
  - For the K8s user, the certificates should be generated by an external entity and bound to the service account and the tenant, as shown in the example - https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/ (a short sketch follows below)
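A minimal sketch of that external flow, following the referenced Bitnami guide (the key size, file names, and certificate subject are illustrative assumptions):
Code Block:
# Generate a key and CSR for the user; the CSR is then signed by the cluster CA
openssl genrsa -out user-2.key 2048
openssl req -new -key user-2.key -out user-2.csr -subj "/CN=user-2/O=logical-cloud-1"
# The signed certificate is bound to the user via an RBAC RoleBinding, as in the guide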
...
- [Kural]
  - Initial pathfinding shows that Citadel may not be the right candidate for K8s user certificate creation
...
Logical cloud Cluster-labels
The following steps explain how to get the cluster labels:
Code Block:
GET URL: /v2/projects/<project-name>/logical-clouds/<logical-cloud-name>/defaultkubeconfig
RETURN STATUS: 200
RETURN BODY:
[{
"cluster": c1
"labels" : {abc,xyz,ijk,dfg}
},
{
"cluster": c2
"labels" : {abc,xyz,irk,iop}
}
]
Get Logical cloud-config
DCM merges the kubeconfigs of the clusters in the list (c1 and c2).
Code Block:
GET
URL: /v2/projects/<project-name>/logical-clouds/<logical-cloud-name>/kubeconfig
Return Status: 200
Return Body:
{
apiVersion: v1
clusters:
- cluster:
certificate-authority: path/to/my/cafile
server: http://2.2.2.2:6443
name: cluster-abc
- cluster:
certificate-authority: path/to/my/cafile
server: https://1.1.1.1:6443
name: cluster-xyz
contexts:
- context:
cluster: kubernetes
namespace: ns-1
user: user-1
name: logical-cloud-1
current-context: logical-cloud-1
kind: Config
preferences: {}
users:
- name: user-1
user:
client-certificate: path/to/my/client/cert
client-key: path/to/my/client/key
}
Open Questions:
JIRA Story details
Reference
...