Table of Contents
Introduction
This test provides a functional test report for the text emotion analysis capability and, after the capability goes live, gives capability callers a reference that includes use cases and test data.
Akraino Test Group Information
Test Architecture
Test Bed
Test Framework
- Hardware
Control-plane: 192.168.30.12, 192.168.30.21
Worker-Cluster1: 192.168.30.5, 192.168.30.22, 192.168.30.20
Worker-Cluster2: 192.168.30.2, 192.168.30.16, 192.168.30.25
- Software
Traffic Generator
Test API description
...
Karmada: Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
Kubernetes: an open-source system for automating deployment, scaling, and management of containerized applications.
sentiment: a text emotion (sentiment) analysis service
Test description
Propagate a deployment
In the following steps, we are going to propagate a deployment to the member clusters.
1. Create deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentiment
  labels:
    app: sentiment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sentiment
  template:
    metadata:
      labels:
        app: sentiment
    spec:
      imagePullSecrets:
        - name: harborsecret
      containers:
        - name: sentiment
          image: 192.168.30.20:5000/migu/sentiment:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9600
              protocol: TCP
              name: http
          resources:
            limits:
              cpu: 2
              memory: 4G
            requests:
              cpu: 2
              memory: 4G
```
2. Create the sentiment deployment in Karmada.
Create a deployment named sentiment by executing the following command:
```bash
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config create -f deployment.yaml
```
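As a quick sanity check (not part of the original steps), the Deployment object can be queried through the Karmada API server; until a PropagationPolicy is applied, Karmada has not yet scheduled the replicas to any member cluster:

```bash
# List the deployment as seen by the Karmada control plane.
# READY may show 0/2 until a PropagationPolicy propagates the replicas.
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get deployment sentiment
```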
3. Create a propagationpolicy.yaml
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: sentiment-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: sentiment
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 1
```
4. Create the PropagationPolicy that will propagate sentiment to the member clusters.
We need to create a policy to propagate the deployment to our member clusters. Execute the following command:
```bash
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config create -f propagationpolicy.yaml
```
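Optionally, verify that the policy was accepted (a hedged sketch; PropagationPolicy is a namespaced Karmada resource, assumed here to live in the default namespace where it was created):

```bash
# Confirm the PropagationPolicy exists in the Karmada control plane
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get propagationpolicy sentiment-propagation
```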
5. Check the deployment status.
You can check the deployment status directly from the Karmada control plane; there is no need to access each member cluster. In the member clusters, you can see the following:
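The original command and its output are not preserved in this report; a minimal sketch of the checks, assuming the member clusters are reachable as kubectl contexts member1 and member2 (as in the rescheduling test below), with one replica expected in each cluster under the 1:1 static weights:

```console
# Aggregated status as seen from the Karmada control plane
$ kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get deployment sentiment

# Per-cluster view: one replica expected in each member cluster
$ kubectl --context member1 get pod -l app=sentiment
$ kubectl --context member2 get pod -l app=sentiment
```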
6. Next, we will change deployment.yaml (replicas from 2 to 4) and propagationpolicy.yaml (member2 weight from 1 to 3), then retry.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentiment
  labels:
    app: sentiment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: sentiment
  template:
    metadata:
      labels:
        app: sentiment
    spec:
      imagePullSecrets:
        - name: harborsecret
      containers:
        - name: sentiment
          image: 192.168.30.20:5000/migu/sentiment:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9600
              protocol: TCP
              name: http
          resources:
            limits:
              cpu: 2
              memory: 4G
            requests:
              cpu: 2
              memory: 4G
```
Execute the following command:
```bash
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f deployment.yaml
```
Edit propagationpolicy.yaml (for example, with vi) to increase the weight of member2 from 1 to 3:
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: sentiment-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: sentiment
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 3
```
Execute the following command:
```bash
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f propagationpolicy.yaml
```
7. Retry: check the deployment status.
Again, you can check the deployment status without accessing each member cluster. In the member clusters, you can see the following:
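With replicas: 4 and static weights of 1 (member1) to 3 (member2), the expected split is one replica in member1 and three in member2. A sketch of the verification, under the same context assumptions as above:

```console
# Expect 1 sentiment pod here
$ kubectl --context member1 get pod -l app=sentiment
# Expect 3 sentiment pods here
$ kubectl --context member2 get pod -l app=sentiment
```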
Rescheduling deployment
Users can divide the replicas of a workload among different clusters according to the available resources of the member clusters. However, the scheduler's decisions are influenced by its view of Karmada at the point in time when a new ResourceBinding appears for scheduling. Because Karmada multi-cluster environments are very dynamic and their state changes over time, there may be a need to move already-running replicas to other clusters when a cluster runs short of resources. This can happen when some nodes of a cluster fail and the cluster no longer has enough resources to accommodate their pods, or when the estimators have some estimation deviation, which is inevitable.
Member cluster components are ready
Ensure that all member clusters have joined Karmada and that the corresponding karmada-scheduler-estimator for each member cluster has been installed into karmada-host.
Check member clusters using the following command:
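The command itself is not preserved in this report; a likely form, assuming the same Karmada API server kubeconfig used throughout (Cluster is a Karmada resource, and each joined member should report Ready):

```bash
# All joined member clusters should be listed with a Ready status
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
```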
Descheduler has been installed
Ensure that the karmada-descheduler has been installed.
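One way to verify this, assuming karmada-descheduler was deployed into the karmada-system namespace of the host cluster (the default location in Karmada's installation docs):

```bash
# <karmada-host-kubeconfig> is a placeholder for the host cluster's kubeconfig
kubectl --kubeconfig <karmada-host-kubeconfig> get pods -n karmada-system | grep descheduler
```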
Create a Deployment
First, we create a deployment with 2 replicas and divide them between the 2 member clusters.
```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: sentiment-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: sentiment
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        dynamicWeight: AvailableReplicas
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentiment
  labels:
    app: sentiment
  namespace: migu
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sentiment
  template:
    metadata:
      labels:
        app: sentiment
    spec:
      imagePullSecrets:
        - name: harborsecret
      containers:
        - name: sentiment
          image: 192.168.30.20:5000/migu/sentiment:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9600
              protocol: TCP
              name: http
          resources:
            limits:
              cpu: 2
              memory: 4G
            requests:
              cpu: 2
              memory: 4G
```
It is possible for these 2 replicas to be evenly divided between the 2 member clusters, that is, one replica in each cluster.
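Before disturbing member1, the initial placement can be confirmed with the same contexts used below (note that the manifest above sets namespace: migu, so add -n migu if the pods are not in the default namespace):

```console
# Expect one replica in each member cluster
$ kubectl --context member1 get pod -l app=sentiment
$ kubectl --context member2 get pod -l app=sentiment
```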
Now we mark all nodes in member1 as unschedulable and evict the replica running there.
```console
# mark node "member1-control-plane" as unschedulable in cluster member1
$ kubectl --context member1 cordon member1-control-plane

# delete the pod in cluster member1
$ kubectl --context member1 delete pod -l app=sentiment
```
A new pod will be created, but it cannot be scheduled by kube-scheduler due to the lack of available resources.
```console
# the state of the pod in cluster member1 is Pending
$ kubectl --context member1 get pod
NAME                         READY   STATUS    RESTARTS   AGE
sentiment-6fd4c7867c-jkcqn   0/1     Pending   0          80s
```
After about 5 to 7 minutes, the pod in member1 will be evicted and rescheduled to another available cluster.
```console
# get the pod in cluster member1
$ kubectl --context member1 get pod
No resources found in default namespace.

# get a list of pods in cluster member2
$ kubectl --context member2 get pod
NAME                         READY   STATUS    RESTARTS   AGE
sentiment-6fd4c7867c-hvzfd   1/1     Running   0          6m3s
sentiment-6fd4c7867c-vrmnm   1/1     Running   0          4s
```
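After the test, member1 can be restored so it accepts pods again (a cleanup step, using the node name from the cordon command above):

```bash
# Undo the earlier cordon on the member1 node
kubectl --context member1 uncordon member1-control-plane
```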
Test Dashboards
Additional Testing
...