Table of Contents
Introduction
The purpose of this test is to provide a function test report for the text emotion analysis capability, to give capability callers a reference basis such as use cases and test data after the capability is launched, and to demonstrate two scheduling use cases of the text sentiment analysis service:
Case 1. Scheduling computing power by cluster weight;
Case 2. Rescheduling computing power when a cluster resource is abnormal.
Akraino Test Group Information
Test Architecture
...
Test Framework
Hardware:
Control-plane: 192.168.30.12, 192.168.30.21
Worker-Cluster1: 192.168.30.5, 192.168.30.22, 192.168.30.20
Worker-Cluster2: 192.168.30.2, 192.168.30.16, 192.168.30.25
Software:
Karmada: Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
...
sentiment: a text emotion analysis service
Test description
Propagate a deployment
In the following steps, we are going to propagate a deployment.
Case 1. Scheduling by weight
1. Create a deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentiment
  labels:
    app: sentiment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sentiment
  template:
    metadata:
      labels:
        app: sentiment
    spec:
      imagePullSecrets:
        - name: harborsecret
      containers:
        - name: sentiment
          image: 192.168.30.20:5000/migu/sentiment:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9600
              protocol: TCP
              name: http
          resources:
            limits:
              cpu: 2
              memory: 4G
            requests:
              cpu: 2
              memory: 4G
2. Create the sentiment deployment in Karmada.
Create a deployment named sentiment. Execute the commands as follows:

kubectl --kubeconfig /etc/karmada/karmada-apiserver.config create -f deployment.yaml
3. Create a PropagationPolicy.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: sentiment-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: sentiment
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 1
4. Apply the PropagationPolicy that will propagate sentiment to the member clusters. We need this policy to propagate the deployment to our member clusters. Execute the commands as follows:

kubectl --kubeconfig /etc/karmada/karmada-apiserver.config create -f propagationpolicy.yaml
5. Check the deployment status
We can check the deployment status without needing to access the member clusters; see the sketch below. In the worker clusters, we can see the results as follows:
...
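The exact check command is elided above; a minimal sketch, assuming the deployment is queried through the same karmada-apiserver config used in the earlier steps:

$ kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get deployment sentiment

With the 1:1 static weights, the 2 replicas should be divided evenly, one to each member cluster.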
6. Next, we will change deployment.yaml and propagationpolicy.yaml, then retry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentiment
  labels:
    app: sentiment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: sentiment
  template:
    metadata:
      labels:
        app: sentiment
    spec:
      imagePullSecrets:
        - name: harborsecret
      containers:
        - name: sentiment
          image: 192.168.30.20:5000/migu/sentiment:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9600
              protocol: TCP
              name: http
          resources:
            limits:
              cpu: 2
              memory: 4G
            requests:
              cpu: 2
              memory: 4G

Execute the command as follows:

kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f deployment.yaml
vi propagationpolicy.yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: sentiment-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: sentiment
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 3

Execute the commands as follows:

kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f propagationpolicy.yaml
...
7. Retry and check the deployment status
We can check the deployment status without needing to access the member clusters. Execute the commands as follows; in our worker clusters, we can see the results as follows:
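With replicas set to 4 and static weights of 1:3, Karmada should place 1 replica on member1 and 3 replicas on member2. A hedged way to verify the split, assuming member1 and member2 map to the kubectl contexts worker1 and worker2 used later in this report:

# expect 1 pod in worker1 (weight 1) and 3 pods in worker2 (weight 3)
$ kubectl --context worker1 get pod -l app=sentiment
$ kubectl --context worker2 get pod -l app=sentiment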
Rescheduling deployment
Users can divide the replicas of a workload among different clusters according to the available resources of the member clusters. However, the scheduler's decisions are influenced by its view of Karmada at the point in time when a new ResourceBinding
appears for scheduling. Because Karmada multi-clusters are highly dynamic and their state changes over time, it may be desirable to move already-running replicas to other clusters when a cluster lacks resources. This can happen when some nodes of a cluster fail and the cluster no longer has enough resources to accommodate its pods, or when the estimators have some estimation deviation, which is inevitable.
Member cluster component is ready
Ensure that all member clusters have joined Karmada and their corresponding karmada-scheduler-estimator is installed into karmada-host.
Check member clusters using the following command:
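The command itself is not shown here; a minimal sketch using the karmada-apiserver config referenced throughout this report:

$ kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters

Each member cluster should be listed as ready before proceeding.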
Descheduler has been installed
Ensure that the karmada-descheduler has been installed.
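One way to verify, assuming a default installation where the descheduler runs as the karmada-descheduler deployment in the karmada-system namespace of the karmada-host cluster:

$ kubectl --context karmada-host get deployment karmada-descheduler -n karmada-system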
Create a Deployment
Case 2. Rescheduling
1. First we create a deployment with 2 replicas and divide them into 2 worker clusters, using the combined manifest below.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: sentiment-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: sentiment
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        dynamicWeight: AvailableReplicas
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentiment
  labels:
    app: sentiment
  namespace: migu
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sentiment
  template:
    metadata:
      labels:
        app: sentiment
    spec:
      imagePullSecrets:
        - name: harborsecret
      containers:
        - name: sentiment
          image: 192.168.30.20:5000/migu/sentiment:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9600
              protocol: TCP
              name: http
          resources:
            limits:
              cpu: 2
              memory: 4G
            requests:
              cpu: 2
              memory: 4G
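The apply step is elided here; a minimal sketch, assuming the combined manifest above is saved as rescheduling.yaml (a hypothetical filename):

$ kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f rescheduling.yaml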
It is possible for these 2 replicas to be evenly divided into the 2 worker clusters, that is, one replica in each cluster.
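A hedged verification of the initial placement, assuming the worker1 and worker2 contexts (note that the manifest above places the deployment in the migu namespace, while the later outputs in this report query the default namespace):

$ kubectl --context worker1 get pod -l app=sentiment
$ kubectl --context worker2 get pod -l app=sentiment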
2. Now we taint all nodes in worker1 and evict the replica.
...
$ kubectl --context worker1 cordon control-plane

# delete the pod in cluster worker1
$ kubectl --context worker1 delete pod -l app=sentiment
A new pod will be created but cannot be scheduled by kube-scheduler due to lack of resources.
...
# the state of the pod in cluster worker1 is pending
$ kubectl --context worker1 get pod
NAME                         READY   STATUS    RESTARTS   AGE
sentiment-6fd4c7867c-jkcqn   0/1     Pending   0          80s
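While the replacement pod is Pending, the descheduler's decisions can be observed from its logs; a sketch assuming the default karmada-descheduler deployment name and karmada-system namespace on karmada-host:

# tail the descheduler logs (deployment name assumed from a default install)
$ kubectl --context karmada-host -n karmada-system logs deployment/karmada-descheduler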
3. After about 5 to 7 minutes, the pod in worker1 will be evicted and scheduled to the other available clusters.
...
# get the pod in cluster worker1
$ kubectl --context worker1 get pod
No resources found in default namespace.

# get a list of pods in cluster worker2
$ kubectl --context worker2 get pod
NAME                         READY   STATUS    RESTARTS   AGE
sentiment-6fd4c7867c-hvzfd   1/1     Running   0          6m3s
sentiment-6fd4c7867c-vrmnm   1/1     Running   0          4s
Test Dashboards
Additional Testing
N/A
Bottlenecks/Errata
...