The CaaS sub-system provides a single performance management solution, which is exposed to applications through the HPA (Horizontal Pod Autoscaler) API of the Kubernetes platform. Applications can use both core metrics and custom metrics to scale themselves horizontally. Core metrics are based on CPU and memory usage, while custom metrics can be practically any metric the developer exposes to the API Aggregator via an HTTP server.
...
Note that in both solutions the database behind the performance management system is a time-series database used to store metric values; it is not persistent.
Check core metrics in the system
The metrics APIs registered in Kubernetes can be listed with:
Code Block |
---|
|
~]$ kubectl api-versions
...
custom.metrics.k8s.io/v1beta1
...
metrics.k8s.io/v1beta1
... |
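The registered metrics API can also be queried directly through the API server; kubectl top (shown below) presents the same data in tabular form. A minimal sketch, assuming the Metrics Server behind metrics.k8s.io is running in the cluster:
Code Block |
---|
|
~]$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
~]$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods" |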
Code Block |
---|
|
~]$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
172.24.16.104 1248m 62% 5710Mi 74%
172.24.16.105 1268m 63% 5423Mi 71%
172.24.16.107 1215m 60% 5191Mi 68%
172.24.16.112 253m 6% 846Mi 11% |
The printout shows the node names (in this case the IP addresses of the nodes), the CPU usage both in millicores and as a percentage (the nodes in this example have 2 CPUs, i.e. 2000m, so 1248m corresponds to 62%), and the memory usage both in MiB and as a percentage.
...
The console output shows the pod names and their CPU and memory consumption in the same format.
When custom metrics are used, the developer has to expose the metrics from the application in Prometheus format. Prometheus client libraries that provide the metric types and a built-in HTTP server are available for this purpose in Golang, Python and other languages.
Code Block |
---|
|
from prometheus_client import start_http_server, Histogram
import random
import time

# Histogram metric labelled with the function name
function_exec = Histogram('function_exec_time',
                          'Time spent processing a function',
                          ['func_name'])

def func():
    # Simulate a function that is occasionally slow
    if random.random() < 0.02:
        time.sleep(2)
        return
    time.sleep(0.2)

# Expose the metrics over HTTP on port 9100
start_http_server(9100)
while True:
    start_time = time.time()
    func()
    # Record the execution time as a histogram observation
    function_exec.labels(func_name="func").observe(time.time() - start_time) |
...
The HTTP request above, made with cURL, shows the custom metrics exposed by the HTTP server of an application running in a Kubernetes pod.
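For illustration, such a request could look like the following sketch; the pod IP and the numbers are hypothetical, the port matches the Python example above, and the series names follow the histogram exposition format of the Prometheus client:
Code Block |
---|
|
~]$ curl http://10.244.1.23:9100/metrics
# HELP function_exec_time Time spent processing a function
# TYPE function_exec_time histogram
function_exec_time_bucket{func_name="func",le="0.25"} 41.0
function_exec_time_bucket{func_name="func",le="+Inf"} 42.0
function_exec_time_count{func_name="func"} 42.0
function_exec_time_sum{func_name="func"} 10.6 |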
Core metrics and custom metrics examples
HPA manifest with core metrics scraping:
Code Block |
---|
|
php-apache-hpa.yml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache-hpa
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: php-apache-deployment
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50 |
In this example HPA scrapes the CPU consumption of php-apache-deployment. The minimum number of pods is one and the maximum replica count is five. HPA scales the deployment out when the average CPU utilization of the pods rises above 50%; when the utilization falls below 50%, HPA scales the number of pods back down, as far as the minimum of one.
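The replica count that HPA requests follows the standard Kubernetes scaling rule, desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue), clamped between minReplicas and maxReplicas. A minimal sketch of that calculation in Python (the function name and sample values are illustrative only):
Code Block |
---|
|
import math

def desired_replicas(current_replicas, current_value, target_value,
                     min_replicas=1, max_replicas=5):
    # Scale proportionally to the ratio of the measured metric to its target
    desired = math.ceil(current_replicas * current_value / target_value)
    # Clamp the result between minReplicas and maxReplicas
    return max(min_replicas, min(max_replicas, desired))

# 3 pods at 80% average CPU against a 50% target -> ceil(4.8) = 5 pods
print(desired_replicas(3, 80, 50)) |
The same rule applies to custom metrics, as in the next example.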
Code Block |
---|
|
podinfo-hpa-custom.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
  namespace: kube-system
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 10 |
In the second example HPA uses custom metrics to manage performance. The podinfo application contains an HTTP server implementation which exposes its metrics in Prometheus format. The minimum number of pods is two and the maximum is ten. The custom metric is the number of HTTP requests seen by the HTTP server, taken from the exposed metrics.
Code Block |
---|
|
~]$ kubectl create -f podinfo-hpa-custom.yaml --namespace=kube-system |
The command is the same when starting the core metrics HPA; see the sketch below.
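For the php-apache example above this would be:
Code Block |
---|
|
~]$ kubectl create -f php-apache-hpa.yml |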
Code Block |
---|
|
~]$ kubectl describe hpa podinfo --namespace=kube-system
Name: podinfo
Namespace: kube-system
Labels: <none>
Annotations: <none>
CreationTimestamp: Tue, 19 Feb 2019 10:08:21 +0100
Reference: Deployment/podinfo
Metrics: ( current / target )
"http_requests" on pods: 901m / 10
Min replicas: 2
Max replicas: 10
Deployment pods: 2 current / 2 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_requests
ScalingLimited True TooFewReplicas the desired replica count is increasing faster than the maximum scale rate
Events: <none> |
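In the printout above, the current value 901m uses the milli notation of the Kubernetes metrics APIs, i.e. 0.901, well below the target average of 10. The raw value that HPA reads can also be checked directly on the custom metrics API; a sketch assuming a Prometheus adapter serves custom.metrics.k8s.io and that http_requests is registered for the podinfo pods:
Code Block |
---|
|
~]$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/kube-system/pods/*/http_requests" |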
Note that the HPA API supports scaling based on both core and custom metrics within the same HPA object.
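A sketch of such a combined manifest, reusing the podinfo example above and adding a CPU utilization target (the 50% value is illustrative):
Code Block |
---|
|
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
  namespace: kube-system
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 10 |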
Attached files: php-apache-deployment.yml, php-apache-hpa.yml, php-apache-service.yml
...