As previously described, we have a cluster with an example application that the scheduler has spread evenly across the worker nodes.
oc get pods -n podtesting -o wide | grep Running
django-psql-example-1-842fl 1/1 Running 0 2m7s 10.131.0.65 compute-3 <none> <none>
django-psql-example-1-h6kst 1/1 Running 0 24m 10.130.2.97 compute-2 <none> <none>
django-psql-example-1-pxhlv 1/1 Running 0 2m7s 10.128.2.13 compute-0 <none> <none>
django-psql-example-1-xms7x 1/1 Running 0 2m7s 10.129.2.10 compute-1 <none> <none>
postgresql-1-4pcm4 1/1 Running 0 26m 10.131.0.51 compute-3 <none> <none>
However, our four compute nodes are built with different hardware specifications and use different disk types (SSD vs. HDD).
Figure 1. Nodes with Different Specifications
Since our web application must run on fast disks, we must configure the cluster to schedule its pods only on nodes with SSDs.
To start using nodeSelectors, we first label our nodes accordingly:
oc label nodes compute-0 compute-1 disktype=ssd (1)
oc label nodes compute-2 compute-3 disktype=hdd
(1) As the label key we are using disktype.
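The label simply ends up in the Node object's metadata. A shortened sketch of what a labeled node looks like (node name and the other labels will differ on your cluster):

```yaml
# Excerpt of a labeled Node object (oc get node compute-0 -o yaml);
# only the relevant metadata is shown.
apiVersion: v1
kind: Node
metadata:
  name: compute-0
  labels:
    disktype: ssd                       # our custom label
    node-role.kubernetes.io/worker: ""  # role label managed by OpenShift
```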
As a cross-check, we can list the nodes carrying a specific label:
oc get nodes -l disktype=ssd
NAME STATUS ROLES AGE VERSION
compute-0 Ready worker 7h32m v1.19.0+d59ce34
compute-1 Ready worker 7h31m v1.19.0+d59ce34
oc get nodes -l disktype=hdd
NAME STATUS ROLES AGE VERSION
compute-2 Ready worker 7h32m v1.19.0+d59ce34
compute-3 Ready worker 7h32m v1.19.0+d59ce34
Note: If no matching label is found, the pod cannot be scheduled. Therefore, always label the nodes first.
The second step is to add the node selector to the pod specification. In our example we are using a DeploymentConfig, so let’s add it there:
oc patch dc django-psql-example -n podtesting --patch '{"spec":{"template":{"spec":{"nodeSelector":{"disktype":"ssd"}}}}}'
This adds the following nodeSelector under spec/template/spec:
nodeSelector:
disktype: ssd
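To see where the patched field sits in context, here is a shortened sketch of the DeploymentConfig manifest (container details abbreviated; only the structure matters):

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: django-psql-example
  namespace: podtesting
spec:
  template:
    spec:
      nodeSelector:        # the field added by the patch
        disktype: ssd
      containers:
      - name: django-psql-example
        # image and further container settings omitted
```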
OpenShift now triggers a new deployment, and the pods are rescheduled onto the matching nodes.
oc get pods -n podtesting -o wide | grep Running
django-psql-example-3-4j92k 1/1 Running 0 42s 10.129.2.7 compute-1 <none> <none>
django-psql-example-3-d7hsd 1/1 Running 0 42s 10.129.2.8 compute-1 <none> <none>
django-psql-example-3-fkbfm 1/1 Running 0 14m 10.128.2.18 compute-0 <none> <none>
django-psql-example-3-psskb 1/1 Running 0 14m 10.128.2.17 compute-0 <none> <none>
As you can see, only the nodes with an SSD (compute-0 and compute-1) are being used.
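Should the selector ever need to be removed again, the same mechanisms work in reverse. A sketch (in a strategic merge patch, setting a field to null deletes it; a trailing dash after the key removes a label):

```shell
# Remove the nodeSelector from the DeploymentConfig again
oc patch dc django-psql-example -n podtesting \
  --patch '{"spec":{"template":{"spec":{"nodeSelector":null}}}}'

# Remove the disktype label from the nodes (note the trailing dash)
oc label nodes compute-0 compute-1 compute-2 compute-3 disktype-
```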