If you’ve ever run kubectl describe node on your control plane and wondered about this taint:
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Here’s what it does and why you want to keep it.
What It Does
This taint prevents regular pods from being scheduled on control plane nodes; only pods that explicitly tolerate it can land there. (NoSchedule blocks new placements; unlike NoExecute, it doesn't evict pods that are already running.)
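A quick way to confirm the taint is present (substitute your control plane node's name):
kubectl describe node <control-plane-node> | grep Taints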
Why It Matters
Your control plane runs critical components:
- etcd - The cluster’s brain (all state lives here)
- kube-apiserver - The API everything talks to
- kube-controller-manager - Manages controllers
- kube-scheduler - Decides where pods run
If a misbehaving application pod consumes all CPU or memory on the control plane, these components starve and your entire cluster becomes unresponsive.
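If you suspect this is happening, two quick checks, assuming a kubeadm-style cluster where the control plane components run as static pods in kube-system:
kubectl get pods -n kube-system
kubectl get --raw='/readyz?verbose'
The second command hits the API server's readiness endpoint and prints a per-check breakdown; it only works while the API server can still answer at all.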
What Can Still Run There
Pods that tolerate the taint still schedule on control plane nodes:
- Control plane static pods (etcd, apiserver, etc.) - these bypass the scheduler entirely, so the taint doesn't block them
- DaemonSets like calico-node, kube-proxy, promtail
- Backup jobs that need control plane access (like etcd backups)
Example toleration in a pod spec:
tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
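Keep in mind a toleration only allows a pod onto the tainted node; it doesn't pull it there. Something that must run on the control plane (that etcd backup job, say) also needs a nodeSelector or node affinity. A minimal sketch, assuming a kubeadm cluster where control plane nodes carry the node-role.kubernetes.io/control-plane label; the pod name and image are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: etcd-backup            # placeholder name
spec:
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  containers:
  - name: backup
    image: busybox:1.36        # stand-in; a real job would use an etcdctl image
    command: ["sleep", "3600"]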
Should You Remove It?
For small homelabs with only a handful of nodes, you might be tempted to use the control plane for workloads. You can remove the taint; the trailing - is what deletes it:
kubectl taint nodes <control-plane-node> node-role.kubernetes.io/control-plane:NoSchedule-
But consider:
- One runaway pod can take down your cluster
- Control plane components have no resource guarantees against your workloads (the kubelet sketch after this list is a partial mitigation)
- Debugging becomes harder when the control plane is overloaded
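If you do remove the taint, at least reserve headroom for the system so your workloads can't eat every last byte. The kubelet supports this via its config file; a minimal sketch, with illustrative values rather than recommendations:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:              # set aside for Kubernetes daemons (kubelet, container runtime)
  cpu: "500m"
  memory: "1Gi"
systemReserved:            # set aside for the OS (sshd, systemd, ...)
  cpu: "250m"
  memory: "512Mi"
evictionHard:
  memory.available: "256Mi"
This protects the kubelet and OS daemons from pod pressure. The control plane static pods are themselves pods, though, so they still compete with your workloads inside the allocatable pool.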
If you have even one dedicated worker node, keep the taint. Your cluster will thank you the day a pod goes haywire.
Re-Adding the Taint
If you removed it and want it back:
kubectl taint nodes <control-plane-node> node-role.kubernetes.io/control-plane:NoSchedule
Note: After an etcd restore, this taint may be missing, because the restore rolls cluster state back to the moment of the snapshot; if the taint wasn't set then, it won't be set afterwards. Check for it and re-add it if needed.
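A quick post-restore check (prints the node's taints; empty output means none):
kubectl get node <control-plane-node> -o jsonpath='{.spec.taints}'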