k9s displays CPU and MEM columns for pods and nodes, but on bare-metal kubeadm clusters these show “N/A” by default. This occurs because k9s relies on the Kubernetes Metrics API, which requires metrics-server to be installed.
## The Problem

Without metrics-server:

```text
NAME           CPU   MEM
k8s-master01   N/A   N/A
k8s-worker01   N/A   N/A
```
Managed Kubernetes services (EKS, GKE, AKS) typically pre-install metrics-server. Bare-metal kubeadm clusters do not include it.
## Solution

Install metrics-server via Helm, configured for kubeadm's self-signed kubelet certificates.
## Directory Structure

```text
infrastructure/metrics-server/
├── metrics-server-values.yaml
├── setup-metrics-server.sh
└── README.md
```
## Helm Values

```yaml
# metrics-server-values.yaml
args:
  # Required for kubeadm clusters with self-signed kubelet certificates
  - --kubelet-insecure-tls
  # Prefer internal IP for kubelet connections
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname

# Run on control plane
nodeSelector:
  node-role.kubernetes.io/control-plane: ""
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule

resources:
  requests:
    cpu: 100m
    memory: 200Mi
  limits:
    memory: 256Mi

# Linkerd integration
podAnnotations:
  linkerd.io/inject: enabled
```
## The `--kubelet-insecure-tls` Flag

Kubeadm clusters use self-signed certificates for kubelet endpoints. Without this flag, metrics-server fails with:

```text
unable to fully scrape metrics from node k8s-worker01:
x509: certificate signed by unknown authority
```
This flag skips TLS verification for kubelet connections. In a controlled homelab environment, this trade-off is acceptable compared to setting up proper PKI for kubelet serving certificates.
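For clusters where skipping verification is not acceptable, Kubernetes supports issuing proper serving certificates to kubelets via TLS bootstrapping. A sketch of the relevant kubelet configuration (not part of this repo's setup):

```yaml
# Sketch: enable kubelet serving-certificate bootstrapping so kubelets
# request certificates signed by the cluster CA, letting metrics-server
# drop --kubelet-insecure-tls. Apply via the kubelet config and restart.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true
```

Each kubelet then submits a CertificateSigningRequest that must be approved (`kubectl certificate approve <csr-name>`), which is the extra operational burden the flag avoids.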
## Setup Script

```bash
#!/bin/bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
NAMESPACE="kube-system"

helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/ 2>/dev/null || true
helm repo update metrics-server

helm upgrade --install metrics-server metrics-server/metrics-server \
  -f "${SCRIPT_DIR}/metrics-server-values.yaml" \
  -n "${NAMESPACE}"

kubectl rollout status deployment/metrics-server -n "${NAMESPACE}" --timeout=120s

sleep 5
kubectl top nodes || echo "Note: Metrics may take 30-60 seconds to populate"
```
## Installation

```bash
cd infrastructure/metrics-server/
chmod +x setup-metrics-server.sh
./setup-metrics-server.sh
```
## Verification

Wait approximately 30 seconds after installation for metrics to populate:

```bash
kubectl top nodes
```

```text
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   592m         14%    4502Mi          29%
k8s-worker01   197m         4%     3527Mi          22%
k8s-worker02   335m         8%     4053Mi          26%
polycephala    1750m        0%     23662Mi         9%
```
k9s now displays actual CPU and MEM values instead of N/A.
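The same output can also be consumed in scripts. A minimal sketch that flags nodes above a CPU threshold by parsing `kubectl top nodes` output — the function name and threshold are illustrative assumptions, not part of this repo's scripts:

```shell
# Flag nodes whose CPU% exceeds a threshold, reading `kubectl top nodes`
# output from stdin. Defaults to 80% if no threshold is given.
check_node_cpu() {
  local threshold="${1:-80}"
  # Skip the header row; CPU% is the third column (e.g. "14%")
  awk -v t="$threshold" 'NR > 1 {
    pct = $3
    sub(/%/, "", pct)
    if (pct + 0 > t) print $1 " CPU at " $3
  }'
}

# Usage: kubectl top nodes | check_node_cpu 80
```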
## What metrics-server Enables

| Feature | Description |
|---|---|
| `kubectl top` | CLI resource monitoring |
| k9s metrics | Visual resource columns |
| HPA | Horizontal Pod Autoscaler (scale on CPU/memory) |
| VPA | Vertical Pod Autoscaler (right-size requests) |
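With the Metrics API in place, an HPA can scale on resource utilization. A minimal sketch using the `autoscaling/v2` API — the target Deployment name `demo` and the 70% threshold are illustrative assumptions:

```yaml
# Sketch: scale the "demo" Deployment between 1 and 5 replicas,
# targeting 70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Without metrics-server, this HPA would report `<unknown>` for current utilization and never scale.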
## Troubleshooting

If metrics still show N/A after installation:

```bash
# Check pod logs
kubectl logs -n kube-system -l app.kubernetes.io/name=metrics-server

# Verify API registration
kubectl get apiservice v1beta1.metrics.k8s.io
```

Metrics typically take 30-60 seconds to appear after the APIService registers with the aggregation layer.
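The two most common failure modes on kubeadm clusters show up directly in the pod logs. A small triage sketch that maps known error strings to likely causes — the function name and hint text are assumptions for illustration:

```shell
# Map common metrics-server log errors to likely causes, reading
# log output from stdin and printing one hint per distinct error.
diagnose_metrics_logs() {
  grep -E -o 'x509: [a-z ]+|no route to host|connection refused' |
    sort -u |
    while read -r err; do
      case "$err" in
        x509:*)
          echo "TLS: kubelet cert not trusted; see --kubelet-insecure-tls above" ;;
        *)
          echo "Network: check reachability of kubelet port 10250" ;;
      esac
    done
}

# Usage: kubectl logs -n kube-system -l app.kubernetes.io/name=metrics-server | diagnose_metrics_logs
```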
## Notes

- Headlamp and similar dashboards may show metrics even without metrics-server because they query Prometheus directly; k9s and `kubectl top` require the Metrics API.
- Running metrics-server on the control plane with appropriate tolerations keeps it co-located with other critical cluster components.

Configuration available at `k8s-configs/infrastructure/metrics-server`.