Elasticsearch was stuck in a crash loop with the error “health check failed due to broken node lock”. The data directory inside the container was empty, yet the PVC showed as Bound. The root cause: the OpenEBS hostPath passed through a symlink, and Kubernetes local PVs do not follow symlinks.
## The Problem

The Elasticsearch pod showed 1/2 containers running, with the readiness probe failing:

```
Warning  Unhealthy  2m12s (x35651 over 2d)  kubelet  Readiness probe failed:
Elasticsearch is not ready yet. Check the server logs.
```
Elasticsearch logs revealed the issue:

```json
{
  "log.level": "WARN",
  "message": "this node is unhealthy: health check failed due to broken node lock"
}
```
Checking the data directory inside the container showed it was empty:

```shell
kubectl -n elastic exec elasticsearch-es-default-0 -c elasticsearch -- ls -la /usr/share/elasticsearch/data/
total 0
```
The PVC was bound to a valid PV with an OpenEBS hostPath:

```yaml
spec:
  local:
    path: /var/openebs/local/pvc-6e092fff-f1f4-4ace-8cb9-03c72a5aefd3
```
## Root Cause

On the node, /var/openebs was a symlink pointing to /data/openebs (note `-d`: plain `ls -la` on a symlink to a directory lists the target's contents rather than the link itself):

```shell
ls -ld /var/openebs
lrwxrwxrwx 1 root root 13 Jan 24 18:41 /var/openebs -> /data/openebs
```
The symlink was created to move OpenEBS data from the 68GB root partition to a 1.7TB data partition. The data existed at /data/openebs/local/pvc-... with all the Elasticsearch indices intact.
The problem: Kubernetes local PV mounts do not follow symlinks. The kubelet resolves the path literally, creating an empty directory at the mount point instead of following the symlink to the actual data.
Checking the mount inside the container confirmed this:

```shell
kubectl -n elastic exec elasticsearch-es-default-0 -c elasticsearch -- df -h /usr/share/elasticsearch/data
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/rl-root   69G   29G   40G  42% /usr/share/elasticsearch/data
```
The data directory was mounted from the root filesystem, not the data partition where the actual Elasticsearch data resided.
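A quick way to catch this failure mode on a node is to walk each component of the PV's hostPath and flag any symlinks. `check_symlinks` below is a hypothetical helper, not part of Kubernetes or OpenEBS tooling; the demo runs it against a throwaway directory tree instead of a real node path:

```shell
#!/usr/bin/env bash
# Flag any symlinked component in a path (e.g. a local PV hostPath).
# check_symlinks is a hypothetical helper, not a kubectl/OpenEBS command.
check_symlinks() {
  local path="$1" current="" part
  local IFS='/'
  for part in $path; do
    [ -z "$part" ] && continue
    current="$current/$part"
    if [ -L "$current" ]; then
      echo "symlink: $current -> $(readlink "$current")"
    fi
  done
}

# Demo against a throwaway tree mirroring the /var/openebs layout.
tmp=$(mktemp -d)
mkdir -p "$tmp/data/openebs/local"
ln -s "$tmp/data/openebs" "$tmp/openebs-link"
check_symlinks "$tmp/openebs-link/local"   # reports the symlinked component
rm -rf "$tmp"
```

Running the same check against the real PV path on the node would have flagged /var/openebs immediately.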
## Solution: Bind Mount

A bind mount achieves the same goal as a symlink (redirecting storage to a larger partition) while working correctly with Kubernetes local PVs.

Remove the symlink and create a bind mount:

```shell
sudo rm /var/openebs
sudo mkdir -p /var/openebs
sudo mount --bind /data/openebs /var/openebs
```
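The difference is visible even without a cluster: on a Linux box with unprivileged user namespaces enabled, `unshare` provides a private mount namespace where `mount --bind` behaves exactly as above (the paths below are throwaway directories, not the real OpenEBS layout):

```shell
# Unprivileged demo of a bind mount inside a private mount namespace.
# Requires Linux with unprivileged user namespaces enabled.
unshare --mount --map-root-user bash -c '
  tmp=$(mktemp -d)
  mkdir "$tmp/data" "$tmp/var"
  echo "indices live here" > "$tmp/data/file"
  mount --bind "$tmp/data" "$tmp/var"   # same idea as /data/openebs -> /var/openebs
  cat "$tmp/var/file"                   # reads the file through the bind target
  umount "$tmp/var"
  rm -rf "$tmp"
'
```

Unlike a symlink, the bind mount is a real entry in the kernel mount table rather than a special file on disk.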
Make it persistent across reboots by adding to /etc/fstab:

```
/data/openebs /var/openebs none bind 0 0
```
Reload systemd to pick up the fstab changes:

```shell
sudo systemctl daemon-reload
```
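Before restarting the pod, it's worth confirming the bind mount took effect. `findmnt --target` reports the filesystem that actually contains a path, so after the steps above it should name the data partition for /var/openebs rather than the root device (`backing_source` is just an illustrative wrapper, not a standard tool):

```shell
# Print the device backing a path. findmnt resolves the path to the
# mount that contains it, even when the path is not itself a mount point.
backing_source() {
  findmnt -n -o SOURCE --target "$1"
}

backing_source /    # prints the root filesystem device
# After `mount --bind /data/openebs /var/openebs`, backing_source
# /var/openebs should print the device behind /data/openebs,
# not /dev/mapper/rl-root.
```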
Restart the Elasticsearch pod:

```shell
kubectl -n elastic delete pod elasticsearch-es-default-0
```
## Verification

After the pod restarted, the Elasticsearch cluster returned to green status:

```shell
kubectl -n elastic get elasticsearch
NAME            HEALTH   NODES   VERSION   PHASE   AGE
elasticsearch   green    1       8.17.0    Ready   60d
```
## Summary

| Approach | Works with local PVs | Relocates data to another partition |
|---|---|---|
| Symlink | No | Yes |
| Bind mount | Yes | Yes |
When relocating OpenEBS or other local storage to a different partition, use bind mounts instead of symlinks. Kubernetes local PVs resolve paths literally and do not follow symlinks, resulting in empty mount points and data appearing to be missing.