
Adding Comments with Comentario: Self-Hosted PostgreSQL-Backed Comments

I wanted to add comments to this blog without using a third-party service like Disqus. After evaluating options, I chose Comentario - a self-hosted comment system that uses PostgreSQL for storage. This post covers the full deployment on Kubernetes with HAProxy TLS termination and custom theming.

Why Comentario

When choosing a comment system, I had a few requirements:

- Self-hosted - no third-party data collection
- PostgreSQL backend - I already run a shared PostgreSQL instance, so no extra backup infrastructure is needed
- GitHub OAuth - most of my readers are developers
- Simple embed - just a script tag and a web component

I considered Remark42 (uses BoltDB) and Commento (abandoned), but Comentario hit all the marks. It's an actively maintained fork of Commento with PostgreSQL support. ...
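The "script tag and web component" embed can be sketched as below. This is a minimal sketch, assuming a Comentario instance at the placeholder host comments.example.com - check your own instance for the exact snippet it serves:

```shell
# Write the (assumed) Comentario embed snippet into a partial the blog
# template can include. Host and output path are placeholders.
cat > /tmp/comments.html <<'EOF'
<script defer src="https://comments.example.com/comentario.js"></script>
<comentario-comments></comentario-comments>
EOF
cat /tmp/comments.html
```

The script registers the `comentario-comments` custom element, so the page needs nothing else to render the comment thread.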

January 3, 2026 · 5 min · Will
NVIDIA GPU

Exposing an NVIDIA RTX 5070 Ti GPU in Kubernetes with Time-Slicing

This post covers exposing an NVIDIA RTX 5070 Ti (Blackwell architecture) as a schedulable Kubernetes resource with time-slicing support, allowing multiple pods to share the GPU.

Hardware

| Node | GPU | Memory | Compute Capability |
|------|-----|--------|--------------------|
| polycephala | NVIDIA GeForce RTX 5070 Ti | 16 GB | 12.0 (Blackwell, sm_120) |

The RTX 5070 Ti uses the new Blackwell architecture (GB203 chip) with compute capability sm_120. This creates compatibility challenges with some software that hasn't been updated yet. ...
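Time-slicing in the NVIDIA device plugin is driven by a small config file mounted via ConfigMap; a sketch is below. The replicas value of 4 is an illustrative choice, not a figure from the post:

```shell
# Sketch of an NVIDIA k8s-device-plugin time-slicing config: one physical
# GPU is advertised as 4 schedulable nvidia.com/gpu resources.
cat > /tmp/time-slicing-config.yaml <<'EOF'
version: v1
sharing:
  timeSlicing:
    resources:
      - name: nvidia.com/gpu
        replicas: 4
EOF
cat /tmp/time-slicing-config.yaml
# Against a live cluster, load it as a ConfigMap, e.g.:
# kubectl create configmap time-slicing-config -n nvidia-device-plugin \
#   --from-file=config.yaml=/tmp/time-slicing-config.yaml
```

Note that time-slicing provides no memory isolation between pods - each replica sees the full 16 GB.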

January 2, 2026 · 7 min · Will
IDS Monitoring

OPNsense IDS Monitoring with Suricata, Loki, and Grafana

OPNsense includes Suricata for intrusion detection, but the built-in alerts page provides limited visibility. This post covers forwarding IDS alerts to Loki via syslog and visualizing them in Grafana alongside firewall logs.

Architecture

OPNsense (Suricata + filterlog) → UDP/514, RFC 5424 → Promtail syslog receiver (192.168.2.221) → Loki (log storage) → Grafana (dashboards)

Prerequisites

- OPNsense firewall with Suricata IDS enabled
- Kubernetes cluster with Loki deployed
- MetalLB or NodePort for exposing the syslog receiver

Step 1: Enable Suricata IDS on OPNsense

Navigate to Services → Intrusion Detection → Administration. ...
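On the Promtail side, the pipeline needs a syslog scrape job listening on port 514. A minimal sketch, with placeholder labels - the exact fields available depend on your Promtail version:

```shell
# Sketch of a Promtail scrape config that receives RFC 5424 syslog from
# OPNsense and attaches the sending hostname as a label.
cat > /tmp/promtail-syslog.yaml <<'EOF'
scrape_configs:
  - job_name: opnsense-syslog
    syslog:
      listen_address: 0.0.0.0:514
      listen_protocol: udp   # UDP support depends on the Promtail version
      labels:
        job: syslog
    relabel_configs:
      - source_labels: ['__syslog_message_hostname']
        target_label: host
EOF
cat /tmp/promtail-syslog.yaml
```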

January 1, 2026 · 5 min · Will
Certificate Rotation

Automatic Certificate Rotation with cert-manager and Linkerd

Certificates expire. In a Kubernetes homelab with a Linkerd service mesh, this means the identity issuer certificate needs renewal annually. Without automation, this becomes a manual task that's easy to forget until mTLS breaks across the cluster. This post covers installing cert-manager on a bare-metal kubeadm cluster and configuring it to automatically rotate Linkerd's identity issuer certificate.

The Problem

Linkerd uses a two-tier PKI:

| Certificate | Purpose | Default Lifetime |
|-------------|---------|------------------|
| Trust Anchor | Root CA for the mesh | 10 years |
| Identity Issuer | Signs proxy certificates | 1 year |

The identity issuer expires annually. When it does, new proxy sidecars cannot obtain valid certificates, breaking mTLS. The trust anchor rarely needs rotation, but the identity issuer requires attention. ...
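The rotation itself comes down to a cert-manager Certificate resource that re-issues the linkerd-identity-issuer secret. A sketch following the pattern in the Linkerd docs - the issuer name and durations here are illustrative, not the post's exact values:

```shell
# Sketch of a cert-manager Certificate that keeps Linkerd's identity
# issuer rotated. Short duration + renewBefore means cert-manager
# re-issues well before expiry.
cat > /tmp/linkerd-identity-issuer.yaml <<'EOF'
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: linkerd-identity-issuer
  namespace: linkerd
spec:
  secretName: linkerd-identity-issuer
  duration: 48h
  renewBefore: 25h
  issuerRef:
    name: linkerd-trust-anchor   # assumed name of the CA Issuer
    kind: Issuer
  commonName: identity.linkerd.cluster.local
  isCA: true
  privateKey:
    algorithm: ECDSA
  usages:
    - cert sign
    - crl sign
EOF
cat /tmp/linkerd-identity-issuer.yaml
```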

January 1, 2026 · 7 min · Will
Kubernetes Metrics

Enabling CPU and Memory Stats in k9s on Bare-Metal Kubernetes

k9s displays CPU and MEM columns for pods and nodes, but on bare-metal kubeadm clusters these show "N/A" by default. This occurs because k9s relies on the Kubernetes Metrics API, which requires metrics-server to be installed.

The Problem

Without metrics-server:

NAME          CPU  MEM
k8s-master01  N/A  N/A
k8s-worker01  N/A  N/A

Managed Kubernetes services (EKS, GKE, AKS) typically pre-install metrics-server. Bare-metal kubeadm clusters do not include it.

Solution

Install metrics-server via Helm with configuration for kubeadm's self-signed certificates. ...
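The install reduces to one Helm command with the `--kubelet-insecure-tls` argument, since kubeadm kubelets serve self-signed certificates. A sketch, written to a script so the live-cluster step stays explicit:

```shell
# Sketch: install metrics-server on a kubeadm cluster, skipping kubelet
# serving-cert verification (kubeadm kubelets are self-signed by default).
cat > /tmp/install-metrics-server.sh <<'EOF'
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system \
  --set args={--kubelet-insecure-tls}
EOF
cat /tmp/install-metrics-server.sh
# Run against a live cluster: sh /tmp/install-metrics-server.sh
```

Once the metrics-server pod is Ready, `kubectl top nodes` and the k9s CPU/MEM columns start populating within a minute or so.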

January 1, 2026 · 2 min · Will
ArgoCD GitOps

GitOps Blog Deployment with ArgoCD and Automatic Image Updates

I run a Hugo blog on my homelab Kubernetes cluster, and I wanted a proper GitOps workflow where pushing to main automatically deploys changes. No manual kubectl apply, no SSH-ing into servers, no scripts to remember. Just git push and walk away. This post covers how I set up ArgoCD to deploy this blog with automatic image updates using the ArgoCD Image Updater.

The Goal

Git Push (main) → GitLab CI (build, tags the image with the git SHA, e.g. d67fe5d) → Container Registry → Image Updater (detects the new tag) → ArgoCD (deploy) → Kubernetes (updated)

The workflow: ...
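ArgoCD Image Updater is configured through annotations on the Application resource. A sketch of the relevant fragment - the registry, image, and `blog` alias are placeholders, and the SHA-tag regexp is an illustrative filter:

```shell
# Sketch of Image Updater annotations on an ArgoCD Application:
# watch the image, follow the newest build, only accept 7-hex-char tags.
cat > /tmp/image-updater-annotations.yaml <<'EOF'
metadata:
  annotations:
    argocd-image-updater.argoproj.io/image-list: blog=registry.example.com/blog
    argocd-image-updater.argoproj.io/blog.update-strategy: newest-build
    argocd-image-updater.argoproj.io/blog.allow-tags: regexp:^[0-9a-f]{7}$
EOF
cat /tmp/image-updater-annotations.yaml
```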

December 29, 2025 · 6 min · Will
Firewall Configuration

Configuring OPNsense Firewall Rules via API for Cross-VLAN Kubernetes

When I needed to add a node from my DMZ to my Kubernetes cluster on the LAN, I discovered OPNsense has a comprehensive REST API that lets you manage firewall rules programmatically. No clicking through the UI - just curl commands that create rules properly tracked in the configuration and included in backups.

The Problem

My Kubernetes cluster lives on my LAN (192.168.2.0/24), but I wanted to add a machine from my DMZ (192.168.4.0/24). By default, DMZ traffic can't reach the LAN - that's the whole point of a DMZ. I needed to punch specific holes for Kubernetes traffic while keeping everything else blocked. ...
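The calls look roughly like the sketch below. The JSON shape and the addRule endpoint follow the os-firewall plugin's convention, but treat both as assumptions and verify against your OPNsense version's API docs; the key, secret, and port are placeholders (6443 being the Kubernetes API server port):

```shell
# Sketch: build a rule payload allowing DMZ → LAN traffic to the
# Kubernetes API. Endpoint path and field names are assumptions.
cat > /tmp/rule.json <<'EOF'
{
  "rule": {
    "action": "pass",
    "interface": "dmz",
    "protocol": "TCP",
    "source_net": "192.168.4.0/24",
    "destination_net": "192.168.2.0/24",
    "destination_port": "6443",
    "description": "DMZ node to Kubernetes API"
  }
}
EOF
cat /tmp/rule.json
# Against a live firewall (API key/secret created under System → Access):
# curl -s -u "$API_KEY:$API_SECRET" -H 'Content-Type: application/json' \
#   -d @/tmp/rule.json https://opnsense.example/api/firewall/filter/addRule
```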

December 28, 2025 · 10 min · Will
Node Drain

Why You Need --disable-eviction for Homelab Kubernetes Node Drains

If you’ve ever tried to drain a Kubernetes node in a homelab cluster and found yourself staring at a terminal that just… hangs, you’ve probably run into PodDisruptionBudget (PDB) conflicts. Here’s why it happens and how to fix it. The Problem I was upgrading my Kubernetes cluster from 1.34 to 1.35, which requires draining each node before upgrading. Simple enough, right? kubectl drain k8s-worker01 --ignore-daemonsets --delete-emptydir-data And then… nothing. The command just sat there. No error, no progress, just waiting. ...

December 28, 2025 · 5 min · Will
Control Plane

Why Your Kubernetes Control Plane Has a NoSchedule Taint

If you’ve ever run kubectl describe node on your control plane and wondered about this taint: Taints: node-role.kubernetes.io/control-plane:NoSchedule Here’s what it does and why you want to keep it. What It Does This taint prevents regular pods from being scheduled on control plane nodes. Only pods that explicitly tolerate the taint can run there. Why It Matters Your control plane runs critical components: etcd - The cluster’s brain (all state lives here) kube-apiserver - The API everything talks to kube-controller-manager - Manages controllers kube-scheduler - Decides where pods run If a misbehaving application pod consumes all CPU or memory on the control plane, these components starve and your entire cluster becomes unresponsive. ...

December 28, 2025 · 2 min · Will
etcd Backup

Backing Up etcd to MinIO with a Kubernetes CronJob

etcd is the heart of a Kubernetes cluster - it stores all cluster state, including deployments, secrets, configmaps, and PVC definitions. Losing etcd means losing your entire cluster configuration. Yet many homelab setups neglect etcd backups until it's too late. This post walks through setting up automated etcd backups using a Kubernetes CronJob that uploads snapshots to MinIO.

The Challenge

etcd runs as a static pod on the control plane node, which makes backing it up trickier than a regular application: ...
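The core of such a CronJob is an etcdctl snapshot followed by an upload. A sketch - the cert paths are kubeadm's defaults, and the `minio` alias and bucket are placeholders:

```shell
# Sketch of the backup step the CronJob would run: snapshot etcd over
# mTLS, then copy the snapshot to MinIO with the mc client.
cat > /tmp/etcd-backup.sh <<'EOF'
#!/bin/sh
set -eu
SNAP="/backup/etcd-$(date +%Y%m%d-%H%M%S).db"
ETCDCTL_API=3 etcdctl snapshot save "$SNAP" \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
mc cp "$SNAP" minio/etcd-backups/
EOF
cat /tmp/etcd-backup.sh
```

Because etcd is a static pod, the CronJob pod typically needs to run on the control plane node (tolerating its taint) with the pki and data paths hostPath-mounted.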

December 28, 2025 · 3 min · Will