When I needed to add a node from my DMZ to my Kubernetes cluster on the LAN, I discovered that OPNsense has a comprehensive REST API for managing firewall rules programmatically. No clicking through the UI - just curl commands that create rules which are properly tracked in the configuration and included in backups.
The Problem
My Kubernetes cluster lives on my LAN (192.168.2.0/24), but I wanted to add a machine from my DMZ (192.168.4.0/24). By default, DMZ traffic can’t reach the LAN - that’s the whole point of a DMZ. I needed to punch specific holes for Kubernetes traffic while keeping everything else blocked.
Network Topology
                  ┌─────────────────┐
                  │    OPNsense     │
                  │  192.168.2.1    │  (LAN)
                  │  192.168.4.1    │  (DMZ)
                  └────────┬────────┘
                           │
       ┌───────────────────┼───────────────────┐
       │                   │                   │
┌──────┴──────┐     ┌──────┴──────┐     ┌──────┴──────┐
│     LAN     │     │     DMZ     │     │     WAN     │
│ 192.168.2.x │     │ 192.168.4.x │     │             │
└──────┬──────┘     └──────┬──────┘     └─────────────┘
       │                   │
  K8s Cluster            minis
(master + workers)     (new node)
Kubernetes Port Requirements
Before touching the firewall, I needed to understand what ports Kubernetes actually needs:
| Port | Protocol | Direction | Purpose |
|---|---|---|---|
| 6443 | TCP | DMZ → LAN | Kubernetes API server |
| 10250 | TCP | Both ways | Kubelet API (for logs, exec, metrics) |
| 2379-2380 | TCP | DMZ → LAN | etcd (if node needs direct access) |
| 8472 | UDP | Both ways | Flannel VXLAN overlay |
| 4789 | UDP | Both ways | Calico VXLAN overlay |
The bidirectional requirement for 10250 and the overlay ports is important - the control plane needs to reach the kubelet on the DMZ node, not just the other way around.
Setting Up OPNsense API Access
First, I created an API key in the OPNsense UI:
- System → Access → Users
- Edit your user (or create a dedicated API user)
- Scroll to API keys section
- Click + to generate a new key
- Save the key and secret
The API uses HTTP Basic Auth with the key as username and secret as password.
Testing the API
OPNsense runs its web UI on a custom port in my setup (8443), so the API is there too:
curl -s -k -u 'YOUR_KEY:YOUR_SECRET' \
https://firewall.minoko.life:8443/api/core/firmware/status | jq '.product.product_version'
If you get a version back, the API is working.
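The rest of the commands in this post assume the key and secret are exported as environment variables (the variable names are just my convention):
export API_KEY='YOUR_KEY'
export API_SECRET='YOUR_SECRET'
# Same smoke test, using the variables
curl -s -k -u "$API_KEY:$API_SECRET" \
  https://firewall.minoko.life:8443/api/core/firmware/status | jq '.product.product_version'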
Creating Aliases
Aliases make firewall rules readable and maintainable. Instead of hardcoding IPs and ports, you reference named groups.
K8s Nodes Alias
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
-H "Content-Type: application/json" \
-d '{
"alias": {
"enabled": "1",
"name": "K8s_Nodes",
"type": "host",
"content": "192.168.2.102\n192.168.2.103\n192.168.2.104\n192.168.2.109",
"description": "Kubernetes cluster nodes on LAN"
}
}' \
'https://firewall.minoko.life:8443/api/firewall/alias/addItem'
TCP Ports Alias
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
-H "Content-Type: application/json" \
-d '{
"alias": {
"enabled": "1",
"name": "K8s_Ports_TCP",
"type": "port",
"content": "6443\n10250\n2379\n2380",
"description": "Kubernetes TCP ports (API, kubelet, etcd)"
}
}' \
'https://firewall.minoko.life:8443/api/firewall/alias/addItem'
UDP Ports Alias
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
-H "Content-Type: application/json" \
-d '{
"alias": {
"enabled": "1",
"name": "K8s_Ports_UDP",
"type": "port",
"content": "8472\n4789",
"description": "Kubernetes UDP ports (VXLAN overlay)"
}
}' \
'https://firewall.minoko.life:8443/api/firewall/alias/addItem'
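The rules in the next section also reference a DMZ_Server host alias for the new machine. If you don't have one yet, it can be created the same way - a sketch assuming the node's DMZ address is 192.168.4.50 (the address minis ends up with later in this post):
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "alias": {
      "enabled": "1",
      "name": "DMZ_Server",
      "type": "host",
      "content": "192.168.4.50",
      "description": "DMZ machine joining the Kubernetes cluster"
    }
  }' \
  'https://firewall.minoko.life:8443/api/firewall/alias/addItem'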
Creating Firewall Rules
OPNsense has two firewall systems - the legacy one and the newer MVC-based automation rules. The API adds to the automation rules, which is what we want.
Rule 1: DMZ to LAN - TCP (API, Kubelet, etcd)
This allows the DMZ node to reach the Kubernetes API server, other nodes’ kubelets, and etcd:
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
-H "Content-Type: application/json" \
-d '{
"rule": {
"enabled": "1",
"action": "pass",
"interface": "opt2",
"direction": "in",
"ipprotocol": "inet",
"protocol": "TCP",
"source_net": "DMZ_Server",
"destination_net": "K8s_Nodes",
"destination_port": "K8s_Ports_TCP",
"description": "Kubernetes: DMZ to LAN TCP (API, kubelet, etcd)",
"log": "1"
}
}' \
'https://firewall.minoko.life:8443/api/firewall/filter/addRule'
Why this rule matters:
- 6443: The node needs to register with and receive instructions from the API server
- 10250: Nodes communicate with each other’s kubelets for pod networking
- 2379-2380: Direct etcd access (may not be needed depending on your setup)
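As a side note, the same call can be driven from a JSON file, which is easier to keep in git and lets you capture the new rule's UUID for later setRule/delRule calls. A sketch, assuming the response contains a uuid field on success (worth verifying on your OPNsense version):
# Keep the rule body in a file so it can be version-controlled and reused
cat > rule-dmz-to-lan-tcp.json <<'EOF'
{"rule": {"enabled": "1", "action": "pass", "interface": "opt2", "direction": "in",
 "ipprotocol": "inet", "protocol": "TCP", "source_net": "DMZ_Server",
 "destination_net": "K8s_Nodes", "destination_port": "K8s_Ports_TCP",
 "description": "Kubernetes: DMZ to LAN TCP (API, kubelet, etcd)", "log": "1"}}
EOF
# Capture the UUID from the response (assumed shape: {"result":"saved","uuid":"..."})
RULE_UUID=$(curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
  -H "Content-Type: application/json" \
  -d @rule-dmz-to-lan-tcp.json \
  'https://firewall.minoko.life:8443/api/firewall/filter/addRule' | jq -r '.uuid')
echo "$RULE_UUID"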
Rule 2: DMZ to LAN - UDP (VXLAN Overlay)
Pod-to-pod networking across nodes uses VXLAN tunnels:
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
-H "Content-Type: application/json" \
-d '{
"rule": {
"enabled": "1",
"action": "pass",
"interface": "opt2",
"direction": "in",
"ipprotocol": "inet",
"protocol": "UDP",
"source_net": "DMZ_Server",
"destination_net": "K8s_Nodes",
"destination_port": "K8s_Ports_UDP",
"description": "Kubernetes: DMZ to LAN UDP (VXLAN overlay)",
"log": "1"
}
}' \
'https://firewall.minoko.life:8443/api/firewall/filter/addRule'
Why this rule matters:
- 8472: Flannel’s default VXLAN port
- 4789: Calico’s VXLAN port (standard VXLAN port)
Without this, pods on the DMZ node can’t communicate with pods on LAN nodes.
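In practice only one of those two ports is needed, depending on which CNI the cluster runs. A rough way to check from any machine with kubectl access (pod and namespace names vary between Flannel and Calico versions):
# Look for the CNI daemonset pods to see whether the cluster runs Flannel or Calico
kubectl get pods -A -o wide | grep -Ei 'flannel|calico'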
Rule 3: LAN to DMZ - Kubelet API
This is the rule people often forget. The control plane needs to reach the kubelet on your new node:
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
-H "Content-Type: application/json" \
-d '{
"rule": {
"enabled": "1",
"action": "pass",
"interface": "lan",
"direction": "in",
"ipprotocol": "inet",
"protocol": "TCP",
"source_net": "K8s_Nodes",
"destination_net": "DMZ_Server",
"destination_port": "10250",
"description": "Kubernetes: LAN to DMZ kubelet API",
"log": "1"
}
}' \
'https://firewall.minoko.life:8443/api/firewall/filter/addRule'
Why this rule matters:
- kubectl logs and kubectl exec go through the kubelet
- Metrics collection (metrics-server) queries the kubelet
- Without this, your node joins but kubectl exec fails with “unable to upgrade connection”
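Once the rules are applied (next section), this path can be sanity-checked even before the kubelet is running on the new node: with the firewall open, a TCP probe to a closed port fails fast with “connection refused”, while a blocked path times out or reports no route to host. 192.168.4.50 is the DMZ node's address in my setup:
# From a LAN node (e.g. the control plane)
nc -zv -w 3 192.168.4.50 10250
# "Connection refused"         -> firewall path is open, nothing listening yet
# Timeout / "No route to host" -> still blocked somewhere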
Applying the Changes
Rules aren’t active until you apply them:
# Apply filter rules
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
'https://firewall.minoko.life:8443/api/firewall/filter/apply'
# Reconfigure aliases
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
'https://firewall.minoko.life:8443/api/firewall/alias/reconfigure'
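To confirm the rules actually landed as automation rules, they can be listed back with the filter controller's search endpoint (same pattern as the alias and HAProxy search calls used elsewhere in this post):
curl -s -k -u "$API_KEY:$API_SECRET" \
  'https://firewall.minoko.life:8443/api/firewall/filter/searchRule' | \
  jq '.rows[] | {description, enabled}'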
Verifying Connectivity
From the DMZ node, test that you can reach the Kubernetes API:
curl -k https://192.168.2.103:6443/healthz
# Should return: ok
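The kubelet path toward the LAN nodes is worth the same treatment (node IPs taken from the K8s_Nodes alias above):
# From the DMZ node: TCP reachability of every LAN node's kubelet port
for host in 192.168.2.102 192.168.2.103 192.168.2.104 192.168.2.109; do
  nc -zv -w 3 "$host" 10250
done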
Finding Interface Names
The interface names (opt2, lan, etc.) aren’t obvious. Query them:
curl -s -k -u "$API_KEY:$API_SECRET" \
'https://firewall.minoko.life:8443/api/diagnostics/interface/getInterfaceNames' | jq .
Returns something like:
{
"igc1": "WAN",
"igc0": "LAN",
"igc3": "DMZ",
...
}
The API rules use the internal interface keys (lan, opt2), not the physical device names (igc0, igc3) - in my setup the DMZ interface is opt2.
Benefits Over UI Configuration
- Reproducible: Script your firewall setup for disaster recovery
- Documented: The curl commands serve as documentation
- Auditable: Track changes in git alongside your infrastructure code
- Fast: Adding multiple rules takes seconds, not minutes of clicking
The Result
After applying these rules, my DMZ machine could successfully join the Kubernetes cluster:
kubeadm join 192.168.2.103:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx
The node came up ready, pods scheduled on it could communicate with pods on LAN nodes, and kubectl exec worked correctly.
Security Considerations
This setup pokes holes in the DMZ-LAN boundary specifically for Kubernetes. Consider:
- These rules only allow traffic from/to specific IPs (not the entire DMZ)
- Logging is enabled ("log": "1") for audit trails
- The DMZ node is still isolated from other LAN services
- Consider using network policies within Kubernetes for additional pod-level isolation
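On that last point, a minimal default-deny ingress policy looks like this (the namespace name is illustrative, and keep in mind that plain Flannel does not enforce NetworkPolicy while Calico does):
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: dmz-workloads   # illustrative namespace for DMZ-scheduled pods
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
  - Ingress                  # no ingress rules listed, so all ingress is denied
EOF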
For a homelab, this is a reasonable trade-off. For production, you’d want to evaluate whether the DMZ node should really be in the cluster or if a separate cluster in the DMZ makes more sense.
Part 2: Preparing the Node
With the firewall configured, the DMZ machine needs Kubernetes prerequisites installed.
Kernel Modules and Sysctl
# Load required modules
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
echo overlay | sudo tee -a /etc/modules-load.d/k8s.conf
# Kubernetes networking requirements
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
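A quick check that the modules and sysctls actually took effect:
lsmod | grep -E 'br_netfilter|overlay'
# Both values should report 1
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward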
Disable Swap
Kubernetes requires swap to be disabled:
sudo swapoff -a
sudo systemctl mask systemd-zram-setup@zram0.service  # Fedora uses zram
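Confirm nothing is still swapping:
swapon --show   # no output means no active swap
free -h | grep -i swap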
Configure containerd
If containerd is already installed (common on Fedora workstations with Docker), configure it for Kubernetes:
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
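Confirm the cgroup driver change stuck and containerd came back up:
grep SystemdCgroup /etc/containerd/config.toml   # should show SystemdCgroup = true
systemctl is-active containerd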
Install kubeadm and kubelet
# Match your cluster version
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.35/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.35/rpm/repodata/repomd.xml.key
EOF
sudo dnf install -y kubelet kubeadm
sudo systemctl enable kubelet
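Before joining, check that the installed versions actually match the cluster (v1.35 in my case):
kubeadm version -o short
kubelet --version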
Part 3: The DNS Gotcha
When I ran kubeadm join, it failed with:
dial tcp: lookup k8s-master01 on 127.0.0.53:53: server misbehaving
The DMZ machine uses different DNS servers than the LAN, so it can’t resolve LAN hostnames. Two solutions:
Option 1: Use IP addresses
sudo kubeadm join 192.168.2.103:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx
Option 2: Add /etc/hosts entries
echo "192.168.2.103 k8s-master01" | sudo tee -a /etc/hosts
I went with Option 2 since Kubernetes will continue to reference the control plane by hostname.
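Either way, confirm the name resolves the way kubeadm will see it:
getent hosts k8s-master01   # should print 192.168.2.103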
Part 4: Joining the Cluster
On the control plane, generate a join token:
sudo kubeadm token create --print-join-command
On the DMZ node:
sudo kubeadm join k8s-master01:6443 --token j2gwho.fpjgl8m3omik9nfa \
--discovery-token-ca-cert-hash sha256:83a1a05847e7142b187347c59c1620cf5e4b1fa0cc0d268950993c3181d7fa7d
After about 30 seconds:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Adding the Worker Label
New nodes don’t automatically get the worker role:
kubectl label node minis-enp195s0 node-role.kubernetes.io/worker=
Verify
kubectl get nodes -o wide
NAME             STATUS   ROLES           AGE    VERSION   INTERNAL-IP     OS-IMAGE
k8s-master01     Ready    control-plane   127d   v1.35.0   192.168.2.103   CentOS Stream 9
k8s-worker01     Ready    worker          127d   v1.35.0   192.168.2.104   CentOS Stream 9
k8s-worker02     Ready    worker          127d   v1.35.0   192.168.2.102   CentOS Stream 9
minis-enp195s0   Ready    worker          5m     v1.35.0   192.168.4.50    Fedora Linux 42
polycephala      Ready    worker          48d    v1.35.0   192.168.2.109   Rocky Linux 10.1
The DMZ node shows its 192.168.4.x address, confirming it’s on a different network but fully participating in the cluster.
Part 5: The Local Firewall Gotcha
After the node joined, I tried to view pod logs:
kubectl logs -n minoko-life-blog -l app.kubernetes.io/name=minoko-life-blog
And got:
Error from server: Get "https://192.168.4.50:10250/containerLogs/...": dial tcp 192.168.4.50:10250: connect: no route to host
The OPNsense firewall rules were correct, but the local firewall on minis (firewalld) was blocking port 10250. The control plane could reach the DMZ network, but the node itself was rejecting connections.
# On the DMZ node
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=10255/tcp --permanent # Read-only kubelet port
sudo firewall-cmd --reload
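Confirm the ports are present in both the runtime and permanent configuration:
sudo firewall-cmd --list-ports
sudo firewall-cmd --permanent --list-ports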
After this, kubectl logs and kubectl exec worked correctly.
Part 6: Restricting Workloads to DMZ
Since this node is internet-facing, I don’t want random workloads scheduled on it. Only explicitly allowed services should run here.
Add Taint and Label
# Taint prevents pods from scheduling unless they tolerate it
kubectl taint nodes minis-enp195s0 zone=dmz:NoSchedule
# Label for node selection
kubectl label nodes minis-enp195s0 zone=dmz
Pod Spec Requirements
Now pods need both a toleration AND a nodeSelector to run on this node:
spec:
  nodeSelector:
    zone: dmz
  tolerations:
  - key: "zone"
    operator: "Equal"
    value: "dmz"
    effect: "NoSchedule"
This ensures only internet-facing services I explicitly configure will run on the DMZ node.
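The taint and label can be verified from the control plane:
kubectl get node minis-enp195s0 -o jsonpath='{.spec.taints}{"\n"}{.metadata.labels.zone}{"\n"}'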
Part 7: Minimizing Open Ports
After everything was working, I revisited the firewall rules. Worker nodes don’t actually need direct etcd access (ports 2379-2380) - only the control plane talks to etcd directly.
Update the TCP ports alias to remove etcd:
# Get the alias UUID
curl -s -k -u "$API_KEY:$API_SECRET" \
'https://firewall.minoko.life:8443/api/firewall/alias/searchItem' | \
jq '.rows[] | select(.name=="K8s_Ports_TCP") | .uuid'
# Update to only include necessary ports
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
-H "Content-Type: application/json" \
-d '{"alias":{"content":"6443\n10250"}}' \
'https://firewall.minoko.life:8443/api/firewall/alias/setItem/YOUR_UUID_HERE'
# Apply changes
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
'https://firewall.minoko.life:8443/api/firewall/alias/reconfigure'
Final minimal port list:
| Port | Protocol | Purpose |
|---|---|---|
| 6443 | TCP | API server |
| 10250 | TCP | Kubelet API |
| 8472 | UDP | VXLAN overlay |
| 4789 | UDP | VXLAN overlay |
Part 8: HAProxy Backend Update via API
My blog was previously served by Podman on port 80. After migrating to Kubernetes with a NodePort service on 30080, I needed to update HAProxy.
Find the HAProxy Server
curl -s -k -u "$API_KEY:$API_SECRET" \
'https://firewall.minoko.life:8443/api/haproxy/settings/searchServers' | \
jq '.rows[] | {uuid: .uuid, name: .name, address: .address, port: .port}'
{
"uuid": "8236b53a-794d-46ff-b5b0-1cce18b3689c",
"name": "hugo-server",
"address": "192.168.4.50",
"port": "80"
}
Update the Port
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
-H "Content-Type: application/json" \
-d '{"server":{"port":"30080"}}' \
'https://firewall.minoko.life:8443/api/haproxy/settings/setServer/8236b53a-794d-46ff-b5b0-1cce18b3689c'
Apply HAProxy Changes
curl -s -k -u "$API_KEY:$API_SECRET" -X POST \
'https://firewall.minoko.life:8443/api/haproxy/service/reconfigure'
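A final check that the backend now points at the NodePort:
curl -s -k -u "$API_KEY:$API_SECRET" \
  'https://firewall.minoko.life:8443/api/haproxy/settings/searchServers' | \
  jq '.rows[] | select(.name=="hugo-server") | {address, port}'
# port should now be "30080"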
The blog is now served from Kubernetes, with traffic flowing:
Internet → Cloudflare → OPNsense HAProxy (443) → minis:30080 (NodePort) → Pod
Conclusion
Adding a node from a different network segment to Kubernetes requires:
- OPNsense firewall rules for bidirectional Kubernetes traffic (API, kubelet, overlay networking)
- Local firewall configuration on the node itself (firewalld/iptables)
- DNS resolution or /etc/hosts entries for the control plane hostname
- Standard Kubernetes prerequisites (containerd, kubelet, kubeadm, kernel settings)
- Taints and tolerations if you want to restrict which workloads run on the node
- HAProxy backend updates if routing traffic through a reverse proxy
The OPNsense API made all the firewall and HAProxy configuration scriptable and reproducible. The rules are visible in the UI, included in backups, and can be version-controlled alongside the rest of your infrastructure code.