If you’re running a homelab with multiple network segments, there’s a good chance you have at least one machine connected to more than one network. Maybe your workstation has a wired connection to your DMZ and wireless to your trusted WLAN. Convenient? Yes. A potential security hole? Also yes.

The Problem

My workstation sits on two networks: wireless connected to my home WLAN (192.168.3.0/24) and wired into my DMZ (192.168.4.0/24). The DMZ is intentionally isolated—it’s where I run services exposed to the internet. The WLAN is where everything else lives: personal devices, management interfaces, the stuff I actually care about protecting.
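
A quick way to see which interfaces sit on which networks:

# Show each interface and its addresses at a glance
ip -br addr show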

The issue? Linux is happy to forward packets between interfaces if you ask nicely. And if you’re running Docker or Kubernetes, you’re probably already asking—container networking typically enables IP forwarding system-wide.

A quick check confirmed my suspicion:

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

With forwarding enabled, my workstation could theoretically route traffic from the DMZ straight into my WLAN. Not ideal.
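
To make that concrete: a compromised DMZ host wouldn’t need anything fancy, just a static route pointing at my workstation’s DMZ address (192.168.4.10 below is a stand-in), and the workstation would happily forward its packets into the WLAN.

# From a compromised DMZ host: route WLAN-bound traffic through the dual-homed workstation
# (192.168.4.10 is a placeholder for the workstation's DMZ address)
sudo ip route add 192.168.3.0/24 via 192.168.4.10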

The Solution

Disabling IP forwarding entirely would break my container networking, so that’s out. Instead, I used firewalld to create explicit boundaries between interfaces.

Step 1: Separate the Zones

By default, both interfaces were sitting in the same FedoraWorkstation zone, with the same trust level and the same rules. The first step is to move the DMZ interface into its own zone:

sudo firewall-cmd --zone=dmz --change-interface=enp195s0 --permanent

The dmz zone is more restrictive than FedoraWorkstation. By default it only allows SSH, which is exactly the posture I want for that interface.
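
You can see the zone’s defaults for yourself:

# Show everything the dmz zone currently allows
firewall-cmd --zone=dmz --list-all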

Step 2: Block Forwarding Between Interfaces

Zone assignment controls what can talk to your machine. But we also need to prevent the machine from acting as a router between networks. That requires explicit FORWARD chain rules:

# Block DMZ -> WLAN
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i enp195s0 -o wlp194s0 -j DROP

# Block WLAN -> DMZ
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i wlp194s0 -o enp195s0 -j DROP

sudo firewall-cmd --reload

These rules drop any packet attempting to traverse from one interface to the other, regardless of what the routing table says.
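
If you want a second opinion outside of firewalld, direct rules are installed through iptables, so you can grep for them there (the exact chain they land in varies by firewalld version and backend):

# Direct rules are applied via iptables; confirm both DROP rules are present
sudo iptables -S | grep -E -- '-i (enp195s0|wlp194s0) -o'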

Verification

Confirm the zone assignment:

$ firewall-cmd --get-active-zones
dmz
  interfaces: enp195s0
FedoraWorkstation (default)
  interfaces: wlp194s0
docker
  interfaces: br-3ecdf874c444 br-4e5ee54a9794 docker0 br-ce4838725262

And the forward rules:

$ firewall-cmd --direct --get-all-rules
ipv4 filter FORWARD 0 -i enp195s0 -o wlp194s0 -j DROP
ipv4 filter FORWARD 0 -i wlp194s0 -o enp195s0 -j DROP

When You’re Also Running Kubernetes

Here’s where it gets interesting. My workstation is also a node in my Kubernetes cluster. The cluster’s control plane lives on my LAN (192.168.2.0/24), which reaches the DMZ through my OPNsense firewall.

After implementing the zone isolation above, I discovered my K8s node was completely broken—kubectl exec failed, pods couldn’t communicate across nodes, and Linkerd proxies couldn’t reach the control plane. The dmz zone’s restrictive defaults were doing their job too well.

Opening Ports for Kubernetes

The dmz zone needs specific ports for cluster communication:

# Kubelet API (control plane -> node)
sudo firewall-cmd --zone=dmz --add-port=10250/tcp --permanent

# VXLAN overlay networking (Calico/Flannel)
sudo firewall-cmd --zone=dmz --add-port=4789/udp --permanent
sudo firewall-cmd --zone=dmz --add-port=8472/udp --permanent

# Linkerd control plane
sudo firewall-cmd --zone=dmz --add-port=8080/tcp --permanent
sudo firewall-cmd --zone=dmz --add-port=8086/tcp --permanent
sudo firewall-cmd --zone=dmz --add-port=8090/tcp --permanent

# Kubernetes API server (node -> control plane)
sudo firewall-cmd --zone=dmz --add-port=6443/tcp --permanent

# Calico BGP and Typha
sudo firewall-cmd --zone=dmz --add-port=179/tcp --permanent
sudo firewall-cmd --zone=dmz --add-port=5473/tcp --permanent

# NodePort range
sudo firewall-cmd --zone=dmz --add-port=30000-32767/tcp --permanent
sudo firewall-cmd --zone=dmz --add-port=30000-32767/udp --permanent

sudo firewall-cmd --reload
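
Worth double-checking that the zone picked them all up:

# Confirm the cluster ports are now open in the dmz zone
firewall-cmd --zone=dmz --list-ports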

Trusting the CNI Interfaces

Here’s the gotcha that cost me hours of debugging: even with the ports open, pod-to-pod traffic was still failing with “Packet filtered”. The issue? Calico creates a veth pair for each pod, and the host-side ends (named cali*) weren’t in any firewalld zone. When traffic traversed these interfaces, firewalld’s default zone rules blocked it.

The fix is to trust all Calico interfaces using a wildcard, plus enable forwarding in the dmz zone:

# Trust all Calico pod interfaces (cali+ matches cali4c3e44de96c, etc.)
sudo firewall-cmd --zone=trusted --add-interface=cali+ --permanent

# Trust the VXLAN tunnel interface
sudo firewall-cmd --zone=trusted --add-interface=vxlan.calico --permanent

# Allow forwarding in the dmz zone (required for cross-network pod traffic)
sudo firewall-cmd --zone=dmz --add-forward --permanent

sudo firewall-cmd --reload

The cali+ wildcard is critical—Calico creates a new veth interface for every pod, and they’re all named with a cali prefix followed by a hash. Without this, new pods won’t be able to communicate.
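
You can see exactly what the wildcard has to cover by listing the Calico-managed interfaces on the node:

# List the host-side Calico veth interfaces (one per pod) that cali+ must match
ip -br link show | grep '^cali'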

Verify it’s applied:

$ firewall-cmd --get-active-zones
dmz
  interfaces: enp195s0
FedoraWorkstation (default)
  interfaces: wlp194s0
trusted
  interfaces: vxlan.calico cali+
docker
  interfaces: br-3ecdf874c444 br-4e5ee54a9794 docker0 br-ce4838725262
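
As a final smoke test, I like to spin up a throwaway pod and ping a pod on another node (the target IP below is a placeholder; grab a real one from kubectl get pods -o wide):

# Disposable busybox pod pinging a pod on another node (10.244.1.23 is a placeholder pod IP)
kubectl run nettest --rm -it --restart=Never --image=busybox -- ping -c 3 10.244.1.23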

Don’t Forget Your Network Firewall

If your DMZ and LAN are separated by a firewall (like OPNsense), you’ll also need rules there. I created aliases for my K8s nodes and ports, then added bidirectional rules:

Direction    Protocol   Source      Destination   Ports
DMZ → LAN    TCP        DMZ node    K8s nodes     6443, 10250, 8080, 8086, 8090
DMZ → LAN    UDP        DMZ node    K8s nodes     4789, 8472
LAN → DMZ    TCP        K8s nodes   DMZ node      6443, 10250, 8080, 8086, 8090
LAN → DMZ    UDP        K8s nodes   DMZ node      4789, 8472

Why This Matters

A compromised service in your DMZ shouldn’t have a free path into your trusted network. Defense in depth means assuming each layer might fail and having the next one ready. Even if something gets past your perimeter firewall and owns a DMZ service, it now has to deal with the fact that your dual-homed workstation won’t help it pivot.

The Kubernetes exceptions are surgical—we’re allowing specific ports for cluster operation, not opening the floodgates. The FORWARD rules still prevent the machine from routing arbitrary traffic between networks.

This isn’t paranoia—it’s just good network hygiene.