Migrating from UFW to Proxmox VE Firewall: Avoiding Common Pitfalls
I migrated my homelab from per-container UFW to the built-in Proxmox VE firewall. It took time and patience. I hit broken services, confusing logs and one maddening Vaultwarden outage. This guide shows what I saw, where the problems cropped up, how I diagnosed them, and how I fixed them step by step.
What you see
Symptoms are blunt and repeatable. Services stop responding. SSH times out intermittently. Apps like Vaultwarden fail to start after a firewall change. Typical evidence in logs looks like this (example lines you might see):
- container syslog: “UFW BLOCK IN=eth0 OUT= MAC=… SRC=1.2.3.4 DST=5.6.7.8 PROTO=TCP SPT=12345 DPT=80”
- host journal: “pve-firewall[1234]: ERROR: failed to apply rules: chain ‘ufw-user-input’ not found”
- Vaultwarden startup: “Error: bind EACCES 0.0.0.0:80”
Diagnostic commands I used, with what I expected versus what I actually saw (a scripted version of these checks follows the list):
- On the container: ufw status verbose
  - expected: inactive
  - actual: active, with ufw-user-* chains listed
- On the Proxmox host: systemctl status pve-firewall
  - expected: active (running)
  - actual: active, but journalctl -u pve-firewall -b showed rule application errors
- Check kernel firewall state: iptables -L -n
  - expected: only Proxmox-created chains and the default policy
  - actual: extra ufw chains still present, mixing iptables chains from containers and host
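If you want to run those three checks in one pass, here is a minimal sketch, assuming a container with VMID 101 (a placeholder, substitute your own) and root access on the Proxmox host:

# Check UFW state inside the suspect container (VMID 101 is a placeholder)
pct exec 101 -- ufw status verbose
# Check the Proxmox firewall service on the host
systemctl --no-pager status pve-firewall
# Look for leftover UFW chains in the host ruleset
iptables -S | grep -i '^-N ufw' && echo "WARNING: ufw chains present on host"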
Those mixed chains are the first sign that two firewall systems are stepping on each other. In my case, UFW inside containers left iptables chains that the Proxmox firewall tried to manage or reference. That created error messages and service interruptions.
Where it happens
The problems show up in a few places. Know them so you can check the right logs.
Container settings
- UFW runs inside each LXC or VM. If it is active, it modifies iptables rules inside that namespace. That can leave chains visible to the host or confuse rule translation.
- Check inside a container: sudo ufw status verbose, sudo systemctl status ufw, and look at /var/log/ufw.log for blocked packets.
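To find every container that still has UFW active, a loop over all containers from the host helps. A sketch, assuming all guests are LXC containers and are running (pct exec only works on running containers):

# Report UFW state for each container listed by pct
for id in $(pct list | awk 'NR>1 {print $1}'); do
  echo "=== CT $id ==="
  pct exec "$id" -- sh -c 'command -v ufw >/dev/null && ufw status || echo "ufw not installed"'
done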
Network interfaces
- Proxmox manages bridging and firewall rules at the host level. If the container firewall touched bridged interfaces, packets may be dropped before they hit a service.
- Use ip link and ip addr to confirm interfaces are as expected. Use ss -ltnp inside a container to confirm services are listening on expected addresses.
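To check both sides at once, I look at the bridge on the host and the listener in the container. A sketch, with the bridge name vmbr0 and VMID 101 as placeholders:

# Host side: confirm the bridge exists and which ports are attached to it
ip -br link show vmbr0
bridge link show | grep vmbr0
# Container side: confirm the service is listening on the expected address and port
pct exec 101 -- ss -ltnp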
Proxmox VE configurations
- The Proxmox firewall runs centrally and can push rules to nodes, VMs and containers. The GUI and /etc/pve/firewall control that.
- Check: systemctl status pve-firewall and journalctl -u pve-firewall -b to find rule application errors. Look for messages about unknown chains, failed iptables commands or syntax errors when loading rules.
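The files behind the GUI are plain text, which makes them easy to inspect and diff. On a standard Proxmox VE setup they live under the cluster filesystem; a quick way to see what is defined and whether any of it fails to load:

# Datacenter and per-guest firewall configs
ls -l /etc/pve/firewall/              # cluster.fw plus one <vmid>.fw per guest
ls -l /etc/pve/nodes/*/host.fw        # per-node host rules, if any exist
# Filter the journal for rule application problems since boot
journalctl -u pve-firewall -b | grep -iE 'error|fail'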
Spot the pattern: log errors referencing ufw chains, failures when applying rules to a VM, and services that were fine before the move. That points to a conflict between UFW and the Proxmox VE firewall.
Find the cause
I traced the root cause with a tight loop: reproduce, observe, test change, repeat. These were the main culprits.
Conflicting firewall rules
- UFW creates its own chains like ufw-user-input and ufw-user-forward. The Proxmox firewall expects to manage chains in a known order. If UFW is still active, the host or pve-firewall scripts try to call or re-order chains that no longer make sense. That throws errors and leaves packets blocked.
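A quick way to see the two systems side by side is to list chain definitions and separate them by prefix: UFW's chains start with ufw-, while the chains the Proxmox VE firewall creates use the PVEFW- prefix. Seeing both at once is the conflict in a nutshell. A sketch, run on the host:

# Chains created by UFW (should be empty after the migration)
iptables -S | grep '^-N ufw'
# Chains created by the Proxmox VE firewall
iptables -S | grep '^-N PVEFW'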
Missing dependencies
- Some container images do not ship ufw correctly configured for namespaced environments. A half-configured UFW can leave dangling rules that leak into the host ruleset.
Incorrect service configurations
- I found Vaultwarden and other apps bound to 0.0.0.0:80 but blocked by a global deny rule that UFW had added. The Proxmox firewall then tried to load its policy, hit the UFW chains, and failed to apply its rules, which left the service unreachable.
Useful diagnostic commands I ran:
- ufw status verbose (inside container)
- systemctl status ufw (inside container)
- iptables -S (on host and inside container)
- journalctl -u pve-firewall -b
- ss -ltnp (inside container to confirm service listening)
Record expected vs actual for each command. That narrows the fault to either leftover UFW rules or a misapplied Proxmox rule.
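One way to keep that record honest is to capture the raw outputs to timestamped files rather than rely on memory. A minimal sketch (the output directory name is arbitrary):

# Snapshot firewall state on the host so later runs can be diffed
out=/root/fw-debug/$(date +%Y%m%d-%H%M%S)
mkdir -p "$out"
iptables -S                   > "$out/iptables.txt"
nft list ruleset              > "$out/nftables.txt" 2>/dev/null
journalctl -u pve-firewall -b > "$out/pve-firewall-journal.txt"
echo "Saved snapshot to $out"
# Compare two snapshots later with: diff -u <older>/iptables.txt <newer>/iptables.txt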
Fix
I followed a strict, reversible process. Do one change at a time. Test after each change.
Remove UFW completely
- Inside each container: sudo ufw disable; sudo apt purge --auto-remove ufw
- Confirm removal: sudo systemctl status ufw should show inactive or not found; sudo ufw status should error or report inactive.
- Clean up leftover iptables chains: check with iptables -S and nft list ruleset. If you still see ufw chains, rebooting the container or recreating its network namespace clears them.
I removed UFW from all containers before I enabled the Proxmox firewall. That prevented cross-system conflicts. If you prefer less disruption, disable UFW first, let it be inactive for a day while monitoring, then purge.
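The removal can also be scripted per container from the host. A sketch only, assuming VMID 101 as a placeholder and Debian/Ubuntu-based containers where ufw is an apt package:

# Disable, purge and verify inside one container
pct exec 101 -- ufw disable
pct exec 101 -- apt-get purge --auto-remove -y ufw
pct exec 101 -- sh -c 'iptables -S | grep "^-N ufw" || echo "no ufw chains left"'
# If chains linger, restarting the container clears its network namespace
pct stop 101 && pct start 101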
Apply Proxmox firewall rules
- Enable the Proxmox VE firewall at node or datacenter level in the GUI, or via the CLI if you use it. Start with permissive rules, then tighten.
- Add explicit allow rules for management access (SSH, Proxmox GUI), and for any container services you expect to be reachable.
- Use short, testable rules. For example, allow TCP 80 and 443 to the VM IP, then test the service. Do not enable a global deny until you have tested the allow rules.
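For the per-guest allow rules, the GUI writes a plain text file you can also edit directly. A hedged example of what such a file can look like, with the VMID, ports and source network as placeholders (check the syntax against the pve-firewall documentation before relying on it):

# /etc/pve/firewall/101.fw -- per-guest rules for a hypothetical web container
[OPTIONS]
enable: 1

[RULES]
IN ACCEPT -p tcp -dport 22 -source 192.168.1.0/24 # management SSH from the LAN
IN ACCEPT -p tcp -dport 80 # HTTP
IN ACCEPT -p tcp -dport 443 # HTTPS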
Test configurations incrementally
- After each rule change: curl -I http://<container-ip>:80 or curl -I https://<container-ip>:443
- Check service status: systemctl status <service> inside the container.
- Inspect host firewall logs: journalctl -u pve-firewall -f while you apply a rule. Expected behaviour is clean rule application lines and no ERROR messages.
If something breaks, revert the last rule or disable the Proxmox firewall while you investigate. Keep backups of /etc/pve/firewall or export the GUI rule set before bulk changes.
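Backing up before a bulk change is a one-liner, since the rule files are plain text. A sketch, with the backup path being an arbitrary choice:

# Copy the firewall configs out of the cluster filesystem before editing
cp -a /etc/pve/firewall /root/firewall-backup-$(date +%F)
# Roll back a single guest by restoring its file, e.g. for VMID 101 (placeholder):
# cp /root/firewall-backup-<date>/101.fw /etc/pve/firewall/101.fw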
Check it’s fixed
Verify service functionality
- Confirm services respond from the network: use curl, browser, or an external machine. For SSH, try ssh -vvv to see where the connection stalls.
- On the container, run ss -ltnp to confirm the service is listening on the expected address and port.
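A small verification pass that covers both angles; the IP, port, VMID and user below are placeholders:

# From another machine on the LAN: does the service answer?
curl -sS -o /dev/null -w '%{http_code}\n' http://192.168.1.50:80/
# On the container: is the right process listening on the right socket?
pct exec 101 -- ss -ltnp
# For SSH problems, watch where the handshake stalls
ssh -vvv user@192.168.1.50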
Monitor logs for errors
- Watch the Proxmox firewall journal: journalctl -u pve-firewall -f. Expected lines show rules applied without ERROR. Any remaining references to ufw or unknown chains mean something was missed.
- Check container logs: /var/log/syslog, /var/log/ufw.log if present, and the service logs for application-specific errors.
Conduct security assessments
- Run an external port scan from your LAN: nmap -Pn -p 1-65535 <container-ip>. Expected result: only the allowed ports show as open (a pass/fail version of this check follows the list).
- Review firewall rules in the Proxmox GUI. I kept an explicit list of the inbound and outbound rules I applied, which made it easy to spot and revert a mistyped rule.
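To turn the scan into a pass/fail check, compare the reported open ports against the list you expect. A sketch, with the target IP and the expected port list as placeholders:

# Scan all TCP ports and extract the ones reported open
nmap -Pn -p- -oG - 192.168.1.50 | grep -oE '[0-9]+/open' | cut -d/ -f1 | sort -n > /tmp/open-ports.txt
# Compare against what the firewall is supposed to allow
printf '22\n80\n443\n' | sort -n > /tmp/expected-ports.txt
diff -u /tmp/expected-ports.txt /tmp/open-ports.txt && echo "open ports match the allow list"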
Concrete takeaways from my migration
- Remove UFW from containers before enforcing the Proxmox VE firewall.
- Apply Proxmox firewall rules in small steps and test after each change.
- Capture exact command outputs and log lines when you see errors. They point to the root cause.
- Keep a rollback plan. An easy revert saved me hours during the Vaultwarden outage.
I migrated the whole homelab with that process. It was slower than flipping a switch, but it left a stable Proxmox configuration and better homelab security.




