n8n in a homelab: isolation, snapshots, and recovery when the code execution hits the fan
CVE-2025-68613 carries a CVSS score of 9.9. It affects n8n versions from 0.211.0 up to the fixed releases (1.120.4, 1.121.1, and 1.122.0), and it has been actively exploited in the wild. CISA confirmed exploitation and ordered federal agencies to patch. If you are running an unpatched self-hosted instance, the workflow expression evaluator is a direct path to code execution on your host.
The actual threat model for a homelab n8n instance
The vulnerability sits in n8n’s {{ }} expression evaluation system. User-supplied expressions are evaluated server-side in a context that is not sufficiently separated from the underlying Node.js runtime. An authenticated attacker who can create or edit a workflow can escape the expression sandbox and run arbitrary code with the privileges of the n8n process. In homelab deployments that run the container as root, that is the worst-case scenario once the container boundary is in question.
The Docker default bridge network makes this significantly worse. Every container on the default bridge can reach every other container on the same host by IP address. If n8n is compromised and sits on the same default network as your Vaultwarden, your Gitea, or your PostgreSQL instance, lateral movement is a single curl command away. User-defined bridge networks at least isolate their members from containers on other networks (and add DNS resolution by container name); the default bridge offers neither name resolution nor any segmentation. Without explicit firewall rules, n8n can initiate connections to anything on that network.
Docker’s iptables integration inserts its own chains (DOCKER, DOCKER-USER, DOCKER-ISOLATION-STAGE-1, DOCKER-ISOLATION-STAGE-2) into the host’s filter table. Rules you add manually to the INPUT chain often do not intercept inter-container traffic because that traffic is handled by the FORWARD chain and Docker’s own rules. The DOCKER-USER chain is the correct place to insert custom rules that affect Docker-managed traffic, but most homelab setups never touch it. That leaves a gap where you believe a port is blocked but container-to-container traffic flows freely regardless.
Isolating n8n with a dedicated Docker network
Define a named network with internal: true in your docker-compose.yml. This prevents containers on that network from reaching the internet directly. n8n legitimately needs outbound access to call webhooks and external APIs, so you do not want to cut that entirely. The practical approach is two networks: one internal-only for n8n-to-PostgreSQL communication, and one external-facing for n8n’s outbound webhook traffic.
```yaml
networks:
  n8n-internal:
    driver: bridge
    internal: true
  n8n-external:
    driver: bridge
```
Attach n8n to both. Attach PostgreSQL only to n8n-internal. This way, a compromised n8n process cannot reach your PostgreSQL port from outside that internal network, and your PostgreSQL container has no path to the internet at all.
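For containers started outside Compose, the same topology can be built with the `docker network` CLI. This is a sketch: `n8n` and `postgres` are placeholder container names, and the network names match the Compose example above.

```shell
# Internal-only network for the database path, normal bridge for egress.
docker network create --internal n8n-internal
docker network create n8n-external

# Attach n8n to both networks, PostgreSQL only to the internal one.
docker network connect n8n-internal n8n
docker network connect n8n-external n8n
docker network connect n8n-internal postgres

# Detach both containers from the default bridge so no legacy path remains.
docker network disconnect bridge n8n
docker network disconnect bridge postgres
```

With this in place, `postgres` resolves by name from n8n over n8n-internal, but has no route to the internet and no reachability from any other container on the host.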
Add explicit egress restrictions via the DOCKER-USER chain on the host to limit which external IPs n8n can reach. This will not stop a determined attacker who already has code execution, but it raises the cost of exfiltrating credentials and reduces the usefulness of the container as a pivot point.
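A minimal sketch of such an egress allowlist follows. The 172.30.0.0/24 subnet is an assumption standing in for whatever Docker assigned to n8n-external (check with `docker network inspect`), and the rule set is illustrative, not exhaustive.

```shell
# Docker pre-populates DOCKER-USER with a RETURN rule, so appended rules
# would sit below it and never match. Insert (-I) prepends, so the rules
# are added in reverse of the desired match order.
iptables -I DOCKER-USER -s 172.30.0.0/24 -j DROP
iptables -I DOCKER-USER -s 172.30.0.0/24 -p tcp --dport 443 -j ACCEPT
iptables -I DOCKER-USER -s 172.30.0.0/24 -p udp --dport 53 -j ACCEPT
iptables -I DOCKER-USER -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```

Resulting order: established traffic and DNS/HTTPS from n8n's subnet are accepted, everything else from that subnet is dropped, and traffic from other networks falls through to Docker's own rules.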
Removing the inbound attack surface entirely
The cleaner option for public access is a Cloudflare Zero Trust tunnel (cloudflared). Run cloudflared as its own container on the n8n-external network. No port on your host needs to be bound and exposed to the internet. The n8n container never receives a direct inbound connection from outside your host. Combined with Cloudflare Access policies, you can gate the n8n UI behind an identity check before a request ever reaches the application. This does not fix the expression injection vulnerability, but it requires an attacker to be authenticated before they can trigger it, which changes the threat from unauthenticated remote to post-authentication.
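A run of the connector might look like the sketch below; the TUNNEL_TOKEN value is a placeholder you generate in the Zero Trust dashboard, and the network name matches the earlier example.

```shell
# Run cloudflared on the same user-defined network as n8n.
# TUNNEL_TOKEN is a placeholder from the Zero Trust dashboard.
docker run -d --name cloudflared \
  --network n8n-external \
  --restart unless-stopped \
  cloudflare/cloudflared:latest \
  tunnel run --token "$TUNNEL_TOKEN"
```

Point the tunnel's public hostname at http://n8n:5678 in the dashboard; container-name DNS resolves because both containers sit on the same user-defined network.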
Drop --cap-add flags you do not need. Add --read-only to the container where the filesystem allows it, and use --security-opt no-new-privileges:true to prevent privilege escalation inside the container. These are not substitutes for patching, but they reduce what a successful RCE can do.
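Put together, a hardened run might look like this sketch. n8n needs a writable data directory, so the named volume stays read-write while the root filesystem does not; the exact flag set is illustrative, and some community nodes may need capabilities this drops.

```shell
# Illustrative hardened run: immutable root filesystem, writable scratch
# tmpfs, all capabilities dropped, and no privilege escalation via setuid
# binaries. Persistent data stays writable through the named volume.
docker run -d --name n8n \
  --network n8n-external \
  --read-only \
  --tmpfs /tmp \
  -v n8n_data:/home/node/.n8n \
  --cap-drop ALL \
  --security-opt no-new-privileges:true \
  docker.n8n.io/n8nio/n8n
```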
Recovering without losing your workflows or credentials
Patching is not always the first thing that happens. Sometimes you detect the compromise after the fact, and then the question becomes: can you recover cleanly without starting from scratch?
What you actually lose if you nuke the container
With a SQLite-based n8n deployment, all workflow definitions, execution history, and credentials live in a single file: /home/node/.n8n/database.sqlite. If you delete the container and its associated volume without snapshotting first, everything goes with it. Credential storage in n8n is encrypted at rest using the N8N_ENCRYPTION_KEY. If you recreate the instance with a different key, even a database restore will not decrypt your credentials correctly. The key is the only thing standing between you and a pile of unreadable ciphertext.
PostgreSQL-backed deployments are safer in this regard, but only if you are actually snapshotting the volume. The .n8n directory still matters even with PostgreSQL: it holds the encryption key, instance configuration, and source control assets. Backing up just the Postgres volume and ignoring .n8n is a common mistake that leaves you unable to decrypt any credentials on restore.
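If N8N_ENCRYPTION_KEY is not set in the environment, n8n generates one on first start and stores it in a JSON config file inside the .n8n directory. Confirming you actually hold the key is worth doing before any teardown; the path below assumes the official image and a container named n8n.

```shell
# Print the instance's encryption key so you can confirm it is backed up.
# If the key was supplied via the environment, check there instead.
docker exec n8n sh -c 'cat /home/node/.n8n/config' | jq -r '.encryptionKey'
```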
Snapshotting before teardown
Before stopping or removing anything, take the following in order:
- Copy the `.n8n` directory off the volume: `docker cp n8n:/home/node/.n8n /backup/n8n-dot-n8n-$(date +%Y%m%d)`
- If using PostgreSQL, run `pg_dump` from inside the postgres container: `docker exec postgres pg_dump -U n8n n8n > /backup/n8n-pg-$(date +%Y%m%d).sql`
- Copy both to an offsite location before tearing down the compromised container. An rsync push to a separate host or an rclone upload to object storage will do. Do not trust a backup that lives on the same host as the compromised instance.
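The steps above can be rolled into one pre-teardown script. Container names, the backup path, and the offsite host are placeholders for your own setup.

```shell
#!/bin/sh
set -eu

STAMP=$(date +%Y%m%d)
BACKUP=/backup

# 1. Copy the .n8n directory (includes the encryption key and config).
docker cp n8n:/home/node/.n8n "$BACKUP/n8n-dot-n8n-$STAMP"

# 2. Dump the database if the deployment is PostgreSQL-backed.
docker exec postgres pg_dump -U n8n n8n > "$BACKUP/n8n-pg-$STAMP.sql"

# 3. Push everything offsite before touching the compromised container.
rsync -a "$BACKUP/" backup-host:/srv/n8n-backups/
```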
Keep the .n8n backup read-only and separate from the database dump. If an attacker has exfiltrated your N8N_ENCRYPTION_KEY, rotating it after restore is necessary.
Restoring to a verified clean state
Stand up a fresh container from the patched image. Restore the .n8n directory first, then restore the database. Confirm that n8n starts cleanly and that the UI shows your workflows before re-enabling any external access.
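A restore onto a patched image, sketched under the same placeholder names; the dated filenames are examples. Restoring .n8n before first start matters, because the instance must come up with the original encryption key rather than generating a new one.

```shell
#!/bin/sh
set -eu

# Fresh container from the patched image, created but not yet started.
docker create --name n8n --network n8n-external \
  -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n

# Restore .n8n (encryption key, config) into the new volume, then start.
docker cp /backup/n8n-dot-n8n-20250101/. n8n:/home/node/.n8n
docker start n8n

# Restore the database dump into a clean PostgreSQL container.
docker exec -i postgres psql -U n8n n8n < /backup/n8n-pg-20250101.sql
```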
If you cannot rule out that the N8N_ENCRYPTION_KEY was read during the breach, rotate it after verifying the restore. The process requires decrypting all stored credentials with the old key and re-encrypting with the new one. n8n does not have a built-in key rotation command, so you will need to manually update each credential in the UI after setting the new N8N_ENCRYPTION_KEY in your environment. It is tedious, but it is the only way to confirm that old key material is no longer in use.
Daily workflow JSON exports as a safety net
Volume snapshots capture the database state, but they do not give you a human-readable, diff-friendly record of what changed in your workflows. The n8n API exposes a /api/v1/workflows endpoint that returns workflow definitions as JSON. A daily cron job that calls this endpoint and writes the output to a separate read-only volume gives you a workflow history that is independent of the database. If a snapshot is corrupted or the database file is modified by an attacker, you still have the workflow JSON from the previous day.
```bash
curl -s -H "X-N8N-API-KEY: $N8N_API_KEY" \
  http://n8n:5678/api/v1/workflows \
  | jq '.' > /backup/workflows/workflows-$(date +%Y%m%d).json
```
Mount the backup target as read-only from n8n’s perspective, so a compromised n8n process cannot overwrite or delete the export history. The cron job runs on the host or a separate container with write access; n8n itself never needs write permission on that path.
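Scheduling the export and pruning old copies can live in a single host crontab entry. The script path is a placeholder assumed to wrap the curl export above, and the 30-day retention window is an arbitrary choice.

```shell
# Daily at 03:15: export workflows, then prune copies older than 30 days.
# /opt/n8n/export-workflows.sh is a hypothetical wrapper for the export.
15 3 * * * /opt/n8n/export-workflows.sh && find /backup/workflows -name 'workflows-*.json' -mtime +30 -delete
```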
Patch to a fixed release: 1.120.4, 1.121.1, or 1.122.0 and later, depending on which minor line you track. Everything above reduces the damage if RCE fires, but the expression sandbox fix in the patched versions is what closes the door.
