
Choosing ZFS for Proxmox on your NUC

Proxmox Storage Strategies matter on a NUC. I run Proxmox on small hardware all the time, so I will walk you through the choices, the trade-offs and a practical ZFS setup that fits a two-slot NUC. I keep this hands-on, no fluff: you get commands you can run and checks you can make.

Start with the problem: a NUC with two M.2 NVMe slots forces choices about redundancy, wear and bulk storage. ZFS gives you checksums, crash-safe writes, snapshots and easy replication, which makes it attractive for virtualisation on Proxmox. ZFS does cost memory, and copy-on-write plus metadata means it writes more, which increases wear on NVMe drives compared with plain ext4 or XFS. If you want redundancy with two drives, mirror them. If you want maximum write endurance and simple performance, put the VM images on a single NVMe and push heavy write workloads such as Frigate recordings to an external NAS. For a small NUC, here are the concrete configs I use in the field:

  • Mirror two identical NVMe drives when riding through a single drive failure matters.
  • Use a single NVMe for the OS and put VMs on a second drive formatted ext4/XFS when write amplification is the concern.
  • Store long-term media and high-write streams on a networked Synology or a Thunderbolt enclosure and mount it over NFS.

Call this a hybrid Proxmox Storage Strategies approach: local fast storage plus remote bulk storage.
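For the remote half of that hybrid, Proxmox can mount an NFS export directly as a storage backend. A minimal sketch, assuming a Synology at 192.168.1.20 exporting /volume1/proxmox (the IP, export path and storage ID are placeholders); Frigate itself is usually better served by mounting the NFS share inside the guest:

    # Register the NFS export as Proxmox storage for backups and ISOs
    pvesm add nfs synology-bulk --server 192.168.1.20 --export /volume1/proxmox --content backup,iso
    # Verify the storage comes up active
    pvesm status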

If you pick ZFS on Proxmox, set it up cleanly during install or rebuild the drives. Below are the steps I run on a two‑NVMe NUC. These are practical commands; adapt device names for your hardware. Follow them in order and verify state after each command.

  1. Wipe existing partitions if present.

    • sudo wipefs -a /dev/nvme0n1
    • sudo wipefs -a /dev/nvme1n1
  2. Create a mirrored pool with 4K alignment.

    • sudo zpool create -f -o ashift=12 tank mirror /dev/disk/by-id/nvme-DRIVE1 /dev/disk/by-id/nvme-DRIVE2
    • Reason: ashift=12 matches modern NVMe 4K sectors, and a mirror gives simple redundancy with two disks. Use /dev/disk/by-id paths instead of short /dev/nvmeXn1 names so the pool survives device renumbering (see Troubleshooting below).
  3. Create datasets for VMs and containers and set sensible options.

    • sudo zfs create tank/vmdata
    • sudo zfs set compression=lz4 tank
    • sudo zfs set atime=off tank
    • sudo zfs set xattr=sa tank
    • sudo zfs set mountpoint=/var/lib/vz tank/vmdata
    • Note: compression=lz4 is cheap and often improves effective write performance. atime=off reduces metadata churn.
  4. Import this pool into Proxmox storage.

    • In the Proxmox GUI, add a ZFS storage and point it at the pool, or register it from the CLI (see the sketch after this list). Then create VM disks on zvols, or use directory storage for qcow2/raw images.
  5. Tune for VM workloads where needed.

    • recordsize applies to file-based images (qcow2/raw on a dataset); zvol-backed disks use volblocksize instead, which is fixed when the zvol is created. For many small random writes, try sudo zfs set recordsize=16K tank/vmdata; for bulk sequential IO, leave the default 128K. Test and pick what suits your workload.
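For step 4, the CLI equivalent of the GUI storage dialog is pvesm. A minimal sketch, assuming the tank/vmdata dataset from step 3 (the storage ID local-zfs-vm is my own label):

    # Register the dataset as a ZFS storage backend for VM disks and containers
    pvesm add zfspool local-zfs-vm --pool tank/vmdata --content images,rootdir --sparse 1
    # Confirm Proxmox sees the new storage
    pvesm status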

Verification steps to confirm health and state:

  • zpool status -v
  • zfs list
  • zpool get ashift tank
  • zpool scrub tank
    Run these after the initial create, then weekly or monthly as a routine check.
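To make the scrub routine rather than something you remember, a cron entry does the job. A minimal sketch, assuming root cron on the Proxmox host (the Sunday 03:00 schedule is just a habit of mine); note that Debian-based installs may already ship a monthly scrub in /etc/cron.d/zfsutils-linux, so check before doubling up:

    # /etc/cron.d/zfs-scrub — scrub the pool every Sunday at 03:00
    0 3 * * 0 root /usr/sbin/zpool scrub tank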

Backup strategies tie into the storage choices. ZFS snapshots are fast and cheap. Use them for quick rollback and short‑term restores. They are not a substitute for offsite backups. For consistent VM snapshots, install the qemu-guest-agent in the guest and use Proxmox snapshot jobs so the VM is quiesced before the snapshot. For file‑level backups or longer retention, use Proxmox Backup Server or rsync to a NAS. I run daily backups to a Synology and weekly replication to an offsite machine. If you must keep Frigate recordings, push them off the NUC to a Thunderbolt enclosure or NAS; they will thrash any local mirrored NVMe pair and shorten drive life.
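On a small setup, plain zfs send/receive over SSH covers that replication. A minimal sketch, assuming a backup host at backup.lan with a pool named backup (hostname, pool and snapshot names are placeholders); tools such as pve-zsync or sanoid wrap the same idea with retention handling:

    # Take a recursive snapshot of the VM dataset
    zfs snapshot -r tank/vmdata@daily-2024-01-15
    # First run: send the full stream to the offsite machine
    zfs send -R tank/vmdata@daily-2024-01-15 | ssh backup.lan zfs receive -F backup/vmdata
    # Subsequent runs: send only the delta since the previous snapshot
    zfs send -R -i tank/vmdata@daily-2024-01-14 tank/vmdata@daily-2024-01-15 \
      | ssh backup.lan zfs receive -F backup/vmdata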

Performance tuning and trade-offs you must accept. ZFS loves RAM; give the host as much memory as your NUC can take. On small hosts I cap the ARC so it does not starve the VMs: set zfs_arc_max via an options line in /etc/modprobe.d/zfs.conf (a sketch follows below), or write it at runtime through /sys/module/zfs/parameters/zfs_arc_max. Do not overcommit memory. Use compression; it often reduces writes and improves throughput on NVMe. Avoid stacking write-safety layers, such as disabling the host page cache on the disk while also forcing sync=always; that combination just slows things down. Leave sync=standard unless you understand the risks. If low latency matters, prefer raw disks on ZFS zvols. For heavy sequential write workloads, XFS on a single NVMe will outperform a mirrored ZFS pair, but it gives you no checksums or snapshots.
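A minimal sketch of that ARC cap, assuming you want roughly 4 GiB of ARC (the value is in bytes; size it for your own RAM budget):

    # /etc/modprobe.d/zfs.conf — cap the ARC at 4 GiB
    options zfs zfs_arc_max=4294967296

    # Rebuild the initramfs so the cap applies at boot
    update-initramfs -u
    # Or apply it immediately without a reboot
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
    # Confirm the current ARC size and ceiling
    grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats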

Troubleshooting common ZFS issues on a NUC usually comes down to memory, entropy or device naming. If a pool goes degraded, check cables and device IDs; NVMe short names can change after a reboot, which is why you create pools with /dev/disk/by-id paths. If performance is slow, check zpool iostat -v and top for ZFS ARC pressure. If a drive fails, replace it with zpool replace tank /dev/disk/by-id/old-id /dev/disk/by-id/new-id and monitor the resilver (the full sequence is sketched below). If you boot from ZFS and hit problems, keep a USB installer handy to import the pool and fix mountpoints. If silent corruption worries you, remember that ZFS was designed to detect and repair it, but that only works well with mirrored storage and regular scrubs: schedule zpool scrub tank and check the results.
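A minimal sketch of the replacement sequence, assuming the failed disk's by-id path is still visible in zpool status (the by-id names here are placeholders):

    # Identify the failed device
    zpool status -v tank
    # Swap the hardware, then tell ZFS to rebuild onto the new disk
    zpool replace tank /dev/disk/by-id/nvme-OLDDISK /dev/disk/by-id/nvme-NEWDISK
    # Watch the resilver until it completes
    watch -n 30 zpool status tank
    # Once resilvered, scrub to confirm everything checks out
    zpool scrub tank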

Concrete examples to anchor the choices. If you plan to run Home Assistant, Plex, Frigate and a couple of VMs on a NUC with two NVMe slots: mirror the two NVMe drives for the VM and config disks, use NFS to a Synology for media and Frigate recordings, and run automated ZFS snapshots with hourly retention for quick rollbacks plus daily replication offsite for disaster recovery. That gives you quick restores, protects against drive failure and keeps heavy write workloads off the NVMe. If you prioritise raw performance and expect to replace drives regularly, put the OS on one NVMe, format the second as XFS for the VMs, and back up regularly to a NAS.

Takeaways: ZFS gives safety and snapshots at the cost of RAM and extra writes. On a two‑slot NUC, mirror for redundancy unless you can offload writes to a NAS. Use compression, atime=off and ashift=12. Run regular scrubs and keep automated backups off the device. Follow the commands above; verify with zpool status and zfs list. That will get Proxmox Storage Strategies right for a constrained NUC without guessing.
