Proxmox VE 9 brings newer kernels, updated QEMU/KVM, and refreshed management components. That helps with newer guest OSes and features, but it also shifts hardware requirements a bit. On older boxes, I start with the upgrade path and check what needs attention first. Proxmox is still a rolling set of Debian packages, so this is a package-driven change, not a single installer. Expect kernel, storage, and network driver changes. Plan for a reboot and have fallbacks ready.
A Dell R720 is a common homelab rack server. It often runs older Broadcom NICs, PERC RAID controllers, and Intel Xeon CPUs. Those parts usually work, but drivers and firmware are the variables. My approach is hardware-first: check firmware, RAID driver support, and NIC drivers before attempting a Proxmox VE 9 upgrade. Do not assume everything will boot the same after a kernel bump.
Useful checks:
- `lspci -nn` to list controllers.
- `dmesg | grep -i -E 'raid|error|firmware'` to spot warnings.
- `uname -r` and `pveversion -v` to record the current state.
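The checks above can be captured in one pass so you have a record to diff against after the upgrade. A minimal sketch, with paths and file naming of my own choosing (`/tmp` here; on a real node I would use `/root`); commands missing on a given host are skipped rather than failing the run:

```shell
#!/bin/sh
# Record pre-upgrade hardware and version state to a dated directory.
# Output location is an example; adjust to taste.
STATE_DIR="/tmp/pve-state-$(date +%Y%m%d)"
mkdir -p "$STATE_DIR"

for cmd in "uname -a" "lspci -nn" "lsblk" "ip a" "pveversion -v"; do
    tool=${cmd%% *}                                   # first word is the binary
    out="$STATE_DIR/$(echo "$cmd" | tr ' /' '__').txt"
    if command -v "$tool" >/dev/null 2>&1; then
        $cmd > "$out" 2>&1 || echo "non-zero exit: $cmd"
    else
        echo "skipped (not installed): $cmd"
    fi
done
echo "state saved under $STATE_DIR"
```

Diffing these files after the reboot makes a missing controller or NIC obvious immediately.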
If a driver is missing after the upgrade, the server may boot but lose storage or networking. That can be fixed, but it is slower than having a tested snapshot or backup.
I have seen plenty of threads where people upgrade older servers without issue, and plenty where they regret rushing it. The same pattern keeps turning up: the ones with backups and a live boot test recovered quickly. The ones without spent hours digging through logs. Treat online reports as data points. Use them to shape your test plan, not as a binary answer on compatibility.
Do not wing this.
Steps I use:
- Update firmware and BIOS on the Dell R720 to the latest vendor release you can get.
- Record current state: `uname -a`, `pveversion -v`, `lsblk`, `lspci -nn`, `ip a`.
- Put hosts in maintenance mode: shut down non-essential VMs and stop scheduled jobs.
- Create a file-level copy of `/etc/pve` and `/etc/network/interfaces` (or your network config).
Commands:
- `apt update && apt full-upgrade`
- `systemctl stop pve-cluster pvedaemon pveproxy` (only if you plan node-only work)
If you want to test first, write the Proxmox VE 9 installer ISO to a USB drive and boot it in live/test mode where possible. That exposes driver problems without changing the host.
Backups save time.
I use two patterns:
- Host-level config backup: `tar czf /root/pve-config-YYYYMMDD.tgz /etc/pve /etc/network /root/.ssh`
- VM backups: use `vzdump` or the GUI backup job. I prefer snapshot mode for LVM or ZFS, and stop mode for critical VMs if snapshotting is not available.
Example vzdump commands:
- `vzdump 101 --mode snapshot --compress zstd --storage local-lvm`
- `vzdump 102 --mode suspend --compress zstd --storage local`
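To back up several guests in sequence, a small wrapper loop works. A sketch, assuming snapshot mode is available on your storage; the VM IDs and the `local-lvm` storage name are example values, and the loop prints the command instead of running it on machines without Proxmox tooling:

```shell
#!/bin/sh
# Back up a list of VMs, stopping on the first failure so a bad
# backup does not go unnoticed. IDs and storage are example values.
set -e
VMIDS="101 102 103"
STORAGE="local-lvm"

for id in $VMIDS; do
    if command -v vzdump >/dev/null 2>&1; then
        vzdump "$id" --mode snapshot --compress zstd --storage "$STORAGE"
    else
        # dry run when vzdump is not present
        echo "would run: vzdump $id --mode snapshot --compress zstd --storage $STORAGE"
    fi
done
```

`set -e` matters here: if VM 101's backup fails, you want the loop to stop rather than plow on and leave you with a partial set.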
Keep at least two recent backups and validate one restore on a spare host or local VM. Store backups off the node if you can.
Firewall rules for Proxmox
Export and save firewall rules before changing packages. Proxmox stores cluster firewall rules under /etc/pve/firewall. I save them with:
- `tar czf /root/pve-firewall-YYYYMMDD.tgz /etc/pve/firewall`
In the GUI, Datacenter → Firewall → Options shows status. Node-level: select the node → Firewall → Options. Note which IPs and ports are open. If the upgrade replaces pveproxy or changes the network stack, firewall rules may come back differently. Keep console access or IPMI ready.
Step-by-step upgrade process
- Verify backups and console access. Confirm `pveversion -v` and `uname -a`.
- Set maintenance: migrate or stop the VMs you can.
- Update package lists: `apt update`.
- Perform the host upgrade: `apt full-upgrade`.
- Reboot into the new kernel.
- Verify services and VMs. If all is well, clean out old kernels and packages.
Typical commands:
- `apt update`
- `apt full-upgrade -y`
- `reboot`
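Those commands can be wrapped with a pre-flight check so the upgrade refuses to start without a config backup on disk. A sketch; the backup path mirrors the earlier tar example and is an assumption about where you stored it:

```shell
#!/bin/sh
# Pre-flight wrapper: only proceed with the upgrade when today's
# config backup exists. The path mirrors the earlier tar example.
BACKUP="/root/pve-config-$(date +%Y%m%d).tgz"

if [ -f "$BACKUP" ] && command -v pveversion >/dev/null 2>&1; then
    apt update && apt full-upgrade -y
    echo "upgrade finished - reboot into the new kernel when ready"
else
    echo "pre-flight failed: need $BACKUP and a Proxmox host; not upgrading"
fi
```

It is a small guard, but it removes the most common regret: realizing mid-upgrade that the backup was never taken.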
If you need to change repositories for a major release, do that only after reading the official Proxmox upgrade guide and replacing sources in /etc/apt/sources.list.d/* as directed.
GUI backup: Datacenter → Backup → Add. Choose storage, schedule, and mode (snapshot/suspend/stop). Click Start to run a one-off backup.
GUI upgrade checks: Datacenter → Node → Summary shows package information and kernel version before you reboot. After the upgrade:
- Click Node → Summary to check the kernel line.
- Datacenter → Summary to see cluster health.
CLI verification commands:
- `pveversion -v`
- `systemctl status pvedaemon.service pve-cluster.service pveproxy.service`
- `journalctl -b -p err`
Verification of successful upgrade
Expected outputs:
- `pveversion -v` should list `pve-manager` with a 9.x version string.
- `systemctl status pvedaemon.service` should show `Active: active (running)`.
- `dmesg | tail` should not list critical driver errors.
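The `pve-manager` check can be done mechanically. This sketch parses a captured line; the sample version string is illustrative only, and on a real node you would capture the line with `pveversion -v | grep '^pve-manager'`:

```shell
#!/bin/sh
# Parse the major version out of a pve-manager line.
# The sample line is illustrative; capture the real one with:
#   line=$(pveversion -v | grep '^pve-manager')
line="pve-manager: 9.0.3 (running version: 9.0.3/0123456789abcdef)"
major=$(echo "$line" | sed -n 's/^pve-manager: \([0-9][0-9]*\)\..*/\1/p')

if [ "$major" = "9" ]; then
    echo "OK: pve-manager major version is 9"
else
    echo "WARN: unexpected pve-manager line: $line"
fi
```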
Verify VMs start: run `qm start <vmid>` for a test VM, then confirm `qm list` shows it in the running state.
If the kernel changed, check that `uname -r` matches the expected new kernel.
If any step changes the node state, such as kernel or storage layout, note the rollback plan before reboot.
Post-upgrade checks to perform
Run through:
- `pveversion -v`
- `systemctl status pve-cluster pvedaemon pveproxy`
- `lsblk` and `mount` to confirm storage
- `ip a` and ping the default gateway
- `zpool status` if using ZFS
- `ceph -s` if using Ceph
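That run-through can be scripted so nothing gets skipped by accident. A sketch: tools that are not installed on a given node (for example `zpool` or `ceph`) are reported as skipped rather than failing the whole pass:

```shell
#!/bin/sh
# Run the post-upgrade checks in one pass. Missing tools are
# reported as skipped, not treated as failures.
for check in "pveversion -v" "systemctl status pve-cluster" \
             "lsblk" "ip a" "zpool status" "ceph -s"; do
    tool=${check%% *}
    if command -v "$tool" >/dev/null 2>&1; then
        echo "== $check =="
        $check || echo "FAILED: $check"
    else
        echo "skipped: $tool not installed"
    fi
done
```

Any `FAILED:` line is the place to start digging before declaring the upgrade done.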
Start a representative VM and test networking and storage IO.
Troubleshooting tips
- Use IPMI or a physical console to avoid lockout.
- Boot an older kernel from GRUB to regain access.
- `chroot` from a rescue environment to repair files if the root filesystem is damaged.
- Reinstall or enable missing kernel modules with `apt` and `modprobe`.
- Check firmware versions; sometimes rolling back a kernel is faster than chasing a firmware-only fix.
Commands that help:
- `dmesg | less`
- `journalctl -xe`
- `ip link set dev <interface> up`
- `lsmod | grep <module>`
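For a NIC that disappeared after the kernel bump, the usual sequence is: reload the module, then bring the link up. In this sketch, `tg3` (the common driver for the R720's onboard Broadcom ports) and `eno1` are example names; confirm yours with `lspci -nn` and `ip -br link`. The script prints the commands instead of running them when not root:

```shell
#!/bin/sh
# Reload a NIC driver and bring the uplink back up.
# MODULE and IFACE are example values - confirm with lspci -nn / ip -br link.
MODULE="tg3"
IFACE="eno1"

if [ "$(id -u)" -eq 0 ] && command -v modprobe >/dev/null 2>&1; then
    modprobe "$MODULE" || echo "modprobe $MODULE failed - check dmesg"
    ip link set dev "$IFACE" up || echo "could not bring $IFACE up"
else
    echo "would run: modprobe $MODULE && ip link set dev $IFACE up"
fi
```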
Takeaways: test, back up, and keep console access. If hardware looks fragile after a kernel bump, stop and recover from backups rather than debug under production load.

