
Steps for a successful Proxmox VE 9 upgrade

Upgrading Proxmox VE on Older Hardware: A Practical Guide

Overview of Proxmox VE 9 features

Proxmox VE 9 brings newer kernels, updated QEMU/KVM, and refreshed management components. That helps with newer guest OSes and features, but it also raises the bar on hardware support. I focus on the upgrade path and what to check on older boxes. The key point is this: Proxmox VE is a set of packages on top of Debian, so the upgrade is package-driven, not a single binary installer. Expect kernel, storage and network driver changes. Plan for a reboot and have fallbacks ready.
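
A quick way to record where you are starting from and which repositories the upgrade will pull from (a minimal sketch; adjust the paths if you use the enterprise repository):

  • pveversion -v
  • apt policy pve-manager
  • cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list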

Compatibility with Dell R720

A Dell R720 is a common homelab rack server. It often runs older Broadcom NICs, PERC RAID controllers and Intel Xeon CPUs. Those parts usually work, but drivers and firmware are the variables. My approach is hardware-first: check firmware, RAID driver support and NIC drivers before attempting a Proxmox VE 9 upgrade. Do not assume everything will boot the same after a kernel bump.

Useful checks:

  • lspci -nn to list controllers.
  • dmesg | grep -i -E 'raid|error|firmware' to spot firmware or RAID warnings.
  • uname -r and pveversion -v to record current state.

If a specific driver is missing after the upgrade, the server may boot but lack storage or networking. That is fixable, but slower than having a tested snapshot or backup.
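
To make before/after comparison easy, I dump the hardware and driver inventory to a file before touching anything (a sketch; the output path is just an example):

  • lspci -nnk > /root/hw-before-upgrade.txt
  • ip -br link >> /root/hw-before-upgrade.txt
  • lsmod | sort >> /root/hw-before-upgrade.txt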

Community insights on upgrades

I read a lot of threads where people successfully upgrade older servers, and some where they regret it. The common pattern is this: those who prepared backups and tested a live boot recovered quickly; those who did not spent hours troubleshooting. Treat online reports as data points. Use them to build your test plan, not as a binary yes/no on compatibility.

Setup

Preparing your environment

Do not wing this. Steps I use:

  1. Update firmware and BIOS on the Dell R720 to the latest vendor release you can get.
  2. Record current state: uname -a, pveversion -v, lsblk, lspci -nn, ip a.
  3. Put hosts in maintenance mode: shut down non-essential VMs and stop scheduled jobs.
  4. Create a file-level copy of /etc/pve and /etc/network/interfaces (or your network config).

Commands:

  • apt update && apt full-upgrade
  • systemctl stop pve-cluster pvedaemon pveproxy (only if you plan node-only work)
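
For step 3, confirming the node is quiet before continuing looks like this (a sketch, assuming no HA groups need special handling):

  • qm list (no guest should show running)
  • pct list
  • cat /etc/pve/jobs.cfg to spot scheduled backup jobs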

If you must test first, boot the Proxmox VE 9 installer ISO on a USB and run it in live/test mode where possible. That reveals driver problems without changing the host.

Backup patterns to consider

Backups save time. I use two patterns:

  • Host-level config backup: tar czf /root/pve-config-YYYYMMDD.tgz /etc/pve /etc/network /root/.ssh
  • VM backups: use vzdump or the GUI backup job. I prefer snapshot mode for LVM or ZFS, and stop mode for critical VMs if snapshotting is unavailable.

Example vzdump command:

  1. vzdump 101 --mode snapshot --compress zstd --storage local-lvm
  2. vzdump 102 --mode suspend --compress zstd --storage local

Keep at least two recent backups and validate one restore on a spare host or local VM. Store backups off the node if possible.
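
To validate a restore without touching the original guest, restore the dump to a spare VMID and boot it (a sketch; the dump filename and VMID 9101 are placeholders):

  1. qmrestore /var/lib/vz/dump/vzdump-qemu-101-<timestamp>.vma.zst 9101 --storage local-lvm
  2. qm start 9101, check it boots, then qm stop 9101 && qm destroy 9101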

Firewall rules for Proxmox

Export and save firewall rules before changing packages. Proxmox stores cluster firewall rules under /etc/pve/firewall. I save them with:

  • tar czf /root/pve-firewall-YYYYMMDD.tgz /etc/pve/firewall

In the GUI: Datacenter → Firewall → Options to check status. Node-level: select the node → Firewall → Options. Note which IPs and ports are open. If the upgrade replaces pveproxy or the network stack, firewall rules may reapply differently, so keep console or IPMI access ready.
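
A quick CLI cross-check of the firewall before and after the upgrade (a sketch):

  • pve-firewall status
  • pve-firewall compile to print the generated ruleset without applying anything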

Steps

Step-by-step upgrade process

  1. Verify backups and console access. Confirm pveversion -v and uname -a.
  2. Set maintenance: migrate or stop the VMs you can.
  3. Update package lists: apt update.
  4. Perform the host upgrade: apt full-upgrade.
  5. Reboot into the new kernel.
  6. Verify services and VMs. If all good, finish by cleaning old kernels and packages.

Numbered commands (typical):

  1. apt update
  2. apt full-upgrade -y
  3. reboot

If you need to change repositories for a major release, do that step only after reading the official Proxmox upgrade guide and replace sources in /etc/apt/sources.list.d/* as directed.
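
As an illustration only (the official upgrade guide has the authoritative lines and format for your release), a Proxmox VE 9 no-subscription entry for Debian 13 "trixie" looks roughly like this, and if you are coming from 8.x the bundled checker script is worth running first:

  1. pve8to9 --full (pre-upgrade checklist shipped with recent pve-manager packages)
  2. deb http://download.proxmox.com/debian/pve trixie pve-no-subscription (example repository line; newer releases may use the deb822 .sources format instead)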

Commands and UI clicks

GUI backup: Datacenter → Backup → Add. Choose storage, schedule, and mode (snapshot/suspend/stop). Click Start to run a one-off backup.

GUI upgrade checks: Datacenter → Node → Summary shows package information and kernel version before you reboot. After the upgrade:

  • Click Node → Summary to check the kernel line.
  • Datacenter → Summary to see cluster health.

CLI verification commands:

  • pveversion -v
  • systemctl status pvedaemon.service pve-cluster.service pveproxy.service
  • journalctl -b -p err

Verification of successful upgrade

Expected outputs:

  • pveversion -v should list pve-manager with a 9.x version string.
  • systemctl status pvedaemon.service should show Active: active (running).
  • dmesg | tail should not list critical driver errors.

Verify VMs start:

  • qm start <vmid> for a representative VM.
  • qm list should show it in the running state.

If the kernel changed, check uname -r matches the expected new kernel.
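
A minimal per-VM spot check (VMID 101 is just an example):

  • qm start 101
  • qm status 101 should report status: running
  • qm config 101 to confirm disks and NICs are attached as before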

If any step changed the node state (kernel, storage layout), note the rollback plan before reboot.

Checks

Post-upgrade checks to perform

Run through:

  • pveversion -v
  • systemctl status pve-cluster pvedaemon pveproxy
  • lsblk and mount to confirm storage
  • ip a and ping default gateway
  • zpool status if using ZFS
  • ceph -s if using Ceph

Start a representative VM and test networking and storage IO.
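
I roll these into one quick pass and keep the output for comparison with the pre-upgrade capture (a sketch; drop the ZFS/Ceph lines if you do not run them):

  • pveversion -v; uname -r
  • systemctl --failed
  • lsblk; ip -br a
  • zpool status -x
  • ceph -s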

Expected outputs and troubleshooting

Expected: no services in an Active: failed state; VMs start. Common troubles:

  • Missing NICs: check dmesg and lspci. Reload the driver with modprobe <module>.
  • Storage not present: check RAID controller modules and /dev/mapper names.
  • Service failures: journalctl -u <service> -b reveals the error.

Use journalctl -b -1 to view logs from the previous boot if needed.

Rollback procedures if needed

For VM rollback: restore from the vzdump backup via the GUI, or on the CLI with qmrestore (VMs) or pct restore (containers).

For host rollback:

  1. Try booting an older kernel from GRUB.
  2. If kernel rollback works, remove the problematic kernel package.
  3. If the host is unbootable, reinstall Proxmox VE 9 or previous version and restore /etc/pve and VM backups.

Always test a restore before relying on it. Note which step changed state (for example, kernel install) and document how to undo it.
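
On recent releases, listing installed kernels and pinning a known-good one from a working boot looks roughly like this (a sketch, assuming the node uses proxmox-boot-tool; otherwise pick the older kernel from the GRUB Advanced options menu):

  1. proxmox-boot-tool kernel list
  2. proxmox-boot-tool kernel pin <known-good-version>
  3. reboot, verify, then remove the problematic kernel package with apt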

If it breaks

Common issues during upgrade

  • Booting to a blank screen because of missing storage drivers.
  • Network interfaces reordered or missing.
  • pveproxy or API failing due to missing dependencies.
  • ZFS or Ceph requiring kernel modules not present.

Troubleshooting tips

  1. Use IPMI or physical console to avoid lockout.
  2. Boot an older kernel from GRUB to regain access.
  3. chroot from a rescue environment to repair files if the root filesystem is damaged.
  4. Reinstall or enable missing kernel modules with apt and modprobe.
  5. Check firmware versions; sometimes rolling back a kernel is faster than chasing a firmware-only fix.

Commands that help:

  • dmesg | less
  • journalctl -xe
  • ip link set <iface> up
  • lsmod | grep <module>
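
For missing or renamed NICs, checking which driver (if any) claimed the card narrows things down quickly (a sketch; eno1 is just an example interface name):

  • lspci -nnk | grep -A3 -i ethernet
  • ethtool -i eno1
  • ip link set eno1 up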

Community support resources

Search the Proxmox forum and r/Proxmox for similar Dell R720 reports. Look for threads that mention your RAID controller and NIC model. Follow official Proxmox upgrade notes for repository changes and any special steps. Community posts are practical, but treat them as tests to mirror, not guarantees.

Takeaways: test, backup, and keep console access. If hardware looks fragile after a kernel bump, pause and recover backups rather than debug under production load.
