Choosing between Proxmox and ESXi for virtual machines
I built an SFF PC homelab and ran an ESXi box for eight years without hypervisor crashes. Now I am choosing between Proxmox and ESXi for a new SFF PC that will run about ten virtual machines, including WordPress, WireGuard, Home Assistant and a large personal database. This guide shows how I test power management and thin provisioning, and how I decide which hypervisor fits a compact, low-power homelab.
Considerations for Your Homelab Setup
Evaluating Your Hardware Requirements
Pick the exact CPU and storage before choosing a hypervisor. Different CPUs can behave very differently for idle draw. I use an AMD Ryzen consumer chip; note that consumer silicon often exposes aggressive sleep states that some hypervisors or kernels handle differently. Match your test platform to the SFF PC you intend to deploy.
Count cores and threads against the VMs you plan to run. For around ten VMs, 8–12 threads is a practical starting point if most workloads are light. Budget RAM for the big database. Overprovision CPU and RAM in testing only after tracking real usage for a week.
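As a hedged illustration only: ten light VMs at 1–2 GB each, plus 8–16 GB reserved for the database and 2–4 GB of host overhead, puts a sensible floor around 32 GB of RAM. Treat those figures as placeholders until you have a week of real usage data.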
Storage type matters for thin provisioning. NVMe and SATA SSDs both work, but their reclaim behaviour differs. Decide if you want local LVM-thin, qcow2 on ZFS, or a SAN/attached storage. Each path has slightly different reclaim steps.
Assessing Performance Needs
Measure target workloads. I run WordPress, WireGuard, Home Assistant and a database. That mix puts intermittent bursts on CPU and steady I/O on the DB. Run a simulated workload for at least 24 hours and capture:
- idle watt reading at the wall,
- watt reading under typical load,
- CPU ready or CPU steal for virtual machines.
Use a plug-in power meter for wall measurements. Use guest-side monitoring to detect excessive CPU ready time. Collect numbers before committing to a platform.
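One way to capture the guest-side numbers, as a minimal sketch: vmstat's st (steal) column is a reasonable proxy for contention on a KVM host such as Proxmox, while ESXi exposes CPU ready in its own performance charts.

    # Inside a Linux guest: sample CPU counters every 5 seconds.
    # The "st" column shows time the guest was runnable but the
    # hypervisor scheduled something else.
    vmstat 5

    # A longer capture for the 24-hour test, appended to a log file:
    vmstat 60 1440 >> /var/log/steal-watch.log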
Long-term Reliability and Stability
My ESXi 6.7 host ran for over eight years without hypervisor crashes. That history matters. Stability comes from matching hardware that the hypervisor supports and sticking to conservative driver choices for NICs and storage controllers.
Where I have seen surprises is on consumer motherboards. BIOS power features, C-states and P-states sometimes need tweaking. Keep BIOS updated, disable experimental power options and test autostart behaviour after a simulated power loss.
Power Management Features
Test power management on the actual AMD or Intel CPU you will use. My method:
- Install the hypervisor and leave all VMs powered off, then measure idle watts.
- Boot one lightweight VM and measure again.
- Boot a DB VM and measure under load.
For Proxmox, run turbostat and powertop directly on the host to check C-state residency; the host is a Debian system, so standard Linux tools work, and a tool running inside a VM cannot see the host's real C-states anyway. For ESXi, confirm power state handling with the host performance charts and BIOS settings. If idle draw differs by 5–15 W between hypervisors on the same hardware, that can be decisive for an SFF PC.
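A minimal sketch of the Proxmox-side check, assuming turbostat is installed (on Debian it ships in the linux-cpupower package; the package name varies by release):

    # On the Proxmox host: summarise C-state residency over 30 seconds.
    turbostat --quiet sleep 30

    # powertop's "Idle stats" view shows per-core C-state occupancy interactively.
    powertop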
Also confirm automatic VM autostart after power loss. Configure autostart in the hypervisor GUI, then pull mains power for a safe, quick test to see whether VMs come back in the order and timing you expect.
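As a sketch of the configuration side, with the VM ID, order and delay below as placeholders: Proxmox sets autostart per VM with qm, and ESXi's host shell exposes the autostart manager through vim-cmd.

    # Proxmox: start VM 101 on boot, first in order, then wait 30 s
    # before starting the next VM.
    qm set 101 --onboot 1 --startup order=1,up=30

    # ESXi host shell: enable the autostart manager.
    vim-cmd hostsvc/autostartmanager/enable_autostart true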
Thin Provisioning Capabilities
Thin provisioning lets you allocate large virtual disks without consuming physical storage until data is written. The practical problem is reclaim. Reclaim means freeing host space after you delete or shrink files inside a VM.
My approach:
- Use a filesystem in the guest that supports discard/TRIM.
- Inside the guest run fstrim -v / to issue discard ops.
- Check host free space before and after to confirm reclaim.
Some storage backends honour discard immediately. Some require a host-side command to punch holes. Test the exact combination you plan to use: create a 50 GB file inside a VM, delete it, run fstrim inside the VM, then compare host storage usage. If reclaim does not happen, try an alternative storage path: LVM-thin, qcow2 with hole punching, or a storage array that supports UNMAP.
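A minimal sketch of that test on a Proxmox host with LVM-thin, assuming the guest disk was attached with discard enabled (the file name and size are placeholders):

    # Inside the guest: write, then delete, a 50 GB test file.
    dd if=/dev/zero of=/root/reclaim-test bs=1M count=51200
    rm /root/reclaim-test
    fstrim -v /

    # On the host: the Data% column of the thin pool should drop
    # back once the discards are processed.
    lvs
    pvesm status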
Making the Right Choice
Comparing Proxmox and ESXi Features
Proxmox is a Debian-based KVM host with ZFS and LXC integration. It is flexible and exposes Linux tooling. That means more knobs at the host level. Proxmox is good if you want full control over storage stacks, ZFS snapshots, and direct access to the Linux userland.
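For example, a hedged sketch of what that host-level access looks like, using Proxmox's default ZFS dataset layout (the VM ID and dataset name are illustrative):

    # Snapshot a VM through the Proxmox tooling...
    qm snapshot 100 pre-upgrade

    # ...or reach straight for ZFS on the underlying dataset.
    zfs snapshot rpool/data/vm-100-disk-0@pre-upgrade
    zfs list -t snapshot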
ESXi is a proprietary hypervisor with a long track record. It tends to be conservative with hardware support but very stable on supported platforms. ESXi features mature VM lifecycle tools and predictable behaviour on supported NICs and storage controllers.
For thin provisioning, both hypervisors can do the job. The difference lies in the storage backends and how discard is passed through. For power management, results depend on the CPU and motherboard. Do not assume one hypervisor will always win on idle draw.
Real-World Use Cases and Experiences
My ESXi box stayed online eight years with no hypervisor-level crashes. That matters when uptime is a priority. On a separate Proxmox test rig I found more visible control over storage reclaim routines, which saved space when I needed aggressive thin provisioning.
If you want container workloads in the same host, Proxmox gives an easier path with LXC. If you prefer a minimal, locked-down host that behaves the same across upgrades, ESXi has that appeal.
Recommendations for Specific Scenarios
- If your priority is minimal idle power in an SFF PC and your CPU is a consumer chip, test both on identical hardware and pick the lower draw result.
- If you want ZFS features, snapshots and flexible storage operations, choose Proxmox.
- If long-term, battle-tested hypervisor stability is the priority and your hardware is on VMware’s HCL or close enough, choose ESXi.
Testing and Validation Steps
Follow these numbered steps before migrating real VMs.
1. Install both hypervisors on identical hardware, or on the SFF PC itself if you can re-flash quickly.
2. Measure idle power with nothing running. Record watts at the wall.
3. Boot a single VM of each workload type and re-measure idle and under-load watts.
4. Test thin provisioning reclaim:
   - Inside a VM, create a large file, delete it, then run fstrim -v /.
   - Check host storage usage before and after.
5. Test automatic VM startup:
   - Configure autostart.
   - Simulate power loss and confirm VMs return in the desired order.
6. Run sustained load for 24 hours and watch the hypervisor logs for kernel panics and VM crashes (see the log-watching sketch after this list).
7. Try overprovisioning RAM and CPU in the test environment and monitor real memory usage.
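For the soak test, a sketch of what watching the logs means in practice (the ESXi log path is the standard location on recent releases):

    # Proxmox host: follow the journal and kernel ring buffer live.
    journalctl -f
    dmesg -w

    # ESXi host shell: tail the vmkernel log.
    tail -f /var/log/vmkernel.log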
Verify each claim with measurements. If thin reclaim does not work, try an alternative storage path or enable discard/passthrough options and repeat the test.
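On Proxmox, discard is a per-disk option; a minimal sketch, assuming VM 100 with its first SCSI disk on local-lvm (IDs and storage names are placeholders):

    # Use the virtio-scsi controller, which passes discard through cleanly.
    qm set 100 --scsihw virtio-scsi-pci

    # Reattach the disk with discard enabled; ssd=1 presents the
    # disk to the guest as non-rotational.
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1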
Final Decision Factors
Make the decision on facts you collected. Compare idle and load watt figures. Compare actual reclaimed storage after fstrim. Compare stability during a 24–72 hour soak test. Account for operational preferences: do you prefer full control and Linux tooling, or a minimal, stable hypervisor?
I picked the hypervisor that delivered lower idle draw on my exact AMD Ryzen SFF PC and that reclaimed space reliably on the storage backend I planned to use. That combination gave the best balance of performance, low idle power and long-term reliability for this homelab.