Boosting Windows VM Drive Performance on Proxmox by Adjusting CPU Type
I ran a long set of tests changing CPU types on Windows guests in Proxmox. The goal was simple: improve Windows VM performance, specifically disk I/O. I kept the tests repeatable, measured before and after, and documented every config change. Below I describe the approach, practical steps, and what to watch for.
Testing Strategies for Windows VM Performance
Changing CPU Types in Proxmox
Proxmox exposes several CPU models for each VM. The common options are host (host-passthrough), qemu64, kvm64 and various vendor-specific models. Host gives the VM the host CPU feature set. Emulated models present a stable, generic CPU interface to the guest.
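If you want to see exactly which models your node can offer, QEMU will list them directly; on recent Proxmox releases the API should expose the same list. The pvesh path below is what I use, so treat it as a sketch and adjust the node name if yours differs:

```
# On the Proxmox node: list the CPU models QEMU itself knows about.
qemu-system-x86_64 -cpu help | less

# On recent Proxmox releases the API exposes the selectable models too
# (node name taken from `hostname`; adjust if yours differs).
pvesh get /nodes/$(hostname)/capabilities/qemu/cpu
```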
How I changed CPU type:
- Power off the VM.
- In the Proxmox GUI go to Hardware → Processors → Edit and choose the CPU type, or use qm set <vmid> --cpu <model> from the host shell (see the example below).
- Boot the VM and verify the CPUID flags the guest sees with tools like CPU-Z or msinfo32.
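As a sketch, the CLI version of that change looks like this; the VM ID 101 and the model names are placeholders, so substitute your own:

```
# Example only: VM ID 101 is a placeholder.
qm shutdown 101

# Pass the host CPU straight through...
qm set 101 --cpu host

# ...or present a generic emulated model instead.
qm set 101 --cpu qemu64

# Confirm what the config now says, then boot and check inside the guest.
qm config 101 | grep ^cpu
qm start 101
```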
I tested both extremes, host passthrough and a generic emulated model, and left every other variable unchanged. Each change went onto a snapshot or a disposable VM image, never onto production.
Measuring I/O Performance Improvements
Pick repeatable tests and run them multiple times. For Windows guests I use:
- CrystalDiskMark for quick, comparable runs of sequential and random I/O.
- DiskSpd for more configurable, realistic load patterns (an example invocation follows this list).
- Windows Performance Monitor counters for latency and queue depth.
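For reference, the sort of DiskSpd run I use for small random I/O looks roughly like this; the block size, duration, queue depth, 70/30 read/write split and file path are illustrative choices, not a standard:

```
:: Run inside the Windows guest from an elevated prompt (paths are examples).
:: 4K random I/O, 60 s, 32 outstanding I/Os per thread, 4 threads, 30% writes,
:: caching disabled (-Sh), latency statistics enabled (-L), 4 GiB test file (-c4G).
diskspd.exe -b4K -d60 -o32 -t4 -r -w30 -Sh -L -c4G C:\test\io.dat
```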
My measurement routine:
- Reboot the guest after each CPU change.
- Run a warm-up pass to let caches settle.
- Run three test passes and take the median.
- Record the raw numbers: IOPS, MB/s, and 95th-percentile latency.
Example of what to log: VM config (CPU model, cores, sockets), disk type (virtio-scsi vs IDE), storage backend (ZFS, LVM, local SSD), driver versions, OS build, and test tool parameters.
The Reddit case that sparked my curiosity reported a large I/O jump after changing CPU model; treat such claims as leads, not facts, and reproduce them in your setup before trusting them. https://reddit.com/r/Proxmox/comments/1ohb2v9/increaseddriveperformance15timesbychanging/
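Most of that host-side context can be captured in one go with a small snippet like this (run on the Proxmox node; the VM ID and output filename are placeholders):

```
# Snapshot the host-side context for VM 101 before and after each test run.
VMID=101
OUT="vm${VMID}-$(date +%Y%m%d-%H%M).env.txt"
{
  pveversion -v          # Proxmox / QEMU / kernel package versions
  qm config "$VMID"      # CPU model, cores, sockets, disk bus, cache mode
  pvesm status           # storage backends and their state
} > "$OUT"
```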
Security Features and Their Impact
Changing the CPU model changes the CPUID presented to the guest. That can affect which CPU mitigations the guest sees or uses, which in turn can change performance characteristics for certain workloads. QEMU's documentation covers the difference between emulated CPU models and host passthrough and how CPU features are presented by the hypervisor; the Wikipedia overview of QEMU is a reasonable starting point for background. https://en.wikipedia.org/wiki/QEMU
I do not suggest disabling mitigations blindly. Instead:
- Check CPUID flags and mitigation status after each change (see the guest-side checks after this list).
- Compare kernel/OS messages for mitigation-related behaviour.
- Test workloads that stress the affected paths, for example small random writes and fsync-heavy workloads.
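Inside the Windows guest I use Sysinternals Coreinfo for the raw CPUID flags and Microsoft's SpeculationControl PowerShell module for mitigation status. A minimal sketch, assuming both are available in the guest:

```
# Run in an elevated PowerShell session inside the Windows guest.

# Dump the CPU feature flags as the guest sees them (Coreinfo is a Sysinternals tool).
.\Coreinfo64.exe -accepteula

# Report which speculative-execution mitigations are present and enabled.
# Install-Module may prompt you to trust the PowerShell Gallery.
Install-Module SpeculationControl -Scope CurrentUser
Get-SpeculationControlSettings
```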
Best Practices for Configuration
Keep changes surgical. Change the CPU model only. Do not change disk format, SCSI controller, caching mode or virtio driver at the same time. Other tips:
- Use virtio-scsi and current virtio drivers on Windows. Old drivers hide the benefits.
- Match number of vCPUs to host topology; avoid overcommit where possible.
- Use host-passthrough if you need exact CPU features and plan no live migration.
- Use emulated models for portability between hosts and easier live migration.
- Take snapshots before testing destructive changes.
Record the full environment: Proxmox version, QEMU args, storage back-end, and firmware type (SeaBIOS vs OVMF).
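As an illustration of the kind of config I end up recording, here is the shape of a test VM definition from my notes; the VM ID, storage name, core count and sizes are placeholders, not recommendations:

```
bios: ovmf
machine: q35
cores: 8
sockets: 1
cpu: host
scsihw: virtio-scsi-single
scsi0: local-nvme:vm-101-disk-0,iothread=1,ssd=1,size=100G
agent: 1
```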
Documenting Your Changes
Create a simple wiki page for each VM you test. Include:
- Baseline numbers and test parameters.
- Exact qm set or GUI selections you changed.
- Final config that gave the best result and why you accepted the trade-offs.
I use a CSV for raw numbers and a short markdown note for configuration decisions. That saves time when I need to revert or replicate results.
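The CSV itself is nothing clever; a header like the one below is the convention I use, with the values filled in per run (column names are my own, adjust to taste):

```
# results.csv - one row per test pass; column names are my own convention.
date,vmid,cpu_model,virtio_driver,storage,tool,pattern,block_size,iops,mb_s,p95_latency_ms
```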
Real-World Impacts
User Experiences and Feedback
A forum report claimed a 15x increase after switching CPU model. Anecdotes like that are useful as starting points. In my lab I have seen anything from negligible change to large gains depending on workload and driver stack. Heavy metadata workloads and fsync-bound apps can show the biggest changes.
When you test, differentiate between latency and throughput gains. Some changes reduce latency but do not increase sustained MB/s. Document both metrics.
Case Studies of Performance Gains
I will describe two short examples from my notes (anonymised):
- Case A: Windows Server 2019 guest on local NVMe, virtio-scsi. Switching from host to a conservative emulated model reduced small random write latency by 30% and improved 4K IOPS by 40%. The guest had old virtio drivers; updating the drivers gave most of the gain, and the CPU model change added a measurable delta on top.
- Case B: Windows 10 dev VM on a ZFS-backed pool. No meaningful change between host and emulated CPU for sequential throughput. The bottleneck was ZFS sync settings and disk writeback, not CPUID differences.
These show that gains depend on the whole stack: storage backend, guest drivers, and workload.
Long-Term Considerations for VM Performance
Think beyond the immediate gain. Host-passthrough can break live migration if hosts differ. Emulated models increase portability but may change which mitigations the guest recognises. Keep a maintenance note if you accept a CPU model that changes security-visible flags.
Patch management matters. Vendor OS updates or microcode fixes can change performance. Retest key workloads after major updates.
Comparing Different Virtualisation Technologies
Different hypervisors expose CPU features differently. If you ever move guests between KVM/QEMU and other hypervisors, expect variance. That makes baseline testing on the target hypervisor essential. Do not assume results from one platform carry over.
Recommendations for Future Tests
A tight experimental plan saves time:
- Define the workload and the metric you care about.
- Baseline with current config.
- Change CPU model only; reboot and retest.
- Run driver updates as a separate test.
- Keep a log of the exact commands and output (a scripted loop like the sketch below keeps the cycle repeatable).
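A minimal host-side sketch of that cycle, assuming a disposable test VM with ID 101 and that the in-guest benchmark is run manually after each boot; the model names are examples (on Proxmox 8, x86-64-v2-AES is one of the generic choices):

```
# Cycle through CPU models on a disposable test VM (ID 101 is a placeholder).
VMID=101
for CPU in host x86-64-v2-AES qemu64; do
    qm shutdown "$VMID"
    qm wait "$VMID"                  # block until the guest has powered off
    qm set "$VMID" --cpu "$CPU"      # the only variable that changes
    qm start "$VMID"
    echo "CPU model $CPU active - run the in-guest benchmark, then press Enter"
    read -r
done
```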
Make the hypothesis explicit. For example: “I expect random write latency to fall when switching to emulated-cpu-X because mitigation Y will not be exposed.” Then test to accept or reject the hypothesis.
The single takeaway: CPU type is a cheap, reversible variable that can move the needle for Windows VM performance, but the result depends on the whole stack. Test, measure, document, and balance performance against portability and security trade-offs.