Managing GPU Passthrough for Virtual Machines with Proxmox VE
I set up multi-user GPU passthrough on a single Proxmox VE server so several people can run full GPU workloads at once. This guide shows the exact checks, commands and config edits I used. Read it and follow the steps on one machine first, then scale.
Preparation for GPU Passthrough
Hardware Requirements
- CPU with IOMMU support: Intel VT-d or AMD-Vi. Check your CPU spec and motherboard manual.
- Enough PCIe lanes for the number of GPUs you want to pass through. Consumer motherboards can be limited.
- At least one discrete GPU per person if you want true passthrough. Shared vGPU tech exists but is hardware and driver dependent.
- 32 GB RAM or more for multiple virtual machines running games or heavy apps.
- A PSU with headroom for all GPUs and the CPU.
Proxmox VE Installation
- Install the latest stable Proxmox VE. I install from the official ISO on a USB stick.
- Use ZFS or ext4 for storage depending on your backup and snapshot needs. ZFS adds robustness at the cost of RAM.
- Keep Proxmox updated with apt update && apt dist-upgrade; take a backup or snapshot of the host configuration before major upgrades.
BIOS Configuration
- Enable VT-d for Intel or SVM/IOMMU for AMD.
- Disable CSM/Legacy boot if your GPU drivers prefer UEFI.
- Turn off onboard GPU if you will use only discrete cards for host and guests.
- If the motherboard has ACS override options, do not enable them unless you understand the IOMMU group ramifications.
GPU Compatibility
- Older consumer Nvidia drivers (before the 465 series) refused to initialise inside a VM; newer drivers have relaxed this, but expect some driver fiddling.
- AMD GPUs are often easier for Linux passthrough and lack Nvidia consumer restrictions.
- Check IOMMU groupings on the machine: run lspci -nnk and inspect the groups under /sys/kernel/iommu_groups (a one-line script follows this list).
- If a GPU shares an IOMMU group with other essential devices, passthrough can be unsafe or impossible without ACS patching.
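To see how the board actually groups devices, I walk /sys/kernel/iommu_groups on the host. This is a generic sketch and assumes nothing beyond a standard Proxmox install:

  # Print every IOMMU group and the devices it contains
  for d in /sys/kernel/iommu_groups/*/devices/*; do
      g=${d#*/iommu_groups/}; g=${g%%/*}
      printf 'IOMMU group %s: %s\n' "$g" "$(lspci -nns "${d##*/}")"
  done

Ideally the GPU sits in a group of its own, accompanied only by its own audio (and possibly USB-C) functions.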
Network Configuration
- Give the Proxmox host a static IP on your LAN; a minimal /etc/network/interfaces example follows this list.
- If guests will be used for gaming, put them on the same network segment as clients to keep latency low.
- Use a managed switch if you expect heavy traffic between guests and local clients. Keep MTU standard unless you know the network implications.
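For reference, the static host address lives in /etc/network/interfaces on a default Proxmox install. The NIC name (eno1), bridge name and addresses below are examples from my setup, not requirements:

  auto lo
  iface lo inet loopback

  iface eno1 inet manual

  auto vmbr0
  iface vmbr0 inet static
      address 192.168.1.10/24
      gateway 192.168.1.1
      bridge-ports eno1
      bridge-stp off
      bridge-fd 0

Guests attached to vmbr0 then sit on the same segment as the thin clients, which keeps latency down.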
Configuring GPU Passthrough
VM Creation in Proxmox
- Create a VM with the OS you want for the guest. For gaming I use Windows 10/11 images.
- Allocate CPUs, RAM and disk. I start with 4 vCPU and 16 GB for a gaming VM, increase if needed.
- Do not add an emulated display device yet; passthrough will give the guest direct access to the real GPU. A sample qm create command follows this list.
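If you prefer the CLI to the web UI, the same VM can be created with qm. The VM ID, storage name (local-lvm) and sizes here are assumptions for illustration; adjust them to your environment:

  # Windows guest prepared for passthrough: q35 machine type, OVMF/UEFI firmware, VirtIO disk and network
  qm create 101 --name gaming-vm-1 \
    --memory 16384 --cores 4 --cpu host \
    --machine q35 --bios ovmf --ostype win10 \
    --scsihw virtio-scsi-pci --scsi0 local-lvm:64 \
    --efidisk0 local-lvm:1 \
    --net0 virtio,bridge=vmbr0 \
    --vga none

--vga none removes the emulated display; if you want a console for the initial OS install, leave the default VGA in place and switch it to none once the passthrough driver is working.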
Editing Configuration Files
- Enable IOMMU in GRUB. Edit /etc/default/grub and add to GRUB_CMDLINE_LINUX_DEFAULT:
- Intel: intel_iommu=on iommu=pt
- AMD: amd_iommu=on iommu=pt
- Update grub: update-grub and reboot.
- Load the VFIO modules at boot: add vfio, vfio_iommu_type1, vfio_pci and vfio_virqfd to /etc/modules (on recent kernels vfio_virqfd is built into vfio and can be dropped).
- Blacklist host drivers for the GPU (for example nouveau or nvidia) by adding them to /etc/modprobe.d/blacklist.conf.
- Find the GPU with lspci -nn and note both the PCI addresses (for example 01:00.0 for the GPU and 01:00.1 for its audio function) and the vendor:device IDs shown in brackets.
- Bind the GPU to vfio-pci by adding an options vfio-pci ids=... line with those vendor:device IDs to /etc/modprobe.d/vfio.conf (or use the driver_override mechanism on the host), then run update-initramfs -u.
- For a VM, edit /etc/pve/qemu-server/<vmid>.conf and add:
  hostpci0: 01:00.0,pcie=1,x-vga=1
  hostpci1: 01:00.1
- If you want USB passthrough for keyboard and mouse, use Proxmox USB device passthrough (usbN: host=... entries) or pass through a dedicated USB controller. A consolidated command sketch for this whole section follows the list.
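Putting the host-side steps together, this is roughly what I run on an Intel box. The GRUB line, the vendor:device IDs (10de:1b80,10de:10f0) and the VM ID 101 are placeholders from my example, not values to copy blindly:

  # 1. IOMMU in /etc/default/grub (AMD boxes use amd_iommu=on), then regenerate the boot config
  #    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
  update-grub

  # 2. Load the VFIO modules at boot
  printf '%s\n' vfio vfio_iommu_type1 vfio_pci vfio_virqfd >> /etc/modules

  # 3. Reserve the GPU for vfio-pci and keep host drivers away from it
  echo "options vfio-pci ids=10de:1b80,10de:10f0" > /etc/modprobe.d/vfio.conf
  echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
  update-initramfs -u && reboot

  # 4. After the reboot, confirm IOMMU is active and attach the GPU to VM 101
  dmesg | grep -e DMAR -e IOMMU
  qm set 101 --hostpci0 01:00.0,pcie=1,x-vga=1 --hostpci1 01:00.1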
Testing GPU Functionality
- Start the VM and install guest OS drivers.
- For Windows, install the correct GPU driver from the vendor site. If the VM blue-screens, check host driver blacklists and vfio binding.
- Run a GPU benchmark or game to confirm full acceleration. Check Device Manager for the GPU and ensure no yellow warnings.
- From the host, confirm the GPU is bound to vfio-pci with lspci -k.
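A quick way to confirm the host has released the card (01:00.0 and 01:00.1 being my example slot):

  # Both GPU functions should report "Kernel driver in use: vfio-pci"
  lspci -nnk -s 01:00.0
  lspci -nnk -s 01:00.1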
Troubleshooting Common Issues
- VM boots but the Nvidia driver in the guest shows error 43. Older consumer drivers (before the 465 series) refused to load when they detected a hypervisor; updating the guest driver or hiding the KVM signature from the guest usually resolves it (a sketch follows this list). On a few cards a hardware change may still be the only option.
- IOMMU groups too large: move GPUs to a different slot, try a different motherboard, or accept limited passthrough.
- GPU audio device shows but no sound: pass both audio and display functions of the GPU.
- Persistent host access needed for one GPU: assign a different GPU as the host console or use a cheap GPU for the host and keep discrete cards for guests.
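For the error 43 case, the workaround I have seen most often is to stop advertising the hypervisor to the guest. On my setup that meant a line like the one below in the VM's /etc/pve/qemu-server/<vmid>.conf; treat it as a starting point rather than a guaranteed fix:

  # Hide the KVM signature from the guest so older GeForce drivers will initialise
  cpu: host,hidden=1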
Scaling for Multiple Users
- I set up one test VM, validate passthrough, then repeat the process for each additional card and VM (a short clone sketch follows this list).
- Track PCIe lanes and power. A single CPU with limited lanes will throttle performance once multiple GPUs are fitted.
- Use identical GPU models where possible to simplify driver handling.
- Assign GPUs to separate PCIe root ports where possible so IOMMU groups stay clean.
- If hardware or licence limits make full passthrough impossible, consider a mixed approach: one or two full-passthrough VMs and several lighter VMs using GPU-accelerated encoding or CPU-only profiles.
- For thin-client access in a homelab gaming setup, give each guest a static IP, use RDP or Parsec for the session, and test input latency on the actual thin-client hardware before rolling it out.
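Once the first VM is validated, I clone it per GPU instead of rebuilding each guest by hand. The VM IDs and PCI addresses below are placeholders:

  # Full clone of the working VM, then point the copy at the second card
  qm clone 101 102 --name gaming-vm-2 --full
  qm set 102 --hostpci0 02:00.0,pcie=1,x-vga=1 --hostpci1 02:00.1

If cloning with a hostpci entry attached gives trouble, clone from a copy made before the passthrough lines were added and set the hostpci entries afterwards.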
Final practical notes
- Start small: one GPU and one VM. Confirm Proxmox configuration, GRUB options and vfio binding before adding more cards.
- Keep a quick recovery plan: store a backup of host configs and a Proxmox snapshot of any VM before major changes.
- Measure real-world load: check CPU, PCIe lane usage and temperatures once multiple guests run simultaneously.
I documented a simple multi-GPU, multi-user layout in a home project called ProxBi. It started from the same single-server idea: multiple GPUs, multiple VMs, thin clients for access. Use that pattern to plan cabling, cooling and power before buying parts.