I use Proxmox CLI GPU Passthrough every time I need raw GPU power inside a VM. This guide strips the noise and gives the commands and checks that actually matter. Read it top to bottom, then copy the commands into your shell. I assume a single Proxmox host running a recent Proxmox VE release and root or sudo access.
Start by enabling IOMMU on the host. Edit GRUB and add the kernel parameters for your CPU: intel_iommu=on iommu=pt for Intel, or amd_iommu=on iommu=pt for AMD. Edit /etc/default/grub, append the parameters to the GRUB_CMDLINE_LINUX_DEFAULT line, then run update-grub and reboot. After the reboot, confirm IOMMU is active with dmesg | grep -e DMAR -e IOMMU. Find your GPU and its device IDs with lspci -nn | grep -Ei 'vga|3d|nvidia|amd' (plain grep needs -E for the alternation). Note the slot, for example 0000:01:00.0, and the vendor:device IDs, for example 10de:1b80. To see which IOMMU group the card sits in, and whether it shares that group with other devices, list the groups with: for g in /sys/kernel/iommu_groups/*; do echo "Group $(basename $g)"; ls $g/devices; done. Be aware that poor IOMMU grouping is common on consumer motherboards.
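A minimal sketch of the host-side setup and checks, assuming a GRUB-booted host with the stock "quiet" command line and an Intel CPU; swap in amd_iommu=on for AMD, and treat the PCI address and IDs above as examples, not values to copy blindly.

  # /etc/default/grub -- target state of the default command line
  GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

  update-grub && reboot

  # After reboot: confirm IOMMU is active and locate the GPU
  dmesg | grep -e DMAR -e IOMMU
  lspci -nn | grep -Ei 'vga|3d|nvidia|amd'

  # List IOMMU groups and their members
  for g in /sys/kernel/iommu_groups/*; do
    echo "Group $(basename "$g"):"
    ls "$g/devices"
  done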
Bind the GPU to vfio-pci on the host so the host kernel never claims it. Create /etc/modprobe.d/vfio.conf with a line such as options vfio-pci ids=10de:1b80,10de:10f0, substituting your own IDs, and make sure the vfio modules load at boot by listing vfio, vfio_iommu_type1 and vfio_pci in /etc/modules. Blacklist the host drivers where needed: create /etc/modprobe.d/blacklist.conf with blacklist nouveau and blacklist nvidia if the host loads them. Regenerate the initramfs after any change under /etc/modprobe.d or /etc/modules: update-initramfs -u -k all. Reboot. Confirm the GPU is bound to vfio-pci with lspci -nnk -s 01:00.0 and look for Kernel driver in use: vfio-pci.
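The same steps as a short command sequence, using the example IDs and slot from above; substitute your own values before running any of it.

  # Example vfio binding; IDs 10de:1b80,10de:10f0 are placeholders
  echo 'options vfio-pci ids=10de:1b80,10de:10f0' > /etc/modprobe.d/vfio.conf
  printf 'blacklist nouveau\nblacklist nvidia\n' > /etc/modprobe.d/blacklist.conf
  printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules
  update-initramfs -u -k all
  reboot
  # After reboot, expect "Kernel driver in use: vfio-pci"
  lspci -nnk -s 01:00.0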
Assign the GPU to the VM via the CLI. Either edit the VM config at /etc/pve/qemu-server/VMID.conf and add a line like hostpci0: 0000:01:00.0,pcie=1 or run qm set VMID --hostpci0 0000:01:00.0,pcie=1. For devices with audio functions add hostpci0: 0000:01:00.0,pcie=1,x-vga=1 if the guest needs VGA. Set the CPU model to host to reduce emulation issues: qm set VMID --cpu host. Allocate enough RAM and disable CPU hotplug for best stability. Start the VM with qm start VMID. Inside the guest check lspci and the guest OS GPU drivers. For Windows guests use Device Manager to confirm the device appears. For Linux guests run lspci -nn and the GPU driver utilities.
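Put together, the assignment looks roughly like this; VMID 100, the memory size and the PCI address 0000:01:00 are placeholders taken from the examples above.

  # Assumes VMID 100 and the GPU at 0000:01:00 (placeholders)
  qm set 100 --machine q35
  qm set 100 --cpu host
  qm set 100 --hostpci0 0000:01:00,pcie=1,x-vga=1   # whole device (GPU + audio), primary display
  qm set 100 --memory 16384
  qm start 100
  # Inside a Linux guest, confirm the card is visible:
  lspci -nn | grep -Ei 'vga|3d|nvidia|amd'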
Troubleshoot the common problems quickly. If the host still claims the device, re-check lspci -nnk on the host and confirm vfio-pci is the driver in use. If the guest shows the device but the drivers fail, check the Device Manager error code in Windows or dmesg in Linux. If IOMMU grouping blocks passthrough, pcie_acs_override=downstream (or downstream,multifunction) on the kernel command line can split groups on many motherboards; use it only if you accept the weaker isolation. Consumer Nvidia cards used to refuse to initialise inside a VM (Code 43 in Windows) when the driver detected a hypervisor; drivers from the R465 series onward allow passthrough, but with older drivers hide the hypervisor from the guest, add x-vga=1, and pass through the GPU's audio function too. If the GPU fails to reset on VM reboot, try a full host power-cycle or test the card in another slot.
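For the last two cases, the fixes look roughly like this; VMID 100 is a placeholder, and the ACS override is a last resort because it weakens isolation between the split groups.

  # ACS override: append to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then update-grub and reboot
  #   pcie_acs_override=downstream,multifunction

  # Hide the hypervisor from older Nvidia guest drivers
  qm set 100 --cpu host,hidden=1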
For performance and reliability, follow these rules. Use --cpu host for the VM. Give the VM dedicated CPU cores and avoid overcommitting the host. Back the VM's memory with hugepages for heavy GPU workloads. Disable memory ballooning for GPU VMs and turn off KSM on the host. Keep host drivers minimal and avoid using the GPU on the host. Test each change on one VM before applying it to others.
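A tuning sketch for the same placeholder VMID 100; the core count and hugepage size are assumptions to adjust for your hardware.

  qm set 100 --balloon 0             # disable memory ballooning
  qm set 100 --hugepages 2           # back guest RAM with 2 MiB hugepages (1024 for 1 GiB pages)
  qm set 100 --cores 8 --sockets 1   # dedicated cores, no overcommit
  systemctl disable --now ksmtuned   # stop KSM scanning on the host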
Final takeaways: enable IOMMU and confirm groups, bind the GPU to vfio-pci on the host, assign via qm set or the VM config, and validate inside the guest. Keep commands short and repeatable. Proxmox CLI GPU Passthrough is blunt but reliable when done methodically.