I recently set up GPU passthrough for an Ubuntu 24.04 VM on Proxmox VE 9.1.1. I’ll walk through the practical steps I used, with the exact commands and checks that mattered. The goal is a working virtual machine that owns an AMD Radeon device. I cover what to check before you start, the key configuration edits on the host, the minimal VM changes in Proxmox, and the usual troubleshooting checks if the guest or host refuses to boot.
Start by confirming hardware and firmware. You need a CPU and motherboard that support IOMMU and PCIe passthrough. In the BIOS/UEFI, enable SVM and the IOMMU (AMD-Vi) on AMD systems, or VT-d on Intel. On the Proxmox host, check that the IOMMU is visible: run dmesg | grep -e IOMMU -e AMD-Vi. Add the kernel parameters for AMD in /etc/default/grub by editing GRUB_CMDLINE_LINUX_DEFAULT to include amd_iommu=on iommu=pt, then run update-grub and reboot. Load the VFIO modules automatically by adding vfio, vfio_pci, and vfio_iommu_type1 to /etc/modules. Find the GPU and its audio function with lspci -nn | grep -i radeon or lspci -nn | grep -i vga. Note the PCI addresses and vendor:device ids from the [vendor:device] output; you will use those for binding. If the GPU is the host display, do not blacklist the amdgpu driver unless you have a spare display output or remote access: blacklisting will leave the host without a display and can cause hangs.
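The host-side checks and edits above look like this in practice. This is a sketch of what I ran as root; the exact GRUB_CMDLINE_LINUX_DEFAULT contents depend on what your line already holds:

```shell
# Confirm the IOMMU is active (look for AMD-Vi / interrupt remapping lines)
dmesg | grep -e IOMMU -e AMD-Vi

# In /etc/default/grub, extend the existing line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
# then apply it and reboot:
update-grub && reboot

# Load the VFIO modules at boot
cat >> /etc/modules <<'EOF'
vfio
vfio_pci
vfio_iommu_type1
EOF

# Locate the GPU and its audio function; note the [vendor:device] ids
lspci -nn | grep -i -e vga -e radeon -e audio
```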
Bind the device to vfio-pci and attach it to the VM. Create a small file /etc/modprobe.d/vfio.conf containing options vfio-pci ids=VENDOR:DEVICE[,VENDOR:DEVICE]. Replace VENDOR:DEVICE with the ids you got from lspci. Run update-initramfs -u and reboot so vfio-pci claims the device at boot. If you prefer manual binding for testing, echo the vendor and device ids (space-separated hex, no colon) to /sys/bus/pci/drivers/vfio-pci/new_id. In Proxmox, use the qm tool to attach the PCI device to the VM. For example: qm set 100 --hostpci0 01:00.0,pcie=1,multifunction=on, where 100 is the VMID and 01:00.0 is the GPU PCI address from lspci. Use --bios ovmf for UEFI guests and set the machine type to q35 for better PCIe handling. Inside the Ubuntu 24.04 guest, install the amdgpu driver package so the guest can use the AMD Radeon card. If the GPU provides audio, pass through both the GPU and its HDMI/DisplayPort audio function as separate hostpci entries.
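Put together, the binding and attachment steps can be sketched as below. The 1002:xxxx ids and the 01:00.x addresses are placeholders from my box; substitute the values your own lspci -nn reported:

```shell
# Bind the GPU and its audio function to vfio-pci at boot
echo "options vfio-pci ids=1002:73bf,1002:ab28" > /etc/modprobe.d/vfio.conf
update-initramfs -u && reboot

# Or bind manually for a quick test (space-separated hex, no colon)
echo "1002 73bf" > /sys/bus/pci/drivers/vfio-pci/new_id

# Attach the devices to VM 100: q35 machine, OVMF firmware,
# GPU and audio function as separate hostpci entries
qm set 100 --machine q35 --bios ovmf
qm set 100 --hostpci0 01:00.0,pcie=1,multifunction=on
qm set 100 --hostpci1 01:00.1,pcie=1
```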
Typical failure modes and concrete checks. If the host hangs during boot after enabling IOMMU, check dmesg for errors about IOMMU groups or driver binding. Run find /sys/kernel/iommu_groups -type l to map devices to groups. If the GPU shares an IOMMU group with crucial devices, passthrough will be blocked unless you split the groups at the hardware level (for example, a different PCIe slot) or use the ACS override patch, which has security implications. If the VM starts but the guest shows a black screen, confirm the VFIO driver owns the card on the host with lspci -k and look for vfio-pci in the "Kernel driver in use" line. Inside the guest, use lspci -k and dmesg | grep amdgpu to check driver initialisation. If the host loses control of the integrated GPU and freezes, revert the blacklist or move the passthrough to a discrete card. For complex AMD iGPU cases like a Radeon 8060S, expect extra quirks: some modern APUs need a vendor reset or newer kernels. If the guest cannot initialise the GPU after passthrough, try a newer kernel in the guest or update the OVMF/BIOS firmware files in Proxmox.
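The diagnostic commands above, collected in one place (the 01:00.0 address is a placeholder for your GPU):

```shell
# On the host: map every PCI device to its IOMMU group; the GPU and its
# audio function should ideally sit in a group of their own
find /sys/kernel/iommu_groups/ -type l | sort -V

# On the host: confirm vfio-pci owns the card
# (check the "Kernel driver in use" line)
lspci -k -s 01:00.0

# Inside the Ubuntu guest: check that amdgpu sees and initialises the card
lspci -k
dmesg | grep -i amdgpu
```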
A quick checklist to verify each change worked: 1) host shows IOMMU active in dmesg, 2) vfio modules are loaded, 3) lspci -k shows vfio-pci bound to the passed device, 4) qm config shows hostpci entries for the VM, 5) guest dmesg shows the amdgpu driver initialising. Follow those five checks and you will narrow any fault to either host binding, VM config, or guest driver. That is the shortest path to a usable GPU passthrough setup with Proxmox VE and Ubuntu 24.04.
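Check 3 is the one I script, since it is the check I repeat most often. Here is a small helper of my own (not a Proxmox tool) that succeeds when the lspci -k text on stdin shows vfio-pci as the driver in use; the printf line feeds it canned output so the snippet runs anywhere:

```shell
#!/bin/sh
# Hypothetical helper: exit 0 if stdin shows vfio-pci holding the device
check_vfio_bound() {
  grep -q 'Kernel driver in use: vfio-pci'
}

# Real use on the host would be:  lspci -k -s 01:00.0 | check_vfio_bound
# Canned example output so the snippet is self-contained:
printf 'Kernel driver in use: vfio-pci\n' | check_vfio_bound && echo "bound"
```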