Thursday, August 21, 2014

Dual VGA assignment, GeForce + Radeon

This is a little old, but it's a nice bit of eye candy to start things off.  On the left we have an Nvidia 8400GS and on the right an AMD HD5450.  These are both very, very low-end cards (but they're quiet and fanless).  A big bottleneck here is the hard disk; two VMs exercising an old spinning-rust drive can get pretty slow.  In future posts I'll show how we can improve this.

The script I used has evolved from what's shown in the video, so this will differ from what you can read off the screen.  Here's the current version:

#!/bin/bash -x
# bash rather than sh: the [ -v ] test below is a bashism


GPU=$(lspci -D | grep VGA | grep -i geforce | awk '{print $1}')
AUDIO=$(lspci -D -s $(echo $GPU | colrm 12 1) | grep -i audio | awk '{print $1}')
LIBVIRT_GPU=pci_$(echo $GPU | tr ':' '_' | tr '.' '_')
LIBVIRT_AUDIO=pci_$(echo $AUDIO | tr ':' '_' | tr '.' '_')

virsh nodedev-detach $LIBVIRT_GPU
virsh nodedev-detach $LIBVIRT_AUDIO


if [ -v PASS_AUDIO ]; then
    AUDIO_CMD="-device vfio-pci,host=$AUDIO,addr=9.1"
fi


MEM=2048                      # guest RAM in MB; adjust to taste
DISK=/path/to/guest-disk.img  # placeholder; point at your VM image
HUGE="-mem-path /dev/hugepages"

NEED=$(( $MEM / 2 ))
TOTAL=$(grep HugePages_Total /proc/meminfo | awk '{print $NF}')
WANT=$(( $TOTAL + $NEED ))

echo $WANT > /proc/sys/vm/nr_hugepages

AVAIL=$(grep HugePages_Free /proc/meminfo | awk '{print $NF}')
if [ $AVAIL -lt $NEED ]; then
    # not enough hugepages could be allocated; give back what we
    # added and run the guest without them
    echo $TOTAL > /proc/sys/vm/nr_hugepages
    HUGE=""
fi


qemu-system-x86_64 \
-enable-kvm -rtc base=localtime \
-m $MEM $HUGE -smp sockets=1,cores=2 -cpu host,hv-time,kvm=off \
-vga none -nographic \
-monitor stdio -serial none -parallel none \
-nodefconfig \
-device intel-hda -device hda-output \
-netdev tap,id=br0,vhost=on \
-device virtio-net-pci,mac=02:12:34:56:78:91,netdev=br0 \
-drive file=$DISK,cache=none,if=none,id=drive0,aio=threads \
-device virtio-blk-pci,drive=drive0,ioeventfd=on,bootindex=1 \
-device vfio-pci,host=$GPU,multifunction=on,addr=9.0,x-vga=on $AUDIO_CMD

# release the hugepages we allocated above
TOTAL=$(grep HugePages_Total /proc/meminfo | awk '{print $NF}')
RELEASE=$(( $TOTAL - $NEED ))
echo $RELEASE > /proc/sys/vm/nr_hugepages

virsh nodedev-reattach $LIBVIRT_GPU
virsh nodedev-reattach $LIBVIRT_AUDIO

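A note on the hugepage math above: it assumes the default 2MB hugepage size, so a guest with MEM megabytes of RAM needs MEM/2 pages on top of whatever is already allocated.  A quick sanity check of that arithmetic (the values here are illustrative, not from my system):

```shell
# Illustrative values: a 4096MB guest on a host with no preallocated hugepages
MEM=4096
TOTAL=0                 # pretend HugePages_Total is currently 0
NEED=$(( MEM / 2 ))     # 2MB pages, so MEM/2 pages are needed
WANT=$(( TOTAL + NEED ))
echo "$NEED $WANT"      # prints "2048 2048"
```

If your system uses 1GB hugepages instead, the divisor changes accordingly.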
Much of this is similar to what you'll find in the ArchLinux BBS thread, but there are some differences.  First, notice that I'm not using the vfio bind scripts found there; I'm using libvirt through virsh for that.  Next, note that I'm not using the QEMU Q35 chipset model.  The importance of Q35 has been largely exaggerated.  If you are mostly concerned with assigning GPUs to Windows guests, Windows seems perfectly happy using the default 440FX chipset model.  Linux guests won't like this, particularly with Radeon cards, because the driver blindly attempts to poke at the upstream PCIe root port.
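The translation the script does from an lspci-style address to a libvirt node device name is just character substitution, which you can check in isolation (the address below is an example, not necessarily yours):

```shell
# Example address in `lspci -D` format: domain:bus:device.function
GPU="0000:01:00.0"
# libvirt node device names replace ':' and '.' with '_', prefixed with "pci_"
LIBVIRT_GPU=pci_$(echo $GPU | tr ':' '_' | tr '.' '_')
echo $LIBVIRT_GPU   # prints "pci_0000_01_00_0"
```

The resulting name is what `virsh nodedev-detach` and `virsh nodedev-reattach` expect; `virsh nodedev-list` will show all of them.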

On the QEMU side of things, everything should be included in QEMU 2.1.  The last required piece was the kvm=off CPU option, which is necessary for Nvidia 340+ guest drivers.  On the kernel side, this setup requires only the i915 patch, since IGD is my host graphics.  That means DRI is disabled on the center (host) monitor and i915.enable_hd_vgaarb=1 is set on the kernel commandline.  The GeForce and Radeon devices are bound to pci-stub using kernel commandline options, e.g. pci-stub.ids=10de:0e0f,1002:aab0,10de:1280,1002:6611.  The only other kernel option necessary is intel_iommu=on.  I do not require the PCIe ACS override patch because one card is installed in a slot behind the processor-based (PEG) root port and the other in a slot behind the PCH root port.  I see many users on the ArchLinux thread using the option vfio_iommu_type1.allow_unsafe_interrupts=1.  In most cases this is entirely unnecessary, since most new processors support VT-d2 and therefore have interrupt remapping support.  AMD IOMMU users have always had hardware support for interrupt remapping, and any recent kernel can be configured to enable it.
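The pci-stub.ids= option takes a comma-separated list of vendor:device ID pairs, which you can read off `lspci -nn` for each GPU and its HDMI audio function.  A small sketch that assembles the option from a space-separated list (the IDs are the ones from my example above):

```shell
# Vendor:device IDs for both GPUs and their HDMI audio functions,
# as reported in the [xxxx:yyyy] suffix of `lspci -nn` output
IDS="10de:0e0f 1002:aab0 10de:1280 1002:6611"
# pci-stub.ids= wants them comma-separated on the kernel commandline
echo "pci-stub.ids=$(echo $IDS | tr ' ' ',')"
# prints "pci-stub.ids=10de:0e0f,1002:aab0,10de:1280,1002:6611"
```

Remember to include the audio function IDs too, or the audio devices will be claimed by the host's snd-hda-intel driver at boot.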

For mouse and keyboard in the guest, I use Synergy.


  1. Alex, thanks for this. Very interesting. Could you clarify if the 440FX model is preferred over Q35 model or does it not matter which one is used (apart from the issue you mention)? If the 440FX is preferred, why?

    1. 440FX currently has better libvirt support, so you'll see a number of users in the ArchLinux thread going to great lengths to set up root ports and attach the assigned device there, underneath the capabilities that libvirt provides. This leads to problems with file ACLs and required capabilities for the domain, and it all becomes a mess. Not to mention my latest post on using OVMF to avoid VGA routing issues, which doesn't yet support Q35. On the other hand, Linux can have trouble on 440FX with some hardware, based on assumptions in the driver. There's no single answer. Pick what works for you based on how you want to manage the VM and what guest you want to run.


Comments are not a support forum. For help with problems, please try the vfio-users mailing list.