I get the following error when attempting to start the guest:
vfio: error, group $GROUP is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
There are more devices in the IOMMU group than you're assigning; they all need to be bound to the vfio bus driver (vfio-pci) or pci-stub for the group to be viable. See my previous post about IOMMU groups for more information. To reduce the size of the IOMMU group, install the device into a different slot, try a platform that has better isolation support, or (at your own risk) bypass ACS using the ACS override patch.
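For example, assuming the device you want to assign is at PCI address 0000:01:00.0 (a made-up address, substitute your own), you can list the rest of its IOMMU group through sysfs and then check which driver each listed device is currently bound to:

ls /sys/bus/pci/devices/0000:01:00.0/iommu_group/devices/
lspci -nnk -s 01:00.0

Each endpoint device that shows up in that directory needs to end up bound to vfio-pci or pci-stub as described above.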
I've applied the ACS override patch, but it doesn't work. The IOMMU group is the same regardless of the patch.
The ACS override patch needs to be enabled with kernel command line options. The patch file adds the following documentation:
pcie_acs_override =
        [PCIE] Override missing PCIe ACS support for:
    downstream
        All downstream ports - full ACS capabilities
    multifunction
        All multifunction devices - multifunction ACS subset
    id:nnnn:nnnn
        Specific device - full ACS capabilities
        Specified as vid:did (vendor/device ID) in hex

The option pcie_acs_override=downstream is usually sufficient to split IOMMU grouping caused by lack of ACS at a PCIe root port. Also see my post discussing IOMMU groups, ACS, and why use of this patch is potentially dangerous.
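As a concrete illustration (file locations and tools vary by distribution, so treat this only as a sketch), on a GRUB-based system the option is appended to the kernel command line in /etc/default/grub and the configuration is regenerated before rebooting:

GRUB_CMDLINE_LINUX_DEFAULT="... pcie_acs_override=downstream"

followed by update-grub (or grub2-mkconfig -o /boot/grub2/grub.cfg, depending on the distribution).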
I have Intel host graphics, when I start the VM I don't get any output on the assigned VGA monitor and my host graphics are corrupted. I also see errors in dmesg indicating unexpected drm interrupts.
You're doing VGA assignment with IGD and have failed to apply or enable the i915 VGA arbiter patch. The patch needs to be enabled with i915.enable_hd_vgaarb=1 on the kernel commandline. See also my previous post about VGA arbitration and my previous post about using OVMF as an alternative to VGA assignment.
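For example (a sketch; this assumes you are already booting the patched kernel), the option goes on the kernel command line with your other options:

i915.enable_hd_vgaarb=1

The equivalent module-option form, options i915 enable_hd_vgaarb=1 in a file under /etc/modprobe.d/, also works if you prefer to keep module settings there; remember to regenerate your initramfs if i915 is loaded from it.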
I have non-Intel host graphics and have a problem similar to the previous question.
You need the other VGA arbiter patch. This one is simply a bug in the VGA arbiter logic. There are no kernel command line options to enable it.
I have Intel host graphics; I applied and enabled the i915 patch, and now I don't have DRI support on the host. How can I fix this?
See my previous post about VGA arbitration to understand why this happens. This is a known side effect of enabling VGA arbitration on the i915 driver. The only solution is to use a host graphics device that can properly opt out of VGA arbitration or to avoid VGA altogether by using a legacy-free guest.
How can I prevent host drivers from attaching to my assigned devices?
The easiest option is to use the pci-stub.ids= option on the kernel commandline. This parameter takes a comma-separated list of PCI vendor:device IDs (found via lspci -n) for devices to be claimed by pci-stub during boot. Note that if vfio-pci is built statically into the kernel, vfio-pci.ids= can be used instead. There is currently no way to select only a single device if there are multiple matches for the same vendor:device ID.
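For example, suppose lspci -n shows the GPU you want to assign and its audio function with (hypothetical) IDs 10de:13c2 and 10de:0fbb:

01:00.0 0300: 10de:13c2 (rev a1)
01:00.1 0403: 10de:0fbb (rev a1)

Adding the following to the kernel commandline claims both functions with pci-stub at boot:

pci-stub.ids=10de:13c2,10de:0fbb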
Do I need the NoSnoop patch?
No, it was deprecated long ago.
Do I need vfio_iommu_type1.allow_unsafe_interrupts=1?
Probably not. Try vfio-based device assignment without it; if it fails, look in dmesg for this error:
No interrupt remapping support. Use the module param "allow_unsafe_interrupts" to enable VFIO IOMMU support on this platform

If, and only if, you see that error message do you need the module option. Also note that using it means you opt in to running vfio device assignment on a platform that does not protect against MSI-based interrupt injection attacks by guests. Only trusted guests should be run in this configuration. (Actually, I just wish this was a frequently asked question; common practice seems to be to blindly use the option without question.)
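If you do see the message and accept that trade-off, the option can be set on the kernel commandline as written in the question above, or (assuming vfio_iommu_type1 is built as a module, as it is on most distributions) in a file under /etc/modprobe.d/:

options vfio_iommu_type1 allow_unsafe_interrupts=1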
I use the nvidia driver in the host. When I start my VM nothing happens. What's wrong?
The nvidia driver locks the VGA arbiter and does not release it, causing the VM to stop on its first access to VGA resources. If this is not yet fixed in the nvidia driver release you're using, user-contributed patches can be found to avoid this problem.
I'm assigning an Nvidia card to a Windows guest and get a Code 43 error in device manager.
The Nvidia driver, starting with 337.88, identifies the hypervisor and disables itself when KVM is found. Nvidia claims this is an unintentional bug, but has no plans to fix it. To work around the problem, we can hide the hypervisor by adding kvm=off to the list of CPU options provided (QEMU 2.1+ required). libvirt support for this option is currently upstream.
Note that -cpu kvm=off is not a valid incantation of the cpu parameter; a CPU model such as host or SandyBridge must also be provided, e.g. -cpu host,kvm=off.
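For reference, once your libvirt includes the upstream support noted above, the equivalent lives in the domain XML roughly as follows (a sketch; the <kvm> element goes inside your existing <features> block):

<features>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>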
Update: The above workaround is sufficient for drivers 337.88 and 340.52. With 344.11 and presumably later, the Hyper-V CPUID extensions supported by KVM also trigger the Code 43 error. Disabling these extensions appears to be sufficient to allow the 344.11 driver to work. This includes all of the hv_* options to -cpu. In libvirt, this includes:
<spinlocks state='on' retries='8191'/>
<timer name='hypervclock' present='yes'/>
Unfortunately removing these options will impose a performance penalty as these paravirtual interfaces are designed to improve the efficiency of virtual machines.
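For context, here's a sketch of where those elements typically sit in the domain XML; your guest configuration will differ, and other <hyperv> features such as <relaxed> and <vapic> map to hv_* options as well:

<features>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
  </hyperv>
</features>
...
<clock offset='localtime'>
  <timer name='hypervclock' present='yes'/>
</clock>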