Passing GTX 1060 Through To Windows VM

I'm building a homelab server running Proxmox with a Windows 10 VM for daily use, and I wanted to use PCI passthrough to give the VM access to the GTX 1060 graphics card installed in the server. Here's what I had to do to get past the infamous “code 43” error that the Nvidia driver throws when you pass a consumer-grade card through to a Windows VM. Credit for the first three points goes to this post on the vfio-users mailing list.

  1. Include “kvm=off” in your KVM “-cpu” option
    • In Proxmox, this is done automatically when you specify the “x-vga=1” option for your PCI passthrough line in the VM config (e.g. “hostpci0: 03:00,pcie=1,x-vga=1”; see an explanation for “pcie=1” below).
    • You can also explicitly add the “hidden=1” option to your “cpu” config line (e.g. “cpu: host,hidden=1”).
  2. Rename the hypervisor vendor ID
    • In Proxmox, this is also done automatically when you specify “x-vga=1”. “proxmox” is used as the new vendor ID.
  3. Enable MSI (Message Signaled Interrupts)
    • You can follow this popular guide to add the “MSISupported” entries to the registry.
    • MSI needs to be enabled for both the graphics device and the audio device. For the audio device, don't use the “Device Instance Path” parameter to locate the registry key, because that parameter is under “HDAUDIO\…”; instead, you can usually find the “PCI\…” key for the audio device right next to the one for the graphics device, with the same “VEN” vendor ID.
      • edit: In my case the keys were HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_1C03&SUBSYS_85AE1043&REV_A1\x&xxxxxxxx&x&xxxx\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties\MSISupported for video and HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_10F1&SUBSYS_85AE1043&REV_A1\x&xxxxxxxx&x&xxxx\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties\MSISupported for audio. Setting both values to 1 enables MSI.
    • In Proxmox, in order to truly enable MSI, I had to switch the VM machine type to “q35” (by adding a “machine: q35” line to the VM config file) and enable PCI-e mode for the passthrough (by using the “pcie=1” option mentioned above).
    • Once MSI is truly enabled, you can see the negative-numbered interrupts in the Windows Device Manager, and also see “MSI: Enable+” in the host's “lspci -v” output.
  4. Disable memory ballooning
    • The memory ballooning feature and/or the VirtIO balloon Windows driver don't seem to play well with the Nvidia driver, so disable ballooning in the VM configuration.
    • It may also help to uninstall the VirtIO balloon device/driver in Windows completely, and then restart the VM a couple of times.
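Putting the Proxmox-side changes from the steps above together, the relevant lines of the VM config file end up looking roughly like this. This is a sketch of the relevant lines only; the path /etc/pve/qemu-server/<vmid>.conf and the PCI address 03:00 (from my setup) are examples, so adjust them for your system:

```
# /etc/pve/qemu-server/<vmid>.conf — relevant lines only
machine: q35                      # q35 machine type, needed for PCIe mode (step 3)
cpu: host,hidden=1                # hides the hypervisor from the guest (step 1)
hostpci0: 03:00,pcie=1,x-vga=1    # passthrough in PCIe mode; x-vga=1 also implies
                                  # kvm=off and the vendor ID rename (steps 1-2)
balloon: 0                        # disable memory ballooning (step 4)
```

Note that x-vga=1 makes the explicit hidden=1 redundant, but having both does no harm.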
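For step 3, instead of editing the registry by hand, the MSISupported values can also be applied with a .reg file. Below is a sketch built from the device IDs in my example above; the “x&xxxxxxxx&x&xxxx” instance-path segment is a placeholder that you have to read off your own system (e.g. from the graphics device's key in regedit):

```
Windows Registry Editor Version 5.00

; Replace x&xxxxxxxx&x&xxxx with the instance path shown on your system.

; Graphics device (DEV_1C03)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_1C03&SUBSYS_85AE1043&REV_A1\x&xxxxxxxx&x&xxxx\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001

; Audio device (DEV_10F1)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_10DE&DEV_10F1&SUBSYS_85AE1043&REV_A1\x&xxxxxxxx&x&xxxx\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001
```

Keeping such a file around also makes it quick to re-apply the entries after a driver update recreates the keys.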

With these steps, I finally had a working passed-through graphics card in the Windows VM. Note that every time you update the graphics driver in the VM, you may have to repeat the steps to re-enable MSI, because new drivers create new registry keys that don't have the “MSISupported” entries. edit: A full Windows system update may also update the drivers automatically and reset the MSI flags; if you get stuttering audio after an update, this may be the cause.


· 2017/04/02 22:03

I thought you might be interested in the utility mentioned in post #69 in this thread.

This utility creates the registry entries that you've created manually.

Thanks for your post!

· 2017/06/28 10:44

FWIW, I had to reboot both the host and guest system in order to get MSI fully enabled. After that, I was able to see the negative-numbered interrupts in the Windows Device Manager, and the “MSI: Enable+” in the host's “lspci -v” output.

You may be able to do it dynamically by rmmod-ing and modprobe-ing the vfio modules, but it's probably cleaner to just do a full reboot.
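The lspci check mentioned above is easy to script. A minimal sketch, with one assumption: check_msi is a helper name I made up, which reads lspci -v text on stdin. The demo call pipes in canned sample output so the snippet is self-contained; on a real host you'd run lspci -vs 03:00.0 | check_msi instead (03:00 being the GPU address from the hostpci0 line in the post).

```shell
#!/bin/sh
# Minimal sketch: decide from `lspci -v` output whether MSI is active.
# check_msi is a hypothetical helper; it reads lspci text on stdin.
check_msi() {
  if grep -q 'MSI: Enable+'; then
    echo "MSI enabled"
  else
    echo "MSI disabled"
  fi
}

# Self-contained demo with canned lspci output; on a real host run:
#   lspci -vs 03:00.0 | check_msi
printf 'Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+\n' | check_msi
```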

· 2017/10/02 02:44


Still getting code 43; is there a way to troubleshoot this?

Jim Chen
· 2017/10/02 16:10

@ohmanthisistakingsomuchtime: I don't know of any good way to troubleshoot it, unfortunately, because the error code can have a variety of causes. You can double-check that MSI is really enabled, but beyond that, it will probably involve a lot of trial and error.

· 2017/12/25 22:55

Greetings! This thread seems to be the closest to what I'm looking for. Feel free to point me to another thread if appropriate.

Background: I've got a host laptop running Windows 10 x64 Pro with Hyper-V enabled. The laptop has a discrete Nvidia GTX 1060 graphics card.

I've created a local Hyper-V Windows 10 Pro x64 guest OS, configured to use the RemoteFX video adapter.

Issue: I'm attempting to install software on the guest OS that is looking for a “compatible” video subsystem, and it's failing to run because it can't find one. The same software runs on the host without issue.

Question: Is there a way to “spoof” software installed in a guest OS into believing it is communicating with the GPU installed in the host?

Thanks in advance for any/all assistance!