VFIO - GPU Passthrough

This is an example of VFIO usage with an Nvidia GTX 1080 and an ATI Radeon 5450.
I am assuming Xorg usage; not tested under Wayland.
Based on the ArchWiki how-to for GPU passthrough.

Use case

Activate GPU passthrough only when needed. You may want to use the primary GPU on the host, or you may not need the guest running all the time, for example when GPU passthrough is only needed to play some Windows-only game.

This use case has two states:

  1. no guest running → the host uses the monitors connected to all GPUs;
  2. guest running → the host uses the secondary monitors and the guest uses the primary monitors.

Tested Hardware

  • CPU: AMD FX-8350
  • GPU primary slot: AMD HD 5450
  • GPU secondary slot: Gigabyte NVIDIA GTX 1080 Windforce OC (passthrough)
  • Motherboard: ASUS CROSSHAIR V FORMULA-Z (BIOS version: 2201)
  • RAM: 32GB
  • Monitors: at least 2
    • 1 for host, connected to AMD HD 5450
    • 1 for guest, connected to NVIDIA GTX 1080
  • The boot cmdline used for this machine and configuration:

BOOT_IMAGE=/vmlinuz-linux root=/dev/mapper/main-root rw loglevel=3 quiet apparmor=1 lsm=lockdown,yama,apparmor,bpf amd_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=12
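
With amd_iommu enabled you can confirm that the IOMMU is active, and check that the passthrough GPU sits in its own group, with a small shell loop over sysfs:

for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo -e "\t$(lspci -nns "${d##*/}")"
  done
done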

Improve general usage

  • If you have plenty of RAM and don't always need it on the host, use static huge pages for better performance (libvirt snippet after this list).
  • If your VM is still using an NVIDIA driver prior to version 465, it needs changes to fix error 43 (see the sketch after this list).
  • Control mouse and keyboard with evdev (example after this list).
    • Swapping the mouse/keyboard between host and guest is as simple as LCtrl + RCtrl.
  • Control mouse and keyboard using Barrier.
    • With this method it is recommended to have a virtio interface (NAT or isolated network), i.e. some fast host<->guest network.
    • You can configure hotkeys to execute on the host when pressed inside the guest, for example for push-to-talk applications running on the host while playing a game on the guest.
  • Audio using the virtual sound card scream (configuration sketch after this list).
    • Configure one IVSHMEM device.
    • Install the driver on the Windows guest.
    • Run the scream receiver on the host.
    • more info
  • Audio using 5.1 usb-audio.
  • You can use the MSI utility v3 to ensure MSI is working for the virtualized devices, checking that their IRQs are negative.
  • In this example the VM has 2 disks, one as an LVM LV and another as a raw image for demonstration purposes. LVM allows us to snapshot, even with the raw format, and is preferred (snapshot example after this list).
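
To actually back the guest with the 1G pages reserved on the boot cmdline above, the libvirt domain XML needs a memoryBacking element. A minimal sketch, sized to the 12 pages reserved at boot:

<memory unit='GiB'>12</memory>
<memoryBacking>
  <hugepages/>
</memoryBacking>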
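
For NVIDIA drivers prior to 465, the usual error 43 fix is hiding the hypervisor from the guest in the domain XML; the vendor_id value is arbitrary (up to 12 characters):

<features>
  <hyperv>
    <vendor_id state='on' value='0123456789ab'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>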
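
For evdev, recent libvirt (7.4+) can pass input devices straight from the domain XML; grabToggle='ctrl-ctrl' gives the LCtrl + RCtrl swap mentioned above. A sketch with placeholder device paths, replace them with your own from /dev/input/by-id/:

<input type='evdev'>
  <source dev='/dev/input/by-id/MY-KEYBOARD-event-kbd' grab='all' grabToggle='ctrl-ctrl' repeat='on'/>
</input>
<input type='evdev'>
  <source dev='/dev/input/by-id/MY-MOUSE-event-mouse'/>
</input>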
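
For scream over IVSHMEM, declare the shared-memory device in the domain XML and point the receiver at the resulting file on the host. A sketch, assuming a scream build with IVSHMEM support:

<shmem name='scream-ivshmem'>
  <model type='ivshmem-plain'/>
  <size unit='M'>2</size>
</shmem>

Then, on the host, after the guest has started:

scream -m /dev/shm/scream-ivshmem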
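
For the LVM disk, a snapshot can be taken before risky changes inside the guest and merged back if something breaks. A sketch, assuming a hypothetical LV named win10 in the volume group main:

lvcreate --snapshot --size 10G --name win10-snap main/win10
# to roll back; the merge completes once the origin LV is next deactivated
lvconvert --merge main/win10-snap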

About the example configuration

  • win10.xml = VM settings with GPU Passthrough.
  • win10-spice.xml = a configuration I made to be able to modify the same VM without needing GPU passthrough, using spice.
  • nvidia_to_guest.sh = script to enable the vfio module.
  • nvidia_to_host.sh = script to disable the vfio module.
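
To load an example definition into libvirt (assuming the default qemu:///system connection), define it with virsh; the spice variant can be defined the same way when you want to work on the VM without passthrough:

virsh define win10.xml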

The systemctl start/stop is required because Xorg needs to be restarted to unbind the nvidia modules.
If needed, we can use this time while Xorg is not running to change the configuration files, to ensure the system will work without the passthrough GPU. In both scripts I manipulate the 10-nvidia.conf symbolic link, so we have:

  • host with both GPUs → /etc/X11/xorg.conf.d/{10-nvidia.conf, 20-radeon.conf}
  • host running GPU Passthrough → /etc/X11/xorg.conf.d/20-radeon.conf

You may need a similar setup if Xorg can't autodetect the GPUs and displays in both states (with and without GPU passthrough).
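
The flip itself can be as simple as creating or removing the link. A sketch, assuming the real file is kept outside xorg.conf.d (hypothetical path):

# both GPUs on the host
ln -sf /etc/X11/10-nvidia.conf /etc/X11/xorg.conf.d/10-nvidia.conf
# GPU handed to the guest
rm -f /etc/X11/xorg.conf.d/10-nvidia.conf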

If Xorg needs .conf files, how to create the setup

  1. Write down the Xorg ".conf" files that work for you with both GPUs. Think of this as state 1, "Xorg without guest".
  2. Stop the display-manager service.
  3. Load the vfio module for the passthrough GPU using the virsh commands below, with your PCI addresses obtained from lspci (08:00.0 and 08:00.1 in this example).

virsh nodedev-detach pci_0000_08_00_0
virsh nodedev-detach pci_0000_08_00_1

  4. Create the needed Xorg configuration for your secondary GPU only.
  5. Now create scripts that recreate both states (a sketch follows this list).
    You can check both the nvidia_to_guest and nvidia_to_host scripts; notice the usage of a symbolic link, since it greatly simplifies this setup.
  6. Run the scripts when you want to pass the GPU through or return it to the host.
    There are many ways to run both scripts; I prefer manual execution, for example:
    • using GNU Screen / tmux;
    • going to another tty (e.g. Ctrl + Alt + F2) and running them from there.
    The script needs to survive the display-manager restart.
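
As a reference, a minimal sketch of what nvidia_to_guest.sh can do (PCI addresses from this example; adjust names and paths to your setup). nvidia_to_host.sh mirrors it: reattach the devices, reload the nvidia modules, recreate the symlink and restart the display manager.

#!/bin/sh
set -e
systemctl stop display-manager
rm -f /etc/X11/xorg.conf.d/10-nvidia.conf   # leave only 20-radeon.conf
modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia
virsh nodedev-detach pci_0000_08_00_0
virsh nodedev-detach pci_0000_08_00_1
modprobe vfio-pci
systemctl start display-manager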

Workflow - Common usage

  1. Note: I run both scripts inside a tmux session.
  2. Run nvidia_to_guest.sh as root or with sudo.
    • This will kill every process depending on Xorg.
  3. The display manager should restart; we log in again.
  4. Start the GPU passthrough VM (you can use virt-manager).
  5. The VM is running... do whatever you want.
    • Use evdev / Barrier / ... to control host and guest.
  6. Shut down the VM.
  7. Run nvidia_to_host.sh as root or with sudo.
    • This will kill every process depending on Xorg.
  8. We are back at the start, with both GPUs available to the host.
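
The whole cycle from a terminal, assuming the domain is named win10 (adjust to your VM name):

sudo ./nvidia_to_guest.sh
virsh start win10
# ... use the guest ...
virsh shutdown win10
sudo ./nvidia_to_host.sh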

Troubleshooting

If the system crashes while running the VM and we are forced to reboot:

When you need to change Xorg configuration files between the "no PCI passthrough" and "PCI passthrough" states, on the first boot after the crash the nvidia modules will load, but Xorg will not use the Nvidia GPU because there is no '.conf' file that references it. To fix it, simply run the nvidia_to_host.sh script.
