Virtually Gaming, Part 2: Evolution – Consolidation and Move to KVM

In the previous article in this series, I detailed the journey to my original configuration: a single host providing multiple gaming-capable virtual machines as a multi-seat workstation. But things have changed since then – many game distribution platforms such as Steam, GOG and Desura now have native Linux versions, and many games have been ported to run natively on Linux. The vast majority of those that haven’t now work perfectly under WINE.

Consequently, the ideal solution has changed as well. In the original configuration there were three seats on the system – two Windows VMs for gaming and one Linux VM for more serious use. At least one of the Windows VMs could now be removed, and its functionality replaced with WINE and native ports.

At the same time, KVM has advanced greatly in features and stability, and is now much better aligned with the requirements of this multi-seat workstation project. Perhaps most importantly, the latest QEMU provides a much better workaround for the issue I had to patch Xen’s hvmloader for: the max-ram-below-4g option to the -machine parameter. Setting this to 1GB comprehensively works around the IOMMU compatibility bug of the Nvidia NF200 PCIe bridges on the EVGA SR-2, without any negative side effects.

Even better, KVM also includes patches that neuter the Nvidia driver’s ability to detect that it is running in a VM (add kvm=off to the list of options passed to the -cpu parameter). That means modifying the GPU firmware or hardware to make it appear as a Quadro or Tesla card is no longer required for using it in a virtual machine. This is a massive advantage over the original Xen solution for most people.
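
For reference, these two options look like this on a raw QEMU command line (an illustrative fragment only – in my setup they are injected via libvirt, as shown in the full configuration below):

/usr/libexec/qemu-kvm \
    -machine pc-i440fx-2.2,accel=kvm,max-ram-below-4g=1G \
    -cpu host,kvm=off \
    [remaining options as generated by libvirt]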

Summary of the most significant changes:

  • Host system updated to EL7 (CentOS)
    Required to make running more recent kernels and Steam easier (no more need to build and maintain an additional package set, including glibc, to support Steam as on EL6). On the downside, this necessitates putting up with systemd.
  • Xen replaced by KVM
  • Windows 7 VM now uses UEFI instead of legacy BIOS
    This does away with all of the legacy VGA complications such as VGA arbitration. The OVMF UEFI firmware even downloads and executes the PCI devices’ own BIOS (option ROM) during the VM’s POST, which results in the full splash screen and even the UEFI BIOS configuration menus being available during VM boot on the external console.
  • XP x64 VM removed
    Superseded by native Linux game ports, with WINE for the rest (so far, every XP-compatible game I have tried works)

Some of the extra repositories I used for this are:

OVMF UEFI and SeaBIOS Firmware repository from here: https://www.kraxel.org/repos/

Mainline kernel from elrepo repository: http://elrepo.org/tiki/tiki-index.php

Bleeding edge QEMU (needed for the max-ram-below-4g option).
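
For the elrepo kernel, installation on EL7 goes roughly like this (a sketch following elrepo’s standard instructions – check their site for the current release RPM):

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml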

The full libvirt XML configuration file I use for QEMU is here:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>edi</name>
  <uuid>11111111-1111-1111-1111-111111111111</uuid>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <sysinfo type='smbios'>
    <bios>
      <entry name='vendor'>GENERIC</entry>
      <entry name='version'>GENERIC</entry>
      <entry name='date'>01/01/2014</entry>
      <entry name='release'>0.91</entry>
    </bios>
    <system>
      <entry name='manufacturer'>GENERIC</entry>
      <entry name='product'>GENERIC</entry>
      <entry name='version'>GENERIC</entry>
      <entry name='serial'>1</entry>
      <entry name='uuid'>11111111-1111-1111-1111-111111111111</entry>
      <entry name='sku'>GENERIC</entry>
      <entry name='family'>GENERIC</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.2'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='localtime'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' io='native'/>
      <source dev='/dev/zvol/normandy/edi'/>
      <target dev='vda' bus='virtio'/>
      <serial>1</serial>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:11:22:33'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <hostdev mode='subsystem' type='pci' managed='no'>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='no'>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='no'>
      <source>
        <address domain='0x0000' bus='0x0d' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-drive'/>
    <qemu:arg value='if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd'/>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,kvm=off'/>
    <qemu:arg value='-machine'/>
    <qemu:arg value='pc-i440fx-2.2,max-ram-below-4g=1G,accel=kvm,usb=off'/>
  </qemu:commandline>
</domain>

The reason for the qemu:commandline section is that libvirt, and especially virt-manager, do not understand all possible QEMU parameters. The ones they don’t support directly live in this section, to avoid errors and complaints from virsh and virt-manager in normal use.

You may also notice that there are some unusual sections and values in there, so let me touch upon them in groups.

Windows Activation and Associated Checks

When you first activate Windows with a key, it records several important details of the hardware in order to detect whether the same installation has been moved to another machine. Most licenses (e.g. OEM ones) are not transferable to another machine. So, to ensure that our installation stays portable (e.g. if we upgrade to a different hypervisor at a later date), we set the various values to something static, easily memorable and predictable, so that migrating the VM to another host will not cause deactivation issues. The important settings are below (these are not in all cases complete sections, only the fragments required for this purpose – see above for the full configuration):

<uuid>11111111-1111-1111-1111-111111111111</uuid>
<sysinfo type='smbios'>
  <bios>
    <entry name='vendor'>GENERIC</entry>
    <entry name='version'>GENERIC</entry>
    <entry name='date'>01/01/2014</entry>
    <entry name='release'>0.91</entry>
  </bios>
  <system>
    <entry name='manufacturer'>GENERIC</entry>
    <entry name='product'>GENERIC</entry>
    <entry name='version'>GENERIC</entry>
    <entry name='serial'>1</entry>
    <entry name='uuid'>11111111-1111-1111-1111-111111111111</entry>
    <entry name='sku'>GENERIC</entry>
    <entry name='family'>GENERIC</entry>
  </system>
</sysinfo>
<os>
  <smbios mode='sysinfo'/>
</os>
<devices>
  <disk type='block' device='disk'>
    <serial>1</serial>
  </disk>
</devices>
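
To sanity-check what the guest actually sees after changing these, the values can be queried from inside Windows with wmic (included out of the box on Windows 7):

wmic bios get manufacturer,version,releasedate
wmic csproduct get name,vendor,version,uuid,identifyingnumber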

Nvidia Bugs/Features Workarounds

The following sections are required in order to work around the NF200 PCIe bridge bugs (max-ram-below-4g=1G) and the Nvidia driver feature that disables GeForce GPUs in virtual machines (kvm=off):

<qemu:commandline>
  <qemu:arg value='-cpu'/>
  <qemu:arg value='host,kvm=off'/>
  <qemu:arg value='-machine'/>
  <qemu:arg value='pc-i440fx-2.2,max-ram-below-4g=1G,accel=kvm,usb=off'/>
</qemu:commandline>

CPU Configuration

<cpu>
  <topology sockets='1' cores='4' threads='1'/>
</cpu>

The reason this is important is that most non-server editions of Windows only allow up to two CPU sockets. By default, QEMU presents each CPU core as being on a separate socket, so no matter how many CPUs you give your Windows VM, while they will all show up in Device Manager, at most two will actually be used (you can verify this in Task Manager). The above configuration block instructs libvirt to tell QEMU to present four cores in a single CPU socket, so that all of them are usable in the Windows VM.
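
Under the hood, libvirt translates this topology element into QEMU’s -smp parameter; the equivalent raw command line fragment would be along the lines of:

-smp 4,sockets=1,cores=4,threads=1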

VFIO and Kernel Drivers

In my system I have two identical Nvidia GPUs. Numerically, the second one is the primary (host) GPU, and the first one is the one I am passing to the virtual machine. I am also passing the NEC USB 3.0 controller to the VM. This is the script I wrote (in /etc/sysconfig/modules/) to bind the devices intended for the VM to the VFIO driver:

#!/bin/bash

# Find the current PCI bus IDs of the two GTX 780 Ti GPUs (function .0)
# and their HDMI audio functions (.1), plus the NEC USB 3.0 controller.
nvidia1=$(lspci | grep "GTX 780 Ti" | head -1 | awk '{print $1;}')
hda1=$(echo $nvidia1 | sed -e 's/\.0$/.1/')
nvidia2=$(lspci | grep "GTX 780 Ti" | tail -1 | awk '{print $1;}')
hda2=$(echo $nvidia2 | sed -e 's/\.0$/.1/')
nec=$(lspci | grep "NEC" | awk '{print $1;}')

# The second GPU stays on the host drivers; everything destined for the
# VM is earmarked for vfio-pci.
echo nvidia        > /sys/bus/pci/devices/0000:$nvidia2/driver_override
echo snd-hda-intel > /sys/bus/pci/devices/0000:$hda2/driver_override
echo vfio-pci      > /sys/bus/pci/devices/0000:$nvidia1/driver_override
echo vfio-pci      > /sys/bus/pci/devices/0000:$hda1/driver_override
echo vfio-pci      > /sys/bus/pci/devices/0000:$nec/driver_override

modprobe vfio-pci
echo 10de 1284     > /sys/bus/pci/drivers/vfio-pci/new_id
echo 10de 0e0f     > /sys/bus/pci/drivers/vfio-pci/new_id
echo 1033 0194     > /sys/bus/pci/drivers/vfio-pci/new_id

# Detach the VM-bound devices from whatever driver grabbed them first,
# then bind them to vfio-pci and load the host Nvidia driver last.
echo 0000:$nvidia1 > /sys/bus/pci/devices/0000:$nvidia1/driver/unbind
echo 0000:$hda1    > /sys/bus/pci/devices/0000:$hda1/driver/unbind
echo 0000:$nec     > /sys/bus/pci/devices/0000:$nec/driver/unbind
echo 0000:$nvidia1 > /sys/bus/pci/drivers/vfio-pci/bind
echo 0000:$hda1    > /sys/bus/pci/drivers/vfio-pci/bind
echo 0000:$nec     > /sys/bus/pci/drivers/vfio-pci/bind
modprobe nvidia
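
Once the script has run, it is worth verifying that the devices ended up on the intended drivers. Using the vendor:device IDs from my script (substitute your own):

# Lists both GPUs: the passed-through one should show
# "Kernel driver in use: vfio-pci", the host one "nvidia"
lspci -nnk -d 10de:1284
# The NEC USB 3.0 controller should likewise show vfio-pci
lspci -nnk -d 1033:0194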

Note that the PCI bus IDs will change if you add more hardware to the machine – that is why I wrote this script rather than assigning the devices statically by ID. The above script works for me on my hardware; you will almost certainly need to modify it for your configuration, but it should at least give you a reasonable idea of an approach that works.

Important: The devices this identifies have to match what is in your libvirt XML config file in the relevant hostdev sections. You will have to adjust that manually for your configuration, either using virsh edit or virt-manager.

Also, depending on your hardware, you may need to do the initial Windows installation on the emulated GPU rather than the real one (e.g. if the USB controller you are passing to the VM requires additional drivers, as is the case with the USB 3.0 controller I am using for mine). Otherwise you will get display output, but be unable to use your keyboard and mouse during the installation.
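
For that initial installation phase, a temporary emulated display can be added to the <devices> section of the libvirt XML and removed again once the real GPU’s drivers are in place. A minimal sketch (cirrus being the safe lowest-common-denominator choice for the Windows 7 installer):

<graphics type='vnc' port='-1' autoport='yes'/>
<video>
  <model type='cirrus'/>
</video>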

Gaming on Linux: Steam

A pre-packaged Steam binary used to be available from the rpmfusion repository, but it no longer appears to be there. Thankfully, negativo17’s Steam repository for Fedora 20+ is still maintained, and it installs and runs fine on EL7. You may also need to grab a few RPMs from Fedora 19, because EL7 doesn’t ship with a full complement of 32-bit libraries. The ones I found I needed are these:

libbsd-0.6.0-3.fc19.i686
libtxc_dxtn-1.0.0-3.fc19.i686
libxkbcommon-0.3.0-1.fc19.i686
openal-soft-1.16.0-2.fc19.i686
SDL2-2.0.3-1.fc19.i686
SDL2_image-2.0.0-4.fc19.i686

The reason these are from Fedora 19 is that F19 is virtually identical to EL7 in terms of package versions.
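
Having downloaded those RPMs into a directory, a single command installs them (assuming your particular setup isn’t missing any further 32-bit dependencies):

yum localinstall *.fc19.i686.rpm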

Typically, the Steam RPM installation is a one-off, mostly there to bootstrap the initial run and install the dependencies. After that, a local copy of Steam lives in the user’s home directory, in ~/.local/share/Steam/. In light of the recent Steam bug that resulted in the deletion of the user’s entire home directory, I implemented a solution that runs Steam as a separate steam user, from that user’s own home directory. That way, should anything similar ever happen again, the only thing deleted would be the steam user’s home directory, rather than any important files unrelated to running Steam games.

To do this, you will need to add a steam user and give it the necessary permissions:

$ sudo adduser steam
$ sudo usermod -a -G audio,games,pulse-access,video steam

Add the following to /etc/sudoers.d/steam (the pkill entry is needed for the session cleanup step in the script below):

%games ALL = (steam) NOPASSWD: /usr/bin/steam, /usr/bin/pkill dbus-launch
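
You can confirm the rule took effect for a member of the games group with (username being a placeholder):

sudo -l -U username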

Create the following script (e.g. /usr/local/bin/steam.sh):

#!/bin/bash

# Allow the local steam user to connect to our X server
xhost +SI:localuser:steam
# Give the audio group (which steam is in) access to our PulseAudio socket
chgrp audio /run/user/$UID /run/user/$UID/pulse
chmod 750 /run/user/$UID /run/user/$UID/pulse
# Run Steam as the steam user, then clean up its leftover dbus session
sudo -u steam /usr/bin/steam
sudo -u steam pkill dbus-launch

From there on, when you invoke steam.sh, it will launch Steam as the steam user and pass the graphical output to the Xorg session of the logged-in user. The net result is that any potentially damaging bug in Steam or its associated games can only damage files owned by the steam user. This is not dissimilar to the Android security model, where every application runs under its own user for similar security reasons.

Gaming on Linux: WINE

There are two obvious options for this:

1) PlayOnLinux

2) More traditional WINE (I use the one from DarkPlayer’s repository)

I only had to make one configuration change to WINE, which is to disable the dwrite.dll library (run winecfg, go to Libraries, add dwrite.dll, then edit the dwrite.dll entry and set it to disabled). I am using XP version emulation, which isn’t even supposed to include dwrite.dll; the problem it causes is that fonts are invisible in Steam and some other applications.
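
If you prefer a per-invocation override rather than the global winecfg change, the same effect can be had with an environment variable – an empty value disables the builtin library (game.exe here is a placeholder):

WINEDLLOVERRIDES="dwrite=" wine game.exe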

End Result

The end result is a much cleaner virtual machine configuration: no missing RAM as there was with Xen, thanks to the NF200 bug workaround, and no need for hardware modification of my GeForce cards. Performance feels very smooth, and so far the entire setup has been completely trouble-free.

There is also one fewer virtual machine and one fewer GPU in the system without any loss of functionality. Should I require an additional seat in the future, it will most likely be a Linux one, and implemented using a Xorg multi-seat configuration.