Chromebook Pixel – Long Term Review

I have been using my Chromebook Pixel for nearly a year now, so I feel it has been long enough to form a reasonably objective view, which may be useful to others who are considering buying one.

When I bought it, I was looking for a worthy successor to my venerable ThinkPad T60. The ThinkPad had been upgraded as far as it would go, with a 2.33GHz Core 2 Duo, 3GB of RAM, and most importantly, a 2048×1536 screen. It is still quite a usable machine, but the main reasons I was looking for a replacement were battery life (90 minutes with the extended capacity battery on a good day) and weight (I haven’t weighed it, but when carrying it around for any length of time it feels like it weighs a tonne). All in all, it was barely liveable with for the commute to work.

On paper, the Pixel promised to address all the issues I had with the ThinkPad – it weighs a fraction as much, the battery life is about 6 hours depending on the load, and the other features are no worse, with the screen being a significant improvement over the ThinkPad’s. Since I use Linux (EL6), I needed to make sure all of the hardware was fully supported, which was the main reason I didn’t choose a MacBook Pro Retina – the only other contender at the time.

Needless to say, ChromeOS only lasted for long enough to enable developer mode to facilitate installing a proper Linux distribution.

How did this very promising paper spec work out in reality? Well, my experience is very mixed. The performance is more than sufficient, even for light gaming loads (e.g. Left 4 Dead 2 with maxed out settings at 1280×800, roughly a quarter of the native resolution). The screen is nothing short of amazing. The touchpad is reliable. The battery life is good. But that is where the good things I can say about it end.

There are two things that let it down quite badly. The first is the keyboard, which is less than perfect – it lacks a number of keys: PgUp, PgDn, Home, End, Delete, Insert, F11 and F12. While inconvenient, this can reasonably be worked around with a custom keyboard map and AltGr-style key combinations.
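As an illustration, something along these lines gives AltGr-style combinations for the missing keys. The keycodes are from my own machine and are purely an example – check yours with xev – and note that the third keysym column is the Mode_switch level:

xmodmap -e "keycode 108 = Mode_switch"                # right Alt becomes the AltGr-style modifier
xmodmap -e "keycode 113 = Left NoSymbol Home"         # AltGr+Left  -> Home
xmodmap -e "keycode 114 = Right NoSymbol End"         # AltGr+Right -> End
xmodmap -e "keycode 111 = Up NoSymbol Prior"          # AltGr+Up    -> PgUp
xmodmap -e "keycode 116 = Down NoSymbol Next"         # AltGr+Down  -> PgDn
xmodmap -e "keycode 22 = BackSpace BackSpace Delete"  # AltGr+Backspace -> Delete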

The second, and fundamental, thing that makes the Chromebook Pixel nearly unusable is the amount of heat it produces. Under any load above idle, the aluminium casing gets too hot to touch. Under a gaming load, even the plastic keys on the keyboard get so hot they are painful to touch. The CPU itself doesn’t overheat (it tops out at about 85C), but it feels like most of those 85 degrees make it to the outer casing. I would go as far as saying the Pixel poses a risk of injury because of this. The problem isn’t unique to the Chromebook, either – I had a similar issue with the Macbook Pro Retina I was using at work previously. Consequently, I can only strongly recommend against using the Chromebook Pixel.

Due to these issues, I am still using my old ThinkPad more frequently than the Pixel. My commute machine is now an ARM based Chromebook (XE303C12), which stays stone cold even under heavy load, has a battery that lasts 6-8 hours, and is even lighter than the Pixel. Its touchpad is quite terrible, but I can live with that in return for it not burning me as soon as I ask it to compile something.

All I can say is – beware the marketing hype and sexy looks. A laptop that looks fantastic on paper can easily turn out to be nearly useless due to how hot it gets.

Microsoft Security Essentials on 64-bit XP

Yet another Windows related article – this detour from more typical content is expected to be short lived.

Microsoft Security Essentials was never officially supported on 64-bit Windows XP, but version 2 nevertheless installed on it and worked fine. Version 4 (version 3 never existed) refuses to install directly, saying that the version of Windows is unsupported. However, if you install version 2, the version 4 installer will happily run and install version 4 as an upgrade. It will pop up a message every time you log in warning that XP64 is EOL, but otherwise it will work just fine. So the trick is to install version 2 and then upgrade to version 4.

You may be wondering why this is relevant. My findings are that most realtime anti-malware programs thoroughly cripple performance. I used to run ClamWin+ClamSentinel as one of the least bad options, but even this was quite crippling. MSSE, on the other hand, is much more lightweight, and has thus far proved itself to be as effective in tests as most of the alternatives. The overall performance of the system is now much more acceptable.

Chrome Installer Error 0xc0000005 on Windows XP

I don’t tend to write much about Windows because its usefulness to me is limited to functioning as a Steam boot loader, and even that usefulness is somewhat diminished now that Steam and an increasing number of games are available for Linux. Unfortunately, I recently had to do some testing that needed to be carried out using a Windows application, and I noticed that Chrome reported the above error when attempting to update itself.

The Chrome installer crashes with the opaque 0xc0000005 error code on XP64 (Chrome is still supported on XP, even though MS is treating XP as EOL). Googling the problem suggested disabling the sandbox might help, but this isn’t really applicable, since the problem occurs in the installer, not once Chrome is running (the browser runs just fine; it is updating it that triggers the error).

A quick look at the crash dump revealed that one of the libraries dynamically linked at crash time was the MS Application Verifier, used for debugging programs and sending them fake information on what version of Windows they are running on. Uninstalling the MS Application Verifier cured the problem.

Steam on EL6 (RHEL6 / Scientific Linux 6 / CentOS 6)

The fact that Steam have decided to only officially support .deb based distributions, and only relatively recent ones at that, has been a pet peeve of mine for quite some time. While there are ways around the .deb-only official package availability (e.g. alien), the library requirements are somewhat more difficult to reconcile. I have finally managed to get Steam working on EL6, and since I am probably not the only one interested in this, I thought I’d document it.
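For completeness, converting the official .deb into an rpm with alien looks roughly like this (not the route I took in the end, since the library dependencies are the real problem; the filename is simply whatever the downloaded package happens to be called):

alien --to-rpm --scripts steam_latest.deb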

The packages required to do this have been sourced from various locations (e.g. glibc from the Fuduntu project, the steam src.rpm from steam.48.io (not really a source rpm, it just packages the Steam binary in an rpm), and most of the rest from more recent Fedora releases). I have rebuilt them all and made them available in one place:

http://ftp.redsleeve.org/pub/steam/

You won’t need all of them, but you will need at least the following:

glibc-2.15-60.el6.i686.rpm
glibc-2.15-60.el6.x86_64.rpm
glibc-common-2.15-60.el6.x86_64.rpm
glibc-devel-2.15-60.el6.x86_64.rpm
glibc-headers-2.15-60.el6.x86_64.rpm
libtxc_dxtn-1.0.0-2.1.i686.rpm
SDL2-2.0.3-2.el6.i686.rpm
steam-1.0.0.39-2.i686.rpm
xz-5.0.5-1.el6.x86_64.rpm
xz-compat-libs-5.0.5-1.el6.x86_64.rpm
xz-libs-5.0.5-1.el6.x86_64.rpm
xz-lzma-compat-5.0.5-1.el6.x86_64.rpm

First, install some of the dependencies from the standard distribution packages:

yum install gtk2-engines.i686 \
            openal-soft.i686 \
            alsa-plugins-pulseaudio.i686 \
            gtk+.i686

Then install the updated packages:

rpm -Uvh glibc-2.15-60.el6.i686.rpm \
         glibc-2.15-60.el6.x86_64.rpm \
         glibc-common-2.15-60.el6.x86_64.rpm \
         glibc-devel-2.15-60.el6.x86_64.rpm \
         glibc-headers-2.15-60.el6.x86_64.rpm \
         libtxc_dxtn-1.0.0-2.1.i686.rpm \
         SDL2-2.0.3-2.el6.i686.rpm \
         steam-1.0.0.39-2.i686.rpm \
         xz-5.0.5-1.el6.x86_64.rpm \
         xz-compat-libs-5.0.5-1.el6.x86_64.rpm \
         xz-libs-5.0.5-1.el6.x86_64.rpm \
         xz-lzma-compat-5.0.5-1.el6.x86_64.rpm

If you have pyliblzma from EPEL installed (required by, e.g., mock), the updated xz-lzma-compat package will trigger a Python bug that causes a segfault. This will incapacitate some Python programs (yum being an important one). If you encounter this issue and you must keep pyliblzma for other dependencies, reinstall the original xz package versions after you run Steam for the first time. The updated xz only seems to be required when the steam executable downloads updates for itself.
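Reverting is easiest with the stock rpms kept to hand, since yum itself may be one of the casualties of the segfault. A rough sketch – the 4.999.9 version string is from memory, so use whatever your EL6 media provides, and you may also need to remove the xz-compat-libs subpackage first, since stock EL6 does not ship it:

rpm -Uvh --oldpackage xz-4.999.9-*.el6.x86_64.rpm \
                      xz-libs-4.999.9-*.el6.x86_64.rpm \
                      xz-lzma-compat-4.999.9-*.el6.x86_64.rpm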

Finally, run steam, log in, and let it update itself.

One of the popular games that is available on Linux is Left 4 Dead 2. I found that on ATI and Nvidia cards it doesn’t work properly in full screen mode (blank screen, impossible to Alt-Tab out), but it does work on Intel GPUs. It works on all GPU types in windowed mode. Unfortunately, it runs in full screen mode by default, so if you run it without adjusting its startup parameters you may have to ssh into the machine and forcefully kill the hl2_linux process. To work around the problem, right click on the game in your library, and go to properties:

Steam Game Properties

Click on the “SET LAUNCH OPTIONS…” button:

Steam Game Properties 2

You will probably want to specify the default resolution as well as the windowed mode to ensure the game comes up in a sensible mode when you launch it.
Add "-windowed -w 1280 -h 720" to the options, which will tell L4D2 to start in windowed mode at 1280×720. The resolution you select should be lower than your monitor’s resolution.

Steam Game Launch Options

If you did all that, you should be able to hit the play button and be greeted with something resembling this:

Left4Dead 2 with Steam on Linux

ATI cards using the open source Radeon driver (at least with the version 7.1.0 that ships with EL6) seem to exhibit some rendering corruption, specifically some textures are intermittently invisible. This leads to invisible party members, enemies, and doors, and while it is entertaining for the first few seconds it renders the game completely unplayable. I have not tested the ATI binary driver (ATI themselves recommend the open source driver on Linux for older cards and I am using a HD6450).

Nvidia cards work fine with the closed source binary driver in windowed mode, and performance with a GT630 consistently saturates 1080p with everything turned up to maximum. I have not tested with the nouveau open source driver.

With Intel GPUs using the open source driver, everything works correctly in both windowed and full screen mode, but the performance is nowhere near as good as with the Nvidia card. With all the settings set to maximum, the performance with the Intel HD 4000 graphics (Chromebook Pixel) is roughly the same at 1920×1200 as with the Radeon HD6450, producing approximately 30fps. The only problem with playing it on the Chromebook Pixel is that the whole laptop gets too hot to touch, even with the fan going at full speed. Not only does the aluminium casing get too hot to touch, the plastic keys on the keyboard themselves get painfully hot. But that story is for another article.

QNAP TS-421 – Review, Modification and RedSleeve Linux

Requirement

With the RedSleeve Linux release rapidly approaching, I needed a new server. The current one is a DreamPlug with an SSD, and although it has so far worked valiantly with perfect reliability, it doesn’t have enough space to hold all of the newly built RPM packages (over 10,000 of them, including the multiple versions the upstream distribution contains), and it is a little lower on CPU (1.2GHz single core) and RAM (512MB) than is ideal to handle the load spike that will inevitably happen once the new release becomes available. I also wanted a self contained system that doesn’t require special handling or have many cables hanging off it (like SATA or USB external disks). I briefly considered the Tonido2 Plug, but between the slower CPU (800MHz) and the US plug, it seemed like a step backward just for the added tidiness of having an internal disk.

Specification

The requirements I had in mind needed to cover at least the following:
1) ARM CPU
2) SATA
3) At least a 1.2GHz CPU
4) At least 512MB of RAM
5) Everything should be self contained (no externally attached components)

Selection

Very quickly the choice narrowed down to various NAS appliances, but most of them have relatively non-existent community support for running custom Linux based firmware. The one exception is QNAP NAS devices, which have rather good support from the Debian community; and where there is a procedure to get one Linux distribution to run, getting another to run is usually very straightforward. After a quick look through the specifications, I settled on the QNAP TS-421, which seems to be the highest spec ARM based model:

CPU: 2GHz ARMv5 Marvell Kirkwood (same as in the DreamPlug but 66% higher clock speed)
RAM: 1GB (twice as much as DreamPlug)
SATA: 4x 3.5″ SATA disk trays, based on the excellent Marvell 88SX7042 PCIe SATA controller
eSATA: 2x
Ethernet: 2x Gigabit (same as DreamPlug)
USB: 2x 2.0, 2x 3.0

Disks

At the time I ordered the QNAP TS-421, it was listed as supporting 4TB drives – the largest air-filled drives available at the time. I ordered 4x 4TB HGST drives because they are known to be more reliable than other brands. In the 10 days since then, Toshiba announced 5TB drives, but these are not yet commercially available. I briefly considered the 6TB helium-filled Hitachi drives, but these are based on a new technology that has not been around long enough for long term reliability trends to emerge – and besides, they are prohibitively expensive (£87/TB vs £29/TB for the 4TB model), and to top it all off, they are not yet available to buy.

Overview

Once the machine arrived, it was immediately obvious that the build quality is superb. One thing, however, bothered me immediately – it uses an external power brick, which seems like a hugely inconvenient oversight on an otherwise extremely well designed machine.

In order to make playing with alternative Linux installations practical, I needed serial console access. To do this you will need a 3.3V TTL serial cable, the same as what is used on the Raspberry Pi. These are cheaply available from many sources. One thing I discovered the hard way, after some trial and error, is that you need to cross the RX and TX lines between the cable and the QNAP motherboard, i.e. RX on the cable needs to connect to TX on the motherboard, and vice versa. There is also no need to connect the VCC line (red) – leave it disconnected. My final goal was to get RedSleeve Linux running on this machine, the process for which is documented on the RedSleeve wiki, so I will not go into it here.
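Once the cable is wired up correctly, any serial terminal program will do – the Kirkwood console runs at 115200 8N1. For example (the device node depends on your USB TTL adapter):

screen /dev/ttyUSB0 115200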

Modifying

One thing that becomes very obvious upon opening the QNAP TS-421 is that there is ample space inside it for a PSU, which makes the design decision to use an external power brick all the more ill-considered. So much so that I felt I had to do something about it. It turns out the standard power brick it ships with fits just fine inside the case. Here is what it looks like fitted:

QNAP TS-421 with internalized PSU
QNAP TS-421 with internalized PSU
QNAP TS-421 with internalized PSU

It is very securely attached using double sided foam tape. Make sure you make some kind of a gasket to fit between the PSU and the back of the case – this is to avoid upsetting the carefully designed airflow through the case. I used some 3mm thick expanded polyurethane, which works very well for this purpose. The cable tie is there just for extra security and to tidy up the coiled up DC cable that goes back out of the case and into the motherboard’s power input port. This necessitated punching two 1 inch holes in the back of the case – one for the input power cable and one for the 12V DC output cable. I used a Q.Max 1 inch sheet metal hole punch to do this. There is an iris type grommet for the DC cable to prevent any potential damage arising from it rubbing on the metal casing.

QNAP TS-421 with cable holes punched through the back of the case

The finished modification looks reasonably tidy and is a vast improvement on a trailing power brick.

QNAP TS-421 running RedSleeve Linux

One other thing worth mentioning is that internalizing the PSU makes no measurable difference to internal temperatures with the case closed. In fact, if anything, the PSU itself runs cooler than it does on the outside, thanks to the cooling fan inside the case. The airflow inside the case is incredibly well designed, which is why it is vital to use a gasket to seal the gap between the power input port on the PSU and the back of the case. To give you an idea of just how well the airflow is designed: with the case off, the HGST drives run at about 50-55C idle and 60-65C under load; with the case on they run at about 30C idle and 35C under full load (e.g. a ZFS scrub or SMART self tests).
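For reference, the "full load" figures above came from the usual suspects – something along these lines, where the pool and device names are just placeholders for whatever your setup uses:

zpool scrub tank                 # scrub the whole pool
smartctl -t long /dev/sda        # long SMART self test; repeat for each member disk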

Virtualized Gaming: Nvidia Cards, Part 3: How to Modify 2xx – 4xx series GeForce into a Quadro

There has been a large amount of interest in the previous two articles in this series, and many calls for a modification guide. In this article I will explain in detail how to modify your Fermi based GeForce card into the corresponding Quadro card. Specifically, the following conversions are covered:

GeForce Model    GPU     Quadro Model
GeForce GTS450   GF106   Quadro 2000
GeForce GTX470   GF100   Quadro 5000
GeForce GTX480   GF100   Quadro 6000

The Tesla (2xx/3xx) and Fermi (4xx) series of GPUs can be converted with a BIOS modification alone. Earlier cards can also be modified, but the process is slightly different from what is described in this article. No hardware modification is required on any of these cards. The modification is performed by changing what are known as the “straps” that configure the GPU at initialization time. The nouveau project (the free, open source Nvidia driver implementation for Xorg) has reverse engineered and documented some of the straps, including the device ID locations. We can use this to change the device ID the card reports. This causes the driver to enable a set of features that it wouldn’t normally expose on a gaming grade card, even though the hardware is perfectly capable of them (you are only supposed to have those features if you paid 4-8x more for what is essentially the same (and sometimes even inferior) card by buying a Quadro).

The main benefit of doing this modification is enabling the card to work in a virtual machine (e.g. Xen). If the driver recognizes a GeForce card, it will refuse to initialize the card from a guest domain. Change the card’s device ID into a corresponding Quadro, and it will work just fine. On the GF100 models, it will even enable the bidirectional asynchronous DMA engine which it wouldn’t normally expose on a GeForce card even though it is there (on GF100 based GeForce cards only a unidirectional DMA engine is exposed). This can potentially significantly improve the bandwidth between the main memory and GPU memory (although you probably won’t notice any difference in gaming – it has been proven time and again that the bandwidth between the host machine and the GPU is not a bottleneck for gaming workloads).

Another thing this modification will enable is TCC mode. This is of particular interest to users of Windows Vista and later because it avoids some of the graphics driver overheads by putting the card in a mode used purely for number crunching. Note: although most Quadros have TCC mode available, you may want to look into modifying the card into a corresponding Tesla model if you are planning to use it purely for number crunching. You can use the same method described below: find a Tesla based on the same GPU with an equal or lower number of enabled shader processors, find its device ID in the list linked at the bottom of the article, and change the device ID using the strap.
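For reference, once the card is recognized as a Quadro/Tesla, switching it between driver models on Windows is done with nvidia-smi. Roughly (flag syntax from memory – check nvidia-smi -h on your system; it needs an elevated prompt and a reboot to take effect):

nvidia-smi -i 0 -dm 1    # driver model: 1 = TCC, 0 = WDDM; -i selects the GPU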

Before you even begin contemplating this, make sure you know what you are doing, and understand that the instructions here come with no warranty. If you are not confident that you know what you are doing, buy a pre-modified card from someone instead, or get somebody who does know what they are doing to do it for you.

To do this, you will require the following:

  • NVFlash for Windows and/or NVFlash for DOS
    Note: You may need to use the DOS version – for some reason the Windows version didn’t work on some of my Fermi cards. If you use the DOS version, make sure you have a USB stick or other media set up to boot into DOS.
  • Hex editor. There are many available. I prefer to use various Linux utilities, but if you want to use Windows, HxD is a pretty good hex editor for that OS. It is free, but please consider making a small donation to the author if you use it regularly.
  • Spare Graphics card, in case you get it wrong. If you are new to this, your boot graphics card (the spare one, not the one you are planning to modify) should preferably not be an Nvidia one (to avoid potential embarrassment of flashing the wrong card). Skip this part at your peril.
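Before changing anything, it is a good idea to save a copy of the card’s existing BIOS and have a look at the strap area. A rough sketch – the --save flag is from the nvflash builds I have used, so check your version’s help output; xxd is any ordinary hex dump tool:

nvflash --index=X --save original.rom    # back up the existing BIOS first
xxd original.rom | head -8               # the strap area starts at offset 0x58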

On Fermi BIOS-es the strap area is 16 bytes long and it starts at file offset 0x58. Here is an example based on my PNY GTX480 card:
0000050: e972 2a00 de10 5f07 ff3f fc7f 0040 0000 .r*..._..?...@..
0000060: ffff f17f 0000 0280 7338 a5c7 e92d 44e9 ........s8...-D.

The very important thing to note here is that the byte order is little-endian. That means that, in order to decode this easily, you should re-write the strap data (the 16 bytes starting at offset 0x58) as:
7FFC 3FFF 0000 4000 7FF1 FFFF 8002 0000

This represents two sets of straps, each containing an AND mask and an OR mask. The hardware level straps are AND-ed with the AND mask, and then OR-ed with the OR mask.

The bits that control the device ID are 10-13 (ID bits 0-3) and 28 (ID bit 4). We can ignore the last 8 bytes of the strap, since all of the bits controlling the device ID are in the first 8 bytes.

This makes the layout of the strap bits we need to change a little more obvious:

Fxx4xxxx xxxxxxxx xx3210xx xxxxxxxx
   ^                ^^^^
   |                ||||-pci dev id[0]
   |                |||--pci dev id[1]
   |                ||---pci dev id[2]
   |                |----pci dev id[3]
   |---------------------pci dev id[4]
F - cannot be set, always fixed to 0

The device ID of the GTX480 is 0x06C0. In binary, that is:
0000 0110 1100 0000
We want to modify it into a Quadro 6000, which has the device ID 0x06D8. In binary that is:
0000 0110 1101 1000

The device ID differs only in the low 5 bits, which is good because we only have the low 5 bits available in the soft strap.

So we need to modify as follows:
From:   0000 0110 1100 0000
To:     0000 0110 1101 1000
Change: xxxx xxxx xxx1 1xxx

We only need to change two of the strap bits from 0 to 1. We can do this by only adjusting the OR part of the strap.

It is easier to see what is going on if we represent this as follows:

ID Bit:   4                  32 10
Strap: -xxA xxxx xxxx xxxx xxAx xxxx xxxx xxxx
Old Strap:
AND-0: 7F        FC        3F        FF
       0111 1111 1111 1100 0011 1111 1111 1111
OR-0:  00        00        40        00
       0000 0000 0000 0000 0100 0000 0000 0000
New Strap:
AND-0: 7F        FC        3F        FF
       0111 1111 1111 1100 0011 1111 1111 1111
OR-0:  10        00        60        00
       0001 0000 0000 0000 0110 0000 0000 0000

Note that in the edit mask above, bit 31 is marked as “-“. Bit 31 is always 0 in both AND and OR strap masks.
Bits we must keep the same are marked with “x”. Bits we need to amend are marked with “A”.

So what we need to do is flash the edited strap to the card. We could edit it directly in the BIOS image, but this would require calculating the strap checksum, which is tedious. Instead we can use nvflash to rewrite the strap for us, and it will handle the checksum transparently.
The new strap is:
0x7FFC3FFF 0x10006000 0x7FF1FFFF 0x80020000
The second pair is unchanged from what we read out of the BIOS above. Make sure you have ONLY changed the device ID bits and that your binary to hex conversion is correct – otherwise you stand a very good chance of bricking the card.
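If you want to sanity check the binary-to-hex arithmetic, a few lines of shell reproduce the new OR-0 word. This is only a rough aid, and it only covers the situation here, where the target device ID differs purely by bits that need setting to 1 (clearing bits would require adjusting the AND mask instead):

OLD_OR0=0x00004000           # OR-0 word read out of the original BIOS
TARGET_ID=0x06D8             # Quadro 6000 device ID
LOW5=$(( TARGET_ID & 0x1F ))
# device ID bits 0-3 map to strap bits 10-13, device ID bit 4 maps to strap bit 28
NEW_OR0=$(( OLD_OR0 | ((LOW5 & 0x0F) << 10) | (((LOW5 >> 4) & 1) << 28) ))
printf 'New OR-0: 0x%08X\n' "$NEW_OR0"    # prints 0x10006000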

We flash this onto the card using:
nvflash --index=X --straps 0x7FFC3FFF 0x10006000 0x7FF1FFFF 0x00020000
Note:
1) The last OR strap is 0x00020000 even though the data in the BIOS reads as if it should be 0x80020000. You cannot set the high bit (the left-most one) to 1 in the OR strap (just like you cannot set it to 0 in the AND strap). Upon flashing, nvflash will set the high bit to 1 for you, and what ends up in the BIOS will be 0x80020000 even though you specified 0x00020000. This is rather unintuitive and poorly documented.
2) You will need to check the index of the card you plan to flash using nvflash -a, and replace X with the appropriate value.

Here is an example (from my GTX480, directly corresponding to the pre-modification fragment above) of how the ROM differs after changing the strap:

0000050: e972 2a00 de10 5f07 ff3f fc7f 0060 0010 .r*..._..?...`..
0000060: ffff f17f 0000 0280 7338 a597 e92d 44e9 ........s8...-D.

The difference at byte 0x6B (c7 → 97) is the strap checksum that nvflash calculated for us.

Reboot and your card should now get detected as a Quadro 6000, and you should be able to pass it through to your virtual machine without problems. I have used this extensively to enable me to pass my GeForce 4xx series cards to my Xen VMs for gaming. I will cover the details of virtualization use with Xen in a separate article. Note that I have had reports of cards modified using this method also working virtualized using VMware vDGA, so if this is your preferred hypervisor, you are in luck. Quadro 5000 and 6000 are also listed as supported for VMware vSGA virtualization, so that should work, too – if you have tried vSGA with a modified GeForce card, please post a comment with the details.

The same modification method described here should work for turning any Fermi card into the equivalent Quadro card. Simply follow the same process. You may find this list of Nvidia GPU device IDs useful for establishing which device ID you want to modify the card to. The GPU should match between the GeForce card and the Quadro/Tesla/Grid card you are modifying it into – so check which Nvidia card uses which GPU.

Many thanks to the nouveau project for reverse engineering and documenting the initialization straps, and all the people who have contributed to the effort.

In the next article I will cover modifying Kepler GPU based cards. They are quite different and require a different approach. There are also a number of pitfalls that can leave you chasing your tail for days trying to figure out why everything checks out but the modification doesn’t work (i.e. the card doesn’t function in a VM).

Virtualized Gaming: Nvidia Cards, Part 2: GeForce, Quadro, and GeForce Modified into a Quadro – Higher End Fermi Models

Following the success of the QuadForce 2450 modification (GeForce GTS450 -> Quadro 2000), I went on to investigate whether the same modification would work on the GTX470 to turn it into a Quadro 5000, and on a GTX480 to turn it into a Quadro 6000. Modifying a GTX580 into the somewhat obscure Quadro 7000 was also undertaken.

Model            Core Configuration   Memory Channels   Memory
GeForce GTX470   448:56:40            5x                1.25GB
GeForce GTX480   480:60:48            6x                1.50GB
Quadro 5000      352:44:40            5x                2.50GB
Quadro 6000      448:56:48            6x                6.00GB

In all three cases, the modifications were successful, and they all worked as expected – features like VGA passthrough work on the 5000 and 6000 models and gaming performance is excellent, as you would expect – I can play Crysis at 3840×2400 in a virtual machine. Again, the extra GL functions aren’t there (if you compare the output of glxinfo between a real Quadro and a QuadForce, you will find a number of GL primitives missing), so some aspects of OpenGL performance are still crippled. PhysX support is also a little hit-and-miss. In a VM, on Windows 7 it seems to work on Quadro cards; on XP it appears to not be working. On bare metal on Windows XP it works. This appears to be due to the Quadro driver itself, rather than due to the cards not being genuine Quadros.

Finally, the GF100 based cards (GTX470/480) also get an extra feature enabled by the modification – second DMA channel. Normally there is a unidirectional DMA channel between the host and the card. Following the modification, the second DMA channel in the other direction is activated. This has a relatively moderate impact on gaming performance, but it can have a very large impact on performance of I/O bound number crunching applications since it increases the memory bandwidth between the card and the system memory (you can read and write to/from the GPU memory at the same time). Compare the CUDA-Z Memory report for the GTX470 before and after modifying it into a Quadro 5000 – GTX470 only has a unidirectional async memory engine, but after modifying it the engine becomes bidirectional:

GTX470 CUDA-Z Memory
QuadForce 5000 CUDA-Z Memory

The same happens on the GTX480 – its async engine also becomes bidirectional after modification.

The Quadro 7000 is a little different from the other two. It doesn’t have dual DMA channels, and Nvidia don’t list it as MultiOS capable. The drivers do not make the necessary adjustments for it to work with VGA passthrough. That means that, unfortunately, there is little to be gained from modifying a GTX580. Note, however, that the Quadro 7000 was never aimed at the virtualization market; it was only available as part of the QuadroPlex 7000 product – an external GPU enclosure designed for driving multiple monitors for various visualisation work. Hence the lack of MultiOS support.

Here is how the QuadForce 5470 does in SPECviewperf (GTX470 = 100%):

QuadForce 5470 SPECviewperf

Compared to the QuadForce 2450, the performance improvements are more modest – the only real difference is observable in the lightwave benchmark.

Unfortunately, my QuadForce 6480 is currently in use, so I cannot get measurements from it, but since they are both based on the GF100 GPU, the results are expected to be very similar.

On the QuadForce 7580 there was no observed SPEC performance improvement.

I have since acquired a Kepler based 4GB GTX680 and successfully modified it into a Quadro K5000. Modifying it into a Grid K2 also works, but there don’t appear to be any obvious advantages to doing so at the moment (the K5000 works fine for virtualization passthrough, even though it wasn’t listed as MultiOS last time I checked). This QuadForce K5680 is why my GTX470 became free for testing again. More on Quadrifying Keplers in the next article. I also have a GTX690 now (essentially two 680s on the same PCB), which will be replacing the QuadForce 6480, so this will also be written up in due time. Unfortunately, quadrifying Keplers in most cases requires hardware as well as BIOS modifications. I will post more on all this soon, along with a tutorial on soft-modding.

Virtualized Gaming: Nvidia Cards, Part 1: GeForce, Quadro, and GeForce Modified into a Quadro

Recently I built a new system with the primary intention of running Linux the vast majority of the time and never having to stop what I am doing and reboot into Windows every time I want to play a game. That meant gaming in a VM, which in turn meant VGA passthrough. I am an Enterprise Linux 6 user, and Fedora is too bleeding edge for me. What I really wanted to run is KVM virtualization, but VGA passthrough didn’t seem to work for me with the EL6 packages, even after a selective update to much newer kernel, qemu and libvirt related packages. VMware ESX won’t do PCI passthrough on my EVGA SR-2 motherboard because EVGA, in their infinite wisdom, decided to put all the PCIe slots behind Nvidia NF200 routers/bridges, which don’t support the PCIe ACS functionality that ESX requires for PCI passthrough. That left Xen as the only remaining option. I now mostly have Xen working the way I want – not without issues, but I will cover virtualized gaming and Xen details in another article. For now, what matters is that Xen VGA passthrough currently only works with ATI cards and Nvidia Quadro (but not GeForce) cards.

ATI cards are not an option for me due to various driver bugs (e.g. mishandling of monitors whose refresh rate depends on the resolution due to bandwidth limitations), lack of features (no option to use anything but EDID modes, to the extent of completely ignoring monitor driver .inf files; the custom mode feature used to exist in the drivers (the documentation for it can still be found on the AMD website) but has been removed at some point), and most importantly, the lack of multiple DL-DVI outputs on cards more recent than the Radeon HD4xxx series (Radeon HD5xxx and later cards only come with a single DL-DVI port – on those that come with a second DVI port, even though it physically looks like DL, it only provides a single link).

Nvidia GeForce cards don’t work in a virtual machine, at least not without unmaintained patches that don’t work with all cards and guest operating systems.

That leaves Nvidia Quadro cards. Unfortunately, those are eyewateringly expensive. But on paper, the spec lists the same GPUs used on GeForce and Quadro cards. This got me looking into what makes a Quadro a Quadro, and a few days of research plus a weekend of experimentation yielded some interesting and very useful results. While it looks like some features, such as certain GL functions, are disabled in the chips (probably by laser cutting), other features are purely down to the driver deciding whether to enable them or not. It turns out that making cards work in a VM is one of the driver-dependent features.

Phase 1: Verify That Quadro Cards Work in a VM Where GeForce Cards Don’t

Looking at the specification and feature list of Quadro cards, the Quadro 2000, 4000, 5000 and 6000 models support the “MultiOS” feature, which is what Nvidia calls VGA passthrough. So the first thing I did was acquire a “cheap” second hand Quadro 2000 on eBay. Cheap here is a relative term, because a second hand Quadro costs between 3 and 8 times what the equivalent (and usually higher specification) GeForce card costs. The Quadro card proved to work flawlessly, but the Quadro 2000 is based on a GF106 chip with only 192 shaders, so gaming performance was unusable at 3840×2400 (I will let go of my T221 monitors when they are pried out of my cold, dead fingers). Gaming at 1920×1200 was just about bearable with some detail level reductions, but even then it was borderline.

Here is how the genuine Quadro 2000 shows up in GPU-Z and CUDA-Z:

Quadro 2000 GPU-Z
Quadro 2000 CUDA-Z Core
Quadro 2000 CUDA-Z Memory
Quadro 2000 CUDA-Z Performance

And here are the genuine Quadro 2000 SPECviewperf11 results:

Viewset Composite
catia-03 23.86
ensight-04 16.63
lightwave-01 43.12
maya-03 36.25
proe-05 7.07
sw-02 32.21
tcvis-02 18.82
snx-01 17.50

Phase 2: Get an Equivalent GeForce Card and Investigate What Makes a Quadro a Quadro

The next item on the acquisition list was a GeForce GTS450 card. On paper the spec for a GTS450 is identical to a Quadro 2000:
GF106 GPU
192 shaders
1GB of GDDR5
Note: There are some models that are different despite also being called GTS450. Specifically, there is an OEM model that only has 144 shaders, and there is a model with 192 shaders but with GDDR3 memory rather than GDDR5. The DDR3 model may be more difficult to modify due to various differences, and the 144 shader model may not work properly as a Quadro 2000.

Armed with the information I dug out, I set out to modify the GTS450 into a QuadForce (a splice between a Quadro and a GeForce – Gedro just doesn’t sound right). This was successful: the card was now detected as a Quadro 2000, and everything seemed to work accordingly. VGA passthrough worked, and since the GTS450 is clocked significantly higher than the Quadro 2000, gaming performance improved to the point where 1920×1200 was quite liveable with. What didn’t improve to Quadro levels is the OpenGL performance of certain functions that appear to have been disabled on the GeForce GPUs. Consequently, SPECviewperf11 results are much lower than on a real Quadro 2000 card, but the GeForce GTS450 scores higher on every gaming test, since games don’t use the missing functionality and the GeForce card is clocked higher. It is unclear at the moment whether the extra GL functionality was disabled on the GPU die by laser cutting or whether it is disabled externally to the GPU, e.g. by different hardware strapping or pin shorting via the PCB components – more research into this will need to be done by someone more interested in those features than me. Since the stamped-on GPU markings differ between the GTS450 (GF106-250, checked against 3 completely different GDDR5 GTS450 cards) and the Quadro 2000 (GF106-875 on the one I have), it seems likely the extra GL functionality is laser cut out of the GPU.

Here is how the GTS450 modified to Quadro 2000 shows up in GPU-Z and CUDA-Z:
QuadForce 2000 GPU-Z
QuadForce 2000 CUDA-Z Core
QuadForce 2000 CUDA-Z Memory
QuadForce 2000 CUDA-Z Performance

CUDA-Z performance seems to scale with the clock speeds, so the faux-Quadro card wins.

Here are the SPECviewperf11 results for a GTS450 before and after modifying it into a Quadro 2000. As you can see, in this test those missing GL functions make a huge difference, but in some tests there is still a substantial improvement:

GTS450:

Viewset Composite
catia-03 3.33
ensight-04 20.67
lightwave-01 10.80
maya-03 5.38
proe-05 0.36
sw-02 6.75
tcvis-02 0.35
snx-01 2.37

QuadForce 2450:

Viewset Composite
catia-03 3.24
ensight-04 17.83
lightwave-01 10.72
maya-03 7.75
proe-05 0.37
sw-02 6.87
tcvis-02 0.35
snx-01 2.35

Here is the data in chart form (relative performance, real Quadro 2000 = 100%).

GTS450 vs. Quadro 2000

As you can see, the real Quadro dominates in all tests except ensight-04, where it gets soundly beaten by the GeForce card. The modification does seem to improve some aspects of performance. In particular, the Maya result improves by a whopping 44% following the modification.

If you are only interested in support and VGA passthrough for virtual machines, modifying a GeForce card to a Quadro can be an extremely cost effective solution (especially if your budget wouldn’t stretch to a real Quadro card anyway). If you are only interested in performance of the kind measured by SPECviewperf, then depending on the applications you use, a real Quadro is still a better option in most cases.

Note: I am selling one of my Quadrified GTS450 cards. I bought several fully expecting to brick a few in the process of attempting to modify them, but the success rate was 100% so I now have more of them than I need.

IBM T221 3840×2400 204dpi Monitor – Part 7: Positive Update

For once it would appear that I have a positive update on the subject of Nvidia drivers. It would seem that patching the latest (319.23) driver is no longer required on Linux. Even better, there is a way to achieve a working T221 setup without RandR getting in the way by insisting the two halves are separate monitors. I covered the issues with Nvidia drivers in a previous article.

The build part now works as expected out of the box. Simply:

export IGNORE_XEN_PRESENCE=1
bash ./NVIDIA-Linux-x86_64-319.23.run

and everything should “just work”.

Best of all, there appears to be a workaround for the RandR information being visible even when Xinerama is being overridden. It turns out that Xinerama and RandR are mutually exclusive. So even though the option that explicitly disables RandR seems to get silently ignored, enabling Xinerama achieves the same effect. And since the Nvidia driver’s Xinerama info override still works, this solves the problem!

You may recall from a previous article the following in xorg.conf:

[...]
Section "ServerLayout"
	Identifier "Layout0"
	Screen 0 "Screen0" 0 0
	Option "Xinerama" "0"
EndSection
[...]
Section "Screen"
	Identifier "Screen0"
[...]
	Option "NoTwinViewXineramaInfo" "True"
	Option "TwinView" "1"
	Option "TwinViewOrientation" "RightOf"
	Option "metamodes" "DFP-0:1920x2400, DFP-3:1920x2400"
[...]
EndSection

It turns out the solution is to simply enable Xinerama:

Section "ServerLayout"
	Identifier "Layout0"
	Screen 0 "Screen0" 0 0
	Option "Xinerama" "1"
EndSection

This implicitly disables RandR and Nvidia driver’s Xinerama info override takes care of the rest. Magic. :)

Update:
If you are still having problems when using KDE, there is another trick you can use to force Xinerama and disable RandR. Amend the following line in kdmrc:

/etc/kde/kdm/kdmrc:
ServerArgsLocal=-extension RANDR +xinerama -nr -nolisten tcp

WQUXGA – IBM T221 3840×2400 204dpi Monitor – Part 6: Regressing Drivers and Xen

I recently built a new machine, primarily because I got fed up with having to stop what I’m working on and reboot from Linux into Windows whenever my friends and/or family invited me to join them in a Borderlands 2 session. Unfortunately, my old machine was just a tiny bit too old (Intel X38 based) to have the full, bug-free VT-d/IOMMU support required for VGA passthrough to work, so after 5 years, I finally decided it was time to rectify this. More on this in another article, but the important point I am getting to is that VGA passthrough requires a recent version of Xen. And that is where this part of the story really begins.

Some of you may have figured out that RHEL derivatives are my Linux distribution of choice (RedSleeve was a big hint). Unfortunately, RedHat have dropped support for Xen Dom0 kernels in EL6, but thankfully, other people have picked up the torch and provide a set of up to date, supported Xen Dom0 kernels and packages for EL6. So far so good. But it was never going to be that simple, at a time when drivers are getting dumber, more feature sparse and more bloated all at once. That is really what this story is about.

For a start, a few details about the system setup that I am using, and have been using for years.

  • I am a KDE, rather than Gnome, user. EL6 comes with KDE 4, which uses the X RandR extension rather than Xinerama to establish the geometry of the screen layout. This isn’t a problem in itself, but there is no way to override what RandR reports, so on a T221 you end up with a regular desktop on one half of the T221 and an empty desktop on the other, which looks messy and unnatural.
  1. EL6 has had a Xorg package update that bumped the ABI version from 10 to 11
  2. Nvidia drivers have changed the way TwinView works after version 295.x (TwinView option in xorg.conf is no longer recognized)
  3. Nvidia drivers 295.x do not support Xorg ABI v11.
  4. Nvidia kernel drivers 295.x do not build against kernels 3.8.x.

And therein lies the complication.

Nvidia drivers v295, when used with the TwinView and NoTwinViewXineramaInfo options, also override the RandR geometry to show a single, large screen rather than two screens. This is exactly what we want when using the T221. Drivers after 295.x (304.x seems to be the next version) don’t recognize the TwinView configuration option, and while they provide a Xinerama geometry override when using the NoTwinViewXineramaInfo option, they no longer override the RandR information. This means that you end up with a desktop that looks as you would expect it to if you used two separate monitors (e.g. the status bar is only on the first screen, no wallpaper stretch, etc.), rather than a single, seamless desktop.

As you can see, there is a large compound issue in play here. We cannot use the 295.x drivers as-is, because:

  1. They don’t support Xorg ABI 11 – this can be solved by downgrading the xorg-x11-server-* and xorg-x11-drv-* packages to an older version (1.10 from EL 6.3). Easily enough done (see the sketch after this list) – just make sure you add xorg-x11-* to the exclude line in /etc/yum.conf after downgrading, to avoid accidentally updating them in the future.
  2. They don’t build against 3.8.x kernels (which is what the Xen kernel I am using is – this is regardless of the long standing semi-allergy of Nvidia binary drivers to Xen). This is more of an issue – but with a bit of manual source editing I was able to solve it.
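For point 1, the downgrade and pinning looks roughly like this. The package name globs and the exact 1.10 versions depend on what your EL 6.3 repositories carry, so treat this as a sketch rather than a recipe:

yum downgrade "xorg-x11-server-*" "xorg-x11-drv-*"
echo "exclude=xorg-x11-*" >> /etc/yum.conf    # or append to an existing exclude= line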

Here is how to get the latest 295.x driver (295.75) to build against Xen kernel 3.8.6. You may need to do this as root.

Kernel source acquisition and preparation:

wget http://uk1.mirror.crc.id.au/repo/el6/SRPMS/kernel-xen-3.8.6-1.el6xen.src.rpm
rpm -ivh kernel-xen-3.8.6-1.el6xen.src.rpm
cd ~/rpmbuild/SPECS
rpmbuild -bp kernel-xen.spec
cd ~/rpmbuild/BUILD/linux-3.8.6
cp /boot/config-3.8.6-1.el6xen.x86_64 .config
make prepare
make all

Now that you have the kernel sources ready, get the Nvidia driver 295.75 and the patch, patch the driver, and build it.

wget http://uk.download.nvidia.com/XFree86/Linux-x86_64/295.75/NVIDIA-Linux-x86_64-295.75.run
wget https://dl.dropboxusercontent.com/u/61491808/NVIDIA-Linux-x86_64-295.75.patch
bash ./NVIDIA-Linux-x86_64-295.75.run --extract-only
patch < NVIDIA-Linux-x86_64-295.75.patch
cd NVIDIA-Linux-x86_64-295.75
export IGNORE_XEN_PRESENCE=y
export SYSSRC=~/rpmbuild/BUILD/linux-3.8.6
cp /usr/include/linux/version.h $SYSSRC/include/linux/
./nvidia-installer -s

And there you have it: an Nvidia 295.75 driver that builds cleanly and works against 3.8.6 kernels. The same xorg.conf given in part 3 of this series will continue to work.

It is really quite disappointing that all this is necessary. What is more concerning is that the ability to use a monitor like the T221 is diminishing by the day. Without the ability to override what RandR returns, it may well be gone completely soon. It seems the only remaining option is to write a fakerandr library (similar to fakexinerama). Any volunteers?

It seems that Nvidia drivers are both losing features and becoming more bloated at the same time. 295.75 is 56MB. 304.88 is 65MB. That is 16% bloat for a driver that is regressively missing a feature, in this case an important one. Can there really be any doubt that the quality of software is deteriorating at an alarming rate?