Today we have part two of Kris Roberts’ coverage of GDC 2013. This time, Kris shares what he learned from Valve’s presentations about VR technology and what game developers need to look out for.

Why Virtual Reality Is Hard (And Where It Might Be Going)
Michael Abrash (Valve Software)
Quite a few years ago, before I switched gears in my career and went into games professionally, I read Michael Abrash’s Graphics Programming Black Book. I had been running a Quake server for a while and found it fascinating to read about what went into the state of the art in computer graphics at the time and learn about some of the background story behind how they developed Quake in particular. Michael writes with a splendid tone that gets complex concepts across clearly and succinctly, but without the impression that details are being glossed over or simplified. His talk this year was the first time I had ever seen him in person, and it was no big surprise that his personal presentation style was just as illuminating.
My main takeaway from his talk was that VR is hard, really hard, and that once we solve the problems at hand it is likely to expose even harder problems. However, the session was hardly a downer or buzzkill for everyone excited about VR! Instead, I think it was more of a call to arms for our best and brightest to get to work: even the problems that seem intractable are worth striving to overcome, because at the end of the day VR really is going to be awesome and worthwhile.

Michael started his talk by describing his own flashback to 1996 and the notion that the Metaverse from Neal Stephenson’s book Snow Crash was coming. That’s when he went to work with John Carmack on Quake, and although it wasn’t quite the Metaverse, what they produced was pretty amazing.
So now it’s 2013 and the overwhelming sense is that virtual reality is coming! We have heard this before, but why is it different this time? We are seeing a convergence right now with flat-panel displays, batteries and power management, mobile CPU/GPUs, wireless, cameras, gyroscopes, accelerometers, compasses, projectors, waveguides, computer vision, tracking hardware, and content in the form of 3D games.
True virtual reality, a simulation indistinguishable from reality, is not just around the corner. We are only seeing the very first glimpse of it with the Rift, and that’s just the start. The technology has room for improvement across the board, and it will take many years to fully refine the virtual reality dream. Augmented reality is going to be even harder.
How hard can it be? What’s so hard about mounting a screen in a visor and displaying images? Is that all there is to it?
These are the really hard parts Michael laid out: tracking, latency, and producing perceptions indistinguishable from reality.
For tracking to work convincingly, images must seem fixed in space. A VR headset makes this hard because it moves differently than any other display we use: it moves relative to both real reality and our eyes, because it moves with our head. And the truth is that your head can move very fast, and while your head is moving your eyes can counter-rotate just as fast.

In rapid relative motion, your head moves (which moves the display) while your eyes move in the opposite direction, requiring the pixels that make up an object you are tracking to shift on the display and yet appear to be stationary in the space of the presented stereo projection. Images must always be exactly in the right place relative to real reality and your head. Any errors introduced in any part of the system come across as anomalies, and the human perceptual system is tuned to pick out just these kinds of problems. After all, anomalies may be something trying to eat us, or something WE might want to hunt and eat!
The main point is that tracking has to be super accurate.
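To get a feel for just how accurate, here is a rough back-of-envelope sketch. All of the numbers in it (field of view, resolution, head speed, refresh rate) are assumptions for illustration, not actual Rift specifications:

```cpp
#include <cstdio>

// Back-of-envelope: how far must a "world-fixed" pixel move across the
// display each frame while the head turns and the eyes counter-rotate?
// Every constant below is an assumed, illustrative value.
int main() {
    const double fovDegrees    = 90.0;   // assumed horizontal FOV per eye
    const double pixelsAcross  = 640.0;  // assumed horizontal pixels per eye
    const double headDegPerSec = 200.0;  // a brisk but ordinary head turn
    const double frameHz       = 60.0;   // assumed display refresh rate

    const double pixelsPerDegree = pixelsAcross / fovDegrees;     // ~7.1
    const double degPerFrame     = headDegPerSec / frameHz;       // ~3.3
    const double pixelsPerFrame  = pixelsPerDegree * degPerFrame; // ~24

    printf("A world-fixed object must shift ~%.0f pixels every frame,\n",
           pixelsPerFrame);
    printf("and each degree of tracking error is ~%.0f pixels of misregistration.\n",
           pixelsPerDegree);
    return 0;
}
```

In other words, even a small tracking error lands the image several pixels from where the eye expects it, which is exactly the kind of anomaly our perceptual system is tuned to notice.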
How close is VR tracking now? The Rift uses an inertial measurement unit (IMU), which is inexpensive and lightweight, but it drifts and only tracks rotation, not translation. Translation is an important part of tracking, and moving side to side or forward and backward without that movement being reflected in the simulation is disorienting.
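Why does an IMU drift? Orientation comes from integrating the gyro’s angular velocity over time, so any constant bias in the sensor accumulates without bound. A minimal sketch, with made-up bias and sample-rate numbers:

```cpp
#include <cstdio>

// Minimal sketch of gyro-only drift: orientation is the integral of angular
// velocity, so a constant sensor bias accumulates linearly over time.
// The bias and sample rate are illustrative assumptions, not Rift specs.
int main() {
    const double gyroBiasDegPerSec = 0.1;    // assumed small constant bias
    const double sampleHz          = 1000.0; // assumed IMU sample rate
    const double dt                = 1.0 / sampleHz;
    const double trueRateDegPerSec = 0.0;    // the head is actually still

    double yawDegrees = 0.0;                 // integrated orientation estimate
    for (int i = 0; i < (int)(60.0 * sampleHz); ++i) { // simulate one minute
        yawDegrees += (trueRateDegPerSec + gyroBiasDegPerSec) * dt;
    }

    // After one minute the headset believes it has turned ~6 degrees.
    printf("Accumulated yaw drift after 60 s: %.1f degrees\n", yawDegrees);
    return 0;
}
```

Sensor fusion with accelerometer and magnetometer readings can correct some of this, but a drift-free, translation-aware solution is exactly the hard part.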
Latency is the delay between head motion and the virtual world update reaching the eyes. When it is too long, images get drawn in the wrong place and appear as anomalies. The type and severity of latency-related anomalies vary with the type of head motion.
To overcome this, latency needs to be super-low. How low is super-low? The goal is somewhere between 1 and 20 ms total for:
- Tracking
- Rendering
- Transmitting to the display
- Getting photons coming out of the display
- Getting photons to stop coming out
- Google “Carmack Abrash latency” for details
That’s a lot to do in not very much time!
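To make the budget concrete, here is a toy motion-to-photon accounting. The stage names follow the list above; the millisecond figures are made-up placeholders, not measured values:

```cpp
#include <cstdio>

// Toy motion-to-photon budget. Stage names follow the list above; the
// per-stage millisecond figures are illustrative placeholders.
int main() {
    struct Stage { const char* name; double ms; };
    const Stage pipeline[] = {
        { "tracking (sensor read + fusion)", 1.0 },
        { "rendering (CPU + GPU)",           8.0 },
        { "transmit to display",             5.0 },
        { "pixel switch on / persistence",   4.0 },
    };

    double total = 0.0;
    for (const Stage& s : pipeline) {
        printf("  %-32s %5.1f ms\n", s.name, s.ms);
        total += s.ms;
    }
    printf("  %-32s %5.1f ms (target: 1-20 ms total)\n",
           "motion-to-photon latency", total);
    return 0;
}
```

Even with these charitable placeholder numbers the pipeline lands near the top of the target range, and a single 16.7 ms frame of buffering would blow the budget on its own.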
The remainder of the presentation focused on an investigation of the issues for tracking, using space-time diagrams to examine per-pixel movement over each frame. In true reality, photons bounce off an object and enter our eyes continuously. In virtual reality, different display technologies show color either by flashing R, G, and B sequentially or by showing all the components simultaneously, and they hold each frame’s light for varying lengths of persistence.
With a sequential RGB display and our eyes fixed, the color flashes for a pixel moving across the display arrive one after the other, and the pixel shows the right color in the right location. But if the eyes are moving, the segment on the space-time diagram effectively slants, and we see the fringing ‘rainbows’ familiar from DLP displays when your eyes dart across the screen. When that happens on a VR display it is less than satisfactory.
The alternative display technology shows every color component simultaneously, for a set duration each frame. Full persistence shines for the entire frame and stops only when the next frame starts. Half persistence shines for the first part of the frame and then goes dark well before the next frame. A zero persistence display just flashes at high intensity at the start of each frame.
With each of the persistence types, a pixel moving across the screen shows properly. However, moving your head causes the light to stay in the old location for as long as it shines, then pop to the proper location at the start of the next frame, so the shorter the persistence, the less smearing shows. This would lead us to think that zero persistence would be ideal, and indeed it looks good when your eye tracks the motion, but untracked motion strobes: the pixel jumps too far between flashes for the eye to reconcile smoothly. The punch line is that none of the technologies we currently have is perfect in every situation, and great VR visual quality is going to take a lot of time and R&D. And there are certainly bigger problems than just color fringing, strobing, and judder.
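The smear-versus-strobe trade-off is easy to quantify. A minimal sketch, with assumed eye-speed and refresh numbers (not figures from the talk):

```cpp
#include <cstdio>

// Smear vs. strobe: while a pixel shines during relative eye/display motion
// it smears across the retina; between flashes it jumps in discrete steps.
// The motion speed and refresh rate are illustrative assumptions.
int main() {
    const double eyeDegPerSec = 120.0;         // assumed relative motion speed
    const double frameMs      = 1000.0 / 60.0; // assumed 60 Hz frame time

    const double persistenceMs[] = { frameMs, frameMs / 2.0, 0.0 };
    const char*  labels[]        = { "full", "half", "zero" };

    for (int i = 0; i < 3; ++i) {
        const double smearDeg = eyeDegPerSec * persistenceMs[i] / 1000.0;
        const double stepDeg  = eyeDegPerSec * frameMs / 1000.0;
        printf("%s persistence: %.1f deg of smear, %.1f deg jump between frames\n",
               labels[i], smearDeg, stepDeg);
    }
    return 0;
}
```

Shorter persistence trades smear for a larger share of the motion delivered as a discrete jump, which is why zero persistence looks great when the eye tracks the motion and strobes when it doesn’t.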
Once we have a handle on the hard problems, we can move on to thinking about the really, really hard problems:
- Per-pixel focus
- Haptics
- Input
- How to be environmentally aware of real reality while using virtual reality
- Solving virtual reality motion sickness
- Figuring out what is uniquely fun about virtual reality
Figuring out what is uniquely great in VR is where all this effort is leading. We don’t know what it is yet; it may seem obvious in hindsight, but nobody can tell for sure ahead of time. Michael thinks the road map for VR is likely to be pretty straightforward: first the Rift ships and is successful, and that kicks off a lot of development and activity similar to what we saw with 3D accelerators, with various companies competing and improving the technology. The real question is whether VR ends up as a niche or as the foundation for a whole new platform.
So yes, VR is going to be hard. But the same could be said of real-time 3D, and just look at the tremendous strides that technology has made since Quake. These are exciting times, and the problems are not to be shied away from.
For more info and references check out:
http://blogs.valvesoftware.com/abrash/
Google “abrash black book” (free online)