
GDC 2013 in 3D, Part II

What We Learned Porting Team Fortress 2 to Virtual Reality
Joe Ludwig (Valve Software)


Joe was involved with the Valve team's effort to support VR in Team Fortress 2. The game is free to play, and with the current update you simply need to add -vr to the command line launch options to run in Virtual Reality mode with support for the Oculus Rift.


They worked on it with the NVIS ST50 headset as well as early "duct tape and love" prototype versions of the Rift that Palmer Luckey would send them. Joe seemed pretty excited about the actual production versions of the devkits, and maybe a little jealous of the developers who are going to get to use them without knowing the joy of working with early prototype hardware.


Two specific recommendations he had were to turn off desktop effects in Windows and to get a DVI splitter. Desktop effects can introduce latency, and with a splitter you can run the headset and a monitor simultaneously and see what's being displayed. At Valve they use an Aluratek model, which you can find online for around $80. With whatever splitter you get, you may have to experiment with the connections and/or particular power-on sequences to ensure that the proper EDID data gets to the right devices. You'll figure it out.

Once you have your Rift and development environment all set up, what are the critical pieces involved in porting your game to VR?
  • Latency
  • Stereo Rendering
  • User Interface
  • Input
  • VR Motion Sickness
The first topic, latency, is super important, but for the sake of time Joe did not cover it in the talk and instead provided these links to reference material:
http://www.altdevblogaday.com/2013/02/22/latency-mitigation-strategies/
http://blogs.valvesoftware.com/abrash/latency-the-sine-qua-non-of-ar-and-vr/
Google "John Carmack latency" and "Michael Abrash latency"

Stereo rendering on the Rift is done on a 1280x800 panel, split into 640x800 per eye. In practice the visible area is less than that, and because of the lens distortion and its correction, the scene needs to be rendered at a higher resolution in the pipeline. But in the end you need two virtual cameras that respect the interpupillary distance set for the user at the time.
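To make the two-camera requirement concrete, here is a minimal sketch, not Valve's actual code, of deriving the two eye positions from a single head pose by offsetting each eye half the IPD along the head's right vector (the 0.064 m average and all names here are illustrative assumptions):

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Offset each eye half the interpupillary distance (IPD) along the
// head's right vector; both eyes share the head's view direction.
int main() {
    const Vec3 headPos  = {0.0f, 1.7f, 0.0f};   // head position, meters
    const Vec3 right    = {1.0f, 0.0f, 0.0f};   // head's right vector
    const float halfIpd = 0.064f * 0.5f;        // ~64 mm average IPD

    const Vec3 leftEye  = { headPos.x - right.x * halfIpd,
                            headPos.y - right.y * halfIpd,
                            headPos.z - right.z * halfIpd };
    const Vec3 rightEye = { headPos.x + right.x * halfIpd,
                            headPos.y + right.y * halfIpd,
                            headPos.z + right.z * halfIpd };

    std::printf("left eye  x = %.4f m\n", leftEye.x);
    std::printf("right eye x = %.4f m\n", rightEye.x);
}
```

Each eye then renders into its half of the panel, typically at a higher internal resolution that the distortion pass later resamples.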

In the regular version of TF2 they use a player weapon model for the first-person character that includes just the gun, hands, and arms up to the elbow. Normally that moves with the screen and you never see where the geometry ends. But in VR mode, with the wider FOV, it was too easy to look around and see that the model was incomplete. They ended up using the third-person model so you can look down and see the entire body. I know we did similar tricks with the player model for the cockpit camera mode in Midnight Club: Los Angeles, but we had to eliminate the character's head so there were no clipping issues with geometry occupying the same space as the cameras. Even though Joe said they were using the full third-person model, I suspect they did have to do some geometry elimination.

Getting the world and characters to look good in the stereoscopic view from the headset sounded pretty straightforward, with the exception of full-screen effects. Almost none of them worked right away, and they required some effort to get working in stereo. That is a pretty common problem for anyone who has ported a game to stereoscopic 3D.
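The usual fix is to run every screen-space pass once per eye, against that eye's own color and depth buffers and projection, instead of once against a combined frame. A minimal sketch of that loop, with stub functions standing in for real engine passes (all names are hypothetical):

```cpp
#include <cstdio>

// Stubs standing in for real engine passes.
void renderScene(int eye)         { std::printf("scene, eye %d\n", eye); }
void applyBloom(int eye)          { std::printf("bloom, eye %d\n", eye); }
void applyScreenSpaceFog(int eye) { std::printf("fog,   eye %d\n", eye); }

int main() {
    // Each screen-space effect samples only the buffers of the eye it
    // is applied to; sharing one full-frame buffer breaks the stereo.
    for (int eye = 0; eye < 2; ++eye) {
        renderScene(eye);
        applyBloom(eye);
        applyScreenSpaceFog(eye);
    }
}
```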


The user interface sounds like it presented the most aesthetic challenges, the first set revolving around conflicting depth cues. When you look at a scene in 3D, these are the factors that help you identify the relative positions of objects:
  • Size
  • Occlusion
  • Parallax
  • Convergence
  • Perspective
  • Distance Fog
  • Stereo Disparity
  • Focal Depth

Putting user interface elements into the player's view at any depth typically introduces conflicts in occlusion and convergence. These mismatched depth cues make it confusing or distracting to have UI on a virtual HUD, as your eyes switch back and forth between the UI information and the general scene. In the end, the TF2 HUD was basically shrunk down and positioned within the low-distortion, high-resolution usable display space near the center of the player's view, to keep it legible and convey the information the player needs.
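One way to think about that shrunk-down HUD is as a quad at a fixed depth that subtends a chosen angle near the view center. A small sketch of the sizing math; the 1.5 m depth and 10-degree half-angle are illustrative choices, not TF2's tuned values:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Place the HUD quad at a fixed depth in front of the head and size
    // it so it stays inside the low-distortion center of the view.
    const float hudDepthMeters  = 1.5f;
    const float halfAngleRad    = 10.0f * 3.14159265f / 180.0f;
    const float halfWidthMeters = hudDepthMeters * std::tan(halfAngleRad);

    std::printf("HUD quad: %.2f m wide at %.2f m depth\n",
                2.0f * halfWidthMeters, hudDepthMeters);
}
```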

They did continue to use full-screen menus, but discovered that players were more comfortable when they still had head tracking and were not locked into a static view that forced them to see nothing but the menu.


Handling the targeting reticle is a classic problem for stereo games, and their solution was to cast a ray and render the reticle at the distance of the targeted object. This pops its depth in and out of the scene as you move your view, but in practice it sounds like most players were unaware of it, and even after being told what was being done they didn't necessarily recognize the effect.
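A minimal sketch of that technique, with a hypothetical traceAimRay() standing in for the game's actual hit test:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Hypothetical stand-in for the game's hit test along the aim ray;
// returns the distance to the first surface the ray hits.
float traceAimRay(const Vec3& origin, const Vec3& dir) {
    (void)origin; (void)dir;
    return 12.0f; // pretend the ray hit a wall 12 m away
}

int main() {
    const Vec3 muzzle = {0.0f, 1.6f, 0.0f};
    const Vec3 aimDir = {0.0f, 0.0f, 1.0f};

    // Render the reticle at the hit distance so its stereo disparity
    // matches the object the player is actually aiming at.
    const float d = traceAimRay(muzzle, aimDir);
    const Vec3 reticle = { muzzle.x + aimDir.x * d,
                           muzzle.y + aimDir.y * d,
                           muzzle.z + aimDir.z * d };
    std::printf("reticle at (%.1f, %.1f, %.1f)\n",
                reticle.x, reticle.y, reticle.z);
}
```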

Joe then went over the various experiments they tried for player input. How to combine head tracking, mouse, and keyboard is a big design space in VR, and they set up a number of modes in TF2 that you can switch between to see which works best for you.

Input mode 0 has you aim and steer with your nose. The mouse or control pad just rotates your torso.

Mode 1 has you aim with your nose but move your body with the mouse. There is some drift in the Rift's tracking, and it sounded like players would sometimes get confused about which direction their 'body' was pointed while they were looking in another direction.

Modes 2, 3, and 4 experimented with a vertical band around the center of the screen where the reticle could move freely; if it reached the edge of the band, it would pull the view along with it. The default they ended up shipping was mode 3, which I think I understood has the look/move direction tied to the torso. Play with the various modes to see what works for you and what ideas you might try in your own games.
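Here is a hedged sketch of the dead-zone idea behind those modes: the aim yaw moves freely inside a band around the view center, and any excess past the band's edge drags the torso yaw along with it. The 15-degree half-width is illustrative, not TF2's tuned value:

```cpp
#include <cstdio>

int main() {
    float torsoYaw = 0.0f;              // direction the body/view faces
    float aimYaw   = 0.0f;              // aim offset relative to torso
    const float bandHalfWidth = 15.0f;  // degrees of free reticle movement

    const float mouseDeltas[] = {5.0f, 5.0f, 10.0f, -3.0f};
    for (float delta : mouseDeltas) {
        aimYaw += delta;
        // Past the band's edge, the excess rotates the torso instead.
        if (aimYaw >  bandHalfWidth) { torsoYaw += aimYaw - bandHalfWidth; aimYaw =  bandHalfWidth; }
        if (aimYaw < -bandHalfWidth) { torsoYaw += aimYaw + bandHalfWidth; aimYaw = -bandHalfWidth; }
        std::printf("torso %6.1f deg, aim offset %5.1f deg\n", torsoYaw, aimYaw);
    }
    // Head tracking would be layered on top of torsoYaw for the final view.
}
```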

The last topic Joe covered was VR motion sickness. It's very real, and the majority of players experience it to some degree. The symptoms vary, but as developers it is important to be sensitive to it and to minimize the things that make people the most uncomfortable. The first thing they realized was that taking orientation control away and/or animating it without player input, as is typically done in death-camera sequences or cutscenes, is really disorienting. This is most evident when introducing roll or moving the camera sideways without the gamer's control or intent. Even when players are in control of their view, certain movements in the game, such as going up or down stairs or ramps, seem to bother players, perhaps because they are moving in two directions simultaneously.
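The comfort rule that falls out of this is to apply the head-tracked rotation 1:1 on top of whatever the game does to the body, and never to scale, smooth, or animate it. A tiny sketch of that composition, with made-up values:

```cpp
#include <cstdio>

int main() {
    float bodyYaw        = 90.0f;   // the game may turn or move the body...
    float trackedHeadYaw = -20.0f;  // ...but head rotation comes only from
                                    // the tracker, never from an animation
    float viewYaw = bodyYaw + trackedHeadYaw;  // 1:1, no scaling or easing
    std::printf("final view yaw: %.1f deg\n", viewYaw);
}
```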


To summarize, these were the parting topics Joe wanted to reiterate:

  • Eliminate latency
  • Buy a splitter
  • Fix your screen-space effects
  • Fix your player weapon models
  • Pre-distort in a shader (see the sketch after this list)
  • Eliminate the HUD if you can
  • Draw the HUD in stereo if you can't
  • Draw the crosshair at the aim depth
  • Include a way to turn around with the mouse
  • Give people some aiming without head motion
  • Don't mess with the horizon. Ever.
  • Keep view rotation 1:1 with head tracking
  • Don't slide the camera sideways
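On the "pre-distort in a shader" point: the idea is to warp the rendered image with a radial polynomial so the lenses stretch it back to normal. A small CPU-side sketch of that UV warp; real code runs per pixel in a fragment shader, and the k1/k2 coefficients here are illustrative, not the Rift's calibrated values:

```cpp
#include <cstdio>

// Barrel-distort texture coordinates about the image center using a
// radial polynomial: scale = 1 + k1*r^2 + k2*r^4.
void distortUV(float u, float v, float& du, float& dv) {
    const float k1 = 0.22f, k2 = 0.24f;      // example coefficients
    const float x = u - 0.5f, y = v - 0.5f;  // center the coordinates
    const float r2 = x * x + y * y;
    const float scale = 1.0f + r2 * (k1 + k2 * r2);
    du = 0.5f + x * scale;
    dv = 0.5f + y * scale;
}

int main() {
    float du, dv;
    distortUV(0.9f, 0.5f, du, dv);
    std::printf("(0.90, 0.50) -> (%.3f, %.3f)\n", du, dv);
}
```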

Great stuff! Next up, Kris had a chance to go through the GDC exhibit floor and spot some gems to look forward to. Come back for more!