GDC 2013 in 3D, Part II

What We Learned Porting Team Fortress 2 to Virtual Reality
Joe Ludwig (Valve Software)


Joe was involved with the Valve team's effort to support VR in Team Fortress 2. The game is free to play, and with the current update you simply need to add -vr to the command line launch options to run in Virtual Reality mode with support for the Oculus Rift.


They worked on it with the NVIS ST-50 headset as well as early duct-tape-and-love prototype versions of the Rift that Palmer would send them. Joe seemed pretty excited about the actual production versions of the devkits, and maybe a little jealous of the developers who will get to use them without knowing the joy of working with early prototype hardware.


Two specific recommendations he had were to turn off desktop effects in Windows and to get a DVI splitter. The desktop effects can introduce latency, and with a splitter you can run the headset and simultaneously see what's being displayed on a monitor. At Valve they use an Aluratek model, which you can find online for around $80. With whatever splitter you get, you may have to experiment with the connections and/or particular power-on sequences to ensure that the proper EDID data gets to the right devices. You'll figure it out.

Once you have your Rift and development environment all set up, what are the critical pieces involved in porting your game to VR?
  • Latency
  • Stereo Rendering
  • User Interface
  • Input
  • VR Motion Sickness
The first topic, latency, is super important, but for the sake of time Joe did not cover it in the talk and instead provided these links to reference material:
http://www.altdevblogaday.com/2013/02/22/latency-mitigation-strategies/
http://blogs.valvesoftware.com/abrash/latency-the-sine-qua-non-of-ar-and-vr/
Google "John Carmack latency" and "Michael Abrash latency"

Stereo rendering on the Rift is done with a 1280x800 panel, split into 640x800 per eye – but in practice the visible area is less than that, and because of the lens distortion and its correction, each eye needs to be rendered at a higher resolution in the pipeline. In the end you need two virtual cameras that respect the interpupillary distance (IPD) set for the user at the time.
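
To make the two-camera idea concrete, here is a minimal sketch of per-eye camera placement, in C++ since TF2's engine is C++. This is not Valve's code (the types and names are illustrative); it just shows each eye offset by half the IPD along the head's local right axis.

    // Minimal sketch of per-eye camera placement (illustrative, not Valve's code).
    // Each eye sits half the IPD from the head position along the head's local
    // right axis: -IPD/2 for the left eye, +IPD/2 for the right.
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    struct Head {
        Vec3 position; // head position in world space
        Vec3 right;    // head's local right axis (unit length)
    };

    // 'eye' is -1 for the left eye, +1 for the right.
    Vec3 eyePosition(const Head& head, float ipdMeters, int eye)
    {
        float half = 0.5f * ipdMeters * static_cast<float>(eye);
        return { head.position.x + head.right.x * half,
                 head.position.y + head.right.y * half,
                 head.position.z + head.right.z * half };
    }

    int main()
    {
        Head head = { {0.0f, 1.7f, 0.0f}, {1.0f, 0.0f, 0.0f} };
        float ipd = 0.064f; // 64 mm, a common default

        Vec3 l = eyePosition(head, ipd, -1);
        Vec3 r = eyePosition(head, ipd, +1);
        std::printf("left x=%.4f right x=%.4f\n", l.x, r.x); // -0.0320 / 0.0320
        return 0;
    }

Each camera then renders with its own projection into its half of the panel.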

In the regular version of TF2 they use a player weapon model for the first-person character that just includes the gun, hands, and arms to the elbow. Normally that moves with the screen and you never see where the geometry ends. But in VR mode, with the wider FOV, it was too easy to look around and see that the model was incomplete. They ended up using the third-person model so you could look down and see the entire body. I know we did similar tricks with the player model for the cockpit camera mode in Midnight Club: Los Angeles – but we had to eliminate the character's head so there were no clipping issues with geometry in the same place as the cameras. Even though Joe said they were using the full third-person model, I suspect they did have to do some geometry elimination.

Getting the world and characters to look good in the stereoscopic view from the headset sounded like it was pretty straightforward, with the exception of full-screen effects. Almost none of them worked right away, and they did require some effort to get working in stereo. That is a pretty common problem for anyone who has worked on porting a game to stereoscopic 3D.
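
The usual fix, and this is my own sketch rather than anything shown in the talk, is to stop treating a post effect as one pass over a single framebuffer and instead run it once per eye with that eye's viewport and projection, since screen-space math that assumes one centered camera breaks in stereo.

    // Sketch of running a full-screen post effect per eye instead of once over
    // the whole framebuffer. Eye, Viewport, and runPostEffect() are illustrative
    // stand-ins for whatever the engine actually provides.
    #include <cstdio>

    struct Viewport { int x, y, width, height; };

    struct Eye {
        Viewport viewport;     // this eye's half of the 1280x800 panel
        float projection[16];  // this eye's (possibly asymmetric) projection
    };

    // Stand-in for a screen-space pass (bloom, color grading, etc.). The key
    // point is that it receives per-eye parameters instead of assuming one
    // centered full-screen view.
    void runPostEffect(const Eye& eye)
    {
        std::printf("post pass over viewport x=%d width=%d\n",
                    eye.viewport.x, eye.viewport.width);
    }

    int main()
    {
        Eye eyes[2] = {
            { {0,   0, 640, 800}, {} },  // left half of the panel
            { {640, 0, 640, 800}, {} },  // right half of the panel
        };
        for (const Eye& eye : eyes)      // run the pass once per eye
            runPostEffect(eye);
        return 0;
    }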


The user interface sounds like it presented the most aesthetic challenges. The first set revolves around conflicting depth cues. When you look at a scene in 3D, these are the factors that help you identify the relative positions of objects:
  • Size
  • Occlusion
  • Parallax
  • Convergence
  • Perspective
  • Distance fog
  • Stereo disparity
  • Focal depth

Putting user interface elements in the player's view at any depth typically introduces conflicts in occlusion and convergence. These mismatched depth cues make it confusing or distracting to have UI on a virtual HUD, as your eyes switch back and forth between the UI information and the general scene. In the end, the TF2 HUD was basically shrunk down and positioned within the low-distortion, high-resolution usable display space near the center of the player's view, to keep it legible and convey the information the player needs.
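
As a rough illustration of that shrink-it-into-the-sweet-spot idea (my own sketch, with a guessed 30-degree width, not numbers from the talk), you can size a world-space HUD quad so it spans a chosen angular width at a fixed depth in front of the head:

    // Sketch: size a world-space HUD quad so it covers a given angular width at
    // a fixed depth in front of the viewer. The 30-degree width is an
    // illustrative guess at a "low-distortion center" region.
    #include <cmath>
    #include <cstdio>

    float hudWidthAtDepth(float angularWidthDegrees, float depthMeters)
    {
        const float kDegToRad = 3.14159265f / 180.0f;
        // Half-angle on each side of center, doubled back to full width.
        return 2.0f * depthMeters * std::tan(0.5f * angularWidthDegrees * kDegToRad);
    }

    int main()
    {
        float depth = 1.0f;                          // place the HUD 1 m away
        float width = hudWidthAtDepth(30.0f, depth); // ~0.54 m wide at 1 m
        std::printf("HUD quad width at %.1f m: %.2f m\n", depth, width);
        return 0;
    }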

They did continue to use full-screen menus, but discovered that players were more comfortable when they still had head tracking and were not locked into a static view forcing them to see nothing but the menu.


Handling the targeting reticule is a classic problem for stereo games, and their solution was to cast a ray and render the reticule at the distance of the targeted object. This makes its apparent depth pop in and out as you move your view, but in practice it sounds like most players were unaware of it, and even after being told that's what was being done they didn't necessarily recognize the effect.
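
A minimal sketch of the technique as described: cast a ray, then place the crosshair at the hit distance so its stereo disparity matches the target. The traceRay() stand-in below just intersects a ground plane where a real game would call its collision system, and the depth clamp is my own assumption to keep no-hit rays from pushing the crosshair to infinity.

    // Sketch of crosshair-at-aim-depth. traceRay() is a stand-in for the
    // engine's real collision raycast; here it just intersects the y = 0 plane.
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

    // Stand-in world query: distance along the ray to the y = 0 plane,
    // or "very far" if the ray points up and hits nothing.
    float traceRay(Vec3 origin, Vec3 dir)
    {
        if (dir.y >= 0.0f) return 1.0e9f;
        return -origin.y / dir.y;
    }

    Vec3 crosshairWorldPosition(Vec3 eye, Vec3 aimDir)
    {
        const float kMaxDepth = 100.0f;    // assumption: clamp no-hit distance
        float d = traceRay(eye, aimDir);
        if (d > kMaxDepth) d = kMaxDepth;
        return add(eye, scale(aimDir, d)); // render the crosshair quad here
    }

    int main()
    {
        Vec3 eye    = {0.0f, 1.7f, 0.0f};    // standing eye height
        Vec3 aimDir = {0.0f, -0.1f, 0.995f}; // aiming slightly downward
        Vec3 p = crosshairWorldPosition(eye, aimDir);
        std::printf("crosshair at (%.2f, %.2f, %.2f)\n", p.x, p.y, p.z);
        return 0;
    }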

Joe then went over the various experiments they tried for the actual player input. How to combine head tracking, mouse, and keyboard is a big design domain for VR, and they set up a number of modes in TF2 that you can switch between to see which works best for you.

Input mode 0 has you aim and steer with your nose. The mouse or control pad just rotates your torso.

Mode 1 has you aim with your nose, but move your body with the mouse. There is some drift in the Rift tracking and it sounded like sometimes players would get a little confused as to which direction their 'body' was pointed when they were looking in another direction.

Modes 2, 3, and 4 experimented with a vertical band around the center of the screen where the reticule could move freely, but if it got to the edge it would pull the view along with it. The default they ended up setting was mode 3, which, as I understood it, had the look/move direction tied to the torso. Play with the various modes and see what works for you and what ideas you have to try in your games.
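
The band idea is easy to sketch. This is my own reconstruction of the behavior as described, yaw only and with an illustrative band width, not Valve's implementation:

    // Sketch of the "free band" aiming idea (modes 2-4), as I understood it:
    // the aim yaw can wander within +/- bandHalfWidth of the view yaw; past the
    // edge, the view is dragged along so the reticule never leaves the band.
    // Angles are in degrees; only yaw is handled, for brevity.
    #include <cstdio>

    struct AimState {
        float viewYaw; // where the camera points
        float aimYaw;  // where the reticule/weapon points
    };

    void applyMouseYaw(AimState& s, float mouseDeltaYaw, float bandHalfWidth)
    {
        s.aimYaw += mouseDeltaYaw;
        float offset = s.aimYaw - s.viewYaw;
        if (offset >  bandHalfWidth) s.viewYaw = s.aimYaw - bandHalfWidth; // drag right
        if (offset < -bandHalfWidth) s.viewYaw = s.aimYaw + bandHalfWidth; // drag left
    }

    int main()
    {
        AimState s = {0.0f, 0.0f};
        const float band = 15.0f; // illustrative half-width

        applyMouseYaw(s, 10.0f, band); // inside the band: view stays put
        std::printf("aim=%.1f view=%.1f\n", s.aimYaw, s.viewYaw); // aim=10.0 view=0.0
        applyMouseYaw(s, 10.0f, band); // past the band: view dragged along
        std::printf("aim=%.1f view=%.1f\n", s.aimYaw, s.viewYaw); // aim=20.0 view=5.0
        return 0;
    }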

The last topic Joe covered was VR motion sickness. It's something very real that the majority of players experience to some degree. The symptoms vary, but being sensitive to it as developers and trying to minimize the things that make people the most uncomfortable is important. The first thing they realized was that taking orientation control away and/or animating it without player input – typically done in death camera sequences or cutscenes – is really disorienting. This is most evident when introducing roll or moving the camera sideways without the gamer's control or intent. Even when players are in control of their view, certain movements in the game – such as going up or down stairs or ramps – seem to bother players, perhaps because they are moving in two directions simultaneously.
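
One concrete practice that follows (and that Joe repeats in the summary below) is keeping view rotation strictly 1:1 with head tracking. A tiny sketch of what that rules out, with a placeholder quaternion type:

    // Sketch: keep the camera's orientation exactly equal to the tracked HMD
    // pose. Quat is a placeholder type; the point is what NOT to do.
    struct Quat { float w, x, y, z; };

    void updateCamera(Quat& cameraOrientation, const Quat& hmdOrientation)
    {
        // Tempting but sickness-inducing: smoothing or scaling the pose, e.g.
        //   cameraOrientation = slerp(cameraOrientation, hmdOrientation, 0.2f);
        // That breaks the 1:1 mapping the talk recommends. Keep it direct:
        cameraOrientation = hmdOrientation;
    }

    int main()
    {
        Quat camera = {1.0f, 0.0f, 0.0f, 0.0f};
        Quat hmd    = {0.98f, 0.0f, 0.17f, 0.0f}; // example tracked pose
        updateCamera(camera, hmd);
        return 0;
    }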


To summarize, these were the parting points Joe wanted to reiterate:

  • Eliminate latency
  • Buy a splitter
  • Fix your screen-space effects
  • Fix your player weapon models
  • Pre-distort in a shader
  • Eliminate the HUD if you can
  • Draw the HUD in stereo if you can't
  • Draw the crosshair at the aim depth
  • Include a way to turn around on the mouse
  • Give people some aiming without head motion
  • Don't mess with the horizon. Ever.
  • Keep view rotation 1:1 with head tracking
  • Don't slide the camera sideways

Great stuff! Next up, Kris had a chance to go through the GDC exhibit floor and spot some gems to look forward to. Come back for more!