GDC 2013 in 3D, Part I

Running the VR Gauntlet – VR Ready, Are You?
Nate Mitchell and Michael Antonov (Oculus VR, Inc.)
DESCRIPTION: Virtual reality may be poised to revolutionize the way we play our favorite games, but creating a great VR game is surprisingly challenging. Developers have to carefully consider latency, user input, rendering performance, UI design, and overall user experience. We'll discuss what developers need to know about supporting the Oculus Rift, how to tackle the major technical hurdles associated with truly immersive virtual reality, and what we've learned so far from building a new platform for VR games.

Nate started off the talk with a general overview of what Oculus is all about - building a platform for VR games. The Oculus Rift is a devkit for game developers that includes a head-mounted display, head tracking, computer interface and software SDK.

Many of us have seen VR demos in the past, various head-mounted displays and other pieces of the puzzle, but for the first time the supporting technology is finally here: lightweight mobile components, MEMS sensors, high resolution mobile screens, fast GPUs. All of this is coming together to allow for affordable hardware that we can use to start seeing what virtual reality can really be.

There are many differences between developing a VR game and developing for traditional monitors. The inputs, gameplay and storytelling are unique in VR, and the ultimate goal is immersion: the player should feel as though he or she is a part of the virtual world.

This really is just the beginning, though; there is so much to learn, and the message was repeated many times that we are at day zero. The development Oculus has done so far has been focused primarily on producing the platform of the Rift hardware and SDK. It's the game developers getting the equipment who are going to be the pioneers discovering what they can do with it. This is a community-driven effort and we are going to learn a lot from each other.

Michael took over on the technical side, explaining how immersion requires linking the motion input to your senses: player interaction has to match up with expected reality. With the Rift, that means taking head tracking input and rendering with the goal of maximizing performance and minimizing the motion-to-photon latency.

On the game design side, immersion is about bringing the player into a different world. Key to that is avoiding the standard conventions that would pull the player out of the experience. Keeping the player in control of at least their point of view, and being careful about how information is displayed, are really important.

In terms of VR input, the Rift today provides head tracking from a system with a gyro, an accelerometer and a magnetometer. The SDK lets you query orientation as a quaternion, which you can apply to the view transformation to look around. Combining that orientation with a model of the head and neck helps a lot to position the cameras and make the motion more natural and convincing. Most of the demos and games so far combine orientation from the head tracking with a controller – roll and pitch from the Rift, yaw blended with the controller input. More advanced input systems would include translation tracking of the head (not in the current Rift), hand tracking, and full body kinematics.
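The head-and-neck model mentioned above can be sketched roughly as follows: rotate a fixed neck-to-eye offset by the tracked head orientation and add it to a neck pivot point. This is only an illustrative sketch, not the Oculus SDK's API; the function names and the offset values are my own assumptions.

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    vx, vy, vz = v
    # t = 2 * cross(q.xyz, v)
    tx = 2.0 * (y * vz - z * vy)
    ty = 2.0 * (z * vx - x * vz)
    tz = 2.0 * (x * vy - y * vx)
    # v' = v + w*t + cross(q.xyz, t)
    return (
        vx + w * tx + (y * tz - z * ty),
        vy + w * ty + (z * tx - x * tz),
        vz + w * tz + (x * ty - y * tx),
    )

def eye_position(neck_base, head_quat,
                 neck_to_eye=(0.0, 0.12, -0.08)):
    """Camera position from a simple neck model: the neck-to-eye offset
    (illustrative values, in meters) pivots with the head orientation."""
    off = quat_rotate(head_quat, neck_to_eye)
    return tuple(b + o for b, o in zip(neck_base, off))
```

With the identity quaternion the eyes sit directly at the fixed offset from the neck pivot; as the player looks around, the cameras swing on the neck arc instead of rotating in place, which is what makes the motion feel natural.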

For stereo rendering, the 7” devkit display is a 1280x800 panel, which gets split into 640x800 per eye. The virtual cameras use parallel Z axes, a field of view (FOV) computed from the HMD, and custom eye separation adjustments. To correct for the lens distortion it's best to render at a much higher resolution of 2170x1360 and then warp the image in a post-process shader:

k0*r + k1*r^3 + k2*r^5 (where r is the distance from the lens center)
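As a quick sanity check on the polynomial above, here is the same radial function in plain Python. The coefficient values are illustrative placeholders of my own, not official Rift calibration data; in practice the shader evaluates this per pixel in normalized lens-centered coordinates.

```python
def distortion_scale(r, k0=1.0, k1=0.22, k2=0.24):
    """Distorted radius as a function of the undistorted radius r
    (distance from the lens center). Coefficients are illustrative,
    not official HMD calibration values."""
    return k0 * r + k1 * r**3 + k2 * r**5
```

Because the higher-order terms grow toward the edge of the lens, pixels near the periphery are pushed outward the most, which is exactly why the source image has to be rendered larger than the panel and then squeezed back by the post-process pass.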

The increased render size required for the relatively low resolution of the HMD panel was something I hadn't realized before, and it's a little daunting to think how large the render would have to be to support the significantly higher resolution panels we hope to see in the future, especially with the performance requirements for maintaining 60Hz without tearing or stuttering. One suggestion Michael had was moving the sensor sampling onto the render thread, which in their experiments showed roughly a 20ms improvement in motion latency.
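A toy model of pose staleness shows where that saving comes from: if the orientation is sampled at the start of the frame, it ages through both simulation and rendering before the image is presented, whereas re-sampling on the render thread leaves only the render time. The timing numbers below are illustrative, not the figures from the talk.

```python
def staleness_early(sim_ms, render_ms):
    """Pose sampled at frame start: it ages through both simulation
    and rendering before the image reaches the screen."""
    return sim_ms + render_ms

def staleness_late(sim_ms, render_ms):
    """Pose re-sampled on the render thread, just before drawing:
    only the render time contributes to its age at present."""
    return render_ms

# With an illustrative 10 ms of simulation and 7 ms of rendering,
# late sampling removes the simulation time from the pose's age:
saved_ms = staleness_early(10.0, 7.0) - staleness_late(10.0, 7.0)  # 10.0
```

The real pipeline has more stages (GPU queueing, scanout), which is why the measured win Michael reported was larger than this simple two-stage model suggests.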

Switching to Nate's part of the presentation, the focus shifted from the technical considerations to the realization that building VR games is fun! In many ways it is different from the way games have been made for traditional monitor/controller systems.

In a VR simulation, the player has a much more accurate intuition for the relative scale of themselves and the environment. Where player size in 2D games has often been set arbitrarily or to accommodate other considerations, players in VR games know right away if they are four feet tall or a giant. Level design and art direction choices that exaggerate scale or sense of speed so they come across on a monitor may well be overwhelming in VR.

Providing information to players is also particularly challenging. In a conventional game, 2D screen-space HUD elements are the common way of showing data to the player, but in VR that becomes problematic. It may be tempting to just shrink an existing HUD and place it in the center of the screen where the player can't help but look at it, but that certainly affects the sense of immersion and introduces potential conflicts or visual contradictions between the depth of the UI elements and the scene. If a static HUD element is drawn at any fixed depth, the player will see it double when their convergence shifts to other elements in the world. Alternate approaches that show the information players need on more natural objects within the scene seem promising, but this is new territory in UI design.

Simulation sickness and individual player tolerance for intense VR experiences are also serious considerations. While there is no single widely accepted cause of simulation sickness, many players do experience symptoms to various degrees, particularly when they are new to VR. Latency, tracking precision, game content and the players themselves are all factors, and developers need to be aware of them. There are lots of things players do in games that would make anyone nauseous if attempted in the real world, and effective VR can have the same result.

If VR is about putting the player into the world, what types of games provide the best experiences? Nate presented two ends of the spectrum with Call of Duty and Flower – both would be very powerful experiences in VR, and obviously very different.

We are at the very beginning.  This is Day Zero for virtual reality gaming.