
GDC 2013 in 3D, Part I

Running the VR Gauntlet – VR Ready, Are You?
Nate Mitchell and Michael Antonov (Oculus VR, Inc.)
DESCRIPTION: Virtual reality may be poised to revolutionize the way we play our favorite games, but creating a great VR game is surprisingly challenging. Developers have to carefully consider latency, user input, rendering performance, UI design, and overall user experience. We'll discuss what developers need to know about supporting the Oculus Rift, how to tackle the major technical hurdles associated with truly immersive virtual reality, and what we've learned so far from building a new platform for VR games.


Nate started off the talk with a general overview of what Oculus is all about - building a platform for VR games. The Oculus Rift is a devkit for game developers that includes a head-mounted display, head tracking, computer interface and software SDK.

Many of us have seen VR demos in the past, various head-mounted displays and other pieces of the puzzle, but for the first time the supporting technology is finally here: lightweight mobile components, MEMS sensors, high-resolution mobile screens, and fast GPUs. All of this is coming together to allow for affordable hardware that we can use to start seeing what virtual reality can really be.

There are lots of differences between developing a VR game and developing for traditional monitors. The inputs, gameplay and storytelling are unique in VR, and the ultimate goal is immersion: the player should feel as though he or she is part of the virtual world.


This really is just the beginning though; there is so much to learn, and the message was repeated many times that we are at day zero. The development Oculus has done so far has focused primarily on the platform itself: the Rift hardware and the SDK. It's the game developers getting the equipment who are going to be the pioneers discovering what they can do with it. This is a community-driven effort, and we are going to learn a lot from each other.

Michael took over on the technical side, explaining that immersion requires linking motion input to the senses: player interaction has to match up with expected reality. With the Rift, that means taking head tracking input and rendering with the goal of maximizing performance and minimizing motion-to-photon latency.

On the game design side, immersion means bringing the player into a different world. Key to that is avoiding the standard conventions that would pull the player out of gameplay. Keeping the player in control of at least their point of view, and being careful about how information is displayed, are really important.

In terms of VR input, the Rift today provides head tracking from a system with a gyroscope, accelerometer, and magnetometer. The SDK lets you query orientation as a quaternion, which you can apply to the view transformation to look around. Combining that information with a model of the head and neck helps a lot to position the cameras and makes the motion more natural and convincing. Most of the demos and games so far combine orientation from the head tracker with a controller: roll and pitch from the Rift, with yaw blended with the controller input. More advanced input systems would add positional tracking of the head (not in the current Rift), hand tracking, and full-body kinematics.
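As a rough sketch of how that combination might look — the function names, the yaw-composition order, and the neck-to-eye offset below are illustrative assumptions, not the actual Oculus SDK API:

```python
import math

def quat_from_axis_angle(axis, angle):
    # unit quaternion for a rotation of `angle` radians about `axis`
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    # Hamilton product: applies rotation b, then rotation a
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate_vec(q, v):
    # rotate vector v by quaternion q: q * (0, v) * conj(q)
    w, x, y, z = q
    p = quat_mul(quat_mul(q, (0.0, v[0], v[1], v[2])), (w, -x, -y, -z))
    return (p[1], p[2], p[3])

def camera_pose(hmd_quat, body_yaw, neck_to_eye=(0.0, 0.12, -0.08)):
    # body yaw from the gamepad, head orientation from the tracker
    yaw = quat_from_axis_angle((0.0, 1.0, 0.0), body_yaw)
    orientation = quat_mul(yaw, hmd_quat)
    # neck model: the eyes sit above and ahead of the neck pivot, so head
    # rotation also translates the camera slightly (offset is a guess)
    eye_offset = rotate_vec(orientation, neck_to_eye)
    return orientation, eye_offset
```

The point of the neck model is that a pure rotation of the camera about its own origin feels wrong; rotating a small eye offset along with the head reproduces the translation your eyes actually undergo when you turn your head.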


For stereo rendering, the 7” devkit display is a 1280x800 panel, which gets split into 640x800 per eye. The virtual cameras use parallel Z axes, a field of view (FOV) computed from the HMD optics, and custom eye-separation adjustments. To compensate for the lens distortion, it's best to render at a much higher resolution of 2170x1360 and then correct the image in a post-process shader using a radial warp:

r' = k0*r + k1*r^3 + k2*r^5 (where r is the radius from the lens center)
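A minimal sketch of that polynomial warp as it might appear in the post-process pass — the k coefficients here are illustrative placeholders, not the actual devkit calibration values:

```python
def distortion_scale(r, k0=1.0, k1=0.22, k2=0.24):
    """Radial scale factor r'/r from r' = k0*r + k1*r^3 + k2*r^5."""
    r2 = r * r
    return k0 + k1 * r2 + k2 * r2 * r2

def warp_uv(u, v, center=(0.5, 0.5)):
    # offset of this texture coordinate from the lens center
    du, dv = u - center[0], v - center[1]
    r = (du * du + dv * dv) ** 0.5
    s = distortion_scale(r)
    # push the sample outward; the lens's pincushion bends it back,
    # so the two distortions cancel for the eye
    return (center[0] + du * s, center[1] + dv * s)
```

In the real pipeline this runs per-fragment in a shader, sampling the oversized render target; the outward push is why the source render has to be so much larger than the 640x800 panel half.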

The increased render size needed for the relatively low-resolution HMD panel was something I hadn't realized before, and it's a little daunting to consider how large the render target would have to be to support the significantly higher-resolution panels we hope to see in the future, especially given the performance requirements for maintaining 60Hz without tearing or stuttering. One suggestion Michael had was moving the sensor sampling onto the render thread, which in their experiments yielded roughly a 20ms improvement in motion latency.
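The late-sampling idea can be sketched as follows — the sensor and renderer here are hypothetical stand-ins; the point is only where in the loop the pose is read:

```python
class FakeSensor:
    """Stand-in for an IMU driver that exposes the latest fused orientation."""
    def __init__(self):
        self.latest = (1.0, 0.0, 0.0, 0.0)

    def read_orientation(self):
        return self.latest

def render_frame(pose):
    # stand-in for building the view matrix from `pose` and issuing draw calls
    return pose

def render_loop(sensor, frames):
    rendered = []
    for _ in range(frames):
        # sample the tracker on the render thread itself, as late as possible,
        # so the pose baked into this frame's view matrix is as fresh as it
        # can be when the frame reaches the display
        pose = sensor.read_orientation()
        rendered.append(render_frame(pose))
    return rendered
```

Sampling on a separate input thread earlier in the frame would add the whole frame's CPU time to the motion-to-photon path, which is consistent with the ~20ms figure reported above.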

Switching to Nate's part of the presentation, the focus shifted from the technical considerations to the realization that building VR games is fun! In many ways it is different from the way games have been made for traditional monitor/controller systems.

In a VR simulation, the player has a much more accurate intuition for the relative scale of themselves and the environment. Where player size in 2D games has often been set arbitrarily or to accommodate other design considerations, players in VR games know right away if they are four feet tall or a giant. Likewise, level design and art direction that exaggerate scale or sense of speed to come across on a monitor may well be overwhelming in VR.


Providing information to players is also particularly challenging. In a conventional game, 2D screen-space HUD elements are the common way of showing data to the player, but in VR that becomes problematic. It may be tempting to just shrink an existing HUD and place it in the center of the screen where the player can't help but look at it, but that certainly affects the sense of immersion and introduces visual contradictions between the depth of UI elements and the scene. If a static HUD element is drawn at any fixed depth, the player will see it double when their convergence shifts to other elements in the world. Alternate approaches that show the information players need on more natural objects within the scene seem promising, but this is new territory in UI design.

Simulation sickness and individual player tolerance for intense VR experiences is also a serious consideration. While there is no single widely accepted cause of simulation sickness, many players do experience symptoms to various degrees particularly when they start with VR. The factors of latency, tracking precision, game content and the player themselves are all involved and something that developers need to be aware of. There are lots of things that players do in games that would make any person nauseous if they attempted them in the real world, and effective VR can have the same result.

If VR is about putting the player into the world, what types of games provide the best experiences? Nate presented two ends of the spectrum with Call of Duty and Flower: both would be very powerful experiences in VR, and obviously very different.

We are at the very beginning.  This is Day Zero for virtual reality gaming.