GDC 2013 in 3D, Part I

Running the VR Gauntlet – VR Ready, Are You?
Nate Mitchell and Michael Antonov (Oculus VR, Inc.)
DESCRIPTION: Virtual reality may be poised to revolutionize the way we play our favorite games, but creating a great VR game is surprisingly challenging. Developers have to carefully consider latency, user input, rendering performance, UI design, and overall user experience. We'll discuss what developers need to know about supporting the Oculus Rift, how to tackle the major technical hurdles associated with truly immersive virtual reality, and what we've learned so far from building a new platform for VR games.


Nate started off the talk with a general overview of what Oculus is all about: building a platform for VR games. The Oculus Rift is a devkit for game developers that includes a head-mounted display, head tracking, a computer interface, and a software SDK.

Many of us have seen VR demos in the past, various head-mounted displays and other pieces of the puzzle, but for the first time the supporting technology is finally here: lightweight mobile components, MEMS sensors, high-resolution mobile screens, fast GPUs. All of this is coming together to allow for affordable hardware that we can use to start seeing what virtual reality can really be.

There are many differences between developing a VR game and developing games for traditional monitors. The inputs, gameplay, and storytelling are unique in VR, and the ultimate goal is immersion: the player should feel as though he or she is a part of the virtual world.


This really is just the beginning, though; there is so much to learn, and the message was repeated many times that we are at day zero. The development Oculus has done so far has focused primarily on producing the platform: the Rift hardware and the SDK. It's the game developers getting the equipment who are going to be the pioneers, discovering what they can do with it. This is a community-driven effort, and we are going to learn a lot from each other.

Michael took over on the technical side, explaining how immersion requires linking motion input to your senses: player interaction has to match up with expected reality. With the Rift, that means head-tracking input and rendering aimed at maximizing performance and minimizing motion-to-photon latency.

On the game design side, immersion means bringing the player into a different world. Key to that is avoiding the standard conventions that would break the player out of the experience. Keeping the player in control of at least their point of view, and being deliberate about how information is displayed, are really important.

In terms of VR input, the Rift today provides head tracking from a system with a gyroscope, an accelerometer, and a magnetometer. The SDK lets you query orientation as a quaternion, which you apply to the view transformation to look around. Combining that orientation with a model of the head and neck helps a lot to position the cameras and make the motion more natural and convincing. Most of the demos and games so far combine orientation from the head tracker with a controller: roll and pitch come from the Rift, while yaw blends the Rift with controller input. More advanced input systems would add positional tracking of the head (not in the current Rift), hand tracking, and full-body kinematics.
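
To make that concrete, here is a minimal sketch of the idea, with hand-rolled quaternion math for self-containment. The orientation query, the neck-to-eye offsets, and all of the names below are illustrative assumptions, not the Oculus SDK's actual API or calibration values.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    struct Quat {
        float w, x, y, z;

        // Hamilton product: applies rotation b, then this rotation.
        Quat operator*(const Quat& b) const {
            return { w*b.w - x*b.x - y*b.y - z*b.z,
                     w*b.x + x*b.w + y*b.z - z*b.y,
                     w*b.y - x*b.z + y*b.w + z*b.x,
                     w*b.z + x*b.y - y*b.x + z*b.w };
        }

        // Rotate a vector by this unit quaternion (q * v * q^-1).
        Vec3 rotate(const Vec3& v) const {
            Quat p{0.0f, v.x, v.y, v.z};
            Quat c{w, -x, -y, -z};
            Quat r = (*this) * p * c;
            return { r.x, r.y, r.z };
        }
    };

    // Rotation about the +Y (up) axis, for yaw driven by the controller.
    Quat yawQuat(float radians) {
        return { std::cos(radians * 0.5f), 0.0f, std::sin(radians * 0.5f), 0.0f };
    }

    // hmdOrientation: the quaternion queried from the head tracker each frame.
    // bodyYaw: yaw accumulated from controller input.
    // neckPivot: base of the neck in world space.
    Vec3 computeEyePosition(const Quat& hmdOrientation, float bodyYaw,
                            const Vec3& neckPivot) {
        const Vec3 neckToEye{ 0.0f, 0.12f, -0.08f };   // ~12 cm up, 8 cm forward (illustrative)
        Quat view = yawQuat(bodyYaw) * hmdOrientation; // controller yaw composed with head tracking
        Vec3 offset = view.rotate(neckToEye);          // the eyes swing around the neck pivot
        return { neckPivot.x + offset.x, neckPivot.y + offset.y, neckPivot.z + offset.z };
    }

Rotating the neck-to-eye offset by the combined orientation is what makes head motion feel natural: the cameras sweep through an arc around the neck pivot rather than spinning in place.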


For stereo rendering, the 7” devkit display is a 1280x800 panel, which gets split into 640x800 per eye. The virtual cameras use parallel Z axes, a field of view (FOV) computed from the HMD's optics, and custom eye-separation adjustments. To correct for lens distortion, it's best to render at a much higher resolution of 2170x1360 and then warp the image in a post-process shader:

r' = k0*r + k1*r^3 + k2*r^5   (where r is the distance from the lens center)
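
As a rough illustration of that pipeline, the sketch below sets up the per-eye viewports and applies the radial warp. The eye-separation value and the k0..k2 coefficients are placeholder assumptions, not real calibration data, and none of the names come from the SDK.

    struct Vec2 { float x, y; };

    // Each eye gets half of the 1280x800 panel:
    //   left  = (0,   0, 640, 800), camera shifted half the eye separation left
    //   right = (640, 0, 640, 800), camera shifted half the eye separation right
    struct EyeViewport { int x, y, w, h; float eyeOffsetX; };

    const EyeViewport kLeftEye  { 0,   0, 640, 800, -0.032f };  // ~64 mm separation assumed
    const EyeViewport kRightEye { 640, 0, 640, 800, +0.032f };

    // Radial distortion warp applied per pixel in the post-process pass.
    // r' = k0*r + k1*r^3 + k2*r^5, i.e. scale = k0 + k1*r^2 + k2*r^4.
    Vec2 distort(Vec2 p /* lens-centered coordinates */,
                 float k0, float k1, float k2) {
        float r2 = p.x * p.x + p.y * p.y;          // r^2 from the lens center
        float scale = k0 + r2 * (k1 + r2 * k2);
        return { p.x * scale, p.y * scale };
    }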

The increased render size relative to the low resolution of the HMD panel was something I hadn't realized before, and it's a little daunting to consider how large the render target would have to be to support the significantly higher-resolution panels we hope to see in the future, especially given the performance requirements of maintaining 60Hz without tearing or stuttering. One suggestion Michael had was moving the sensor sampling onto the render thread, which in their experiments showed a ~20ms improvement in motion latency.
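
A hedged sketch of that late-sampling idea, with placeholder types standing in for the real sensor and renderer interfaces:

    struct Quat { float w, x, y, z; };

    struct SensorReader {
        // Stub: the real call would read the latest fused IMU orientation.
        Quat queryOrientation() { return { 1.0f, 0.0f, 0.0f, 0.0f }; }
    };

    struct Frame { /* simulation results produced on the game thread */ };

    void renderFrame(SensorReader& sensors, const Frame& frame) {
        // The game thread simulated `frame` with an older head pose. On the
        // render thread, re-sample the tracker just-in-time so the view
        // matrix reflects the freshest head motion (~20 ms reported win).
        Quat latest = sensors.queryOrientation();
        // ... build the view matrix from `latest`, then draw `frame` ...
        (void)latest; (void)frame;   // placeholders for the actual rendering
    }

The gameplay simulation still uses the older pose; only the camera transform is refreshed at the last moment, which is what shaves time off the motion-to-photon path.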

Switching to Nate's part of the presentation, the focus shifted from the technical considerations to the realization that building VR games is fun! In many ways it is different from the way games have been made for traditional monitor/controller systems.

In a VR simulation, the player has a much more accurate intuition for the relative scale of themselves and the environment. Whereas in 2D games player size has often been set arbitrarily or to accommodate other design considerations, players in VR know right away whether they are four feet tall or a giant. Other level-design and art-direction choices that exaggerate scale or sense of speed so they come across on a monitor may well be overwhelming in VR.


Providing information to players is also particularly challenging. In a conventional game, 2D screen-space HUD elements are the common way of presenting data to the player, but in VR that becomes problematic. It may be tempting to just shrink an existing HUD and place it in the center of the screen where the player can't help but look at it, but that certainly hurts the sense of immersion and introduces visual contradictions between the depth of the UI elements and the scene. If a static HUD element is drawn at a fixed depth, the player will see it doubled whenever their convergence shifts to other elements in the world. Alternate approaches that present the information players need on more natural objects within the scene seem promising, but they are new territory in UI design.
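
One simple variant of that idea is to anchor UI in the world at a fixed, comfortable depth in front of the player rather than in screen space, so both eyes converge on it at a single consistent depth. The sketch below is an illustrative assumption about how that might look, not a technique demonstrated in the talk.

    struct Vec3 { float x, y, z; };

    // viewPos and viewForward come from the tracked head pose (forward is
    // assumed normalized). Placing the panel at one fixed distance avoids
    // the double-image problem a depthless screen-space HUD creates.
    Vec3 uiPanelPosition(const Vec3& viewPos, const Vec3& viewForward) {
        const float uiDistance = 1.5f;   // meters; an assumed comfortable depth
        return { viewPos.x + viewForward.x * uiDistance,
                 viewPos.y + viewForward.y * uiDistance,
                 viewPos.z + viewForward.z * uiDistance };
    }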

Simulation sickness and individual player tolerance for intense VR experiences is also a serious consideration. While there is no single widely accepted cause of simulation sickness, many players do experience symptoms to varying degrees, particularly when they are new to VR. Latency, tracking precision, game content, and the players themselves are all factors, and something developers need to be aware of. There are lots of things players do in games that would make anyone nauseous if they attempted them in the real world, and effective VR can have the same result.

If VR is about putting the player into the world, what types of games provide the best experiences? Nate presented two ends of the spectrum with Call of Duty and Flower: both would be very powerful experiences in VR, and obviously very different.

We are at the very beginning. This is Day Zero for virtual reality gaming.