Now that the GDC Vault is online, it’s an opportune time to share a summary of what was presented at GDC in the VR and immersive technology space. Props to MTBS Field Writer Kris Roberts for putting this together!
The Dawn of Mobile VR
Speaker: John Carmack (Oculus)
The dawn of consumer mobile VR is close. Come hear the technical details of making mobile VR a reality; techniques and strategies for maximizing the quality of your VR games, applications, and experiences; and thoughts about the future of VR, including what it means for the mobile ecosystem. Q&A to follow (until chased out).
Takeaway: Game developers will walk away with a better understanding of mobile VR, techniques and strategies for developing mobile VR content, and what the future of consumer VR might look like.
You can watch the full video of John’s talk on the Oculus Twitch channel:
http://www.twitch.tv/oculus/v/3862049
Last year I was surprised to hear the announcement of the Samsung Gear VR and Oculus’ departure from the high-end, PC-driven Rift to a mobile-powered HMD as their first consumer offering. After all the emphasis on precision tracking, ultra-low latency, and crushing GPU requirements, the decision to release a phone-based mobile product was kind of shocking. Hearing that John Carmack was heading up the mobile VR effort within Oculus helped increase my confidence that the results would be good.
It was great to have the opportunity to hear directly from John what his motivations were for pursuing mobile VR and where he sees it going. I would strongly recommend watching the presentation on Twitch, since he will clearly do a better job of getting the points across than I can.
If you want the TL;DR summary, I would say that the first “Innovator Edition” of the Samsung Gear VR was a success. The units sold well, had very few returns, and developers have been actively creating content for them. If you are looking to do a VR project in the next 12 months, the clear indication is that a second version of the Samsung product will be released before the end of the year, and that it will “go wide” with much stronger marketing and sales potential than the first. If nothing holds them back, Samsung could have demo units in practically every cell phone retail store and move MANY units. Developers looking to be in on the first wave of consumer VR would do well to view the mobile platform as a viable path into a large potential consumer market.
VR Direct: How Nvidia Technology is Improving the VR Experience (Presented by Nvidia)
Speakers: Nathan Reed (Nvidia), Dean Beeler (Oculus)
Virtual reality is the next frontier of gaming, and Nvidia is leading the way by introducing VR Direct, a set of hardware and software technologies designed to cut down graphics latency and accelerate stereo rendering performance. In this talk, we’ll show how developers can use Nvidia GPUs and VR Direct to improve the gaming experience on the Oculus Rift and other VR headsets.
Slides from the presentation will be available here:
https://developer.nvidia.com/gdc-2015
VR Direct is an umbrella term for various Nvidia technologies that are designed to help with some of the hard problems in VR. The main two topics discussed in this talk were “Asynchronous Timewarp” to reduce latency and “VR SLI” for stereo rendering.
The ideas of “Timewarp” and “Asynchronous Timewarp” were discussed a lot this year at GDC. The concept is to re-sample and re-position the frame about to be displayed using the most up-to-date rotation data available. That way, the image the viewer sees is in the right place even if they were moving their head while it was being rendered and displayed. It only helps for rotation, not translation, and it also won’t help with lag in animations. In the ideal case, the game should be hitting the native framerate (90Hz), and timewarp would only be a safety net to avoid tearing when an extreme case or special effect caused the game to lag. What Nvidia is bringing to the table is driver-level support for a high-priority context and preemption (at the draw level) to enable practical Asynchronous Timewarp implementations.
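To make the rotation-only correction concrete, here is a minimal sketch of the core step: computing the delta rotation between the head pose the frame was rendered with and the freshest pose available just before scan-out. This is purely illustrative C++ with made-up types and values, not the Oculus or Nvidia implementation; a real warp pass would turn this delta into a matrix and apply it when sampling the rendered eye buffer.

```cpp
// Illustrative only: the "delta rotation" step behind orientation timewarp.
// Not the Oculus or Nvidia implementation; types and sample values are made up.
#include <cstdio>

struct Quat { float w, x, y, z; };

// Conjugate of a unit quaternion is its inverse.
Quat conjugate(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }

// Hamilton product: composes two rotations.
Quat mul(const Quat& a, const Quat& b) {
    return {
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w
    };
}

int main() {
    Quat renderPose  = { 0.999f, 0.0f, 0.044f, 0.0f };  // head orientation the frame was rendered with
    Quat displayPose = { 0.996f, 0.0f, 0.087f, 0.0f };  // freshest orientation just before scan-out

    // How far the head turned while the frame was in flight. A real warp pass
    // would convert this to a matrix and apply it when sampling the eye buffer.
    Quat delta = mul(displayPose, conjugate(renderPose));
    std::printf("delta rotation: w=%.3f x=%.3f y=%.3f z=%.3f\n",
                delta.w, delta.x, delta.y, delta.z);
    return 0;
}
```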
To help with stereo rendering, Nvidia is suggesting Multiview rendering and Stereo SLI, sharing as much work as possible between the common stages of the render pipeline. The goal with Multiview is to take advantage of the fact that a stereo view shows almost the same visible objects, uses almost the same render commands, and has the driver doing almost the same internal work. Keeping all the stages separate is the most flexible but the least optimizable; combining stages can improve optimization at the cost of some flexibility. Adding a second graphics card to do the stereo rendering in SLI sounds good, but it’s unrealistic to expect a 2x performance increase. Part of the solution Nvidia is providing is a dedicated copy engine that allows non-dependent rendering to continue while the data from the second card is blitted between GPUs over the PCIe bus.
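As a rough illustration of the “share the common stages” idea, here is a hedged sketch in C++. All types and function names are stand-ins of my own, not Nvidia’s API: the visibility and draw-list work is done once, and only the per-eye camera constants (and, under VR SLI, the target GPU) differ at submission time.

```cpp
// Illustrative sketch of sharing the common stages of stereo rendering.
// All types and function names are stand-ins, not Nvidia's API.
#include <cstdio>
#include <string>
#include <vector>

struct Camera { float viewProj[16]; };              // per-eye view/projection constants
using DrawCommand = std::string;                    // stand-in for a real command-buffer entry

// Shared stage: visibility and command generation done once for both eyes.
std::vector<DrawCommand> buildDrawList(const std::vector<std::string>& visibleObjects) {
    std::vector<DrawCommand> cmds;
    for (const auto& obj : visibleObjects) cmds.push_back("draw " + obj);
    return cmds;
}

// Per-eye stage: only the camera constants (and the target GPU, under VR SLI) change.
void submitForEye(int eyeIndex, const Camera&, const std::vector<DrawCommand>& cmds) {
    for (const auto& c : cmds) std::printf("eye %d: %s\n", eyeIndex, c.c_str());
}

int main() {
    std::vector<std::string> visible = { "terrain", "dragon", "torch" };  // assume culled against a combined frustum
    std::vector<DrawCommand> cmds = buildDrawList(visible);               // built once, shared by both eyes
    Camera eyes[2] = {};                                                  // left/right matrices would differ in practice
    for (int i = 0; i < 2; ++i) submitForEye(i, eyes[i], cmds);
    return 0;
}
```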
Admittedly, what was discussed in this session is all very much on the bleeding edge and hot out of the oven – but it’s really good to see VR-specific support in low-level API functions in the Nvidia driver to help developers get the high performance required for good VR.
Technology-Infused Storytelling: VR Challenges That Lie Ahead (Presented by Epic Games)
Speakers: Nick Donaldson (Epic Games), Alasdair Coull (Weta Digital), Tim Elek (Epic Games), Daniel Smith (Weta Digital)
In this intermediate to advanced session, Weta Digital and Epic discuss the state of using VR to tell rich, authentic stories through the lens of VFX.
Weta Digital and Epic Games collaborated to produce “Thief In The Shadows,” a virtual reality experience running on Unreal Engine with assets from The Hobbit motion picture. The demo itself was on display in the Epic/Unreal booth in the GDC Expo Hall, but when asked about public distribution, it did not sound like it was going to see a wider release.
The content of the session went back and forth between the guys from Weta and Epic talking about the various challenges they faced and their general creative process. Clearly, taking VFX assets from a major motion picture and using them to create a VR experience is exciting and full of opportunities – but it is also new territory and required the team to innovate on design and art as well as technical solutions.
Much as game developers have been discovering that the VR experience is different from traditional video games, it sounded like this project helped show how movies are different too. VR requires different timing, a different player/observer perspective, and different approaches to setting and characterization.
Once they had made the creative decisions about the setting and characters, the challenges for the team shifted to the technical side: how could they get the art assets (the models, animations and lighting) looking as good in VR as possible, ideally as good as in the movie?
They really wanted to keep Smaug as the star of the show and do everything they could with VFX to add to the experience without detracting from his performance. Probably the best example came from Nick Donaldson at Epic, who explained the steps they took to reconstruct the dragon’s eye and give it a sense of depth and translucency. The initial attempts using Weta’s texture baking tool just looked flat and unconvincing. Instead, hand-crafted textures, a bump offset to “clear coat” the eye, and an innovative set of virtual UVs (based on an eye bone projected from the planar surface of the iris along its look vector) came together into a really good-looking final effect.
Practical Virtual Reality for Disney Themeparks
Speaker: Bei Yang (Walt Disney Imagineering)
Thinking about creating your own VR experiences using the Oculus? Walt Disney Imagineering, being one of the original pioneers in VR in the late 1980s, has in truth never stopped playing with VR and using it. This lecture will focus on some of the basic learnings from the last 20 years, and how it applies to VR experience development today. The talk will focus on HMD and CAVE-based experiences, their design considerations, technical implementation details, and will cover some real-world examples.
Takeaway: Attendees will learn about some of the practical problems when creating VR experiences for the real world. This will include what causes nausea, what gives a sense of immersion, and the problems that arise when technically implementing VR from both a software and hardware perspective.
Wow! The Disney VR session was amazing, mostly because Bei did an excellent job talking about the things Imagineers have been doing in the VR space for decades, and in particular how, even though so much of the current buzz makes VR seem so new – it’s not. The subtitle of the talk was “How to sound smart about VR at cocktail parties” and it really did deliver a great overview of where VR has been and what’s really important in high-quality immersive experiences.
The notion of virtual reality goes back pretty far; this description of what perfect VR would be comes from Ivan Sutherland in 1965:
“… a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal.”
A short VR history:
- 1968: the first head-mounted display.
- 1972-1976: the birth of modern computer graphics (texture mapping, z-buffering).
- 1982-1994: machines capable of real-time rendering, and Circle-Vision (360-degree movies).
- 1993: the emergence of digital projectors and CAVE systems.
- 1992-1994: Disney VR at Imagineering Labs and DisneyQuest.
- 1994: Virtuality, the first startup to make commercial VR systems.
- 1995: the height of VR with Sega VR, Virtual Boy and DisneyQuest.
- 1995-2010: the dark times. SGI folds, the Chicago DisneyQuest ends up being the last one, and Sega VR and Virtual Boy don’t sell.
The 1995-2010 period wasn’t actually all that dark. Many SGI employees went to Nvidia and ATI. The military continued using VR for training. Disney used VR internally to develop attractions. The cell phone and digital media markets developed. Better rendering techniques were developed because of the video game market. Motion capture was developed as a tool for movies.
Internally, the Imagineers use VR to prototype attractions in the form of the Digital Immersive Showroom, which uses multiple 4K projectors at 120Hz blended onto a 360-degree surface.
Virtual reality is more than just head mounts. We need to think about the human system and all our perceptual systems: sight, sound, touch, smell, taste – and proprioception (your mind’s sense of what your body is doing). A lot of what Disney does to create immersive experiences uses projection, stereoscopy and tactile things people can interact with – and the results are really engaging.
Developing VR Apps with Oculus SDK (Presented by Oculus)
Speaker: Anuj Gosalia (Oculus SDK)
Learn about programming VR apps using the Oculus SDK. The talk will cover the current SDK, upcoming features and some future directions we are considering.
For the last few years, Oculus has presented technical talks about their SDK and best practices for good VR. This year, director of engineering Anuj Gosalia gave a great overview of the past, present and possible future of the Oculus SDK.
I’d like to start up front with the links that Anuj provided at the end of his talk for further information. They do a much better job of explaining the details of the technical concepts he discussed than I can do here in my report:
Mastering the Oculus SDK by Michael Antonov and Volga Aksoy
https://www.youtube.com/watch?v=PoqV112Pwrs
Optimizing VR Graphics with Late Latching by Atman Binstock
https://www.oculus.com/blog/optimizing-vr-graphics-with-late-latching/
Asynchronous Timewarp Examined by Michael Antonov
https://www.oculus.com/blog/asynchronous-timewarp/
Developing VR Experiences with Oculus Rift by Tom Forsyth
http://static.oculus.com/connect/slides/OculusConnect_Developing_VR_Experiences_with_the_Oculus_Rift.pdf
Oculus Blog: https://www.oculus.com/blog/
The high-level message remains the same: VR is demanding. A good experience requires high resolution (~4MP for the stereo pair), ultra-low latency (<20ms), and a high, glitch-free framerate (75-90Hz). The focus of the SDK team so far has been on latency over throughput, hiding the VR headset from the OS, and fixing up rotation with orientation timewarp.
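To put those numbers in perspective, here is a quick back-of-the-envelope calculation (my own illustrative arithmetic, not figures from the talk) of what a ~4MP stereo pair at 90Hz implies for per-frame time and raw fill rate:

```cpp
// Back-of-the-envelope arithmetic for the demands above (illustrative only,
// not figures from the talk): a ~4 MP stereo pair at 90Hz.
#include <cstdio>

int main() {
    const double pixelsPerEye  = 2.0e6;                       // ~2 MP per eye, ~4 MP for the stereo pair
    const double refreshHz     = 90.0;                        // target display refresh
    const double frameBudgetMs = 1000.0 / refreshHz;          // time available to render each frame
    const double pixelsPerSec  = pixelsPerEye * 2.0 * refreshHz;

    std::printf("frame budget: %.1f ms\n", frameBudgetMs);               // ~11.1 ms
    std::printf("raw fill rate: %.0f Mpix/s\n", pixelsPerSec / 1.0e6);   // ~360 Mpix/s before overdraw or MSAA
    return 0;
}
```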
In the old days (about two years ago), they started with “App Rendering” and provided the code for the lens-correcting distortion as a reference. Developers were encouraged to work out the details for their specific project and to innovate and experiment.
Now the focus has shifted to “SDK Rendering” and Direct Mode, with the goal of doing more of the optimizations, timewarp, and CPU/GPU serialization within the new VR Compositor (VRC) to free the app from dealing with a lot of the VR processing. The app now renders to shared eye buffers and submits them through the VRC API.
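A rough sketch of what that frame loop might look like from the app’s side follows. Every type and function name here is a hypothetical placeholder of my own, not the real Oculus SDK API: the app allocates shared eye buffers once, renders into them with a predicted pose each frame, and hands them off to the compositor for distortion and timewarp.

```cpp
// Hypothetical shape of the "SDK Rendering" frame loop from the app's side.
// Every type and function name is a placeholder, not the real Oculus SDK.
#include <cstdio>

struct EyeBuffer { int width, height; };                       // shared texture the app renders into
struct HeadPose  { float orientation[4]; float position[3]; };

HeadPose predictPose() { return {}; }                          // stand-in for SDK pose prediction
void renderEye(int /*eye*/, const HeadPose&, EyeBuffer&) {}    // the app's own scene rendering
void compositorSubmit(const EyeBuffer*, const HeadPose&) {     // stand-in for the VRC submit call
    std::printf("frame handed to compositor\n");
}

int main() {
    EyeBuffer eyes[2] = { {1024, 1024}, {1024, 1024} };        // allocated once, shared with the compositor
    for (int frame = 0; frame < 3; ++frame) {
        HeadPose pose = predictPose();                         // pose predicted for this frame's display time
        for (int e = 0; e < 2; ++e) renderEye(e, pose, eyes[e]);
        compositorSubmit(eyes, pose);                          // compositor applies distortion + timewarp
    }
    return 0;
}
```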
The near term areas of focus for the SDK are stability, efficiency and resilience. A big part of that is moving away from libOVR being compiled in with the app and switching to a libOVR DLL to encapsulate the interaction with the SDK at runtime.
Moving into the future, these were topics that Anuj tossed out as possibilities:
- Asynchronous Timewarp – trading judder for positional latency.
- Dealing with large eye buffers: finer sampling at the center and under-sampling around the edges (see the rough arithmetic sketch after this list).
- Mixing 2D UI over the 3D scene to ensure text and UI elements are as clear as possible.
- Positional timewarp – a translation and orientation warp of the image. But reprojection techniques have perceptual artifacts in VR.
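To give a feel for why the center/edge sampling idea is attractive, here is a rough, purely hypothetical calculation; the buffer size and region split are my own made-up numbers, not anything Anuj presented:

```cpp
// Purely hypothetical arithmetic for the center/edge sampling idea (buffer size
// and region split are made-up numbers, not anything from the talk).
#include <cstdio>

int main() {
    const double width = 1512.0, height = 1680.0;              // hypothetical per-eye buffer at full resolution
    const double full   = width * height;
    const double center = (width * 0.6) * (height * 0.6);      // central 60% x 60% kept at full resolution
    const double edges  = (full - center) * 0.25;              // periphery shaded at half resolution per axis

    std::printf("pixels shaded: %.0f of %.0f (%.0f%%)\n",
                center + edges, full, 100.0 * (center + edges) / full);
    return 0;
}
```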
…and there you have it! GDC 2015 was a welcome mix of a wide range of professional education sessions and lots of new technology and demonstrations on the show floor. There are still more interviews from GDC to come on MTBS. Thanks for putting this together, Kris!