EDITORIAL

Possibly the REAL Story of ILMxLab!

By Editorial No Comments

ILMxLab, a new subsidiary of Lucasfilm, has been earning a lot of press because they are introducing augmented reality, virtual reality, and stereoscopic 3D components to the Star Wars franchise and are developing new forms of storytelling. In our opinion, this is just a small part of a much grander picture. It’s clearly a far older and more thought-out process than a sudden ILMxLab branding!

John Gaeta is Lucasfilm’s Creative Director of New Media and Experiences, and we happened upon this interview he did in 2011 where he effectively spelled out where things are headed and why. The whole thing makes for great viewing, and things get extra juicy at the eighteen-minute mark.

Gaeta starts by acknowledging the limitations of cinema. A core issue with cinema innovation is it’s not a continual enhancement of the technology. Innovations are created for a specific movie, the film is released, and maybe those innovations are used again in the next franchise when needed. Innovation is effectively a staggered series of starts and stops as movie studios move from film to film and franchise to franchise. Over the course of 30+ years, the biggest innovations have effectively run their course, and it’s time for the next step in cinematic storytelling.

While film will continue in a fixed-perspective form, a completely new class of storytelling is also being created. It’s no longer just about what the camera sees; viewers will be able to have a “god-view”, look around the scene, and pick up nuances of the story they wouldn’t otherwise get. While this may resemble concepts we take for granted in video games, the goal is to add true-to-life fidelity to the mix.

One point he highlighted that I found very interesting is that if we look at the core of film, it’s baked imagery. The filmed content is fixed, and while the special and visual effects can be elaborate, they too are baked onto the image, never changing once recorded. The future is dynamic content, or “omni-capture”, using real-time engine technology. The physical actors are universally captured and can then manifest in VR media as perfect holo-digital equivalents. In the interview, Gaeta calls this “volumetric video”, though he likely meant to say “volumetric cinematography”.

While the Bullet Time effect in The Matrix exemplified the physics-breaking magic that can occur in a virtual world versus the real world, it was no more than a clever hack of the limited camera technology readily available at the time. Using technology like the Kinect and other devices, all the gesture and body-capture data is there to change the virtual world around us according to how our bodies influence it.

Remembering that this interview was done in 2011, Gaeta insisted that this vision isn’t pie in the sky. He expected holographic viewing technologies to be available within five years, and the processing power needed to make true-to-life rendering possible within ten. Amusingly, when an audience member quipped about having an elephant in her house, Gaeta was quick to point out that she would have that elephant within three years!

Pretty close, John!

Of course, Gaeta highlighted the potential evils of the technology. In addition to storytelling, we are also looking at the development of a new culture; even a new “punk” culture based on this media. The cartoon metaverse will eventually be replaced with true-to-life imagery as technology allows, and core to making many of these new experiences possible are cameras and immeasurable amounts of data being collected from each user. The vendors will effectively know everything about you. In 2011, Gaeta pointed to Facebook as the reference case and said he had made a conscious choice not to create a Facebook account. This new era will be far more intrusive than that.

I’m not doing John justice. Best you watch the video for yourself and share your thoughts! Very interesting!

Summary of Sessions at GDC 2015

By Editorial No Comments

Now that GDC Vault is online, it’s an opportune time to share a summary of what was shared at GDC in the VR and immersive technology space.  Props to MTBS Field Writer Kris Roberts for putting this together!

The Dawn of Mobile VR

Speaker: John Carmack (Oculus)

The dawn of consumer mobile VR is close. Come hear the technical details of making mobile VR a reality; techniques and strategies for maximizing the quality of your VR games, applications, and experiences; and thoughts about the future of VR, including what it means for the mobile ecosystem. Q&A to follow (until chased out).

Takeaway: Game developers will walk away with a better understanding of mobile VR, techniques and strategies for developing mobile VR content, and what the future of consumer VR might look like.

You can watch the full video of John’s talk on the Oculus Twitch channel:
http://www.twitch.tv/oculus/v/3862049

Last year I was surprised to hear the announcement of the Samsung Gear VR and Oculus’s departure from the high-end, PC-driven Rift to a mobile-powered HMD as their first consumer offering. After all the emphasis on precision tracking, ultra-low latency, and crushing GPU requirements, the decision to release a phone-based mobile product was somewhat shocking. Hearing that John Carmack was heading up the mobile VR effort within Oculus helped increase my confidence that the results would be good.

It was great to have the opportunity to hear directly from John what his motivations were for pursuing mobile VR and where he sees it going. I would strongly recommend watching the presentation on Twitch, since he clearly does a better job of getting the points across than I can.

If you want the TL;DR summary, I would say that the first “Innovator Edition” of the Samsung Gear VR was a success. They sold well, had very few returns, and developers have actively been developing content for them. If you are looking to do a VR project in the next 12 months, the clear indication is that a second version of the Samsung product will be released before the end of the year, and it will “go wide” with much stronger marketing and sales potential than the first. Samsung could have demo units in practically every cell phone retail store and move MANY units. Developers looking to be in on the first wave of consumer VR would do well to view the mobile platform as a viable entry point into a large potential consumer market.

VR Direct: How Nvidia Technology is Improving the VR Experience (Presented by Nvidia)

Speakers: Nathan Reed (Nvidia), Dean Beeler (Oculus)

Virtual reality is the next frontier of gaming, and Nvidia is leading the way by introducing VR Direct, a set of hardware and software technologies designed to cut down graphics latency and accelerate stereo rendering performance. In this talk, we’ll show how developers can use Nvidia GPUs and VR Direct to improve the gaming experience on the Oculus Rift and other VR headsets.

Slides from the presentation are going to be available here:
https://developer.nvidia.com/gdc-2015

VR Direct is an umbrella term for various Nvidia technologies that are designed to help with some of the hard problems in VR. The main two topics discussed in this talk were “Asynchronous Timewarp” to reduce latency and “VR SLI” for stereo rendering.

The ideas of “Timewarp” and “Asynchronous Timewarp” were discussed a lot this year at GDC. The concept is to re-sample and re-position the frame about to be displayed using the most up-to-date rotation data available. That way, the image the viewer sees is in the right place even if they were moving their head while it was being rendered and displayed. It only helps for rotation, not translation, and it won’t help with lag in animations. In the ideal case, the game should be hitting the native framerate (90Hz), and timewarp would only be a safety net for when an extreme case or special effect caused it to lag, avoiding tearing. What Nvidia is bringing to the table is driver-level support for high-priority contexts and preemption (at the draw level) to help make practical Asynchronous Timewarp implementations possible.
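The core math behind orientation timewarp can be sketched in a few lines: compute the delta between the head pose used at render time and the freshest pose at scan-out, then re-aim the displayed image by that delta. This is an illustrative Python sketch of the rotation math only, not Oculus’s or Nvidia’s actual implementation:

```python
import math

def quat_mul(a, b):
    # Hamilton product of two (w, x, y, z) quaternions
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(q, v):
    # rotate vector v by unit quaternion q:  q * v * q^-1
    p = (0.0,) + v
    return quat_mul(quat_mul(q, p), quat_conj(q))[1:]

def timewarp_delta(render_pose, display_pose):
    # how the head rotated while the frame was being rendered and scanned out
    return quat_mul(display_pose, quat_conj(render_pose))

# Example: the head yawed 2 degrees between render time and display time
render_pose = (1.0, 0.0, 0.0, 0.0)                    # identity orientation
h = math.radians(2.0) / 2.0
display_pose = (math.cos(h), 0.0, math.sin(h), 0.0)   # 2-degree yaw about +Y

delta = timewarp_delta(render_pose, display_pose)
view_dir = rotate(delta, (0.0, 0.0, -1.0))            # re-aimed view ray
```

As the talk stressed, this only corrects rotation; translation and animation lag are untouched.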

To help with stereo rendering, Nvidia is suggesting using Multiview rendering and Stereo SLI, leveraging as much as possible from the shared stages in the render pipeline. The goal with Multiview is to take advantage of the fact that a stereo view shows almost the same visible objects, uses almost the same render commands and has the driver do almost the same internal work. Keeping all the stages separate is the most flexible, but the least optimizable. Combining stages can improve optimization at the cost of some flexibility. Adding multiple graphics cards to do the stereo rendering in SLI sounds good, but it’s unrealistic to expect 2x performance increases. Part of the solution that Nvidia is providing is a dedicated copy engine that allows non-dependent rendering to continue while the data from the second card is being blitted between GPUs on the PCIe bus.
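The “almost the same work per eye” observation can be pictured with a toy example: cull the scene once against a frustum widened to cover both eyes, then replay the same draw list with only a per-eye view offset. The scene, numbers, and functions here are invented for illustration; Nvidia’s Multiview and VR SLI do this work at the driver and hardware level:

```python
import math

# Toy scene: object centers in world space (x, y, z), camera looking down -Z
scene = {"rock": (0.0, 0.0, -5.0), "bird": (8.0, 2.0, -3.0), "tree": (-1.0, 0.0, -40.0)}

IPD = 0.064                       # interpupillary distance in metres (typical default)
FAR = 30.0                        # far clip distance
HALF_FOV = math.radians(55.0)     # half FOV widened to enclose BOTH eye frusta

def visible(p):
    # One shared culling pass: traverse the scene once for both eyes.
    x, y, z = p
    if z > 0 or -z > FAR:
        return False
    return abs(x) <= -z * math.tan(HALF_FOV) + IPD / 2

shared_visible = {name: p for name, p in scene.items() if visible(p)}

def eye_view(p, eye):             # eye = -1 for left, +1 for right
    x, y, z = p
    return (x - eye * IPD / 2, y, z)   # same draw list, per-eye view offset only

frames = {eye: {n: eye_view(p, eye) for n, p in shared_visible.items()}
          for eye in (-1, +1)}
```

The point of the sketch: the expensive part (scene traversal and culling) runs once, and only the cheap per-eye transform differs, which is exactly the redundancy Multiview is designed to exploit.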

Admittedly, what was discussed in this session is all very much on the bleeding edge and hot out of the oven, but it’s really good to see VR-specific support in low-level API functions in the Nvidia driver to help developers get the high performance required for good VR.

Technology-Infused Storytelling: VR Challenges That Lie Ahead (Presented by Epic Games)

Speakers: Nick Donaldson (Epic Games), Alasdair Coull (Weta Digital), Tim Elek (Epic Games), Daniel Smith (Weta Digital)

In this intermediate to advanced session, Weta Digital and Epic discuss the state of using VR to tell rich, authentic stories through the lens of VFX.

A virtual reality experience running on the Unreal Engine called “Thief In The Shadows” with assets from The Hobbit motion picture was produced in collaboration between Weta Digital and Epic Games. The demo itself was on display in the Epic/Unreal booth in the GDC Expo Hall, but when asked about a public distribution, it did not sound like it was going to see a wider release.

The content of the session went back and forth between the guys from Weta and Epic talking about the various challenges they faced and their general creative process. Clearly, taking VFX assets from a major motion picture and using them to create a VR experience is exciting and full of opportunities – but it is also new territory and required the team to innovate on design and art as well as technical solutions.

Similar in many ways to how game developers have been discovering that the VR experience is different from traditional video games, this project helped show how movies are different too. VR requires different approaches to timing, player/observer perspective, setting, and characterization.

Once they had made the creative decisions about setting and characters, the challenges for the team shifted to the technical side: how to get the art assets (the models, animations, and lighting) looking as good in VR as possible, ideally as good as the movie.

They really wanted to keep Smaug as the star of the show and do everything they could with VFX to add to the experience but not detract from his performance. Probably the best example came from Nick Donaldson at Epic who explained the steps they took to reconstruct the dragon’s eye and give it a sense of depth and translucency. The initial attempts using Weta’s texture baking tool just looked flat and unconvincing. Instead, going with hand crafted textures, bump offset to “clear coat” the eye, and an innovative set of virtual UVs based on an eye bone projected from the planar surface of the iris along its look vector came together with a really good looking final effect.

Practical Virtual Reality for Disney Themeparks

Speaker: Bei Yang (Walt Disney Imagineering)

Thinking about creating your own VR experiences using the Oculus? Walt Disney Imagineering, being one of the original pioneers in VR in the late 1980s, has in truth never stopped playing with VR and using it. This lecture will focus on some of the basic learnings from the last 20 years, and how it applies to VR experience development today. The talk will focus on HMD and CAVE-based experiences, their design considerations, technical implementation details, and will cover some real-world examples.

Takeaway: Attendees will learn about some of the practical problems when creating VR experiences for the real world. This will include what causes nausea, what gives a sense of immersion, and the problems that arise when technically implementing VR from both a software and hardware perspective.

Wow! The Disney VR session was amazing, and mostly because Bei did an excellent job talking about things Imagineers have been doing in the VR space for decades and particularly how even though so much of the current buzz surrounding VR makes it all seem so new – it’s not. The subtitle of the talk was “How to sound smart about VR at cocktail parties” and it really did deliver on providing a great overview of where VR has been and what’s really important in high quality immersive experiences.

The notion of Virtual Reality goes back pretty far; this description of what perfect VR would be comes from Ivan Sutherland in 1965:

“… a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal.”

A short VR history:

  • First head-mounted display in 1968.
  • Birth of modern computer graphics in 1972-1976 (texture mapping, z-buffering).
  • Machines capable of real-time rendering in 1982-1994, plus circle vision (360-degree movies).
  • Emergence of digital projectors and CAVE systems in 1993.
  • Disney VR at Imagineering Labs and DisneyQuest, 1992-1994.
  • Virtuality, the first startup to make commercial VR systems, in 1994.
  • The height of VR with Sega VR, Virtual Boy, and DisneyQuest in 1995.
  • Then, the dark times of 1995-2010: SGI folded, DisneyQuest Chicago was the last one standing, and Sega VR and Virtual Boy didn’t sell.

The 1995-2010 period wasn’t actually all that dark. Many SGI employees went to Nvidia and ATI. The military continued VR for training. Disney used VR internally to develop attractions. Cellphone and digital media markets developed. Better rendering techniques developed because of the video game market. Motion capture was developed as a tool for movies.

Internally, the Imagineers use VR to prototype attractions in the form of the Digital Immersive Showroom which uses multiple 4k projectors at 120Hz blended onto a 360 degree surface.

Virtual Reality is more than just head mounts. We need to think about the human system, all our perceptual systems: sight, sound, touch, smell, taste, and proprioception (your mind’s sense of what your body is doing). A lot of what Disney does to create immersive experiences uses projection, stereoscopy, and tactile things people interact with, and the results are really engaging.

Developing VR Apps with Oculus SDK (Presented by Oculus)

Speaker: Anuj Gosalia (Oculus SDK)

Learn about programming VR apps using the Oculus SDK. The talk will cover the current SDK, upcoming features and some future directions we are considering.

For the last few years, there have been technical talks presented by Oculus about their SDK and best practices for good VR. This year, Oculus’s director of engineering, Anuj Gosalia, gave a great overview of the past, present, and possible future of the Oculus SDK.

I’d like to start up front with the links that Anuj provided at the end of his talk for further information. They do a much better job of explaining the details of the technical concepts he discussed than I can do here in my report:

Mastering the Oculus SDK by Michael Antonov and Volga Aksoy
https://www.youtube.com/watch?v=PoqV112Pwrs

Optimizing VR Graphics with Late Latching by Atman Binstock
https://www.oculus.com/blog/optimizing-vr-graphics-with-late-latching/

Asynchronous Timewarp Examined by Michael Antonov
https://www.oculus.com/blog/asynchronous-timewarp/

Developing VR Experiences with Oculus Rift by Tom Forsyth
http://static.oculus.com/connect/slides/OculusConnect_Developing_VR_Experiences_with_the_Oculus_Rift.pdf
Oculus Blog: https://www.oculus.com/blog/

The high-level message remains the same: VR is demanding. A good experience requires high resolution (~4MP for the stereo pair), ultra-low latency (<20ms), and needs to hit a high framerate (75-90Hz) and be glitch free. The focus of the SDK team so far has been on latency over throughput, hiding the VR headset from the OS, and fixing up rotation with orientation timewarp.
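Those figures translate into a punishing budget. A quick back-of-envelope calculation using the numbers above (the 1080p/60Hz comparison is my own, for scale):

```python
# Rough VR performance budget from the figures quoted above
refresh_hz = 90                      # target framerate
stereo_pixels = 4_000_000            # ~4 MP for the stereo pair

frame_budget_ms = 1000.0 / refresh_hz            # ~11.1 ms to render each frame
pixels_per_second = stereo_pixels * refresh_hz   # shaded-pixel throughput needed

# For comparison, a 1080p monitor at 60 Hz:
monitor_pixels_per_second = 1920 * 1080 * 60
ratio = pixels_per_second / monitor_pixels_per_second   # roughly 2.9x the fill work
```

And that 11 ms has to cover the whole pipeline, every frame, with no hitches.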

In the old days (~2 years ago) they started with “App Rendering” and provided the code for the lens correcting distortion as reference. Developers were encouraged to work out the details for their specific project and innovate / experiment.

Now the focus has shifted to “SDK Rendering” and Direct Mode with the goal of doing more of the optimizations, timewarp, and CPU/GPU serialization within the new VR Compositor (VRC) to free the app from dealing with a lot of the VR processing. Now the app renders to shared eye buffers and submits them to the VRC API.
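The submission model described above can be sketched roughly as follows. To be clear, the class and method names here (`VRCompositor`, `submit`) are hypothetical stand-ins invented for illustration, not the real Oculus SDK API:

```python
# Hypothetical shapes only -- not the actual Oculus SDK interface.
class VRCompositor:
    """Stand-in for the VR Compositor (VRC): it owns distortion and timewarp."""
    def __init__(self):
        self.presented = []

    def submit(self, left_eye, right_eye, pose):
        # The compositor applies lens distortion and orientation timewarp
        # using the freshest head pose, then presents to the display --
        # the app never deals with that processing itself.
        self.presented.append((left_eye, right_eye, pose))

def render_eye(eye, pose):
    return f"frame[{eye}]@{pose}"    # placeholder for real GPU work

vrc = VRCompositor()
for frame in range(3):
    pose = f"pose{frame}"            # head pose sampled just before rendering
    left = render_eye("L", pose)
    right = render_eye("R", pose)
    vrc.submit(left, right, pose)    # app hands shared eye buffers to the VRC
```

The design point is the division of labor: the app only renders eye buffers and submits them; distortion, timewarp, and display timing live behind the compositor.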

The near term areas of focus for the SDK are stability, efficiency and resilience. A big part of that is moving away from libOVR being compiled in with the app and switching to a libOVR DLL to encapsulate the interaction with the SDK at runtime.

Moving into the future, these were topics that Anuj tossed out as possibilities:

  • Asynchronous Timewarp – tradeoff judder for positional latency.
  • Dealing with large eye buffers. Finer sampling at the center and under-sampling around the edges.
  • Mixing 2D UI over 3D scene to ensure text and UI elements are as clear as possible.
  • Positional timewarp – translate and orientation warp of image. But reprojection techniques have perceptual artifacts in VR.
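The “finer sampling at the center” idea in that list can be pictured as a shading rate that falls off with distance from the lens center. The falloff curve below is purely illustrative (the talk didn’t specify one):

```python
import math

# Sketch of center-weighted sampling: pick a shading rate per pixel
# from its distance to the image center.
def shading_rate(x, y, width, height):
    # 1.0 = full rate at the center, falling linearly to 0.25 at the corners
    cx, cy = width / 2, height / 2
    r = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)  # 0 at center, 1 at corner
    return max(0.25, 1.0 - 0.75 * r)

center = shading_rate(960, 540, 1920, 1080)   # full detail where the eye looks
corner = shading_rate(0, 0, 1920, 1080)       # quarter rate at the periphery
```

Because the lens magnifies the center of the panel and compresses the edges, spending fewer samples at the periphery wastes little perceived quality.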

…and there you have it! GDC 2015 was a welcome mix of a wide range of professional education sessions and lots of new technology and demonstrations on the show floor. There are still more MTBS interviews from GDC to come. Thanks for putting this together, Kris!

ImmersiON – VRelia Secrets Revealed!

By Editorial No Comments

If you don’t have a 3D display, choose “no glasses” and “left image only” in the YouTube 3D viewing options.

At E3 Expo, MTBS shared news (shown above) that ImmersiON and VRelia are joining forces on a new line of head mounted displays. This news was formalized in a press release last Wednesday, but there is far more to the story than meets the eye…er…eyes. MTBS again caught up with Manuel Gutierrez after E3 Expo to get more info and to clear up some of the confusion around the branding and company history. Here goes!

The Relationship to TDVision Corp.

TDVision Corp. has a long history that predates Meant to be Seen. Manuel R. Gutierrez founded the company in Mexico in 2001, and in the US in 2003.

TDVisor HMD Prototype at MTBS Event

It is true that they were trying to enter the head mounted display market several years ago. Their early HMD prototypes did exist – we know because MTBS still has one! 😉 According to Manuel, TDVision sold hundreds of these prototype/devkit units to the military.

TDVision had developed infrastructure spanning acquisition (TDVCam), encoding, gaming, decoding, and visualization (TDVisor), and they realized the importance of having enough content to drive interest. They decided to focus most of their energies on the codec and their place in the 3D Blu-ray standard.

TDVision figured that if there was no content, then there was no point in releasing their HMD. There was also the preference of some of the movie studios to stick with the “TV family experience” instead of moving to a “personal device”, and this forced TDVision Corp to focus on driving 3D televisions instead of head mounted displays. Game developers were equally hesitant to support stereoscopic 3D through head-mounts, so further TDVisor development was put off until later.

With the advent of HMDs by Oculus, Sony, Technical Illusions and others, the industry interest has changed, and the conditions and the technology are clearly ready to embrace VR on HMDs, HUDs and wearables. Manuel decided to relaunch the HMD initiative under a new company: ImmersiON. After some key strategic meetings, it was possible to match the ImmersiON roadmap with VRelia’s development to create ImmersiON-VRelia.

The ImmersiON-VRelia Roadmap

Manuel Gutierrez (CEO of ImmersiON) and Jose Antonio García Marín (VRelia’s CEO) decided to fuse their initiatives into one single partnership.

ImmersiON and VRelia are officially partnered and acting as one to create HMDs together. While this partnership was originally framed as focusing on the professional market, they are working together in the consumer space as well.

First, they are developing a smartphone based HMD called the “GO version” – a snap-on accessory that uses the cellphone screen and tracking. However, this article focused on getting info on the unit(s) called the “PRO Version” because they have their own embedded screens and tracking systems that will be applicable to PC gamers. They are both compatible with nearly all sources, and will work with the ImmersiON AlterSpace application that is in development.

Their roadmap features three classes of head mounted display:

I. Beta Testing Unit (available in 4 to 6 months)

The ImmersiON-VRelia “BT (Beta Tester) units” will be VR-Only devices that require a PC to work. This is for their Early Adoption Program (EAP) or Beta Tester Program (BTP), and the units will be injection moulded – not 3D printed.

The ImmersiON-VRelia “BT unit” will feature 1080p per eye initially, and the target is 2K per eye for the follow-up versions. These units will have a 120-degree FOV.

The beta units will not feature augmented reality cameras yet, and we didn’t discuss connector types or pricing.

II. “PRO G1” (available within 6 to 12 months)

This will be the actual “Pro” HMD release, and it will feature augmented reality cameras as showcased in the render. Similar to the beta version, this HMD will connect to any HDMI source including PCs, Tablets and Mobile phones.

This has not been publicized until now, but according to Manuel, they have achieved 160-degree field of view on their workbench.

III. “PRO G2” (available within 12 to 18 months)

Immersion-VRelia Pro G2


The PRO G2 will feature both virtual reality and augmented reality cameras. In addition to having PC connectivity, it will be able to run autonomously with an Android platform and feature connectivity with nearly every format.

Pricing

Even though we were advised of the expected pricing for the “PRO G1” device, all we can say for now is that the units will be directly competitive with other consumer targeted devices – within reason given the nature of the specs.

Positional Tracking

This is where the fun begins. The industry is in agreement that the best VR experiences are those that feature positional tracking, which captures the nuances of how your head is positioned relative to your body. Positional tracking represents far more information than raw head rotations alone, and it is needed to minimize the chances of motion sickness.

The big players are all using some kind of infrared or optical positioning system which requires that the user is in direct sight of a fixed camera. This means extra hardware and a limited range of movement / a space to work in.

ImmersiON-VRelia wants to achieve the same thing without the need for an external camera, but still maintain near zero latency as is required for a great VR experience. What is their magic bullet?

Do you remember Grush? This is a successfully backed Indiegogo product featuring a motion-tracked toothbrush. The idea is that the Grush toothbrush tells parents whether their kids have fully brushed their teeth, in a way that entertains the children. To work, Grush needs to quickly capture every little movement with little to no latency. Grush’s 9DoF motion tracking sensor is an unreleased part, and it will be making its VR debut in the ImmersiON-VRelia HMD product lines.
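To see why a fast, low-latency inertial sensor matters for camera-free positional tracking, here is a toy dead-reckoning integration along one axis. Real 9DoF fusion is far more involved (gyro, accelerometer, and magnetometer combined, with constant drift correction), and the sample values below are invented:

```python
# Illustrative dead-reckoning from inertial samples (one axis only).
dt = 1.0 / 1000.0                    # 1 kHz sensor updates
accel = [2.0] * 100 + [0.0] * 100    # m/s^2: a brief head nudge, then coasting

velocity, position = 0.0, 0.0
for a in accel:
    velocity += a * dt               # integrate acceleration -> velocity
    position += velocity * dt        # integrate velocity -> position
```

Because position comes from integrating acceleration twice, any sensor noise or latency compounds quickly, which is why high sample rates and aggressive fusion are essential before an IMU alone can stand in for an external camera.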

Standards / Formats

The ImmersiON-VRelia HMDs will be using standard connectors and can accept most if not all 3D formats. With the exception of interacting with the head tracker, very little is expected to be proprietary. They are planning to have a direct connection to their Alterspace VR environment for content selection and experience distribution, but this isn’t required to make their hardware work.

Final Thoughts

MTBS has been at this a long time now, so we’re always concerned about magic pixie dust and vaporware. ImmersiON-VRelia is essentially promising the ultimate spec at an affordable price in a relatively short period of time. It’s also a lot of work to release one head mounted display, let alone several models.

However, let’s look at the facts. VRelia has a demonstrated history and was showing impressive prototypes at GDC 2014 that featured high resolution dual panels. Grush exists, and its motion tracker is definitely a plausible tool for making positional tracking work in VR. The owners of ImmersiON have been in business a long time, and we know their VR software tools exist because we had seen them several years ago.

Similar to the other market leaders, it wouldn’t surprise us if there is some scheduling slippage. That’s just the nature of doing something like this. We also don’t know if it’s necessary to release so many options so close together. However, having known them for years and having sampled VRelia in person, they’ve got a real shot at releasing a good VR product.

We’ll be getting a beta unit as soon as it’s available, and look forward to reporting back.


The VR Conference Tour 2014

By Editorial No Comments

A proud member of The Immersive Technology Alliance, Dr. Jason Jerald is the CEO of NextGen Interactions.  Dr. Jerald has worked and / or consulted for the likes of Sixense, Valve, Oculus VR, Virtuix and more.  Over the past few months, Dr. Jerald has been touring several conferences featuring an immersive technology theme, and he put together this summary for MTBS readers!

Over the past couple of months, I have had the pleasure of attending several conferences related to Virtual Reality. This article provides a review for three of those conferences. For a report on the IEEE VR and 3D User Interfaces Symposium held in late March / early April, see http://www.nextgeninteractions.com/ieee-virtual-reality-3d-user-interfaces. For a story of VR at the East Coast Game Conference, see http://www.gamersnexus.net/gg/1429-virtual-reality-gaming-usability-hurdles-ecgc

Neurogaming Conference

NeuroGaming Logo

The Neurogaming conference is very exciting to me because, although I have a basic understanding of neuroscience, it is not my specialty. As a result, there is plenty of room for me to learn and see new technologies. This is the second year of the conference and the second time I attended. My big takeaway? I have a lot more to learn about neuroscience!

NeuroGaming Organizer Zack Lynch asked me to return and moderate the VR panel again this year. Palmer Luckey of Oculus also returned to the panel, and we were fortunate to have other VR pioneers participate as well. Richard Marks, creator of the PlayStation Move and a primary developer of the Sony Morpheus HMD; Amir Rubin, CEO of Sixense Entertainment; and Ana Maiques, CEO of Neuroelectrics, joined us this year. All the panelists had great insight about where VR is going. Ana in particular had some interesting perspectives, as she comes from a neuroscience background. She showed an HMD device with EEG input and, most intriguing, electrical stimulation! Her device allows both input and output directly to and from the brain. It turns out there have been studies actually showing brain-to-brain communication using similar technologies. Of course, such devices don’t allow us to read minds the way we read about in science fiction, but early developments certainly provide exciting potential for where the technology may be headed.

NeuroGaming VR Panel. Left to right: Jason Jerald—Cofounder of NextGen Interactions, Palmer Luckey—Cofounder of Oculus VR, Amir Rubin—CEO of Sixense Entertainment, Richard Marks—Director of the Sony Playstation Magic Lab, and Ana Maiques—CEO of Neuroelectrics


The Neurogaming Conference wasn’t all about VR but “The Immersive Experiences – Virtual Reality NeuroGaming” Panel was!

Jason showing a CyberFace HMD courtesy of Paul Mlyniec of Digital ArtForms and Sixense Entertainment. This HMD is from 1989 and was used by VR Pioneer Jaron Lanier and his company VPL.


Silicon Valley Virtual Reality Conference

SVVR Logo

Karl Krantz did an incredible job of putting together one of the most amazing conferences I have ever attended — and he did it in only a couple of months! The overwhelming excitement of a new industry forming is like nothing I have ever experienced. Time after time, I heard about people quitting their day jobs to forge a new life and to pursue changing reality as we know it. The conference was filled with startups, many in their very early stages but with potential to offer substantial value in novel ways.

The Legendary Sword of Damocles


SVVR was held at the Computer History Museum in Mountain View, CA. It is the location of the world’s first head-mounted display — the Sword of Damocles — created by Ivan Sutherland in 1968. After navigating the maze of technical history, I finally found the object of my quest. I was surprised the system was not bulkier.

Demos

Sixense Entertainment

Sixense was the primary sponsor of the conference. They are doing a really great job of positioning themselves as a leader in the VR Space. As would be expected, CEO Amir Rubin and Company were showing off their STEM system — five tracked points — along with the Oculus Rift.

CloudHead Games

I’ve been a big fan of CloudHead Games since I first met Denny Unger around the time they launched their Kickstarter over a year ago. The detailed, artistic worlds Denny and his team are creating are fascinating and compelling. Denny was experimenting with an interesting rotation scheme: instead of rotating oneself virtually via a smooth rotation, he ratchets rotations in discrete 10-degree intervals. Denny claims that reports of motion sickness decrease dramatically using this technique. It doesn’t seem quite as real as smooth virtual rotation, of course, but it might be a great alternative for those who are more prone to simulator sickness. I assume this will be a user-controlled option that players can choose depending on their susceptibility.
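Denny’s ratcheted rotation can be sketched as a simple “snap turn” state machine. The 0.5 stick threshold and the re-arm-on-release logic below are my own guesses at one plausible implementation, not CloudHead’s actual code:

```python
# Sketch of "ratcheted" comfort rotation: instead of smoothly slewing the
# camera, yaw jumps in fixed 10-degree steps when the stick is flicked.
SNAP_DEGREES = 10.0
THRESHOLD = 0.5      # stick deflection needed to trigger a snap

def snap_turn(yaw, stick_x, ready):
    """Returns (new_yaw, ready). `ready` re-arms when the stick re-centers."""
    if ready and abs(stick_x) >= THRESHOLD:
        yaw = (yaw + SNAP_DEGREES * (1 if stick_x > 0 else -1)) % 360.0
        ready = False                # one snap per flick, no continuous spin
    elif abs(stick_x) < THRESHOLD:
        ready = True
    return yaw, ready

yaw, ready = 0.0, True
# flick right, release, flick right again, release, flick left
for stick in (0.9, 0.0, 0.9, 0.0, -0.8):
    yaw, ready = snap_turn(yaw, stick, ready)
```

The comfort argument is that the vestibular system never receives a sustained visual rotation it can’t feel, only instantaneous jumps it can ignore.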

Sony

I finally had the chance to try out the Sony Morpheus HMD (unfortunately I missed GDC this year), and, overall, I was quite impressed. The HMD quality wasn’t quite as good as the Oculus Crystal Cove, in my opinion, however the team was doing more to tackle a larger and more challenging problem — general VR consisting of standing/walking along with hand tracking. I very much look forward to the next iteration from Sony.

Leap Motion

I was very happy to see Leap Motion showing a drastically improved system with skeletal tracking. The device has more consistent tracking even when the hands are partially occluded.

Immersive Film

Laurent Scallie, former CEO of Atlantis Cyberspace (where he created an outstanding VR system) was especially excited about immersive film. I was a bit skeptical about immersing users in passive experiences, as I believed the best VR experiences would take complete advantage of fully interactive technology. However, I was open to being convinced otherwise. Laurent introduced me to Paul of felixandpaul.com, an individual quietly showing a demo beyond the regular exhibits near the lunch tables. I had passed Paul several times without giving it much thought having no idea what users were seeing. As it would turn out, the demo blew me away and is something that cannot be described with words.

All I can say is that Paul changed my opinion of immersive film as it was literally as real as sitting with another person in a living room in almost every aspect. Because Paul’s demo was utilizing captured real-world data, it made me realize how far we will have to go in computer graphics before we are able to reach immersive photorealism. In the meantime, their technical solution as well as the artistic attention to details was flawless with no visible seams or holes. After my experience, I brought several people over to try the demo, and they were as impressed as I was.

The Panels

There were two days of fascinating panels that consisted of many VR pioneers with various backgrounds, areas of expertise, and opinions. The panel that Karl Krantz asked me to moderate was about 3D User Input and Locomotion and consisted of Danny Woodall, Creative Director of Sixense Entertainment, Richard Marks of Sony Playstation Magic Lab, Jan Goetgeluk, CEO of Virtuix, Natha Burba,CEO of Survios, and David Holz,CTO and Cofounder of Leap Motion. All the panelists had some great insight about what are some of the most important aspects that need solved to provide the most compelling experiences.

Augmented World Expo

AWE Logo

Ori Inbar and Tish Shute once again did a great job organizing the Augmented World Expo. I haven’t attended AWE in a couple of years so it was great to see the dramatic improvement of companies and demos.

Examples of Talks

Tish Shute and Helen Papagiannis on the AWE stage.

Tish Shute and Helen Papagiannis on the AWE stage.

I had the pleasure of catching up with many colleagues and also met a slew of new people. They are undertaking and spoke about all sorts of interesting projects, so much so that I cannot write about all of them. Here is sample of just some of the talks:

David Smith Talked about Extreme AR and VR, in the sense of wide-field of view HMDs of up to 180 degrees per eye! David is Chief Innovation Officer at Lockheed Martin and has accomplished many things in his career, including creating the Colony — the first real-time 3D adventure shooter that is a precursor to today’s first person shooters, working with James Cameron on virtual sets and virtual cameras, and founding Red Storm Entertainment with Tom Clancy. David is not new to the VR space and there is no doubt that we will continue to see great things from him and I hope to see his AR and VR 180 degree field of view HMDs he showed a select few of us being commercialized soon.

Kevin Williams gave a talk about his area of expertise — Out of Home Entertainment. Kevin described how Out of Home Entertainment gives us the opportunity to develop for high end systems but also has its own challenges (e.g., ruggedizing HMDs). I have been looking forward to finally meeting Kevin and I must say Kevin is one of the most interesting personalities I have come across in the VR industry (and there are many interesting personalities!). Kevin is a former Disney Imagineer that helped DisneyQuest show VR to a lot of people back in the 1990s and is a leading expert for VR entertainment outside of the home.

I presented a talk about Interacting with Virtual Humans where I showed some of my own work as well as the work by some of my colleagues. My concluding point was that today we are selling the technology, but to appeal to the masses we must become really good at selling emotions and stories. I believe one of the best ways to do that is through virtual humans, whether controlled by a real human or a computer. The talk is available at https://www.youtube.com/watch?v=DwJM_DSzz1c

Mixed Reality

Mixed Reality Demo

Mixed Reality Demo

One of the more exciting areas today is mixing augmented and virtual reality together and forming mixed reality. One such company that is doing this is Sulon Technologies. Mixed Reality can add textures, geometry, special effects, and characters onto the real world through video-see-through HMDs. Most impressive to me, is that Sulon was actually showing the system which, up until then, I assumed was largely an idea that would be difficult to implement. Although there remains work to be one, Sulon has a good start, and I look forward to seeing future demonstrations as they evolve.

Mark Billinghurst and his New Zealand HITLab team demonstrated the usage of integrating two video cameras with the Oculus Rift resulting in a nice mixed reality demo. I was also impressed with their interface where users could interact with 3D GUI elements and manipulate objects in some ways similar to Sixense’s MakeVR modeling system.

Great submission, Jason!  The story doesn’t end here, of course!  MTBS is headed off to E3 Expo where we are looking forward to exciting interviews and surprises.

The VR Conference Tour 2014

By Jason Jerald, PhD
NextGen Interactions

Over the past couple of months, I have had the pleasure of attending several conferences related to virtual reality.  This article provides a review of three of those conferences.  For a report on the IEEE VR and 3D User Interfaces Symposium held in late March / early April, see http://www.nextgeninteractions.com/ieee-virtual-reality-3d-user-interfaces.  For a story of VR at the East Coast Game Conference, see http://www.gamersnexus.net/gg/1429-virtual-reality-gaming-usability-hurdles-ecgc

Neurogaming Conference

The Neurogaming conference is very exciting to me because, although I have a basic understanding of neuroscience, it is not my specialty.  As a result, there is plenty of room for me to learn and see new technologies.  This is the second year of the conference and the second time I attended.  My big takeaway?  I have a lot more to learn about neuroscience!

NeuroGaming organizer Zack Lynch asked me to return and moderate the VR panel again this year.  Palmer Luckey of Oculus also returned to the panel, and we were fortunate to have other VR pioneers participate as well.  Richard Marks, creator of the PlayStation Move and a primary developer of the Sony Morpheus HMD; Amir Rubin, CEO of Sixense Entertainment; and Ana Maiques, CEO of Neuroelectrics, joined us this year.  All the panelists had great insight about where VR is going.  Ana in particular had some interesting perspectives, as she comes from a neuroscience background.  She showed an HMD device with EEG input and, most intriguing, electrical stimulation!  Her device allows both input and output directly to and from the brain.  It turns out there have been studies actually showing brain-to-brain communication using similar technologies.  Of course, such devices don’t allow us to read minds in the way we read about in science fiction, but early developments certainly show exciting potential for where the technology may be headed.


The Neurogaming Conference wasn’t all about VR, but “The Immersive Experiences – Virtual Reality NeuroGaming” panel was.  Left to right: Jason Jerald—Cofounder of NextGen Interactions, Palmer Luckey—Cofounder of Oculus VR, Amir Rubin—CEO of Sixense Entertainment, Richard Marks—Director of the Sony PlayStation Magic Lab, and Ana Maiques—CEO of Neuroelectrics.


Jason showing a CyberFace HMD, courtesy of Paul Mlyniec of Digital ArtForms and Sixense Entertainment.  This HMD is from 1989 and was used by VR pioneer Jaron Lanier and his company VPL.

Silicon Valley Virtual Reality Conference

Karl Krantz did an incredible job of putting together one of the most amazing conferences I have ever attended—and he did it in only a couple of months!  The overwhelming excitement of a new industry forming is like nothing I have ever experienced.  Time after time, I heard about people quitting their day jobs to forge a new life and pursue changing reality as we know it.  The conference was filled with startups, many in their very early stages but with the potential to offer substantial value in novel ways.

The Legendary Sword of Damocles

SVVR was held at the Computer History Museum in Mountain View, CA.  It is the location of the world’s first head-mounted display—the Sword of Damocles—created by Ivan Sutherland in 1968.  After navigating the maze of technical history, I finally found the object of my quest.  I was surprised the system was not bulkier.

 

Demos

Sixense Entertainment
Sixense was the primary sponsor of the conference.  They are doing a really great job of positioning themselves as a leader in the VR space.  As would be expected, CEO Amir Rubin and company were showing off their STEM system—five tracked points—along with the Oculus Rift.

CloudHead Games
I’ve been a big fan of CloudHead Games since I first met Denny Unger around the time they launched their Kickstarter over a year ago.  The detailed artistic worlds Denny and his team are creating are fascinating and compelling.  Denny was experimenting with an interesting rotation scheme: instead of rotating oneself virtually via a smooth rotation, he ratchets rotations at discrete 10-degree intervals.  Denny claims that reports of motion sickness are dramatically decreased using this technique.  It, of course, doesn’t seem quite as real as smooth virtual rotation, but it might be a great alternative for those who are more prone to simulator sickness.  I assume this will be a user-controlled option that players can choose depending on their susceptibility.
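Denny’s ratcheted rotation is straightforward to sketch in code. The snippet below is my own illustration (the 10-degree step comes from his demo; the struct and names are assumptions): continuous rotation input is accumulated, and the player’s yaw only jumps in whole discrete steps, never sweeping smoothly.

```cpp
#include <cmath>

// Illustrative sketch of "snap" (ratcheted) rotation: rather than applying
// continuous yaw from the controller, accumulate the input and rotate the
// player only in discrete steps (10 degrees here, as in Denny's demo).
struct SnapTurner {
    double snapDegrees = 10.0;  // size of each discrete rotation step
    double accumulated = 0.0;   // analog input gathered since the last snap
    double yaw = 0.0;           // player's current virtual yaw, in degrees

    // Feed continuous rotation input; returns true when a snap occurred.
    bool addInput(double degrees) {
        accumulated += degrees;
        bool snapped = false;
        // Consume whole steps; the view jumps instantly instead of sweeping,
        // which is the property believed to reduce motion sickness.
        while (std::fabs(accumulated) >= snapDegrees) {
            double step = (accumulated > 0.0) ? snapDegrees : -snapDegrees;
            yaw += step;
            accumulated -= step;
            snapped = true;
        }
        return snapped;
    }
};
```

Because the eye never sees intermediate orientations, there is no sustained visual rotation for the vestibular system to disagree with, which is presumably why reports of sickness drop.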

Sony
I finally had the chance to try out the Sony Morpheus HMD (unfortunately I missed GDC this year), and, overall, I was quite impressed.  The HMD quality wasn’t quite as good as the Oculus Crystal Cove, in my opinion; however, the team was tackling a larger and more challenging problem—general VR consisting of standing and walking along with hand tracking.  I very much look forward to the next iteration from Sony.

Leap Motion
I was very happy to see Leap Motion showing a drastically improved system with skeletal tracking.  The device has more consistent tracking even when the hands are partially occluded.

Immersive Film
Laurent Scallie, former CEO of Atlantis Cyberspace (where he created an outstanding VR system), was especially excited about immersive film.  I was a bit skeptical about immersing users in passive experiences, as I believed the best VR experiences would take complete advantage of fully interactive technology.  However, I was open to being convinced otherwise.  Laurent introduced me to Paul of felixandpaul.com, an individual quietly showing a demo beyond the regular exhibits near the lunch tables.  I had passed Paul several times without giving it much thought, having no idea what users were seeing.  As it turned out, the demo blew me away and is something that cannot be described with words.  All I can say is that Paul changed my opinion of immersive film, as it was literally as real as sitting with another person in a living room in almost every aspect.  Because Paul’s demo was utilizing captured real-world data, it made me realize how far we will have to go in computer graphics before we are able to reach immersive photorealism.  In the meantime, their technical solution, as well as their artistic attention to detail, was flawless, with no visible seams or holes.  After my experience, I brought several people over to try the demo, and they were as impressed as I was.

The Panels

There were two days of fascinating panels featuring many VR pioneers with various backgrounds, areas of expertise, and opinions.  The panel Karl Krantz asked me to moderate was about 3D user input and locomotion and consisted of Danny Woodall, Creative Director of Sixense Entertainment; Richard Marks of the Sony PlayStation Magic Lab; Jan Goetgeluk, CEO of Virtuix; Nathan Burba, CEO of Survios; and David Holz, CTO and Cofounder of Leap Motion.  All the panelists had great insight into the most important problems that need to be solved to provide the most compelling experiences.

Augmented World Expo


Ori Inbar and Tish Shute once again did a great job organizing the Augmented World Expo.  I haven’t attended AWE in a couple of years so it was great to see the dramatic improvement of companies and demos.


Tish Shute and Helen Papagiannis on the AWE stage.

 

Examples of Talks

I had the pleasure of catching up with many colleagues and also met a slew of new people.  They spoke about all sorts of interesting projects they are undertaking, so many that I cannot write about all of them.  Here is a sample of just some of the talks:

David Smith talked about extreme AR and VR, in the sense of wide-field-of-view HMDs of up to 180 degrees per eye!  David is Chief Innovation Officer at Lockheed Martin and has accomplished many things in his career, including creating The Colony—the first real-time 3D adventure shooter and a precursor to today’s first-person shooters—working with James Cameron on virtual sets and virtual cameras, and founding Red Storm Entertainment with Tom Clancy.  David is not new to the VR space, and there is no doubt that we will continue to see great things from him.  I hope the 180-degree field-of-view AR and VR HMDs he showed a select few of us will be commercialized soon.

Kevin Williams gave a talk about his area of expertise—Out-of-Home Entertainment.  Kevin described how Out-of-Home Entertainment gives us the opportunity to develop for high-end systems but also has its own challenges (e.g., ruggedizing HMDs).  I had been looking forward to finally meeting Kevin, and I must say he is one of the most interesting personalities I have come across in the VR industry (and there are many interesting personalities!).  Kevin is a former Disney Imagineer who helped DisneyQuest show VR to a lot of people back in the 1990s, and he is a leading expert on VR entertainment outside of the home.

I presented a talk about Interacting with Virtual Humans where I showed some of my own work as well as the work by some of my colleagues.  My concluding point was that today we are selling the technology, but to appeal to the masses we must become really good at selling emotions and stories.  I believe one of the best ways to do that is through virtual humans, whether controlled by a real human or a computer.  The talk is available at https://www.youtube.com/watch?v=DwJM_DSzz1c

 

Mixed Reality

One of the more exciting areas today is mixing augmented and virtual reality together to form mixed reality.  One company doing this is Sulon Technologies.  Mixed reality can add textures, geometry, special effects, and characters onto the real world through video-see-through HMDs.  Most impressive to me is that Sulon was actually showing the system, which, up until then, I had assumed was largely an idea that would be difficult to implement.  Although there remains work to be done, Sulon has a good start, and I look forward to seeing future demonstrations as the system evolves.

Mark Billinghurst and his New Zealand HITLab team integrated two video cameras with the Oculus Rift, resulting in a nice mixed reality demo.  I was also impressed with their interface, where users could interact with 3D GUI elements and manipulate objects in ways similar to Sixense’s MakeVR modeling system.




Top Five Quick Oculus Demos

By Editorial 2 Comments

Introduction

We often find ourselves taking the Rift somewhere to give demos to people. During these times the question always presents itself: What games and experiences should I demo? In my mind, there are four key factors to look for when deciding whether an experience would make a good demo: Are the controls simple and quick to pick up? Is the game or experience relatively short (under ten or fifteen minutes)? Does it leave the camera alone? And is it a comfortable overall VR experience? In essence, it needs to be quick, easy to understand, and not cause VR sickness. With that said, here are my top five game demo recommendations:

The VR Demonstration Showcase

Lunadroid 237

Find it at https://share.oculusvr.com/app/lunadroid-237—an-interactive-narrative

Lunadroid caught me by surprise; I wasn’t expecting much from it, but it was actually really cool. It is a quick demo, clocking in at around five minutes, but has some very neat effects. Among other things, you’ll see a rocket blast off leaving a trail of sparks behind it, and space itself opening up in front of you. It is a very smooth VR experience; no controls are needed except ‘w’ to move forward in the direction you are looking. It also showcases head tracking by forcing the player to look around. Headphones or some other sound device are necessary to get the most out of this demo, as the soundtrack goes a long way toward setting the mood.

Proton Pulse

Proton Pulse

Find it at https://share.oculusvr.com/app/proton-pulse-rift

This was one of the first really good games for the Rift, and it has stood up to the (currently very short) test of time. The concept is easy to grasp: just a 3D game of brick breaker. The controls are also easy to grasp: simply look around to aim! One of the best things about this demo is that it can be very short, with each person playing only one or two games, or much longer; it just depends on the number of people being demoed to and how long they want to play. Proton Pulse does a good job of showcasing the 3D effects with bright, pulsating neon colors and special effects when bricks are broken or power-ups obtained. Sound improves this game’s experience a lot, but it’s not really necessary to get the feeling across.

Kit & Lightning’s The Cave

Kit and Lightning’s The Cave

Find it at https://share.oculusvr.com/app/kite–lightnings-the-cave

The Cave is a simple showcase of what is possible on the Rift as far as graphics are concerned. It’s short at only five to ten minutes, but it looks great. Two things stand out in The Cave: the sense of scale due to the very tall ceilings in the environment, and an amazingly slick Iron Man-like briefing section. You walk up to a platform, a holographic map appears, you get a floating HUD, and everything is very eye-grabbing. It is one of the best-looking Rift demos I have tried to date. Controls are pretty simple: just a normal first-person movement scheme with your gaze dictating the direction of movement. This is a demo everyone should try!

Titans of Space

Titans of Space

Find it at https://share.oculusvr.com/app/titans-of-space

As many of you probably know, this game gives a tour of the objects in our solar system and some stars outside it. The sense of scale in this game is used to great effect, and it is a great showcase for that facet of VR. It doesn’t require any controls except looking around and pressing enter to continue to the next object, but there are a few other things you can do (such as zooming in). Music is important in this game, so if at all possible, include some headphones or even crappy laptop speakers. It is a longer demo, usually taking around 20 minutes, which is why I don’t use it too often. However, if you know someone who loves reading about space then this demo will resonate strongly with them.

UE4 coaster (or any kind of rollercoaster game)

UE4 Coaster

Find it at: (UE4 coaster) https://developer.oculusvr.com/forums/viewtopic.php?f=42&t=8032
Rollercoaster games can make for very good Rift demos. No controls to worry about, sound isn’t vital, they’re short, and most are easy to run. They check all the boxes. The only problem you can run into is making the user feel sick – just like a real coaster!

The UE4 coaster is my personal favorite because it shrinks you down and lets you ride over and under furniture throughout the room. Since you are smaller than normal, everything looks gigantic. This gives a sense of scale that will surprise most people. The UE4 coaster is a bit more extreme than something like the original Rift Coaster. There is a much steeper drop and the coaster is faster paced, so it might not be ideal for some people. Your stomach will definitely have butterflies at certain parts, such as the very first drop. Some people will get that feeling and think VR is amazing, while others will think VR is horrible and not enjoyable. The person giving the demo has to judge how intense an experience they should give someone, or just warn the person and ask them if they still want to try it.

Conclusion

There are obviously many more great Rift experiences out there than what I covered, such as Ciess, but these are the main ones I use when demoing the Rift. They are simple and quick, but still show off the powerful capabilities of VR and hopefully leave users with a strong impression. Of course, depending on who you are demoing to, a full-featured game such as Team Fortress 2 or Ciess might be better. But in general, for non-gamers, I’ve found these to be some of the best demos. Let me know what demos you use when showing off the Rift or if you feel one of the above demos is a bad showcase to use!


Oculus VR at GDC Part II

By Editorial One Comment

Kris Roberts is back to cover the second GDC 2014 Oculus VR Session: Developing Virtual Reality Games and Experiences.

Tom Forsyth | Software Architect, Oculus VR

Virtual reality is significantly different from monitor-based games in many ways. Many choices that are merely stylistic for traditional games become incredibly important in VR, while others become irrelevant. Working on the Team Fortress 2 VR port taught me a lot of surprising things. Some of these lessons are obvious when looking at the shipped product, but there are many paths we explored that ended in failure, and minor subtleties that were highly complex and absolutely crucial to get right. In this talk, I’ll focus on a handful of the most important challenges that designers and developers should be considering when exploring virtual reality for the first time.

Tom started his presentation with a quick history of Oculus and an overview of the specs for DK2, and he shared some interesting statistics about just how fast their developer community has grown. In March 2013 they shipped the first 10K Kickstarter and initial-order devkits. Over the course of the rest of the year, 55K more devkits shipped. Interestingly, though, there are 70K developers registered on the Oculus dev portal. That means there are five thousand developers who are registered but don’t have a devkit!

Before getting into the meat of the content of his talk, Tom asked the audience to allow him to do a little “preaching” and the message was loud and clear: be kind to your players. His feeling is that as developers we tend to get used to the VR we are working on and build up a tolerance to aspects or issues which can be jarring and uncomfortable for our users. It’s important to keep in mind that everyone responds to VR differently and that care needs to be taken to keep the intensity down so that the experience is enjoyable for the majority of players. He suggests having options that allow eager players to turn up effects and movement if that’s what they want, but to have the default be low and make it easy for players to change and find the level that is best for them.

VOR-Gain Explained

The vestibulo-ocular reflex (VOR) is the adaptation that keeps our eyes fixed on an object even while our head moves. It’s a smooth motion of the eye muscles driven by the inner ear’s sensitivity to rotation; it’s involuntary, happens whether or not we are seeing anything (eyes closed or in the dark), and usually gives a 1:1 compensation between head rotation and eye motion. The tuning of this system is also extremely slow, on the order of weeks, and is most commonly experienced in the real world when people get a new eyeglass prescription. VOR gain can be thought of as the ratio between head motion and eye response. As with new glasses, VR can change that proportion and mess with the way our brain responds to the difference in VOR gain, and it’s almost always unpleasant. To preserve VOR gain, the simulation must render images that match the HMD and user characteristics. Unlike in a desktop game, FOV is not an arbitrary choice; it needs to be calculated from the physical pitch of the display and the user’s IPD. The SDK helps you match this precisely with data from the user configuration tool, and we are discouraged from changing the settings no matter how tempting that may be.
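As a rough illustration of why the FOV is derived rather than chosen, here is a toy pinhole-model calculation. This is my own sketch, not the Oculus SDK (which also accounts for the lens and per-eye offsets); it simply shows that once the physical screen size and the eye’s distance to it are fixed, the rendered FOV is fixed too:

```cpp
#include <cmath>

// Toy pinhole model, ignoring lens distortion: the eye sits a fixed
// distance from the screen and sees half of it above center and half
// below, so the vertical FOV follows directly from the geometry.
double verticalFovRadians(double screenHeightMeters, double eyeToScreenMeters) {
    return 2.0 * std::atan2(screenHeightMeters / 2.0, eyeToScreenMeters);
}
```

Rendering with any other FOV would draw pixels in places that disagree with where the physical optics put them, which is exactly the mismatch that upsets VOR gain.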

Moving on to the IPD, Tom explained that it’s more complex than most people think. Instead of just being the distance between the eyes, it’s actually two components per eye: nose-to-pupil distance and eye relief (the distance from the lens surface to the pupil), and neither of these is related to the dimensions of the HMD. It was interesting to note that these are seldom symmetrical. Taken together, the components form a center-to-eye vector, which is set during user configuration and stored in the user profile. This center eye position is roughly where players “feel” they are and is a good origin for positional things like audio, line-of-sight checks, and reticule/crosshair ray-casts. Within the application, there should be an easy way for users to reset their position when they are in a neutral forward pose, done by calling sensor->Recenter().
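To make the two per-eye components concrete, here is a hypothetical sketch of how a center-eye point could be derived from them. The struct and function names are mine, not the SDK’s, and the coordinate convention is an assumption:

```cpp
#include <cmath>

// +x is the player's right; +z points backward, from the lens plane
// toward the skull. All distances in meters.
struct Vec3 { double x, y, z; };

// The two per-eye IPD components Tom described.
struct EyeConfig {
    double noseToPupil;  // lateral offset from the nose centerline
    double eyeRelief;    // from the lens surface back to the pupil
};

// Pupil position relative to a point on the nose axis at the lens plane.
Vec3 pupilPosition(const EyeConfig& eye, bool isLeft) {
    return { isLeft ? -eye.noseToPupil : eye.noseToPupil, 0.0, eye.eyeRelief };
}

// Center-eye point: midpoint of the two (possibly asymmetric) pupils.
// A sensible origin for audio, line-of-sight checks, and crosshair ray-casts.
Vec3 centerEye(const EyeConfig& left, const EyeConfig& right) {
    Vec3 l = pupilPosition(left, true), r = pupilPosition(right, false);
    return { (l.x + r.x) / 2.0, (l.y + r.y) / 2.0, (l.z + r.z) / 2.0 };
}
```

Note that because the two sides are seldom symmetrical, the center eye does not generally sit exactly on the nose axis.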

Although Tom was emphatic about not messing with the user’s settings, scaling them uniformly is a way of effectively changing the world scale, and that is something he suggests we do experiment with. In general, most users find that reducing the world scale helps reduce the overall intensity, as it scales down all motions and accelerations, but don’t go too far or convergence can get tricky.

One question that every VR application needs to answer is: how tall is the player? The SDK does provide a value for eye height off the ground, calculated from the user’s provided real-life height. Sometimes that makes sense to use, and other times it doesn’t. If your game is about being a character of a particular stature, the player’s real-life size may not be a good value to use. In other applications, using the player’s real size may help them feel comfortable and ease them into presence. Another interesting observation is the phenomenon of “floor dragging”: the sense your brain gives you of how far away the floor is. The same VR experience can feel very different with the player seated as opposed to standing up!

Animating the player character presents a set of problems that almost every game will have to consider. There are often unavoidable transition animations when you enter or exit vehicles, get up after being knocked down, interact with elements in the world, and the like. There is a temptation to animate the camera as you would in a desktop game, but in Tom’s experience from TF2 this almost never works well for the player. In practice his advice is to almost always do snap cuts, or fade out and fade back in, while never taking camera control away from the player.

Meathook Avatars

Animating the player’s avatar can have a strong positive impact, especially with first-person actions like high fives or calling for a medic in TF2. But these animations need to play without moving the camera position: the virtual camera should always move with the player’s real head, and the position of the avatar’s head should coincide with the camera position. To accomplish this, Tom suggests an approach he calls “meathook avatars”. The idea is pretty simple: find the avatar’s animated head position, eliminate (scale to zero) the avatar’s head geometry, and then move the body so it lines up with the player’s virtual camera position. Visualize it as hanging the animating body of the avatar from a meathook located at the camera position.
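A minimal sketch of the meathook idea as I understood it from the talk; this is my own illustration, not code from the presentation, and the types are invented:

```cpp
// A tiny vector type for the sketch.
struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

struct Avatar {
    Vec3 bodyRoot;          // where the animation system placed the root
    Vec3 headOffset;        // animated head position relative to the root
    double headScale = 1.0; // head geometry scale (1 = visible)
};

// "Hang" the avatar from the camera: hide the head geometry, then move
// the root so that root + headOffset lands exactly at the camera. The
// camera itself is never touched; it keeps following the real head.
void meathook(Avatar& avatar, Vec3 cameraPos) {
    avatar.headScale = 0.0;  // scale the head to zero so it never blocks the view
    avatar.bodyRoot = sub(cameraPos, avatar.headOffset);
}
```

The key property is that whatever the animation does, the avatar’s (invisible) head always coincides with the camera, so first-person animations play out on the body without ever dragging the player’s viewpoint around.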

The last couple topics Tom talked about had to do with maintaining framerate. For a normal game, a fluctuating framerate can be annoying but in VR it will almost certainly break the player’s sense of presence. Rendering a stereo scene at the higher resolution required by the DK2 at 75FPS is challenging for even the strongest PCs and GPUs today and the main costs are draw calls and fillrate.

This is not news to developers who have worked on stereoscopic projects in the past, but for many people working in VR, doing 3D is new as well. For good VR the trick of doing 2D plus depth doesn’t work very well, and it is strongly recommended to do two renders, which in general results in twice as many draw calls; however, a number of things can be done once: culling, animation, shadows, some distant reflections/effects, and certain deferred lighting techniques. Fill rate on the DK2 is set by the 1080×1920 framebuffer (and don’t change this!), but the camera-eye render is typically 1150×1450 per eye and is determined by the user’s face and eye position (set by the profile and SDK). The advice is that it’s okay to change the size of the virtual camera renders, but not the framebuffer size; the distortion correction pass will resample and filter it anyway. It’s also okay to scale it dynamically every frame: if you have lots of particles or explosion effects that frame, drop the size. The SDK supports this use case explicitly.
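The dynamic scaling advice might look something like the sketch below. This is my own illustration, not SDK code: the 1150×1450 per-eye base size is the figure from the talk, while the scaling policy (proportional to how far the last frame blew its budget, clamped at half size) is an assumption:

```cpp
#include <algorithm>

// Per-eye render target size fed to the virtual cameras. The distortion
// framebuffer stays fixed; only this render size changes.
struct EyeRenderSize { int width, height; };

// Shrink the per-eye render target on heavy frames so the game can hold
// its frame budget (~13.3 ms at 75 FPS); the distortion correction pass
// resamples whatever resolution we rendered.
EyeRenderSize scaledEyeTarget(double lastFrameMs, double budgetMs) {
    const int baseW = 1150, baseH = 1450;  // typical per-eye size from the talk
    double scale = 1.0;
    if (lastFrameMs > budgetMs) {
        // Scale resolution down in proportion to the overrun,
        // but never below half size (and never upscale past base).
        scale = std::max(0.5, budgetMs / lastFrameMs);
    }
    return { static_cast<int>(baseW * scale), static_cast<int>(baseH * scale) };
}
```

A real implementation would smooth the scale over several frames to avoid visible resolution pumping, but the principle is the same: trade sharpness for a steady 75 FPS rather than dropping frames.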

Lessons Learned

In conclusion, my impression was that both the VR talks were well attended and well received. The Oculus guys have made a lot of progress this year from the initial devkit to the DK2 and the introduction of the Sony HMD means developers will have more platform options for their VR projects. These are certainly amazing days to be involved with game development, and the fact that virtual reality equipment is being developed to a higher quality than ever by some of the very smartest people makes it that much more exciting. We are almost there…

Oculus VR at GDC Part I

By Editorial No Comments

Still lots of coverage left from GDC 2014!  Kris Roberts was on the scene, and he got to check out both of Oculus VR’s GDC presentations.  More to come, of course!

Almost Ready Player One!

At GDC last year there was a big buzz around Virtual Reality. The very first Oculus Rift devkits shipped the week of the conference. The VR focused sessions were standing room only, and the enthusiasm from the developer community was obvious.

This year the VR developer community has grown tremendously and the announcement of the Sony Project Morpheus headset means VR game developers have options on the PC and PS4 for development of exclusive or cross platform titles. We will also have to see who will be the first to the market with a consumer unit. But at GDC the interest is all about the development equipment and SDK details.

In the GDC 2014 program there were two VR sessions at GDC 2014, both presented by Oculus:

—-
Working with the Latest Oculus Rift Hardware and Software (Presented by Oculus VR)
Michael Antonov | Chief Software Architect, Oculus VR
Nate Mitchell | VP of Product, Oculus VR

Since the debut of the original Oculus Rift development kit at GDC 2013, we’ve shown off a set of critical improvements including a high-definition display, positional tracking, and low-persistence support. Likewise, behind the scenes we’ve also been making critical improvements to the core Oculus SDK like new feature support, optimizations (particularly around latency), and overall simplicity. In this talk, we’ll discuss everything you need to know to get started integrating the latest Oculus Rift hardware with your VR game or experience. The talk will be split into an overview of the latest hardware, a technical breakdown for engineers, and a game design discussion relevant to the new features. We’ll also talk about our vision for future development hardware leading to the consumer Rift and what that path might look like.
—-

Nate started the session by confirming the big announcement from Oculus at the show: their second devkit, DK2. They had it on display in the expo and are now taking pre-orders (expected, unofficially, in July). He also pointed out that the sessions from GDC 2013 are available on YouTube and encouraged people who have not seen them to go watch:

2013 presentation “Running the VR Gauntlet”

The two biggest improvements in DK2 are the higher resolution low-persistence OLED display and six-degree-of-freedom (6DOF) positional head tracking. The low-persistence display is supposed to eliminate motion blur and judder. The camera-based 6DOF tracking system adds translation and does not drift. Other new features worth mentioning are the display’s higher 75Hz refresh rate, a built-in latency tester, new optics, an on-headset USB port and the elimination of the control box.

Presence. That’s the new buzzword for successful VR – when the user experiences the magic of VR and believes they are in the simulation. The DK2 is presented as having the fundamental building blocks to deliver presence, but Nate made it clear that it’s not the holy grail and it won’t deliver presence for everyone. He suggested that DK2 can provide an improved taste of the experience over what was possible with DK1 – and that with it developers will have the tools they need to craft quality VR, but that the upcoming consumer version will provide another jump in quality comparable to the difference between DK1 and DK2. He also stressed that DK2 is the last dev kit they will produce. Once the consumer version is available, that will be both the end user device as well as the tool for developers.

Michael went on to talk about some of the technical considerations of DK2. In particular, the low persistence screen does help provide a stable image as you turn without motion blur, but the current display can exhibit a rolling shutter, scanning right-to-left with a 3ms band of light, which can be seen about 20% of the time. The consumer version could combat this with global persistence.

He also described how the positional tracking system works, with an external camera positioned to look at the user and a set of infrared LEDs on the headset. It tracks the user within a 72° (H) x 52° (W) field of view and a 0.5m – 2.5m range. If the user moves outside the view of the camera, the system falls back to the gyro and accelerometer for inputs, but loses the ability to provide translation in the process.
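That fallback behavior is easy to picture as a sketch. This is my own illustrative pseudostructure, not the actual Oculus SDK: the FOV and range figures come from the talk (my reading of “72H x 52W”), and the helper names are made up for the example.

```python
import math

# Figures as reported in the talk; which axis is "H" vs "W" is my interpretation.
FOV_H_DEG, FOV_W_DEG = 72.0, 52.0
NEAR_M, FAR_M = 0.5, 2.5            # camera tracking range in meters

def in_camera_view(x, y, z):
    """x = meters right of the camera axis, y = meters above it, z = meters away."""
    if not (NEAR_M <= z <= FAR_M):
        return False
    return (abs(math.degrees(math.atan2(x, z))) <= FOV_W_DEG / 2 and
            abs(math.degrees(math.atan2(y, z))) <= FOV_H_DEG / 2)

def head_pose(x, y, z, imu_orientation):
    """Full pose inside the camera's view; orientation only (no translation) outside it."""
    if in_camera_view(x, y, z):
        return {"orientation": imu_orientation, "position": (x, y, z)}
    return {"orientation": imu_orientation, "position": None}  # IMU fallback
```

The key point the sketch captures is that orientation never goes away (the gyro and accelerometer are always available); only translation is lost when the LEDs leave the camera frustum.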

Moving into the software, we were told that within the next few weeks developers should expect the release of the Oculus SDK version 0.3. The new SDK will work with both the DK1 and DK2, have support for the new positional tracking system, provide a C language interface, and include optimizations for performance and reducing latency. To take best advantage of the current and future optimizations, the strong suggestion is to use the SDK for rendering. They are working hard on reducing latency, including a novel “Timewarp” approach pioneered by John Carmack, which re-samples the orientation sensor just before the end of the current frame and re-projects the rendered image to match.
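To illustrate the idea (this is a hand-rolled sketch, not the Oculus SDK API): the scene is rendered with the orientation sampled at frame start, then just before scan-out the sensor is sampled again and the finished image is shifted to account for the head motion that happened during rendering. Here the warp is reduced to a simple horizontal shift for a pure yaw change, which ignores the distortion-correct reprojection a real implementation does:

```python
import numpy as np

def yaw_to_pixel_shift(yaw_delta_rad, fov_rad, image_width):
    """Approximate horizontal pixel shift for a small change in yaw."""
    pixels_per_radian = image_width / fov_rad
    return int(round(yaw_delta_rad * pixels_per_radian))

def timewarp(frame, yaw_at_render, yaw_at_scanout, fov_rad=np.deg2rad(90)):
    """Shift the rendered frame to compensate for yaw accumulated since render time."""
    shift = yaw_to_pixel_shift(yaw_at_scanout - yaw_at_render, fov_rad, frame.shape[1])
    # Turning right moves the world left in view, so shift columns accordingly.
    return np.roll(frame, -shift, axis=1)
```

The payoff is that the displayed image tracks the head at sensor-sample latency rather than full render latency, even when the renderer itself takes a whole frame.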

Nate finished out the session with a quick discussion of some design considerations for VR content. If you have not already, the Best Practices Guide is an excellent place to start:

http://static.oculusvr.com/sdk-downloads/documents/OculusBestPractices.pdf

The three main goals to shoot for are presence, comfort and fun. Although a lot of the first projects and demos were first person shooters, they are finding that more sedentary and relaxed experiences can have a more positive impact. In one of the demos they have in the booth now, “Couch Knights”, developed with Epic, you hang out on a couch in the simulation and control a little knight character who runs around the environment. In some ways it’s more of what people would expect in AR, but it works great in VR. This is the first medium where people “feel” the experience, and making sure all the parts of the integration are correct is essential: scale, FOV, tracking and player IPD.

Wrapping up, Nate reiterated that Oculus is working on locking down the specs for the consumer version of the Rift. The CV1 is going to be a big step from what we have now, but the fundamental building blocks are in DK2.

More to come tomorrow!

Oculus Rift Development Kit Review

By Editorial No Comments

Introduction

For the uninitiated, the Oculus Rift may seem alien sitting next to a monitor. A head mounted display, or HMD, the Rift is, from the outside, essentially a box with straps. Developed by Palmer Luckey, a long-time moderator in the Meant to Be Seen forums, the Rift was designed to be a modern reinvention of the HMD. With a vast collection of HMD models, possibly the largest in the world, Luckey was largely unimpressed with the state of Virtual Reality – so he decided to change it.

Enter the Oculus Rift – a lightweight prototype built out of cheap parts. In its first iteration, the Rift became a perfect example of the dangers of judging by appearance. A rickety kludge of duct tape and hand-made circuitry, it was clear that whatever value it had would come from within. When industry luminary John Carmack demoed it at E3, the industry was suddenly very interested. One incredibly successful Kickstarter campaign later, the prototype is here.

With a personal history soaked in cyberpunk and futurism, I was excited from the very moment I discovered the Rift. After spending a year dwelling on the possibilities, researching the technology, and briefly considering building my own, I bit the bullet in early March 2013 and ordered my own development kit. Compelled by the dream of entering other worlds, I wonder if the industry is approaching a Rubicon; it’s clear that the Oculus Rift will revolutionize HMDs, but I wonder if it will also revolutionize display technology in general.

The Rest of SIGGRAPH 2013

By Editorial No Comments

There was a lot to be excited about at this year’s SIGGRAPH. When I went to the conference, I knew about the Nvidia Light Field HMD and had a checklist of exhibitors to go see – but it almost always seems like it’s the unexpected things that turn out to be the most interesting!

Epson
Eric Mizufuka

“Exceed Your Vision” was the tagline at the Epson booth where they were showcasing the Moverio BT-100 see-through display with a collection of partners who have been working on developing applications for it.

The HMD itself is clearly an initial offering, and it’s encouraging to see companies like Epson bringing a product like this to market. The device has two prism-based transparent display elements which can work together to provide a stereoscopic image. But there is no integrated head tracking, camera or sensor – so getting the image floating in your vision to match up with the real world you are also seeing requires additional equipment.

To demonstrate where things could be going, one of their partners, Meta, adds a 3D sensor for object detection and gesture recognition. The vision for the project is pretty grand, and it’s clear there is a long way to go for an immersive and intuitive augmented reality experience – but it’s also pretty amazing to see what they are already doing. The most impressive part of the demo was the system’s ability to detect a sheet of paper I was holding in front of me and overlay video content which was scaled, oriented and positioned properly as if it was on the paper itself. It’s not particularly practical to warp a video onto the shape of the paper, but it does show how powerful a system could be that can analyze the world around us and incorporate real objects into interfaces and display surfaces.

Let me be clear that I want awesome augmented reality. I think most of us do. What I expect someday is Terminator or Iron Man style visual overlays where the computer is constantly scanning and aware of everything I can see. It identifies people and objects that are of interest, looks up all the pertinent data and tells me whatever I might want to know – helping me understand the world around me with superhuman senses. I imagine natural ways of interacting with the system using voice, eye movement, and gestures. All of this needs to happen with little to no latency, and be calibrated to my personal physiology and vision so the computer display meshes seamlessly with the real world.

It goes without saying that what we have today falls short of those expectations, and these AR challenges are hard and numerous. The equipment that Epson has built and the systems that their partners like Meta are developing are the first ones we actually have – and although there is obviously room for improvement in almost every dimension, it’s clear to me that with persistence and ingenuity we will actually get there.

Z-Vector
Julius Tuomisto – Delicode http://z-vector.com

The tagline for Delicode is “Shaping the future of natural interaction”, but what I think they really have with Z-Vector is a super nifty party toy. The system uses an Oculus VR devkit with a PrimeSense sensor bar strapped on top, plus software Julius has written that gives the user a psychedelic experience: it processes and displays the space around you in colors and patterns that visualize the musical soundtrack you play through it. You can download it for free and use it with or without the headset or sensor bar. It’s pretty trippy, and I like it.

Nvidia’s Light Field HMD at SIGGRAPH 2013

By Editorial 2 Comments

And so begins MTBS’ coverage of SIGGRAPH 2013!  Today, Kris Roberts checks out Nvidia’s Light Field HMD prototype.  Obviously at the proof of concept stage, this new display technique holds a lot of promise for VR’s impending future.

https://research.nvidia.com/publication/near-eye-light-field-displays

I was really excited to see the Nvidia research project’s HMD prototype. Using a light-field display has a number of significant advantages over conventional display techniques that are very attractive for virtual reality. I very much wanted to see how it looks for myself.

The demonstration equipment they had on display was basically divided into two groups. One was a working real time stereoscopic HMD prototype built from off the shelf components and using a pair of small microlens-covered 1440×720 OLED panels and a 3D printed housing. The other was a set of film slides with a loose microlens to demonstrate what the display could look like with much higher resolution.

With the goal of producing perceptions indistinguishable from reality, a light-field display has the unique property of letting the viewer’s eye decide what to focus on in the image. With a conventional display, either the entire scene is in focus or the focus is determined by the rendering/photographic system. A light-field display presents something much more natural and realistic in letting the viewer decide not only what part of a scene to converge on, but also which part to focus on – and the areas not in focus blur out exactly as they do in reality. Another really interesting aspect of this approach is that the display itself can be calibrated to accommodate the flaws in a user’s vision, eliminating the need to wear corrective lenses along with the HMD!

The stereoscopic prototype did demonstrate the focus aspect of the display very well with scenes that had fish swimming in an aquarium. It was really cool to switch between the close and distant fish and see them go in and out of focus. In my view, this plays an important part in tricking my mind into thinking what I’m seeing is actually real and not just a flat image being held in front of my eye.

Another advantage is the size, particularly the thickness of the display assembly. With a normal HMD there are one or more lenses in front of the image panel that require some significant distance to focus properly – and the result is a large and often heavy piece of equipment. With the light-field approach, both the lens membrane and the image panel are thin and light, and require a focal distance measured in millimeters. The demonstration prototype was about 1 centimeter thick. Since they were using components from an off the shelf HMD, they chose to keep it simple and mount the controlling electronics on top of the eye pieces, but really that could be relocated to a package in your pocket or elsewhere – it doesn’t need to be on the headset itself. Even with that extra bulk, the entire unit was still much smaller and lighter than any other HMD I have seen.

The primary shortcoming of the system in my opinion is the effective resolution of the image seen by the user. With the 720p panels in the stereoscopic prototype, I was told the image you perceive is in the range of 200p – and honestly, that seemed generous. The color, contrast and stereoscopic depth were all reasonably good, but my impression of the resolution of the actual image was very low. So, how fine a resolution would be required to meet or exceed the perceived resolution of the ultra realistic HMD we would all like to have? Well, the demonstration slides they were using were actual film with a resolution of 3000dpi, and they looked pretty good – but not flawless in clarity. So with the best contemporary mobile device screens in the ~350dpi range it seems like it will be some time before we have affordable panels that are large enough to provide satisfactory field of view and fine enough to have an acceptable perceived resolution.

Another difference which may be a significant factor for the light-field approach is the nature of the rendering process. Unlike a traditional single view display, a light-field display uses many small views of the scene. The GPUs and rendering pipelines we have today have been developed and optimized for a single output image, and their suitability for a system that requires potentially thousands of simultaneous views may not be ideal.

The stereoscopic prototype on display was running on a consumer-level graphics card, rendering a 1440×720 image composed of 144 individual views which I believe were each 80×80. I’m not sure how well that will scale to the ultra high number of views that would be required to produce a really convincing high resolution light-field display, but Douglass was jovial when talking about how Nvidia is after all a rendering company and ideally positioned to solve those problems.
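As a rough sketch of that rendering pattern: each elemental image is rendered from a slightly offset viewpoint and packed into the panel behind its lenslet. The 18×8 grid below is my guess at a layout (the talk only mentioned 144 views of roughly 80×80), and the flat-shaded “renderer” is a stand-in for a real per-view camera.

```python
import numpy as np

PANEL_W, PANEL_H = 1440, 720      # per-eye panel resolution from the prototype
GRID_X, GRID_Y = 18, 8            # hypothetical lenslet grid: 18 * 8 = 144 views
VIEW_W, VIEW_H = PANEL_W // GRID_X, PANEL_H // GRID_Y   # 80 x 90 pixels each

def render_view(gx, gy):
    """Stand-in for a real renderer: one elemental image from a shifted viewpoint."""
    return np.full((VIEW_H, VIEW_W), (gx * GRID_Y + gy) % 256, dtype=np.uint8)

def render_light_field():
    """Pack all 144 elemental images into one panel-sized frame."""
    panel = np.zeros((PANEL_H, PANEL_W), dtype=np.uint8)
    for gy in range(GRID_Y):
        for gx in range(GRID_X):
            panel[gy * VIEW_H:(gy + 1) * VIEW_H,
                  gx * VIEW_W:(gx + 1) * VIEW_W] = render_view(gx, gy)
    return panel
```

Even this toy loop makes the scaling concern concrete: the per-view setup cost is paid 144 times per frame, which is exactly the pattern today’s single-view GPU pipelines aren’t optimized for.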

So in practice, what is easily available now with a light-field display falls quite a bit short of the image quality we can see with the current traditional HMD displays (and resolution is often cited as one of the main areas for improvement in those). I am very glad to have had the opportunity to see the prototype and do think there is tremendous potential and unique advantages with this approach – we just need ultra high resolution panels and rendering equipment that can pump out a tremendous number of tiny views.

This is just the beginning!  Come back regularly for a lot more SIGGRAPH 2013 coverage!