

Nvidia's Light Field HMD at SIGGRAPH 2013

And so begins MTBS' coverage of SIGGRAPH 2013!  Today, Kris Roberts checks out Nvidia's Light Field HMD prototype.  Though still at the proof-of-concept stage, this new display technique holds a lot of promise for VR's impending future.

https://research.nvidia.com/publication/near-eye-light-field-displays


I was really excited to see the Nvidia research team's HMD prototype. A light-field display has a number of significant advantages over conventional display techniques that make it very attractive for virtual reality, and I very much wanted to see how it looked for myself.

The demonstration equipment they had on display was divided into two groups. One was a working real-time stereoscopic HMD prototype built from off-the-shelf components, using a pair of small microlens-covered 1440x720 OLED panels and a 3D-printed housing. The other was a set of film slides with a loose microlens to demonstrate what the display could look like with much higher resolution.


With the goal of producing perceptions indistinguishable from reality, a light-field display has the unique property of letting the viewer's eye decide what to focus on in the image. With a conventional display, either the entire scene is in focus, or the focus is determined by the rendering/photographic system. A light-field display presents something much more natural and realistic by letting the viewer decide not only what part of a scene to converge on, but also which part to focus on – and the areas not in focus blur out exactly as they do in reality. Another really interesting aspect of this approach is that the display itself can be calibrated to accommodate the flaws in a user's vision, eliminating the need to wear corrective lenses under the HMD!
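To make that idea concrete, here is a minimal paraxial sketch of how a microlens display reproduces ray directions. All numbers below are illustrative assumptions, not the prototype's actual optics: a pixel sitting slightly off a lenslet's axis emits light at a correspondingly tilted angle, and the eye is free to focus on whichever bundle of rays it chooses.

```python
import math

# Assumed (hypothetical) geometry for illustration only:
lenslet_focal_mm = 3.3    # lenslet-to-panel spacing
pixel_pitch_mm = 0.033    # panel pixel pitch

def ray_angle_deg(pixel_offset):
    """Angle of the ray emitted for a pixel `pixel_offset` pixels off the
    lenslet's optical axis (small-angle / paraxial approximation)."""
    return math.degrees(math.atan2(pixel_offset * pixel_pitch_mm,
                                   lenslet_focal_mm))

# Pixels farther from a lenslet's center emit rays at steeper angles; together
# the lenslets reconstruct cones of light from virtual points in the scene,
# which is what lets the eye refocus naturally.
for off in (0, 1, 2, 4):
    print(off, round(ray_angle_deg(off), 2))
```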

The stereoscopic prototype did demonstrate the focus aspect of the display very well with scenes that had fish swimming in an aquarium. It was really cool to switch between the close and distant fish and see them go in and out of focus. In my view, this plays an important part in tricking my mind into thinking what I'm seeing is actually real and not just a flat image being held in front of my eye.

Another advantage is the size, particularly the thickness of the display assembly. With a normal HMD there are one or more lenses in front of the image panel that require some significant distance to focus properly – and the result is a large and often heavy piece of equipment. With the light-field approach, both the lens membrane and the image panel are thin and light, and require a focal distance measured in millimeters. The demonstration prototype was about 1 centimeter thick. Since they were using components from an off-the-shelf HMD, they chose to keep it simple and mount the controlling electronics on top of the eyepieces, but that could really be relocated to a package in your pocket or elsewhere; it does not need to be on the headset itself. Despite the extra bulk, the entire unit was still much smaller and lighter than any other HMD I have seen.


The primary shortcoming of the system, in my opinion, is the effective resolution of the image seen by the user. With the 720p panels in the stereoscopic prototype, I was told the image you perceive is in the range of 200p – and honestly, that seemed generous. The color, contrast, and stereoscopic depth were all reasonably good, but my impression of the resolution of the actual image was very low. So, how fine a panel would be required to meet or exceed the perceived resolution of the ultra-realistic HMD we would all like to have? The demonstration slides they were using were actual film with a resolution of 3000 dpi, and they looked pretty good – but not flawless in clarity. With the best contemporary mobile device screens in the ~350 dpi range, it seems it will be some time before we have affordable panels that are both large enough to provide a satisfactory field of view and fine enough to yield an acceptable perceived resolution.
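The gap described above is easy to quantify. A quick back-of-envelope calculation, using the dpi figures quoted in the paragraph (this is pure arithmetic, not a display-quality model), shows how far panel density still has to come:

```python
# Figures from the article: the film demo slides vs. the best mobile panels.
film_dpi = 3000      # resolution of the demonstration film slides
mobile_dpi = 350     # best contemporary mobile device screens (approx.)

linear_gap = film_dpi / mobile_dpi   # how much finer the pixel pitch must get
areal_gap = linear_gap ** 2          # pixel *count* scales with the square

print(round(linear_gap, 1))   # roughly 8.6x finer pitch
print(round(areal_gap))       # roughly 73x more pixels over the same area
```

The squared term is the sobering part: closing a ~8.6x linear gap means driving and rendering to roughly 73 times as many pixels per panel.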

Another difference, which may be a significant factor for the light-field approach, is the nature of the rendering process. Unlike a traditional single-view display, a light-field display is fed many small views of the scene. The GPUs and rendering pipelines we have today have been developed and optimized for a single output image, and their suitability for a system that requires potentially thousands of simultaneous views may not be ideal.


The stereoscopic prototype on display was running on a consumer-level graphics card, rendering a 1440x720 image composed of 144 individual views which I believe were each 80x80. I'm not sure how well that will scale to the ultra-high number of views that would be required to produce a really convincing high-resolution light-field display, but Douglass was jovial when talking about how Nvidia is, after all, a rendering company and ideally positioned to solve those problems.
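As a rough sketch of the tiled rendering just described (the 1440x720 framebuffer and 80x80 view size are the figures as I recall them; the row-major grid layout and the helper function below are my own assumptions for illustration):

```python
# Assumed framebuffer layout: elemental views tiled row-major into one image.
PANEL_W, PANEL_H = 1440, 720   # per-eye framebuffer, per the article
VIEW_W, VIEW_H = 80, 80        # size of each elemental view (as recalled)

views_per_row = PANEL_W // VIEW_W   # 18
views_per_col = PANEL_H // VIEW_H   # 9

def view_rect(index):
    """Framebuffer rectangle (x, y, w, h) for elemental view `index`.
    Each view is the scene rendered from one slightly offset camera, so
    the GPU rasterizes the whole scene once per view."""
    row, col = divmod(index, views_per_row)
    return (col * VIEW_W, row * VIEW_H, VIEW_W, VIEW_H)

print(views_per_row * views_per_col)  # 162 tiles fit at these sizes
print(view_rect(0))                   # (0, 0, 80, 80)
print(view_rect(19))                  # second row, second column
```

Note that 18x9 gives 162 slots rather than the 144 views I remembered, which fits my own hedge above – but either way, the cost structure is the same: the scene is drawn once per tiny view, and scaling to thousands of views multiplies that work accordingly.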

So in practice, what is easily available now with a light-field display falls quite a bit short of the image quality we can see with the current traditional HMD displays (and resolution is often cited as one of the main areas for improvement in those). I am very glad to have had the opportunity to see the prototype and do think there is tremendous potential and unique advantages with this approach – we just need ultra high resolution panels and rendering equipment that can pump out a tremendous number of tiny views.

This is just the beginning!  Come back regularly for a lot more SIGGRAPH 2013 coverage!