Light field display

Talk about Head Mounted Displays (HMDs), augmented reality, wearable computing, controller hardware, haptic feedback, motion tracking, and related topics here!
CityZen
One Eyed Hopeful
Posts: 9
Joined: Fri Jun 08, 2012 3:36 pm

Light field display

Post by CityZen »

Here's my idea for a light field display.

First off, though, what is a light field display? A light field is just the set of all the various rays of light within a given volume of space. A light field display is one that can replay all of those rays, or at least some interesting portion of them. You'd really only care about the rays that could possibly enter your eye, from wherever you're looking at the display from.

A regular 2D screen replays only those rays that are emitted in uniform cones from a flat surface. That is to say, each pixel emits the same color (and intensity) of light in all forward directions (at least, ideal screens do this pretty well; with cheap LCDs, the color shifts as you view the screen off-axis).

A light field display would operate similarly to a regular 2D screen, except that for every pixel, different color rays would go out in different directions. You could say that each pixel becomes like a little image itself, except that its sub-pixels correspond to different angular directions rather than 2D positions.
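To make that "image within a pixel" idea concrete, here's a toy sketch that maps a pixel plus one of its angular sub-pixels to a ray. All the numbers (pixel pitch, cone angle, 8x8 angular grid) are made-up illustrative values, not a real panel spec:

```python
import numpy as np

def ray_for_subpixel(px, py, u, v,
                     pixel_pitch=0.1, angular_samples=8, max_angle=0.5):
    """Map a pixel plus one of its angular sub-pixels to a ray.

    px, py    : integer pixel coordinates on the (z = 0) panel plane
    u, v      : angular sub-pixel indices in [0, angular_samples)
    max_angle : half-angle of the emission cone, in radians
    Returns (origin, unit direction) for that ray.
    """
    # The ray starts at the pixel's position on the panel.
    origin = np.array([px * pixel_pitch, py * pixel_pitch, 0.0])
    # Map sub-pixel indices to signed angles in [-max_angle, +max_angle].
    theta_x = ((u + 0.5) / angular_samples) * 2 * max_angle - max_angle
    theta_y = ((v + 0.5) / angular_samples) * 2 * max_angle - max_angle
    direction = np.array([np.tan(theta_x), np.tan(theta_y), 1.0])
    return origin, direction / np.linalg.norm(direction)
```

Each (px, py) pair is an ordinary pixel; each (u, v) pair picks one ray out of its little sub-image of directions.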

Such a display is what you'd use to make a Holodeck, if you wanted to go that route. With this display, you'd have to focus your eyes to see things at different distances. It would also offer motion parallax: if you moved your head to look behind something, you could (even while the display is showing a static light field). Multiple viewers could do this at once. In short, the ideal version of this display could replicate pretty much anything you could naturally see.

So now let's think about something else: the Lytro light-field camera. You may have heard about it: the camera that lets you take a picture and then focus it later. It captures a light field, and lets you choose how to reduce that light field into a regular 2D image afterward.

How does it work? It's quite simple: it uses a very high resolution image sensor with a microlens array in front of it. In essence, it's a big array of little cameras (or think about images within pixels again).

You might also want to consider another light-field camera: the "bullet-time" camera setup used for The Matrix. This was a (curved) linear array of cameras that captured the same shot from many different positions at the same time. By choosing which camera image to look at, you could achieve motion parallax. The Lytro is similar, except that instead of a big long line of cameras, they've been condensed into a 2D array and put into a little box.

The other difference is that the resulting images are all kind of interleaved. That's fine though, because you can sort them out in software.
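Here's a minimal sketch of that software sort-out, assuming an idealized sensor where each microlens covers an exact 8x8 block of pixels (real plenoptic decoding also has to handle lens rotation, hexagonal packing, vignetting, and so on):

```python
import numpy as np

def extract_subaperture_views(sensor, lenslet=8):
    """Demultiplex an interleaved plenoptic sensor image.

    sensor : 2D array whose (lenslet x lenslet) blocks each hold the
             micro-image formed behind one microlens.
    Returns an array of shape (lenslet, lenslet, H, W): view [u, v]
    collects pixel (u, v) from under every microlens, i.e. one of
    the "array of little cameras" images.
    """
    H, W = sensor.shape[0] // lenslet, sensor.shape[1] // lenslet
    # Split the sensor into microlens blocks...
    blocks = sensor.reshape(H, lenslet, W, lenslet)
    # ...then reorder so the intra-lenslet (angular) indices come first.
    return blocks.transpose(1, 3, 0, 2)
```

Each of the lenslet-squared output images corresponds to looking through the main lens from a slightly different position, which is exactly the bullet-time-style parallax described above.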

Anyway, my idea is quite simple: the Lytro captures a light field using a high-resolution sensor, a micro-lens array, and the right software to sort things out.

To go in the opposite direction, you just need to replace the sensor with a display panel. Of course, the display panel needs the same kind of pixel count that the sensor has (which seems to be 11 - 16 megapixels). Also, the Lytro only captures light fields that enter its main lens, which is pretty small.

You can begin to see the problems: for a monitor-size light-field display, the number of "total pixels" (rays or cones, really) would be massive. As I said before, multiply each ordinary pixel by a whole image's worth of pixels. The computation needed to generate such an image would scale similarly (though there would be lots of "shortcuts" you could take to cut the problem down).
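Some back-of-envelope numbers make the point, picking an ordinary 1080p grid and a modest 32x32 angular sub-image purely for illustration:

```python
# Back-of-envelope ray budget for a monitor-sized light-field display.
# These are illustrative numbers, not a real panel spec.
spatial = 1920 * 1080   # ordinary monitor pixel grid
angular = 32 * 32       # directions per pixel: a small sub-image each
total_rays = spatial * angular

print(f"{total_rays:,} rays")  # over 2 billion
print(f"{total_rays // spatial}x the pixels of a plain 1080p frame")
```

And 32x32 is a fairly coarse angular resolution; sharpening it further multiplies the budget again.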

To make this practical, you'd want to limit the number of rays, and this is where HMDs come in. By putting the displays in front of your eyes, you really cut down the number of rays that you'd have to display and compute, since you only care about the ones that can enter your pupils.
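A rough estimate shows how much that helps. The geometry here is made up but plausible (a 4 mm pupil about 20 mm from the display optics):

```python
import math

# Fraction of a pixel's forward hemisphere of directions that can
# actually reach the eye in an HMD (illustrative geometry only).
pupil_diameter = 4e-3   # ~4 mm pupil
eye_relief = 20e-3      # ~20 mm from the optics to the eye

half_angle = math.atan((pupil_diameter / 2) / eye_relief)
# Solid angle of that narrow cone vs. the full 2*pi sr hemisphere.
cone = 2 * math.pi * (1 - math.cos(half_angle))
fraction = cone / (2 * math.pi)

print(f"{fraction:.2%} of the hemisphere")  # roughly half a percent
```

So on the order of 99% of the rays a desktop light-field monitor would have to emit simply never enter the pupil, and an HMD can skip them.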

This means that the microlenses don't have to project the micropixels into such wide angles. Also, you don't need symmetry, since micropixels on the left side of the display don't need to be projected towards the left, as you wouldn't see them. Of course, making each microlens a custom shape makes this a bit more difficult.

That's the beginnings of the idea. I need to hit the hay now.
foisi
Cross Eyed!
Posts: 105
Joined: Wed Sep 22, 2010 3:47 am
Location: Toulouse, France

Re: Light field display

Post by foisi »

Hello,

I thought about the same thing and I did some research to see if something like that already existed and I found this : http://en.wikipedia.org/wiki/Integral_imaging

I also think that having this in an HMD would be nice (an alternative to a variable-focus lens), but since current resolutions are already not so good once stretched across a massive FOV, it would be even worse to sacrifice groups of pixels to make the little images.
CityZen
One Eyed Hopeful
Posts: 9
Joined: Fri Jun 08, 2012 3:36 pm

Re: Light field display

Post by CityZen »

Nice find! I was quite sure my idea wasn't unique, but had no idea it was over 100 years old.

I have the feeling that making extremely high DPI displays is possible, but nobody sees a reason to do it yet. However, it will likely require new approaches to get to the same kind of density as CMOS sensors offer.

Of course, the other end of this is the computation. I just saw an article today about a team from MIT that is working on this same problem. So this kind of thing may become within reach before you know it.
zalo
Certif-Eyed!
Posts: 661
Joined: Sun Mar 25, 2012 12:33 pm

Re: Light field display

Post by zalo »

I don't know if your idea would look too good. Remember, the Lytro sorts it all out in software from the micro lens array.

It would look kind of funky to a human I think (like a fly's eye).

Luckily, I think this is also a light field display (but I don't think anything big has happened with it since 2008):
http://www.holografika.com/

We have a hard enough time shrinking normal-density TVs down to HMD size; how could we possibly do a light-field-density one?

Variable focusing will have to do for "now" (nearer future).
CityZen
One Eyed Hopeful
Posts: 9
Joined: Fri Jun 08, 2012 3:36 pm

Re: Light field display

Post by CityZen »

The images to be displayed would also be created with a computer, to correctly sort out the rays such that you'd only see "proper" images from any given direction. This requires much more work than traditional rendering, since you're not just creating one 2D projection of a scene, but many. But rather than just rendering the scene dozens of times over, you'd want to take advantage of the fact that the views have a lot in common.

As I think about this, it might not really be that much more difficult than traditional rendering. If you consider the traditional forward rendering pipeline, you might simply augment the vertex transformation stage with an array of different (but related) projections (and their associated coordinates). The pixel rendering part could be mostly identical (the diffuse color is the same regardless of projection; only the specular part changes), but then you'd have multiple depth/color buffers (one per projection). You'd want to avoid writing the data out to each buffer when it's the same in all of them; you'd only write out the differences.
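The shared-vertex-work part of that idea can be sketched in NumPy terms (a toy stand-in for the vertex stage, not real GPU code): transform one set of vertices by a whole array of related projection matrices in a single batched multiply.

```python
import numpy as np

def project_all_views(vertices, projections):
    """Transform one set of vertices by many related projections at once.

    vertices    : (N, 4) homogeneous positions, shared by every view
    projections : (V, 4, 4), one matrix per viewpoint in the array
    Returns (V, N, 4) clip-space positions, one set per view.
    """
    # One batched multiply: the per-vertex geometry is shared, and
    # only the projection matrix differs between the V related views.
    return np.einsum('vij,nj->vni', projections, vertices)
```

On a GPU this would map naturally onto instanced or multi-view rendering, but the principle is the same: the scene data is touched once, and only the cheap per-view projection is repeated.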

Clearly, there's an order of magnitude more work involved. We just need to find ways to take advantage of the redundancies.