 HMD lightfield question 
One Eyed Hopeful

Joined: Thu Dec 26, 2013 7:57 pm
Posts: 32
I'm interested in building lightfield-type head-mounted displays, and after reviewing the patents for these from Magic Leap, Microsoft and others, I noticed that unlike free-space or TV-style displays, the HMDs don't have any multiview or real parallax component. The methods described were only for multifocal images.

What I am wondering is whether the eyes can perceive the directional component of a lightfield's voxels in a way that can't be reproduced in graphics using eye-tracking data?


Tue May 26, 2015 5:00 pm
One Eyed Hopeful

Joined: Wed Sep 25, 2013 12:36 am
Posts: 23
The pinlight display paper talks about using eye tracking in section 3.3.1
http://www.cs.unc.edu/~maimone/media/pi ... h_2014.pdf

http://pinlights.info/ and https://research.nvidia.com/publication ... d-displays are good reading for super-multiview type lightfield display tech.


Wed May 27, 2015 5:34 pm
One Eyed Hopeful

Joined: Sun Sep 08, 2013 11:21 am
Posts: 4
FMPrime, if I understand what you're asking: does it really take a real lightfield to portray a 'focus-able' 3D scene?
This could prove to be a very cool question! Eye tracking should be able to eliminate orders of magnitude of optical complexity.

Bearing in mind that the eye sees only one 2-D image (on the retina) at any given moment, we should not have to generate a lightfield.
A true lightfield's only differentiating trait is its capacity to introduce depth-related blur into an image.
When the eye interacts with the world's incoming light field, the retina sees any unfocused regions as blurry shapes.
But the eye can't really tell the difference between out-of-focus details and perfectly-focused blurs! They're both just a hazy retinal image.

What if:
We project a perfectly-focused image onto the retina, then render some blur to exactly simulate what the eye expects at that level of focus?
As the eye tries to change focus, we physically shift the lens to allow the eye to focus at the expected depth then update the blur to match.
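That scheme is easy to sanity-check with thin-lens numbers. A minimal sketch of the retinal blur-circle diameter the rendered blur would need to mimic, assuming round figures of a ~17 mm eye focal length and a 4 mm pupil (both are just illustrative placeholders):

```python
def blur_circle_mm(obj_dist_mm, focus_dist_mm,
                   pupil_mm=4.0, eye_focal_mm=17.0):
    """Thin-lens circle-of-confusion diameter on the retina for an object
    at obj_dist_mm while the eye is accommodated to focus_dist_mm."""
    f = eye_focal_mm
    return (pupil_mm * f * abs(obj_dist_mm - focus_dist_mm)
            / (obj_dist_mm * (focus_dist_mm - f)))
```

With these numbers, an object at 25 cm while the eye focuses at 1 m comes out around 0.2 mm of retinal blur, versus exactly zero when the depths match - that difference is what the rendered blur would have to reproduce.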

It would need a variable lens though (and a *really* precise means of monitoring the eye's focus depth).
Any thoughts? Am I missing something?


Mon Jun 15, 2015 8:02 pm
Petrif-Eyed

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Alidor wrote:
... then render some blur to exactly simulate what the eye expects at that level of focus?
As the eye tries to change focus, we physically shift the lens to allow the eye to focus at the expected depth then update the blur to match. ...
David Marr's "Vision" suggests that we need multiple levels of Gaussian blur for adequate monocular depth perception. I need to dig out my copy and read it again (a very worthwhile read on stereoscopy, depth perception, and other related vision topics).
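The multiple blur levels Marr describes amount to the same image filtered at a few different sigmas. A rough pure-Python sketch, 1-D for brevity (the sigma values are arbitrary):

```python
import math

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel, truncated at 3 sigma."""
    radius = max(1, int(3 * sigma))
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [w / s for w in k]

def blur1d(signal, sigma):
    """Convolve the signal with the kernel, clamping at the edges."""
    kern = gaussian_kernel(sigma)
    r = len(kern) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kern):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def blur_stack(signal, sigmas=(1.0, 2.0, 4.0)):
    """One blurred copy per level; coarser levels lose fine detail first."""
    return {s: blur1d(signal, s) for s in sigmas}
```

Run on a step edge, the coarse levels smear the transition over a wider span - which is exactly the multi-scale edge information Marr's pyramid exploits.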

_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.


Tue Jun 16, 2015 8:44 am
One Eyed Hopeful

Joined: Thu Dec 26, 2013 7:57 pm
Posts: 32
Alidor, I'm actually looking into that, but I'm waiting on the assembly of prototype laser engines and some other stuff to be finished. Geekmaster, I've ordered that book now :P


Wed Jun 17, 2015 6:10 pm
Petrif-Eyed

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
I have an intuitive feeling that some of the formulas published here might be useful for creating synthetic lightfields:
Complex Wavelet Bases, Steerability, and the Marr-Like Pyramid (pdf)



Wed Jun 17, 2015 7:40 pm
One Eyed Hopeful

Joined: Sun Sep 08, 2013 11:21 am
Posts: 4
Wow. That escalated quickly!
Wavelets and FFTs are bound to find their way into the rendering stages - that's going to be a whole challenge unto itself.


As for the hardware, one simplification might be found in ignoring the lens's focus depth and just assuming that accommodation will couple tightly enough to convergence. That is, the eyes' convergence point will suggest the depth (with a few exceptions, e.g. zones outside the overlap region).
I tried out an eye-tracked HMD at Siggraph last year - a team had integrated their solution into an early-generation Oculus Rift.
Now I'm wondering if they were extracting depth data from each eye's angular difference. They were able to select different scene elements with uncanny precision.
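If they were doing it that way, the trigonometry is small: with a known IPD and each eye's inward gaze angle, the convergence depth falls out of similar triangles. A sketch assuming a 63 mm IPD and angles measured inward from straight ahead (both assumptions, not anything the demo documented):

```python
import math

def vergence_depth_mm(left_deg, right_deg, ipd_mm=63.0):
    """Distance to the convergence point, given each eye's inward
    gaze angle from straight ahead and the interpupillary distance."""
    t = math.tan(math.radians(left_deg)) + math.tan(math.radians(right_deg))
    if t <= 0:
        return float('inf')  # parallel or diverging gaze: effectively infinity
    return ipd_mm / t
```

Note how quickly those angles shrink with distance - which would be why precision like that is easiest to show off at arm's length.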

I still don't know how one would dynamically change the actual focal length though. Especially for such a massive field of view.
Liquid lenses? Plain old servos might work - the eye is fairly slow at changing focus (350 ms from infinity to 6.5 cm, says Wikipedia).

Maybe there's some way to scan a near-zero-width beam through the eye's lens? That way the eye couldn't blur it when it tries to focus.
I'm thinking of a pinhole camera - always in focus. It might let the eye get lazy though, especially after extended use.

What did you have in mind for this, Haloar?


Thu Jun 18, 2015 7:31 am
Petrif-Eyed

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Alidor wrote:
... I tried out an eyetracked HMD at Siggraph last year - A team had integrated their solution into an early generation Oculus Rift.
Now I'm wondering if they were extracting depth data out of each eye's angular difference. They were able to select different scene elements with uncanny precision.
Actually, for rendered data, it can be much simpler than angular difference. Only ONE eye needs to be tracked for depth determination. Just fetch the z-buffer depth for the location that eye is looking at, since it's generally safe to assume that the other eye is converged on the same object at the same depth. If there's no depth data, then yes, mutual gaze-angle difference and some trigonometry (with per-user calibration) should do the job. Except if the viewer has a lazy eye, of course, where z-buffer depth would be inherently "more better" (assuming you are tracking the "good" eye), and where lightfield tech is most promising as an alternative to parallax depth perception.
;-)

Perhaps a fusion of both methods would be more robust for more folks (with some sort of automated "lazy eye" determination)?
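A sketch of that fusion - z-buffer first, gaze-angle trigonometry as the fallback. The buffer layout (row-major list of lists, `None` for "no depth") and the 63 mm IPD are just assumptions for illustration:

```python
import math

def gaze_depth(zbuffer, gaze_px, fallback_angles=None, ipd_mm=63.0):
    """Fixation depth: sample the z-buffer under the tracked eye's gaze;
    fall back to vergence trigonometry if no depth is available there."""
    x, y = gaze_px
    if 0 <= y < len(zbuffer) and 0 <= x < len(zbuffer[y]):
        z = zbuffer[y][x]
        if z is not None:
            return z
    if fallback_angles is not None:
        # inward gaze angles (degrees) for each eye, eyes ipd_mm apart
        t = sum(math.tan(math.radians(a)) for a in fallback_angles)
        if t > 0:
            return ipd_mm / t
    return float('inf')
```

The lazy-eye determination could then be as simple as watching how often the two estimates disagree for a given user.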



Thu Jun 18, 2015 8:39 am
One Eyed Hopeful

Joined: Sun Sep 08, 2013 11:21 am
Posts: 4
@Geekmaster
You're right, that would be a simple grab from the depth buffer. It might fail on transparent objects or fine nearby details (like blades of grass or a chain-link fence), but it's clear a robust solution is doable.
I just wandered onto an eye-tracking HMD project called Fove that appears to have thought of this. They have a demo that blurs the scene according to the gaze point. Certainly interesting, but the display still cannot actually change accommodation to match - it is still fixed-focus.
So I guess the real challenge in realizing Haloar's idea will be variable-focus optics.

I don't have any simulation software, but I'm imagining just physically moving a wide-FOV lens toward the screen.
I can't decide if this would produce any meaningful focal control.
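The thin-lens formula gives a quick feel for it. With the screen inside the lens's focal length (the 40 mm focal length below is just a placeholder, not any particular HMD), the virtual image distance - i.e. the accommodation demand - moves as the separation changes:

```python
def virtual_image_mm(screen_dist_mm, focal_mm=40.0):
    """Virtual image distance for a screen placed inside the focal
    length of a magnifier lens (thin-lens equation, magnitudes only)."""
    if screen_dist_mm >= focal_mm:
        return float('inf')  # at the focal length the image goes to infinity
    return screen_dist_mm * focal_mm / (focal_mm - screen_dist_mm)
```

Sliding the screen from 20 mm to 35 mm behind a 40 mm lens pushes the virtual image from 40 mm out to 280 mm, so small mechanical motions near the focal length give large focal swings - which is both the appeal and the precision problem for servos.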


Mon Jun 22, 2015 4:10 pm
One Eyed Hopeful

Joined: Wed Sep 25, 2013 12:36 am
Posts: 23
Simulated depth of field is not enough on its own, because nothing will focus properly except at the screen depth. However, multi-focal displays can use depth-interpolation blur to simulate more focal planes.
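Depth-interpolation blur amounts to splitting each pixel's intensity between its two nearest focal planes. A sketch with weights linear in dioptres (the plane spacing is arbitrary, chosen only for illustration):

```python
def plane_weights(depth_dpt, plane_dpts):
    """Split unit intensity between the two focal planes bracketing a
    depth (all in dioptres); clamp to the nearest plane outside the range."""
    planes = sorted(plane_dpts)
    if depth_dpt <= planes[0]:
        return {planes[0]: 1.0}
    if depth_dpt >= planes[-1]:
        return {planes[-1]: 1.0}
    for near, far in zip(planes, planes[1:]):
        if near <= depth_dpt <= far:
            t = (depth_dpt - near) / (far - near)
            return {near: 1.0 - t, far: t}
```

A voxel midway between two planes renders at half intensity on each, and the eye's own blur fuses them into an apparent intermediate depth.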

Hong Hua worked on a liquid-lens prototype at the University of Arizona:
http://www.researchgate.net/publication ... iquid_lens
http://3dvis.optics.arizona.edu/video/D ... lane4s.wmv

She's also worked on an integral imaging (lightfield) HMD:
https://www.osapublishing.org/oe/abstra ... 2-11-13484

Retinal projection uses the 'Maxwellian view' condition, where collimated light focused through a lens to a point at the eye's lens passes through it and projects onto the retina without blurring. This allows infinite depth of field, and it can be combined with super-multiview or eye-tracked simulated DoF blurring.


Tue Jul 07, 2015 1:43 am
One Eyed Hopeful

Joined: Wed Sep 25, 2013 12:36 am
Posts: 23
Yet another interesting paper out of nVidia:
https://research.nvidia.com/publication ... ure-arrays

Similar to the 2013 paper, but more focused on using pinholes instead of microlenses.


Sun Jul 19, 2015 1:04 am
One Eyed Hopeful

Joined: Sun Sep 08, 2013 11:21 am
Posts: 4
The nvidia pinlight approach looks novel. I got a chance to try it out last year and found it required a lot of calibration.
It reminded me of those old lensless glasses they sold on late-night '90s TV: basically opaque domed plastic with laser-drilled holes in a hex pattern. It loses a lot of light, but it lets you focus on literally anything.
I tried using that method on an HMD I (half) made with a 2" Casio screen. I just used a thin card with pinholes tapped through with a needle (coincidentally the same method nvidia used). It could be made to work, but there was a prominent hex pattern and it was basically just too dark. Mind you, using the pinholes as the light *source* (rather than as an aperture) means there's no inherent light loss.

The arrangement has merit, but it's going to need eye tracking and some kind of closed-loop auto calibration to work. Each pinhole essentially projects a circle of pixels that the screen needs to trim into variable-sized hexagons (or squares, if they're in a regular grid) to be sure they *just* tessellate without overlapping or leaving gaps. A little tricky, but do-able.
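That trimming rule is just similar triangles from the eye through adjacent pinholes. A sketch of the basic geometry (the distances are placeholders, not measurements from the prototype):

```python
def screen_patch_mm(pinhole_pitch_mm, mask_dist_mm, screen_dist_mm):
    """Width of the screen patch assigned to each pinhole so that
    neighbouring views just tessellate, projecting rays from the eye
    through adjacent pinholes: patch / pitch = screen dist / mask dist."""
    return pinhole_pitch_mm * screen_dist_mm / mask_dist_mm
```

E.g. a 1 mm pinhole pitch with the mask 20 mm from the eye and the screen at 40 mm gives each pinhole a 2 mm-wide patch of screen - and any eye movement changes that geometry, hence the need for tracking and closed-loop calibration.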

I digress. You mentioned a 'Maxwellian' arrangement. Is someone investigating this? If we had a sort of collimated, "eyeball-centric" inward-facing retinal projector, then as you say, it would give infinite depth of focus and a false blur could be rendered according to the eye-tracked convergence depth.
I recall some work out of Washington (HITLab) in the late 90's that used oscillating mirrors to raster-scan lasers onto the retina. (Just pray to god the mirrors don't jam - you'd have a pretty intense laser spot just boring through your retina.)
Anyway, have you heard of any more recent work on this? Admittedly we're getting away from the basement-tinkerer zone, but I'm curious.

It would be great to lick this focus issue with a single 2d image. Anything to avoid rendering a 99% unused light field...


Wed Aug 12, 2015 12:20 am