Solving FOV, by taking advantage of the natural acuity curve

Talk about Head Mounted Displays (HMDs), augmented reality, wearable computing, controller hardware, haptic feedback, motion tracking, and related topics here!
Post Reply
AdamHarley
One Eyed Hopeful
Posts: 4
Joined: Wed Jun 13, 2012 6:37 pm

Solving FOV, by taking advantage of the natural acuity curve

Post by AdamHarley »

I think there's a lot to be said for "playing WITH" VR and AR setups, in the sense of understanding the limitations of an inadequate technology, and actively staying within its optimal bounds. There's also a lot to be said for "playing WITH" our brains, and acknowledging the limits of our own "technology," to the effect of fooling our senses and creating illusions. I think these concepts can work together, and in solving the problem of wide FOV, I think they can be especially useful.

The ideal is to create a setup with 180-degree FOV, because that is approximately the range of what our eyes can naturally see. However, as many of us know, the acute range within that is very small; most of our FOV is blurry most of the time (see the Wikipedia diagram). What keeps our world in focus is the constant jumping around of our eyes, which covers a much larger range, but still nowhere near 180 degrees.

So here's the idea: out of the 180 degrees we're trying to accommodate, only a small part needs to be available in high resolution. The rest can be a blurry mess, and no one will ever know the difference (as long as they play along). A lot of the media that we (or at least I) intend to consume on VR devices is already centre-of-the-screen focused, so staying within the limits here shouldn't be too great a burden.

What we need to work against is that "display rectangle" that's so prominent in current HMDs. Edges are attractive to our brains, and having them in plain sight completely ruins the immersion. Getting rid of them doesn't require a 180-degree perfect display, but only a 45-degree good one IN FRONT OF a 180-degree terrible one.
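To put rough numbers on the savings from that "good display in front of a terrible one" arrangement, here is a back-of-envelope sketch; the pixel densities are illustrative assumptions, not measurements of any real display:

```python
# Pixel budget: one uniform high-res 180-deg display vs. a 45-deg high-res
# insert over a low-res 180-deg surround. Densities below are assumed values.

def pixels(fov_deg, px_per_deg):
    """Pixel count for a square field of view at uniform density."""
    side = fov_deg * px_per_deg
    return side * side

HIGH_DENSITY = 30  # px/deg, roughly "sharp" central detail (assumed)
LOW_DENSITY = 5    # px/deg, blurry peripheral fill (assumed)

full_uniform = pixels(180, HIGH_DENSITY)  # one perfect 180-deg display
two_tier = pixels(45, HIGH_DENSITY) + pixels(180, LOW_DENSITY)

print(f"uniform 180-deg display: {full_uniform / 1e6:.1f} Mpx")   # 29.2 Mpx
print(f"insert + blurry surround: {two_tier / 1e6:.1f} Mpx")      # 2.6 Mpx
print(f"savings factor: {full_uniform / two_tier:.1f}x")          # 11.1x
```

Even with generous assumptions for the surround, the two-tier arrangement needs an order of magnitude fewer pixels.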

Two displays of differing quality, one in front of the other, isn't necessarily the best way to do it, but I think it makes for a good description of the concept. The videos linked below demonstrate roughly the same idea: their software samples the edges of the screen image, stretches that out, and projects it onto the walls to increase the effective FOV.

[youtube]http://www.youtube.com/watch?v=t5sRdBUyr2c[/youtube]
[youtube]http://www.youtube.com/watch?v=koLOyFbqFDU[/youtube]
User avatar
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Solving FOV, by taking advantage of the natural acuity c

Post by cybereality »

Yeah, I totally agree with the dual display idea for wide FOV. The peripheral vision can be blurry or low quality and still add a lot to the immersion. Even those Ambilight setups seem to add something to the experience, and that's only with a solid color. So I think this area can certainly use some more research into possible solutions.
User avatar
brantlew
Petrif-Eyed
Posts: 2221
Joined: Sat Sep 17, 2011 9:23 pm
Location: Menlo Park, CA

Re: Solving FOV, by taking advantage of the natural acuity c

Post by brantlew »

Another application of this phenomenon is video compression. If you render and transmit the scene at full resolution across the entire visual field, you are wasting a huge amount of computational power and bandwidth. In theory you could create video compression algorithms with a variable compression rate that starts with lossless compression in the center and moves to extremely high compression rates towards the edges. So you wouldn't need to transmit 9x the number of bytes to send video destined for wide FOV displays; you could probably get away with just 2 or 3 times the amount of information.
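As a sketch of what such a variable-rate scheme might look like, here is a toy quality map for block-based compression; the block grid size and the linear quality falloff are assumed for illustration, not taken from any real codec:

```python
# Eccentricity-dependent quality map for block-based compression:
# near-lossless at the view center, heavily compressed at the edges.
import math

def quality_for_block(bx, by, blocks_w, blocks_h, fov_deg=180):
    """Map a block's angular distance from the frame center to a
    quality setting in [10, 100]."""
    # Normalized offset of the block center from the middle of the frame
    ex = (bx + 0.5) / blocks_w - 0.5
    ey = (by + 0.5) / blocks_h - 0.5
    ecc = math.hypot(ex, ey) * fov_deg  # eccentricity in degrees
    # Assumed curve: quality 100 at the fovea, falling 1 point per degree,
    # floored at 10 for the far periphery
    return max(10, round(100 - ecc))

quality_map = [[quality_for_block(x, y, 16, 16) for x in range(16)]
               for y in range(16)]
print(quality_map[8][8], quality_map[0][0])  # center vs corner block: 92 10
```

A real encoder would feed this map into per-macroblock quantizer settings; the point is only that quality can fall off smoothly with eccentricity instead of being uniform.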
PalmerTech
Golden Eyed Wiseman! (or woman!)
Posts: 1644
Joined: Fri Aug 21, 2009 9:06 pm

Re: Solving FOV, by taking advantage of the natural acuity c

Post by PalmerTech »

If you google "High resolution insert HMD", you will find a lot of old attempts at this kind of thing, some very fascinating designs.

Lots of ways to do this:

1) High resolution inserts
2) Multiple displays with tiled optics
3) Ambilight type setups with simple color projection
4) Using lenses with geometry distortion to compress most of the pixels on a single display into the center

I am working on the latter three; the "best" solution is tiled lenses with multiple displays, combined with geometry distortion for the primary optics. Time and experimentation will tell if we can get away with less. :)
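A minimal sketch of how method #4's radial compression could be modeled; the power-law mapping and its exponent are assumed tuning values for illustration, not any real HMD's optical prescription:

```python
# Method #4 sketch: a radial lens mapping that spends more display pixels
# per degree near the optical axis than at the edge of the field.

def display_radius(theta_deg, max_theta=90.0, k=0.6):
    """Map a field angle (degrees off-axis) to a normalized display
    radius in [0, 1]. An exponent k < 1 expands the center on the
    display, compressing the periphery."""
    return (theta_deg / max_theta) ** k

def px_per_degree(theta_deg, eps=0.01):
    """Local pixel density (slope of the mapping) via finite difference."""
    return (display_radius(theta_deg + eps) - display_radius(theta_deg)) / eps

ratio = px_per_degree(10) / px_per_degree(80)
print(f"10 deg off-axis gets ~{ratio:.1f}x the pixel density of 80 deg")  # ~2.3x
```

The renderer then pre-distorts the image with the inverse mapping, so the scene looks geometrically correct through the lens while the pixel budget is concentrated where acuity is highest.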
AdamHarley
One Eyed Hopeful
Posts: 4
Joined: Wed Jun 13, 2012 6:37 pm

Re: Solving FOV, by taking advantage of the natural acuity c

Post by AdamHarley »

PalmerTech wrote:If you google "High resolution insert HMD", you will find a lot of old attempts at this kind of thing, some very fascinating designs.
Indeed! Thanks, Palmer.
nrp
Two Eyed Hopeful
Posts: 95
Joined: Tue Jul 19, 2011 11:19 pm

Re: Solving FOV, by taking advantage of the natural acuity c

Post by nrp »

PalmerTech wrote:If you google "High resolution insert HMD", you will find a lot of old attempts at this kind of thing, some very fascinating designs.

Lots of ways to do this:

1) High resolution inserts
2) Multiple displays with tiled optics
3) Ambilight type setups with simple color projection
4) Using lenses with geometry distortion to compress most of the pixels on a single display into the center

I am working on the latter three; the "best" solution is tiled lenses with multiple displays, combined with geometry distortion for the primary optics. Time and experimentation will tell if we can get away with less. :)
What are you using for #3?

I could imagine a couple of ways to do it. With an FPGA, one could pull data live from HDMI, average out the hue/saturation/intensity of regions of pixels near each edge of the screen and associate each region with an RGB LED nearby facing away from the screen. That would involve a lot of hardware, but no desktop side software.

The easier way would just be to have a microcontroller attached to the LEDs interfaced over USB and told by the desktop what colors to show.
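A minimal sketch of the desktop-side half of that second approach, assuming a frame is available as an array of RGB tuples; the region counts and strip depth are arbitrary, and the USB/serial link to the microcontroller is not shown:

```python
# Average strips of edge pixels into one color per LED. A real setup would
# send the resulting colors to a microcontroller over USB each frame.

def edge_led_colors(frame, leds_per_edge=8, strip_px=32):
    """frame: list of rows, each a list of (r, g, b) tuples.
    Returns one averaged color per LED along the top edge; the other
    three edges would be handled the same way."""
    width = len(frame[0])
    seg = width // leds_per_edge
    colors = []
    for i in range(leds_per_edge):
        # Gather the strip of pixels behind this LED
        region = [frame[y][x]
                  for y in range(strip_px)
                  for x in range(i * seg, (i + 1) * seg)]
        n = len(region)
        colors.append(tuple(sum(p[ch] for p in region) // n for ch in range(3)))
    return colors

# Tiny synthetic frame: left half red, right half blue
frame = [[(255, 0, 0)] * 64 + [(0, 0, 255)] * 64 for _ in range(40)]
print(edge_led_colors(frame, leds_per_edge=4, strip_px=8))
# [(255, 0, 0), (255, 0, 0), (0, 0, 255), (0, 0, 255)]
```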
WiredEarp
Golden Eyed Wiseman! (or woman!)
Posts: 1498
Joined: Fri Jul 08, 2011 11:47 pm

Re: Solving FOV, by taking advantage of the natural acuity c

Post by WiredEarp »

I think that a high res insert, which would necessarily be slaved to the head position rather than eye position, would feel strange/unrealistic. If eye-tracked and positioned accordingly, it would be sweet, but that would raise a whole new set of difficulties...
profvr
One Eyed Hopeful
Posts: 44
Joined: Thu Jun 07, 2012 7:22 am

Re: Solving FOV, by taking advantage of the natural acuity c

Post by profvr »

Hi, I've tried several of these displays, and I know at least one TV production company that is paying serious attention to ambilight displays. I've only had a little personal experience with them; they can really help in conveying the brightness and atmosphere of a scene, though some colleagues find them annoying for gaming, since the brightness of the main display doesn't compensate.

I work with CAVE-like displays, and the wide field of view really helps with some tasks, notably simple awareness of movement: the periphery is really sensitive to high-frequency motion. The idea of high-resolution inserts has been implemented on CAVE displays using steerable projectors. However, the eye moves very quickly; it is hard to steer the display that fast, and even harder to track the eye accurately enough.

As for the possible solutions for HMDs:

For #1 have a look at the Wide-5 HMD; it has a high-resolution insert, and the boundary between the two regions is hard to see. Unfortunately I don't think these are being made any more. I've seen one and it was great: the first time in an HMD that I felt I was seeing an outdoor scene in daylight.

For #2 have a look at the Sensics piSight display.
User avatar
android78
Certif-Eyable!
Posts: 990
Joined: Sat Dec 22, 2007 3:38 am

Re: Solving FOV, by taking advantage of the natural acuity c

Post by android78 »

I think it would work better if they had some distance between the TV screen and the wall behind it, and set up three projectors pointing up, left, and right. Having a great distance between the main display and the peripheral displays, as in the second video, would seem to detract from the immersion that could be achieved. The first video is a pretty cool concept, but seems like little improvement over the display itself.
Philips already released a display that does pretty much the same thing (Ambilight):
[youtube]http://www.youtube.com/watch?v=-s3BUUaPWg8[/youtube]
or this:
[youtube]http://www.youtube.com/watch?v=LrgLW2p8 ... re=related[/youtube]
PalmerTech
Golden Eyed Wiseman! (or woman!)
Posts: 1644
Joined: Fri Aug 21, 2009 9:06 pm

Re: Solving FOV, by taking advantage of the natural acuity c

Post by PalmerTech »

nrp wrote:What are you using for #3?
I am cheating, just using the power lines from the MadCatz Cyborg lights hooked up to external LEDs. :D Time will tell if designing something specifically for an HMD is worth it.
profvr wrote:For #1 have a look at the Wide-5 HMD; it has a high-resolution insert, and the boundary between the two regions is hard to see. Unfortunately I don't think these are being made any more. I've seen one and it was great: the first time in an HMD that I felt I was seeing an outdoor scene in daylight.
Actually, the Wide5 only uses a single display. It uses the 4th method I mentioned, using geometric distortion to compress more pixels into the center. They are not being made anymore, but they really are amazing! I have one at work that I get to use several times a week. :)
User avatar
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Solving FOV, by taking advantage of the natural acuity c

Post by cybereality »

@nrp: With the 3D driver I am working on, there are plans to add hooks for ambilight setups. Basically I will find a way to expose the back buffer as pixel data, and other people will be able to write a plug-in connecting it to external LEDs, secondary displays, etc. Still have a lot of work to do, but I am hoping to have something fully featured before year's end.
CityZen
One Eyed Hopeful
Posts: 9
Joined: Fri Jun 08, 2012 3:36 pm

Re: Solving FOV, by taking advantage of the natural acuity c

Post by CityZen »

This is not as straightforward as you may imagine. Your peripheral vision is sensitive to aliasing artifacts. There's a big difference between a properly-rendered blurry scene and a low-resolution scene. It may actually take more work to generate a proper blurry scene than it would to generate a nice high-resolution scene.
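A one-dimensional toy example of that difference, using an alternating high-frequency pattern: naive decimation locks onto one phase of the signal (aliasing), while averaging before decimation yields the properly blurred result:

```python
# Naive subsampling vs. prefiltered downsampling on a high-frequency signal.

def subsample(signal, step):
    """Naive decimation: keep every step-th sample (aliases)."""
    return signal[::step]

def prefilter_downsample(signal, step):
    """Average each block of `step` samples before decimating (proper blur)."""
    return [sum(signal[i:i + step]) / step
            for i in range(0, len(signal), step)]

# Alternating 0/1 pattern: one cycle every 2 samples
signal = [i % 2 for i in range(16)]
print(subsample(signal, 2))             # [0, 0, 0, ...] -- false flat signal
print(prefilter_downsample(signal, 2))  # [0.5, 0.5, ...] -- correct average
```

The naive version reports a constant black field where the scene actually contained fine detail; in the motion-sensitive periphery, that kind of error flickers as the scene moves.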

There's also the issue that you can move your eyes around, so any part of the field of view that you can focus on needs to be capable of showing proper high-resolution imagery.

As you get over 60 degrees or so of horizontal FOV, you'll find that the "edges" issue isn't as big as you might imagine. (Yes, I've used ProView SR80. No, it wasn't anything I bought myself.)

Also, the content matters. When you're watching a good TV show, and you're really into it, then you're not being bothered by what your TV bezel looks like.
AdamHarley
One Eyed Hopeful
Posts: 4
Joined: Wed Jun 13, 2012 6:37 pm

Re: Solving FOV, by taking advantage of the natural acuity c

Post by AdamHarley »

WiredEarp wrote:I think that a high res insert, which would necessarily be slaved to the head position, not eye position. would feel strange/unrealistic. If eyetracked and positioned accordingly, it would be sweet, but that would raise a whole new set of difficulties...
Actually, in some of the papers Palmer pointed us to, the prospect of eye-tracking has been researched quite a lot. I haven't yet found a working "eye-slaved" HMD solution, but there's some really interesting research in that area.

Tracking appears to work pretty well these days, and research on updating/moving the high-res section is making steady progress, at least in finding workarounds.

Here are some interesting bits:
- The high-res section has to be able to follow the user's gaze with a maximum lag of 60ms. Any slower, and blur or motion is perceived.
- Gaze locations can occasionally be predicted. Predictions can be based on the location of previous gazes, as well as on the content of the image (new/salient sections are prioritized).
- Enlarging the high-res area compensates for tracking inaccuracy (of course), but some estimate a surprisingly low point of diminishing returns: 15 degrees, they say! But this isn't with any mechanically-moving "insert", these types of experiments use a large high-res display, and update the image with graphics power.
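As a sketch of how lag, tracker error, and insert size trade off, here is a toy budget for the smooth-pursuit case (saccades are usually treated separately); all numbers are illustrative assumptions, not measured figures:

```python
# How large a gaze-contingent high-res insert must be so the fovea stays
# inside it despite system lag and tracker error. Assumed toy numbers.

def required_insert_radius(lag_ms, tracker_error_deg=1.0,
                           pursuit_speed_dps=30.0, fovea_deg=2.0):
    """Degrees of insert radius needed: foveal extent, plus tracker
    error, plus how far the gaze drifts during the system's lag."""
    drift = pursuit_speed_dps * lag_ms / 1000.0
    return fovea_deg + tracker_error_deg + drift

for lag in (10, 60, 100):
    print(f"{lag} ms lag -> {required_insert_radius(lag):.1f} deg radius")
```

Under these assumptions, even the 60 ms budget quoted above only costs a few extra degrees of insert radius, which is consistent with the "enlarge the insert to compensate for tracking error" strategy in the papers.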

For anyone else searching through the academic papers on this topic, try "gaze-contingent multiresolutional displays".
CityZen wrote:This is not as straightforward as you may imagine. Your peripheral vision is sensitive to aliasing artifacts. There's a big difference between a properly-rendered blurry scene and a low-resolution scene. It may actually take more work to generate a proper blurry scene than it would to generate a nice high-resolution scene.
You've got a point there, at least with the first part. It's true that we get a lot of information from our peripheral vision: in terms of detecting slight motion and dim light, it is actually more sensitive than the fovea. But I think we can sate those receptors with low-resolution images, as long as the basic content is still there.

A poor way to do it (and indeed the way some companies are handling this) is to have the peripheral display just repeat what's on the central screen, e.g. extract the main colours and project them outward (e.g. ambilights). A better way (as I suppose you're suggesting) is to give the periphery real information from the game/movie/whatever, so that it's not just "immersive" because it's occupying the receptors, but because it's providing your brain with extra information on the scene. But that information doesn't have to be detailed -- it just has to exist, so that you can immediately turn your head, and enhance it with your fovea/insert. I think low-resolution displays can handle this, and I'm confident that such a setup would enhance the effect of the central FOV.

...Hey, I just realized, re eye- vs. head-slaved inserts: my glasses are head-slaved, and they work pretty well. I'm able to look outside its optimal range, but I've learned not to. We shouldn't underestimate this learning aspect: in my initial post I mentioned that users can "actively stay" within the optimal range of a limited technology, but more realistically, users LEARN to stay within that range, and eventually forget the limits exist.
User avatar
Tone
One Eyed Hopeful
Posts: 42
Joined: Fri Jun 25, 2010 9:13 am
Location: USA
Contact:

Re: Solving FOV, by taking advantage of the natural acuity c

Post by Tone »

A few things worth mentioning on this topic:
  • Peripheral vision is monochromatic. No need to supply any color information.
  • Peripheral vision compresses about 70 million rods (mono sensors) down to less than a million nerve pathways in the optic nerve by means of edge detection. Presenting a blurred image in the periphery is the worst possible technique, as there will be no crisp edges to detect. Blocky (untextured) shapes with sharp edges would be better.
  • The eyes' central field of view (high resolution) is about 2 degrees (about the size of your thumbnail at arm's length.) Saccades (rapid eye movements scanning a scene) can move as fast as 900 deg/sec. Thus, your entire high resolution field can be transited in about 2ms, implying that we need eye tracking in sub-millisecond range, perhaps less than 100 microseconds total loop time from eye movement to display update. The fastest available displays refresh in about 5 ms, almost two orders of magnitude slower than required.
  • In poor lighting, you have no central field vision, i.e. no color and no high resolution. Astronomers know this, as they've trained themselves to avert their gaze by about 5-10 deg. while viewing dim celestial objects.
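Double-checking the saccade arithmetic above with the post's own figures:

```python
# Time for a peak-speed saccade to cross the ~2-degree central field,
# and the gap between current display refresh and the stated loop budget.
# All input figures are taken from the post above.

FOVEA_DEG = 2.0            # central high-resolution field
SACCADE_PEAK_DPS = 900.0   # peak saccade speed, deg/sec
REQUIRED_LOOP_US = 100.0   # target eye-to-display loop time
DISPLAY_REFRESH_MS = 5.0   # "fastest available displays" figure

transit_ms = FOVEA_DEG / SACCADE_PEAK_DPS * 1000.0
gap = DISPLAY_REFRESH_MS * 1000.0 / REQUIRED_LOOP_US

print(f"foveal transit time: {transit_ms:.1f} ms")       # 2.2 ms
print(f"refresh is {gap:.0f}x slower than the budget")   # 50x, ~2 orders
```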
User avatar
android78
Certif-Eyable!
Posts: 990
Joined: Sat Dec 22, 2007 3:38 am

Re: Solving FOV, by taking advantage of the natural acuity c

Post by android78 »

AdamHarley wrote:...A poor way to do it (and indeed the way some companies are handling this) is to have the peripheral display just repeat what's on the central screen, e.g. extract the main colours and project them outward (e.g. ambilights)...
I think you're onto something here. I wonder what would happen if you were to have a projector sitting behind your main display with an extremely wide angle lens (spreading the image to almost 180 deg) that would also blur the image, but have it display the same image as your main display. I would assume that putting a large fisheye camera lens in front of a regular projector lens would work for this... maybe Palmer could comment on this idea?
User avatar
3dpmaster
Cross Eyed!
Posts: 116
Joined: Tue Sep 21, 2010 3:05 pm

Re: Solving FOV, by taking advantage of the natural acuity c

Post by 3dpmaster »

The main problem is that the eyes keep moving around... The image has to be sharp everywhere. :(
Full immersive research:

HMD:
SONY HMZ-T1
FOV: 40° diagonal

HMD project:
FOV: >180°

Link: http://www.mtbs3d.com/phpBB/viewtopic.php?f=26&t=14332
Post Reply

Return to “General VR/AR Discussion”