 Fresnel lens stack for "supernatural" FoV 
geekmaster - Petrif-Eyed
Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
PasticheDonkey wrote:
you're avoiding being a grumpy old man then?
I try to avoid that, but other grumpy old men around here are contagious! The boat used in that movie is stored within walking distance of me, however. :D

_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.


Wed Feb 27, 2013 1:32 pm
Diorama - Binocular Vision CONFIRMED!
Joined: Mon Jan 28, 2013 10:37 am
Posts: 273
Location: Brighton, UK (Sometimes London)
Is the boat near Lake Rebecca?


Wed Feb 27, 2013 1:39 pm
geekmaster - Petrif-Eyed
Diorama wrote:
Is the boat near Lake Rebecca?
Yes, actually... Lucky guess!
http://www.imdb.com/title/tt0107050/locations
:D



Wed Feb 27, 2013 2:45 pm
One Eyed Hopeful
Joined: Wed Jan 09, 2013 7:45 pm
Posts: 13
So did you get any reaction from the Oculus folks about this, geekmaster?


Wed Feb 27, 2013 4:15 pm
PalmerTech - Golden Eyed Wiseman! (or woman!)
Joined: Fri Aug 21, 2009 9:06 pm
Posts: 1644
Anamorphic lenses are something I have played with in the past, trying to counteract standard SBS image compression. We are all busy getting these developer kits out there; there will be more time for experimentation in the future.


Wed Feb 27, 2013 4:26 pm
geekmaster - Petrif-Eyed
PalmerTech wrote:
Anamorphic lenses are something I have played with in the past, trying to counteract standard SBS image compression. We are all busy getting these developer kits out there; there will be more time for experimentation in the future.
Thanks for the update!

It is great to know that we can experiment with replacement lenses and novel pre-warp algorithms for our Rift Dev Kits. For that matter, we may even be able to hack some fresnel lens stacks into Rift eyecups to see what anamorphic decompression can do in these devices, until we find "real" lenses that can do this.

I guess that experimentation is one of the primary reasons to even HAVE these developmental versions of the Rift HMD. I am sure that it will turn out to be a wise choice, and will help the consumer version to become all that it can be.

I understand that the Rift probably uses simple aspheric lenses BECAUSE it is based on a prototype that used off-the-shelf lenses (5x aspheric acrylic loupe magnifiers). And the need to rapidly ramp up production of that design, due to unexpectedly HUGE Kickstarter support, left little time to try alternative CUSTOM lenses when a QUICK redesign was needed after a shortage of the original 5-inch screens.

The lack of anamorphic lenses in the Rift is a simple evolutionary step from its DIY off-the-shelf heritage. The novel eyecup design allows us to improve upon this design. We have a great future to look forward to!

Thanks Palmer!



Wed Feb 27, 2013 4:43 pm
geekmaster - Petrif-Eyed
Palmer gave permission to post our relevant PM discussion:
geekmaster wrote:
After I convince you that the Rift needs aspheric anamorphic lenses, perhaps you can sell an upgrade kit (at nominal cost) that contains lens cups with lenses that stretch the image more horizontally (and perhaps less vertically), giving a landscape (or wider) aspect ratio similar to eyeglass lenses and movie theater screens, while potentially shifting pixels above or below the simple aspheric lens FoV into the central FoV to increase central pixel density.

I gave comparison details in my most recent post in my "fresnel lens stack" thread.

I am trying to help here. I really do believe that simple aspheric lenses for an SBS-Half image (portrait mode) do not effectively map the display to the maximum FoV (landscape mode), but anamorphic lenses can do exactly that.

I want the Rift to be all that it can be, without significantly increasing weight or cost.

Going with the recent dual screen approach, where the designer acknowledged that he got the motivation to finish his display from my "fresnel lens stack" thread, may be fine for extreme VR enthusiasts (I want one), but for a practical mass-market HMD we need to keep the weight and costs down.

I like the removable lens cups in the Rift, when that is exactly what allows the end-user to easily install a simple anamorphic lens upgrade kit to increase the horizontal FoV and/or to increase the central pixel density (depending on display position and whether eyeglasses are worn with the rift).
PalmerTech wrote:
I am certainly aware of anamorphic lenses, I have built several. Some of the Olympus Eye-Trek line used anamorphic lenses to get a widescreen view out of a 4:3 microdisplay. Some of my first HMD prototypes used anamorphic lenses to compensate for standard SBS video, it worked quite well.

It is too early to say if we will release any upgraded optics sets, but one of the reasons we made the eyecups removable was so people could tinker with the design themselves. The future Rifts will probably use an anamorphic lens.
geekmaster wrote:
Thanks. That is good to know!

I was not aware of anamorphic lenses in HMDs until after I realized that my fresnel lens stacks were behaving similarly to anamorphic lenses, and then I found them in an HMD prior art search.

My primary interests have been in robotics and motion control. With 3D and VR only a secondary interest, anamorphic lenses were something I did not notice until now. And I was a bit surprised that the Rift did not use them (which makes it sacrifice either horizontal FoV or vertical resolution).

Is it okay if I post this reply from you in my "fresnel lens stack" thread? I think it would help. It would give people a better appreciation for the removable eyecups.

I am fairly new here, and there is so much unindexed content that I have not seen yet. For all I know, you discussed anamorphic lenses in the forums and I have just not stumbled across those posts yet. Sadly, some of your older posts about the history/evolution of your Rift-like designs contain broken images.

Again, may I post this reply to my thread?
PalmerTech wrote:
Feel free to post it!
Some duplication with prior posted content, but also some new stuff, so enjoy!



Last edited by geekmaster on Thu Feb 28, 2013 8:22 am, edited 2 times in total.



Wed Feb 27, 2013 5:14 pm
Sharp Eyed Eagle!
Joined: Thu May 22, 2008 9:13 pm
Posts: 380
Hi,

I found an interesting patent whilst researching 'fresnel lens field flatteners'.

What a doozy this is.

Quote:

Date Issued: February 2, 1988

ASSIGNEE: The United States of America as represented by the Secretary of the Navy

FIELD OF THE INVENTION

The present invention relates to the field of optics and more particularly to the aspect of the field of optics dealing with lenses particularly in multiple lens systems. In greater particularity, the present invention relates to fresnel lenses used in a virtual image display system. By way of further particularization, the present invention can be described as a fresnel lens employing curved facets which are selectively undercut to improve optical clarity in a Virtual Image Display System.

SUMMARY OF THE INVENTION

The present invention overcomes the problems associated with the above systems by using fresnel lenses arranged in a very lightweight system to present a virtual image to the viewer with little optical distortion and highly acceptable optical transmission characteristics. The novel lenses have directionally undercut facets in selected regions to reduce the non-refractive area of the lens. The fresnel lenses also have curved facets in order to more clearly approximate the effect of common glass lenses. Both the directionally undercut facets and the curved facets are made possible by the new technology of diamond turning of soft optical materials, such as plastic.




The link is here : http://www.patentgenius.com/patent/H423.html

The good news is the patent has now expired, so basically it's open source. Ha ha. :lol:

Thanks.


Thu Feb 28, 2013 7:07 am
foisi - Cross Eyed!
Joined: Wed Sep 22, 2010 3:47 am
Posts: 105
Location: Toulouse, France
No offense, but just to set the record straight:
Quote:
Going with the recent dual screen approach, where the designer acknowledged that he got the idea for his display from my "fresnel lens stack" thread

That's not the case: I ordered the first 4 FRL021 fresnel lenses on Aug 8, 2012, and your first post about fresnel lenses on these forums was on Sun Jan 27, 2013, in the DIY Rift thread.

_________________
InfinitEye - 210° HFOV HMD


Thu Feb 28, 2013 7:35 am
geekmaster - Petrif-Eyed
foisi wrote:
No offense, but just to set the record straight:
Quote:
Going with the recent dual screen approach, where the designer acknowledged that he got the idea for his display from my "fresnel lens stack" thread
That's not the case: I ordered the first 4 FRL021 fresnel lenses on Aug 8, 2012, and your first post about fresnel lenses on these forums was on Sun Jan 27, 2013, in the DIY Rift thread.
Oh, sorry. While writing that, I must have incorrectly remembered the following post from your HMD thread started Sat Feb 15, 2013:
foisi wrote:
Thank you guys !

while I'm trying to upload a video of the HMD to youtube, I'll try to answer some of your questions
...
@geekmaster:
I read your posts about the fresnel lens stack :) Actually I read a lot of posts here on MTBS3D even if I'm not participating a lot (mainly because my English is not so good and I sometimes don't know how to say things), and I was quite happy to see that I was not alone in thinking about using stacked fresnel lenses; it gave me motivation to finish the construction of the HMD.
The weight is not too much, thanks to the lightweight LCDs, simple design, and the use of fresnels (I can't imagine the weight of equivalent glass lenses).
...
It looks like it took me a long time to write all these answers, the video is ready...
Giving you "motivation to finish" is good too. If not for my thread, your awe-inspiring HMD might still be an unpublished experiment. At least I remembered you giving me credit for SOMETHING...
:lol:
The post that you quoted above has now been corrected.



Thu Feb 28, 2013 8:08 am
foisi - Cross Eyed!
Thanks :)



Thu Feb 28, 2013 9:02 am
Mystify - Certif-Eyed!
Joined: Fri Jan 11, 2013 5:10 pm
Posts: 645
One thing I am unclear on.
Why do you need a stack, and not just a single Fresnel? Is it just due to the greater accessibility of the weaker Fresnel that you are stacking?


Thu Feb 28, 2013 12:02 pm
geekmaster - Petrif-Eyed
Mystify wrote:
One thing I am unclear on.
Why do you need a stack, and not just a single Fresnel? Is it just due to the greater accessibility of the weaker Fresnel that you are stacking?
The goal of this experiment was to use parts that are inexpensive and locally available, and the fresnel magnifiers I used here fit both of those requirements. Regarding WHY I stack them, it is not just for increased magnification, but mostly for asymmetrical (anamorphic) magnification, as you can read in this quote from another thread:
geekmaster wrote:
For Rift-like HMDs that use SBS-Half video content, anamorphic lenses may be required to stretch the image half from portrait mode to landscape mode to fill more of the available horizontal FoV (like the shape of eyeglass lenses or movie theater screens).

The Rift Dev Kits have replaceable lens cups, so experimental aspheric anamorphic lenses may be used with them when they become available. A stacked pair of fresnel offset lenses may also be used to provide nonlinear anamorphic expansion, for inexpensive experimentation. An offset lens is cut from the outer portion of a larger aspheric or fresnel lens, and provides asymmetrical magnification proportional to its distance from the original lens center. A stacked pair of left and right offset lenses can stretch the image both left and right, eliminating any need for a center divider to prevent seeing a portion of the wrong image, and can provide additional peripheral FoV as well.
I would love to find a custom lens that provides both left and right offset and required magnification in a single lens. If you can find one, please let us know. I am interested in good quality fresnel lenses, and in affordable solid aspheric anamorphic lenses (for better image quality).

Comparing the current design of the Rift Dev Kit to my experimental "offset lens stack" HMD design being developed in this thread: while my "nonlinear anamorphic" lens stack maps a portrait-mode image half to a landscape-mode virtual image, the Rift aspheric lenses stretch the portrait-mode image half equally in all directions. This means that the virtual image in the Rift will be either too tall (wasting available pixels in the vertical dimension) or not wide enough (making black bars visible in the outer peripheral view, and possibly making a portion of the "other" image half visible to the wrong eye, unless a divider is used to block that view). The effects of Rift symmetrical magnification will be subjective, depending on choice of lens cups, whether or not eyeglasses are worn with them, and screen distance from the viewer (FoV adjustments).

For my trimmed fresnel lens stacks, I place them touching the skin of my nose, cheek, and eyebrow ridge (close enough that my eyelashes brush against them), giving the maximum possible FoV while using as many screen pixels as possible. I place the screen just close enough to eliminate a partial view of the wrong image, and to eliminate the very top and bottom edges of the screen. By not wasting vertical pixels when adjusted for maximum horizontal FoV, I have MORE vertical pixel density in the central viewing area than the Rift would have under a similar horizontal FoV. With my planned experiments with custom lenses for my Rift Dev Kit, I hope to (partially) reproduce my "increased FoV" results with my Rift.
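The uniform-versus-anamorphic trade-off can be sketched in a few lines of Python. All the numbers here are illustrative assumptions for one half of a 1280x800 panel, not measured Rift optics:

```python
# Compare uniform (aspheric) vs. anamorphic magnification of one
# 640x800 portrait-mode SBS-Half image half. The magnification
# factors are made-up examples, not real lens specifications.

def virtual_image(width_px, height_px, mag_x, mag_y):
    """Virtual image dimensions (arbitrary units) after magnification."""
    return width_px * mag_x, height_px * mag_y

# Uniform 3x magnification keeps the portrait aspect ratio:
uw, uh = virtual_image(640, 800, 3.0, 3.0)

# Anamorphic magnification (5x horizontal, 3x vertical) yields landscape:
aw, ah = virtual_image(640, 800, 5.0, 3.0)

print(uw / uh)  # 0.8 -- still too tall, or not wide enough
print(aw / ah)  # ~1.33 -- closer to eyeglass-lens / theater-screen shape
```

The point is only that stretching the two axes unequally is what lets a portrait image half fill a landscape FoV without wasting vertical pixels.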

The reason that we have Rift Dev Kits in the first place is so that we can do these experiments, allowing these enhancements to be added to the Consumer Rift when it becomes available. Because using a 1280x800 display gives us a choice between high FoV and high pixel density, some people may choose the lower FoV (especially for games where you need to see distant detail). If the Consumer Rift uses a higher resolution display, using anamorphic lenses for increased horizontal FoV may be desirable for more people and more applications.

Regarding your question about "greater accessibility of the weaker Fresnel", you are partially correct. With the left and right offsets providing anamorphic magnification, we still need a little more magnification to stretch the image for full vertical FoV (and more horizontal FoV). For this, I first added the center (symmetrical) section from a fresnel magnifier to my lens stack, then later I used a pair of 3x magnifying eyeglasses in place of the center fresnel section.



Thu Feb 28, 2013 12:20 pm
geekmaster - Petrif-Eyed
Now that I have two sets of fresnel lens stacks, I have been looking at stereoscopic content instead of just viewing with one eye at a time.

I have to shrink the image pair, or 3D video, so that I can converge them on my 7-inch display. If the image separation is too much larger than my IPD, I cannot merge the images when magnified, even though I have no problem merging a fullscreen 3D image pair on a 7-inch screen with my naked eye.

We REALLY need some stereoscopic images and SBS-Half video with the IPD pulled in closer to the center (i.e. less stereo separation). My LG passive 3D TV has such an adjustment, and using it moves one of the images left or right, causing the borders to no longer overlap on the screen. We need a way to do that for Rift video content.

The problem I see is that if we are not looking through the lens centers, the stereo separation and the pre-warp adjustments may need to be decoupled, so that users can adjust stereo separation independently from pre-warp set by lens position.

I can probably do that in my "PTZ Tweening" code that decouples head tracking from VR content rendering. We will be able to slide the image around for head tracking (digital image stabilization), so we can just slide one of the images a little more or less to allow stereo separation adjustments just like a 3D TV allows.
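As a toy illustration of that decoupling (my actual PTZ code is not shown here; this is just a minimal sketch with a hypothetical helper name), sliding one image half before pre-warp is a trivial horizontal shift:

```python
def shift_half(pixels, dx):
    """Shift a list-of-rows image horizontally by dx pixels
    (positive = right), filling vacated columns with black (0).
    Hypothetical sketch of a stereo-separation adjustment, applied
    to ONE half of an SBS pair independently of the pre-warp."""
    w = len(pixels[0])
    out = []
    for row in pixels:
        if dx >= 0:
            out.append([0] * dx + row[:w - dx])
        else:
            out.append(row[-dx:] + [0] * (-dx))
    return out

# Toy 1x4 "image half": shifting right by 1 moves content toward the
# screen center for the left-eye half, reducing stereo separation.
half = [[1, 2, 3, 4]]
print(shift_half(half, 1))  # [[0, 1, 2, 3]]
```

A real implementation would slide both halves in opposite directions by half the adjustment, exactly like the separation control on a 3D TV.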

FYI, in other forums, I post (public) things as I think about them, in a stream-of-consciousness style. That way you can follow my thinking and my experiments as I interactively develop my ideas into a working demo that proves it works and shows how you can do it too. And hopefully, my publishing method allows you to duplicate my R&D methods too, in case it helps you research and develop your own ideas. I plan to continue my publishing style here, in my own threads. Feel free to ask questions about the topic of my threads, and also about my style and methods if you are interested in learning more about them.



Thu Feb 28, 2013 11:50 pm
twofoe - Cross Eyed!
Joined: Mon Dec 17, 2012 7:16 pm
Posts: 164
geekmaster wrote:
FYI, in other forums, I post (public) things as I think about them, in a stream-of-consciousness style. That way you can follow my thinking and my experiments as I interactively develop my ideas into a working demo that proves it works and shows how you can do it too. And hopefully, my publishing method allows you to duplicate my R&D methods too, in case it helps you research and develop your own ideas. I plan to continue my publishing style here, in my own threads. Feel free to ask questions about the topic of my threads, and also about my style and methods if you are interested in learning more about them.


Thanks for the service, Geekmaster. I've been enjoying your posts so far, even though I don't always fully understand the content. It will be cool to see what comes of your high-FOV and PTZ tweening experiments.


Fri Mar 01, 2013 12:36 am
geekmaster - Petrif-Eyed
twofoe wrote:
Thanks for the service, Geekmaster. I've been enjoying your posts so far, even though I don't always fully understand the content. It will be cool to see what comes of your high-FOV and PTZ tweening experiments.
It is hard to strike a balance between defining all the terminology and explaining all the little details, and keeping my posts to a reasonable size. I am glad you are enjoying it. Thanks...
:D



Last edited by geekmaster on Thu Mar 14, 2013 8:15 pm, edited 1 time in total.



Fri Mar 01, 2013 1:21 am
geekmaster - Petrif-Eyed
I read about DIY anamorphic lenses (made from back-to-back prisms):
http://www.zuggsoft.com/theater/prism.htm

I realized that the left and right edges of large high-magnification aspheric solid or fresnel lenses behave essentially as such prisms, which is why they stretch the image anamorphically (with distortion).

I held my Nexus 7 tablet screen against my face to see just how much FoV was possible. It touched my forehead and the tip of my nose. I slid it upward until the image on the screen was just becoming obscured by my eyebrows at the upper border. That was pretty big by itself, but to focus on a screen only an inch from my eyes, I would need powerful magnification that would also stretch the image, making an even LARGER FoV.

To focus, I started opening more of my fresnel magnifiers and adding them to a stack. When looking with one eye through the center of the stack at the screen very close to my face, I tried bending them around the curve of my forehead (touching from my nose, across my eyebrows, and touching my cheekbone just in front of my ear). I tried placing the fresnel ridges both toward and away from my face, but this only worked for me when the ridges were TOWARD my face. These lenses were uncut, keeping their original dimensions.

Amazingly, the FoV was total, stretching ONE image from an SBS-Half pair beyond my nose as far as I could see inward, and outward also as far as I could see (virtually perpendicular to the screen). Bending the stack around toward my temples seemed to anamorphically stretch the FoV even more while keeping central focus (acceptable overall focus, at least for me).

At such close range, it also filled my vertical FoV, from my eyebrows above to my mustache below.

By flexing the shape of the curved lenses, I could get just about everything into focus. There was some pretty severe chromatic aberration in the periphery, but software can compensate for that.

The main problem is that when stretching so few pixels over such a large area, dedicating a higher pixel density to central FoV creates some pretty large pixels in the peripheral view (which will not be a problem with future high resolution displays).

What I proved here (at least to myself) is that I can create a TOTAL FoV display (everything my eye can see is a pixel) using only a SINGLE 7-inch LCD panel. Of course, stacking FIVE fresnel lenses is rather extreme, but even so, their quality (dense ridge pattern) was mostly adequate for the low magnified pixel density of the currently popular 1280x800 7-inch displays.
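A rough back-of-envelope calculation shows why the peripheral pixels get so large. Both FoV figures below are my own illustrative guesses, not measurements:

```python
# Average angular pixel density when spreading one 640-pixel-wide
# SBS-Half image half across a given horizontal FoV.
# The FoV values are illustrative guesses, not measured numbers.

def px_per_degree(pixels, fov_deg):
    return pixels / fov_deg

print(px_per_degree(640, 90))   # ~7.1 px/deg (Rift-like FoV)
print(px_per_degree(640, 170))  # ~3.8 px/deg ("total FoV" stretch)
# Foveal human vision resolves very roughly 60 px/deg, so both are far
# below that -- hence "total immersion first, forget reading eBooks".
```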

For some applications (where total FoV is more important than pixel density), this is a totally immersive experience.

Combining total immersion from this thread with guaranteed low-latency low-power head-tracking from my "PTZ" thread, I will have a totally immersive experience (even with a totally portable Rift and Raspberry Pi setup). Combining this with a wireless network to stream rendered skyboxes to the RasPi fast enough to satisfy PoV (persistence of vision) for smooth animation, I think we will have an optimal portable experience (until better hardware lets us incrementally upgrade to an even better experience).

So, I want this "EXTREME FoV" HMD for total immersion VR content, and another "meniscus eyeglass lens" HMD for working with text and GUI-based VR content.

For a final full immersion HMD, I really want better quality lenses, like a 15x fresnel, or a custom curved solid 15x lens (assuming that stacking FIVE page magnifiers gives 15x magnification).

I am extremely pleased with the results of this experiment! If you try this, remember that we are stretching the pixel density way beyond its limits, using only half the screen to cover everything your eye can see. Also remember that we are stacking FIVE cheap dollar store fresnel lenses (HAH!!!). Do not expect to read an eBook on this HMD. But expect to be blown away by the awesome (but extremely low resolution and rather blurred) FoV. Luckily, with such low pixel density, a bit of blur actually helps our vision to understand low resolution details.

Now, to build a prototype HMD with TEN fresnel magnifiers (a stack of five for each eye). Then, because the distortion for each eye is a MIRROR warp of that used by the other eye, I will need to make a "mirrored pre-warp" filter to work with this HMD, and chromatic aberration correction in this pre-warp filter will be more important than ever.

Enjoy!
:D



Fri Mar 01, 2013 12:11 pm
geekmaster - Petrif-Eyed
REQUEST: Where can I get some stereoscopic image pairs that are already adjusted to work on the 7-inch Rift Dev Kit?

I found a simple way to mount the trimmed fresnel lens stacks inside cheap dollar store safety goggles like those shown in the first post. It turns out that they have air vents in the upper and lower corners of the outside edges, and the untrimmed corners of the lens stacks snap securely into them, holding them exactly in the correct position against my nose. Fortunate coincidence there!

The image pairs I have use the wrong distance between their warp centers. I was testing them by shrinking them in my web browser on my Nexus 7 (pinch and stretch), but although that puts the IPD of the images the correct distance apart, the pre-warp centers are no longer in the lens centers, giving an asymmetrical warp, which throws off stereo convergence near the edges of the lenses.

I believe that my lenses will use symmetrical pre-warp just like the Rift, but I need the new pre-warped images to test for Rift-compatibility. It is possible that I may need a custom pre-warp for proper convergence, but I think it will work okay. To do it right, though, I really need to squeeze a wider FoV into them with anamorphic compression (the way SBS-Half 3D video is compressed). But it would be nice to at least have correct 3D convergence even when using the "Rift-standard" algorithm, even though things will look too tall and narrow in my HMD.

I will adjust some images myself now to work with my HMD, but I still want to test with images known to work well on a real Rift Dev Kit.



Fri Mar 01, 2013 6:04 pm
Fredz - Petrif-Eyed
Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2255
Location: Perpignan, France
geekmaster wrote:
REQUEST: Where can I get some stereoscopic image pairs that are already adjusted to work on the 7-inch Rift Dev Kit?
I'm tracking every shot I can find for the Rift here : http://vr.wikinet.org/wiki/3D_images

There is at least the one from the Oculus Kickstarter update that is adapted for the devkit; I don't know about the others. I should probably categorize these images by device, but that will take time.


Fri Mar 01, 2013 6:16 pm
geekmaster - Petrif-Eyed
Fredz wrote:
geekmaster wrote:
REQUEST: Where can I get some stereoscopic image pairs that are already adjusted to work on the 7-inch Rift Dev Kit?
I'm tracking every shot I can find for the Rift here : http://vr.wikinet.org/wiki/3D_images

There is at least the one from the Oculus Kickstarter update that is adapted for the devkit; I don't know about the others. I should probably categorize these images by device, but that will take time.
The one image that is wide-eye view and appears to not have full overlap is this one:

Image

But, unfortunately, although the images' pre-warp is okay, the unwarped image centers are too far apart for me to converge them with my fresnel lens stack HMD. I need the content BEFORE warping to be closer together. My eyes hurt a little from TRYING to converge the images beyond "infinity" (a diverged eye view). If I shrink the images so I can converge them, then the warping becomes asymmetrical. So rather than just resizing the warped images to put the centers closer together, I need to move the image centers closer BEFORE the pre-warp.
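The order-of-operations issue can be shown with a toy radial pre-warp. The distortion coefficient below is an arbitrary example, not an Oculus value; the idea is just that the content shift must happen before warping, so the warp stays symmetric about the fixed lens center:

```python
def prewarp(point, center, k1=0.22):
    """Toy radial pre-warp about `center`: scale the offset from the
    warp center by (1 + k1*r^2). k1 is an arbitrary example value,
    not a real lens coefficient."""
    x, y = point[0] - center[0], point[1] - center[1]
    s = 1.0 + k1 * (x * x + y * y)
    return (center[0] + x * s, center[1] + y * s)

lens_center = (0.0, 0.0)
# Move the image content 0.1 units toward the nose BEFORE warping:
shifted = (0.3 - 0.1, 0.0)
print(prewarp(shifted, lens_center))
# Shifting the already-warped image instead would move the warp center
# away from the lens center, making the distortion asymmetrical.
```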

Does the image shown above look correct on a real Rift Dev Kit, or does it have the same problem I experienced?

EDIT: Installing my fresnel lens stacks inside the dollar store safety goggles worked much better than I expected. The view through the goggles WITHOUT fresnel lenses is much more restrictive than WITH the lenses. The magnification of the lens stacks actually pushes the borders of the lenses far outward, making my nose/eyebrows/mustache "invisible" just like when holding the lenses with my hands. I am pleasantly surprised. Aside from perhaps needing different software pre-warp, I am convinced more than ever that I can construct an HMD accessory for a 7-inch tablet PC using only two dollar store fresnel magnifiers, one pair of dollar store safety goggles, and some cardboard (or foamboard) and tape to hold the tablet display in front of the safety goggles.

Here is the same image as above, but with part of the middle border cropped out. I will now view this with my "HMD":

Attachment:
quake7.jpg


EDIT2: Yes! I was able to (just barely) converge that, fullscreen. It still needs a bit more cropped from the center, and I think the warping needs to be done AFTER pulling the images closer together. But progress! I tried viewing these with just the two offset lens portions (no center magnification layer). There was a little black margin visible on the right, so it needs perhaps just a LITTLE more magnification (not a full 3x). I am a bit near-sighted though, so with perfect vision perhaps a 3x magnifier would be about correct. I should probably write my own pre-warp filter now. It is time to reload my dev tools on this new computer...





Last edited by geekmaster on Fri Mar 01, 2013 7:14 pm, edited 2 times in total.



Fri Mar 01, 2013 6:48 pm
langmyersknow - One Eyed Hopeful
Joined: Wed Jan 23, 2013 10:28 am
Posts: 26
Wow, this is really amazing and exciting. Maybe when we finally get our dev kits we can mod them for even more FoV.


Fri Mar 01, 2013 7:06 pm
geekmaster - Petrif-Eyed
langmyersknow wrote:
Wow this is really amazing and exciting. Maybe when we finally get our dev kits we can mod it for even more fov.
As you can see from an earlier discussion with Palmer, the Rift Dev Kits have removable lens cups (eyecups), and the lenses are held in them with removable clips, so we CAN replace the lenses in an "unused" set of lens cups with anamorphic lenses.

Palmer said he previously used anamorphic lenses to expand 4:3 (1.33 AR) content to 16:9 (1.78 AR). I plan to go farther, expanding 8:9 (half screen width, 0.89 AR) to (almost) as wide as the eye can see. The Rift Dev Kit may be limited somewhat by the size of the lens cups and the frame around the lenses, but it should still be a significant increase in horizontal FoV (at the expense of pixel density, especially near the edges).
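The required extra horizontal stretch is simple aspect-ratio arithmetic. The target ratios below are just worked examples (my real target is "as wide as the eye can see", which has no fixed ratio):

```python
# Extra horizontal magnification (relative to vertical) needed to map
# a source aspect ratio onto a target aspect ratio. Example figures only.

def horizontal_stretch(src_ar, dst_ar):
    return dst_ar / src_ar

print(horizontal_stretch(4 / 3, 16 / 9))  # ~1.33x -- the 4:3 -> 16:9 case
print(horizontal_stretch(8 / 9, 16 / 9))  # 2.0x -- one SBS-Half to full 16:9
```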

I will also do more with my other experiment involving a curved stack of 5 uncut fresnel magnifiers for each eye, wrapped completely around and touching the face from ear to ear. That version gives MAXIMUM FoV, with pixels everywhere you CAN look (but at correspondingly lower pixel density).

I love "as simple as possible", and this first version with trimmed lens stacks is developing nicely in that direction. Although I am designing it around my Nexus 7, even a dedicated 7-inch LCD panel may be used in its place (for a full DIY HMD).

Beware that we will need custom pre-warp plug-ins for these HMDs with different horizontal FoV, so it would be GREAT if HMD driver software allowed such plug-ins, or at least accepted a custom displacement map image or custom warp formula configuration.
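To show what such a displacement-map plug-in might look like internally, here is a minimal nearest-neighbour remapping sketch. The function name and map layout are my own invention for illustration, not any existing HMD driver API:

```python
def apply_displacement_map(src, dmap):
    """Remap `src` (a list of pixel rows) through a displacement map:
    dmap[y][x] = (sx, sy), the source coordinate to sample for output
    pixel (x, y). Out-of-range samples become black (0). A real
    plug-in would interpolate; this sketch uses nearest-neighbour."""
    h, w = len(src), len(src[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = dmap[y][x]
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = src[sy][sx]
    return out

# An identity map leaves the image unchanged; a lens-specific warp
# formula would fill dmap with pre-warped coordinates instead.
src = [[1, 2], [3, 4]]
ident = [[(x, y) for x in range(2)] for y in range(2)]
print(apply_displacement_map(src, ident))  # [[1, 2], [3, 4]]
```

The appeal of a map (versus a hard-coded formula) is exactly the plug-in scenario above: each custom lens stack ships its own map image, and the driver never needs to know the warp math.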

For applications that can tolerate the reduction in pixel density (from spreading available pixels over a larger area), this significant increase in FoV is awesome and even MORE immersive.

But even without this increased FoV, the Rift ALREADY crosses the required threshold, and will be truly amazing even in its own default configuration. And for some applications, the increased pixel density from a smaller FoV may actually be beneficial. So different sets of lenses may be useful for different applications, even with the Rift Dev Kit.

_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Image


Fri Mar 01, 2013 7:31 pm
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
Okay, I added the center portions of my fresnel magnifiers, for increased symmetric magnification. This got rid of the black borders at the edges. It uses the pixels very efficiently, with almost no waste.

That third "uniform magnification" layer just sits inside on top of the other layers, partially held in place by the vent holes mentioned previously. It is easy to replace, so that different magnifications can be used for different people.

I will take some photos now.

EDIT: Here are the photos. It works quite well, especially for only $3USD in parts. :D

Note that all lenses except the 3x magnifier centers must have the ridges toward the eyes. The 3x center-cut lenses can be flipped either way.
Front:
Attachment:
gog-front1.jpg

Attachment:
gog-front2.jpg

Bottom:
Attachment:
gog-bottom.jpg

Top right:
Attachment:
gog-upright.jpg

Back:
Attachment:
gog-back1.jpg

Closeup of air vents:
Attachment:
gog-vents.jpg

Extra 3x magnification lenses (not installed):
Attachment:
gog-lens3.jpg

Extra 3x magnification lenses (installed, easy to remove):
Attachment:
gog-lens3a.jpg

Note that the 3x magnification lenses cut from the fresnel magnifier centers can have the ridges toward or away from the eyes. In fact, the ones in the photos are cut identically, so one is flipped over (ridges away), and it works equally well. Perhaps it would protect them more to keep the smooth side toward the eye, so smudges from eyelashes brushing them would be easier to clean.

Also, when on the face, the portion that touches the skin spreads out more than when relaxed as in these photos. That has the effect of moving the lens stacks so that they are parallel to the display surface, exactly as needed.

This arrangement makes almost ALL of the pixels viewable, when the screen distance is adjusted to focus for my near-sighted eyes. Different eye prescriptions may require using a different magnification for the third layer, allowing the screen distance to be adjusted for full FoV while remaining in focus. In some cases (much more near-sighted) the third layer of magnifying lenses might not be required.

My 7-inch tablet display sits about one inch in front of the goggles. It will be held in place by an adjustable foamboard shroud, which I will construct later. All non-tablet parts are from a "Dollar Tree" store, with about $4 total parts cost (plus sales tax). It may need additional head support (such as a stronger strap) to hold the display weight in front of my face.

The left and right offset lenses were taped at their top and bottom edges into a secure two-lens stack. The air vents actually held them inside the safety goggles. Although this held together and worked without tape, I added some tape at the top and bottom of the lens stacks, wrapped over the goggle edges, to secure them to the goggles.


The FoV viewed through the fresnel lens stacks is MUCH larger than when wearing the safety goggles without magnifying lenses. It extends beyond my nose, past my eyebrows above, and past my mustache below (i.e. "supernatural" FoV).

I have impressed myself with this one (so far)! Soon it will need software support. I need to see how well the FoV2Go demo for Android devices works on it now. But it would look better with pre-warp and anamorphic adjustments.



_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Image


Last edited by geekmaster on Fri Mar 01, 2013 10:14 pm, edited 5 times in total.



Fri Mar 01, 2013 8:17 pm
Profile
Petrif-Eyed
User avatar

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2255
Location: Perpignan, France
Reply with quote
geekmaster wrote:
The one image that is wide-eye view and appears to not have full overlap is this one
I was talking about the first one, under SDK | Calibration application. It was created by Oculus, so I guess you won't find anything better than that in terms of correct rendering for the 7" devkit.


Fri Mar 01, 2013 9:04 pm
Profile WWW
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
Fredz wrote:
geekmaster wrote:
The one image that is wide-eye view and appears to not have full overlap is this one
I was talking about the first one, under SDK | Calibration application. It was created by Oculus, so I guess you won't find anything better than that in terms of correct rendering for the 7" devkit.
Okay, thanks. I will try that.

Notice that I added photos of my "$3 HMD for 7-inch tablets" to my previous post. I plan to make an adjustable shroud from foam board (also from the dollar store) to block external light and to hold the display in focus. This may add an additional dollar to my materials cost.

_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Image


Fri Mar 01, 2013 10:02 pm
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
I just tried the "Fov2Go Minus Lab" demo on my Nexus 7. Although I can merge the images easily with my naked eyes (no eyeglasses), the images are just too far apart to merge while wearing my fresnel stack goggles. I can see aspheric warping near the edges, but the central area all looks quite good, and the entire display is in relatively sharp focus for both eyes. Of course, I do not have a REAL Rift Dev Kit to compare against yet, but I did try this demo with the DIY Rift 5x aspheric acrylic loupe lenses, and that worked fine (provided I moved the lenses to a position that allowed me to merge the stereoscopic image pairs).

With this anamorphic stretching, I really need to move my image centers closer together. I have not tried the recommended 7-inch Dev Kit test image yet. I will try that soon (after my Nexus 7 battery recharges).

I am having fun with this. I like inexpensive and simple stuff. And thanks Palmer, for the Fov2Go stuff (and the Rift Dev Kits)!

_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Image


Fri Mar 01, 2013 10:22 pm
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
Fredz wrote:
geekmaster wrote:
REQUEST: Where can I get some stereoscopic image pairs that are already adjusted to work on the 7-inch Rift Dev Kit?
I'm tracking every shot I can find for the Rift here : http://vr.wikinet.org/wiki/3D_images

There is at least the one from the Oculus Kickstarter update that is adapted for the devkit; I don't know about the others. I should probably categorize these images by device, but that will take time.
Fredz wrote:
geekmaster wrote:
The one image that is wide-eye view and appears to not have full overlap is this one
I was talking about the first one, under SDK | Calibration application. It was created by Oculus, so I guess you won't find anything better than that in terms of correct rendering for the 7" devkit.
Okay, I tested these fresnel stack goggles with the image that you recommended:

Factory Room Test Target:
Image

It looks pretty good, but it also DOES demonstrate the stereoscopic misalignment problems that Palmer Luckey warned me about. The effect is that for a round object near the center in one eye, but stereoscopically offset (by parallax) toward a side in the other eye, the non-linear anamorphic effect from offset lenses stretches the circle into an ellipse that gets wider toward the edge of the FoV. This makes it difficult to merge. And worse, things in the corners may get expanded higher or lower in one image IF THE PRE-WARP DOES NOT MATCH THE LENSES.

These problems can be fixed in two different ways. One is to use prismatic anamorphic stretching with a pair of back-to-back prisms (as shown in a previous link in this thread). Another method is to adapt the pre-warp algorithm to compensate for ANAMORPHIC warping. This includes making sure things in the corners keep the same height in both stereoscopic images, regardless of which one is closer to an edge. This anamorphic pre-warp can also assure that a circle remains circular and not elliptical as it approaches an edge of the FoV.

The central areas look pretty good, except the text labels above and below, which get their edges pulled outward asymmetrically from each other, disturbing stereoscopic merging. This can CERTAINLY be compensated for in software, but it PROVES that anamorphic lenses need DIFFERENT pre-warp than simple aspheric lenses. At least the central area works reasonably well with the WRONG pre-warp.
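The vertical misalignment effect can be sketched numerically. This is a hypothetical toy model (the function name and the m(r) formula are my own, purely for illustration, not from any SDK): a radial magnification that grows with distance from the lens axis gives two stereo-corresponding points at the SAME screen height DIFFERENT perceived heights, which is exactly what breaks stereoscopic merging near the edges.

```python
# Toy model: non-linear radial magnification m(r) = 1 + k*r^2 about a
# lens center. Two stereo-corresponding points at the same height, but at
# different horizontal distances from the lens axis (due to parallax),
# end up at different perceived heights.

def perceived(point, center, k=0.2):
    """Map a screen point to its perceived position through the lens."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    r2 = dx * dx + dy * dy
    m = 1.0 + k * r2          # magnification grows away from the axis
    return (center[0] + m * dx, center[1] + m * dy)

# Same height (y = 1.0) in both eye images, but parallax puts the object
# nearer the lens axis in the left eye than in the right eye.
left = perceived((0.2, 1.0), (0.0, 0.0))
right = perceived((0.8, 1.0), (0.0, 0.0))
print(left[1], right[1])   # different perceived heights -> hard to merge
```

A matching pre-warp would have to apply the inverse displacement per eye so both points land back at the same height.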

Now, the next step (after adding a shroud to mount the tablet display to the safety goggles) is to develop an anamorphic pre-warp function (and/or displacement map)...

This DOES prove that software needs to support multiple different pre-warp filters, depending on choice of HMD optics.

EDIT: More testing with this factory test image, both with the extra 3x magnification fresnels inserted and with them removed, shows that there is too much magnification for me at the best focal distance, wasting too many pixels. When I test without extra magnification, I can see a small but noticeable amount of border around the edges. Perhaps I need an extra 1.5x magnification to fully use the available pixels on the screen. And the amount of extra magnification depends on the needs of the user, much like the eyecups on the Oculus Rift. For now, it looks sharper without the extra 3x fresnel. That could be because of less fresnel distortion, or it could be because of the higher pixel density with less magnification, but I suspect some of both...

This anamorphic HMD needs variable magnification for different people (just like the Rift Dev Kits), but also needs DIFFERENT pre-warp than the Rift uses, to keep things near the edge vertically aligned with each other in both eyes.

_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Image


Last edited by geekmaster on Tue Aug 16, 2016 9:51 am, edited 1 time in total.



Fri Mar 01, 2013 11:08 pm
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
I tried wearing reading glasses, with the "2-lens-stack" goggles over them. The fresnel lenses pushing against the eyeglasses put a bit of pressure on my nose. But the horizontal FoV was perfect at the proper focal depth. The problem is that the eyeglasses do not cause aspheric distortion, so the top and bottom centers of the image collapse inward somewhat, letting me see past the screen edges up to the tablet edges. So for magnification, we really do need aspheric loupe lenses (or the fresnel equivalent). Just not a full 3x in my case.

Okay, next experiment. I put on my corrective eyeglasses so I have nearly normal vision. I wore the 3-lens stacks in the goggles (with extra 3x magnifiers) OVER my normal glasses. It was the best yet, with full vertical and horizontal FoV. The problem is that I needed the small corrective lenses that sit below my eyebrows to be very close to my eyes. Contact lenses would be even better. So assuming you have perfect (or corrected) vision, the extra 3x magnification layer is about right, I think. Everything in that test image looked pretty good except the text labels, which had vertical misalignment preventing stereoscopic merging of the text (until the pre-warp gets adjusted to compensate for that).

The problem with corrective eyewear is that it can push the fresnel stacks too far from the eyes, making this method not work well.

What would be ideal is to wear custom contact lenses that give you perfect vision AND focus about one inch from your face. Then you could just hold your tablet screen touching your nose and eyebrows. I tried it and I am sure it would work, giving the maximum possible FoV. There are a lot of things to try, but many of them will require a custom pre-warp algorithm, and NOT the one that the Rift uses. That means we need custom pre-warp support in our drivers and/or games.

Soon, when I get my real Rift, this stuff may become a distant memory. But for those who have a tablet computer and will not soon get their Rift, they may want to continue my experiments here.

At this time, besides a shroud to attach my goggles and screen together and block stray light, I just need some custom pre-warp code, and then we can draw up some DIY construction plans.

_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Image


Sat Mar 02, 2013 12:05 am
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
My 5x and 6x magnification reading glasses arrived today. The "large" lenses are smaller than the huge ones I got at the dollar store.

I tried wearing the 6x glasses while viewing the Fov2Go "Minus Lab" demo on my 7-inch Android tablet (Nexus 7), but I could not merge the images. Their centers are too far apart. I was able to merge them while hand-holding a pair of 5x loupes (recommended for DIY Rift clones), but only while holding the lens centers offset from my IPD, so I was peering through the inner half of the lenses. When offsetting the lens centers like this, the image is shifted away from the lens center by an amount proportional to the distance from the lens centers.

When viewing through the eyeglasses, there was no distortion. Rectangles looked like rectangles. Vertical and horizontal edges were not warped. This is because eyeglasses are designed with meniscus lenses that curve outward from the eye, which lets your eye focus on a flat plane (such as an LCD panel held at a close distance).

The 6x lenses were not enough for the screen to fill my FoV, but they do focus nicely not very far from my face. I am near-sighted, so with perfect vision they would probably focus even farther away. Based on this experiment, I suspect that I would need about 10x magnification to have the 7-inch screen fill my vertical FoV while it is in landscape mode.

Although the IPD distance for the Fov2Go demo was too large, the Oculus factory configuration demo looked great with these glasses, with good stereo merge (even on the text labels). The only problem was that the image had pre-warp distortion that these high-magnification eyeglasses do not need.

After a little more thought about my recent experiments with 3-lens fresnel stacks mounted in safety goggles, I decided that the physical position of the lenses in the stacks, and relative to my IPD, is probably important. I just hacked them together by ROUGHLY cutting 6-inch magnifiers into 3 pieces and stacking those pieces. Although that works, and a pre-warp could be figured out (empirically, through trial and error), it would probably change somewhat for each pair of VR goggles made that way. We need to assemble them with more precision, so that one pre-warp filter will work for all such lens stacks, and for both eyes.

Thinking a bit more about fresnel stack pre-warp distortion, it is clear that using a stack of lenses with 3 DIFFERENT centers would require warping around all three of those lens centers. Each warp is like a portion of a single lens, and all three warps can be performed (approximately) in any order. Testing shows that it does not matter (noticeably) what order the lenses are in the stack.

Now would be a good time to document what I have been thinking about how the lens stacks work. The fresnel lenses operate just like equivalent solid lenses. To make the fresnel design simpler, they are usually modeled after solid lenses that are flat (planar) on one side and convex (curved outward) on the other side (i.e. plano-convex lenses). Here is the Wikipedia page, and a diagram showing two equivalent lenses:
http://en.wikipedia.org/wiki/Fresnel_lens
Image

A short and simple description of how magnifying lenses operate is that they make an image appear larger when viewed through the lenses. The 5x lenses used in DIY Rifts enlarge 5x. To do that, they do not move the perceived position of a pixel viewed along their axis, but as you look farther away from the lens axis (toward the edges), the perceived position of the pixels is moved away from the lens axis, to the original distance from the lens center multiplied by the lens magnification. A pixel 2cm from lens center would appear at 10cm from lens center, outward in the same direction.

Now, when moving the lens center so you are looking through a position closer to its edge, everything you see is pushed outward from the REAL lens center, which may be well out of your FoV. The lens stacks I use have a pair of lenses that start at 1 inch outside of lens center, and extend to 3 inches outside of lens center. This means that EVERYTHING is shifted, and more so at the outer extreme. These outer portions of a larger lens are called "offset lenses".

... time to go, to be continued later ...

_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Image


Sat Mar 02, 2013 5:35 pm
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
... Okay, I'm back ...

Now, reading this may seem strangely disorganized. That is because it is raw "stream of consciousness" content, being typed in as I explore these ideas in my head. When I write professional technical documentation, it begins much the same way, but as the content grows, I periodically go back and refactor it, by moving related things to be near each other, organized into topics that you may later find in an outline for the document. When paragraphs are moved, some sentences may be rearranged or reworded to make it simpler, less redundant, and more coherent.

But, here, this is just raw content, so you can get a feel for how I think about this stuff. Later, some of this text may become comments in source code that implements my ideas in software.

Now, back to the theory of stacked offset lenses. The purpose of stacking lenses was to enlarge the vertical FoV so the LCD display height fills your vertical FoV, and also to stretch a narrow image from an SBS-half image pair into a wide landscape image such as you see through typical eyeglass lenses, or on a movie theater screen.

In a movie theater, this is typically done with true anamorphic lenses that do not cause optical distortion. These are somewhat complex, and consist of multiple individual lenses that are stacked together, some touching (and perhaps glued together) and some with space between them. What is revolutionary about the design of the Oculus Rift is the idea of using an inexpensive high-magnification single lens, positioned close to the eye, to INTENTIONALLY cause image distortion, and then to correct for that distortion using a SOFTWARE pre-warp filter. We can extend that concept to perform something similar to what an anamorphic lens does, stretching a narrow image to fit a much wider screen. Instead of complex true anamorphism that stretches evenly in a horizontal direction with no distortion, I am using a stack of OFFSET lenses to stretch horizontally, at the expense of causing distortion that will be corrected in software, much like (but different from) a Rift pre-warp filter.

In my descriptions of potentially difficult technical concepts, I like to simplify (or technically OVERsimplify) my terminology, to make my already wordy posts somewhat less wordy. Although that makes them a bit inaccurate (especially to optical perfectionists in this case), I will continue to do so as long as it keeps my posts accurate enough to be understood, without making them "TL;DR" (Too Long; Didn't Read). In the case of lenses, I often refer to a stack of offset lenses as an anamorphic lens, even though we need to add software pre-warp to achieve true anamorphic expansion.

Now, regarding a description of how lenses work, we will think in terms of a theoretically perfect pure magnifying lens (not convex, or plano-convex, or meniscus). It is something that can be simulated easily with a small, intuitive computer program. For this "perfect" (oversimplified) lens, we will take the example of a 5x magnification lens such as the one recommended for DIY Rift clones.

When you look at a 1-cm wide group of pixels through this lens, it will appear to be 5-cm wide. To accomplish this, all pixels are pushed outward from the center axis of the lens, proportional to their distance from that lens center, multiplied by its magnification factor. A pixel that is directly on the center axis of the lens will not appear to have moved when viewed through the lens. But a pixel immediately to its right will appear to be 5 pixels to the right, and a pixel that is 10 pixels to the right appears to be 50 pixels to the right. A pixel to the immediate left of the center pixel appears to be 5 pixels to the left, 1 up appears at 5 up, and 1 down appears at 5 down. Likewise, a pixel that is 10 pixels down from the center pixel looks like it is 50 pixels down when viewed through such a 5x magnifying lens.
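This "perfect lens" rule is easy to express as a tiny program. A toy sketch (the function name is invented for illustration; screen coordinates with y increasing downward):

```python
# Idealized "perfect magnifier": every pixel's offset from the lens axis
# is simply multiplied by the magnification factor.

def magnify(pixel, lens_center, magnification):
    """Perceived position of `pixel` seen through an idealized magnifier."""
    dx = pixel[0] - lens_center[0]
    dy = pixel[1] - lens_center[1]
    return (lens_center[0] + magnification * dx,
            lens_center[1] + magnification * dy)

print(magnify((0, 0), (0, 0), 5))   # on-axis pixel does not move: (0, 0)
print(magnify((10, 0), (0, 0), 5))  # 10 right of center appears 50 right
print(magnify((0, 10), (0, 0), 5))  # 10 below center appears 50 below
```

Real lenses of course add aspheric distortion on top of this, which is what the pre-warp has to undo.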

Now what happens if you start with a 5x lens that is 6 inches WIDE? The center 2 inches acts just like the 5x magnifier lens described above. But when you get out to the edge that is 3 inches from the optical center, pixels viewed through it will appear to be 15 inches out from the optical center. Of course, in real life, you get a lot of distortion when viewing through a lens at such a sharp angle. You have to deal with things like the critical angle, beyond which you get total reflection from the surface and no refraction through the lens. But with this "perfect" lens we do not worry about such optical complexities.

Remember that I use OFFSET lenses in my stack. An offset lens is just an outer portion of a much larger lens, with its center far from the optical center of the original, much larger lens. With our 6-inch fresnel magnifiers, when we cut them into three pieces that are each about 2 inches wide, the left and right outer portions start at 1 inch from optical center and extend to 3 inches from optical center. The effect is that when viewing the LCD screen through it with your eye over the ORIGINAL optical center, it appears to move pixels outward, so that pixels at its left edge appear 5 inches to the right of center, and pixels at its right edge appear up to 15 inches to the right. But we do not actually use it with our eye positioned 1 inch to the left of its left edge (over the original optical center). Instead we move it so our eye is centered over it, which probably complicates the mathematics a bit, but by observing an image on the screen it behaves pretty much as described above. Pixels in front of my nose move to the right about 1 inch, and pixels stretch wider the farther to the right you look, with pixels at the right edge of the lens appearing (in theory) about 15 inches to the right. Disturbingly, for a pair of images where you can ALREADY see some of the left image with the right eye, that left image is pulled an inch into the right FoV! Luckily we can add an equivalent offset lens cut from the LEFT side of the original lens, stacked flat against this RIGHT offset lens, to stretch the image toward the left. The stretching for the left offset lens will be a mirror of the right offset lens, so that its right edge pushes pixels about 5 inches to the left, and its left edge pushes pixels 15 inches to the left. Because we stack these two offset lenses and our eye is not over the ORIGINAL optical center, there are mathematical complications that we will not bother to predict. Instead we will observe the net results using real fresnel lenses trimmed from a fresnel page magnifier that I estimate to be about 3x magnification.
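The same toy model explains an offset lens: the slice obeys the parent lens's rule, measured from the parent's optical center, which lies outside the slice itself. A hypothetical sketch using the 5x, 6-inch-wide numbers above (function name is my own):

```python
# An offset lens is a slice of a bigger lens, so a point at horizontal
# position x (measured from the PARENT lens's optical center) is still
# perceived at magnification * x.

def through_big_lens(x, magnification=5.0, optical_center=0.0):
    """Perceived horizontal position of a point seen through the parent lens."""
    return optical_center + magnification * (x - optical_center)

# A right-hand slice cut from 1 to 3 inches right of the optical center:
print(through_big_lens(1.0))  # inner edge: appears at 5 inches from center
print(through_big_lens(3.0))  # outer edge: appears at 15 inches from center
```

So everything seen through the slice is pushed rightward, and the push grows toward the outer edge, matching the observed behavior.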

When viewing an SBS-Half image pair, with the nose over the center line and a pair of stacked 3x offset lenses (as described above) positioned close to the eye and parallel to the screen, the inner FoV extends well past the nose (making pixels visible where the nose should be), and the outer FoV is vastly extended too. For my (near-sighted) vision, I can see borders all around the image (including part of the image for the other eye), but adding extra overall magnification pushes all borders just past the limits of my FoV (exactly what we want). To get that extra magnification, I use the center portion of the fresnel magnifiers (which actually magnifies a bit too much for me, but is probably about right for somebody with perfect vision). Ideally, the offset lenses should have the amount of magnification needed to not require an extra magnifying lens, but the required magnification probably depends on how near-sighted or far-sighted the viewer is. What I did was tape the offset lenses into my safety goggles, and then (optionally) place the extra magnifier layer on top of them, where it is easy to remove later. The observed result when viewed through one eye is nearly perfect coverage of the full FoV, at the expense of increased distortion extending from TWO DIFFERENT optical centers (or THREE optical centers when a magnifier lens is added to the lens stack). The pre-warp algorithm has to take all of these lens optical centers (and offsets) into account to compensate for them.
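The earlier claim that the stacking order does not matter noticeably can be checked in the toy model: stacked lenses compose their warps, the net magnification is the same in either order, and only a constant shift depends on the order. A sketch with idealized 1-D affine warps (my own simplification, not real optics):

```python
# Each lens in the stack is modeled as a 1-D affine warp about its own
# optical center. Stacking lenses composes the warps. The net
# magnification (slope) is the product of the individual magnifications
# in EITHER order; only a constant shift depends on the stacking order.

def make_warp(magnification, center):
    return lambda x: center + magnification * (x - center)

left_slice = make_warp(3.0, -2.0)   # offset lens cut from the left side
right_slice = make_warp(3.0, +2.0)  # mirror-image slice from the right side

ab = lambda x: left_slice(right_slice(x))   # one stacking order
ba = lambda x: right_slice(left_slice(x))   # the reverse order

# Net magnification is 9x either way:
print(ab(1.0) - ab(0.0))
print(ba(1.0) - ba(0.0))
# ...but the two orders differ by a constant offset:
print(ab(0.0) - ba(0.0))
```

That constant offset is small relative to the overall stretch, which is presumably why flipping the stack order is not noticeable in practice.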

When viewing an image that has no (or incorrect) pre-warp distortion, some objects diagonally offset from the optical center may be stretched DIAGONALLY outward by different amounts, caused by parallax differences between them in the stereoscopic image pair. This results in different VERTICAL OFFSETS making them impossible to stereoscopically merge near the outer edges of the display, unless the correct pre-warp is applied to prevent mismatched vertical offsets. For the "anamorphic" pre-warp filter to match the real physical lens centers, the lenses must be measured accurately and cut uniformly, unlike the lenses that I crudely rough-cut just for experimentation.

Conclusion: Before making a software pre-warp filter, I need to construct another set of stacked fresnel offset lenses and optional magnifier, using careful measurements and uniform trimming, to exactly match my software filter. Otherwise, mismatched vertical offsets in stereoscopic pairs will cause eyestrain. With careful construction of the lens elements and lens stacks, I believe we can have a vast FoV from a single 7-inch LCD panel (as used in my Nexus 7 tablet PC).

EDIT: In another set of experiments, I used a stack of FIVE untrimmed 7-inch wide fresnel magnifiers, touching my face just between my eyes, and curved to wrap around my eyebrow and touch the side of my face just in front of my ear. The lenses were also touching the tip of my nose and my cheekbone. When my 7-inch display containing an SBS-Half stereoscopic image pair was placed in front of my face while holding the magnifier stack in this position, I could see pixels (all in pretty good focus) in every possible direction, including where parts of my face should have obstructed my view. This was TOTAL FoV, all from a single 7-inch display, and you cannot get any better FoV than that. The only problem is that stretching half a million pixels over this large a visual field makes the pixel density awfully low, but even so, the immersion is truly amazing, even if a bit blurred. To use TWO lens stacks, touching your nose, cheeks, and eyebrows from ear to ear, you would need a special pre-warp filter to allow stereoscopic merging, but the results would look truly amazing. And the screen is only about 1 inch in front of your nose, so the full HMD would be extremely close to the face, making the weight of the display less of an issue due to reduced leverage. It might even look a little less "dorky".
:D

_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Image


Sat Mar 02, 2013 6:35 pm
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
At this point, I would like some "peer review" feedback. How am I doing? Is this stuff interesting? Do you see any flaws in my conclusions that I reached during my experiments? Does anybody have a simple formula to calculate the pre-warp filter I will need for a stack of two offset lenses (with different optical centers) as described above? How about a formula for adding the third optional magnifier lens (with a third optical center added to the mix)? Should I try cutting fresnel lenses to stick onto the inside surface of high-power reading glasses (with meniscus lenses) so there is no third optical center to deal with, and different people can use different reading glass magnifications? Do you have any other comments, suggestions, or questions about my lens experiments?

And, really, how do you like my experimental approach and my method of describing it here? Does it help, or should I keep more of it to myself?

I need feedback, to help me decide how to spend my time here and what I should document. Thanks.

So MANY ideas, and so LITTLE time... ;)

_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Image


Sat Mar 02, 2013 8:41 pm
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
My graphics experience is primarily in directly manipulating pixels in a frame buffer. I have been doing that since the days of CGA on the original IBM PC (before they even had a hard drive option). So I really feel comfortable with pushing pixels to do my bidding. One side benefit is that I can pipeline everything using custom blitters (Bit-Block Transfer functions) that can simultaneously do filtering on pixels while they are being moved. This was the only way to go on antique processors, and now it is even more important to handle data from RAM only once, while it remains fresh in modern multi-level cache. This is the best way to guarantee low latency. Passing a line-at-a-time through a series of filters is faster than passing an entire frame through a sequence of filters, especially when that frame does not fit in cache, such as when using low power processors like the Raspberry Pi.
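The line-at-a-time pipeline idea can be sketched with generators. This is a toy illustration of the structure (not the actual blitter code, and the filter stages are invented): each stage consumes one scanline, filters it, and yields it onward, so a line is handled while still "fresh" rather than making a full-frame pass per filter.

```python
# Pipelined scanline filters: each stage is a generator, so one line
# flows through the whole chain before the next line is read.

def read_lines(frame):
    for line in frame:
        yield line

def brighten(lines, amount):
    for line in lines:
        yield [min(255, p + amount) for p in line]

def invert(lines):
    for line in lines:
        yield [255 - p for p in line]

frame = [[0, 64, 128], [192, 255, 10]]      # tiny 3x2 grayscale "frame"
pipeline = invert(brighten(read_lines(frame), 10))
out = list(pipeline)
print(out)
```

In C, the same shape would be a chain of per-scanline blitter callbacks operating on a buffer that fits in cache.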

I realize that I should use modern GPUs, but I want to start with the basics and work up from there. I always like to start with a firm but minimalistic foundation, doing a lot with a little, and then keep that as a least-common-denominator fallback position (such as software rendering).

The little adapter I ordered that converts analog video to 720p or 1080p just came today, and even a lowly PIC or AVR can output analog video with just a couple of output pins and a couple of resistors, so it would be cool to see what kind of minimal immersive experience could be had by connecting an Oculus Rift to a PIC processor.
:D

Now, I plan to start with processing raw pixels directly in framebuffers, and work up from there, for the reasons described above. One common denominator for framebuffer access that works with LOTS of different computers is SDL (Simple DirectMedia Layer):
http://en.wikipedia.org/wiki/Simple_DirectMedia_Layer

There is a ton of software out there that can use SDL; one example is Doom (or prboom). And OGRE can also run on top of SDL. But I plan to start by using SDL just for device-independent framebuffer access.

I will show how to implement line and arc drawing, and bitmap and vector character sets (Hershey fonts), using a Raspberry Pi. That means I should get around to ordering one soon. I like the fact that the RasPi can output video directly to HDMI, so it does not need to depend on the "analog video to HDMI" adapter I just got today.
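As a taste of the vector primitives mentioned above, here is a textbook Bresenham line drawn directly into a framebuffer (a list of rows). This is a standard integer-only algorithm, not code from this thread, but it is exactly the kind of primitive that can later be pipelined with the filters described earlier.

```python
def draw_line(fb, x0, y0, x1, y1, color=1):
    """Plot a line into framebuffer fb using Bresenham's
    integer-only error-accumulation algorithm."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        fb[y0][x0] = color
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

fb = [[0] * 5 for _ in range(5)]
draw_line(fb, 0, 0, 4, 4)       # main diagonal
for row in fb:
    print(row)
```

Arcs and the Hershey vector fonts reduce to repeated calls of a primitive like this one.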

After basic vector art primitives, I will move on to bitmap graphics and procedural texturing. And then we can do the all-important displacement mapping. Why is that important? We will use simple displacement mapping to create a pre-warp filter for the Oculus Rift (and another displacement map for my "anamorphic-like" fresnel lens stacks).

A displacement map is just a bitmap image whose colored pixels represent how much to MOVE the "input pixel pointer" away from its normal X and Y coordinates while blitting an image from one location to another. Displacement mapping can also be pipelined into other graphics primitives (such as the ones described above) to draw directly to the output display buffer without blitting between buffers, which minimizes latency.

A displacement image typically contains two color planes such as red and green. We can use the color value as a signed number that specifies how much to move the input pixel pointer from its normal position. For example, a color value with RED=0 and GREEN=0 would put the pixel in its normal location (no offset). A value of RED=8 would move right 8 pixels, and RED=-12 would move left 12 pixels. GREEN=23 moves down 23 pixels, and GREEN=-1 moves up one pixel from its default input position.
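Here is a minimal sketch of that displacement-map blit (my own illustration, not code from this thread). Instead of packed red/green color planes, `disp` simply holds signed `(dx, dy)` offsets per output pixel; each output pixel is fetched from the source at `(x + dx, y + dy)`, clamped to the source bounds.

```python
def displace_blit(src, disp):
    """Blit src through a per-pixel (dx, dy) displacement map:
    each output pixel fetches its input from an offset position."""
    h, w = len(src), len(src[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = disp[y][x]
            sx = min(max(x + dx, 0), w - 1)   # clamp fetch to image edges
            sy = min(max(y + dy, 0), h - 1)
            out[y][x] = src[sy][sx]
    return out

# Identity map (all zero offsets) must reproduce the source exactly.
src = [[1, 2], [3, 4]]
ident = [[(0, 0)] * 2 for _ in range(2)]
assert displace_blit(src, ident) == src

# A map with every fetch offset one pixel right acts like a 1-pixel scroll.
shift = [[(1, 0)] * 2 for _ in range(2)]
print(displace_blit(src, shift))   # [[2, 2], [4, 4]]
```

The RED/GREEN encoding described above is just this same table stored as signed color values in an ordinary bitmap.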

Using a properly constructed displacement map will warp an image in such a way that, when the warped image is viewed through lenses that cause distortion when placed very close to the eye, the resulting image appears totally correct. In other words, we intentionally warp the image so that the distortion in our lenses puts the pixels back where they belong. The pre-warp can do more than just remove unwanted lens distortion. It can also do anamorphic (asymmetrical) stretching, to convert a portrait mode SBS-Half image into a landscape mode (wide-FoV) image.

Our displacement maps can be calculated, and then adjusted empirically while viewing test patterns through our lenses until everything looks correct. For our displacement maps to work on multiple devices (such as my fresnel lens stacks), they will need to be constructed carefully so that they can be reproduced without needing custom displacement map images for each unit.
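As a hedged sketch of the "calculated" starting point, here is one way a pre-warp displacement map could be generated from a simple radial lens model. The cubic barrel term and the `k = 0.2` coefficient are illustrative assumptions only, not measured values for any real lens or lens stack; as described above, a real map would then be tuned empirically against test patterns.

```python
import math

def radial_prewarp_map(w, h, k=0.2):
    """Build a (dx, dy) displacement map whose fetch offsets point
    outward from the optical center and grow as k * r^3, compressing
    the displayed image toward the center (barrel pre-distortion)."""
    cx, cy = (w - 1) / 2, (h - 1) / 2
    norm = math.hypot(cx, cy)          # normalize corner radius to ~1.0
    disp = []
    for y in range(h):
        row = []
        for x in range(w):
            rx, ry = (x - cx) / norm, (y - cy) / norm
            r2 = rx * rx + ry * ry     # squared normalized radius
            row.append((round((x - cx) * k * r2),
                        round((y - cy) * k * r2)))
        disp.append(row)
    return disp

m = radial_prewarp_map(9, 9)
print(m[4][4])   # (0, 0): no displacement at the optical center
print(m[0][0])   # (-1, -1): corners fetch from furthest outward
```

Feeding a map like this into the displacement blit gives the center-sharp, edge-compressed image that a magnifying lens then stretches back out.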

... More to come later ...

_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Image


Sun Mar 03, 2013 1:10 am
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
Regarding my thoughts about using the Raspberry Pi as a portable gaming device to be used with an Oculus Rift, here is an example of a RasPi app that I would like to port to the Rift:

https://web.archive.org/web/20130514124 ... rena?adult

Image

Image

Image

This needs to be head-tracked and pre-warped for the Rift. If the RasPi can do this in stereoscopic 3D too, all the better!

Remember that by decoupling head tracking from rendering, we can still have (a portion of) the view track our head rotation quickly, keeping us immersively anchored in the VR reference frame, even while the game world around us updates at a slower pace. That requires rendering a larger FoV than our HMD can display, so we have some margin to look into when we move our heads.
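A toy sketch of that decoupling (my illustration, not code from this thread): the game renders an oversized framebuffer, and a cheap per-frame crop slides the visible window around inside it to follow head rotation, even when full re-renders are slow.

```python
def crop_view(big, view_w, view_h, yaw_px, pitch_px):
    """Extract a view_w x view_h window from the oversized frame `big`,
    offset from its center by (yaw_px, pitch_px), clamped to the margin."""
    H, W = len(big), len(big[0])
    x0 = min(max((W - view_w) // 2 + yaw_px, 0), W - view_w)
    y0 = min(max((H - view_h) // 2 + pitch_px, 0), H - view_h)
    return [row[x0:x0 + view_w] for row in big[y0:y0 + view_h]]

# 6x6 "rendered" frame, 2x2 visible window, head turned right by 1 pixel.
big = [[10 * y + x for x in range(6)] for y in range(6)]
print(crop_view(big, 2, 2, 1, 0))   # [[23, 24], [33, 34]]
```

The clamping is what the "margin" buys: small head motions stay inside the pre-rendered frame, so the view updates at crop speed rather than render speed.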

But hey, this game for the RasPi shows that we can do more with it as a Rift gaming device than just simple "90's era" VR games.
:D

And because most 7-inch tablet PCs are MUCH more powerful than a RasPi, something like this should be a "piece of cake" for a Fov2Go-style HMD add-on for a tablet PC.



Last edited by geekmaster on Tue Aug 16, 2016 9:56 am, edited 5 times in total.



Sun Mar 03, 2013 2:31 am
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
And here is Pi3D, a 3D Python graphics library for the Raspberry Pi:
http://store.raspberrypi.com/projects/skillmanmedia

Pi3D source code:
https://github.com/tipam/pi3d

Image

Image

Image

With the source code, we can add Rift (and Rift-like clone) support to Pi3D.

EDIT: I found a configuration script for the RasPi that makes it display the desktop in SBS-Half format, which might work okay with the Rift, or with a tablet based HMD (after pre-warp is added).



Last edited by geekmaster on Tue Aug 16, 2016 9:58 am, edited 3 times in total.



Sun Mar 03, 2013 2:38 am
Profile
Binocular Vision CONFIRMED!

Joined: Thu Jun 07, 2012 8:40 am
Posts: 237
Location: New York
Reply with quote
Interesting read so far. Very interested in seeing where this and foisi's HMD are going.

_________________
Ibex 3D VR Desktop for the Oculus Rift: http://hwahba.com/ibex - https://bitbucket.org/druidsbane/ibex


Sun Mar 03, 2013 9:40 am
Profile WWW
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
druidsbane wrote:
Interesting read so far. Very interested in seeing where this and foisi's HMD are going.
It is a good thing to have multiple options in HMDs, at different price points and for different applications. In my experimental approach to HMD designs, I am using minimum cost and maximum simplicity (at the expense of image quality) as my fundamental guidelines. I think that in the tradeoff between immersion and visual fidelity, when using low cost portable computers, immersion has to come first. We can use high quality and custom optics later. I am just exploring some options in basic design concepts for now, while waiting for my Rift.
:D



Sun Mar 03, 2013 10:44 am
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
I just did another experiment with lens stacking. This time I stacked my 6x and 5x reading glasses, then viewed the Oculus test image on my Nexus 7. With that much magnification, it focused closely, but not closely enough. I could see a bit of both overlapped images (including part of the wrong image), and I could see the outer borders of the display.

Next, I added a pair of 3x reading glasses on top. Those have larger lenses, so they fit nicely. My ears and nose managed to hold them all in place. With all three pairs (6x, 5x, and 3x) stacked, the test image filled the FoV of my reading glasses, without any of the borders or any of the wrong image visible. This worked quite well. One notable property is that these glasses have meniscus lenses, which cause almost no warp distortion or chromatic aberration even at such a close distance to the display. The pre-warp for reading glass lens stacks would need to be much less than what the Rift uses, and perhaps no pre-warp at all would be acceptable. That gives a much sharper display than warping does, and would be MUCH better for displaying text, such as a Windows desktop. The downside is that the FoV is limited by the eyeglass frames (the same as in Real Life). Also, the pixel density is pretty much the same all over the display.

So we have FOUR options:
1) Oculus Rift, with distortion but increased central pixel density. This has SDK support and some games.
2) Eyeglass lens stacks, with almost no distortion, even pixel density, and increased text readability. Things look curved when viewed with Oculus pre-warp.
3) Fresnel lens stacks very close to the eyes, for "anamorphic-like" (dual offset) stretching from portrait to landscape mode, giving a greatly increased horizontal FoV.
4) Extreme fresnel lens stacks curved around the face, for "total" FoV, but at the expense of a lot more distortion.

All options require custom pre-warp that is different for each lens option. Fresnel offset stacks have much more critical pre-warp requirements, to prevent vertical misalignment of the stereo pair.

And recent experiments using the Oculus test image confirm that non-overlapped images work great (as long as you cannot see part of the wrong image). For reference, here is that image again:

Image

If you free-view the above wide-eyed image pair, you may want to hold a divider (a sheet of dark paper) sticking out from the screen, between the images, extending toward the bridge of your nose. That is usually not necessary with fully overlapped images, but with these non-overlapped images it is pretty important.



Last edited by geekmaster on Tue Aug 16, 2016 10:00 am, edited 1 time in total.



Mon Mar 04, 2013 12:21 am
Profile
Golden Eyed Wiseman! (or woman!)
User avatar

Joined: Tue Feb 12, 2008 5:22 am
Posts: 1515
Reply with quote
Can you show a picture of your lens "stack"? From memory, fresnels seem to do very little when stacked flat together without a gap between the lenses. My headset has a decent gap between them to get the FOV increase.

Image

_________________
"I did not chip in ten grand to seed a first investment round to build value for a Facebook acquisition."
Notch on the FaceDisgrace buyout.


Mon Mar 04, 2013 1:39 am
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
Okta wrote:
Can you show a picture of your lens "stack". From memory Fresnel's seems to do very little when stacked flat together without a gap between lenses. My head has a decent gap between them to get the FOV increase.
I posted photos earlier in this thread. My stacks do not use only the lens centers, and the lenses are flipped over from how you have them. I am exploiting the offset property of looking through the outer portions of the lenses, which act like prisms because they model a portion of a larger lens that is thick on one edge and thin on the other. I am stacking opposite sides of the larger lens, which is effectively the same as a DIY anamorphic lens. But because my "prisms" are cut from a convex lens, they also have some magnification and distortion (similar to that of the Rift lenses). The magnification is significant, but not what you get by spacing them apart. That is why I had to stack FIVE lenses in another design when wrapping them around my face from ear to ear.

To see my photos, look back through this thread. To understand how they work, read these linked references:

Offset lenses:
http://www.hcinema.de/pdf/infocus-in72- ... set-en.pdf

DIY anamorphic prism stacks:
http://www.zuggsoft.com/theater/prism.htm

Remember, I am exploiting the DISTORTION properties of these fresnel lens stacks much more than their magnification. Empirically, stacking three 3x lenses so that they touch gives about 5x vertical magnification, but horizontally it varies from about 6x on the left to (just estimating here) perhaps 18x on the right. The complex distortion in my lens stacks can be corrected with suitable pre-warp software, similar to that used by the Rift.

EDIT: Here is a photo of a fresnel lens stack from an earlier post (click the photo for a larger version):

Image

The lens stack shown above contains the left 1/3 and right 1/3 of a 6-inch fresnel page magnifier. These "offset lenses" stretch the image horizontally much more than vertically, because their physical centers are offset 2 inches to the right and 2 inches to the left of the original optical center in the middle 1/3 of the original magnifier. As you can see in the photo, the corner of the lens stack actually rests on the nose piece, and actually touches my face between my nose and cheek. They sit so close to my eyes that my eyelashes slightly brush against them. I also add a third magnifying lens to the stack when using them (cut from the center 1/3 of the original fresnel page magnifier), to stretch the images evenly so I cannot see the borders. This gives a huge FoV. You can get a better idea of how and where they are mounted from more photos in this post:
viewtopic.php?f=140&t=16373&start=60#p103461

Keep in mind that viewing images without proper pre-warp correction applied will cause stereoscopic convergence problems in the outer areas of the image, but the image center and low-parallax areas with distant objects look pretty good. Just know that the images CAN be corrected in software, and pay most attention to WHERE you can see all those pixels. I still need a way to mount my 7-inch display (tablet PC, or LCD panel) in front of the safety goggles at the right distance, which as I recall is about two inches or less.

This is a difficult concept to understand, and it is really important to actually try it yourself to see how it works. Photos and descriptions alone are not adequate. I intentionally used parts you can get at a local dollar store (or hardware stores and book stores), just so that you CAN duplicate my experiments quickly, easily, and inexpensively. Just DO it!

Enjoy! :D



Mon Mar 04, 2013 7:43 am
Profile
Petrif-Eyed
User avatar

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
Reply with quote
Here is a simple "thought experiment" to show how an offset lens works:

1) Imagine a video projector is projecting a movie onto a movie screen.
2) Adjust your video content so that everything of interest falls into the right 1/3 of the screen.
3) Place a sheet of cardboard in front of the projector so that it casts a shadow on left 2/3 of the movie screen.
4) Remove the portions of the lens and video content that are not putting pixels onto the screen.

As you can imagine, the image on the screen is offset far from where the projector appears to be pointing.

5) Now slide the trimmed offset lens portion within the projector so that the image on the movie screen also slides, back toward the center of the FoV.

Because the offset portion of a lens behaves as a curved prism, it will stretch the image horizontally after you slide it as described. It will also add chromatic aberration, just like viewing a rainbow cast by a prism. Because this offset lens combines features of a magnifying lens and a prism, the resulting image is stretched outward, away from the original lens optical center.

6) Take another offset lens cut from the other side of the original lens, and place it in front of the first offset lens.

This has the effect of partially correcting the other lens's distortion, but in a nonlinear fashion. The net result is that the image is stretched most on the outer left and right sides, and somewhat less in the center. This is exactly what we need: it combines the non-linear stretching used to our advantage inside the Rift Dev Kits with an anamorphic effect that stretches a portrait mode SBS-Half image horizontally to fill much more of our available FoV, without also stretching vertically, which would waste vertical resolution by pushing onscreen pixels well beyond our vertical FoV.

As mentioned previously, the relatively low resolution of available displays at this time makes this a tradeoff between resolution, central pixel density, total FoV, pixel distortion, and chromatic aberration.

Ideally, we would have high quality meniscus lenses (magnifying reading glasses) that can focus on a display screen just one inch or so from our faces, and much higher screen resolution, so we can (mostly) eliminate the need for pre-warp correction.
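For readers who want to play with the asymmetric stretch in software, here is a hedged one-dimensional sketch of the kind of horizontal magnification ramp the offset fresnel stack produces. The linear 6x-to-18x ramp is only illustrative (it matches the rough estimate given earlier in this thread); a real pre-warp table would be measured empirically, and the pre-warp itself would apply the inverse of this mapping.

```python
def stretch_scanline(line, out_w, mag_left=6.0, mag_right=18.0):
    """Resample one scanline so that horizontal magnification ramps
    linearly from mag_left at the left edge to mag_right at the right
    edge (nearest-neighbor fetch)."""
    w = len(line)
    # Per-output-pixel magnification ramps linearly from left to right.
    mags = [mag_left + (mag_right - mag_left) * i / (out_w - 1)
            for i in range(out_w)]
    # Normalize so the whole source span (w - 1) is consumed exactly.
    scale = (w - 1) / sum(1.0 / m for m in mags)
    out, src_x = [], 0.0
    for m in mags:
        out.append(line[min(int(src_x), w - 1)])
        src_x += scale / m    # higher magnification -> smaller source step
    return out

line = list(range(8))
out = stretch_scanline(line, 24)
print(out)   # left side sweeps the source quickly; right side lingers
```

Each source pixel on the high-magnification (right) side is repeated across more output pixels, which is exactly the "stretched most on the outer sides" behavior described above, reduced to one scanline.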



Mon Mar 04, 2013 10:06 am
Profile
Display posts from previous:  Sort by  
Reply to topic   [ 104 posts ]  Go to page Previous  1, 2, 3  Next

Who is online

Users browsing this forum: No registered users and 2 guests


You cannot post new topics in this forum
You cannot reply to topics in this forum
You cannot edit your posts in this forum
You cannot delete your posts in this forum
You cannot post attachments in this forum

Powered by phpBB® Forum Software © phpBB Group
Designed by STSoftware.