Re: Development thread
Posted: Thu Jun 20, 2013 11:51 pm
ChrisJD wrote:Another set of tests if anyone has the time.
3 sets of images http://chrisjd32.imgur.com/
Pick the best in each set and indicate if it's still off or not.
Hi Chris,
Before I answer this, I want to show you something. I know we've been having a lot of discussion around convergence because Oculus VR has been promoting that you shouldn't have any convergence with VR rendering.
http://www.reallusion.com/crazytalk/hel ... Vision.htm
When I'm talking about convergence, I'm talking about offsetting the left/right images after they have been captured by the cameras (virtual or otherwise) to determine how much of the 3D appears inside or outside the screen. The argument against this with head mounted displays is that there is no screen, so in theory your eyes shouldn't need to cross, and these supposed out-of-screen effects shouldn't exist.
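For the programmers following along, the kind of convergence adjustment I mean can be sketched in a few lines. This is just a toy NumPy sketch of the idea (the function name, the sign convention, and the wrap-around shift are mine for illustration; a real renderer would crop or pad the edges instead):

```python
import numpy as np

def apply_convergence(left, right, offset_px):
    """Shift a captured stereo pair horizontally in opposite directions.

    left, right : H x W (x C) image arrays, already rendered/captured.
    offset_px   : pixels to shift each eye; shifting the eyes toward
                  each other makes near features cross (negative
                  parallax, 'out of screen'), shifting them apart
                  pushes the scene behind the zero-parallax plane.

    np.roll wraps pixels around the edge -- fine for a demo, but a
    production pipeline would crop or letterbox instead.
    """
    shifted_left = np.roll(left, offset_px, axis=1)
    shifted_right = np.roll(right, -offset_px, axis=1)
    return shifted_left, shifted_right
```

The point is simply that convergence is a post-capture horizontal offset, not a change to the cameras themselves.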
So, the first question...how do the words translate into practice?
This is a screen capture from the first Tuscany demo when you walk as close as possible to a candlestick. Tuscany is part of Oculus' SDK:
Remember that the left eye is on the left side, and the right eye is on the right side. Look at the candlestick positioning very carefully. It's crossed! By definition (there is no avoiding it), this is an out of screen experience.
Ok. So, Oculus added rules about no convergence, right? Check out the latest version of Tuscany:
In this case, Oculus tried to avoid negative parallax by erasing the object so you never get the chance to see it. Even if it's a bug, look at the candlestick base...definitely swapped! Not only is this limitation unnatural, it's unnecessary. Your eyes naturally converge on objects, and it's an important function to keep things visually interesting.
Now I'm not a programmer, so I can't speak to the mathematics you are struggling with on the drivers. I look at the visuals and the visual relationships to see if things are right or wrong...mathematics be damned!
Out of the mouths of babes: I think the first image from Tuscany is really the best one. So how can you mimic this behavior without having a Rift in your hands? Here are some litmus tests that may be helpful.
1. If you have a standard IPD, extrapolate where your pupils physically sit on each lens. Using SHOCT as an example, place a vertical red line on each eye's image to represent where on the lens the pupil actually is.
2. A driving game isn't the best example, but choose an object as far in the distance as possible. If you can get that object under the red line in both eyes, your math is in good shape. Remember that at infinity (or on the most distant object), your eyes point straight ahead and no farther apart than your IPD (divergence is a no-no).
3. Get extremely close to an object like a wall edge, a candlestick, etc. Just as Oculus' demo shows, it's OK if the images cross. The result should either be mono (zero parallax) or somewhat reversed. SHOCT's convergence test is a good visual indicator you can use, so having a Rift isn't necessary.
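To make the three tests concrete, the whole thing boils down to a sign check on where a matching feature sits relative to the red lines. Here is a tiny sketch (the function name and the "offset from the red line" convention are mine, purely for illustration):

```python
def classify_parallax(d_left, d_right):
    """Classify parallax from offsets relative to the red IPD lines.

    d_left  : horizontal offset (pixels) of a feature from the left
              eye's red line (positive = feature is right of the line)
    d_right : the same feature's offset from the right eye's red line

    separation = d_right - d_left
      = 0 : under the line in both eyes -> eyes parallel, i.e. the
            distant-object case of test #2
      < 0 : images crossed -> negative parallax, the close-up case
            of test #3
      > 0 : eyes forced wider than parallel -> the divergence "no-no"
    """
    separation = d_right - d_left
    if separation < 0:
        return "negative (crossed)"
    if separation > 0:
        return "divergent"
    return "zero"
```

If a driver passes test #2 with "zero" on the farthest object and produces "negative (crossed)" only when you push right up against geometry, the separation/convergence math is behaving like the first Tuscany capture.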
I apologize in advance for not going through every image in your list. However, based on Valez's input, I chose image 160 to use as an example. Below is the original:
I know you don't have a Rift or 3D display to work with. As there isn't anything close to the eyes in this scene, I think the image should look something like this:
This is just a 3D map to work from. Until the IPD lines are in place, there is no way to know if the game's separation and convergence are correct. The hands aren't reversed, by the way - the interlaced image is a little misleading that way.
I hope this helps.
Regards,
Neil