
Explanation of "2D plus Depth"?

Posted: Wed Mar 21, 2012 7:38 am
by Fasty
I'm currently writing an article (for the MTBS bonanza contest!) and I want it to be as accurate as possible while still explaining the definition of "2D plus Depth" in an easy-to-understand manner.

Here is what I have so far; it applies to gaming (I know 2D plus depth / 3D conversions are probably done a bit differently in film):
[2D plus depth] is where, to save on the processing power required to render two separate images for the left and right eye, the game renders one image, then overlays that image onto depth data generated from the 3D geometry in the scene. Some computer trickery is then applied to try and fill in the information that is missing from the second image, i.e. areas that the left eye could see but the right eye couldn't (the background behind an object close to the screen, for example).
Is this roughly correct?

Re: Explanation of "2D plus Depth"?

Posted: Wed Mar 21, 2012 12:32 pm
by Fredz
It's a good illustration, but I don't think the "overlay of the image on the depth data" really reflects the inner mechanism of the technique, even if the result is the same in the end.

Basically there is a shader which reads the depth buffer value for each pixel in the image, then calculates that pixel's shifted horizontal position for the second viewpoint using simple geometric projection, and finally sets the pixel at that new position to the same color as in the first image.
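Something like this, as a rough sketch (plain Python/NumPy rather than an actual shader, and the "separation" and "convergence" parameters are just illustrative names, not anybody's real API):

import numpy as np

def reproject(color, depth, separation=10.0, convergence=5.0):
    # color: (H, W, 3) image rendered for the first eye
    # depth: (H, W) positive eye-space depth per pixel (from the depth buffer)
    h, w, _ = color.shape
    right = np.zeros_like(color)
    best = np.full((h, w), np.inf)          # depth currently occupying each target pixel
    filled = np.zeros((h, w), dtype=bool)

    for y in range(h):
        for x in range(w):
            # Simple shift-based disparity: pixels at the convergence depth
            # don't move, nearer pixels shift one way, farther ones the other.
            d = int(round(separation * (1.0 - convergence / depth[y, x])))
            nx = x + d
            # Copy this pixel's colour to its shifted position, letting the
            # nearest surface win when several pixels land on the same spot.
            if 0 <= nx < w and depth[y, x] < best[y, nx]:
                best[y, nx] = depth[y, x]
                right[y, nx] = color[y, x]
                filled[y, nx] = True

    return right, filled, best              # unfilled pixels are the occluded zones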

Then, as you said, for the missing pixels in the second image (a.k.a. the occluded zones) some heuristics are used to fill the gaps in the generated image; these seem to differ between implementations (DDD Virtual 3D, Sony, Crytek).
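For illustration only (the heuristics DDD, Sony or Crytek actually use aren't public, so this is not what they really do), one very simple gap filler just stretches the background into each hole by copying the nearest filled neighbour with the larger depth:

def fill_gaps(right, filled, best):
    # right, filled, best: outputs of the reproject() sketch above
    h, w, _ = right.shape
    out = right.copy()

    for y in range(h):
        for x in range(w):
            if filled[y, x]:
                continue
            # Find the nearest filled pixels to the left and to the right.
            lx = x - 1
            while lx >= 0 and not filled[y, lx]:
                lx -= 1
            rx = x + 1
            while rx < w and not filled[y, rx]:
                rx += 1
            candidates = [i for i in (lx, rx) if 0 <= i < w]
            if candidates:
                # Prefer the deeper neighbour, so the background stretches
                # into the disocclusion instead of the foreground smearing.
                src = max(candidates, key=lambda i: best[y, i])
                out[y, x] = right[y, src]

    return out

With the previous sketch, the whole thing would be: right_view = fill_gaps(*reproject(color, depth)).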

Re: Explanation of "2D plus Depth"?

Posted: Thu Mar 22, 2012 2:13 am
by Fasty
Hm, ok, got it, thanks. I might just have to reword it slightly, but I really want to keep it as easy to understand as possible. Cheers!