We have a proof of concept 3D lesson for everyone today! Unless you have a medical limitation, you are probably able to perceive 3D, and this is how it works: if you close your eyes one at a time, you will see that each eye gets a slightly offset view of the scene. In addition to being offset, each eye also sees areas around object edges that the other eye cannot. Your brain takes the unique images provided by your left and right eyes and combines them into a single picture. This picture includes the depth we all take for granted.
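As a rough illustration of why that offset matters, here is a toy sketch of the classic pinhole-stereo relation (depth = focal length × baseline ÷ disparity). The numbers below (focal length in pixels, eye separation) are made up for illustration and are not from this article:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: depth = f * B / d.
    A nearer object shifts more between the two views (larger disparity)."""
    return focal_px * baseline_m / disparity_px

# Illustrative values only: 800 px focal length, 6.5 cm eye separation.
near = depth_from_disparity(800, 0.065, 40)   # 40 px shift between eyes
far  = depth_from_disparity(800, 0.065, 4)    # 4 px shift between eyes
print(near, far)  # 1.3 m versus 13.0 m
```

The point is simply that the amount of horizontal shift between the two views is what encodes how far away something is.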
In the S-3D gaming market, there are two dominant schools of thought. The first is rendering complete left and right camera views, which in some cases can require double the processing power. Most PC stereoscopic 3D drivers and a selection of console games do this 100% of the time. The second option is 2D+Depth, which renders only a single camera view and, based on the game's inherent Z-buffer information, places objects at different depths. This is advantageous because there is little to no loss of performance, but you lose the extra information that a second camera would normally provide. So how important is this extra information?
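To make the trade-off concrete, here is a minimal sketch of how a 2D+Depth style driver might synthesize a second view: shift each pixel horizontally by a disparity derived from its depth value, keeping the nearest pixel when two land in the same place. All values are invented for illustration; real drivers work on the game's full-resolution Z-buffer, not a tiny array like this. Notice the "holes" left behind the foreground object, which is exactly the information a second rendered camera would have supplied:

```python
import numpy as np

def synthesize_view(image, depth, max_disparity):
    """Shift each pixel horizontally by a disparity derived from its depth.
    Nearer pixels (smaller depth) shift more. Returns the new view and a
    mask of 'holes': pixels the single source camera never saw."""
    h, w = image.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    zbuf = np.full((h, w), np.inf)  # keep the nearest pixel on collisions
    disp = np.round(max_disparity * (1.0 - depth)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disp[y, x]
            if 0 <= nx < w and depth[y, x] < zbuf[y, nx]:
                out[y, nx] = image[y, x]
                zbuf[y, nx] = depth[y, x]
                filled[y, nx] = True
    return out, ~filled

# Tiny scene: a bright foreground object over a dark background.
img = np.full((4, 8), 10, dtype=np.uint8)
img[:, 3:5] = 200        # foreground object
dep = np.ones((4, 8))    # background at maximum depth (no shift)
dep[:, 3:5] = 0.2        # object is near, so it shifts the most
view, holes = synthesize_view(img, dep, 3)
print(holes.sum())  # 8 disoccluded pixels with no data behind the object
```

Real systems paper over these holes by stretching or interpolating neighboring pixels, which is why 2D+Depth can look convincing head-on but struggles with strong pop-out effects.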
Microsoft’s Kinect has been getting a lot of headline space because it’s a new type of game controller that doesn’t require you to hold anything in your hands. It determines your body’s positioning with the help of a stereoscopic 3D camera. Oliver Kreylos’ video (inadvertently) demonstrates the importance and quantity of extra information provided by a second camera view:
If 2D+Depth were shown on camera, we theorize that straight on, it would look the same as this video. However, when the camera is rotated, the objects would sit on different depth planes and appear as paper-thin elements, like a deck of cards standing up on a table. It's this lack of visual information that makes it difficult for 2D+Depth to have convincing out-of-screen effects. Is it worth the trade-off in performance? Share your thoughts!
UPDATE! On closer inspection, while Kinect does have two cameras, they aren't of equal resolution. One is 640x480, and one is 320x240. The 320x240 unit is used for the depth information capture, and it is offset from the first. So let's leave it to the MTBS membership to decide. Is Kinect in fact 2D+Depth, making this article's theory wrong? Or is it more of a true stereoscopic 3D capture device? How do you differentiate?
Prizes are still going out for the U-Decide winners, and several more digital download keys will be going out today.
Congratulations to Mark Stuart, winner of the SteelSeries Siberia V2 Headphones! If you receive your prize in physical form and can get a picture of yourself with it, please let us know!