Yeah that's what I wanted to see. I wanted to see a video in Stereoscopic player. But I guess that is not to be.
According to the Stereoscopic Player version history, 2D + depth is supported as an input option, so it should work not only with images but also with videos.
How does this work? Do you need a special camera? Is it a conversion process, and if so, is there software to convert from 2D to the 2D + depth format? Oh dear. So many questions.
For video games I think it should be quite easy to extract the Z-buffer (depth map) along with the 2D render to obtain a 2D + depth video (Z-buffer extraction is already part of shadow-mapping algorithms, for example). But I don't see the benefit of this over producing a proper stereoscopic rendering from a second point of view.
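To make the Z-buffer idea concrete, here is a minimal sketch (my own illustration, not tied to any particular engine) of turning raw OpenGL-style depth-buffer values into the grayscale depth map that 2D + depth formats expect. It assumes the usual perspective projection, where the stored values in [0, 1] are non-linear and must be linearized with the near/far clip planes, and it uses the common convention that white means near and black means far:

```python
import numpy as np

def linearize_depth(zbuf, near, far):
    """Convert non-linear perspective depth-buffer values in [0, 1]
    back to linear eye-space distances in [near, far]."""
    ndc = 2.0 * zbuf - 1.0  # back to normalized device coordinates [-1, 1]
    return (2.0 * near * far) / (far + near - ndc * (far - near))

def to_depth_map(zbuf, near, far):
    """Map linear depth to an 8-bit grayscale image:
    255 (white) = near plane, 0 (black) = far plane."""
    z = linearize_depth(zbuf, near, far)
    norm = (far - z) / (far - near)
    return (norm * 255.0).astype(np.uint8)

# Toy 2x2 depth buffer: one pixel on the near plane, one on the far plane
zbuf = np.array([[0.0, 1.0],
                 [0.5, 0.5]])
depth = to_depth_map(zbuf, near=0.1, far=100.0)
```

Doing this per frame alongside the color render would give you a 2D + depth video stream more or less for free, since the Z-buffer already exists for hidden-surface removal.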
For videos shot in 2D, the problem would be the same as for general 2D-to-3D conversion, and I'm not aware of any existing solution that produces acceptable results at this time.
For videos shot in 3D it should be more or less possible to extract the depth map at each frame with existing stereo-matching algorithms, but I really don't see any benefit in doing so, since you've already got a 3D video in the first place. The only use I can see for this would be to retarget 3D videos for different screen sizes and viewing distances, like what has been done here: A stereoscopic movie player with real-time content adaptation to the display geometry
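For what it's worth, the core of those stereo-matching algorithms can be sketched very simply. This is a naive block-matching toy of my own (real pipelines use much better methods, e.g. semi-global matching as in OpenCV's StereoSGBM): for each pixel in the left view, it searches a range of horizontal shifts into the right view and keeps the shift with the lowest sum of absolute differences, which gives a disparity map that is inversely proportional to depth:

```python
import numpy as np

def disparity_map(left, right, max_disp=16, block=5):
    """Naive block-matching stereo on grayscale images.
    For each left-image pixel, find the horizontal disparity d
    minimizing the SAD cost over a (block x block) window."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Running this per frame over a stereo pair yields exactly the depth map the 2D + depth format wants, which is why I say it's "more or less possible", just not obviously useful when you already have both views.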