Yet another 2D - 3D converter...

3Tree
One Eyed Hopeful
Posts: 13
Joined: Wed Jan 09, 2013 7:45 pm

Yet another 2D - 3D converter...

Post by 3Tree »

Anyone heard of 3-dvision? It's an upcoming 2D-to-3D converter. They had a crowdfunding contest that was supposed to accept designs for the final product. I was talking about it here: http://www.mtbs3d.com/phpbb/viewtopic.p ... &start=300

http://www.fundable.com/3-dvision

It's supposed to work pretty well.
cybereality
3D Angel Eyes (Moderator)
Posts: 11406
Joined: Sat Apr 12, 2008 8:18 pm

Re: Yet another 2D - 3D converter...

Post by cybereality »

Sounds interesting, and I wish them the best of luck, but I seriously doubt it will work very well (if at all).
voliale
One Eyed Hopeful
Posts: 30
Joined: Fri Nov 09, 2012 2:32 pm

Re: Yet another 2D - 3D converter...

Post by voliale »

Contd. from another thread..
3Tree wrote:
voliale wrote:
3Tree wrote:He mentions that he uses some concepts from holography to help with the conversion.
I must have missed that part, do you remember in which video (and at what timecode, as the videos are very long) he said that? :)
I was hoping you would find it since they gave a lot of information lol.

Here we go: http://youtu.be/AlWckIQNncs?t=27m30s

The whole video is a great watch though and he talks about the rotoscopy method and the lenticular method, what's good/bad about them..etc.
Thanks! I watched it again, but isn't he rather talking about how holography will fix the problems with stereoscopic 3D by replacing it altogether?

http://www.killerstartups.com/bootstrap ... vel-again/

And a bit more (although vague) info on the tech:
Can you talk a bit about your technology, and how it's different from other 3D technology out there?

Gene Dolgoff: There are basically three kinds of 3D converters, besides ours. One is the manual technique, which uses rotoscoping, which requires a graphic artist to sit at a workstation and convert frame-by-frame into 3D. To convert a single full-length movie takes hundreds of graphic artists, four months, and five to 15 million dollars. That's only good for blockbuster movies, and that's prohibitive for television. There are also automatic converters. There are two kinds besides ours. One is really fake, and is the "simulated 3D" that most converters use, which essentially offsets different lines on the screen by different amounts, which makes a picture look like it has different depth. It doesn't have any correlation to the real depth in the scene. The second kind uses something called depth maps, which are algorithms that assign different properties to different depths; for example, you might look at the brightness of an image to try to determine depth, which has a little more to do with actual depth and is more accurate. The problem there is that it requires lots more computing overhead, and requires a much bigger, more expensive machine. That usually causes lag, and that's prohibitive when you're playing a game.

Then, there is our system. Our system uses a different technique, based on the study of the human brain over the past 40 years. We look at two frames at a time, and look at all the 3D factors. You'll notice that most of the time, objects with more depth show differences in brightness, contrast, and color saturation, and sit higher in the frame. Plus, when a camera moves back and forth, the slower an object moves, the further back it is. All of these factors are taken into account, and we get an image which is stereoscopically aligned with the algorithm in our brain. The workload of our computing is greatly reduced, and lots of this is done in the human brain. The actual depth information is detected and placed in a lot of the areas of the scene. It's not the actual depth, but when the brain sees actual depth in those areas, it fills in the rest of the missing depth information by remembering its previous 3D experience. That's how this is able to create good 3D with accuracy, yet with low compute overhead.
http://www.socaltech.com/GodOFSpandex/s-0044318.html
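To illustrate the "simulated 3D" he dismisses: I imagine it as something like this toy sketch (my own guess at what such converters do, not anything from 3-dvision). Each row gets a fixed horizontal offset that has nothing to do with the actual depth in the scene:

```python
import numpy as np

def fake_3d_offset(frame: np.ndarray, max_shift: int = 4) -> np.ndarray:
    """Crude 'simulated 3D': shift each row of the frame horizontally
    by a fixed amount that depends only on the row index, not on the
    real depth of anything in the scene."""
    h = frame.shape[0]
    right = np.empty_like(frame)
    for y in range(h):
        # Fixed per-row shift pattern: larger shifts lower in the frame.
        shift = int(round(max_shift * y / max(h - 1, 1)))
        right[y] = np.roll(frame[y], -shift, axis=0)
    return right

# The left eye sees the original frame, the right eye the shifted one.
left = np.arange(32, dtype=np.uint8).reshape(4, 8)
right = fake_3d_offset(left, max_shift=3)
```

No wonder it "doesn't have any correlation to the real depth in the scene" -- the offsets are pure screen geometry.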
A-ha, I think I understand it now.

Instead of detecting and tracking features over several frames, he compares only two frames. That explains how he is able to do it in real time.

No tracking, just brute force, pixel by pixel, and then use statistics to do a best guess on each pixel. And even if the results are strange and noisy and many pixels are wrong, he's able to make it sufficiently "stereoscopically aligned with the algorithm in our brain" (might this be where he, allegedly, uses principles from holography?) that our brain accepts it, fills in the details, fixes it in post.
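If I read him right, a per-pixel cue-combination heuristic could look roughly like this. To be clear, this is purely my speculation about the kind of statistics involved -- the choice of cues and the weights are made up by me, this is not Dolgoff's actual algorithm:

```python
import numpy as np

def depth_cues(frame: np.ndarray) -> np.ndarray:
    """Speculative per-pixel depth guess from monocular cues, loosely
    following the interview: brightness, local contrast, and vertical
    position in the frame (higher = farther). Returns values in [0, 1],
    larger = nearer. Cue weights are arbitrary illustrations."""
    f = frame.astype(np.float64) / 255.0
    h, w = f.shape
    brightness = f
    # Local contrast: absolute difference from the horizontal neighbour.
    contrast = np.abs(np.diff(f, axis=1, append=f[:, -1:]))
    # Height cue: 0 at the top row, 1 at the bottom; lower = nearer.
    height = np.linspace(0.0, 1.0, h)[:, None] * np.ones((1, w))
    nearness = 0.4 * brightness + 0.3 * contrast + 0.3 * height
    return np.clip(nearness, 0.0, 1.0)
```

Each pixel only needs its local neighbourhood and two frames, which would explain the low compute overhead -- and also the noise, which the brain then has to "fix in post".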

Does it work? I don't know. But very cool if it does.
Fredz
Petrif-Eyed
Posts: 2255
Joined: Sat Jan 09, 2010 2:06 pm
Location: Perpignan, France
Contact:

Re: Yet another 2D - 3D converter...

Post by Fredz »

It can work quite well on static scenes with aligned images using the best algorithms available (see Middlebury Stereo Evaluation), but you'd need more than two frames and a lot more complex algorithms to do it right for videos with moving or deformable objects.
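For anyone curious, the basic two-frame matching those benchmarks evaluate can be sketched as a naive SAD block matcher. This is a toy illustration only, nowhere near the top Middlebury entries:

```python
import numpy as np

def block_match_disparity(left: np.ndarray, right: np.ndarray,
                          max_disp: int = 8, radius: int = 2) -> np.ndarray:
    """Naive two-frame stereo: for each pixel in the left image, find
    the horizontal shift (disparity) that minimises the sum of absolute
    differences (SAD) over a small window in the right image."""
    h, w = left.shape
    L = left.astype(np.float64)
    R = right.astype(np.float64)
    disp = np.zeros((h, w), dtype=np.int64)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = L[y - radius:y + radius + 1, x - radius:x + radius + 1]
            best, best_d = np.inf, 0
            # Only consider shifts that keep the window inside the image.
            for d in range(0, min(max_disp, x - radius) + 1):
                cand = R[y - radius:y + radius + 1,
                         x - d - radius:x - d + radius + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Even this toy version shows why moving or deformable objects break things: the search assumes a purely horizontal shift between perfectly aligned frames.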
voliale
One Eyed Hopeful
Posts: 30
Joined: Fri Nov 09, 2012 2:32 pm

Re: Yet another 2D - 3D converter...

Post by voliale »

Fredz wrote:It can work quite well on static scenes with aligned images using the best algorithms available (see Middlebury Stereo Evaluation), but you'd need more than two frames and a lot more complex algorithms to do it right for videos with moving or deformable objects.
Cool link, thanks!
3Tree
One Eyed Hopeful
Posts: 13
Joined: Wed Jan 09, 2013 7:45 pm

Re: Yet another 2D - 3D converter...

Post by 3Tree »

voliale wrote:Contd. from another thread..


Thanks! I watched it again, but isn't he rather talking about how holography will fix the problems with stereoscopic 3D by replacing it altogether?


A-ha, I think I understand it now.

Instead of detecting and tracking features over several frames, he compares only two frames. That explains how he is able to do it in real time.

No tracking, just brute force, pixel by pixel, and then use statistics to do a best guess on each pixel. And even if the results are strange and noisy and many pixels are wrong, he's able to make it sufficiently "stereoscopically aligned with the algorithm in our brain" (might this be where he, allegedly, uses principles from holography?) that our brain accepts it, fills in the details, fixes it in post.

Does it work? I don't know. But very cool if it does.
I should have mentioned it the other day, but yes, it seems he is pretty much using the holographic method in place of the other method(s). And yes, he was mainly talking about the cons of the two stereoscopic methods and how his method would address their faults. He is supposed to be working with a display company that is helping him bring his technology over to TVs. I suspect the company is mentioned in this thread: http://www.avsforum.com/t/1451497/autos ... y-distance Interestingly, the person in that thread mentions the term autoscopic. I suppose that is what they would call his method instead of stereoscopic, assuming the poster didn't just coin the term.

I wouldn't know as I'm not a 3D researcher lol. I thought he said he was using concepts from holography from what he mentioned in that video and from a previous article.

All the articles on the fundraiser site seem pretty positive and mention that it works, but the drawbacks were that very old 2D video doesn't convert so well, plus the typical display-related issue of crosstalk.
