Any chance of pants-shitting VR in System Shock 2?

Yes, with VorpX and the freely available System Shock II DX9 patch.
SS2 support for Vireio Perception might be added in the future, of course, if someone implements it.
2EyeGuy wrote: Speaking of giving any date... when will you be releasing Vireio Perception? And when can we start helping with the code?

For sure, before the end of the month. Maybe sooner, stay tuned.
Mart wrote: Additionally it would be great to have a "vote with your money" facility to drive game compatibility. People would select (or suggest) a game and then make a donation/pledge. Once a developer has added support for a game, they receive the donations tied to that game.

What about offering games to Cyber using the gift option on Steam?
loserspearl wrote: Is there any word on whether or not you will add support for Metro 2033/Metro: Last Light? It would be the only reason I would buy the Oculus Rift or the driver support.

This is my kind of guy right here.
cybereality wrote: In particular, it will pre-warp the image to match the Oculus Rift optics, handle custom aspect ratios (needed for the Rift's strange 8:10 screen), and utilize full 3DOF head-tracking.

3Tree wrote: Will we be able to turn off the 3D option and utilize just these features? I wanted to be able to use your driver with this upcoming device in the future: http://www.fundable.com/3-dvision

Hi 3Tree, interesting project. I watched the videos and this guy seems like a true 3D enthusiast. However, I've looked into (supervised) 2D-to-3D conversion before and I'm skeptical.
voliale wrote: Hi 3Tree, interesting project. I watched the videos and this guy seems like a true 3D enthusiast. However, I've looked into (supervised) 2D-to-3D conversion before and I'm skeptical.

Thanks for the link; I'm checking it out right now.
The problem is, even though a 2D image contains a lot of depth cues (parallax, scale, shapes, focus, color saturation, brightness, etc.), and even if the computer could derive a perfect 3D solution (almost impossible), it still has to fill in a lot of information that is simply not present in the 2D image. And all that in real time... (movies would be easier, as they can be processed and buffered in advance).
Check out the two videos here; they explain the process well: http://www.yuvsoft.com/products/2d-to-3d-conversion/
My guess is that he uses these same techniques and just throws a lot of processing power at it; look at the size of that hardware. The results might be good, perhaps even surprisingly good. However, he mentions that it doesn't have to be perfect because our brain corrects for errors. But with the Rift's massive field of view, I'm not so sure our brains would be that forgiving.
And with his credentials and proven track record, if he really has come up with a revolutionary device, why did he have to turn to a crowdfunding page?
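To make the fill-in problem concrete, here's a minimal sketch of depth-image-based rendering (DIBR), the usual way a depth map gets turned into a second eye view. This is a hypothetical illustration, not code from any converter discussed in this thread, and the function name and parameters are my own:

[code]
# Hypothetical DIBR sketch: shift pixels horizontally by a depth-derived
# disparity to synthesize a second view. The disoccluded "holes" are
# exactly the information missing from the original 2D image.
import numpy as np

def render_novel_view(image, depth, max_disparity=16):
    """image: HxWx3 uint8, depth: HxW floats in [0, 1] (1 = near).
    Returns the synthesized view and a mask of holes still to be filled."""
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    disparity = (depth * max_disparity).astype(int)
    # Render far-to-near so that nearer pixels overwrite farther ones.
    for d in range(max_disparity + 1):
        ys, xs = np.nonzero(disparity == d)
        xs_new = np.clip(xs + d, 0, w - 1)
        out[ys, xs_new] = image[ys, xs]
        filled[ys, xs_new] = True
    holes = ~filled  # no source pixel maps here: content must be invented
    return out, holes
[/code]

The holes mask is the background-reconstruction problem in a nutshell: those pixels were never filmed, so something has to invent them.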
voliale wrote: The problem is, even though a 2D image contains a lot of depth cues (parallax, scale, shapes, focus, color saturation, brightness, etc.), and even if the computer could derive a perfect 3D solution (almost impossible), it still has to fill in a lot of information that is simply not present in the 2D image.

From a single 2D image it's most probably impossible, but from a succession of 2D images it's somewhat feasible, though not in real time.
Fredz wrote: From a single 2D image it's most probably impossible, but from a succession of 2D images it's somewhat feasible, though not in real time.

Not too shabby. It helps, though, that the footage is a perfect candidate for conversion.
Thanks to epipolar geometry, there is a lot of 3D information that can be inferred from two or more images of the same scene viewed from different angles. This can provide a mathematically correct 3D representation of the scene, much better than the luminance-inferred depth or manual reconstruction used in most 3D conversions, which are mostly hacks.
Have a look at this video; it's the best example I know of automatic 3D reconstruction from a 2D video:
[youtube]http://www.youtube.com/watch?v=tyPtNlGnQqk[/youtube]
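For a rough idea of what's involved under the hood, here's a minimal two-view reconstruction sketch with OpenCV. This is my own illustration of the epipolar approach, not the pipeline from the video, and it assumes the 3x3 camera intrinsics matrix K is known (e.g. from calibration):

[code]
# Minimal two-view structure-from-motion sketch (illustration only).
import cv2
import numpy as np

def reconstruct_two_views(img1, img2, K):
    """Triangulate sparse 3D points (up to scale) from two grayscale frames."""
    # 1. Detect and match features between the two frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2. The essential matrix E encodes the epipolar constraint x2^T E x1 = 0.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)

    # 3. Recover the relative camera rotation R and translation t from E.
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    inl = mask.ravel() > 0

    # 4. Triangulate the inlier matches into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    return (pts4d[:3] / pts4d[3]).T  # Nx3 points, global scale unknown
[/code]

Run over consecutive frame pairs of a video, this gives the camera path and a sparse point cloud; turning that into the dense, clean depth maps a 3D conversion needs is the much harder step.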
3Tree wrote: He mentions that he uses some concepts from holography to help with the conversion.

I must have missed that part; do you remember in which video (and at what timecode, as the videos are very long) he said that?
MaterialDefender wrote: It's clearly the simplest case possible though. A lateral camera move around a fixed point of view in a scene without any foreground objects. That has basically a depth matte baked in over time.

Yeah, this scene is quite perfect for this kind of algorithm, but it's still a lot more complicated than it looks, even if that's not really evident from the video. They use several state-of-the-art techniques in computer vision and are able to compute the 3D parameters of the camera for each frame to enforce the epipolar constraint, so it's not a simple 2D depth-map extraction from panning images.
voliale wrote: The yuvsoft software I linked actually uses this same technique, along with several others. Still takes a lot of work to clean it up. Also, background reconstruction.

They use a depth-from-motion implementation as part of the process, but I don't think it's anywhere near the complexity of the algorithms used by this technique. In their case it looks a lot more like a simple depth estimation from a panning camera, without geometrical knowledge of the scene.
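To show how much simpler that is, here's a toy version of depth from a panning camera: treat two consecutive frames as a left/right stereo pair and block-match them. This is my own illustration, not their code, and it only makes sense when the camera motion is close to a pure horizontal translation:

[code]
# Toy sketch: approximate disparity from a laterally panning shot by
# feeding two consecutive grayscale frames to ordinary stereo block matching.
import cv2

def pan_disparity(prev_gray, cur_gray):
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    return stereo.compute(prev_gray, cur_gray)  # 16x fixed-point disparity map
[/code]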
voliale wrote: I must have missed that part; do you remember in which video (and at what timecode, as the videos are very long) he said that?

I was hoping you would find it, since they gave a lot of information lol.
http://www.killerstartups.com/bootstrap ... vel-again/

Why did you choose crowdfunding?
We need funding to be able to mass-produce and market and sell the product. The conventional route, which I’ve done before, is you go to investors and you make a presentation to them, you put together a business plan, you raise some private money, and eventually you go public and sell the stock on the stock market. I’ve done all that before.
And if you don't raise the money, do you think you'll go the traditional route?
Well, I am still going the traditional route already. We already have our business plan… We started in December. We’ve been looking around, talking to potential investors and that’s how I found the Fundable guys. They have an investment membership organization called the Go Big Network and I’m part of that. The idea there is to be introduced to their investors. They tell me they have 20,000 investor members of that company.
I am going that route and I will continue doing that unless I don't need to.
http://www.socaltech.com/gene_dolgoff_s ... 44318.html

Can you talk a bit about your technology, and how it's different from other 3D technology out there?
Gene Dolgoff: There are basically three kinds of 3D converters besides ours. One is the manual technique, which uses rotoscoping and requires a graphic artist to sit at a workstation and convert frame by frame into 3D. To convert a single full-length movie takes hundreds of graphic artists, four months, and five to 15 million dollars. That's only good for blockbuster movies, and it's prohibitive for television. There are also automatic converters, of two kinds besides ours. One is really fake, the "simulated 3D" that most converters use, which essentially offsets different lines on the screen by different amounts, making the picture look like it has depth. It doesn't have any correlation to the real depth in the scene. The second kind uses what are called depth maps: algorithms that assign depths based on different image properties. For example, you might look at the brightness of an image to try to determine depth, which has a little more to do with actual depth and is more accurate. The problem there is that it requires a lot more computing overhead and a much bigger, more expensive machine. That usually causes lag, and that's prohibitive when you're playing a game.
Then there is our system. Our system uses a different technique, based on the study of the human brain over the past 40 years. We look at two frames at a time and look at all the 3D factors. You'll notice that most of the time, depth shows up as differences in brightness, contrast, and color saturation, and in how high objects sit in the frame. Plus, when a camera moves back and forth, the slower an object is, the further back it is. All of these factors are taken into account, and we get an image which is stereoscopically aligned with the algorithm in our brain. The computing workload is greatly reduced, because a lot of the work is done by the human brain. Actual depth information is detected and placed in a lot of areas of the scene. It's not the complete depth, but when the brain sees actual depth in those areas, it fills in the rest of the missing depth information by remembering its previous 3D experience. That's how this is able to create good 3D with accuracy, yet with low compute overhead.
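As a rough illustration of how monocular cues like the ones Dolgoff lists (brightness, saturation, position in the frame, apparent motion between frames) could be blended into a pseudo-depth map, here's a toy sketch. The specific cues, weights, and function names are my own guesses, not his algorithm:

[code]
# Toy pseudo-depth map from monocular cues; weights are invented for the
# example, not taken from Dolgoff's system.
import cv2
import numpy as np

def heuristic_depth(frame_bgr, prev_gray=None):
    """Return an HxW pseudo-depth map in [0, 1], where 1 means near."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
    h, w = hsv.shape[:2]
    brightness = hsv[..., 2]   # brighter often reads as nearer
    saturation = hsv[..., 1]   # more saturated often reads as nearer
    # Lower in the frame usually means nearer (ground), higher means farther.
    height_cue = np.tile(np.linspace(0.0, 1.0, h, dtype=np.float32)[:, None],
                         (1, w))
    # Faster apparent motion between frames usually means nearer.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        motion = np.linalg.norm(flow, axis=2)
        motion /= motion.max() + 1e-6
    else:
        motion = np.zeros((h, w), np.float32)
    depth = (0.35 * brightness + 0.15 * saturation
             + 0.35 * height_cue + 0.15 * motion)
    return cv2.GaussianBlur(depth / (depth.max() + 1e-6), (31, 31), 0)
[/code]

Even a map this crude could drive the pixel-shifting step cheaply, which fits the low-overhead claim; whether the brain really forgives the errors is the question voliale raised earlier.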
Fredz wrote: They use a depth-from-motion implementation as part of the process, but I don't think it's anywhere near the complexity of the algorithms used by this technique. In their case it looks a lot more like a simple depth estimation from a panning camera, without geometrical knowledge of the scene.

Ok, I had a look at the papers in your link and must admit that you were right and I was wrong.
In their video they don't even show it on a filmed scene but on a 3D-rendered one, with only two single shots of blurry depth maps, which makes me seriously doubt they have anything near the sophistication of what I posted.
And judging by the different 3D conversions I've seen, it seems 3D conversion companies are still very far from what is being done in computer vision research.
VFRHawk wrote: Surely anything that's good with TrackIR is likely to be a good fit with a Rift?

It depends on how much latency the TrackIR has; that's the big thing about making it all work in a way that feels realistic rather than making you feel drunk.
German wrote: It depends on how much latency the TrackIR has; that's the big thing about making it all work in a way that feels realistic rather than making you feel drunk.

I did not (and do not) have a great experience with my TrackIR 5 Pro. I had it mounted on the side of my DIY Rift in such a way as to make the dots easily visible to the sensor across a wide FOV, but it was still quite temperamental. I spent more time worrying about how to move my head so as not to upset the TrackIR than paying attention to the game itself, so lag was the least of my worries. A definite immersion-breaker.