http://www.mtbs3d.com/phpBB/viewtopic.p ... 327#p74295
Here were some posts:
Nick3DvB wrote:
hast wrote:JohnCarmack: I've seen you mention in several videos that you haven't gotten translation tracking to work properly with the unit. One thing I wanted to try was to use a Playstation Move or similar device to optically track the 3D location of the unit and use that in game. Have you already tried that and found out that it doesn't work for some reason?
MSat wrote:@JohnCarmack - It sounds like you have made significant progress in this area, but it leaves me wondering: from your experience, in what ways is motion tracking currently lacking? It sounds like front-to-back and side-to-side translation does not currently exist. How much of a detriment is this to the experience? What sort of movement tracking do you believe is important for the immersion factor?
I'm really glad you guys raised this. John has touched on it several times in interviews, and I think that being able to demonstrate parallax simulation effects would give a massive boost to the whole project; in fact, it could be your "killer app". I don't underestimate the importance of the FOV improvements, that's a massive achievement in itself, but combine that with the potential of the tracking implementation and I'm really starting to get excited, so forgive me if I go on a bit. Firstly, I have to admit I'm a complete HMD noob: I tried one in a Uni lab about 15 years ago, but the "floating TV" effect and tiny FOV meant I never really caught the bug. I've played around with some TrackIR-type kit, but like many I was happy with my projector and shutter specs, and have been patiently waiting for those "holo-deck contact lenses" to arrive ever since...
Whilst the current-gen consoles' refresh cycle may arguably have robbed us of years of graphics innovation (those expecting another UE3 / id Tech 4 quantum leap may be disappointed), they have gifted us one thing: mass-market adoption of tracking sensors, and that could be SO much more important in the long run. Many have said that we are nearing a tipping point, the convergence of technologies that will deliver a real game-changer, but we're not there yet; something is still missing. It reminds me of that quote from The Matrix:
Morpheus wrote:You're here because you know something. What you know you can't explain, but you feel it. You've felt it your entire life, that there's something wrong with the world. You don't know what it is, but it's there, like a splinter in your mind...
There IS something fundamentally wrong with "3D"; everyone here knows it, but many of the wider public still don't understand why. Move laterally whilst viewing a stereoscopic image and something very strange happens: your brain is not getting any of the extra information it expects from "behind" objects, so it tries to compensate, it fails, and the illusion is broken instantly. I think people coming to HMDs for the first time really need to understand that head-tracking is NOT just another control system; the feedback it provides for optical effects like parallax simulation is a fundamental part of immersion in the 3D space. But they'll have to see it to believe it.
I'm sure many of you played with Johnny Lee's Wiimote VR demo a few years back; I have to admit I spent many hours pacing around in front of my projector screen wearing two old TV remotes taped to a baseball cap! I never quite got it working properly with stereoscopic 3D; I played with TriDef, but the geometry was always off somehow. When the Kinect arrived there was a stream of similar projects, but still no sign of the killer app we were hoping for. Autostereoscopic / holographic displays and plenoptic / light-field imaging systems are all going to feed development in this area; sadly they are still a long way off, but surely WE can do something with this now! There's no doubt that retro-hacking parallax simulation into current games is an immense challenge; if John is struggling with it, what hope have we mortals? But if we could get a few basic demos together at launch to showcase these parallax effects, it could have a huge impact on public awareness; when people actually see it working it will blow their minds...
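For anyone wondering what the "geometry" in those demos actually is: Johnny Lee's effect treats the physical screen as a fixed window into the scene and rebuilds an asymmetric (off-axis) view frustum every frame from the tracked eye position. A minimal numpy sketch, with the screen-centred coordinate convention, screen size and clip planes assumed purely for illustration:

import numpy as np

def off_axis_projection(eye, screen_half_w, screen_half_h, near=0.05, far=100.0):
    """eye = (x, y, z) of the viewer in screen-centred metres, z > 0 in front of the screen.
    Returns a 4x4 OpenGL-style projection matrix with an asymmetric frustum."""
    ex, ey, ez = eye
    # Frustum edges on the near plane, shifted opposite to the eye offset:
    # this is what makes the scene "slide" behind the screen as the head moves.
    left   = (-screen_half_w - ex) * near / ez
    right  = ( screen_half_w - ex) * near / ez
    bottom = (-screen_half_h - ey) * near / ez
    top    = ( screen_half_h - ey) * near / ez
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])
    # The view matrix must also translate the scene by -eye so the "window"
    # stays fixed in the room; doing this once per eye gives the stereo variant.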
You can probably tell that I'm way out of my depth here, so my terminology is probably all wrong, but imagine if we could demo the strafe "peek around" type effects (dynamic occlusion?), or maybe just something done with the lighting of static objects: have reflections distort dynamically, or specular highlights move along edge surfaces realistically, as the viewer moves their head position (whilst standing still, not just as their body moves through the 3D scene). I hope I'm making sense; hopefully those with a better grasp of basic physics and more knowledge of the rendering process can articulate this better. I'm sure most of you have seen all this before, but for the uninitiated here are some great videos of parallax simulation in action:
http://www.youtube.com/watch?v=Jd3-eiid-Uw
http://www.youtube.com/watch?v=BduSDvUU6MY
http://www.youtube.com/watch?v=8SDGG9HhbgQ
http://www.youtube.com/watch?v=1dnMsmajogA
http://www.youtube.com/watch?v=6tuizfOcdLQ
http://www.youtube.com/watch?feature=pl ... a5NQK563OI
Also, some basic DOF simulation in Quake III and other really cool stuff:
http://www.youtube.com/watch?v=HdW1v9TPNYw
http://www.3dfocus.co.uk/glasses-free-3 ... 3d-tv/8626
http://www.3dfocus.co.uk/3d-news-2/3d-t ... ature/6695
[/END RANT]
You guys keep up the good work, I'll keep spreading the word!
brantlew wrote:The technologies for tracking accurate head translation are more problematic than the ones for orientation. Gyroscopic orientation sensors operate within a local reference frame, which makes them more generalized for a wide array of applications and immune to scaling problems. They are a near-perfect solution for orientation tracking.
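As a rough illustration of why gyros are such a clean fit (this is not from brantlew's post; names are mine and the drift-correction step is omitted): each sample is an angular velocity in the sensor's own frame, so orientation tracking reduces to integrating small rotations into a quaternion, with no external reference required.

import numpy as np

def integrate_gyro(q, omega, dt):
    """q: current orientation quaternion (w, x, y, z); omega: angular rate (rad/s) in the body frame."""
    wx, wy, wz = omega
    # Quaternion kinematics: q_dot = 0.5 * Omega(omega) * q
    Omega = np.array([
        [0,  -wx, -wy, -wz],
        [wx,   0,  wz, -wy],
        [wy, -wz,   0,  wx],
        [wz,  wy, -wx,   0],
    ])
    q = q + 0.5 * dt * Omega @ q
    return q / np.linalg.norm(q)   # re-normalise so it stays a unit quaternion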
Local inertial sensors, on the other hand, have enormous error ranges when used to estimate translation - on the order of metres of drift per second! So to track translation you have to use an external reference. The Hydra uses magnetic fields and does a pretty good job, but most systems are based on optical technology - thus the Johnny Lee tracker, TrackIR, etc. You can get really great results with these systems, but they suffer from scaling problems: the sensor and reference points have to be within range, and in the optical case there is the problem of occlusion. Not that these problems can't be solved, but they require a more complex setup and calibration (multiple cameras, light sources, etc.) than the simple gyroscope used for orientation. So you don't see setups that include head translation as often, because they don't fit in a neat little package, instead requiring a whole specially designed and calibrated play area.
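A back-of-the-envelope sketch of that translation drift (the noise and bias figures below are invented, just to show the scaling): a small constant accelerometer bias b, integrated twice, grows as 0.5*b*t^2, which is already metre-scale within a few seconds.

import numpy as np

dt = 1.0 / 1000.0                    # 1 kHz IMU samples
bias = 0.05                          # assumed residual accel bias, m/s^2 (plausible for cheap MEMS)
noise = 0.02                         # assumed white accel noise, m/s^2 per sample
t = np.arange(0, 5, dt)              # five seconds of standing perfectly still
accel = bias + noise * np.random.randn(t.size)

velocity = np.cumsum(accel) * dt     # first integration
position = np.cumsum(velocity) * dt  # second integration

print(f"position error after {t[-1]:.1f}s: {position[-1]:.2f} m")
# The bias term alone gives 0.5 * 0.05 * 5^2 = ~0.6 m of error after 5 s.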
Chriky wrote:I'm currently trying to make a head-tracking system that uses the PS3Eye. I think it's a good piece of hardware to use because you can buy them for £11 delivered from eBay, much cheaper than most other pieces of kit. They can output 320x240 at 187 FPS using the free driver from Code Laboratories, which supports up to 2 cameras (there's a paid version that supports more).
320x240 seems like a low resolution, but bear in mind the Wiimote camera only has an actual resolution of 128x96; by averaging several pixels' intensities it gets an effective resolution of 1024x768.
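A minimal sketch of that sub-pixel averaging idea (function and variable names are mine, not from the post): the intensity-weighted centroid of a blob spanning several pixels lands between pixel centres, which is where the "effective" resolution comes from.

import numpy as np

def blob_centroid(gray, mask):
    """gray: 2-D intensity image; mask: boolean array marking one blob's pixels.
    Returns the (x, y) centroid as floats, i.e. with sub-pixel precision."""
    ys, xs = np.nonzero(mask)
    w = gray[ys, xs].astype(float)           # use pixel brightness as weights
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()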
Anyway, my basic idea is to put a camera on the head, looking up at the ceiling, which will have a sort of poster with a known set of coloured dots on it. You can use the 2D positions of the dots on the screen, and their known positions in 3D space, to work out the position and orientation of the camera (this is called the PnP problem; it's not hard and there are loads of very fast (a few microseconds) algorithms out there to solve it). You only need 4 points to get the position and orientation. The hard and potentially slow bit is identifying the dots in the camera image.
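For reference, the PnP step itself really is only a few lines with OpenCV's solvePnP; the marker layout, camera intrinsics and detected centroids below are placeholder values, not Chriky's actual setup.

import numpy as np
import cv2

# Known 3-D dot positions on the ceiling "poster", in metres (hypothetical layout).
object_points = np.array([
    [0.0, 0.0, 0.0],
    [0.3, 0.0, 0.0],
    [0.3, 0.3, 0.0],
    [0.0, 0.3, 0.0],
], dtype=np.float64)

# Sub-pixel dot centroids found in the 320x240 PS3Eye frame (example values).
image_points = np.array([
    [140.2,  98.7],
    [201.5,  96.4],
    [204.1, 158.9],
    [138.8, 161.2],
], dtype=np.float64)

# Rough pinhole intrinsics for a 320x240 camera (these would come from calibration).
K = np.array([[290.0,   0.0, 160.0],
              [  0.0, 290.0, 120.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)                      # assume negligible lens distortion for the sketch

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)              # rotation: poster frame -> camera frame
camera_position = -R.T @ tvec           # head/camera position expressed in the poster's frame
print(ok, camera_position.ravel())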
The best thing about this system is that it can scale arbitrarily: you could cover a whole ceiling in the dots, combined with some way of getting a global position (say, QR codes every so often), and then you could walk around freely with a backpack system. What's more, several users could use it simultaneously with no extra work.
http://i.imgur.com/gqU9f.jpg
Still very early stages at the moment, but I've got it working out 4 dots' screen positions. As in, the coloured circles on the screen are drawn using the actual (x, y) coordinates as doubles.
@Chriky: This system has some angular limits, but in general I think it's a good idea. It addresses the range and scaling problems nicely, and putting the sensor equipment on the player and the markers in the environment is much cheaper than the other way around. One of the users on this forum, smaesen, has done some extensive academic work on something very similar to this. He uses cheap LED light ropes on the ceiling as markers.
"Scalable Optical Tracking - A Practical Low-Cost Solution for Large Virtual Environments"
http://research.edm.uhasselt.be/~smaese ... tions.html