There hasn’t been anything new in this thread from the OP for a week, so I’ll add some more thoughts (at least until Geekmaster asks me to shut up).
External vs internal device
I think this all comes down to bandwidth and latency. While it’s not impossible to establish fairly high-bandwidth external connections (USB 3.0, eSATA), those are generally directed/optimized towards data throughput, with latency (of the order we are talking about) as an afterthought. They also use their own protocols and have specific interface chips, which are another layer to traverse that might have its own quirks. It seems that at least INITIALLY, you might be picking fights you don’t need to by trying to go external, as opposed to talking over the internal PCI-E bus that was built with graphics in mind.
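Just to put some very rough numbers behind the bandwidth side of that (the resolution, refresh rate and colour depth here are purely an assumption for illustration, not any real headset’s spec):

```c
#include <stdio.h>

/* Back-of-the-envelope: raw pixel traffic for an ASSUMED 1080p-per-eye,
 * 60 Hz, 24-bit stereo stream.  Purely illustrative numbers, not a spec
 * for any real device. */
int main(void)
{
    const double width = 1920, height = 1080;
    const double eyes = 2, bytes_per_pixel = 3, fps = 60;

    double bytes_per_frame  = width * height * eyes * bytes_per_pixel;
    double bytes_per_second = bytes_per_frame * fps;

    printf("Per frame pair: %.1f MB\n", bytes_per_frame / (1024 * 1024));
    printf("Sustained:      %.1f MB/s\n", bytes_per_second / (1024 * 1024));
    /* Roughly 11.9 MB per frame pair and ~712 MB/s sustained -- already
     * more than USB 3.0's usable throughput after encoding overhead,
     * and that's before we even start talking about per-transfer latency. */
    return 0;
}
```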
Warping – moving the discussion closer to implementation
The IDEA of warping has certainly been established, and though I had never considered it before reading Carmack’s latency ideas, it does seem to make a lot of sense, at least for the next few generations of hardware.
I specifically want to bring up in this thread the method of deciding what gets warped and by how much. Regardless of whether you use spherical projections, “sliced depth layers” or whatever, when you are trying to manipulate 2D images against a 3D transform, you have to know the 3D position of each pixel relative to the camera, even if not to quite the same degree of precision as when it was first rendered.
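To make that concrete, here’s a bare-bones sketch of what I mean by knowing each pixel’s 3D position: take a pixel and its stored depth, unproject it back into camera space, then re-project it under the new head pose. The intrinsics, the pinhole-camera model and all the names here are mine, purely for illustration; a real implementation would use the engine’s own projection matrices and depth format.

```c
#include <math.h>
#include <stdio.h>

#define DEG_TO_RAD (3.14159265358979323846 / 180.0)

typedef struct { double x, y, z; } Vec3;

/* Unproject a pixel (px, py) with linear depth d back into camera space,
 * assuming a simple pinhole camera with principal point (cx, cy) and a
 * focal length given in pixels. */
static Vec3 unproject(double px, double py, double d,
                      double cx, double cy, double focal_px)
{
    Vec3 p;
    p.x = (px - cx) / focal_px * d;
    p.y = (py - cy) / focal_px * d;
    p.z = d;
    return p;
}

/* Re-project a camera-space point after a small yaw (head turn) plus a
 * translation -- i.e. under the "new" head pose. */
static void reproject(Vec3 p, double yaw, Vec3 t,
                      double cx, double cy, double focal_px,
                      double *out_px, double *out_py)
{
    /* rotate about the vertical axis, then translate */
    double x =  cos(yaw) * p.x + sin(yaw) * p.z + t.x;
    double y =  p.y + t.y;
    double z = -sin(yaw) * p.x + cos(yaw) * p.z + t.z;

    *out_px = cx + focal_px * x / z;
    *out_py = cy + focal_px * y / z;
}

int main(void)
{
    const double cx = 960, cy = 540, focal_px = 1000; /* assumed intrinsics */

    /* A pixel near the middle of the frame, 2 m away. */
    Vec3 p = unproject(1000, 500, 2.0, cx, cy, focal_px);

    /* Warp it for a 1-degree head turn with no translation. */
    Vec3 no_move = {0, 0, 0};
    double nx, ny;
    reproject(p, 1.0 * DEG_TO_RAD, no_move, cx, cy, focal_px, &nx, &ny);

    printf("pixel (1000, 500) moves to (%.1f, %.1f)\n", nx, ny);
    return 0;
}
```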
Storing depth information in a 2D image means using a depth buffer (zbuf), either as an additional image or in an image format that supports more than just R, G and B values per pixel. Also – if you’re using a single zbuf range over a whole image, which could contain values ranging from inches away to potentially several miles, you’re probably going to need LOTS-O-BITS. Though to be fair, the distribution would hardly need to be linear. Basic testing would probably be able to work out how much precision would be needed for various distances, with near objects almost certainly needing much more than far objects.
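As a feel for how many bits you’d actually burn, here’s a toy calculation of how big one depth-buffer step is at various distances for the usual hyperbolic (1/z-style) encoding. The near/far planes and bit depths are just assumptions I picked to make the point:

```c
#include <math.h>
#include <stdio.h>

/* Sketch of how depth-buffer precision falls off with distance for a
 * hyperbolic (1/z style) fixed-point depth encoding.  Near/far planes and
 * bit depths are assumed values for illustration only. */
int main(void)
{
    const double z_near = 0.1;      /* metres */
    const double z_far  = 10000.0;  /* "several miles" */
    const int    bit_depths[2] = { 16, 24 };
    const double test_z[5]     = { 0.5, 2.0, 10.0, 100.0, 1000.0 };

    for (int b = 0; b < 2; b++) {
        double steps = pow(2.0, bit_depths[b]) - 1.0;
        printf("%d-bit depth buffer:\n", bit_depths[b]);
        for (int i = 0; i < 5; i++) {
            double z = test_z[i];
            /* stored value v = (1/z_near - 1/z) / (1/z_near - 1/z_far),
             * so one LSB of v covers roughly
             * z^2 * (1/z_near - 1/z_far) / steps metres at depth z */
            double lsb_metres = z * z * (1.0 / z_near - 1.0 / z_far) / steps;
            printf("  at %7.1f m, one step = %.4g m\n", z, lsb_metres);
        }
    }
    return 0;
}
```

Fractions of a millimetre up close, metres of error in the distance – which lines up with the “non-linear distribution” point: the bits naturally pile up near the camera, where they matter most.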
And given that the whole point of this is stereo vision, everything is, of course, multiplied by 2. At least that’s been the case so far. In a system such as the one we are discussing, it may very well be possible that things rendered more than X distance away need ONLY be rendered ONCE and then be copied and translated for use in the other eye. That might be the source of some speed increase, as long as the calculations required to determine when it should be used are less costly than just doing the rendering in the first place.
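A quick back-of-the-envelope on where that X might sit: the horizontal disparity of a point falls off as (eye separation × focal length in pixels) / distance, so once it drops under a pixel the two eye views are essentially identical for that object. The IPD and focal length below are assumed values, not anything measured:

```c
#include <stdio.h>

/* Rough estimate of the distance past which the left and right eye images
 * differ by less than one pixel, i.e. where a single render might plausibly
 * serve both eyes.  IPD and pixel focal length are assumed for illustration. */
int main(void)
{
    const double ipd      = 0.064;  /* metres between the eyes (assumed) */
    const double focal_px = 1000.0; /* focal length in pixels (assumed)  */

    /* For a point at distance z, disparity is roughly ipd * focal_px / z
     * pixels.  Setting that to one pixel and solving for z: */
    double z_one_pixel = ipd * focal_px / 1.0;
    printf("Disparity drops below 1 pixel beyond ~%.0f m\n", z_one_pixel);

    /* A few sample distances for context. */
    const double samples[5] = { 1.0, 5.0, 20.0, 64.0, 200.0 };
    for (int i = 0; i < 5; i++)
        printf("  at %6.1f m: %.2f px of disparity\n",
               samples[i], ipd * focal_px / samples[i]);
    return 0;
}
```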
The idea is currently in use
Something else to point out: companies are already doing something along these lines with 2D-to-3D movie conversions. Of course, they are working from a fully 2D source, and then have to try and calculate depth values and/or rotoscope objects onto different planes, as well as regenerating parts of the image that are no longer occluded. Generating the depth information in real time from the engine obviously gives the implementation discussed here a massive leg up over the methods used for movie conversions. But nonetheless, taking a 2D image and changing it to simulate 3D is already happening, and is being done by industry, not just academia.