how this upcoming "New Standard" will differ from the "Old Standard", namely VESA-1997-11
VESA Stereo is a very simple protocol. Just like other USB and analog VGA solutions, it essentially provides a way to physically synchronize shutters with each frame of sequential stereo, describing the sync protocol and the connector plug. It does not specify any underlying application model that can be used to work with the sequential stereo.
If you are a developer who needs to support stereo in your application, you face many complex questions. How do you detect the type of stereo display and transmission method (line-interleaved, side-by-side, dual video interface, frame sequential, etc.)? How many different devices and formats should you support? Where can you get the source code for pre-processing the images for all these devices and formats? How do you ensure proper sync with shutter glasses for sequential stereo? What internal stereo format should you use, and how do you transcode it to the native display format?
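To illustrate the kind of per-device pre-processing each developer currently has to write by hand, here is a minimal sketch of transcoding one transport format into another. The function name and the list-of-rows "frame" representation are purely hypothetical stand-ins for real pixel buffers:

```python
# Minimal sketch: convert a side-by-side stereo frame into line-interleaved
# output (left-eye image on even rows, right-eye image on odd rows).
# A "frame" here is just a list of rows, each row a list of pixel values.

def side_by_side_to_line_interleaved(frame):
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]    # left-eye half of each row
    right = [row[half:] for row in frame]   # right-eye half of each row
    out = []
    for l_row, r_row in zip(left, right):
        out.append(l_row)
        out.append(r_row)
    return out

# Tiny 2x4 example: 'L' pixels on the left half, 'R' on the right.
frame = [["L", "L", "R", "R"],
         ["L", "L", "R", "R"]]
print(side_by_side_to_line_interleaved(frame))
# -> [['L', 'L'], ['R', 'R'], ['L', 'L'], ['R', 'R']]
```

Every application today needs some variant of this for each format pair it chooses to support, which is exactly the duplication a standard middleware layer would eliminate.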
At this time, none of these points are addressed by industry standards. The current effort is only about making ATI/AMD hardware compatible with 3rd-party shutters and 3rd-party stereo drivers. Software developers will still have to code explicit support for many different solutions and formats right in their applications, and users will still have the burden of configuring each application to work with the stereoscopic solution they use.
But in order for stereoscopic gaming and home video to become more popular, everything should be far easier for both developers and users, so we need standard programming interfaces. At a minimum, these should include:
1) an OS-level device driver model (or a 3rd-party plug-in model) which lets hardware vendors provide a standard driver/plug-in for their specific display and/or shutter glasses solutions, and
2) a 3rd-party or OS component which will provide a standard programming interface, abstracting the underlying hardware for application developers.
So the user would only have to plug in the glasses and install the drivers, and the application developer would use a standard API to pass the picture in a device-independent stereo format. The middleware component would talk directly to the vendor-provided driver and handle format conversion, image processing and frame synchronization.
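A standard programming interface along these lines might look roughly like the following sketch. Everything here, including the class names, the methods and the `StereoFrame` format, is hypothetical; the point is only to show the division of labour between the application, the middleware and the vendor driver:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class StereoFrame:
    """Device-independent stereo format: the application always submits
    separate left/right images; the middleware transcodes as needed."""
    left: bytes    # left-eye image, row-major
    right: bytes   # right-eye image, row-major
    width: int     # row length in bytes
    height: int


class StereoDriver(Protocol):
    """Interface a vendor driver/plug-in would implement (point 1 above)."""
    def native_format(self) -> str: ...
    def present(self, packed: bytes) -> None: ...  # also handles shutter sync


class StereoMiddleware:
    """OS or 3rd-party component exposed to applications (point 2 above)."""

    def __init__(self, driver: StereoDriver):
        self.driver = driver

    def submit(self, frame: StereoFrame) -> None:
        # Transcode the device-independent frame into the display's native
        # format (side-by-side, line-interleaved, frame-sequential, ...),
        # so the application never needs to know which device is attached.
        fmt = self.driver.native_format()
        if fmt == "side-by-side":
            packed = b"".join(
                frame.left[i:i + frame.width] + frame.right[i:i + frame.width]
                for i in range(0, len(frame.left), frame.width)
            )
        else:
            raise NotImplementedError(f"no transcoder for {fmt!r}")
        self.driver.present(packed)
```

The application's whole job shrinks to constructing a `StereoFrame` and calling `submit()`; detecting the device, converting formats and syncing the shutters all happen below the API line.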
I could see you took multiple passes to get him to answer the driver question, but he kept dodging it.
What else could he say? We all know Nvidia is not currently interested in supporting anything other than their own 3D Vision glasses, and Bit Cauldron is not in charge of the whole industry the way Microsoft and Khronos are; they are only trying to ensure their product will at least work with AMD hardware.