Zalman drops Nvidia stereo driver support for new monitors

GHG
One Eyed Hopeful
Posts: 19
Joined: Mon Apr 19, 2010 8:17 am

Zalman drops Nvidia stereo driver support for new monitors

Post by GHG »

Wasn't sure where to put this (apologies if it's in the wrong section), but I received this email from Zalman support today after enquiring about Nvidia driver support for their new line of 3D monitors:
Hi,

I apologize for the delay in responding to your inquiry. I was away on multiple business trips abroad, and was terribly occupied.

No, the new ZM-M215W (21.5” Full HD) 3D monitor is not supported by Nvidia. This is because Zalman has decided that there is not a reason to pay enormous licensing fees per year, when it is expected that new game titles will support 3D directly within the game, making 3rd party 3D drivers for gaming less of a concern for the future.

For example, Avatar: The Game supports line-by-line interleaved format among others and Zalman’s Full HD monitors will work smoothly.

I have attached an introduction of Zalman’s other models and the general technology for your reference. Feel free to pass it on.

If you have any needs, please feel free to contact me.

Thanks…

Jihoon

Jihoon Jo

Manager, 3D Business Development
Here's the PDF he sent me ----> http://www.megaupload.com/?d=IR5Y1FFO

In the PDF there's also news of four new monitors: two 24" Full HD models and two 32" models (one is Full HD, the other only 1366 x 768).

I've also uploaded it at the bottom of this message.

This was my original email for reference:
Hi,

I'm looking into purchasing the Zalman M215W but have a question regarding 3D support before I go ahead with my purchase:

Is this monitor officially supported by Nvidia? It is not clear whether this is the case on your website, and the Nvidia website doesn't have this particular model on its supported products list in the Zalman Stereoscopic 3D drivers section. So is there any official word as to whether the Nvidia drivers are compatible with this monitor?

Thanks.

This is all very interesting. First of all, it implies Zalman were paying a hefty fee and didn't deem their drivers superior enough over iZ3D and TriDef to carry on paying it for the new line of monitors. It also gives us some insight into why ATI have kept a low profile during all of this 3D development. If future games are to have native 3D support, 3rd party drivers will slowly be phased out (similar to the way YouTube now has full native 3D support). So ATI not investing time and money into their own driver solution that will eventually become redundant is very sensible on their part. They'll support 3rd party drivers (iZ3D and TriDef) and let them deal with it for the meantime, and then when the transition to native in-game support takes place they won't have wasted any resources at all. It's up to the developers at the end of the day to add the support natively. However, the problem will start when Nvidia begin paying devs to support only their stereo solution, much like they do now with PhysX.

The future of 3D gaming will be very interesting indeed.
yuriythebest
Petrif-Eyed
Posts: 2476
Joined: Mon Feb 04, 2008 12:35 pm
Location: Kiev, ukraine

Re: Zalman drops Nvidia stereo driver support for new monito

Post by yuriythebest »

Wow... when the Zalman displays came out - before 3D Vision - people were scared this would happen, only they thought Nvidia would be the one to eventually drop support. Seems everyone was half-right :shock:
Oculus Rift / 3d Sucks - 2D FTW!!!
Likay
Petrif-Eyed
Posts: 2913
Joined: Sat Apr 07, 2007 4:34 pm
Location: Sweden

Re: Zalman drops Nvidia stereo driver support for new monito

Post by Likay »

Even more: I don't think 3D Vision would have been born if not for the Zalman support. The heavy work was, so to speak, already done.
Zalman owners don't have to fear though. It's supported for free by the iZ3D drivers and, as a bonus, isn't bound to special hardware.
Mb: Asus P5W DH Deluxe
Cpu: C2D E6600
Gb: Nvidia 7900GT + 8800GTX
3D:100" passive projector polarized setup + 22" IZ3D
DmitryKo
Diamond Eyed Freakazoid!
Posts: 776
Joined: Tue Jan 08, 2008 2:25 am
Location: Moscow, Russia

Re: Zalman drops Nvidia stereo driver support for new monito

Post by DmitryKo »

GHG wrote: If future games are to have native 3D support 3rd party drivers will slowly be phased out ... so ATI not investing time and money into their own driver solution that will eventually become redundant is very sensible on their part
They might as well wait for major graphics APIs to incorporate native stereoscopic support, whatever. The truth is they probably don't have the resources to start developing a stereo driver.

The future of PC Gaming is not as bright as you think. The vast majority of 120 Hz 3DTV sets are only capable of displaying "half-resolution" or non-native resolution modes for gaming, and 120 Hz PC monitors are supported exclusively by NVidia.

You can't support page-flipping 120 Hz stereo displays at the application level alone; whether it's dual-link DVI, DisplayPort, or HDMI 3D (in the future), you still need driver support. And it looks like iZ3D are not going to support 120 Hz displays for now; they are only supporting 120 Hz projectors:
we need to separate projector market from display market based on company obligations
So if iZ3D is not really participating, how exactly are ATI/AMD and Bit Cauldron going to make use of their new 120 Hz shutter glasses with ATI cards? Would it be yet another software hack to enable quad buffering in Direct3D which an application can make use of? Well, it would take quite a long time before native stereo 3D applications appear and make use of this feature. Avatar and Unigine are pretty much the only titles that support native stereo rendering; everyone else is relying on 3rd party middleware drivers.
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Re: Zalman drops Nvidia stereo driver support for new monito

Post by BlackShark »

Well, you need drivers to supply the HDMI 1.4 metadata, but you can actually produce and transmit the picture without drivers, straight from the application, be it HDMI frame packing, side-by-side or top-bottom. I don't really care about 120 Hz page flipping; it breaks a fundamental rule of consumer digital displays that separates the source generator from the display device. I consider it a temporary solution (one which started 10 years ago) until a proper standard for 1080p60 is available (DisplayPort? HDMI 1.4 optional mode?). I don't know about DisplayPort though: isn't there any basic stereo 3D support in DP 1.1, or has everything been standardized in DP 1.2? Do the ATI graphics drivers provide DP 3D outputs?


The problem, in my opinion, is that game developers have not embraced 3D yet; some of them give it a few tries with minimal risk by doing a few tweaks for the Nvidia driver, but nothing really serious.
The day game developers actually consider 3D an important feature, they'll deal with the 3D camera systems internally and produce the left and right pictures themselves, just like Avatar: The Game.
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Zalman drops Nvidia stereo driver support for new monito

Post by cybereality »

Arrg, the day every Zalman owner feared was coming. I'm not surprised. Who knows what kind of ridiculous fees Nvidia was charging. I mean, I think they have the right to charge something, but I would much rather it be charged to the end user, like Nvidia 3DTV Play or the iZ3D driver does. I think the worst part about this is that Nvidia hasn't released a Zalman driver for the GTX 400 series, so now it is 2007 all over again! Nvidia with their proprietary nonsense. I really hope ATI can do something in this space to give them a little competition. It's too much.

Also, all this talk about game developers adding native support is bogus. Although they did that for Avatar, I highly doubt we will see more of this anytime soon if at all. If we see any 3D support it will probably be from devs that got paid by Nvidia to optimize the game for their drivers (as we have seen recently). I think we will have a 3D gaming standard before we see game devs programming for a dozen different formats.

Oh, and that 24" monitor sounds pretty good, I might want one.
tritosine5G
Terrif-eying the Ladies!
Posts: 894
Joined: Wed Mar 17, 2010 9:35 am
Location: As far from Hold Display guys as possible!!! ^2

Re: Zalman drops Nvidia stereo driver support for new monito

Post by tritosine5G »

Don't forget where the money is for Nvidia: Quadro.
Quadro has edge blending.

I hope that feature comes with 3D Vision Surround. If you have a polarised projector setup with two projectors already, you would be mad not to go for edge blending and a 3:1 (curved) screen as a secondary setup. This is just not comparable to Zalman's toys; it can be used in military complexes.
http://translate.googleusercontent.com/ ... zcqArpu0kQ
I don't care who gives us edge blending, I'm using that. Such commercial software is 1700 USD.
-Biased for 0 Gen HMD's to hell and back must be one hundred percent hell bent bias!
ssiu
Binocular Vision CONFIRMED!
Posts: 320
Joined: Tue May 15, 2007 8:11 am

Re: Zalman drops Nvidia stereo driver support for new monito

Post by ssiu »

I wish Zalman could tell us (approximately) when they will be available and how much.
DmitryKo
Diamond Eyed Freakazoid!
Posts: 776
Joined: Tue Jan 08, 2008 2:25 am
Location: Moscow, Russia

Re: Zalman drops Nvidia stereo driver support for new monito

Post by DmitryKo »

ssiu wrote:I wish Zalman could tell us (approximately) when they will be available and how much.
Zalman ZM-M240W has already been released; as for the rest, a 24" IPS monitor is going to cost well over $1200, and 32" models are "professional" displays based on TV panels.
cybereality wrote:the day every Zalman owner feared was coming. I'm not surprised
Aren't you?

cybereality wrote:all this talk about game developers adding native support is bogus. Although they did that for Avatar, I highly doubt we will see more of this anytime soon if at all.

I think we will have a 3D gaming standard before we see game devs programming for a dozen different formats
Exactly.

It's not even about display formats. Without a proper stereoscopic driver and API, developers can do little more than waste resources on performing multiple rendering passes; a proper API would allow better use of the available resources. If the API and GPUs supported stereo render targets, there would be no need to pass all the geometry down the rendering path twice to create a stereo view.

Also, some effects are just impossible without proper driver/API support, things like in-game stereoscopic video screens - I don't think these are possible without stereo texture support, and ATI/AMD doesn't have it, because providing it amounts to nothing short of implementing a full-featured stereoscopic driver. Hardware companies just can't pass this burden to game developers.

Look at Sony: they first made a full-featured stereo 3D version of the SDK and then released a firmware ("driver") update. If ATI/AMD are serious about stereo gaming, they can't rely on 3rd party middleware that is limited by various technological obstacles and licensing terms.
DmitryKo
Diamond Eyed Freakazoid!
Posts: 776
Joined: Tue Jan 08, 2008 2:25 am
Location: Moscow, Russia

Re: Zalman drops Nvidia stereo driver support for new monito

Post by DmitryKo »

BlackShark wrote:you can actually produce and transmit the picture without drivers, straight from the application, be it HDMI frame packing, side-by-side or top-bottom.
No, you can't. Both DisplayPort and HDMI 1.4 use system control messages for signalling the formats, which are separate from the active video signal, so preparing the correct picture is not enough; the video driver has to provide low-level access to the control protocols.
I don't really care about 120 Hz page flipping; it breaks a fundamental rule of consumer digital displays that separates the source generator from the display device. I consider it a temporary solution (one which started 10 years ago) until a proper standard for 1080p60 is available (DisplayPort? HDMI 1.4 optional mode?)
There is confusion about the terminology.

The more technically correct term for the 120 Hz stereo video signal is either "frame alternative" or "frame sequential". In practice this format is essentially the same as the top/bottom and "frame packing" formats, and it provides the best processing latency for 120 Hz displays.

Displays are not doing the "page flipping"; the video card's framebuffers are. The term is a reference to the good old days when the programmer had to actually change a hardware pointer to the video buffer memory page, which was mapped into CPU memory space.

All current 3D APIs use at least double buffering. The back buffer is used for rendering the next frame, while the front buffer contains the current frame and is accessed by the RAMDAC or TMDS transmitter. When the next frame is ready, the API makes the driver switch the buffers. This way, the video card can render, say, 1 frame per minute, but the video monitor will still be updated according to its refresh rate of 60 to 75 Hz. Quad buffering is an extension of this scheme - there are two front buffers and two back buffers, and when the next stereo frame is ready, the back buffers are presented as the front buffers.
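To make the quad buffering part concrete, here is a rough sketch of what it looks like from the application side in OpenGL (assuming a context created with a stereo-capable pixel format; Scene, Camera and swap_buffers() are placeholders for the application's own code, not a real API):

#include <GL/gl.h>

// Sketch only: Scene, Camera and swap_buffers() stand in for the
// application's own scene management and windowing code
// (swap_buffers would wrap SwapBuffers / glXSwapBuffers).
void render_stereo_frame(const Scene& scene, const Camera& cam)
{
    // Left eye goes into the left back buffer.
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    scene.draw(cam.left_eye_view());

    // Right eye goes into the right back buffer.
    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    scene.draw(cam.right_eye_view());

    // A single swap presents both back buffers; the driver then scans the
    // two front buffers out alternately at the display refresh rate
    // (e.g. 120 Hz frame sequential).
    swap_buffers();
}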


I don't know about DisplayPort though: isn't there any basic stereo 3D support in DP 1.1, or has everything been standardized in DP 1.2? Do the ATI graphics drivers provide DP 3D outputs?
DisplayPort 1.1a supports 1080p frame sequential stereo; DisplayPort 1.2 adds additional video formats. I don't think ATI bothered to support it, since there are no 120 Hz displays with DisplayPort connectors.

they'll deal with the 3D camera systems internally, produce a left and right picture themselves just like Avatar : the Game
Does Avatar: The Game feature those cool stereoscopic video tablets which were shown in the movie?
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Re: Zalman drops Nvidia stereo driver support for new monito

Post by BlackShark »

DmitryKo wrote:It's not even about display formats. Without a proper stereoscopic driver and API, developers can do little more than waste resources on performing multiple rendering passes; a proper API would allow better use of the available resources. If the API and GPUs supported stereo render targets, there would be no need to pass all the geometry down the rendering path twice to create a stereo view.

Also, some effects are just impossible without proper driver/API support, things like in-game stereoscopic video screens - I don't think these are possible without stereo texture support, and ATI/AMD doesn't have it, because providing it amounts to nothing short of implementing a full-featured stereoscopic driver. Hardware companies just can't pass this burden to game developers.

Look at Sony: they first made a full-featured stereo 3D version of the SDK and then released a firmware ("driver") update. If ATI/AMD are serious about stereo gaming, they can't rely on 3rd party middleware that is limited by various technological obstacles and licensing terms.

[...]

Does Avatar: The Game feature those cool stereoscopic video tablets which were shown in the movie?
You must be kidding right ?
I've seen stereoscopic video screens working almost perfectly using the iZ3D driver in Half-Life 2 / Episode One / Episode Two and in Devil May Cry 4, and they worked in 3D. Separation and convergence weren't perfect, but they were 3D, and with proper camera tuning they'd look just as good as the rest of the world.
If the game developers were to deal with the stereo themselves they can easily correct any issue. There is not much that can't be done with DX9, let alone DX10 and 11. Stereoscopic textures do not exist in the DirectX and OpenGL APIs, but it's quite easy to emulate them by using two separate textures for the two cameras.
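A minimal sketch of that emulation (OpenGL used purely for illustration; the texture handles and draw_screen_quad() are hypothetical placeholders for whatever the engine uses):

#include <GL/gl.h>

// Two ordinary textures holding the left-eye and right-eye views of the
// in-game video screen (hypothetical handles).
GLuint screen_tex_left = 0, screen_tex_right = 0;

void draw_ingame_screen(bool left_eye)
{
    // Same quad, same shader; only the bound texture changes per eye pass,
    // which is all a "stereo texture" really needs to be.
    glBindTexture(GL_TEXTURE_2D, left_eye ? screen_tex_left : screen_tex_right);
    draw_screen_quad();  // placeholder for the engine's own draw call
}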

In Avatar: The Game, all screens are static textures (and they're blurry); the few moving video screens in the game are pre-recorded 2D videos with highly visible compression artefacts. It's obvious that the developers didn't consider these displays important (especially when you see the low-resolution textures on them). They could have made them stereo 3D, but to me it seems clear they just didn't care about these screens.
They did make the volumetric 3D table though... well, sort of: they used a few semi-transparent textured polygons.

Driver-level optimisations would save a little bit of CPU power but would probably not save much on the GPU side. No matter how you look at it, if you want to draw two polygons you have to spend the time to draw, rasterize, texture and shade those two polygons.
It's not for nothing that Sony, even with their exclusive low-level control and over two years of R&D, still have to reduce the resolution and framerate in 3D to keep the geometry generation and number of shader operations within the limits of the RSX GPU.
I don't know what Sony did with their 3D firmware update, but my guess is that it just adds the HDMI 3D output signalling and a few example libraries to help developers convert their games quicker, with pre-made stereo-camera-related functions.

The real optimisations in stereo 3D won't come from the hardware side: they will come from software and from game developers' ability to build clever stereo 3D engines that use shortcuts to avoid drawing unnecessary 3D geometry. Current 3D engines use the brute force technique: draw everything twice. A hybrid engine could draw some objects in mono plus a lateral shift for objects far in the background, and only recompute the shaders if necessary, while limiting the drawing of double geometry to objects close to the camera, where the trick would be visible.
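As a back-of-the-envelope way of deciding when that mono-plus-shift trick is invisible (a sketch assuming simple parallel-axis cameras with a convergence plane; the names and the half-pixel threshold are illustrative):

#include <cmath>

// Screen-space parallax of a point at 'depth', for eye separation 'eye_sep'
// and zero-parallax (convergence) distance 'conv'.  Parallax tends towards
// the full eye separation as depth goes to infinity.
double screen_parallax(double eye_sep, double conv, double depth)
{
    return eye_sep * (1.0 - conv / depth);
}

// If an object is rendered once and simply shifted by the at-infinity
// parallax, the residual error is the difference below.  Once that error
// drops under about half a pixel, the shortcut is effectively invisible.
bool mono_shift_is_safe(double eye_sep, double conv, double depth,
                        double pixels_per_unit, double tolerance_px = 0.5)
{
    double error_px = std::fabs(screen_parallax(eye_sep, conv, depth) - eye_sep)
                      * pixels_per_unit;
    return error_px < tolerance_px;
}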
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Zalman drops Nvidia stereo driver support for new monito

Post by cybereality »

DmitryKo wrote:
cybereality wrote:the day every Zalman owner feared was coming. I'm not surprised
Aren't you?
Well, I won't lie. I was keeping the hope alive that Nvidia would continue supporting Zalman, but deep down inside I knew it was only a matter of time. I'm still hoping there are a few months left in their yearly contract, enough for Nvidia to release a GTX 400 compatible Zalman driver. That would hold me over until something better comes along. Otherwise I might just start buying ATI! Not sure if I want to buy back into Nvidia's stranglehold.
ssiu
Binocular Vision CONFIRMED!
Posts: 320
Joined: Tue May 15, 2007 8:11 am

Re: Zalman drops Nvidia stereo driver support for new monito

Post by ssiu »

DmitryKo wrote:
ssiu wrote:I wish Zalman could tell us (approximately) when they will be available and how much.
Zalman ZM-M240W has already been released; as for the rest, a 24" IPS monitor is going to cost well over $1200, and 32" models are "professional" displays based on TV panels.
Zalman can certainly choose to price them as professional items, but I hope they'll be sensible and won't do that. From the OP's PDF, page 2 lists "The Most Affordable S3D Display" as a feature, and page 15 has "Simple 4 Step Process" as a Core Advantage (which a buyer only cares about if the cost advantage is passed along to him).

The 3D premium for the regular 24" Zalman model is $200-$300, and it shouldn't need to cost any more for the IPS model. An HP ZR24w (24" 1920x1200 S-IPS) is $425 list price; the Zalman 24" IPS shouldn't be more than ~$800; certainly not "well over $1200".

A name-brand 32" 1080p 10-bit panel HDTV can be had for ~$600; how much should the 3D premium be? I'd say the 3D Trimon 32" 1080p should be $1000 to $1200. Keep in mind a 46" Samsung (non-LED backlit) 3DTV is $1700 and this is 32"; and historically the polarized 3D monitors (Zalman, iZ3D) have needed to be priced similarly to an equivalent shutter-glasses 3D model (without the cost of glasses) to be competitive.

Zalman is known as a consumer brand company, not a professional brand. Zalman's website http://www.zalman.co.kr/ENG/3D/work01.asp mentions "3D TV (coming soon)", and the term "TV" sounds like a consumer item to me.
Dom
Diamond Eyed Freakazoid!
Posts: 824
Joined: Sun Oct 19, 2008 12:30 pm

Re: Zalman drops Nvidia stereo driver support for new monito

Post by Dom »

It beats me why the newest 3DTVs don't just have the driver installed in the display unit itself. Then you'd just use the remote control to adjust separation and convergence (or leave it on auto) and such. The kernels in manufactured set-top units, unlike Windows/Mac/Linux, usually don't have the lag, BSODs or heat shutdowns that PCs do. Just like how 3D Blu-ray to a 3DTV works: encoded 3D content goes out as two views (left and right) and the 3DTV separates them. This should be available on any TV set that can take in a coherent video signal, either as two views or by interpolating/transcoding the left and right views from the new cameras that are out, with descreen filters, and even video mapped onto DirectX polygons as video materials. And most of this should work by default over any connection (VGA, HDMI, DP, composite) for whatever resolution or video quality you need, progressive or interlaced; later on, wireless computing for video and wireless KVM/stereo 3D signals will be supreme.

Nobody thinks all this kind of stuff works, so it all goes to waste or to a rich company that makes 5 billion a year. It's better than a defective or half-working system that needs to be replaced or returned the month after you buy it. I have noticed this with stereo 3D: when you get something new, another product always comes out right after, and now the new product you bought sucks bags. :(

As long as there is no standstill among stereo 3D companies, that's all that matters.
http://www.cns-nynolyt.com/files/doms-systemspecs.html My System specs In HTML

Cyberia on Youtube
Neil
3D Angel Eyes (Moderator)
Posts: 6882
Joined: Wed Dec 31, 1969 6:00 pm

Re: Zalman drops Nvidia stereo driver support for new monito

Post by Neil »

Hi Guys!

Read this for clarification:

http://www.mtbs3d.com/index.php?option= ... 2&catid=35

Regards,
Neil
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Zalman drops Nvidia stereo driver support for new monito

Post by cybereality »

So that confirms it: the Zalman licensing contract is up. Guess it's just a fantasy that Nvidia will ever support the GTX 400 series on Zalman (but I am still hoping). Good thing I decided against getting a GTX 480 and picked up a used GTX 285 instead.
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Zalman drops Nvidia stereo driver support for new monito

Post by cybereality »

I just got confirmation from Nvidia that the latest Zalman driver (for the 19" and 22" models) DOES work with the GTX 400 series. You need the 197.45 Zalman driver and then the 197.75 video drivers. I wish I had known that earlier, because I just purchased a GTX 285 thinking it was the last card with Zalman support. :cry:
Dom
Diamond Eyed Freakazoid!
Posts: 824
Joined: Sun Oct 19, 2008 12:30 pm

Re: Zalman drops Nvidia stereo driver support for new monito

Post by Dom »

I hear ya, Cybereality. There's always some catch with purchases and getting correct, current information. If you really want your GTX 480, then I suggest you re-sell your GTX 285 for more than you paid. I read your other posts, so why not sell it for like $240-250 or something? I have an Nvidia GTX 285 and I love it. It's been going good for almost a year now, and I have been waiting for prices on it to go down, from $400 to $190 I hope. I would offer to buy it from you for like $230-240. I just don't want to ruin your new gaming experience.
http://www.cns-nynolyt.com/files/doms-systemspecs.html My System specs In HTML

Cyberia on Youtube
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Zalman drops Nvidia stereo driver support for new monito

Post by cybereality »

It's not a big deal really, I am sure the GTX 285 will be enough for my needs. I'm more than happy to play older games at medium settings as long as they are in 3D.
DmitryKo
Diamond Eyed Freakazoid!
Posts: 776
Joined: Tue Jan 08, 2008 2:25 am
Location: Moscow, Russia

Re: Zalman drops Nvidia stereo driver support for new monito

Post by DmitryKo »

BlackShark wrote:
DmitryKo wrote:Also, some effects are just impossible without proper driver/API support, things like in-game stereoscopic video screens - I don't think these are possible without stereo texture support, and ATI/AMD doesn't have it, because providing it amounts to nothing short of implementing a full-featured stereoscopic driver. Hardware companies just can't pass this burden to game developers.
You must be kidding right ?
Why, you think I'm a clown? No, I'm not kidding.
Stereoscopic textures do not exist in the DirectX and OpenGL APIs, but it's quite easy to emulate them by using two separate textures for the two cameras.
Rendering two views into two separate textures is not an ideal solution, since it requires multiple geometry passes, and multiple render targets (MRT) have practical quality limitations which make them usable only for rough reflections and shadows.

http://msdn.microsoft.com/en-us/library ... spx#ID4E4F
http://msdn.microsoft.com/en-us/library ... S.85).aspx
Driver-level optimisations would save a little bit of CPU power but would probably not save much on the GPU side. No matter how you look at it, if you want to draw two polygons you have to spend the time to draw, rasterize, texture and shade these two polygons.

It's not for nothing that Sony, even with their exclusive low level control and over two years of R&D, still have to reduce the resolution and framerate in 3D to keep the geometry generation and number of shader operations within the limits of the RSX GPU.
If the API and the NV40/7800GTS hardware had orthogonal stereo support, that is, supported simultaneous rendering to two separate and equal framebuffers, the amount of overhead would decrease considerably, since you wouldn't have to pass the geometry twice and probably wouldn't have to calculate separate normals for each view. As of now, no graphics hardware really supports simultaneous rendering of two views AFAIK, even though Direct3D 7 had some rudimentary support, which was removed in Direct3D 8.

http://msdn.microsoft.com/en-us/library/bb172506
The real optimisations in stereo3D won't come from the hardware side : it will come from software and the game developers ability to do clever stereo 3D engines that use shortcuts to avoid drawing unnecessary 3D geometry : current 3D engine use brute force techniques : draw everything twice. A hybrid engine could draw some objects in mono + lateral shift for objects far in the background and only recompute the shaders if necessary while limiting drawing double geometry to objects close to the camera where the trick would be visible.
It's not as simple as "draw everything twice". AFAIK, NVidia stereo drivers automagically modify the geometry and pixel shaders provided by the game to account for an additional view.

Rough view frustum culling in the game has been recommended practice for the last decade, but rendering far-away objects like the skybox in "mono" would actually result in a visual artifact, as outlined in the S3D Gaming Anomaly guide. They could use simpler shading to output the same color to both views; however, far-away objects still need to be rendered in stereo and have the appropriate offset (the eye separation distance at infinity).
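For reference, the per-vertex adjustment NVIDIA describes for its automatic mode boils down (roughly, and as a sketch rather than their actual code) to a horizontal offset applied in clip space, where eye is -1 for the left view and +1 for the right:

// Sketch of the commonly cited clip-space adjustment: points at the
// convergence depth (w == convergence) get no offset, while distant points
// approach a constant separation - the "eye separation at infinity"
// mentioned above.
float stereo_offset_x(float x_clip, float w_clip, float eye,
                      float separation, float convergence)
{
    return x_clip + eye * separation * (w_clip - convergence);
}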
If the game developers were to deal with the stereo themselves they can easily correct any issue.
Game developers can only make use of hardware features that are available to them. Maybe in the future we will see graphics cards evolve into fully programmable parallel cores on the main CPU, programmable in pure C++; this is what Epic Games' Tim Sweeney has been evangelising for the last 5 or so years as a "return to software rendering".

As of now, specialized graphics hardware is still multiple orders of magnitude faster than general purpose CPUs, so we are still 20-30 years away from rendering RenderMan movie-quality shaders in real-time.
In Avatar: The Game, all screens are static textures (and they're blurry)
Well, if the creators of the most advertised stereoscopic game to date didn't have the resources for a thorough implementation, how can you expect it from your vanilla games?
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Re: Zalman drops Nvidia stereo driver support for new monito

Post by BlackShark »

What about dropping that idea of yours that developers must absolutely draw both views with single calls through the limited multiple-render-target function, instead of just sticking to the fully flexible, fully capable, fully supported and unrestricted method: drawing the two views in succession?

That's how almost all games do their multi-view renders (2 or 4 player games on the same screen).

Yes of course you have to make the cpu calls twice... so what ? That's the brute force method (non-optimized), and it takes exactly 2x the GPU power, while relative to the GPU the CPU actually has less work to do (it doesn't need to update gameplay physics and AI).
Just make one render normally and store the back buffer in a texture, make the second render with the updated camera position and store the back buffer in another texture, then apply the display function and flush (via the display API or a native picture transform for the display).
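In code, that recipe looks roughly like this (a sketch using OpenGL framebuffer objects; Scene, Camera and composite_for_display() are placeholders for the engine's own code):

#include <GL/glew.h>   // or any loader exposing GL 3.x entry points

// Brute-force stereo: render the whole scene twice into two offscreen
// targets, then composite for whatever the display expects
// (side-by-side, top/bottom, row interleaved, page flipped...).
void render_stereo(const Scene& scene, const Camera& cam,
                   GLuint fbo_left, GLuint fbo_right)
{
    // Pass 1: left eye.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo_left);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    scene.draw(cam.offset(-0.5f * cam.eye_separation()));

    // Pass 2: right eye, same scene, shifted camera.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo_right);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    scene.draw(cam.offset(+0.5f * cam.eye_separation()));

    // Final pass: back to the default framebuffer and composite.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    composite_for_display(fbo_left, fbo_right);
}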

Here you go, a fully functioning native stereo 3D Crysis.
Why do you think they made the 3D engine fully up and running in only 2 days (said by Crytek at the siggraph 2009 conference).
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
DmitryKo
Diamond Eyed Freakazoid!
Posts: 776
Joined: Tue Jan 08, 2008 2:25 am
Location: Moscow, Russia

Re: Zalman drops Nvidia stereo driver support for new monito

Post by DmitryKo »

BlackShark wrote:Yes of course you have to make the cpu calls twice... so what ?
Yeah, sure. Even though most recent games are only able to render 20-30 fps at 1920x1080 even with the latest and most expensive graphics hardware, let's just take a brute force approach. So what (c) BlackShark. Why don't you propose to deprecate pixel shaders and just render everything with multipass texturing, like it was 10 years ago?
Why do you think they made the 3D engine fully up and running in only 2 days (said by Crytek at the siggraph 2009 conference)
I think we've talked about this before. They can add a new feature to the engine in a few days and then spend months tuning it up and adjusting the content.
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Re: Zalman drops Nvidia stereo driver support for new monito

Post by BlackShark »

DmitryKo wrote:
BlackShark wrote:Yes of course you have to make the cpu calls twice... so what ?
Yeah, sure. Even though most recent games are only able to render 20-30 fps at 1920x1080 even with the latest and most expensive graphics hardware, let's just take a brute force approach. So what (c) BlackShark. Why don't you propose to deprecate pixel shaders and just render everything with multipass texturing, like it was 10 years ago?
Why do you think they made the 3D engine fully up and running in only 2 days (said by Crytek at the siggraph 2009 conference)
I think we've talked about this before. They can add a new feature to the engine in a few days and then spend months tuning it up and adjusting the content.
You're still stuck on that "it has to be done by the driver" mentality. We're speaking about native 3D games here.
Games in which the game developers take 3D into consideration from the ground up and take the framerate loss into account in their performance targets (they make sure the game runs at 60 fps in 2D, so that hardcore gamers with new PCs can expect at least 30 fps in 3D, and a solid 60 fps if they have a killer machine with multi-GPU).

Game developers currently have a great deal of freedom to program their game engines the way they want. They can already use the entire library of DX9/10/11 functions at will, that is, only if they render the left-eye and right-eye images natively inside their game engine. When developers do so, they solve 99% of the problems (HUDs, shadows, shaders, lens flares, etc.). All the screen-space bugs that we have with drivers are instantly gone, because the developers took the time to do a proper full-frame render for each eye.
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
DmitryKo
Diamond Eyed Freakazoid!
Posts: 776
Joined: Tue Jan 08, 2008 2:25 am
Location: Moscow, Russia

Re: Zalman drops Nvidia stereo driver support for new monito

Post by DmitryKo »

BlackShark wrote: You're still stuck on that "it has to be done by the driver" mentality. We're speaking about native 3D games here.

Game developers currently have a great deal of freedom to program their game engines the way they want. They can already use the entire library of DX9/10/11 functions at will, that is, only if they render the left-eye and right-eye images natively inside their game engine.
I don't quite agree with your point. Again, by this logic, game programmers had all the freedom they needed even with first-generation hardware that didn't support pixel shaders, since they could use multiple textures and multipassing even back in 1998.

If GPU makers provide developers with new functionality such as penalty-free 4x antialiasing, like they do on the Xbox 360, or hardware-assisted, fully orthogonal multi-view rendering, how can you argue that this is not important?
Games in which the game developers take 3D into consideration from the ground up and take the framerate loss into account in their performance targets (they make sure the game runs at 60 fps in 2D, so that hardcore gamers with new PCs can expect at least 30 fps in 3D, and a solid 60 fps if they have a killer machine with multi-GPU).
I don't think it's such a good idea. A brute force approach, where stereoscopic games trade image quality and automatically revert to substantially lower graphics settings or a lower resolution, could only lead to a bunch of mediocre-looking stereoscopic games, customer dissatisfaction, and a loss of interest in stereoscopic gaming.

As I said, most new and visually intensive games can choke even on the latest and greatest hardware. Considering the PC tradition of full user control over game settings and the broad variation in graphics performance among PCs, most developers already cap their typical graphics settings for a worst-case scenario of a $100 entry-level card and an entry-level Pentium/Celeron CPU. As a rule, a mid-level graphics card that allows comfortable play at the highest graphics settings typically arrives 1.5-2 years after the game's original release date. In this environment, a 30-50% performance hit in stereoscopic mode is way too much.
tritosine5G
Terrif-eying the Ladies!
Posts: 894
Joined: Wed Mar 17, 2010 9:35 am
Location: As far from Hold Display guys as possible!!! ^2

Re: Zalman drops Nvidia stereo driver support for new monito

Post by tritosine5G »

4x MSAA is old & bad.

http://visual-computing.intel-research. ... s/mlaa.pdf

The next console generation comes with the equivalent of 1024x MSAA worth of analytical anti-aliasing. Full HD will be more overrated than ever :lol:
-Biased for 0 Gen HMD's to hell and back must be one hundred percent hell bent bias!
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Re: Zalman drops Nvidia stereo driver support for new monito

Post by BlackShark »

There is no such thing as penalty-free 4x AA, especially on consoles, where most game developers prefer to leave hardware AA off to save a few megabytes of RAM (which is extremely limited on consoles). In newer console games, developers often replace hardware AA with a post-process shader that smooths visible edges.
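Rough numbers to put "a few megabytes" in perspective (my assumptions: a 1280x720 target, 32-bit colour, 32-bit depth/stencil):

1280 x 720 x 4 bytes ≈ 3.5 MB per surface
with 4x MSAA: ~14 MB of colour samples + ~14 MB of depth samples ≈ 28 MB

which is, for example, well over the Xbox 360's 10 MB of eDRAM, so enabling 4x MSAA at 720p there forces the extra complexity of tiled rendering.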

GPUs had almost no programmability at all back in the day; DX8 added a small amount of programmability, but DX9 was the one that freed the shaders, and that's why it lasted so long and why so many new games look so similar between DX9, DX10 and even DX11 (just look at Colin McRae: DiRT 2, or Crysis with the modified config files that enable very high graphics in DX9 mode). In terms of shaders you can do almost everything with DX9c.
Shaders were not possible in hardware back in 1998; it had to be done in software on the CPU (like in the game Outcast, for example). You cannot replace a shader with a shader-less multi-layer texture, no matter how sophisticated it can be (you'd have to generate a new texture on the CPU for each frame).

I'm not against adding new and more optimized functions to do stuff more efficiently, but when the effect is already possible today and the gain is so small, why make it seem as if it were so important: it's not!

The only way to save performance in stereo 3D is in the game engine programming : to render some of the world, shadows, lights differently to try and compute only once all the items that are common between the two eyes and only compute twice what absolutely needs to be done twice.
Each game engine is different : the hardware or the GPU driver cannot do that automatically.
Until then : the brute force method is what is being used today and will stay the best way to go for the first generation of native stereoscopic games.

Having double the GPU load is not that much of a problem if the game developers know it in advance and design their games accordingly: Avatar: The Game did that and it looks absolutely gorgeous both in 2D and S3D. Sure it's not Crysis or GTA4/PC with their gigantic high detail outdoor environments with infinite visibility (which bring any CPU and GPU to a crawl), but at least it's still pretty and it's fully playable.
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
Dom
Diamond Eyed Freakazoid!
Posts: 824
Joined: Sun Oct 19, 2008 12:30 pm

Re: Zalman drops Nvidia stereo driver support for new monito

Post by Dom »

You guys are on the right track in that both software and hardware need to be optimized and developed a lot more for gaming to increase realism. GPU makers surely need a dedicated processing chip on the graphics card, or should use a whole video card solely for the processing, just like how PhysX is now; only use that for 3D. PhysX, by the way, is now a chip on the graphics card, and it's almost a waste using another video card for it. Sure it adds more to the game, but what's the new chip for then? Even if you have two SLI cards and a third for 3D processing, then why not?

However the game is coded, all new PC games basically need to run at 1024x768x32 to get a decent 120 fps. What, this 30 fps or 60 fps? That's not good enough. And the game is also going to run slow on only 1 core. I went to my local computer store today and the tech/owner says only Photoshop uses four cores and most software doesn't even use 2 cores, so this is going to impact the 3D processing too. Neil should make a list of recommended facts that we gather for performance-wise gaming in stereo 3D, because a 50 percent drop in fps and a choppy game is always a dead-end street.

The main purpose of stereo 3D is the effect, and to achieve this effect you have to give proper lighting to all fields of view, even shadows, with dark light and twilight rather than dead dark where you can't see. Even use the human eye as an example: adjusting from light to bright, bright to dark, and dark to light rays. Once developers deliver a steady streaming environment with everything auto-adjusting, then 3D will take off in gaming. The more the user has to tinker with the experience, the less appealing it's going to be, and a waste. Also, what about setting up convergence and separation for particular objects and distances, given the fact that more separation reduces the fps too? A little here and there is a far lot less than a whole bunch across the entire display or API.
http://www.cns-nynolyt.com/files/doms-systemspecs.html My System specs In HTML

Cyberia on Youtube
DmitryKo
Diamond Eyed Freakazoid!
Posts: 776
Joined: Tue Jan 08, 2008 2:25 am
Location: Moscow, Russia

Re: Zalman drops Nvidia stereo driver support for new monito

Post by DmitryKo »

BlackShark wrote:There is no such thing as penalty-free 4x AA, especially on consoles, where most game developers prefer to leave hardware AA off to save a few megabytes of RAM
Whatever. It looks like in your reality, Xbox 360 does not exist, so just continue to ignore everything I'm saying.
so many new games look so similar between DX9, DX10 and even DX11
World in Conflict does look noticeably nicer in DX10 rendering mode. Most current games were developed with D3D9 features in mind and mostly ported to D3D10 either at the last minute or post-release. As more and more native D3D10 games and benchmarks appear, the difference will become more noticeable.
You cannot replace a shader with a shader-less multi-layer texture, no matter how sophisticated it can be (you'd have to generate a new texture on the CPU for each frame)
You can use a bunch of pre-generated low-res textures and do a crazy amount of multitexturing/multipassing. With the raw multitexturing power of current cards, the result will be the same.
I'm not against adding new and more optimized functions to do stuff more efficiently, but when the effect is already possible today and the gain is so small, why make it seem as if it were so important
What exact gain is "so small"? Are there any practical realizations of fully orthogonal two-view rendering in the hardware which were benchmarked and showed this "small" gain? It's only your perception that the gain will be small for some unspecified reason.
The only way to save performance in stereo 3D is in the game engine programming : to render some of the world, shadows, lights differently to try and compute only once all the items that are common between the two eyes and only compute twice what absolutely needs to be done twice.
I don't understand why this feature can't be implemented at the graphics API level. Then again, no matter how far away the object is, it still needs to be rendered into two separate framebuffers, so the savings will be minimal. It's detailed close-up objects that create the highest load on graphics hardware.
Each game engine is different : the hardware or the GPU driver cannot do that automatically.
Graphics hardware does not accelerate game engines. What it does is accelerate graphics rendering of polygonal models, and I don't think there are modern game engines which use anything other than polygonal models.
it looks absolutely gorgeous both in 2D and S3D. Sure it's not Crysis or GTA4/PC with their gigantic high detail outdoor environments
IMHO Avatar has a fairly simple D3D9-level graphics engine; the "wow" factor comes from the artwork and storyline.
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Re: Zalman drops Nvidia stereo driver support for new monito

Post by BlackShark »

DmitryKo wrote:
BlackShark wrote:There is no such thing as penalty-free 4x AA, especially on consoles, where most game developers prefer to leave hardware AA off to save a few megabytes of RAM
Whatever. It looks like in your reality, Xbox 360 does not exist, so just continue to ignore everything I'm saying.
I've read numerous articles about hardware AA, for both PC and consoles, and every single one of them stated the same fact: hardware AA is not free; it requires more RAM to render. And this RAM is extremely limited on consoles, which is why so many console games do not use this "free" AA: they need that RAM for other things.
DmitryKo wrote:
You cannot replace a shader with a shader-less multi-layer texture, no matter how sophisticated it can be (you'd have to generate a new texture on the CPU for each frame)
You can use a bunch of pre-generated low-res textures and do a crazy amount of multitexturing/multipassing. With the raw multitexturing power of current cards, the result will be the same.
so many new games look so similar between DX9, DX10 and even DX11
World in Conflict does look noticeably nicer in DX10 rendering mode. Most current games were developed with D3D9 features in mind and mostly ported to D3D10 either at the last minute or post-release. As more and more native D3D10 games and benchmarks appear, the difference will become more noticeable.
Then you said it: doing the equivalent of a shader would require a "crazy" amount of complex multi-texturing and multiple passes, not to mention the possibility of running into a hardware limitation of the GPU that would tremendously slow down the added passes; for myself I personally can't even think how it could really be done.
Reproducing a shader with multi-pass multi-texturing looks anything but simple to me. Unlike stereoscopy, which can be done easily if implemented by the game developer.

Hardware T&L with a few fixed shaders for some special effects (DX7) and fully programmable shaders (DX9) are completely different beasts.
They are the difference between having an effect that works in real time at 60 fps and the same effect stalling the entire rendering process.
They are the kind of new feature that makes the difference between an effect that makes you say "hey, that's a nice improvement" and one that makes you say "Wow! That just couldn't be done before!"
In terms of visual quality and new effects:
DX7->DX9 is a complete revolution;
DX9->DX10/11 are step-by-step small improvements.
For instance, the only major new features that DX11 has over DX9 are standardized hardware tessellation and DirectCompute. They are not features that will unlock revolutionary new effects, but they will allow developers to do things they could already do before, just differently.

DX10 has been here for more than 3 years now, and I have yet to see one game that really stuns me in this aspect : there have been some really beautiful games but nothing like the ass kicking that I experienced when I switched from DX8 to DX9, the differences were like night and day.
DmitryKo wrote:
I'm not against adding new and more optimized functions to do stuff more efficiently, but when the effect is already possible today and the gain is so small, why make it seem as if it were so important
What exact gain is "so small"? Are there any practical realizations of fully orthogonal two-view rendering in the hardware which were benchmarked and showed this "small" gain? It's only your perception that the gain will be small for some unspecified reason.
The only way to save performance in stereo 3D is in the game engine programming : to render some of the world, shadows, lights differently to try and compute only once all the items that are common between the two eyes and only compute twice what absolutely needs to be done twice.
I don't understand why this feature can't be implemented at the graphics API level. Then again, no matter how far away the object is, it still needs to be rendered into two separate framebuffers, so the savings will be minimal. It's detailed close-up objects that create the highest load on graphics hardware.
I strongly disagree.
Large outdoor environments with very long visibility do take a significant amount of resources. LOD 3D model swaps and tessellation can't do miracles: if you can't replace far objects with a simple 2D texture and are stuck having to draw your objects with polygons, your scene will be heavy.

The gain from a dedicated stereo 3D API would only be some of the programmer's time spent making a stereoscopic 3D game engine; the actual raw performance gain will be minimal.
No matter how you look at it, geometry needs to be drawn, textured, shaded and post-processed. With current game engines running on the current hardware, the amount of overhead to pass the draw commands is very small; what takes most of the time is the work that needs to be done on the GPU itself.
No matter how you look at it : if you want to draw a scene twice (once for each eye) by using polygons, the GPU will have to do the entire work twice. I don't see any way around this, other than for developers to modify the way they render their scenes to spare as much computation as possible by re-using pre-computed assets that are just copied from one eye view to the other with small transforms (copying parts of the left eye backbuffer to the right eye backbuffer) in order to try and save as much time as possible over the entire scene re-computation.
DmitryKo wrote:
Each game engine is different : the hardware or the GPU driver cannot do that automatically.
Graphics hardware does not accelerate game engines. What it does is accelerate graphics rendering of polygonal models, and I don't think there are modern game engines which use anything other than polygonal models.
I expressed myself poorly. I meant that each game uses shaders differently, uses the different functions available differently, draws scenes differently, wants to prioritize certain geometry and shaders over others for artistic purposes, and uses hacks to make scenes render faster. This is the reason why Nvidia, iZ3D and DDD have such a hard time making stereoscopic 3D drivers, and why there are so many S3D artefacts, especially since the introduction of programmable shaders.
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Zalman drops Nvidia stereo driver support for new monito

Post by cybereality »

You know what the perfect solution to all this is? Just require SLI/crossfire for 3D. 2 cards, 2 eyes. Seems simple.
DmitryKo
Diamond Eyed Freakazoid!
Posts: 776
Joined: Tue Jan 08, 2008 2:25 am
Location: Moscow, Russia

Re: Zalman drops Nvidia stereo driver support for new monito

Post by DmitryKo »

cybereality wrote:Just require SLI/crossfire for 3D. 2 cards, 2 eyes
That would limit Stereo 3D support to about 1.5% of the hardcore gaming market, according to steampowered.com/hwsurvey (see "Multi-GPU systems").
DmitryKo
Diamond Eyed Freakazoid!
Posts: 776
Joined: Tue Jan 08, 2008 2:25 am
Location: Moscow, Russia

Re: Zalman drops Nvidia stereo driver support for new monito

Post by DmitryKo »

BlackShark wrote:I've read numerous articles about hardware AA, for both PC and consoles, and every single one of them stated the same fact: hardware AA is not free; it requires more RAM to render
BlackShark, we were talking about rendering performance. For all I know, 4x MSAA is essentially "free" to enable on the Xbox 360. That doesn't mean it comes for free - of course it requires a bigger framebuffer, which may consume more graphics memory, and it incurs a higher part cost because it requires more electric power and consumes silicon space, etc. But the performance drop from enabling 4x MSAA is barely noticeable.
No matter how you look at it : if you want to draw a scene twice (once for each eye) by using polygons, the GPU will have to do the entire work twice.
Did you ever hear about parallel computing? If you have setup and rasterization units which support two-view rendering, you pass the geometry only once and the two views are rasterized in parallel.

What you need is fully orthogonal support for multi-view render targets, so that shaders support two sets of pixel color outputs, depth values, transform matrices etc., and the hardware does not impose limitations on blending modes or AA support. Every single game object is submitted for rendering only once, and the hardware renders the two stereo views in parallel. With this approach you are essentially computing the two color outputs for each view in the same shader, so you can re-use shader computations and intermediate values and save on texture look-ups from memory, so the drawing overhead would be minimal - certainly far less than required by a brute force approach where you just render everything twice.

don't see any way around this, other than for developers to modify the way they render their scenes to spare as much computation as possible by re-using pre-computed assets that are just copied from one eye view to the other with small transforms (copying parts of the left eye backbuffer to the right eye backbuffer) in order to try and save as much time as possible over the entire scene re-computation.
You got the idea; however, you don't need to copy anything from backbuffers. Everything you need for intermediate shader computations can be kept in the roughly 4K temporary registers and the unlimited constant memory available to the shader program. You just compute and output the two final colors to the respective backbuffers.

BlackShark wrote:for myself I personally can't even think how it could really be done. Reproducing a shader with multi-pass multi-texturing looks anything but simple to me
Why not? This is how it was done before shaders, and even today it's sometimes simpler to just apply another texture layer rather than perform complex procedural computations in the shader.
DX9->DX10/11 are step-by-step small improvements. For instance, the only major new features that DX11 has over DX9 are standardized hardware tessellation and DirectCompute. They are not features that will unlock revolutionary new effects, but they will allow developers to do things they could already do before, just differently.
Fully orthogonal common shader execution blocks in Direct3D10 are not quite a "small improvement".
DX10 has been here for more than 3 years now, and I have yet to see one game that really stuns me in this aspect : there have been some really beautiful games but nothing like the ass kicking that I experienced when I switched from DX8 to DX9, the differences were like night and day.
Again, most current games were not designed with Direct3D 10's possibilities from the ground up, because in order to unlock new features you need to adjust the game content and use different asset design and generation tools. When game development takes 3-5 years and costs millions of dollars, game studios don't typically approve redoing the entire game artwork at the end of the cycle in order to use some new and cool graphics effect. Also, Windows Vista's adoption rate has been well below the acceptance threshold - even today, almost half of D3D10-capable cards still run on Windows XP, but this is certainly going to change with Windows 7.

Expect really breakthrough graphics to come in next-generation engines, such as Unreal Engine 4, 3D Mark 2011, etc., which were designed with Direct3D 10/11 features in mind.
Large outdoor environments with very long view distances do take a significant amount of resources. LOD 3D model swaps and tessellation can't do miracles: if you can't replace far objects with a simple 2D texture and are stuck with having to draw your objects with polygons, your scene will be heavy.
Progressive LOD techniques have been known for years - check the Nvidia SDK examples. AFAIK large outdoor scenes typically place a much higher load on the CPU, because game engines have to perform extensive occlusion culling at the scene graph level to avoid stalling the graphics card with the entire level geometry.

http://www.gamasutra.com/view/feature/2 ... _fast_.php
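As a rough illustration of the kind of per-object CPU work mentioned above (a toy sketch with assumed names, not taken from the linked article): reject objects whose bounding sphere falls outside the view frustum, then pick a discrete level of detail by distance, so the GPU is never asked to draw the whole level at full detail.

Code:

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };             // plane: dot(n, p) + d = 0, n points inward
struct Object { Vec3 center; float radius; };  // bounding sphere

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// True if the bounding sphere is at least partly on the inner side of all six planes.
static bool insideFrustum(const Object& o, const Plane planes[6])
{
    for (int i = 0; i < 6; ++i)
        if (dot(planes[i].n, o.center) + planes[i].d < -o.radius)
            return false;
    return true;
}

// Pick a discrete level of detail from the distance to the camera: 0 = full detail.
static int pickLod(const Object& o, Vec3 cam)
{
    const Vec3 v { o.center.x - cam.x, o.center.y - cam.y, o.center.z - cam.z };
    const float dist = std::sqrt(dot(v, v));
    if (dist < 50.0f)  return 0;
    if (dist < 200.0f) return 1;
    return 2;
}

int main()
{
    // A crude stand-in for a frustum: six planes bounding the box [-100,100]^3.
    // A real engine derives the planes from the view-projection matrix instead.
    const Plane planes[6] = {
        {{ 1, 0, 0}, 100}, {{-1, 0, 0}, 100},
        {{ 0, 1, 0}, 100}, {{ 0,-1, 0}, 100},
        {{ 0, 0, 1}, 100}, {{ 0, 0,-1}, 100},
    };
    const Vec3 cam { 0, 0, 0 };
    const Object objects[] = { {{0, 0, 30}, 5}, {{0, 0, 150}, 5}, {{0, 0, 90}, 5} };

    for (const Object& o : objects) {
        if (!insideFrustum(o, planes)) { std::printf("object culled\n"); continue; }
        std::printf("object drawn at LOD %d\n", pickLod(o, cam));
    }
    return 0;
}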

the actual raw performance gain will be minimal
This is your (un)educated guess.
the amount of overhead to pass the draw commands is very small
I did not talk about draw call overhead in the API.
DmitryKo wrote:each game uses shaders differently, uses the different functions available differently, draws scenes differently, wants to prioritize certain geometry and shaders over others for an artistic purpose. This is the reason why Nvidia, iZ3D and DDD have such a hard time making stereoscopic 3D drivers and why there are so many S3D artefacts
The basics of the rendering pipeline as it was designed by Silicon Graphics in the 1980s still remain the same. There are triangles which get transformed and rasterized into pixels. Even though there is some programmability at various stages, there are still projection matrices, frame/back buffers and Z/W buffers.

Yurithebest will correct me if I'm wrong, but most glitches mentioned in the MTBS Game Anomaly Guide are just the product of the programmer placing objects at the wrong depth or using incorrect projection matrices. No stereo driver can automagically correct these errors, because that would require human intelligence.
Post by BlackShark »

DmitryKo wrote:
No matter how you look at it: if you want to draw a scene twice (once for each eye) using polygons, the GPU will have to do the entire work twice.
Have you ever heard of parallel computing? If you have setup and rasterization units which support two-view rendering, you pass the geometry only once and the two views are rasterized in parallel.
Yes I do, and I have also heard that GPU computing is already a massively parallel process.
And I'm not very sure, but I've also heard that in order to do parallel computing you need additional compute units similar or identical to the first ones.

The issue with your logic is that the hardware required to compute the second view's polygons (left and right eye) simultaneously would be so close to the hardware which already does the polygon computing in 2D that it would be silly to expect these units not to be used to increase the raw power of 2D rendering as well. Unless nobody plays mono games anymore, which isn't going to happen any time soon, you will always get very close to a two-fold performance difference between mono and stereo operations, because there is little to no difference between rendering two views simultaneously in one long pass and rendering each view in sequence at twice the speed.
DmitryKo wrote:
for myself I personally can't even think how it could be done really. Reproducing a shader with multi-pass multi-texture looks anything but simple to me
Why not? This is how it was done before shaders, and even today it's sometimes simpler to just apply another texture layer rather than perform complex procedural computations in the shader.
This sounds more like optimizing the available resources - balancing computational power against bandwidth by choosing between computing in real time and using pre-rendered data - than actually being able to produce an effect.
The fact is, there are a number of visual effects that simply were not possible before the introduction of shaders, not even in tech demos, or they had to be done on the CPU: deformable surface reflections (I remember still reflections did exist in hardware, but they were produced by copying mirrored geometry), angle-dependent reflection/transparency for water effects and light glare on curved objects, high-precision dynamic shadows, motion blur and depth of field.
The only games which had these kinds of effects were those with hybrid CPU/GPU engines, which used the CPU to generate all the effects the GPU simply could not do, and the visual difference between the two was simply mind-blowing.
DmitryKo wrote:
DX9 -> DX10/11 are small step-by-step improvements. For instance, the only major new features that DX11 has over DX9 are standardized hardware tessellation and Direct Compute. They are not features that will unlock revolutionary new effects, but they will let developers do some things they could already do before in a different way.
Fully orthogonal common shader execution blocks in Direct3D 10 are not quite a "small improvement".
Given that all DX9 games gained a tremendous performance boost on DX10 cards from this very feature, I don't consider this a benefit from DX10 but rather a benefit from an architecture improvement.

Also, I have a vague recollection of reading an article published when the GeForce 8 was released stating that DX10 did not make this architecture mandatory at the hardware level - it could be emulated in the drivers with a previous architecture and the device would still be DX10 compliant (although it would clearly not run at its best performance). Don't quote me on that, it's really a vague thing I remember reading very quickly while skipping half of the details.
DmitryKo wrote:Expect really breakthrough graphics to come in next-generation engines, such as Unreal Engine 4, 3D Mark 2011, etc., which were designed with Direct3D 10/11 features in mind.
You've got faith. I'm afraid I'm a non-believer: I think the only visible things that will come from DX11 will be hardware tessellation to improve close-ups, and the use of Direct Compute to replace CUDA/PhysX, which won't change much in terms of gameplay in 99% of games, where the difference between the real-time and pre-computed versions of these outrageously massive effects is barely visible.
DmitryKo wrote:
Large outdoor environments with very long view distances do take a significant amount of resources. LOD 3D model swaps and tessellation can't do miracles: if you can't replace far objects with a simple 2D texture and are stuck with having to draw your objects with polygons, your scene will be heavy.
Progressive LOD techniques have been known for years - check the Nvidia SDK examples. AFAIK large outdoor scenes typically place a much higher load on the CPU, because game engines have to perform extensive occlusion culling at the scene graph level to avoid stalling the graphics card with the entire level geometry.

http://www.gamasutra.com/view/feature/2 ... _fast_.php
I see this situation the other way around:
Outdoors are such performance hogs for GPUs that game developers spend extra engine R&D and CPU resources to save as much computation as possible.
DmitryKo wrote:
the actual raw performance gain will be minimal
This is your (un)educated guess.
the amount of overhead to pass the draw commands is very small
I did not talk about draw call overhead in the API.
Maybe you can enlighten me in this case.
It was my understanding that Nvidia and ATi provided developers with tools to help them make sure their engines keep the GPU units busy actually computing stuff, rather than reconfiguring themselves or waiting for the next instruction/pass/resynchronisation point. Did I understand wrong?
DmitryKo wrote:
each game uses shaders differently, uses the different functions available differently, draws scenes differently, wants to prioritize certain geometry and shaders over others for an artistic purpose. This is the reason why Nvidia, iZ3D and DDD have such a hard time making stereoscopic 3D drivers and why there are so many S3D artefacts
The basics of the rendering pipeline as it was designed by Silicon Graphics in the 1980s still remain the same. There are triangles which get transformed and rasterized into pixels. Even though there is some programmability at various stages, there are still projection matrices, frame/back buffers and Z/W buffers.

Yurithebest will correct me if I'm wrong, but most glitches mentioned in the MTBS Game Anomaly Guide are just the product of the programmer placing objects at the wrong depth or using incorrect projection matrices. No stereo driver can automagically correct these errors, because that would require human intelligence.
I'm going to be the Devil's advocate here :
Game developers have the freedom to program their games the way they want. Unless the monoscopic game looks bad, all object locations and shader programming are correct whether the game is 3D driver friendly or not.

I consider that if a game developer wants his game to be stereo 3D, then he should make his engine compute the left and right eye views, and not program a monoscopic engine hoping some 3D driver or new graphics card architecture will "auto-magically" transform his game into 3D.
Perfect 3D drivers don't exist and never will, because they would require restrictions on how the game engine works.
Perfect 3D games do exist: Avatar is proof of it, and it works with all the developer freedom you want (the game has up to 8x AA, a full range of blur effects and all).

I consider 3D drivers to be just gap fillers: 2D-to-3D conversion software that takes advantage of the fact that games are rendered in real time to try to do better than photo and video conversion.
Post by DmitryKo »

BlackShark wrote:The issue with your logic is that the hardware required to compute the second view's polygons (left and right eye) simultaneously would be so close to the hardware which already does the polygon computing in 2D that it would be silly to expect these units not to be used to increase the raw power of 2D rendering as well. Unless nobody plays mono games anymore, which isn't going to happen any time soon, you will always get very close to a two-fold performance difference between mono and stereo operations, because there is little to no difference between rendering two views simultaneously in one long pass and rendering each view in sequence at twice the speed.
This is the difference between processing the geometry and the pixels twice for each frame, and processing the geometry only once while generating twice as many pixels.

The closest analogy is running the game at 1920x1080 (about 2 Mpixels) versus 2560x1440 (about 3.7 Mpixels). Most games do not take a 50% hit when rendering at nearly twice the screen resolution, so if you have a graphics part that is able to comfortably run modern games at 2560x1440/2560x1600 (something like the Radeon HD 6870 would certainly do quite easily with its projected processing power), it will also run 1920x1080 stereo at about the same speed.
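A back-of-the-envelope check of that analogy (plain pixel counting based on the resolutions above, nothing more):

Code:

#include <cstdio>

int main()
{
    const long mono1080   = 1920L * 1080;   //  2 073 600 px, one view
    const long stereo1080 = 2 * mono1080;   //  4 147 200 px, two views
    const long mono1600   = 2560L * 1600;   //  4 096 000 px, one view
    // Stereo 1080p fills roughly the same number of pixels as a single
    // 2560x1600 frame, while the geometry work stays that of one view
    // if the hardware can rasterize both eyes in parallel.
    std::printf("stereo 1080p / mono 2560x1600 pixel ratio: %.2f\n",
                (double)stereo1080 / mono1600);   // ~1.01
    return 0;
}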


I took some benchmarks from the IXBT.com monthly graphics card roundup (which I introduced in another, unrelated thread), using the latest issue (Apr 2010) and taking results for the most powerful configurations, GeForce GTX 480 SLI 2x and Radeon HD 5970 CF 2x, at 2560x1600 and 1920x1200 with either no AA or 4x AA + 16x AF, on a very powerful quad-core Core i7 975 EE 3333 MHz system with 6 GBytes of RAM and Windows 7 Ultimate, to better match future video cards which will be released in 2011 and beyond.

In the end, the performance drop at 2560x1600 (4.1 Mpx) versus 1920x1200 (2.3 Mpx) was about 25-35% in most cases.

Code:

                   NO AA               AA
               5870 CF  480 SLI  5870 CF   480 SLI
Tropics           76%     69%       72%     67%
Heaven DX10       68%     62%       63%     61%
Heaven DX11       73%     68%       66%     67%
Just Cause 2      73%     80%       67%     75%
FarCry2           80%     85%       73%     77%
CRYSIS DX10       57%     73%       56%     69%
CRYSIS Warhead    60%     81%       59%     75%
3DMark Vantage    74%     67%       72%     66%
Dirt2             85%     94%       80%     90%
DOW2             100%     99%      100%     96%
Absolute FPS figures:

Code:

                        NO AA            AA
                       5870 CF  480 SLI  5870 CF 480 SLI
Tropics         1920   127,6    165,4    87,7    116,1
                2560    97,3    113,7    62,9     78,1
Heaven DX10     1920   108,4    126,3    83,0    103,0
                2560    73,5     78,3    52,7     63,1
Heaven DX11     1920    67,7     96,7    54,1     78,3
                2560    49,2     65,8    35,6     52,1
Just Cause 2    1920    60,8     60,6    52,6     59,8
                2560    44,4     48,5    35,0     44,8
FarCry2         1920   149,3    175,9   117,5    163,3
                2560   120,1    148,7    86,1    126,3
CRYSIS DX10     1920    53,4     57,7    52,1     55,3
                2560    30,2     42,2    29,2     38,3
CRYSIS Warhead  1920    39,0     40,0    38,1     39,3
                2560    23,3     32,2    22,5     29,4
3DMark Vantage  1920   20689    23582   17139    21421
                2560   15257    15823   12348    14152
Dirt2           1920   126,6    138,9   120,6    132,6
                2560   108,0    131,0    96,0    119,2
DOW2            1920    81,6     77,7    81,6     77,7
                2560    81,5     77,0    81,4     74,6
(also in attached .XLS file)
2560vs1920.xls
The full graphs (in Russian) and the XLS sheet with benchmark results are available here:
http://www.ixbt.com/video3/i0410-video.shtml#diags
http://www.ixbt.com/video/itogi-video/i ... f-0410.zip

the only visible things that will come from DX11 will be hardware tessellation to improve close-ups and the use of Direct Compute
I wasn't specifically talking about Direct3D version 11. What I'm trying to say is, most titles are not currently designed to take full advantage of "Direct3D 10 architecture" that encompasses Direct3D 10, 10.1 and 11 (where each new version is a strict superset of the previous), since it requires a completely different design philosophy and approach to game art design.

Nvidia has an online book called "GPU Gems 3" which covers many advanced new algorithms that are possible with Direct3D 10. The very first example is the Marching Cubes algorithm, which uses voxels and look-up tables for polygonal landscape generation right on the GPU. You cannot adapt an existing game to this algorithm without a major redesign of the game engine, design tools, and artwork.
http://http.developer.nvidia.com/GPUGem ... _ch01.html
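For a flavour of the look-up table idea, here is a tiny sketch of just the case-indexing step of Marching Cubes, on the CPU for brevity (the chapter linked above precomputes the triangle tables and runs the equivalent on the GPU): each of the 8 cube corners contributes one bit depending on whether its density is above the iso-surface level, and the resulting 0..255 index selects which triangles to emit.

Code:

#include <cstdio>

// Build the 8-bit Marching Cubes case index from the densities at the cube's corners.
static int cubeCaseIndex(const float density[8], float isoLevel)
{
    int index = 0;
    for (int corner = 0; corner < 8; ++corner)
        if (density[corner] > isoLevel)
            index |= 1 << corner;
    return index;   // used to look up the triangle list for this cube
}

int main()
{
    const float density[8] = { -1.0f, -0.5f, 0.2f, 0.7f, -0.3f, 0.1f, 0.9f, -0.2f };
    std::printf("cube case = %d of 256\n", cubeCaseIndex(density, 0.0f));
    return 0;
}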

Hopefully, with the adoption rate of Windows 7, Direct3D 10/11 will enjoy a much broader market share, which will make it far more relevant, and more and more developers will switch to Direct3D 10/10.1/11 as their principal rendering target, especially since the Direct3D 11 runtime includes feature level 9 ("10Level9"), a fallback path of Direct3D 10/10.1/11 for very old cards which only support SM 3.0 and below.



This sounds more like optimizing the available resources - balancing computational power against bandwidth by choosing between computing in real time and using pre-rendered data - than actually being able to produce an effect.
BlackShark, textures and texture samplers are not going anywhere even with pixel shaders, and they won't until (and unless) we are able to run RenderMan shaders in realtime :) Things like procedural shader-generated fur are still too computationally intensive and require a different artistic approach, so most current game engines still do a lot of multitexturing. As I said, this might change in next-gen engines.
The only games which had these kinds of effects were those with hybrid CPU/GPU engines, which used the CPU to generate all the effects the GPU simply could not do, and the visual difference between the two was simply mind-blowing.
Well, why use a hardware solution (shaders) when the developer can program a similar solution using the CPU geometry? :P
BlackShark wrote:Given that all DX9 games gained a tremendous performance boost on DX10 cards from this very feature, I don't consider this a benefit from DX10 but rather a benefit from an architecture improvement.
8-O If the architectural improvement was stipulated by the Direct3D 10 spec, which Microsoft had been discussing with partners for like 5 years before the release, it cannot be a totally independent development of the architecture which came out of nowhere.
DX10 did not make this architecture mandatory at the hardware level - it could be emulated in the drivers with a previous architecture and the device would still be DX10 compliant
In theory, yes. In practice, that would be quite a stupid thing to do, since it would eliminate many benefits of common shader cores.
I see this situation the other way around: Outdoors are such performance hogs for GPUs that game developers spend extra engine R&D and CPU resources to save as much computation as possible
I said exactly that. The fact is, outdoors are at least as CPU intensive as they are GPU intensive.
Nvidia and ATi provided developers with tools to help them make sure their engines would keep the GPU units busy actually computing stuff
Yes, but I was talking about "draw overhead" in the sense of the performance loss for rasterizing an additional view on the GPU, not the extra CPU time spent by the API runtime on non-optimal draw calls.
BlackShark wrote:I'm going to be the Devil's advocate here: Game developers have the freedom to program their games the way they want.
I consider that if a game developer wants his game to be stereo 3D, then he should make his engine compute the left and right eye views, and not program a monoscopic engine hoping some 3D driver or new graphics card architecture will "auto-magically" transform his game into 3D.
Preach, brother! :) I say just give them "auto-magical" stereo mode right in the Direct3D runtime.

With proper hardware and driver support, the stereo rendering mode can be just one API call away for the programmer, and no extensive modification to the game engine will be required, since everything you need is already presented to the hardware.
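Purely as a sketch of what "one API call away" could mean - this interface is invented for illustration only and does not exist in any Direct3D version: if the runtime/driver duplicated the per-view state internally, enabling stereo could be a single opt-in call while the engine keeps submitting each object exactly once, as in mono rendering.

Code:

// Hypothetical, not a real API: every name below is made up for illustration.
struct StereoParams {
    float eyeSeparation;   // world-space distance between the two virtual cameras
    float convergence;     // distance to the zero-parallax plane
};

class HypotheticalStereoRuntime {
public:
    // Ask the runtime to rasterize every subsequent draw call into two views.
    virtual bool EnableStereo(const StereoParams& params) = 0;
    virtual void DisableStereo() = 0;
    virtual ~HypotheticalStereoRuntime() = default;
};

// Engine side: one extra call at start-up, no per-object changes.
// void initRenderer(HypotheticalStereoRuntime* rt) {
//     rt->EnableStereo({ 0.065f, 3.0f });   // values are illustrative
//     // ... the existing mono render loop continues unchanged ...
// }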

It would be interesting to see how Sony handled stereo rendering on the PS3; however, I don't have access to the SDK and don't have a spare $10,000 (ten thousand dollars) to find out.
Post by tritosine5G »

Free AA: http://igm.univ-mlv.fr/~biri/mlaa-gpu/MLAAGPU.pdf

CryEngine 3 and UE 3.5 both support S3D rendering now; it's up to developers. It should only take a few days to check whether it works correctly, even if 3D was not in mind during development.