The iZ3D Driver + Shutter Glasses Thread

User avatar
The_Doctor
Binocular Vision CONFIRMED!
Posts: 294
Joined: Sat Jan 19, 2008 12:00 am

Post by The_Doctor »

There's no way it will work at high refresh rates. The 280 was "only" doing about 35-40 fps average in anaglyph at high resolutions, and it won't get much faster at lower ones like 1024x768, because it's really a high-resolution card and doesn't improve much at low resolutions, with CPU limiting and all that. Plus, not everybody has the latest hardware. We have to get it fixed somehow.
chrisjarram
Binocular Vision CONFIRMED!
Posts: 304
Joined: Sat Dec 22, 2007 3:38 am

Post by chrisjarram »

LukePC1 wrote:
The_Doctor wrote:It is with a 7900GTX; the 280 is gone, too much heat. But yes, as long as the card can keep above your refresh rate, the stereo works and stays in sync. Just testing; of course it can't be used at 30 Hz.
Oh, sorry man...
Maybe you'll have more luck with the next one ;-)

If the results above were with a GF7, then that's OK. So it's easy to get a good FPS with recent hardware, and then you have at least a good frame rate when the hardware works :D

I only hope it CAN be fixed, because as a 3rd party they might have no access to the hardware/backbuffer. When you make the game or the hardware driver, it's much easier!
It will be fixable. There is no legal license restriction making nVidia the only ones allowed to utilize any particular part of the hardware, so there's really no excuse this can't be fixed. Even if nVidia refuse to help, iZ3D (speaking from a software engineering perspective) should be able to trace the execution of nVidia's own driver to ascertain how this is being achieved... There are all sorts of tools out there for exactly this purpose, which would let them monitor low-level API calls and replicate them.
There is nothing nVidia can do that iZ3D cannot; if the hardware is there, it's a case of research and implementation. To be honest, we were all thinking they had done this before the beta release, hence the point of including the 'simple shutter' mode in the first place. I just hope that after the extremely long wait by countless members of the community for this shutter functionality (following all the hype and suggestion that it was being introduced), we aren't all going to be left waiting around for another month for this to work :(
User avatar
RAGEdemon
Diamond Eyed Freakazoid!
Posts: 740
Joined: Thu Mar 01, 2007 1:34 pm

Post by RAGEdemon »

Not to sound patronising (again?), but has anyone tried googling "nvidia backbuffer" or "access nvidia backbuffer"? There seems to be a ton of info. I also came across an interesting thread discussing the issue; I was going to post the link until I realised it was posted by our very own Tril, and you guys probably already know about it.

I also came across posts about RivaTuner being able to manipulate the backbuffer. Perhaps one might try sending them an email asking how they did it? :P

Just a few suggestions that I thought *might* help :)
Windows 11 64-Bit | 12900K @ 5.3GHz | 2080 Ti OC | 32GB 3900MHz CL16 RAM | Optane PCIe SSD RAID-0 | Sound Blaster ZxR | 2x 2000W ButtKicker LFE | nVidia 3D Vision | 3D Projector @ DSR 1600p | HP Reverb G2
Borg_Rootan
Two Eyed Hopeful
Posts: 52
Joined: Fri Apr 11, 2008 12:19 pm

Post by Borg_Rootan »

chrisjarram wrote:
LukePC1 wrote:
The_Doctor wrote:It is with a 7900GTX; the 280 is gone, too much heat. But yes, as long as the card can keep above your refresh rate, the stereo works and stays in sync. Just testing; of course it can't be used at 30 Hz.
Oh, sorry man...
Maybe you'll have more luck with the next one ;-)

If the results above were with a GF7, then that's OK. So it's easy to get a good FPS with recent hardware, and then you have at least a good frame rate when the hardware works :D

I only hope it CAN be fixed, because as a 3rd party they might have no access to the hardware/backbuffer. When you make the game or the hardware driver, it's much easier!
It will be fixable. There is no legal license restriction making nVidia the only ones allowed to utilize any particular part of the hardware, so there's really no excuse this can't be fixed. Even if nVidia refuse to help, iZ3D (speaking from a software engineering perspective) should be able to trace the execution of nVidia's own driver to ascertain how this is being achieved... There are all sorts of tools out there for exactly this purpose, which would let them monitor low-level API calls and replicate them.
There is nothing nVidia can do that iZ3D cannot; if the hardware is there, it's a case of research and implementation. To be honest, we were all thinking they had done this before the beta release, hence the point of including the 'simple shutter' mode in the first place. I just hope that after the extremely long wait by countless members of the community for this shutter functionality (following all the hype and suggestion that it was being introduced), we aren't all going to be left waiting around for another month for this to work :(
Hmm... and what if iZ3D were to cooperate with ATI (now AMD, of course :) )? S3D is a dynamic technology, and this choice would be favourable for both (and would end nVidia's absurd position of leading S3D without actually supporting it) :!:
chrisjarram
Binocular Vision CONFIRMED!
Posts: 304
Joined: Sat Dec 22, 2007 3:38 am

Post by chrisjarram »

RAGEdemon wrote:Not to sound patronising (again?), but has anyone tried googling "nvidia backbuffer" or "access nvidia backbuffer"? There seems to be a ton of info. I also came across an interesting thread discussing the issue; I was going to post the link until I realised it was posted by our very own Tril, and you guys probably already know about it.

I also came across posts about RivaTuner being able to manipulate the backbuffer. Perhaps one might try sending them an email asking how they did it? :P

Just a few suggestions that I thought *might* help :)
You don't, mainly because backbuffering is not what we're discussing here; it's page flipping ;) I saw some things about RivaTuner being able to treat older nVidia cards as Quadros (to utilize OpenGL quad-buffering, for example), but that was the GeForce 1-3 series. All backbuffering does is let you draw to one buffer while the other is displayed, and swap them when drawing has finished (to eliminate render flicker). With page flipping, you need two of these back buffers and two front buffers (which can be 'page-flipped' between), one for each eye. So you draw the left and right views, and when both are ready you move them to the front buffers for the output to flip between at 85 Hz (or whatever) while the back buffers are repopulated again.
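The quad-buffer flow described above can be sketched as a toy model (Python stand-in with made-up names like `draw`/`swap`/`scanout`, not any real graphics API): the renderer fills both back buffers, a single swap promotes the pair, and the output flips between the two front buffers on every vsync.

```python
class QuadBuffer:
    def __init__(self):
        self.front = {"L": None, "R": None}  # pair the display flips between
        self.back = {"L": None, "R": None}   # pair the renderer draws into

    def draw(self, eye, frame):
        self.back[eye] = frame               # render one eye's view

    def swap(self):
        # promote both eyes at once so left and right stay a matched pair
        self.front, self.back = self.back, self.front

    def scanout(self, vsync_count):
        # the output alternates eyes on every vsync, independent of rendering
        return self.front["L"] if vsync_count % 2 == 0 else self.front["R"]

qb = QuadBuffer()
qb.draw("L", "frame0-L")
qb.draw("R", "frame0-R")
qb.swap()
shown = [qb.scanout(v) for v in range(4)]  # same stereo pair until next swap
```

The key property, and the reason shutter glasses stay in sync, is that scanout keeps alternating the last promoted pair at the refresh rate no matter how slowly the back buffers are refilled.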
Tril
Certif-Eyed!
Posts: 655
Joined: Tue Jul 31, 2007 6:52 am
Location: Canada

Post by Tril »

RAGEdemon wrote:Not to sound patronising (again?) but has anyone tried googling "nvidia backbuffer" or "access nvidia backbuffer" - there seems to be a ton of info. Also came across an interesting thread discussing the issue, was going to post link till I realised it was posted by our very own Tril and you guys probably already know about it.

Also came across posts about RivaTuner being able to manipulate the Backbuffer. Perhaps one might try sending them an email asking how they did it? :P

Just a few suggestions that i thought *might* help :)
Are you talking about a thread I started at gamedev? If you are, I'd like to say something about it. I was just doing some experiments to figure out whether it was possible to program a shutter glasses mode in DirectX 9. I said in the thread that it worked, but the truth is that I never got it working completely. I was trying to display at a constant fps (the same fps as the refresh rate) even when the scene is rendered at a lower fps than the refresh rate. The solution I tried required Vista because it uses a feature of DirectX 9 that is only available there. What I tried is explained below.

In a rendering thread with low priority (low CPU priority and low GPU priority), you render to two textures (left and right eyes).
In another thread with high priority (high CPU priority and high GPU priority), you copy the textures from the other thread to the backbuffer and send the backbuffer to be displayed on screen.
The textures are shared across the two threads.
When using DirectX 9 Ex (only available on Vista), you can set the priority of the GPU. This way you can render two frames at the same time with different priorities; the one with the higher priority gets a higher fps.
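A rough sketch of this two-thread idea, in plain Python rather than DirectX 9 Ex (thread priorities and GPU scheduling are left out, and the class and method names are illustrative): a render thread publishes completed left/right texture pairs under a lock, and the display side grabs whichever pair is the latest complete one.

```python
import threading

class SharedStereoPair:
    def __init__(self):
        self.lock = threading.Lock()
        self.left = self.right = None

    def publish(self, left, right):
        # render thread: slow, would run at low CPU/GPU priority
        with self.lock:
            self.left, self.right = left, right

    def snapshot(self):
        # display thread: fast, would run at high priority; always sees
        # a matched left/right pair thanks to the lock
        with self.lock:
            return self.left, self.right

pair = SharedStereoPair()

def render_loop():
    for i in range(3):                 # pretend each frame is expensive
        pair.publish(f"L{i}", f"R{i}")

t = threading.Thread(target=render_loop)
t.start()
t.join()
latest = pair.snapshot()               # the pair the display would re-present
```

The hard part Tril describes is not this sharing logic but keeping the display side's presentation interval rock-steady, which plain thread priorities cannot guarantee.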

It worked partially. I did some tests at 85 Hz. I was able to display at close to 85 Hz when the rendering fps was lower than 85 Hz. However, there were still dips below 85 Hz many times per second, so it was unusable with shutter glasses. Maybe it's something I did wrong, or maybe it's just impossible to do in DirectX 9 with a slow video card. From what I read, in DirectX 9 Ex you can stop the rendering between each shader (to give priority to another rendering process or application) but not in the middle of a shader (source: Greg Schechter's blog, "The role of the Windows Display Driver Model in the DWM"). Maybe this is why it's hard to achieve what I tried in a recent game, with shaders that are big and complex and take a lot of time to render. Or maybe it's something else. Based on the link I posted, the method might work better in DirectX 10.

In my test, if I forced the rendering to be unnaturally slow (by adding a wait of one second between each rendering loop, thus rendering at 1 fps), it did manage to display at the refresh rate of the monitor. Maybe you could get it working by manually slowing the game rendering a bit (not as much as I did) to be sure the display fps stays constant. It's just a wild guess, but I'm thinking the problem could be caused by a synchronisation issue between the CPU and GPU. The GPU can calculate up to 3 frames in advance. When it has three frames in memory, it stalls and waits for the CPU to catch up. While it's waiting, it probably stops executing commands sent to it. That would be fine if it just stopped the rendering, but if it also stops the displaying, it can't keep the fps exactly the same as the refresh rate.

I'm not sure this method can be used in an S-3D driver like the one from iZ3D (and even if it could, it might not work). To use the method, you need to render and display in two separate threads. That's easy if you have the source code of a game and design it that way, but I'm not sure you can do it and get stable results from code that modifies the game code on the fly. The difficulty is that DirectX is not very multithreading-friendly. If you render in two threads (as this method needs) but do it incorrectly, DirectX becomes unstable and you get random crashes or slowdowns.

Sorry for the long post.
CPU : Intel i7-7700K
RAM : 32 GB ram
Video card : GeForce GTX 980 Ti
OS : Windows 10
Display : Samsung UN40JU7500 Curved 40-Inch UHD TV with shutter glasses
HMD : Oculus Rift

User avatar
RAGEdemon
Diamond Eyed Freakazoid!
Posts: 740
Joined: Thu Mar 01, 2007 1:34 pm

Post by RAGEdemon »

I see. Very interesting indeed. I think this is the first time the goings-on inside the graphics card have been discussed in so much detail.

I'm a hardware guy... I don't know much about the software side (I hated having to do anything but low-level programming back in education), so like most people here, I'm finding this thread a great education too.

There is a utility Neil made us aware of a while back called D3D Overrider, which comes in the RivaTuner bundle. It creates a third buffer so you can do triple buffering in D3D. The person who created it must know quite a bit about buffers and how to manipulate them. Perhaps asking him for some help might yield some results? Even if you can't get access to the back buffer, with his help maybe you could create other buffers which act like the back buffer and which you have complete control over?

What about purging the D3D queue, so the command that isn't getting there in time has a quicker path?

Just grasping at straws now :P

I don't quite know how it works in software... Maybe I'm just thinking of the buffers as literal memory registers on the card, which they aren't?
chrisjarram
Binocular Vision CONFIRMED!
Posts: 304
Joined: Sat Dec 22, 2007 3:38 am

Post by chrisjarram »

RAGEdemon wrote:I see. Very interesting indeed. I think this is the first time the goings-on inside the graphics card have been discussed in so much detail.

I'm a hardware guy... I don't know much about the software side (I hated having to do anything but low-level programming back in education), so like most people here, I'm finding this thread a great education too.

There is a utility Neil made us aware of a while back called D3D Overrider, which comes in the RivaTuner bundle. It creates a third buffer so you can do triple buffering in D3D. The person who created it must know quite a bit about buffers and how to manipulate them. Perhaps asking him for some help might yield some results? Even if you can't get access to the back buffer, with his help maybe you could create other buffers which act like the back buffer and which you have complete control over?

What about purging the D3D queue, so the command that isn't getting there in time has a quicker path?

Just grasping at straws now :P

I don't quite know how it works in software... Maybe I'm just thinking of the buffers as literal memory registers on the card, which they aren't?
The software threading method won't work as it stands; it was something I'd considered but ruled out immediately, for the same reasons Tril has given. In terms of hardware access, AFAIK iZ3D need to investigate the low-level routines themselves. I'd be pretty astonished if their programmers didn't know all this already, being graphics driver engineers! :o It's more important that time is not spent going down dead-end roads; the wheel has already been invented, and all they need to do is copy it :)
Nobsi
Cross Eyed!
Posts: 108
Joined: Sat Apr 14, 2007 4:34 pm

Post by Nobsi »

From http://www.orthostereo.com/geometryopengl.html:
Quad-buffering is the ability to render into left and right front and back buffers independently. The front left and front right buffers displaying the stereo images can be swapped in sync with shutterglasses while the back left and back right buffers are being updated - giving a smooth stereoscopic display.
What iZ3D would have to do for software-based page flipping is implement quad buffering for DirectX. In contrast to the dual-output methods (native iZ3D, planar, mirrored, etc.), this would require driver-level manipulation.
Of course nVidia has the advantage here, because they control the behaviour of their own driver and can implement this stuff easily.

But iZ3D would have to analyse how the drivers of the different video card vendors (namely nVidia and AMD) work, and try to intercept and manipulate their functionality regarding the left/right front/back buffer swapping in perfect synchronisation with v-sync.

This could be done, because there are tools to examine program flow at the driver level, as well as injection tools to manipulate 3rd-party drivers, but it would cost tremendous engineering time and effort.
I do not think iZ3D is willing to do that, as they would not sell a single extra monitor because of it.

It also remains to be seen how many people are really willing to pay $50 to $100 for proper shutter support, so iZ3D may not see a proper return on investment here.

So it looks as though iZ3D will have to leave this field to nVidia. The iZ3D solution will rely on some external hardware that can interpret the L/R markers to generate a stable sync signal for the shutter glasses.

If, on the other hand, the announced big OEM partner turns out to be AMD, maybe we will finally see proper software-based iZ3D driver shutter support for ATI cards...
User avatar
RAGEdemon
Diamond Eyed Freakazoid!
Posts: 740
Joined: Thu Mar 01, 2007 1:34 pm

Post by RAGEdemon »

@Tril:

You really seem to know what you are talking about, mate. The thread I was referring to was indeed on gamedev:

http://www.gamedev.net/community/forums ... _id=479002

You seem to have a huge amount of knowledge regarding these issues. I'm sure others will find your insight extremely helpful :)

@chrisjarram

Hehe, quite so. I think one of the issues might be that shutter glasses support was never iZ3D's main focus, so no-one really invested too much time into it for the beta... I guess they didn't think it would attract so much interest. As you say, judging from the response this thread has got, I think maybe the good people at iZ3D are reconsidering that stance.

@Nobsi:

I understand what you mean, mate. I guess a lot of people won't be able to pay $100 for the driver. On the other hand, I personally would pay a lot more for a proper driver if it meant using my projector and a Z800 again :P

-- Shahzad.
User avatar
shonofear
Cross Eyed!
Posts: 137
Joined: Sat Dec 22, 2007 3:38 am
Location: Down Under, Ozzie
Contact:

Post by shonofear »

Hey all,
Not sure if this is relevant at all, but Nvidia just released their NVAPI to the public... (or to developers... not sure exactly)

http://developer.nvidia.com/object/nvapi.html

Thought it might be good for something... or not.

All the best, fellas :idea:
waiting patiently......
Tril
Certif-Eyed!
Posts: 655
Joined: Tue Jul 31, 2007 6:52 am
Location: Canada

Post by Tril »

shonofear wrote:Hey all,
Not sure if this is relevant at all, but Nvidia just released their NVAPI to the public... (or to developers... not sure exactly)

http://developer.nvidia.com/object/nvapi.html

Thought it might be good for something... or not.

All the best, fellas :idea:
Interesting. It might be worth checking for iZ3D.

Read this:
NVAPI comes in two "flavors" -- the public version, available below, and a more extensive version available to registered developers under NDA.
and this:
Use NDA edition for full control of this feature :
Frame Rendering
Ability to control Video and DX rendering not available in DX runtime.
Do they mean DirectX when they say DX? If so, I would be curious to see the documentation of the version under NDA to know what they mean by "ability to control DX rendering not available in DX runtime". :)
Nobsi
Cross Eyed!
Posts: 108
Joined: Sat Apr 14, 2007 4:34 pm

Post by Nobsi »

That is very interesting. Maybe this is the ticket to proper shutter support for nVidia cards.

I did a quick check of the API documentation of the public version, but there is nothing regarding frame buffer timing control.

Indeed the NDA version has a lot more to offer and the features Tril has highlighted may be exactly what iZ3D would need.

So the question is whether they will take the ball and invest in research here; after all, this is an nVidia API, and it would not work on AMD cards.

@Tril and shonofear:
Have you posted this information on the iZ3D forum, so this important info is not lost in this giant thread?

Edit: OK, I added a link to these posts in the iZ3D forum's 1.09 beta section.
User avatar
shonofear
Cross Eyed!
Posts: 137
Joined: Sat Dec 22, 2007 3:38 am
Location: Down Under, Ozzie
Contact:

Post by shonofear »

Thanks for letting the iZ3D forums know about it;
hope they or someone can make something of it though :roll:

Cheers
chrisjarram
Binocular Vision CONFIRMED!
Posts: 304
Joined: Sat Dec 22, 2007 3:38 am

Post by chrisjarram »

"Please note that it may take a few weeks to receive a decision on your application, due to the large number of applicants."..

NOOOOoooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo!


Nobsi wrote:That is very interesting. Maybe this is the ticket to proper shutter support for nVidia cards.

I did a quick check of the API documentation of the public version, but there is nothing regarding frame buffer timing control.

Indeed the NDA version has a lot more to offer and the features Tril has highlighted may be exactly what iZ3D would need.

So the question is whether they will take the ball and invest in research here; after all, this is an nVidia API, and it would not work on AMD cards.

@Tril and shonofear:
Have you posted this information on the iZ3D forum, so this important info is not lost in this giant thread?

Edit: OK, I added a link to these posts in the iZ3D forum's 1.09 beta section.
:shock: :shock: :o
chrisjarram
Binocular Vision CONFIRMED!
Posts: 304
Joined: Sat Dec 22, 2007 3:38 am

Post by chrisjarram »

Tril wrote: Do they mean DirectX when they say DX?
Yes.
stepsbarto

Post by stepsbarto »

...it came to my mind that we could need a kind of black box with two inputs... set the iZ3D driver to dual output, and inside this... ehm... magical box the signal is alternately given to one output, including a sync signal...

:-)

...but I don't know if this is even technically possible...
Tril
Certif-Eyed!
Posts: 655
Joined: Tue Jul 31, 2007 6:52 am
Location: Canada

Post by Tril »

Looks like the NVAPI solution is a dead end. BlackQ said that they are a registered NVIDIA developer, that they have already read the NDA version of NVAPI, and that they found nothing useful for their needs.
stepsbarto wrote:...it came to my mind that we could need a kind of black box with two inputs... set the iZ3D driver to dual output, and inside this... ehm... magical box the signal is alternately given to one output, including a sync signal...

:-)

...but I don't know if this is even technically possible...
I've already thought about it. The two outputs of the video card are not in sync, as far as I know. The easy way to tell is that if you connect two VGA monitors in clone mode and try to make the shutter glasses work, they will only be synced to one monitor. That means the signals from the two video card outputs are not in sync.

What this means is that we would need a box that has memory. It would need to be able to retain the last frame for each eye. At high resolutions, those frames take a lot of memory.

24 bit/pixel.
1600x1200 = 1920000 pixels
1920000 pixels x 24 bit/pixel = 46,080,000 bit = 43.9 MB/eye
43.9 MB/eye * 2 eyes = 87.8 MB

Such a device is possible to make, but making a device with that much memory is a bit out of reach of the DIY community (too costly and too hard to build). It would probably need an FPGA with some RAM, some high-speed analog-to-digital converters on the input, and digital-to-analog converters on the output. The nice thing about such a device is that you could make it output any frequency, and you could also make it support DVI as an input.
ssiu
Binocular Vision CONFIRMED!
Posts: 320
Joined: Tue May 15, 2007 8:11 am

Post by ssiu »

Tril wrote:Looks like the NVAPI solution is a dead end. BlackQ said that they are a registered NVIDIA developer, that they have already read the NDA version of NVAPI, and that they found nothing useful for their needs.
stepsbarto wrote:...it came to my mind that we could need a kind of black box with two inputs... set the iZ3D driver to dual output, and inside this... ehm... magical box the signal is alternately given to one output, including a sync signal...

:-)

...but I don't know if this is even technically possible...
I've already thought about it. The two outputs of the video card are not in sync, as far as I know. The easy way to tell is that if you connect two VGA monitors in clone mode and try to make the shutter glasses work, they will only be synced to one monitor. That means the signals from the two video card outputs are not in sync.

What this means is that we would need a box that has memory. It would need to be able to retain the last frame for each eye. At high resolutions, those frames take a lot of memory.

24 bit/pixel.
1600x1200 = 1920000 pixels
1920000 pixels x 24 bit/pixel = 46,080,000 bit = 43.9 MB/eye
43.9 MB/eye * 2 eyes = 87.8 MB

Such a device is possible to make, but making a device with that much memory is a bit out of reach of the DIY community (too costly and too hard to build). It would probably need an FPGA with some RAM, some high-speed analog-to-digital converters on the input, and digital-to-analog converters on the output. The nice thing about such a device is that you could make it output any frequency, and you could also make it support DVI as an input.
I think the calculation is off by a factor of 8: 24 bit/pixel = 3 bytes/pixel, so it should be around 11 MB (megabytes) instead of 88 MB.

=====

Why can't the iZ3D driver render into quad buffers (a front and back buffer for each eye) and then brute-force copy the correct buffer to the "real display buffer" at the right time? For example, 1920x1080 is ~6 MB per buffer. If we run at 120 Hz, we will be copying 120 times per second, which is about 720 MB/sec. Modern graphics cards have memory bandwidth of 50 GB/sec or more, so <1 GB/sec of copying should be doable.
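Both calculations in this post are easy to sanity-check with a small helper (decimal megabytes, 3 bytes per 24-bit pixel; `frame_bytes` is just an illustrative name):

```python
def frame_bytes(width, height, bits_per_pixel=24):
    # 24 bit/pixel is 3 bytes/pixel
    return width * height * bits_per_pixel // 8

per_eye = frame_bytes(1600, 1200)            # 5,760,000 bytes, ~5.8 MB
both_eyes = 2 * per_eye                      # ~11.5 MB, matching the factor-of-8 correction
per_1080_buffer = frame_bytes(1920, 1080)    # 6,220,800 bytes, ~6 MB
copy_bytes_per_sec = per_1080_buffer * 120   # ~746 MB/sec of copying at 120 Hz
```

So the ~720 MB/sec figure is slightly rounded down but the conclusion holds: the copy traffic is tiny next to tens of GB/sec of video memory bandwidth.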
User avatar
LukePC1
Golden Eyed Wiseman! (or woman!)
Posts: 1387
Joined: Wed May 16, 2007 11:30 am
Location: Europe
Contact:

Post by LukePC1 »

Tril wrote: 24 bit/pixel.
1600x1200 = 1920000 pixels
1920000 pixels x 24 bit/pixel = 46,080,000 bit = 43.9 MB/eye
43.9 MB/eye * 2 eyes = 87.8 MB
Didn't you mix up bits and bytes here? 46 Mbit / 8 = roughly 6 MB. But that is still too much for a good DIY project...

I'd go more for an interlaced image at twice the resolution. That way you'd have perfect sync, and if you didn't write the 'black' lines you would get full resolution, too.
Would that need memory? I think not, but I might be mistaken :idea:
Play Nations at WAR with this code to get 5.000$ as a Starterbonus:
ayqz1u0s
http://mtbs3d.com/naw/

AMD x2 4200+ 2gb Dualchannel
GF 7900gs for old CRT with Elsa Revelator SG's
currently 94.24 Forceware and 94.24 Stereo with XP sp2!
stepsbarto

Post by stepsbarto »

OK... this box would be more expensive than any other good stereoscopic setup :-) ...

...I've just found a stereo converter (one active to two passive, or two passive to one active), but this is high-end stuff and the two graphics cards have to be genlocked.
http://www.mindflux.com.au/products/cyviz/xpo2pa.html

so... hmmm... nothing helpful...
chrisjarram
Binocular Vision CONFIRMED!
Posts: 304
Joined: Sat Dec 22, 2007 3:38 am

Post by chrisjarram »

stepsbarto wrote:OK... this box would be more expensive than any other good stereoscopic setup :-) ...

...I've just found a stereo converter (one active to two passive, or two passive to one active), but this is high-end stuff and the two graphics cards have to be genlocked.
http://www.mindflux.com.au/products/cyviz/xpo2pa.html

so... hmmm... nothing helpful...
Certainly, from my perspective, I want to be rendering at 3072x768 with these new drivers, which is well beyond the capabilities of this box anyway... iZ3D need to use 4 buffers in video memory.
User avatar
KindDragon
Cross Eyed!
Posts: 108
Joined: Sat Mar 10, 2007 4:05 am
Location: Russia

Post by KindDragon »

ssiu wrote: Why can't the iZ3D driver render into quad buffers (a front and back buffer for each eye) and then brute-force copy the correct buffer to the "real display buffer" at the right time? For example, 1920x1080 is ~6 MB per buffer. If we run at 120 Hz, we will be copying 120 times per second, which is about 720 MB/sec. Modern graphics cards have memory bandwidth of 50 GB/sec or more, so <1 GB/sec of copying should be doable.
We don't have access to any functions that D3D9 doesn't provide.
The driver just does these things:
1. The game draws to the left and right view surfaces.
2. Copy from the left view surface to the backbuffer.
3. Call the D3D9 method Present(), which flips the front and back buffers when the v-sync signal occurs.
4. Copy from the right view surface to the backbuffer.
5. Call the D3D9 method Present() again.
We don't control when the image will be displayed on the monitor.
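This sequence can be mocked up as a loop (Python stand-in, with illustrative names; `present()` here only records what would be flipped to the screen at the next v-sync, which is exactly the part the driver can't control):

```python
class StereoPresenter:
    def __init__(self):
        self.backbuffer = None
        self.presented = []      # what reached the screen, in order

    def present(self):
        # stand-in for the D3D9 Present() call; in reality the flip
        # happens at the next v-sync, at a time the driver can't control
        self.presented.append(self.backbuffer)

    def present_stereo_frame(self, left_surface, right_surface):
        self.backbuffer = left_surface    # copy left view to backbuffer
        self.present()
        self.backbuffer = right_surface   # copy right view to backbuffer
        self.present()

p = StereoPresenter()
p.present_stereo_frame("L0", "R0")
p.present_stereo_frame("L1", "R1")
# eyes alternate strictly, but nothing ties each Present to its own refresh
```

The ordering is guaranteed; the timing is not, which is why the shutter glasses can fall out of phase whenever a frame takes longer than one refresh.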
Stereoscopic Steam Group: Join now
chrisjarram
Binocular Vision CONFIRMED!
Posts: 304
Joined: Sat Dec 22, 2007 3:38 am

Post by chrisjarram »

KindDragon wrote:
ssiu wrote: Why can't the iZ3D driver render into quad buffers (a front and back buffer for each eye) and then brute-force copy the correct buffer to the "real display buffer" at the right time? For example, 1920x1080 is ~6 MB per buffer. If we run at 120 Hz, we will be copying 120 times per second, which is about 720 MB/sec. Modern graphics cards have memory bandwidth of 50 GB/sec or more, so <1 GB/sec of copying should be doable.
We don't have access to any functions that D3D9 doesn't provide.
The driver just does these things:
1. The game draws to the left and right view surfaces.
2. Copy from the left view surface to the backbuffer.
3. Call the D3D9 method Present(), which flips the front and back buffers when the v-sync signal occurs.
4. Copy from the right view surface to the backbuffer.
5. Call the D3D9 method Present() again.
We don't control when the image will be displayed on the monitor.
Sounds like getting access to this is the order of the day, then... there MUST be a way! (shuffles in seat)
User avatar
Likay
Petrif-Eyed
Posts: 2913
Joined: Sat Apr 07, 2007 4:34 pm
Location: Sweden

Post by Likay »

Too bad. The worst issue with high-level programming is that you're not able to control everything. On the other hand, it's a bit easier and more time-efficient to program in a higher-level language. I wish you good luck, and appreciate everything you're doing right now.
Mb: Asus P5W DH Deluxe
Cpu: C2D E6600
Gb: Nvidia 7900GT + 8800GTX
3D:100" passive projector polarized setup + 22" IZ3D
Tril
Certif-Eyed!
Posts: 655
Joined: Tue Jul 31, 2007 6:52 am
Location: Canada

Post by Tril »

LukePC1 wrote:
Tril wrote: 24 bit/pixel.
1600x1200 = 1920000 pixels
1920000 pixels x 24 bit/pixel = 46,080,000 bit = 43.9 MB/eye
43.9 MB/eye * 2 eyes = 87.8 MB
Didn't you mix up bits and bytes here? 46 Mbit / 8 = roughly 6 MB. But that is still too much for a good DIY project...

I'd go more for an interlaced image at twice the resolution. That way you'd have perfect sync, and if you didn't write the 'black' lines you would get full resolution, too.
Would that need memory? I think not, but I might be mistaken :idea:
You're right. I wasn't paying attention and made a stupid calculation error. 43.9 is in Mb; it comes to about 5.5 MB. I wasn't sure how to do the conversion and used this webpage, but read the wrong line. It's still too much for the onboard memory of a microcontroller or FPGA.
chrisjarram
Binocular Vision CONFIRMED!
Posts: 304
Joined: Sat Dec 22, 2007 3:38 am

Post by chrisjarram »

Likay wrote:Too bad. The worst issue with high-level programming is that you're not able to control everything. On the other hand, it's a bit easier and more time-efficient to program in a higher-level language. I wish you good luck, and appreciate everything you're doing right now.
This is precisely why most compilers make provision for inline assembly, so you can work with both. No-one would work strictly in one or the other in a scenario like this; that would be extremely restrictive. This is what iZ3D will need to do to achieve this somewhat trivial task, following an investigation into the code execution of the existing nVidia drivers.
Hugo
Two Eyed Hopeful
Posts: 69
Joined: Thu Mar 22, 2007 8:31 pm
Location: Bavaria

Post by Hugo »

To me it seems that it's just a problem of a missing official global API. I'm sure that iZ3D could do the job, but they would have to implement it for different vendors and perhaps for different chips as well. This might be too expensive.

By the way, within some high-level languages you're able to use inline assembler (C++ -> asm{...}) :)
ssiu
Binocular Vision CONFIRMED!
Posts: 320
Joined: Tue May 15, 2007 8:11 am

Post by ssiu »

KindDragon wrote:
ssiu wrote: Why can't the iZ3D driver render into quad buffers (a front and back buffer for each eye) and then brute-force copy the correct screen to the "real display buffer" at the right time? For example, 1920x1080 = ~6 MB per buffer. If we run at 120 Hz, we will be copying 120 times per second, which is about 720 MB/sec. Modern graphics cards have memory bandwidth of 50 GB/sec or more, so <1 GB/sec of copying should be doable.
We don't have access to any functions that D3D9 doesn't provide.
The driver just does these things:
1. The game draws to the left and right view surfaces
2. Copy from the left view surface to the backbuffer
3. Call the D3D9 method Present(), which flips the front and back buffers when the v-sync signal occurs
4. Copy from the right view surface to the backbuffer
5. Call the D3D9 method Present()
We don't control when the image will be displayed on the monitor.
So suppose the "game draws to the left and right view surfaces" step is slower (e.g. managing only 30 fps) than the CRT monitor refresh rate (e.g. 85 Hz). I think you need to split this into 2 execution threads:

the main "game draw" thread continues to draw to the left and right view surfaces, but keeps 2 buffers for the left view and 2 buffers for the right view, so we always have a stable copy of each view

the "display" thread does something like this:
1. wake up at a regular interval according to the monitor refresh rate (e.g. every 10 ms at 100 Hz)
2. copy from the (stable version of the) left view surface to the backbuffer
3. call the D3D9 method Present()
4. sleep till the next wake-up time
5. copy from the (stable version of the) right view surface to the backbuffer
6. call the D3D9 method Present()
Repeat

Disclaimer: I know nothing about D3D programming; just throwing out ideas ...
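For what it's worth, the two-thread scheme above can be sketched in plain Python with Present() stubbed out. This is only an illustration of the idea — the real driver would go through D3D9, the surface names are stand-ins, and (as later replies point out) GPU serialization is exactly what breaks this in practice:

```python
import threading
import time

REFRESH_HZ = 100
FRAME_INTERVAL = 1.0 / REFRESH_HZ  # 10 ms per refresh at 100 Hz

# "Stable" copies of each eye's latest completed frame (stand-ins for GPU surfaces)
stable = {"left": None, "right": None}
lock = threading.Lock()

def present(surface):
    """Stub for the D3D9 Present() call: copy to the backbuffer and flip at v-sync."""
    pass

def game_draw_thread(stop):
    """Renders left/right views at whatever rate the game manages (e.g. 30 fps)."""
    frame = 0
    while not stop.is_set():
        left, right = f"L{frame}", f"R{frame}"  # pretend these are rendered surfaces
        with lock:
            stable["left"], stable["right"] = left, right
        frame += 1
        time.sleep(1 / 30)  # simulate a 30 fps game

def display_thread(stop):
    """Wakes every refresh interval and presents alternating eyes from the stable copies."""
    next_wake = time.monotonic()
    while not stop.is_set():
        with lock:
            left, right = stable["left"], stable["right"]
        present(left)                    # left eye on this refresh
        next_wake += FRAME_INTERVAL
        time.sleep(max(0.0, next_wake - time.monotonic()))
        present(right)                   # right eye on the next refresh
        next_wake += FRAME_INTERVAL
        time.sleep(max(0.0, next_wake - time.monotonic()))
```

The threads share only the `stable` dict under a lock, so the display loop always has a complete frame for each eye regardless of how slowly the game renders. What this toy model cannot show is the real problem: both "threads" contend for the same GPU.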
Tril
Certif-Eyed!
Posts: 655
Joined: Tue Jul 31, 2007 6:52 am
Location: Canada

Post by Tril »

ssiu wrote:
KindDragon wrote:
ssiu wrote: Why can't the iZ3D driver render into quad buffers (a front and back buffer for each eye) and then brute-force copy the correct screen to the "real display buffer" at the right time? For example, 1920x1080 = ~6 MB per buffer. If we run at 120 Hz, we will be copying 120 times per second, which is about 720 MB/sec. Modern graphics cards have memory bandwidth of 50 GB/sec or more, so <1 GB/sec of copying should be doable.
We don't have access to any functions that D3D9 doesn't provide.
The driver just does these things:
1. The game draws to the left and right view surfaces
2. Copy from the left view surface to the backbuffer
3. Call the D3D9 method Present(), which flips the front and back buffers when the v-sync signal occurs
4. Copy from the right view surface to the backbuffer
5. Call the D3D9 method Present()
We don't control when the image will be displayed on the monitor.
So suppose the "game draws to the left and right view surfaces" step is slower (e.g. managing only 30 fps) than the CRT monitor refresh rate (e.g. 85 Hz). I think you need to split this into 2 execution threads:

the main "game draw" thread continues to draw to the left and right view surfaces, but keeps 2 buffers for the left view and 2 buffers for the right view, so we always have a stable copy of each view

the "display" thread does something like this:
1. wake up at a regular interval according to the monitor refresh rate (e.g. every 10 ms at 100 Hz)
2. copy from the (stable version of the) left view surface to the backbuffer
3. call the D3D9 method Present()
4. sleep till the next wake-up time
5. copy from the (stable version of the) right view surface to the backbuffer
6. call the D3D9 method Present()
Repeat

Disclaimer: I know nothing about D3D programming; just throwing out ideas ...
Unfortunately, that solution does not quite work, as far as I know. The problem with doing it that way is that the two threads still have to share the same GPU (they can't execute at the same time), and even if you set different priorities, the "game draw" thread often ends up taking too much time, so the "display" thread misses the flip at the current VSYNC and the eyes get reversed.
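The eye-reversal failure can be illustrated with simple parity arithmetic (a toy model, not a measurement of any real driver): each eye owns alternating refreshes, so a stall that slips the display thread by an odd number of v-sync periods leaves every following refresh showing the wrong eye:

```python
REFRESH_HZ = 85
VSYNC_MS = 1000.0 / REFRESH_HZ  # ~11.76 ms between refreshes on an 85 Hz CRT

def refreshes_missed(stall_ms):
    """How many whole v-sync periods a stall of stall_ms makes the display thread skip."""
    return int(stall_ms // VSYNC_MS)

def eyes_swapped(stall_ms):
    """Eyes reverse exactly when an odd number of refreshes are skipped."""
    return refreshes_missed(stall_ms) % 2 == 1

# A ~12 ms GPU stall (one slow game frame hogging the GPU) skips one refresh: eyes swap.
print(refreshes_missed(12.0), eyes_swapped(12.0))  # 1 True
# A ~24 ms stall skips two refreshes: by luck, the eyes stay correct.
print(refreshes_missed(24.0), eyes_swapped(24.0))  # 2 False
```

Since a single game frame hogging the GPU for one extra refresh period is enough to flip the eyes, and it happens unpredictably, the scheme degrades badly exactly when the game is struggling.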
CPU : Intel i7-7700K
RAM : 32 GB ram
Video card : GeForce GTX 980 Ti
OS : Windows 10
Display : Samsung UN40JU7500 Curved 40-Inch UHD TV with shutter glasses
HMD : Oculus Rift

ssiu
Binocular Vision CONFIRMED!
Posts: 320
Joined: Tue May 15, 2007 8:11 am

Post by ssiu »

Tril wrote:... I think the problem if you try to do it that way is that the two threads still have to share the same GPU (they can't execute at the same time) ...
Okay so it's not like multi-threaded CPU programming, that's too bad ...
Tril
Certif-Eyed!
Posts: 655
Joined: Tue Jul 31, 2007 6:52 am
Location: Canada

Post by Tril »

ssiu wrote:
Tril wrote:... I think the problem if you try to do it that way is that the two threads still have to share the same GPU (they can't execute at the same time) ...
Okay so it's not like multi-threaded CPU programming, that's too bad ...
Yes and no. I mean that it's like a CPU with one core. If you execute two tasks at the same time, you are actually dividing the time into time slices, giving each task some of them, and you can give more time slices to the task with higher priority. The ability to give higher priority to some tasks is only available in Vista, in an upgraded version of DirectX 9 and in DirectX 10, and it has its limitations.
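Tril's one-core analogy can be sketched as weighted round-robin scheduling (a toy model only — real GPU schedulers do not work like this): even when "display" is given more slices, it still waits whenever "game" holds the unit, because only one task executes at a time:

```python
def schedule(tasks, total_slices):
    """Weighted round-robin over a single execution unit (one core / one GPU).

    tasks: dict of name -> weight (more weight = more slices per round = higher priority).
    Returns the ordered list of slices handed out.
    """
    timeline = []
    while len(timeline) < total_slices:
        for name, weight in tasks.items():
            # A task with weight N gets N consecutive slices per round.
            for _ in range(weight):
                if len(timeline) >= total_slices:
                    break
                timeline.append(name)
    return timeline

# Give 'display' three slices for each 'game' slice: it still stalls whenever
# 'game' holds the unit, because nothing runs concurrently.
print(schedule({"display": 3, "game": 1}, 8))
# ['display', 'display', 'display', 'game', 'display', 'display', 'display', 'game']
```

If one "game" slice happens to run long (a big draw call cannot be preempted mid-slice on this hardware generation), the "display" task simply arrives late, which is the missed-VSYNC problem from the previous posts.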
CPU : Intel i7-7700K
RAM : 32 GB ram
Video card : GeForce GTX 980 Ti
OS : Windows 10
Display : Samsung UN40JU7500 Curved 40-Inch UHD TV with shutter glasses
HMD : Oculus Rift

User avatar
LukePC1
Golden Eyed Wiseman! (or woman!)
Posts: 1387
Joined: Wed May 16, 2007 11:30 am
Location: Europe
Contact:

Post by LukePC1 »

But every modern GPU has multiple (unified) shaders. Why not take some of them, leave them otherwise idle, and have them do the flipping? It wouldn't need that much power, if it worked. But is it even possible to program individual shader units :?
Play Nations at WAR with this code to get 5.000$ as a Starterbonus:
ayqz1u0s
http://mtbs3d.com/naw/

AMD x2 4200+ 2gb Dualchannel
GF 7900gs for old CRT with Elsa Revelator SG's
currently 94.24 Forceware and 94.24 Stereo with XP sp2!
chrisjarram
Binocular Vision CONFIRMED!
Posts: 304
Joined: Sat Dec 22, 2007 3:38 am

Post by chrisjarram »

ssiu wrote:
KindDragon wrote:
ssiu wrote: Why can't the iZ3D driver render into quad buffers (a front and back buffer for each eye) and then brute-force copy the correct screen to the "real display buffer" at the right time? For example, 1920x1080 = ~6 MB per buffer. If we run at 120 Hz, we will be copying 120 times per second, which is about 720 MB/sec. Modern graphics cards have memory bandwidth of 50 GB/sec or more, so <1 GB/sec of copying should be doable.
We don't have access to any functions that D3D9 doesn't provide.
The driver just does these things:
1. The game draws to the left and right view surfaces
2. Copy from the left view surface to the backbuffer
3. Call the D3D9 method Present(), which flips the front and back buffers when the v-sync signal occurs
4. Copy from the right view surface to the backbuffer
5. Call the D3D9 method Present()
We don't control when the image will be displayed on the monitor.
So suppose the "game draws to the left and right view surfaces" step is slower (e.g. managing only 30 fps) than the CRT monitor refresh rate (e.g. 85 Hz). I think you need to split this into 2 execution threads:

the main "game draw" thread continues to draw to the left and right view surfaces, but keeps 2 buffers for the left view and 2 buffers for the right view, so we always have a stable copy of each view

the "display" thread does something like this:
1. wake up at a regular interval according to the monitor refresh rate (e.g. every 10 ms at 100 Hz)
2. copy from the (stable version of the) left view surface to the backbuffer
3. call the D3D9 method Present()
4. sleep till the next wake-up time
5. copy from the (stable version of the) right view surface to the backbuffer
6. call the D3D9 method Present()
Repeat

Disclaimer: I know nothing about D3D programming; just throwing out ideas ...
Didn't you read this thread? :) This was already covered here several days ago. TBH that's not a D3D programming issue, it's more fundamental programming. You cannot possibly guarantee you'd have enough thread priority for one thread to sit there doing the page flipping without losing sync.
chrisjarram
Binocular Vision CONFIRMED!
Posts: 304
Joined: Sat Dec 22, 2007 3:38 am

Post by chrisjarram »

Hugo wrote:To me it seems that it's just a problem of a missing official global API. I'm sure that iZ3D could do the job, but they would have to implement it for different vendors and perhaps for different chips as well. This might be too expensive.

By the way, within some high-level languages you're able to use inline assembler (C++ -> asm{...}) :)
Isn't that what I said in the post above? ;)

I have to say some of the ideas being thrown around here do seem to be getting increasingly optimistic and confusing the issue somewhat - there is provision in the HW to do this already, and afaik the only real solution (short of a dedicated buffering controller, which would require input decoding, frame buffers and output signal generation as described above) is to use this HW provision. If there were any 'trick' to work around this, it is unlikely nVidia would have bothered providing this functionality in HW in the first place.

No matter how good a workaround anyone thinks they have, the implementation is so realtime-critical (i.e. drop one sync and you're screwed) that the only way to do this (without said hardware dongle) is the proper way; otherwise it is likely the knowledgeable bods (and not people who 'don't know anything about D3D programming') would have already experimented with this. That said, I do think it's great the time and effort people are putting in here to try and find a solution to this problem, and (at the risk of being told you need to 'explore these possibilities before you can eliminate them') I honestly think, in my professional experience, this time and effort would be much better spent helping iZ3D pin down the specifics of the actual low-level API calls they need to implement. However, I'd have thought they have the in-house experience to do these investigations themselves, being graphics driver experts, and perhaps now it's likely they will dedicate some more resources to doing this (after seeing the _3000_ views this thread has had in as many days! :) ).
jumbo_spaceman
Two Eyed Hopeful
Posts: 51
Joined: Tue Jan 01, 2008 10:49 pm

Post by jumbo_spaceman »

No matter how good a workaround anyone thinks they have, the implementation is so realtime-critical (i.e. drop one sync and you're screwed) that the only way to do this (without said hardware dongle) is the proper way; otherwise it is likely the knowledgeable bods (and not people who 'don't know anything about D3D programming') would have already experimented with this. That said, I do think it's great the time and effort people are putting in here to try and find a solution to this problem, and (at the risk of being told you need to 'explore these possibilities before you can eliminate them') I honestly think, in my professional experience, this time and effort would be much better spent helping iZ3D pin down the specifics of the actual low-level API calls they need to implement. However, I'd have thought they have the in-house experience to do these investigations themselves, being graphics driver experts, and perhaps now it's likely they will dedicate some more resources to doing this (after seeing the _3000_ views this thread has had in as many days! :) ).
I agree completely. It also seems clear that the BETA was released according to their schedule and was delivered incomplete to meet (or nearly meet) a deadline. Now, given enough time, they will have to deliver the functionality eventually. The job has been done by others before, so the information exists somewhere in the vast sea of electrons, and the rest is just a matter of asking (or hiring) the right people. As BlackQ mentioned on the BETA forum, the nVidia API doesn't contain the relevant info. The visible commitment to getting this done is promising, though, and I wouldn't be surprised if there was a major breakthrough within the week. But I shouldn't pollute your thread with my pointless comments anymore - I'm just excited by the prospect of finally dusting off my shutters.
genetic
Cross Eyed!
Posts: 119
Joined: Sun May 27, 2007 11:59 pm

Post by genetic »

Mercy Yamada wrote:@genetic
@RAGEdemon

My Z800 works with the iZ3D driver.
I am impressed!!

hi Mercy Yamada,

if you have a minute, can you tell me what you did to get the Z800 working with the IZ3D driver? I am having no luck
User avatar
RAGEdemon
Diamond Eyed Freakazoid!
Posts: 740
Joined: Thu Mar 01, 2007 1:34 pm

Post by RAGEdemon »

I think the iZ3D driver uses the VSync signal for syncing. I think he will find that when the frame rate drops low, the eyes will swap, much like they do with shutter glasses :P

Setting the driver output to simple shutter should just "work", in theory.
Windows 11 64-Bit | 12900K @ 5.3GHz | 2080 Ti OC | 32GB 3900MHz CL16 RAM | Optane PCIe SSD RAID-0 | Sound Blaster ZxR | 2x 2000W ButtKicker LFE | nVidia 3D Vision | 3D Projector @ DSR 1600p | HP Reverb G2
genetic
Cross Eyed!
Posts: 119
Joined: Sun May 27, 2007 11:59 pm

Post by genetic »

RAGEdemon wrote:I am glad it works for you.

Earlier, interlaced didn't work for me.

I tried again but it gives me an error that there are 0 days left on trial and I need to activate it... I only installed the driver last night.

I tried to buy the driver but the activation page only asks for username and password... there doesn't seem to be a way to buy an activation code.

All other modes work without the activation dialogue popping up. Have tried uninstall/install.


Well, it's a BETA... I'm sure they will get the bugs fixed starting Monday :p

-- Shahzad.

I am having the same problem now, but only for shutter.

I did get it to come back by reinstalling on a different hard drive with only shutter and OpenGL selected for install. That worked, but only for about 15 minutes, and then I was no longer able to use shutter again.

In those 15 minutes, I was still unable to get my Z800 to work. I turned on VSync, but nothing new happened.
Bo_Fox
Two Eyed Hopeful
Posts: 73
Joined: Sun Oct 28, 2007 9:13 pm

Post by Bo_Fox »

What I did is:

1) Go to http://edimensional.com/support_updates.php

and download the E-D Activator from there. It's a very small program that, when launched, sits in your system tray.

2) By right-clicking its system tray icon, I place a check next to Page Flip mode rather than Interleaved mode.

3) I launch the game (Portal, for example) and then press Ctrl + F10 to "activate" the shutter glasses so that they start blinking. (Or, before launching the game, you could turn on the S-3D glasses by checking Stereo-On/Resync in the system tray so that you do not have to press Ctrl + F10 in-game.)

4) I press Num* to toggle Stereo3D using iZ3D's driver, and then increase the separation to around 50%.

It works perfectly fine as long as the frame rate stays above the refresh rate. If I fire the gun, the frame rate drops below the refresh rate and I only see S-3D in one eye rather than both. After a few seconds of moving around, it returns to normal S-3D.


Anyways, where do I place the MarkingSpecXml file? Is it supposed to help maintain correct S-3D if the frame rate drops?
Last edited by Bo_Fox on Sun Sep 14, 2008 12:54 am, edited 1 time in total.
8800GTX, 24" CRT (Sony GDM-FW900)
i7 920 @ 3.7GHz, Foxconn Bloodrage, WinXP-32 and Vista x64 SP2
3D shutters, 181.00 Forceware +162.50 S3D for WinXP

Other rig: 4870 1GB, i7 920 @ 4GHz, DFI T3eH8
24" LCD + IZ3D anaglyph, WinXP-32 and Win7 x64