


 OpenGL to Direct3D to 3D Vision: yes, it works great! 
One Eyed Hopeful

Joined: Sun Feb 03, 2013 5:11 pm
Posts: 15
UPDATE: The source code for this solution is available here: https://github.com/tliron/opengl-3d-vision-bridge

Hello friends,

This is going to be a long, long post. Like a few other people, I got the combination to work, but there were many pitfalls along the way, lots of trial and error, and mountains of frustration. I want to share my experience so that others can suffer less. :/ I'm posting to MTBS because it seems like the best minds are here! I hope to get feedback and corrections, as I'm sure I got some details wrong.

The bottom line is that I can develop my game in pure OpenGL, ensuring cross-platform compatibility, while also allowing for 3D Vision on supported Windows boxes. It really truly works: I run my game in Ubuntu, Mac OS X and Windows, and I'm working on mobile (Android/iOS) versions. On Windows I can put on the 3D Vision glasses and enjoy full stereoscopy.

This post is aimed at programmers: there is no magic switch by which you can enable this for any OpenGL game, sorry. I will explain a bit about that.

OK! Let's get to it. Lessons learned:

1. You know how a lot of Direct3D games support 3D Vision even though they weren't designed for it? This is called 3D Vision Automatic, and it only works for pure Direct3D, not OpenGL. It's really clever, actually: it seems that the driver captures all drawing operations between the BeginScene and EndScene calls, and renders each twice, using a slightly different projection matrix for each eye. There's a heuristic for it in the driver, but games can have "profiles" stored in the driver registry, tweaking the heuristic for parts of the scene. Of course, many games that are not designed with 3D Vision in mind do all kinds of tricks with rendering (especially for shadows), and that's the main reason they look terrible in 3D Vision Automatic. So, why not Automatic for OpenGL? I can only guess: OpenGL has many, many different ways to "render scenes" due to its long history and many different specs. It seems that it would be much harder to implement 3D Vision Automatic for OpenGL, and there would be much more room for breakage. So, NVIDIA just decided not to even try. [Correction: apparently Automatic for OpenGL was supported until 2008.]

2. But... you don't need to use 3D Vision Automatic. There is a very poorly documented method for putting in the left- and right-eye images yourself. This involves doing a StretchRect to your device's back buffer surface, where the source surface contains two side-by-side images and a special, magical NVIDIA signature in the last row. Even though the source is double the width of the destination, and also has an extra row, the driver recognizes the magical signature and turns on 3D mode. Where is the documentation for this sorcery?! You can kinda get an overview of it in this presentation, from page 37.

3. And where is the API mentioned in the presentation? Ha! The only official distribution I found of it is very obscure. It's in the NVIDIA SDK, but not in the usual places. It's actually hidden inside the "StereoIssues" code sample. (Here's a direct link to it.) And even in the code example, it's hidden in the /Direct3D/Source/StereoIssues directory. Finally after a long journey you will find the "nvstereo.h" file mentioned in the presentation. There you will find the definition of the NVSTEREOIMAGEHEADER, including the crucial NVSTEREO_IMAGE_SIGNATURE, which you need to send in your StretchRect.

4. The StereoIssues sample is also the only place I found "nvapi.h" and "nvapi.lib". The header file is surprisingly well-documented for NVIDIA! Unfortunately, it's not overly useful, at least not initially. It will let you hook into the 3D Vision driver's user interface, so that you can detect when users press the assigned keys for turning stereo on/off, changing depth and convergence, etc. But since we're not using 3D Vision Automatic, it's up to us to implement what actually happens when the user presses these keys. Unless you really want to mimic the "3D Vision Automatic experience", you might as well implement these features (and/or others!) in your own game settings. For example, maybe setting convergence is not good enough for you, and you want to let users tweak the field-of-view. We'll get to that later. Also note that most of "nvapi.h" only works in Windows Vista+, and the 3D Vision stuff is only a small part of it. Anyway, the bottom line is that you do not need "nvapi.h" at all to make any of this work. In fact, it might even confuse you. For example, a NvAPI_Stereo_Activate call will not simply make your application work.

5. Let me elaborate on that point: because you're not using 3D Vision Automatic, it's going to be up to your application to generate the images for each eye. You'll have to render each scene twice, from the point of view of each eye. Since I'm using the OpenGL interop, this meant that I actually created two frame buffer objects. I switch between them, rendering the scene to each, before finally "flushing" both eyes to the final Direct3D surface and then the back buffer. Rendering for each eye sounds hard, but actually it's great, because you get complete control over what each eye sees. You don't lose any performance over this compared to 3D Vision Automatic: after all, it's exactly what 3D Vision Automatic itself does behind the scenes. Actually, you can make things work faster than Automatic, because you can optimize: some parts of the scene might not have to be rendered twice, and can simply be copied to each eye's surface. Furthermore, there are a lot of things that can be handled in stereoscopy without using any 3D API. For example, making 2D overlays appear distant or near can be done with simple separation. You can even handle things like shadows in 2D using simple techniques. Doing stereoscopy well is hard: it depends a lot on the field-of-view of your game, and exactly what kinds of scenes you are displaying. I suggest you read up on some of the optical theory to get the best user experience.

6. That final, magical StretchRect is indeed magical. The NVSTEREO_IMAGE_SIGNATURE actually seems to trigger an entirely different path in the driver than the usual StretchRect API. Once discovered, it enables 3D mode in your driver and monitor, and displays the two side-by-side images in proper stereoscopy. Yay!

7. How weird is this magical StretchRect? Weird. For example, it seems to ignore the source and destination rects, so you don't have to set them: it always uses the entire source buffer and the entire back buffer. On that note, it seems that the dwWidth, dwHeight, and dwBPP fields in the NVSTEREOIMAGEHEADER are also ignored. Only the dwFlags field seems to be used.

8. This magic is incredibly annoying: there is no coherent error message from the StretchRect and no way to know why the driver is failing to enable stereo. Indeed, sometimes stereo won't work at all, and you will just see the side-by-side images (as well as that bottom row with the signature!). Make sure that you test the return codes of every single API call until you get to the final StretchRect, to be sure that everything is OK along the way. Another bizarre error that happened to me was that I did get stereoscopy enabled, but only the left eye was showing! Again, no error messages, no nothing. I had to do a lot of trial and error to find out what works. Miserable.

9. Fullscreen or windowed? By default, the magical StretchRect will only work in fullscreen mode: it will just be ignored in windowed mode, even if you force 3D mode to be "always on" in the NVIDIA settings. But, as you know, there are a few select applications that do seem to work fine in windowed mode. For example, the browser plugins for Firefox/IE, and the 3D Vision movie player. How come it works for them and not for you? Astoundingly, allowing for windowed 3D mode seems to be triggered by the name of your executable. Yes, as far as I can tell these names are hardcoded in the driver. I have not found entries in the registry. Wow. So, if you want 3D working for your app in windowed mode (especially useful for testing during development) then just rename it to "googleearth.exe". Yes, it's that simple and that stupid. Just, wow. I would really like to get more information on this bizarre issue.

10. Another annoyance: in your call to CreateDevice, you control fullscreen mode via the Windowed field of D3DPRESENT_PARAMETERS. I personally have not been able to make fullscreen work properly, perhaps due to my specific environment (I use SDL to create the window), though I found that if my window covers the screen I can use a windowed CreateDevice and 3D Vision still works. Weird, right? But, be that as it may, CreateDevice can fail with an error code that is not documented in the Windows API. I imagine that this is a peculiarity of the NVIDIA Direct3D implementation. Anyway, very annoying. How am I supposed to know what an unknown error code means?

11. So far I only talked about 3D Vision, but what about the OpenGL/Direct3D interop? This adds a lot of potential for things going wrong, which is again made doubly miserable due to the lack of error messages. Even if you don't use OpenGL, I suggest you read on, because it taught me further lessons about 3D Vision oddities.

12. First of all, the API for the OpenGL/Direct3D interop is implemented as a WGL extension named WGL_NV_DX_interop. Read the documentation very carefully and make sure you follow the rules for creating your Direct3D device and OpenGL textures and frame buffer objects. Importantly, things work differently for Windows XP and Windows Vista+. The advent of WDDM means that you must use Direct3D 9Ex for Vista+. How can you know at runtime? You can use GetVersionEx and see if the major version is 6+.

13. The example code included in the WGL_NV_DX_interop documentation is minimal and incomplete. For a more thorough example see this great contribution from Snippets & Driblits. Unfortunately, that example is also not a complete application, and is also not well-documented in the code. But I learned a lot from studying it, and especially was encouraged from their bottom line: this whole thing actually works!

14. When creating your frame buffer, make sure to call wglDXLockObjectsNV before you call glFramebufferTexture2D or other OpenGL frame buffer object APIs. If you don't, glCheckFramebufferStatus will return an unknown error code. (I'm getting really tired of these unknown error codes, aren't you?)

15. What kind of surfaces would work? You'll see in many of the examples for 3D Vision posted online that people use CreateOffscreenPlainSurface for the final surface (by "final surface" I mean the side-by-side surface plus the NVSTEREOIMAGEHEADER that you feed into the magical StretchRect). Unfortunately, the OpenGL/Direct3D interop does not support offscreen plain surfaces, and there are limitations in the Direct3D API that mean that if I don't use offscreen plain surfaces for my sources, I cannot use an offscreen plain surface for the final surface. See the StretchRect API documentation for a table of what kinds of source surfaces can be stretched to what kinds of destinations.

16. OK, so if CreateOffscreenPlainSurface doesn't work, you could just use CreateRenderTarget for all your surfaces, right? Well, kinda. The OpenGL/Direct3D interop works fine with this: I was able to create OpenGL frame buffer objects, and use StretchRect to the final surface. But... 3D Vision did not work properly. I got it to turn on, but only one eye was showing. Great. No error code, no nothing.

17. I have no explanation for this final lesson. Why should 3D Vision care about the source from which you stretch to the final surface? But here's the fact: when I created my source surfaces using CreateTexture and GetSurfaceLevel, the one-eyed bug disappeared. Somehow CreateRenderTarget and CreateTexture create different kinds of surfaces, and stretching from the former makes 3D Vision unhappy. I have no idea what the difference could be, because if I disable 3D Vision and just show the final surface on the screen, either kind of surface produces an identical result. Anyway, with CreateTexture, both the OpenGL/Direct3D interop and 3D Vision are happy. I'm happy, the players will be happy, everybody's happy.

18. Final issue, and it could potentially be a big one for you: OpenGL and Direct3D have different coordinate systems. (0,0) is bottom-left in OpenGL, top-left in Direct3D. So, all your surfaces will appear upside down. One solution is to ask your players to stand on their heads ;). Another solution could be to flip the surface using some kind of blit function, but that would require extra memory for a target surface, and is a waste. What I did instead is put a flag in my code, and make sure I render everything upside down in OpenGL when I know that I am sending it to Direct3D in the end. Three potential sub-issues to consider: 1) The easy part is creating a projection matrix for flipped mode; 2) Since your surfaces are rendered in OpenGL, they are counter-clockwise from the outside, but that will be reversed with the flip. You don't have to change your vertex order. Well, you can, but I imagine it would be very hard to do in your code. A much easier fix is something like: glCullFace(flip ? GL_FRONT : GL_BACK); 3) Finally, if you are doing any 2D overlays, you will also have to render them upside down. That might be harder depending on how much control you have over your 2D rendering library.

Though I'm happy I got this to work, I'm very unsatisfied. Between no error codes, unknown error codes, poor-to-non-existent documentation, bizarre hacks hardcoded into the driver and bizarre bugs (only one eye showing?!), I feel just damn lucky that I got all this to work. Lucky, but also angry about what NVIDIA put me through. NVIDIA, you should really support these features properly. My very reasonable wish list:

1. It would be nice to get better documentation for and access to the API
2. It would be nice to get any documentation for the proprietary error codes
3. Would it kill you to add some kind of way to get an error code out of that failed magical StretchRect?
4. Could you please document how a 3D Vision application can run in windowed mode? The insane .exe rename hack is not a solution.

Good luck, everybody. May the force be with you.


Last edited by emblemparade on Tue Jan 14, 2014 2:48 am, edited 10 times in total.



Mon Feb 04, 2013 2:14 pm
Petrif-Eyed

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2253
Location: Perpignan, France
emblemparade wrote:
It seems that it would be much harder to implement "3D Vision Automatic" for OpenGL, and there would be much more room for breakage. So, NVIDIA just decided not to even try.
OpenGL has been supported since 2001 in the older stereo 3D driver from NVIDIA. It's only for the release of their GeForce 3D Vision driver (now 3D Vision) in 2008 that they dropped support for OpenGL.


Mon Feb 04, 2013 3:05 pm
One Eyed Hopeful

Joined: Sun Feb 03, 2013 5:11 pm
Posts: 15
Fredz wrote:
OpenGL has been supported since 2001 in the older stereo 3D driver from NVIDIA. It's only for the release of their GeForce 3D Vision driver (now 3D Vision) in 2008 that they dropped support for OpenGL.

Interesting! I didn't know that. How well did it work?

Anyway, I still think that doing Automatic for OpenGL is much harder than for Direct3D, which is one reason why NVIDIA decided to stop supporting this feature. Well, there's that, and also conspiracy theories out there about a Microsoft-NVIDIA plan to kill OpenGL. Well, excuse me for being a skeptic, but I don't think 3D Vision is so pervasive as to make the lack of support for OpenGL such a good way to kill it...


Mon Feb 04, 2013 3:12 pm
Petrif-Eyed

Joined: Sat Apr 07, 2007 4:34 pm
Posts: 2887
Location: Sweden
Oh... just let nvidia smell a few pennies and see what happens...

_________________
Mb: Asus P5W DH Deluxe
Cpu: C2D E6600
Gb: Nvidia 7900GT + 8800GTX
3D:100" passive projector polarized setup + 22" IZ3D


Mon Feb 04, 2013 3:30 pm
Petrif-Eyed

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2253
Location: Perpignan, France
emblemparade wrote:
Interesting! I didn't know that. How well did it work?
As well as the 3D Vision driver, and even better, since the 3D Vision driver is exactly the same thing repackaged for Vista/7 minus several important features (support for several models of shutter glasses and 3D displays, locked convergence, the stereo OSD, OpenGL support, a lot fewer supported games, etc.). A lot of people in this forum were using that older driver when it was the only one available on Windows. After they dropped support for XP in their newest one, people moved on.

emblemparade wrote:
Anyway, I still think that doing Automatic for OpenGL is much harder than for Direct3D, which is one reason why NVIDIA decided to stop supporting this feature.
Since both drivers are basically the same thing, that doesn't make sense at all. And there is OpenGL support in the current 3D Vision driver, but it's only available for specific applications like Doom 3 BFG (see: http://3dvision-blog.com/tag/doom-3-bfg-edition/ ). Their 3D Vision Pro driver also has complete support for OpenGL.

They probably dropped support for OpenGL because there were almost no games using it in 2008, and its future was quite uncertain on Windows since Microsoft decided to not support it (and even made it conflict with Aero). There was also the fact that the pros were using OpenGL as their primary 3D API and were willing to spend much more than consumers for 3D, hence the availability of 3D Vision Pro which supports OpenGL.

emblemparade wrote:
Well, there's that, and also conspiracy theories out there about a Microsoft-NVIDIA plan to kill OpenGL.
Actually Microsoft did have a plan for that with Fahrenheit (started in 1997), but it was with SGI (who died shortly after that). In the end they abandoned it for Direct3D, which basically killed OpenGL in the consumer market after some years.


Mon Feb 04, 2013 4:11 pm
One Eyed Hopeful

Joined: Sun Feb 03, 2013 5:11 pm
Posts: 15
Fredz, are you sure that Doom 3 BFG isn't using a technique similar to the one I'm using?

Anyway, even if it is using special and specific support from NVIDIA, it still doesn't disprove my theory that OpenGL is harder to support across the board. I know OpenGL pretty well, and just the thought of supporting all versions and extensions with something like 3D Vision Automatic terrifies me.


Mon Feb 04, 2013 4:18 pm
Petrif-Eyed

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2253
Location: Perpignan, France
emblemparade wrote:
Fredz, are you sure that Doom 3 BFG isn't using a technique similar to the one I'm using?
They are probably not using the same technique since they have a direct access to the standard OpenGL quad buffer stereo implementation in their driver. No need to use off-screen rendering in this case, it's just a matter of calling the rendering functions with the appropriate GL_LEFT/GL_RIGHT parameters.

emblemparade wrote:
Anyway, even if it is using special and specific support from NVIDIA, it still doesn't disprove my theory that OpenGL is harder to support across the board. I know OpenGL pretty well, and just the thought of supporting all versions and extensions with something like 3D Vision Automatic terrifies me.
Hem, actually I've implemented an OpenGL stereo 3D driver on Linux that I intend to use for the Oculus Rift.

It's probably a lot easier to implement than a Direct3D one since functions for modelview and projection matrices are standardized (contrary to Direct3D) and haven't basically changed for the past 20 years. Also there is absolutely no need to handle any different versions or extensions, I've been able to support games from 2000 to 2012 with basically no modification to my OpenGL code base. The hardest part is in fact the interception layer at the OS level.

Also 3D stereo drivers with OpenGL support have existed since 1999 with Elsa (and the Revelator glasses), before NVIDIA got involved in this field. In fact NVIDIA bought the Elsa 3D stereo driver and released it as their own stereo 3D driver in 2001. At that time Elsa didn't even have access to the underlying quad buffer stereo implementation to support OpenGL 3D stereo rendering.

Stereo 3D support in OpenGL is even a lot older than that (long before Direct3D even existed), quad buffer stereo was basically part of the 1.0 version launched in 1992.


Mon Feb 04, 2013 5:12 pm
One Eyed Hopeful

Joined: Sun Feb 03, 2013 5:11 pm
Posts: 15
Fredz, to ensure I'm not misunderstood: I'm not saying that OpenGL on 3D Vision would be hard for NVIDIA to support; I'm saying that specifically OpenGL on 3D Vision Automatic would be a nightmare.

Many OpenGL applications don't use OpenGL matrices at all. The fixed-function matrix stack has been deprecated, and you're encouraged to use vertex buffer objects and shaders, which bypass all of the old matrix machinery that OpenGL provides. This is especially true for OpenGL ES and mobile applications. In my opinion, this was a good idea, and cleans a lot of heavy cruft out of OpenGL that it had been gathering for a while. Thing is, when you use shaders, it's up to you to implement projections. Because vertex shaders can do anything they want with the incoming vertices, there is no way for the driver to know what projection you are using, if any. A common technique is to construct the matrices on your own (on the CPU) and bind them as uniforms to your shader program. But that's just a convention, and is not standardized. I just don't see how 3D Vision Automatic can work in those cases. An Automatic support option could only possibly work with "old-style" OpenGL apps, which is probably how things are working for you.

There have been a few OpenGL stereoscopy APIs suggested in the past, and it indeed would be nice if NVIDIA supported them in 3D Vision. But the end result will still not be 3D Vision Automatic. You would have to render each eye in your program. Looking at the Doom 3 BFG implementation, I have a strong feeling they did that. They support none of the 3D Vision Automatic standard UIs: you need to configure depth and convergence inside the game itself.


Mon Feb 04, 2013 5:32 pm
Petrif-Eyed

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2253
Location: Perpignan, France
emblemparade wrote:
Fredz, to ensure I'm not misunderstood: I'm not saying the OpenGL on 3D Vision would be hard for NVIDIA to support, I'm saying that specifically OpenGL on 3D Vision Automatic would be a nightmare.
I'm not sure I really understand what you mean, but I don't see why it would be more complicated than with Direct3D. With the fixed matrix functions it's a piece of cake; without, it would be of the same complexity as with Direct3D.

emblemparade wrote:
Many OpenGL applications don't use OpenGL matrices at all. They've been deprecated and you're encouraged to use vertex buffer objects, which bypass all of the old matrix stuff that OpenGL provides.
Yes, they've been deprecated, but I don't know any OpenGL game that is not using these functions. I'm not even sure the latest notable OpenGL game that has been released (Rage) has stopped using them; its iPhone version still does, at least. It's possibly different for applications or recent games on mobile phones, but that's not a problem for now for 3D gaming on the PC.

emblemparade wrote:
I just don't see how 3D Vision Automatic can work in those cases. An Automatic support option could only possibly work with "old-style" OpenGL apps, which is probably how things are working for you.
Yes, that's the reason why it's been so easy to support; without the fixed-function pipeline I guess it would basically be the same situation as for Direct3D. But anyway, the 3D Vision automatic mode is supported with OpenGL ES on Android devices, so I guess it wasn't that hard for them to implement.

emblemparade wrote:
There have been a few OpenGL stereoscopy APIs suggested in the past, and it indeed would be nice if NVIDIA supported them in 3D Vision.
AFAIK there is only one stereoscopic API in OpenGL, the one that has been part of the standard since its launch. Anyway, NVIDIA will not support it in their consumer driver as long as their 3D Vision Pro market is so lucrative; they have really no reason to cut into their profits.

emblemparade wrote:
But the end result will still not be 3D Vision Automatic. You would have to render each eye in your program. Looking at the Doom 3 BFG implementation, I have a strong feeling they did that. They support none of the 3D Vision Automatic standard UIs: you need to configure depth and convergence inside the game itself.
Yes, it's been implemented in-game. I guess John Carmack preferred to have direct control to get the best end result, since he's not exactly a newcomer to 3D, including stereo 3D.

If you think of an interesting OpenGL 3D game or app that doesn't use the fixed function pipeline don't hesitate to post it, I'd like to have a try at supporting it with my driver. It should have a Linux version available though.


Mon Feb 04, 2013 6:55 pm
One Eyed Hopeful

Joined: Sun Feb 03, 2013 5:11 pm
Posts: 15
Well, our discussion is all pretty much academic, because OpenGL is not publicly supported on 3D Vision, whether Automatic or not...

I'm honestly not overly interested in Automatic: I would much prefer to have an open, well-documented API and have the application developer handle it properly. There's just no single "automatic" heuristic that would work for all games. Indeed, there are so few Automatic games that work perfectly with 3D Vision, even with the addition of the hardcoded profile database in the driver (bleh!). Thank the gods for Helix Mod cleaning up the mess. :)

But, the whole point of my post was to show that there is an entirely practical option to do this right, via the OpenGL/Direct3D interop, which is well-supported by NVIDIA, and in fact it works really well. It doesn't matter whether you use fixed function or not, and it depends on no profile stored in the NVIDIA driver. Not a single API call is needed beyond standard OpenGL and Direct3D 9. My game runs at about 300 FPS with stereo enabled in fullscreen, and I'm even doing SSAO. It was tricky to get working, but the bottom line is that it's been solved: I render each eye separately, and the rest is taken care of. I would really like to see "my" approach widely adopted by the major OpenGL game engines: it would allow us all to enjoy 3D Vision without having to choose between "only Direct3D and only Windows" and OpenGL cross-platform ability. (I do plan to GPL mine in the future.)

It's frustrating to see just how poorly NVIDIA is doing at making 3D Vision friendly to devs, even after all the years it's been available. It's almost as if someone up the ladder at NVIDIA doesn't want this to succeed, or sees it as a niche market for a few Quadro implementations. If there were an easy-to-use SDK, we would be seeing so many games supporting it by now, whether OpenGL or DirectX.

If you want to experiment with an OpenGL game: try Minecraft. It's written in Java, so easy to make sure you are running the same version on Linux and Windows. I know that modders have already got it to work with 3D Vision, though these mods keep breaking as new versions are released. If you get it to work well, you will make a lot of gamers happy (including me!).

By the way, my dev environment is also Linux. I cross-compile all my Windows code on Ubuntu, including the Direct3D stuff mentioned in my post, using MinGW. For OS X there is no good cross-compiler, so I have to build on an OS X box using MacPorts. Oh, and I use glLoadGen to create custom OpenGL and WGL linkage; I strongly recommend that project.


Mon Feb 04, 2013 7:20 pm
Petrif-Eyed

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2253
Location: Perpignan, France
Your project is a welcome addition for supporting OpenGL on 3D Vision, I guess implementing a userland library that emulates the OpenGL stereoscopic API could be useful for more widespread usage. You may have a look at this project that's basically trying to do that for Linux as an intercept library : https://github.com/magestik/glQuadBufferEmu


Mon Feb 04, 2013 7:34 pm
One Eyed Hopeful

Joined: Sun Feb 03, 2013 5:11 pm
Posts: 15
You know, I honestly haven't really thought of it, but it's actually quite likely that I could write an OpenGL quad-buffer API wrapper for my library. It would be more portable, for sure.

But the biggest challenge I can think of is the reversed coordinate system (this is a final point #18 I edited into my original post, it could be that you haven't read it). For OpenGL quad-buffer to work transparently over 3D Vision, the implementation would have to blit-flip the buffers for Direct3D to be happy. It seems terribly inefficient to me! At least for the purposes of my game, I would rather use my own proprietary API rather than OpenGL quad-buffer, and just render the scenes in reverse. If you use OpenGL quad-buffer, you likely will not expect to have to do that. :/

Using my API is ridiculously easy. A create() method creates two OpenGL frame buffer objects, one for each eye, using OpenGL/Direct3D interop so that they are bound to Direct3D surfaces. A call to activate_left() or activate_right() lets you pick one of them. Finally, instead of doing an OpenGL buffer swap, I call my flush() method, which does the magic StretchRect and a Direct3D present. So, four method calls. Pretty easy to add to any game. The challenge is not the API, but making sure that your whole rendering pipeline (including the 2D overlays) can allow for flipping the Y axis. I admit that's not trivial, but doable.

So, really it shouldn't be overly hard for a game that already supports OpenGL quad-buffer to also support my API. That would give you the best performance on platforms that support quad-buffer while also having the best performance for consumer 3D Vision.


Mon Feb 04, 2013 7:52 pm
One Eyed Hopeful

Joined: Sun Feb 03, 2013 5:11 pm
Posts: 15
Ah, so I found this tweet by John Carmack that proves that ID is using quad-buffer on consumer 3D Vision. Those lucky bastards. :)

For the rest of us, there's my method.


Mon Feb 04, 2013 10:57 pm
One Eyed Hopeful

Joined: Sat Mar 02, 2013 12:18 pm
Posts: 11
Here's an interesting development: quad-buffered OpenGL stereoscopic 3D is working with 3D Vision on my GeForce 680M laptop! No translation to Direct3D required. GeForce driver 314.07 18 Feb 2013. http://www.mtbs3d.com/phpbb/viewtopic.php?f=105&t=16849


Fri Mar 22, 2013 4:02 pm
Certif-Eyed!

Joined: Sat Dec 22, 2007 3:38 am
Posts: 516
Location: 3rd Stone from the Sun
emblemparade wrote:
Ah, so I found this tweet by John Carmack that proves that ID is using quad-buffer on consumer 3D Vision. Those lucky bastards. :)

For the rest of us, there's my method.



Hi. What exactly is your method? Can I download what you use? I am trying to get an old game working: it is the EF2000 3DFX version. Some guys got it working in DOSBox with 3DFX working.

Now I tried a few things; the NVIDIA emitter lights up, but the image is not doubled. I am close, I can see, but still far away.

Is there a way I can get this version to work in 3D? I think it uses OpenGL for the render, but you need to run it in a special version of DOSBox.

_________________
Intel i5 3570K @ 4.1ghz / Asus P8 Z68-V Gen3 / Corsair XMS 8gb / eVGA 660 SC GTX / Rocketfish 7.1 SC / 3 - Sharp XR 10XL Projectors / 3 -45" Screen's w/Screen Goo / nVidia 3D Vision / HOTAS Cougar / Thrustmaster MFD's


Wed Jul 24, 2013 6:44 am
One Eyed Hopeful

Joined: Sun Feb 03, 2013 5:11 pm
Posts: 15
My method is detailed in the *very long* initial post. Please ask specific questions and I'll see what I can do to help.


Wed Jul 24, 2013 10:10 am
Certif-Eyed!

Joined: Sat Dec 22, 2007 3:38 am
Posts: 516
Location: 3rd Stone from the Sun
emblemparade wrote:
My method is detailed in the *very long* initial post. Please ask specific questions and I'll see what I can do to help.



Well, is this method for a game you designed and made yourself? I asked my question in my very short response, but here is the direct question again.

I am trying to get a flight simulation called EF2000 (3DFX version) to work in 3D. It uses a modded version of DosBox which runs the game in Glide mode.

Now I am trying to get this to work in stereoscopic 3D, but if your method is not for a game like this, or any regular game, then this is a dead end, as I think your method does not work if you don't have the source code?

I saw the other OpenGL post where they say NVIDIA added OpenGL support for S3D, which I have not gotten to work on my system yet; I tried two different sets of drivers and a few different OpenGL games.

Guess this is not what I was hoping for. The search continues...



Thu Jul 25, 2013 9:22 am
One Eyed Hopeful

Joined: Sun Feb 03, 2013 5:11 pm
Posts: 15
What you want is likely impossible.

NVIDIA 3D Vision supports an "automatic mode" (used by 99% of games), in which general Direct3D calls are extrapolated to a stereoscopic perspective. For this to work, the game *must* use the Direct3D API, and additionally ensure that all objects (2D overlays, dynamic shadows, cursors) are rendered via Direct3D. These requirements are not met by DosBox, which doesn't even support Direct3D (I think?).

It might be possible to arrange something that works if you create a Direct3D adaptation layer for other 3D APIs... I know that some people have done simple translations for OpenGL that work for a very small subset of games, but it's hardly a general solution.


Thu Jul 25, 2013 10:01 am
Certif-Eyed!

Joined: Sat Dec 22, 2007 3:38 am
Posts: 516
Location: 3rd Stone from the Sun
Well, I think it is possible, just not with your method. I will continue searching. Funny thing: someone said I wouldn't get Total Air War working in S3D, but I got it working with help from some modders.

I will continue this search and look for possibilities. Heck, they said we would never get EF2000 3DFX to work in Glide mode without a 3DFX card, and they even did that, so nothing is impossible; it's just a matter of someone doing the mod or hack to get it working.

I know they used to have 3DFX games working in S3D back in the day with older S3D hardware. ;)



Fri Jul 26, 2013 5:17 am
One Eyed Hopeful

Joined: Sun Sep 22, 2013 11:15 am
Posts: 2
Hi, your post looks great. I'm developing an application sharing OpenGL textures for viewing with NVIDIA 3D Vision stereoscopic mode.
I create these two textures using OpenGL, and then I share them with OpenCL; running an OpenCL kernel fills those textures. Now I want these two textures to fill a left and a right DirectX surface for 3D Vision.
I'm studying the code posted at https://sites.google.com/site/snippetsa ... lDxInterop but its usage is poorly described. Could you post a full project that simply loads two textures (for example, from PNG files) in OpenGL and then shows them in 3D Vision using DirectX (using the interoperability library from the link above)? I would really appreciate your help, because I'm going crazy.
Thanks a lot, hoping for your answer,
Lorenzo


Sun Sep 22, 2013 11:24 am
One Eyed Hopeful

Joined: Sun Feb 03, 2013 5:11 pm
Posts: 15
Thanks for the compliment. I plan on releasing my work, but as you know it's extra effort to modularize and release the source code, doing it right and documenting... I'm considering writing it specifically as an SDL 2.0 extension.

It may happen in 1-2 months. Of course I will post here when this happens.


Fri Sep 27, 2013 10:12 pm
One Eyed Hopeful

Joined: Sun Sep 22, 2013 11:15 am
Posts: 2
Hi, I'll try to explain what I'm doing.
I created a window with a valid OpenGL context and then I pass the hWnd to the Initialize method of GlDx9RenderTarget. (The code I'm using is the one posted at https://sites.google.com/site/snippetsa ... lDxInterop, as I said in the previous reply.) To generate the left OpenGL texture, I read a PNG file using a library (I'm using VTK) and then I fill the texture like this:

glGenTextures(1, &m_glLeftColorBuffer);
glBindTexture (GL_TEXTURE_2D, m_glLeftColorBuffer);

glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

glTexEnvi (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

unsigned char * pData = (unsigned char *)pImage->GetScalarPointer();
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, 1920, 1080, 0, GL_RGBA, GL_UNSIGNED_BYTE, pData);

(No OpenGL error)

Now, the code creates the texture in this way (width and height are 1920 and 1080):
result = m_direct3D->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &d3ddm);
result = m_device->CreateTexture(width, height, 0, 0, d3ddm.Format,
                                 D3DPOOL_DEFAULT, &m_dxLeftColorTexture, NULL);

and finally it associates the texture to the surface (m_dxLeftColorTexture is an IDirect3DTexture9, m_dxLeftColorBuffer an IDirect3DSurface9):
m_dxLeftColorTexture->GetSurfaceLevel(0, &m_dxLeftColorBuffer);
Now, this is my full Render method:

void Draw()
{
dxRenderer->BeginGlDraw();
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glClearColor(0.f, 0.f, 0.f, 1.f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glViewport (0, 0, 1920, 1080); // This is my screen size (the window is full screen) and my PNGs are this size,
// so in terms of size everything should be OK
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
glOrtho (0., 1920., 0., 1080., -1.0, 1.0);
glMatrixMode (GL_MODELVIEW);

glPushMatrix ();
glLoadIdentity ();
const float exty = 1080 * 0.5f;
const float extx = 1920 * 0.5f;
glTranslatef (extx, exty, 0);

glEnable (GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, m_glLeftColorBuffer);

glBegin (GL_QUADS);
glClearColor(0,0,0,0);
glTexCoord2f(0.0, 1.0);
glVertex3f (-extx, exty, 0);
glTexCoord2f (1.0, 1.0);
glVertex3f (extx, exty, 0);
glTexCoord2f (1.0, 0.0);
glVertex3f (extx, -exty, 0);
glTexCoord2f (0.0, 0.0);
glVertex3f (-extx, -exty, 0);
glEnd ();

glFlush();
glDisable(GL_TEXTURE_2D);
glPopMatrix ();
dxRenderer->EndGlDraw();
dxRenderer->Flush(); // Of course I don't call glSwapBuffers, because this Flush method calls Present()
}

Finally, I get a black full-screen window. Any ideas? If I draw using, for example, GL_LINES, it works OK.

Please, I'm going crazy.
Thanks,
Lorenzo


Sat Sep 28, 2013 5:37 am
One Eyed Hopeful

Joined: Sun Feb 03, 2013 5:11 pm
Posts: 15
I'm very happy to announce that I'm sharing my source code with the world:

https://github.com/tliron/opengl-3d-vision-bridge

Sorry it took so long! But it takes time to properly isolate the API and create documentation.

Enjoy! If you have issues, please report them on github, not here.


Tue Jan 14, 2014 2:48 am
3D Angel Eyes (Moderator)

Joined: Sat Apr 12, 2008 8:18 pm
Posts: 10992
This is awesome! Thanks for posting the code.

_________________
check my blog - cybereality.com


Tue Jan 14, 2014 9:54 pm
One Eyed Hopeful

Joined: Tue Jan 12, 2010 5:09 pm
Posts: 14
Location: Romania
BIG thanks for your WORK, emblemparade!!!

Sent a donation your way! I know it's not much, but I hope it will be enough for a night in town. :)

Also, I want to say that I am using your library to create a wrapper, and I was able to basically enable 3D Vision in a commercial OpenGL game.
The original post is on the NVIDIA forums and you can see it here: https://forums.geforce.com/default/topic/682130/3d-vision/-opengl-3d-vision-wrapper-and-3d-vision-surround-is-now-possible-wip-/

At the moment it is working with one game, but it is huge progress. Again, I want to thank you for your work; without it, it wouldn't have been possible. ;))

Best Regards,
helifax


Sun Feb 09, 2014 4:53 pm
One Eyed Hopeful

Joined: Fri Mar 14, 2014 7:20 am
Posts: 4
emblemparade, your work is great, but I'm a bit of a noob... how can I use it?
In my code I create an OpenGL frame, and I want that frame to work with NVIDIA 3D Vision... but how? I understand that I need to build an image that contains both the left and right frames, and that I need to write a signature for the NVIDIA 3D driver... is this kind of stuff simpler with your code?

This is my code for the openGl Frame:

Code:
if (!newFrm.empty())
{
    reservedFrm = newFrm.clone();
    imageTex = matToTexture(newFrm, GL_NEAREST, GL_NEAREST, GL_CLAMP);
    glBindTexture(GL_TEXTURE_2D, imageTex);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glViewport(0, 240, 640, 400);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 1, 1, 0, -1, 1);

    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex2f(0, 0);

    glTexCoord2f(1, 0);
    glVertex2f(1, 0);

    glTexCoord2f(1, 1);
    glVertex2f(1, 1);

    glTexCoord2f(0, 1);
    glVertex2f(0, 1);
    glEnd();

    glEnable(GL_DEPTH_TEST);
    glClear(GL_DEPTH_BUFFER_BIT);
}


Fri Mar 14, 2014 8:07 am