 Biclops - Open RIFT-Compatibility layer 
One Eyed Hopeful

Joined: Wed Jun 13, 2012 11:16 pm
Posts: 34
What does it do?


Mainly, it will do side-by-side 3D stereoscopy, and barrel warp the resulting image to adjust for the RIFT's optics. (So far, the latter is just based on screenshots gleaned from John Carmack's E3 experiments.)
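For readers unfamiliar with the technique: the warp is typically modeled as a radial distortion evaluated per pixel in a shader. A minimal sketch of the math (the coefficients below are placeholders for illustration, not values fitted from those screenshots):

```python
def barrel_warp(x, y, k1=0.22, k2=0.24):
    """Radially scale a point measured from the lens center.

    (x, y) are normalized, center-relative screen coordinates. k1 and k2
    are placeholder distortion coefficients; real values would have to be
    fitted to the Rift's actual optics.
    """
    r2 = x * x + y * y                    # squared radius from center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2  # polynomial radial scale
    return x * scale, y * scale

# The center is left untouched; points are pushed outward more the
# farther they sit from the center, producing the barrel shape.
```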

What does it support?
Not a lot, yet. I've tested it with Skyrim and Mirror's Edge so far, but it's really alpha right now. UIs can be pretty weird, Skyrim needs its shadows disabled entirely, etc.

Where can I get it?
NOTE: This intercepts the DirectX layer using techniques very similar to those used by many common online cheats. I'm not doing anything special to hide from VAC or PB or GameGuard, so it's entirely possible those systems would flag this as a cheat. Use with multiplayer games at your own risk!

You can download pre-compiled versions for the above two games at https://github.com/josh-lieberman/Biclops/downloads, or check out the source code at https://github.com/josh-lieberman/Biclops.

What can I do?
At this stage, there are probably more bugs than not, but play around with it--if you have 3D programming experience, feel free to play with the source too. More supported games would be great.


Sat Jun 23, 2012 9:45 pm
Petrif-Eyed

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2227
Location: Perpignan, France
A stupid question probably, but do you feed the cameras with a 90x100° FOV, or do you use the current FOV and warp the result? Also, it seems the two frames are from different points in time; are you doing a dual render at a given time, or is the second image rendered on a subsequent frame?


Sat Jun 23, 2012 10:32 pm
Petrif-Eyed

Joined: Sat Sep 17, 2011 9:23 pm
Posts: 2186
Location: Irvine, CA
Thank you Emerson! Very generous of you to open source this.

As I mentioned in another thread. Tridef should have supported the Rift, because now their usefulness will slowly erode as Rift enthusiasts add open-source stereoscopic support for all the popular titles.


Sat Jun 23, 2012 10:38 pm
One Eyed Hopeful

Joined: Wed Jun 13, 2012 11:16 pm
Posts: 34
Quote:
A stupid question probably, but do you feed the cameras with a 90x100° FOV, or do you use the current FOV and warp the result?

Using the current FOVs, for now. I'd expect altering them will differ on a per-game basis.

Quote:
Also, it seems the two frames are from different points in time; are you doing a dual render at a given time, or is the second image rendered on a subsequent frame?

It's true that the two frames are one after another in time, though if the frame rate is high enough it's not super-noticeable in practice.
This is because, to my knowledge, there's no easy way of rendering the exact same geometry twice--we'd have to essentially record and replay the entire sequence of draw calls and shader passes. It's a tradeoff, and if I'm missing some obvious DirectX mechanism to do it, I'd gladly fix it. :)
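As a sketch of the trade-off described above, the intercept layer can simply toggle which eye's offset it applies each time the game presents a frame, so the two views come from consecutive frames (names and numbers here are illustrative, not Biclops' actual code):

```python
class AlternatingStereo:
    """Toggle the camera's x-offset every presented frame, so left and
    right views come from consecutive frames rather than a dual render.
    Illustrative sketch, not Biclops' actual implementation."""

    def __init__(self, eye_separation=0.064):  # ~64 mm interpupillary distance
        self.half = eye_separation / 2.0
        self.frame = 0

    def next_eye_offset(self):
        """Return the view-matrix x-translation to use for this frame."""
        offset = -self.half if self.frame % 2 == 0 else self.half
        self.frame += 1
        return offset

stereo = AlternatingStereo()
offsets = [stereo.next_eye_offset() for _ in range(4)]
# Alternates left, right, left, right; at high frame rates the one-frame
# lag between the eyes is hard to notice, as described above.
```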


Sat Jun 23, 2012 10:42 pm
Petrif-Eyed

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2227
Location: Perpignan, France
That's how the NVIDIA stereo 3D driver does it, by duplicating all drawing calls. But it's somewhat of a nightmare to do, since you need to duplicate all render surfaces and use some heuristics to determine what is going to be directly rendered and what is not.

Anyway, that's still a very good first step toward a stereo 3D driver; that's how I did it too for my first try at a Linux stereo 3D intercept driver. There will always be some room for improvement, but for now I think *real* FOV rendering is the most important challenge for perfect immersion.


Sat Jun 23, 2012 11:14 pm
One Eyed Hopeful

Joined: Wed Jun 13, 2012 11:16 pm
Posts: 34
Fredz wrote:
That's how the NVIDIA stereo 3D driver does it, by duplicating all drawing calls. But it's somewhat of a nightmare to do, since you need to duplicate all render surfaces and use some heuristics to determine what is going to be directly rendered and what is not.

I bet they get some assistance from knowing what goes in those mysterious black box drivers that sit beneath it all. ;)

Quote:
Anyway, that's still a very good first step toward a stereo 3D driver; that's how I did it too for my first try at a Linux stereo 3D intercept driver. There will always be some room for improvement, but for now I think *real* FOV rendering is the most important challenge for perfect immersion.

I think messing with the FOV in the intercept layer will only work for reducing the FOV, because I'm betting they cull on a frustum approximating their expected FOV.


Sat Jun 23, 2012 11:23 pm
One Eyed Hopeful

Joined: Mon Dec 12, 2011 5:44 pm
Posts: 48
Emerson wrote:
Quote:
Using the current FOVs, for now. I'd expect altering them will differ on a per-game basis.


So that's why that Skyrim video looks a bit vertically stretched? I wonder how many games would break somehow from being played in a portrait mode; I bet quite a few!


Sun Jun 24, 2012 4:58 am
Binocular Vision CONFIRMED!

Joined: Fri Jan 27, 2012 11:24 am
Posts: 228
Would it be possible to set the game to 1024x768, roll the camera 90 degrees, render like that, and then rotate it back when you display it? That should sort out the aspect ratio for you, shouldn't it?


Sun Jun 24, 2012 7:46 am
One Eyed Hopeful

Joined: Mon Dec 12, 2011 5:44 pm
Posts: 48
If you do that then the frustum used for culling geometry will be at 90 degrees to the camera, and we'll probably have missing geometry or graphical effects on the top and bottom parts of the image.


Sun Jun 24, 2012 10:57 am
Petrif-Eyed

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2227
Location: Perpignan, France
Emerson wrote:
I bet they get some assistance from knowing what goes in those mysterious black box drivers that sit beneath it all. ;)
I don't know if you already read it, but here is a PDF describing the basics of how stereo is implemented in the NVIDIA driver:
http://www.nvidia.com/content/PDF/GDC20 ... oscopy.pdf

Emerson wrote:
I think messing with the FOV in the intercept layer will only work for reducing the FOV, because I'm betting they cull on a frustum approximating their expected FOV.
Yes, the game should be aware of the real FOV that is used for rendering to avoid culling. For now I guess the simplest solution would be to modify this value in the game configuration, but it would be better to find a way to modify this value directly in the game.

For the Skyrim FOV config you may have a look here:
http://www.pcgamer.com/2011/11/11/the-e ... -and-more/
And for Mirror's Edge:
http://www.wsgf.org/node/293

I guess simply setting the FOV to 90° won't be enough, since the horizontal and vertical FOV are different for the Rift (90 and 110°). Maybe you should also modify the aspect ratio in the config files, or set the FOV to 110° and discard the left and right parts of the image to get a 90° horizontal FOV.
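The relationship being juggled here can be written down directly: for a symmetric pinhole projection, horizontal and vertical FOV are linked through the aspect ratio by tan(hfov/2) = aspect * tan(vfov/2). A quick check against the per-eye Rift image (640x800, aspect 0.8):

```python
import math

def horizontal_fov(vertical_fov_deg, aspect):
    """Horizontal FOV implied by a vertical FOV and an aspect ratio
    (aspect = width / height), assuming a symmetric pinhole projection."""
    half_v = math.radians(vertical_fov_deg) / 2.0
    half_h = math.atan(aspect * math.tan(half_v))
    return math.degrees(2.0 * half_h)

# With ~110 degrees vertically on a 640x800 per-eye image, the implied
# horizontal FOV is about 97.6 degrees -- so 90 horizontal and 110
# vertical can't both be hit exactly without cropping or an aspect
# change, which is the point made above.
print(horizontal_fov(110.0, 640 / 800))
```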


Sun Jun 24, 2012 12:49 pm
3D Angel Eyes (Moderator)

Joined: Sat Apr 12, 2008 8:18 pm
Posts: 10854
Awesome work Emerson.

_________________
check my blog - cybereality.com


Sun Jun 24, 2012 1:12 pm
Petrif-Eyed

Joined: Sat Apr 07, 2007 4:34 pm
Posts: 2876
Location: Sweden
Good job Emerson!

_________________
Mb: Asus P5W DH Deluxe
Cpu: C2D E6600
Gb: Nvidia 7900GT + 8800GTX
3D:100" passive projector polarized setup + 22" IZ3D
Image


Sun Jun 24, 2012 1:46 pm
One Eyed Hopeful

Joined: Wed Jun 13, 2012 11:16 pm
Posts: 34
Thanks! It's been a fun project so far. :D


Sun Jun 24, 2012 1:59 pm
One Eyed Hopeful

Joined: Wed Jun 13, 2012 11:16 pm
Posts: 34
Just for kicks, I tried doing the requisite reprojection for the FOV.
It kinda works, but stuff does flicker in and out at the top.
Image


Sun Jun 24, 2012 2:18 pm
Golden Eyed Wiseman! (or woman!)

Joined: Fri Aug 21, 2009 9:06 pm
Posts: 1644
How much of the top?

The Rift cuts off the very top and bottom of the image, so it might not be a problem.


Sun Jun 24, 2012 2:49 pm
One Eyed Hopeful

Joined: Mon Dec 12, 2011 5:44 pm
Posts: 48
So out of curiosity, when you run Skyrim with your DLL, what do you set the resolution to, and what FOV have you got in your settings? I'm not too knowledgeable about the inner workings of DirectX and such; I'm guessing you can't just set the resolution of the game to 640x800 with the FOV in the settings at 90 degrees, render the scene twice, grab the scene from the framebuffer each time, and then manually output to a 1280x800 window with those images side-by-side?


Sun Jun 24, 2012 2:55 pm
One Eyed Hopeful

Joined: Wed Jun 13, 2012 11:16 pm
Posts: 34
PalmerTech wrote:
How much of the top?

Looks like this (parts of the geometry getting culled out at the top).
Image

I can try changing the FOV to 110 in the game preferences and just changing the aspect ratio, which fixes the clipping, but it feels weird for some reason.
Image

rajveer wrote:
So out of curiosity when you run Skyrim with your DLL, what do you set the resolution to, and what FOV have you got in your settings?

Right now as the codebase sits, it expects the game to run at 1280x800 with the default FOV (probably somewhere between 75 and 90 in most games). It just squashes the image when it re-renders the side-by-side, which makes everyone super skinny. On my box, I've been experimenting with correcting the aspect ratio, but it's basically like pan-and-scanning the normal frustum: the sides get chopped off, which makes it feel "zoomed in" and therefore not the FOV you'd expect.

Quote:
I'm not too knowledgable about the inner workings of DirectX and stuff, I'm guessing you can't just set the resolution of the game to 640x800 with FOV in the settings at 90 degrees, render the scene twice, grab the scene from the framebuffer each time and then manually output to a 1280x800 window with those images side-by-side?

It could be theoretically possible, but I think I'd have to intercept the Win32 calls that set up the original HWND to go 1280x800, which gets more complicated (it's not part of the DirectX API per se). Also, I'm not sure the game would support 640x800 as a default resolution, or necessarily even have a good design for an 8:10 aspect ratio (in terms of positioning weapon models on the screen).


Sun Jun 24, 2012 3:18 pm
Cross Eyed!

Joined: Sat Jul 17, 2010 10:28 am
Posts: 140
I wonder if with the Rift the center pixel is supposed to be where the eye is looking straight ahead, or if it represents convergence some distance ahead. The screenshots you're generating look like the latter (if it were straight ahead, distant objects would be in the same position in both images).

_________________
1F3sxoFRtaCx5tvYoC2QoDvBra9QNj2hSb
Projects Backed


Mon Jun 25, 2012 4:04 pm
One Eyed Hopeful

Joined: Thu Jun 07, 2012 7:22 am
Posts: 44
Hi Emerson, I am really interested in all things VRish, and this work on pre-distortion is really interesting. I did some work on lens correction several years ago, and I'm not sure your code is doing barrel distortion completely "by the book", but then there are many variations depending on the optics you are correcting and your correction already looks good.

Here is a simple test rig I put together in GeeXLab. It may be of interest to those coding directly in OpenGL/GLSL or wanting to write a similar DLL interceptor. You can load in stereo images (the one attached is from Wikimedia) and tweak the values to fit. I made a variant of your algorithm using the values you found, and then added a cubic radial barrel distortion model that does radial distortion as r' = a + b*r + c*r^2 + d*r^3. I think this gives a slightly rounder barrel distortion, plus it's more tweakable to fit different optical configurations. You can switch between the two correction models via the "method" value in the roll-out pane with the parameters. Plenty of optimisations could be done on the shader, but it was just a quick demo for testing.

example.png is an example screenshot of GeeXLab. Put chicago_lion.jpg and DEMO.xml in the same directory, load DEMO.xml into GeeXLab, and you're away.
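For those who'd rather read the model than the XML, the cubic radial correction described above can be sketched in a few lines (the coefficients are illustrative placeholders, not the values shipped in DEMO.xml):

```python
import math

def cubic_radial_distort(x, y, a=0.0, b=1.0, c=0.14, d=0.24):
    """Apply r' = a + b*r + c*r^2 + d*r^3 to a center-relative point,
    preserving its direction. With a=0, b=1, c=d=0 this is the identity;
    the nonzero c and d defaults are made-up values for illustration."""
    r = math.hypot(x, y)
    if r == 0.0:
        return x, y  # the exact center never moves (when a == 0)
    r_new = a + b * r + c * r * r + d * r ** 3
    s = r_new / r
    return x * s, y * s
```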




Mon Jun 25, 2012 5:34 pm
One Eyed Hopeful

Joined: Wed Jun 13, 2012 11:16 pm
Posts: 34
Very cool! My distortion was just trying to fit a model onto the screenshot data I had; it's been mostly trial and error so far.

I'm sure that once hardware samples start floating around, all the stuff I've done so far will need lots of tweaking. We'll probably just have to toss up a grid on the display and hand-tweak the algos/constants until we get as close as possible (might even be different constants for different instances of the kit, depending on the assembly tolerances). Worst-case scenario, since the output resolution is fixed, I have a simple distortion map shader to just manually set each pixel's offset in a texture and brute force a solution.
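The brute-force fallback mentioned here amounts to precomputing a per-pixel lookup table that the shader samples instead of evaluating a formula. A sketch of building such a table (the `warp` callback and the [-1, 1] normalization are assumptions for illustration):

```python
def build_distortion_map(width, height, warp):
    """Precompute, for every output pixel, the normalized source
    coordinate to sample from. `warp` maps a center-relative point in
    [-1, 1] to its distorted position; individual entries could then be
    hand-tweaked against a grid pattern, as described above."""
    table = []
    for y in range(height):
        row = []
        for x in range(width):
            # Center of pixel (x, y), mapped into [-1, 1] on both axes.
            nx = (x + 0.5) / width * 2.0 - 1.0
            ny = (y + 0.5) / height * 2.0 - 1.0
            row.append(warp(nx, ny))
        table.append(row)
    return table

# With an identity warp, each pixel just samples its own location.
identity = build_distortion_map(2, 2, lambda x, y: (x, y))
```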

I'll try to get this function as an option in the shaders, and hopefully a bit better of a configuration mechanism to handle the constants and algo swapping. The more options at our disposal, the more likely we can find the right correction once the hardware's out.

LeeN wrote:
I wonder if with the Rift the center pixel is supposed to be when the eye is looking straight ahead or if it represents convergence some distance ahead. The screenshots you're generating look like the later ( if it were straight ahead distant objects would be in the same position in both images).

There are two adjustable options at play. The eye offset parameter just translates the two cameras along the x-axis, shifting them outwards; the frustum offset makes the frustum pyramids lop-sided. It's kind of like "rotating" the views inward or outward, but hopefully keeping a single focal plane instead of just a vertical line of intersection as you'd get with a plain rotation. This latter option corresponds to convergence. I think some 3D drivers out there sample the z-coordinates of vertices being uploaded to try to auto-judge a convergence value; that would be a nice-to-have someday.
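The "lop-sided" frusta described here are standard off-axis projections: each eye keeps looking straight ahead, but its frustum is skewed so both frusta frame the same window at the convergence distance. A sketch of the near-plane bounds (parameter names are illustrative, not Biclops' actual API):

```python
def eye_frustum_bounds(eye_offset, convergence, near, half_width):
    """Left/right near-plane bounds for an off-axis ('lop-sided') frustum.

    eye_offset: signed x-translation of this eye's camera (negative =
    left eye). convergence: distance to the shared focal plane.
    half_width: half the width of the view window at the convergence
    plane. The window is fixed in world space, so projecting its edges
    back to the near plane of the shifted eye skews the frustum.
    """
    left = (-half_width - eye_offset) * near / convergence
    right = (half_width - eye_offset) * near / convergence
    return left, right

# A centered camera gets a symmetric frustum; a left eye (negative
# offset) gets one skewed toward the center, and vice versa.
```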

My tweaks so far have just been to try and comfortably see 3D on my desktop screen. I'm not really sure what's going to feel comfortable on an actual Rift, we'll find out. :)


Mon Jun 25, 2012 8:49 pm
Petrif-Eyed

Joined: Sat Sep 17, 2011 9:23 pm
Posts: 2186
Location: Irvine, CA
http://pc.gamespy.com/articles/122/1225382p1.html

One thing that stuck out to me in this article is the characteristic:

Quote:
Variable Acuity Resolution (VAR) puts more pixels in the center of the image, and fewer in the periphery, mimicking the natural characteristics of the eye.


I assume this is a function of the lenses just stretching pixels on the edges more than the center - so there is a higher pixel density in the center. Curious - is this something that the warp drivers need to take into account or does it just naturally occur if the warp function matches the geometry of the lens?


Wed Jul 11, 2012 11:36 am
One Eyed Hopeful

Joined: Thu Jun 07, 2012 7:22 am
Posts: 44
brantlew wrote:
http://pc.gamespy.com/articles/122/1225382p1.html

One thing that stuck out to me in this article is the characteristic:

Quote:
Variable Acuity Resolution (VAR) puts more pixels in the center of the image, and fewer in the periphery, mimicking the natural characteristics of the eye.


I assume this is a function of the lenses just stretching pixels on the edges more than the center - so there is a higher pixel density in the center. Curious - is this something that the warp drivers need to take into account or does it just naturally occur if the warp function matches the geometry of the lens?


VAR just sounds like a natural property of the magnification of such systems! You can correct any apparent distortion with a barrel-like correction, so yes, the warp drivers take this into account.


Wed Jul 11, 2012 4:05 pm
One Eyed Hopeful

Joined: Wed Jun 13, 2012 11:16 pm
Posts: 34
It sounds like a fancy way of spinning the fact that the image is least distorted near the middle, heh.


Thu Jul 12, 2012 11:27 am
Golden Eyed Wiseman! (or woman!)

Joined: Fri Aug 21, 2009 9:06 pm
Posts: 1644
Emerson wrote:
It sounds like a fancy way of spinning the fact that the image is least distorted near the middle, heh.


Yep. ;) It really is a good effect, though, since it lets you get away with using fewer pixels in the periphery. It looks much, much better than a non-compressed resolution of 640x800 would appear at 90 degrees. If you wanted to be really fancy, you could do variable rendering with higher detail in the center of the image, but the returns on that will not be useful till we have much higher resolution.


Thu Jul 12, 2012 1:26 pm
Certif-Eyed!

Joined: Sun Mar 25, 2012 12:33 pm
Posts: 649
I wonder if you would still notice LOD popping if it occurs as the object moves from the center of your vision to the periphery.


Thu Jul 12, 2012 4:39 pm
Cross Eyed!

Joined: Fri May 18, 2012 5:31 pm
Posts: 102
Location: Houston, TX
Out of curiosity, how much does detail break down when you use your eyeballs to scan around the edge of the image inside a VAR HMD, where the optics are "robbing" pixels from the periphery?

From looking at the pre-warped images that people have posted (and the fact that reporters haven't commented on the effect), I'm optimistic that the effect isn't too noticeable. Though it does seem that everyone is basing their calculations on John Carmack's sample Twitter image, which he mentioned wasn't as severe as the final distortion ended up needing to be.

I don't doubt that VAR is totally the correct answer right now. I'm just wondering how well it will scale as we approach our Utopian full-FOV displays (which will encourage more eye-only motion). ;)


Thu Jul 12, 2012 5:53 pm
Binocular Vision CONFIRMED!

Joined: Thu Jun 07, 2012 8:40 am
Posts: 226
Location: New York
PalmerTech wrote:
you could do variable rendering with higher detail in the center of the image, but the returns on that will not be useful till we have much higher resolution.


I thought the only way to get reasonable detail in the middle using a shader was to simply render at 2x resolution or higher, then warp that and scale it down. Since the warp is very non-linear, I don't think a vertex shader can warp the geometry as well, but as post-processing it should be quite simple to do in a higher-resolution pixel-shading pass. Am I missing something?

_________________
Ibex 3D VR Desktop for the Oculus Rift: http://hwahba.com/ibex - https://bitbucket.org/druidsbane/ibex


Fri Jul 13, 2012 12:53 pm
Petrif-Eyed

Joined: Sat Sep 17, 2011 9:23 pm
Posts: 2186
Location: Irvine, CA
Could you render the same view twice - once with a high FOV but low resolution and once with a low FOV but high resolution? Then just sort of insert the high res image inside the low res, like so.

Attachment:
DualRes.jpg


Now of course the final display resolution of the entire image is extremely high, but because the outside "pixels" are stretched into giant blocks the entire image can be compressed significantly for transmission on the line. The edges between the regions might look funny, but you might be able to do this 3 or 4 times and create a concentric pattern that dropped off gradually in detail.
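Back-of-the-envelope, under a small-angle approximation where FOV fraction maps linearly to pixel scale, the saving from this two-pass scheme is easy to estimate (a rough illustration, not a real rendering-cost model):

```python
def two_pass_saving(width, height, inner_fraction):
    """Ratio of pixels needed to render the whole view at the inner
    pass's detail vs. the two-pass scheme (both passes rendered at
    width x height, the high-res pass covering `inner_fraction` of the
    full FOV). Small-angle approximation; ignores compositing cost.
    """
    two_pass = 2 * width * height
    uniform = (width / inner_fraction) * (height / inner_fraction)
    return uniform / two_pass

# Two 640x800 passes with the sharp pass covering half the FOV match the
# central detail of a single 1280x1600 render at half the pixel cost.
print(two_pass_saving(640, 800, 0.5))  # 2.0
```

Each extra concentric ring adds another fixed-size pass while the uniform-detail cost grows quadratically, which is why the scheme gets more attractive the narrower the sharp region is.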

Or maybe there is a much more elegant way to do this mathematically on the GPU so that the "effective" resolution drops gradually away from the center. I'm thinking something like ray-tracing where you can control the angle of the intersection between the view surface and the light ray.




Fri Jul 13, 2012 1:39 pm
Cross Eyed!

Joined: Fri May 18, 2012 5:31 pm
Posts: 102
Location: Houston, TX
brantlew wrote:
Could you render the same view twice - once with a high FOV but low resolution and once with a low FOV but high resolution? Then just sort of insert the high res image inside the low res, like so.

[...]

Now of course the final display resolution of the entire image is extremely high, but because the outside "pixels" are stretched into giant blocks the entire image can be compressed significantly for transmission on the line. The edges between the regions might look funny, but you might be able to do this 3 or 4 times and create a concentric pattern that dropped off gradually in detail.


In a traditional forward renderer, you'd end up transforming the geometry for the scene twice for the example shown, and one additional time for each extra ring (multiplied by two views for stereo vision). It adds up quickly. A better approach may be to just run less expensive pixel shaders, perform less tessellation, etc. on things at the extremes of the view.


Fri Jul 13, 2012 3:32 pm
Certif-Eyed!

Joined: Sun Mar 25, 2012 12:33 pm
Posts: 649
I bet it would work for raytraced games though (the future!)


Sat Jul 14, 2012 9:52 am
Cross Eyed!

Joined: Tue Jan 25, 2011 7:53 pm
Posts: 168
Location: Sweden
Yeah, a raytracer could possibly even be written to trace out the variable LOD the Rift has :) so that you wouldn't waste any processing power on multisampling the edges.

_________________
Image
"This is great!"


Sun Jul 15, 2012 2:23 pm
Golden Eyed Wiseman! (or woman!)

Joined: Fri Aug 21, 2009 9:06 pm
Posts: 1644
When I place the files in the same directory as the Skyrim executables, the game fails to launch and throws runtime errors. Could I be doing something wrong? Has anyone else had trouble, or is something wrong on my end? :P


Tue Jul 17, 2012 12:24 pm
Petrif-Eyed

Joined: Sat Sep 17, 2011 9:23 pm
Posts: 2186
Location: Irvine, CA
You may be the first tester since you're the only one with the hardware.


Tue Jul 17, 2012 12:53 pm
Golden Eyed Wiseman! (or woman!)

Joined: Fri Aug 21, 2009 9:06 pm
Posts: 1644
At this point, I do not have a Rift hooked up, just trying to run it on my laptop display. Going to have a few friends see if they can get it running.


Tue Jul 17, 2012 1:04 pm
Binocular Vision CONFIRMED!

Joined: Wed Sep 30, 2009 8:29 pm
Posts: 236
I was able to get it running - I simply put the files into the main directory and loaded Skyrim directly (not through TriDef). Looks promising - man, I can't wait to get my eyes into a Rift!!!

On initial startup it appears that the right view is not working; however, once in game it works perfectly. Also, when entering the menu screens, the right side freezes up as well.


Tue Jul 17, 2012 4:50 pm
Petrif-Eyed

Joined: Sat Sep 17, 2011 9:23 pm
Posts: 2186
Location: Irvine, CA
@Emerson & Cyber: Just curious about a few things. How easily does the technique transfer from game to game? Does the code transfer easily between games using the same engine or does each game require a similar amount of customization? Or put another way - are there more differences between individual games or between engines?


Tue Jul 17, 2012 9:41 pm
3D Angel Eyes (Moderator)

Joined: Sat Apr 12, 2008 8:18 pm
Posts: 10854
brantlew wrote:
@Emerson & Cyber: Just curious about a few things. How easily does the technique transfer from game to game? Does the code transfer easily between games using the same engine or does each game require a similar amount of customization? Or put another way - are there more differences between individual games or between engines?

My issue currently is not with game-specific implementation details, but with the hook process working at all. For whatever reason I have gotten L4D to work, and noticed that Portal 2 also works (surely the engine code is very close, since they both use Source). I bet Portal 1 and L4D2 also work. However, most other games I've tried, probably a dozen or so, just crash immediately when trying to run the executable. Sometimes there is an error message, but it is different for each game; sometimes nothing happens at all.

If I got the initial hook to work, then the 3D would probably work. Most of the code is generic, but some is specific (for example, enabling or disabling the alteration of certain matrices). So there will likely need to be specific tweaks done, but this could even be externalized into a config file or something to make new game profiles easier. But I am just guessing at this point, since I don't understand why so many games don't work. And it's made even more difficult because I can't really do any debugging with the method I am using, so I am taking shots in the dark trying to figure out what's wrong.

_________________
check my blog - cybereality.com


Tue Jul 17, 2012 9:58 pm
One Eyed Hopeful

Joined: Wed Jun 13, 2012 11:16 pm
Posts: 34
I think the general process is the same for most games, but each game has its own unique and quirky ways of doing things. What I've been hacking on the last week or two in my free time is a mechanism that, while it won't relieve the need for game-specific hacks, should at least compartmentalize them so that the driver core doesn't get bogged down with tons of edge cases.

I do think it will be an open question how to support games that require specialized injection methods, though--doubly so for ones that have some sort of cheat-defense mechanism (obviously, even if I were to find a reasonable circumvention method that works, if I open-sourced it, it would make its way into cheats sooner or later).
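As a sketch of what compartmentalizing the game-specific hacks might look like: a per-game profile table consulted by a generic core. This is entirely hypothetical (not Biclops' actual design); the Skyrim shadow quirk is from earlier in the thread, while `separation_scale` is an invented example setting.

```python
# Hypothetical per-game profile table: the driver core stays generic and
# looks up game-specific quirks here instead of hard-coding edge cases.
PROFILES = {
    "TESV.exe": {
        "disable_shadows": True,   # Skyrim needs shadows off, per above
        "separation_scale": 1.0,
    },
    "MirrorsEdge.exe": {
        "disable_shadows": False,
        "separation_scale": 0.8,   # invented value, for illustration
    },
}

DEFAULTS = {"disable_shadows": False, "separation_scale": 1.0}

def profile_for(executable):
    """Merge a game's quirks over the generic defaults."""
    merged = dict(DEFAULTS)
    merged.update(PROFILES.get(executable, {}))
    return merged
```

Externalizing a table like this into a config file is the "new game profiles" idea cybereality mentions above.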


Tue Jul 17, 2012 11:08 pm
One Eyed Hopeful

Joined: Fri Jul 06, 2012 9:07 pm
Posts: 5
Let me first say I'm mostly ignorant of what's possible and not possible with what y'all are doing. So please forgive my ignorance. =)

Finding this thread got me interested enough to do some reading on the subject, and from what I've found, it seems that both you and cybereality are using the proxy-DLL method to intercept DirectX calls, replacing the DX DLL in the game directory with your own. But from my limited reading, there are more efficient ways to intercept them that don't require adding support one game at a time and would just work with all games.

For instance, Fraps uses system hooks to inject itself to capture and modify the image from any DirectX application. This is a description of how Fraps does it:
http://www.ring3circus.com/gameprogramming/case-study-fraps/

In addition, there's an open-source Fraps alternative that likely works in the same way:
http://sourceforge.net/projects/taksi/
You might be able to get some ideas or code from that project.

I realize you've been using C++, but I also found a .NET open-source assembly "that allows you to safely install hooks from managed code into unmanaged functions. (Note: EasyHook takes care of all the issues surrounding DLL injection - e.g. CREATE_SUSPENDED, ACL's, 32 vs 64-bit and so on)." It's supposedly easy to use:
http://easyhook.codeplex.com/


Also, I'm wondering: why try to re-invent the wheel by making a fourth program that does 3D? If neither TriDef nor IZ3D will build in support for the Rift, could you not use the same kind of hooks to grab TriDef's, IZ3D's, or 3D Vision's output and apply the distortion there? If you could do that, then you wouldn't have to build in support for each game separately, and it would work with online games that frown on replacing game files. TriDef and IZ3D both have a financial incentive to make their programs work with as many games as possible, and would do nearly all of the programming work for supporting current and future games without you having to do anything other than maintain compatibility with TriDef/IZ3D/3D Vision. In addition, you wouldn't have to replicate their 3D features in your code. Furthermore, this would greatly increase the Rift's popularity: if someone's favorite game isn't on the list of supported games, they won't buy it or, worse, will return it and give it a bad review.


Again, I apologize if I ended up just talking out of my a** due to sheer ignorance. =/

Lastly, I'm thrilled and appreciative that there are two talented people willing to give so much time so that the Oculus Rift has the software support that it needs! =)


Thu Jul 19, 2012 12:15 am
Petrif-Eyed

Joined: Sat Sep 17, 2011 9:23 pm
Posts: 2186
Location: Irvine, CA
@TheInevitable: You might be able to do a post-process of the Tridef output images for warp correction but I think you would need access earlier in the pipeline to accomplish the necessary FOV changes.


Thu Jul 19, 2012 12:47 am