Mirror's Edge Rift D3D9 driver

The place for all discussion of the Oculus Rift compatible open source 3D drivers.
yillard
One Eyed Hopeful
Posts: 8
Joined: Sat Feb 08, 2014 3:25 pm

Mirror's Edge Rift D3D9 driver

Post by yillard »

Hi,

With Vireio Perception I was experiencing problems with Mirror's Edge (CD version, non-Steam). At FHD resolution the frame rate was very low (1-2 fps and lower), although my machine is not that bad (Vireio game_type = 201, machine: AMD Phenom II X4, ATI Radeon HD 6870).
So I tried to figure out why this is so and made a Rift D3D9 driver for Mirror's Edge only, based on the Vireio Perception base classes. It works well for me and runs at 40-50 fps. I noticed that switching the render target with multisampling = 4 and pixel format = D3DFMT_A8R8G8B8 hits performance very hard on my machine. With pixel format = D3DFMT_A16R16G16B16 it works fine. So I set the multisampling to 0 for this specific render target; the difference in quality is not noticeable, since the image per eye is twice as big as needed. But changing this in Vireio did not improve the performance much. I did not measure the performance of other functions, but I think it's perhaps due to the overhead of Vireio being a universal driver.
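For illustration, a minimal sketch of creating such a render target with multisampling disabled (the function and variable names are just placeholders, not the actual driver code; CreateRenderTarget is the stock IDirect3DDevice9 call):

#include <d3d9.h>

// Placeholder sketch: create the proxy's render target without multisampling.
// Width/height and names are illustrative only.
IDirect3DSurface9* CreateEyeRenderTarget(IDirect3DDevice9* pDevice, UINT width, UINT height)
{
    IDirect3DSurface9* pTarget = NULL;
    HRESULT hr = pDevice->CreateRenderTarget(
        width, height,
        D3DFMT_A8R8G8B8,          // pixel format the game asked for
        D3DMULTISAMPLE_NONE, 0,   // multisampling = 0 instead of 4
        FALSE,                    // not lockable
        &pTarget, NULL);
    return SUCCEEDED(hr) ? pTarget : NULL;
}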


The driver with source is attached as a zip file.

If you like it, I can try to integrate the driver into Vireio as a D3DProxyDeviceMirrorsEdge, which would be chosen by the D3DProxyDeviceFactory for a specific game type, for example game_type = 1029 for Mirror's Edge. I think that should not be difficult, because I used the base classes from Vireio.
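Roughly along these lines (only a sketch of the idea; the actual factory method in Vireio may have a different signature, and the class and constructor names here are assumptions):

D3DProxyDevice* D3DProxyDeviceFactory::Get(int gameType, IDirect3DDevice9* pDevice)
{
    switch (gameType)
    {
    case 1029:  // Mirror's Edge
        return new D3DProxyDeviceMirrorsEdge(pDevice);
    default:    // generic proxy device for all other game types
        return new D3DProxyDevice(pDevice);
    }
}
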
For now the driver does not have any save/load options and there are not many settings to change (CameraDistance, ScreenSize, Scaling, AspectRatio, FOV). The default settings are for the Oculus DK1 and can be changed in game to work with RiftUP, for example. The controls are custom for now and not the Vireio ones. The character shadows are disabled for now, because some of them did not work properly.


So if you like it, let me know and I will try to implement the driver as a Vireio "plugin".
Last edited by yillard on Sun Jun 01, 2014 2:14 pm, edited 1 time in total.
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: Mirror's Edge Rift D3D9 driver

Post by cybereality »

Interesting. Thanks for your effort.
Neil
3D Angel Eyes (Moderator)
Posts: 6882
Joined: Wed Dec 31, 1969 6:00 pm

Re: Mirror's Edge Rift D3D9 driver

Post by Neil »

Woah!

I haven't tried this yet, but you should get yourself added to the Vireio development team if you are interested.

The next driver release shouldn't be much longer.

Regards,
Neil
yillard
One Eyed Hopeful
Posts: 8
Joined: Sat Feb 08, 2014 3:25 pm

Re: Mirror's Edge Rift D3D9 driver

Post by yillard »

Thanks for the offer!

I'm interested in supporting Vireio, but you should know I am new to DirectX and it could take me a bit longer to accomplish a task. Most of what I program is hardware, since I'm a hardware guy. Where can I sign up for the team?

I found some points in Vireio that could be changed. One, for example, is the OculusTracker::updateOrientation() method. There is a hack to avoid errors when crossing over 360/0. This causes glitches when you move your head too fast, because the movement gets suppressed. I think a better method is to choose the shortest possible path of the movement to resolve the direction ambiguity (clockwise vs. counter-clockwise). I implemented this in the Mirror's Edge driver (OculusRiftDK1Tracker::updateOrientation()). The function always chooses the path that is smaller than 180 degrees. The only way to get wrong results is to move your head faster than 180 degrees per frame; at 10 fps that would be 1800 degrees per second, which is 5 turns per second, and that is very, very fast for a head :D. Another method I tried was to additionally use the direction reading from the tracker (angular velocity) to resolve the ambiguity, but the results were poor when you don't move the head, due to the noise floor. You would then have to filter the noise, and that would become more complex, so I think the shortest-path method is the better option.
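The shortest-path idea looks roughly like this (just a sketch with made-up names, not the exact code from OculusRiftDK1Tracker::updateOrientation; angles in degrees):

#include <cmath>

// Given the previous unwrapped yaw and the new raw yaw reading in [-180, 180),
// pick the rotation direction whose absolute change is below 180 degrees.
float UnwrapYaw(float previousYaw, float newRawYaw)
{
    float delta = newRawYaw - fmodf(previousYaw, 360.0f);
    // Wrap the delta into (-180, 180]: the shorter of the two possible paths
    // (clockwise vs. counter-clockwise).
    while (delta > 180.0f)   delta -= 360.0f;
    while (delta <= -180.0f) delta += 360.0f;
    return previousYaw + delta;
}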

The other point is adding a second method of translating the virtual camera. For Mirror's Edge, for example, the method of eliminating the projection matrix from the ViewProjectionMatrix by multiplying it with the inverse projection matrix, then translating it and then applying the projection matrix again did not work well for the water reflection in the start menu (shearing occurred), but for other geometry it works well. I think I found out why. This method only works perfectly if you use exactly the same projection matrix as the game does, and perhaps the game uses two very different projections for geometry and reflections.
If you use a slightly different projection matrix (which is mostly the case, because you don't know the original projection matrix of the game) to eliminate the original projection from the ViewProjectionMatrix, then translate it and then project it again with the slightly different projection matrix, you get a new ViewProjection matrix with some extra coefficients added, which I think produce the shearing in the water reflection if they are too big. So you have:

ViewMatrix * ProjectionOriginal = ViewProjectionMatrix <-- this is what you get from the game
ViewProjectionMatrix * InverseProjectionOriginal * Translation * ProjectionOriginal <-- this is what you want, but you can't get it if you don't know the game's projection matrix
ViewProjectionMatrix * InverseProjectionUser * Translation * ProjectionUser <-- this is the usual way to do it. But this introduces error terms in the first column of the resulting matrix; the error depends on the difference between the [3,3] coefficients of the original projection matrix and the used projection matrix.

Original projection matrix =
[px, 0, 0, 0]
[0, py, 0, 0]
[0, 0, pc, 1]
[0, 0, pd, 0]

User projection matrix =
[a, 0, 0, 0]
[0, b, 0, 0]
[0, 0, c, 1]
[0, 0, d, 0]

So the difference between pc and c contributes to the error term. The best option is pc = c; then the error is 0. (See the attached file.)
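In code the "eliminate, translate, re-project" variant looks roughly like this (just a sketch with illustrative names; only the D3DX calls are real API):

#include <d3dx9.h>

// Remove the (user-supplied) projection, translate the camera along x, and
// re-apply the projection. If userProj differs from the game's original
// projection (in particular in the pc vs. c coefficient), the error terms
// described above appear in the result.
D3DXMATRIX TranslateViewProjection(const D3DXMATRIX& viewProj,
                                   const D3DXMATRIX& userProj,
                                   float eyeSeparation)
{
    D3DXMATRIX userProjInv, translation;
    D3DXMatrixInverse(&userProjInv, NULL, &userProj);
    D3DXMatrixTranslation(&translation, eyeSeparation, 0.0f, 0.0f);

    // ViewProjectionMatrix * InverseProjectionUser * Translation * ProjectionUser
    return viewProj * userProjInv * translation * userProj;
}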

But I found a different way to add translation to the virtual camera, which does not depend on the original projection matrix. I tested this in Mirror's Edge and Half-Life 2, and it even seems to work :)

So consider the following view matrix (it can also be a WorldView matrix), in DirectX notation (transposed relative to the usual mathematical notation):

WorldViewMatrix (or only ViewMatrix) =
[m11, m12, m13, 0]
[m21, m22, m23, 0]
[m31, m32, m33, 0]
[m41, m42, m43, 1]

If you apply the projection to it (let the projection matrix be the "original projection matrix" from above), you get:

WorldViewMatrix * ProjectionOriginal = WorldViewProjectionMatrix =
[m11*px, m12*py, m13*pc, m13]
[m21*px, m22*py, m23*pc, m23]
[m31*px, m32*py, m33*pc, m33]
[m41*px, m42*py, pd+m43*pc, m43]

This is the matrix you get from the game for the shader.

But what you want is "WorldViewMatrix * CameraTranslation * ProjectionOriginal" instead of "WorldViewMatrix * ProjectionOriginal" for stereo rendering.

So with CameraTranslation =
[1, 0, 0, 0]
[0, 1, 0, 0]
[0, 0, 1, 0]
[tx, 0, 0, 1]

you would get (if the game rendered in stereo):

WorldViewMatrix * CameraTranslation * ProjectionOriginal =
[m11*px, m12*py, m13*pc, m13]
[m21*px, m22*py, m23*pc, m23]
[m31*px, m32*py, m33*pc, m33]
[px*tx+m41*px, m42*py, pd+m43*pc, m43]

This is nearly the same as the WorldViewProjectionMatrix you get from the game, except for the 41 coefficient: the original has "m41*px", and what is needed is "px*tx+m41*px".
So you can simply take the WorldViewProjectionMatrix from the game as it is and add some value "+/-camera_separation" to the "m41*px" coefficient, and you get
"+/-camera_separation + m41*px" in the resulting new WorldViewProjectionMatrix for the left and right eye respectively. For an asymmetric projection matrix you can then simply apply the asymmetric transformation at the end (as in the Oculus SDK documentation). I implemented this in the driver in the ProxyDirect3DDevice9::SetVertexShaderConstantF method; you might have a look.

I attached the wxMaxima calculation file, if you want to look at the error terms described above.

I think it's gonna be a big post... sorry for that ;)
Boulotaur2024
One Eyed Hopeful
Posts: 1
Joined: Mon Sep 15, 2014 7:13 am

Re: Mirror's Edge Rift D3D9 driver

Post by Boulotaur2024 »

yillard wrote:[...] eliminating the projection matrix from ViewProjectionMatrix by multiplying it with the inverse projection matrix
This is interesting... By any chance, would you know if there is a way to isolate the game's projection matrix with a similar trick?

EDIT :
yillard wrote:If you use a slightly different projection matrix (which is mostly the case, because you don't know the original projection matrix of the game)

Ah there's my answer I'm afraid :/
yillard
One Eyed Hopeful
Posts: 8
Joined: Sat Feb 08, 2014 3:25 pm

Re: Mirror's Edge Rift D3D9 driver

Post by yillard »

Boulotaur2024 wrote:
yillard wrote:[...] eliminating the projection matrix from ViewProjectionMatrix by multiplying it with the inverse projection matrix
This is interesting... By any chance, would you know if there is a way to isolate the game's projection matrix with a similar trick?

EDIT :
yillard wrote:If you use a slightly different projection matrix (which is mostly the case, because you don't know the original projection matrix of the game)

Ah there's my answer I'm afraid :/

That depends.
If you know the ViewMatrix, then you can isolate the ProjectionMatrix by calculating the inverse ViewMatrix and applying it from the left:
ViewMatrixInverse*ViewProjectionMatrix => ProjectionMatrix
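In D3DX terms that is simply (a sketch with illustrative names; D3DXMatrixInverse is the real D3DX call):

#include <d3dx9.h>

// Recover the game's projection by applying the inverse view from the left:
// ViewMatrixInverse * ViewProjectionMatrix => ProjectionMatrix
D3DXMATRIX IsolateProjection(const D3DXMATRIX& view, const D3DXMATRIX& viewProj)
{
    D3DXMATRIX viewInv;
    D3DXMatrixInverse(&viewInv, NULL, &view);
    return viewInv * viewProj;
}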