on "rectilinear renderings" in game engines and what can be

JamesMccrae
One Eyed Hopeful
Posts: 12
Joined: Thu Aug 30, 2012 12:22 am

on "rectilinear renderings" in game engines and what can be

Post by JamesMccrae »

Hi, I'm James McCrae. (Coincidentally, my middle name is Palmer, which is kind of weird.)

I love the Oculus Rift project and I check up on it constantly :) I originally went into school with the goal of being able to make "great games", but along the way that evolved into a desire to self-improve and to think fundamentally about the next level of immersion one could have with technology. Certainly those "immersive experiences" are the best times you can have with a computer. Those old id Software games were an early, formative part of defining what I wanted to do in life.

A little more about me - I'm a PhD student in computer graphics. I read the recent reddit AMA and came across a certain response I didn't agree with:

"
johnnd:

What would you say is the practical FOV limit for a non-curved screen? Are we there yet with 110 vertical/90 horizontal? Will there be improvements in the near future?

palmerluckey:

Rendering power is the main problem right now, actually. I have a prototype with 270 degree FOV, but the limits on rectilinear projections in game engines means you have to render FOUR independent cameras in the game, then fuse them. Huge performance suck, to get 60fps, you would need to be capable of running the game on a normal monitor at 240fps.
"


It's possible to fix that! You can employ vertex shaders to map vertices to NDC (normalized device coordinates) with whatever projection you want, and simply perform an orthographic projection there. (The caveat being that this relies on your geometry being finely enough tessellated that the projection is smooth.)
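
To make that concrete, here is a rough, untested sketch of the kind of math the vertex shader would do, written as plain C++ for readability. The equidistant "fisheye" mapping and the function name are just my illustration (any projection you want would slot in here); the point is simply that you write NDC coordinates yourself instead of multiplying by the usual 4x4 perspective matrix.

Code:

#include <cmath>

struct Vec3 { float x, y, z; };

// Map a view-space vertex (camera at the origin, looking down -z) directly
// to normalized device coordinates with an equidistant "fisheye" projection.
// fovDeg can be anything up to (but not including) 360 degrees. This is
// roughly what the vertex shader would compute instead of applying the
// standard perspective matrix; the rasterizer then just does the trivial
// orthographic step.
Vec3 fisheyeToNDC(const Vec3& v, float fovDeg, float nearZ, float farZ)
{
    const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    const float theta = std::acos(-v.z / len);             // angle from the forward axis
    const float halfFov = 0.5f * fovDeg * 3.14159265f / 180.0f;
    const float r = theta / halfFov;                        // radius in NDC (1 at fov/2)
    const float planar = std::sqrt(v.x * v.x + v.y * v.y);  // direction within the image
    const float dirX = (planar > 0.0f) ? v.x / planar : 0.0f;
    const float dirY = (planar > 0.0f) ? v.y / planar : 0.0f;
    const float depth = 2.0f * (len - nearZ) / (farZ - nearZ) - 1.0f; // linear depth in [-1,1]
    return Vec3{ r * dirX, r * dirY, depth };
}

Since the rasterizer still interpolates linearly between the transformed vertices, long edges become straight chords of what should be curves, which is exactly where the tessellation caveat comes from.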

As an example, I did a paper that relied on rendering exactly 6 axis-aligned view directions, each with a 90 degree FOV:

http://www.dgp.toronto.edu/~mccrae/proj ... e3DNav.pdf
Video:
http://www.dgp.toronto.edu/~mccrae/proj ... mccrae.avi

But I realized it could be optimized using "dual parabolic maps" (180 degree FOV projections). I was hoping to do research with this established concept, but it's not relevant to my thesis, so screw it :) (it's a rough draft not meant to see the light of day):

http://www.dgp.toronto.edu/~mccrae/proj ... abolic.pdf
Video:
http://www.dgp.toronto.edu/~mccrae/proj ... icmap2.mp4

Long story short, you wouldn't need more than 2 projections to grab everything in sight. :)
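
For reference, the mapping itself is tiny. Here's a rough sketch (plain C++, untested, the names are mine) of the standard dual-paraboloid parameterization: the front map covers the hemisphere in front of the camera, the back map the one behind it, so between the two passes every direction is accounted for.

Code:

#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Standard dual-paraboloid parameterization (the same one used for
// dual-paraboloid environment maps). 'dir' is a normalized view-space
// direction; with the camera looking down -z, the "front" map covers
// dir.z <= 0 and the "back" map covers dir.z >= 0. Each map sends its
// hemisphere into the unit disk of a 2D image.
Vec2 paraboloidUV(const Vec3& dir, bool frontMap)
{
    const float denom = frontMap ? (1.0f - dir.z) : (1.0f + dir.z);
    return Vec2{ dir.x / denom, dir.y / denom };
}

// Which of the two rendering passes a vertex belongs to.
bool belongsToFrontMap(const Vec3& dir)
{
    return dir.z <= 0.0f;  // in front of the camera
}

(Triangles that straddle the z = 0 plane have to be submitted to both passes, or clipped against it.)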

James
Nogard
Cross Eyed!
Posts: 101
Joined: Thu Aug 16, 2012 5:30 am

Re: on "rectilinear renderings" in game engines and what can

Post by Nogard »

As a mild computer graphics programmer I must say that is quite snazzy. I know we have quad tessellation with the new D3D, but does it also work on plain triangles? You could subdivide the triangles into a tri-force pattern and somehow sneak that into the vertex buffer without the fps noticing? That said, just loading the model pre-tessellated would make much more sense.
I have very limited experience with optimizing graphics code from my games tech degree, so forgive me for any offending ignorance I might have displayed. One other small thing: wouldn't a vertex shader that warps geometry screw up the lighting?
zalo
Certif-Eyed!
Posts: 661
Joined: Sun Mar 25, 2012 12:33 pm

Re: on "rectilinear renderings" in game engines and what can

Post by zalo »

Whew! And I thought we'd have to resort to raytracing to do this!

Though, that caveat is interesting. Does that mean that each "face" of that cube room is formed out of many, many polygons?
I suppose it will be less of an issue for terrain environments (where the ground has lots of faces), but indoor ones, where the walls are usually just a few faces, could look funky.

You're really cool, by the way. I just want to know how you plan on surmounting this issue. Will these recent pushes for tessellation be in your favor?
brantlew
Petrif-Eyed
Posts: 2221
Joined: Sat Sep 17, 2011 9:23 pm
Location: Menlo Park, CA

Re: on "rectilinear renderings" in game engines and what can

Post by brantlew »

Interesting.
druidsbane
Binocular Vision CONFIRMED!
Posts: 237
Joined: Thu Jun 07, 2012 8:40 am
Location: New York

Re: on "rectilinear renderings" in game engines and what can

Post by druidsbane »

I think the hard part is getting the geometry tessellated enough in a real-world game to actually be able to implement this. I'd prefer this to shaders any day, though :)
Ibex 3D VR Desktop for the Oculus Rift: http://hwahba.com/ibex - https://bitbucket.org/druidsbane/ibex
cybereality
3D Angel Eyes (Moderator)
Posts: 11407
Joined: Sat Apr 12, 2008 8:18 pm

Re: on "rectilinear renderings" in game engines and what can

Post by cybereality »

Cool. But can't you just render a game normally with high FOV and do the warping in a full-screen pixel shader? Does that not work?
Nogard
Cross Eyed!
Posts: 101
Joined: Thu Aug 16, 2012 5:30 am

Re: on "rectilinear renderings" in game engines and what can

Post by Nogard »

cybereality wrote:Cool. But can't you just render a game normally with high FOV and do the warping in a full-screen pixel shader? Does that not work?
Not really, because the projection (the view) is normally calculated onto a planar surface for the screen. That is 'good' until you go past 180, then it goes a bit crazy. Ideally everything should be projected onto spheres, but alas we don't really have an effective (I think :P) system for curved triangles, hence the tessellation.

At least that is what I think.
JamesMccrae
One Eyed Hopeful
Posts: 12
Joined: Thu Aug 30, 2012 12:22 am

Re: on "rectilinear renderings" in game engines and what can

Post by JamesMccrae »

Hello.

Yes, the caveat may be an issue in the general setting: if you have a very large polygon (in terms of its projected area relative to your viewpoint), that can be problematic.

So yes, you need to be careful to author the environment geometry so that it is finely tessellated enough. Another potential solution is to rely on the graphics hardware to perform dynamic tessellation. I know that OpenGL has support for this; I had the privilege of sitting in on a talk by Dave Shreiner (author of the OpenGL red book) on the subject a year or two back, I think at SIGGRAPH Asia 2011. There are some cool demos of the tessellation (Stone Giant, Unigine "Heaven") that show the tessellation patterns interpolating and becoming finer and finer as the viewpoint nears the geometry. So a solution may lie there as well.

@zalo: Yes, in the case of the demo I have shown, I am able to get a smooth projection since there are vertices at each cell of the grid/cube face. And you guys also seem like a cool bunch, very interesting stuff going on here :)

@cybereality: You can do a "high FOV", but this is limited to less than 180 degrees (as Nogard correctly mentions, at that point the projection breaks down). To think of it visually, consider your viewing frustum (essentially a pyramid with the tip cut off; the cut forms your image/projection plane). At 180 degrees FOV, the pyramid flattens into a plane, failing to capture the entire halfspace that a 180 degree FOV would require.
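
A quick way to see the breakdown numerically: for an image plane at unit distance, the half-width needed to capture a given FOV is tan(fov/2), which blows up as you approach 180 degrees. A throwaway snippet (mine, just for illustration):

Code:

#include <cmath>
#include <cstdio>

int main()
{
    // Half-width of an image plane at unit distance needed for a planar
    // perspective projection with the given horizontal FOV.
    const float fovs[] = { 90.0f, 120.0f, 150.0f, 170.0f, 179.0f, 179.9f };
    for (float fov : fovs) {
        const float halfWidth = std::tan(0.5f * fov * 3.14159265f / 180.0f);
        std::printf("fov %6.1f deg -> image half-width %10.2f\n", fov, halfWidth);
    }
    // At exactly 180 degrees the half-width is infinite: the frustum has
    // flattened into the image plane itself.
    return 0;
}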

Another subtle advantage of the above method is that the distribution of projected light "rays" has better sampling uniformity at high FOV settings compared to a standard frustum (which is arguably more ideal). (To see this, consider that with a viewing frustum, each outward "ray" must intersect a common far plane, the back plane of the frustum. With the paraboloid method there is no such restriction: the "far plane" is effectively a "far hemisphere". :) )
MaterialDefender
Binocular Vision CONFIRMED!
Posts: 262
Joined: Wed Aug 29, 2012 12:36 pm

Re: on "rectilinear renderings" in game engines and what can

Post by MaterialDefender »

Is this really that much faster than rendering to a cubemap? More polygons don't come for free, and with the cubemap method you don't need the plane behind the camera except for really extreme FOVs. The top and bottom planes should be pretty fast to render in most cases provided you do some object culling, and you don't have to render every side of the cube at full resolution; I would guess that with the cube method you can easily get away with about three times the number of pixels needed for a normal rectilinear projection. Hard to believe that a heavily tessellated scene should be that much faster to render.

Still a nice concept though; it should be interesting to see some benchmarks comparing both methods in a thoroughly optimized form with some real content.
Owen
Cross Eyed!
Posts: 182
Joined: Mon Aug 13, 2012 5:21 pm

Re: on "rectilinear renderings" in game engines and what can

Post by Owen »

Cubemap rendering generally has more CPU overhead, since you have to cull and submit the scene 6 (or 5, as you point out) times; or, if you use hardware instancing, the burden is shifted onto the raster pipeline. With the dual-paraboloid method you could still use extra clipping planes (or just per-object culling) to avoid drawing stuff that is outside of your maximum field of view.

I think that ultimately the future solution will be to have eye tracking with any device over 180 degree FOV, then just use single paraboloid rendering to cover the hemisphere that is currently being observed. Visual acuity beyond 90 degrees from the fovea is so low that nobody will notice that those pixels are just smeared out from the edge.
MaterialDefender
Binocular Vision CONFIRMED!
Posts: 262
Joined: Wed Aug 29, 2012 12:36 pm

Re: on "rectilinear renderings" in game engines and what can

Post by MaterialDefender »

I see your point, but I'm still not convinced. If you look at a typical game scene as it is built today, you would need a massive increase in polygon count to make this work well. That can't be easily dismissed. You don't have to render the scene multiple times, but instead you need multiple times more polygons.

The eye-tracking solution sounds like a good idea, but it could be used to further optimize cubemap rendering too, since you would only have to render half of the pixels for the left, right, top and bottom planes.

I would really like to see different scenes rendered with both methods, using optimized algorithms and optimized content for either method. I could imagine this being a head-to-head race with no clear winner.
Pyry
Two Eyed Hopeful
Posts: 85
Joined: Mon Aug 13, 2012 5:55 pm

Re: on "rectilinear renderings" in game engines and what can

Post by Pyry »

Another potential pitfall is that your meshes have to be manifold, because if they aren't, the vertex warping will cause cracks. Of course, a lot of the useful mesh-editing operations (subdivision surfaces, smoothing, etc.) already require manifold geometry, so this might not be too onerous a restriction.
Nogard
Cross Eyed!
Posts: 101
Joined: Thu Aug 16, 2012 5:30 am

Re: on "rectilinear renderings" in game engines and what can

Post by Nogard »

Pyry wrote:Another potential pitfall is that your meshes have to be manifold, because if they aren't, the vertex warping will cause cracks. Of course, a lot of the useful mesh-editing operations (subdivision surfaces, smoothing, etc.) already require manifold geometry, so this might not be too onerous a restriction.
I did not think of that, and in fact I think it is worse than you think, because mesh optimization is built on the meshes not being manifold, and on top of that models are often made of separate meshes to save on polys. Hmm, couldn't you render the screen 64 times at a tiny resolution while moving the camera each time, then texture and warp that, I suppose?
Could one of you explain how I am no doubt wrong?
JamesMccrae
One Eyed Hopeful
Posts: 12
Joined: Thu Aug 30, 2012 12:22 am

Re: on "rectilinear renderings" in game engines and what can

Post by JamesMccrae »

Hi guys! :mrgreen:

Probably responding to a thread "long dead", but here are some responses:

1. I was originally addressing Palmer's "have to do multiple rendering passes for my 270 degree FOV" problem. I wanted to mention that with existing technology it's possible in a single rendering pass, using a vertex shader to transform your geometry. That was the point :D
@cybereality: to address why the "pixel shader which samples a framebuffer" approach alone wouldn't work: you can't render that 270 degree FOV framebuffer in one pass without what I'm suggesting, not with the standard 4x4 matrix pipeline anyway. At less than 180 degrees FOV, the "render once, then use a pixel shader to sample from that rendered framebuffer with whatever correct transform you want" technique would work great, though. (PS: Congrats on being with Oculus, Vireio - saw the code on GitHub, what up!!! Love that) :mrgreen:

2. @MaterialDefender: you're right, finely triangulated geometry is required to produce a "smooth" projection, and this implies a cost in the number of triangles rendered. But consider "tessellation shaders", which give you a degree of subdivision "for free" in hardware. Typically (and ideally) under perspective projection, tessellation is finest for the surfaces closest to the viewpoint, and the triangulation of any surface is scaled dynamically based on viewing distance. That isn't the only metric one might use to define the "level of detail" for the tessellation (or "how tri-forcey do we get"); for instance, for a surface which is distant but has a large projected size, you might also wish to use finer tessellation.

3. Re: "the cracks". Whatever metric you use for your dynamic tessellation, as long as it is defined purely by the parameters of an edge (that is, the two vertex coordinates), you will not see any cracks, because the subdivision points will be the same for every face that shares the edge. (Consider two faces that share a common edge but use different subdivision amounts: following the projection you would see "gaps", because the shared edge would get sampled at different points. You would get "different zig-zags" across the shared edge, since the projection in the vertex shader transforms a linear, discretely sampled edge into some 3D curve.)
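
To make point 3 concrete, here's one such metric as a rough sketch (plain C++, untested, mine): subdivide an edge according to the angle it subtends at the viewpoint. The count depends only on the unordered pair of endpoints, so both faces sharing an edge split it identically and no cracks can open.

Code:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float length(const Vec3& a) { return std::sqrt(dot(a, a)); }

// Number of segments to split an edge into, given its endpoints in view
// space (eye at the origin). The metric is the angle the edge subtends at
// the eye divided by a target angular step; it is symmetric in (a, b), so
// every face sharing the edge computes the same count and the subdivision
// points line up exactly.
int subdivisionsForEdge(const Vec3& a, const Vec3& b, float targetStepDeg = 2.0f)
{
    const float cosAngle = dot(a, b) / (length(a) * length(b));
    const float clamped = std::max(-1.0f, std::min(1.0f, cosAngle));
    const float angleDeg = std::acos(clamped) * 180.0f / 3.14159265f;
    return std::max(1, static_cast<int>(std::ceil(angleDeg / targetStepDeg)));
}

(In an OpenGL tessellation control shader, the same per-edge value would presumably feed the outer tessellation levels.)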

Cheers! :woot

PS: March is the month we can finally get to work; the dev kits should arrive at last! I can't wait to program for it and bring my projects to light!! No pressure Oculus, but I'm not sure I've ever been more excited for anything "tech" in my life :)
geekmaster
Petrif-Eyed
Posts: 2708
Joined: Sat Sep 01, 2012 10:47 pm

Re: on "rectilinear renderings" in game engines and what can

Post by geekmaster »

JamesMccrae wrote:...
But I realized it could be optimized using "dual parabolic maps" (180 degree FOV projections). I was hoping to do research with this established concept, but it's not relevant to my thesis, so screw it :) (it's a rough draft not meant to see the light of day):

http://www.dgp.toronto.edu/~mccrae/proj ... abolic.pdf
Video:
http://www.dgp.toronto.edu/~mccrae/proj ... icmap2.mp4

Long story short, you wouldn't need more than 2 projections to grab everything in sight. :)

James
The second two links in the first post are dead (the two quoted above). Are they available at another URL?
Pyry
Two Eyed Hopeful
Posts: 85
Joined: Mon Aug 13, 2012 5:55 pm

Re: on "rectilinear renderings" in game engines and what can

Post by Pyry »

JamesMccrae wrote:Hi guys! :mrgreen:
3. Re: "the cracks". Whatever metric you use for your dynamic tessellation, as long as it is defined purely by the parameters of an edge (that is, the two vertex coordinates), you will not see any cracks, because the subdivision points will be the same for every face that shares the edge. (Consider two faces that share a common edge but use different subdivision amounts: following the projection you would see "gaps", because the shared edge would get sampled at different points. You would get "different zig-zags" across the shared edge, since the projection in the vertex shader transforms a linear, discretely sampled edge into some 3D curve.)
Yes, tessellation will work fine. I'm more worried about human-created meshes that might have various 'problems' (not being watertight, being non-manifold) that are not apparent in a standard perspective projection, but which will show up once you start warping vertices around.
JamesMccrae
One Eyed Hopeful
Posts: 12
Joined: Thu Aug 30, 2012 12:22 am

Re: on "rectilinear renderings" in game engines and what can

Post by JamesMccrae »

The links should work again :)
JamesMccrae
One Eyed Hopeful
Posts: 12
Joined: Thu Aug 30, 2012 12:22 am

Re: on "rectilinear renderings" in game engines and what can

Post by JamesMccrae »

@Pyry: If it's done correctly, a mesh that was watertight before is watertight after.
geekmaster
Petrif-Eyed
Posts: 2708
Joined: Sat Sep 01, 2012 10:47 pm

Re: on "rectilinear renderings" in game engines and what can

Post by geekmaster »

JamesMccrae wrote:The links should work again :)
Thanks!
Pyry
Two Eyed Hopeful
Posts: 85
Joined: Mon Aug 13, 2012 5:55 pm

Re: on "rectilinear renderings" in game engines and what can

Post by Pyry »

JamesMccrae wrote:@Pyry: If it's done correctly, a mesh that was watertight before is watertight after.
Yes, but what I meant was that frequently people produce meshes which aren't watertight, because they're lazy.
JamesMccrae
One Eyed Hopeful
Posts: 12
Joined: Thu Aug 30, 2012 12:22 am

Re: on "rectilinear renderings" in game engines and what can

Post by JamesMccrae »

Pyry wrote:
JamesMccrae wrote:@Pyry: If it's done correctly, a mesh that was watertight before is watertight after.
Yes, but what I meant was that frequently people produce meshes which aren't watertight, because they're lazy.
The problem I was addressing did not relate to the topological properties of a surface; it was a problem with high FOV rendering.

But I still don't see this as an issue. Take as an example some geometric "seam", transformed with a parabolic map and then orthographically projected: how is that visually different from an orthographic or perspective projection of the same seam?
Pyry
Two Eyed Hopeful
Posts: 85
Joined: Mon Aug 13, 2012 5:55 pm

Re: on "rectilinear renderings" in game engines and what can

Post by Pyry »

Consider a 'T' junction: the blue quad meets the red and green quads at a 'T', where the blue quad does not have a vertex at the junction. This doesn't pose a problem for perspective transformations, since lines still map to lines, so the blue quad remains flush with the red and green ones. However, apply a transformation that maps lines to curves, like a parabolic map, and a crack will appear. This type of arrangement isn't manifold, but it tends to appear relatively frequently in hand-created models. In the 'old days', when vertex transforms were done at low precision, modelers were warned against T-junctions, because numerical imprecision could move the central vertex off the line, creating cracks even where mathematically there shouldn't be any in a perspective projection.
[Attachment: diagram of the T-junction arrangement described above]
JamesMccrae
One Eyed Hopeful
Posts: 12
Joined: Thu Aug 30, 2012 12:22 am

Re: on "rectilinear renderings" in game engines and what can

Post by JamesMccrae »

*sigh*

(It is a great reply, and well detailed.)

To this I would say: the manifold is not "270 degree FOV projection ready" :)

Algorithmically, though, the T-junction can be detected and corrected.
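
Something like this brute-force sketch (plain C++, untested, mine) would do the detection; a real tool would use a spatial hash instead of the double loop, and the correction is simply to split the offending edge at the detected vertex:

Code:

#include <cmath>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct TJunction { int vertex; int edgeA, edgeB; };  // vertex lies inside edge (edgeA, edgeB)

// Brute-force T-junction detection: report every vertex that lies in the
// interior of an edge it is not an endpoint of. Fixing the mesh then means
// splitting that edge at the vertex and re-triangulating the faces using it.
std::vector<TJunction> findTJunctions(const std::vector<Vec3>& verts,
                                      const std::vector<std::pair<int, int>>& edges,
                                      float eps = 1e-4f)
{
    std::vector<TJunction> out;
    for (int v = 0; v < static_cast<int>(verts.size()); ++v) {
        for (const auto& e : edges) {
            if (v == e.first || v == e.second) continue;
            const Vec3 ab = sub(verts[e.second], verts[e.first]);
            const Vec3 av = sub(verts[v], verts[e.first]);
            const float len2 = dot(ab, ab);
            if (len2 <= 0.0f) continue;
            const float t = dot(av, ab) / len2;            // parameter along the edge
            if (t <= eps || t >= 1.0f - eps) continue;      // skip the endpoints themselves
            const Vec3 onEdge = { verts[e.first].x + t * ab.x,
                                  verts[e.first].y + t * ab.y,
                                  verts[e.first].z + t * ab.z };
            const Vec3 d = sub(verts[v], onEdge);
            if (dot(d, d) < eps * eps)                      // vertex sits on the edge interior
                out.push_back({ v, e.first, e.second });
        }
    }
    return out;
}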
Pyry
Two Eyed Hopeful
Posts: 85
Joined: Mon Aug 13, 2012 5:55 pm

Re: on "rectilinear renderings" in game engines and what can

Post by Pyry »

Haha, sorry for harassing you about a minor point. Consider it practice for arguing with SIGGRAPH reviewers during the rebuttal.
JamesMccrae
One Eyed Hopeful
Posts: 12
Joined: Thu Aug 30, 2012 12:22 am

Re: on "rectilinear renderings" in game engines and what can

Post by JamesMccrae »

Pyry wrote:Haha, sorry for harassing you about a minor point. Consider it practice for arguing with SIGGRAPH reviewers during the rebuttal.
You presume I took the idea that far :)