I was planning something similar, with onscreen sliders. The more advanced literature out there suggests that real-world lenses do not completely match the formulas anyway, so real camera lenses usually have their distortion tables computed from photographs of a calibrated "grid and bullseye" target (or other static or animated targets), using software (or eyeballing it) to determine how much, and in which direction, things moved from their correct positions.

remosito wrote:
After reading the fresnel stack thread I have been wondering about that custom pre-warp.

geekmaster wrote:
I am experimenting with variations of simple "approximate" pre-warp algorithms that will work on small devices like the Raspberry Pi, where any incorrect distortion is hopefully in the outer extremes where it is not visible, or at least not distracting. Computer graphics has a long history of approximate solutions (and still does)...
Couldn't one use an interactive pre-warp design application:
You display a series of increasingly dense grids. Let's say you start with 2-3 grid points per axis; then each iteration you add another 1-3 per axis.
The perfect onscreen grid will look all wonky and warped after it passes through your lenses.
For each iteration you make each intersection point selectable and movable. The user can then move the grid points until he sees a perfect grid with his eyes. The offset corrections of the grid points would be the pre-warp. Or am I overlooking something?
The more iterations the user works through, the more accurate the resulting pre-warp. Instead of starting from a virgin grid, one could also provide a number of pre-warps and let the user choose the one that looks best. And at iteration x+1, the new grid points could be estimated/predicted from the pre-warp offsets of iterations 1 through x. Maybe even offer different predictions based on different models, which the user quickly rotates through to pick the best one.
All cutting down a lot on required user work.
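The grid-dragging idea above could be sketched roughly like this: each user-adjusted intersection stores a (dx, dy) correction, and bilinear interpolation expands the coarse grid into a per-pixel pre-warp map (the same interpolation could also seed the predicted positions of the new grid points at iteration x+1). The function name and the bilinear scheme here are my own illustrative assumptions, not anything from the thread:

```python
def bilinear_offsets(grid_offsets, width, height):
    """Expand a coarse grid of user-adjusted (dx, dy) control-point
    offsets into a per-pixel offset map via bilinear interpolation.

    grid_offsets: list of rows, each a list of (dx, dy) tuples
                  (assumes at least a 2x2 grid).
    Returns a height x width list of interpolated (dx, dy) tuples.
    """
    rows = len(grid_offsets)
    cols = len(grid_offsets[0])
    out = []
    for y in range(height):
        # Position of this pixel row in grid-cell coordinates.
        gy = y * (rows - 1) / (height - 1) if height > 1 else 0.0
        r0 = min(int(gy), rows - 2)
        fy = gy - r0
        row_out = []
        for x in range(width):
            gx = x * (cols - 1) / (width - 1) if width > 1 else 0.0
            c0 = min(int(gx), cols - 2)
            fx = gx - c0
            # Blend the four surrounding control-point offsets.
            (ax, ay), (bx, by) = grid_offsets[r0][c0], grid_offsets[r0][c0 + 1]
            (cx, cy), (dx, dy) = grid_offsets[r0 + 1][c0], grid_offsets[r0 + 1][c0 + 1]
            top = ((1 - fx) * ax + fx * bx, (1 - fx) * ay + fx * by)
            bot = ((1 - fx) * cx + fx * dx, (1 - fx) * cy + fx * dy)
            row_out.append(((1 - fy) * top[0] + fy * bot[0],
                            (1 - fy) * top[1] + fy * bot[1]))
        out.append(row_out)
    return out
```

The resulting map says, for each screen pixel, where to sample the undistorted image from, so applying it is just a per-pixel texture fetch, which is about as cheap as a pre-warp can get on small devices.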
And yes, tables are used (with interpolation) for the most accurate results, because no single lens distortion formula can accurately model the entire lens diameter. But the formulas are fine for less critical applications, like the Rift DK. Whereas for something more complex like stacked lenses, I believe that a user-interactive approach would be much more effective (with a variety of pre-adjusted profiles to select from, for those who do not want to use the sliders).
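The two approaches contrasted above, a single radial formula versus a measured table with interpolation, could be sketched like this. The polynomial coefficients and table values are made-up illustrations, not measured data for any real lens:

```python
def warp_formula(r, k1=0.22, k2=0.24):
    """Polynomial radial warp: a scale factor as a function of radius r
    (distance from the lens center, normalized to 0..1). One formula
    covers the whole lens, trading accuracy for simplicity."""
    return 1.0 + k1 * r**2 + k2 * r**4

def warp_table(r, table):
    """Look up the warp scale in a measured table of (radius, scale)
    pairs, linearly interpolating between the two nearest entries.
    Each table segment can match the real lens locally, which a single
    formula cannot do across the full diameter."""
    if r <= table[0][0]:
        return table[0][1]
    for (r0, s0), (r1, s1) in zip(table, table[1:]):
        if r <= r1:
            t = (r - r0) / (r1 - r0)
            return s0 + t * (s1 - s0)
    return table[-1][1]  # clamp beyond the last measured radius
```

For example, with a hypothetical three-entry table `[(0.0, 1.0), (0.5, 1.08), (1.0, 1.46)]`, a query at r = 0.25 lands halfway between the first two entries and interpolates to roughly 1.04. A real table would have many more entries, produced from photographs of the calibration target or from the interactive grid procedure above.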