 "PTZ Tweening" for low-power low-latency head-tracking 
Sharp Eyed Eagle!

Joined: Sat Dec 22, 2007 3:38 am
Posts: 425
geekmaster wrote:
Actually only animated films use motion blur. Real film cameras have rather short exposure times, and there is no afterimage from previous frames to mix in the new exposure on a whole new frame of film.

Even CRT monitors which DO have some persistence as the phosphor light emission decays, are still fast enough to support LCS (liquid crystal shutter) 3D glasses.

The persistence of vision (POV) occurs inside the eye, and it is independent of whether the content is from a movie or from a video game.

It's a bit late, but I'd just like to point out that this is definitely NOT true. The eye AND the capture/rendering process AND the display device ALL contribute to how motion blur is perceived, and the portion each contributes differs depending on what in the scene you're looking at!

Have a read of Charles Poynton's excellent paper on motion portrayal.

::EDIT:: And in a stunning display of total coincidence, here are some diagrams from Michael Abrash on the same subject, as a precursor to his GDC talk.


Wed Mar 27, 2013 6:29 am
Petrif-Eyed

Joined: Sat Sep 01, 2012 10:47 pm
Posts: 2708
EdZ wrote:
geekmaster wrote:
Actually only animated films use motion blur. Real film cameras have rather short exposure times, and there is no afterimage from previous frames to mix in the new exposure on a whole new frame of film.

Even CRT monitors which DO have some persistence as the phosphor light emission decays, are still fast enough to support LCS (liquid crystal shutter) 3D glasses.

The persistence of vision (POV) occurs inside the eye, and it is independent of whether the content is from a movie or from a video game.

It's a bit late, but I'd just like to point out that this is definitely NOT true. The eye AND the capture/rendering process AND the display device ALL contribute to how motion blur is perceived, and the portion each contributes differs depending on what in the scene you're looking at!

Have a read of Charles Poynton's excellent paper on motion portrayal.

::EDIT:: And in a stunning display of total coincidence, here are some diagrams from Michael Abrash on the same subject, as a precursor to his GDC talk.
What EXACTLY is "definitely NOT true"? I was talking about Persistence of VISION, not about where motion blur comes from. In fact, motion blur comes from motion, whether it is the head moving or objects moving in the FoV. The biggest effect comes from the time it takes for the cells in your retina to recover and fire again when they detect a change in light, which is much longer (in the central foveal region) than the response time of any of the display technologies being discussed here.

Are you talking about perceived blur when moving your head quickly while wearing the older 5.6-inch Rift prototypes that used a slow LCD with "high switching times"? If so, that is not what this thread is about.
http://www.oculusvr.com/blog/details-on ... oper-kits/

Typical motion pictures that pan the camera across a broad scene show noticeable flicker because the camera used a fast shutter with a very short exposure time per frame. Films produced in a studio are normally well lit to allow such short exposures. Of course, "film noir" is usually shot in a dark environment with longer exposures, and so exhibits more blur, but this is not the norm for typical content.
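To put rough numbers on that (an illustrative sketch, not from the original post; the 180-degree figure is the standard cinema convention):

```python
def exposure_time(shutter_angle_deg, fps):
    """Exposure per frame for a rotary film shutter: the fraction of
    the frame period during which the shutter is open."""
    return (shutter_angle_deg / 360.0) / fps

# Classic cinema look: 180-degree shutter at 24 fps -> 1/48 s,
# i.e. each frame is exposed for only about 20.8 ms.
print(exposure_time(180, 24))
```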

Analog video cameras typically DID cause motion blur, especially on very bright objects that could take many seconds to fade. That is why I mentioned "film cameras". Digital cameras usually use extremely short exposure times, for even LESS motion blur than film cameras. So, what I said about that was true.

And analog CRTs do support LCS (liquid crystal shutter) glasses, so their phosphor persistence is short enough to support that, just as I claimed.
http://en.wikipedia.org/wiki/Persistence_of_vision wrote:
Persistence of vision is the phenomenon of the eye by which an afterimage is thought to persist for approximately one twenty-fifth of a second on the retina.
So, it looks like every claim I made is true, and in no way conflicts with your offsite links.

So, how is ANYTHING I said above in any way "NOT true" as you claimed?

Both of your external references discuss the perception of bent objects from display scanning technology (rolling shutter, etc.). That is not "motion blur", but rather a variance between where pixels are and where your brain expects them to be while the display itself is moving (such as in an HMD). Motion blur and PoV are something else entirely.

_________________
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.


Wed Mar 27, 2013 9:36 am
Sharp Eyed Eagle!
geekmaster wrote:
Both of your external references discuss the perception of bent objects from display scanning technology (rolling shutter, etc.). That is not "motion blur", but rather a variance between where pixels are and where your brain expects them to be while the display itself is moving (such as in an HMD). Motion blur and PoV are something else entirely.
Please read the two links more closely. The first does start with the issues surrounding scanning of film to interlaced video (NOT the rolling shutter effect), but the rest of the document covers the issues of shutter/exposure time for film and TV cameras, and imaging time for display devices (impulse, PWM, or sample-and-hold). Abrash's post has nothing whatsoever to do with rolling shutters or film scanning.

Please, re-read the two links carefully. They address all your previous points. Perception of motion blur is MUCH more complex than just shutter speed and persistence of vision. Poynton's article mentions in the latter pages that artefacts from gaze-tracking of moving objects are not objectionable at small fields of view (SDTV viewing) but have more of an effect at larger (HDTV) fields of view. And large for HDTV is in the 45° area, less than half what the Rift displays!


Wed Mar 27, 2013 4:11 pm
Petrif-Eyed
I have read your externally-linked content in the past, and it is PART of my knowledge base used to form my opinions as stated above.

A lot of things affect Persistence of Vision, but the dominant things are the response times of the rods and cones in the retina of your eye, and the visual processing centers of your brain, which model the external world built up from very small FoV images acquired during saccadic motions of your eye, combined with your peripheral vision that guides such saccades.

Your nerve cells are far slower than camera shutter speeds or typical phosphor persistence, so the primary effects of Persistence of Vision are in your head, not in the external devices.

Because your anchor to the external world expects pixels to behave in a predictable way when moving your head and eyes, those pixels must move onscreen in an HMD quickly with low latency, or they will not match your internal model of what they represent in the virtual world.

What I said and what your external links say are not in disagreement. That is why I dispute your claims that I am WRONG...



Fri Mar 29, 2013 8:42 am
Sharp Eyed Eagle!
geekmaster wrote:
A lot of things affect Persistence of Vision, but the dominant things are the response times of the rods and cones in the retina of your eye, and the visual processing centers of your brain, which model the external world built up from very small FoV images acquired during saccadic motions of your eye, combined with your peripheral vision that guides such saccades.
Yes, that is the primary mechanism of persistence of vision. I am not disagreeing with that.
However, it is NOT the primary mechanism of perception of motion blurring. That, with current technology, is still overwhelmingly dominated by display technology (and either capture or rendering of image data). Let's take Abrash's space-time diagram:
[Diagram omitted: Abrash's space-time plot of a moving object on a sample-and-hold display.]
Due to the display characteristics of an LCD panel (i.e. sample-and-hold), when your eye tracks a moving object, what you see is an object blurring that should be sharp, or worse, an object that repeatedly jumps backwards when it should appear (when tracked by your eye) stationary! Whether the object slides about or merely blurs is dependent on the speed of the moving object, the refresh rate of the display, and the persistence of vision effect, with the latter being the smallest factor on current displays.
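A rough way to quantify that (my own back-of-envelope sketch, not EdZ's numbers): while the eye smoothly tracks a moving object, a frame that is held static smears across the retina by roughly speed times hold time.

```python
def hold_blur_px(speed_px_per_s, hold_time_s):
    """Approximate retinal smear, in pixels, when the eye smoothly
    tracks an object on a display that holds each frame static:
    the eye keeps moving during the hold, the pixels do not."""
    return speed_px_per_s * hold_time_s

# Object moving 600 px/s on a full-persistence 60 Hz panel:
print(hold_blur_px(600, 1 / 60))    # about 10 px of smear
# Same object on a 1 ms low-persistence (impulse-type) display:
print(hold_blur_px(600, 0.001))     # under 1 px
```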


Fri Mar 29, 2013 6:54 pm
Petrif-Eyed
EdZ wrote:
... Due to the display characteristics of an LCD panel (i.e. sample-and-hold), when your eye tracks a moving object, what you see is an object blurring that should be sharp, or worse, an object that repeatedly jumps backwards when it should appear (when tracked by your eye) stationary! Whether the object slides about or merely blurs is dependent on the speed of the moving object, the refresh rate of the display, and the persistence of vision effect, with the latter being the smallest factor on current displays.
Actually, in my Rift and the Unity Tuscany demo, I cannot see any of the problems Abrash was talking about, other than a little blurring due to pixels taking a while to fade out (switching time). I can also (barely) see a slight trail of ghosted edges on high-contrast vertical edges when rotating my head very fast, due to the frame update rate. 120Hz would add extra ghost images between those. Again, that is very slight and you have to look for it to even notice it. That is due to the image not being where expected during the interframe periods while moving your head. But the fact that I *CAN* see those ghosted images instead of a continuous blur shows that this LCD panel has EXCELLENT switching time. I am quite pleased with it...



Sun Mar 31, 2013 6:35 pm
Sharp Eyed Eagle!
Those 'ghosted images' you're seeing are exactly what Abrash is talking about. The 'blurring due to pixels taking a while to fade out' isn't. It's not slow pixel switching that you're seeing; it's your eyes tracking what your brain expects to be a smoothly moving object but is instead a set of discrete images.
RoadtoVR has a transcript of Abrash's talk up.


Mon Apr 01, 2013 9:46 am
Petrif-Eyed
My HeadPlay HMD is much worse because it uses a single monochrome LCoS display, and alternating RGB lighting to display color cycling. When I turn my head (or even move my eyes to follow onscreen motion), I see a trail of alternating RGB images. And even worse, in stereoscopic mode it alternates between left and right eye so I see these colored trails alternating between eyes so I cannot stereoscopically merge the image trails. Just watching a movie in the HeadPlay HMD makes me a bit queasy even with no camera panning and no head movement, because of the color trails during normal eye movements.

The Rift LCD panel is vastly superior to the HeadPlay LCoS tech, IMHO...

Regarding the Abrash info, I do not see any bending or stuttering as he described (as I understand it). I did see what appeared to be a "pencil illusion" effect in the Oculus "Tiny Room" demo when running on my onboard HD4000 HDMI video. Not a problem when running on a better Nvidia card, though.



Last edited by geekmaster on Mon Apr 08, 2013 12:34 pm, edited 1 time in total.



Mon Apr 01, 2013 10:33 am
One Eyed Hopeful

Joined: Wed Mar 27, 2013 2:23 pm
Posts: 35
As a slight deviation back to the thread topic:

Background:
I believe almost all graphics engines for the past few generations use some pretty complex occlusion-culling algorithms specifically to avoid spending time drawing what can't be seen in a given frame (e.g. if a person walks between you and another character, the engine isn't going to spend time drawing the character which is now obscured).

Theoretical concept:
What if, depending on distance from the viewer, the engine DID process and render some amount of each hidden object? If you are familiar with Photoshop, this might be similar to making a marquee selection of an object and using "contract" to bring those edges several pixels inside the visible edges of the object. After the video card is done rendering a frame, based on a certain number of depth slices, those slices are then sent (as separate layers) to a second video card / device to handle your tweening (translation/rotation) calculations and then the Oculus prewarping, full-screen effects, and UI elements.

Theoretical second video card:
This would be used in a much different way than current "SLI" type rigs work now. This card would essentially be spitting out a fixed resolution and frame rate (60/90/120) all the time. Into those frames it would be inserting the most up-to-date info it had from the "worker" video card, as well as static-position UI elements and prewarp / other full-screen shaders. This card would be using the latest tracking info from the sensor, which could be ~16ms or more up to date than the worker card's, to do the translations and warping of the various layers in order to give a more updated view (à la Carmack's "Latency Mitigation Strategies").
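The proposed schedule can be sketched as a toy timeline (all rates hypothetical; this is a model of the idea, not driver code): the worker card renders at 60 Hz while the output card refreshes at 120 Hz, each refresh re-warping the newest completed frame.

```python
def output_schedule(worker_hz=60, output_hz=120, n_outputs=8):
    """For each output refresh, report (time in microseconds, index of
    the newest worker frame available to warp).  Integer microsecond
    ticks keep the toy model free of float drift."""
    worker_us = 1_000_000 // worker_hz   # ~16666 us per rendered frame
    output_us = 1_000_000 // output_hz   # ~8333 us per displayed frame
    return [(i * output_us, (i * output_us) // worker_us)
            for i in range(n_outputs)]

# Each rendered frame is displayed twice; the second showing would be
# re-warped with ~8 ms fresher head-tracking data.
print([frame for _, frame in output_schedule()])   # [0, 0, 1, 1, 2, 2, 3, 3]
```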


Thu Apr 04, 2013 12:09 pm
One Eyed Hopeful
Hasn’t been anything new in this thread from the OP for a week, so I’ll add some more thoughts (at least until Geekmaster asks me to shut up :-D )

External vs internal device

I think this all comes down to bandwidth and latency. While it is not impossible to establish fairly high-bandwidth external connections (USB 3.0, eSATA), those are generally directed/optimized towards data throughput, with latency (of the order we are talking about) as an afterthought. They also use their own protocols and have specific interface chips, which are another layer to traverse that might have its own quirks. It seems that, at least INITIALLY, you might be picking fights you don't need to by trying to go external, as opposed to talking over the internal PCI-E bus that was built with graphics in mind.

Warping – moving the discussion closer to implementation

The IDEA of warping has certainly been established, and though I had never considered it before reading Carmack’s latency ideas, it does seem to make a lot of sense, at least for the next few generations of hardware.
I specifically want to bring up in this thread the method of deciding what gets warped and by how much. Regardless of whether you use spherical projections, or "sliced depth layers", or whatever, when you are trying to manipulate 2D images against a 3D matrix, you have to know the 3D position of each pixel relative to the camera, even if not to quite the same precision as when it was first rendered.

Storing depth information in a 2D image means using a depth buffer (zbuf), either as an additional image or in an image format that supports more than just R, G and B values per pixel. Also, if you're using a single zbuf range over a whole image, which could contain values from inches away to potentially several miles, you're probably going to need LOTS-O-BITS. Though to be fair, the distribution would hardly need to be linear. Basic testing would probably be able to work out how much precision is needed at various distances, with near objects almost certainly needing much more than far objects.
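The "LOTS-O-BITS" point, and the observation that the distribution need not be linear, can be checked against the standard hyperbolic (1/z) depth buffer (an illustrative sketch; the near/far planes below are made-up numbers):

```python
def depth_resolution(z, near, far, bits=24):
    """Smallest depth difference (world units) resolvable at eye depth
    z for a hyperbolic depth buffer mapping [near, far] to [0, 1] via
    d = far/(far-near) * (1 - near/z).  One code step in d corresponds
    to roughly dz = (far-near) * z^2 / (far * near * 2^bits), so
    precision is dense up close and sparse far away -- exactly the
    nonlinear distribution suggested above."""
    return (far - near) * z * z / (far * near * (1 << bits))

near, far = 0.1, 5000.0   # meters: "inches away" to a few miles
for z in (0.5, 10.0, 1000.0):
    print(z, depth_resolution(z, near, far))
# At 0.5 m the step is sub-micron; at 1 km it is over half a meter.
```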

And given that the whole point of this is stereo vision, everything is, of course, multiplied by 2. At least that’s been the case so far. In a system such as the one we are discussing, it may very well be possible that things rendered more than X distance away need ONLY be rendered ONCE and then be copied and translated for use in the other eye. That might be the source of some speed increase, as long as the calculations required to determine when it should be used are less costly than just doing the rendering in the first place.


The idea is currently in use
Something else to point out: companies are already doing something along these lines with 2D-to-3D movie conversions. Of course, they are working from a fully 2D source, and then have to try to calculate depth values and/or rotoscope objects onto different planes, as well as regenerating parts of the image no longer occluded. Generating the depth information in real time from the engine obviously gives the implementation discussed here a massive leg up over the methods used for movie conversions. But nonetheless, taking a 2D image and changing it to simulate 3D is already happening, and is being done by industry, not just academia.


Mon Apr 08, 2013 12:07 pm
Petrif-Eyed
t0pquark wrote:
What if, depending on distance from the viewer, the engine DID process and render some amount of each hidden object.
The back-to-front overdraw approach you describe is related to the classic "Painter's Algorithm", a popular graphics technique when simplicity is more important than speed, especially on older or smaller hardware where computational complexity is more expensive than memory access.

Occlusion culling (i.e. frustum culling and bounding-box culling) is also popular, especially with modern GPU methods, so pixels do not have to be overwritten multiple times. Modern hardware is MUCH faster doing extra math than it is accessing memory multiple times.
t0pquark wrote:
Theoretical second video card:
This would be used in a much different way than current "SLI" type rigs work now. This card would essentially be spitting out a fixed resolution and frame rate (60/90/120) all the time. Into those frames it would be inserting the most up-to-date info it had from the "worker" video card, as well as static-position UI elements and prewarp / other full-screen shaders. This card would be using the latest tracking info from the sensor, which could be ~16ms or more up to date than the worker card's, to do the translations and warping of the various layers in order to give a more updated view (à la Carmack's "Latency Mitigation Strategies").
SLI already interleaves the cards. But you said this is different? How?

A single card can already use the latest tracker data PER SCAN LINE, if it supports scanline position reading. The video card driver also needs to support it. If not, software can TIME where the scan is, based on time since VSYNC. You need double-buffering off, and you are in essence HACKING the normally undesirable tearing (shearing) effect, but so that a moving HMD sees each scanline closest to where it is expected. No second video card needed. I believe that either John Carmack or Michael Abrash discussed this in a recent "latency mitigation" blog post.
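The software fallback mentioned above, timing the scan position from VSYNC, can be sketched like so (an illustrative model assuming the standard 1080p timing of 1125 total lines with 1080 visible; the function is my own, not from a driver API):

```python
def current_scanline(t_since_vsync_s, refresh_hz=60,
                     total_lines=1125, visible_lines=1080):
    """Estimate which scanline the raster is currently drawing, from
    elapsed time since the last VSYNC.  total_lines includes the
    vertical blanking interval.  Returns None while in blanking."""
    frame_time = 1.0 / refresh_hz
    phase = (t_since_vsync_s % frame_time) / frame_time
    line = int(phase * total_lines)
    return line if line < visible_lines else None

# Halfway through the refresh the beam is near mid-screen; tracker
# data read now could steer the warp for lines not yet scanned out.
print(current_scanline((1 / 60) * 0.5))
```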



Mon Apr 08, 2013 12:46 pm
Petrif-Eyed
t0pquark wrote:
... Storing depth information in a 2D image means using a depth buffer (zbuf), either as an additional image or in an image format that supports more than just R, G and B values per pixel. Also, if you're using a single zbuf range over a whole image, which could contain values from inches away to potentially several miles, you're probably going to need LOTS-O-BITS. Though to be fair, the distribution would hardly need to be linear. Basic testing would probably be able to work out how much precision is needed at various distances, with near objects almost certainly needing much more than far objects.
Nonlinear depth is fine as long as you do not get close enough to distant objects to see the distortion of position. One common optimization is to paint distant objects onto the skybox, so you do not even need independent depth data.
t0pquark wrote:
And given that the whole point of this is stereo vision, everything is, of course, multiplied by 2. At least that’s been the case so far. In a system such as the one we are discussing, it may very well be possible that things rendered more than X distance away need ONLY be rendered ONCE and then be copied and translated for use in the other eye. That might be the source of some speed increase, as long as the calculations required to determine when it should be used are less costly than just doing the rendering in the first place.
When replacing a pair of stereoscopic images with one combined (joint-stereo) image plus zbuffer data, you need to warp that image to show otherwise hidden pixels. You do not want to discard pixels from either image. Then you discard those extra pixels when you unwarp them for each eye view, based on z-buffer data.
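A minimal sketch of that reprojection idea on a single scan row (all names and constants hypothetical; a real warp must also resolve overlaps by depth and fill the holes from the wider combined image):

```python
def reproject_eye(row, depth, disparity_const, sign):
    """Synthesize one eye's scan row from a center-rendered row plus
    per-pixel depth.  Each pixel shifts horizontally by its stereo
    disparity (larger for nearer pixels); disoccluded positions stay
    None -- these are the 'otherwise hidden pixels' the combined
    image must supply.

    disparity_const -- hypothetical lumped constant: half the
    interpupillary distance times the focal length in pixels.
    sign -- +1 for one eye, -1 for the other.
    """
    out = [None] * len(row)
    for x, (px, z) in enumerate(zip(row, depth)):
        tx = x + int(round(sign * disparity_const / z))
        if 0 <= tx < len(row):
            out[tx] = px   # note: ignores overlap ordering for brevity
    return out

# Uniform depth degenerates to a pure shift, one eye each way:
print(reproject_eye([1, 2, 3, 4], [2.0] * 4, 2.0, +1))   # [None, 1, 2, 3]
print(reproject_eye([1, 2, 3, 4], [2.0] * 4, 2.0, -1))   # [2, 3, 4, None]
```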
t0pquark wrote:
The idea is currently in use
Something else to point out: companies are already doing something along these lines with 2D to 3D movie conversions. Of course, they are working from a fully 2d source, and then have to try and calculate depth values and/or rotoscope objects onto different planes, as well as regenerating parts for the image no longer occluded. Generating the depth information in real time from the engine obviously gives the implementation discussed here a massive leg up over the methods used for movie conversions. But none the less, taking a 2D image and changing it to simulate 3D is already happening, and is being done by industry, not just academia.
My 3D TV has automatic synthetic 3D conversion, and it actually does a surprisingly good job, in realtime. However, it makes some hair styles look like they stick out six inches farther than the person's nose... There will always be visible flaws in automatic 3D conversions, requiring humans in the loop to fix the mistakes. Until 2045, when computers become smarter than humans (i.e. the "Technological Singularity").



Mon Apr 08, 2013 1:04 pm
One Eyed Hopeful
My understanding is that SLI interleaves frames from card 1 and card 2, and you see 30 frames like this: 121212121212121212121212121212

But the point of this thread is that you want to be able to make changes to those frames BETWEEN the time when one frame has finished and the next one is ready, right? While we are waiting for the next true frame, warp the current one just enough, using the latest HMD positions, that the whole screen (not just a scan line) is changed and updated to what the person thinks they should currently be seeing.

In my example, your display is always being driven by card 2, and if card 1 is locked at 60 fps, card 2 would be locked at 120 fps (for example).

So now you would see this: (pardon the orientation switch)
Frame 1 copied from card 1, with slight change
Frame 1 copied from card 1, with greater change
Frame 2 copied from card 1, with slight change
Frame 2 copied from card 1, with greater change
Frame 3 copied from card 1, with slight change
Frame 3 copied from card 1, with greater change
Etc.

My intention was to describe something not TOO much different from yours (in principle) – a way to decouple and interpolate some of the basic 3D movement from the intensive rendering. You are talking about moving around the skybox, and I'm proposing essentially "stacked skyboxes" or "stacked cutouts" at different depth points. But in both cases, it's about manipulating a 2D image to show 3D change while waiting for the next real frame.


Mon Apr 08, 2013 1:23 pm
Petrif-Eyed
t0pquark wrote:
... But the point of this thread is that you want to be able to make changes to those frames BETWEEN the time when one frame has finished and the next one is ready, right?
You could "tween" between scanlines. You do not need a second video card for that. The point of this thread was to support head tracking, with the image motion corresponding to head rotation with low latency, EVEN WHEN the game content cannot be updated per frame (but still fast enough to appear as continuous movement of game objects). Specifically, this thread was meant to support low-power processing like Ouya (or maybe even Raspberry Pi). For example, Quake3 on my RasPi only runs at about 20FPS. With PTZ tweening we could make the video frames a little larger (perhaps 1080p), so head tracking can move around in that larger frame at 60Hz, while waiting for the next 20FPS game "frame".

Most video cards let you set the frame start address in video RAM "for free", so you could pan around in a larger UNWARPED image. The problem with doing that in the Rift is that pre-warp needs to be done WHILE panning around in that larger unwarped frame. I am exploring various low-overhead pre-warp methods that "cheat" a little while still looking good in the Rift, specifically for RasPi support of 60Hz "PTZ Tweening", even with a 20FPS Quake3 limitation.
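The pan half of PTZ tweening can be sketched as picking a viewport origin inside the larger rendered frame (a linear pixels-per-degree cheat under a small-rotation assumption; all resolutions and FoV numbers below are hypothetical):

```python
def pan_offset(yaw_deg, pitch_deg, frame_w, frame_h,
               view_w, view_h, fov_h_deg, fov_v_deg):
    """Top-left corner of the visible viewport within a larger
    rendered frame, given head rotation since the frame was rendered.
    Assumes an approximately linear pixels-per-degree mapping, which
    is what makes this nearly free on hardware that can change the
    frame start address."""
    x = (frame_w - view_w) / 2 + yaw_deg * frame_w / fov_h_deg
    y = (frame_h - view_h) / 2 - pitch_deg * frame_h / fov_v_deg
    # Clamp so the viewport never leaves the rendered frame.
    return (max(0, min(frame_w - view_w, int(x))),
            max(0, min(frame_h - view_h, int(y))))

# 1920x1080 render, 1280x800 view: centered when the head is still,
# panned right ~87 px for 5 degrees of yaw, clamped at the frame edge.
print(pan_offset(0, 0, 1920, 1080, 1280, 800, 110, 90))   # (320, 140)
print(pan_offset(5, 0, 1920, 1080, 1280, 800, 110, 90))   # (407, 140)
```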

In my book, "just good enough" is much better than "not at all". Even a simplistic wire-frame world can be immersive, if content is convincing in both set and setting.
t0pquark wrote:
... My intention was to describe something not TOO much different from you (in principle) – a way to decouple and interpolate some of the basic 3d movement from the intensive rendering. You are talking about moving around the skybox, and I’m proposing essentially “stacked skyboxes” or “stacked cutouts” at different depth points. But in both cases, it's about manipulating a 2d image to show 3d change while waiting for the next real frame.
What you describe is an extension of what I discussed, and is essentially emulating the traditional multi-layer cel animation employed by early animators, such as Walt Disney himself. That could certainly add to the 3D parallax effect beyond using a simple single-layered skybox. There are also some multi-layer side-scroller games that use this technique:
http://www.netmagazine.com/tutorials/bu ... -framework EDIT: Dead link. Here is cached copy:
https://web.archive.org/web/20120708050 ... -framework
Thanks for the great suggestion! I plan to use it (but without the extra video card)... :D



Last edited by geekmaster on Fri Jun 12, 2015 10:03 pm, edited 1 time in total.



Mon Apr 08, 2013 1:28 pm
Petrif-Eyed
I just stumbled across something that proves that my PTZ Tweening "invention" is nothing new. Somebody else published a research paper about this same method:
Latency compensation by horizontal scanline selection for head-mounted displays

A fundamental task of a virtual-environment system is to present images that change appropriately as the user's head moves. Latency produces registration error causing the scene to appear spatially unstable. To improve the spatial stability of the scene, we built a system that, immediately before scanout to a head-mounted raster display, selects a portion of each scanline from an image rendered at a wider display width. The pixel selection corrects for yaw head rotations and effectively reduces latency for yaw to the millisecond range. In informal evaluations, users consistently judged visual scenes more stable and reported no additional visual artifacts with horizontal scanline selection than the same system without. Scanline-selection hardware can be added to existing virtual-reality systems as an external device between the graphics card and the raster display.
And here is another one that is also using the method that I called "PTZ Tweening":
Reflex HMD to compensate lag and correction of derivative deformation

A head-mounted display (HMD) system suffers largely from the time lag between human motion and the display output. The concept of a reflex HMD to compensate for the time lag is proposed and discussed. Based on this notion, a prototype reflex HMD is constructed. The rotational movement of the user's head is measured by a gyroscope, modulating the driving signal for the LCD panel, and this shifts the viewport within the image supplied from the computer. The derivative distortion was investigated, and the dynamic deformation of the watched world was picked up as the essential demerit. Cylindrical rendering is introduced to solve this problem and is proved to cancel this dynamic deformation, and also to decrease the static distortion.
This just goes to prove that if you cannot find your new invention already in use (perhaps for a very long time), you are searching for the wrong terminology. It seems that every time a method is reinvented, it is given a new name and is described using different keywords.



Sat Jun 01, 2013 2:58 am
Petrif-Eyed
PTZ Tweening, as described above in this thread (and other threads), is essentially the same as the descriptions posted for Asynchronous Time Warp that John Carmack said is critical to the success of GearVR.
... Timewarp is a technique that warps the rendered image before sending it to the display in order to correct for head motion that occurred after the scene was rendered and thereby reduce the perceived latency. The basic version of this is orientation-only timewarp, in which only the rotational change in the head pose is corrected for; this has the considerable advantage of being a 2D warp, so it does not cost much performance when combined with the distortion pass. For reasonably complex scenes, this can be done with much less computation than rendering a whole new frame. ...
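A yaw-only sketch of such a 2D warp (my own illustration of the idea, assuming an undistorted pinhole projection; the real pass also folds in the lens-distortion correction and handles all three rotation axes as a 3x3 homography):

```python
import math

def timewarp_yaw(x, y, w, h, fov_h_deg, yaw_deg):
    """Orientation-only timewarp restricted to yaw: for the pixel to
    be displayed at (x, y), return where to sample the already
    rendered frame.  Un-projects the pixel to a view ray, rotates it
    by the head rotation accumulated since render time, re-projects."""
    f = (w / 2) / math.tan(math.radians(fov_h_deg) / 2)  # focal length, px
    rx, rz = (x - w / 2) / f, 1.0        # pixel -> view-space ray
    a = math.radians(yaw_deg)
    rx2 = rx * math.cos(a) - rz * math.sin(a)
    rz2 = rx * math.sin(a) + rz * math.cos(a)
    return w / 2 + f * rx2 / rz2, y      # ray -> source pixel

# Zero rotation is the identity; a few degrees of yaw samples the
# rendered frame off-center, shifting the whole image as expected.
print(timewarp_yaw(640, 400, 1280, 800, 90, 5.0))
```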



Fri Jun 12, 2015 8:01 pm
Petrif-Eyed
Oculus patented (filed 5/2014) the ideas I presented in my "PTZ Tweening" thread (published in 2/2013 with excerpts from early 2012 posts).

Perception based predictive tracking for head mounted displays - Oculus VR, LLC
http://www.freepatentsonline.com/9063330.html

"PTZ Tweening" for low-power low-latency head-tracking
http://www.mtbs3d.com/phpbb/viewtopic.php?f=138&t=16543

Mike Abrash published something similar about 6 weeks after I did:
http://blogs.valvesoftware.com/abrash/g ... -diagrams/

Mike is not mentioned in that 2015 patent either.

John Carmack said this was the key to the GearVR, which would be worthless without it.

But Mike and John are now Oculus employees.

Somebody told me: "If [Oculus] went after someone else you could be a good Samaritan and point to your prior art, and THEY could spend a bunch of money to challenge the patent. At the end of the day – none of this makes you any money. Yup – get in bed with lawyers, abandon all self-respect, and screw everyone that does not have the money to fight back. We need 'loser pays' to put an end to this."

The guy who made millions in license fees on the Y2K sliding-window algorithm patented the exact same algorithm (to the last detail) that I invented and put into about 100 programs at the Minneapolis Star Tribune decades earlier (we had single-digit dates to fit punch cards, and 1980 was approaching). I have seen other ideas I shared over the years get patented much later too -- I always thought software patents were evil, and I still do...

It is also interesting that most high-end consumer HMDs seem to be using Fresnel lenses now, after I suggested in my Fresnel Lens Stack thread that they were not such a good idea:
http://www.mtbs3d.com/phpbb/viewtopic.php?f=138&t=16373

And BTW, my latest iteration uses single lenses that work much better than the stacks in that thread (which I used only for increased magnification from dollar-store lenses). The key (as I mentioned) was and is lens offset. I have lenses that give me more than a 180-degree FoV on my Galaxy Note 4 display.



Tue Aug 16, 2016 9:05 am
Powered by phpBB® Forum Software © phpBB Group