Here is another data plot from the geekmaster hydra zero-latency filter:
hydra-hyperpred2.png
To make this curve I jerked my hydra quickly toward and away from me. To a distant observer, this "jerking the hydra" motion may well appear obscene.
The red samples are discarded hydra data packets from delayed phases that are of no interest. In fact, those clustered data points tend to travel PERPENDICULAR to the actual direction of travel, so they can derail motion-prediction attempts and distort any filter results. The 20msec delayed outlier distorts any "all points" filter even further, adding still more latency.
The pink samples are the data points from the leading edge that I use as input to my "hyper prediction" filter. The output of my prediction filter (taken from a test point preceding my "oppressive filter" step) is the set of blue points. The yellow points are the average of the most recent four predictions, there only as a reference to show the trend of the prediction curve. You can see that the blue prediction point cloud precedes the motion reported by the hydra data leading edge by a wide margin. This is true even for seemingly random jerks while swinging my arms forward and back. Human muscles tend to filter real physical motion into a low-order sum of sine waves, which is probably why my motion prediction works so well.
The blue points are predicted by snap vectors, where snap = delta jerk, jerk = delta acceleration, acceleration = delta velocity, and velocity = the difference between consecutive input sample points (pink on this plot). I actually multiply my snap vector prediction velocity by 32 before plotting it, which is what gives such a large blue point spread, but it also greatly enhances early "detection" of trends in the "sum of sine waves" motion profile curve. Note that 32x is about a half second into the future (based only on the curvature of the most recent small handful of samples).
Then I feed that blue point cloud into my "oppressive filter", which would add HUGE latency for "normal" data. I use "new point = (input point + previous output point * 63) / 64". Note that 64 samples is a full second here:
hydra-filt-hyperpred.png
Notice how BADLY the red hydra motion sample point cloud is spread out and delayed when I repetitively shake my hydra as fast as I can! A quarter cycle or more of delay on the trailing edge of the point cloud. And yet, look at my filter output even at that rate of herky-jerky change! Surprisingly good, and at 250Hz too. Brutal coolness, eh?
Now, you would think that such wild prediction followed by such heavy filtering would be useless, but remember that I am only using it to predict virtual points until we receive the next "good" data point from a "leading phase" packet.
Note that I shook my hydra quickly with my wrist while swinging it rapidly forward and back with my arm. The points are sampled at 250Hz, using the feature report data commonly used in projects like VRPN (and others). The red points are discarded hydra data from the three "late phases". The yellow points are the leading phase of input data that I feed to my "snap vector hyper-prediction" filter. The blue points are the output of my "oppressively filtered" "wild predictions".
Notice that the blue points closely follow the yellow points (actually using them when available). Anyway, 250Hz virtually zero-latency head tracking using a hydra isn't that bad, and I plan to use this to make my Rift-only tracking code perform as close to this as I can. I know I can do it. It is only a matter of time (a precious commodity)...
Oppressively filtered hyper-prediction sounds wacky, but it actually works, and works well IMHO. That heavy filter makes it much less noise-sensitive, and the hyper-prediction prevents the prediction-point clustering I was seeing when using only 1x prediction instead of 32x prediction. Even 16x did not spread the prediction points as evenly as 32x, and 32x yielded great results, so I did not want to go any farther into the future on my predictions just to stomp them back to the present with a heavy filter.
And another sample, showing that my oppressively-filtered hyper-prediction points (blue) can actually precede the leading phase of the hydra sample data (yellow).
Negative latency?
hydra-rev.png
Impressive results, eh? I think so...
Opinions? Comments? Am I wasting my time doing this kind of research and development? Should I be working with Unity 3D instead?
Now, what deep magic can I extract from analyzing my Rift tracker data?
EDIT: I am reading raw USB HID packets for my Rift Tracker DK and for my Razer Hydra using the signal11 hidapi library:
http://www.signal11.us/oss/hidapi/
https://github.com/signal11/hidapi