Building a better input latency tool

BillRoeske
Cross Eyed!
Posts: 102
Joined: Fri May 18, 2012 5:31 pm
Location: Houston, TX


Post by BillRoeske »

It occurred to me while working on PS3EyeCapture that camera-based methods work, but analyzing the footage and compiling the results is cumbersome. Also, the equipment gets expensive quickly if you want higher-resolution data.

A better approach might be to pipe the output of two tiny light sensors to the two channels of a stereo microphone plug. One light sensor would stick to and face the corner of a display, and the other would attach to the LED for the Caps Lock key (or something similar). Assuming that the light sensors output a higher signal when absorbing more light, the test set-up would be a program that displays black until a keystroke is received, at which point it would switch to displaying white.
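
To make that concrete, here's a rough sketch of what the test program could look like. This is just my own illustration (it assumes Python and pygame, neither of which is a requirement); any framework that can flip a fullscreen window from black to white on a keypress would do:

```python
# Sketch of the stimulus program: show black until a key is pressed,
# then show white. Assumes pygame purely for illustration.
import pygame

def main():
    pygame.init()
    screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)
    black, white = (0, 0, 0), (255, 255, 255)
    color = black

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN:
                if event.key == pygame.K_ESCAPE:
                    running = False
                else:
                    # Any other key: switch to white as quickly as possible.
                    color = white
        screen.fill(color)
        pygame.display.flip()

    pygame.quit()

if __name__ == "__main__":
    main()
```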

The results could be captured at a very high frequency (44 kHz, versus around 200 Hz for high-speed video) with just about any PC sound card. It would be pretty easy to automate data analysis by scanning the recorded waveform for the time difference between the keystroke spike on one channel and the display spike on the other. Even given a relatively noisy signal, the spikes should stand out.
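
For the analysis side, here's roughly what I have in mind. The channel assignment (left = Caps Lock LED sensor, right = display sensor) and the simple fraction-of-peak threshold are just assumptions for illustration; with reasonably clean spikes, something this basic should be enough to find both edges:

```python
# Hypothetical analysis sketch: read a stereo recording, find where each
# channel first jumps above a threshold, and report the delay between them.
import numpy as np
from scipy.io import wavfile

def first_crossing(channel, threshold_fraction=0.5):
    """Index of the first sample exceeding a fraction of the channel's peak."""
    channel = np.abs(channel - np.median(channel))  # remove DC offset, rectify
    threshold = threshold_fraction * channel.max()
    indices = np.nonzero(channel > threshold)[0]
    return indices[0] if indices.size else None

def measure_latency_ms(path):
    rate, data = wavfile.read(path)          # stereo data has shape (samples, 2)
    keystroke = first_crossing(data[:, 0])   # assumed: Caps Lock LED sensor
    display = first_crossing(data[:, 1])     # assumed: display-corner sensor
    if keystroke is None or display is None:
        raise ValueError("No spike found on one of the channels")
    return (display - keystroke) / rate * 1000.0

if __name__ == "__main__":
    print(f"Latency: {measure_latency_ms('capture.wav'):.2f} ms")
```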

Here's where the disclaimer comes in, though: while I'm pretty good with software, my hardware knowledge is sadly lacking. For instance, I'm pretty sure that just taking the leads from two electronic photoreceptors and attaching them to a headphone plug with some solder and a bit of wire wouldn't quite work. I'm more than willing to put the work in (and eager to learn!), but I need some help on shaking out the hardware design. Also, please poke holes in my theory. :)

I'd love to see this culminate in an open design that is inexpensive and easy for an interested person to build. Any software I write for this will be open-source, too. Long-term, I'd love to have an Android device capturing input and doing on-the-fly session analysis and reporting.
