Understand the Speed of Light of Game Input Latency
Posted: Wed Feb 27, 2013 6:46 pm
by Timothy Lottes
Link : http://timothylottes.blogspot.fr/2013/0 ... input.html
Very interesting article about latency, published last month; I discovered it thanks to a tweet from John Carmack.
Excerpts :
Dissecting PC DX WDDM Graphics Stack Latency
Two to three frames of latency is typical of the PC driver stack. For this reason, many serious gamers disable v-sync, accept tearing, and attempt to maximize frame rate --- all in the name of reducing input latency.
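For a sense of scale (the 60 Hz refresh rate here is an assumption for illustration, not a figure from the article), two to three frames of driver-stack buffering works out to:

```python
# Convert frames of driver-stack buffering into milliseconds of latency.
# Assumes a 60 Hz display (~16.7 ms per frame) purely for illustration.
REFRESH_HZ = 60
frame_ms = 1000.0 / REFRESH_HZ

for frames in (2, 3):
    print(f"{frames} frames of buffering = {frames * frame_ms:.1f} ms")
```

At 60 Hz that is 33.3 to 50.0 ms before the mouse, render, scanout, and display costs are even counted, which is why gamers chasing latency disable v-sync.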
OpenGL and Latency
For OpenGL on Linux at least (and maybe even on Windows for some vendors' drivers), the route from CPU to GPU is much more direct (there is no WDDM layer on Linux). Also, in OpenGL one can use glFlush() to force the CPU to kick off commands to the GPU instead of waiting for the push-buffer segment to fill up.
Quick Estimates of PC Latency
I've removed the driver buffering problem by having my CPU input thread write directly into a pinned memory buffer; the GPU reads the input from pinned memory directly, then computes the view matrix and stores it to the constant buffer right before the GPU renders the view-dependent part of the frame.
Continuing with a high-end GPU with a gamer mouse and good PC display: 1ms mouse + 6ms drawing + 2ms v-sync window + 8.3ms scanout + 5ms display = 22.3ms. The PC remains king in terms of latency.
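The per-stage budget can be tallied explicitly (the quoted stages sum to 22.3 ms):

```python
# Tally the quoted end-to-end latency budget for a high-end PC setup.
budget_ms = {
    "mouse": 1.0,
    "drawing": 6.0,
    "v-sync window": 2.0,
    "scanout": 8.3,
    "display": 5.0,
}
total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:>14}: {ms:4.1f} ms")
print(f"{'total':>14}: {total:4.1f} ms")  # 22.3 ms
```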
Understanding Tiler GPUs and Latency with Low Frame Rates
Mixing low frame rates and a tiler GPU is a latency nightmare. Add this latency to the latency of the tablet's touch interface and now you understand why touch-based twitch games won't work on iOS mobile...
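As a rough illustration of why this combination hurts (the frame counts below are my own assumptions, not figures from the article): a tile-based GPU buffers geometry for a frame before rasterizing, so every pipelined frame costs a whole frame time, and at low frame rates a frame time is long:

```python
# Rough illustration: buffered frames get expensive at low frame rates.
# The frame counts are illustrative assumptions, not numbers from the article.
def pipeline_latency_ms(fps, pipelined_frames):
    """Latency contributed by buffered frames at a given frame rate."""
    return pipelined_frames * 1000.0 / fps

print(f"{pipeline_latency_ms(60, 1):.1f} ms")  # one buffered frame at 60 fps
print(f"{pipeline_latency_ms(30, 2):.1f} ms")  # two buffered frames at 30 fps
```

The same number of buffered frames costs four times the milliseconds at 30 fps with two frames in flight versus 60 fps with one, and that is before touch-input latency is added on top.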
And yes, he ordered a Rift : http://timothylottes.blogspot.fr/2012/0 ... v-kit.html