Input latency compensation on various OSes

Win32

https://docs.microsoft.com/en-us/windows/desktop/api/winuser/nf-winuser-getmessagetime

https://docs.microsoft.com/en-us/windows/desktop/inputdev/raw-input

https://stackoverflow.com/questions/23399511/high-resolution-getmessagetime

"Most events in Windows occur at a rate that's determined by the clock interrupt rate. Which by default ticks 64 times per second, once every 15.625 milliseconds."

GetMessageTime() therefore never exceeds a resolution of 15.625 milliseconds, but QueryPerformanceCounter() can.

Darwin

https://developer.apple.com/documentation/kernel/iohidelement/1426839-gettimestamp?language=objc

Only available in kernel space; there is no userspace interface or Cocoa API.

https://opensource.apple.com/source/IOKitUser/IOKitUser-907.90.2/hid.subproj/

AppKit

Applications relying on AppKit receive user input as NSEvent objects delivered to NSViews, which in turn forward those events from the NSWindow they belong to.

NSEvent has a timestamp property, a double expressed as the number of seconds since system start-up.

https://developer.apple.com/documentation/appkit/nsevent/1528239-timestamp

Linux

Kernel

The standard, modern[^1] way of interacting with input devices on Linux is through the evdev system, accessed in standard UNIX fashion as reads/writes/ioctls to device nodes typically located in /dev/.

The exact path of each device depends on which device manager is used and how it is configured; for instance, a typical udev (either systemd's udevd or eudev) will place most of these devices in /dev/input/, and your mileage may vary with Busybox mdev or Android init.

Reading from the device nodes yields a blocking stream of data representing input events. The exact payload varies with each kind of input device, but all events share a common structure defined in include/uapi/linux/input.h:

struct input_event {
    struct timeval time;
    __u16 type;
    __u16 code;
    __s32 value;
};

Consequently, each input event carries a timestamp with the accuracy afforded by struct timeval, i.e. microsecond granularity, as per sys/time.h and man 3 timeval.

!!! todo "but what time does it fill in?"

[^1]: evdev in its current form is actually not that old, only appearing proper in kernel version 2.4.10pre9 with the change "Alan Cox: merge input/joystick layer differences, driver and alpha merge". There still exists a separate, deprecated input framework for gamepads known as joydev; see documentation.

Latency

From kernel drivers / modules

!!! todo "evdev dispatch (should be easy?)"

To the application

Synchronous reads
  • blocking reads, 1 per thread
  • epoll + subsequent reads
io_uring
  • kernel can fill the CQE while the process is already running

libinput

libinput's API provides microsecond precision for its events; correspondingly, event timestamps are stored as uint64_t, filled from the timestamp provided by the kernel.

!!! todo "how does it poll for events"

X11

In the base X11 specifications, input device events have a time: TIMESTAMP field. This type is defined as a CARD32, i.e. a 32-bit unsigned integer, with the following semantics:

A timestamp is a time value, expressed in milliseconds. It typically is the time since the last server reset. Timestamp values wrap around (after about 49.7 days).

The Xorg event processing stack is (needlessly) complicated, but having separate input drivers means we can easily focus our attention on the two pieces relevant for Linux:

  • xf86-input-evdev, whose source code does not once refer to the time field of struct input_event
  • xf86-input-libinput, whose source does not refer to any libinput_event_*_get_time{,_usec}() function

This means the X11 events' timestamps are filled in when they get processed by the server, and sure enough, dix/getevents.c features such wonders as:

int
GetPointerEvents(InternalEvent *events, DeviceIntPtr pDev, int type,
                 int buttons, int flags, const ValuatorMask *mask_in)
{
    CARD32 ms = GetTimeInMillis();

GetTimeInMillis() uses CLOCK_MONOTONIC_COARSE if available (i.e. since Linux 2.6.32), otherwise falling back to CLOCK_MONOTONIC.

Wayland

The base Wayland protocol includes timestamps in some, but not all, event messages. A wl_surface's gain or loss of input device focus is not timestamped, for example. This is not a concern, however: focus events serve as ordered delimiters (Wayland messages arrive in order over an AF_UNIX socket) for the subsequent data-bearing input events, which are timestamped.

> Input events also carry timestamps with millisecond granularity. Their base is undefined, so they can't be compared against system time (as obtained with clock_gettime or gettimeofday). They can be compared with each other though, and for instance be used to identify sequences of button presses as double or triple clicks.

(Chapter 4. Wayland Protocol and Model of Operation)

Looking at the wl_keyboard::key event, the timestamp is stored in a time field of type uint (the exact C type is specified as an unsigned 32-bit integer by the protocol's Wire format), and as we saw its unit is of "millisecond granularity". The XML description of the protocol confirms this.

Implementation-wise, both Weston and wlroots store the timestamp value provided by the libinput backend they use for input devices into their corresponding Wayland messages, simply dividing the libinput timestamp by 1000.

The exact same happens for mouse button presses, for instance: in the protocol, its XML definition, Weston, and wlroots.

!!! todo "KWin https://github.com/KDE/kwayland/blob/master/src/server/keyboard_interface.cpp#L147"

input_timestamps_unstable_v1

There exists an unstable extension allowing clients to request that the compositor send high-resolution timestamps before input events. It is opt-in for each pointer, keyboard, and touch device, and carries the functional equivalent of a struct timespec:

<arg name="tv_sec_hi" type="uint"
     summary="high 32 bits of the seconds part of the timestamp"/>
<arg name="tv_sec_lo" type="uint"
     summary="low 32 bits of the seconds part of the timestamp"/>
<arg name="tv_nsec" type="uint"
     summary="nanoseconds part of the timestamp"/>

As of writing this, only Weston implements the protocol.

!!! todo "Weston impl"

Summary

| System          | Reference point       | Precision       | Unit        | Actual accuracy        | Typical latency |
|-----------------|-----------------------|-----------------|-------------|------------------------|-----------------|
| Linux input     | ???                   | fixed, 64+ bits | microsecond | ?                      | ?               |
| X11             | Any / Start of server | fixed, 32 bits  | millisecond | CLOCK_MONOTONIC_COARSE | ?               |
| Wayland         | Any                   | fixed, 32 bits  | millisecond |                        |                 |
| libinput        | Same as kernel        | fixed, 64 bits  | microsecond | Same as kernel         | ?               |
| AppKit NSEvents | ???                   | IEEE754 double  | second      |                        |                 |