Peripheral vision plays a far more prominent role in self-motion estimation than central vision. Vection, or perceived self-motion through visual stimulus alone, is heavily influenced by peripheral cues and creates strong illusory effects for the observer. The psychophysical stimuli used here act as carrier signals: fast-adapting mechanisms that manipulate the animated motion trajectories of the raw scene data streams. These algorithmically generated signals are presented subtly to affect the observer’s sensation of speed and rate of turn in a first-person point-of-view (POV) driving environment.
Data capture and processing:
Stimulus projections to affect an observer’s perception of self-motion through space-time [live field test, central channel, raw]. Sequence 1: computationally generating sunlight on a rainy day. Sequence 2: gimbal correction for observer proprioception.
(un-mute for audio)
Evaluating adaptive cues in dynamic scenes requires a carefully controlled environment that is both parameterized and repeatable. The study covers three use cases: walking, driving in a rural environment, and driving in a suburban environment. Source videos for the driving sequences were captured on three 4K wide-angle cameras. For the rural sequence I built a custom rig to mount and align the three cameras within the interior of a car (see images, right).
Even with on-board image stabilization, the raw videos contain significant jitter. Before capture, we placed registration marks at several locations within the vehicle interior; afterwards, we imported the raw videos into After Effects, tracked the motion of the marks, and stabilized the footage. Once stabilized, the three views were aligned with a corner-pin effect, and the results were rendered and encoded with the H.265/HEVC codec.
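The stabilization step itself is simple once the marks are tracked: each frame's correction is the shift that moves the marks back to their positions in a reference frame. A minimal pure-Python sketch of that idea (translation only; all names are illustrative, and the actual work was done with After Effects' tracker):

```python
def stabilization_offsets(tracks, ref_frame=0):
    """Given tracks[frame][mark] = (x, y) positions of the registration
    marks, return the per-frame (dx, dy) translation that re-centers the
    marks on their reference-frame positions, averaged over all marks."""
    ref = tracks[ref_frame]
    offsets = []
    for frame in tracks:
        # mean drift of the marks relative to the reference frame
        dx = sum(p[0] - r[0] for p, r in zip(frame, ref)) / len(ref)
        dy = sum(p[1] - r[1] for p, r in zip(frame, ref)) / len(ref)
        # the correcting shift is the negation of the measured drift
        offsets.append((-dx, -dy))
    return offsets

# Two marks over two frames; frame 1 drifted by (+2, +3),
# so its correction is (-2, -3).
tracks = [
    [(100.0, 50.0), (300.0, 50.0)],  # reference frame
    [(102.0, 53.0), (302.0, 53.0)],  # jittered frame
]
offsets = stabilization_offsets(tracks)
```

Averaging over several marks damps tracking noise on any single mark; rotation and perspective jitter would need the full corner-pin (homography) step described above.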
To align the three views of the car interior and prepare that region for observer cues, the footage first had to be stabilized. The tracking algorithms work best on small, high-contrast features that remain consistent throughout the footage (unaffected by changing shadows, color balance, etc.). Maxing out the contrast and brightness and then applying an edge-detection filter improves the speed and accuracy of this process dramatically. Even with these improvements, however, it takes several hours of fine-tuning the parameters to achieve consistent, usable results. Once this process is complete, the tracking data can be applied either to stabilize the footage or to animate a separate object.
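The preprocessing described above (contrast stretching followed by edge detection) can be sketched in a few lines. This is a hedged, pure-Python illustration of the idea, not the After Effects filters actually used; images are plain 2D lists of grayscale values:

```python
def stretch_contrast(img, lo=0.0, hi=255.0):
    """Linearly stretch pixel values to the full [lo, hi] range
    ("maxing out" contrast and brightness before tracking)."""
    mn = min(min(row) for row in img)
    mx = max(max(row) for row in img)
    scale = (hi - lo) / (mx - mn) if mx > mn else 0.0
    return [[lo + (p - mn) * scale for p in row] for row in img]

def edge_magnitude(img):
    """Gradient-magnitude edge detector (central differences).
    Strong edges give the tracker stable features to lock onto;
    the one-pixel border is left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

In practice the same effect comes from a Levels/Curves adjustment followed by a Find Edges filter; the point is that both steps are cheap and make the registration marks far easier to track.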
Once the footage is stabilized, the interior of the cockpit is isolated and used to mask the prepared stimuli.
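The masking step amounts to a per-pixel choice: keep the original footage wherever the cockpit interior is, and show the prepared stimulus everywhere else (i.e., through the windows). A minimal sketch, assuming a binary per-pixel mask (all names are illustrative):

```python
def apply_cockpit_mask(frame, stimulus, cockpit_mask):
    """Composite the prepared stimulus into the frame, keeping the
    original (stabilized) footage wherever cockpit_mask[y][x] is True,
    so cues appear only outside the car interior."""
    return [
        [f if inside else s
         for f, s, inside in zip(frow, srow, mrow)]
        for frow, srow, mrow in zip(frame, stimulus, cockpit_mask)
    ]

# One-row example: left pixel is cockpit (kept), right pixel shows stimulus.
out = apply_cockpit_mask([[9, 9]], [[1, 1]], [[True, False]])
```

Because the footage is stabilized first, the cockpit mask can be drawn once and reused across the whole sequence instead of being rotoscoped frame by frame.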
Viewing environment architecture:
[numbers in reference to semantic carrier signal MIPS patches]
[Full posting forthcoming]