This dissertation describes a system architecture inspired by an opportunity to integrate powerful perceptual phenomena into natural environments. The goal of this system is to augment observer experiences in two distinct ways: through proprioceptive feedback (such as modulation of perceived self-motion through space-time), and through complex semantic information delivery via peripheral vision. The systems and methodologies described here expand upon prior work by introducing a two-stage carrier signal generation approach: first, proven psychophysical carrier signals are adapted to the environment, and second, the adapted carriers are animated along contextual motion paths. The path trajectories are computationally generated with respect to the goal of augmentation: proprioceptive carrier motion is derived from environmental optical flow, and semantic carrier motion is derived from a codex of symbols. This work represents a new intersection of the fields of vision science, computational imaging, and display technologies, and could challenge the way we generate media for human consumption in active environments.
Peripheral vision plays a far more efficient role in self-motion estimation than central vision. Vection, or perceived self-motion through visual stimuli alone, is heavily