Lawson, Matthew Everett, and Ramesh Raskar. “Methods and apparatus for retinal imaging.” U.S. Patent No. 9,295,388. 29 Mar. 2016.
In exemplary implementations, this invention comprises apparatus for retinal self-imaging. Visual stimuli help the user self-align his eye with a camera. Bi-ocular coupling induces the test eye to rotate into different positions. As the test eye rotates, a video is captured of different areas of the retina. Computational photography methods process this video into a mosaiced image of a large area of the retina. An LED is pressed against the skin near the eye, to provide indirect, diffuse illumination of the retina. The camera has a wide field of view, and can image part of the retina even when the eye is off-axis (when the eye’s pupillary axis and camera’s optical axis are not aligned). Alternately, the retina is illuminated directly through the pupil, and different parts of a large lens are used to image different parts of the retina. Alternately, a plenoptic camera is used for retinal imaging.
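The mosaicing step described above can be sketched in miniature. This is a toy illustration under an assumed translational alignment, far simpler than the patent's computational-photography pipeline: each video frame covers a small retinal patch, and registered frames are pasted into a larger canvas, averaging where patches overlap. The `mosaic` function and its offsets are hypothetical names for this sketch.

```python
import numpy as np

def mosaic(frames, offsets, canvas_shape):
    """Paste registered frames into a canvas, averaging overlaps."""
    acc = np.zeros(canvas_shape)
    cnt = np.zeros(canvas_shape)
    for frame, (row, col) in zip(frames, offsets):
        h, w = frame.shape
        acc[row:row + h, col:col + w] += frame
        cnt[row:row + h, col:col + w] += 1
    return acc / np.maximum(cnt, 1)  # avoid division by zero off-mosaic

# Two 4x4 "retinal patches" overlapping by two columns:
frames = [np.ones((4, 4)), 2 * np.ones((4, 4))]
m = mosaic(frames, [(0, 0), (0, 2)], canvas_shape=(4, 6))
```

In practice the per-frame offsets would come from feature-based registration rather than being known in advance, and blending would be more sophisticated than a plain average.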
Lawson, Matthew Everett, et al. “Methods and apparatus for retinal imaging.” U.S. Patent No. 9,060,718. 23 Jun. 2015.
Lawson, Everett, Jason Boggess, Siddharth Khullar, Alex Olwal, Gordon Wetzstein, and Ramesh Raskar. “Computational retinal imaging via binocular coupling and indirect illumination.” In ACM SIGGRAPH 2012 Talks, pp. 1-1. 2012.
The retina is a complex light-sensitive tissue that is an essential part of the human visual system. It is unique in that it can be observed optically, with non-invasive methods, through the eye’s transparent elements. This has inspired a long history of retinal imaging devices for examination of optical function [Van Trigt 1852; Yates 2011] and for diagnosis of the many diseases that manifest in the retinal tissue, such as diabetic retinopathy, hypertension, HIV/AIDS-related retinitis, and age-related macular degeneration. These conditions are among the leading causes of blindness, especially in the developing world, but can often be prevented if screened and diagnosed at an early stage.
Velten, Andreas, Di Wu, Belen Masia, Adrian Jarabo, Christopher Barsi, Chinmaya Joshi, Everett Lawson, Moungi Bawendi, Diego Gutierrez, and Ramesh Raskar. “Imaging the propagation of light through scenes at picosecond resolution.” Communications of the ACM 59, no. 9 (2016): 79-86.
We present a novel imaging technique, which we call femto-photography, to capture and visualize the propagation of light through table-top scenes with an effective exposure time of 1.85 ps per frame. This is equivalent to a resolution of about one half trillion frames per second; between frames, light travels approximately just 0.5 mm. Since cameras with such extreme shutter speed obviously do not exist, we first re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor’s spatial dimensions. We then introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through the scenes. Given this fast resolution and the finite speed of light, we observe that the camera does not necessarily capture the events in the same order as …
Sinha, Shantanu, Nickolaos Savidis, Everett Lawson, and Ramesh Raskar. “Replacing the Slit Lamp with a Mobile Multi-Output Projector Device for Anterior Segment Imaging.” Investigative Ophthalmology & Visual Science 56, no. 7 (2015): 3162-3162.
Lawson, Matthew Everett. “A Priori vision: the transcendence of pre-ontological sight: the disparity of externalizing the internal architecture of creation.” PhD diss., Massachusetts Institute of Technology, 2012.
The completion of any visual work is not an arrival, but furthered from the origin, the inner plane of perspective, which is so readily lent from the context of communicating the seemingly coded space from which I am inspired. The closest visual language within my …
Lawson, Matthew Everett, and Ramesh Raskar. “Smart phone administered fundus imaging without additional imaging optics.” Investigative Ophthalmology & Visual Science 55, no. 13 (2014): 1609-1609.
Velten, Andreas, Di Wu, Adrian Jarabo, Belen Masia, Christopher Barsi, Chinmaya Joshi, Everett Lawson, Moungi Bawendi, Diego Gutierrez, and Ramesh Raskar. “Femto-photography: capturing and visualizing the propagation of light.” ACM Transactions on Graphics (ToG) 32, no. 4 (2013): 1-8.
We present femto-photography, a novel imaging technique to capture and visualize the propagation of light. With an effective exposure time of 1.85 picoseconds (ps) per frame, we reconstruct movies of ultrafast events at an equivalent resolution of about one half trillion frames per second. Because cameras with this shutter speed do not exist, we re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor’s spatial dimensions. We introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through macroscopic scenes; at such fast resolution, we must consider the notion of time-unwarping between the camera’s and the world’s space-time coordinate systems to take into account effects associated with the finite speed of light …
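The time-unwarping idea above reduces to a simple correction. This is a hedged sketch, not the paper's full treatment: light from a scene point at depth z reaches the camera z/c later than it was emitted, so an event recorded at camera time t_cam occurred at world time t_cam − z/c. At picosecond scales this offset is comparable to the frame spacing, so it visibly reorders events. The function name and units here are chosen for illustration.

```python
# Speed of light expressed in millimeters per picosecond (~0.3 mm/ps).
C_MM_PER_PS = 0.299792458

def unwarp(t_cam_ps, depth_mm):
    """Map camera-frame arrival time (ps) to world-frame event time (ps)
    by removing the light travel time from a point at depth_mm."""
    return t_cam_ps - depth_mm / C_MM_PER_PS

# Two events recorded at the same camera time, but at depths differing
# by 100 mm, are actually separated by about 334 ps in world time:
dt = unwarp(1000.0, 0.0) - unwarp(1000.0, 100.0)
```

This is why, as the abstract notes, the camera need not capture events in the order they actually happened: nearer events can arrive later than farther ones that preceded them.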
Velten, Andreas, Di Wu, Adrian Jarabo, Belen Masia, Christopher Barsi, Everett Lawson, Chinmaya Joshi, Diego Gutierrez, Moungi G. Bawendi, and Ramesh Raskar. “Relativistic ultrafast rendering using time-of-flight imaging.” In ACM SIGGRAPH 2012 Talks, pp. 1-1. 2012.
We capture ultrafast movies of light in motion and synthesize physically valid visualizations. The effective exposure time for each frame is under two picoseconds (ps). Capturing a 2D video with this time resolution is highly challenging, given the low signal-to-noise ratio (SNR) associated with ultrafast exposures, as well as the absence of 2D cameras that operate at this time scale. We re-purpose modern imaging hardware to record an average of ultrafast repeatable events that are synchronized to a streak tube, and we introduce reconstruction methods to visualize propagation of light pulses through macroscopic scenes.
Velten, Andreas, Everett Lawson, Andrew Bardagjy, Moungi Bawendi, and Ramesh Raskar. “Slow art with a trillion frames per second camera.” In ACM SIGGRAPH 2011 Talks, pp. 1-1. 2011.
How will the world look with a one-trillion-frame-per-second camera? Although such a camera does not exist today, we converted high-end research equipment to produce conventional movies at 0.5 trillion (5 × 10¹¹) frames per second, with light moving barely 0.6 mm in each frame. Our camera has the game-changing ability to capture objects moving at the speed of light. Inspired by the classic high-speed photography art of Harold Edgerton [Kayafas and Edgerton 1987], we use this camera to capture movies of several scenes.
Pamplona, V., E. Passos, Jan Zizka, M. Oliveira, Everett Lawson, Esteban Clua, and Ramesh Raskar. “Catra: cataract probe with a lightfield display and a snap-on eyepiece for mobile phones.” In Proc. SIGGRAPH, vol. 11, pp. 7-11. 2011.
Pamplona, Vitor F., Erick B. Passos, Jan Zizka, Manuel M. Oliveira, Everett Lawson, Esteban Clua, and Ramesh Raskar. “CATRA: interactive measuring and modeling of cataracts.” ACM Transactions on Graphics (TOG) 30, no. 4 (2011): 1-8.
We introduce an interactive method to assess cataracts in the human eye by crafting an optical solution that measures the perceptual impact of forward scattering on the foveal region. Current solutions rely on highly trained clinicians to check the back scattering in the crystalline lens and test their predictions with visual acuity tests. Close-range parallax barriers create collimated beams of light to scan through sub-apertures, scattering light as it strikes a cataract. User feedback generates maps for opacity, attenuation, contrast, and sub-aperture point-spread functions. The goal is to allow a general audience to operate a portable high-contrast light-field display to gain a meaningful understanding of their own visual conditions. User evaluations and validation with modified camera optics are performed. Compiled data is used to reconstruct the individual’s cataract-affected view, offering a novel approach for capturing …
Pandharkar, Rohit, Andreas Velten, Andrew Bardagjy, Everett Lawson, Moungi Bawendi, and Ramesh Raskar. “Estimating motion and size of moving non-line-of-sight objects in cluttered environments.” In CVPR 2011, pp. 265-272. IEEE, 2011.
We present a technique for motion and size estimation of non-line-of-sight (NLOS) moving objects in cluttered environments using a time-of-flight camera and multipath analysis. We exploit relative times of arrival after reflection from a grid of points on a diffuse surface and create a virtual phased array. By subtracting space-time impulse responses for successive frames, we separate responses of NLOS moving objects from those resulting from the cluttered environment. After reconstructing the line-of-sight scene geometry, we analyze the space of wavefronts using the phased array and solve a constrained least squares problem to recover the NLOS target location. Importantly, we can recover the target’s motion vector even in the presence of the uncalibrated time and pose bias common in time-of-flight systems. In addition, we compute an upper bound on the size of the target by backprojecting the extrema of the time profiles …
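The localization step above amounts to multilateration from the virtual phased array: each wall point yields a range to the hidden target, and the target position falls out of a least squares solve. The sketch below uses a standard linearization of the range equations (subtracting one equation from the rest), not the paper's exact constrained solver; the wall points, ranges, and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
walls = rng.uniform(0, 1, size=(8, 3))       # virtual-array points on the wall (m)
x_true = np.array([0.4, 1.2, 0.3])           # hidden target position (m)
d = np.linalg.norm(walls - x_true, axis=1)   # ranges inferred from arrival times

# Each range gives |x - w_i|^2 = d_i^2, i.e.
#   |x|^2 - 2 w_i . x + |w_i|^2 = d_i^2.
# Subtracting the i = 0 equation cancels |x|^2, leaving a linear system:
w0, d0 = walls[0], d[0]
A = -2.0 * (walls[1:] - w0)
b = d[1:]**2 - d0**2 - np.sum(walls[1:]**2, axis=1) + np.sum(w0**2)
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With noiseless ranges and a non-degenerate array geometry this recovers the target exactly; real time-of-flight data would add noise and the time/pose biases the paper explicitly handles.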
Kirmani, Ahmed, Andreas Velten, Tyler Hutchison, M. Everett Lawson, Vivek K. Goyal, M. Bawendi, and Ramesh Raskar. “Reconstructing an image on a hidden plane using ultrafast imaging of diffuse reflections.” Submitted, May (2011).
An ordinary camera cannot photograph an occluded scene without the aid of a mirror for lateral visibility. Here, we demonstrate the reconstruction of images on hidden planes that are occluded from both the light source and the camera sensor, using only a Lambertian (diffuse) surface as a substitute for a mirror. In our experiments, we illuminate a Lambertian diffuser using a femtosecond pulsed laser and time-sample the scattered light using a streak camera with picosecond temporal resolution. We recover the hidden image from the time-resolved data by solving a linear inversion problem. We develop a model for the time-dependent scattering of a light impulse from diffuse surfaces, which we use with ultra-fast illumination and time-resolved sensing to computationally reconstruct planar black and white patterns that cannot be recovered with conventional optical imaging methods.
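The inversion described above is linear: the time-resolved streak measurements y depend linearly on the unknown hidden-plane intensities x, so y = Ax + noise and x is recovered by regularized least squares. The sketch below uses a random stand-in for the forward matrix rather than the authors' physical scattering model; all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 16    # unknown hidden-plane patches
n_meas = 64   # time-bin x sensor-position samples

# Hypothetical forward matrix: row j weights the patches whose scattered
# light contributes to measurement j (a real A would come from the
# time-dependent diffuse-scattering model).
A = rng.random((n_meas, n_pix))

x_true = (rng.random(n_pix) > 0.5).astype(float)    # black/white pattern
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)  # noisy measurements

# Tikhonov-regularized inversion: minimize ||Ax - y||^2 + lam * ||x||^2
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ y)
```

The overdetermined system (64 measurements, 16 unknowns) keeps the inversion well conditioned here; the experimental difficulty in the paper lies in calibrating the true A from the scattering geometry and in the picosecond time resolution needed to make its rows distinguishable.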
© 2020 M. Everett Lawson