3d Shape Perception
My research investigates the perception of three-dimensional (3d) shape. My interest in this field was inspired by the Molyneux Problem, a philosophical thought experiment concerning recovery from blindness. Molyneux asked: if a man born blind were given sight, could he distinguish by sight the shapes he had previously known only by touch? This question highlights our remarkable ability to create and compare mental representations of 3d shape across perceptual experiences. The goal of my research is to further our understanding of such representations. Specifically, can we discover the structure of these mental representations? That is, to what extent is a 3d form actually represented in our minds? Furthermore, can we determine what information in a haptic or visual stimulus is used to create 3d representations? I seek answers to these questions through a combination of psychophysical experiments and computational modeling.
Cross-Modal Perceptual Equivalency
My early research examined how 3d shape complexity affects perceptual equivalency, the ability to compare or share representations across perceptual modalities (Phillips, Egan, & Perry, 2009). In two unimodal experiments (vision-only and touch-only) and one cross-modal experiment, observers compared 3d objects whose features varied systematically. The smoothly curved stimuli, which ranged in complexity, were both digitally rendered and physically manufactured with a 3d printer. The results showed that the visual system is less hindered by object complexity than the haptic system. Furthermore, cross-modal perceptual equivalency is constrained by the observers' haptic representations. While not fully answering Molyneux's question, this research demonstrates clear perceptual equivalency for novel stimuli.
3d Shape from Contour Textures
My training at The Ohio State University has turned much of my focus to visual perception. My Master's thesis focused on the perception of 3d shape from contour surface textures, a source of optical information (Egan, Todd, & Phillips, 2011). Contour surface textures are such a compelling source of 3d shape that they were central to the Optical Art movement, which exemplifies the idea of illusion as art. For example, the left side of Figure 1 shows Winged Curve, a piece by Bridget Riley. On the right side of Figure 1 is an example stimulus that was used to collect observers' judgments of 3d shape. In my thesis, I proposed a new computational model of 3d shape from contour textures and compared it to implementations of two existing models. Our model assumes that contour textures are created by a series of planar cuts: imagine slicing a loaf of bread, where the slices mark the contour locations. Our data showed that observers' judgments varied systematically from the ground truth, that is, the actual 3d structure. The planar cut model captured these systematic differences more readily than the other models, and it has fewer free parameters and relies on less restrictive assumptions. This makes it not only the best explanation of why we perceive 3d structure in 2d images like those in Figure 1, but also the best predictor of that perceived structure.
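The planar cut idea can be made concrete with a short sketch. The parameterization below (a height field z(x, y) viewed orthographically and a plane normal n) is purely illustrative and is not the parameterization of the published model: each contour is the intersection of the surface with one member of a family of parallel planes, so the contours are level sets of the plane equation evaluated on the surface.

```python
import numpy as np

def planar_cut_field(z, n=(1.0, 0.0, 0.5)):
    """Scalar field whose level sets are the planar-cut contours.

    z : 2d depth map z(x, y) of the surface (orthographic view).
    n : normal of the family of parallel slicing planes; each contour
        is the surface's intersection with one plane n . p = c_k.
    """
    h, w = z.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    return n[0] * xx + n[1] * yy + n[2] * z

# A smooth bump on a flat ground plane: over the bump the evenly
# spaced cuts bend in the image, which is the cue to surface relief.
h = w = 65
yy, xx = np.mgrid[0:h, 0:w].astype(float)
z = 10.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 120.0)
g = planar_cut_field(z)
levels = np.linspace(g.min(), g.max(), 9)  # nine evenly spaced cut planes
```

Plotting the level sets of g at these levels (e.g., with matplotlib's contour) reproduces the bread-slice texture: straight, evenly spaced contours on the flat region that bow sideways as they cross the bump.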
3d Shape from Shading
My current research focuses on surface shading, another valuable form of optical information for the perception of 3d shape. Like shape from texture, shape from shading has a long history in both art and vision science. For example, consider the effectiveness of Rembrandt's shading in Figure 2. Notice how the shading creates realistic furrows on his brow, wrinkles under his eyes, and definition in his cheeks. While artists have long understood how to use shading to create compelling depictions of 3d form, vision scientists are still unsure how the brain processes this information.
Shape from shading remains a difficult problem because the local luminance of a surface is determined by three components: local surface orientation (shape), illumination, and material properties. It is unclear how the visual system separates the contributions of these three components to compute stable 3d shape percepts. One common approach to this problem involves first determining the direction of illumination, thus reducing the problem to two unknowns. My research recently tested a hypothesis about how the direction of illumination might be estimated from a shaded image (Egan & Todd, 2013). If you examine the shaded regions along the edges of the cheeks and nose in Figure 2, you will notice that those on the left are dark and those on the right are light. This strongly suggests that the illumination comes mostly from above and to the right. Does this information allow you to better interpret areas like the furrows on the forehead? Would 3d shape judgments about the furrows be less accurate if only the forehead were visible? These are the questions we recently investigated in a series of experiments using rendered surfaces, like those in Figure 3, while manipulating the visibility of smooth occlusions. Surprisingly, we found that observers' 3d shape judgments are not systematically affected by the absence of smooth occlusions. This result implies that the visual system must be exploiting some source of information other than Rembrandt's cheeks to perceive the furrows on his forehead.
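The three-way ambiguity can be illustrated with the simplest standard reflectance model, Lambert's law, under which image intensity is albedo times the cosine of the angle between the surface normal and the light direction. The specific numbers below are illustrative, not drawn from the experiments:

```python
import numpy as np

def lambertian(normal, light, albedo):
    """Intensity under Lambert's law: I = albedo * max(0, n . l)."""
    n = np.asarray(normal, float)
    l = np.asarray(light, float)
    n = n / np.linalg.norm(n)
    l = l / np.linalg.norm(l)
    return albedo * max(0.0, float(n @ l))

# Two different shape/material combinations under the same light
# produce identical intensities: a single pixel cannot tell a dimly
# painted frontal surface from a brighter, slanted one.
i_frontal = lambertian((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), albedo=0.5)
i_slanted = lambertian((0.8, 0.0, 0.6), (0.0, 0.0, 1.0), albedo=0.5 / 0.6)
```

Because shape, illumination, and material trade off against one another in this way, fixing one unknown first, such as the illumination direction, makes the remaining inference problem far more tractable.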
An alternative approach to modeling shape from shading uses a priori assumptions about both illumination and material to compute 3d shape from 2d images. Such computations are referred to as internal reflectance models. While some of the latest internal reflectance models are quite elegant, all of them rely heavily on restrictive assumptions. Common assumptions include: a distant point light source; illumination along the viewing direction; perfectly matte material; absence of shadows; absence of reflections; darker is deeper; or illumination from above. In a recent series of experiments I found results that challenge the plausibility of any internal reflectance model (Todd & Egan, 2012). If you examine Figure 4, you will easily notice that each image depicts the same surface despite drastically different appearances. Although each was rendered with a different combination of material and illumination, observers' shape judgments were remarkably similar across all three. This finding is incompatible with any process for computing shape from shading that presupposes a particular reflectance function or illumination.
Since assumptions about illumination and material are too restrictive, my goal for future research is to identify 2d image properties that are informative about shape yet stable across changes in illumination and material. My current hypothesis is that diagnostic information about 3d shape can be recovered from the 2d orientation and 2d curvature maps of shaded images. Consider Figure 5: it is a transformation of Figure 4 in which each contour represents a different luminance level. My future work will involve analyzing images like this for regular structures that could inform us about the underlying 3d shape of the surface. Potentially important structures include T-junctions, X-crossings, and circular contours. Preliminary findings are encouraging, but a more thorough and systematic analysis must be conducted before drawing any conclusions.
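The kind of transformation shown in Figure 5 produces isophotes, contours of equal image intensity, and a 2d orientation map can be derived from them. The sketch below is an illustrative gradient-based estimate, not the analysis pipeline itself: the isophote through a pixel runs perpendicular to the local intensity gradient.

```python
import numpy as np

def isophote_orientation(image):
    """2d orientation map of a shaded image's iso-intensity contours.

    The isophote through a pixel is perpendicular to the local
    intensity gradient, so its orientation (modulo pi) is the gradient
    direction rotated by 90 degrees.  Orientation is undefined where
    the gradient vanishes.
    """
    gy, gx = np.gradient(image.astype(float))  # per-axis derivatives
    return (np.arctan2(gy, gx) + np.pi / 2.0) % np.pi

# A horizontal luminance ramp: the gradient points along x, so every
# isophote is a vertical line with orientation pi/2.
ramp = np.tile(np.arange(64, dtype=float), (64, 1))
theta = isophote_orientation(ramp)
```

A 2d curvature map could then be estimated from how this orientation field changes along each contour, which is where structures like T-junctions and circular contours would become detectable.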
Visual & Haptic Perception of Symmetry
I am also involved in an ongoing collaboration assessing the role of symmetry in both visual and haptic 3d shape perception. While symmetry does play a role in perception, there has been a recent trend of overstating its importance (Phillips, Todd, & Egan, 2011). Results from several visual mental rotation tasks involving symmetrical and asymmetrical stimuli suggest a performance benefit for symmetrical objects, and this finding has been replicated many times. However, we found that performance with asymmetrical objects can be improved to match that with symmetrical objects simply by cueing the object rotation (Egan, Todd, & Phillips, 2012). My interpretation is that symmetry aids object alignment but does not necessarily yield more accurate mental representations than asymmetrical objects do. We recently finished collecting data for a haptic version of the experiment using 3d printed versions of the visual stimuli. We discovered that asymmetrical objects are easier to discriminate than their symmetrical counterparts, suggesting that the benefits of symmetry for 3d shape perception are modality dependent.
I strongly believe my past findings, current work, and future plans demonstrate my ability as a researcher. My work aims to address age-old philosophical questions and to deepen our understanding of both art and perception through modern techniques. One major advantage of my research plan is that, although I am trained in the use of various tools, I am not dependent on expensive laboratory equipment such as eye-trackers, EEG, virtual reality, supercomputing, or MRI scanning time. Most of the research mentioned requires only a computer paired with a few common software licenses. Fortunately, my continuing collaborations with colleagues at Ohio State, Skidmore College, and beyond have offered me continued access to a network of resources. For example, all of the 3d printed stimuli used in my haptic perception experiments were printed at Skidmore College, on a printer purchased with a grant awarded on the merit of my early experiments. My independence from expensive equipment does not preclude my interest in exploring alternative methods. I am currently collaborating with another lab at Ohio State to finish collecting data for an fMRI study that I designed; the project investigates the neural correlates of the perception of 3d shape, illumination, and material properties. I am confident that my expertise in psychophysical methods and computational modeling, combined with my dedication to collaboration, will ensure continued success in advancing our understanding of 3d shape perception.
Egan, E. J. L., Todd, J. T., & Phillips, F. (2011). The perception of 3D shape from planar cut contours. Journal of Vision, 11(12):15, 1-13.
Egan, E. J. L., Todd, J. T., & Phillips, F. (2012). The role of symmetry in 3D shape discrimination across changes in viewpoint. [Abstract]. Journal of Vision, 12(9): 1048.
Egan, E. J. L. & Todd, J. T. (2013). The role of smooth occlusions in the perception of 3D shape from shading. [Abstract]. Journal of Vision, 13(9): 260.
Phillips, F., Egan, E. J. L., & Perry, B. N. (2009). Perceptual equivalence between vision and touch is complexity dependent. Acta Psychologica, 132, 259-266.
Phillips, F., Todd, J. T., & Egan, E. J. L. (2011). 3D Shape perception does not depend on symmetry. [Abstract]. Journal of Vision, 11(12):15, 1-13.
Todd, J. T. & Egan, E. J. L. (2012). The Perception of shape from shading for Lambertian surfaces and range images. [Abstract]. Journal of Vision, 12(9): 281.