Article
Statistical Learning
Louisa Bogaerts, Noam Siegelman, and Ram Frost
Statistical learning refers to the ability to pick up on the statistical regularities in our sensory environment, typically without intention or conscious awareness. Since the seminal publication on statistical learning in 1996, sensitivity to regularities has become a key concept in our understanding of language acquisition as well as other cognitive functions such as perception and attention.
Neuroimaging studies investigating which brain areas underpin statistical learning have mapped a network of domain-general regions in the medial temporal lobe as well as modality-specific regions in early sensory cortices. Research using electroencephalography has further demonstrated how sensitivity to structure impacts the brain’s processing of sensory input.
In response to concerns about the large discrepancy between the highly simplified artificial regularities employed in laboratory experiments on statistical learning and the noisier, more complex regularities humans face in the real world, recent studies have taken more ecological approaches.
Article
Peripheral Vision: A Critical Component of Many Visual Tasks
Ruth Rosenholtz
An important component of understanding human visual perception is what people can perceive at a glance. If that glance provides the observer with sufficient task-relevant information, processing can be efficient. If not, one must move one’s eyes and integrate information across glances and over time, which is necessarily slower and is limited both by working memory and by the ability to integrate that information. Vision at a glance depends in large part on the strengths and limitations of peripheral vision, and in particular on visual crowding. Understanding peripheral vision has helped unify a number of aspects of vision.
Article
Object Perception
Scott P. Johnson
Visual scenes tend to be very complex: a multitude of overlapping surfaces varying in shape, color, texture, and depth relative to the observer. Yet most observers effortlessly perceive that the visual environment is composed of distinct objects, laid out across space, each with a particular shape that can be inferred from partial views and incomplete information. Moreover, observers generally expect objects to be continuous across space and time, to have a certain shape, and to be solid in three-dimensional (3D) space. The cortical visual system processes information for objects first by coding visual features, then by linking features into units, and last by interpreting units as objects that may be recognizable or otherwise relevant to the observer. This way of conceptualizing object perception maps roughly onto the lower-, middle-, and higher-level stages of visual processing that have long formed the basis for investigations of visual perception in adults; it also frames theories of object perception, studies of how visual deprivation reduces object perception skills, and accounts of the developmental time course of object perception in infancy.
Article
Eye Movements and Perception
Doris I. Braun and Alexander C. Schütz
Voluntary eye movements and visual perception are closely intertwined in humans and nonhuman primates because high-acuity vision is limited to a very small, specialized area at the center of the retina, the fovea. Only when the image of an object is projected onto the foveal region by eye and head movements is it possible to perceive fine visual details, such as letters during reading. To improve visual perception and to benefit from high-resolution foveal vision, rapid saccadic eye movements frequently change the direction of both eyes to selected peripheral locations. Continuous sequences of these voluntary saccades and fixations determine what humans see and in how much detail they perceive objects and their visual surroundings. Where, when, and how humans move their eyes depend not only on the visual properties of the target object but also on their intentions and prevailing tasks. Accordingly, target locations for saccades differ depending on what people are doing: whether they just look around, actively search for something, read, or do sports. Instead of the classical dichotomy of bottom-up and top-down processes, recent research on gaze behavior has focused on the dynamic interplay of factors such as task demands, rewards, scene content, temporal sequences, and individual and historical differences. Besides saccadic eye movements, humans are also able to rotate their eyes continuously when they pursue moving objects of interest. Smooth pursuit eye movements stabilize the image of a moving object on the foveal region and prevent degradation of the retinal target image resulting from motion smear. Pursuit eye movements also improve the prediction of future target movement. Pursuit initiation is often combined with interceptive saccades that direct the fovea to the moving target and catch-up saccades that correct for small mismatches between eye and target position, speed, and/or direction. Because each eye movement alters the retinal input, compensation for retinal displacements is needed to maintain a stable representation of the environment. Overall, both saccadic and smooth pursuit eye movements provide optimal uptake of visual information for perception and the guidance of actions.