21-25 of 25 Results for: Cognitive Neuroscience

Article

Spatial Cognition in Rodents  

Freyja Ólafsdóttir

Wayfinding, like other spatial cognitive abilities, is a core function of all mobile animals. The past 50 years have seen a plethora of research devoted to elucidating the neural basis of this function. This research has led to the identification of neuronal cell types—many of which can be found within the hippocampal area and afferent brain regions—that encode different spatial variables and together are thought to provide animals with a so-called “cognitive map.” Moreover, seminal research carried out over the past decade has identified a neural activity event—known as “replay”—that is thought to consolidate newly formed cognitive maps, so as to commit them to long-term storage, and to support planning of goal-directed navigational trajectories in familiar, and perhaps novel, environments. Finally, this hippocampal spatial coding scheme has in recent years been postulated to extend to nonspatial domains, including episodic memory, suggesting it may play a general role in knowledge creation.
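To make the encoding idea concrete, here is a minimal sketch (my own illustration, not taken from the article; all parameters are invented) of a cognitive map in miniature: a population of simulated place cells with Gaussian tuning curves tiling a linear track, plus a maximum-likelihood decoder that recovers position from their noisy spike counts. The same decoding logic, applied to activity recorded during rest, is how replay trajectories are typically identified.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "environment": a 100 cm track tiled by 50 place cells with
# Gaussian tuning curves (all parameters illustrative).
track_len = 100.0
n_cells = 50
centers = np.linspace(0, track_len, n_cells)   # preferred firing locations
width = 8.0                                    # tuning-curve SD (cm)
peak_rate = 20.0                               # peak firing rate (Hz)

def population_rates(position):
    """Expected firing rate of each place cell at a given position."""
    return peak_rate * np.exp(-(position - centers) ** 2 / (2 * width ** 2))

# Simulate noisy spike counts at a true position, then decode position
# by maximum likelihood over candidate locations (Poisson spiking model).
true_pos = 37.0
counts = rng.poisson(population_rates(true_pos))

candidates = np.linspace(0, track_len, 1000)
log_like = np.array([
    np.sum(counts * np.log(population_rates(x) + 1e-9) - population_rates(x))
    for x in candidates
])
decoded = candidates[np.argmax(log_like)]
print(f"true position: {true_pos:.1f} cm, decoded: {decoded:.1f} cm")
```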

Article

Understanding How Humans Learn and Adapt to Changing Environments  

Daphne Bavelier and Aaron Cochrane

Compared to other animals or to artificial agents, humans are unique in the extent of their abilities to learn and adapt to changing environments. When focusing on skill learning and model-based approaches, learning can be conceived as a progression of increasing, then decreasing, dimensions of representing knowledge. First, initial learning demands exploration of the learning space and identification of the dimensions relevant to the novel task at hand. Second, intermediate learning requires a refinement of these relevant dimensions of knowledge and behavior to continue improving performance while increasing efficiency. Such improvements utilize chunking or other forms of dimensionality reduction to diminish task complexity. Finally, late learning ensures automatization of behavior through habit formation and expertise development, thereby reducing the need to effortfully control behavior. While automatization greatly increases efficiency, it trades off against the ability to generalize, as late learning tends to be highly specific to the learned features and contexts. In each of these phases a variety of interacting factors are relevant: declarative instructions, prior knowledge, attentional deployment, and cognitive fitness each have unique roles to play. Neural contributions to the processes involved also shift from earlier to later points in learning, as effortfulness initially increases and then gives way to automaticity. Interestingly, video games excel at providing uniquely supportive environments that guide the learner through each of these learning stages. This makes video games a useful tool both for studying learning, owing to their engaging nature and dynamic range of complexity, and for engendering learning in domains such as education and cognitive training.
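As a toy illustration of this progression (my construction, not the authors' model), the epsilon-greedy bandit below anneals its exploration rate over trials: early choices sample the option space broadly, intermediate trials refine the value estimates, and late behavior collapses onto a near-automatic habitual choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 10-armed bandit: arm reward probabilities are fixed but unknown.
true_p = rng.uniform(0.1, 0.9, size=10)
q = np.zeros(10)        # learned value estimates
n = np.zeros(10)        # per-arm choice counts
n_trials = 2000

for t in range(n_trials):
    # Annealed exploration: broad early sampling ("exploration of the
    # learning space"), near-deterministic late choice ("automatization").
    eps = max(0.01, 1.0 - t / 1000)
    if rng.random() < eps:
        a = int(rng.integers(10))     # explore a random arm
    else:
        a = int(np.argmax(q))         # exploit the refined estimate
    r = float(rng.random() < true_p[a])
    n[a] += 1
    q[a] += (r - q[a]) / n[a]         # incremental mean update

print("best arm:", int(np.argmax(true_p)),
      "| habitual arm after learning:", int(np.argmax(q)))
```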

Article

Visual Attention  

Sabine Kastner and Timothy J. Buschman

Natural scenes are cluttered and contain many objects that cannot all be processed simultaneously. Due to this limited processing capacity, neural mechanisms are needed to selectively enhance the information that is most relevant to one’s current behavior and to filter unwanted information. We refer to these mechanisms as “selective attention.” Attention has been studied extensively at the behavioral level in a variety of paradigms, most notably Treisman’s visual search task and Posner’s spatial cueing paradigm. These paradigms have also provided the basis for studies directed at understanding the neural mechanisms underlying attentional selection, both in the form of neuroimaging studies in humans and intracranial electrophysiology in non-human primates. The selection of behaviorally relevant information is mediated by a large-scale network that includes regions in all major lobes as well as subcortical structures. Attending to a visual stimulus modulates processing across the visual processing hierarchy, with stronger effects in higher-order areas. Current research is aimed at characterizing the functions of the different network nodes as well as the dynamics of their functional connectivity.
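One common formalization of such selective enhancement (offered here as an illustrative sketch with invented numbers; the abstract does not commit to a specific model) treats attention as a multiplicative gain on the attended stimulus's drive, combined with divisive normalization, so that boosting one stimulus simultaneously suppresses its competitors:

```python
import numpy as np

# Minimal sketch: two equal stimuli compete within a receptive field.
drive = np.array([10.0, 10.0])   # excitatory drive of each stimulus
sigma = 5.0                      # normalization (semi-saturation) constant

def response(gains):
    """Normalized response of each stimulus's neural pool."""
    g = gains * drive
    return g / (sigma + g.sum())

print("unattended:       ", response(np.array([1.0, 1.0])))
print("attend stimulus 0:", response(np.array([2.0, 1.0])))
```

With equal gains both pools respond identically; doubling the gain on stimulus 0 raises its response while the competitor's falls, the signature of selection by biased competition.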

Article

Visual Perception in the Human Brain: How the Brain Perceives and Understands Real-World Scenes  

Clemens G. Bartnik and Iris I. A. Groen

How humans perceive and understand real-world scenes is a long-standing question in neuroscience, cognitive psychology, and artificial intelligence. Initially, it was thought that scenes are constructed and represented via their component objects. An alternative view proposed that scene perception starts by extracting global features (e.g., spatial layout) first, with individual objects processed in later stages. A third framework focuses on how the brain represents not only objects and layout but also how this information is combined to determine the possibilities for (inter)action that the environment offers us. The discovery of scene-selective regions in the human visual system sparked interest in how scenes are represented in the brain. Experiments using functional magnetic resonance imaging show that multiple types of information are encoded in scene-selective regions, while electroencephalography and magnetoencephalography measurements demonstrate links between the rapid extraction of different scene features and scene perception behavior. Computational models such as deep neural networks offer further insight into how training networks on different scene recognition tasks results in the computation of diagnostic features, which can then be tested for their ability to predict activity in human brains when perceiving a scene. Collectively, these findings suggest that the brain flexibly and rapidly extracts a variety of information from scenes using a distributed network of brain regions.
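The encoding-model logic described here can be sketched in a few lines (a schematic with synthetic data standing in for real network features and fMRI responses; all names and parameters are mine): extract a feature vector per scene, fit a regularized linear map to measured voxel responses, and score the model by how well it predicts responses to held-out scenes.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Stand-ins for real data: rows are scenes, columns are features a trained
# network would compute per scene (here random), plus a simulated voxel
# that depends linearly on a few of those features.
n_scenes, n_features = 200, 100
features = rng.normal(size=(n_scenes, n_features))
true_weights = np.zeros(n_features)
true_weights[:5] = rng.normal(size=5)          # voxel "cares about" 5 features
voxel = features @ true_weights + 0.5 * rng.normal(size=n_scenes)

X_train, X_test, y_train, y_test = train_test_split(
    features, voxel, test_size=0.25, random_state=0
)
model = Ridge(alpha=1.0).fit(X_train, y_train)

# Held-out prediction accuracy: the usual test of whether a feature space
# is a good candidate model of how the region encodes scenes.
r = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"held-out prediction correlation: {r:.2f}")
```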

Article

Visual Shape and Object Perception  

Anitha Pasupathy, Yasmine El-Shamayleh, and Dina V. Popovkina

Humans and other primates rely heavily on vision. Our visual system endows us with the ability to perceive, recognize, and manipulate objects, to avoid obstacles and dangers, to choose foods appropriate for consumption, to read text, and to interpret facial expressions in social interactions. To support these visual functions, the primate brain captures a high-resolution image of the world in the retina and, through a series of intricate operations in the cerebral cortex, transforms this representation into a percept that reflects the physical characteristics of objects and surfaces in the environment. To construct a reliable and informative percept, the visual system discounts the influence of extraneous factors such as illumination, occlusion, and viewing conditions. This perceptual “invariance” can be thought of as the brain’s solution to an inverse inference problem, in which the physical factors that gave rise to the retinal image are estimated. While perception and recognition seem fast and effortless, they pose a challenging computational problem that engages a substantial proportion of the primate brain.
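A minimal worked example of this inverse-inference view (my own toy, with illustrative numbers) is lightness constancy: the luminance reaching the retina is the product of surface reflectance and illumination, so raw luminance ratios misstate surface properties, while dividing out an illuminant estimate recovers them.

```python
import numpy as np

# Generative model of the "retinal image": luminance = reflectance * illumination.
x = np.linspace(0, 1, 200)
reflectance = np.where(x < 0.5, 0.3, 0.8)   # two surfaces with fixed reflectance
illumination = 1.0 + 0.8 * x                # smooth lighting gradient
luminance = reflectance * illumination      # what the retina actually measures

# Raw luminance ratios confound surface and lighting...
lum_ratio = luminance[x > 0.55].mean() / luminance[x < 0.45].mean()

# ...but dividing out an illuminant estimate (here the true illuminant,
# standing in for whatever the visual system infers from context)
# recovers the physical reflectance ratio exactly.
est_reflectance = luminance / illumination
ref_ratio = est_reflectance[x > 0.55].mean() / est_reflectance[x < 0.45].mean()

print(f"luminance ratio: {lum_ratio:.2f}  (distorted by lighting)")
print(f"recovered reflectance ratio: {ref_ratio:.2f}  (true: {0.8 / 0.3:.2f})")
```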