1-7 of 7 Results for: Computational Neuroscience, Cognitive Neuroscience

Article

Confidence in Decision-Making  

Megan A.K. Peters

The human brain processes noisy information to help make adaptive choices under uncertainty. Accompanying these decisions about incoming evidence is a sense of confidence: a feeling about whether a decision is correct. Confidence typically covaries with the accuracy of decisions, in that higher confidence is associated with higher decisional accuracy. In the laboratory, decision confidence is typically measured by asking participants to make judgments about stimuli or information (type 1 judgments) and then to rate their confidence on a rating scale or by engaging in wagering (type 2 judgments). The correspondence between confidence and accuracy can be quantified in a number of ways, some based on probability theory and signal detection theory. But decision confidence does not always reflect only the probability that a decision is correct; confidence can also reflect many other factors, including other estimates of noise, evidence magnitude, nearby decisions, decision time, and motor movements. Confidence is thought to be computed by a number of brain regions, most notably areas in the prefrontal cortex. And, once computed, confidence can be used to guide other processes, such as adjusting learning rates or informing social interaction.
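
To make the type 1/type 2 distinction concrete, the sketch below simulates a standard signal detection observer whose confidence is simply the distance of the noisy evidence from the decision criterion, and scores the confidence-accuracy correspondence with a type 2 AUROC. All specifics (Gaussian evidence, d' = 1.5, confidence as unsigned evidence) are illustrative assumptions, not the article's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, d_prime = 10_000, 1.5

# Type 1 task: which of two stimuli was shown? Internal evidence is noisy.
stimulus = rng.integers(0, 2, n_trials)                  # true stimulus (0 or 1)
evidence = rng.normal(d_prime * (stimulus - 0.5), 1.0)   # noisy internal signal

choice = (evidence > 0).astype(int)                      # type 1 judgment
correct = choice == stimulus
confidence = np.abs(evidence)                            # type 2 judgment:
                                                         # distance from criterion

# Type 2 AUROC: probability that a randomly chosen correct trial carries
# higher confidence than a randomly chosen error trial (0.5 = chance).
pos, neg = confidence[correct], confidence[~correct]
ranks = np.concatenate([pos, neg]).argsort().argsort() + 1   # 1-based ranks
auroc = (ranks[:pos.size].sum() - pos.size * (pos.size + 1) / 2) / (pos.size * neg.size)

print(f"type 1 accuracy: {correct.mean():.3f}")
print(f"type 2 AUROC:    {auroc:.3f}")
```

Because confidence and accuracy here derive from the same evidence sample, the simulated AUROC sits well above 0.5, mirroring the covariation between confidence and accuracy described above.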

Article

Deep Neural Networks in Computational Neuroscience  

Tim C. Kietzmann, Patrick McClure, and Nikolaus Kriegeskorte

The goal of computational neuroscience is to find mechanistic explanations of how the nervous system processes information to give rise to cognitive function and behavior. At the heart of the field are its models, that is, mathematical and computational descriptions of the system being studied, which map sensory stimuli to neural responses and/or neural responses to behavior. These models range from simple to complex. Recently, deep neural networks (DNNs) have come to dominate several domains of artificial intelligence (AI). As the term “neural network” suggests, these models are inspired by biological brains. However, current DNNs neglect many details of biological neural networks. These simplifications contribute to their computational efficiency, enabling them to perform complex feats of intelligence, ranging from perceptual (e.g., visual object and auditory speech recognition) to cognitive tasks (e.g., machine translation), and on to motor control (e.g., playing computer games or controlling a robot arm). In addition to their ability to model complex intelligent behaviors, DNNs excel at predicting neural responses to novel sensory stimuli with accuracies well beyond any other currently available model type. DNNs can have millions of parameters, which are required to capture the domain knowledge needed for successful task performance. Contrary to the intuition that this renders them impenetrable black boxes, the computational properties of the network units are the result of four directly manipulable elements: input statistics, network structure, functional objective, and learning algorithm. With full access to the activity and connectivity of all units, advanced visualization techniques, and analytic tools to map network representations to neural data, DNNs represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.
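
As a concrete, hedged illustration of those four manipulable elements, the PyTorch sketch below builds a minimal task-performing network; the toy task, architecture, and hyperparameters are placeholders chosen for brevity, not anything drawn from the article.

```python
import torch
from torch import nn

torch.manual_seed(0)

# 1. Input statistics: the distribution of training stimuli.
x = torch.randn(512, 2)
y = (x[:, 0] * x[:, 1] > 0).long()              # XOR-like category structure

# 2. Network structure: layers, units, and connectivity.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))

# 3. Functional objective: the loss the network is trained to minimize.
loss_fn = nn.CrossEntropyLoss()

# 4. Learning algorithm: the rule that adjusts the parameters.
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# Full access to every unit: inspect hidden-layer activity for novel stimuli,
# the kind of internal representation that can be mapped to neural data.
hidden = model[:2](torch.randn(5, 2))
print(f"final loss: {loss.item():.3f}, hidden activity shape: {tuple(hidden.shape)}")
```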

Article

High-Density Electrophysiological Recordings to Assess the Dynamic Properties of Attention  

Corentin Gaillard and Suliann Ben Hamed

The brain has limited processing capacity, and attentional selection processes continuously shape how humans perceive the world. Understanding the mechanisms underlying such covert cognitive processes requires combining psychophysical and electrophysiological investigation methods. This combination allows researchers to describe how individual neurons and neuronal populations encode attentional function. In addition, direct access to neuronal information through innovative electrophysiological approaches allows the tracking of covert attention in real time. Together, these converging approaches capture a comprehensive view of attentional function.
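
The real-time tracking idea can be sketched schematically: record a population response on a single trial and decode the covertly attended location from it. The simulated tuning curves, gain value, and logistic-regression decoder below are stand-in assumptions for illustration; actual studies record from attention-related cortical populations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_neurons, n_trials = 50, 400
locations = rng.integers(0, 4, n_trials)          # 4 possible attended positions

# Each neuron prefers one location; attention multiplicatively boosts its rate.
preferred = rng.integers(0, 4, n_neurons)
base_rate = rng.uniform(5, 20, n_neurons)
gain = np.where(preferred[None, :] == locations[:, None], 1.5, 1.0)
spikes = rng.poisson(base_rate[None, :] * gain)   # trials x neurons spike counts

# Cross-validated decoding of the attended location from population activity.
decoder = LogisticRegression(max_iter=1000)
acc = cross_val_score(decoder, spikes, locations, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.25)")
```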

Article

Models of Decision-Making Over Time  

Paul Cisek and David Thura

Making a good decision often takes time, and in general, taking more time improves the chances of making the right choice. During the past several decades, the process of making decisions in time has been described through a class of models in which sensory evidence about the choices is accumulated until the total evidence for one of them reaches a threshold, at which point commitment is made and movement initiated. Thus, if sensory evidence is weak, it takes longer to reach that threshold than if the evidence is strong, and the extended accumulation helps filter out the noise that would otherwise increase the probability of an error. Crucially, the threshold can be raised to emphasize accuracy or lowered to emphasize speed. Such accumulation-to-bound models have been highly successful in explaining behavior in a very wide range of tasks, from perceptual discrimination to deliberative thinking, and in providing a mechanistic explanation for the observation that neural activity during decision-making tends to build up over time. However, like any model, they have limitations, and recent studies have motivated several important modifications to their basic assumptions. In particular, recent theoretical and experimental work suggests that the accumulation process favors novel evidence, that the threshold decreases over time, and that the result yields improved decision-making in real, natural situations.
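
A minimal simulation makes the accumulation-to-bound logic, including the collapsing threshold mentioned above, explicit. All parameter values (drift rates, noise level, bound shape) are arbitrary illustrations rather than fitted values from any study.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_trial(drift, noise=1.0, dt=0.005, b0=1.0, collapse=0.5, t_max=5.0):
    """Accumulate noisy evidence until it crosses a bound that decays over time."""
    x, t = 0.0, 0.0
    while t < t_max:
        t += dt
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        bound = b0 * np.exp(-collapse * t)      # threshold decreases over time
        if abs(x) >= bound:
            return (1 if x > 0 else 0), t       # commit and initiate movement
    return (1 if x > 0 else 0), t_max           # deadline reached: forced choice

# With positive drift, choice 1 is "correct": weak evidence yields slower,
# less accurate decisions; strong evidence yields fast, accurate ones.
for drift in (0.2, 1.0):
    results = [simulate_trial(drift) for _ in range(500)]
    choices, rts = zip(*results)
    print(f"drift={drift}: accuracy={np.mean(choices):.2f}, mean RT={np.mean(rts):.2f}s")
```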

Article

Understanding How Humans Learn and Adapt to Changing Environments  

Aaron Cochrane and Daphne Bavelier

Compared to other animals or to artificial agents, humans are unique in the extent of their abilities to learn and adapt to changing environments. When focusing on skill learning and model-based approaches, learning can be conceived as a progression in which the dimensionality of knowledge representations first increases and then decreases. First, initial learning demands exploration of the learning space and identification of the dimensions relevant to the novel task at hand. Second, intermediate learning requires refinement of these relevant dimensions of knowledge and behavior to continue improving performance while increasing efficiency. Such improvements rely on chunking or other forms of dimensionality reduction to diminish task complexity. Finally, late learning ensures automatization of behavior through habit formation and expertise development, thereby reducing the need to effortfully control behavior. While automatization greatly increases efficiency, it trades off against the ability to generalize, with late learning tending to be highly specific to the learned features and contexts. In each of these phases a variety of interacting factors are relevant: declarative instructions, prior knowledge, attentional deployment, and cognitive fitness each have unique roles to play. The neural contributions to these processes also shift from earlier to later points in learning, as effortful control initially increases and then gives way to automaticity. Interestingly, video games excel at providing uniquely supportive environments that guide the learner through each of these learning stages. This makes video games a useful tool both for studying learning, given their engaging nature and dynamic range of complexity, and for engendering learning in domains such as education or cognitive training.

Article

Visual Perception in the Human Brain: How the Brain Perceives and Understands Real-World Scenes  

Clemens G. Bartnik and Iris I. A. Groen

How humans perceive and understand real-world scenes is a long-standing question in neuroscience, cognitive psychology, and artificial intelligence. Initially, it was thought that scenes are constructed and represented by their component objects. An alternative view proposed that scene perception starts by extracting global features (e.g., spatial layout) first, with individual objects extracted at later stages. A third framework focuses on how the brain not only represents objects and layout but also combines this information to determine the possibilities for (inter)action that the environment offers us. The discovery of scene-selective regions in the human visual system sparked interest in how scenes are represented in the brain. Experiments using functional magnetic resonance imaging show that multiple types of information are encoded in scene-selective regions, while electroencephalography and magnetoencephalography measurements demonstrate links between the rapid extraction of different scene features and scene perception behavior. Computational models such as deep neural networks offer further insight into how training networks on different scene recognition tasks results in the computation of diagnostic features, which can then be tested for their ability to predict activity in human brains when perceiving a scene. Collectively, these findings suggest that the brain flexibly and rapidly extracts a variety of information from scenes using a distributed network of brain regions.
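
One widely used way to test whether a network's features predict brain activity is representational similarity analysis (RSA). The sketch below compares model and brain representational dissimilarity matrices; the random placeholder data stand in for DNN activations and fMRI response patterns to the same scenes, and share latent structure only by construction.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_scenes = 30

# Placeholder data: model features and voxel patterns for the same scenes,
# built from a shared latent factor plus independent noise.
latent = rng.normal(size=(n_scenes, 10))
dnn_features = latent @ rng.normal(size=(10, 256)) + 0.5 * rng.normal(size=(n_scenes, 256))
brain_patterns = latent @ rng.normal(size=(10, 100)) + 0.5 * rng.normal(size=(n_scenes, 100))

# Representational dissimilarity matrices: pairwise distances across scenes.
rdm_model = pdist(dnn_features, metric="correlation")
rdm_brain = pdist(brain_patterns, metric="correlation")

# The RSA score: rank correlation between the two RDMs.
rho, _ = spearmanr(rdm_model, rdm_brain)
print(f"model-brain RDM correlation: {rho:.2f}")
```

Comparing RDMs rather than raw activations sidesteps the fact that model units and voxels live in different spaces; only the geometry of the scene representations is compared.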

Article

Visual Shape and Object Perception  

Anitha Pasupathy, Yasmine El-Shamayleh, and Dina V. Popovkina

Humans and other primates rely on vision. Our visual system endows us with the ability to perceive, recognize, and manipulate objects, to avoid obstacles and dangers, to choose foods appropriate for consumption, to read text, and to interpret facial expressions in social interactions. To support these visual functions, the primate brain captures a high-resolution image of the world in the retina and, through a series of intricate operations in the cerebral cortex, transforms this representation into a percept that reflects the physical characteristics of objects and surfaces in the environment. To construct a reliable and informative percept, the visual system discounts the influence of extraneous factors such as illumination, occlusions, and viewing conditions. This perceptual “invariance” can be thought of as the brain’s solution to an inverse inference problem, in which the physical factors that gave rise to the retinal image are estimated. While perception and recognition seem fast and effortless, they pose a challenging computational problem that engages a substantial proportion of the primate brain.
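
The inverse inference framing can be made concrete with a toy luminance example: the retina measures only the product of surface reflectance and illumination, so estimating reflectance requires prior assumptions about the world. The prior, noise level, and grid below are illustrative choices, not a model from the article.

```python
import numpy as np

reflectance = np.linspace(0.05, 1.0, 200)    # candidate surface reflectances
illumination = np.linspace(0.1, 2.0, 200)    # candidate light levels

# Prior: illumination near 1.0 is most common; reflectance prior is flat.
R, I = np.meshgrid(reflectance, illumination, indexing="ij")
prior = np.exp(-0.5 * ((I - 1.0) / 0.3) ** 2)

# Likelihood: the retina observes luminance = reflectance x illumination,
# corrupted by sensor noise.
observed_luminance = 0.4
likelihood = np.exp(-0.5 * ((R * I - observed_luminance) / 0.05) ** 2)

# Posterior over reflectance: combine and marginalize out illumination.
posterior = (prior * likelihood).sum(axis=1)
posterior /= posterior.sum()
estimate = reflectance[np.argmax(posterior)]
print(f"most probable reflectance: {estimate:.2f}")   # near 0.4 under this prior
```

The same luminance is consistent with infinitely many reflectance-illumination pairs; the prior is what lets the estimate settle on one, which is the sense in which perceptual invariance solves an ill-posed inverse problem.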