1-9 of 9 Results for:

  • Computational Neuroscience
  • Sensory Systems

Article

Confidence in Decision-Making  

Megan A.K. Peters

The human brain processes noisy information to help make adaptive choices under uncertainty. Accompanying these decisions about incoming evidence is a sense of confidence: a feeling about whether a decision is correct. Confidence typically covaries with the accuracy of decisions, in that higher confidence is associated with higher decisional accuracy. In the laboratory, decision confidence is typically measured by asking participants to make judgments about stimuli or information (type 1 judgments) and then to rate their confidence on a rating scale or by engaging in wagering (type 2 judgments). The correspondence between confidence and accuracy can be quantified in a number of ways, some based on probability theory and signal detection theory. But decision confidence does not always reflect only the probability that a decision is correct; confidence can also reflect many other factors, including other estimates of noise, evidence magnitude, nearby decisions, decision time, and motor movements. Confidence is thought to be computed by a number of brain regions, most notably areas in the prefrontal cortex. And, once computed, confidence can be used to guide other processes and behaviors, such as adjusting learning rates or informing social interaction.
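
To make the type 1/type 2 distinction concrete, the sketch below simulates a standard equal-variance signal detection observer and quantifies the confidence-accuracy correspondence with a type 2 ROC area. The d' value, the 4-point rating scale, and the confidence rule (distance from the decision criterion) are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch: type 1 decisions, type 2 confidence ratings, and a type 2
# ROC area quantifying how well confidence tracks accuracy.
import numpy as np

rng = np.random.default_rng(0)
d_prime, n_trials = 1.5, 10_000

# Type 1: discriminate two stimulus classes from noisy evidence.
stimulus = rng.integers(0, 2, n_trials)                    # class 0 or 1
evidence = rng.normal((stimulus - 0.5) * d_prime, 1.0)
choice = (evidence > 0).astype(int)                        # type 1 judgment
correct = (choice == stimulus)

# Type 2: confidence modeled as distance from the decision criterion,
# discretized onto a 4-point rating scale (ratings 0-3).
confidence = np.digitize(np.abs(evidence), bins=[0.5, 1.0, 1.5])

# Type 2 ROC: for each rating criterion, hit rate = P(high conf | correct),
# false-alarm rate = P(high conf | error). Area of 0.5 = no correspondence.
hits, fas = [1.0], [1.0]
for c in range(confidence.max() + 1):
    hits.append(np.mean(confidence[correct] > c))
    fas.append(np.mean(confidence[~correct] > c))
hits.append(0.0)
fas.append(0.0)

auroc2 = 0.0
for i in range(len(fas) - 1):                              # trapezoid rule
    auroc2 += (fas[i] - fas[i + 1]) * (hits[i] + hits[i + 1]) / 2

print(f"type 1 accuracy: {correct.mean():.3f}")
print(f"type 2 AUROC:    {auroc2:.3f}")
```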

Article

Deep Neural Networks in Computational Neuroscience  

Tim C. Kietzmann, Patrick McClure, and Nikolaus Kriegeskorte

The goal of computational neuroscience is to find mechanistic explanations of how the nervous system processes information to give rise to cognitive function and behavior. At the heart of the field are its models, that is, mathematical and computational descriptions of the system being studied, which map sensory stimuli to neural responses and/or neural responses to behavior. These models range from simple to complex. Recently, deep neural networks (DNNs) have come to dominate several domains of artificial intelligence (AI). As the term “neural network” suggests, these models are inspired by biological brains. However, current DNNs neglect many details of biological neural networks. These simplifications contribute to their computational efficiency, enabling them to perform complex feats of intelligence, ranging from perceptual (e.g., visual object and auditory speech recognition) to cognitive tasks (e.g., machine translation), and on to motor control (e.g., playing computer games or controlling a robot arm). In addition to their ability to model complex intelligent behaviors, DNNs excel at predicting neural responses to novel sensory stimuli with accuracies well beyond any other currently available model type. DNNs can have millions of parameters, which are required to capture the domain knowledge needed for successful task performance. Contrary to the intuition that this renders them impenetrable black boxes, the computational properties of the network units are the result of four directly manipulable elements: input statistics, network structure, functional objective, and learning algorithm. With full access to the activity and connectivity of all units, advanced visualization techniques, and analytic tools to map network representations to neural data, DNNs represent a powerful framework for building task-performing models and will drive substantial insights in computational neuroscience.
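
To illustrate the four manipulable elements listed above, the following toy example makes them explicit for a small feedforward network trained with backpropagation; the task, architecture, and hyperparameters are arbitrary placeholders rather than a model discussed in the article.

```python
# Toy sketch of the four directly manipulable elements of a DNN:
# input statistics, network structure, functional objective, learning algorithm.
import numpy as np

rng = np.random.default_rng(1)

# 1) Input statistics: a toy "stimulus" set and a target mapping.
X = rng.normal(size=(1000, 10))                              # stimuli
y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(1000, 1))      # "responses"

# 2) Network structure: one hidden layer with a ReLU nonlinearity.
W1 = rng.normal(scale=0.1, size=(10, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1));  b2 = np.zeros(1)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)                         # hidden "units"
    return h @ W2 + b2, h

# 3) Functional objective: mean squared error between prediction and target.
def loss(pred, target):
    return np.mean((pred - target) ** 2)

# 4) Learning algorithm: stochastic gradient descent via backpropagation.
lr = 0.05
for step in range(2000):
    idx = rng.integers(0, len(X), 64)
    xb, yb = X[idx], y[idx]
    pred, h = forward(xb)
    err = 2 * (pred - yb) / len(xb)                          # dL/dpred
    gW2 = h.T @ err;  gb2 = err.sum(0)
    dh = (err @ W2.T) * (h > 0)                              # backprop through ReLU
    gW1 = xb.T @ dh;  gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("final training loss:", loss(forward(X)[0], y))
```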

Article

The Functional Organization of Vertebrate Retinal Circuits for Vision  

Tom Baden, Timm Schubert, Philipp Berens, and Thomas Euler

Visual processing begins in the retina—a thin, multilayered neuronal tissue lining the back of the vertebrate eye. The retina does not merely read out the constant stream of photons impinging on its dense array of photoreceptor cells. Instead, it performs a first, extensive analysis of the visual scene, while constantly adapting its sensitivity range to the input statistics, such as the brightness or contrast distribution. The functional organization of the retina follows several key principles. These include overlapping and repeating instances of both divergence and convergence, constant and dynamic range adjustments, and (perhaps most importantly) decomposition of image information into parallel channels. This is often referred to as “parallel processing.” To support this, the retina features a large diversity of neurons organized in functionally overlapping microcircuits that typically sample the retinal surface uniformly in a regular mosaic. Ultimately, each circuit drives spike trains in the retina’s output neurons, the retinal ganglion cells. Their axons form the optic nerve, conveying multiple, distinctive, and often already heavily processed views of the world to higher visual centers in the brain. From an experimental point of view, the retina is a neuroscientist’s dream. While part of the central nervous system, the retina is largely self-contained and, depending on the species, receives little feedback from downstream stages. This means that the tissue can be disconnected from the rest of the brain and studied in a dish for many hours without losing its functional integrity, all while retaining excellent experimental control over the exclusive natural network input: the visual stimulus. Once removed from the eyecup, the retina can be flattened, so that its neurons are easily accessed optically or with visually guided electrodes. Retinal tiling means that function studied at any one place can usually be considered representative of the entire tissue. At the same time, species-dependent specializations offer the opportunity to study circuits adapted to different visual tasks: for example, in the case of our fovea, high-acuity vision. Taken together, the retina is today among the best understood complex neuronal tissues of the vertebrate brain.
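
One classic abstraction of these ideas, used here purely as an illustration, is a difference-of-Gaussians (center-surround) receptive field whose output is split into rectified ON and OFF channels; the filter widths and the toy edge stimulus below are assumptions made for the sketch, not measurements of any real cell type.

```python
# Minimal sketch of retinal parallel processing: a center-surround
# (difference-of-Gaussians) filter whose output is split into ON and OFF channels.
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

# Toy 1-D "luminance profile": a light-dark edge.
stimulus = np.concatenate([np.ones(50), np.zeros(50)])

center   = np.convolve(stimulus, gaussian_kernel(1.0, 10), mode="same")
surround = np.convolve(stimulus, gaussian_kernel(4.0, 10), mode="same")
dog = center - surround                     # center-surround opponency

on_channel  = np.maximum(dog, 0.0)          # ON cells signal luminance increments
off_channel = np.maximum(-dog, 0.0)         # OFF cells signal decrements

print("ON response peaks at sample", on_channel.argmax())
print("OFF response peaks at sample", off_channel.argmax())
```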

Article

General Principles for Sensory Coding  

Tatyana O. Sharpee

Sensory systems exist to provide an organism with information about the state of the environment that can be used to guide future actions and decisions. Remarkably, two conceptually simple yet general theorems from information theory can be used to evaluate the performance of any sensory system. One theorem states that there is a minimal amount of energy that an organism has to spend in order to capture a given amount of information about the environment. The second theorem states that the maximum rate at which the organism can acquire resources from the environment, relative to its competitors, is limited by the information this organism collects about the environment, also relative to its competitors. These two theorems provide a scaffold for formulating and testing general principles of sensory coding but leave unanswered many important practical questions of implementation in neural circuits. These implementation questions have guided thinking in entire subfields of sensory neuroscience, and include: What features in the sensory environment should be measured? Given that we make decisions on a variety of time scales, how should one resolve the trade-off between simpler measurements that guide minimal decisions and more elaborate sensory systems that must overcome multiple delays between sensation and action? Once we agree on the types of features that are important to represent, how should they be represented? How should resources be allocated between different stages of processing, and where is the impact of noise most damaging? Finally, one should consider trade-offs between implementing a fixed strategy and an adaptive scheme that readjusts resources based on current needs. Where adaptation is considered, under what conditions does it become optimal to switch strategies? Research over the past 60 years has provided answers to almost all of these questions, but primarily in early sensory systems. Joining these answers into a comprehensive framework is a challenge that will help us understand who we are and how we can make better use of limited natural resources.
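
A concrete instance of such a coding principle, sketched below under simplifying assumptions, is histogram equalization: with a limited response range, the intensity-response function that maximizes transmitted information matches the cumulative distribution of the stimulus. The stimulus distribution, the number of response levels, and the linear-encoder comparison are illustrative choices, not results from the article.

```python
# Minimal sketch of histogram equalization as an efficient-coding strategy:
# an encoder that follows the stimulus CDF uses its limited output range
# more informatively than a naive linear encoder.
import numpy as np

rng = np.random.default_rng(2)

# Toy environment: stimulus "contrasts" drawn from a skewed distribution.
stimuli = rng.gamma(shape=2.0, scale=1.0, size=100_000)

# Equalizing encoder: the empirical CDF maps each stimulus to [0, 1].
sorted_s = np.sort(stimuli)
def cdf_response(s):
    return np.searchsorted(sorted_s, s) / len(sorted_s)

# Discretize responses into a fixed number of levels ("spike counts") and
# measure the output entropy actually achieved by each encoder.
def output_entropy(resp, n_levels=16):
    counts, _ = np.histogram(resp, bins=n_levels, range=(0, 1))
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

linear_response = np.clip(stimuli / stimuli.max(), 0, 1)
print("entropy, CDF encoder:    %.2f bits (ceiling %.2f)"
      % (output_entropy(cdf_response(stimuli)), np.log2(16)))
print("entropy, linear encoder: %.2f bits" % output_entropy(linear_response))
```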

Article

High-Density Electrophysiological Recordings to Assess the Dynamic Properties of Attention  

Corentin Gaillard and Suliann Ben Hamed

The brain has limited processing capacities, and attentional selection processes continuously shape our perception of the world. Understanding the mechanisms underlying such covert cognitive processes requires combining psychophysical and electrophysiological investigation methods. This combination allows researchers to describe how individual neurons and neuronal populations encode attentional function. In addition, direct access to neuronal information through innovative electrophysiological approaches allows covert attention to be tracked in real time. Together, these converging approaches capture a comprehensive view of attentional function.
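
As a schematic of how the locus of covert attention might be decoded from multichannel activity, the sketch below trains a linear classifier on simulated channel data. It assumes scikit-learn is available, and the channel count, attentional modulation, and noise levels are invented purely for illustration; it is not the authors' analysis pipeline.

```python
# Minimal sketch of population decoding of the attended location from
# simulated multichannel activity using a cross-validated linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_channels = 400, 64

# Each trial: attention directed to location 0 or 1. A subset of channels is
# weakly modulated by the attended location; the rest carry only noise.
attended = rng.integers(0, 2, n_trials)
modulation = np.zeros(n_channels)
modulation[:16] = 0.5                                 # attention-sensitive channels
rates = rng.normal(size=(n_trials, n_channels)) + np.outer(attended, modulation)

# Cross-validated decoding accuracy of the attended location.
decoder = LogisticRegression(max_iter=1000)
acc = cross_val_score(decoder, rates, attended, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```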

Article

Physiology of Color Vision in Primates  

Robert Shapley

Color perception in macaque monkeys and humans depends on the visually evoked activity in three cone photoreceptors and on neuronal post-processing of cone signals. Neuronal post-processing of cone signals occurs in two stages in the pathway from retina to the primary visual cortex. The first stage, in P (midget) ganglion cells in the retina, is a single-opponent subtractive comparison of the cone signals. The single-opponent signal is then sent to neurons in the parvocellular layers of the lateral geniculate nucleus (LGN), the main visual nucleus of the thalamus. The second stage of processing of color-related signals is in the primary visual cortex, V1, where multiple comparisons of the single-opponent signals are made. The diversity of neuronal interactions in V1 cortex causes the cortical color cells to be subdivided into classes of single-opponent cells and double-opponent cells. Double-opponent cells have visual properties that can be used to explain most of the phenomenology of color perception of surface colors; they respond best to color edges and spatial patterns of color. Single-opponent cells, in the retina, LGN, and V1, respond to color modulation over their receptive fields and respond best to color modulation over a large area of the visual field.
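
This two-stage scheme can be caricatured in a few lines, as below. The cone excitation values, the L-M opponency rule, and the two-subregion layout of the double-opponent cell are simplifying assumptions made for illustration only.

```python
# Minimal sketch of the two stages: single-opponent subtraction of cone
# signals, then a double-opponent comparison of those signals across space.
import numpy as np

# Toy stimulus: a reddish region abutting a greenish region, expressed as
# (L, M, S) cone excitations at two adjacent locations.
left_patch  = np.array([0.8, 0.4, 0.2])   # more L than M ("red")
right_patch = np.array([0.4, 0.8, 0.2])   # more M than L ("green")

def single_opponent(cones):
    L, M, S = cones
    return L - M                           # midget/P-cell style L-M opponency

# Stage 1: single-opponent responses at each location (retina / LGN).
so_left, so_right = single_opponent(left_patch), single_opponent(right_patch)

# Stage 2: a double-opponent cell compares single-opponent signals between its
# two receptive-field subregions, so it responds best at the color edge.
double_opponent_edge = so_left - so_right
double_opponent_uniform = so_left - so_left   # same color on both sides

print("single-opponent (L-M), left :", so_left)
print("single-opponent (L-M), right:", so_right)
print("double-opponent response at edge    :", double_opponent_edge)
print("double-opponent response to uniform :", double_opponent_uniform)
```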

Article

Understanding How Humans Learn and Adapt to Changing Environments  

Aaron Cochrane and Daphne Bavelier

Compared to other animals or to artificial agents, humans are unique in the extent of their abilities to learn and adapt to changing environments. When focusing on skill learning and model-based approaches, learning can be conceived as a progression of increasing, then decreasing, dimensions of representing knowledge. First, initial learning demands exploration of the learning space and the identification of the relevant dimensions for the novel task at hand. Second, intermediate learning requires a refinement of these relevant dimensions of knowledge and behavior to continue improving performance while increasing efficiency. Such improvements utilize chunking or other forms of dimensionality reduction to diminish task complexity. Finally, late learning ensures automatization of behavior through habit formation and expertise development, thereby reducing the need to effortfully control behavior. While automatization greatly increases efficiency, there is also a trade-off with the ability to generalize, with late learning tending to be highly specific to the learned features and contexts. In each of these phases, a variety of interacting factors is relevant: declarative instructions, prior knowledge, attentional deployment, and cognitive fitness each have unique roles to play. Neural contributions to the processes involved also shift from earlier to later points in learning, as effortfulness initially increases and then gives way to automaticity. Interestingly, video games excel at providing uniquely supportive environments to guide the learner through each of these learning stages. This makes video games a useful tool both for studying learning, owing to their engaging nature and dynamic range of complexity, and for engendering learning in domains such as education or cognitive training.

Article

Visual Perception in the Human Brain: How the Brain Perceives and Understands Real-World Scenes  

Clemens G. Bartnik and Iris I. A. Groen

How humans perceive and understand real-world scenes is a long-standing question in neuroscience, cognitive psychology, and artificial intelligence. Initially, it was thought that scenes are constructed and represented by their component objects. An alternative view proposed that scene perception starts by extracting global features (e.g., spatial layout) first, with individual objects extracted in later stages. A third framework focuses not only on how the brain represents objects and layout but also on how this information is combined to determine the possibilities for (inter)action that the environment offers us. The discovery of scene-selective regions in the human visual system sparked interest in how scenes are represented in the brain. Experiments using functional magnetic resonance imaging show that multiple types of information are encoded in the scene-selective regions, while electroencephalography and magnetoencephalography measurements demonstrate links between the rapid extraction of different scene features and scene perception behavior. Computational models such as deep neural networks offer further insight by showing how training networks on different scene recognition tasks results in the computation of diagnostic features, which can then be tested for their ability to predict activity in human brains during scene perception. Collectively, these findings suggest that the brain flexibly and rapidly extracts a variety of information from scenes using a distributed network of brain regions.
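
A minimal sketch of this encoding-model logic is given below: features from a (here, simulated) scene-recognition network are mapped to (simulated) voxel responses with ridge regression and evaluated on held-out scenes. All data, dimensions, and the regularization value are synthetic and illustrative, not taken from any study described here.

```python
# Minimal sketch of an encoding-model analysis: predict voxel responses to
# scenes from network features and evaluate on held-out scenes.
import numpy as np

rng = np.random.default_rng(4)
n_scenes, n_features, n_voxels = 200, 50, 10

# "DNN features" for each scene, and voxel responses generated as a noisy
# linear readout of those features (the ground truth an encoding model seeks).
features = rng.normal(size=(n_scenes, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
voxels = features @ true_weights + rng.normal(scale=2.0, size=(n_scenes, n_voxels))

# Fit ridge regression on a training split, then predict the held-out scenes.
train, test = slice(0, 150), slice(150, None)
lam = 10.0
Xt = features[train]
W = np.linalg.solve(Xt.T @ Xt + lam * np.eye(n_features), Xt.T @ voxels[train])
pred = features[test] @ W

# Voxel-wise prediction accuracy: correlation of predicted vs. measured responses.
r = [np.corrcoef(pred[:, v], voxels[test][:, v])[0, 1] for v in range(n_voxels)]
print("mean held-out prediction r:", np.round(np.mean(r), 2))
```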

Article

Visual Shape and Object Perception  

Anitha Pasupathy, Yasmine El-Shamayleh, and Dina V. Popovkina

Humans and other primates rely on vision. Our visual system endows us with the ability to perceive, recognize, and manipulate objects, to avoid obstacles and dangers, to choose foods appropriate for consumption, to read text, and to interpret facial expressions in social interactions. To support these visual functions, the primate brain captures a high-resolution image of the world in the retina and, through a series of intricate operations in the cerebral cortex, transforms this representation into a percept that reflects the physical characteristics of objects and surfaces in the environment. To construct a reliable and informative percept, the visual system discounts the influence of extraneous factors such as illumination, occlusions, and viewing conditions. This perceptual “invariance” can be thought of as the brain’s solution to an inverse inference problem in which the physical factors that gave rise to the retinal image are estimated. While the processes of perception and recognition seem fast and effortless, they pose a challenging computational problem that engages a substantial proportion of the primate brain.
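
The inverse-inference framing can be illustrated with a toy Bayesian model in which the observed luminance confounds surface reflectance and illumination, and reflectance is estimated by marginalizing over an assumed illumination prior. The priors, noise level, and grid resolution below are illustrative choices, not a model from the article.

```python
# Minimal sketch of inverse inference: estimate surface reflectance from a
# luminance measurement while discounting (marginalizing over) illumination.
import numpy as np

reflectance = np.linspace(0.05, 1.0, 100)     # hypothesized surface reflectances
illumination = np.linspace(0.5, 2.0, 100)     # hypothesized illumination levels

# Priors: reflectance uniform, illumination peaked around 1.0.
p_refl = np.ones_like(reflectance) / reflectance.size
p_illum = np.exp(-((illumination - 1.0) ** 2) / (2 * 0.3 ** 2))
p_illum /= p_illum.sum()

# Generative model: observed luminance = reflectance * illumination + noise.
observed_luminance, noise_sd = 0.6, 0.05
R, I = np.meshgrid(reflectance, illumination, indexing="ij")
likelihood = np.exp(-((observed_luminance - R * I) ** 2) / (2 * noise_sd ** 2))

# Posterior over reflectance: weight by priors, marginalize out illumination.
posterior = likelihood * p_refl[:, None] * p_illum[None, :]
posterior_refl = posterior.sum(axis=1)
posterior_refl /= posterior_refl.sum()

print("most probable reflectance:",
      round(float(reflectance[posterior_refl.argmax()]), 2))
```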