1-13 of 13 Results

  • Keywords: vision

Article

Thomas W. Cronin, N. Justin Marshall, and Roy L. Caldwell

The predatory stomatopod crustaceans, or mantis shrimp, are among the most attractive and dynamic creatures living in the sea. Their special features include their powerful raptorial appendages, used to kill, stun, or disable other animals (whether predators, prey, or competitors), and their highly specialized compound eyes. Mantis shrimp vision is unlike that of any other animal and has several unique features. Their compound eyes are optically triple, each having three separate regions that produce overlapping visual fields viewing certain regions of space. They have the most diverse set of spectral classes of receptors ever described in animals, with as many as 16 types in a single compound eye. These receptors are based on a highly duplicated set of opsin molecules paired with strongly absorbing photostable filters in some photoreceptor types. The receptor set includes six ultraviolet types, all spectrally distinct, many themselves tuned by photostable filters. There are as many as eight types of polarization receptors of up to three spectral classes (including an ultraviolet class). In some species, two sets of these receptors analyze circularly polarized light, another unique capability. Stomatopod eyes move independently, each capable of visual field stabilization, image foveation and tracking, or scanning of image features. Stomatopods are known to recognize colors and polarization features and evidently use these in predation and communication. Altogether, mantis shrimps have perhaps the most unusual vision of any animal.

Article

Cynthia M. Harley and Mark K. Asplen

Annelid worms are simultaneously an interesting and difficult model system for understanding the evolution of animal vision. On the one hand, a wide variety of photoreceptor cells and eye morphologies are exhibited within a single phylum; on the other, annelid phylogenetics has been substantially re-envisioned within the last decade, suggesting the possibility of considerable convergent evolution. This article reviews the comparative anatomy of annelid visual systems within the context of the specific behaviors exhibited by these animals. Each of the major classes of annelid visual systems is examined, including both simple photoreceptor cells (including leech body eyes) and photoreceptive cells with pigment (trochophore larval eyes, ocellar tubes, complex eyes); meanwhile, behaviors examined include differential mobility and feeding strategies, similarities (or differences) in larval versus adult visual behaviors within a species, visual signaling, and depth sensing. Based on our review, several major trends in the comparative morphology and ethology of annelid vision are highlighted: (1) eye complexity tends to increase with mobility and higher-order predatory behavior; (2) although these sensors are simple, they can relay complex information through sheer numbers or multimodality; (3) polychaete larval and adult eye morphology can differ strongly in many mobile species, but not in many sedentary species; and (4) annelids exhibiting visual signaling possess even more complex visual systems than expected, suggesting the possibility that complex eyes can be simultaneously well adapted to multiple visual tasks.

Article

Color is a central feature of human perceptual experience, where it functions as a critical component in the detection, identification, evaluation, placement, and appreciation of objects in the visual world. Its role is significantly enhanced by the fact that humans evolved a dimension of color vision beyond that available to most other mammals. Many fellow primates followed a similar path, and in recent years the basic mechanisms that support color vision—the opsin genes, photopigments, cone signals, and central processing—have been the subjects of hundreds of investigations. Because of the tight linkage between opsin gene structure and the spectral sensitivity of cone photopigments, it is possible to trace pathways along which color vision may have evolved in primates. In turn, such information allows the development of hypotheses about the nature of color vision and its utility in nonhuman primates. These hypotheses are being critically evaluated in field studies where primates solve visual problems in the presence of the full panoply of photic cues. The intent of this research is to determine which aspects of these cues are critically linked to color vision and how their presence facilitates, impedes, or fails to influence the solutions. These investigations are challenging undertakings and the emerging literature is replete with contradictory conclusions. But steady progress is being made, and it appears that (a) some of the original ideas about there being a restricted number of tasks for which color vision might be optimally utilized by nonhuman primates (e.g., fruit harvest) were too simplistic, and (b) depending on circumstances that can include both features of proximate visual stimuli (spectral cues, luminance cues, size cues, motion cues, overall light levels) and situational variables (social cues, developmental status, species-specific traits), the utilization of color vision by nonhuman primates is apt to be complex and varied.

Article

Bevil R. Conway

The premise of the field of vision and art is that studies of visual processing can inform an understanding of visual art and artistic practice, and a close reading of art, art history, and art practice can help generate hypotheses about how vision works. Paraphrasing David Hubel, visual neurobiology can enhance art just as knowledge of bones and muscles has for centuries informed artistic representations of the body. The umbrella of visual art encompasses a bewildering diversity of works. A focus on 2-dimensional artworks provides an introduction to the field. For each of the steps taken by the visual brain to turn retinal images into perception, one can ask how the biology informs one’s understanding of visual art, how visual artists have exploited aspects of how the brain processes visual information, and what the strategies deployed by visual artists reveal about neural mechanisms of vision.

Article

Chuan-Chin Chiao and Roger T. Hanlon

Visual camouflage change is a hallmark of octopus, squid, and cuttlefish and serves as their primary defense against predators. They can change their total body appearance in less than a second due to one principal feature: every aspect of this sensorimotor system is neurally refined for speed. Cephalopods live in visually complex environments such as coral reefs and kelp forests and use their visual perception of backgrounds to rapidly decide which camouflage pattern to deploy. Counterintuitively, cuttlefish have evolved a small number of pattern designs to achieve camouflage: Uniform, Mottle, and Disruptive, each with variation. The expression of these body patterns is based on several fundamental scene features. In cuttlefish, there appear to be several “visual assessment shortcuts” that enable camouflage patterning change in as little as 125 milliseconds. Neural control of the dynamic body patterning of cephalopods appears to be organized hierarchically via a set of lobes within the brain, including the optic lobes, the lateral basal lobes, and the anterior/posterior chromatophore lobes. The motor output of the central nervous system (CNS) in terms of the skin patterns that are produced is under sophisticated neural control of chromatophores, iridophores, and three-dimensional skin papillae. Moreover, arm postures and skin papillae are also regulated visually for additional aspects of concealment. This coloration system, often referred to as rapid neural polyphenism, is unique in the animal kingdom and can be explained and interpreted in the context of sensory and behavioral ecology.

Article

Thomas F. Mathejczyk and Mathias F. Wernet

Evolution has produced vast morphological and behavioral diversity amongst insects, including very successful adaptations to a diverse range of ecological niches spanning the invasion of the sky by flying insects, the crawling lifestyle on (or below) the earth, and the (semi-)aquatic life on (or below) the water surface. Developing the ability to extract a maximal amount of useful information from their environment was crucial for ensuring the survival of many insect species. Navigating insects rely heavily on a combination of different visual and non-visual cues to reliably orient under a wide spectrum of environmental conditions while avoiding predators. The pattern of linearly polarized skylight that results from scattering of sunlight in the atmosphere is one important navigational cue that many insects can detect. Here we summarize progress made toward understanding how different insect species sense polarized light. First, we present behavioral studies with “true” insect navigators (central-place foragers, like honeybees or desert ants), as well as insects that rely on polarized light to improve more “basic” orientation skills (like dung beetles). Second, we provide an overview of the anatomical basis of the polarized light detection system that these insects use, as well as the underlying neural circuitry. Third, we emphasize the importance of physiological studies (electrophysiology, as well as genetically encoded activity indicators, in Drosophila) for understanding both the structure and function of polarized light circuitry in the insect brain. We also discuss the importance of an alternative source of polarized light that can be detected by many insects: linearly polarized light reflected off shiny surfaces such as water is an important environmental factor, yet the anatomy and physiology of the underlying circuits remain incompletely understood.
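To make the skylight cue concrete, here is a minimal Python sketch of the single-scattering (Rayleigh) polarization pattern that such insects can exploit. The function name and the assumed clear-sky maximum degree of polarization (~75%) are illustrative choices, not values or a model taken from the article above.

```python
import math

# Illustrative sketch (not from the article): under single Rayleigh
# scattering, the degree of linear polarization of a sky point depends
# only on its angular distance from the sun and peaks 90 degrees away.
def degree_of_polarization(angle_from_sun_deg, d_max=0.75):
    # d_max ~0.75 is an assumed clear-sky maximum; real skies vary.
    g = math.radians(angle_from_sun_deg)
    return d_max * math.sin(g) ** 2 / (1 + math.cos(g) ** 2)

for angle in (0, 30, 60, 90, 120, 180):
    print(f"{angle:>3} deg from sun -> "
          f"degree of polarization {degree_of_polarization(angle):.2f}")
```

The e-vector orientation at each sky point lies perpendicular to the great circle through the sun, and it is this geometric regularity, together with the degree-of-polarization gradient sketched above, that polarization-sensitive photoreceptors can read out as a compass cue.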

Article

Navigation is the ability of animals to move through their environment in a planned manner. Different from directed but reflex-driven movements, it involves the comparison of the animal’s current heading with its intended heading (i.e., the goal direction). When the two angles do not match, a compensatory steering movement must be initiated. This basic scenario can be described as an elementary navigational decision. Many elementary decisions chained together in specific ways form a coherent navigational strategy. With respect to navigational goals, there are four main forms of navigation: explorative navigation (exploring the environment for food, mates, shelter, etc.); homing (returning to a nest); straight-line orientation (getting away from a central place in a straight line); and long-distance migration (seasonal long-range movements to a location such as an overwintering place). The homing behavior of ants and bees has been examined in the most detail. These insects use several strategies to return to their nest after foraging, including path integration, route following, and possibly even the use of internal maps. Independent of the strategy used, insects can use global sensory information (e.g., skylight cues), local cues (e.g., the visual panorama), and idiothetic (i.e., internal, self-generated) cues to obtain information about their current and intended headings. How are these processes controlled by the insect brain? While many unanswered questions remain, much progress has been made in recent years in understanding the neural basis of insect navigation. Neural pathways encoding polarized light information (a global navigational cue) target a brain region called the central complex, which is also involved in movement control and steering. Being thus placed at the interface of sensory information processing and motor control, this region has received much attention recently and has emerged as the navigational “heart” of the insect brain. It houses an ordered array of head-direction cells that use a wide range of sensory information to encode the current heading of the animal. At the same time, it receives information about the movement speed of the animal and thus is suited to compute the home vector for path integration. With the help of neurons following highly stereotypical projection patterns, the central complex theoretically can perform the comparison of current and intended heading that underlies most navigation processes. Examining the detailed neural circuits responsible for head-direction coding, intended heading representation, and steering initiation in this brain area will likely lead to a solid understanding of the neural basis of insect navigation in the years to come.
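As a rough illustration of the elementary navigational decision and of path integration described above, the following Python sketch compares current and goal headings to produce a steering command and accumulates a home vector from heading-and-speed samples. The function names, the proportional steering rule, and the example numbers are assumptions made for illustration, not a model taken from the article.

```python
import math

def steering_command(current_heading, goal_heading, gain=1.0):
    """Return a turn command proportional to the signed heading error.

    The atan2 wrapping keeps the error in (-pi, pi], so the animal always
    turns the shorter way toward the goal direction.
    """
    error = math.atan2(math.sin(goal_heading - current_heading),
                       math.cos(goal_heading - current_heading))
    return gain * error  # sign indicates turn direction

def path_integrate(steps):
    """Accumulate (heading, speed) samples into a home vector.

    The home vector points from the current position back to the start;
    its direction is the heading the animal must adopt to return home.
    """
    x = y = 0.0
    for heading, speed in steps:
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    home_distance = math.hypot(x, y)
    home_heading = math.atan2(-y, -x)  # direction back toward the origin
    return home_distance, home_heading

# Toy outbound foraging path, then the turn needed to head home.
outbound = [(0.0, 1.0), (math.pi / 2, 2.0), (math.pi / 4, 1.5)]
dist, home = path_integrate(outbound)
turn = steering_command(current_heading=math.pi / 4, goal_heading=home)
print(f"home vector: {dist:.2f} units at {math.degrees(home):.1f} deg; "
      f"turn by {math.degrees(turn):.1f} deg")
```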

Article

Tom Baden, Timm Schubert, Philipp Berens, and Thomas Euler

Visual processing begins in the retina—a thin, multilayered neuronal tissue lining the back of the vertebrate eye. The retina does not merely read out the constant stream of photons impinging on its dense array of photoreceptor cells. Instead, it performs a first, extensive analysis of the visual scene, while constantly adapting its sensitivity range to the input statistics, such as the brightness or contrast distribution. The functional organization of the retina follows several key principles. These include overlapping and repeating instances of both divergence and convergence, constant and dynamic range adjustments, and (perhaps most importantly) decomposition of image information into parallel channels. This is often referred to as “parallel processing.” To support this, the retina features a large diversity of neurons organized in functionally overlapping microcircuits that typically sample the retinal surface uniformly in a regular mosaic. Ultimately, each circuit drives spike trains in the retina’s output neurons, the retinal ganglion cells. Their axons form the optic nerve, which conveys multiple, distinctive, and often already heavily processed views of the world to higher visual centers in the brain. From an experimental point of view, the retina is a neuroscientist’s dream. While part of the central nervous system, the retina is largely self-contained, and depending on the species, it receives little feedback from downstream stages. This means that the tissue can be disconnected from the rest of the brain and studied in a dish for many hours without losing its functional integrity, all while retaining excellent experimental control over the exclusive natural network input: the visual stimulus. Once removed from the eyecup, the retina can be flattened, so that its neurons are easily accessed optically or with visually guided electrodes. Retinal tiling means that function studied at any one place can usually be considered representative of the entire tissue. At the same time, species-dependent specializations offer the opportunity to study circuits adapted to different visual tasks: for example, in the case of our fovea, high-acuity vision. Taken together, the retina is today among the best understood complex neuronal tissues of the vertebrate brain.
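As a toy illustration of the parallel-channels idea, the sketch below splits one image into complementary ON- and OFF-center maps using a crude center-surround operator. The filter sizes, the box-blur stand-in for a Gaussian, and the test image are illustrative assumptions and are not drawn from the article.

```python
import numpy as np

# Illustrative sketch (not from the article): decompose an image into
# ON- and OFF-center channels with a center-minus-surround operator.

def box_blur(img, radius):
    """Crude local average, used here in place of a true Gaussian."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
    return out / (2 * radius + 1) ** 2

def center_surround_channels(img, center_r=1, surround_r=3):
    center = box_blur(img, center_r)
    surround = box_blur(img, surround_r)
    response = center - surround
    on_channel = np.clip(response, 0, None)    # brighter than surround
    off_channel = np.clip(-response, 0, None)  # darker than surround
    return on_channel, off_channel

# Toy example: a bright square on a dark background.
image = np.zeros((32, 32))
image[12:20, 12:20] = 1.0
on, off = center_surround_channels(image)
print("ON channel responds:", on.max() > 0, "| OFF channel responds:", off.max() > 0)
```

The point of the toy example is only that a single input is re-represented as several complementary maps in parallel; the real retina does this with dozens of circuit types, each tiling the retinal surface in its own mosaic.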

Article

Tyler S. Manning and Kenneth H. Britten

The ability to see motion is critical to survival in a dynamic world. Decades of physiological research have established that motion perception is a distinct sub-modality of vision supported by a network of specialized structures in the nervous system. These structures are arranged hierarchically according to the spatial scale of the calculations they perform, with more local operations preceding those that are more global. The different operations serve distinct purposes, from the interception of small moving objects to the calculation of self-motion from image motion spanning the entire visual field. Each cortical area in the hierarchy has an independent representation of visual motion. These representations, together with computational accounts of their roles, provide clues to the functions of each area. Comparisons between neural activity in these areas and psychophysical performance can identify which representations are sufficient to support motion perception. Experimental manipulation of this activity can also define which areas are necessary for motion-dependent behaviors like self-motion guidance.
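A minimal sketch of the local-before-global idea follows: toy correlation-type local motion detectors (loosely in the spirit of classic Reichardt-style models, which the article does not specifically invoke) are pooled into a single wide-field estimate. The function names and the toy stimulus are assumptions made for illustration.

```python
import numpy as np

def local_motion(frame_prev, frame_curr):
    """Signed local rightward-motion signal at each position.

    Correlates each point's previous intensity with its right neighbor's
    current intensity, minus the mirror-image (leftward) term.
    """
    right = frame_prev[:, :-1] * frame_curr[:, 1:]  # evidence for rightward
    left = frame_curr[:, :-1] * frame_prev[:, 1:]   # evidence for leftward
    return right - left

def global_motion(frame_prev, frame_curr):
    """Pool local signals over the whole field (a wide-field estimate)."""
    return float(local_motion(frame_prev, frame_curr).mean())

# Toy stimulus: a bright bar stepping one pixel to the right per frame.
frame1 = np.zeros((8, 16)); frame1[:, 5] = 1.0
frame2 = np.zeros((8, 16)); frame2[:, 6] = 1.0
print("global motion signal (positive = rightward):", global_motion(frame1, frame2))
```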

Article

Andrew J. Parker

Humans and some animals can use their two eyes in cooperation to detect and discriminate parts of the visual scene based on depth. Owing to the horizontal separation of the eyes, each eye obtains a slightly different view of the scene in front of the head. These small differences are processed by the nervous system to generate a sense of binocular depth. As humans, we experience an impression of solidity that is fully three-dimensional; this impression is called stereopsis and is what we appreciate when we watch a 3D movie or look into a stereoscopic viewer. While the basic perceptual phenomena of stereoscopic vision have been known for some time, it is mainly within the last 50 years that we have gained an understanding of how the nervous system delivers this sense of depth. This period of research began with the identification of neuronal signals for binocular depth in the primary visual cortex. Building on that finding, subsequent work has traced the signaling pathways for binocular stereoscopic depth forward into extrastriate cortex and further on into cortical areas concerned with sensorimotor integration. Within these pathways, neurons acquire sensitivity to more complex, higher-order aspects of stereoscopic depth. Signals relating to the relative depth of visual features can be identified in the extrastriate cortex, a form of selectivity not found in the primary visual cortex. Over the same time period, knowledge of the organization of binocular vision in animals that inhabit a wide diversity of ecological niches has substantially increased. The implications of these findings for developmental and adult plasticity of the visual nervous system and for the onset of the clinical condition of amblyopia are explored in this article. Amblyopic vision is associated with a cluster of different visual and oculomotor symptoms, but the loss of high-quality stereoscopic depth performance is one of its consistent clinical features. Understanding where and how those losses occur in the visual brain is an important goal of current research, for both scientific and clinical reasons.
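To make the underlying geometry concrete, here is a small Python sketch of how the binocular disparity of a point falls off with viewing distance for a given interocular separation. The ~6.5 cm separation and the example distances are illustrative assumptions, not values from the article.

```python
import math

def disparity_deg(distance_m, interocular_m=0.065):
    """Angular disparity (degrees) of a point at distance_m, relative to a
    point at infinity, for two eyes separated by interocular_m."""
    return math.degrees(2 * math.atan((interocular_m / 2) / distance_m))

for d in (0.5, 1.0, 2.0, 10.0):
    print(f"point at {d:>4} m -> disparity ~ {disparity_deg(d):.3f} deg")
```

The rapid fall-off with distance in this toy calculation illustrates why stereopsis is most informative for objects within a few meters of the observer.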

Article

Sabine Kastner and Timothy J. Buschman

Natural scenes are cluttered and contain many objects that cannot all be processed simultaneously. Due to this limited processing capacity, neural mechanisms are needed to selectively enhance the information that is most relevant to one’s current behavior and to filter out unwanted information. We refer to these mechanisms as “selective attention.” Attention has been studied extensively at the behavioral level in a variety of paradigms, most notably Treisman’s visual search and Posner’s paradigm. These paradigms have also provided the basis for studies directed at understanding the neural mechanisms underlying attentional selection, in the form of both neuroimaging studies in humans and intracranial electrophysiology in non-human primates. The selection of behaviorally relevant information is mediated by a large-scale network that includes regions in all major lobes as well as subcortical structures. Attending to a visual stimulus modulates processing across the visual processing hierarchy, with stronger effects in higher-order areas. Current research is aimed at characterizing the functions of the different network nodes as well as the dynamics of their functional connectivity.

Article

A first step in analyzing complex systems is a classification of their component elements. This applies to retinal organization as well as to other circuit components in the visual system. There is great variety in the types of retinal ganglion cells and in the targets of their axonal projections. Thus, a prerequisite to any deep understanding of the early visual system is a proper classification of its elements. How many distinct classes of retinal ganglion cells are there? Can the main classes be broken down into subclasses? What sort of functional correlates can be established for each class? Can homologous relationships be established between apparently similar classes in different species? Can a common nomenclature based on homologous cell and circuit classes be developed?

Article

Mindaugas Mitkus, Simon Potier, Graham R. Martin, Olivier Duriez, and Almut Kelber

Diurnal raptors (birds of the orders Accipitriformes and Falconiformes), renowned for their extraordinarily sharp eyesight, have fascinated humans for centuries. The high visual acuity in some raptor species is possible due to their large eyes, both in relative and absolute terms, and a high density of cone photoreceptors. Some large raptors, such as wedge-tailed eagles and the Old World vultures, have visual acuities twice as high as humans and six times as high as ostriches—the animals with the largest terrestrial eyes. The raptor retina has rods, double cones, and four spectral types of single cones. The highest density of single cones occurs in one or two specialized retinal regions: the foveae, where, at least in some species, rods and double cones are absent. The deep central fovea allows for the highest acuity in the lateral visual field that is probably used for detecting prey from a large distance. Pursuit-hunting raptors have a second, shallower, temporal fovea that allows for sharp vision in the frontal field of view. Scavenging carrion eaters do not possess a temporal fovea that may indicate different needs in foraging behavior. Moreover, pursuit-hunting and scavenging raptors also differ in configuration of visual fields, with a more extensive field of view in scavengers. The eyes of diurnal raptors, unlike those of most other birds, are not very sensitive to ultraviolet light, which is strongly absorbed by their cornea and lens. As a result of the low density of rods, and the narrow and densely packed single cones in the central fovea, the visual performance of diurnal raptors drops dramatically as light levels decrease. These and other visual properties underpin prey detection and pursuit and show how these birds’ vision is adapted to make them successful diurnal predators.