V1, V2, V3, V4: The Visual Cortex

Color vision is possible because three types of cones (S, M, and L) in the retina respond to slightly different ranges of wavelengths of light. This part is well understood in terms of chemistry and physics, but it represents only the first step in a truly amazing process. The perception of color, and in fact the complete illusion of a multicolored world, is generated in the visual cortex of the brain. Information about the distribution of intensities at various wavelengths, available from the three types of photoreceptors, is first processed by bipolar and ganglion cells in the retina and is then transmitted via axons in the optic nerve to the lateral geniculate nucleus (LGN). In the LGN, signals from the two eyes are integrated and projected directly to the primary visual cortex (V1), which occupies the surface of the occipital lobe of the cerebral cortex (see Figure 15.1).
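As a rough numerical illustration of this first stage, the following sketch (Python) reduces a spectral power distribution to three cone responses. The Gaussian sensitivity curves and the peak wavelengths used here (about 420, 534, and 564 nm) are only convenient stand-ins for the real cone fundamentals, which are broader and asymmetric; the point is simply that everything transmitted downstream is a compression of the spectrum into three numbers per retinal location.

```python
import numpy as np

# Illustrative sketch only: real cone fundamentals are broad, asymmetric
# curves; here they are approximated by Gaussians peaking near the
# commonly quoted values of ~420 nm (S), ~534 nm (M), and ~564 nm (L).
wavelengths = np.arange(400, 701)  # visible range, in nm


def cone_sensitivity(peak_nm, width_nm=40.0):
    """Gaussian stand-in for a cone class's spectral sensitivity curve."""
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)


S = cone_sensitivity(420)
M = cone_sensitivity(534)
L = cone_sensitivity(564)


def cone_responses(spectrum):
    """Reduce a spectral power distribution to three numbers (S, M, L).

    Each response is the integral of the spectrum weighted by one
    sensitivity curve; all later stages work with these three values,
    not with the full spectrum.
    """
    return np.array([np.trapz(spectrum * c, wavelengths) for c in (S, M, L)])


# Example: an equal-energy ("flat") spectrum across the visible range.
flat_spectrum = np.ones_like(wavelengths, dtype=float)
print(cone_responses(flat_spectrum))
```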

In the last few decades of the 20th century, noninvasive imaging methods became available for the study of normal, healthy human brains. In particular, functional magnetic resonance imaging (fMRI) permits the brain's response to visual stimuli to be correlated with specific locations in the visual cortex. These developments have prompted a large number of studies of localized brain function. The consensus view is that the visual cortex contains a set of distinct visual field maps, in which nearby neurons analyze nearby points in the visual field. While the cells in V1 appear to respond to wavelength, the perception of color requires additional processing. Visual field maps have been established in the cortical areas V1, V2, and V3; in all, some 20 to 30 visual areas have been distinguished, and the following picture has emerged. Information is transmitted via two pathways. In the dorsal stream, information passes from V1 through V2 and on to the middle temporal area (MT, or V5). This is the "where pathway," concerned with the location and motion of objects and with control of the eyes. The ventral stream, or "what pathway," also begins with V1 and V2, but directs information to the visual area known as the V4 complex and the adjacent V8 area. This pathway is involved with representation, long-term memory, and the association of color with visual stimuli.

FIGURE 15.1. A sketch of the human brain showing the lobes (blue type), the primary visual cortex (V1), and other specialized visual areas as described in [Zeki 99].

The exact location of visual processing is not the primary concern here; the important point is that neural processes deep in the visual cortex of the brain are the origin of color. Objects don't have color, and neither do light beams. Objects simply absorb, transmit, and reflect various amounts of incident electromagnetic radiation, and the radiation is characterized by wavelength and intensity. If radiation falls in the range of about 400 nm to 700 nm, the photoreceptors in our eyes can respond to it (see Figure 14.3). The response is processed and used in various ways to analyze the visual field for patterns and motion, but the association of color with the excitation of cones in the retina is another thing altogether. Part of this picture has been revealed through the study of defects in color vision. Color vision deficiency (dyschromatopsia) is often genetic in nature: one or more of the three types of cones (S, M, or L) may be missing or defective. As a result, about 1 in 12 males and a smaller fraction of females suffer from some color vision deficiency, the most common of which is red-green color blindness. Each deviation in the function of a cone generates a different perception of the world around us, and it is clear that a shift in the region of maximum sensitivity of all the cones and rods by a few hundred nanometers would present to us a strange, alien world with little similarity to the one we now accept as reality.

Now, suppose that color perception is destroyed by an injury or a disease that affects visual areas in the cortex, perhaps the V4 complex. Such events are, in fact, well documented, and the consequences are more dire than might be imagined from the simple loss of color from the visual field. Cerebral achromatopsia patients experience a drab and disturbing world with little similarity to black-and-white movies or TV. In his book An Anthropologist on Mars, Oliver Sacks described the case of an artist (Jonathan I.) who suffered from achromatopsia after an automobile accident [Sacks 95]. This was a catastrophic loss for an artist who had previously produced brilliantly colored paintings. He found that "... now color was gone, not only in perception, but in imagination and memory as well." There was a "wrongness" to things, and photographers will be able to relate to the suggested analogy with "Tri-X pushed for speed." However, Mr. I. strongly maintained that the concept of gray did not at all capture the shades he experienced.

Color perception, and indeed all vision, results from complex multichannel information processing in the brain. We must not, however, imagine a unidirectional flow of information in which data from the photoreceptors are processed at ever more sophisticated levels and finally emerge as evolving color images for contemplation and reaction. Instead, there is a high degree of feedback at all levels, so that what we see depends on more than our conscious control of attention. Priority is given to what we must see for survival. A snake or a lion receives our immediate attention and initiates action before we can consciously consider the consequences. Also, for color to serve as a biological marker, there must be some constancy in the apparent colors of objects even when they are illuminated by quite different light sources. This implies a kind of automatic white balance adjustment. In addition, to some extent we see what we expect to see. Fleeting images, especially of poorly illuminated objects, often take on the appearance of well-known things; roots, for example, may appear to be snakes.
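The "automatic white balance adjustment" mentioned above is far more sophisticated than anything in a camera, but a loose engineering analogy is the gray-world correction used in digital photography. The sketch below is only that analogy, not a model of cortical processing; it assumes a linear RGB image and rescales the channels so that their means become equal, which discounts a uniform color cast from the illuminant.

```python
import numpy as np


def gray_world_balance(image):
    """Gray-world white balance: a crude analogue of color constancy.

    image: float array of shape (H, W, 3) in linear RGB.
    Each channel is scaled so the three channel means become equal,
    i.e. the scene is assumed to average out to neutral gray.
    """
    means = image.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means           # per-channel correction factors
    return np.clip(image * gains, 0.0, 1.0)


# Toy example: a neutral gray patch photographed under a reddish illuminant.
scene = np.full((4, 4, 3), 0.5)
under_reddish_light = scene * np.array([1.2, 1.0, 0.8])
balanced = gray_world_balance(under_reddish_light)
print(balanced[0, 0])                      # channels are roughly equal again
```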

Furthermore, those who deal with images need to recognize that vision with comprehension is a learned process. A normal visual system presents an immense array of patterns and colors to a newborn baby, but initially there is no association of this information with reality. Babies have no perception of depth, and small children only slowly learn how to put round objects in round holes, square things in square holes, etc. The child learns to use his or her eyes efficiently, to experience depth, to recognize shapes, to respond to illusions, etc. It has, in fact, been estimated that the complete development of vision in man requires up to 15 years. Eventually, a model of reality is learned so that an individual is comfortable with their location in space, as revealed by the eyes. However, adults who have matured in restricted environments, e.g., as members of tribes living in tropical rain forests may still have limited depth perception and may fail to recognize two-dimensional pictures as representations of the three-dimensional world. Here again our understanding of cognition and vision has been greatly extended by case studies of individuals who have regained sight, after extended periods of blindness through cataract operations. A newly sighted individual has to learn to see, and the transition from blindness to sight is fraught with difficulties. There even appears to be mental conflict in adapting a brain organized for blindness, with enhanced touch and hearing, to a brain organized for seeing. The starting point for a baby is quite different. Learning may still require an extended period, but with a baby there is little internal conflict.

I want to emphasize that each of us has a unique visual system, in both capabilities and conditioning. We may achieve similar scores on visual acuity tests but still perceive quite different worlds. Even those who show no visual dysfunction may not see the same colors. Language, of course, is no help. We all use the same name for the color of an object, but no one can know what others see. This brings us to a dichotomy that can serve as the theme for the study of visual perception and photography. Differences in perception do not pose a severe problem for the design of color cameras, monitors, printers, etc., because the goal is to replicate in an image the optical properties of objects in a framed scene. What our eyes and brain do with the light rays from an object or its photographic image is outside the realm of color management in photography. As we shall see later in this chapter, the theory of light mixing in photography, trichromatism (George Palmer, 1777; Thomas Young, 1802), depends on viewing a small visual field with a dark background (the void mode). The aim of color management in photography is for each point (pixel) in the image to reproduce the visual characteristics of a point on an object; to go beyond this and control appearance as well, one must be able to specify the viewing conditions.
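The practical content of trichromatism for cameras and displays is that a color, once reduced to three cone (or tristimulus) responses, can be matched by a mixture of just three fixed primaries. A minimal sketch of that matching calculation follows; the 3×3 matrix of primary-to-cone responses is hypothetical, invented only for illustration, and a negative weight would simply mean that the target lies outside the gamut of these primaries.

```python
import numpy as np

# Hypothetical cone responses (rows: S, M, L) produced by one unit of each
# of three display primaries (columns: blue, green, red).  The numbers are
# invented for illustration only.
primaries = np.array([[0.90, 0.10, 0.02],   # S cone
                      [0.05, 0.80, 0.30],   # M cone
                      [0.01, 0.40, 0.95]])  # L cone

# Cone responses evoked by light from some target object.
target = np.array([0.30, 0.55, 0.60])

# Trichromacy: solve for primary intensities whose mixture evokes the same
# three responses.  The eye cannot distinguish the mixture from the target
# even though the spectra differ (metamerism).
weights = np.linalg.solve(primaries, target)
print(weights)
print(primaries @ weights)   # reproduces the target cone responses
```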

In fact, our perception of colors in a scene and in a color photograph of the scene depends on the neighboring colors and their brightness. Optical illusions also depend on image elements in context. While composition and the interactions of image elements are irrelevant for color management in photographic instruments, they are of major importance for photographers and other artists. To handle this more general requirement, trichromacy is not sufficient, and a theory of color vision is required that is consistent with the wiring of the brain. This will lead us to color opponency theory (Ewald Hering, 1872), which can also account for nonspectral colors such as brown and the inhibitory aspects of the blue-yellow, red-green (cyan), and white-black color pairs. Trichromacy and opponency color theories were once thought to be contradictory, but the proper view is that they are complementary, and, indeed, both are necessary for a complete understanding of color vision. In fact, electrophysiology and functional-imaging studies support the idea that trichromacy describes the detection of light by cones in the retina, but that the wiring of nerve cells in the retina and the thalamus is consistent with Hering's opponency theory. This leaves much to understand. Research on the visual system is currently concerned with the mechanism of automatic white balance (color constancy) and with locating the brain regions that determine the color, form, and position attributes of objects.
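A schematic way to see how trichromatic cone signals can feed Hering-style opponent channels is to form sums and differences of the L, M, and S responses. The weights in the sketch below are a textbook-style illustration of the idea, not the actual retinal or thalamic wiring, which, as noted above, is still a subject of research.

```python
def opponent_channels(lms):
    """Schematic opponent coding of cone signals (Hering-style channels).

    lms: (L, M, S) cone responses.
    Returns (achromatic, red_green, blue_yellow).  The weights are an
    illustrative convention, not the measured neural wiring.
    """
    L, M, S = lms
    achromatic = L + M                # white-black (luminance-like) channel
    red_green = L - M                 # red vs. green opponent channel
    blue_yellow = S - (L + M) / 2.0   # blue vs. yellow opponent channel
    return achromatic, red_green, blue_yellow


# A light that drives L and M equally but S only weakly reads as yellowish:
# the red-green signal is near zero and blue-yellow is strongly negative.
print(opponent_channels((0.8, 0.8, 0.1)))   # roughly (1.6, 0.0, -0.7)
```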
