CHAPTER 4 SENSORY PROCESSES
© LONDONPHOTOS - HOMER SYKES / ALAMY
For more Cengage Learning textbooks, visit www.cengagebrain.co.uk
Imagine yourself sitting late at night in a deserted church. Although the image is one of profound serenity, there is in reality an enormous amount of information impinging on you from the world: Light from the altar, dim though it may seem, is entering your eyes. The sounds of the city, soft though they may seem, are entering your ears. The pew you’re sitting in is pushing up on your body; the smell of incense is wafting into your nose; and the taste of the wine you just drank still lingers in your mouth. And this is just the environmental information that you’re aware of! In addition, there’s lots more information that you’re unaware of. The microwave transmitter on the hill behind you, the radio station on the other side of town, and the mobile phone of a talkative passer-by outside are all issuing various sorts of electromagnetic radiation that, while enveloping you, aren’t touching your consciousness. Across the street a dog owner blows his dog whistle, sending out a high-frequency shriek that, while very salient to the dog (and to any bats in the vicinity), is inaudible to you. Likewise, there are particles in the air and in your mouth, and subtle pressures on your skin, that constitute information yet do not register. The point here is that, even in the calmest of circumstances, the world is constantly providing us with a vast informational tapestry. We need to assimilate and interpret at least some of this information in order to interact appropriately with the world. This need raises two considerations. First, which aspects of the environmental information register with our senses and which don’t? For example, why do we see electromagnetic radiation in the form of green light, but not electromagnetic radiation in the form of x-rays or radio waves? Second, how do the sense organs work such that they efficiently acquire the information that is acquirable?
The first question, while fascinating, is largely beyond the scope of this book, but is best understood from an evolutionary perspective. Steven Pinker’s classic How the Mind Works (1997) provides a superb description of this perspective. To give a quick illustration of it, a brief answer to the question of why we see only the forms of electromagnetic radiation that we do would go like this: To operate and survive in our world, we need to know about objects – what they are and where they are – and so we’ve evolved to use that part of the electromagnetic spectrum that best accomplishes this goal. With some forms of electromagnetic radiation – short-wave radiation like x-rays or gamma rays, for example – most objects are invisible; that is, the radiation passes right through them rather than reflecting off them to our eyes.

CHAPTER OUTLINE
CHARACTERISTICS OF SENSORY MODALITIES: Threshold sensitivity; Suprathreshold sensation; Signal detection theory; Sensory coding
VISION: Light and vision; The visual system; Seeing light; Seeing patterns; Seeing color; Sensation and perception: a preview
AUDITION: Sound waves; The auditory system; Hearing sound intensity; Hearing pitch
CUTTING EDGE RESEARCH: Where in the brain are illusions?
OTHER SENSES: Olfaction; Gustation; Pressure and temperature; Pain
SEEING BOTH SIDES: Should opioids be used for treating chronic pain?

Other forms of
radiation – long-wave radiation like radio waves, for example – would reflect off the objects to our eyes, but in a manner that would be so blurred as to be useless in any practical sense.

Our senses are our input systems. From them we acquire data about the world around us, which constitute the most immediate means (although, as we shall see, not the only means) by which we determine the character of the environment within which we exist and behave. In this chapter we discuss some of the major properties of the senses. Some of the research we review deals with psychological phenomena; other studies deal with the biological bases of these phenomena. At both the biological and psychological levels of analysis, a distinction is often made between sensation and perception. At the psychological level, sensations are fundamental, raw experiences associated with stimuli (for example, the sense of sight may register a large red object), while perception involves the integration and meaningful interpretation of these raw sensory experiences (‘It’s a fire engine’). At the biological level, sensory processes involve the sense organs and the neural pathways that emanate from them, which are concerned with the initial stages of acquiring stimulus information. Perceptual processes involve higher levels of the cortex, which are known to be more related to meaning. This chapter concerns sensation, while Chapter 5 concerns perception.

The distinction between sensation and perception, while useful for organizing chapters, is somewhat arbitrary. Psychological and biological events that occur early in the processing of a stimulus can sometimes affect interpretation of the stimulus. Moreover, from the perspective of the nervous system, there is no sharp break between the initial uptake of stimulus information by the sense organs and the brain’s subsequent use of that information to ascribe meaning. In fact, one of the most important features of the brain is that, in addition to taking in sensory information, it is constantly sending messages from its highest levels back to the earliest stages of sensory processing. These back projections actually modify the way sensory input is processed (Damasio, 1994; Zeki, 1993).

This chapter is organized around the different senses: vision, hearing, smell, taste, and touch; the latter includes pressure, temperature, and pain. In everyday life, several senses are often involved in any given act – we see a peach, feel its texture, taste and smell it as we bite into it, and hear the sounds of our chewing. Moreover, many sensory judgments are more accurate when multiple senses are employed; for instance, people are more accurate at judging the direction from which a sound is coming when they are able to use their eyes to ‘target’ the approximate location than when they use their ears alone (Spence & Driver, 1994). For purposes of analysis, however, we consider the senses one at a time. Before beginning our analysis of individual senses, or sensory modalities, we will discuss some properties that are common to all senses.

CHARACTERISTICS OF SENSORY MODALITIES

Any sensory system has the task of acquiring some form of information from the environment and transducing it into some form of neural representation in the brain. Thus understanding the workings of a sensory system entails two steps: first, understanding the relevant dimensions of a particular form of environmental information, and then understanding how each dimension is translated by the sensory organ into a neural representation. The dimensions corresponding to any given form of information can be roughly divided into ‘intensity’ and ‘everything else’.

Threshold sensitivity

We singled out intensity because it is common to all forms of information, although it takes different forms for different kinds of information. For example, for light, intensity corresponds to the number of incoming photons per second, while for sound, intensity corresponds to the amplitude of sound pressure waves. It is entirely intuitive that the more intense a stimulus is, the more strongly it will affect the relevant sense organ: a high-intensity light will affect the visual system more than a dimmer light; a high-volume sound will affect the auditory system more than a soft sound; and so on. This intuitively obvious observation is important but not surprising: it is analogous to the equally intuitive observation that a dropped apple will fall downward. In other words, it is a scientific starting point. So just as Newton (supposedly) began from the dropped-apple observation to develop a detailed and quantitative theory of gravity, sensory psychologists have long sought to detail and quantify the relation between physical stimulus intensity and the resulting sensation magnitude. In what follows, we will describe some of the results of this endeavor.
Table 4.1 Minimum stimuli
Approximate minimum stimuli for various senses. (Galanter, E. (1962). ‘Contemporary Psychophysics,’ in Roger Brown & collaborators (eds.), New Directions in Psychology, Vol. 1. Reprinted by permission of Roger Brown.)

Sense     Minimum stimulus
Vision    A candle flame seen at 30 miles on a dark, clear night
Hearing   The tick of a clock at 20 feet under quiet conditions
Taste     One teaspoon of sugar in 2 gallons of water
Smell     One drop of perfume diffused into the entire volume of six rooms
Touch     The wing of a fly falling on your cheek from a distance of 1 centimeter

Absolute thresholds: detecting minimum intensities

A basic way of assessing the sensitivity of a sensory modality is to determine the absolute threshold: the minimum magnitude of a stimulus that can be reliably discriminated from no stimulus at all – for example, the weakest light that can be reliably discriminated from darkness. One of the most striking aspects of our sensory modalities is that they are extremely sensitive to the presence of, or a change in, an object or event. Some indication of this sensitivity is given in Table 4.1. For five of the senses, we have provided an estimate of the minimal stimulus that they can detect. What is most noticeable about these minimums is how low they are – that is, how sensitive the corresponding sensory modality is. These values were determined using what are called psychophysical procedures: experimental techniques for measuring the relation between the physical magnitude of some stimulus (e.g., the physical intensity of a light) and the resulting psychological response (how bright the light appears to be). In one commonly used psychophysical procedure, the experimenter first selects a set of stimuli whose magnitudes vary around the threshold (for example, a set of dim lights whose intensities vary from invisible to barely visible).
Over a series of what are referred to as trials, the stimuli are presented one at a time in random order, and the observer is instructed to say ‘yes’ if the stimulus appears to be present and ‘no’ if it does not. Each stimulus is presented many times, and the percentage of ‘yes’ responses is determined for each stimulus magnitude. Figure 4.1 depicts hypothetical data that result from this kind of experiment: a graph showing that the percentage of ‘yes’ responses rises smoothly as stimulus intensity (defined here in terms of hypothetical ‘units’) increases. When performance is characterized by such a graph, psychologists have agreed to define the absolute threshold as the value of the stimulus at which it is detected 50 percent of the time. For the data displayed in Figure 4.1, the stimulus is detected 50 percent of the time when the stimulus’s intensity is about 28 units; thus 28 units is defined to be the absolute threshold.

Figure 4.1 Psychophysical Function from a Detection Experiment. Plotted on the vertical axis is the percentage of times the participant responds, ‘Yes, I detect the stimulus’; on the horizontal axis is the measure of the magnitude of the physical stimulus. Such a graph may be obtained for any stimulus dimension to which an individual is sensitive.

© RUSSEL SHIVELY/DREAMSTIME.COM. Our sensory modalities are extremely sensitive in detecting the presence of an object – even the faint light of a candle in a distant window. On a clear night, a candle flame can be seen from 30 miles away!
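The 50-percent definition is easy to operationalize. The sketch below (the function name and the data are our own illustrative inventions, not the values behind Figure 4.1) linearly interpolates the intensity at which the percentage of ‘yes’ responses crosses 50:

```python
# Estimate an absolute threshold: the stimulus intensity at which
# the observer says 'yes' on 50 percent of trials.
# The intensities and percentages below are illustrative, not real data.

def absolute_threshold(intensities, percent_yes, criterion=50.0):
    """Linearly interpolate the intensity where percent_yes crosses criterion."""
    points = list(zip(intensities, percent_yes))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 <= criterion <= y1:
            # linear interpolation between the two bracketing points
            return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("criterion not crossed in the measured range")

intensities = [10, 15, 20, 25, 30, 35, 40]   # hypothetical 'units'
percent_yes = [2, 8, 20, 40, 62, 85, 97]     # hypothetical data

print(absolute_threshold(intensities, percent_yes))  # ~27.3 units
```

Note that the criterion is a parameter: choosing 25 or 75 percent instead of 50 would shift the estimate, which is exactly the arbitrariness discussed in the text.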
At first glance, this definition of ‘threshold’ may seem vague and unscientific. Why 50 percent? Why not 75 percent or 28 percent? Any value would seem arbitrary. There are two answers to this question. The first is that establishing a threshold is generally only a first step in some experiment. As an example, suppose one is interested in dark adaptation, that is, in establishing how sensitivity is affected by the amount of time that an observer has spent in the dark. One would then plot (as indeed we do later in this chapter) how threshold is affected by time. Of interest is the specific shape and/or mathematical form of the function that relates threshold to whatever we are investigating – in this illustration, time in the dark. This function is generally unaffected by the specific value – 28 percent, 50 percent, 75 percent, whatever – that we choose. In short, although the magnitude of the threshold is arbitrary, this arbitrariness does not affect the qualitative or even quantitative nature of our eventual conclusions. Second, if we know enough both about the physics of the informational dimension under consideration and about the anatomy of the sensory system that we are studying, we can carry out experiments that yield more specific knowledge about how the system works; that is, we can arrive at conclusions based on an integration of physics, biology, and psychology. A classic, and particularly elegant, experiment of this sort was reported by Hecht, Shlaer, and Pirenne (1942), who endeavored to determine the absolute threshold for vision and, in the process, demonstrated that human vision is virtually as sensitive as is physically possible. As every graduate of elementary physics knows, the smallest unit of light energy is a photon. Hecht and his colleagues showed that a person can detect a flash of light that contains only 100 photons.
This is impressive in and of itself: on a typical day, many billions of photons enter your eye every second. What is even more impressive is that Hecht and his colleagues went on to show that only 7 of these 100 photons actually contact the critical molecules in the eye that are responsible for translating light into the nerve impulses that correspond to vision (the rest are absorbed by other parts of the eye), and, furthermore, that each of these 7 photons affects a different neural receptor on the retina. The critical receptive unit of the eye (a particular molecule within the receptor), therefore, is sensitive to a single photon. This is what it means to say that ‘human vision is as sensitive as is physically possible’.

Difference thresholds: detecting changes in intensity

Measuring absolute threshold entails determining by how much stimulus intensity must be raised from zero in order to be distinguishable from zero. More generally, we can ask: By how much must stimulus intensity be raised from some arbitrary level (called a standard) in order that the new, higher level be distinguishable from the base level?

Figure 4.2 Results from an Experiment on Change Detection. Plotted on the vertical axis is the percentage of times the participant responds, ‘Yes, I detect more than the standard’; on the horizontal axis is the measure of the magnitude of the physical stimulus. The standard stimulus in this example is in the center of the range of stimuli. Such a graph may be obtained for any stimulus dimension for which an individual is sensitive to differences.

This is the measurement of change detection. In a typical change-detection study, observers are presented with a pair of stimuli. One of them is the standard – it is the one to which other stimuli are compared.
The others are called comparison stimuli. On each presentation of the pair, observers are asked to respond to the comparison stimulus with ‘more’ or ‘less’. What is being measured is the difference threshold or just noticeable difference (jnd), the minimum difference in stimulus magnitude necessary to tell two stimuli apart. To illustrate, imagine measuring the visual system’s sensitivity to changes in the brightness of a light. Typical results are shown in Figure 4.2. In this experiment the standard (a 50-watt bulb) was presented along with each comparison stimulus (ranging from 47 watts to 53 watts, in 1-watt steps) dozens of times. We have plotted the percentage of times in which each comparison stimulus was judged to be ‘brighter’ than the standard. In order to determine the jnd, two points are estimated, one at 75 percent and the other at 25 percent on the ‘percent brighter’ axis. Psychologists have agreed that half of this distance in stimulus intensity units will be considered to be the just noticeable difference. In this case, then, the estimated jnd is (51 − 49)/2 = 1 watt. If an individual’s sensitivity to change is high, meaning that he or she can notice tiny differences between stimuli, the estimated value of the jnd will be small. On the other hand, if sensitivity is not as high, the estimated jnd’s will be larger. This kind of experiment was first carried out about a century and a half ago, by two German scientists: Ernst
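The jnd estimate can be computed in the same interpolate-and-subtract way. A minimal sketch, with made-up ‘percent brighter’ data chosen to echo the 50-watt example in the text (the function and data are illustrative, not taken from Figure 4.2):

```python
# Estimate a just noticeable difference (jnd) from change-detection data:
# jnd = (intensity at 75% 'brighter' - intensity at 25% 'brighter') / 2.
# The wattages and percentages below are illustrative, not real data.

def interpolate(xs, ys, criterion):
    """Intensity at which ys crosses criterion (linear interpolation)."""
    for x0, y0, x1, y1 in zip(xs, ys, xs[1:], ys[1:]):
        if y0 <= criterion <= y1:
            return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("criterion not crossed")

watts        = [47, 48, 49, 50, 51, 52, 53]   # comparison stimuli
pct_brighter = [2, 10, 25, 50, 75, 90, 98]    # judged brighter than standard

jnd = (interpolate(watts, pct_brighter, 75) -
       interpolate(watts, pct_brighter, 25)) / 2
print(jnd)  # 1.0 watt, matching the example in the text
```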
Heinrich Weber, a physiologist, and Gustav Fechner, a physicist. Their seminal finding was that the larger the value of the standard stimulus, the less sensitive the sensory system is to changes in intensity. Actually, under a wide range of circumstances, the relation is more precise and is this: the intensity by which the standard must be increased to be noticed is proportional to the intensity of the standard. For example, if a room contained 25 lit candles and you could just detect the addition of two candles – that is, 8 percent more – then if the room contained 100 candles it would require an additional 8% × 100 = 8 candles for you to be able to detect the change. This proportional relation has come to be known as the Weber-Fechner law, and the constant of proportionality (8 percent in our candle example) is referred to as the Weber fraction.

Table 4.2 Just noticeable differences (jnd) for various sensory qualities (expressed as the percentage change required for reliable change detection)

Quality               Just noticeable difference (jnd)
Light intensity       8%
Sound intensity       5%
Sound frequency       1%
Odor concentration    15%
Salt concentration    20%
Lifted weights        2%
Electric shock        1%

Table 4.2 shows some typical jnd’s for different sensory qualities, expressed in terms of the Weber fraction. It shows, among other things, that we are generally more sensitive to changes in light and sound – that is, we can detect a smaller increase – than is the case with taste and smell. These values can be used to predict how much a stimulus will need to be changed from any level of intensity in order for people to notice the change reliably. For example, if a theater manager wished to produce a subtle but noticeable change in the level of lighting on a stage, he or she might increase the lighting level by 10 percent. This would mean a 10-watt increase if a 100-watt bulb was being used to begin with, but it would mean a 1,000-watt increase if 10,000 watts were already flooding the stage.
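The Weber-fraction arithmetic generalizes to any quality in Table 4.2: the just-detectable increase is simply the fraction times the current intensity. A small sketch (the dictionary and function names are our own):

```python
# Predict the just-detectable increase from a Weber fraction:
# delta_I = k * I, where k is the Weber fraction for that modality.
# Fractions are the ones listed in Table 4.2.

WEBER_FRACTIONS = {
    'light intensity': 0.08,
    'sound intensity': 0.05,
    'salt concentration': 0.20,
    'lifted weights': 0.02,
}

def just_detectable_increase(modality, intensity):
    """Smallest reliably noticeable increase at the given intensity."""
    return WEBER_FRACTIONS[modality] * intensity

# The candle example from the text: 25 candles -> 2 more; 100 candles -> 8 more.
print(round(just_detectable_increase('light intensity', 25), 1))   # 2.0
print(round(just_detectable_increase('light intensity', 100), 1))  # 8.0
```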
Similarly, if a soft-drink manufacturer wanted to produce a beverage that tasted notably sweeter than a competitor’s, they could employ the Weber fraction for sweetness for this purpose. This leads to a final important point regarding psychophysical procedures: they often have direct and useful applications to the real world. For instance, Twinkies (a popular American snack cake) include the ingredients sodium stearoyl lactylate, polysorbate 60, and calcium sulphate. It is unlikely that these substances taste good; however, if the manufacturer is careful to keep their intensities below the absolute taste threshold, they can be added as preservatives without fear of degrading the taste.

Suprathreshold sensation

Knowledge of sensory thresholds in vision and other sensory modalities is important in understanding the fundamentals of how sense organs are designed – for example, the knowledge that a molecule of light-sensitive pigment in the eye responds to a single photon of light is an important clue in understanding how the light-sensitive pigments work. However, quite obviously, most of our everyday visual behavior takes place in the context of above-threshold, or suprathreshold, conditions. Beginning with Weber and Fechner in the mid-nineteenth century, scientists have been investigating the relation between suprathreshold stimulus intensities and corresponding sensory magnitudes by presenting stimuli of various intensities to humans and attempting to measure the magnitude of the humans’ responses to them. Imagine yourself in the following experiment. You sit in a dimly lit room looking at a screen. On each of a series of trials, a small spot of light appears on the screen. The spot differs in physical intensity from one trial to the next. Your job is to assign a number on each trial that reflects how intense that trial’s light spot appears to you.
So to a very dim light you might assign a ‘1’, while to a very bright light you might assign ‘100’. Figure 4.3 shows typical data from such an experiment.

Figure 4.3 Psychophysical Data from a Magnitude-Estimation Experiment. Plotted on the vertical axis is the average magnitude estimate given by the observer; on the horizontal axis is the measure of the magnitude of the physical stimulus. Such a graph may be obtained for any stimulus dimension the observer can perceive.
In the mid-twentieth century, the American psychologist S. S. Stevens carried out an intensive investigation of suprathreshold sensation using this kind of experiment. To interpret his data, Stevens derived a law, bearing his name, from two assumptions. The first assumption is that the Weber-Fechner law, described above, is correct; that is, a jnd above some standard stimulus is some fixed percentage of the standard. The second assumption is that psychological intensity is appropriately measured in units of jnd’s (just as distance is appropriately measured in meters or weight is appropriately measured in grams). This means, for example, that the difference between four and seven jnd’s (i.e., three jnd’s) would seem to an observer the same as the difference between ten and thirteen jnd’s (also three). We will skip the mathematical derivations and go straight to the bottom line: Stevens’ law, implied by these assumptions, is that perceived psychological magnitude (C) is a power function of physical magnitude (F). By this is meant that the relation between C and F is (basically)

C = F^r

where r is an exponent unique to each sensory modality. The function shown in Figure 4.3 is a power function with an exponent of 0.5 (which means that C is equal to the square root of F). Stevens and others have reported literally thousands of experiments in support of the proposition that the relation between physical and psychological intensity is a power function.

Figure 4.4 Psychophysical Data from a Magnitude-Estimation Experiment. Here different curves are shown for different sensory modalities that entail different exponents (r = 0.5, r = 1.0, r = 1.5). An exponent less than 1.0 produces a concave-down curve, an exponent of 1.0 produces a linear curve, and an exponent greater than 1.0 produces a concave-up curve.
It is of some interest to measure the value of the exponent for various sensory dimensions. The mathematically astute among you have probably noticed that a power function behaves quite differently depending on whether r, the exponent, is less than or greater than 1.0. As illustrated in Figure 4.4, a power function with a less-than-1 exponent, such as that corresponding to loudness, is concave down; that is, increasing levels of physical intensity lead to progressively smaller increases in sensation. In contrast, a power function with a greater-than-1 exponent, such as that corresponding to electric shock, is concave up; that is, increasing levels of physical intensity lead to progressively greater increases in sensation. The exact reasons why the exponents differ among the sensory modalities are not known. It is interesting to note, however, that relatively benign sensory modalities, such as light intensity, have less-than-1 exponents, while relatively harmful ones, such as electric shock, have greater-than-1 exponents. This configuration probably serves adaptive purposes. For relatively ‘benign’ modalities such as light and sound intensity, the relation between physical intensity and the psychological response simply conveys useful information that may or may not be immediately relevant: for instance, a loud train whistle, bespeaking a nearby train, signals a greater need to be cautious than a softer whistle indicating that the train is far away. However, a modality like pain signals the need for immediate action, and it would make sense to make it as obvious to the perceiver as possible that such action should be taken because bodily harm is likely: if your finger accidentally comes in contact with a red-hot coal, it is important that this highly pain-evoking stimulus produce a very high response; otherwise loss of life or limb could result!
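Stevens’ law itself takes one line of code. The sketch below (our own illustration; the exponents 0.5 and 1.5 match the curves in Figure 4.4, not precise published values) shows how a compressive versus an expansive exponent changes the effect of doubling the physical intensity:

```python
# Stevens' law: C = F ** r, where C is psychological magnitude,
# F is physical magnitude, and r depends on the sensory modality.
# The exponents below are illustrative; published values vary.

def sensation_magnitude(F, r):
    return F ** r

# Compressive modality (r < 1): doubling the physical intensity
# less than doubles the sensation.
print(sensation_magnitude(100, 0.5) / sensation_magnitude(50, 0.5))  # ~1.41

# Expansive modality (r > 1): doubling the physical intensity
# more than doubles the sensation.
print(sensation_magnitude(100, 1.5) / sensation_magnitude(50, 1.5))  # ~2.83
```

Doubling F multiplies C by 2**r, which is why the ratio depends only on the exponent, not on the starting intensity.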
Signal detection theory

At first glance, it may appear as if a sensory system’s job is a simple one: if something important is there – say a malignant tumor in a lung – then register its presence via the sensory information that it provides so that the observer can take appropriate action, such as considering possible treatments. In reality, however, life is not that simple because, as any communications engineer will tell you, information of any sort consists of both signal and noise. Do not be confused by the term ‘noise’, which in common language refers to the auditory domain only (as in ‘There’s an awful lot of unpleasant noise coming from that party across the street!’). In the world of science, ‘signal’ refers to the important, relevant part of the information, while ‘noise’ refers to the unimportant, irrelevant part. As we shall demonstrate below in the visual modality, noise occurs as part of any kind of information. Critically, in any modality, the task of the detector is to separate out the signal, which it wants, from the noise, which can obscure and disguise it.
To illustrate this problem in a real-life context we will describe an American medical malpractice lawsuit. A radiologist, Dr. A, examined a chest X-ray of a patient, Mr. P, during a routine medical exam. Sadly, there was a small but cancerous tumor in Mr. P’s chest, undetected by Dr. A, that, three years later, had grown substantially and resulted in Mr. P’s death. Mr. P’s family filed the lawsuit against Dr. A, asserting that the tumor had been detectable in the original X-ray and that Dr. A should have detected it. During the ensuing trial Mr. P’s family called upon another radiologist, Dr. B, as an expert witness. As part of his preparation, Dr. B first viewed recent X-rays, taken just before Mr. P’s death, in which the tumor, large and ominous at that point, was clearly visible. Dr. B then viewed the original X-ray – the one seen by Dr. A – and easily ‘detected’ the then-smaller tumor that Dr. A had missed. Dr. B’s conclusion was that, because he, Dr. B, was able to detect the tumor in the original X-ray, Dr. A should have also detected it, and Dr. A, in missing it, was therefore negligent. This case raises several interesting issues in the domain of sensation and perception. One, roughly characterized as ‘hindsight is 20-20’, will be discussed in the next chapter. In this chapter, however, we will focus on another issue: the distinction between sensitivity and bias. To understand this distinction, let’s consider generally the task of a radiologist viewing an X-ray, trying to determine whether it is normal or whether it shows the presence of a tumor. In scientific language, this task is, as we’ve just noted, one of trying to detect a signal embedded in noise. This concept is illustrated in Figure 4.5. There are three panels in the figure, each of which has the same background, which consists of random visual noise. Suppose that your task was to decide whether there was a small, black, roughly diamond-shaped blob embedded somewhere in this noise.
This task is strongly analogous to the radiologist’s task of finding a poorly defined tumor in an X-ray. Consider first the left panel of Figure 4.5. As indicated, there is, in this panel, only noise (we know this is true because we created it that way). Would you indicate that the signal was present? Well, there’s not much evidence for the small diamond (as indeed there shouldn’t be, since actually there isn’t one). There is, however, a random collection of noise over at the right, indicated by the arrow in the left panel, that could perhaps be the sought-after signal, and you might incorrectly choose it – or maybe you’d correctly decide that there’s only noise. In the middle panel, a weak signal is present, also indicated by an arrow. In this case, you might correctly choose it, or you might still feel that it’s only noise and incorrectly claim that no signal is present. Finally, the right panel shows a strong signal, which you would probably correctly detect as a signal.

Hits and false alarms

Now suppose that you are given a whole series of stimuli like the ones in Figure 4.5. Some, like the left panel, contain only noise, while others, like the right panel, contain noise plus signal. Your task is to say ‘yes’ to those containing signal and ‘no’ to those containing only noise. Importantly, it is not possible to carry out this task perfectly. To see why, look at the left panel of Figure 4.5, which contains only noise. You might, upon inspecting it, think it contains a signal – for instance, the area indicated by the arrow, which resembles the kind of black blob that you are seeking. So you might reasonably respond ‘yes’ to it, in which case you would be incorrect. If you did this, you would make an error that is referred to as a false alarm. In the kind of signal-detection experiment that we have just described, we could measure the proportion of noise-only trials that result in an incorrect ‘yes’ response.
This proportion is referred to as the false-alarm rate.

Figure 4.5 Examples of Signals Embedded in Noise. Each panel shows a background of random noise. In the left panel, there is no signal, although the small blob indicated by the arrow may look like a signal. In the middle panel, a weak signal has been added, indicated by the arrow. In the right panel, the signal is strong and obvious. (Courtesy of Geoffrey Loftus)

We can also measure the proportion of noise-plus-signal trials that result in a correct ‘yes’ response. Such responses are
referred to as hits, and the proportion of hits is referred to as the hit rate. We now have a powerful tool to investigate the sensitivity of some sense organ. We know that if no signal is there to be detected, the observer says ‘yes’ anyway with some probability equal to the false-alarm rate. So we infer that the observer does detect a signal only when the hit rate exceeds the false-alarm rate. If the hit rate exceeds the false-alarm rate by a lot, we infer that sensitivity is high. If the hit rate exceeds the false-alarm rate by only a little, we infer that sensitivity is low. If the hit rate equals the false-alarm rate, we infer that sensitivity is zero.

Sensitivity and bias

Notice something interesting here. An observer is at liberty to choose what his or her false-alarm rate will be. Imagine two hypothetical observers, Charlotte and Linda, who are equally good at detecting signals but who differ in an important way. In particular, Charlotte is a ‘conservative’ observer – that is, Charlotte requires a lot of evidence to claim that a signal is present. Charlotte will say ‘yes’ infrequently, which means that she will have a low false-alarm rate, but also a low hit rate. Suppose in contrast that Linda is a ‘liberal’ observer – she will claim ‘signal’ given the slightest shred of evidence for a signal. Linda, in other words, will say ‘yes’ frequently, which will endow her with a high false-alarm rate, but also with a high hit rate. The most useful characteristic of a signal-detection analysis is that it allows separation of bias (referred to as β, or beta) and sensitivity (referred to as d′, pronounced ‘dee-prime’). In our Charlotte–Linda example, Charlotte and Linda would be determined to have equal sensitivities, even though they have quite different bias values. Let’s conclude this discussion by going back to the medical-malpractice lawsuit that we described earlier. Notice that there are two observers: Dr. A and Dr. B.
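Under the standard equal-variance Gaussian model of signal detection theory (an assumption the text doesn’t spell out, but the usual basis for d′), sensitivity and bias can be computed directly from the hit and false-alarm rates. The hit/false-alarm numbers for ‘Charlotte’ and ‘Linda’ below are invented for illustration, chosen so that the two observers come out equally sensitive:

```python
# Compute sensitivity (d') and a bias measure from hit and false-alarm
# rates, under the equal-variance Gaussian signal detection model.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity: distance between signal and noise distributions."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Bias: positive values indicate a conservative observer."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))

# Charlotte (conservative) and Linda (liberal), invented data:
charlotte = (0.60, 0.10)   # low hit rate, low false-alarm rate
linda     = (0.90, 0.40)   # high hit rate, high false-alarm rate

print(round(d_prime(*charlotte), 2), round(d_prime(*linda), 2))    # equal, ~1.53
print(round(criterion(*charlotte), 2), round(criterion(*linda), 2))  # opposite signs
```

Despite their very different ‘yes’ rates, the two observers have identical d′ values; only the criterion (bias) differs, which is exactly the separation the text describes.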
The suit alleges that Dr. A has poor sensitivity – poor ability to detect a tumor – compared to Dr. B, and it is for this reason (essentially) that Dr. A is alleged to have been negligent. However, we can now see that this conclusion doesn’t necessarily follow from the fact that Dr. A didn’t detect the original tumor while Dr. B did. It is equally plausible that Dr. B simply had more of a bias to say ‘yes, I detect a tumor’ than did Dr. A. This explanation actually makes a good deal of sense. Psychologists have discovered that, in a signal-detection situation, a number of factors influence bias, including expectation: Reasonably enough, the greater the observer’s expectation that a signal will be present, the greater is the observer’s bias to respond ‘yes’. And, of course, Dr. B had good reason to expect the presence of a tumor, whereas Dr. A had very little reason to expect it.

Sensory coding

A 1966 movie titled Fantastic Voyage featured a submarine carrying a collection of B-list actors, shrunk by a technological miracle to microscopic size, and inserted into a human body with the intent of traveling to the brain to destroy a life-threatening blood clot. Among the film’s many inadvertently comical features was a scene in which a series of large, red, amorphous blobs are seen whizzing past the window, in response to which one of the characters exclaims, ‘There’s light going to the brain; we must be near the eye!’ While (maybe) good theater, this scene violates the main feature of how sensory systems work: it confuses the original information from the world (red light in this instance) with the representation of light in the brain which, as with all sensory systems, is a pattern of neural activity.
As described in Chapter 2, all information transmission in the brain is carried out by neural impulses. This means that, for instance, the conscious perception of red light doesn’t issue directly from red light pulsing through the brain’s innards (as in Fantastic Voyage), but rather from a particular pattern of neural impulses that is triggered by the arrival of red light at the eye. This is true of all sensory systems. Imagine, unpleasant though it may be, the excruciating pain that would result from accidentally touching a red-hot fire poker. It may seem as if the conscious experience of pain comes from the poker itself and the associated damage to your skin. But in fact, the conscious experience is due entirely to the resulting pattern of neuronal activity in your brain. We’ll discuss this very issue later in the ‘Cutting Edge Research’ section of this chapter. But for the moment, back to basics. Each sensory system has two fundamental problems to solve: first, how to translate incoming physical information (for example, light) into an initial neural representation; and second, how to encode various features of that physical information (e.g., intensity, hue) into a corresponding neural representation. In this section we address these questions of sensory coding. The first problem is solved by specialized cells in the sense organs called receptors. For instance, the receptors for vision, to which we briefly alluded earlier, are located in a thin layer of tissue on the inside of the eye. Each visual receptor contains a chemical that reacts to light, which in turn triggers a series of steps that results in a neural impulse. The receptors for audition are fine hair cells located deep in the ear; vibrations in the air bend these hair cells, thereby creating a neural impulse. Similar descriptions apply to the other sensory modalities.
A receptor is a specialized kind of nerve cell or neuron (see Chapter 2); when it is activated, it passes its electrical signal to connecting neurons. The signal travels until it reaches its receiving area in the cortex, with different sensory modalities sending signals to different receiving areas. Somewhere in the brain the electrical signal results in the conscious sensory experience that, for example, underlies responses in a psychophysical experiment. Thus, when we experience a touch, the experience is occurring in our
brain, not in our skin. One demonstration of this comes from the Canadian brain surgeon Wilder Penfield. During brain surgeries on awake patients he sometimes electrically stimulated the surface of a region of the parietal lobe called primary somatic sensory cortex with an electrode; patients reported feeling a tingling sensation in a specific location on their bodies (Penfield & Rasmussen, 1950). As he moved his electrode along this strip of cortex, the patients felt the tingling move along their bodies. In normal life, the electrical impulses in the brain that mediate the experience of touch are themselves caused by electrical impulses in touch receptors located in the skin. Penfield apparently stimulated the brain regions where those impulses are received and converted into touch experiences. Similarly, our experience of a bitter taste occurs in our brain, not in our tongue; but the brain impulses that mediate the taste experience are themselves caused by electrical impulses in taste receptors on the tongue. In this way our receptors play a major role in relating external events to conscious experience. Numerous aspects of our conscious perceptions are caused by specific neural events that occur in the receptors.

Coding of intensity and quality

Our sensory systems evolved to pick up information about objects and events in the world. What kind of information do we need to know about an event such as a brief flash of a bright red light? Clearly, it would be useful to know its intensity (bright), quality (red), duration (brief), location, and time of onset. Each of our sensory systems provides some information about these various attributes, although most research has focused on the attributes of intensity and quality. When we see a bright red color patch, we experience the quality of redness at an intense level; when we hear a faint, high-pitched tone, we experience the quality of the pitch at a nonintense level.
The receptors and their neural pathways to the brain must therefore code both intensity and quality. How do they do this? Researchers who study these coding processes need a way of determining which specific neurons are activated by which specific stimuli. The usual means is to record the electrical activity of single cells in the receptors and neural pathways to the brain while a subject (which, in the case of single-cell recording, is generally an animal such as a cat or a monkey) is presented with various inputs or stimuli. By such means, one can determine exactly which attributes of a stimulus a particular neuron is responsive to.

Figure 4.6 Single-Cell Recording. An anesthetized monkey is placed in a device that holds its head in a fixed position. A stimulus, often a flashing or moving bar of light, is projected onto the screen. A microelectrode implanted in the visual system of the monkey monitors activity from a single neuron, and this activity is amplified and displayed on an oscilloscope.

A typical single-cell recording experiment is illustrated in Figure 4.6. This is a vision experiment, but the procedure is similar for studying other senses. Before the experiment, the animal (in this case a monkey) has undergone a surgical procedure in which thin wires are inserted into selected areas of its visual cortex. The thin wires are microelectrodes, insulated except at their tips, which can be used to record the electrical activity of the neurons they are in contact with. They cause no pain, and the monkey moves around and lives quite normally. During the experiment, the monkey is placed in a testing apparatus and the microelectrodes are connected to recording and amplifying devices. The monkey is then exposed to various visual stimuli on a computer-controlled monitor.
For each stimulus, the researcher can determine which neurons respond to it by observing which microelectrodes produce sustained outputs. Because the electrical outputs are tiny, they must be amplified and displayed on an oscilloscope, which converts the electrical signals into a graph of changing electrical voltage. Most neurons emit a series of nerve impulses that appear on a second computer screen in whatever format the experimenter wishes. Even in the absence of a signal (i.e., even in a noise-only situation), many cells will respond at a slow rate. If a signal to which the neuron is sensitive is presented, the cells respond faster. This is the most fundamental neural correlate of the signal-detection situation that we described above. With the aid of single-cell recordings, researchers have learned a good deal about how sensory systems code intensity and quality. The primary means for coding the intensity of a stimulus is via the number of neural impulses in each unit of time – that is, the rate of neural impulses. We can illustrate this point with the sense of touch. If someone lightly touches your arm, a series of
electrical impulses are generated in a nerve fiber. If the pressure is increased, the impulses remain the same in size but increase in number per unit of time (see Figure 4.7).

Figure 4.7 Coding Intensity. Responses of a nerve fiber from the skin to (a) soft, (b) medium, and (c) strong pressure applied to the fiber’s receptor. Increasing the stimulus strength increases both the rate and the regularity of nerve firing in this fiber.

The same is true for other sensory modalities. In general, the greater the intensity of the stimulus, the higher the neural firing rate; and in turn, the greater the firing rate, the greater the perceived magnitude of the stimulus. The intensity of a stimulus can also be coded by other means. One alternative is coding by the temporal pattern of the electrical impulses. At low intensities, nerve impulses are farther apart in time, and the length of time between impulses is variable. At high intensities, though, the time between impulses may be quite constant (see Figure 4.7). Another alternative is coding by the number of neurons activated: The more intense the stimulus, the more neurons are activated. Coding the quality of a stimulus is a more complex matter. The key idea behind coding quality was proposed by Johannes Müller in 1825. Müller suggested that the brain can distinguish between information from different sensory modalities – such as lights and sounds – because they involve different sensory nerves (some nerves lead to visual experiences, others to auditory experiences, and so on). Müller’s idea of specific nerve energies received support from subsequent research demonstrating that neural pathways originating in different receptors terminate in different areas of the cortex. It is now generally agreed that the brain codes the qualitative differences between sensory modalities according to the specific neural pathways involved.
But what about the distinguishing qualities within a sense? How do we tell red from green, or sweet from sour? It is likely that, again, the coding is based on the specific neurons involved. To illustrate, there is evidence that we distinguish between sweet and sour tastes by virtue of the fact that each kind of taste has its own nerve fibers. Thus, sweet fibers respond primarily to sweet tastes, sour fibers primarily to sour tastes, and likewise for salty fibers and bitter fibers. Specificity is not the only plausible coding principle. A sensory system may also use the pattern of neural firing to code the quality of a sensation. While a particular nerve fiber may respond maximally to a sweet taste, it may respond to other tastes as well, to varying degrees. One fiber may respond best to sweet tastes, less to bitter tastes, and even less to salty tastes; a sweet-tasting stimulus would thus lead to activity in a large number of fibers, with some firing more than others, and this particular pattern of neural activity would be the system’s code for a sweet taste. A different pattern would be the code for a bitter taste. As we will see when we discuss the senses in detail, both specificity and patterning are used in coding the quality of a stimulus.

INTERIM SUMMARY

• The senses include the four traditional ones of seeing, hearing, smell, and taste, plus three ‘touch’ sensations – pressure, temperature, and pain – plus the body senses.
• Sensations are psychological experiences associated with simple stimuli that have not, as yet, been endowed with meaning.
• For each sense, two kinds of threshold sensitivity can be defined: absolute threshold (the minimum amount of stimulus energy that reliably registers on the sensory organ) and difference threshold (the minimum difference between two stimuli that can be reliably distinguished by the sensory organ).
• The psychophysical function is the relation between stimulus intensity and the magnitude of sensation for above-threshold (‘suprathreshold’) stimuli.
• Sensation is often viewed as the process of detecting a signal that is embedded in noise. In some cases, a signal may be falsely ‘detected’ even when only noise is present – a false alarm. Correctly detecting a signal that is present is a hit. The difference between hits and false alarms is a measure of the magnitude of the stimulus’s effect on the sensory organ. Signal-detection theory allows the process of detecting a stimulus to be separated into two numbers, one representing the observer’s sensitivity to the signal and the other representing the observer’s bias to respond ‘signal present’.
• Every sensory modality must recode, or transduce, the physical energy engendered by a stimulus into neural impulses. Such coding, unique to each sensory modality, must capture both stimulus intensity and various qualitative characteristics of the stimulus.
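The across-fiber pattern idea described just before the interim summary can be made concrete with a minimal sketch. The three ‘fibers’ and their firing-rate profiles are invented for illustration; real gustatory data are far richer.

```python
# Across-fiber pattern coding sketch: each taste quality is coded by
# the whole profile of activity across fibers, not by any one fiber.
# The firing-rate profiles (impulses/sec) are invented numbers.

PROFILES = {
    "sweet":  [50, 10, 5],   # fiber 1 responds best to sweet ...
    "bitter": [10, 45, 8],   # ... fiber 2 best to bitter ...
    "salty":  [5, 12, 40],   # ... fiber 3 best to salty.
}

def decode(observed_rates):
    """Identify a taste as the stored profile closest to the observed
    pattern (nearest neighbor by squared distance)."""
    def distance(profile):
        return sum((a - b) ** 2 for a, b in zip(profile, observed_rates))
    return min(PROFILES, key=lambda taste: distance(PROFILES[taste]))

# A noisy sweet stimulus excites all three fibers to some degree, yet
# the overall pattern still reads out as 'sweet':
print(decode([48, 14, 6]))  # sweet
```

Notice that every fiber fires to the noisy stimulus; it is the relative pattern across the population, not any single fiber’s activity, that carries the quality.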
CRITICAL THINKING QUESTIONS

1 How might you use measurements of the just noticeable difference (jnd) in loudness to describe the change in the auditory environment caused by the addition of a new airline to those serving your local airport? Would you be able to explain your measurement method to a panel of concerned citizens?

2 In the text we described a radiologist, Dr. A, who was accused of missing a tumor in an x-ray, and Dr. B, the expert witness in the resulting lawsuit, who claimed that the tumor was clearly visible. Dr. B’s implicit conclusion is that Dr. A is not as good at detecting tumors as he, Dr. B, is. State clearly why Dr. B’s conclusion is flawed given the available information, and design two experiments: the first addressing the issue of whether Dr. B perceives tumors better than Dr. A, and the second addressing how easy it would be for radiologists in general to have detected the original tumor missed by Dr. A.

VISION

Humans are generally credited with the following senses: (a) vision; (b) audition; (c) smell; (d) taste; (e) touch (or the skin senses); and (f) the body senses (which are responsible for sensing, for example, the position of the head relative to the trunk). Since the body senses do not always give rise to conscious sensations of intensity and quality, we will not consider them further in this chapter. Only vision, audition, and smell are capable of acquiring information that is at a distance from us, and of this group, vision is the most finely tuned in humans. In this section we first consider the nature of the stimulus energy to which vision is sensitive; next we describe the visual system, with particular emphasis on how its receptors carry out the transduction process; and then we consider how the visual modality processes information about intensity and quality.

Light and vision

Each sense responds to a particular form of physical energy, and for vision the physical stimulus is light.
Light is a form of electromagnetic energy, energy that emanates from the sun and the rest of the universe and constantly bathes our planet. Electromagnetic energy is best conceptualized as traveling in waves, with wavelengths (the distance from one crest of a wave to the next) varying tremendously, from the shortest cosmic rays (4 trillionths of a centimeter) to the longest radio waves (several kilometers). Our eyes are sensitive to only a tiny portion of this continuum: wavelengths of approximately 400 to 700 nanometers, where a nanometer is a billionth of a meter. Visible electromagnetic energy – light – therefore makes up only a very small part of electromagnetic energy.

The visual system

The human visual system consists of the eyes, several parts of the brain, and the pathways connecting them. Go back to Figure 2.14 (visual pathways figure) for a simplified illustration of the visual system and notice in particular that (assuming you’re looking straight ahead) the right half of the visual world is initially processed by the left side of the brain and vice versa. The first stage in vision is, of course, the eye, which contains two systems: one for forming the image and the other for transducing the image into electrical impulses. The critical parts of these systems are illustrated in Figure 4.8. An analogy is often made between an eye and a camera. While this analogy is misleading for many aspects of the visual system, it is appropriate for the image-forming system, whose function is to focus light reflected from an object so as to form an image of the object on the retina,

Figure 4.8 Top View of the Right Eye. Light entering the eye on its way to the retina passes through the cornea, the aqueous humor, the lens, and the vitreous humor.
The amount of light entering the eye is regulated by the size of the pupil, a small hole toward the front of the eye formed by the iris. The iris consists of a ring of muscles that can contract or relax, thereby controlling pupil size. The iris gives the eyes their characteristic color (blue, brown, and so forth).
Figure 4.9 Image Formation in the Eye. Some of the light from an object enters the eye, where it forms an image on the retina. Both the cornea and the lens bend the light rays, as would a lens in a telescope. Based purely on optical considerations we can infer that the retinal image is inverted.

which is a thin layer of tissue at the back of the eyeball (see Figure 4.9). The image-forming system itself consists of the cornea, the pupil, and the lens. The cornea is the transparent front surface of the eye: Light enters here, and rays are bent inward by it to begin the formation of the image. The lens completes the process of focusing the light on the retina (see Figure 4.9). To focus on objects at different distances, the lens changes shape. It becomes more spherical for near objects and flatter for far ones. In some eyes, the lens does not become flat enough to bring far objects into focus, although it focuses well on near objects; people with eyes of this type are said to be myopic (nearsighted). In other eyes, the lens does not become spherical enough to focus on near objects, although it focuses well on far objects; people with eyes of this type are said to be hyperopic (farsighted). As otherwise normal people get older (into their 40s), the lens loses much of its ability to change shape and refocus. Such optical defects can, of course, generally be corrected with eyeglasses or contact lenses. The pupil, the third component of the image-forming system, is a circular opening between the cornea and the lens whose diameter varies in response to the level of light present. It is largest in dim light and smallest in bright light, thereby helping to ensure that enough light passes through the lens to maintain image quality at different light levels. All of these components focus the image on the retina. There the transduction system takes over.
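The geometry of accommodation can be sketched with the thin-lens equation. Treating the cornea-plus-lens as a single thin lens sitting a fixed 1.7 cm in front of the retina is a textbook idealization; the numbers below are assumptions chosen only to show the direction of the effect.

```python
# Thin-lens sketch of accommodation: 1/f = 1/d_object + 1/d_image.
# With the retina at a fixed image distance, focusing a nearer object
# requires a shorter focal length, i.e., a rounder lens. The
# single-lens treatment and the 1.7 cm figure are idealizations.

EYE_LENGTH_CM = 1.7  # assumed lens-to-retina distance

def focal_length_needed(object_distance_cm):
    """Focal length (cm) that focuses an object at the given distance
    onto the retina."""
    return 1.0 / (1.0 / object_distance_cm + 1.0 / EYE_LENGTH_CM)

far  = focal_length_needed(1000.0)  # a distant object
near = focal_length_needed(25.0)    # typical reading distance
assert near < far  # near focus demands a more powerful (rounder) lens
```

In these terms, a myopic eye is one whose lens cannot flatten enough to reach the ‘far’ focal length, and a hyperopic eye is one that cannot round up enough to reach the ‘near’ one.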
This system begins with various types of neural receptors, which are spread over the retina somewhat analogously to the way in which photodetectors are spread over the imaging surface of a digital camera. There are two types of receptor cells, rods and cones, so called because of their distinctive shapes, shown in Figure 4.10. The two kinds of receptors are specialized for different purposes. Rods are specialized for seeing at night; they operate at low intensities and lead to low-resolution, colorless sensations. Cones are specialized for seeing during the day; they respond to high intensities and result in high-resolution sensations that include color. The retina also contains a network of other neurons, along with support cells and blood vessels.

Figure 4.10 A Schematic Picture of the Retina. This is a schematic drawing of the retina based on an examination with an electron microscope. The bipolar cells receive signals from one or more receptors and transmit those signals to the ganglion cells, whose axons form the optic nerve. Note that there are several types of bipolar and ganglion cells. There are also sideways or lateral connections in the retina: neurons called horizontal cells make lateral connections at a level near the receptors, and neurons called amacrine cells make lateral connections at a level near the ganglion cells. (J. E. Dowling and B. B. Boycott (1969) ‘Organization of the Primate Retina’, Proceedings of the Royal Society of London, Series B, Vol. 166, pp. 80–111. Adapted by permission of the Royal Society of London.)

When we want to see the details of an object, we routinely move our eyes so that the object is projected onto a small region at the center of the retina called the fovea. The reason we do this has to do with the distribution of receptors across the retina. In the fovea, the receptors are plentiful and closely packed; outside the fovea, on the periphery of the retina, there are fewer receptors. More closely packed receptors means higher resolution, much as a computer monitor set to more pixels per screen (e.g., 1,600 × 1,200) has a higher resolution than when it is set to fewer pixels per screen (e.g., 640 × 480). The high-density fovea is therefore the highest-resolution region of the retina, the part that is best at seeing details. To get a sense of how your perception of detail changes as an image is moved away from your fovea, look at Figure 4.11 and keep your eyes trained on the central letter (A). The sizes of the surrounding letters have been adjusted so that they are all approximately equal in visibility.

Figure 4.11 Visual Acuity Decreases in the Periphery. Letter sizes have been scaled so that when the central A is looked at directly, all the other letters are approximately equally easy to read.

Figure 4.12 Locating Your Blind Spot. (a) With your right eye closed, stare at the cross in the upper right-hand corner. Put the book about a foot from your eye and move it forward and back. When the blue circle on the left disappears, it is projected onto the blind spot. (b) Without moving the book and with your right eye still closed, stare at the cross in the lower right-hand corner. When the white space falls in the blind spot, the blue line appears to be continuous. This phenomenon helps us understand why we are not ordinarily aware of the blind spot. In effect, the visual system fills in the parts of the visual field that we are not sensitive to; thus, they appear to be a part of the surrounding field.
Note that in order to achieve equal visibility, the letters on the outer circle must be about ten times larger than the central letter. Given that light reflected from an object has made contact with a receptor cell, how does the receptor transduce the light into electrical impulses? The rods and cones contain chemicals, called photopigments, that absorb light. Absorption of light by the photopigments starts a process that eventuates in a neural impulse. Once this transduction step is completed, the electrical impulses must make their way to the brain via connecting neurons. The responses of the rods and cones are first transmitted to bipolar cells and from there to other neurons called ganglion cells (refer to Figure 4.10). The long axons of the ganglion cells extend out of the eye to form the optic nerve to the brain. At the place where the optic nerve leaves the eye, there are no receptors; we are therefore blind to a stimulus in this region (see Figure 4.12). We do not notice this hole in our visual field – known as the blind spot – because the brain automatically fills it in (Ramachandran & Gregory, 1991).

Seeing light

Sensitivity

Our sensitivity to light is determined by the rods and cones. There are three critical differences between rods and cones that explain a number of phenomena involving perceived intensity, or brightness. The first difference is that rods and cones are activated under different levels of light. In broad daylight or in a well-lit room, only the cones are active; the rods send no meaningful neural signals. On the other hand, at night under a quarter moon or in a dimly lit room, only the rods are active. A second difference is that cones and rods are specialized for different tasks. This can be seen in the way they are connected to ganglion cells, as illustrated in Figure 4.13. The left side of the figure shows three adjacent cones, each of which is connected to a single ganglion cell.
This means that if a cone receives light it will increase the activity of its corresponding ganglion cell. Each ganglion cell is connected to its nearest neighbor by a connection that decreases the activity of that neighboring cell; it is also connected to the visual area of the brain by a long axon. Together these axons form the
optic nerve.

Figure 4.13 How Cones and Rods Connect to Ganglion Cells. This diagram shows a single spot of light shining onto a cone and a rod. To simplify matters, we have omitted several other types of cells located between receptors and ganglion cells. Arrows represent a signal to increase neuronal firing. Dots represent a signal to decrease neuronal firing. The long arrows emanating from the ganglion cells are axons that become part of the optic nerve.

The right side of the figure shows three adjacent rods, each of which is connected to three ganglion cells. Here, however, there are no connections among ganglion cells that decrease neural activity. To understand the implications of these wiring differences, suppose that a single spot of light were presented to either the cones or the rods. When it was presented to the cones, only one of the ganglion cells – the one corresponding to the location of the spot – would respond. However, when a spot of light was presented only to the rods, it would cause up to three ganglion cells to increase their activity. This combined activity would help ensure that the signal reached the brain, but it would also mean that there would be considerable uncertainty about the exact location of the spot of light. Thus, the connections among ganglion cells associated with cones help ensure detailed form perception under well-lit conditions, whereas the convergence of many rods on a single ganglion cell helps ensure sensitivity to light under low lighting conditions. Thus you can do tasks requiring high resolution, such as reading fine print, only in reasonably well-lit conditions in which the cones are active. A third difference is that rods and cones are concentrated in different locations on the retina. The fovea contains many cones but no rods. The periphery, on the other hand, is rich in rods but has relatively few cones.
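The functional consequence of the wiring difference in Figure 4.13 can be captured in a few lines. The photon counts, the detection threshold, and the pool size of three are all illustrative assumptions echoing the figure, not retinal measurements.

```python
# Toy version of Figure 4.13's wiring. Cones feed ganglion cells
# roughly one-to-one (good localization); several rods converge on a
# shared ganglion cell (good sensitivity, poor localization). The
# threshold and activity values are illustrative assumptions.

THRESHOLD = 5  # activity a ganglion cell needs before it signals

def cone_ganglion_outputs(receptor_activity):
    # one receptor -> one ganglion cell
    return [a for a in receptor_activity]

def rod_ganglion_outputs(receptor_activity, pool=3):
    # each ganglion cell sums the activity of 'pool' adjacent rods
    return [sum(receptor_activity[i:i + pool])
            for i in range(0, len(receptor_activity), pool)]

dim_light = [2, 2, 2]  # too dim for any single receptor's channel

assert all(a < THRESHOLD for a in cone_ganglion_outputs(dim_light))
assert all(a >= THRESHOLD for a in rod_ganglion_outputs(dim_light))
```

The pooled rod channel detects the dim patch that the cone channels miss, but because three receptors share one output it cannot say which of the three locations the light fell on – the sensitivity-versus-resolution trade-off described above.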
We have already seen one consequence of the smaller number of cones in the periphery (see Figure 4.11). A consequence of the distribution of rods can be seen when viewing stars at night. You may have noticed that in order to see a dim star as clearly as possible, it is necessary to look slightly to one side of the star. This ensures that the maximum possible number of rods is activated by the light from the star.

Dark adaptation

Imagine yourself entering a dark movie theater from a bright street. At first you can see hardly anything in the dim light reflected from the screen. However, in a few minutes you are able to see well enough to find a seat. Eventually you are able to recognize faces in the dim light. This change in your ability to see in the dark is referred to as dark adaptation. As you spend time in the dark, two processes occur that account for it. One, which we’ve already mentioned, is that the eye’s pupil changes size – it enlarges when the surrounding environment becomes dark. More important, there are photochemical changes in the receptors that increase the receptors’ sensitivity to light. Figure 4.14 shows a dark-adaptation curve: It shows how the absolute threshold decreases with the length of time the person is in darkness. The curve has two limbs. The upper limb reflects adaptation of the cones, which takes place quite rapidly – cones are fully adapted within about five minutes. While the cones are adapting, the rods are also adapting, but more slowly. Eventually, the rod adaptation ‘catches up’ with the already-complete cone adaptation, and the rods then continue to adapt for an additional 25 minutes or so, which accounts for the second limb of the dark-adaptation curve.

Seeing patterns

Visual acuity refers to the eye’s ability to resolve details.
There are several ways of measuring visual acuity, but the most common measure is the familiar eye chart found in optometrists’ offices, devised by Hermann Snellen in 1862. Snellen acuity is measured relative to a viewer who does not need to wear glasses. Thus, an acuity of 20/20 indicates that the viewer is able to identify letters at a distance of 20 meters that a typical viewer can read at that distance. An acuity of 20/100 means that the viewer can only read letters at 20 meters that are large enough for a typical viewer to read at a distance of 100 meters; in this case, visual acuity is less than normal. There are a number of reasons why the Snellen chart is not always the best way to measure acuity. First, the
method is not good for young children or other people who do not know how to read. Second, the method is designed to test acuity only for objects seen at a distance (e.g., 10 meters); it does not measure acuity for reading and other tasks involving near distances. Third, the method does not distinguish between spatial acuity (the ability to see details of form) and contrast acuity (the ability to see differences in brightness). Figure 4.15 presents examples of typical forms used in tests of visual acuity, with arrows pointing to the critical detail to be detected. Notice that each detail is merely a region of the field where there is a change in brightness from light to dark (Coren, Ward, & Enns, 1999).

Figure 4.14 The Course of Dark Adaptation. Subjects look at a bright light until the retina has become light adapted. When the subjects are then placed in darkness, they become increasingly sensitive to light, and their absolute thresholds decrease; this is dark adaptation. The graph shows the threshold at different times after the adapting light has been turned off. The green data points correspond to threshold flashes whose color could be seen; the purple data points correspond to flashes that appeared white regardless of the wavelength. Note the sharp break in the curve at about 10 minutes; this is called the rod–cone break. A variety of tests show that the first part of the curve is due to cone vision and the second part to rod vision. (Data are approximate, from various determinations.)

Figure 4.15 Some Typical Forms Used in Tests of Visual Acuity (the Landolt C, resolution acuity, grating acuity, the Snellen letter, and vernier acuity). Arrows point to the details to be discriminated in each case.

Figure 4.16 The Hermann Grid. The gray smudges seen at the white intersections are illusory. They are seen by your eye and brain but are not on the page. To convince yourself that they are not really there, move your eyes to the different intersections. You will note that there is never a gray smudge at the intersection you are looking at directly; smudges appear only at intersections that fall in your peripheral visual field.

The sensory experience associated with viewing a pattern is determined by the way visual neurons register information about light and dark. The most primitive element of a visual pattern is the edge, or contour, the region where there is a transition from light to dark or vice versa. One of the earliest influences on the registration of edges occurs because of the way ganglion cells in the retina interact (see Figure 4.13). The effects of these interactions can be observed by viewing a pattern known as the Hermann grid, shown in Figure 4.16. You can see gray smudges at the intersections of the white spaces separating the black squares. A disconcerting aspect of this experience is that the very intersection you are gazing at does not appear to be filled with a gray smudge; only intersections that you are not currently gazing at give the illusion of the gray smudge. This illusion is the direct result of the connections producing decreased activity among the neighbors of active ganglion cells – a phenomenon known as lateral inhibition. For example, a ganglion cell that is centered on one of the white intersections of the grid will be receiving signals that decrease its rate of firing from neighboring ganglion cells on four sides (that is, from the cells centered in the white spaces above, below, to the right, and to the left of the intersection). A ganglion cell that is positioned on one of the white rows or columns, on the other hand, will be receiving signals that decrease its rate of firing from neighboring cells on only two sides. As a result, the intersections appear darker than the white rows or
columns, reflecting the larger amount of inhibition received by the ganglion cells centered there. The effect of lateral inhibition is to enhance edge detection by darkening one side of an edge and lightening the other (as in Mach bands). But why do the smudges appear only off to the side, not at the intersection you are looking at directly? This happens because the range over which inhibitory signals are sent is much smaller at the fovea than in the periphery. This arrangement contributes to our having greater visual acuity at the fovea than in the periphery.

Seeing color

All visible light (and, in fact, all electromagnetic radiation from gamma rays to radio waves) is alike except for wavelength. Our visual system does something wonderful with wavelength: it turns it into color, with different wavelengths resulting in different colors. In particular, short wavelengths (450–500 nanometers) appear blue; medium wavelengths (500–570 nanometers) appear green; and long wavelengths (about 650–780 nanometers) appear red (see Figure 4.17). Our discussion of color perception considers only wavelength. This is adequate for cases in which the origin of a color sensation is an object that emits light, such as the sun or a light bulb. Usually, however, the origin of a color sensation is an object that reflects light when it is illuminated by a light source. In these cases, our perception of the object’s color is determined partly by the wavelengths that the object reflects and partly by other factors. One such factor is the surrounding context of colors. A rich variety of other colors in the spatial neighborhood of an object makes it possible for the viewer to see the correct color of the object even when the wavelengths reaching the eye from it do not faithfully record its characteristic color (Land, 1986). Your ability to see your favorite blue jacket as navy despite wide variations in the ambient lighting is called color constancy.
We will discuss this topic more fully in Chapter 5.

Figure 4.17 The Solar Spectrum. The numbers given are the wavelengths of the various colors in nanometers (nm).

A prism breaks up light into different wavelengths. Short wavelengths appear blue, medium wavelengths green, and long wavelengths red.

Color appearance

Seeing color is a subjective experience in the sense that ‘color’ is a construction of the brain based on an analysis of wavelengths of light. However, it is also objective in that any two viewers with the same kinds of color receptors (cones) appear to construct ‘color’ in the same way. The most common way of referring to the various
color experiences of a typical viewer is to organize them along three dimensions: hue, brightness, and saturation. Hue refers to the quality best described by the color’s name, such as red or greenish-yellow. Brightness refers to how much light appears to be reflected from a colored surface, with white being the brightest possible color and black the dimmest. Saturation refers to the purity of the color: a fully saturated color, such as crimson, appears to contain no gray, while an unsaturated color, such as pink, appears to be a mixture of red and white. Albert Munsell, an artist, proposed a scheme for specifying colored surfaces by assigning them one of ten hue names and two numbers, one indicating saturation and the other brightness. The colors in the Munsell system are represented by the color solid (see Figure 4.18). (The key characteristics of color and sound are summarized in the Concept Review Table.)

Given a means of describing colors, we can ask how many colors we are capable of seeing. Within the 400–700 nanometer range to which humans are sensitive, we can discriminate among about 150 hues, suggesting that we can distinguish among about 150 wavelengths. This means that, on average, we can discriminate between two wavelengths that are only two nanometers apart; that is, the jnd for wavelength is two nanometers. Given that each of the 150 discriminable hues can have many different values of lightness and saturation, the estimated number of colors among which we can discriminate is over 7 million! Moreover, according to estimates by the National Bureau of Standards, we have names for about 7,500 of these colors. These numbers give some indication of the importance of color to our lives (Coren, Ward, & Enns, 1999).

Color mixture

The most important fact for understanding how the visual system constructs color is that all the hues among which we can discriminate can be generated by mixing together only three basic colors.
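Before turning to the mixture experiments, the discrimination figures above can be verified with quick arithmetic. The numbers below are the text’s own approximations, not exact psychophysical constants:

```python
# Approximate figures from the text: humans are sensitive to wavelengths of
# roughly 400-700 nm and can discriminate about 150 hues in that range.
visible_range_nm = 700 - 400
discriminable_hues = 150

# The average just-noticeable difference (jnd) in wavelength:
jnd_nm = visible_range_nm / discriminable_hues
assert jnd_nm == 2.0  # two nanometers, matching the text's estimate
```

The 7-million-color total further multiplies the roughly 150 hues by the many discriminable lightness and saturation levels of each, which the text does not enumerate.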
This was demonstrated many years ago using what is called the color-matching experiment. Suppose that we project different-colored lights to the same region of the retina. The result of such a mixture will be a new color. For example, a pure light of 580 nanometers appears yellow. Of critical importance, it is possible to create a mixture of a 650-nanometer light (red), a 500-nanometer light (green), and a 450-nanometer light (blue) that will look identical – and we literally mean identical – to the pure yellow light. This matching process can be carried out for any pure visible light whatsoever.

Figure 4.18 The Color Solid. The three dimensions of color can be represented on a double cone. Hue is represented by points around the circumference, saturation by points along the radius, and brightness by points on the vertical axis. A vertical slice taken from the color solid will show differences in the saturation and lightness of a single hue. (Courtesy Macbeth/Munsell Color, New Windsor, NY)

CONCEPT REVIEW TABLE: The physics and psychology of light and sound

Stimulus | Physical attribute | Measurement unit | Psychological experience
Light    | Wavelength         | Nanometers       | Hue
         | Intensity          | Photons          | Brightness
         | Purity             | Level of gray    | Saturation
Sound    | Frequency          | Hertz            | Pitch
         | Amplitude          | Decibels         | Loudness
         | Complexity         | Harmonics        | Timbre

A pair of such matching lights – that is, two lights with different
physical makeups but which appear identical – are called metamers. Why do metamers provide important clues to how the visual system works? Because the way a system constructs metamers reveals how the system loses information – in our example, the information about whether the stimulus is a mixture or a pure light is lost when both are perceived as the same yellow. At first glance it may seem as if losing information is a bad thing; however, it is not. As we noted earlier in the chapter, we are at any instant being bombarded by an immense amount of information from the world. We do not need all of this information, or even the majority of it, to survive and flourish in the environment. This means that we must eliminate much of the incoming information from the environment or we would constantly be overwhelmed by information overload. It is this information-elimination process that creates metamers. As we shall see below, the fact that three and exactly three primary colors are needed to match – that is, to form a metamer of – any arbitrary color provides an important clue about how the visual system is constructed.

Figure 4.19 The Color Circle. A simple way to represent color mixture is by means of the color circle. The spectral colors (colors corresponding to wavelengths in our region of sensitivity) are represented by points around the circumference of the circle. The two ends of the spectrum do not meet; the space between them corresponds to the nonspectral reds and purples, which can be produced by mixtures of long and short wavelengths. The inside of the circle represents mixtures of lights. Lights toward the center of the circle are less saturated (or whiter); white is at the very center. Mixtures of any two lights lie along the straight line joining the two points. When this line goes through the center of the circle, the lights, when mixed in proper proportions, will look white. Such pairs of colors are called complementary colors.

Implications of the matching-by-three-primaries law

Before describing the value of this clue, we note two implications. First, this arrangement for color mixing has important practical uses. A good example is color reproduction in television or photography, which relies on the fact that a wide range of colors can be produced by mixing only three primary colors. If you examine your television screen with a magnifying glass, you will find that it is composed of tiny dots of only three colors (blue, green, and red). Additive color mixture occurs because the dots are so close together that their images on your retina overlap. (See Figure 4.19 for a way of representing color mixtures.)

A second implication has to do with our understanding of color deficiencies. While most people can match a wide range of colors with a mixture of three primaries, others can match a wide range of colors using mixtures of only two primaries. Such people, referred to as dichromats, have deficient color vision, as they confuse some colors that people with normal vision (trichromats) can distinguish. But dichromats can still see color. Not so for monochromats, who are unable to discriminate among different wavelengths at all; monochromats are truly color-blind. Screening for color blindness is done with tests like the one shown in Figure 4.20, a simpler procedure than conducting color-mixture experiments. Most color deficiencies are genetic in origin.

Figure 4.20 Testing for Color Blindness. Two plates used in color blindness tests. In the left plate, individuals with certain kinds of red-green blindness see only the number 5; others see only the 7; still others see no number at all. Similarly, in the right plate, people with normal vision see the number 15, whereas those with red-green blindness see no number at all.
As noted in Chapter 2, color blindness occurs much more frequently in males (2%) than in females (0.03%), because the critical genes for this condition are recessive genes located on the X chromosome (Nathans, 1987).

Theories of color vision

Two major theories of color vision have been suggested. The first was proposed by Thomas Young in 1807, long before scientists even knew about the existence of cones. Fifty years later, Hermann von Helmholtz further developed Young’s theory. According to the Young-Helmholtz or trichromatic theory, even though we can discriminate among many different colors, there are only three types of receptors for color. We now know that these are the cones. Each type of cone is sensitive to a wide range of wavelengths but is most responsive within a narrower region. As shown in Figure 4.21, the short-wavelength cone is most sensitive to short wavelengths (blues), the medium-wavelength cone to medium wavelengths (greens and yellows), and the long-wavelength cone to long wavelengths (reds). The joint action of these three receptors determines the sensation of color: a light of a particular wavelength stimulates the three receptors to different degrees, and the specific ratio of activity in the three receptors leads to the sensation of a specific color. Hence, with regard to our earlier discussion of coding quality, the trichromatic theory holds that the quality of color is coded by the pattern of activity of three receptors rather than by specific receptors for each of a multitude of colors.

Figure 4.21 The Trichromatic Theory. Response curves for the short-, medium-, and long-wavelength receptors proposed by trichromatic theory. These curves enable us to determine the relative response of each receptor to light of any wavelength.
In the example shown here, the response of each receptor to a 500-nanometer light is determined by drawing a line up from 500 nanometers and noting where this line intersects each curve. (Reprinted from ‘Spectral Sensitivity of the Foveal Cone Photopigments Between 400 and 500 nm’, in Vision Research, 15, pp. 161–171. © 1975, with permission from Elsevier Science.)
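To make the trichromatic coding scheme concrete, here is a small numerical sketch. The Gaussian sensitivity curves, the 40-nm bandwidth, and the particular primaries are illustrative assumptions, not the real photopigment data plotted in Figure 4.21; what the sketch shows is the logic: a light is coded only by the three cone responses it produces, so a three-primary mixture that reproduces those responses is indistinguishable from the original light – a metamer.

```python
import numpy as np

def cone_responses(wavelength_nm, intensity=1.0):
    """Relative S, M, L cone responses to a monochromatic light.

    Gaussian curves are a stand-in for the real photopigment sensitivities
    (peaks near 420, 534, and 564 nm); the true curves are broader and
    asymmetric, but the coding logic is the same.
    """
    peaks = np.array([420.0, 534.0, 564.0])   # S, M, L peak sensitivities
    width = 40.0                              # assumed common bandwidth
    return intensity * np.exp(-((wavelength_nm - peaks) ** 2) / (2 * width ** 2))

# A pure 580 nm ("yellow") light produces one fixed pattern of S, M, L activity.
target = cone_responses(580.0)

# Columns: cone responses to three primaries (blue, green, red lights).
primaries = [450.0, 530.0, 650.0]
A = np.column_stack([cone_responses(p) for p in primaries])

# Solve for the primary intensities that reproduce the same cone pattern.
# (A negative weight means that primary must be added to the *test* light
# instead -- standard practice in color-matching experiments.)
weights = np.linalg.solve(A, target)

# The mixture and the pure light are metamers: identical cone responses,
# hence indistinguishable to the visual system.
assert np.allclose(A @ weights, target)
```

Because the cone responses are only three numbers, infinitely many physically different lights collapse onto the same triplet – exactly the information loss described above.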
The trichromatic theory explains the facts about color vision – most importantly the result of the color-matching experiment – that we mentioned previously. First, we can discriminate among different wavelengths because they lead to different responses in the three receptors. Second, the law of three primaries follows directly from the trichromatic theory: we can match a mixture of three widely spaced wavelengths to any color because the three widely spaced wavelengths will activate the three different receptors, and activity in these receptors results in perception of the test color. (Now we see the significance of the number three.) Third, the trichromatic theory explains the various kinds of color deficiencies by positing that one or more of the three types of receptors is missing: dichromats are missing one type of receptor, whereas monochromats are missing two of the three types. In addition to accounting for these long-known facts, trichromatic theory led biological researchers to a successful search for the three kinds of cones that are familiar to us today. Despite its successes, the trichromatic theory cannot explain some well-established findings about color perception. In 1878 Ewald Hering observed that all colors may be described as consisting of one or two of the following sensations: red, green, yellow, and blue. Hering also noted that nothing is perceived to be reddish-green or yellowish-blue; rather, a mixture of red and green may look yellow, and a mixture of yellow and blue may look white. These observations suggested that red and green form an opponent pair, as do yellow and blue, and that the colors in an opponent pair cannot be perceived simultaneously. Further support for the notion of opponent pairs comes from studies in which an observer first stares at a colored light and then looks at a neutral surface: the observer reports seeing a color on the neutral surface that is the complement of the original one (see Figure 4.22).
These phenomenological observations led Hering to propose an alternative theory of color vision called opponent-color theory. Hering believed that the visual system contains two types of color-sensitive units. One type responds to red or green, the other to blue or yellow. Each unit responds in opposite ways to its two opponent colors: the red-green unit, for example, increases its response rate when red is presented and decreases it when green is presented. Because a unit cannot respond in two ways at once, if two opponent colors are presented, white is perceived (see Figure 4.11). Opponent-color theory is able to explain Hering’s observations about color. The theory accounts for why we see the hues that we do: we perceive a single hue – red or green or yellow or blue – whenever only one type of opponent unit is out of balance, and we perceive combinations of hues when both types of units are out of balance. Nothing is perceived as red-green or as yellow-blue because a unit cannot respond in two ways at once. Moreover, the theory explains why people who first view a colored light and then stare at a neutral surface report seeing the complementary color: if a person first stares at red, for example, the red component of the unit will become fatigued, and consequently the green component will come into play. We therefore have two theories of color vision – trichromatic and opponent-color – each of which can explain some facts but not others. For decades the two theories were viewed as competing with each other, but eventually researchers proposed that they be integrated into a two-stage theory in which the three types of receptors identified by the trichromatic theory feed into color-opponent units at a higher level in the visual system (Hurvich & Jameson, 1974).
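The proposed two-stage integration can be sketched as a toy computation. The specific formulas below (red-green as L minus M, blue-yellow as S minus the average of L and M) are a common textbook simplification, not the exact wiring established physiologically:

```python
def cone_stage(s, m, l):
    """Stage 1 (trichromatic): three cone activations, each in 0..1."""
    return {'S': s, 'M': m, 'L': l}

def opponent_stage(cones):
    """Stage 2 (opponent): recombine cone signals into opponent channels.

    Positive red_green means 'toward red', negative 'toward green';
    positive blue_yellow means 'toward blue', negative 'toward yellow'.
    These formulas are an illustrative simplification.
    """
    s, m, l = cones['S'], cones['M'], cones['L']
    return {
        'red_green': l - m,              # R-G unit: excited by L, inhibited by M
        'blue_yellow': s - (l + m) / 2,  # B-Y unit: excited by S, inhibited by L and M
        'white_black': (l + m) / 2,      # achromatic (brightness) channel
    }

# A long-wavelength light drives L cones most: the red-green channel swings
# toward red while blue-yellow swings toward yellow (negative).
red_light = opponent_stage(cone_stage(s=0.05, m=0.30, l=0.90))
assert red_light['red_green'] > 0 and red_light['blue_yellow'] < 0

# A light that balances an opponent channel leaves it at zero, which is why
# opponent colors cancel (toward white) rather than blend.
white_light = opponent_stage(cone_stage(s=0.50, m=0.50, l=0.50))
assert abs(white_light['red_green']) < 1e-9
```

Fatiguing one side of a unit (as in the afterimage demonstration) would bias these differences toward the complementary color.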
This view suggests that there should be neurons in the visual system that function as color-opponent units and operate on visual information after the retina (which contains the three kinds of receptors of trichromatic theory).

Figure 4.22 Complementary Afterimages. Look steadily for about a minute at the dot in the center of the colors, and then transfer your gaze to the dot in the gray field at the right. You should see a blurry image with colors that are complementary to the original: the blue, red, green, and yellow are replaced by yellow, green, red, and blue.

And in fact such color-opponent neurons have been discovered in the thalamus, a neural waystation between the retina and the
visual cortex (DeValois & Jacobs, 1984). These cells are spontaneously active, increasing their activity rate in response to one range of wavelengths and decreasing it in response to another. Thus, some cells at a higher level in the visual system fire more rapidly if the retina is stimulated by a blue light and less rapidly when the retina is exposed to a yellow light; such cells seem to constitute the biological basis of the blue-yellow opponent pair. A summary neural wiring diagram that shows how the trichromatic and opponent-process theories may be related is presented in Figure 4.23.

Figure 4.23 How the Trichromatic and Opponent-Process Theories May Be Related. This diagram shows three types of receptors (‘blue’ cones with maximum sensitivity at 420 nm, ‘green’ cones at 534 nm, and ‘red’ cones at 564 nm) connected to produce opponent-process neural responses (red-green, blue-yellow, and white-black) at a later stage in processing. The lines with arrows represent connections that increase activity; the lines with dots represent connections that decrease activity. Note that this is only a small part of the whole system. Another set of opponent-process units has the reverse arrangement of increasing and decreasing connections.

This research on color vision is a striking example of successful interaction between psychological and biological approaches to a problem. Trichromatic theory suggested that there must be three kinds of color receptors, and subsequent biological research established that there are three kinds of cones in the retina. Opponent-color theory said that there must be other kinds of units in the visual system, and biological researchers subsequently found opponent-color cells in the thalamus. Moreover, successful integration of the two theories required that the trichromatic cells feed into the opponent-color ones, and this, too, was confirmed by subsequent biological research.
Thus, on several occasions outstanding work at the psychological level pointed the way for biological discoveries. It is no wonder that many scientists have taken the analysis of color vision as a prototype for the analysis of other sensory systems.

Sensation and perception: a preview

In this chapter we have been focusing on raw sensory input – light waves, in the case of vision – and how that sensory input is transformed into neural patterns. In the next chapter, we will focus on perception – how the raw sensory input is transformed into knowledge about the structure of the world. In this section, we briefly describe some recent research that bridges the gap between the two. The research begins with a prosaic question: how does the distance between an observer and an object affect the observer’s ability to perceive the object? Suppose you are standing on a street corner in Trafalgar Square watching the people milling to and fro. As a particular person walks toward you, you are increasingly able to see what she looks like. At some distance you can tell she’s a woman. Then you can tell that she has a narrow face. Then you can tell that she has rather large lips. And so on. As she moves closer and closer, you can make out more and more details about her appearance. Enough is known about the workings of the visual system for us to know fairly precisely why this happens. Both the optics of the eye and the neurology of the rest of the system cause the representation of an image to be slightly out of focus (this is not unique to the visual system; it is true of any optical device). The farther away an object is – like the person you’re looking at – the smaller that person’s image on your retina, and the greater the degree to which the out-of-focus-ness degrades larger details.
Recent research (Loftus & Harley, 2005) has quantified these general ideas and, in particular, demonstrated that seeing an object – a face in this research – from a particular distance is equivalent, from the visual system’s perspective, to blurring the object by a particular amount. Furthermore, the work allowed an exact specification of how much blurring corresponds to any particular distance. Figure 4.24 shows an example: a picture of Scarlett Johansson, shrunk (left panels) or blurred (right panels) to demonstrate the loss of visual information when she’s seen from approximately 13 meters away (top panels) or 52 meters away (bottom panels). This research provides an example of using what’s known about the fundamental manner in which the visual system acquires and treats basic information (i.e., what’s known about sensation) to demonstrate in a clear and intuitive manner the effect of a particular variable – distance – on the resulting perception. As we shall see in the next chapter, this knowledge not only is useful in practical settings (e.g., demonstrating to a jury in a criminal trial how well a witness could have seen a criminal from a particular distance) but also provides a scientific tool for investigating other perceptual phenomena.
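The distance-as-blur equivalence can be illustrated with a one-dimensional sketch. Treating the blur as a Gaussian whose width grows linearly with distance is a deliberate simplification – Loftus and Harley derived the actual mapping from the properties of the eye – and the constant sigma_per_m below is an arbitrary placeholder:

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, normalized to sum to 1."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def view_from(profile, distance_m, sigma_per_m=0.1):
    """Simulate viewing a luminance profile from a given distance.

    'Farther away = more blur' is modeled as a Gaussian blur whose width
    grows linearly with distance; sigma_per_m is an assumed constant, not
    the empirically derived mapping.
    """
    return np.convolve(profile, gaussian_kernel(sigma_per_m * distance_m), mode='same')

# A striped luminance pattern standing in for fine facial detail.
pattern = np.tile([0.0] * 10 + [1.0] * 10, 5)

near = view_from(pattern, distance_m=13)   # like the top panels of Figure 4.24
far = view_from(pattern, distance_m=52)    # like the bottom panels

# Contrast (max minus min) survives at 13 m but is largely washed out at 52 m.
assert (near.max() - near.min()) > (far.max() - far.min())
```

Running the same filter over a two-dimensional image rather than a one-dimensional profile produces pictures like the right-hand panels of Figure 4.24.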
INTERIM SUMMARY

- The stimulus for vision is light, which is electromagnetic radiation in the range from 400 to 700 nanometers.
- The transduction system for vision consists of visual receptors in the retina at the back of the eye. The visual receptors are broadly divided into rods and cones. There are three subtypes of cones, each maximally sensitive to a different wavelength.
- Different wavelengths of light lead to sensations of different colors. Color vision is understood via the trichromatic theory, which holds that perception of color is based on the activity of three types of cone receptors. The rods are insensitive to color and to fine details; however, rods are capable of detecting very small amounts of light and are used for seeing under conditions of low illumination.
- Visual acuity refers to the visual system’s ability to resolve fine details. The cones, which are concentrated in a small part of the retina, allow the highest acuity, while the rods are not capable of high acuity.
- There are four basic color sensations: red, yellow, green, and blue. Opponent-color theory proposes that there are red-green and yellow-blue opponent processes, each of which responds in opposite ways to its two opponent colors. Trichromatic and opponent-color theories have been successfully combined through the proposal that they operate at different neural locations in the visual system.

CRITICAL THINKING QUESTIONS

1 Think of an eye as analogous to a camera. What features of the eye correspond to which features of a camera?
2 Pilots preparing for flying at night often wear red goggles for an hour or so prior to their flight. Why do you suppose they would do this?
3 From an evolutionary standpoint, can you think of reasons why some animals’ eyes consist almost entirely of rods, other animals’ eyes have only cones, and those of still others, such as humans, have both cones and rods?
Figure 4.24 Effects of Distance. Two theoretically equivalent representations of Scarlett Johansson's face viewed from 13 meters (top panels) and 52 meters (lower panels): resizing (left panels) and filtering (right panels). The left panels are valid if viewed from 50 cm away.

AUDITION

Along with vision, audition (hearing) is our major means of obtaining information about the environment. For most of us, it is the primary channel of communication as well as the vehicle for music. As we will see, it all comes about because small changes in sound pressure can move a membrane in our inner ear back and forth. Our discussion of audition follows the same plan as our discussion of vision: we first consider the nature of the physical stimulus to which audition is sensitive; then describe the auditory system, with particular emphasis on how the receptors carry out the transduction process; and
finally consider how the auditory system codes the intensity of sound and its quality.

Musical instruments produce complex patterns of sound pressure; these patterns are referred to as the sound’s timbre.

Sound waves

Sound originates from the motion or vibration of an object, as when the wind rushes through the branches of a tree. When something moves, the molecules of air in front of it are pushed together. These molecules push other molecules and then return to their original position. In this way, a wave of pressure changes (a sound wave) is transmitted through the air, even though the individual air molecules do not travel far. This wave is analogous to the ripples set up by throwing a stone into a pond. A sound wave may be described by a graph of air pressure as a function of time. A pressure-versus-time graph of one type of sound is shown in Figure 4.25. The graph depicts a sine wave, familiar to anyone who has taken trigonometry. Sounds that correspond to sine waves are called pure tones.

Figure 4.25 A Pure Tone. As a tuning fork vibrates, it produces a pure tone, which is made up of successive air-compression waves that form a sine-wave pattern. The amplitude of the wave corresponds to the wave’s intensity, while the number of waves per second is its frequency. Using a technique called Fourier analysis, any arbitrary sound wave can be decomposed into the sum of sine waves of different frequencies and intensities.

An important dimension of a pure tone is its frequency, which is the number of cycles per second (or hertz) at which the molecules move back and forth (see Figure 4.25). Frequency is the basis of our perception of pitch, which is one of the most noticeable qualities of a sound.
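The ideas in the Figure 4.25 caption – a pure tone as a sine wave, and any complex sound as a weighted sum of sine waves recoverable by Fourier analysis – can be demonstrated numerically. The 440-hertz fundamental and its harmonics are arbitrary example values:

```python
import numpy as np

sample_rate = 44_100                       # samples per second
t = np.arange(sample_rate) / sample_rate   # one second of time points

def pure_tone(frequency_hz, amplitude=1.0):
    """A pure tone: a sine wave with a given frequency and amplitude."""
    return amplitude * np.sin(2 * np.pi * frequency_hz * t)

# A complex sound built as a weighted sum of sine waves: a 440 Hz fundamental
# plus two weaker harmonics. Different harmonic weightings are what give
# instruments their distinctive timbre.
complex_sound = pure_tone(440) + 0.5 * pure_tone(880) + 0.25 * pure_tone(1320)

# Fourier analysis runs the other way: it decomposes the mixture and recovers
# energy at exactly the three component frequencies.
spectrum = np.abs(np.fft.rfft(complex_sound))
freqs = np.fft.rfftfreq(len(complex_sound), d=1 / sample_rate)
components = freqs[spectrum > 0.1 * spectrum.max()]
assert list(components) == [440.0, 880.0, 1320.0]
```

The same decomposition applied to a recorded violin or trombone note would show the differing harmonic weights that distinguish their timbres.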
High-frequency tones take the form of high-frequency sine waves (like the 5,000-hertz sine wave shown in the top panel of Figure 4.25), while lower-frequency tones take the form of low-frequency sine waves (such as the 500-hertz sine wave shown in the bottom panel). Sine waves are important in the analysis of audition because, as proved by the French mathematician Fourier, any complex sound can be decomposed into pure tones; that is, any complex sound can be represented as a weighted sum of a series of different-frequency sine waves. A second aspect of a pure tone is its amplitude, which is the pressure difference between the peak and the trough in a pressure-versus-time graph (see Figure 4.25). Amplitude underlies our sensation of loudness. Sound amplitude is usually specified in decibels, a logarithmic scale: an increase of 10 decibels corresponds to a 10-fold increase in amplitude above the sound’s threshold; 20 decibels, a 100-fold increase; 30 decibels, a 1,000-fold increase; and so forth. For example, a soft whisper in a quiet library is approximately 30 decibels, a noisy restaurant may have a level of 70 decibels, a rock concert may be near 120 decibels, and a jet taking off may be over 140 decibels. Consistent exposure to sound levels at or above 100 decibels is associated with permanent hearing loss. A final aspect of sound is timbre, which refers to our experience of the complexity of a sound. Almost none of the sounds we hear every day is as simple as the pure tones we have been discussing. (The exceptions are tuning forks and some electronic instruments.) Sounds produced by acoustical instruments, automobiles, the human voice, other animals, and waterfalls are characterized by complex patterns of sound pressure. The difference in timbre
is, for example, what makes a middle C produced by a violin sound different from a middle C produced by a trombone.

Prolonged exposure to loud noises can cause permanent hearing loss; this is why airport workers wear ear protectors.

The auditory system

The auditory system consists of the ears, parts of the brain, and the various connecting neural pathways. Our primary concern will be with the ears; this includes not just the appendages on the sides of the head, but the entire hearing organ, most of which lies within the skull (see Figure 4.26). Like the eye, the ear contains two systems. One system amplifies and transmits the sound to the receptors, whereupon the other system takes over and transduces the sound into neural impulses. The transmission system involves the outer ear, which consists of the external ear (or pinna) along with the auditory canal, and the middle ear, which consists of the eardrum and a chain of three bones called the malleus, incus, and stapes. The transduction system is housed in a part of the inner ear called the cochlea, which contains the receptors for sound.

Let us take a more detailed look at the transmission system (see Figure 4.27). The outer ear aids in the collection of sound, funneling it through the auditory canal to a taut membrane, the eardrum. The eardrum, the outermost part of the middle ear, is caused to vibrate by sound waves funneled to it through the auditory canal. The middle ear’s job is to transmit these vibrations of the eardrum across an air-filled cavity to another membrane, the oval window, which is the gateway to the inner ear and the receptors. The middle ear accomplishes this transmission by means of a mechanical bridge consisting of three small bones called the malleus, incus, and stapes.
The vibrations of the eardrum move the first bone, which then moves the second, which in turn moves the third, which results in vibrations of the oval window. This mechanical arrangement not only transmits the sound wave but greatly amplifies it as well. Now consider the transduction system. The cochlea is a coiled tube of bone. It is divided into sections of fluid by membranes, one of which, the basilar membrane, supports the auditory receptors (Figure 4.27). The receptors are called hair cells because they have hairlike structures that extend into the fluid. Pressure at the oval window (which connects the middle and inner ear) leads to pressure changes in the cochlear fluid, which in turn causes the basilar membrane to vibrate, resulting in a bending of the hair cells and an electrical impulse. Through this complex process, a sound wave is, at last, transduced into an electrical impulse. The neurons that synapse with the hair cells have long axons that form part of the auditory nerve. Most of these auditory neurons connect to single hair cells. There are about 31,000 auditory neurons in the auditory nerve, many fewer than the 1 million neurons in the optic nerve (Yost & Nielson, 1985). The auditory pathway from each ear goes to both sides of the brain and has synapses in several nuclei before reaching the auditory cortex. Hearing sound intensity Recall that our vision is more sensitive to some wavelengths than to others. A similar phenomenon occurs in audition. We are more sensitive to sounds of intermediate frequency than we are to sounds near either end of our frequency range. This is illustrated in Figure 4.28, which shows the absolute threshold for sound intensity as a function of frequency. Many people have some deficit in hearing and consequently have a threshold higher than those shown in the figure. There are two basic kinds of hearing deficits. 
In one kind, called conduction loss, thresholds are elevated roughly equally at all frequencies as the result of poor conduction in the middle ear. In the other kind, called sensory-neural loss, the threshold
elevation is unequal, with large elevations occurring at higher frequencies. This pattern is usually a consequence of inner-ear damage, often involving some destruction of the hair cells, which are unable to regenerate. Sensory-neural loss occurs in many older people and explains why the elderly often have trouble hearing high-pitched sounds. Sensory-neural loss is not limited to the elderly, though. It also occurs in young people who are exposed to excessively loud sound. Rock musicians, airport-runway crews, and pneumatic-drill operators commonly suffer major, permanent hearing loss. For example, Pete Townshend, the well-known guitarist of the 1960s rock group The Who, suffered severe sensory-neural loss because of his continuous exposure to loud rock music; since then he has alerted many young people to this danger. It is natural to assume that the perceived intensity of a sound is the same at both ears, but in fact there are subtle differences. A sound originating on our right side, for example, will be heard as more intense by our right ear than by our left ear. This happens because our head casts a 'sound shadow' that decreases the intensity of the sound reaching the far ear. This difference does not interfere with our ability to hear, however; we take advantage of it by using it to localize where the sound is coming from. It is as if we said, 'If the sound is more intense at my right ear than at my left ear, it must be coming from my right side'. Likewise, a sound originating on the right side will arrive at the right ear a split-second before it reaches the left ear (and vice versa for a sound originating on the left). We also take advantage of this difference to localize the sound ('If the sound arrived at my right ear first, it must be coming from the right').

Hearing pitch

As we have noted, one of the primary psychological qualities of a sound is its pitch, which is a sensation based on the frequency of a sound. As frequency increases, so does pitch.
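The frequency-to-pitch relation is easy to explore numerically. In Western equal-tempered tuning (a musical convention assumed here for illustration, not something discussed in the chapter), each octave doubles a tone's frequency and each semitone multiplies it by the twelfth root of two:

```python
A4 = 440.0  # concert A, in hertz (a standard tuning reference)

def note_frequency(semitones_from_a4):
    """Frequency of the note the given number of semitones above
    (positive) or below (negative) A4, in equal temperament."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# Twelve semitones up (one octave) doubles the frequency to 880 Hz;
# twelve semitones down halves it to 220 Hz.
```

Both 220 Hz and 880 Hz lie well inside the 20 to 20,000 hertz range that young adults can detect, and both are heard as the pitch class A.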
Young adults can detect pure-tone frequencies between 20 and 20,000 hertz, with the jnd being less than 1 hertz at 100 hertz and increasing to 100 hertz at 10,000 hertz.

Figure 4.26 A Cross-Section of the Ear. This drawing shows the overall structure of the ear. The inner ear includes the cochlea, which contains the auditory receptors, and the vestibular apparatus (semicircular canals and vestibular sacs), which is the sense organ for our sense of balance and body motion.
Figure 4.27 A Schematic Diagram of the Middle and Inner Ear. (a) Movement of the fluid within the cochlea deforms the basilar membrane and stimulates the hair cells that serve as the auditory receptors. (b) A cross-section of the cochlea showing the basilar membrane and the hair cell receptors. (From Sensation and Perception, 3/e, by S. Coren and L. Ward, © 1989. Used by permission of John Wiley and Sons, Inc.)

Figure 4.28 Absolute Threshold for Hearing. The lower curve shows the absolute intensity threshold at different frequencies. Sensitivity is greatest in the vicinity of 1,000 hertz. The upper curve describes the threshold for pain. (Data are approximate, from various determinations.)

With sound, as with light, we rarely have opportunities to hear pure sensory stimuli. Recall that for the visual system we usually see mixtures of wavelengths rather than a pure stimulus – a light consisting of only one wavelength (an exception would be the light emitted by a laser). A similar situation characterizes the auditory system. We rarely hear a pure tone; instead, we are usually confronted by a sound composed of a mixture of tones. However, here the light–sound analogy begins to break down. When we mix wavelengths of light we see an entirely new color, but when we mix pure tones together we often can still hear each of the components separately. This is especially true if the tones are widely separated in frequency. When the frequencies are close together, the sensation is more complex but still does not sound like a single, pure tone.
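Tone mixing can be sketched numerically (the sample rate and frequencies below are illustrative choices, not values from the chapter). When two tones are sounded together their pressure waveforms simply add; two nearly equal frequencies produce a single tone that swells and fades ('beats') at their difference frequency:

```python
import math

RATE = 8000  # samples per second (assumed for illustration)

def tone(freq_hz, n_samples, rate=RATE):
    """A pure tone: one sine wave at unit amplitude."""
    return [math.sin(2 * math.pi * freq_hz * i / rate)
            for i in range(n_samples)]

def mixed(a, b):
    """Two tones sounded together: their pressures add sample by sample."""
    return [x + y for x, y in zip(a, b)]

# 500 Hz plus 504 Hz: the sum swells and fades 4 times per second.
beating = mixed(tone(500, RATE), tone(504, RATE))
# Near t = 0 the components reinforce (amplitude approaches 2);
# near t = 0.125 s they cancel (amplitude near 0).
```

Replacing 504 Hz with a widely separated tone such as 5,000 Hz gives a waveform in which both components remain separately audible instead of beating.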
In color vision, the fact that a mixture of three lights results in the sensation of a single color led to the idea of three types of receptors. The absence of a comparable phenomenon in audition suggests that, rather than there being relatively few receptors specialized for relatively few different frequencies, sound-frequency receptors must form more of a continuum.

Theories of pitch perception

As with color vision, two kinds of theories have been proposed to account for how the ear codes frequency into pitch. The first kind was suggested in 1886 by William Rutherford, a British physiologist. Rutherford proposed that a sound wave causes the entire basilar membrane to vibrate, and that the rate of vibration determines the rate of impulses of nerve fibers in the auditory nerve. Thus, a 1,000-hertz tone causes the basilar membrane to vibrate 1,000 times per second, which causes nerve fibers in the auditory nerve to fire at 1,000 impulses per second, and the brain interprets this as a particular pitch. Because this theory proposes that pitch depends on how sound varies with time, it is called a temporal theory. Rutherford's hypothesis was quickly discovered to be overly simplistic when it was experimentally determined that nerve fibers have a maximum firing rate of about 1,000 impulses per second. This means that if Rutherford's hypothesis were correct, it would not be possible to perceive the pitch of tones whose frequency exceeds 1,000 hertz – which, of course, we can do. Wever (1949) proposed a way to salvage Rutherford's hypothesis. Wever argued that frequencies over 1,000 hertz could be coded by different groups of nerve fibers, each group firing with slightly different timing. If one group of neurons is firing at 1,000 impulses per second, for example, and then half a millisecond later a second group of neurons begins firing at 1,000 impulses
per second, the combined rate of impulses per second for the two groups will be 2,000 impulses per second. This version of temporal theory received support from the discovery that the pattern of nerve impulses in the auditory nerve follows the waveform of the stimulus tone even though individual cells do not respond on every cycle of the wave (Rose, Brugge, Anderson, & Hind, 1967). While clever, this hypothesis is still insufficient: The ability of nerve fibers to follow the waveform breaks down at about 4,000 hertz – yet we can hear pitch at much higher frequencies. This implies that there must be another means of coding the quality of pitch, at least for high frequencies.

The second kind of theory of pitch perception deals with this question. It dates back to 1683, when the French anatomist Joseph Guichard Duverney proposed that frequency is coded into pitch mechanically by resonance (Green & Wier, 1984). To appreciate this proposal, it is helpful to first consider an example of resonance. When a tuning fork is struck near a piano, the piano string that is tuned to the frequency of the fork will begin to vibrate. To say that the ear works the same way is to say that the ear contains a structure similar to a stringed instrument, with different parts tuned to different frequencies, so that when a frequency is presented to the ear the corresponding part of the structure vibrates. This idea proved to be roughly correct; the structure turned out to be the basilar membrane. In the 1800s the ubiquitous Hermann von Helmholtz (remember him from color-vision theory?) developed this hypothesis further, eventually proposing the place theory of pitch perception, which holds that each specific place along the basilar membrane will lead to a particular pitch sensation. The fact that there are many such places on the membrane is consistent with there being many different receptors for pitch. Note that place theory does not imply that we hear with our basilar membrane; rather, the places on the membrane that vibrate most determine which neural fibers are activated, and that determines the pitch we hear. This is an example of a sensory modality coding quality according to the specific nerves involved.

How the basilar membrane actually moves was not established until the 1940s, when the Hungarian-born biophysicist Georg von Békésy measured its movement through small holes drilled in the cochleas of guinea pigs and human cadavers. Von Békésy's findings required a modification of place theory: Rather than behaving like a piano with separate strings, the basilar membrane behaves more like a bed sheet being shaken at one end. Specifically, von Békésy showed that the whole membrane moves for most frequencies, but that the place of maximum movement depends on the specific frequency sounded. High frequencies cause vibration at the near end of the basilar membrane; as frequency increases, the vibration pattern moves toward the oval window (von Békésy, 1960). For this and other research on audition, von Békésy received a Nobel prize in 1961.

CUTTING EDGE RESEARCH
Where in the Brain Are Illusions?
Scott Murray, University of Washington

Context has a dramatic effect on how we perceive object size. For example, in the picture illustrated, the two spheres are exactly the same physical size – they occupy the same space on the page (check it out!) and therefore occupy the same amount of space on the retina. However, we cannot help but perceive the sphere at the back of the hallway as being larger than the sphere at the front of the hallway. As we shall see in more detail in Chapter 5, this illusion makes perfect sense for a visual system that has evolved to interpret a three-dimensional (3-D) world. The depth cues in the image give rise to a difference in perceived distance between the two spheres, and our visual system takes this into account when arriving at an estimate of object size. This example is a powerful illustration of how identical input at the retina can be transformed into very different perceptions depending on the 3-D information present in an image.

One important question is where in the visual system the 3-D information provided by the pictured hallway exerts its influence on the sensory representations of the spheres. Since the 3-D information is quite complex, this integration might occur at late stages of the visual system that are specialized for processing 3-D information and object recognition. Or, it could happen much earlier – the 3-D information could be used to change our perceptions as soon as the image of the spheres enters the brain. Indeed, there is a strong sense in which we 'can't make the illusion go away', which suggests that the representations of the spheres are altered at very early stages of the visual system. To test this we used a brain imaging technique called fMRI to measure the amount of cortex that is activated by the front and back spheres. The early visual system is retinotopically organized, meaning that nearby positions on the retina project to nearby positions in visual cortex. The result is a 'map' of visual space – an object projecting an image on the retina literally activates a contiguous region of cortex. Using fMRI we measured whether the map is smaller when people are looking at the front sphere as compared to the back sphere. We found that in 'primary visual cortex' (or V1) – the very first area of our cortex to receive information from the eyes – the maps for the front and back spheres are different. The front sphere activated a smaller area of cortex than the back sphere. This is shown in the second picture. The top row shows that the map extends further for the perceptually larger back sphere than for the perceptually smaller front sphere. The graphs look very similar when we used a stimulus that did not have 3-D context but had a real difference in size that matched the size illusion, as is shown in the bottom row.

Why would the visual system change the maps in early visual cortex? Size is an important cue for recognizing objects. For example, object size can quickly help you discriminate between a golf ball, baseball, and volleyball. But in order for your recognition system to be able to use object size, 3-D information must be taken into account. For example, a golf ball held close to your eye can produce a larger visual image than a volleyball that is far away. Our fMRI research indicates that distance information is taken into account early, presumably so that we can obtain an accurate estimation of object size for recognition. Our research also helps explain why illusions such as the ball example above are so powerful – the differences in image size between the two spheres seem very real. By showing that there are differences in the maps in the very earliest stages of the visual system – for our brains, at least – these differences are real.

Like temporal theories, place theories explain many pitch-perception phenomena, but not all. A major difficulty for place theory arises with low-frequency tones. With frequencies below 50 hertz, all parts of the basilar membrane vibrate about equally. This means that all the
receptors are equally activated, which implies that we have no way of discriminating between different frequencies below 50 hertz. In fact, though, we can discern frequencies as low as 20 hertz. Hence, place theories have problems explaining our perception of low-frequency tones, while temporal theories have problems dealing with high-frequency tones. This led to the idea that pitch depends on both place and temporal pattern, with temporal theory explaining our perception of low frequencies and place theory explaining our perception of high frequencies. It is not clear, however, where one mechanism leaves off and the other takes over. Indeed, it is possible that frequencies between 1,000 and 5,000 hertz are handled by both mechanisms (Coren, Ward, & Enns, 1999). Because our ears and eyes are so important to us in our day-to-day lives, many efforts have been made to develop ways to replace them in individuals who suffer irreparable damage to these organs. Some of these efforts are described in the Cutting Edge Research feature.

INTERIM SUMMARY

- The stimulus for hearing is a wave of air-pressure changes (a sound wave).
- Sound waves are transduced by the outer and middle ear, causing the basilar membrane to vibrate, which results in a bending of the hair cells that produces a neural impulse.
- Sound intensity is determined by the magnitude of the sound wave, i.e., the difference between a wave's minimum and maximum pressure.
- Pitch, the most striking quality of sound, is determined by the frequency of the sound wave. There are two theories of pitch perception: temporal theories and place theories. These theories are not mutually exclusive. Temporal theory explains perception of low frequencies, and place theory accounts for perception of high frequencies.

CRITICAL THINKING QUESTIONS

1 Consider the relation between the eye and the ear. Each organ is made up of various components that perform various functions.
What are the correspondences between the eye components and the ear components in terms of the functions they perform?

2 Why do you suppose that it is high-frequency sounds that are heard poorly by older adults? Why not low- or medium-frequency tones?

OTHER SENSES

Senses other than vision and audition lack the richness of patterning and organization that have led sight and hearing to be called the 'higher senses'. Still, these other senses are vitally important. Smell, for example, is one of the most primitive and most important of the senses. This is probably related to the fact that smell has a more direct route to the brain than any other sense. The receptors, which are in the nasal cavity, are connected to the brain without synapses. Moreover, unlike the receptors for vision and audition, the receptors for smell are exposed directly to the environment – they are right there in the nasal cavity with no protective shield in front of them. (In contrast, the receptors for vision are behind the cornea, and those for audition are protected by the outer and middle ear.) Since smell is clearly an important sensory modality, we begin our discussion of the other senses with smell, also termed olfaction.

Olfaction

Olfaction aids in the survival of our species: It is needed for the detection of spoiled food or escaping gas, and loss of the sense of smell can lead to a dulled appetite. Smell is even more essential for the survival of many other animals. Not surprisingly, then, a larger area of the cortex is devoted to smell in other species than in our own. In fish, the olfactory cortex makes up almost all of the cerebral hemispheres; in dogs, about one-third; in humans, only about one-twentieth. These variations are related to differences in sensitivity to smell.
Taking advantage of the superior smell capability of dogs, both the United States Postal Service and the Bureau of Customs have trained them to check unopened packages for heroin; likewise, trained police dogs can sniff out hidden explosives. Because smell is so well developed in other species, it is often used as a means of communication. Insects and some other animals secrete pheromones, chemicals that float through the air to be sniffed by other members of the species. For example, a female moth can release a pheromone so powerful that males are drawn to her from a distance of several kilometers. It is clear that the male moth responds only to the pheromone and not to the sight of the female; the male will be attracted to a female in a wire container even though she is blocked from view, but not to a female that is clearly visible in a glass container from which the scent cannot escape. (The fascinating novel Perfume by Patrick Süskind dealt with a man who, although born with absolutely no odor of his own, was exquisitely sensitive to all the odors of the world. To others he seemed to have 'extrasensory' powers, since he could, for example, predict the imminent arrival of an unseen person by his or her odor.) Insects use smell to communicate death as well as 'love'. After an ant dies, the chemicals formed from its
decomposing body stimulate other ants to carry the corpse to a refuse heap outside the nest. If a living ant is experimentally doused with the decomposition chemicals, it is carried off by other ants to the refuse heap. When it returns to the nest, it is carried out again. Such premature attempts at burial continue until the 'smell of death' has worn off (Wilson, 1963). Do humans have a vestige of this primitive communication system? Experiments indicate that we can use smell at least to tell ourselves from other people and to distinguish males from females. In one study, observers wore undershirts for 24 hours without showering or using deodorant. The undershirts were collected by the experimenter, who then presented each observer with three shirts to smell. One was the observer's own shirt, while the other two belonged to other people: one was a male's, and the other was a female's. Based only on odor, most observers could identify their own shirt and tell which of the other shirts had been worn by a male or a female (Russell, 1976; Schleidt, Hold, & Attili, 1981). Other studies suggest that we may communicate subtler matters by means of odor. Women who live or work together seem to communicate their stage in the menstrual cycle by means of smell, and over time this results in a tendency for their menstrual cycles to begin at the same time (McClintock, 1971; Preti et al., 1986; Russell, Switz, & Thompson, 1980; Weller & Weller, 1993). However, it is important to remember that these are effects on physiological functioning, not behavior. Although menstrual regularity is associated with healthy reproductive functioning and fertility, it does not have a direct influence on human behavior.
Indeed, many researchers now believe that the behavioral effects of pheromones on humans are likely to be indirect, since social and learning factors influence our behavior more than they do that of other mammals (Coren, Ward, & Enns, 1999).

The olfactory system

The volatile molecules given off by a substance are the stimulus for smell. The molecules leave the substance, travel through the air, and enter the nasal passage (see Figure 4.29). The molecules must also be soluble in fat, because the receptors for smell are covered with a fatlike substance. The olfactory system consists of the receptors in the nasal passage, certain regions of the brain, and interconnecting neural pathways. The receptors for smell are located high in the nasal cavity. When the cilia (hairlike structures) of these receptors come into contact with volatile molecules, an electrical impulse results; this is the transduction process. This impulse travels along nerve fibers to the olfactory bulb, a region of the brain that lies just below the frontal lobes. The olfactory bulb in turn is connected to the olfactory cortex on the inside of the temporal lobes. (Interestingly, there is a direct connection between the olfactory bulb and the part of the cortex known to be involved in the formation of long-term memories; perhaps this is related to the Proustian idea that a distinctive smell can be a powerful aid in retrieving an old memory.)

Sensing intensity and quality

Human sensitivity to smell intensity depends greatly on the substance involved. Absolute thresholds can be as low
as 1 part per 50 billion parts of air. Still, as noted earlier, we are far less sensitive to smell than other species. Dogs, for example, can detect substances in concentrations 100 times lower than those that can be detected by humans (Marshall, Blumer, & Moulton, 1981). Our relative lack of sensitivity is not due to our having less sensitive olfactory receptors. Rather, we just have fewer of them, by about a factor of 100: roughly 10 million receptors for humans versus 1 billion for dogs. Although we rely less on smell than do other species, we are capable of sensing many different qualities of odor. Estimates vary, but a healthy person appears to be able to distinguish among 10,000 to 40,000 different odors, with women generally doing better than men (Cain, 1988). Professional perfumers and whiskey blenders can probably do even better, discriminating among perhaps 100,000 odors (Dobb, 1989). Moreover, we know something about how the olfactory system codes the quality of odors at the biological level. The situation is quite unlike the coding of color in vision, for which three kinds of receptors suffice. In olfaction, many kinds of receptors seem to be involved; an estimate of 1,000 kinds of olfactory receptors is not unreasonable (Buck & Axel, 1991). Rather than coding a specific odor, each kind of receptor may respond to many different odors (Matthews, 1972). So quality may be partly coded by the pattern of neural activity, even in this receptor-rich sensory modality.

Gustation

Gustation, or the sense of taste, gets credit for a lot of experiences that it does not provide. We say that a meal 'tastes' good; but when our ability to smell is eliminated by a bad cold, food seems to lack taste and we may have trouble telling red wine from vinegar. Still, taste is a sense in its own right. Even with a bad cold, we can tell salted from unsalted food.
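The pattern-coding idea described above for olfaction, quality signaled by the profile of activity across broadly tuned receptor types rather than by any single receptor, and one we will meet again for taste, can be sketched with a toy example. The response profiles below are invented purely for illustration; they are not data from the chapter:

```python
# Hypothetical response profiles: each odor evokes a characteristic
# level of activity (0 to 1) in each of three broadly tuned receptor
# types. With patterns, far more odors can be told apart than there
# are receptor types.
ODOR_PATTERNS = {
    "rose":    (0.9, 0.2, 0.1),
    "vinegar": (0.1, 0.8, 0.3),
    "smoke":   (0.2, 0.3, 0.9),
}

def closest_odor(pattern):
    """Identify an odor by which stored profile its activity pattern
    most resembles (smallest squared difference)."""
    def distance(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(ODOR_PATTERNS, key=lambda name: distance(pattern, ODOR_PATTERNS[name]))
```

Even a noisy pattern such as (0.85, 0.25, 0.1) is still closest to the stored 'rose' profile, which is the sense in which a pattern code is robust despite each receptor responding to many different odors.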
In what follows, we will refer to the taste of particular substances, but note that the substance being tasted is not the only factor that determines its taste. Our genetic makeup and experience also affect taste. For example, people vary in their sensitivity to the bitter taste in caffeine and saccharin, and this difference appears to be genetically determined (Bartoshuk, 1979). The role of experience is illustrated by Indians living in the Karnataka province of India, who eat many sour foods and experience citric acid and quinine (the taste of tonic water) as pleasant tasting. Most Westerners experience the opposite sensations. This particular difference seems to be a matter of experience, for Indians raised in Western countries find citric acid and quinine unpleasant tasting (Moskowitz et al., 1975).

Figure 4.29 Olfactory Receptors. (a) Detail of a receptor interspersed among numerous supporting cells. (b) The placement of the olfactory receptors in the nasal cavity.
The gustatory system

The stimulus for taste is a substance that is soluble in saliva. The gustatory system includes receptors that are located on the tongue as well as on the throat and roof of the mouth; the system also includes parts of the brain and interconnecting neural pathways. In what follows, we focus on the receptors on the tongue. These taste receptors occur in clusters, called taste buds, on the bumps of the tongue and around the mouth. At the ends of the taste buds are short, hairlike structures that extend outward and make contact with the solutions in the mouth. The contact results in an electrical impulse; this is the transduction process. The electrical impulse then travels to the brain.

Sensing intensity and quality

Sensitivity to different taste stimuli varies from place to place on the tongue. While any substance can be detected at almost any place on the tongue (except the center), different tastes are best detected in different regions. Sensitivity to salty and sweet substances is best near the front of the tongue; sensitivity to sour substances, along the sides; and sensitivity to bitter substances is best on the soft palate (see Figure 4.30). In the center of the tongue is a region that is insensitive to taste (the place to put an unpleasant pill). While absolute thresholds for taste are generally very low, jnds for intensity are relatively high (Weber's constant is often about 0.2). This means that if you are increasing the amount of spice in a dish, you usually must add more than 20 percent or you will not taste the difference. Recent research suggests that 'tongue maps', such as the one in Figure 4.30, may be oversimplified in that they suggest that if the nerves leading to a particular region were cut, all sensation would be lost. However, this does not occur, because taste nerves inhibit one another. Damaging one nerve abolishes its ability to inhibit others; thus, if you cut the nerves to a particular region, you also reduce the inhibitory effect, and the result is that there is little change in the everyday experience of taste (Bartoshuk, 1993).

Figure 4.30 Taste Areas. Although any substance can be detected anywhere on the tongue – except in the center – different areas are maximally sensitive to different tastes. The area labeled 'sweet', for example, is most sensitive to sweet tastes. (E. H. Erikson, 'Sensory Neural Patterns in Gustation', from Zotterman (ed.), Olfaction and Taste, Vol. 1, pp. 205–213. Copyright © 1963, with kind permission of Elsevier Science, Ltd.)

There is an agreed-upon vocabulary for describing tastes. Any taste can be described as one or a combination of the four basic taste qualities: sweet, sour, salty, and bitter (McBurney, 1978). These four tastes are best revealed in sucrose (sweet), hydrochloric acid (sour), sodium chloride (salty), and quinine (bitter). When people are asked to describe the tastes of various substances in terms of just the four basic tastes, they have no trouble doing this. Even if they are given the option of using additional qualities of their own choice, they tend to stay with the four basic tastes (Goldstein, 1989). The gustatory system codes taste in terms of both the specific nerve fibers activated and the pattern of activation across nerve fibers. There appear to be four types of nerve fibers, corresponding to the four basic tastes. While each fiber responds somewhat to all four basic tastes, it responds best to just one of them. Hence, it makes sense to talk of 'salty fibers' whose activity signals saltiness to the brain. Thus, there is a remarkable correspondence between our subjective experience of taste and its neural coding.
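The 20-percent rule for taste intensity mentioned earlier follows directly from Weber's law. A minimal sketch, assuming the Weber fraction of about 0.2 cited in the chapter:

```python
# Weber's law for taste intensity: an added increment is detectable
# only if it exceeds roughly k times the current intensity, where the
# Weber fraction k is about 0.2 for taste (as stated in the chapter).

WEBER_CONSTANT_TASTE = 0.2

def is_detectable(current_amount, added_amount, k=WEBER_CONSTANT_TASTE):
    """Just-noticeable difference: the increment must exceed k times
    the amount already present."""
    return added_amount > k * current_amount

# Adding 1 g of spice to a dish that already contains 10 g (a 10 percent
# increase) goes unnoticed; adding 3 g (30 percent) is tasted.
```

Note that the threshold scales with the starting level: the same 3 g increment that is obvious against 10 g would go unnoticed against 100 g.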
Nonetheless, our taste experiences may be influenced not only by receptor activation but also by people's expectations regarding the foods they eat. For instance, Plassmann et al. (2008) asked participants to taste the same wine, labeled as costing either $10 or $90. Participants perceived the 'more expensive' wine as tasting better. Although one might be tempted to interpret this simply as a response bias, brain-imaging measures indicated that areas of the brain associated with pleasure were more activated by the 'more expensive' wine, producing a genuine shift in taste experience. Similar results were reported in the domain of olfaction by Rachel Herz (2003). She randomly labeled perfumes as naturally or artificially scented. Participants consistently rated the products labeled as natural as smelling better, regardless of whether the product itself was natural or artificial. Examples like these show that although receptor activation may provide the dominant information underlying sensory experience, cognitive factors also contribute. As will be seen in the next chapter, our conscious experiences of the world are often the result of a complex 'give and take' between patterns of sensory activation and expectations.

Pressure and temperature

Traditionally, touch was thought to be a single sense. Today, it is considered to include three distinct skin senses: one responding to pressure, another to temperature, and a third to pain. This section briefly considers pressure and temperature; the next discusses pain.
Pressure

The stimulus for sensed pressure is physical pressure on the skin. Although we are not aware of steady pressure on the entire body (such as air pressure), we can discriminate among variations in pressure over the surface of the body. Some parts of the body are better than others at sensing the intensity of pressure: the lips, nose, and cheeks are the most sensitive, while the big toe is least sensitive. These differences are closely related to the number of receptors that respond to the stimulus at each of these locations. In sensitive regions, we can detect a force as small as 5 milligrams applied to a small area. However, like other sensory systems, the pressure system shows profound adaptation effects. If you hold a friend's hand for several minutes without moving, you will become insensitive to its pressure and cease to feel it.

When we are actively exploring the environment through touch, the motor senses contribute to our experience. Through active touch alone we can readily identify familiar objects, recognizing coins, keys, and other small objects that we keep in our pockets and purses (Klatzky, Lederman, & Metzger, 1985).

Temperature

The stimulus for temperature is the temperature of our skin. The receptors are neurons just under the skin. In the transduction stage, cold receptors generate a neural impulse when there is a decrease in skin temperature, while warm receptors generate an impulse when there is an increase in skin temperature (Duclaux & Kenshalo, 1980; Hensel, 1973). Hence, different qualities of temperature can be coded primarily by which receptors are activated.
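This labeled-line coding can be made concrete with a short sketch. The detection thresholds are the values reported by Kenshalo, Nafe, and Brooks (1961); the function name and the all-or-none simplification are ours:

```python
WARM_THRESHOLD_C = 0.4   # smallest detectable warming, in degrees C
COOL_THRESHOLD_C = 0.15  # smallest detectable cooling, in degrees C

def active_thermoreceptors(skin_temp_change_c):
    """Toy labeled-line model: warm receptors fire to a detectable rise
    in skin temperature, cold receptors to a detectable drop. Real
    receptors fire at graded rates; this all-or-none version only
    illustrates how receptor identity can code stimulus quality."""
    active = set()
    if skin_temp_change_c >= WARM_THRESHOLD_C:
        active.add('warm')
    if skin_temp_change_c <= -COOL_THRESHOLD_C:
        active.add('cold')
    return active
```

Because which receptor type is active identifies the quality (warming versus cooling), the brain can decode temperature changes from receptor identity alone.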
However, this specificity of neural reaction has its limits. Cold receptors respond not only to low temperatures but also to very high temperatures (above 45 degrees centigrade, or 113 degrees Fahrenheit). Consequently, a very hot stimulus will activate both warm and cold receptors, as you may have experienced if you have ever accidentally plunged your foot into a very hot bath. Because maintaining body temperature is crucial to survival, it is important that we be able to sense small changes in our skin temperature. When the skin is at its normal temperature, we can detect a warming of only 0.4 degrees centigrade and a cooling of just 0.15 degrees centigrade (Kenshalo, Nafe, & Brooks, 1961). Our temperature sense adapts completely to moderate changes in temperature, so that after a few minutes the stimulus feels neither cool nor warm. This adaptation explains the strong differences of opinion about the temperature of a swimming pool between those who have been in it for a while and those who are first dangling a foot in it.

Pain

Of all our senses, none captures our attention like pain. We may sometimes take a blasé view of the other senses, but it is hard to ignore pain. Yet for all the discomfort it causes, we would be at risk if we had no sense of pain. It would be difficult for children to learn not to touch a hot stove, or to stop chewing their tongues. In fact, some people are born with a rare genetic disorder that makes them insensitive to pain, and they typically die young, owing to tissue deterioration resulting from wounds that could have been avoided had they been able to feel pain.

The pain system

Any stimulus that is intense enough to cause tissue damage is a stimulus for pain. It may be pressure, temperature, electric shock, or chemical irritants. Such a stimulus causes the release of chemical substances in the skin, which in turn stimulate distinct high-threshold receptors (the transduction stage).
These receptors are neurons with specialized free nerve endings, and researchers have identified several types (Brown & Deffenbacher, 1979). With regard to variations in the quality of pain, perhaps the most important distinction is between the kind of pain
we feel immediately upon suffering an injury, called phasic pain, and the kind we experience after the injury has occurred, called tonic pain. Phasic pain is typically a sharp, immediate pain that is brief in duration (that is, it rapidly rises and falls in intensity), whereas tonic pain is typically dull and long lasting. To illustrate, if you sprain your ankle, you immediately feel a sharp undulating pain (phasic pain), but after a while you start to feel the steady pain caused by the swelling (tonic pain). The two kinds of pain are mediated by two distinct neural pathways, and these pathways eventually reach different parts of the cortex (Melzack, 1990).

Nonstimulus determinants of pain

More than any other sensation, the intensity and quality of pain are influenced by factors other than the immediate stimulus. These factors include the person's culture, expectations, and previous experience. The striking influence of culture is illustrated by the fact that some non-Western societies engage in rituals that seem unbearably painful to Westerners. A case in point is the hook-swinging ceremony practiced in some parts of India:

The ceremony derives from an ancient practice in which a member of a social group is chosen to represent the power of the gods. The role of the chosen man (or 'celebrant') is to bless the children and crops in a series of neighboring villages during a particular period of the year. What is remarkable about the ritual is that steel hooks, which are attached by strong ropes to the top of a special cart, are shoved under his skin and muscles on both sides of his back [see Figure 4.31]. The cart is then moved from village to village. Usually the man hangs on to the ropes as the cart is moved about. But at the climax of the ceremony in each village, he swings free, hanging only from the hooks embedded in his back, to bless the children and crops.
Astonishingly, there is no evidence that the man is in pain during the ritual; rather, he appears to be in a 'state of exaltation'. When the hooks are later removed, the wounds heal rapidly without any medical treatment other than the application of wood ash. Two weeks later the marks on his back are scarcely visible. (Melzack, 1973)

Figure 4.31 Culture and Pain. Two steel hooks are inserted in the back of the celebrant in the Indian hook-swinging ceremony. Right: The celebrant hangs onto the ropes as a cart takes him from village to village. As he blesses the village children and crops, he swings freely, suspended by the hooks in his back. (D. D. Kosambi (1967) 'Living Prehistory in India', from Scientific American 215:105. Copyright © 1967 by D. D. Kosambi. Reprinted by permission of Dr. Meera Kosambi and Mr. Jijoy B. Surka.)

Clearly, pain is as much a matter of mind as of sensory receptors. Phenomena like the one just described have led to the gate control theory of pain (Melzack & Wall, 1982, 1988). According to this theory, the sensation of pain requires not only that pain receptors in the skin be active but also that a 'neural gate' in the spinal cord be open, allowing the signals from the pain receptors to pass to the brain (the gate closes when critical fibers in the spinal cord are activated). Because the neural gate can be closed by signals sent down from the cortex, the perceived intensity of pain can be reduced by the person's mental state, as in the hook-swinging ceremony. What exactly is the 'neural gate'? It appears to involve a region of the midbrain called the periaqueductal gray, or PAG for short; neurons in the PAG are connected to other neurons that inhibit cells that would normally carry the pain signals arising in the pain receptors. So when the PAG neurons are active, the gate is closed; when the PAG neurons are not active, the gate is open.
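The gate control logic just described reduces to a two-condition rule: pain is felt only when receptors fire and the gate is open. The code below is a deliberately simple sketch (the threshold and names are invented), not a physiological model:

```python
def pain_reaches_brain(receptors_active, pag_activity, gate_threshold=0.5):
    """Gate control in miniature: pain is felt only when skin receptors
    are firing AND the spinal gate is open. Descending PAG activity
    above `gate_threshold` closes the gate (hypothetical units)."""
    gate_open = pag_activity < gate_threshold
    return receptors_active and gate_open

# Hooks in the back with strong descending control (a 'state of
# exaltation'): receptors fire, but the gate is closed.
felt_during_ritual = pain_reaches_brain(True, pag_activity=0.9)

# The same injury with little descending control is felt as pain.
felt_normally = pain_reaches_brain(True, pag_activity=0.1)
```

The two-condition structure is what lets mental state (via descending signals that drive the PAG) override an intense peripheral stimulus.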
Interestingly, the PAG appears to be the main place where strong painkillers such as morphine affect neural processing. Morphine is known to increase neural activity in the PAG, which, as we have just seen, should result in a closing of the neural gate. Hence, the well-known analgesic effects of morphine fit with the gate control theory.
Moreover, our body produces certain chemicals, called endorphins, that act like morphine to reduce pain, and these chemicals, too, are believed to create their effect by acting on the PAG in such a way as to close the neural gate.

There are other striking phenomena that fit with gate control theory. One is stimulation-produced analgesia, in which stimulation of the PAG acts like an anesthetic. One can perform abdominal surgery on a rat using only PAG stimulation as the anesthetic, with the rat showing no sign of experiencing pain (Reynolds, 1969). A milder version of this phenomenon is familiar to all of us: rubbing a hurt area relieves pain, presumably because the pressure stimulation closes the neural gate. A phenomenon related to stimulation-produced analgesia is the reduction in pain resulting from acupuncture, a healing procedure developed in China in which needles are inserted into the skin at critical points. Twirling these needles has been reported to eliminate pain entirely, making it possible to perform major surgery on a conscious patient (see Figure 4.32). Presumably, the needles stimulate nerve fibers that lead to a closing of the pain gate.

At the psychological level, then, we have evidence that drugs, cultural beliefs, and various nonstandard medicinal practices can dramatically reduce pain. However, all of these factors may stem from a single biological process. Here, then, is a case in which research at the biological level may actually unify findings at the psychological level. The interplay between the psychological and biological research on pain is typical of the successful interaction between these two approaches to sensation. As we commented at the beginning of the chapter, in perhaps no other area of psychology have the biological and psychological approaches worked so well together. Again and again we have seen that neural events occurring in receptors can explain phenomena occurring at the psychological level. Thus, in discussing vision we showed how variations in sensitivity and acuity – which are psychological phenomena – can be understood as the direct consequence of how different kinds of receptors (rods versus cones) connect to ganglion cells. Also with regard to vision, we pointed out how psychological theories of color vision led to discoveries at the biological level (for example, three kinds of cone receptors). In the case of audition, the place theory of frequency perception was initially a psychological theory, and it led to research on the physiology of the basilar membrane. If ever anyone needed justification for intertwining psychological and biological research, the study of sensation provides it.

Figure 4.32 A Typical Acupuncture Chart. The numbers indicate sites at which needles can be inserted and then either twisted, electrified, or heated. An impressive analgesia results in many cases.

INTERIM SUMMARY

- The stimuli for smell are the molecules given off by a substance, which travel through the air and activate olfactory receptors located high in the nasal cavity.
- The stimulus for taste is a substance that is soluble in saliva; many of the receptors occur in clusters on the tongue (taste buds).
- Two of the skin senses are pressure and temperature. Sensitivity to pressure is greatest at the lips, nose, and cheeks, and least at the big toe. We are highly sensitive to temperature and are able to detect a change of less than 1 degree centigrade. We code different kinds of temperatures primarily by whether warm or cold receptors are activated.
- Any stimulus that is intense enough to cause tissue damage is a stimulus for pain. Phasic pain is typically brief and rapidly rises and falls in intensity; tonic pain is typically long-lasting and steady. Sensitivity to pain is greatly influenced by factors other than the noxious stimulus, including expectations and cultural beliefs.
CRITICAL THINKING QUESTIONS

1 Some people have described sensory experiences that cross over between two sensory systems. This phenomenon, called synesthesia, apparently can occur both through natural causes and under the influence of a psychoactive drug. For example, people have reported being able to see the 'color' of music, or being able to hear the 'tunes' associated with different smells. On the basis of what you know about sensory coding, what might cause such experiences?

2 How would your life change if you did not have a sense of pain? How would it change if you did not have a sense of smell? Which do you think would be worse, and why?
SEEING BOTH SIDES

SHOULD OPIOIDS BE USED FOR TREATING CHRONIC PAIN?

Opioids are an appropriate treatment for chronic pain
Robert N. Jamison, Harvard Medical School

Pain is a serious problem in the United States and throughout the rest of the world. About a third of the American population, or more than 80 million people, are severely affected by pain. Pain is the major reason people visit their primary care physicians; in fact, 70 million people see a physician each year because of pain. Chronic pain can affect all aspects of your life, interfering with sleep, employment, social life, and daily activities. Persons who have chronic pain (defined as pain that lasts longer than three months) frequently report depression, anxiety, irritability, sexual problems, and decreased energy. Chronic pain accounts for 21 percent of emergency room visits and 25 percent of annual missed work days and, when direct and indirect costs are considered, imposes a greater economic burden than any other disease, with estimates of annual costs adding up to $100 billion (Stewart et al., 2003). Chronic pain has remained a stubborn, debilitating problem for untold millions of individuals.

Despite medical advances in treating pain, opioids remain the most potent class of medications available to treat pain (McCarberg & Billington, 2006). Yet many physicians and healthcare professionals are reluctant to support the use of opioid medication for patients with chronic pain because of concerns about adverse effects, tolerance, diversion, and addiction. Some clinicians worry that regular use of prescription opioid analgesics will contribute to dependence and impaired cognition, and may lead to the eventual use of other street drugs like heroin. For the vast majority of those individuals prescribed opioids for pain, however, these fears have been unfounded.
Researchers and clinicians cite the relatively low incidence of abuse and addiction among patients with chronic pain, and report that tolerance appears not to develop in those patients with stable pain pathophysiology. They suggest that the potential for increased functioning and improved quality of life significantly outweighs the minimal risk of abuse. Investigators have also suggested that chronic opioid therapy may decrease the cost of rehabilitation programs for pain patients while improving outcomes. A number of years ago, my colleagues and I initiated a prospective study of opioid therapy for chronic noncancer back pain (Jamison et al., 1998). The results suggested that opioid therapy had a positive effect on pain and mood. Most important, opioid therapy for chronic back pain was used without significant risk of abuse, and we found that individuals in the long-term opioid trial were compliant in coming off the opioid medication without signs of dependency or addiction. The results of our studies and others point to the overwhelming evidence that addiction rarely occurs when opioids are used for the treatment of pain. This has been found to be true in both human and animal studies.

To further help minimize the risks of opioid use, recent efforts have focused on identifying those individuals at high risk for misuse of opioid medication, whether due to past behavior or family history (Butler et al., 2008). Protocols to assist clinicians in assessing risk and monitoring for aberrant drug-related behavior are available, including validated self-report questionnaires (Butler et al., 2007), improved toxicology screening, regular implementation of opioid agreements, and motivational counseling. These procedures have been increasingly adopted and have been shown to decrease the risk of opioid misuse and to increase compliance.
Thus, when a risk of potential opioid misuse exists, careful monitoring, support, and supervision have been shown to further enhance safety and improve the risk/benefit ratio (Savage et al., 2008). The future also holds promise for the treatment of chronic pain with abuse-resistant opioid formulations that help combat the diversion of opioids into the hands of others who may want the medication only for its euphoric properties. We remain hopeful that other treatments using different delivery systems will also be discovered to help those who suffer needlessly from back pain, headaches, arthritis, and pain associated with the residual treatment of cancer and other chronic diseases. In the meantime, further education is needed to eradicate prejudices about the use of opioids for pain. The myth that all those who request opioid medication for their noncancer pain are drug abusers should be challenged. We know that, when used responsibly and intelligently, opioids can help to significantly diminish pain. The goal is to improve the quality of life of the millions of people who continue to live each day in severe pain. The World Health Organization has declared that many persons with pain have a drug problem – they do not have access to the medication that will help their pain the most. The undertreatment of pain continues to be a needless tragedy; when used responsibly, opioids can be an appropriate treatment for many who experience debilitating chronic pain.
Why opioids should be less frequently used for treating people with chronic pain
Dennis C. Turk, University of Washington School of Medicine

Perhaps the earliest mention of the use of opioids for treating pain is contained in the Ebers papyrus, dating back to the sixteenth century BCE, where opium is recommended by the goddess Isis as a treatment for the god Ra's headaches. Since then there has been little question as to the effectiveness of opioids for the treatment of acute pain – such as that following surgery. The long-term use of opioids, even for pain associated with cancer, has been much more controversial, and opinion has swung from common use, to resistance, and back again.

In the 1960s and 1970s, two trends challenged thinking about the medical use of opioids. The behavioral scientist Wilbert Fordyce (1976) pointed out that it is impossible to know how much pain someone experiences other than by what the person tells you verbally or demonstrates through behavior. He suggested that these 'pain behaviors' (overt expressions of pain, distress, and suffering such as moaning, limping, and grimacing) were observable and thus capable of being responded to by others, including family members and physicians. Fordyce also suggested that opioids could serve as a negative reinforcer of pain behaviors. That is, if the patient took opioid medication as commonly prescribed, 'as needed', pain behaviors might increase in order to obtain the pain-relieving and mood-elevating (positively reinforcing) effects of the medication. Fordyce suggested that elimination of the opioid medication would contribute to extinction of the pain behaviors. Dennis Turk and Akiko Okifuji (1997) showed that physicians were more likely to prescribe the chronic use of opioids if the patients were depressed, complained that pain impacted their lives greatly, and displayed a large number of pain behaviors, even though there were no differences in either the actual physical pathology detected or the reported pain severity.
Thus, the opioids appeared to be prescribed in response to emotional distress, not specifically for pain or disease. The reinforcing properties of the opioids could thereby maintain the patients' complaints and even their experience of pain.

The second development that challenged the use of opioids for chronic pain was the social movement of the 1970s to combat drug abuse – 'Just say no'. Unfortunately, the campaign to reduce the inappropriate use of drugs was extended into clinical areas. Thus, even appropriate uses of opioids were influenced by concerns about misuse and abuse. Fears of addiction, tolerance, and adverse side effects became prominent, and they were not unfounded (Ballantyne & LaForge, 2007).

Addiction is often confused with physical dependence. Addiction refers to a behavioral pattern characterized by overwhelming involvement with the use of a drug, securing of its supply, and a tendency to relapse despite physical, psychological, and social harm to the user. Physical dependence develops with continued use of many drugs, not just opioids, as the body becomes tolerant to their effects. Physical dependence is a pharmacological property of a drug, characterized by the occurrence of withdrawal following abrupt discontinuation of the substance or administration of a drug antagonist, and does not imply an aberrant psychological state or behavior.

One concern with the use of opioids is that with long-term use, patients will require escalating doses of the medication to obtain the same level of pain relief. At times it is difficult to distinguish the need for an increased dose due to tolerance from a progression of the disease process that might be increasing the pain severity.
There is a growing body of research, primarily on animals, indicating that prolonged use of opioids sensitizes peripheral nerves, leading to a reduction in the threshold for perceiving pain – 'hyperalgesia' (Angst & Clark, 2006; Chang, Chen, & Mao, 2007). Paradoxically, then, prolonged use of opioids appears to lower thresholds for pain, producing a need for higher doses of the drug to achieve the same analgesic effect.

Beginning in the mid-1980s, Ronald Melzack (1990), and Russell Portenoy and Kathleen Foley (1986), began to question the generalization from the illicit to the medical use of opioids. They suggested that if the use of opioids produced symptomatic improvement in chronic pain patients, long-term use might be a reasonable treatment, and that the failure to treat pain sufferers with appropriate and available opioids would be unethical.

A number of studies have evaluated the effectiveness of long-term use of opioids in the treatment of chronic pain. These studies report an approximately 30 percent reduction in pain in fewer than 40 percent of patients (Kalso, Edwards, Moore, & McQuay, 2004; Furlan, Sandoval, Mailis-Gagnon, & Tunks, 2006). Even when pain is reduced, studies have found little support to indicate that the benefits of opioids are accompanied by significant improvement in physical functioning and reductions in emotional distress. Moreover, some studies have reported that both pain severity and physical functioning improve following withdrawal from opioids (Flor, Fydrich, & Turk, 1992). 'Long-term' deserves its quotation marks here, as the average duration of the published double-blind, randomized controlled studies of the treatment of chronic pain with opioids
is less than five weeks (Chou, Clark, & Helfand, 2003). Moreover, the sizes of the samples included in these studies are small and the rates of dropouts are high, averaging around 30 percent (Noble, Tregear, Tredwell, & Schoelles, 2008). Finally, although many of the studies report significant reductions in pain severity without serious problems, some have noted particular problems with abuse and intolerable side effects (e.g., persistent constipation, depletion of sex hormones, neurotoxicity). Urine toxicology screening for opioid misuse suggests that as many as 35 percent of patients treated with opioids for chronic pain are not taking the medication as prescribed and consume a range of illicit substances in addition to opioids (Turk, Swanson, & Gatchel, in press). Also troubling is the rapidly increasing number of cases of non-medical use of prescription opioids (i.e., taking the drug for its mood-elevating effect rather than to treat physical pain), and of associated deaths, correlated with the greater availability of these medications (SAMHSA, 2004).

The results of the available studies raise serious concerns about the long-term use of opioids: (1) the actual benefits reported are rather modest and there are no cures associated with long-term use of opioids; (2) few studies have shown any improvement in the patients' physical or psychological functioning; (3) adverse side effects can be substantial; (4) studies have reported significant problems with misuse, abuse, and diversion of the drugs; and (5) the outcomes of pain clinics have demonstrated reduction of pain associated with reduction of opioids. The central question is not whether chronic pain patients should ever be treated with opioids but, rather, what are the characteristics of patients who are able to reduce pain and improve physical and psychological functioning without significant problems accompanying long-term use? At the present time it seems premature to recommend that opioids be used on a long-term basis for a significant number of patients, although there is no question that some are able to benefit without significant aberrant behaviors.

CHAPTER SUMMARY

At the psychological level, sensations are experiences associated with simple stimuli. At the biological level, sensory processes involve the sense organs and connecting neural pathways, and are concerned with the initial stages of acquiring stimulus information. The senses include vision; audition (hearing); olfaction (smell); gustation (taste); the skin senses, which include pressure, temperature, and pain; and the body senses.

One property that can be used to describe all senses is sensitivity. Sensitivity to stimulus intensity is measured by the absolute threshold, which is the minimum amount of stimulus energy that can be reliably detected. Sensitivity to a change in intensity is measured by the difference threshold, or jnd, the minimum difference between two stimuli that can be reliably detected. The amount of change needed for detection to occur increases with the intensity of the stimulus and is approximately proportional to it (Weber's law). Another property of great interest is the relation between stimulus intensity and the magnitude of sensation for above-threshold stimuli. This relation is captured in Stevens' power law, which states that perceived stimulus magnitude is a power function of physical stimulus magnitude. The exponent of the power function differs for different sensory modalities; for most, like sound intensity, the exponent is less than 1.0, which means that the function relating perceived to physical intensity is concave down. For others, like pain intensity, the exponent is greater than 1.0, which means that the function relating perceived to physical intensity is concave up.
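Both psychophysical laws in this summary are simple enough to state as code. The sketch below is illustrative: the taste Weber constant of about 0.2 comes from this chapter, while the particular exponents (about 0.6 for loudness, well above 1 for pain-like stimuli) are representative values, not results reported here:

```python
def jnd(intensity, weber_constant):
    """Weber's law: the just noticeable difference is a constant
    fraction of the baseline stimulus intensity."""
    return weber_constant * intensity

def perceived_magnitude(intensity, exponent, k=1.0):
    """Stevens' power law: sensation = k * intensity ** exponent."""
    return k * intensity ** exponent

# With the taste constant of about 0.2, a dish at 10 'spice units'
# needs at least 2 more units before the change can be tasted.
spice_jnd = jnd(10.0, 0.2)

# Exponent below 1 (loudness): doubling intensity less than doubles
# the sensation (concave down). Exponent above 1 (pain-like stimuli):
# doubling intensity more than doubles it (concave up).
loudness_growth = perceived_magnitude(2.0, 0.6) / perceived_magnitude(1.0, 0.6)
pain_growth = perceived_magnitude(2.0, 3.5) / perceived_magnitude(1.0, 3.5)
```

Comparing the two growth ratios against 2.0 is a quick way to see why an exponent below 1 yields a concave-down function and an exponent above 1 a concave-up one.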
Sensation is often viewed as the process of detecting a signal that is embedded in noise. In some cases, a signal may be ‘detected’ even when only noise is present; this is referred to as a false alarm, while correctly detecting a signal that is present is called a hit. The use of signal detection theory allows the process of detecting a stimulus to be decomposed into two separate numbers: one representing the observer’s sensitivity to the signal and the other representing the observer’s bias to respond ‘signal present’. Signal-detection theory is not only useful as a fundamental scientific tool, but has important practical applications, such as evaluating the performance of a radiologist trying to detect abnormalities in noisy x-rays. Every sense modality must recode or transduce its physical energy into neural impulses. This transduction process is accomplished by the receptors. The receptors and connecting neural pathways code the intensity of a stimulus primarily by the rate of
neural impulses and their patterns; they code the quality of a stimulus according to the specific nerve fibers involved and their pattern of activity. The stimulus for vision is light, which is electromagnetic radiation in the range from 400 to 700 nanometers. Each eye contains a system for forming the image (including the cornea, pupil, and lens) and a system for transducing the image into electrical impulses. The transduction system is in the retina, which contains the visual receptors, that is, the rods and cones. Cones operate at high light intensities, lead to sensations of color, and are found mainly in the center (or fovea) of the retina; rods operate at low intensities, lead to colorless sensations, and are found mainly in the periphery of the retina. Our sensitivity to the intensity of light is mediated by certain characteristics of the rods and cones. Of particular importance is the fact that rods connect to a larger number of ganglion cells than do cones. Because of this difference in connectivity, visual sensitivity is greater when it is based on rods than when it is based on cones, but visual acuity is greater when it is based on cones than when it is based on rods. Different wavelengths of light lead to sensations of different colors. The appropriate mixture of three lights of widely separated wavelengths can be made to match almost any color of light. This fact and others led to the development of trichromatic theory, which holds that perception of color is based on the activity of three types of receptors (cones), each of which is most sensitive to wavelengths in a different region of the spectrum. There are four basic color sensations: red, yellow, green, and blue. Mixtures of these make up our experiences of color, except that we do not see reddish-greens and yellowish-blues. 
This can be explained by opponent-color theory, which proposes that there are red-green and yellow-blue opponent processes, each of which responds in opposite ways to its two opponent colors. Trichromatic and opponent-color theories have been successfully combined through the proposal that they operate at different neural locations in the visual system. The stimulus for audition (hearing) is a wave of pressure changes (a sound wave). The ear includes the outer ear (the external ear and the auditory canal); the middle ear (the eardrum and a chain of bones); and the inner ear. The inner ear includes the cochlea, a coiled tube that contains the basilar membrane, which supports the hair cells that serve as the receptors for sound. Sound waves transmitted by the outer and middle ear cause the basilar membrane to vibrate, which bends the hair cells and produces a neural impulse. Pitch, the most striking quality of sound, increases with the frequency of the sound wave. The fact that we can hear the pitches of two different tones sounded simultaneously suggests that there are many receptors that respond to different frequencies. Temporal theories of pitch perception postulate that the pitch heard depends on the temporal pattern of neural responses in the auditory system, which is itself determined by the temporal pattern of the sound wave. Place theories postulate that each frequency stimulates a particular place along the basilar membrane more than other places, and that the place of maximum movement determines which pitch is heard. There is room for both theories: temporal theory explains the perception of low frequencies, while place theory accounts for the perception of high frequencies. Olfaction (smell) is even more important to nonhuman species than to humans. 
Many species use specialized odors (pheromones) for communication, and humans seem to possess a vestige of this system. The stimuli for smell are the molecules given off by a substance. The molecules travel through the air and activate olfactory receptors located high in the nasal cavity. There are many kinds of receptors (on the order of 1,000). A normal person can discriminate among 10,000 to 40,000 different odors, with women generally doing better than men. Gustation (taste) is affected not only by the substance being tasted but also by genetic makeup and experience. The stimulus for taste is a substance that is soluble in saliva; many of the receptors occur in clusters on the tongue (taste buds). Sensitivity varies from one place to another on the tongue. Any taste can be described as one or a combination of the four basic taste qualities: sweet, sour, salty, and bitter. Different qualities of taste are coded partly in terms of the specific nerve fibers activated – different fibers respond best to one of the four taste sensations – and partly in terms of the pattern of fibers activated.
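The signal detection framework summarized at the start of this chapter reduces detection performance to two numbers: sensitivity and bias. Under the standard equal-variance Gaussian model, sensitivity (d′) and bias (the criterion, c) can be computed from an observer's hit and false-alarm rates. The sketch below is illustrative only; the function name and example rates are ours, not from the text.

```python
from statistics import NormalDist

def dprime_and_bias(hit_rate, fa_rate):
    """Sensitivity (d') and criterion (c) from hit and false-alarm
    rates, assuming the equal-variance Gaussian signal detection model."""
    z = NormalDist().inv_cdf  # convert a proportion to a z-score
    d_prime = z(hit_rate) - z(fa_rate)   # separation of signal from noise
    c = -(z(hit_rate) + z(fa_rate)) / 2  # 0 = unbiased; > 0 = conservative
    return d_prime, c

# An observer with 84% hits and 16% false alarms is moderately
# sensitive (d' near 2) and essentially unbiased (c near 0).
d, c = dprime_and_bias(0.84, 0.16)
```

This is why the decomposition matters in practice: a radiologist who calls almost every x-ray 'abnormal' would show a high hit rate but also a high false-alarm rate; d′ separates genuine sensitivity from that liberal response bias.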
Two of the skin senses are pressure and temperature. Sensitivity to pressure is greatest at the lips, nose, and cheeks, and least at the big toe. We are very sensitive to temperature, being able to detect a change of less than one degree centigrade. We code different kinds of temperature primarily by whether hot or cold receptors are activated. Any stimulus that is intense enough to cause tissue damage is a stimulus for pain. There are two distinct kinds of pain, which are mediated by different neural pathways. Phasic pain is typically brief and rises and falls rapidly in intensity; tonic pain is typically long lasting and steady. Sensitivity to pain is greatly influenced by factors other than the noxious stimulus, including expectations and cultural beliefs. These factors seem to exert their influence by opening or closing a neural gate in the spinal cord and midbrain; pain is felt only when pain receptors are activated and the gate is open.
CORE CONCEPTS
sensations; perception; back projections; absolute threshold; psychophysical procedures; trials; dark adaptation; photon; standard; difference threshold; just noticeable difference (jnd); Weber fraction; suprathreshold; power function; exponent; signal detection theory; signal versus noise; sensation versus bias; hits and false alarms; sensitivity and bias; expectation; temporal pattern; retina; rods and cones; fovea; transduction; dark adaptation curve; spatial acuity; visual acuity; contrast acuity; color constancy; hue; brightness; saturation; color-matching experiment; metamers; dichromatism; frequency (of a tone); hertz; pitch; amplitude (of a tone); loudness; timbre; eardrum; auditory canal; oval window; malleus, incus, and stapes (of the ear); cochlea; basilar membrane; hair cells; temporal theory; resonance; place theory; pheromones; olfactory bulb; olfactory cortex
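The color-matching fact noted earlier — that a mixture of three lights of widely separated wavelengths can match almost any color — amounts to solving three linear equations, one per cone type: the mixture must produce the same (S, M, L) cone excitations as the target light. The sketch below uses hypothetical cone-excitation numbers chosen purely for illustration; real matches are measured psychophysically.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

def match_color(primaries, target):
    """Find intensities of three primary lights whose mixture produces
    the same (S, M, L) cone excitations as the target, via Cramer's rule.
    `primaries` is three (S, M, L) triples -- hypothetical values."""
    # Each column of the system matrix is one primary's cone excitations.
    A = [[primaries[j][i] for j in range(3)] for i in range(3)]
    D = det3(A)
    weights = []
    for col in range(3):
        Ac = [row[:] for row in A]  # copy, then swap the target into column `col`
        for i in range(3):
            Ac[i][col] = target[i]
        weights.append(det3(Ac) / D)
    return weights

# Three made-up primaries (bluish, greenish, reddish) and a target light
# whose cone excitations equal the 0.5/0.3/0.2 mixture of the primaries:
primaries = [(0.9, 0.2, 0.05), (0.1, 0.9, 0.4), (0.0, 0.1, 0.9)]
weights = match_color(primaries, (0.48, 0.39, 0.325))
```

Two physically different lights that require identical weights against the same primaries are metamers: they excite the three cone types identically and therefore look the same.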
WEB RESOURCES
http://www.atkinsonhilgard.com/ Take a quiz, try the activities and exercises, and explore web links.
http://www.ncbi.nlm.nih.gov/books/bv.fcgi?rid=hstat4.chapter.14810 This comprehensive site will help you explore the many aspects of cochlear implants. Click on the Interactive Table of Contents and search through topics such as the benefits and limitations of implants, and learn more about future research.
http://www.exploratorium.edu/learning_studio/cow_eye/index.html Did you ever want to dissect a cow eye? Well, here's your chance! You will soon see a similarity between the cow eye and the human eye.
CD-ROM LINKS
Psyk.Trek 3.0: Check out CD Unit 3, Sensation and Perception: 3a Light and the eye; 3b The retina; 3c Vision and the brain; 3h The sense of hearing.