

9 Language and Thought

For more Cengage Learning textbooks, visit www.cengagebrain.co.uk

In the 1970s, jogging became a popular form of exercise in the United States as well as in Europe. Some joggers reported experiencing a ‘runner’s high’, a feeling of intense euphoria that presumably came with intense exercise. What could be causing this? At about the same time, neuroscientists discovered a new class of endogenous chemicals (chemicals produced by the body) that act like morphine, which came to be called ‘endorphins’ (for endogenous morphine). Many scientists then concluded that intense exercise leads to an increase in endorphins, which in turn is responsible for a runner’s high. This hypothesis became extremely well known. Alas, further biological work challenged the endorphin theory of a runner’s high. Although endorphin levels in the blood do indeed rise with exercise, the endorphins produced do not pass from the circulating blood into the brain, so they could not be the cause of the mood changes (Kolata, 2002).

This is a nice example of scientific thinking. First, some fact about a mood change (a runner’s high) is reduced to an alteration in body chemistry (increased endorphins). But further work shows that the change in body chemistry does not affect the right organ. The episode involves many aspects of thinking and language. New concepts (like endorphins) are introduced, reasoning with these concepts is used to generate a hypothesis, and then subsequent tests of the hypothesis undermine it. And all of the concepts, claims, and counterclaims are expressed in language.

The greatest accomplishments of our species stem from our ability to entertain complex thoughts such as those in this example, to communicate them, and to act on them. Thinking includes a wide range of mental activities. We think when we try to solve a problem that has been presented to us in class, and we think when we daydream while waiting for a class to begin. 
We think when we decide what groceries to buy, plan a vacation, write a letter, or worry about a troubled relationship.

We begin this chapter with a discussion of language, the means by which thoughts are communicated. Then we consider the development, or acquisition, of language. The remaining sections of this chapter discuss major topics in propositional thinking. We begin by focusing on concepts, the building blocks of thought, and discuss their use in classifying objects. This is the study of concepts and categorization. Then we consider how thoughts are organized to arrive at a conclusion. This is the study of reasoning. Next we turn to the imaginal mode of thought, and in the final section we discuss thought in action – the study of problem solving – and consider the uses of both propositional and imaginal thought, as well as automaticity. Throughout this chapter, you will find separate paragraphs on findings concerning the neural basis of these topics.

CHAPTER OUTLINE

LANGUAGE AND COMMUNICATION
Levels of language
Language units and processes
Effects of context on comprehension and production
The neural basis of language

THE DEVELOPMENT OF LANGUAGE
What is acquired?
Learning processes
Innate factors

CONCEPTS AND CATEGORIZATION: THE BUILDING BLOCKS OF THOUGHT
Functions of concepts
Prototypes
Hierarchies of concepts
Different categorization processes
Acquiring concepts
The neural basis of concepts and categorization

REASONING
Deductive reasoning
Inductive reasoning
The neural basis of reasoning

CUTTING EDGE RESEARCH: UNCONSCIOUS THOUGHT FOR COMPLEX DECISIONS

IMAGINAL THOUGHT
Imaginal operations
The neural basis of imagery

THOUGHT IN ACTION: PROBLEM SOLVING
Problem-solving strategies
Representing the problem
Experts versus novices
Automaticity

SEEING BOTH SIDES: DO PEOPLE WHO SPEAK DIFFERENT LANGUAGES THINK DIFFERENTLY?

LANGUAGE AND COMMUNICATION

Language is our primary means of communicating thought. Moreover, it is universal: Every human society has a language, and every human being of normal intelligence acquires his or her native language and uses it effortlessly. The naturalness of language sometimes lulls us into thinking that language use requires no special explanation. Nothing could be further from the truth. Some people can read, and others cannot; some can do arithmetic, and others cannot; some can play chess, and others cannot. But virtually everyone can master and use an enormously complex linguistic system. In contrast, even the most sophisticated computers have severe problems in interpreting speech, understanding written text, or speaking in a productive way. Yet most normal children perform these linguistic tasks effortlessly. Why this should be so is among the fundamental puzzles of human psychology.

Levels of language

Language use has two aspects: production and comprehension. In the production of language, we start with a thought, somehow translate it into a sentence, and end up with sounds that express the sentence. In the comprehension of language, we start by hearing sounds, attach meaning to the sounds in the form of words, and then attach meaning to the combination of the words in the form of sentences. Language use seems to involve moving through various levels, as shown in Figure 9.1. At the highest level are sentence units, including sentences and phrases. The next level is that of words and parts of words that carry meaning (the prefix ‘non’ or the suffix ‘er’, for example). The lowest level contains speech sounds. The adjacent levels are closely related: the phrases of a sentence are built from words and prefixes and suffixes, which in turn are constructed from speech sounds. Language therefore is a multilevel system for relating thoughts to speech by means of word and sentence units (Chomsky, 1965).

Figure 9.1 Levels of Language. At the highest level are sentence units, including phrases and sentences. The next level is words and parts of words that carry meaning. The lowest level contains speech sounds.

There are striking differences in the number of units at each level. All languages have only a limited number of speech sounds; English has about 40 of them. But rules for combining these sounds make it possible to produce and understand thousands of words (a vocabulary of 70,000 words is not unusual for an adult; see Bloom, 2000). Similarly, rules for combining words make it possible to produce and understand millions of sentences (if not an infinite number of them). This property of language is called ‘productivity’: rules allow us to combine units at one level into a vastly greater number of units at the next level. So, two of the basic properties of language are that it is structured at multiple levels and that it is productive. Every human language has these two properties.

Language units and processes

Let’s now consider the units and processes involved at each level of language. In surveying the relevant material, we usually take the perspective of a person comprehending language, a listener, though occasionally we switch to that of a language producer, or speaker.

Speech sounds

If you could attend to just the sounds someone makes when talking to you, what would you hear? 
You would not perceive the person’s speech as a continuous stream of sound but rather as a sequence of phonemes, or discrete speech categories. Phonemes are the shortest segments of speech whose alteration can change the meaning of a word. For example, the sound corresponding to the first letter in boy is an instance of a phoneme symbolized as /b/. We can change the meaning of the word by changing one of the phonemes: boy becomes toy when the first phoneme /b/ is changed into a /t/. Note that phonemes may correspond to letters, but they are speech sounds, not letters. In English, we divide all speech sounds into about 40 phonemes (see Table 9.1). Although something like 200 different phonemes have been documented in human languages worldwide, most human languages have no more than 60 phonemes (Ladefoged, 2005). The sounds that make up the phonetic alphabet also vary widely. For example, German and Dutch speakers use certain guttural sounds that are never heard in English.

We are good at discriminating among different sounds that correspond to different phonemes in our language but poor at discriminating among different sounds that correspond to the same phoneme. Consider, for example, the sound of the first letter in pin and the sound of the second letter in spin (Liberman, Cooper, Shankweiler, & Studdert-Kennedy, 1967). They are the same phoneme, /p/, and they sound the same to us, even though they have different physical characteristics. The /p/ in pin is accompanied by a puff of air, but the /p/ in spin is not (try holding your hand a short distance from your mouth as you say the two words). Our phonemic categories act as filters that convert a continuous stream of speech into a sequence of familiar phonemes.

The fact that every language has a different set of phonemes is one reason we often have difficulty learning to pronounce foreign words. Another language may use phonemes that do not appear in ours. It may take us a while even to hear the new phonemes, let alone produce them. For example, in Hindi the two different /p/ sounds just described correspond to two different phonemes, so Hindi speakers appreciate differences that others do not. Another language may not make a distinction between two sounds that our language treats as two phonemes. In Japanese, the English sounds corresponding to r and l (/r/ and /l/) are perceived as the same phoneme – which leads to the frequent confusion between words like rice and lice. When phonemes are combined in the right way, we perceive them as words. 
Each language has its own rules about which phonemes can follow others. In English, for example, /b/ cannot follow /p/ at the beginning of a word (try pronouncing pbet). The influence of such rules is revealed when we listen. We are more accurate in perceiving a string of phonemes whose order conforms to the rules of our language than a string whose order violates these rules. The influence of these rules is even more striking when we take the perspective of a speaker. For example, we have no difficulty pronouncing the plurals of nonsense words that we have never heard before. Consider zuk and zug. In accordance with a simple rule, the plural of zuk is formed by adding the phoneme /s/, as in hiss. In English, however, /s/ cannot follow /g/ at the end of a word, so to form the plural of zug we must use another rule – one that adds the phoneme /z/, as in fuzz. We may not be aware of these differences in forming plurals, but we have no difficulty producing them. It is as if we ‘know’ the rules for combining phonemes, even though we are not consciously aware of the rules: We conform to rules that we cannot verbalize.

Word units

What we typically perceive when listening to speech are not phonemes but words. Unlike phonemes, words carry meaning. However, they are not the only small linguistic units that convey meaning. Suffixes such as ly or prefixes such as un also carry meaning. They can be added to words to form more complex words with different meanings, as when un and ly are added to ‘time’ to form ‘untimely’. The term morpheme is used to refer to any small linguistic unit that carries meaning. Most morphemes are themselves words. Most words denote some specific content, such as house or run. A few words, however, primarily serve to make sentences grammatical. Such grammatical words, or grammatical morphemes, include what are commonly referred to as articles and prepositions, such as a, the, in, of, on, and at. 
Some prefixes and suffixes also play primarily a grammatical role. These grammatical morphemes include the suffixes ing and ed.

Table 9.1 A phonetic alphabet for English pronunciation. Adapted from Fromkin, Rodman, & Hyams, An Introduction to Language, 7th edition (2003), Wadsworth, an imprint of Cengage Learning.

Consonants: /p/ pill, /b/ bill, /m/ mill, /f/ feel, /v/ veal, /θ/ thigh, /ð/ thy, /t/ till, /d/ dill, /n/ nil, /s/ seal, /z/ zeal, /l/ leaf, /r/ reef, /ʃ/ shill, /ʒ/ azure, /tʃ/ chill, /dʒ/ Jill, /k/ kill, /g/ gill, /ŋ/ ring, /h/ heal, /w/ witch, /ʍ/ which, /j/ you

Vowels: /i/ beet, /ɪ/ bit, /e/ bait, /ɛ/ bet, /æ/ bat, /u/ boot, /ʊ/ foot, /o/ boat, /ɔ/ bore, /a/ pot/bar, /ʌ/ butt, /aw/ bout, /aj/ bite, /ɔj/ boy
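The plural rule described earlier (/s/ after a voiceless final sound such as the /k/ of zuk, /z/ after a voiced one such as the /g/ of zug) can be sketched as a short program. This is only an illustrative simplification: the phoneme sets below, and the extra /ez/ case for sibilant endings (as in buses), are assumptions for the example, not a full account of English phonology.

```python
# A sketch of the English plural rule discussed above. The phoneme sets
# below are simplified assumptions for illustration, not a complete
# inventory of English sounds.
VOICELESS = {'p', 't', 'k', 'f', 'th'}   # final sounds that take /s/
SIBILANT = {'s', 'z', 'sh', 'ch', 'j'}   # final sounds that take /ez/ (as in 'buses')

def plural_phoneme(final_sound):
    """Choose the plural phoneme from a word's final sound."""
    if final_sound in SIBILANT:
        return 'ez'
    if final_sound in VOICELESS:
        return 's'    # plural of zuk rhymes with 'hiss'
    return 'z'        # plural of zug (voiced ending) rhymes with 'fuzz'

print(plural_phoneme('k'))   # s
print(plural_phoneme('g'))   # z
```

The point of the sketch is that the choice is rule-governed and automatic: a speaker who has never heard zug still produces the /z/ plural, just as the function does.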

Grammatical morphemes may be processed differently from content words. One piece of evidence for this comes from forms of brain damage in which the use of grammatical morphemes is impaired more than the use of content words (Zurif, 1995). Also, as we will see later, grammatical morphemes are acquired in a different way than content words.

The most important aspect of a word is, of course, its meaning. A word can be viewed as the name of a concept, and its meaning is the concept it names. Some words are ambiguous because they name more than one concept. Club, for example, names both a social organization and an object used for striking. Sometimes we may be aware of a word’s ambiguity, as when we hear the sentence ‘He was interested in the club.’ In most cases, however, the sentence context makes the meaning of the word sufficiently clear that we do not consciously experience any ambiguity – for example, ‘He wanted to join the club.’ Even in these cases, though, there is evidence that we unconsciously consider both meanings of the ambiguous word for a brief moment. In one experiment, a participant was presented a sentence such as ‘He wanted to join the club’, followed immediately by a test word that the participant had to read aloud as quickly as possible. Participants read the test word faster if it was related to either meaning of club (for example, group or struck) than if it was unrelated to both meanings (for example, apple). This suggests that both meanings of club were activated during comprehension of the sentence and that either meaning could prime, or activate, related words (Swinney, 1979; Tanenhaus, Leiman, & Seidenberg, 1979).

Sentence units

As listeners, we usually effortlessly combine words into sentence units, which include sentences as well as phrases. An important property of these units is that they can correspond to parts of a thought, or proposition. Such correspondences allow a listener to ‘extract’ propositions from sentences. 
To understand these correspondences, first you have to appreciate that any proposition can be divided into a subject and a predicate (a description). In the proposition ‘Audrey has curly hair’, ‘Audrey’ is the subject and ‘has curly hair’ is the predicate. In the proposition ‘The tailor is asleep’, ‘the tailor’ is the subject and ‘is asleep’ is the predicate. And in ‘Teachers work too hard’, ‘teachers’ is the subject and ‘work too hard’ is the predicate. Any sentence can be broken into phrases so that each phrase corresponds either to the subject or the predicate of a proposition or to an entire proposition. For example, intuitively we can divide the simple sentence ‘Irene sells insurance’ into two phrases, ‘Irene’ and ‘sells insurance’. The first phrase, called a noun phrase because it centers on a noun, specifies the subject of an underlying proposition. The second phrase, a verb phrase, gives the predicate of the proposition. For a more complex example, consider the sentence ‘Serious scholars read books’. This sentence can be divided into two phrases, the noun phrase ‘Serious scholars’ and the verb phrase ‘read books’. The noun phrase expresses an entire proposition, ‘scholars are serious’; the verb phrase expresses part (the predicate) of another proposition, ‘scholars read books’ (see Figure 9.2). Again, sentence units correspond closely to proposition units, which provide a link between language and thought. When listening to a sentence, people seem to first divide it into noun phrases, verb phrases, and the like, and then to extract propositions from these phrases. There is a good deal of evidence for our dividing sentences into phrases and treating the phrases as units, with some of the evidence coming from memory experiments. 
In one study, participants listened to sentences such as ‘The poor girl stole a warm coat.’ Immediately after each sentence was presented, participants were given a probe word from the sentence and asked to say the word that came after it. People responded faster when the probe and the response words came from the same phrase (‘poor’ and ‘girl’) than when they came from different phrases (‘girl’ and ‘stole’). So each phrase acts as a unit in memory. When the probe and response are from the same phrase, only one unit needs to be retrieved (Wilkes & Kennedy, 1969).

Figure 9.2 Phrases and Propositions. Sentence: ‘Serious scholars read books’. Phrases: ‘Serious scholars’ (noun phrase) and ‘read books’ (verb phrase). Propositions: ‘Scholars are serious’ (subject and predicate) and ‘read books’ (predicate). The first step in extracting the propositions from a complex sentence is to decompose the sentence into phrases. This decomposition is based on rules like ‘Any sentence can be divided into a noun phrase and a verb phrase’.

Analyzing a sentence into noun and verb phrases, and then dividing these phrases into smaller units like nouns, adjectives, and verbs, is called syntactic analysis. Syntax deals with the relationships between words in phrases and sentences. Syntax primarily serves to structure the parts of a sentence so we can tell what is related to what. For example, in the sentence ‘The green bird ate a red snake’, the syntax of English tells us that the bird did the eating and not the snake, that the bird was green but not the
snake, that the snake was red but not the bird, and so on. Furthermore, in an example like ‘The dogs that the man owned were lazy’, the syntax helps us to identify the man as doing the owning (by word order) and the dogs as being lazy (by word order and number agreement). In identifying the verb and noun phrases of a sentence and how they are related, we are identifying what is what, and who did what to whom.

In the course of understanding a sentence, we usually perform such a syntactic analysis effortlessly and unconsciously. Sometimes, however, our syntactic analysis goes awry, and we become aware of the process. Consider the sentence ‘The horse raced past the barn fell.’ Many people have difficulty understanding this sentence. Why? Because on first reading, we assume that ‘The horse’ is the noun phrase and ‘raced past the barn’ is the verb phrase, which leaves us with no place for the word fell. To understand the sentence correctly, we have to repartition it so that the entire phrase ‘The horse raced past the barn’ is the noun phrase and ‘fell’ is the verb phrase (that is, the sentence is a shortened version of ‘The horse who was raced past the barn fell’) (Garrett, 1990; Garrod & Pickering, 1999). The misreading of such sentences is called a garden path.

Effects of context on comprehension and production

Figure 9.3 presents an amended version of our levels-based description of language. It suggests that producing a sentence is the inverse of understanding a sentence. To understand a sentence, we hear phonemes, use them to construct the morphemes and phrases of the sentence, and finally extract the proposition from the sentence unit. We work from the bottom up. To produce a sentence, we move in the opposite direction: We start with a propositional thought, translate it into the phrases and morphemes of a sentence, and finally translate these morphemes into phonemes. 
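The syntactic analysis described above (dividing a sentence into a noun phrase and a verb phrase, then extracting propositions, as in Figure 9.2) can be sketched as a toy program. The four-word lexicon and the two rules here are minimal assumptions for illustration; human parsing is, of course, far more complex.

```python
# Toy illustration of dividing a sentence into a noun phrase and a verb
# phrase, then extracting propositions (as in Figure 9.2). The lexicon
# and rules are hypothetical minimal examples, not a real parser.
LEXICON = {'serious': 'ADJ', 'scholars': 'NOUN', 'read': 'VERB', 'books': 'NOUN'}

def parse(sentence):
    """Divide a sentence into a noun phrase and a verb phrase."""
    words = sentence.lower().split()
    tags = [LEXICON[w] for w in words]
    # Rule: a sentence divides into a noun phrase followed by a verb phrase,
    # and the verb phrase starts at the verb.
    boundary = tags.index('VERB')
    return words[:boundary], words[boundary:]

def propositions(noun_phrase, verb_phrase):
    """Extract (subject, predicate) propositions from the two phrases."""
    head = noun_phrase[-1]              # the noun the phrase centers on
    props = []
    # Each adjective in the noun phrase expresses its own proposition.
    for adj in noun_phrase[:-1]:
        props.append((head, 'are ' + adj))
    # The verb phrase supplies the predicate of another proposition.
    props.append((head, ' '.join(verb_phrase)))
    return props

np, vp = parse('Serious scholars read books')
print(np, vp)                # ['serious', 'scholars'] ['read', 'books']
print(propositions(np, vp))  # [('scholars', 'are serious'), ('scholars', 'read books')]
```

A garden-path sentence like ‘The horse raced past the barn fell’ is exactly the case such a simple left-to-right rule gets wrong: committing to the first verb as the start of the verb phrase leaves no place for the final word, forcing a repartition.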
Figure 9.3 Levels of Understanding and Producing Sentences. In producing a sentence, we translate a propositional thought into the phrases and morphemes of a sentence and translate these morphemes into phonemes. In understanding a sentence, we go in the opposite direction – we use phonemes to construct the morphemes and phrases of a sentence and from these units extract the underlying propositions.

Although this analysis describes some of what occurs in sentence understanding and production, it is oversimplified because it does not consider the context in which language processing occurs. Often the context makes what is about to be said predictable. After comprehending just a few words, we jump to conclusions about what we think the entire sentence means (the propositions behind it) and then use our guess about the propositions to help understand the rest of the sentence. In such cases, understanding proceeds from the highest level down, as well as from the lowest level up (Adams & Collins, 1979). Indeed, sometimes language understanding is nearly impossible without some context (what topic is being talked about). To illustrate, try reading the following paragraph:

The procedure is actually quite simple. First you arrange things into different groups. Of course, one pile may be sufficient, depending on how much there is to do. If you have to go somewhere else due to lack of facilities, that is the next step; otherwise you are pretty well set. It is important not to overdo things. That is, it is better to do too few things at once than too many. In the short run this may not seem important, but complications can easily arise. A mistake can be expensive as well. At first the whole procedure will seem complicated. 
Soon, however, it will become just another facet of life. (After Bransford & Johnson, 1973) In reading the paragraph, you no doubt had difficulty understanding exactly what it was about. But given the context of ‘washing clothes’, you can now use your background knowledge about washing clothes to interpret all the cryptic parts of the passage. The ‘procedure’ referred to in the first sentence is that of ‘washing clothes’, the ‘things’ referred to in the first sentence are ‘clothes’, the ‘different groups’ are ‘groups of clothing of different colors’, and so on. Your understanding of the paragraph, if you reread it, should now be excellent. In addition to background knowledge, another salient part of the context is the other person (or persons) we are communicating with. In understanding a sentence, it is not enough to understand its phonemes, morphemes, and phrases. We must also understand the speaker’s intention in uttering that particular sentence. For example, when someone at dinner asks you, ‘Can you pass the potatoes?’ you usually assume that the speaker’s intention was not to find out whether you are physically capable of lifting the potatoes but, rather, to induce you to actually pass the potatoes. However, had your arm been in a sling, given the identical question, you might assume that the speaker’s intention was to determine your physical capability. In English, in both cases, the sentence (and proposition) is the same. What changes is the speaker’s intention in uttering that sentence (Grice, 1975). There is abundant evidence that people determine the speaker’s intention as part of the process of comprehension (Clark, 1984).

Language production depends on context. You would probably use different language when giving directions to a tourist than when telling a neighbor where a particular restaurant or store is located.

There are similar effects in the production of language. If someone asks you, ‘Where is the Empire State Building?’ you will say different things depending on the physical context and the assumptions you make about the questioner. If the question is asked of you in Detroit, for example, you might answer, ‘In New York.’ If the question is asked in Brooklyn, you might say, ‘Near midtown Manhattan.’ If the question is asked in Manhattan, you might say, ‘On 34th Street.’ In speaking, as in understanding, we must determine how the utterance fits the context.

The neural basis of language

Recall from Chapter 2 that there are two regions of the left hemisphere of the cortex that are critical for language: Broca’s area, which lies in the posterior part of the frontal lobes, and Wernicke’s area, which lies in the temporal region. Damage to either of these areas – or to some in-between areas – leads to specific kinds of aphasia (a breakdown in language) (Dronkers, Redfern, & Knight, 2000). The disrupted language of a patient with Broca’s aphasia (a patient with damage to Broca’s area) is illustrated by the following interview, in which E designates the interviewer (or experimenter) and P, the patient:

E: Were you in the Coast Guard?
P: No, er, yes, yes . . . ship . . . Massachu . . . chusetts . . . Coast Guard . . . years. [Raises hands twice with fingers indicating ‘19’]
E: Oh, you were in the Coast Guard for 19 years.
P: Oh . . . boy . . . right . . . right.
E: Why are you in the hospital?
P: [Points to paralyzed arm] Arm no good. [Points to mouth] Speech . . . can’t say . . . talk, you see.
E: What happened to make you lose your speech? 
P: Head, fall, Jesus Christ, me no good, str, str . . . oh Jesus . . . stroke.
E: Could you tell me what you’ve been doing in the hospital?
P: Yes sure. Me go, er, uh, P. T. nine o’cot, speech . . . two times . . . read . . . wr . . . ripe, er, rike, er, write . . . practice . . . get-ting better. (Gardner, 1975, p. 61)

The speech is very disfluent (halting and hesitant). Even in simple sentences, pauses and hesitations are plentiful. This is in contrast to the fluent speech of a patient with Wernicke’s aphasia (a patient with damage in Wernicke’s area):

Boy, I’m sweating, I’m awful nervous, you know, once in a while I get caught up. I can’t mention the tarripoi, a month ago, quite a little, I’ve done a lot well, I impose a lot, while, on the other hand, you know what I mean, I have to run around, look it over, trebin and all that sort of stuff. (Gardner, 1975, p. 68)

In addition to fluency, there are other marked differences between Broca’s and Wernicke’s aphasias. The speech of a Broca’s aphasic consists mainly of content words. It contains few grammatical morphemes and complex sentences and, in general, has a telegraphic quality that is reminiscent of the two-word stage of language acquisition (see The Development of Language later in this chapter). In contrast, the language of a Wernicke’s aphasic preserves syntax but is remarkably devoid of content. There are clear problems in finding the right noun, and occasionally words are invented for the occasion (as in the use of tarripoi and trebin). These observations suggest that Broca’s aphasia involves a disruption at the syntactic stage and that Wernicke’s aphasia involves a disruption at the level of words and concepts. These characterizations of the two aphasias are supported
by research findings. In a study that tested for a syntactic deficit, participants had to listen to a sentence on each trial and show that they understood it by selecting a picture (from a set) that the sentence described. Some sentences could be understood without using much syntactic knowledge. For example, given ‘The bicycle the boy is holding is broken’, we can figure out that it is the bicycle that is broken and not the boy, solely from our knowledge of the concepts involved. Understanding other sentences requires extensive syntactic analysis. In ‘The lion that the tiger is chasing is fat’, we must rely on syntax (word order) to determine that it is the lion who is fat and not the tiger. On the sentences that did not require much syntactic analysis, Broca’s aphasics did almost as well as normal participants, scoring close to 90 percent correct. But with sentences that required extensive analysis, Broca’s aphasics fell to the level of guessing (for example, given the sentence about the lion and tiger, they were as likely to select the picture with a fat tiger as the one with the fat lion). In contrast, the performance of Wernicke’s aphasics did not depend on the syntactic demands of the sentence. Thus, Broca’s aphasia, but not Wernicke’s, seems to be partly a disruption of syntax (Caramazza & Zurif, 1976). The disruption is not total, though, in that Broca’s aphasics are capable of handling certain kinds of syntactic analysis (Grodzinsky, 1984; Zurif, 1995). Other experiments have tested for a conceptual deficit in Wernicke’s aphasia. In one study, participants were presented with three words at a time and asked to select the two that were most similar in meaning. The words included animal terms, such as dog and crocodile, as well as human terms, such as mother and knight. Normal participants used the distinction between humans and animals as the major basis for their selections; given dog, crocodile, and knight, for example, they selected the first two. 
Wernicke’s patients, however, ignored this basic distinction. Although Broca’s aphasics showed some differences from normals, their selections at least respected the human–animal distinction. A conceptual deficit thus is more pronounced in Wernicke’s aphasics than in Broca’s aphasics (Zurif, Caramazza, Myerson, & Galvin, 1974).

In addition to Broca’s and Wernicke’s aphasias, there are numerous other kinds of aphasias (Benson, 1985). One of these is referred to as conduction aphasia. In this condition, the aphasic seems relatively normal in tests of both syntactic and conceptual abilities but has severe problems when asked to repeat a spoken sentence. A neurological explanation of this curious disorder is that the brain structures mediating basic aspects of comprehension and production are intact but that the neural connections between these structures are damaged. The patient can understand what is said because Wernicke’s area is intact, and can produce fluent speech because Broca’s area is intact, but cannot transmit what was understood to the speech center because the connecting links between the areas are damaged (Geschwind, 1972).

This research presupposes that each kind of aphasia is caused by damage to a specific area of the brain. This idea may be too simple. In reality, the particular region mediating a particular linguistic function may vary from one person to another. The best evidence for such individual differences comes from findings of neurosurgeons preparing to operate on patients with incurable epilepsy. The neurosurgeon needs to remove some brain tissue but first has to be sure that this tissue is not mediating a critical function such as language. Accordingly, prior to surgery and while the patient is awake, the neurosurgeon delivers small electric charges to the area in question and observes their effects on the patient’s ability to name things. 
If electrical stimulation disrupts the patient’s naming, the neurosurgeon knows to avoid this location during the operation. These locations are of great interest to students of language. Within a single patient, these language locations seem to be highly localized. A language location might be less than 1 centimeter in all directions from locations where electrical stimulations do not disrupt language. But – and this is the crucial point – different brain locations have to be stimulated to disrupt naming in different patients. For example, one patient’s naming may be disrupted by electrical stimulation to locations in the front of the brain but not by stimulation in the back of the brain, whereas another patient might show a different pattern (Ojemann, 1983). If different areas of the brain mediate language in different people, presumably the areas associated with aphasias also vary from one person to another.

INTERIM SUMMARY

- Language is structured at three different levels: (1) sentence units, (2) words and parts of words that carry meaning, and (3) speech sounds.
- The three levels of language are interconnected. Sentence units are built from words (and parts of words), and words are constructed from speech sounds.
- A phoneme is a category of speech sounds. Every language has its own set of phonemes – with different sets for different languages – and rules for combining them into words.
- A morpheme is the smallest unit of language that carries meaning. Most morphemes are words, but others are prefixes and suffixes that are added to words.
- Syntactic rules are used for combining words into phrases and phrases into sentences.
- The areas of the brain that mediate language lie in the left hemisphere and include Broca’s area and Wernicke’s area.

CRITICAL THINKING QUESTIONS

1. Now that you have some idea of the units and levels of language (such as phonemes, words, semantics, and syntax), apply these notions to learning a second language. Which components do you think will be easiest and hardest to learn? Why?

2. As we saw, background knowledge, or knowledge of context, is clearly important for understanding language. Do you think there is a particular region of the brain that mediates such knowledge? Why or why not?

THE DEVELOPMENT OF LANGUAGE

Our discussion of language should suggest the immensity of the task confronting children. They must master all levels of language – not only the proper speech sounds but also how those sounds are combined into thousands of words and how those words can be combined into sentences to express thoughts. It is a wonder that virtually all children in all cultures accomplish so much in a mere four to five years. We will first discuss what is acquired at each level of language and then how it is acquired – specifically, the roles played by learning and innate factors.

What is acquired?

Development occurs at all three levels of language. It starts at the level of phonemes, proceeds to the level of words and other morphemes, and then moves on to the level of sentence units, or syntax. In what follows, we adopt a chronological perspective, tracing the child’s development in both understanding and producing language.

Phonemes and combinations of phonemes

Recall that adult listeners are good at discriminating among different sounds that correspond to different phonemes in their language but poor at discriminating among different sounds that correspond to the same phoneme in their language. Remarkably, children come into the world able to discriminate among different sounds that correspond to different phonemes in any language.
What changes over the first year of life is that infants learn which phonemes are relevant to their language and lose their ability to discriminate between sounds that correspond to the same phoneme in their language. (In essence, they lose the ability to make distinctions that will be of no use to them in understanding and producing their language.) These remarkable facts were determined through experiments in which infants who were sucking on pacifiers were presented with pairs of sounds in succession. Because infants suck more in response to a novel stimulus than in response to a familiar one, their rate of sucking can be used to tell whether they perceive two successive sounds as the same or different. Six-month-old infants increase their rate of sucking when the successive sounds correspond to different phonemes in any language, but 1-year-olds increase their rate of sucking only when the successive sounds correspond to different phonemes in their own language. Thus, a six-month-old Japanese child can distinguish /l/ from /r/ but loses this ability by the end of the first year of life (Eimas, 1985). Although children learn which phonemes are relevant during their first year of life, it takes several years for them to learn how phonemes can be combined to form words. When children first begin to talk, they occasionally produce ‘impossible’ words like dlumber for lumber. They do not yet know that in English /l/ cannot follow /d/ at the beginning of a word. By age 4, however, children have learned most of what they need to know about phoneme combinations.

Words and concepts

At about 1 year of age, children begin to speak.
One-year-olds already have concepts for many things (including family members, household pets, food, toys, and body parts), and when they begin to speak, they are mapping these concepts onto words that adults use. The beginning vocabulary is roughly the same for all children. Children 1 to 2 years old talk mainly about people (‘Dada’, ‘Mama’, ‘baby’), animals (‘dog’, ‘cat’, ‘duck’), vehicles (‘car’, ‘truck’, ‘boat’), toys (‘ball’, ‘block’, ‘book’), food (‘juice’, ‘milk’, ‘cookie’), body parts (‘eye’, ‘nose’, ‘mouth’), and household implements (‘hat’, ‘sock’, ‘spoon’).

Although these words name some of the young child’s concepts, they by no means name them all. Consequently, young children often have a gap between the concepts they want to communicate and the words they have at their disposal. To bridge this gap, children aged 12 to 30 months overextend their words – they apply words to neighboring concepts. For example, a 2-year-old child might use the word doggie for cats and cows as well as dogs. (The child is not unsure of the word’s meaning. If presented with pictures of various animals and asked to pick the ‘doggie’, the child makes the correct choice.) Overextensions begin to disappear at about age 2½, presumably because the child’s vocabulary begins to increase markedly, thereby eliminating many of the gaps (Clark, 1983; Rescorla, 1980). Thereafter, the child’s vocabulary development virtually explodes. At 1½ years, a child might have a vocabulary of 25 words; at 6 years, the child’s vocabulary is about 15,000 words. To achieve this incredible growth, children have to learn new words at the rate of almost 10 per day (Miller & Gildea, 1987; Templin, 1957). Children seem to be attuned to learning new words. When they hear a word they do not know, they may assume that it maps onto one of their concepts that is not yet labeled, and they use the context in which the word was spoken to find that concept (Clark, 1983; Markman, 1987).

From primitive to complex sentences

Between the ages of 1½ and 2½, the acquisition of phrase and sentence units, or syntax, begins. Children start to combine single words into two-word utterances such as ‘There cow’ (in which the underlying proposition is ‘There’s the cow’), ‘Jimmy bike’ (‘That’s Jimmy’s bike’), or ‘Towel bed’ (‘The towel’s on the bed’). There is a telegraphic quality about this two-word speech.
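As an aside, the vocabulary-growth figures cited above (roughly 25 words at 1½ years, about 15,000 by age 6) can be checked with simple arithmetic; the short sketch below, our own illustration, confirms that they imply a learning rate of almost 10 new words per day.

```python
# Back-of-the-envelope check of the vocabulary-growth figures in the text:
# roughly 25 words at age 1.5 and about 15,000 words by age 6.
words_at_18_months = 25
words_at_6_years = 15_000
days = (6 - 1.5) * 365  # about 1,642 days of learning

words_per_day = (words_at_6_years - words_at_18_months) / days
print(round(words_per_day, 1))  # ~9.1 -- 'almost 10 per day', as the text says
```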
The child leaves out the grammatical words (such as a, an, the, and is), as well as other grammatical morphemes (such as the suffixes ing, ed, and s), and puts in only the words that carry the most important content. Despite their brevity, these utterances express most of the basic intentions of speakers, such as locating objects and describing events and actions. Children progress rapidly from two-word utterances to more complex sentences that express propositions more precisely. Thus, ‘Daddy hat’ may become ‘Daddy wear hat’ and finally ‘Daddy is wearing a hat.’ Such expansions of the verb phrase appear to be the first complex constructions that occur in children’s speech. The next step is the use of conjunctions like and and so to form compound sentences (‘You play with the doll, and I play with the blocks’) and the use of grammatical morphemes like the past tense ed. The sequence of language development is remarkably similar for all children.

Learning processes

How do children acquire language? Clearly, learning must play a role, which is why children raised in English-speaking households learn English while children raised in French-speaking households learn French. Innate factors must also play a role, which is why all the children in a household learn language but none of the pets do (Gleitman, 1986). In this section, we discuss learning, and innate factors are considered in the next section. In both discussions, we emphasize sentence units and syntax, for it is at this level of language that the important issues about language acquisition are illustrated most clearly.

Imitation and conditioning

One possibility is that children learn language by imitating adults.
Although imitation plays some role in the learning of words (a parent points to a telephone, says, ‘Phone’, and the child tries to repeat the word), it cannot be the principal means by which children learn to produce and understand sentences. Young children constantly utter sentences that they have never heard an adult say, such as ‘All gone milk.’ Even when children in the two-word stage of language development try to imitate longer sentences (for example, ‘Mr. Miller will try’), they produce their usual telegraphic utterances (‘Miller try’). In addition, the mistakes children make (for instance, ‘Daddy taked me’) suggest that they are trying to apply rules, not simply trying to copy what they have heard adults say (Ervin-Tripp, 1964). A second possibility is that children acquire language through conditioning. Adults may reward children when they produce a grammatical sentence and reprimand them when they make mistakes. For this to work, parents would have to respond to every detail in a child’s speech. However, Brown, Cazden, & Bellugi (1969) found that parents do not pay attention to how the child says something as long as the statement is comprehensible. Also, attempts to correct a child (and, hence, apply conditioning) are often futile. Consider an example:

CHILD: Nobody don’t like me.
MOTHER: No, say, ‘nobody likes me’.
CHILD: Nobody don’t like me.
MOTHER: No, now listen carefully; say ‘nobody likes me’.
CHILD: Oh! Nobody don’t likes me.
(McNeill, 1966, p. 49)

Hypothesis testing

The problem with imitation and conditioning is that they focus on specific utterances. However, children often learn something general, such as a rule. They seem to form a hypothesis about a rule of language, test it, and retain it if it works.

Consider the morpheme ed. As a general rule in English, ed is added to the present tense of verbs to form the past tense (as in cook–cooked). Many common verbs, however, are irregular and do not follow this rule (go–went, break–broke). Many of these irregular verbs express concepts that children use from the beginning. So, at an early point, children use the past tense of some irregular verbs correctly (presumably because they learned them by imitation). Then they learn the past tense for some regular verbs and discover the hypothesis ‘add ed to the present tense to form the past tense’. This hypothesis leads them to add the ed ending to many verbs, including irregular ones. They say things like ‘Annie goed home’ and ‘Jackie breaked the cup’, which they have never heard before. Eventually, they learn that some verbs are irregular and stop overgeneralizing their use of ed (Pinker, 1994). How do children generate these hypotheses? There are a few operating principles that all children use as a guide to forming hypotheses. One is to pay attention to the ends of words. Another is to look for prefixes and suffixes that indicate a change in meaning. A child armed with these two principles is likely to hit upon the hypothesis that ed at the end of verbs signals the past tense, because ed is a word ending associated with a change in meaning. A third operating principle is to avoid exceptions, which explains why children initially generalize their ed-equals-past-tense hypothesis to irregular verbs. Some of these principles appear in Table 9.2, and they seem to hold for all of the 40 languages studied by Slobin (1985).

Table 9.2 Operating principles used by young children. Children from many countries seem to follow these principles in learning to talk and to understand speech. (Dan I. Slobin (1971) from ‘Developmental Psycholinguistics’, in A Survey of Linguistic Science, edited by W. O. Dingwall, pp. 298–400.)

1. Look for systematic changes in the form of words.
2. Look for grammatical markers that clearly indicate changes in meaning.
3. Avoid exceptions.
4. Pay attention to the ends of words.
5. Pay attention to the order of words, prefixes, and suffixes.
6. Avoid interruption or rearrangement of constituents (that is, sentence units).

In recent years, there has been a challenge to the idea that learning a language involves learning rules. Some researchers argue that the mere fact that a regular pattern is overextended does not guarantee that these errors are caused by following a rule. Marcus (1996), for example, believes that children’s grammar is structured similarly to adults’ grammar. But because children have had less exposure to correct forms, their memories for irregular forms like broke are weaker. Whenever they cannot recall such a form, they add ed, producing an overextension. Other researchers have argued that what looks like an instance of learning a single rule may in fact be a case of learning numerous associations. Consider again a child learning the past tense of verbs in English. Instead of learning a rule about adding ed to the present tense of a verb, perhaps children are learning associations between the past tense ending ed and various phonetic properties of verbs that can go with ed. The phonetic properties of a verb include properties of the sounds that make up the verb, such as whether it contains an alk sound at the end. A child may unconsciously learn that verbs containing an alk sound at the end – such as talk, walk, and stalk – are likely to take ed as a past tense ending. This proposal has in fact been shown to account for some aspects of learning verb endings, including the finding that at some point in development children add the ed ending even to irregular verbs (Rumelhart & McClelland, 1987).
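The rules-plus-memory account can be caricatured in a few lines of code. The sketch below is our own illustration, not a model from any of the studies cited: a single default add-ed rule operates alongside item-by-item memory for irregular forms, and when retrieval of an irregular form fails (as it often does for a young child with little exposure), the rule fires and produces exactly the overgeneralizations described above.

```python
# Toy 'rule plus memory' account of the English past tense (illustrative only).
# A speaker first tries to recall a memorized irregular form; if recall fails,
# the default rule 'add -ed' applies -- overgeneralizing to irregular verbs.

def past_tense(verb, irregulars_remembered):
    """Return a memorized irregular past tense if recalled, else apply the rule."""
    if verb in irregulars_remembered:
        return irregulars_remembered[verb]
    # Default rule: add -ed. Fires for regular verbs and for any irregular
    # verb whose stored form the speaker fails to retrieve.
    return verb + "ed"

# A young child who has not yet consolidated memory for irregular forms:
child_memory = {}
# An adult with reliable memory for the irregulars:
adult_memory = {"go": "went", "break": "broke", "take": "took"}

print(past_tense("go", child_memory))    # 'goed'   -- overgeneralization
print(past_tense("go", adult_memory))    # 'went'   -- memory wins
print(past_tense("cook", adult_memory))  # 'cooked' -- regular rule
```

On this view, the child's errors reflect weak memory retrieval rather than a different grammar, which is the gist of the Marcus (1996) proposal described above.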
However, other aspects of learning verb endings cannot be explained in terms of associations between sounds. For example, the word break and the word brake (meaning to stop a car) are identical in sound, but the past tense of the former is broke, whereas that of the latter is braked. So a child must learn something in addition to sound connections. This additional knowledge seems best cast in terms of rules (for example, ‘If a verb is derived from a noun – as in the case of brake – always add ed to form the past tense’). Another piece of evidence that verb endings can involve rules (for regular verbs) or memorized past tenses (for exceptions) comes from studies of aphasics. Recall that Broca’s aphasics have difficulty with the grammatical aspects of language, and they also have more problems with regular verbs (which are handled by rules) than with irregular ones. Furthermore, anomic aphasics, who primarily have problems in retrieving and recognizing words, have more problems with irregular verbs (which require memory) than with regular verbs (Ullman et al., 1997). Language learning thus seems to involve rules as well as associations and memory (Pinker, 1991; Pinker & Prince, 1988).

Innate factors

As noted earlier, some of our knowledge about language is inborn, or innate. There are, however, some controversial questions about the extent and nature of this innate knowledge. One question concerns its richness. If our innate knowledge is very rich or detailed, the process of language acquisition should be similar for different languages, even if the opportunities for learning differ among cultures. Is this the case? A second question about
innate factors involves critical periods. Innate behavior will be acquired more readily if the organism is exposed to the right cues during a critical time period. Are there such critical periods in language acquisition? A third question concerns the possible uniqueness of our innate knowledge about language. Is the ability to learn a language system unique to the human species? We will consider these three questions in turn.

The richness of innate knowledge

All children, regardless of their culture and language, seem to go through the same sequence of language development. At age 1 year, the child speaks a few isolated words; at about age 2, the child speaks two- and three-word sentences; at age 3, sentences become more grammatical; and at age 4, the child’s speech sounds much like that of an adult. Because cultures differ markedly in the opportunities they provide for children to learn from adults – in some cultures parents are constantly speaking to their children, whereas in others parents verbally ignore their children – the fact that this sequence is so consistent across cultures indicates that our innate knowledge about language is very rich. Indeed, our innate knowledge of language seems to be so rich that children can go through the normal course of language acquisition even when there are no language users around them to serve as models or teachers. A group of researchers studied six deaf children of hearing parents who had decided not to have their children learn sign language. Before the children received any instruction in lip reading and vocalization, they began to use a system of gestures called home sign. Initially, their home sign was a kind of simple pantomime, but eventually it took on the properties of a language. For example, it was organized at both the morphemic and syntactic levels, including individual signs and combinations of signs.
In addition, these deaf children (who essentially created their own language) went through the same stages of development as normal hearing children. The deaf children initially gestured one sign at a time and later put their pantomimes together into two- and three-concept ‘sentences’. These striking results attest to the richness and detail of our innate knowledge (Feldman, Goldin-Meadow, & Gleitman, 1978).

Critical periods

Like other innate behaviors, language learning has some critical periods. This is particularly evident when it comes to acquiring the sound system of a new language – learning new phonemes and the rules for combining them. We have already noted that infants less than 1 year old can discriminate among phonemes of any language but lose this ability by the end of their first year, so the first months of life are a critical period for homing in on the phonemes of one’s native language. As a result, it is difficult to acquire the sound system of a second language later in life. After a few years of learning a second language, young children are more likely than adults to speak it without an accent, and they are better able to understand the language when it is spoken in noisy conditions (Lenneberg, 1967; Snow, 1987). Furthermore, when adults learn a second language, they typically retain an accent that they can never unlearn, no matter how many years they speak the new language. But the problems in later language acquisition are not limited to phoneme learning and pronunciation. Indirect evidence for the existence of a critical period for language acquisition can be seen in cases of children who have experienced extreme isolation. A famous case of social isolation in childhood is that of Genie, a girl whose father was psychotic and whose mother was blind and highly dependent.
From birth until she was discovered by child welfare authorities at age 11, Genie was strapped to a potty chair in an isolated room of her parents’ home. Before she was discovered, Genie had had almost no contact with other people. She had virtually no language ability. Efforts to teach her to speak had limited results. She was able to learn words, but she could not master the rules of grammar that come naturally to younger children. Although tests showed that she was highly intelligent, her language abilities never progressed beyond those of a third-grader (Curtiss, 1977; Rymer, 1992a, 1992b). More recent research also indicates that there is a critical period for learning syntax. The evidence comes from studies of deaf people who know American Sign Language (ASL), which is a full-blown language and not a pantomime system. The studies of interest involved adults who had been using ASL for 30 years or more but varied in the age when they had learned the language. Although
all the participants were born to hearing parents, some were native signers who were exposed to ASL from birth, others first learned ASL between ages 4 and 6 when they enrolled in a school for the deaf, and still others did not encounter ASL until after they were 12 (their parents had been reluctant to let them learn a sign language rather than a spoken one). If there is a critical period for learning syntax, the early learners should have shown greater mastery of some aspects of syntax than the later learners, even 30 years after acquisition. This is exactly what the researchers found. With respect to understanding and producing words with multiple morphemes – such as untimely, which consists of the morphemes un, time, and ly – native signers did better than those who learned ASL when entering school, who in turn did better than those who learned ASL after age 12 (Meier, 1991; Newport, 1990). In today’s world, many individuals learn a second language later in life. In fact, many of the students reading this textbook are not native speakers of English. What do we know about second-language learning? As with ASL learning, we see a major effect of age of acquisition. Even though adults initially learn quickly because they can be taught the rules of a language (for example, how to conjugate regular verbs), they are ultimately at a disadvantage. Johnson and Newport (1989) studied Chinese and Korean speakers who had moved to the United States and become immersed in an English-language community (as students and faculty members at a university) at least five years prior to testing. Subjects were asked to judge whether or not sentences presented to them were grammatical in English. The researchers found that performance on this task dropped with increasing age of arrival. Subjects who had been between the ages of 3 and 7 when they moved to the United States did just as well as native speakers.
However, the older the subjects were when they moved, the lower their score was on this test. The proficiency of second-language learners depends not only on their age at the time of acquisition. The more the individual is socially and psychologically integrated into the new culture, the better the learning of the new culture’s language will be (Schumann, 1978). Not surprisingly, there is also a positive correlation between motivation and second-language learning (Masgoret & Gardner, 2003).

Can another species learn human language?

Some experts believe that our innate capacity to learn language is unique to our species (Chomsky, 1972; Pinker, 1994). They acknowledge that other species have communication systems but argue that these are qualitatively different from ours. Consider the communication system of the chimpanzee. Chimpanzees’ vocalizations and gestures are limited in number, and the productivity of their communication system is very low compared with that of human language, in which a relatively small number of phonemes can be combined to create thousands of words, which in turn can be combined to create an unlimited number of sentences. Another difference is that human language is structured at several levels, whereas chimpanzee communications are not. In particular, in human language there is a clear distinction between the level of words or morphemes, which have meaning, and the level of sounds, which do not. There is no hint of such a duality of structure in chimpanzee communication; every symbol carries meaning. Still another difference is that chimpanzees do not vary the order of their symbols to vary the meaning of their messages as we do. For instance, for us, ‘Jonah ate the whale’ means something quite different from ‘The whale ate Jonah.’ There is no evidence for a comparable difference in chimpanzee communications.
The fact that chimpanzee communication is impoverished compared with our own does not prove that chimpanzees lack the capacity for a more productive system. Their system may be adequate for their needs. To determine whether chimpanzees have the same innate capacity we do, we must see whether they can learn our language. In one of the best-known studies of the teaching of language to chimps, Gardner and Gardner (1972) taught a female chimpanzee named Washoe signs adapted from American Sign Language. Sign language was used because chimps lack the vocal equipment to pronounce human sounds. Training began when Washoe was about 1 year old and continued until she was 5. During this time, Washoe’s caretakers communicated with her only by means of sign language. They first taught her signs by means of shaping procedures, waiting for her to make a gesture that resembled a sign and then reinforcing her. Later, Washoe learned signs simply by observing and imitating. By age 4, Washoe could produce 130 different signs and understand even more. She could also generalize a sign from one situation to another. For example, she first learned the sign for ‘more’ in connection with ‘more tickling’ and then generalized it to indicate ‘more milk’. Other chimpanzees have acquired comparable vocabularies. Some studies used methods of manual communication other than sign language. For example, Premack (1971, 1985) taught a chimpanzee named Sarah to use plastic symbols as words and to communicate by manipulating these symbols. In a series of similar studies, Patterson (1978) taught sign language to a gorilla named Koko, starting when Koko was 1 year old. By age 10, Koko had a vocabulary of more than 400 signs (Patterson & Linden, 1981). Do these studies prove that apes can learn human language? There seems to be little doubt that the apes’ signs are equivalent to our words and that the concepts behind some of the signs are equivalent to ours. 
But many experts question whether these studies show that apes can
learn syntax and learn to combine signs in the same way that humans combine words into a sentence. For example, not only can we combine the words man, John, hurt, and the into the sentence ‘The man hurt John’, but we can also combine the same words in a different order to produce a sentence with a different meaning, ‘John hurt the man.’ Although the studies just described provide some evidence that apes can combine signs into a sequence resembling a sentence, there is little evidence that apes can alter the order of the signs to produce a different sentence (Brown, 1986; Slobin, 1979). Even the evidence that apes can combine signs into a sentence has come under attack. In their early work, researchers reported cases in which an ape produced what seemed to be a meaningful sequence of signs, such as ‘Gimme flower’ and ‘Washoe sorry’ (Gardner & Gardner, 1972). As data accumulated, however, it became apparent that, unlike human sentences, the utterances of an ape are often highly repetitious. An utterance like ‘You me banana me banana you’ is typical of the signing chimps but would be most odd for a human child. In the cases in which an ape utterance is more like a sentence, the ape may simply have imitated the sequence of signs made by its human teacher. Some of Washoe’s most sentence-like utterances occurred when she was answering a question. For example, the teacher signed, ‘Washoe eat?’ and Washoe signed, ‘Washoe eat time.’ Washoe’s combination of signs may have been a partial imitation of her teacher’s combination, which is not how human children learn to combine words (Terrace, Petitto, Sanders, & Bever, 1979). The evidence considered thus far supports the conclusion that, although apes can develop a humanlike vocabulary, they cannot learn to combine their signs in the systematic way humans do. However, studies by Greenfield and Savage-Rumbaugh (1990) seem to challenge this conclusion. 
The researchers worked with a bonobo (pygmy chimpanzee), whose behavior is thought to be more like that of humans than the behavior of the more widely studied common chimpanzee. The bonobo, a 7-year-old named Kanzi, communicated by manipulating symbols that stand for words. Unlike in previous studies, Kanzi learned to manipulate the symbols in a relatively natural way, for example, by listening to his caretakers as they uttered English words while pointing to the symbols. Most important, after a few years of language training, Kanzi demonstrated some ability to vary word order to communicate changes in meaning. For example, if Kanzi was going to bite his half-sister Mulika, he would signal, ‘Bite Mulika’, but if his sister bit him, he would sign, ‘Mulika bite.’ Kanzi thus seems to have some syntactic knowledge, roughly that of a 2-year-old human. These results are tantalizing, but they need to be interpreted with caution. For one thing, Kanzi is one of very few apes who have shown any syntactic ability, and we might question how general the results are. For another thing, although Kanzi may have the linguistic ability of a 2-year-old, it took him substantially longer to get to that point than it does a human. But perhaps the main reason to be skeptical about the possibility of any ape’s developing linguistic abilities comparable to a human’s has been voiced by Chomsky (1991): ‘If an animal had a capacity as biologically advantageous as language but somehow hadn’t used it until now, it would be an evolutionary miracle, like finding an island of humans who could be taught to fly’.

The chimpanzee on the left has been trained to communicate by using a keyboard. The one on the right has learned a kind of sign language; here he makes the sign for ‘toothbrush’.

INTERIM SUMMARY

• Infants appear to be preprogrammed to learn phonemes, but they need several years to learn the rules for combining them.
• When children begin to speak, they first learn words that name concepts that are familiar in their environment. Then they move on to sentences. They begin with one-word utterances, progress to two-word telegraphic speech, and then elaborate their noun and verb phrases.
• Children learn language in part by testing hypotheses (often unconsciously). These hypotheses tend to be guided by a small set of operating principles, which call the children’s attention to critical characteristics of utterances, such as word endings.
• Innate factors also play a major role in language acquisition. There are numerous findings that support this claim. For one, all children in all cultures seem to go through the same stages in acquiring their language. For another, like other innate behaviors, some language abilities are learned only during a critical period. This partly explains why it is relatively difficult to learn a language later in life.

CRITICAL THINKING QUESTIONS

1. Do you think there is a critical period for learning word meanings? Why or why not?

2. What do you think would happen if parents explicitly taught children language the way that most researchers have taught apes human language? Would it speed up, slow down, or leave unchanged the process of language acquisition?

CONCEPTS AND CATEGORIZATION: THE BUILDING BLOCKS OF THOUGHT

Thought can be conceived of as a ‘language of the mind’. Actually, there may be more than one such language. One mode of thought corresponds to the stream of sentences that we seem to ‘hear in our mind’. It is referred to as propositional thought because it expresses a proposition or claim. Another mode, imaginal thought, corresponds to images, particularly visual ones, that we can ‘see’ in our minds.
Research on thinking in adults has emphasized these two modes, particularly the propositional mode. We can think of a proposition as a statement that expresses a factual claim. ‘Mothers are hard workers’ is one proposition. ‘Cats are animals’ is another. It is easy to see that such a thought consists of concepts – such as ‘mothers’ and ‘hard workers’ or ‘cat’ and ‘animal’ – combined in a particular way. To understand propositional thought, however, we first need to understand the concepts that compose it.

Functions of concepts

A concept represents an entire class; it is the set of properties that we associate with a particular class. Our concept of ‘cat’, for example, includes the properties of having four legs and whiskers. Concepts serve some major functions in mental life. One of those functions is to divide the world into manageable units (cognitive economy). The world is full of so many different objects that if we treated each one as distinct, we would soon be overwhelmed. For example, if we had to refer to every single object we encountered by a different name, our vocabulary would have to be gigantic – so immense that communication might become impossible. (Think what it would be like if we had a separate name for each of the 7 million colors among which we can discriminate!) Fortunately, we do not treat each object as unique. Rather, we see it as an instance of a concept. Many different objects are seen as instances of the concept ‘cat’, many others as instances of the concept ‘chair’, and so on. By treating different objects as members of the same concept, we reduce the complexity of the world that we have to represent mentally. Categorization refers to the process of assigning an object to a concept. When we categorize an object, we treat it as if it has many of the properties associated with the concept, including properties that we have not directly perceived.
A second major function of concepts is that they allow us to predict information that is not readily perceived (referred to as predictive power). For example, our concept of 'apple' is associated with such hard-to-perceive properties as having seeds and being edible, as well as with readily perceived properties like being round, having a distinctive color, and coming from trees. We may use the visible properties to categorize some object as an 'apple' (the object is red, round, and hangs from a tree) and then infer that the object has the less visible properties as well (it has seeds and is edible). As we will see, concepts enable us to go beyond directly perceived information (Anderson, 1991; Bruner, 1957).

We also have concepts of activities, such as 'eating'; of states, such as 'being old'; and of abstractions, such as 'truth', 'justice', or even the number 2. In each case we know something about the properties that are common to all members of the concept. Widely used concepts like these are generally associated with a one-word name. This allows us to communicate quickly about experiences that occur frequently. We can also make up concepts on the spot to serve some specific goal. For example, if you are planning an outing, you might generate the concept 'things to take on a camping trip'. These kinds of goal-driven concepts facilitate planning. Although such concepts are used relatively infrequently, and accordingly have relatively long names, they still provide us with some cognitive economy and predictive power (Barsalou, 1985).

Prototypes

The properties associated with a concept seem to fall into two sets. One set of properties makes up the prototype of the concept. They are the properties that describe the best examples of the concept. In the concept 'grandmother', for example, your prototype might include such properties as a woman who is in her 60s, has gray hair, and loves to spend time with her grandchildren. The prototype is what usually comes to mind when we think of the concept. But although the prototype properties may be true of the typical grandmother, they clearly are not true of all instances (think of a woman in her late 30s who, like her daughter, had a child while a teenager). This means that a concept must contain something in addition to a prototype. This additional something is a core that comprises the properties that are most important for being a member of a concept. Your core of the concept 'grandmother' would probably include the properties of being a female parent of a parent, the properties that are essential for being a member of the concept (Armstrong, Gleitman, & Gleitman, 1983). As another example, consider the concept 'bird'. Your prototype likely includes the properties of flying and chirping – which works for the best examples of 'bird', such as robins and blue jays, but not for other examples, such as ostriches and penguins. Your core would probably specify something about the biological basis of birdhood – having certain genes or, at least, having parents that are birds.
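The difference between consulting a core and consulting a prototype can be sketched computationally. Everything below (the field names, the trait list, the similarity measure) is hypothetical and chosen only to make the contrast concrete.

```python
# Illustrative contrast: core as a defining test, prototype as graded
# resemblance. Field names and traits are hypothetical.

PROTOTYPE = {"sixtyish", "gray hair", "loves grandchildren"}

def matches_core(person):
    """Core check: a female parent of a parent."""
    return person["female"] and any(child["has_child"] for child in person["children"])

def prototype_similarity(traits):
    """Prototype check: proportion of prototype properties present."""
    return len(traits & PROTOTYPE) / len(PROTOTYPE)

# A 38-year-old grandmother: fails the prototype but passes the core.
young_grandma = {"female": True, "children": [{"has_child": True}]}
print(matches_core(young_grandma))            # True: she IS a grandmother
print(prototype_similarity({"sixtyish"}))     # low score: atypical instance
```

The core test gives an all-or-none verdict; the prototype measure gives a degree of resemblance, which is why the young grandmother is a genuine but atypical member of the concept.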
Note that in both our examples – 'grandmother' and 'bird' – the prototype properties are salient but not perfect indicators of concept membership, whereas the core properties are more central to concept membership. However, there is an important difference between a concept like 'grandmother' and a concept like 'bird'. The core of 'grandmother' is a definition, and it is easily applied. Anyone who is a female parent of a parent must be a 'grandmother', and it is relatively easy to determine whether someone has these defining properties. Concepts like this one are said to be well defined. Categorizing a person or object into a well-defined category involves determining whether it has the core or defining properties.

[Photo: Do flying and chirping make a bird? Your prototype for 'bird' probably includes these features. However, they do not apply to certain kinds of birds, such as penguins.]

In contrast, the core of 'bird' is hardly a definition – we may know only that genes are somehow involved, for example – and the core properties are hidden from view. If we happen upon a small animal, we can hardly inspect its genes or inquire about its parentage. All we can do is check whether it does certain things, such as fly and chirp, and use this information to decide whether it is a bird. Concepts like 'bird' are said to be fuzzy. Deciding whether an object is an instance of a fuzzy concept often involves determining its similarity to the concept's prototype (Smith, 1995).

Most natural concepts seem to be fuzzy. They lack true definitions, and categorization of these concepts relies heavily on prototypes. Some instances of fuzzy concepts have more prototype properties than other instances. Among birds, for example, a robin will have the property of flying, whereas an ostrich will not. The more prototype properties an instance has, the more typical of the concept it is considered to be. In the case of 'bird', most people rate a robin as more typical than a chicken, and a chicken as more typical than an ostrich; in the case of 'apple', they rate red apples as more typical than green ones (since red seems to be a property of the concept 'apple'); and so on. The degree to which an instance is typical has a major effect on its categorization. When people are asked whether a pictured animal is a 'bird', a robin produces an immediate yes, whereas a chicken requires a longer decision time.
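The idea that typicality tracks the number of shared prototype properties can be sketched directly. The feature lists below are invented for illustration; the point is only that counting overlap with the prototype reproduces the robin > chicken > ostrich ordering described above.

```python
# Typicality as overlap with the prototype (invented feature lists).

BIRD_PROTOTYPE = {"flies", "chirps", "small", "perches in trees"}

BIRDS = {
    "robin":   {"flies", "chirps", "small", "perches in trees"},
    "chicken": {"chirps", "small"},   # flightless, ground-dwelling
    "ostrich": {"runs"},              # shares nothing with the prototype
}

def typicality(instance):
    """Count how many prototype properties the instance has."""
    return len(BIRDS[instance] & BIRD_PROTOTYPE)

ranked = sorted(BIRDS, key=typicality, reverse=True)
print(ranked)   # ['robin', 'chicken', 'ostrich'], mirroring rated typicality
```

On this simple account, the graded decision times reported above fall out naturally: the higher the overlap score, the faster and more confident the 'yes'.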
When young children are asked the same question, a robin will almost inevitably be classified correctly, whereas a chicken will often be declared a nonbird. Typicality also determines what we think of when we encounter the name of the concept. Hearing the sentence 'There is a bird outside your window', we are far more likely to think of a robin than a vulture, and what comes to mind will obviously influence what we make of the sentence (Rosch, 1978).

Universality of prototype formation

Are our prototypes determined mainly by our culture, or are they universal? For some concepts, such as 'grandmother', culture clearly has a major impact on the prototype. But for more natural concepts, prototypes are surprisingly universal. Consider color concepts such as 'red'. This is a fuzzy concept (no ordinary person knows its defining properties) and one with a clear prototype: People in our culture agree on which hues are typical reds and which hues are atypical. People in other cultures agree with our choices. Remarkably, this agreement is found even among people whose language does not include a word for 'red'. When speakers of these languages are asked to pick the best example from an array of red hues, they make the same choices we would. Even though the range of hues for what they would call 'red' may differ from ours, their idea of a typical red is the same as ours (Berlin & Kay, 1969). Other research suggests that the Dani, a New Guinea people whose language has terms only for 'black' and 'white', perceive color variations in exactly the same way as English-speaking people, whose language has terms for many colors. Dani individuals were given a set of red color patches to remember; the patches varied in how typical they were of 'red'. Later the participants were presented with a set of color patches and asked to decide which ones they had seen before.
Even though they had no word for 'red', they recognized more typical red colors better than less typical ones. This is exactly what American participants do when performing a comparable task (Rosch, 1974). Color prototypes thus appear to be universal.

More recent experiments suggest that prototypes for some animal concepts may also be universal. The experiments compared U.S. students and Maya Itza participants. (Maya Itza is a culture of the Guatemalan rainforest that is relatively insulated from Western influences.) The U.S. participants were from southeastern Michigan, which happens to have a number of mammalian species that are comparable to those found in the Guatemalan rainforest. Both groups were presented with the names of these species. They were first asked to group them into sets that go together, then to group those sets into higher-order groups that were related, and so on until all the species were in one group corresponding to 'mammals'. These groupings were determined by the similarity of the prototypes: In the first pass, participants would group together only species that seemed very similar. By making these groupings, each participant created a kind of tree, with the initial groupings at the bottom and 'mammal' at the top; this tree reflects the taxonomy of animals. The trees or taxonomies created by the Maya Itza were quite similar to those created by the U.S. students; in fact, the correlation between the average Itza and U.S. trees was about +.60. Moreover, both the Itza and U.S. taxonomies were highly correlated with the actual scientific taxonomy. Apparently, all people base their prototypes of animals on properties that they can easily observe (overall shape, or distinctive features like coloring, a bushy tail, or a particular movement pattern). These properties are indicators of the evolutionary history of the species, on which the scientific taxonomy is based (Lopez, Atran, Medin, Cooley, & Smith, 1997).
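The bottom-up grouping procedure the participants followed resembles what computer scientists call agglomerative clustering: repeatedly merge the two most similar groups until only one remains. The sketch below, with invented feature sets, shows the idea; it is an analogy to the task, not a model from the cited study.

```python
# Agglomerative grouping over invented species features, analogous to
# the taxonomy-building task described in the text.

species = {
    "wolf":     {"fur", "pack", "carnivore"},
    "coyote":   {"fur", "pack", "carnivore"},
    "squirrel": {"fur", "bushy tail", "climbs"},
    "bat":      {"fur", "flies"},
}

def similarity(a, b):
    """Jaccard overlap between the pooled features of two groups."""
    return len(a & b) / len(a | b)

# Each group starts as one species; record each merge as part of the 'tree'.
groups = {name: frozenset(feats) for name, feats in species.items()}
tree = []
while len(groups) > 1:
    g1, g2 = max(
        ((x, y) for x in groups for y in groups if x < y),
        key=lambda pair: similarity(groups[pair[0]], groups[pair[1]]),
    )
    tree.append((g1, g2))
    groups[f"({g1}+{g2})"] = groups.pop(g1) | groups.pop(g2)

print(tree[0])  # ('coyote', 'wolf'): the most similar pair merges first
```

The recorded merges form exactly the kind of tree the participants produced, with tight groupings at the bottom and one all-inclusive group at the top; comparing two such trees is what the reported correlation measured.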
One can also think of cases where the contents of animal concepts differ across cultures. If in some culture ostriches are plentiful but robins are not, that culture may well have a different prototype for 'bird' than does our culture. However, the principles by which prototypes are formed – such as focusing on frequently encountered features of instances of the concept – may well be universal.

Hierarchies of concepts

In addition to knowing the properties of concepts, we also know how concepts are related to one another. For example, 'apples' are members (or a subset) of a larger concept, 'fruit'; 'robins' are a subset of 'birds', which in turn are a subset of 'animals'. These two types of knowledge (properties of a concept and relationships between concepts) are represented in Figure 9.4 as a hierarchy.

As Figure 9.4 makes clear, an object can be identified at different levels. The same object is at once a 'Golden Delicious apple', an 'apple', and a 'fruit'. However, in any hierarchy one level is the basic level or preferred one for classification, the level at which we first categorize an object. For the hierarchy in Figure 9.4, the level that contains 'apple' and 'pear' would be the basic one. Evidence for this claim comes from studies in which people are asked to name pictured objects with the first names that come to mind. People are more likely to call a pictured Golden Delicious apple an 'apple' than either a 'Golden Delicious apple' or a 'fruit'. Basic-level concepts are special in other respects as well. As examples, they are the first ones learned by children, they are used more frequently, and they have shorter names (Mervis & Rosch, 1981). It seems, then, that we first divide the world into basic-level concepts.

What determines which level is basic? The answer appears to be that the basic level has the most distinctive properties. In Figure 9.4, 'apple' has several properties that are distinctive – not shared by other kinds of fruit (for example, red and round are not properties of 'pear'). In contrast, 'Golden Delicious apple' has few distinct properties; most of its properties are shared by 'Macintosh apple', for example.
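The notion of 'most distinctive properties' can be computed directly: a property is distinctive at a level if no sibling concept at that level shares it. The sketch below uses property lists simplified from the Figure 9.4 example.

```python
# Distinctiveness at each level of a concept hierarchy, using
# simplified property lists from the Figure 9.4 example.

levels = {
    "subordinate": {
        "Golden Delicious": {"yellow", "round", "seeds"},
        "Macintosh":        {"red", "round", "seeds"},
    },
    "basic": {
        "apple": {"round", "seeds", "red, yellow, or green"},
        "pear":  {"wider at bottom", "stem", "seeds"},
    },
}

def distinctive(level, concept):
    """Properties the concept does not share with its siblings."""
    others = set().union(
        *(props for name, props in levels[level].items() if name != concept)
    )
    return levels[level][concept] - others

print(distinctive("basic", "apple"))                   # two unshared properties
print(distinctive("subordinate", "Golden Delicious"))  # only its color
```

Counting the results shows why 'apple' is the basic level in this toy hierarchy: it has more properties all its own than 'Golden Delicious' does, so categorizing at that level is the most informative first cut.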
And ‘fruit’, which is at the highest level of Figure 9.4, has few properties of any kind. Thus, we first categorize the world at what turns out to be the most informative level (Murphy & Brownell, 1985). Different categorization processes We are constantly making categorization decisions. We categorize every time we recognize an object, every time we diagnose a problem (‘That’s a power failure’), and so on. How do we use concepts to categorize our world? The answer depends on whether the concept is well defined or fuzzy. For well-defined concepts like ‘grandmother’, we may determine how similar a person is to our prototype (‘She’s sixtyish and has gray hair, so she looks like a grandmother’). But if we are trying to be accurate, we can determine whether the person has the defining properties of the concept (‘Is she the female parent of a parent?’). The latter amounts to applying a rule: ‘If she’s the female parent of a parent, she’s a grandmother.’ There have been many studies of such rule-based categorization of welldefined concepts, and they show that the more properties there are in the rule, the slower and more error-prone the categorization process becomes (Bourne, 1966). This may be due to processing the properties one at a time. For fuzzy concepts like ‘bird’ and ‘chair’, we do not know enough defining properties to use rule-based categorization, so we often rely on similarity instead. As already mentioned, one thing we may do is determine the similarity of an object to the prototype of the concept (‘Is this object similar enough to my prototype to call it a chair?’). The evidence that people categorize objects in this fashion comes from experiments that involve three steps (Smith, 1995):

1. First the researcher determines the properties of a concept's prototype and of various instances of that concept. (The researcher might ask one group of participants to describe the properties of their prototypical chair and of various pictures of chairs.)
2. Then the researcher determines the similarity between each instance (each pictured chair) and the prototype by identifying their shared properties. This results in a similarity-to-prototype score for each instance.
3. Finally, the researcher shows that the similarity-to-prototype score is highly correlated with how accurately and quickly participants can correctly categorize that instance. This shows that similarity-to-prototype plays a role in categorization.

[Figure 9.4 Hierarchy of Concepts. Fruit (sweet) branches into Apple (red, yellow, or green; round; seeds) and Pear (wider at bottom; stem; seeds). Apple branches into Macintosh (red; round; seeds; some green) and Golden Delicious (yellow; round; seeds; some green); Pear branches into D'Anjou (wider at bottom; stem; seeds; green) and Bosc (wider at bottom; stem; seeds; brown). Words that begin with a capital letter represent concepts; lowercase words depict properties of these concepts. The green lines show relationships between concepts, and the red lines connect properties and concepts.]

There is another kind of similarity calculation that we can use to categorize objects. We can illustrate it with our chair example. Because we have stored in long-term memory some specific instances or exemplars of chairs, we can determine whether an object is similar to our stored chair exemplars. If it is, we can declare that it is a chair. Thus, we have two means of categorization based on similarity: similarity to prototypes and similarity to stored exemplars.

Acquiring concepts

How do we acquire the multitude of concepts that we know about? Some concepts, such as the concepts of 'time' and 'space', may be innate. Others have to be learned.

Learning prototypes and cores

We can learn about a concept in different ways. Either we are explicitly taught something about the concept or we learn it through experience. Which way we learn depends on what we are learning. Explicit teaching is likely to be the means by which we learn cores of concepts, and experience seems to be the usual means by which we acquire prototypes.

[Photo: Parents can teach children to name and classify objects. Later, when the child sees another object, he may determine whether it is in the same category as the stored exemplar.]
Someone explicitly tells a child that a 'robber' is someone who takes another person's possessions with no intention of returning them (the core), and the child's experiences may lead him or her to expect robbers to be shiftless, disheveled, and dangerous (the prototype).

Children must also learn that the core is a better indicator of concept membership than the prototype, but it takes a while for them to learn this. In one study, children aged 5 to 10 were presented with descriptions of items and asked to decide whether they belonged to particular well-defined concepts. We can illustrate the study with the concept of 'robber'. One description given for 'robber' depicted a person who matched its prototype but not its core:

A smelly, mean old man with a gun in his pocket who came to your house and takes your TV set because your parents didn't want it anymore and told him he could have it.

Another description given for 'robber' was of a person who matched its core but not its prototype:

A very friendly and cheerful woman who gave you a hug, but then disconnected your toilet bowl and took it away without permission and no intention to return it.

The younger children often thought that the prototypical description was more likely than the core description to be an instance of the concept. Not until age 10 did children show a clear shift from the prototype to the core as the final arbiter of concept decisions (Keil & Batterman, 1984).

Learning through experience

There are at least two different ways in which one can learn a concept through experience. The simplest way is called the exemplar strategy, and we can illustrate it with a child learning the concept of 'furniture'. When the child encounters a known instance or exemplar – for example, a table – she stores a representation of it.
Later, when she has to decide whether a new item – say, a desk – is an instance of 'furniture', she determines the new object's similarity to stored exemplars of 'furniture', including tables. This strategy seems to be widely used by children, and it works better with typical instances than with atypical ones. Because the first exemplars a child learns tend to be typical ones, new instances are more likely to be correctly classified to the extent that they are similar to typical instances. Thus, if a young child's concept of 'furniture' consisted of just the most typical instances (say, table and chair), he could correctly classify other instances that looked similar to the learned exemplars, such as desk and sofa, but not instances that looked different from the learned exemplars, such as lamp and bookshelf (Mervis & Pani, 1981).

The exemplar strategy remains part of our repertory for acquiring concepts, as there is substantial evidence that adults often use it in acquiring novel concepts (Estes, 1994; Nosofsky & Johansen, 2000). But as we grow older we start to use another strategy, hypothesis testing. We inspect known instances of a concept, searching for properties that are relatively common to them (for example, many pieces of 'furniture' are found in living spaces), and we hypothesize that these common properties are what characterize the concept. We then analyze novel objects for these critical properties, maintaining our hypothesis if it leads to a correct categorization of the novel object and revamping it if it leads us astray. This strategy thus focuses on abstractions – properties that characterize sets of instances rather than just single instances – and is tuned to finding core properties, because they are the ones that are common to most instances (Bruner, Goodnow, & Austin, 1956). What properties we look for, though, may be biased by any specific knowledge we have about the objects themselves. If a child thinks furniture always has a flat surface, this piece of prior knowledge may overly restrict the hypothesis that is generated.

The neural basis of concepts and categorization

Although we have emphasized the difference between well-defined and fuzzy concepts, research at the neurological level indicates that there are important differences just among fuzzy concepts. In particular, the brain seems to store concepts of animals and concepts of artifacts in different neural regions. We mentioned some of the evidence for this in our discussion of perception in Chapter 5. There we noted that there are patients who are impaired in their ability to recognize pictures of animals but who are relatively normal in their recognition of pictured artifacts such as tools, whereas other patients show the reverse pattern.
Recent research shows that what holds for pictures holds for words as well. Many of the patients who are impaired in naming pictures also cannot tell what the corresponding word means. For example, a patient who cannot name a pictured giraffe also cannot tell you anything about giraffes when presented with the word giraffe. The fact that the deficit appears for both words and pictures indicates that it has to do with concepts: The patient has lost part of the concept 'giraffe' (McCarthy & Warrington, 1990).

There is an alternative to the idea that concepts of animals and artifacts are stored in different regions of the brain. Concepts of animals may contain more perceptual features (what does it look like?) than functional features (what can it be used for?), whereas concepts of artifacts may have more functional than perceptual features. When brain damage affects perceptual regions more than functional ones, we would expect patients to show more impairment with animal than artifact concepts; when damage affects functional or motor regions of the brain more than perceptual regions, we would expect the opposite pattern (Farah & McClelland, 1991). The choice between this perceptual–functional hypothesis and the separate-regions-for-separate-concepts one remains controversial (Caramazza, 2000; Martin, Ungerleider, & Haxby, 2000).

Other research has focused on processes of categorization. One line of research suggests that determining the similarity between an object and a concept's prototype involves different brain regions than determining the similarity between an object and stored exemplars of the concept. The logic behind these studies is as follows: The exemplar process involves retrieving items from long-term memory. As we saw in Chapter 8, such retrieval depends on brain structures in the medial temporal lobe.
It follows that a patient with damage in these regions of the brain will be unable to effectively categorize objects by using a process that involves exemplars, although the patient might be relatively normal in the use of prototypes. This is exactly what researchers have found. One study tested patients with medial-temporal lobe damage as well as normal individuals on two different tasks. One task required participants to learn to sort dot patterns into two categories (see Figure 9.5 for examples), and the other task required participants to learn to sort paintings into two categories corresponding to two different artists. Independent evidence indicated that only the painting task relied on retrieval of explicit exemplars. The patients learned the dot pattern concepts as easily as the normal participants, but they performed far worse than the normal participants in acquiring the painting concepts (Kolodny, 1994). Thus, use of exemplars depends on the brain structures that mediate long-term memory, but use of prototypes in categorization must depend on other structures. Other research has focused on a patient who is essentially incapable of committing any new information to long-term memory (he cannot learn new exemplars), yet he performs normally on the dot pattern task. Clearly, prototype-based categorization does not depend on the structures that mediate long-term memory (Squire & Knowlton, 1995). The preceding discussion shows that there are neural differences between categorization based on prototypes and categorization based on stored exemplars. What about categorization based on rules? A recent study shows that rule use involves different neural circuits than similarity processes. Two groups of participants were taught to categorize imaginary animals into two categories corresponding to whether the animals were from Venus or Saturn. 
One group learned to categorize the animals on the basis of a complex rule: 'An animal is from Venus if it has antennae ears, curly tail, and hoofed feet; otherwise it's from Saturn.' The second group learned to categorize the animals by relying solely on their memory. (The first time they saw an animal, they would have to guess, but on subsequent trials they would be able to remember its category.) Then both groups were given novel animals to categorize while having their brains scanned. The rule group continued to categorize by rule, but the memory group had to categorize a novel animal by retrieving the stored exemplar that was most similar to it and then selecting the category associated with that exemplar. For the memory group, most of the brain areas that were activated were in the visual cortex at the back of the brain. This fits with the idea that these participants were relying on retrieval of visual exemplars. Participants in the rule group also showed activation in the back of the brain, but they showed activation in some frontal regions as well. These regions are often damaged in patients who have trouble doing rule-based tasks. Categorization based on rules therefore relies on different neural circuitry than does categorization based on similarity (Patalano, Smith, Jonides, & Koeppe, 2002).

This research provides yet another example of the interplay between biological and psychological approaches to a phenomenon. Categorization processes that have been viewed as different at the psychological level – such as using exemplars versus using rules – have now been shown to involve different brain mechanisms. This example follows a pattern that we have encountered several times in earlier chapters: A distinction first made at the psychological level is subsequently shown to hold at the biological level as well.

INTERIM SUMMARY

- Thought occurs in both propositional and imaginal modes. The key component of a proposition is a concept, the set of properties that we associate with a class.
- A concept includes both a prototype (properties that describe a best example) and a core (properties that are most important for being a member of the concept). Core properties play a major role in processing well-defined concepts like 'grandmother', whereas prototype properties dominate in fuzzy concepts like 'bird'.
- Children often learn a new concept by using an exemplar strategy: A novel item is classified as an instance of a concept if it is sufficiently similar to a known exemplar of the concept. As children grow older, they also use hypothesis testing as a strategy for learning concepts.
- Different neural regions may mediate different kinds of concepts. For example, perceptual regions of the brain may be more involved in representing animals than artifacts, whereas functional and motor regions of the brain may play a larger role in representing artifacts than animals. Different neural regions may also be involved in different categorization procedures.

[Figure 9.5 Examples of Dot Patterns Used to Study Categorization in Amnesiac Patients. Individuals learned that the study items all belonged to one category and then had to decide whether each of the test items belonged to that category. The test items that belong to the category (the ones labeled 'yes') do not match the study items directly. Rather, they are sufficiently similar to a prototype of the study items – roughly an average of the dot positions of the study items – to justify a 'yes' response. (Adapted from Squire & Knowlton, 1995)]

CRITICAL THINKING QUESTIONS

1 We have discussed some cases in which prototypes seem to be universal – that is, largely unaffected by culture. Can you think of cases in which prototypes would be greatly influenced by culture? If so, give some examples.
2 A critical finding is that some neurological patients are impaired in their animal concepts but not in their artifact concepts, whereas other patients show the reverse pattern. Aside from differences in the number of perceptual and functional features contained in animal and artifact concepts, can you think of another explanation of this critical finding?

REASONING

When we think in terms of propositions, our sequence of thoughts is organized. The kind of organization of interest to us here manifests itself when we try to reason. In such cases, our sequence of thoughts often takes the form of an argument, in which one proposition corresponds to a claim, or conclusion, that we are trying to draw. The remaining propositions are reasons for the claim or premises for the conclusion.

Deductive reasoning

Logical rules

According to logicians, the strongest arguments demonstrate deductive validity, meaning that it is impossible for the conclusion of the argument to be false if its premises are true (Skyrms, 1986). Consider the following example:

a If it's raining, I'll take an umbrella.
b It's raining.
c Therefore, I'll take an umbrella.

This is an example of a syllogism, which contains two premises and a conclusion. The conclusion follows logically from the two premises according to the rules of deductive logic. In this case, the relevant rule is the following: If you have a proposition of the form 'If p then q', and another proposition p, then you can infer the proposition q. How does the reasoning of ordinary people line up with that of the logician?
When asked to decide whether an argument is deductively valid, people are quite accurate in their assessments of simple arguments like this one. How do we make such judgments? Some theories of deductive reasoning assume that we operate like intuitive logicians and use logical rules in trying to prove that the conclusion of an argument follows from the premises. Specifically, they identify the first premise ('If it's raining, I'll take an umbrella') with the 'If p then q' part of the rule. They identify the second premise ('It's raining') with the p part of the rule, and then they infer the q part ('I'll take an umbrella'). Presumably then, adults know the rules and use them (perhaps unconsciously) to decide that the previous argument is valid.

Rule following becomes more conscious if we complicate the argument. Presumably, we apply our sample rule twice when evaluating the following argument:

a If it's raining, I'll take an umbrella.
b If I take an umbrella, I'll lose it.
c It's raining.
d Therefore, I'll lose my umbrella.

Applying our rule to propositions a and c allows us to infer 'I'll take an umbrella', and applying our rule again to proposition b and the inferred proposition allows us to infer 'I'll lose my umbrella', which is the conclusion. One of the best pieces of evidence that people are using rules like this is that the number of rules an argument requires is a good predictor of the argument's difficulty. The more rules are needed, the more likely it is that people will make an error and the longer they will take when they do make a correct decision (Rips, 1983, 1994).

Moreover, humans are quite likely to make mistakes under specific conditions. For example, contrary to the rules of deductive logic, the great majority of subjects will judge a logically invalid conclusion as valid if it seems plausible to them. This finding has been named the belief bias in syllogistic reasoning.
As an example, consider the following two syllogisms (from Evans et al., 1983):

  1. a No addictive things are inexpensive.
     b Some cigarettes are inexpensive.
     c Therefore, some addictive things are cigarettes.

  2. a No addictive things are inexpensive.
     b Some cigarettes are inexpensive.
     c Therefore, some cigarettes are not addictive.

The first syllogism is invalid: the conclusion does not follow from the two premises. But the plausibility of the conclusion led 92 percent of the subjects to accept it nevertheless. The second syllogism is valid, but it was accepted by only 46 percent of the subjects. Next, we will look at other effects of content on reasoning.

Effects of content

Logical rules do not capture all aspects of deductive reasoning. Such rules are triggered only by the logical form of propositions, yet our ability to evaluate a deductive

argument often depends on the content of the propositions as well. We can illustrate this point with the following experiment: the Wason selection task (Wason, 1968). Participants are presented with four cards. In one version of the problem, each card has a letter on one side and a digit on the other (see Figure 9.6a). The participant must decide which cards to turn over to determine whether the following claim is correct: 'If a card has a vowel on one side, then it has an even number on the other side.' The correct answer is to turn over the E and the 7. (To see that the '7' card is critical, note that if it has a vowel on its other side, the claim is disconfirmed.) While most participants correctly choose the 'E' card, fewer than 10 percent of them also choose the '7' card!

Figure 9.6 Content Effects in Deductive Reasoning. (a) Cards including E, 7, and K; hypothesis: 'If a card has a vowel on one side, it has an even number on the other side.' An illustration of the problem in which participants had to decide which two cards should be turned over to test the hypothesis. (b) Cards including Beer, 16, and Coke; hypothesis: 'If a person is drinking beer, he or she must be over 19.' An illustration of a problem that is logically equivalent to (a) but much easier to solve. (After Griggs & Cox, 1982; Wason & Johnson-Laird, 1972)

Performance improves dramatically, however, in another version of the problem (see Figure 9.6b). Now the claim that participants must evaluate is 'If a person is drinking beer, he or she must be over 19.' Each card has a person's age on one side and what he or she is drinking on the other. This version of the problem is logically equivalent to the preceding version (in particular, 'Beer' corresponds to 'E', and '16' corresponds to '7'), but now most participants make the correct choices and turn over the 'Beer' and '16' cards (Griggs & Cox, 1982).
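The selection logic itself can be sketched in code. This is our own toy encoding (not from the chapter), using the cards visible in Figure 9.6a: a card needs to be turned over only if its visible face leaves the rule open to falsification.

```python
# Which cards must be turned to test 'if vowel on one side, even number on
# the other'? Only a visible vowel (which might hide an odd number) or a
# visible odd number (which might hide a vowel) can falsify the rule.
# Consonants and even numbers can never violate it, so turning them tells
# us nothing.

def is_vowel(face):
    return face.isalpha() and face.upper() in "AEIOU"

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

def must_turn(face):
    return is_vowel(face) or is_odd_number(face)

print([card for card in ["E", "K", "7"] if must_turn(card)])  # ['E', '7']
```

Participants' near-universal failure to select the '7' card is a failure to look for potential falsifiers, which is exactly what the `is_odd_number` branch captures.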
The content of the propositions clearly affects their reasoning. Results like these imply that we do not always use logical rules when solving deduction problems. Rather, sometimes we use rules that are less abstract and more relevant to everyday problems – pragmatic rules. An example is the permission rule, which states that ‘If a particular action is to be taken, often a precondition must be satisfied.’ Most people know this rule and use it when presented with the drinking problem in Figure 9.6b; that is, they would think about the problem in terms of permission. Once activated, the rule would lead people to look for failures to meet the relevant precondition (being under age 19), which in turn would lead them to choose the ‘16’ card. In contrast, the permission rule would not be triggered by the letter-number problem in Figure 9.6a, so there is no reason for people to choose the ‘7’ card. Thus, the content of a problem affects whether a pragmatic rule is activated, which in turn affects the correctness of the reasoning (Cheng, Holyoak, Nisbett, & Oliver, 1986). In addition to applying rules, participants may sometimes solve the drinking problem by setting up a concrete representation of the situation – a mental model. They may, for example, imagine two people, each with a number on his back and a drink in his hand. They may then inspect this mental model and see what happens, for example, if the drinker with ‘16’ on his back has a beer in his hand. According to this idea, we reason in terms of mental models that are suggested by the content of the problem (Johnson-Laird, 1989). The two procedures just described – applying pragmatic rules and constructing mental models – have one thing in common. They are determined by the content of the problem, in contrast to the application of logical rules, which should not be affected by problem content. Our sensitivity to content often prevents us from operating as logicians in solving a problem. 
Inductive reasoning

Logical rules

Logicians have noted that an argument can be good even if it is not deductively valid. Such arguments are inductively strong, meaning that it is improbable that the

conclusion is false if the premises are true (Skyrms, 1986). An example of an inductively strong argument is as follows:

a Mitch majored in accounting in college.
b Mitch now works for an accounting firm.
c Therefore, Mitch is an accountant.

This argument is not deductively valid (Mitch may have tired of accounting courses and taken a night watchman's job). Inductive strength, then, is a matter of probabilities, not certainties, and (according to logicians) inductive logic should be based on the theory of probability. We make and evaluate inductive arguments all the time. In doing so, do we rely on the rules of probability theory as a logician or mathematician would? One relevant probability rule is the base-rate rule, which states that the probability of something being a member of a class (such as Mitch being a member of the class of accountants) is greater the more class members there are (that is, the higher the base rate of the class). Our sample argument about Mitch being an accountant can be strengthened by adding the premise that Mitch joined a club in which most of the members are accountants. Another relevant probability rule is the conjunction rule: the probability of a proposition cannot be less than the probability of that proposition combined with another proposition. For example, the probability that 'Mitch is an accountant' cannot be less than the probability that 'Mitch is an accountant and makes more than $60,000 a year.' The base-rate and conjunction rules are rational guides to inductive reasoning – they are endorsed by logic – and most people will defer to them when the rules are made explicit. However, in rough-and-tumble everyday reasoning, people frequently violate these rules, as we are about to see.

Heuristics

A heuristic is a short-cut procedure that is relatively easy to apply and often yields the correct answer, though not inevitably so. People often use heuristics in everyday life because they have found them useful.
However, as the following discussion shows, they are not always dependable. In a series of ingenious experiments, Tversky and Kahneman (1973, 1983; Kahneman & Tversky, 1996) have shown that people violate some basic rules of probability theory when making inductive judgments. Violations of the base-rate rule are particularly common. In one experiment, one group of participants was told that a panel of psychologists had interviewed 100 people – 30 engineers and 70 lawyers – and written personality descriptions of them. These participants were then given a few descriptions and asked to indicate the probability that the person described was an engineer. Some descriptions were prototypical of an engineer (for example, 'Jack shows no interest in political issues and spends his free time on home carpentry'), and others were neutral (for example, 'Dick is a man of high ability and promises to be quite successful'). Not surprisingly, these participants rated the prototypical description as more likely to be that of an engineer. Another group of participants was given the identical instructions and descriptions, except they were told that the 100 people were 70 engineers and 30 lawyers (the reverse of the first group). The base rate of engineers therefore differed greatly between the two groups. This difference had virtually no effect: participants in the second group gave essentially the same ratings as those in the first group. For example, participants in both groups rated the neutral description as having a 50–50 chance of being that of an engineer. This shows that participants ignored the information about base rates. The rational decision (applying the base-rate rule) would have been to rate the neutral description as more likely to belong to the profession with the higher base rate (Tversky & Kahneman, 1973). People pay no more heed to the conjunction rule.
In one study, participants were presented with the following description: Linda is 31 years old, single, outspoken, and very bright. In college, she majored in philosophy . . . and was deeply concerned with issues of discrimination. Participants then estimated the probabilities of the following two statements:

  1. Linda is a bank teller.
  2. Linda is a bank teller and is active in the feminist movement.

Statement 2 is the conjunction of statement 1 and the proposition 'Linda is active in the feminist movement.' In flagrant violation of the conjunction rule, most participants rated statement 2 as more probable than statement 1. This is a fallacy because every feminist bank teller is a bank teller, but some bank tellers are not feminists, and Linda could be one of them (Tversky & Kahneman, 1983). Participants in this study based their judgments on the fact that Linda seems more similar to a feminist bank teller than to a bank teller. Although they were asked to estimate probability, participants instead estimated the similarity of the specific case (Linda) to the prototypes of the concepts 'bank teller' and 'feminist bank teller'. Estimating similarity is used as a heuristic for estimating probability. People use the similarity heuristic because similarity often relates to probability yet is easier to calculate. Use of the similarity heuristic also explains why people ignore base rates. In the engineer–lawyer study described earlier, participants

may have considered only the similarity of the description to their prototypes of 'engineer' and 'lawyer'. Given a description that matched the prototypes of 'engineer' and 'lawyer' equally well, participants judged that engineer and lawyer were equally probable. Reliance on the similarity heuristic can lead to errors even by experts. Reasoning by similarity shows up in another common reasoning situation: one in which we know that some members of a category have a particular property and must decide whether other members of the category have that property as well. In one study, participants had to judge which of the following two arguments seemed stronger:

  1. a All robins have sesamoid bones.
     b Therefore all sparrows have sesamoid bones.

versus

  2. a All robins have sesamoid bones.
     b Therefore all ostriches have sesamoid bones.

Not surprisingly, participants judged the first argument to be stronger, presumably because robins are more similar to sparrows than they are to ostriches. This use of similarity appears rational, inasmuch as it fits with the idea that things that have many known properties in common are likely to share unknown properties as well. But the veneer of rationality fades when we consider participants' judgments on another pair of arguments:

  3. a All robins have sesamoid bones.
     b Therefore all ostriches have sesamoid bones (same as the preceding argument).

versus

  4. a All robins have sesamoid bones.
     b Therefore all birds have sesamoid bones.

Participants judged the second argument to be stronger, presumably because robins are more similar to the prototype of birds than they are to ostriches. But this judgment is a fallacy. On the basis of the same evidence (that robins have sesamoid bones), it cannot be more likely that all birds have some property than that all ostriches do, because ostriches are in fact birds. Again, our similarity-based intuitions can sometimes lead us astray (Osherson, Smith, Wilkie, Lopez, & Shafir, 1990). Similarity is not our only strong heuristic. Another is the causality heuristic: people estimate the probability of a situation by the strength of the causal connections between the events in the situation. In the following example, people judge the second statement to be more probable than the first:
  1. Sometime during the year 2010, there will be a massive flood in California in which more than 1,000 people will drown.

  2. Sometime during the year 2010, there will be an earthquake in California, causing a massive flood in which more than 1,000 people will drown.

Judging statement 2 to be more probable than statement 1 is another violation of the conjunction rule (and hence another fallacy). This time, the violation arises because in statement 2 the flood has a strong causal connection to another event, the earthquake, whereas in statement 1 the flood alone is mentioned and has no causal connections. Other heuristics are used to estimate probabilities and frequencies as well. For example, Kahneman and Tversky (1973) showed that subjects (incorrectly!) estimated the frequency of words starting with the letter r (like rose) as higher than the frequency of words with the letter r in the third position (such as care). The reason for this error lies in the ease with which we can retrieve words based on their first letter: the use of an availability heuristic leads to an erroneous conclusion in this case. Another heuristic that can lead us astray is the representativeness heuristic: the assumption that each case is representative of its category. As a result, people often extrapolate from a single case, even when such extrapolations are unwarranted. These two heuristics probably explain why subjects overestimate the number of fatalities caused by floods or murder (which get high press coverage and are easily remembered), while they underestimate the number of fatalities caused by specific diseases (Slovic, Fischhoff, & Lichtenstein, 1982). The biases resulting from these heuristics are compounded by another aspect of human reasoning, called the confirmation bias: we give more credence to evidence that is in line with our previous beliefs than to evidence that contradicts it.
To illustrate: once we believe that we live in a dangerous society and that murders are frequent events, we are even more likely to notice and remember news reports about murders – thereby confirming our own beliefs. Gilovich (1983) describes how many compulsive gamblers persist in a belief about their own 'winning game', even in the face of persistent losses. The confirmation bias determines how the gamblers review their own wins and losses: wins are seen as a confirmation of the 'winning game' and taken at face value, whereas losses are discounted or 'explained away'. So, our reliance on heuristics often leads us to ignore some basic rational rules, including the base-rate and conjunction rules. But we should not be too pessimistic about our level of rationality. For one thing, heuristics probably lead to correct decisions in most cases. Another point is that under the right circumstances we can appreciate the relevance of certain logical rules to particular problems and use them appropriately (Gigerenzer, 1996; Nisbett, Krantz, Jepson, & Kunda, 1983). For example, in reading and thinking about this discussion, you were probably able to see the relevance of the base-rate and conjunction rules to the problems at hand.
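The two rules the subjects violated can be made concrete with a short sketch. This is our own toy illustration, not code from any of the studies; the likelihood values for the neutral description and the probabilities assumed for Linda are made-up numbers chosen only to show the structure of the rules:

```python
# Base-rate rule, via Bayes' theorem. A neutral description favors neither
# profession (both likelihoods set to 0.5, an illustrative assumption), so
# the rational posterior should simply track the base rate of engineers.
def posterior_engineer(base_rate, p_desc_given_eng=0.5, p_desc_given_law=0.5):
    p_desc = p_desc_given_eng * base_rate + p_desc_given_law * (1 - base_rate)
    return p_desc_given_eng * base_rate / p_desc

print(round(posterior_engineer(0.30), 2))  # 0.3 when 30 of 100 are engineers
print(round(posterior_engineer(0.70), 2))  # 0.7 when 70 of 100 are engineers

# Conjunction rule: P(A and B) = P(A) * P(B | A), which can never exceed
# P(A), no matter what probabilities we assume for Linda.
p_teller = 0.05                  # assumed
p_feminist_given_teller = 0.4    # assumed
p_both = p_teller * p_feminist_given_teller
assert p_both <= p_teller        # holds for any choice of numbers
```

Subjects in the engineer–lawyer study effectively returned 0.5 in both conditions, and subjects in the Linda study ranked `p_both` above `p_teller` – the two violations described above.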

The neural basis of reasoning

We noted that many psychologists accept the logicians' distinction between deductive and inductive reasoning, but not all do. Some researchers who believe that mental models underlie deductive reasoning further hold that mental models are used in inductive reasoning as well, and that consequently there is no qualitative difference between deductive and inductive reasoning (for example, see Johnson-Laird, 1997). The question of whether there are two kinds of reasoning or one is a fundamental issue, and recently it has been studied at the neural level. A number of brain-imaging experiments have been carried out, but for our purposes it suffices to focus on a single study by Osherson and colleagues (1998). These researchers used PET to image people's brains while they performed a deductive or an inductive reasoning task. In both tasks, participants had to evaluate arguments like the following:

  1. a None of the bakers plays chess.
     b Some of the chess players listen to opera.
     c (Therefore) some of the opera listeners are not bakers.

  2. a Some of the computer programmers play the piano.
     b No one who plays the piano watches soccer matches.
     c (Therefore) some computer programmers watch soccer matches.

In the deductive task, participants were asked to distinguish valid arguments (the conclusion must be true if the premises are) from invalid arguments (it is possible for the conclusion to be false even if the premises are true). Participants were first given some training on this valid–invalid distinction. In these cases, argument 1 is valid and argument 2 is not. The task is not easy, as the researchers wanted to ensure that their participants' reasoning powers were fully engaged. In the induction task, individuals were asked whether the conclusion had a greater chance of being true than false, given that the premises were true. For argument 1, the answer has to be yes – because the argument is deductively valid. For argument 2, the answer is more up for grabs. But what is important is that in both cases participants are reasoning in terms of 'chances of being true'; that is, they're reasoning about probabilities (regardless of how they compute them). A number of brain areas were active during deductive but not inductive reasoning, and a number of areas showed the reverse pattern. These results are consistent with the hypothesis that deductive and inductive reasoning are mediated by different mechanisms. More specifically, only when reasoning deductively were a number of areas in the right hemisphere activated, some of which were toward the back of the brain. These activations might reflect the participants' use of spatial representations (like Venn diagrams) in trying to answer the difficult
validity question. In contrast, when reasoning inductively, some of the major brain activations were in the left hemisphere, in a region of the frontal cortex that is known to be involved in estimation problems (such as 'How many camels are there in California?'). Estimation often involves rough assessments of probabilities (such as 'What's the chance of a medium-sized city having a zoo?'). Other imaging studies of deductive versus inductive reasoning (Goel, Gold, Kapur, & Houle, 1998) have also found distinctive areas involved in the two kinds of reasoning, although the areas found were not always the same as those obtained in the previous study. The difference in the areas activated in the two studies may reflect the use of very different materials, but the fact that both experiments show different neural patterns for deductive and inductive reasoning supports the idea that two different reasoning mechanisms are involved. These studies provide the beginning of an understanding of reasoning at the neural level.

INTERIM SUMMARY

• In reasoning, some arguments are deductively valid, which means that it is impossible for the conclusion to be false if the premises are true. When evaluating such an argument, we sometimes use logical rules, and at other times use heuristics – rules of thumb that operate on the content of propositions, not their logical form.

• Other arguments are inductively strong, which means that it is improbable that the conclusion is false if the premises are true. When evaluating such an argument, we often ignore the principles of probability theory and rely on similarity and causality heuristics.

• Research on the neural bases of reasoning supports the distinction between deductive and inductive reasoning. When people are presented with the same arguments, different parts of the brain become active depending on whether they evaluate deductive validity or inductive strength.

CRITICAL THINKING QUESTIONS

1 With regard to inductive reasoning, what kind of training might people be given to increase their use of the base-rate and conjunction rules in real-life reasoning situations?
2 How could you use a brain-imaging experiment to see if there is a neural distinction between reasoning by formal procedures (logical rules, probability rules) and reasoning by heuristics?

CUTTING EDGE RESEARCH

Unconscious thought for complex decisions

In 2004, Dijksterhuis published results showing that our unconscious can make decisions that are superior to decisions made consciously (Dijksterhuis, 2004). In one experiment, subjects were presented with descriptions of a number of apartments (some more desirable than others) and were asked to select the best option. Some subjects had to do so immediately, others were given a few minutes to think about the information (the 'conscious thought' condition), and a third group of subjects was distracted for a few minutes before they decided (the 'unconscious thought' condition). Subjects in the last condition made the best decisions. In subsequent work, the researchers studied how satisfied the subjects were with the choices they had made. They were interviewed about their choice a few weeks after selecting a poster to take home (Dijksterhuis & van Olden, 2006). Subjects in the 'unconscious thought' condition were more satisfied than the subjects in the other conditions. These discoveries seem counterintuitive. After all, wouldn't it seem wise to consider your options carefully? When does it help to deliberate about your decisions, and when does it not? Recent research by Dijksterhuis and his co-workers (Dijksterhuis et al., 2006) gives us important clues. In an experiment similar to the one described above, one important variable was added: the issue to be decided was either simple or complex. In this study, subjects were choosing cars. In the 'simple' condition, each car was characterized by 4 attributes, whereas in the 'complex' condition, each car was characterized by 12 attributes. The researchers reasoned that conscious thought is precise and should therefore lead to the right choices in simple matters. But because conscious thought requires the use of short-term memory (which has limited capacity), it will lead to inferior decisions on complex matters. And indeed: conscious thinkers were more likely than unconscious thinkers to make the correct choice in the simple condition, but in the complex condition the performance of the unconscious thinkers was superior to that of the conscious thinkers. Furthermore, it seems that unconscious thought is an active process. First, subjects in the unconscious thought condition did better than subjects in the immediate condition (Dijksterhuis, 2004; Dijksterhuis & van Olden, 2006). Second, unconscious thought is goal-dependent: subjects who are not warned about an upcoming decision do not seem to engage in unconscious thought (Bos et al., 2008). Third, unconscious thought results in a different representation of the information (Dijksterhuis, 2004; Bos et al., 2008). This representation apparently allows for a superior weighing of the many factors that are important in complex decisions.

IMAGINAL THOUGHT

Earlier we mentioned that, in addition to propositional thought, we can also think in an imaginal mode, particularly in terms of visual images. In this section we take a closer look at such visual thinking. We seem to do some of our thinking visually. Often we retrieve past perceptions, or parts of them, and operate on them the way we would operate on a real percept. To appreciate this point, try to answer the following three questions:

  1. What shape are a German shepherd's ears?

  2. What new letter is formed when an uppercase N is rotated 90 degrees?

  3. How many windows are there in your parents' living room?

When answering the first question, most people report that they form a visual image of a German shepherd's head and 'look' at the ears to determine their shape. When answering the second question, people report first forming an image of a capital N and then mentally 'rotating' it 90 degrees and 'looking' at it to determine its identity.
And when answering the third question, people report imagining the room and then ‘scanning’ the image while counting the windows (Kosslyn, 1983; Shepard & Cooper, 1982). These examples are based on subjective impressions, but they and other evidence suggest that imagery involves the same representations and processes that are used in perception (Finke, 1985). Our images of objects and places have visual detail: We see the German shepherd, the N, or our parents’ living room in our ‘mind’s eye’. Moreover, the mental operations that we perform on these images seem to be analogous to the operations we carry out on real visual objects. We scan the image of our parents’ room in much the same way that we would scan a real room, and we rotate our image of the N the way we would rotate the real object. For this reason, imaginal thought is said to rely on analogical representations. This

in contrast with propositional thought, which relies on symbolic representations (consider the word 'room': it does not resemble your parents' living room in any way).

Imaginal operations

We have noted that the mental operations performed on images seem to be analogous to those that we carry out on real visual objects. Numerous experiments provide objective evidence for these subjective impressions. One operation that has been studied intensively is mental rotation. In a classic experiment, participants saw the capital letter R on each trial. The letter was presented either normally or backward, and either in its usual vertical orientation or rotated by various degrees (see Figure 9.7). The participants had to decide whether the letter was normal or backward. The more the letter had been rotated from its vertical orientation, the longer it took the participants to make the decision (see Figure 9.8). This finding suggests that participants made their decisions by rotating the image of the letter in their minds until it was vertical and then checking to determine whether it was normal or backward.

Figure 9.7 Study of Mental Rotation. Examples of the letters presented to participants in studies of mental rotation: normal and backward versions of the letter R, rotated 0, 60, 120, 180, 240, and 300 degrees from the vertical. On each presentation, participants had to decide whether the letter was normal or backward. (L. A. Cooper & R. N. Shepard (1973) 'Chronometric Studies of the Rotation of Mental Images', in Visual Information Processing, ed. by W. G. Chase. Adapted by permission of Academic Press.)

Figure 9.8 Decision Times in the Mental Rotation Study. Decision time (in milliseconds) plotted against angle of rotation (in degrees): the time taken to decide whether a letter had normal or reversed orientation was greatest when the rotation was 180 degrees, so that the letter was upside down. (L. A. Cooper & R. N. Shepard (1973) 'Chronometric Studies of the Rotation of Mental Images', in Visual Information Processing, ed. by W. G. Chase. Adapted by permission of Academic Press.)

Another operation that is similar in imagery and perception is that of scanning an object or array. In an experiment on scanning an image, participants first studied the map of a fictional island that contained seven key locations (see Figure 9.9). The map was removed, and participants were asked to form an image of it and fixate on a particular location (for example, the tree in the southern part of the island). Then the experimenter named another location (for example, the tree at the northern tip of the island). Starting at the fixated location, the participants were to scan their images until they found the named location and to push a button upon 'arriving' there. The greater the distance between the fixated location and the named one, the longer the participants took to respond. Indeed, the time people took to scan the image increased linearly with the imagined distance, which suggests that they were scanning their images in much the same way that they scan real objects. Another commonality between imaginal and perceptual processing is that both are limited by grain size. On a television screen, for instance, the grain of the picture tube determines how small the details of a picture can be and still remain perceptible. Although there is no such screen in the brain, we can think of our images as occurring in a mental medium whose grain limits the amount of detail we can detect in an image. If this grain size is fixed, smaller images should be more difficult to inspect than larger ones. A good deal of evidence supports this claim.
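The chronometric logic behind the rotation results can be sketched as a simple linear model. The intercept and slope below are made-up illustrative values, not Cooper and Shepard's published data; the point is only the shape of the prediction, with decision times peaking at 180 degrees:

```python
# Decision time grows linearly with how far the image must be mentally
# rotated. The effective rotation is the shortest way back to vertical,
# so a 240-degree presentation requires the same 120-degree rotation as
# a 120-degree presentation.

def rotation_time_ms(angle_deg, base_ms=500.0, ms_per_degree=3.0):
    disparity = min(angle_deg % 360, 360 - angle_deg % 360)
    return base_ms + ms_per_degree * disparity

for angle in (0, 60, 120, 180, 240, 300):
    print(angle, rotation_time_ms(angle))  # times rise to 180, then fall
```

The same linear logic, with distance in place of angle, captures the image-scanning result: response time increases linearly with the imagined distance to be traversed.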

Figure 9.9 Scanning Mental Images. The person scans the image of the island from south to north, looking for the named location. It appears as though the individual's mental image is like a real map, and it takes longer to scan across the mental image if the distance to be scanned is greater. (S. M. Kosslyn et al. (1978) 'Scanning Mental Images', from 'Visual Images Preserve Metric Spatial Information: Evidence from Studies of Image Scanning', Journal of Experimental Psychology, 4:47–60. Copyright © 1978 by the American Psychological Association. Adapted by permission.)

In one experiment, participants first formed an image of a familiar animal – for example, a cat. Then they were asked to decide whether the imaged object had a particular property. Participants made decisions faster for larger properties, such as the head, than for smaller ones, such as the claws. In another study, participants were asked to form an image of an animal at different relative sizes – small, medium, or large. They were then asked to decide whether their images had a particular property. Their decisions were faster for larger images than for smaller ones. In imagery as in perception, the larger the image, the more readily we can see the details of an object (Kosslyn, 1980).

The neural basis of imagery

Perhaps the most persuasive evidence that imagery is like perception would be demonstrations that the two are mediated by the same brain structures. In recent years, a substantial amount of evidence of this sort has accumulated. Some of the evidence comes from studies of brain-damaged patients and shows that any problem the patient has in visual perception is typically accompanied by a parallel problem in visual imagery (Farah, Hammond, & Levine, 1988).
A particularly striking example is patients who suffer damage in the parietal lobe of the right hemisphere and as a result develop visual neglect of the left side of the visual field. Though not blind, these patients ignore everything on the left side of their visual field. A male patient, for example, may neglect to shave the left side of his face. The Italian neurologist Bisiach (Bisiach & Luzzatti, 1978) found that this visual neglect extends to imagery. He asked patients with visual neglect to imagine a familiar square in their native Milan as it looks while standing in the square facing the church. The patients reported most objects on their right but few on their left. When asked to imagine the scene from the opposite perspective, while standing in front of the church and looking out into the square, the patients neglected the objects they had previously reported (which were now on the left side of the image). These patients manifested the same kind of neglect in imagery that they did in perception, which suggests that the damaged brain structures normally mediate imagery as well as perception. Some studies have used brain-scanning methods to demonstrate that in normal individuals the parts of the brain involved in perception are also involved in imagery. In one experiment, participants performed both a mental arithmetic task (‘Start at 50 and count down, subtracting by 3s’) and a visual imagery task (‘Visualize a walk through your neighborhood, making alternating right and left turns starting at your door’). While a participant was doing each task, the amount of blood flow in various areas of his or her cortex was measured. There was more blood flow in the visual cortex when participants engaged in the imagery task than when they engaged in the mental arithmetic task. Moreover, the pattern of blood flow during the imagery task was like that normally found in perceptual tasks (Roland & Friberg, 1985). 
A PET experiment by Kosslyn and associates (1993) provides a striking comparison of the brain structures involved in perception and imagery. While having their brains scanned, participants performed two different

tasks, a perception task and an imagery task. In the perception task, first a block capital letter was presented on a background grid and then an X was presented in one of the grid cells. The participant's task was to decide as quickly as possible whether the X fell on part of the block letter (see Figure 9.10). In the imagery task, the background grid was again presented, but without a block capital letter. Under the grid was a lowercase letter, and participants had been previously instructed to generate an image of the capital version of the lowercase letter and project it onto the grid. Then an X was presented in one of the grid cells, and participants were asked to determine whether the X fell on part of the imagined block letter. Not surprisingly, the perception task resulted in heightened neural activity in parts of the visual cortex, but so did the imagery task. Indeed, the imagery task resulted in increased activity in brain structures that are among the first regions of the cortex to receive visual information. Imagery is like perception from the early stages of cortical processing.

Figure 9.10 Imagery and Perception. Tasks used to determine whether visual imagery involves the same brain structures as visual perception. In the perception task, participants must decide whether the X fell on part of the block letter. In the imagery task, participants generate an image of the block letter and then decide whether the X fell on part of the (image of the) block letter. The person knows which letter to image because the lowercase version of it is presented below the grid. (The lowercase version is also presented in the perception task, just to keep things comparable.) (From Robert J. Sternberg, Beyond IQ: A Triarchic Theory of Human Intelligence, © 1985 by Robert J. Sternberg. Reprinted by permission of Cambridge University Press.)
Moreover, when the neural activations from the two tasks were directly compared, there was more activation in the imagery task than in the perception task, presumably reflecting the fact that the imagery task required more 'perceptual work' than the perception task. These results leave little doubt that imagery and perception are mediated by the same neural mechanisms. Here again, biological research has provided evidence to support a hypothesis that was first proposed at the psychological level.

INTERIM SUMMARY
- Thoughts that are manifested as visual images contain the kind of visual detail found in perception.
- Mental operations that are performed on images (such as scanning and rotation) are like those carried out on perceptions.
- Imagery is like perception because both are mediated by the same parts of the brain. Brain-scanning experiments indicate that the specific regions involved in an imagery task are the same as those involved in a perceptual task.

CRITICAL THINKING QUESTIONS
1 In this section we discussed visual imagery. By analogy, how would you find evidence for auditory imagery?
2 How could you use brain-scanning experiments to determine whether individual differences in imaging ability are related to neural differences?

THOUGHT IN ACTION: PROBLEM SOLVING

For many people, solving a problem epitomizes thinking itself. When solving a problem, we are striving for a goal but have no ready means of obtaining it. In every problem, there is an initial state (you need a dress or a suit for a party) and a goal state (you have found and bought the clothing you need). Often, we might break down the goal into subgoals (saving enough money and finding the right store) and perhaps divide these subgoals further into smaller subgoals, until we reach subgoals that we have the means to obtain (Anderson, 1990). We can illustrate these points with a simple problem.
Suppose that you need to figure out the combination of an unfamiliar lock. You know only that the combination has four numbers and that whenever you come across a correct number you will hear a click. Your overall goal is to find the combination. Rather than trying four numbers at random, most people divide the overall goal into four subgoals, each corresponding to finding one of the four numbers in the combination. Your first subgoal is to find the first number, and you have a procedure for accomplishing this – turning the lock slowly while listening for a

click. Your second subgoal is to find the second number, for which you can use the same procedure, and so on for the remaining subgoals. In this example, the problem is well-defined: the initial state and the goal state are clearly defined. Many real-world problems, however, are ill-defined. For example, you might think 'I really need to relax a bit this weekend'. Your goal state is rather vague, and doesn't help much in your search for a specific plan. One sensible strategy for solving ill-defined problems is to first make them well-defined. The strategies that people use to solve problems are a major issue in the study of problem solving. A related issue is how people represent a problem mentally, because the representation affects how readily we can solve the problem. We will see that experience with the problem at hand also affects how successful we are at solving it. The following discussion considers all of these issues.

Problem-solving strategies

Much of what we know about strategies for breaking down goals derives from the research of Newell and Simon (1972). Typically, the researchers ask participants to think aloud while trying to solve a difficult problem. They then analyze the participants' verbal responses for clues to the underlying strategy. Specifically, the researchers use the verbal responses as a guide in programming a computer to solve the problem. The output can be compared with aspects of people's performance on the problem – for example, the sequence of moves – to see whether they match. If they match, the computer program offers a theory of a problem-solving strategy. A number of general-purpose strategies have been identified in this way. One strategy is to reduce the difference between our current state in a problem situation and our goal state, in which a solution is obtained. This strategy is called the difference-reduction method. Consider again the combination-lock problem.
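The click-listening procedure described earlier can be sketched as a loop that achieves one subgoal (one number) at a time. This is a minimal illustration, assuming a hypothetical four-number combination and a 40-position dial:

```python
# A sketch of the subgoal procedure described above. The hidden
# combination and the 40-position dial are hypothetical.

SECRET = (7, 24, 3, 31)   # the lock's combination (unknown to the solver)
DIAL = range(40)          # positions on the dial

def clicks(position, number):
    """Simulate the click heard when a correct number is reached."""
    return number == SECRET[position]

def crack():
    combination = []
    for position in range(4):      # one subgoal per number
        for number in DIAL:        # turn slowly, listening for a click
            if clicks(position, number):
                combination.append(number)
                break              # subgoal achieved; move to the next
    return combination

print(crack())  # -> [7, 24, 3, 31]
```

Each pass of the outer loop attains one subgoal, so the solver never has to search the full space of four-number combinations at once.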
Initially, our current state includes no knowledge of any of the numbers, and our goal state includes knowledge of all four numbers. We therefore set up the subgoal of reducing the difference between these two states, and identifying the first number accomplishes this subgoal. Our current state now includes knowledge of the first number. There is still a difference between our current state and our goal state. We can reduce this difference by identifying the second number, and so on for the third and fourth numbers. The key idea behind difference reduction is that we set up subgoals that, when obtained, put us in a state that is closer to our goal. A similar but more sophisticated strategy is means–ends analysis. We compare our current state to the goal state in order to find the most important difference between them, and eliminating this difference becomes our main subgoal. We then search for a means or procedure to achieve this subgoal. If we find such a procedure but discover that something in our current state prevents us from applying it, we introduce a new subgoal of eliminating this obstacle. Many commonsense problem-solving situations involve this strategy. Here is an example:

I want to take my son to nursery school. What's the [most important] difference between what I have and what I want? One of distance. What [procedure] changes distance? My automobile. My automobile won't work. What is needed to make it work? A new battery. What has new batteries? An auto repair shop. (After Newell & Simon, 1972, as cited in Anderson, 1990, p. 232)

Means–ends analysis is more sophisticated than difference reduction because it allows us to take action even if it results in a temporary decrease in similarity between our current state and the goal state. In the example just presented, the auto repair shop may be in the opposite direction from the nursery school.
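The nursery-school dialogue can be sketched as a toy means–ends planner in the spirit of Newell and Simon's General Problem Solver. The operators, preconditions, and effects below are illustrative assumptions, not the actual rules of their program:

```python
# A toy means-ends planner applied to the nursery-school example.
# The operators, preconditions, and effects are illustrative assumptions.

OPERATORS = [
    # (operator, preconditions, effects)
    ("drive son to school", {"car works"}, {"son at school"}),
    ("shop installs new battery", {"car at shop"}, {"car works"}),
    ("tow car to shop", set(), {"car at shop"}),
]

def achieve(goal, state, plan):
    """Make `goal` true, recursively removing obstacles first."""
    if goal in state:
        return True
    for operator, preconditions, effects in OPERATORS:
        if goal in effects:
            # Anything blocking the operator becomes a new subgoal.
            if all(achieve(p, state, plan) for p in preconditions):
                state |= effects
                plan.append(operator)
                return True
    return False

state, plan = {"son at home", "car battery dead"}, []
achieve("son at school", state, plan)
print(plan)
# -> ['tow car to shop', 'shop installs new battery', 'drive son to school']
```

Notice that the planner first satisfies the preconditions of an operator, even though doing so (towing the car to the shop) does not by itself bring the son any closer to school.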
Going to the shop temporarily increases the distance from the goal, yet this step is essential for solving the problem. A strict application of the difference-reduction method would never have you drive away from the school. Another strategy is working backward from the goal, a particularly useful strategy in solving mathematical problems like the one illustrated in Figure 9.11. The problem is this: Given that ABCD is a rectangle, prove that AD and BC are the same length. In working backward, we might proceed as follows:

What could prove that AD and BC are the same length? I could prove this if I could prove that the triangles ACD and BDC are congruent. I can prove that ACD and BDC are congruent if I could prove that two sides and an included angle are equal. (After Anderson, 1990, p. 238)

Figure 9.11 An Illustrative Geometry Problem. Given that ABCD is a rectangle, prove that the line segments AD and BC are the same length.

We reason from the goal to a subgoal (proving that the triangles are congruent), from that subgoal to another subgoal (proving that the sides and angle are equal), and so on, until we reach a subgoal that we have a ready means of obtaining. The three strategies that we have considered – difference reduction, means–ends analysis, and working backward – are extremely general and can be applied to virtually any problem. These problem-solving strategies, which are often referred to as weak methods, do not rest on any specific knowledge and may even be innate. People are especially likely to rely on these weak methods when they are first learning about an area and are working on problems whose content is unfamiliar. When people gain expertise in an area, they develop more powerful domain-specific procedures (and representations), which come to dominate the weak methods (Anderson, 1987). The steps in problem solving by weak methods are listed in Table 9.3.

Table 9.3 Steps in problem solving
1. Represent the problem as a proposition or in visual form.
2. Determine the goal.
3. Break down the goal into subgoals.
4. Select a problem-solving strategy and apply it to achieve each subgoal.

Representing the problem

Being able to solve a problem depends not only on our strategy for breaking it down but also on how we represent it. Sometimes a propositional representation works best, and at other times a visual representation or image is more effective. Consider the following problem:

One morning, exactly at sunrise, a monk began to climb a mountain. A narrow path, a foot or two wide, spiraled around the mountain to a temple at the summit. The monk ascended at varying rates, stopping many times along the way to rest. He reached the temple shortly before sunset. After several days at the temple, he began his journey back along the same path, starting at sunrise and again walking at variable speeds with many pauses along the way.
His average speed descending was, of course, greater than his average climbing speed. Prove that there exists a particular spot along the path that the monk will occupy on both trips at precisely the same time of day. (Adams, 1974, p. 4)

In trying to solve this problem, many people start with a propositional representation. They may even try to write out a set of equations. The problem is far easier to solve when it is represented visually. All you need do is visualize the upward journey of the monk superimposed on the downward journey. Imagine one monk starting at the bottom and the other at the top. No matter what their speed, at some time and at some point along the path the two monks will meet. Thus, there must be a spot along the path that the monk occupied on both trips at precisely the same time of day. (Note that the problem did not ask you where the spot was.) Some problems can be readily solved by manipulating either propositions or images. Look at this simple problem: 'Ed runs faster than David but slower than Dan; who's the slowest of the three men?' To solve this problem in terms of propositions, note that we can represent the first part of the problem as a proposition that has 'David' as subject and 'is slower than Ed' as predicate. We can represent the second part of the problem as a proposition with 'Ed' as subject and 'is slower than Dan' as predicate. We can then deduce that David is slower than Dan, which makes David the slowest. To solve the problem by means of imagery, we might imagine the three men's speeds as points on a line, like this:

    David     Ed     Dan
    (increasing speed ->)

Then we can simply 'read' the answer directly from the image. Apparently some people prefer to represent such problems as propositions, and others tend to represent them visually (Johnson-Laird, 1985).
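The propositional route can be sketched in a few lines of Python; the encoding of 'is slower than' pairs is our own illustration of the two propositions:

```python
# The two propositions, each encoded as a ('slower', 'faster') pair.
# 'Ed runs faster than David but slower than Dan' gives:
slower_than = [("David", "Ed"), ("Ed", "Dan")]

def slowest(pairs):
    """Deduce the slowest runner: he never appears as the faster member."""
    faster = {fast for _, fast in pairs}
    everyone = {person for pair in pairs for person in pair}
    return (everyone - faster).pop()

print(slowest(slower_than))  # -> David
```

The deduction here mirrors the transitive inference in the text: anyone who is never the faster member of a pair must be slower than everyone else.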
In addition to the issue of propositions versus images, there are questions about what is represented. Often we have difficulty with a problem because we fail to include something important in our representation or because we include something in our representation that is not an important part of the problem. Remember that we often transform an ill-defined problem into a well-defined one. If we make the wrong assumptions in doing so, our mental set can create an obstacle on the path to the solution. We can illustrate this point with an experiment. One group of participants was given the problem of supporting a candle on a door, using only the materials depicted in Figure 9.12. The solution was to tack the box to the door and use the box as a platform for the candle. Most participants had difficulty with the problem, presumably because they represented the box as a container (its usual function), not as a platform. This difficulty is often referred to as functional fixedness. Another group of participants was given the identical problem except that

the contents of the box were removed. These participants had more success in solving the problem, presumably because they were less likely to include the box's container property in their representation and more likely to include its supporter property. It seems that arriving at a useful representation of a problem is half the solution to the problem.

Figure 9.12 Materials for the Candle Problem. Given the materials depicted, how can you support a candle on a door? The solution is shown on p354. (After Glucksberg & Weisberg, 1966) © SUSAN HOLTZ

We have seen the importance of restructuring a problem: solving a problem is often the result of mentally representing it in a certain way. Once we arrive at the correct mental set ('I can use a box as a supporter') the solution isn't far away. Another way to solve a problem by thinking about it differently is to find an appropriate analogy. If two problems share the same underlying structure, solving one problem means that you can solve the other by relying on the analogy. In a classic experiment, Gick and Holyoak (1983) showed that subjects were able to solve a complicated 'radiation problem' that way. In this problem, a laser beam should be used to burn away a tumor. The problem is that the laser beam is so strong that it will also damage the intervening healthy tissue. Subjects were able to find the solution (to use multiple beams from different directions) if they saw the analogy to a story they were told about small groups of soldiers storming a fortress (which was surrounded by mines) from multiple different directions. The researchers also discovered that it isn't easy to get subjects to compare the underlying structure of two problems. We often overlook an analogy because we tend to focus on the superficial features of a problem rather than on the underlying structure.
As we will see next, the amount of experience we have in a particular domain influences how we represent a problem.

Experts versus novices

In a given content area (physics, geography, or chess, for instance), experts solve problems qualitatively differently than novices do. These differences are due to differences in the representations and strategies used by experts and novices. Experts have many more specific representations stored in memory that they can bring to bear on a problem. A master chess player, for example, can look for five seconds at a configuration of over 20 pieces and reproduce it perfectly; a novice in this situation can reproduce only the usual 7 ± 2 items (see Chapter 8). These discoveries were first made by de Groot (1965, 1966), who wondered what makes expert chess players choose better moves than novices. He found that chess players are not particularly more intelligent in other domains. However, their representation of chess positions is superior and allows them to remember the individual positions. Through years of practice they have developed representations of many possible configurations of chess pieces that permit them to encode a complex configuration in just a few chunks. Further, these representations are presumably what underlies their superior chess game. A master may have stored as many as 50,000 configurations and has learned what to do when each one arises. Master chess players can essentially 'see' possible moves and do not have to think them out the way novices do (Chase & Simon, 1973b; Simon & Gilmartin, 1973).

Experts solve problems in qualitatively different ways than novices do. For example, chess grandmasters, such as Viswanathan Anand, have many more specific representations stored in memory that they can bring to bear on a problem. RONALDO SCHEMIDT/AFP/GETTY IMAGES

Even when they are confronted with a novel problem, experts represent it differently than novices do. This point is illustrated by studies of problem solving in physics. An expert (say, a physics professor) represents a problem in terms of the physical principle that is needed for solution: for example, 'This is one of those every-action-has-an-equal-and-opposite-reaction problems.' In contrast, a novice (say, a student taking a first course in physics) tends to represent the same problem in terms of its surface features – for example, 'This is one of those inclined-plane problems' (Chi & Feltovich, 1981). The tendency to focus on the superficial features of a problem also shows up when novices solve a problem by using an analogy. When we do not know much about a particular domain and have to solve a problem in it, frequently we think of superficially similar problems that we have encountered to use as analogies. In one illustrative study of this phenomenon (Ross, 1984), people had to learn new ways to edit text on a computer. During the learning phase, superficial similarities often reminded people of an earlier text edit, which they used to figure out how to do the current edit. For example, people learned two different methods for inserting a word into text, with one method illustrated on a shopping list and the other method illustrated on a restaurant review. Later, they had to insert a word in either another shopping list or restaurant review. People were more likely to use the method they had learned with the similar text (given a shopping list, they tended to insert a word by using the method originally illustrated with a shopping list). Early in learning, we are guided by superficial similarities among problems. Only when we have had training in a given domain are we able to focus on the structural features of a problem and make effective use of analogies (Novick, 1988). Experts and novices also differ in the strategies they employ.
In studies of physics problem solving, experts generally try to formulate a plan for attacking the problem before generating equations, whereas novices typically start writing equations with no general plan in mind (Larkin, McDermott, Simon, & Simon, 1980). Another difference is that experts tend to reason from the givens of a problem toward a solution, but novices tend to work in the reverse direction (the working-backward strategy). This difference in the direction of reasoning has also been found in studies of how physicians solve problems. More expert physicians tend to reason in a forward direction – from symptom to possible disease – but the less expert tend to reason in a backward direction – from possible disease to symptom (Patel & Groen, 1986). The characteristics of expertise just discussed – a multitude of representations, representations based on principles, planning before acting, and working forward – make up some of the domain-specific procedures that come to dominate the weak methods of problem solving discussed earlier.

Automaticity

With experience comes another advantage: automaticity. Automatic processes can be carried out without conscious control, as if on automatic pilot. Think back to when you first learned to ride a bike or drive a car: the task required all your attention. With more practice it became easier to focus your attention on the traffic, and the cycling or driving itself seemed to go on effortlessly. Many of our thinking processes also become automatic with experience. Reading is something that most of us do without paying special attention to it: you see a word and automatically read it, very much unlike when you first learned how to read. The Stroop effect (named after Stroop, who described it in 1935) demonstrates the automaticity of the reading process.
Stroop presented subjects with lists of non-words (such as suwg) and real words (such as blue) and asked his subjects to name the color that the different items on the lists were printed in. Note that he did not ask them to read the words. Stroop was able to show that his subjects nevertheless read the words automatically, because in one condition he had printed the color words in a non-congruent color (see Figure 9.13). For example, the word blue would appear in red ink. This slowed down the color-naming response significantly, compared to the other conditions (the list of non-words, or the list of color words printed in congruent colors). This interference of the automatic reading process with the color-naming task shows that reading is something we do without consciously attending to it (Stroop, 1935). Throughout this chapter, we have seen that people often use shortcuts in reasoning and solving problems. Which problem-solving strategy or reasoning heuristic is used depends in part on our experience with the problem at hand. Some problems are solved by relying on rules and on conscious and effortful thought. Other problems are solved more automatically. Some theorists argue for a dual-process theory of human reasoning, and have named the 'automatic' processes intuitive, in contrast to

Figure 9.13 An example of the Stroop Effect. Three lists: (a) non-words (wopr, swrg, zcidb, zyp), and (b, c) the color words blue, green, yellow, and red printed in congruent or non-congruent ink colors.
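The three list conditions in Stroop's design can be sketched as follows; the non-word items and the random ink assignments are illustrative, not Stroop's original stimuli:

```python
# A sketch of the three list conditions in Stroop's design. The non-word
# items and the random ink assignments are illustrative.
import random

COLORS = ["blue", "green", "yellow", "red"]
NONWORDS = ["wopr", "swrg", "zcidb", "zyp"]

def incongruent_ink(word):
    """Choose an ink color different from the color the word names."""
    return random.choice([c for c in COLORS if c != word])

# Each item is a (printed text, ink color) pair.
nonword_list = [(w, random.choice(COLORS)) for w in NONWORDS]
congruent_list = [(w, w) for w in COLORS]
incongruent_list = [(w, incongruent_ink(w)) for w in COLORS]

# Color naming is slowest on incongruent_list, where the automatically
# read word conflicts with the ink color to be named.
```

Only in the incongruent list does the automatically read word supply a response that competes with the ink color to be named, which is what produces the interference.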

SEEING BOTH SIDES

DO PEOPLE WHO SPEAK DIFFERENT LANGUAGES THINK DIFFERENTLY?

The role of language in mind
Stephen C. Levinson and Asifa Majid, Max-Planck-Institute for Psycholinguistics, Nijmegen

Imagine you were born among the Pirahã, a remote tribe in the Amazon. You would speak a language with, it seems, no words for color, no words for uncles or cousins, no words for numbers, no easy way to talk about the future or to make complex sentences by embedding (Everett, 2005). What, then, would be the character of your thoughts? Or suppose you parachute into the tribe and learn to speak their language: do you think you could easily tell them about your world? Armchair thought-experiments of this kind used to intrigue linguists, laymen, and psychologists, such as Sapir, Whorf and Carroll. Then with the rise of the cognitive science movement in the 1960s they became suddenly unfashionable, because human cognition was viewed as a uniform processing machine, with a structure and content largely built into our genes. It followed that the Pirahã, unbeknownst to themselves, actually had the concepts 'pink', 'cousin', '17', 'next year', even 'algorithm' and 'symphony' – they simply didn't have the words for them (Fodor, 1975). There was a universal language of thought, 'mentalese', for which different languages were merely an input-output system (Pinker, 1994). This view is now losing ascendancy, for a number of reasons. One is the rise of alternative computational metaphors (parallel distributed processing, neural networks) that emphasize learning from experience; another is the phenomenal rise of neurocognition and the beginnings of neurogenetics, both of which reveal the importance of human differences. Another reason why interest is returning to the role of language in cognition is empirical. It turns out, for example, that the Pirahã can't think '17'; they really don't have elementary number concepts (Gordon, 2004).
No experiments have been done on their color discrimination, but in other cultures we find a systematic relation between the kinds of color words and color concepts. For example, speakers of a language like English with a 'blue' vs. 'green' distinction exaggerate the actual distance (in JNDs, or just noticeable differences) between blue and green, while speakers of a language (like Tarahumara) with a 'grue' term covering both green and blue do not (Kay & Kempton, 1984; Davidoff et al., 1999). Recently Kay and colleagues have shown that this effect is due to the right visual field, which projects to the left brain hemisphere where language is processed (Gilbert et al., 2006), and that toddlers switch their categorical perception for color over to the left hemisphere as they learn color terms (Franklin et al., 2008a, b). Less surprisingly, a native language also changes our audition: we become blind (or rather deaf) in early infancy to sounds not in our language (Kuhl, 2000). Thus language alters our very perception of the world around us. What about more abstract domains like space and time? It turns out that the way we talk about time in a language makes a difference to how we think about it. In Chinese, a vertical spatial metaphor is often used, so that earlier events are 'up' and later ones 'down', whereas in English we prefer to think of the future 'ahead' and the past 'behind'. Chinese speakers, but not English speakers, are faster to respond to a time question when they have previously seen a vertical spatial prime (Boroditsky, 2001). This suggests that for thinking about abstract domains like time we borrow the language we use for the more concrete spatial domain, and so different spatial language makes a difference to temporal thinking. Spatial language itself differs radically across languages. In some languages there are no terms for 'left' and 'right' (as in 'the knife is left of the fork').
Instead one has to use notions like 'north' and 'south' even for things on the table (Majid et al., 2004)! Systematic experimentation in over a dozen languages and cultures shows how powerful these differences are (Levinson, 2003). Speakers of north/south vs. left/right languages remember and reason in ways consistent with their spatial strategies in language, even when language is not required. An interesting question is which system is most natural. Experiments with apes and pre-linguistic infants suggest that the north/south one is core, and the left/right emphasis comes from our own culture and language (Haun et al., 2006). So next time you pass the salt, think about how you might be thinking about it differently had you been born in another culture! Our senses, and arguably our more abstract thoughts too, may be set up innately to deliver veridical information and inference, but rapidly in infancy we imbibe the language and categories of our culture and use these to make the discriminations and inferences that the culture has found useful through historical adaptation to its environment. As psychology enters an era of preoccupation with individual differences, we can be sure that many more ways in which language and culture influence cognition (and, no doubt, constraints on those effects) will be discovered.

SEEING BOTH SIDES

DO PEOPLE WHO SPEAK DIFFERENT LANGUAGES THINK DIFFERENTLY?

How is language related to thought?
Anna Papafragou, University of Delaware

How is language related to thought? Do people who speak different languages think differently? According to one theory, language offers the concepts and mechanisms for representing and making sense of our experience, thereby radically shaping the way we think. This strong view, famously associated with the writings of Benjamin Whorf (Whorf, 1956), is certainly wrong. Firstly, people possess many concepts which their language does not directly encode. For instance, the Mundurukú, an Amazonian indigenous group, can recognize squares and trapezoids even though their language has no rich geometric terms (Dehaene et al., 2006). Similarly, members of the Pirahã community in Brazil, whose language lacks number words, can nevertheless perform numerical computations involving large sets (even though they have trouble retaining this information in memory; Frank et al., 2008). Secondly, there are often broad similarities in the ways different languages carve up domains of experience. For instance, crucial properties of color vocabularies across languages appear to be shaped by universal perceptual constraints (Regier et al., 2007). Also, many languages seem to label basic tastes by distinct words (e.g., sweet, salt, sour and bitter; Majid & Levinson, 2008). The presence of constraints on cross-linguistic variation suggests that language categories are shaped by cognitive biases shared across humans. A weaker version of the Whorfian view maintains that, even though language does not completely determine thought, it still affects people's habitual thought patterns by promoting the salience of some categories and downgrading others. One line of studies set out to examine how English and Japanese speakers draw the conceptual distinction between objects and substances.
English distinguishes between count nouns (a pyramid) and mass nouns (cork), while Japanese does not (all nouns behave like mass nouns). When taught names for novel simple exemplars (e.g., a cork pyramid), which could in principle be considered either objects or substances, English speakers predominantly took the name to refer to the object (‘pyramid’) but Japanese speakers were at chance between the object or the substance (‘cork’) construal (Imai & Gentner, 1997). These findings have been interpreted as evidence that the linguistic count/mass distinction affects how people draw the conceptual object/substance distinction (at least for indeterminate cases). Another set of studies focused on speakers of Tseltal Mayan living in Mexico, whose language lacks left/right terms for giving directions and locating things in the environment. Tseltal speakers cannot say things such as ‘the cup is to my left’; instead they use absolute co-ordinates (e.g., ‘north’ or ‘south’) to encode space. In a series of experiments, Tseltal speakers were shown to remember spatial scenes in terms of absolute coordinates rather than body-centered (left/right) spatial concepts; speakers of Dutch, a language which, like English, possesses left/right terms, showed the opposite preference (Levinson, 2003). The precise interpretation of these findings is greatly debated. Firstly, studies such as the above simply show that linguistic behavior and cognitive preferences can co-vary, not that language causes cognition to differ across various linguistic populations. Furthermore, some of the reported cognitive differences may have been due to ambiguities in the way instructions to study participants were phrased. When Japanese and English speakers were asked to rate, on a scale from 1 to 7, how likely they were to classify a novel specimen as a kind of object or a kind of substance, their ratings converged (Li, et al., in press). 
Similarly, when Tseltal speakers were given implicit cues about how to solve spatial tasks, they were able to use left/right reasoning; in fact, on some tasks they were more accurate when using left/right concepts than when using absolute coordinates, contrary to what one might expect on the basis of how Tseltal encodes space (Li et al., 2005). These data show that human cognitive mechanisms are flexible rather than rigidly constrained by linguistic terminology.

Other studies have confirmed that cross-linguistic differences do not necessarily lead to cognitive differences. For instance, memory and categorization of motion events, such as an airplane flying over a house, seem to be independent of the way languages encode motion (Papafragou et al., 2002). Relatedly, similarity judgments for containers such as jars, bottles, and cups converge in speakers of different languages, even though words for such containers vary cross-linguistically (Malt et al., 1999). In a striking recent demonstration using eye-tracking methods, English and Greek speakers were found to attend to different parts of an event while getting ready to describe it verbally; however, when preparing to memorize the event for a later memory task, speakers of the two languages allocated their attention identically, presumably because they relied on processes of event perception that are independent of language (Papafragou et al., 2008).

This research suggests that language can be usefully thought of as an additional route for encoding experience. Rather than permanently reshaping the processes supporting perception and cognition, language offers an alternative, often optionally recruited system for encoding, organizing, and tracking experience. The precise interplay between linguistic and cognitive functions will continue to be a topic of intense experimentation and theorizing for years to come.
THOUGHT IN ACTION: PROBLEM SOLVING

For more Cengage Learning textbooks, visit www.cengagebrain.co.uk

the rule-based processes (Kahneman, 2003). Social psychologists are especially interested in understanding how we arrive at some of our intuitive knowledge about other human beings. In Chapter 17 you will see that variations on the Stroop task are still used today by social psychologists to study automaticity in social perception.

[Figure: The solution to the candle problem. © Susan Holtz]

INTERIM SUMMARY
● Problem solving requires breaking down a goal into subgoals that can be attained more easily.
● Strategies for breaking a goal into subgoals include reducing differences between the current state and the goal state, means–ends analysis (eliminating the most important differences between the current and goal states), and working backward.
● Some problems are easier to solve by using a visual representation, others by using a propositional representation; numerous problems can be solved equally well by either.
● Expert problem solvers differ from novices in four ways: they have more representations to bring to bear on the problem, they represent novel problems in terms of solution principles rather than surface features, they form a plan before acting, and they tend to reason forward rather than backward.
● Thought processes that do not require effortful attention occur automatically and without conscious control.

CRITICAL THINKING QUESTIONS
1 Think of some activity (an academic subject, game, sport, or hobby) in which you have gained some expertise. How would you characterize the changes that you went through in improving your performance? How do these changes line up with those described in the chapter?
2 How can the findings about expertise in problem solving be used in teaching people professional skills, such as teaching medical students a new specialty?

CHAPTER SUMMARY
Language, our primary means for communicating thoughts, is structured at three levels. At the highest level are sentence units, including phrases that can be related to thoughts or propositions. The next level is words and parts of words that carry meaning. The lowest level contains speech sounds. The phrases of a sentence are built from words (and parts of words), whereas the words themselves are constructed from speech sounds.

A phoneme is a category of speech sounds. Every language has its own set of phonemes and rules for combining them into words. A morpheme is the smallest unit that carries meaning. Most morphemes are words; others are prefixes and suffixes that are added to words. A language also has syntactic rules for combining words into phrases and phrases into sentences. Understanding a sentence requires not only analyzing phonemes, morphemes, and phrases

but also using context and understanding the speaker's intention. The areas of the brain that are responsible for language lie in the left hemisphere and include Broca's area (frontal cortex) and Wernicke's area (temporal cortex).

Language development occurs at three different levels. Infants come into the world preprogrammed to learn phonemes, but they need several years to learn the rules for combining them. When children begin to speak, they learn words that name familiar concepts. In learning to produce sentences, they begin with one-word utterances, progress to two-word telegraphic speech, and then elaborate their noun and verb phrases. Children learn language at least partly by testing hypotheses. Children's hypotheses appear to be guided by a small set of operating principles, which call their attention to critical characteristics of utterances, such as word endings. Innate factors also play a role in language acquisition. Our innate knowledge of language seems to be very rich and detailed, as suggested by the fact that all children seem to go through the same stages in acquiring a language. Like other innate behaviors, some language abilities are learned only during a critical period. It is a matter of controversy whether our innate capacity to learn language is unique to our species. Many studies suggest that chimpanzees and gorillas can learn signs that are equivalent to our words, but they have difficulty learning to combine these signs in the systematic (or syntactic) way in which humans combine words.

Thought occurs in different modes, including propositional and imaginal. The basic component of a proposition is a concept, the set of properties we associate with a class. Concepts provide cognitive economy by allowing us to code many different objects as instances of the same concept, and they also permit us to predict information that is not readily perceptible.
A concept includes both a prototype (properties that describe the best examples) and a core (properties that are most essential for being a member of the concept). Core properties play a major role in well-defined concepts like 'grandmother'; prototype properties dominate in fuzzy concepts like 'bird'. Most natural concepts are fuzzy. Concepts are sometimes organized into hierarchies; in such cases, one level of the hierarchy is the basic or preferred level for categorization.

Children often learn a concept by following an exemplar strategy. With this technique, a novel item is classified as an instance of a concept if it is sufficiently similar to a known exemplar of the concept. As children grow older, they use hypothesis testing as another strategy for learning concepts. Different categorization processes have been shown to involve different brain mechanisms.

In reasoning, we organize our propositions into an argument. Some arguments are deductively valid: It is impossible for the conclusion of the argument to be false if its premises are true. When evaluating a deductive argument, we sometimes try to prove that the conclusion follows from the premises by using logical rules. Other times, however, we use heuristics – rules of thumb – that operate on the content of propositions rather than on their logical form. Some arguments are inductively strong: It is improbable for the conclusion to be false if the premises are true. In generating and evaluating such arguments, we often ignore some of the principles of probability theory and rely instead on heuristics that focus on similarity or causality.

Not all thoughts are expressed in propositions; some are manifested as visual images. Such images contain the kind of visual detail found in perceptions. The mental operations performed on images (such as scanning and rotation) are like the operations carried out on perceptions.
Imagery seems to be like perception because it is mediated by the same parts of the brain. Brain damage that causes the perceptual problem of visual neglect also causes comparable problems in imagery. Experiments using brain-scanning techniques indicate that the specific brain regions involved in an imagery task are the same as those involved in a perceptual task.

Problem solving requires breaking down a goal into subgoals that are easier to attain. Strategies for doing this include reducing differences between the current state and the goal state, means–ends analysis (eliminating the most important differences between the current and goal states), and working backward. Some problems are easier to solve by using a propositional representation; for other problems, a visual representation works best. Expert problem solvers differ from novices in four basic ways: they have more representations to bring to bear on the problem, they represent novel problems in terms of solution principles rather than surface features, they form a plan before acting, and they tend to reason forward rather than backward.

CORE CONCEPTS
production of language · comprehension of language · language · phoneme · morpheme · grammatical morpheme · meaning · sentence unit · proposition · noun phrase · verb phrase · syntax · Broca's aphasia · Wernicke's aphasia · overextend · anomic aphasics · propositional thought · imaginal thought · concept · categorization · prototype · core · basic level · deductive validity · syllogism · belief bias · pragmatic rules · mental model · inductively strong · base-rate rule · conjunction rule · heuristic · similarity heuristic · causality heuristic · availability heuristic · representativeness heuristic · confirmation bias · imaginal mode · mental rotation · grain size · visual neglect · difference-reduction method · means–ends analysis · working backward · mental set · functional fixedness · restructuring · automaticity · Stroop effect

WEB RESOURCES
http://www.atkinsonhilgard.com/ Take a quiz, try the activities and exercises, and explore web links.
http://www.cwu.edu/~cwuchci/ Learn more about primates and their language abilities at the website for the Chimpanzee and Human Communication Institute.
http://www.ilovelanguages.com/ Everything you ever wanted to know about languages.