17 Social Influence
People sometimes do the inexplicable – or what seems inexplicable. Our newspapers and history books provide plenty of examples.

From 1933 to 1945, millions of innocent people – mostly Jews – were forced to live in concentration camps in Nazi Germany. Only after World War II did the world community realize that these camps were in fact high-efficiency death ‘factories’ that systematically slaughtered more than 8 million people. How could this genocide happen? What kind of people could design and operate these death factories? The actions of the Nazi regime seem inexplicable.

On November 18, 1978, U.S. Congressman Leo Ryan was concluding his visit to Jonestown, a settlement of the People’s Temple (formerly based in San Francisco) in Guyana, South America. Ryan was investigating Jonestown because reports had come back to the U.S. that people were being held there against their will. As Ryan boarded his plane to leave Guyana, he and four others were shot and killed by Temple gunmen. Meanwhile, Jim Jones, the leader of the People’s Temple, gathered the nearly 1,000 residents of Jonestown and asked them to kill themselves by drinking strawberry-flavored poison. They complied. How could this happen? What kind of people would kill themselves at another person’s request? The actions of the members of the People’s Temple seem inexplicable.

On September 11, 2001, four U.S. planes were hijacked. Two crashed into New York City’s twin World Trade Center towers, one crashed into U.S. military headquarters at the Pentagon, outside Washington, DC, and the fourth crashed in Pennsylvania, missing its intended target. In addition to the hundreds of people killed on board the airplanes and in the Pentagon, nearly 3,000 people remained in the World Trade Center towers when they collapsed from the impact. How could this happen? What kind of people could take so many innocent lives, as well as their own? The actions of these suicide hijackers seem inexplicable.

Horrific world events often seem completely inexplicable. How could people do these things to themselves and to others? Social psychologists argue that answers that appeal only to personality or character traits overlook the powerful influence that social situations can have in shaping human behavior.

For more Cengage Learning textbooks, visit www.cengagebrain.co.uk

CHAPTER OUTLINE
THE PRESENCE OF OTHERS
Social facilitation and social inhibition
Deindividuation
Bystander effects
COMPLIANCE AND OBEDIENCE
Conformity to a majority
Minority influence
CUTTING EDGE RESEARCH: PLURALISTIC IGNORANCE AND BINGE DRINKING AT UNIVERSITIES
Obedience to authority
INTERNALIZATION
Self-justification
Reference groups and identification
GROUP INTERACTIONS
Institutional norms
Group decision making
SEEING BOTH SIDES: ARE THE EFFECTS OF AFFIRMATIVE ACTION POSITIVE OR NEGATIVE?
RECAP: SOCIAL PSYCHOLOGICAL VIEWS OF THE SEEMINGLY INEXPLICABLE
In trying to make sense of these seemingly inexplicable horrors of humanity, our first reaction is often to pin evil (or crazy) actions on evil (or crazy) individuals. ‘The suicide hijackers were evil terrorists.’ ‘Jim Jones’s followers were crazy.’ ‘The Nazis were evil racists.’ These sorts of explanations provide some comfort. They distance us, the ‘good’ and ‘normal’ people, from those ‘bad’ and ‘crazy’ people. To be sure, there is a grain of truth within explanations that attribute evil actions to evil characters. Osama bin Laden, Jim Jones, and Adolf Hitler, for instance, might well be classified as evil leaders. Even so, social psychologists have argued that explanations that attribute the full cause of an action to someone’s personality are often wrong – so often wrong that social psychologists identify these explanations as instances of the fundamental attribution error. The fundamental attribution error refers to the tendency to explain other people’s actions by overestimating the influence of personality or character and underestimating the influence of situations or circumstances. Moreover, we make this fundamental error not only when trying to make sense of unfathomable horrors but also when making sense of the ordinary, everyday actions of our friends, classmates, and others.

Social psychology is the scientific study of the ways that people’s behavior and mental processes are shaped by the real or imagined presence of others. Social psychologists begin with the basic observation that human behavior is a function of both the person and the situation. Each individual brings a unique set of personal attributes to a situation, leading different people to act in different ways in the same situation. But each situation also brings a unique set of forces to bear on an individual, leading him or her to act in different ways in different situations.
Research has repeatedly shown that situations are more powerful determinants of behavior than our intuitions lead us to believe. Thus, one of the foremost contributions of social psychology is an understanding of how powerful situations shape people’s behavior and mental processes. Our two-chapter discussion of social psychology begins with this focus on the power of situations. Yet people do not simply react to the objective features of situations but rather to their subjective interpretations of them. As we learned in Chapter 11 on emotions, the person who interprets an offensive act as the product of hostility reacts differently than the person who construes the same act as the product of mental illness. Accordingly, Chapter 18 examines the power that subjective interpretations and people’s modes of thinking have in shaping their thoughts, feelings, and social behavior, a topic known as social cognition. We begin, however, with a focus on social influence and the power of situations themselves.

THE PRESENCE OF OTHERS

Social facilitation and social inhibition

In 1898, while examining the speed records of bicycle racers, psychologist Norman Triplett noticed that many cyclists achieved better times when they raced against each other than when they raced against the clock. This led him to perform one of social psychology’s earliest laboratory experiments. He instructed children to turn a fishing reel as fast as possible for a fixed period. Sometimes two children worked at the same time in the same room, each with his or her own reel. At other times they worked alone. Triplett reported that many children worked faster when someone else doing the same task was present (a situation termed coaction) than when they worked alone. In the more than 100 years since Triplett conducted his experiment, many other studies have demonstrated the facilitating effects of coaction both in humans and in animals.
For example, worker ants in groups dig more than three times as much sand per ant as when working alone (Chen, 1937), many animals eat more food if other members of their species are present (Platt, Yaksh, & Darby, 1967), and college students complete more multiplication problems in coaction than when alone (Allport, 1920, 1924). Soon after Triplett’s experiment on coaction, psychologists discovered that the presence of a passive spectator – an audience rather than a coactor – also facilitates performance. For example, the presence of an audience had the same facilitating effect on students’ multiplication performance as the presence of coactors in the earlier study (Dashiell, 1930). The term social facilitation is used to refer to the boosting effects of coactors and audiences on performance.

But this simple case of social influence turned out to be more complicated than social psychologists first thought. For example, researchers found that people made more errors on the multiplication problems when in coaction or in the presence of an audience than when they performed alone (Dashiell, 1930). In other words, accuracy decreased even though speed increased. In many other studies, both the speed and accuracy of performance decreased when others were present. The term social inhibition was introduced to refer to the sometimes derailing effects of coactors and audiences on performance.

How can we predict whether the presence of others – either coacting or observing – will improve our performance or impair it? The answer to this question first emerged in the mid-1960s (Zajonc, 1965) and was solidified two decades later in a meta-analysis of 241 studies (Bond & Titus, 1983). The basic finding is that the presence of
coactors and audiences improves the speed and accuracy of performance on simple or well-learned tasks but impairs the speed and accuracy of performance on complex or poorly learned tasks. So social facilitation holds for simple tasks, and social inhibition holds for complex tasks.

In 1898 psychologist Norman Triplett noticed that cyclists achieved better times when they raced against other cyclists than when they raced against the clock. This led him to study the phenomenon of social facilitation.

Despite this useful generalization, this pattern of results still requires explanation. Why does it occur? Social psychologists have offered two competing explanations. The first explanation, offered by Robert Zajonc (1965), appeals to drive theories of motivation (see Chapter 10). These suggest that high levels of drive or arousal tend to energize the dominant responses of an organism. If the mere presence of another member of the species raises the general arousal or drive level of an organism, the dominant response will be facilitated. For simple or well-learned behaviors, the dominant response is most likely to be the correct response, and performance should be facilitated. For complex behaviors or behaviors that are just being learned, the dominant or most probable response is likely to be incorrect. Consider the multiplication problems discussed earlier. There are many wrong responses but only one correct one. Accurate performance on this complex task should therefore be inhibited. A number of experiments have confirmed these predictions. For example, people learn simple mazes or easy word lists more quickly but learn complex mazes or difficult word lists more slowly when an audience is present than when it is not (Cottrell, Rittle, & Wack, 1967; Hunt & Hillery, 1973).
A study using cockroaches found that, when attempting to escape light, roaches run an easy route more quickly but a difficult route more slowly if other roaches watch from the sidelines (or run with them) than if they run without other roaches present (Zajonc, Heingartner, & Herman, 1969). More recent experiments conclude that physical presence is not required. Electronic surveillance is sufficient to facilitate the dominant response (Feinberg & Aiello, 2006), as is the presence of a lifelike virtual human, presented by computer screen, who observes via artificial intelligence (Park & Catrambone, 2007).

The second explanation for social facilitation and social inhibition appeals to attention factors (Baron, 1986; Huguet, Galvaing, Monteil, & Dumas, 1999). The core idea is that the presence of others is often distracting, which can produce a mental overload that results in a narrowed focus of attention. This view can also explain the different effects for simple and complex tasks: Social facilitation should occur when tasks are simple and require that we focus on only a small number of central cues, and social inhibition should occur when tasks are complex and require our attention to a wide range of cues.

Which explanation is correct? In most circumstances, the two explanations make the same predictions and so cannot be tested against one another. A recent study solved this problem, however, by locating a task for which the two views offer different predictions (Huguet et al., 1999). The Stroop task (MacLeod, 1991; Stroop, 1935) is a complex, poorly learned task that involves only a few key stimuli. In this task, a person is asked to identify the ink color in which words or symbols (like ‘++++’) are printed. See Figure 17.1 for an example.
People perform this task relatively quickly for symbols but are particularly slowed for words that are incongruent (like the word red printed in yellow ink). This phenomenon, called Stroop interference, results because word reading is such a dominant and automatic response among skilled readers that it is difficult to follow the instruction to ignore the printed word and name the word’s ink color.

Figure 17.1 Items From a Stroop Task. Say aloud the color of the inks you see in the top row (symbols such as ++++ and XXXX). Now do the same for the bottom row (the words RED, YELLOW, and BLUE). Notice how much slower you were to name the ink colors for words versus symbols. This is called Stroop interference. Studies show that people perform better on the Stroop task when in the presence of others, a finding that supports the attention explanation for social facilitation.

Because the Stroop task is complex and the automatic response is to name the word (not the ink color), the dominant-response view predicts that social presence should derail performance, producing social inhibition. At the same time, because the Stroop task involves only two key stimuli – the word and the ink color – and a narrowed focus of attention can reduce attention to the irrelevant information (the word), the attention view, by contrast, predicts that social presence should improve performance, producing social facilitation. The data from several experiments that have manipulated the presence or absence of an audience or coactors during Stroop performance provide clear support for the attention view and fail to support the dominant-response view: People perform better on the Stroop task when in the presence of others (Huguet et al., 1999).

These and other studies also identify two key limits to social facilitation effects. First, the mere presence of another person does not produce much social facilitation. If the audience member is reading or blindfolded, for example, social facilitation is greatly reduced (Cottrell, Wack, Sekerak, & Rittle, 1968; Huguet et al., 1999). Second, competition and social comparison with coactors seem to be critical. If coactors perform much worse than participants themselves – that is, if they are no competition – social facilitation is also greatly reduced (Dashiell, 1930; Huguet et al., 1999).

The lineage of studies on social facilitation and social inhibition begins to convey the power of situations. You might have thought that your physical performance (like shooting free throws in basketball) or academic performance (like taking a calculus exam) merely reflected your ability.
But the studies described here suggest that whether the performance situation includes real or even virtual others, and what those others are doing (evaluating or providing competition), also critically determine your level of performance. Yet whether the presence of others helps or hurts your performance depends on whether the task at hand is simple or complex for you. A pro basketball player and a student who has mastered the basics of calculus are likely to do better when the situation involves others. For them, the task becomes simple because it is well learned. For a novice basketball player and a student who neglects studying, the situation does not bode so well.

Audience effects on performance vary depending on whether the task is easy or difficult for the performer and on how much the person feels that he or she is being evaluated.

Deindividuation

At about the same time that Triplett was performing his experiment on social facilitation, another observer of human behavior, Gustave LeBon, was also studying the effects of coaction. In The Crowd (1895), LeBon complained that ‘the crowd is always intellectually inferior to the isolated individual’. He believed that the aggressive and immoral behaviors shown by lynch mobs (and, in his view, French revolutionaries) spread through a mob or crowd by contagion, like a disease, breaking down an individual’s moral sense and self-control. Such breakdowns, he argued, caused crowds to commit destructive acts that few individuals would commit when acting alone.
LeBon’s early observations of crowd behavior fueled the development of a concept that social psychologists have called deindividuation, first introduced in the 1950s (Festinger, Pepitone, & Newcomb, 1952) but revisited and revised in each subsequent decade (in the 1960s by Zimbardo [1969], in the 1970s by Diener [1977, 1980], in the 1980s by Prentice-Dunn & Rogers [1982, 1989], and in the 1990s and beyond by Postmes & Spears [1998]). Although the explanations for the phenomenon
have shifted over the decades, the core idea within deindividuation is that certain group situations can minimize the salience of people’s personal identities, reduce their sense of public accountability, and in doing so produce aggressive or unusual behavior (for a meta-analysis of 60 studies, see Postmes & Spears, 1998). To illustrate, violent attacks in Northern Ireland during the mid-1990s can be divided into those carried out by identifiable offenders versus offenders who wore disguises to mask their identity. Compared to identifiable offenders, disguised offenders attacked more people at the scene and inflicted more serious injuries (Silke, 2003).

People often behave differently in a crowd than when alone. Some researchers believe that in a situation like a riot, individuals experience deindividuation – a feeling that they have lost their personal identities and merged anonymously into the group.

Early explanations for the effects of deindividuation suggested that a reduced sense of public accountability weakened the normal restraints against impulsive and unruly behavior (Diener, 1980; Festinger et al., 1952; Zimbardo, 1969). In one famous study of deindividuation, groups of four college women were required to deliver electric shocks to another woman who was supposedly participating in a learning experiment. Half of the groups were deindividuated by making them feel anonymous. They were dressed in bulky laboratory coats and hoods that hid their faces, and the experimenter spoke to them only as a group, never referring to any of them by name (see Figure 17.2). The remaining groups were individuated by having them remain in their own clothes and wear large identification tags. In addition, the women in the latter groups were introduced to one another by name.
During the experiment, each woman had a shock button in front of her, which she was to push when the learner made an error. Pushing the button appeared to deliver a shock to the learner (in reality, it did not). The results showed that the deindividuated women delivered twice as much shock to the learner as the individuated women (Zimbardo, 1969).

Figure 17.2 Anonymity Can Increase Aggression. When women were disguised so that they felt anonymous, they delivered more shock to another person than did nondisguised women.

Another study was conducted at several homes on Halloween night. Children out trick-or-treating were greeted at the door by a woman who asked that each child take only one piece of candy. The woman then disappeared into the house briefly, giving the children the opportunity to take more candy. Some of the children had been asked their names, and others remained anonymous. Children who came in groups or who remained anonymous stole more candy than children who came alone or had given their names to the adult (Diener, Fraser, Beaman, & Kelem, 1976).

These experiments are not definitive, however. For instance, you can see in Figure 17.2 that the laboratory coats and hoods in the first study resembled Ku Klux Klan outfits. Similarly, Halloween costumes often represent witches, monsters, or ghosts. These all carry aggressive or negative connotations. It may be that these costumes did not simply provide anonymity but that they also activated social norms that encouraged aggression. Social norms are implicit or explicit rules for acceptable behavior and beliefs. To test whether social norms rather than anonymity produced aggressive behavior, the shock experiment was repeated, but this time each participant wore one of three outfits: a Ku Klux Klan-type costume, a nurse’s uniform, or the participant’s own clothes.
Compared with the group who wore their own clothes, participants wearing Ku Klux Klan-type costumes delivered somewhat more shocks to the learner (but not reliably so). More significantly, participants wearing nurses’ uniforms actually gave fewer shocks than participants who wore their own clothes. This study shows that anonymity does not inevitably lead to increased aggression (Johnson & Downing, 1979).

The finding that cues that are specific to the situation (like a nurse’s uniform) evoke social norms that guide behavior within anonymous groups led to a later reformulation of the mental processes involved in deindividuation. This view holds that situations that reduce public accountability – like group size and anonymity – do not simply reduce the salience of people’s personal identities but also simultaneously enhance the salience of people’s group identities (like being a nurse, or a member of the People’s Temple). Moreover, situations that make group identities salient promote behavior that is normative for the salient group (like being less aggressive if you are role-playing a nurse). So whereas earlier explanations of deindividuation suggested that anonymity produces a breakdown of the normal restraints against unruly behavior, this more recent explanation suggests that these same features of group situations promote greater conformity to situation-specific social norms (Lea, Spears, & de Groot, 2001; Postmes & Spears, 1998).

Again, the research on deindividuation conveys the power of situations in determining people’s behavior. So the next time you find yourself in a large group situation in which you feel anonymous (not uncommon on a university campus), you may notice yourself getting caught up with the group’s behavior. If the group is focused on peaceful activities (like a candlelight vigil for victims of terrorist attacks), you may act more patriotic and reverent than you might on your own.
Yet if the group is focused on more raucous activities (like looting or harassing others), know that situational forces will exert their pull.

Bystander effects

Earlier we noted that people do not react simply to the objective features of a situation but also respond to their subjective interpretations of it. We have seen that even social facilitation, a primitive kind of social influence, depends in part on the individual’s interpretation of what other people are doing or thinking. But as we will now see, defining or interpreting the situation is often the very mechanism through which individuals influence one another.

In 1964 a young woman named Kitty Genovese was attacked outside her New York apartment around 3 a.m. Two weeks later, the front page of the New York Times ran an article headlined, ‘37 Who Saw Murder Didn’t Call the Police: Apathy at Stabbing Shocks Inspector’. The ensuing article claimed that for more than half an hour 38 eyewitnesses watched Kitty Genovese’s killer stalk and stab her, but not one called the police during the assault, and only one called after she was dead. The American public was horrified by this account. Although investigations some 40 years later suggest that far fewer people actually witnessed the Genovese murder (Manning, Levine, & Collins, 2007), the powerful image of 38 passive witnesses sparked social psychologists of the time to investigate the causes of what came to be called the bystander effect, referring to the finding that people are less likely to help when others are present.

You might suppose that if you needed help in an emergency, you’d be more likely to receive it if many people witnessed the event. Simple odds should increase the chances that helpful souls are in the crowd, right? Unfortunately not. Research on bystander effects shows just the reverse: Often it is the very presence of other people that prevents us from taking action.
In fact, by 1980 more than 50 studies of bystander effects had been conducted, and most of them showed that people reduced helping when others were present (Latané, Nida, & Wilson, 1981). Latané and Darley (1970) suggest that the presence of others deters an individual from taking action by (1) defining the situation as a nonemergency through the process of pluralistic ignorance and (2) diffusing the responsibility for acting.

Although many passers-by have noticed the man lying on the street, no one has stopped to help – to see if he is asleep, sick, drunk, or dead. Research shows that people are more likely to help if no other bystanders are present.

Defining the situation

Many emergencies begin ambiguously. Is that staggering man ill or simply drunk? Is the woman being threatened by a stranger, or is she arguing with her husband? Is that smoke from a fire or just steam pouring out the window? A common way of dealing with such uncertainties is to
postpone action, act as if nothing is wrong, and discreetly glance to see how other people are reacting. What you are likely to see, of course, are other people who, for the same reasons, are also acting as if nothing is wrong. Because people often show blank expressions when confronted with ambiguity, especially if trying to maintain their cool, a state of pluralistic ignorance develops – that is, everybody in the group misleads everybody else by defining the situation as a nonemergency. We have all heard about crowds panicking because each person causes everybody else to overreact. The reverse situation – in which a crowd lulls its members into inaction – may be even more common. Several experiments demonstrate this effect. In one experiment, male college students were invited to an interview. As they sat in a small waiting room completing a questionnaire, what appeared to be smoke began to stream through a wall vent. Some participants were alone in the waiting room when this occurred; others were in groups of three. The experimenters observed them through a one-way window and waited six minutes to see if anyone would take action or report the situation. Of the participants who were tested alone, 75 percent left the room and reported the potential fire. In contrast, less than 13 percent of the participants who were tested in groups reported the smoke, even though the room was so filled with smoke they had to wave it away to complete their questionnaires. Those who did not report the smoke subsequently reported that they had decided that it must have been steam, air conditioning vapors, or smog – practically anything but a real fire or an emergency. This experiment thus showed that bystanders can define situations as nonemergencies for one another (Latané & Darley, 1968). But perhaps these participants were simply afraid to appear cowardly. To check on this possibility, a similar study was designed in which the ‘emergency’ did not involve personal danger. 
Participants waiting in the testing room heard a female experimenter in the next office climb up on a chair to reach a bookcase, fall to the floor, and yell, ‘Oh my God – my foot. . . . I can’t move it. Oh . . . my ankle. . . . I can’t get this thing off me.’ She continued to moan for about a minute longer. The entire incident lasted about two minutes. Only a curtain separated the woman’s office from the testing room, in which participants waited either alone or in pairs. The results confirmed the findings of the smoke study. Of the participants who were alone, 70 percent came to the woman’s aid, but only 40 percent of those in two-person groups offered help. Again, those who had not intervened claimed later that they were unsure of what had happened but had decided that it was not serious (Latané & Rodin, 1969). In these experiments, the presence of others produced pluralistic ignorance; each person, observing the calmness of the others, resolved the ambiguity of the situation by deciding that no emergency existed.

Pluralistic ignorance appeared to govern a more recent and disturbing example of the bystander effect. In 1993, near Liverpool, England, two 10-year-old boys kidnapped 2-year-old James Bulger at a local shopping mall. They led the toddler away on a meandering walk, cruelly tortured him along the way, and eventually beat him to death. Over the course of the day, dozens of adults came across the three boys. Later testimony of these bystanders revealed that they had assumed – or were told – that the three boys were brothers (Levine, 1999). Interpreting aggressive actions as ‘family squabbles’ seemed to define the situation as a nonemergency. This is especially troublesome. If the boys were in fact related, would the frightened and injured toddler be in less need of adult intervention?
Similarly, is a woman threatened by her boyfriend or husband in less trouble than one threatened by a stranger? Crime statistics suggest not.

A shopping mall surveillance video shows 2-year-old James Bulger being led away by one of his two 10-year-old kidnappers, who tortured and eventually murdered the toddler. Many adults saw the boys together that day. Yet even though the toddler’s head was cut and bruised and his face was tear-streaked, nobody intervened on James Bulger’s behalf. Another devastating outcome of pluralistic ignorance?

Diffusion of responsibility

Pluralistic ignorance can lead individuals to define a situation as a nonemergency, but this process does not explain incidents like the Genovese murder, in which the emergency is abundantly clear. Moreover, Kitty Genovese’s neighbors could not observe one another behind their curtained windows and could not tell whether others were calm or panicked. The crucial process here was diffusion of responsibility. When each individual knows that many others are present, the burden of responsibility does not fall solely on him or her. Each can think, ‘Someone else must have done something by now; someone else will intervene.’
To test this hypothesis, experimenters placed participants in separate booths and told them that they would take part in a group discussion about personal problems faced by college students. To avoid embarrassment, the discussion would be held through an intercom system. Each person would speak for two minutes. The microphone would be turned on only in the booth of the person speaking, and the experimenter would not be listening. In reality, all the voices except the participant’s were tape recordings. On the first round, one person mentioned that he had problems with seizures. On the second round, this individual sounded as if he were actually starting to have a seizure and begged for help. The experimenters waited to see if the participant would leave the booth to report the emergency and how long it would take. Note that (1) the emergency is not at all ambiguous, (2) the participant could not tell how the bystanders in the other booths were reacting, and (3) the participant knew that the experimenter could not hear the emergency. Some participants were led to believe that the discussion group consisted only of themselves and the seizure victim. Others were told that they were part of a three-person group, and still others that they were part of a six-person group. Of the participants who thought that they alone knew of the victim’s seizure, 85 percent reported it; of those who thought they were in a three-person group, 62 percent reported the seizure; and of those who thought they were part of a six-person group, only 31 percent reported it (see Figure 17.3).

Figure 17.3 Diffusion of Responsibility. The percentage of individuals who reported a victim’s apparent seizure declined as the number of other people the individual believed were in his or her discussion group increased. (Adapted from J. M. Darley & B. Latané (1968), ‘Bystander Intervention in Emergencies: Diffusion of Responsibility’, Journal of Personality and Social Psychology, 8: 377–383. Copyright © 1968 by the American Psychological Association. Adapted with permission.)

Later interviews confirmed that all the participants perceived the situation to be a real emergency. Most were very upset by the conflict between letting the victim suffer and rushing for help. In fact, the participants who did not report the seizure appeared more upset than those who did. Clearly, we cannot interpret their nonintervention as apathy or indifference. Instead, the presence of others diffused the responsibility for acting (Darley & Latané, 1968; Latané & Darley, 1968).

Exactly how does the presence of others diffuse responsibility? The answer ties back to deindividuation, or feeling less accountable when you’re part of a group. Recent experiments have in fact demonstrated that people experience diffusion of responsibility and become less likely to help even if they simply imagined being in a group a few moments earlier on an unrelated task (Garcia, Weaver, Moskowitz, & Darley, 2002). Imagining being part of a group, these researchers found, calls to mind ideas related to unaccountability, which derails acting based on a sense of individual responsibility.

If pluralistic ignorance and diffusion of responsibility are minimized, will people help one another? To find out, three psychologists used the New York City subway system as their laboratory (Piliavin, Rodin, & Piliavin, 1969). Two male and two female experimenters boarded a subway train separately. The female experimenters took seats and recorded the results, while the two men remained standing.
As the train moved along, one of the men staggered forward and collapsed, remaining prone and staring at the ceiling until he received help. If no help came, the other man finally helped him to his feet. Several variations of the study were tried: The victim either carried a cane (so he would appear disabled) or smelled of alcohol (so he would appear drunk). Sometimes the victim was white, sometimes black. There was no ambiguity when the person with a cane fell. Clearly the victim needed help, so pluralistic ignorance was minimized in that case. Diffusion of responsibility was also minimized because each bystander could not continue to assume that someone else was intervening. So if pluralistic ignorance and diffusion of responsibility are the main obstacles to helping, people should help the victim with a cane in this situation. The results supported this optimistic expectation. The victim with the cane received spontaneous help on more than 95 percent of the trials, within an average of five seconds, regardless of the number of bystanders. The ‘drunk’ victim received help on half of the trials, within an average of two minutes. Although in this study on the New York subway both black and white cane victims were aided by black and white bystanders, more recent work suggests that bystanders who share a common group identity with the victim (e.g., they’re both Manchester United football fans) are especially likely to offer assistance (Levine, Prosser, Evans, & Reicher, 2005).
The role of helping models

In the subway study, as soon as one person moved to help, many others followed. This suggests that just as individuals use other people as models to define a situation as a nonemergency (as in pluralistic ignorance), they also use other people as models to indicate when to be helpful. This possibility was tested by counting the number of drivers who would stop to help a woman parked at the side of a road with a flat tire. Significantly more drivers stopped to help if they had seen another woman with car trouble receiving help about a quarter of a mile earlier. Similarly, people are more likely to donate to a person soliciting for charity if they observe others doing so (Bryan & Test, 1967; Macaulay, 1970). Even role models on television can promote helping. These experiments indicate that others not only help us decide when not to act in an emergency but also serve as models to show us how and when to be good Samaritans.

The role of information

Now that you have read about the factors that deter bystanders from intervening in an emergency, will you be more likely to act in such a situation? An experiment at the University of Montana suggests that you will. Undergraduates were either given a lecture or shown a film based on the material discussed in this section. Two weeks later, each undergraduate was confronted with a simulated emergency while walking with one other person (a confederate of the experimenters). A person needing aid was sprawled on the floor of a hallway. The confederate was trained to react as if the situation were not an emergency. Those who had heard the lecture or seen the film were significantly more likely than others to offer help (Beaman, Barnes, Klentz, & McQuirk, 1978).
This study provides hope: Simply learning about social psychological phenomena – as you are doing now – can begin to lessen the power that situations have to produce unwelcome behavior – at least in the case of intervening in emergencies.

INTERIM SUMMARY

- Situational forces have tremendous power to shape human behavior, and yet these powerful situational forces are often invisible. People often mistakenly make sense of others’ behavior by referring to their personality or character rather than to situational pressures – a tendency called the fundamental attribution error.
- People perform simple tasks better – and complex tasks worse – when in the presence of coactors or an audience. These social facilitation and social inhibition effects occur because the presence of others narrows people’s attention.
- The aggressive behavior sometimes shown by mobs and crowds may be the result of a state of deindividuation, in which individuals feel that they have lost their personal identities and merged into the group. Both anonymity and group size contribute to deindividuation. Deindividuation creates increased sensitivity to situation-specific social norms linked with the group. This can increase aggression when the group’s norms are aggressive and reduce aggression when the group’s norms are benign.
- A bystander to an emergency is less likely to intervene or help if in a group than if alone. Two factors that deter intervention are pluralistic ignorance and diffusion of responsibility. By attempting to appear calm, bystanders may define the situation for one another as a nonemergency, thereby producing a state of pluralistic ignorance. The presence of other people, even imagined others, also diffuses responsibility, so that no one person feels the necessity to act.
CRITICAL THINKING QUESTIONS

1 The presence of others not only alters people’s behavior but also alters their mental processes or patterns of thinking. Drawing from studies of (1) social facilitation, (2) deindividuation, and (3) bystander effects, describe three distinct mental processes – one in each context – that are altered by the presence of others.

2 Reconsider the case of the mass suicides at Jonestown described at the opening of this chapter. One thing to know about the members of the People’s Temple is that they were devoted to ‘the Cause’, a utopian vision of social equality and racial harmony painted by Jim Jones. They moved to the jungle of Guyana for ‘the Cause’. They signed over their worldly possessions, gave up legal custody of their children, and lived separately from their spouses, all for ‘the Cause’. Imagine being in this crowd of followers when Jim Jones asked them to drink the poison. Describe how deindividuation or pluralistic ignorance might have played a role in people’s compliance with Jim Jones’s request.
COMPLIANCE AND OBEDIENCE

Conformity to a majority

When we are in a group, we may find ourselves in the minority on some issue. This is a fact of life to which most of us have become accustomed. If we decide that the majority is a more valid source of information than our own experience, we may change our minds and conform to the majority opinion. But imagine yourself in a situation in which you are absolutely sure that your own opinion is correct and that the group is wrong. Would you yield to social pressure and conform under those circumstances? If you’re like most people, you don’t think you would. While other people follow the crowd like sheep, your actions stem from your beliefs and principles (Pronin, Berger & Molouki, 2007). But your certainty in your own autonomy most likely provides another instance of the fundamental attribution error, or the underestimation of situational pressures. We know this from a classic series of studies on conformity conducted by social psychologist Solomon Asch (1952, 1955, 1958). In Asch’s standard procedure, a participant was seated at a table with a group of seven to nine others (all confederates of the experimenter). The group was shown a display of three vertical lines of different lengths and asked to judge which line was the same length as a line in another display (see Figure 17.4). Each individual announced his or her decision in turn, and the participant sat in the next-to-last seat. The correct judgments were obvious, and on most trials everyone gave the same response. But on several predetermined trials the confederates had been instructed to each give the wrong answer.

Figure 17.4 A Representative Stimulus in Asch’s Study. After viewing display (a), participants were told to pick the matching line from display (b). The displays shown here are typical in that the correct decision is obvious. (After Asch, 1958)
Asch then observed the amount of conformity this procedure would elicit from participants. The results were striking. Even though the correct answer was always obvious, the average participant conformed to the incorrect group consensus about a third of the time; about 75 percent of the participants conformed at least once. Moreover, the group did not have to be large to produce such conformity. When Asch varied the size of the group from 2 to 16, he found that a group of 3 or 4 confederates was just as effective at producing conformity as larger groups (Asch, 1958). Why didn’t the obviousness of the correct answer provide support for the participant’s independence from the majority? Why isn’t a person’s confidence in his or her ability to make simple sensory judgments a strong force against conformity? According to one line of argument, it is precisely the obviousness of the correct answer that produces the strong forces toward conformity (Ross, Bierbrauer, & Hoffman, 1976). Disagreements in real life typically involve difficult or subjective judgments, such as which economic policy will best prevent recession or which of two paintings is more aesthetically pleasing. In these cases, we expect to disagree with others occasionally. We even know that being a minority of one in an otherwise unanimous group is a plausible, if uncomfortable, possibility. The situation in Asch’s experiments is much more extreme. Here the participant is confronted with unanimous disagreement about a simple physical fact, a bizarre and unprecedented occurrence that appears to have no rational explanation. Participants are clearly puzzled and tense. They rub their eyes in disbelief and jump up to look more closely at the lines. They squirm, mumble, giggle in embarrassment, and look searchingly at other members of the group for some clue to the mystery. 
After the experiment, they offer halfhearted hypotheses about optical illusions or suggest that perhaps the first person occasionally made a mistake and each successive person followed suit because of pressure to conform (Asch, 1952). Consider what it means to dissent from the majority under these circumstances. Just as the judgments of the group seem incomprehensible to the participant, so the participant believes that his or her dissent will be incomprehensible to the group. Group members will surely judge the dissenter to be incompetent, even out of touch with reality. Similarly, if the participant dissents repeatedly, this would seem to constitute a direct challenge to the group’s competence, a challenge that requires enormous courage when one’s own perceptual abilities are suddenly and inexplicably called into question. Such a challenge violates a strong social norm against insulting others. This fear of ‘What will they think of me?’ and ‘What will they think I think of them?’ inhibits dissent and generates the strong pressure to conform in Asch’s experiments.
In a study of conformity to majority opinion: (Top) All of the group members except the man sixth from the left are confederates who have been instructed to give uniformly wrong answers on 12 of the 18 trials. Number 6, who has been told that he is participating in an experiment on visual judgment, therefore finds that he is a lone dissenter when he gives the correct answers. (Bottom left) The participant, showing the strain of repeated disagreement with the majority, leans forward anxiously to look at the exhibit in question. (Bottom right) This particular participant persists in his opinion, saying that ‘he has to call them as he sees them’. (The Asch study of resistance to majority opinion, from Scientific American, November 1955, Vol. 193, No. 5, by Solomon E. Asch)

If Asch’s conformity situation is unlike most situations in real life, why did he use a task in which the correct answer was obvious? The reason is that he wanted to study compliance, pure public conformity, uncontaminated by the possibility that participants were actually changing their minds about the correct answers. Several variations of Asch’s study have used more difficult or subjective judgments, and although they may reflect conformity in real life more faithfully, they do not permit us to assess the effects of pure pressure to conform to a majority when we are certain that our own minority judgment is correct (Ross et al., 1976). One of the most important findings from Asch’s and later experiments on conformity is that the pressure to conform is far less strong when the group is not unanimous. If even one confederate breaks with the majority, the amount of conformity drops from 32 percent of the trials to about 6 percent. In fact, a group of eight containing only one dissenter produces less conformity than a unanimous majority of three (Allen & Levine, 1969; Asch, 1958). Surprisingly, the dissenter does not even have to give the correct answer.
Even when the dissenter’s answers are more inaccurate than the majority’s, the majority influence is broken, and participants are more inclined to give their own correct judgments (Asch, 1955). Nor does it matter who the dissenter is. In a variation that approaches the absurd, conformity was significantly reduced even though the participants thought the dissenter was so visually handicapped that he could not see the stimuli (Allen & Levine, 1971). It seems clear that the presence of just one other dissenter to share the potential disapproval or ridicule of the group permits the participant to dissent without feeling totally isolated. Here we see the power of situations to shape behavior yet again. A situation in which we face a unanimous majority creates a strong pull for conformity. By contrast, a seemingly minor change in that situation – a simple break in the unanimity – allows us to ‘be ourselves’. Yet whether or not a situation includes a unanimous majority is a central feature of the situation, so perhaps it is not surprising that rates of conformity depend on it. What about subtler background features of situations? Like what newspaper article you just read, or what’s playing on the television in the corner of the room? Recent variations on Asch’s experiment have explored the influence of such seemingly trivial situational factors by examining how simple exposure to words and pictures can push us to conform. The key is whether these words and pictures prime – or activate – ideas about conformity or ideas about nonconformity. In one experiment, some participants were exposed to words like adhere, comply, and conform, whereas others were exposed to words like challenge, confront, and deviate (Epley & Gilovich, 1999).
In another study, the experimenters primed conformity in some participants by showing them a photo of ‘Norman, an accountant’, whereas they primed nonconformity in others with a photo of ‘Norman, a punk rocker’ (Pendry & Carrick, 2001). In both experiments, participants with prior exposure to the mere idea of conformity actually behaved in more conforming ways when later faced with a unanimous majority. This evidence shows how exquisitely responsive to situational factors our behavior can be. Even features of the situation that are in the background – outside our conscious awareness – can exert their power and pull us to conform.

Simple images can activate the concepts of conformity or nonconformity. Once activated, these concepts can influence people’s behavior. Researchers found more conformity among those who saw a picture of ‘Norman, an accountant’ than among those who saw a picture of ‘Norman, a punk rocker’. (After Pendry & Carrick, 2001; photo © Bill Lai/Rainbow)

To be sure, we conform to the behavior of others for a number of reasons. Sometimes we find ourselves in ambiguous situations and don’t know how to behave. What do you do, for instance, if you don’t know which of several forks to use first at a fancy restaurant? You look to see what others do, and conform. This type of conformity is called informational social influence. In these cases, we conform because we believe that other people’s interpretations of an ambiguous situation are more correct than our own. At other times we find ourselves simply wanting to fit in and be accepted by a group. Perhaps you felt this way when you started at a new school or university. This type of conformity is called normative social influence. In these cases, we conform to a group’s social norms or typical behaviors in order to be liked and accepted. We go along to get along. Because the correct line length was not ambiguous in Asch’s famous study, we know that normative social influence is what pulled Asch’s participants to conform. Luckily, it turns out that age plays an important role in conformity. Although informational social influence continues to produce conformity in old age – suggesting that we still value others’ expertise in late life – the pressures to fit in and be liked that fuel normative social influence appear to lessen as people grow older (Pasupathi, 1999). Sometimes it’s easy to tell what a group’s norms are, because they are evident in the group’s behavior. Again, this was true in the Asch study. In real life, however, group norms may be more difficult to identify. In these cases, pluralistic ignorance may promote conformity to imagined social norms rather than actual social norms. Recall that pluralistic ignorance occurs when group members mistakenly believe they know what others think. In the case of bystander effects, people mistakenly believe that other bystanders know that the situation is a nonemergency. Understanding how pluralistic ignorance fuels conformity can also help reduce pressures to conform.
See the Cutting Edge Research box to learn how Princeton University used this social psychological concept to reduce campus alcohol consumption.

Minority influence

A number of European scholars have been critical of social psychological research in North America because of its preoccupation with conformity and the influence of the majority on the minority. As they correctly point out, intellectual innovation, social change, and political revolution often occur because an informed and articulate minority begins to convert others to its point of view (Moscovici, 1976). Why not study innovation and the influence that minorities can have on the majority? To make their point, these European investigators deliberately began their experimental work by setting up a laboratory situation virtually identical to Asch’s conformity situation. Participants were asked to make a series of simple perceptual judgments in the face of confederates who consistently gave the incorrect answer. But instead of placing a single participant in the midst of several confederates, these investigators planted two confederates, who consistently gave incorrect responses, in the midst of four real participants. The experimenters found that the minority was able to influence about 32 percent of the participants to make at least one incorrect judgment. For this to occur, however, the minority had to remain consistent throughout the experiment. If they wavered or showed any inconsistency in their judgments, they were unable to influence the majority (Moscovici, Lage, & Naffrechoux, 1969). Since this initial demonstration of minority influence, more than 90 related studies have been conducted in both Europe and North America, including several that required groups to debate social and political issues rather than make simple perceptual judgments (Wood, Lundgren, Ouellette, Busceme, & Blackstone, 1994).
The general finding of minority influence research is that minorities can move majorities toward their point of view if they present a consistent position without appearing rigid, dogmatic, or arrogant. Such minorities are perceived to be more confident and, occasionally, more competent than the majority (Maass & Clark, 1984). Minorities are also more effective if they argue a position that is consistent with the developing social norms of the larger society. For example, in two experiments in which feminist issues were discussed, participants were moved significantly more by a minority position that was in line with feminist social norms than by one opposed to feminist norms (Paicheler, 1977). But the most interesting finding of this research is that the majority members in these studies show a change of private attitude – that is, internalization – not just the public conformity that was found in the Asch experiments. In fact, minorities sometimes provoke private attitude change in majority members even when they fail to obtain public conformity. Typically this attitude change shows up only after a delay (Wood et al., 1994).

CUTTING EDGE RESEARCH: Pluralistic Ignorance and Binge Drinking at Universities

A little knowledge can be a powerful thing. Sometimes, it can even stand up to the power of situational forces and lessen their impact on our behavior. We saw this triumph of knowledge earlier when we discussed the role of information in bystander effects. The study from the University of Montana demonstrated that simply learning about the social psychological factors that deter bystanders from intervening produced more offers to help. One of the key insights about social influence illustrated in studies of bystander effects is the concept of pluralistic ignorance (Schanck, 1932). When we don’t know exactly what to do in a complex or confusing situation, we delay action while we gather information from others around us. Yet rarely do we seek out information directly by going up to others and asking them what they think or feel. Instead, we maintain a calm, cool demeanor – basically pretending that we know what we’re doing – and then slyly check out what others are doing. Suppose everyone does the same? What you get is a group of people who look like they know what they’re doing, but inside they are each in turmoil, confused and uncertain. This is pluralistic ignorance: everybody – the plurality – is ignorant of everyone else’s true feelings. This group-level phenomenon characterizes many situations beyond bystanders’ reactions to emergencies. You and your classmates have probably experienced it countless times in large lecture courses. Your professor, after presenting some new and complex material, asks the class if they have any questions. Do you raise your hand? Probably not. Why would you want to acknowledge your confusion? You don’t want to be known as the one who asks stupid questions. Do your classmates raise their hands? No. They obviously understand the material, which is all the more reason to keep your questions to yourself. Do you see the pluralistic ignorance at work? You and all your classmates are behaving identically – you are all sitting quietly, asking no questions. Yet even when faced with this identical behavior, people commonly interpret their own private feelings as different from those of others: You alone are confused, whereas others are confident. In this case, pluralistic ignorance can make students feel alienated from their classmates. Imagine how much more at ease you’d feel if the person next to you leaned over and whispered, ‘I have no idea what she’s been saying!’ Perhaps you’d even find the courage to raise your hand with your own question!

What does all this have to do with binge drinking at universities? As you probably know, students’ alcohol use is a major concern of parents and university administrators across the United States. Alcohol-related accidents are the number one cause of death among university students, and alcohol use is linked to lower academic performance and higher rates of destructive behavior. Ninety percent of university students, when surveyed, indicate that they’ve tried alcohol, and about 25 percent show problems like binge drinking. We know already that peers exert a big influence on students’ drinking. The question is how? Do peers cajole each other into drinking and drinking more? Well, sometimes. Other times, pluralistic ignorance is at work. If 90 percent of students are drinking, it looks like everyone is comfortable with it. Despite this, surveys show that many students have clear misgivings about drinking. Perhaps you’ve nursed a sick classmate, heard about a recent death from binge drinking, or seen that your own hangovers have harmed your academic performance. Even though you have a drink or two at a party, you may not be completely comfortable with the amount of drinking at your university. Here again is a pattern of pluralistic ignorance: Everyone’s behavior looks basically the same – they all look comfortable as they drink. And yet, even while holding that glass full of beer, many students harbor private misgivings about drinking, while assuming that the group norm is to be unconcerned about drinking. What are the consequences of misperceiving this group norm? Conformity with it – an increase in drinking over time! A series of studies at Princeton University documented this problematic outcome (Prentice & Miller, 1993). Yet as we’ve seen, knowledge of social psychology can be a powerful tool. Princeton University also developed and tested a new kind of alcohol education program. First-year students attended a discussion about alcohol use in their residence hall that was either peer-oriented, including information about pluralistic ignorance, or individual-oriented, focusing on decision making in drinking situations. Four to six months later, those who had learned about the concept of pluralistic ignorance reported drinking less. A little knowledge can be a powerful thing! Moreover, the study’s evidence suggested that knowledge of this social psychological principle did not so much change students’ perceptions of the group norm as lessen the norm’s power to induce conformity (Schroeder & Prentice, 1998). So the next time you find yourself at a party deciding whether you should have a drink (or another drink) and surrounded by nonchalant drinkers, consider basing your choice on your own hunches rather than the apparent beliefs of your companions. Social situations exert powerful pressures toward conformity. But you can fight back with the power of knowledge!

Social change – such as the end of apartheid in South Africa – is sometimes brought about because a few people manage to persuade the majority in power to change its attitudes. (Photo © Trinity Mirror/Mirrorpix/Alamy)

One investigator has suggested that minorities are able to produce eventual attitude change because they lead majority individuals to rethink the issues. Even when they fail to convince the majority, they broaden the range of acceptable opinions. In contrast, unanimous majorities are rarely prompted to think carefully about their position (Nemeth, 1986). Another view suggests that minority influence occurs in part because majority members believe that they won’t be influenced by the minority but simply extend them the courtesy of hearing them out. That is, simply to show their open-mindedness, those in the majority may thoughtfully consider the minority opinion but don’t expect this deliberation to change their own views. Ironically, however, it does change people’s minds, because thoughtful deliberation unsettles whole sets of related beliefs, which become more likely to change down the road. This process reflects what psychologists have called an implicit leniency contract in the treatment of minority group members: simply to appear fair, majority members let minority members have their say, but by doing so they unwittingly open the door to minority influence (Crano & Chen, 1998). But thoughtful consideration of minority views is not the whole story. We know this because even when a numerical minority presents weak arguments, it can persuade majority members. More recent work suggests that a steadfast minority can change our attitudes because it takes courage to stand up for one’s own opinion in the face of disagreement with and even harassment by the majority (Baron & Bellman, 2007). Perceived courage may well be what inspires allegiance. These findings remind us that majorities typically have the social power to approve and disapprove, to accept or reject, and it is this power that can produce public compliance or conformity.
In contrast, minorities rarely have such social power. But if they have credibility and show courage, they have the power to produce genuine attitude change and, hence, innovation, social change, and even revolution.

Obedience to authority

We opened this chapter with some of the most chilling horrors of humanity – perhaps none is more sobering in sheer magnitude than the systematic genocide of more than 8 million people undertaken by Nazi Germany during World War II. The mastermind of that horror, Adolf Hitler, may well have been a psychopath. But he could not have done it alone. What about the people who ran the day-to-day operations, who built the ovens and gas chambers, filled them with human beings, counted bodies, and did the necessary paperwork? Were they all psychopaths, too? Not according to social philosopher Hannah Arendt (1963), who observed the trial of Adolf Eichmann, a Nazi war criminal who was found guilty and executed for causing the murder of millions of Jews. She described him as a dull, ordinary bureaucrat who saw himself as a little cog in a big machine. In her book about Eichmann, subtitled A Report on the Banality of Evil, Arendt concluded that most of the ‘evil men’ of the Third Reich were just ordinary people following orders from superiors. Her suggestion was that all of us might be capable of such evil and that Nazi Germany was less wildly alien from the normal human condition than we might like to think. As Arendt put it, ‘In certain circumstances the most ordinary decent person can become a criminal.’ This is not an easy conclusion to accept, because it is more comforting to believe that monstrous evil is done only by monstrous individuals. The problem of obedience to authority arose again in Vietnam in 1968, when a group of American soldiers, claiming that they were simply following orders, killed civilians in the community of My Lai.
Again the international community was forced to ponder the possibility that ordinary citizens are willing to obey authority, even in violation of their own moral consciences. Arendt’s depiction of Adolf Eichmann as an ordinary bureaucrat, just following orders, has been sharply challenged by present-day historians (Cesarani, 2004;
Lozowick, 2002). They contend that although Eichmann may have begun his Nazi career as a rather ordinary man, his increasing identification with the Nazi movement transformed him into a ‘genocidaire’ who gained approval and favor for inventing creative new ways to deport and kill Jews (Haslam & Reicher, 2007). A parallel challenge is growing within social psychology, one that tempers the classic claim that evil situations cause evil behavior with a more nuanced interplay between the ways individuals identify with groups and come to shape and be shaped by powerful situations (Haslam & Reicher, 2007). Exploring the dynamic interactions between, on the one hand, people’s identities, goals, and desires, and on the other hand, the ever-changing situations in which they find themselves, is the topic of much contemporary research and debate within social psychology. The landmark studies that best represent the power of situations to produce such unthinkable behavior were conducted in the 1960s by Stanley Milgram (1963, 1974) at Yale University. Nearly 50 years later, Milgram’s work continues to be a topic of considerable debate and discussion (Blass, 2004; Burger, 2008; Packer, 2008). Ordinary men and women were recruited through a newspaper ad that offered $4 for 1 hour’s participation in a ‘study of memory’. When they arrived at the laboratory, each participant met another participant (in actuality, a confederate of the experimenter) and was told that one of them would play the role of teacher in the study, and the other would play the learner. The two participants then drew slips of paper out of a hat, and the real participant discovered that he or she would be the teacher. In that role, the participant was to read a list of word pairs to the learner and then test his memory by reading the first word of each pair and asking him to select the correct second word from four alternatives. 
Each time the learner made an error, the participant was to press a lever that delivered an electric shock to him. The participant watched while the learner was strapped into a chair and an electrode was attached to his wrist. The participant was then seated in an adjoining room in front of a shock generator whose front panel contained 30 lever switches in a horizontal line (see photos). Each switch was labeled with a voltage rating, ranging in sequence from 15 to 450 volts, and groups of adjacent switches were labeled descriptively, ranging from ‘Slight Shock’ through ‘Danger: Severe Shock’ up to the extreme, labeled simply ‘XXX’. When a switch was depressed, an electric buzzer sounded, lights flashed, and the needle on a voltage meter deflected to the right. To illustrate how it worked, the participant was given a sample shock of 45 volts from the generator. As the procedure began, the experimenter instructed the participant to move one level higher on the shock generator after each successive error by the learner (see Figure 17.5).

The ‘shock generator’ used in Milgram’s experiment on obedience (top left). The ‘learner’ is strapped into the ‘electric chair’ (top right). A participant receives a sample shock before starting the ‘teaching session’ (bottom left). A participant refuses to go on with the experiment (bottom right). Most participants became deeply disturbed by the role they were asked to play, whether they remained in the experiment to the end or refused at some point to go on. (From the film Obedience, distributed by New York University Film Library, copyright © 1965 by Stanley Milgram. Reprinted by permission of Alexandra Milgram)
The learner did not actually receive any shocks. He was a mild-mannered 47-year-old man who had been specially trained for his role, and his behavior followed a precise script. Starting at 75 volts, his expressions of pain could be heard through the adjoining wall. At 150 volts, his escalating expressions of pain included a request to be released from the study. As the shocks became stronger still, he began to shout and curse. At 300 volts, he began to kick the wall, and at the next shock level (marked ‘Extreme Intensity Shock’), he no longer answered the questions or made any noise. As you might expect, many participants began to object to this excruciating procedure, pleading with the experimenter to call a halt. But the experimenter responded with a sequence of calm prods, using as many as necessary to get the participant to go on: ‘Please continue,’ ‘The experiment requires that you continue,’ ‘It is absolutely essential that you continue,’ and ‘You have no other choice – you must go on.’ Obedience to authority was measured by the maximum amount of shock the participant would administer before refusing to continue. When college students first learn the details of Milgram’s procedure and are asked whether they themselves would continue to administer the shocks after the learner begins to pound on the wall, about 99 percent say that they would not (Aronson, 1995). Milgram himself surveyed psychiatrists at a leading medical school. They predicted that most participants would refuse to go on after reaching 150 volts, that only about 4 percent would go beyond 300 volts, and that less than 1 percent would go all the way to 450 volts. What did Milgram find? That 65 percent of the participants continued to obey throughout, going all the way to the end of the shock series (450 volts, labeled ‘XXX’). Not one participant stopped before administering 300 volts, the point at which the learner began to kick the wall (see Figure 17.6).

Figure 17.5 Milgram’s Experiment on Obedience. The ‘teacher’ (a) was told to give the ‘learner’ (b) a more intense shock after each error. If the ‘teacher’ objected, the experimenter (c) insisted that it was necessary to go on. (From Obedience to Authority: An Experimental View by Stanley Milgram. Copyright © 1974 by Stanley Milgram. Reprinted by permission of Alexandra Milgram.)

Figure 17.6 Obedience to Authority. The percentage of participants who were willing to administer a punishing shock did not begin to decline until the intensity level of the shock reached 300 volts (the danger level). (Graph based on ‘Table 2 Distribution of Breakoff Points’ in S. Milgram (1963), ‘Behavioral Study of Obedience,’ Journal of Abnormal and Social Psychology, 67, p. 376. Used by permission of Alexandra Milgram.)

What makes us so unable to fathom the degree of obedience evident in Milgram’s work? The answer ties back to the fundamental attribution error, introduced at the start of the chapter. We assume that people’s behavior reflects their inner qualities – their wishes and their personalities. We underestimate – even overlook altogether – the power that situations hold over us. So we put Milgram’s procedures together with the knowledge that most people would not wish to inflict severe bodily harm on another innocent person and conclude that few would obey. It is true that few wanted to obey. Most voiced considerable distress and reservations about delivering the shocks. And yet they continued. Somehow their intentions to ‘do no harm’ – although voiced – failed to govern their behavior. In assuming that people’s intentions guide their behavior, we’ve failed to see how subtle features of the situation powerfully pulled for obedience. How do we know that it’s the situation at work here? Maybe these were particularly aggressive or spineless people? Maybe Milgram had unleashed the unconscious aggressive drive that Freud discussed? We know it’s the situation because Milgram conducted many variations on the standard procedure and assigned participants at random to different situations. And each variation in the situation led to drastic changes in the rates of obedience. 
Four important features of the situation include (1) surveillance, (2) buffers, (3) the presence of role models, and (4) its emerging nature.

Surveillance

One situational comparison varied the degree to which the experimenter supervised the participant. When the experimenter left the room and issued his orders by telephone, the rate of obedience dropped from 65 percent to 21 percent (Milgram, 1974). Moreover, several of the participants who continued under these conditions cheated by administering shocks of lower intensity than they were supposed to. So the constant presence or surveillance of the experimenter is one situational factor that pulls for obedience.

Buffers

Another set of situational comparisons varied the proximity of the teacher and learner. In the standard procedure, the learner was in the next room, out of sight and heard only through the wall. When the learner was in the same room as the participant, the rate of obedience dropped from 65 percent to 40 percent. When the participant had to personally ensure that the learner held his hand on a shock plate, obedience declined to 30 percent. By contrast, when the psychological distance was increased and the learner offered no verbal feedback from the next room, the rate of obedience shot up to 100 percent. So a second situational factor that pulls for obedience is buffers. Milgram’s participants believed that they were committing acts of violence, but there were several buffers that obscured this fact or diluted the immediacy of the experience. The more direct the participants’ experience with the victim – the fewer buffers between the person and the consequences of his or her act – the less the participant will obey. The most common buffer found in warlike situations is the remoteness of the person from the final act of violence. 
Thus, Eichmann argued that he was not directly responsible for killing Jews; he merely arranged for their deaths. Milgram conducted an analog to this ‘link-in-the-chain’ role by requiring participants only to pull a switch that enabled another teacher (a confederate) to deliver the shocks to the learner. Under these conditions, the rate of obedience soared: A full 93 percent of the participants continued to the end of the shock series. In this situation, the participant can shift responsibility to the person who actually delivers the shock. The shock generator itself served as a buffer – an impersonal mechanical agent that actually delivered the shock. Imagine how obedience would have declined if participants were required to hit the learner with their fists. In real life, we have analogous technologies that permit us to destroy distant fellow humans by remote control, thereby removing us from the sight of their suffering. Although we probably would all agree that it is worse to kill thousands of people by pushing a button that releases a
guided missile than it is to beat one individual to death with a rock, it is still psychologically easier to push the button. Such are the effects of buffers.

Modern warfare allows individuals to distance themselves from the actual killing, giving them the feeling that they are not responsible for enemy deaths.

Role models

One reason Milgram’s experiment obtained such high levels of obedience is that the social pressures were directed toward a lone individual. If the participant was not alone, would he or she be less obedient? We have already seen some data to support this possibility: A participant in the Asch conformity situation is less likely to go along with the group’s incorrect judgments if there is at least one other dissenter. A similar thing happens in Milgram’s obedience situation. In one variation of the procedure, two additional confederates were employed. They were introduced as participants who would also play teacher roles. Teacher 1 would read the list of word pairs, Teacher 2 would tell the learner if he was right or wrong, and Teacher 3 (the participant) would deliver the shocks. The confederates complied with the instructions through the 150-volt shock, at which point Teacher 1 informed the experimenter that he was quitting. Despite the experimenter’s insistence that he continue, Teacher 1 got up from his chair and sat in another part of the room. After the 210-volt shock, Teacher 2 also quit. The experimenter then turned to the participant and ordered him to continue alone. Only 10 percent of the participants were willing to complete the series in this situation. In a second variation, there were two experimenters rather than two additional teachers. After a few shocks, they began to argue. One of them said that they should stop the experiment; the other said that they should continue. 
Under these circumstances, not a single participant would continue, despite the orders to do so by the second experimenter (Milgram, 1974). So role models who disobey allow participants to follow their own conscience. But before we congratulate these participants on their autonomy in the face of social pressure, we should consider the implication of these findings more closely. They suggest participants were not choosing between obedience and autonomy but between obedience and conformity: Obey the commanding experimenter or conform to the emerging norm to disobey. Obeying or conforming may not strike you as a very heroic choice. But these are among the processes that provide the social glue for the human species. One social historian has noted that ‘disobedience when it is not criminally but morally, religiously, or politically motivated is always a collective act and it is justified by the values of the collectivity and the mutual engagements of its members’ (Walzer, 1970, p. 4).

Emerging situations

So far, when we’ve discussed the power of situations, we’ve painted situations in fairly broad brushstrokes. For instance, we’ve considered how a group with a unanimous opinion exerts more social pressure than a group that includes a single dissenter. These broad brushstrokes obscure the fact that the meaning of any given situation unfolds and changes over time. What begins benignly may insidiously evolve into something horrifying. Yet the emerging nature of situations – just like the power of situations more generally – often eludes us. For instance, many people who hear about the Milgram study wonder why anyone would ever agree to administer the first shock. Almost everyone claims that they themselves wouldn’t do it. 
But that’s because people tend to focus on the end of the story – how outrageous the situation ends up, with participants delivering shocks so intense they are beyond description (‘XXX’) to a man who has presumably lost consciousness. What we need to do is focus on how the situation started and, more importantly, how it evolved. The situation began innocuously enough. Participants replied to an advertisement and agreed to participate in a study at Yale University. By doing so they implicitly agreed to cooperate with the experimenter, follow the directions of the person in charge, and see the job through to completion. This is a very strong social norm, and we tend to underestimate how difficult it is to break such an agreement and go back on our implied word to cooperate. And when participants arrived, they found themselves in a fairly straightforward learning experiment. They might have been thinking, ‘How hard could it be to learn these simple word pairs?’ ‘I bet the threat of shock will speed up the learning process.’ Plus, the first shock was just 15 volts – perhaps not even noticeable. And the shock level increased by a mere 15 volts at a time. Although it’s abundantly clear that administering 450 volts is not a good thing, the change from innocuous to unfathomable is not so clear. Once participants gave the first shock, there was no longer a
natural stopping point. By the time they wanted to quit, they were trapped. The true character of the situation had emerged only slowly over time. Making matters worse, in order to break off, participants had to suffer the guilt and embarrassment of acknowledging that they were wrong to begin at all. And the longer they put off quitting, the harder it became to admit their misjudgment in going as far as they had. It is often easier to continue with bad behavior than to admit our mistakes. Perhaps most significantly, Milgram’s participants had no time to reflect. They had no time to think about the strange situation they now found themselves in, and what their own conscience would dictate. This effectively prevented them from accessing their own definition of the situation (as ‘horrifying’). Instead, participants were torn apart by two conflicting definitions of the situation: the authority’s definition – ‘The experiment requires that you continue’ – and the victim’s definition – ‘Let me out of here! My heart is starting to bother me!’ Most often, in this fast-paced and evolving situation, the participants’ behavior reflected other people’s definitions of the situation: Most participants, following the definition offered by the experimenter, obeyed. Others, following the definitions offered by peers who themselves broke off, disobeyed. Interestingly, among those who did manage to disobey, the vast majority did so at 150 volts, the first time at which the learner requested to be released from the experiment (Packer, in press). This suggests that, like obedience, disobedience is also very much connected to situational triggers. Participants’ own wishes and desires – although voiced – did not steer their behavior. Imagine how much less obedience there would have been if there had been a 15-minute break after 300 volts. 
Ideological justification

On top of all the situational factors that pull for obedience (see the Concept Review Table for a review) are societal factors. Milgram suggested that the potential for obedience to authority is such a necessary requirement for communal life that it has probably been built into our species by evolution. The division of labor in a society requires that individuals be willing at times to subordinate their own independent actions to serve the goals of the larger social organization. Parents, school systems, and businesses nurture this willingness by reminding the individual of the importance of following the directives of others who ‘know the larger picture’. To understand obedience in a particular situation, then, we need to understand the individual’s acceptance of an ideology – a set of beliefs and attitudes – that legitimates the authority of the person in charge and justifies following his or her directives. As an example of ideological justification, the Islamic extremists who became suicide hijackers on September 11, 2001, believed that, as martyrs, they would enter infinite paradise if they followed the directives of Osama bin Laden.

CONCEPT REVIEW TABLE
Situational features of obedience to authority

Feature: Experimental evidence within Milgram’s studies
Surveillance: Obedience rate drops when the experimenter is not physically present.
Buffers: Obedience rates drop when the ‘victim’ is moved closer to the participant and increase when the ‘victim’ is never heard.
Role models: Obedience rates drop when a fellow ‘teacher’ or a second experimenter stops cooperating.
Emerging situations: Obedience rates seem to depend on the innocuous start to the study, the small rate of change in shock intensity, and the lack of time for the participant to reflect. 
With eerie similarity, the members of the People’s Temple believed that, in drinking the poison, they were ‘crossing over’ into paradise for the sake of ‘the Cause’ that Jim Jones illuminated. Ideologies not only guide the bizarre behaviors of religious extremists but also guide the day-to-day activities of military organizations. Nazi officers believed in the primacy of the German state and hence in the legitimacy of orders issued in its name. Similarly, soldiers of any stripe commit themselves to the premise that national or international security requires strict obedience to military commands. Killing other humans, under a forceful ideology, becomes an honor and a duty. In the Milgram experiments, ‘the importance of science’ can be viewed as the ideology that legitimated even extraordinary demands. Some critics have argued that the Milgram experiments were artificial, that the prestige of a scientific experiment led participants to obey without questioning the dubious procedures in which they participated, and that in real life people would never do such a thing (Baumrind, 1964). Indeed, when Milgram repeated his experiment in a rundown set of offices and removed any association with Yale University from the setting, the rate of obedience dropped somewhat from 65 percent to 48 percent (Milgram, 1974). But this criticism misses the major point. The prestige of science is not an irrelevant artificiality but an integral part of Milgram’s demonstration. Science serves the same legitimating role in the experiment that the German state served in Nazi Germany and that national security serves in wartime killing. It is precisely their belief in the importance of scientific research that prompts individuals
to subordinate their moral autonomy and independence to those who claim to act on behalf of science.

Soldiers follow orders because they believe that national security requires that they do so. This provides an ideological justification for their obedience.

Ethical issues

Milgram’s experiments have been criticized on several grounds. First, critics argue that Milgram’s procedures created an unacceptable level of stress in the participants during the experiment itself. In support of this claim, they quote Milgram’s own description:

[Participants] were observed to sweat, tremble, stutter, bite their lips, groan, and dig their fingernails into their flesh. These were characteristic rather than exceptional responses to the experiment. . . . One sign of tension was the regular occurrence of nervous laughing fits. . . . On one occasion we observed a seizure so violently convulsive that it was necessary to call a halt to the experiment. (Milgram, 1963, p. 375)

Second, critics express concern about the long-term psychological effects on participants of having learned that they would be willing to give potentially lethal shocks to a fellow human being. Third, critics argue that participants are likely to feel foolish and ‘used’ when told the true nature of the experiment, thereby making them less trusting of psychologists in particular and of authority in general. In response to these and other criticisms, Milgram pointed out that after his experiments he conducted a careful debriefing; that is, he explained the reasons for the procedures and reestablished positive rapport with the participant. This included a reassuring chat with the ‘victim’ who the participant had thought was receiving the shocks. After the completion of an experimental series, participants were sent a detailed report of the purposes and results of the experiment. 
Milgram then conducted a survey, which revealed that 84 percent of the participants were glad to have taken part in the study; 15 percent reported neutral feelings; and 1 percent stated that they were sorry to have participated. These percentages were about the same for those who had obeyed and those who had defied the experimenter. In addition, 74 percent indicated that they had learned something of personal importance as a result of being in the study. Milgram also hired a psychiatrist to interview 40 of the participants to determine whether the study had any injurious effects. This follow-up revealed no indications of long-term distress or traumatic reactions (Milgram, 1964). In Chapter 1 we noted that research guidelines set forth by the U.S. government and the American Psychological Association emphasize two major principles: informed consent and minimal risk. Milgram’s studies were conducted in the early 1960s, before these guidelines were in effect. Despite the importance of the research and the precautions that Milgram took, it seems likely that most of the review boards that must now approve research projects would not permit Milgram’s exact study procedures to be carried out today. However, a recent partial replication of Milgram’s famous study provided greater protection of the rights and welfare of study participants and thereby allowed an empirical test of whether people today would still obey authority to the same degree as Milgram found. Burger (in press) reasoned that reaching 150 volts on the shock generator in Milgram’s original study was a critical turning point. Recall that 150 volts was the first point at which the learner demanded to be released from the study and the point at which the majority of those who disobeyed the experimenter broke off. Put differently, the vast majority of people who shocked the learner at 150 volts (nearly 80%) continued to obey the experimenter all the way to 450 volts. 
Burger thus terminated his replication of Milgram’s classic study when shocks reached 150 volts, reasoning that data gathered up until that point
would provide sufficient information about obedience rates today. Although Burger’s sample was more diverse, and included women as well as men, results were strikingly similar to those obtained nearly 50 years earlier. One sobering departure from Milgram’s findings was that Burger (in press) uncovered virtually no effect of seeing a peer role model disobey the experimenter. Students who first learn the results of Milgram’s famous experiments have long questioned whether people today would obey authority as blindly. Apparently so. Situational pressures to obey authority appear as strong today as ever (Burger, in press).

Obedience in everyday life

Because the Milgram experiments have been criticized for being artificial (Orne & Holland, 1968), it is instructive to look at an example of obedience to authority under more ordinary conditions. Researchers investigated whether nurses in public and private hospitals would obey an order that violated hospital rules and professional practice (Hofling, Brotzman, Dalrymple, Graves, & Pierce, 1966). While on regular duty, the participant (a nurse) received a phone call from a doctor whom she knew to be on the staff but had not met: ‘This is Dr. Smith from Psychiatry calling. I was asked to see Mr. Jones this morning, and I’m going to have to see him again tonight. I’d like him to have had some medication by the time I get to the ward. Will you please check your medicine cabinet and see if you have some Astroten? That’s A-S-T-R-O-T-E-N.’ When the nurse checked the medicine cabinet, she saw a pillbox labeled:

ASTROTEN
5 mg capsules
Usual dose: 5 mg
Maximum daily dose: 10 mg

After she reported that she had found it, the doctor continued, ‘Now will you please give Mr. Jones a dose of 20 milligrams of Astroten. 
I’ll be up within 10 minutes; I’ll sign the order then, but I’d like the drug to have started taking effect.’ A staff psychiatrist, posted unobtrusively nearby, terminated each trial by disclosing its true nature when the nurse either dispensed the medication (actually a harmless placebo), refused to accept the order, or tried to contact another professional. This order violated several rules: The dose was clearly excessive. Medication orders may not be given by telephone. The medication was unauthorized – that is, it was not on the ward stock list clearing it for use. Finally, the order was given by an unfamiliar person. Despite all this, 95 percent of the nurses started to give the medication. Moreover, the telephone calls were all brief, and the nurses put up little or no resistance. None of them insisted on a written order, although several sought reassurance that the doctor would arrive promptly. In interviews after the experiment, all the nurses stated that such orders had been received in the past and that doctors became annoyed if the nurses balked. Again, these results surprise us. And they surprise professionals as well. When nurses who had not been participants in the study were given a complete description of the situation and asked how they themselves would respond, 83 percent reported that they would not have given the medication, and most of them thought that a majority of nurses would also refuse. Twenty-one nursing students who were asked the same question all asserted that they would not have given the medication as ordered. This again portrays the unexpected power of situational forces. We make the mistake of assuming that people’s behavior reflects their character and their intentions. We make this fundamental attribution error time and again. 
INTERIM SUMMARY

- Asch’s classic experiments on conformity found that a unanimous group exerts strong pressure on an individual to conform to the group’s judgments – even when those judgments are clearly wrong. Much less conformity was observed if even one person dissented from the group.

- A minority within a larger group can move the majority toward its point of view if it maintains a consistent dissenting position without appearing to be rigid, dogmatic, or arrogant, a process called minority influence. Minorities sometimes even obtain private attitude change from majority members, not just public conformity. This is thought to occur through an implicit leniency contract in which majority members agree to let minority members have their say but don’t expect to be influenced by them.

- Milgram’s classic experiments on obedience to authority demonstrated that ordinary people would obey an experimenter’s order to deliver strong electric shocks to an innocent victim. Situational factors conspiring to produce the high obedience rates include (1) surveillance by the experimenter, (2) buffers that distance the person from the consequences of his or her acts, (3) role models, and (4) the emerging properties of situations. An ideology about the importance of science also helped to justify obedience to the experimenter.

- Although Milgram’s research is unquestionably important, the ethics of his experiments have generated considerable controversy. It is unclear whether similar research could be conducted today.
CRITICAL THINKING QUESTIONS

1 One account of how individuals are recruited to become suicide terrorists suggests that a charismatic leader indoctrinates a group of people at once, asking them to ‘please step forward’ if they have any doubts about becoming martyrs for the cause. Although the socialization of suicide terrorists is no doubt complex and multifaceted, describe how this simple tactic exploits the concept of pluralistic ignorance.

2 Consider the unsettling message of Milgram’s studies: that if a situation is arranged properly and supported by ideological beliefs, ordinary people – like you – can be pulled to act in ways that you find morally reprehensible. How will you fight against the power of such situations in your own life? Can certain other situations pull you to follow your own conscience?

INTERNALIZATION

Most studies of conformity and obedience focus on whether individuals overtly comply with the social influence wielded within the situation. In everyday life, however, those who attempt to influence us usually seek internalization; that is, they want to change our private attitudes, not just our public behaviors, and to obtain changes that will be sustained even after they are no longer on the scene. Certainly the major goal of parents, educators, clergy, politicians, and advertisers is internalization, not just compliance. In this section we begin to examine social influence that persuades rather than coerces.

Self-justification

When discussing the emerging situational predicament that Milgram’s participants found themselves in, we concluded that sometimes it’s easier to continue with bad behavior than to admit our mistakes. Why is that? Why is it so difficult for us to come clean and say, ‘I changed my mind. I no longer think that doing this is right’? Part of the answer is that people don’t like to be inconsistent. 
The pressure to be consistent can be so strong that often people will justify – or rationalize – past behavior by forming or adjusting their private beliefs to support it. A classic study of social influence tested the power of this pull to be consistent. To get a sense of the study, imagine that you were to knock on the doors of homeowners in your community, identify yourself as belonging to the Community Committee on Public Safety, and ask those who answered: ‘Could we install a public service billboard on your front lawn?’ Naturally, you’d want to give folks a sense of what the billboard would look like, so you’d show them a photo of an attractive home nearly obscured by a huge, poorly lettered sign that reads ‘Drive Carefully’. Would people agree? Not many. In the early 1960s, a research team found that only 17 percent said yes (Freedman & Fraser, 1966). Although few could argue with the mission of promoting safe driving, the request was simply too large. There is probably no way that people would hand over the use of their front lawn for this or any other cause. Or is there?

Suppose an associate of yours had approached these homeowners a few weeks earlier with a relatively minor request: ‘Would you place this sign in your living room window?’ Your associate would then show them a small, three-inch-square sign that reads ‘Be a Safe Driver’. The cause is good and the request so small that nearly everyone says yes. And although the actions taken by the homeowner are relatively minor, their effects are powerful and lasting. For the next few weeks, every time these people look at their window, they face a salient reminder that they care about public safety, so much so that they took action. When guests ask about the sign, they will find themselves explaining how important the matter of safe driving is to them and why they had to do something about it.
Now, two weeks later, you drop by with your large request about the billboard. What happens under these circumstances? The study done in the 1960s found that a full 76 percent said yes (Freedman & Fraser, 1966). Consider how hard it was for these poor homeowners to say no! After all, they’re already known to the community and to themselves as the sort of people who care enough about safe driving to take action on the matter, so be it if that action involves a sacrifice. This study illustrates the social influence tool called the foot-in-the-door technique: To get people to say yes to requests that would ordinarily lead to no, one approach is to start with a small request that few would refuse. Ideally, the small request is a miniature version of the larger request that you already have in mind. Once people have publicly complied with this easy request, they’ll start reexamining who they are and what they stand for. The result is that their private attitudes will swing more strongly in line with their public behavior, making it harder for them to say no to the larger request. The original work on the foot-in-the-door technique was conducted in the U.S. Recent experiments suggest that pressures to appear consistent may be especially strong in Western cultures that value individualism (Petrova, Cialdini, & Sills, 2007). The degree to which self-justification tendencies apply universally across cultures continues to be a hotly debated topic (Heine & Lehman, 1997; Hoshino-Browne, Zanna, Spencer, Zanna, Kitayama, & Lackenbauer, 2005; Kitayama, Snibbe, Markus, & Suzuki, 2004).
Cognitive dissonance theory

The foot-in-the-door technique also illustrates that one way to influence people’s attitudes is through their behavior. If you can induce people to act in a way that is consistent with the attitude you’d like them to adopt, then they will eventually justify their behavior by adopting the sought-after attitude. The most influential explanation of this sequence of events is Leon Festinger’s cognitive dissonance theory. This theory assumes that there is a drive toward cognitive consistency, meaning that two cognitions – or thoughts – that are inconsistent will produce discomfort, which will in turn motivate the person to remove the inconsistency and bring the cognitions into harmony. The term cognitive dissonance refers to the discomfort produced by inconsistent cognitions (Festinger, 1957). Although cognitive dissonance theory addresses several kinds of inconsistency, it has been most provocative in predicting the aftermath of behaving in ways that run counter to one’s attitudes. One label we have for attitude-behavior discrepancies is hypocrisy. For instance, we call the fundamentalist preacher who frequents strip bars a hypocrite. The sheer negativity of this label offers insight into the discomfort caused by any discrepancies between what we do and what we believe. A core idea within cognitive dissonance theory is that when attitudes and behavior are at odds, we take the easiest route to ridding ourselves of the unpleasant state of dissonance. That is, we create consonance or consistency by changing our attitudes. Past behavior, after all, cannot be changed. And changing a line of action already undertaken – like stopping the shocks in the Milgram experiment or quitting smoking – can produce even more dissonance because it introduces the idea that your initial judgment was poor, a thought that is inconsistent with your generally favorable view of yourself.
So, the behavior is maintained or justified by changing or adding new consonant cognitions. Rationalization is another term for this process of self-justification. In the case of the Milgram experiment, some participants were likely to tell themselves, ‘At least I’m following orders, unlike that unruly guy who won’t learn these word pairs.’ If you smoke cigarettes, you may reduce dissonance by telling yourself and others something like, ‘I know smoking is bad for my health in the long run, but it relaxes me so much, and that’s more important to me.’

One of the earliest and most famous studies of cognitive dissonance examined the effects of induced compliance. University students participated one at a time in an experiment in which they worked on a dull, repetitive task: They were asked to turn wooden pegs on a pegboard, over and over again. After completing the boring task, the experimenter asked participants a favor. They were told that the study was really about how people’s expectations influence their performance, and that the guy who normally plays the confederate role and tells people what to expect wasn’t available. Under this guise, some participants were offered $1 to tell the next participant that the tasks had been fun and interesting. Others were offered $20 to do this. (This study was conducted in the 1950s. Back then, $1 could buy you dinner at a restaurant, whereas $20 could buy a week’s worth of groceries for you and your family.) Whether paid $1 or $20, all of the participants complied with the request.

Figure 17.7 An Induced-Compliance Experiment. The smaller incentive for agreeing to say that the tasks were interesting led participants to infer that they had actually enjoyed the tasks. The larger incentive did not. (After Festinger & Carlsmith, 1959)
Later they were asked how much they had enjoyed the tasks. As shown in Figure 17.7, participants who had been paid only $1 stated that they had in fact enjoyed the tasks. But participants who had been paid $20 did not find them significantly more enjoyable than did members of a control group who never spoke to another participant (Festinger & Carlsmith, 1959). The small incentive for complying with the experimenter’s request – but not the large incentive – led participants to believe what they had heard themselves say. Why should this be so? According to cognitive dissonance theory, being paid $20 provides a very clear and consonant reason for complying with the experimenter’s request to talk to the waiting participant, and so the person experiences little or no dissonance. The inconsistency between the person’s behavior (telling the next person that the task was interesting) and his or her attitude toward the task (the task was boring) is outweighed by the far greater consistency between the compliance and the huge monetary incentive for complying. Accordingly, the participants who were paid $20 did not change their attitudes. Those who were paid $1, however, had no clear or consonant reason for complying. Accordingly, they experienced dissonance, which they reduced by coming to believe that they really did enjoy the tasks. The general conclusion is that dissonance-causing behavior will lead to attitude change in induced-compliance situations when the behavior can be induced with a minimum amount of pressure, whether in the form of reward or punishment. Experiments with children have confirmed the prediction about minimal punishment. If children obey a very
mild request not to play with an attractive toy, they come to believe that the toy is not as attractive as they first thought – a belief that is consistent with their observation that they are not playing with it. But if the children refrain from playing with the toy under a strong threat of punishment, they do not change their liking for the toy (Aronson & Carlsmith, 1963; Freedman, 1965). Other studies within the tradition of cognitive dissonance theory focused on how people justify their past efforts by valuing their chosen paths more strongly. An illustration of this occurs each year on university campuses in the U.S.: Students often go through elaborate rituals – and sometimes painful and dangerous hazings – to join campus fraternities and sororities. Experiments on cognitive dissonance provide clues as to why these rituals persist. People who go through more effort to join a group end up valuing that group more than those who join with little effort (Aronson & Mills, 1959). We justify our past decisions similarly (Brehm, 1956). Before a decision is made, a number of alternatives may seem equally attractive. Perhaps you had to decide which of several universities to attend. No doubt they each had their good features, but of course, you could only attend one. After you made your decision, cognitive dissonance theory predicts that the simple act of choosing one alternative would create dissonance in you, because it is inconsistent with the good features of the alternatives not chosen. To reduce this unpleasant state, the theory predicts that you will justify your choice by downplaying the good features of the paths not taken and exaggerating the good features of the path you took. Does this prediction fit with your own experience?

Self-perception theory

Over the years, alternative explanations have been offered for some of the findings of cognitive dissonance theory.
For instance, social psychologist Daryl Bem argued that a simpler theory, which he called self-perception theory, could explain all results of the classic dissonance experiments without reference to any inner turmoil or dissonance. In brief, self-perception theory proposes that individuals come to know their own attitudes, emotions, and other internal states partially by inferring them from observations of their own behavior and the circumstances in which the behavior occurs. To the extent that internal cues are weak, ambiguous, or uninterpretable, self-perception theory states that the individual is like any outside observer who must rely on external cues to infer the individual’s inner states (Bem, 1972). Self-perception theory is illustrated by the common remark, ‘This is my second sandwich; I guess I was hungrier than I thought.’ Here the speaker has inferred an internal state by observing his or her own behavior. Similarly, the self-observation ‘I’ve been biting my nails all day; something must be bugging me’ is based on the same external evidence that might lead a friend to remark, ‘You’ve been biting your nails all day; something must be bugging you.’

With this alternative theory in mind, reconsider the classic peg-turning study (Festinger & Carlsmith, 1959). Recall that participants were induced to tell a waiting participant that a dull peg-turning task had in fact been fun and interesting. Participants who had been paid $20 to do this did not change their attitudes, whereas participants who had been paid only $1 came to believe that the tasks had in fact been enjoyable.
Self-perception theory proposes that, just as an observer tries to understand the cause of someone else’s behavior, so, too, participants in this experiment looked at their own behavior (telling another participant that the tasks were interesting) and implicitly asked themselves, ‘Why did I do this?’ Self-perception theory further proposes that they sought an answer the same way an outside observer would, by trying to decide whether to explain the behavior with reference to the person (he did it because he really did enjoy the task) or with reference to the situation (he did it for the money). When the individual is paid only $1, the observer is more likely to credit the person: ‘He wouldn’t be willing to say it for only $1, so he must have actually enjoyed the tasks.’ But if the individual is paid $20, the observer is more likely to credit the situation: ‘Anyone would have done it for $20, so I can’t judge his attitude toward the tasks on the basis of his statement.’ If the individual follows the same inferential process as this hypothetical outside observer, participants who are paid $1 infer their attitude from their own behavior: ‘I must think the tasks were enjoyable. Otherwise I would not have said so.’ But participants who are paid $20 attribute their behavior to the money and therefore express the same attitudes toward the tasks as the control participants who made no statements to another participant. Importantly, virtually all the participants in the peg-turning study were willing to tell the next participant that the task was enjoyable – even if they were offered only $1 to do so. But the participants themselves did not know this. Thus, when participants who were paid $1 inferred that they must think the tasks are enjoyable because otherwise they would not have said so, they were wrong. They should have inferred that they talked to the next participant because they were paid $1 to do so.
In other words, they committed the fundamental attribution error: They overestimated causes due to the person and underestimated causes due to the situation. The opposite can also happen: People sometimes overestimate causes due to the situation and underestimate causes due to the person. We saw this back in Chapter 1, when we discussed the unexpected effects of rewarding kids with free pizza for meeting monthly reading goals. Kids do read more if reading earns them pizza. But do they enjoy reading? And do they continue reading once the pizza program ends? Dozens of studies, based on the principles of self-perception theory, suggest that rewards can undermine intrinsic interest and motivation. This happens because when people see
that their behavior is caused by some external, situational factor – like a free pizza – they discount the input of any internal, personal factors – like their own enjoyment of the activity. So when kids ask themselves why they read, they’ll say it’s for the pizza. And when there’s no more pizza to be had, they’ll see no other compelling reason to read. Even though they might have enjoyed reading, the rewards loomed larger. Recall that this undermining effect of rewards is called the overjustification effect, whereby people go overboard and explain their own behavior with too much emphasis on salient situational causes and not enough emphasis on personal causes. So which theory wins? Does cognitive dissonance theory or self-perception theory best explain our tendencies to justify our actions by changing our attitudes? In general, each of the alternative theories has generated data that the other theory cannot explain. Some studies find evidence that participants do experience arousal and discomfort when arguing for positions that are contrary to their true beliefs, a finding that is consistent with cognitive dissonance theory but not with self-perception theory (Elliot & Devine, 1994; Elkin & Leippe, 1986). Others have concluded that each theory may be correct – under slightly different circumstances – and that the focus of research should be on specifying when and where each theory applies (Baumeister & Tice, 1984; Fazio, Zanna, & Cooper, 1977; Paulhus, 1982). Recent experiments actually pose a challenge to both theories. A replication of classic self-justification paradigms used participants who were either amnesiac or under cognitive load (that is, multitasking and therefore having impaired attention and working memory). The results showed just as much attitude change, even when participants couldn’t even remember the recent behavior that their newly adopted attitude justified! (Lieberman, Ochsner, Gilbert, & Schacter, 2001). 
Another replication, this time with capuchin monkeys, found clear attitude change as well (Egan, Santos, & Bloom, 2007). These newer findings suggest that behavior-induced attitude change can happen automatically, without much conscious thought, perhaps through some core innate or otherwise universal knowledge systems that favor consistency. So although both cognitive dissonance theory and self-perception theory hold that people ‘rationalize’ their past actions by changing their attitudes, this process may not actually involve the deliberate consonance-seeking or sense-making that either theory has presumed.

These various perspectives on self-justification describe the psychological aftermath of potent social influence techniques. Throughout this chapter, we’ve seen one core lesson within social psychology illustrated time and again: Situational forces can be powerful. A related core lesson within social psychology is that these powerful situational forces are often invisible. When some form of social influence pressures us to behave in a certain way, we often fail to recognize it; and when left to make sense of our actions, we wittingly or unwittingly change our inner attitudes to be in line with our outward behavior. From this perspective, the classic experiments on self-justification can be viewed as social influence techniques in action.

Self-justification in Jonestown

Knowing how self-justification processes lead people to rationalize their actions by changing their attitudes, think back to the Jonestown case mentioned at the start of the chapter. When the public first learned of the mass suicide, the fundamental attribution error reigned: Jim Jones’s followers must have been crazy or weak-willed. Who else would take their own lives at another’s request? Later news reports challenged this view by highlighting the diversity of the People’s Temple membership.
Although some were poor, uneducated, and perhaps more gullible than most, many were educated professionals. Recall that the lesson within the fundamental attribution error is that we underestimate the power of situations. Taking this to heart, a social psychological analysis of the Jonestown tragedy examines followers’ paths to Jonestown and the social influence tactics used by Jim Jones (Osherow, 1984). Oddly enough, we have a window into daily practices within Jonestown because Jim Jones insisted that most events be audiotaped, including the final act of suicide. Reports from former members of the People’s Temple also help complete the picture. From this evidence, it becomes clear that Jim Jones was artfully exploiting the foot-in-the-door technique. Jones did not start off by asking would-be members: ‘Give me your life savings and your children and move with me to the jungle.’ Rather, he first got prospective members to comply with small

© BETTMANN/CORBIS Jim Jones artfully exploited people’s tendencies to self-justify.
requests and then gradually stepped up the level of commitment. Recall that Jim Jones had painted a utopian vision of social equality and racial harmony that became known simply as ‘the Cause’. At first, members were just asked to donate their time to the Cause, later their money, and still later their possessions, legal custody of their children, and so on. Little by little, followers’ options became more limited. Step by step, they became motivated to explain or to justify their past behavior in support of the Cause. The easiest way to do that was to become even more committed to the Cause. Jeanne Mills managed to defect from the People’s Temple before the move to Guyana. She became a vocal critic of the group (and was later murdered). In her book, Six Years With God (1979), Mills describes the forces of self-justification at work:

We had to face painful reality. Our life savings were gone. Jim [Jones] had demanded that we sell the life insurance policy and turn the equity over to the church, so that was gone. Our property had all been taken from us. . . . We thought that we had alienated our parents when we told them we were leaving the country. Even the children whom we had left in the care of [others in the church] were openly hostile toward us. Jim had accomplished all this in such a short time! All we had left now was Jim and the Cause, so we decided to buckle under and give our energies to these two. (Mills, 1979, cited in Osherow, 1984)

So, wittingly or unwittingly, Jim Jones used virtually invisible social influence techniques to extract behavioral compliance from his followers. This strategy takes advantage of people’s tendencies to self-justify and results in members intensifying their beliefs in Jim Jones and the Cause, while minimizing their assessments of the noxiousness of the costs of membership.
Once in Guyana, Jim Jones continued to escalate the level of commitment he required of members by introducing the idea of the ‘final ritual’ or ‘revolutionary suicide’. He staged events called ‘White Nights,’ which were essentially suicide drills. Jones would pass out wine and then announce later that the wine had been poisoned and that they would all soon die. To test his followers’ faith, Jones asked them whether they were ready to die for the Cause. One time, the membership was even asked to vote on its own fate. Later in the evening, Jones would announce, ‘Well, it was a good lesson, I see you’re not dead.’ One ex-member recounted how these White Nights affected him and other followers:

[Jones] made it sound like we needed the 30 minutes to do very strong, introspective type of thinking. We all felt strongly dedicated, proud of ourselves . . . [Jones] taught that it was a privilege to die for what you believed in. (Winfrey, 1979, cited in Osherow, 1984)

This brief social psychological analysis of the events leading up to the Jonestown mass suicides illustrates social influence in action. It gives a window onto the power that situational forces had to alter the internalized ideologies of Jim Jones’s followers.

Reference groups and identification

Nearly every group to which we belong has an implicit or explicit set of beliefs, attitudes, and behaviors that it considers correct. Any member of the group who strays from these social norms risks isolation and social disapproval. Through social rewards and punishments, the groups to which we belong obtain compliance from us. Groups may also pull for identification. If we respect or admire other individuals or groups, we may obey their norms and adopt their beliefs, attitudes, and behaviors in order to be like them or to identify with them.
We even experience vicarious dissonance and change our own attitudes if we see someone from a group we admire engage in inconsistent behavior (Norton, Monin, Cooper, & Hogg, 2003). Reference groups are groups with which we identify; we refer to them in order to evaluate and regulate our opinions and actions. Reference groups can also serve as a frame of reference by providing us not only with specific beliefs and attitudes but also with a general perspective from which we view the world – an ideology or set of ready-made interpretations of social issues and events. If we eventually adopt these views and integrate the group’s ideology into our own value system, the reference group will have produced internalization. The process of identification, then, can provide a bridge between compliance and internalization. An individual does not necessarily have to be a member of a reference group to be influenced by its values. For example, lower-middle-class individuals often use the middle class as a reference group. An aspiring athlete may use professional athletes as a reference group. Life would be simple if each of us identified with only one reference group. But most of us identify with several reference groups, which often leads to conflicting pressures. Perhaps the most enduring example of competing reference groups is the conflict that many young people experience between their family reference group and their university or peer reference group. The most extensive study of this conflict is Theodore Newcomb’s classic Bennington Study – an examination of the political attitudes of the entire population of Bennington College, a small, politically liberal college in Vermont. The dates of the study (1935–1939) are a useful reminder that this is not a new phenomenon. Today Bennington College tends to attract liberal students, but in 1935 most students came from wealthy conservative families. (It is also co-ed today, but in 1935 it was a women’s college.) 
More than two-thirds of the
parents of Bennington students were affiliated with the Republican Party. Most people at Bennington College were liberal during the 1930s, but this was not the reason that most of the women selected the college. Newcomb’s main finding was that with each year at Bennington, students moved further away from their parents’ attitudes and closer to the attitudes of their academic community. For example, in the 1936 presidential campaign, about 66 percent of parents favored the Republican candidate, Alf Landon, over the Democratic candidate, Franklin Roosevelt. Landon was supported by 62 percent of the Bennington freshmen and 43 percent of the sophomores, but only 15 percent of the juniors and seniors. For most of the women, increasing liberalism reflected a deliberate choice between the two competing reference groups. Two women discussed how they made this choice:

All my life I’ve resented the protection of governesses and parents. At college I got away from that, or rather, I guess I should say, I changed it to wanting the intellectual approval of teachers and more advanced students. Then I found that you can’t be reactionary and be intellectually respectable.

Becoming radical meant thinking for myself and, figuratively, thumbing my nose at my family. It also meant intellectual identification with the faculty and students that I most wanted to be like. (Newcomb, 1943, pp. 134, 131)

Note that the second woman uses the term identification in the sense that we have been using it. Note, too, how the women describe a mixture of change produced by social rewards and punishments (compliance) and change produced by attraction to an admired group that they strive to emulate (identification).

From identification to internalization

As mentioned earlier, reference groups also serve as frames of reference by providing their members with new perspectives on the world.
The Bennington community, particularly the faculty, gave students a perspective on the Depression of the 1930s and the threat of World War II that their home environments had not, and this began to move them from identification to internalization. Listen to how two other Bennington women described the process:

It didn’t take me long to see that liberal attitudes had prestige value. . . . I became liberal at first because of its prestige value; I remain so because the problems around which my liberalism centers are important. What I want now is to be effective in solving problems.

Prestige and recognition have always meant everything to me. . . . But I’ve sweat[ed] blood in trying to be honest with myself, and the result is that I really know what I want my attitudes to be, and I see what their consequences will be in my own life. (Newcomb, 1943, pp. 136–137)

Many of our most important beliefs and attitudes are probably based initially on identification. Whenever we start to identify with a new reference group, we engage in a process of ‘trying on’ a new set of beliefs and attitudes. What we ‘really believe’ may change from day to day. The first year at a university often has this effect on students, because many of the views they bring from the family reference group are challenged by students and faculty from very different backgrounds. Students often try on the new beliefs with great intensity and strong conviction, only to discard them for still newer beliefs when the first set does not quite fit. This is a natural process of growth. Although the process never really ends for people who remain open to new experiences, it is greatly accelerated during young adulthood, before the individual has formed a nucleus of permanent beliefs on which to build more slowly and less radically.
The real work of young adulthood is to evolve an ideological identity from the numerous beliefs and attitudes that are tested in order to move from identification to internalization. As noted earlier, one advantage of internalization over compliance is that the changes are self-sustaining. The original source of influence does not have to monitor the individual to maintain the induced changes. The test of internalization, therefore, is the long-term stability of the induced beliefs, attitudes, and behaviors. Was the identification-induced liberalism of Bennington women maintained when the students returned to the ‘real world’? The answer is yes. Two follow-up studies conducted 25 and 50 years later found the women had remained liberal. For example, in the 1984 presidential election, 73 percent of Bennington alumnae preferred the Democratic candidate, Walter Mondale, over the Republican candidate, Ronald Reagan, compared with less than 26 percent of women of the same age and educational level. Moreover, about 60 percent of Bennington alumnae were politically active, most (66 percent) within the Democratic Party (Alwin, Cohen, & Newcomb, 1991; Newcomb et al., 1967). We never outgrow our need for identification with supporting reference groups. The political attitudes of Bennington women remained stable partly because after college they selected new reference groups that supported the attitudes they had developed in college. Those who married more conservative men were more likely to be politically conservative in later life. As Newcomb noted, we often select our reference groups because they share our attitudes, and our reference groups, in turn, help develop and sustain our attitudes. The relationship is bidirectional. The distinction between identification and internalization is a useful one for understanding social
influence, but in practice it is not always possible to disentangle them. Intriguingly, bicultural and bilingual individuals provide yet another example of conflicting reference groups. Contemporary social psychologists have studied these individuals, who navigate their daily life with reference to two distinct sets of cultural values (Hong, Morris, Chiu & Benet-Martinez, 2000). Language turns out to be a pivotal situational cue. Studies of Hong Kong biculturals found that when choices are presented in Cantonese, Hong Kong biculturals become motivated to conform to the norms of Chinese culture, like moderation and compromise. But when the same choices are presented in English, norms of decisiveness and risk-taking exert a greater pull (Briley, Morris, & Simonson, 2005). Here again we see that the situational triggers of social influence can be subtle.

INTERIM SUMMARY

- Cognitive dissonance theory suggests that when people’s behavior conflicts with their attitudes it creates an uncomfortable tension that motivates them to change their attitudes to be more in line with their actions. This is one explanation for the process of rationalization, or self-justification.
- Self-perception theory challenged cognitive dissonance theory by stating that inner turmoil does not necessarily occur. To the extent that internal cues are weak, ambiguous, or uninterpretable, people may simply infer their attitudes from their past behavior.
- The phenomenon of self-justification is well-documented and has been shown to exist in nonhuman animals, suggesting that it does not rely on complex conscious thought.
- In the process of identification, we obey the norms and adopt the beliefs, attitudes, and behaviors of groups that we respect and admire. We use such reference groups to evaluate and regulate our opinions and actions.
• A reference group can regulate our attitudes and behavior by administering social rewards and punishments or by providing a frame of reference, a ready-made interpretation of events and social issues.
• Most people identify with more than one reference group, which can lead to conflicting pressures on beliefs, attitudes, and behaviors. University students frequently move away from the views of their family reference group toward the academic reference group. These new views are usually sustained in later life because (1) they become internalized and (2) after university we tend to select new reference groups that share our views.

For more Cengage Learning textbooks, visit www.cengagebrain.co.uk

CRITICAL THINKING QUESTIONS
1 Rites of passage or initiation rituals are common in young adulthood across many cultures. Explain how these rituals capitalize on people's tendencies to self-justify. What are the outcomes of the self-justification process? How would cognitive dissonance theory and self-perception theory differ in their explanations of this process? How could you explain the self-justification spawned by initiation rituals using simpler, perhaps unconscious mental habits?
2 Can you identify any changes in your beliefs and attitudes that have come about by being exposed to a new reference group?

GROUP INTERACTIONS
So far in our discussions of social influence and the power of situations, we have emphasized the effects of these forces on lone individuals. Among the questions we've addressed are: How and why is an individual's performance affected by the presence of others? How and why is an individual's public behavior shaped by a group's unanimity? How and why do an individual's private attitudes change following social influence? In this section, our focus changes from lone individuals to groups of people. We will look at group interactions more generally to understand the dynamics and outcomes of group processes.
Institutional norms
Group interactions are often governed by institutional norms. Institutional norms are like social norms – implicit or explicit rules for acceptable behavior and beliefs – except that they apply to entire institutions, or organizations of the same type, like schools, prisons, governments, or commercial businesses. Group interaction patterns within these settings can often become 'institutionalized,' meaning that behavioral expectations are prescribed for people who occupy particular roles – roles like employee or boss, politician or military officer. Under these circumstances, behavior depends more on particular role expectations than on the individual character of the person who occupies the role. In other words, institutional settings are another potent situation that influences human behavior. A famous study showing just how potent institutional norms can be is the Stanford Prison Experiment, directed by Philip Zimbardo. Zimbardo and his colleagues were interested in the psychological processes involved in taking the roles of prisoner and prison guard. They created a simulated prison in the basement of the Psychology
Department at Stanford University and placed an ad in a local newspaper for participants to take part in a psychological experiment for pay. From the people who responded to the ad, they selected 24 ‘mature, emotionally stable, normal, intelligent white male college students from middle-class homes throughout the United States and Canada’. None had a prison record, and all seemed very similar in their values. By the flip of a coin, half were assigned to be prison guards and half to be prisoners. The ‘guards’ were instructed about their responsibilities and made aware of the potential danger of the situation and their need to protect themselves. The ‘prisoners’ were unexpectedly picked up at their homes by a mock police car, handcuffed, and taken blindfolded to the improvised jail, where they were searched, deloused, fingerprinted, given numbers, and placed in ‘cells’ with two other prisoners. The participants had signed up for the sake of the money, and all expected to be in the experiment for about two weeks. But by the end of the sixth day the researchers had to abort the experiment because the results were too frightening to allow them to continue. As Zimbardo explained: It was no longer apparent to most of the [participants] (or to us) where reality ended and their roles began. The majority had indeed become prisoners or guards, no longer able to clearly differentiate between role playing and self. There were dramatic changes in virtually every aspect of their behavior, thinking, and feeling. In less than a week the experience of imprisonment undid (temporarily) a lifetime of learning; human values were suspended, self-concepts were challenged, and the ugliest, most base, pathological side of human nature surfaced. 
We were horrified because we saw some boys (guards) treat others as if they were despicable animals, taking pleasure in cruelty, while other boys (prisoners) became servile, dehumanized robots who thought only of escape, of their own individual survival, and of their mounting hatred for the guards. (1972, p. 243)

Far faster and more thoroughly than the researchers thought possible, 'the experiment had become a reality'. The Stanford Prison Experiment is a demonstration of the extraordinary power of situations. It also illustrates the power of institutional norms within prison-like settings. Keep in mind that the participants were randomly assigned to the roles of prisoner and guard. Nothing in their character or backgrounds, then, could explain their behavior. Even though those playing the roles of guard and prisoner were essentially free to interact in any way they wished, the group's interactions tended to be negative, hostile, and dehumanizing, a pattern remarkably similar to interactions in actual prisons. These findings suggest that the situation itself – the very institution of prison – is so pathological that it can distort and rechannel the behavior of normal individuals. It's been more than 30 years since the Stanford Prison Experiment was conducted. Has prison policy benefited from it? Have institutional norms and practices within prisons improved? Unfortunately not, at least not U.S. practices. Indeed, Zimbardo has argued that U.S. criminal justice policies have turned a blind eye to the lessons of his well-known experiment about the power of situations within prisons (Zimbardo, 2007; see also Haney & Zimbardo, 1998). As just one example, Zimbardo (2007) details the chilling parallels between the Stanford Prison Experiment and the cruel, inhumane treatment of prisoners by U.S. military personnel in the Iraqi prison at Abu Ghraib.

These photos were taken of participants in the now-famous Stanford Prison Experiment.
The results demonstrated that group interactions are often shaped by powerful institutional norms. Here we see that prison norms pulled for dehumanizing and violent behavior from guards, and servile and despondent behavior from prisoners.
The Stanford Prison Experiment remains a lively topic of debate for other reasons as well. Following a trend toward 'reality TV,' the BBC filmed a replication of Zimbardo's famous study, broadcasting it in 2002 as 'The Experiment' (Reicher & Haslam, 2006; see also Zimbardo, 2006; Haslam & Reicher, 2006). The results were altogether different from those Zimbardo obtained in the early 1970s. Indeed, in the BBC prison study, the prisoners quickly came to dominate the guards, and it was the guards, not the prisoners, who became depressed, stressed, and paranoid. At one point, the prisoners and guards even joined together to form one harmonious group, akin to a commune. Zimbardo has sharply criticized the BBC project, calling it both irresponsible and unscientific (Zimbardo, 2006). He points out numerous differences between his original work and this made-for-television replication, including the constant and apparent recording by the film crew. Such surveillance and public accountability alone, Zimbardo contends, would eliminate prisoner abuses.

Group decision making
Many decisions are made not by individuals but by groups. Members of a family jointly decide where to spend their vacation; a jury judges a defendant to be guilty; a city council votes to raise property taxes. How do such decisions compare with those that might have been made by individual decision makers? Are group decisions better or worse, riskier or more cautious? These are the kinds of questions that concern us in this section.

Group polarization
In the 1950s, it was widely believed that decisions made by groups were typically cautious and conservative. For example, it was argued that because business decisions were increasingly being made by committees, the bold, innovative risk taking of entrepreneurs like Andrew Carnegie was a thing of the past (Whyte, 1956). James Stoner, then a graduate business student at MIT, decided to test this assumption (Stoner, 1961).
In Stoner's study, participants were asked to consider a number of hypothetical dilemmas. In one, an electrical engineer must decide whether to stick with his present job at a modest but adequate salary or take a job with a new firm offering more money, a possible partnership in the venture if it succeeds, but no long-term security. In another, a man with a severe heart ailment must seriously curtail his customary way of life or else undergo a medical operation that would either cure him completely or prove fatal. Participants were asked to decide how good the odds of success would have to be before they would advise the person to try the riskier course of action. For example, they could recommend that the engineer take the riskier job if the chances that the new venture would succeed were 5 in 10, 3 in 10, or only 1 in 10. By using numerical odds like these, Stoner was able to compare the riskiness of different decisions quantitatively.

Group polarization often occurs in juries, especially when they are required to reach unanimous decisions.

Participants first made their decisions alone, as individuals. They then met in groups and arrived at a group decision for each dilemma. After the group discussion, they again considered the dilemmas privately as individuals. When Stoner compared the group's decisions with the average of the individuals' pregroup decisions, he found that the group's decisions were riskier than the individuals' initial decisions. Moreover, this shift reflected genuine opinion change on the part of group members, not just public conformity to the group decision: The private individual decisions made after the group discussion were significantly riskier than the initial decisions.
These findings were replicated by other researchers, even in situations that presented real rather than hypothetical risks (Bem, Wallach, & Kogan, 1965; Wallach, Kogan, & Bem, 1962, 1964). The phenomenon was initially called the risky shift effect. This turned out not to be an accurate characterization, however. Even in the early studies, group decisions tended to shift slightly but consistently in the cautious direction on one or two of the hypothetical dilemmas (Wallach, Kogan, & Bem, 1962). The phenomenon is now called the group polarization effect because after many more studies it became clear that group discussion leads to decisions that are not necessarily riskier but are more extreme than the individual decisions. If group members are initially inclined to take risks on a particular dilemma, the group’s decisions will become riskier; if group members are initially inclined to be cautious, the group will be even more cautious (Myers & Lamm, 1976). More than 300 studies of the group polarization effect have been conducted, with a dazzling array of variations. For example, in one study, active burglars actually cased houses and then provided individual and group estimates of how easy each would be to burglarize. Compared with the individual estimates, the group estimates were more
conservative; that is, they rated the homes to be more difficult to break into successfully (Cromwell, Marks, Olson, & Avary, 1991). Group polarization extends beyond issues of risk and caution. For example, group discussion caused French students' initially positive attitudes toward the country's premier to become even more positive and their initially negative attitudes toward Americans to become even more negative (Moscovici & Zavalloni, 1969). Jury decisions can be similarly affected, leading to more extreme verdicts (Isozaki, 1984). Polarization in juries is more likely to occur on judgments concerning values and opinions (such as deciding on an appropriate punishment for a guilty defendant) than on judgments concerning matters of fact (such as the defendant's guilt), and juries are most likely to show polarization when required to reach unanimous decisions (Kaplan & Miller, 1987). Many explanations for the group polarization effect have been offered over the years, but the two that have stood up best to intensive testing refer to the concepts of informational social influence and normative social influence that we considered earlier in our discussion of conformity to a majority (Isenberg, 1986). Recall that informational social influence occurs when people see others as valid sources of information. During group discussions, members learn new information and hear novel arguments relevant to the decision under discussion. For example, in discussing whether the electrical engineer should go with the new venture – a decision that almost always shifts in the risky direction – it is quite common for someone in the group to argue that riskiness is warranted because electrical engineers can always find good jobs. A shift in the conservative direction occurred in the burglar study after one member of the group noted that it was nearly 3 p.m. and children would soon be returning from school and playing nearby.
The more that arguments are raised in support of a position, the more likely it is that the group will move toward that position. And this is where the bias enters: Members of a group are most likely to present points in support of the position they initially favor and to discuss information they already share (Stasser, Taylor, & Hanna, 1989; Stasser & Titus, 1985). Accordingly, the discussion will be biased in favor of the group's initial position, and the group will move toward that position as more of the group members become convinced. Interestingly, the polarization effect still occurs, even when all participants are given an extensive list of arguments before the experiment begins – a finding that casts doubt on explanations based solely on informational social influence (Zuber, Crott, & Werner, 1992). Normative social influence, you will recall, occurs when people want to be liked and accepted by a group. Under this type of social influence, people compare their own views with the norms of the group. During the discussion, they may learn that others have similar attitudes or even more extreme views than they themselves do. If they are motivated to be seen positively by the group, they may conform to the group's position or even express a position that is more extreme than the group's. As one researcher noted, 'To be virtuous . . . is to be different from the mean – in the right direction and to the right degree' (Brown, 1974, p. 469). But normative social influence is not simply pressure to conform. Often the group provides a frame of reference for its members, a context within which they can reevaluate their initial positions. This is illustrated by a common and amusing event that frequently occurs in group polarization experiments.
For example, in one group a participant began the discussion of the dilemma facing the electrical engineer by confidently announcing, 'I feel this guy should really be willing to take a risk here. He should go with the new job even if it has only a 5 in 10 chance of succeeding.' Other group members were incredulous: 'You think that 5 in 10 is being risky? If he has any guts, he should give it a shot even if there is only 1 chance in 100 of success. I mean, what has he really got to lose?' Eager to reestablish his reputation as a risk taker, the original individual quickly shifted his position further in the risky direction. By redefining 'risky,' the group moved both its own decision and its members' postdiscussion attitudes further toward the risky extreme of the scale (Wallach, Kogan, & Bem, 1962; from the authors' notes). As this example illustrates, both informational and normative social influence occur simultaneously in group discussions, and several studies have attempted to untangle them. Some studies have shown that the group polarization effect occurs if participants simply hear the arguments of the group, without knowing the actual positions of other members of the group (Burnstein & Vinokur, 1973, 1977). This demonstrates that informational social influence by itself is sufficient to produce polarization. Other studies have shown that the polarization effect also occurs when people learn others' positions but do not hear any supporting arguments, demonstrating that normative social influence by itself is sufficient (Goethals & Zanna, 1979; Sanders & Baron, 1977). Typically, however, the effect of informational social influence is greater than the effect of normative social influence (Isenberg, 1986).

Groupthink
'How could we have been so stupid?' This was U.S. President John Kennedy's reaction to the disastrous failure of his administration's attempt to invade Cuba at the Bay of Pigs in 1961 and overthrow the government of Fidel Castro.
The plan was badly conceived at many levels. For example, if the initial landing was unsuccessful, the invaders were supposed to retreat into the mountains. But no one in the planning group had studied the map closely enough to realize that no army could have gotten through the 80 miles of swamp that separated the
mountains from the landing area. As it turned out, this didn't matter, because other miscalculations caused the invading force to be wiped out long before the retreat would have taken place. The invasion had been conceived and planned by the president and a small group of advisers. Writing four years later, one of these advisers, the historian Arthur Schlesinger Jr., blamed himself for having kept so silent

during those crucial discussions in the Cabinet Room, though my feelings of guilt were tempered by the knowledge that a course of objection would have accomplished little save to gain me a name as a nuisance. I can only explain my failure to do more than raise a few timid questions by reporting that one's impulse to blow the whistle on this nonsense was simply undone by the circumstances of the discussion. (1965, p. 255)

What were the 'circumstances of the discussion' that led the group to pursue such a disastrous course of action? After reading Schlesinger's account, social psychologist Irving Janis introduced the term groupthink to describe the phenomenon in which members of a group are led to suppress their own dissent in the interests of group consensus (Janis, 1982). After analyzing several other foreign policy decisions, Janis set forth a broad theory to describe the causes and consequences of groupthink. Groupthink, according to Janis's theory, is caused by (1) a cohesive group of decision makers, (2) isolation of the group from outside influences, (3) no systematic procedures for considering both the pros and cons of different courses of action, (4) a directive leader who explicitly favors a particular course of action, and (5) high stress, often due to an external threat, recent failures, moral dilemmas, and an apparent lack of viable alternatives. The theory suggests that these conditions foster a strong desire to achieve and maintain group consensus and avoid rocking the boat by dissenting.
Janis argued that the consequences or symptoms of groupthink include (1) shared illusions of invulnerability, morality, and unanimity, (2) direct pressure on dissenters, (3) self-censorship (as Schlesinger's account notes), (4) collective rationalization of a decision rather than realistic examination of its strengths and weaknesses, and (5) self-appointed mindguards, group members who actively attempt to prevent the group from considering information that would challenge the effectiveness or morality of its decisions. For example, the attorney general (President Kennedy's brother Robert) privately warned Schlesinger, 'The President has made his mind up. Don't push it any further.' The secretary of state also withheld information that had been provided by intelligence experts who warned against an invasion of Cuba (Janis, 1982). Janis proposed that these symptoms of groupthink combine to produce damaging flaws in the decision-making process – like incomplete information search and failure to develop contingency plans – which in turn lead to bad decisions. Janis's theory of groupthink has been extremely influential within social psychology, across the social sciences, and within the culture at large (Turner & Pratkanis, 1998b). Yet it has also received sharp criticism (such as Fuller & Aldag, 1998). First, it is based more on historical analysis of select cases than on laboratory experimentation. Second, the few dozen experiments that have tested the theory have produced only mixed and limited support (Callaway, Marriott, & Esser, 1985; Courtright, 1978; Flowers, 1977; Longley & Pruitt, 1980; McCauley, 1989; Turner, Pratkanis, Probasco, & Lever, 1992). As just one example, Janis's claim that cohesive groups are most likely to succumb to groupthink has not stood up to empirical test.
Cohesive groups may, in fact, provide a sense of psychological safety, which has been shown to improve group learning and performance (Edmondson, 1999). One later reformulation of the theory, supported by experimental data, argues that group cohesion yields poor decisions only when combined with threats to the group's positive image of itself. Faced with such threats, group members narrow their focus of attention to the goal of protecting and maintaining their positive group identity, a focus that often comes at the cost of effective decision making (Turner & Pratkanis, 1998a). Another reformulation of the theory states that the presence or absence of groupthink depends on the specific content of a group's social norms. Recall that social norms are implicit or explicit rules for acceptable behavior and beliefs. In some cases, group norms favor maintaining consensus, and in these cases, the adverse effects of groupthink should take hold, resulting in poor-quality decisions. In other cases, group norms favor critical thinking, and in these cases, group discussion should actually improve decision quality. A recent experiment tested these ideas (Postmes, Spears, & Cihangir, 2001). In it, the researchers manipulated group norms by randomly assigning several groups (of four college students each) to engage either in a task that fostered a norm of consensus seeking (making a poster together) or in a task that fostered a norm of critical thinking (discussing an unpopular policy proposal). Next, all groups participated in an unrelated group decision task. The researchers assessed the quality of decisions both before and after the group discussion. Figure 17.8 portrays the results. Inspection of Figure 17.8 shows that when the group norm favored consensus, group discussion did little to improve decision quality, yet when the group norm favored critical thinking, group discussion improved decision quality dramatically.
You might notice that this reformulation of groupthink echoes the reformulation of deindividuation described earlier: In both cases, situation-specific social norms guide behavior more strongly than do general features of the group, like group cohesion or personal anonymity.
Figure 17.8 Group Norms and the Effectiveness of Group Decisions. Group norms can influence the quality of group decisions. In this experiment, the decisions made by groups with norms for consensus seeking did not benefit from group discussion, whereas those with norms for critical thinking did. (Adapted from Figure 1, p. 923, in T. Postmes, R. Spears, & S. Cihangir (2001), 'Quality of decision making and group norms,' Journal of Personality and Social Psychology, 80, 918–930. Copyright © 2001 by the American Psychological Association. Adapted with permission.)

This study on group norms spotlights one way to minimize the damaging effects of groupthink: Fostering norms for critical thinking – as a university education intends – should produce better group decisions. Other ways to improve group decisions include providing groups with trained facilitators who encourage a full sharing of ideas and alternating private idea-generating sessions with group sessions. Another beneficial strategy is to make the group a heterogeneous mix of people. A diverse group is more likely than a homogeneous group to generate a wide range of ideas (Paulus, 1998). Diversity within groups has other benefits as well. Surveys of university students in the U.S. have found that students of all ethnic backgrounds – European American, African American, Asian American, and Hispanic American – reach higher levels of intellectual engagement and ability when their classrooms reflect ethnic diversity and when they interact informally with diverse peers outside of class. In addition, ethnic diversity on campus fosters perspective-taking and other skills that aid democracy (Gurin, Dey, Hurtado, & Gurin, 2002). Yet despite the evidence that ethnic diversity produces better individual and group outcomes, the value of affirmative action policies continues to be hotly debated.
Two social psychological sides of this debate are featured in the Seeing Both Sides section at the end of this chapter. Many of the ideas raised within these essays – like how we decide what caused our own or someone else's success and the self-fulfilling nature of prejudicial stereotypes – will be addressed further in Chapter 18, our second in this two-chapter series on social psychology.

INTERIM SUMMARY
• Institutions have norms that strongly govern the behavior of people who occupy critical roles within the institution. An example of how institutional norms shape group interactions is provided by the Stanford Prison Experiment, in which ordinary young men were randomly assigned roles of 'prisoner' and 'guard' in a simulated prison.
• When groups make decisions, they often display group polarization: The group decision is in the same direction but is more extreme than the average of the group members' initial positions. This is not just public conformity; group members' private attitudes typically shift in response to the group discussion as well.
• The group polarization effect is due in part to informational social influence, in which group members learn new information and hear novel arguments that are relevant to the decision under discussion. Group polarization is also produced by normative social influence, in which people compare their own initial views with the norms of the group. They may then adjust their position to conform to that of the majority.
• An analysis of disastrous foreign policy decisions led to a proposal that cohesive groups of decision makers can fall into the trap of groupthink, in which members of the group suppress their own dissenting opinions in the interest of group consensus. Later research suggests that group cohesion is not so much the problem, but rather threats to the group's positive identity and group norms of consensus seeking.
• Evidence suggests that group outcomes can be improved by fostering norms of critical thinking and promoting group diversity.

CRITICAL THINKING QUESTIONS
1 How are institutional norms, like those that operated in the Stanford Prison Experiment, communicated to new institutional members? What roles might informational and normative social influence and pluralistic ignorance play in the process of getting new members to conform to institutional norms?
2 Discuss how informational and normative social influence might produce group polarization in a jury's deliberations. How might groupthink operate to affect such deliberations? Can you think of a specific trial in which some of these phenomena appear to have been present?
SEEING BOTH SIDES
ARE THE EFFECTS OF AFFIRMATIVE ACTION POSITIVE OR NEGATIVE?

Negative aspects of affirmative action
Madeline E. Heilman, New York University

Most people would say that rewards should be given according to merit. What happens, then, when people get rewarded not because of their accomplishments but because of who they are or what group they belong to? Many people, perhaps including yourself, react negatively. This is the heart of the affirmative action dilemma. While created to ensure nondiscriminatory treatment of women and minorities, affirmative action has come to be seen as little more than preferential selection and treatment without regard to merit (Haynes & Heilman, 2004). This, of course, may not depict reality, but it is this perception of affirmative action that is so problematic. There are a number of detrimental consequences. First, affirmative action (sometimes referred to as 'positive discrimination' in the UK) can stigmatize its intended beneficiaries, causing inferences of incompetence. If you believe that someone has been the beneficiary of preferential selection based on non-merit criteria, then you are likely to 'discount' that individual's qualifications. In fact, you are likely to make the assumption that this person would not have been selected without the help of affirmative action. There has been research linking affirmative action with incompetence inferences (Garcia, Erskine, Hawn, & Casmay, 1981; Heilman, Block, & Lucas, 1992). It has been conducted in the laboratory, where people review employee records, and in the field, where people are asked to evaluate co-workers in their work units. Inferences of incompetence have been found whether the target beneficiary is a woman or a member of a racial minority, and whether the research participants are male or female, or students or working people (Heilman, Block, & Stathatos, 1997).
These inferences of incompetence have been found even when affirmative action is not explicitly indicated, but is assumed, such as when women or blacks have been selected as part of a 'diversity initiative' (Heilman & Welle, 2006). In fact, affirmative action often is assumed – especially when the selection of a woman or minority group member is unusual – and people who have not even benefited from affirmative action are nonetheless victimized by stigmatization (Heilman & Blader, 2001). A second negative consequence of affirmative action concerns non-beneficiaries. When women and minorities are believed to be preferentially selected, those who traditionally would have been selected for jobs often feel they are really the more deserving, and consequently, they feel unfairly bypassed (Nacoste, 1990). This has been suggested as a major reason for the 'backlash' against affirmative action. Evidence indicates that there are indeed unfortunate byproducts of feeling unfairly bypassed by affirmative action. In one study, male participants were paired with a female who subsequently was preferentially selected for the more desirable task role on the basis of her gender (Heilman, McCullough, & Gilbert, 1996). Those who believed themselves to be more skilled than (or equally skilled as) the female reported being less motivated, more angry, and less satisfied than those who were told the female was the more skilled and therefore the more deserving of the two. The third negative consequence of affirmative action concerns its potential effect on the intended beneficiary. Ironically, affirmative action may sometimes hurt those it was intended to help. When people believe that they have been preferentially selected on the basis of irrelevant criteria there can be a chilling effect on self-view.
A series of laboratory experiments in which participants were selected for a desired task role (leader) either on the basis of merit or preferentially on the basis of their gender found strong support for the idea that preferential selection can trigger negative self-regard. In repeated studies, women, but not men, who were preferentially selected were found to rate their performance more negatively, view themselves as more deficient in leadership ability, be more eager to relinquish their desirable leadership role, and shy away from demanding and challenging tasks (see Heilman & Haynes, 2006, for a review of these studies). There is also evidence that the negative self-view prompted by preferential selection can adversely affect task performance, as is demonstrated in research involving problem solving (Brown, Charnsangavej, Newman, & Rentfrow, 2000). Lastly, intended beneficiaries of affirmative action are burdened with the expectation that others have stigmatized them as incompetent, and this can affect their willingness to take on challenges that involve the risk of failure but nonetheless are essential for their career progress (Heilman & Alcott, 2001). Given these consequences, it appears that affirmative action, as it currently is understood, can undermine its own objectives. The stigma associated with affirmative action is apt to fuel rather than discredit stereotypic thinking and prejudiced attitudes. Depriving individuals of the satisfaction and pride that comes from knowing that they have achieved something on their own merits can be corrosive, decreasing self-efficacy and fostering self-views of inferiority. It can also create anxieties about fulfilling others’ negative expectations, detrimentally affecting performance. And the frustration resulting from feeling unfairly bypassed for employment opportunities because one does not fit into the correct demographic niche can aggravate workplace tensions and intergroup hostilities.
So, paradoxically, despite its success in expanding employment opportunities for women and minorities, affirmative action may contribute to the very conditions that gave rise to the problems it was designed to remedy.
SEEING BOTH SIDES ARE THE EFFECTS OF AFFIRMATIVE ACTION POSITIVE OR NEGATIVE? The benefits of affirmative action Faye J. Crosby, University of California, Santa Cruz Few Americans understand how affirmative action operates (Crosby, 2004). According to the American Psychological Association (APA): ‘Affirmative action occurs when an organization expends energy to make sure there is no discrimination in employment or education and, instead, equal opportunity exists’ (APA, 1995, p. 5). Affirmative action goes beyond reactive policies that passively endorse justice but wait until a conflict has erupted before enacting corrective measures. Affirmative action law in the U.S. began in earnest for businesses in 1965, applying to all U.S. government agencies and most organizations that do contract work with the federal government. Today one in five employed Americans works for an affirmative action employer. Using well-established methods for making their calculations, affirmative action employers monitor themselves to make sure they employ qualified people from the ‘targeted classes’ in proportion to their availability. To see how the system works in the U.S., think about your professors as employees of your school. Imagine that 10 percent of the social science professors in your school are women (utilization is 10 percent) and that 30 percent of PhDs in the social sciences are women (so that availability is 30 percent). Because utilization falls short of availability, the employer would look for barriers to women’s hiring and advancement. Detected problems can be corrected using flexible goals (not rigid quotas) and realistic timetables. Support for affirmative action in U.S. employment is very strong among business leaders. During a set of U.S. Supreme Court cases in 2003, leaders from major corporations submitted a brief in favor of affirmative action (Smith & Crosby, 2008). Their support derived from the fact that the policy has assured business profits while increasing diversity. Affirmative action remains much debated in educational contexts.
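The utilization/availability monitoring described above is, at bottom, a comparison of two proportions. The following sketch is purely illustrative (the function name, head counts, and flagging logic are assumptions for teaching, not the legally prescribed calculation):

```python
def utilization_gap(employed_in_group, total_employed,
                    available_in_pool, total_pool):
    """Compare a group's share of current employees (utilization)
    with its share of the qualified labor pool (availability)."""
    utilization = employed_in_group / total_employed
    availability = available_in_pool / total_pool
    return utilization, availability, availability - utilization

# The example in the text: 10% of social science professors are women,
# while women earn 30% of social science PhDs.
util, avail, shortfall = utilization_gap(3, 30, 30, 100)
print(f"utilization={util:.0%}, availability={avail:.0%}, shortfall={shortfall:.0%}")
```

A positive shortfall, as in this example, is what would prompt an employer to investigate and set flexible goals and realistic timetables rather than quotas.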
Why, people ask, should an applicant with lower scores be admitted to school just because he or she is an ethnic minority? Isn’t affirmative action just racial profiling in reverse? While such criticisms of race-sensitive educational policies seem reasonable, they are based on a set of false assumptions. To argue that we should accord rewards strictly according to a set of scores is to assume that the scores are themselves unbiased. In fact, the admissions criteria often privilege some groups over others in subtle ways. Consider admission to the University of California. One criterion for admissions is the applicant’s high school grade point average (GPA), with extra points given for Advanced Placement (AP) courses. Thus, an ‘A’ in a regular course gives the applicant a score of 4, while an ‘A’ in an AP course gives the applicant a score of 5, and so on. In-depth study has shown how this reasonable-sounding policy gives an undeserved boost to white applicants. High schools serving white neighborhoods offer many more AP courses than do high schools that serve ethnic minority students. Yet, surprisingly, the GPA is equally predictive of college grades when the GPA is calculated without granting the bump as when the GPA is calculated with the bump. From the point of view of predicting who will succeed at the UC, there is absolutely no reason to give extra points for AP courses. The AP bump is only one of several practices that have been found to disadvantage minority applicants in non-obvious ways (Crosby, Iyer, Clayton, & Downing, 2003). Some critics argue that affirmative action makes students of color feel stigmatized and irritates whites (Steele, 1991). Nobody likes to be told that he or she is advancing through unjustified preferential treatment, rather than merit (Heilman, 1994).
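The AP bump can be made concrete with a small worked example. This sketch is illustrative only (the course lists and helper function are hypothetical); it shows how identical grades yield different GPAs depending on whether a school offers AP courses:

```python
GRADE_POINTS = {'A': 4, 'B': 3, 'C': 2, 'D': 1, 'F': 0}

def gpa(courses, ap_bump=True):
    """courses: list of (grade, is_ap) pairs.
    With ap_bump, grades in AP courses earn one extra point."""
    points = [GRADE_POINTS[grade] + (1 if is_ap and ap_bump else 0)
              for grade, is_ap in courses]
    return sum(points) / len(points)

# Two straight-A students; only one attends a school that offers AP courses.
ap_school    = [('A', True), ('A', True), ('A', False)]
no_ap_school = [('A', False), ('A', False), ('A', False)]

print(round(gpa(ap_school), 2))    # 4.67 with the bump
print(gpa(no_ap_school))           # 4.0 - same grades, no AP courses offered
```

Recomputing with `ap_bump=False` puts both students at 4.0, which, per the research cited above, predicts college grades just as well as the bumped version.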
However, a large number of studies have now shown that direct beneficiaries of affirmative action feel no stigma when they are given positive feedback about their performance (Iyer, 2008) or take pride in their group identity. Similarly, research shows that whites generally enjoy working or studying with people from diverse backgrounds who would have been excluded had it not been for affirmative action (Crosby, 2004). Nor does affirmative action set students of color up for failure, as some have argued (Crosby, Iyer, & Sincharoen, 2006). A landmark study looked at long-term outcomes for hundreds of black students who had been admitted through affirmative action to 24 elite U.S. colleges in 1951, 1976, and 1989. The black students graduated from school and obtained advanced degrees at rates comparable to white students. And even more than white alumni/ae, black graduates became civic leaders – giving back to the society that had nurtured them (Bowen & Bok, 1998). A provocative study of law-school admissions (Sander, 2004) has challenged the conclusions of Bowen and Bok, but aspects of that study remain contested (Gills, Schmukler, Azmitia, & Crosby, 2007). Further studies have shown that white students benefit intellectually from being in diverse college settings (Gurin, 2004).
RECAP: SOCIAL PSYCHOLOGICAL VIEWS OF THE SEEMINGLY INEXPLICABLE

We opened this chapter with several chilling examples – drawn both from recent world events and from history – that portray seemingly inexplicable and horrifying human behavior. How does a hijacker fly a plane into a world-famous skyscraper, killing himself, his passengers, and the thousands of people working in and visiting that building? How does a religious follower decide to drink lethal poison for ‘the Cause’? How does a military official orchestrate and oversee the deaths of millions of innocent people? Although we may uncover some clues about the origins of these puzzling actions by looking to the character or personality traits of the people involved, one of the foremost lessons of social psychology is that stopping our inquiry at this level is a mistake, one that goes by the name of the fundamental attribution error. To build a more complete understanding of any form of human social behavior – from the extraordinary to the everyday – we also need to search for clues within situational forces. Better yet, we need to look at the complex interplay between people’s identities, goals, and desires and the ever-shifting situational landscapes they inhabit. There is no question that social influence and strong situations can shape people’s behavior in surprising and sometimes alarming ways. As you have seen, social psychologists dissect situations to uncover the particular tools of social influence at work. These tools include, among others, situation-specific social norms, pluralistic ignorance, informational and normative social influence, salient role models, internalized ideologies, self-justification, and group polarization. Knowing how these and other social psychological concepts operate can help explain behavior that at first seems inexplicable. Many human actions may in fact be inexplicable from the exclusive perspective of personality psychology, but social psychology illuminates a different vantage point altogether. In Chapter 18, our second in this two-chapter series on social psychology, we take a closer look at the subjective inner workings of people as they make sense of the social world around them. There you will be introduced to the topics and concepts of social cognition.

CRITICAL THINKING QUESTIONS

1 Now that you are acquainted with several types of social influence that have been used to explain people’s behavior from a social psychological perspective, what sorts of clues about particular situations would you look for to explain extreme behavior, like the latest activities of suicide terrorists?

2 Think of an example from your own experience when, while trying to explain what caused an acquaintance to behave in a particular way, you may have committed the fundamental attribution error. What was your initial explanation based on personality or character? What is a possible explanation that makes reference to situational influences?

CHAPTER SUMMARY

1 One of the foremost lessons within social psychology is that situational forces have tremendous power to shape human behavior. A related lesson is that these powerful situational forces are often invisible, and we mistakenly make sense of people’s behavior by referring to their personality or character. This mistake is so common that social psychologists call it the fundamental attribution error.

2 Both humans and animals respond more quickly when in the presence of other members of their species. This social facilitation occurs whether the others are performing the same task (coactors) or simply watching (an audience). The presence of others appears to narrow people’s attention. This facilitates the correct performance of simple responses but hinders the performance of complex ones. For humans, cognitive factors such as concern with evaluation also play a role.
3 The uninhibited aggressive behavior sometimes shown by mobs and crowds may be the result of a state of deindividuation, in which individuals feel that they have lost their personal identities and merged into the group. Both anonymity and group size contribute to deindividuation. A consequence of deindividuation is an increased sensitivity to situation-specific social norms linked with the group. This can increase aggression when the group’s norms are aggressive but reduce aggression when the group norms are benign.
4 A bystander to an emergency is less likely to intervene or help if in a group than if alone. Two major factors that deter intervention are defining the situation and diffusion of responsibility. By attempting to appear calm, bystanders may define the situation for one another as a nonemergency, thereby producing a state of pluralistic ignorance. The presence of other people also diffuses responsibility so that no one person feels the necessity to act. Bystanders are more likely to intervene when these factors are minimized, particularly if at least one person begins to help.

5 In a series of classic studies on conformity, Solomon Asch found that a unanimous group exerts strong pressure on an individual to conform to the group’s judgments – even when those judgments are clearly wrong. Much less conformity is observed if even one person dissents from the group.

6 A minority within a larger group can move the majority toward its point of view if it maintains a consistent dissenting position without appearing to be rigid, dogmatic, or arrogant. Minorities sometimes obtain private attitude change from majority members even when they fail to obtain public conformity.

7 In a series of classic studies on obedience, Stanley Milgram demonstrated that ordinary people would obey an experimenter’s order to deliver strong electric shocks to an innocent victim. Factors conspiring to produce the high obedience rates include surveillance by the experimenter, buffers that distance the person from the consequences of his or her acts, the emerging properties of situations, and the legitimating role of science, which leads people to abandon their autonomy to the experimenter. There has been considerable controversy about the ethics of the experiments themselves.

8 One way that people come to internalize attitudes and beliefs that are consistent with their actions is through the processes of self-justification.
Cognitive dissonance theory suggests that when people’s behavior conflicts with their attitudes, it creates an uncomfortable tension that motivates them to change their attitudes to be more in line with their actions. Self-perception theory challenged this view by stating that inner turmoil does not necessarily occur. To the extent that internal cues are weak, ambiguous, or uninterpretable, people may simply infer their attitudes from their past behavior. Recent experiments with amnesiacs and monkeys call into question whether complex reasoning is necessary for self-justification processes to unfold.

9 In the process of identification, we obey the norms and adopt the beliefs, attitudes, and behaviors of groups that we respect and admire. We use such reference groups to evaluate and regulate our opinions and actions. A reference group can regulate our attitudes and behavior by administering social rewards and punishments or providing a frame of reference, a ready-made interpretation of events and social issues. Most people identify with more than one reference group, which can lead to conflicting pressures on beliefs, attitudes, and behaviors. University students frequently move away from the views of their family reference group toward the college reference group. These new views are usually sustained in later life because (1) they become internalized and (2) after university we tend to select new reference groups that share our views.

10 When groups make decisions, they often display group polarization: The group decision is in the same direction but is more extreme than the average of the group members’ initial positions. This is not just public conformity; group members’ private attitudes typically shift in response to the group discussion as well.
The effect is due in part to informational social influence, in which group members learn new information and hear novel arguments that are relevant to the decision under discussion. Group polarization is also produced by normative social influence, in which people compare their own initial views with the norms of the group. They may then adjust their position to conform to that of the majority.

11 An analysis of disastrous foreign policy decisions led to a proposal that cohesive groups of decision makers can fall into the trap of groupthink, in which members of the group suppress their own dissenting opinions in the interest of group consensus. Later research suggests that the problem is not so much group cohesion as threats to the group’s positive identity, combined with group norms of consensus seeking. Evidence suggests that group outcomes can be improved by fostering norms of critical thinking and promoting group diversity.
CORE CONCEPTS

diffusion of responsibility
compliance
informational social influence
normative social influence
minority influence
implicit leniency contract
ideology
debriefing
internalization
foot-in-the-door technique
fundamental attribution error
social psychology
coaction
social facilitation
social inhibition
Stroop interference
deindividuation
social norms
bystander effect
pluralistic ignorance
cognitive dissonance theory
rationalization
self-perception theory
overjustification effect
identification
reference groups
institutional norms
group polarization effect
groupthink

WEB RESOURCES

http://www.atkinsonhilgard.com/ Take a quiz, try the activities and exercises, and explore web links.
http://socialpsychology.org/social.htm A vast array of social psychology links can be found at the above website.
http://www.spring.org.uk/2007/11/10-piercing-insights-into-human-nature.php An overview of classic social psychology experiments can be found on this site.
CD-ROM LINKS

Psyk.Trek 3.0 Check out CD Unit 12, Social Psychology: 12e Conformity and obedience