Virtue & Vice
Mark Alfano (above), a new faculty member in the Department of Philosophy, is an experimental philosopher. This is an emerging area in philosophy that uses experiments to investigate philosophical topics, including the ways that moral or ethical behavior is influenced (or not) by circumstances.
One classic study: A group of seminary students were asked to prepare a sermon on the Good Samaritan—and on their way to give the sermon, each of them encountered a person in need of assistance. Would they stop and lend a hand, as the Good Samaritan did? Some stopped to help, others did not.
Presumably all of the students valued the ideal of selfless intervention on behalf of another human being. But there was a factor unrelated to virtue that was most likely to influence whether a student stopped: how pressed they were for time. In general, those who were told they were running late did not stop to help; those who perceived they had more time were more likely to offer assistance.
In his research, Alfano explores scenarios like these that probe the intersection of the internal (belief systems and values) and the external (actual behavior), and the ways that social expectations influence both. In his recently published book, Character as Moral Fiction, he argues that virtues such as courage, honesty and open-mindedness are not so much fixed traits as “social constructs” that can be molded and developed through social interactions.
Q Let’s start with a character trait like generosity—in your view, does this refer strictly to a pattern of behavior or can it also be an internal trait? In other words, can I only act generously or can I also be generous?
A It’s both, I think. There are a number of interesting experiments that are relevant. In one, researchers asked people to take what they thought was a personality test and told some of the subjects at random, “It turns out that this personality test says that you’re generous.” The researchers then gave another group the same test and told them that the test indicated that they were stingy. Later, when the study subjects had an opportunity to give money to a charity, the ones who had been told they were generous gave a lot more money.
The natural question is, “Why are they doing that?” One theory is that, as a result of being told they have this trait, they think to themselves, “Am I going to give? I am a generous person. That’s a natural thing for me to do. That sounds right.” Whereas those whose self-concept has been moved in the other direction by being told they’re stingy, might say, “Well, I guess I am kind of stingy; so, no, I’m not going to give.” Of course, you might think it would go in the other direction, where someone might react and say, “No, I’m not stingy. You don’t know me like that.” But in this experiment and others like it, behavior typically moves in the direction of the attribution.
On the one hand, this is behavioral because people end up giving or not giving. But on the other hand, it’s internal because it appears that self-concepts change among the people in these studies. And then there’s a third aspect: it’s also social because when I tell you that my personality inventory indicates that you’re generous, now you know that I expect certain things of you. I don’t think that it’s possible to divorce any of these three from the others entirely.
Q Isn’t there a danger that this shaping of thought and behavior could be used in manipulative ways?
A This kind of experiment does look sort of manipulative. But the experiment is supposed to just give us evidence for how these things work. The suggestion isn’t that we should go around giving people sham personality inventories all the time.
In fact, now that we know that telling people that they have certain traits can lead them to act in accordance with those traits, we should be very careful. We should be extremely reluctant to call people stingy or cowardly or otherwise vicious. And we should be generous about attributing virtues.
But the findings themselves are morally neutral. You can have a piece of technology and use it to do good things, or you can have the same piece of technology and use it to do bad things. It enables us; it gives us certain powers. That’s what technology does. But that doesn’t mean that it should just be used willy-nilly. Part of what I try to point out in my book is that we’re already doing this all the time without realizing it. It’s not like this is something completely foreign to human nature.
One place in particular where we do this all the time is in schools. It turns out that representing someone as having a studious disposition will tend to have exactly the same kind of effect as representing someone as having a particular moral disposition. So when teachers, for instance, unthinkingly say of a student, especially in front of others, “He’s not very hard-working” or “She’s very studious,” they’re using this kind of “moral technology,” possibly without even realizing it. Part of the idea is to become more aware of what we do and to be more careful and intentional about it.
Q It sounds like a self-perpetuating cycle: someone behaves in a way that is consistent with the way they’re perceived (or believe they’re perceived), and their behavior in turn reinforces the perception.
A Right. You get a looping effect: If someone is told that they’ve got a trait and then they act as if they’ve got the trait, the person who made the attribution in the first place could then say, “Now I have evidence. I was right all along.” It’s very easy to fail to notice that the person making the attribution is setting this in motion, whether they know it or not.
There’s an important longitudinal study of students in Saint Louis, for instance, that showed students were performing worse because they were labeled in a negative way and also treated accordingly, generating a feedback loop that created a self-perpetuating phenomenon.
Q But in this model, how do you account for a student who does well nonetheless?
A That’s a very important question because it would be enfeebling to think that there’s nothing we can do about this—to think that if someone came along and said, “You’re vicious,” or “You’re lazy,” then you’re stuck with it. In the studies like the one in Saint Louis, we don’t know about the rest of each student’s life—an individual might come in having a very strong self-concept; maybe their families were very supportive. It’s really hard to know. There could be any number of other social factors, or just individual dispositions, that could lead someone to hold up in the face of negative treatment.
So it’s not a deterministic model. The idea is that there are tendencies, based on the labels that we apply to people.
Q What role do motives play? Do good intentions matter?
A Different philosophers put different degrees of emphasis on motives and behavior. For instance, Julia Driver thinks that all that really matters is behavior. But, of course, that means a person's motives will matter indirectly, because we tend to achieve what we aim for. Other philosophers will put most of the emphasis on the motives and say, “As long as you’re generously motivated, if you turn out not to help people, if your behaviors aren’t actually effective in bringing about the alleviation of suffering or the improvement of people’s lives, that’s sort of neither here nor there.”
I personally side more with those who think that behavior is the key component. It’s very easy—and it’s happened a lot in recent philosophy—to think only about people’s intentions and motives and not worry about what happens when they actually act. But if ethics doesn’t involve real behaviors, if it doesn’t involve what people manage to accomplish or fail to accomplish, then it’s sort of a loose wheel; it’s irrelevant. That’s why I favor more the behavioral approach.
The Good Samaritan experiment helps illustrate why. There, we have people who presumably think they ought to help others—and they’re on the way to give a sermon encouraging others to act like the Good Samaritan—but they don’t end up helping just because they’re in a rush. Given these strong situational influences on behavior, we need to care about those as much as we care about motives. Focusing only on motives leaves out a lot that matters to ethics.
Situational influences can arise very quickly and easily. You might not even notice them; they can seem trivial. And some of them don’t even provide a reason to act one way or the other. For instance, if I tell you, “Here’s a person in need. Are you willing to donate money to help that person?” And then I tell you, “Oh, by the way, it’s really bright in here” or “Oh, by the way, you can smell cinnamon,” you’d probably think that’s irrelevant. But these factors turn out to matter quite a bit. In numerous studies, these “situational nonreasons” seem to have a pretty strong influence on our behavior. If it’s brighter, if it smells good, you’re more likely to give.
I try to contrast these with bad reasons and temptations. A bad reason for overeating would be to say, “It tastes so good to have another slice of cake.” We already understand how temptations work, to some extent, and it’s not surprising that temptations influence our behavior. But if someone eats extra cake because it’s on a bigger plate, or if the lighting in the room is pleasing, those are nonreasons.
Q In your book, you talk about how behavior can be influenced by the perception of someone watching—even if it’s just a representation of a face. How does that work?
A This goes all the way back to the beginning of the history of philosophy. In Plato’s Republic there’s a story about a character called Gyges who finds a magical ring that, when he puts it on, makes him invisible. Gyges starts out as a simple shepherd, and the first thing he does when he gets the ring is the first thing that anybody would do if they had the power of invisibility: he eavesdrops on his friends. The next thing he does is also something that probably a lot of other people would do: he becomes a criminal. He goes to the royal palace, rapes the queen, murders the king and usurps the throne.
The claim in The Republic is that almost anyone, no matter how seemingly virtuous they are, would act in the same way if they had this power of invisibility—namely, the ability to not be seen and held to account and looked in the eye. You have the same theme running through the history of philosophy. You see it in Epicurus, who set up a statue of himself in the garden at his school. That’s funny because Epicurus is on the record saying you shouldn’t make statues of people to honor them. So either he’s just a hypocrite or there’s something else going on here. My guess is that the statue in the garden is meant to be sort of a symbolic watcher, which fits nicely with certain fragments that survive from Epicurus, particularly, “You should always act as if Epicurus were watching you.”
This same theme comes all the way through to the British philosopher Jeremy Bentham, who came up with the “panopticon” design for prisons, which is basically a house of mirrors. There are windows everywhere; everyone can be seen at all times. He claims that, as long as people know they could be seen, they’ll regulate their own behavior and act as the one watching them would prefer them to act. So you don’t actually need someone always watching; you just need the possibility of being watched. This has proved to be almost too effective, as the philosopher Foucault describes in Discipline and Punish.
And you don’t need an actual human. One example of this is the dictator game, which is a game that experimental economists use to look at behavior. In the dictator game, one person—the dictator—gets a pool of money, let’s say ten dollars, and is told, “You can give as much or as little as you like to the recipient,” and that’s the whole game. They just give some, or they give none.
The most common thing to do is to keep everything. The second most common thing, actually, is to give half. In this experiment, when people played the dictator game on a computer and the desktop had a picture of a face on it, the players gave more to the recipients than when there was just a blank desktop. This has been replicated a number of times. The image can be a robot face; it can be an iconic face; it can even just be three dots arranged in a face-like pattern. This pattern is known to engage the fusiform face area of the brain. It’s like we’ve got a modular face detector and if you set that off, people tend to be more pro-social.
Besides artificial games, this also works in more real-life behavioral settings. Melissa Bateson found that people litter half as much when they’re eating in a cafeteria where there are images of faces on the walls as when there are images of flowers. Another instance involved honesty. Bateson put out a tea station where people were supposed to make their own tea and then pay a small amount, and the researchers switched the decorations in the room—faces and flowers—every week. People paid nearly three times as much for their tea on the weeks when there were faces on the walls as on the weeks when there were flowers. This is an effect that’s been replicated quite a few times now.
Q So much of this crosses the boundary into social psychology, but you’re a philosopher. How did you arrive at this juncture?
A It’s an interesting question. All of my degrees are in philosophy, but I’ve always been interested in psychology. I first got interested in the philosophy of Nietzsche. You might think, “Well, he’s a nineteenth-century German philosopher, what does that have to do with empirical psychology?” But Nietzsche is extremely insightful, not just about abstract ethics, but about how actual people are motivated, how they behave and the strange twists and turns that lead to their behavior. If you start from this perspective—if you really want to explain people’s behavior in moral contexts—then you can’t just have the theory of morality, you also have to have the theory of behavior.
The idea that philosophy is separate from psychology is a relatively new phenomenon. The first psychology-philosophy split was tentatively initiated at Harvard by William James in the late nineteenth century and only completed in 1936. So the distinction is only about 100 years old. And now, just over the last couple decades, there’s been a movement called experimental philosophy, which involves conducting experiments. Sometimes philosophers conduct them directly, with the guidance of people who are better versed in experimental design and interpretation. Sometimes the research is conducted by philosophically savvy psychologists or other social scientists who end up publishing about philosophy because they find it interesting.
Q How do you see yourself fitting into the UO community?
A I feel very at home in the philosophy department here at the UO, a department that is quite unusual because faculty members here approach philosophy from many diverse perspectives—American pragmatism, feminism, Continental (meaning European)—and welcome others who do the same. I was brought in because of my empirical work, but my next book will be on Nietzsche. And I’m delighted that members of other departments have been so welcoming and keen to collaborate. In my graduate seminar last fall, I had guest lectures from Azim Shariff (psychology) and Bill Harbaugh (economics), among others.
— Lisa Raleigh
Photo: Matt Cooper