Sentimental Rules: On the Natural Foundations of Moral Judgement

Shaun Nichols

Print publication date: 2004

Print ISBN-13: 9780195169348

Published to Oxford Scholarship Online: January 2005

DOI: 10.1093/0195169344.001.0001


7 Moral Evolution

Shaun Nichols

Oxford University Press

DOI: 10.1093/0195169344.003.0007

Abstract and Keywords

This chapter turns to the genealogy of “harm norms,” norms against causing pain and suffering to others. The chapter sets out a range of historical and anthropological facts that need to be captured by a genealogy of harm norms. In particular, an adequate genealogy needs to explain the broad similarities and differences in harm norms across cultures and the characteristic evolution of harm norms. One prominent explanation for these facts appeals to moral progress. This chapter proposes an alternative account of the genealogy of norms that draws on the central thesis of chapter 6, that norms which resonate with our emotions will be more likely to survive.

Keywords:   affective resonance, cultural evolution, cultural variation, harm norms, moral progress, moral realism

The object is to explore the huge, distant and thoroughly hidden country of morality, morality as it has actually existed and actually been lived, with new questions in mind and with fresh eyes.

—Friedrich Nietzsche, Preface to The Genealogy of Morals

1. Introduction

The evolution of norms prohibiting disgusting actions carries at least a vulgar sort of interest. But our primary interest here is in the moral norms, and in this chapter, I will turn to harm norms, that is, norms against causing pain and suffering in others. First, I will set out a range of facts that need to be captured by a genealogy of harm norms. This will require a review of some basic claims in anthropology and social history. What needs to be explained are the broad similarities and differences in harm norms across cultures, and the characteristic evolution of harm norms. I will consider the most prominent explanation of these patterns—the appeal to moral progress. Then I will propose an alternative account of the genealogy of norms that draws on the Affective Resonance approach developed in the preceding chapter. Finally, I will compare this account more explicitly with the moral progress view.

2. Harm Norms: Variations on a Theme

Before we try to determine the best explanation of the basic historical and cultural patterns in harm norms, it will be important to get a richer description of these basic patterns. As in the first chapter, by “harm norms,” I mean norms prohibiting actions that cause pain and suffering. An adequate account of the genealogy of harm norms needs to accommodate (p.142) the ubiquity of harm norms, the cross-cultural variation of such norms, and the characteristic development of harm norms. So I will give a brief description here of each of these explananda.

2.1. Ubiquity of Harm Norms

In one sense, harm norms show robust cross-cultural consistency. Nearly all cultures that have been studied by anthropologists have been found to have norms prohibiting a range of harmful actions (e.g., Westermarck 1906–8; Murdock 1945; Kluckhohn 1953). More recent work in cross-cultural psychology has confirmed this (e.g., Haidt, Koller, and Dias 1993; Miller [in press]). Some anthropologists suggest that there are cultures in which harm norms are absent, for example, the Dobu (Benedict 1934) and the Ik (Turnbull 1972). These claims are often regarded with suspicion (e.g., Heine 1985), but even if it is true that the Ik and the Dobu lack harm norms, they count as striking exceptions to an otherwise well-confirmed pattern: virtually all the cultures that have been studied clearly have harm norms. The alleged exceptions should not be taken to undermine the manifest truth that harm norms are effectively ubiquitous.

2.2. Variation in Harm Norms

Despite the ubiquity of harm norms, it is a mistake to assimilate all basic harm norms into a homogeneous paste. Anthropologists like to regale us with stories of the astonishingly alien norms and practices found in other cultures. Indeed, claims of cross-cultural variation in harm norms have been rife in anthropology since the nineteenth century. One of the best known examples in philosophy comes from Richard Brandt. Brandt found that the Hopi thought it was morally permissible for children to capture birds, tie them up, play with them roughly, and let them starve to death. According to Brandt (1954), the Hopi believed that the bird felt pain, but still did not regard the treatment as seriously wrong (213–15). The history of the Aztecs provides an even more disturbing picture. According to de Sahagun's sixteenth-century account, the Aztecs ritually killed and cannibalized huge numbers of slaves and prisoners taken in battle, including children. De Sahagun [1578–79] (1981) reports that the victims were often tortured in unspeakably gruesome ways before they were killed, and this was done as part of a public celebration. Perhaps the most compelling illustration of differences in harm norms comes from the treatment of women in other cultures. Chagnon (1992) maintains that the Yanomamö routinely beat their wives, often to display their fierceness to other men in the group (17). The Yanomamö also try to abduct women when raiding (p.143) enemy villages. According to Chagnon, “A captured woman is raped by all the men in the raiding party and, later, by the men in the village who wish to do so but did not participate in the raid. She is then given to one of the men as a wife” (190). Of course, in our own culture, we regard it as impermissible to torture birds, prisoners, and every other sentient being. We also regard rape and abduction as impermissible regardless of whether the woman is part of an enemy group.

Many of the cross-cultural differences in harm norms can be attributed to differences in who is regarded as part of the moral community. Allegedly, in many tribal cultures, only people in the tribe count as part of the moral community (e.g., Benedict 1934; Edel and Edel 1968). However, there is another dimension on which harm norms vary. Some harms to members of the moral community are regarded as acceptable in some cultures and unacceptable in others. This is nicely illustrated by looking, not to anthropology, but to social history. In eighteenth-century England, flogging schoolchildren (and even college students) was regarded as acceptable (e.g., Scott 1959). In late twentieth-century England, however, such punishment was roundly rejected. It was outlawed in 1986, though it had been little practiced for many years before that. People in eighteenth-century England knew, of course, that flogging hurt the child, but they viewed these harms as acceptable. English culture today regards these kinds of harms as intolerable.

2.3. Characteristic Evolution of Harm Norms

It has become a commonplace in discussions of moral evolution that, in the long run, moral norms exhibit a characteristic pattern of development. The familiar account is that harm norms tend to evolve from being restricted to a small group of individuals to encompassing an increasingly larger group. That is, the moral community expands. The trend is bumpy and irregular, but this kind of characteristic normative evolution is affirmed by a fairly wide range of contemporary moral philosophers, including Peter Singer (1981), Peter Railton (1986), Thomas Nagel (1986, 148), David Brink (1989, 208–9), and Michael Smith (1994, 188). As far as I know, there has not been a systematic cross-cultural study of normative evolution that confirms this. But I suspect that the pattern is real. It is manifestly the case in our own culture that the moral community has shown this kind of development. Perhaps the most extensive record comes from Western European culture since the Middle Ages. In this case, we have a dramatic picture of the development of increasingly inclusive moral norms. Indeed, this is regarded as one of the hallmarks of the Enlightenment. Over the last several hundred years, Western culture has (p.144) shown increasing prohibitions of violence in general (e.g., McLynn 1991, 297). The particular character of this evolution is richly illuminated by work in social history (e.g., Thomas 1983; Spierenburg 1991; Dulman 1990; Weisser 1979). It is worth recounting part of this social history in some detail to give a richer idea of the actual changes. I want to focus on two points. First, in European culture the prohibition against harming others seems to have been expanded to prohibit cruelty to animals. Second, harsh corporal punishment has declined sharply since the Middle Ages. I choose these cases for a couple of reasons. First, the examples are at a greater historical remove than issues about, say, civil rights, and this makes the issues somewhat easier to discuss dispassionately. The second reason I choose these cases is more opportunistic—these topics have received excellent scholarship in social history.

Cruelty to Animals

General disapproval of cruelty to animals has become increasingly entrenched in Western society since the seventeenth century. In his magnificent Man and the Natural World, Keith Thomas (1983) describes the growing opposition in England to needless killing of animals and cruelty to them. In the seventeenth century, blood sports like cock-fighting were popular, as was the practice of bear baiting, in which dogs were set on a bear that was tied to a post. Thomas maintains that “In the case of animals what was normally displayed in the early modern period was the cruelty of indifference. For most persons, the beasts were outside the terms of moral reference” (Thomas 1983, 148). Gradually, English society, and Western culture more broadly, expanded the moral community to include animals as moral patients.

Perhaps the simplest way to trace this change is by looking to legislative records. No European country had any laws protecting animals before the nineteenth century (Maehle 1994, 95). But the first half of the nineteenth century saw several prominent developments:

In the 1820s and 1830s the first important animal protection societies were founded. In 1824 the Society for the Prevention of Cruelty to Animals was established in London. . . . The successes of the humane movement in Britain are well known: after the Act of 1822 to “Prevent cruel and improper treatment of cattle,” in 1835 a Cruelty to Animals Act established the illegality of blood sports involving the baiting of animals, the keeping of cock-pits and of places for dog-fights. Cock-fighting as such was prohibited by an Act of 1849 ‘for the more effectual Prevention of Cruelty to Animals.' . . . In 1876 Britain enacted the world's first law regulating experiments on living animals. (Maehle 1994, 100)

The nineteenth century also saw the emergence of animal protection laws in Europe more widely (Maehle 1994, 100).

(p.145) One of the striking features of this history is that it makes clear that it is naïve and simplistic to assume that there is a single origin story for norms prohibiting cruelty to animals. In some cases, the norm against cruelty to animals seems to have emerged out of a concern about how such cruelty might also foment cruelty towards people (Thomas 1983, 150–51). This view has a precedent in Biblical sources. Opposition to cruelty to animals was also based on a rather different appeal to theological considerations. Cruelty was regarded as “an insult to God, a kind of blasphemy against his creation” (Thomas 1983, 156, 162). Among Puritans, cruel animal sports were opposed partly because of their connection with gambling and disorder (Thomas 1983, 158). In his History of England, Thomas Macaulay quipped that Puritans denounced bear baiting not because of the suffering it caused the animal, but because of the pleasure it gave to the observers (Thomas 1983, 158). Of course, another basis for opposition to animal cruelty came from people's own emotional reactions to the suffering of animals (Thomas 1983, 177). In some of these cases, the anticruelty norm was promoted by pet owners, who seem to have developed heightened sensitivity to the plight of animals (Thomas 1983, 119–20; see also Serpell and Paul 1994). This serves to reinforce worries about trying to find the origin of harm norms. For in the case of norms against harming animals, there is no single origin. We might, then, be leery of undertaking to find the origin of norms against harming people.

Corporal Punishment

The other example that I'd like to give in some detail focuses on the growing rejection of graphically violent forms of punishment. The first thing to note is that in the Middle Ages, there simply was more violence in everyday life and people were more tolerant of violence. In Europe before the sixteenth century, people brooked physical aggression and the infliction of pain to a much greater extent than we do in contemporary society. There were also fewer legal measures restricting violence, and even minor insults would often provoke a violent response (Spierenburg 1991, 195; see also Halsall 1998). In light of this tolerance for violence, it is less difficult to comprehend the tolerance for severe corporal punishments.

In the late Middle Ages, a range of what now seem appalling punishments were available, including maiming, blinding, and branding. This is reflected in the German Empire's Constitutio Criminalis Carolina of 1532. This statute describes the punishments to be meted out for various offenses. Among other things, the statute gives guidelines for sentencing. Here is one example: “When . . . it is decided that the condemned person should be torn with glowing tongs before the execution, the following words shall in addition stand in the judgment, viz.: ‘And he shall in addition (p.146) before the final execution be publicly driven around upon a wagon up to the place of execution, and the body torn with glowing tongs, specifically with N. strokes'” (Langbein 1974, 303).

Severe corporal punishments, like maiming, began to recede in the early modern period. By the sixteenth century in Germany, blinding was rare. Cutting off of hands was more frequent, but it too was on the wane in the sixteenth century (Dulman 1990, 47–48). Spierenburg summarizes the trend: “An unambiguous development consisted in the disappearance of visible mutilation. Some corporal penalties which have become horrible to us, such as blinding and cutting off hands or ears, were no longer practiced. This development took place throughout Western Europe, although at different points in time. In Amsterdam such punishments were applied in the first half of the seventeenth century at the latest” (Spierenburg 1991, 211; see also Emsley 1987, 202; Langbein 1977, 27–28). Flogging continued to be a prominent punishment throughout the eighteenth century (e.g., Dulman 1990, 49; McLynn 1991), but of course, flogging has now largely disappeared from the European catalog of punishments.

Just as severe corporal punishments were on the wane, there were successful movements to abolish judicial torture in several European countries in the eighteenth and early nineteenth centuries (e.g., Peters 1985).1 Executions were also becoming increasingly less common in the Netherlands (Spierenburg 1991, 212) and England (Emsley 1987, 201). More generally, there has been a decline in capital punishment in European countries since the 1500s. Where capital punishment survives, it retains none of the corporal abuse that formerly attended it (Foucault 1977, 11–12). This too can be traced through the early modern era. At the close of the Middle Ages, horrifically painful methods of execution, such as breaking on the wheel and burning alive, were in use. Further, executions would sometimes be preceded by additional corporal punishments (e.g., Dulman 1990, 77–79). Gradually the additional punishments and pains were largely eliminated from executions. In the United States, for instance, considerable effort has been expended to make the ultimate penalty relatively painless.

Thus, there are important broad cross-cultural similarities in harm norms. Within our own culture at least, there are also important historical trends (p.147) in harm norms.2 We want a genealogical account that can explain such similarities and historical trends as well as the differences in harm norms across cultures. That is, of course, a preposterously ambitious project. My goal here will be to try to provide a partial explanation of the similarities and changes in a way that is fully compatible with the differences. Before I move on to the business of pursuing these goals, I need to make a brief detour to address a worry that the anthropology of ethics almost reflexively provokes in philosophers: the worry over “ultimate ethical disagreement.”

3. A Brief Interlude on Cross-Cultural Normative Disagreement

Early in the twentieth century, William Sumner (1906) and Edward Westermarck (1906–8) provided lengthy potpourris of anthropological exotica, recording, among other things, the unusual normative lives found in other cultures. Sumner and Westermarck thought that their anthropological reviews constituted evidence for some form of ethical relativism. However, in the wake of these works, scholars expressed skepticism about whether the alleged cultural differences were really differences in the basic norms embraced by the culture. For instance, Sumner relays that the indigenous peoples of Australia are reported to cannibalize their own infants: “Sickly and imperfect children were killed because they would require very great care. The first one was also killed because they thought it immature and not worth preserving. Very generally it was eaten that (p.148) the mother might recover the strength which she had given to it. If there was an older child, he ate of it, in the belief that he might gain strength” (Sumner 1906, 316). This is indeed a startling accusation to make of another culture—that they eat their young! Even if one accepts that the foreign culture engages in the behavior, what one wants to know is whether such cases reveal genuine normative disagreement between our culture and the culture under anecdote, or whether the exotic customs merely reveal disagreement over some nonmoral facts. The question is whether there is “ultimate” or “fundamental” disagreement about morality, or whether all disagreement can be attributed to disagreement over facts.

This worry over how to interpret the anthropological evidence was pressed in a particularly influential way by Gestalt psychologists like Karl Duncker. Duncker maintained that the mere fact that people disagree about what the right thing to do is does not show that they really disagree about the norms. It might well be that they are interpreting the situation differently. Duncker takes the celebrated example of killing elderly parents as a case in point. “‘Killing an aged parent' may, according to circumstances, mean sparing him the miseries of a lingering death or an existence which, as a born warrior, he must feel to be exceedingly dull and unworthy; or it may mean protecting him against injuries from enemies or beasts, or causing him to enter the happy land which is not open save to those who die by violence” (Duncker 1939, 42). Westermarck himself had worried about this issue in the concluding pages of his mammoth The Origin and Development of the Moral Ideas (1906–8, II, 745–46). Later in the century, the philosopher Richard Brandt took this problem very seriously. Indeed, these kinds of worries drove Brandt, in one of the more admirable acts of twentieth-century naturalistic philosophy, to embark on his own sustained anthropological project. He studied the Hopi Indians over the course of several years. Brandt wanted, among other things, to see whether there were normative disagreements that were not disagreements about the facts. So, when he discovered that the Hopi thought it was acceptable for children to catch birds and let them starve to death, Brandt proceeded to explore whether perhaps they thought that the birds did not feel pain or that the birds would get some reward in the afterlife. Brandt could find no factual disagreement surrounding this issue, and so he tentatively suggested that this was a case of fundamental ethical disagreement (Brandt 1954, 213–15; 1959, 102–3).

This issue continues to inspire debate. For instance, Richard Boyd maintains that it is “useful to remember the plausibility with which it can be argued that, if there were agreement on all the nonmoral issues (including theological ones), then there would be no moral disagreement” (p.149) (Boyd 1988, 213). More recently, Michele Moody-Adams (1997) offers a sustained critique of Brandt, while John Doris and Stephen Stich (in press) offer a state-of-the-art defense. My own sympathies lie with Brandt, Doris, and Stich, but fortunately, I do not need to fully defend those sympathies for present purposes. For it is crucial to distinguish between two claims about moral disagreement:

  (1) All moral disagreements are really disagreements about the nonmoral facts.

  (2) All moral disagreements are rationally resolvable under ideal conditions (of impartiality, etc.) once the parties agree on the nonmoral facts.

Note that the above quote from Boyd is ambiguous between these two claims. However, virtually all of the recent critical literature on moral disagreement (including Boyd) focuses on the latter claim. It is the latter claim that invites difficult and intractable debate. For on the latter claim, one needs to provide reason to affirm or deny that fully informed, fully rational, and fully attentive subjects will converge on their moral views. That claim is, to say the least, hard to test. Even if an ethics review board agreed, the requisite training study would be prohibitively demanding. However, for the purposes at hand, what is especially important is to reject (1). For the project here is merely to explain the normative phenomena as we find them, not as they would be under idealized conditions. Because few if any prominent figures in the contemporary debates actually defend (1), the rejection of (1) is not terribly controversial. Again, this is not to say that there wouldn't be convergence were all the participants supremely smart, rational, and knowledgeable. The claim is simply that it is plausible that the work in anthropology and social history reveals some cases of moral disagreement that can't be attributed to disagreement about the nonmoral facts.

4. Moral Evolution and Moral Realism

Let us now return to the central question of this chapter—how are we to explain the similarities, differences, and characteristic evolution of harm norms? Perhaps the most important answer to this question begins with the last phenomenon—the characteristic evolution of moral norms. As recounted in section 2, the historical trend in our own culture has been towards increasingly inclusive and antiviolent norms. The idea that this pattern counts as a kind of moral progress is familiar from Enlightenment thinkers (e.g., Condorcet, Voltaire) as well as the chroniclers of the (p.150) Enlightenment (e.g., Macaulay 1913; Trevelyan 1942). Over the last two decades, a number of prominent moral philosophers have resuscitated this interpretation of the characteristic pattern of moral evolution as a pattern of moral progress (Singer 1981; Nagel 1986, 148; Brink 1989, 208–9; Smith 1994, 188; Sturgeon 1985). What has happened, according to these theorists, is that people have been getting closer and closer to the truth about right and wrong. This view provides an important and powerful explanation of the pattern of moral evolution. Not only does this view promise to explain moral evolution, it also, as we shall see, offers a basic explanation for the ubiquity of harm norms.3

In recent years, Nicholas Sturgeon, David Brink, and other self-described “moral realists” have provided the most visible and developed appeal to moral progress (e.g., Brink 1989, Sturgeon 1985).4 The invocation of moral progress serves, according to these theorists, to support the claim that there are moral facts. Briefly, the idea is that the best explanation of the historical trends is that there are moral facts which people gradually come to recognize. Before we continue, we need to get a clearer picture of this view. First, there is an important terminological issue. The label “moral realism” is used for markedly different positions. In some cases, “moral realism” maintains only that some moral claims are true (e.g., Sayre-McCord 1986, 3). On this construal, moral realism is perfectly consistent with a thoroughgoing relativism, according to which moral claims (p.151) are true, but relativized to an individual or a culture (e.g., Benedict 1934). In other places, “moral realism” is used to pick out the view that moral claims are not just true, but that they are true apart from any particular perspective and independent of people's beliefs about right and wrong (e.g., Brink 1989, 20). The appeal to moral progress is typically used in defense of this stronger form of moral realism.5

The appeal to moral progress is supposed to support the claim that there are moral facts. Here is how Sturgeon makes the point:

Do moral features of the action or institution being judged ever play an explanatory role? Here is an example in which they appear to. An interesting historical question is why vigorous and reasonably widespread moral opposition to slavery arose for the first time in the eighteenth and nineteenth centuries, even though slavery was a very old institution; and why this opposition arose primarily in Britain, France, and in French- and English-speaking North America, even though slavery existed throughout the New World. There is a standard answer to this question. It is that chattel slavery in British and French America, and then in the United States, was much worse than previous forms of slavery, and much worse than slavery in Latin America. (Sturgeon 1985, 64)

According to Sturgeon, chattel slavery prompted opposition because it was morally worse than previous forms of slavery. In order to explain the historical changes, on his view, we need to appeal to the greater immorality of the new form of slavery. A similar line is pushed by Brink:

Most people no longer think that slavery, racial discrimination, rape, or child abuse is acceptable. Even those who still engage in these activities typically pay lip service to the wrongness of these activities . . . Cultures or individuals who do not even pay lip service to these moral claims are rare, and we will typically be able to explain their moral beliefs as the product of various religious, ideological, or psychological distorting mechanisms. This will seem especially appropriate here, since the relevant changes in moral consciousness have all been changes in the same direction. That is, with each of these practices, in almost all cases where people's moral attitudes toward the practice have undergone informed and reflective change, they have changed in the same way (with these practices, from approval to disapproval and not the other way (p.152) around). When changes in moral consciousness exhibit this sort of pattern, this is further reason to view the changes as progress. (Brink 1989, 208–9)

Brink goes on to say that this kind of moral convergence counts as “(defeasible) evidence of moral progress” and hence support for moral realism.

Both Brink and Sturgeon acknowledge Slote (1971) as an early exponent of this kind of argument from historical trends to realism about value judgments (Brink 1989, 209; Sturgeon 1985, 255). Slote's argument is actually focused on aesthetic value judgments, but it is instructive to consider his line of argument, because it remains one of the clearest presentations of this argumentative strategy. Slote maintains that there is an intriguing pattern of “unidirectionality” in aesthetic preferences and opinions (Slote 1971, 822, 834n5). For instance, most serious listeners of classical music prefer Mozart to Bruckner. Slote maintains that the best explanation of this pattern is that “the more people study and are exposed to music, the more they like what is good in the field of music and the less they like what is mediocre or bad in the field of music, and that Mozart is, in fact, a greater, a finer, a better composer than Bruckner” (824).

One of the nice features of Slote's presentation is that he makes the argument form wonderfully clear. The argument is a form of inference to the best explanation, and Slote is explicit that the realist explanation of unidirectionality should be accepted if, but only if, it is the best explanation: “When scientists attempt to explain a given fact or phenomenon, they generally consider various possible alternative explanations of the fact or phenomenon, and accept one of these explanations as correct only if it is clearly more reasonable, in terms of certain standards of scientific methodology and according to the available evidence, than any of the alternative explanations that they have been able to think up” (Slote 1971, 823–24). Slote adopts an analogous approach to defending the realist explanation of unidirectionality. He proceeds to consider all the alternative explanations he can think up and he argues that all of the alternatives are inferior to the realist explanation. He thus concludes that aesthetic value judgments are indeed rational judgments about objective qualities in the works.6

Now, if we exploit this form in trying to assemble the analogous argument for moral realism, the argument goes something like the following. (p.153) One explanation for the evolution of moral norms is that it is true that, for example, slavery is morally wrong, and this is something that rationality leads us to recognize. What are the available competing explanations in the moral case? On this point, Brink (1989) and Sturgeon (1985) say little. However, Peter Singer pushes a similar line of argument, and he does offer one alternative to the realist explanation:

the shift from a point of view that is disinterested between individuals within a group, but not between groups, to a point of view that is fully universal, is a tremendous change—so tremendous, in fact, that it is only just beginning to be accepted on the level of ethical reasoning and is still a long way from acceptance on the level of practice. Nevertheless, it is the direction in which moral thought has been going since ancient times. Is it an accident of history that this should be so, or is it the direction in which our capacity to reason leads us? (Singer 1981, 112–13, emphasis added)

Thus, Singer suggests that the pattern of moral evolution is either explained by rational processes or is an “accident of history.” Obviously, if those are the choices, the realist story is attractive, because it is rather less plausible that the trajectory is purely an historical accident.

The historical accident account of moral evolution scarcely counts as an explanation at all. By contrast, the appeal to moral facts has considerable explanatory power. It can accommodate all three features that we started with. Certain core harm norms are virtually ubiquitous because everyone has figured out this much about morality. The norms evolve in a characteristic way because people tend to get closer to the truth. And there is considerable variation because the process is a difficult one, distorted by self-interest and cultural idiosyncrasies.

Despite its manifest virtues, the appeal to moral progress has been attacked on a number of fronts. In philosophy, one familiar complaint is that it is not clear that the changes in moral attitudes can be attributed to any kind of rational process. Rorty makes this point in a characteristically inflammatory way:

To get whites to be nicer to blacks, males to females, Serbs to Muslims, or straights to gays . . . it is of no use whatever to say, with Kant: notice what you have in common, your humanity, is more important than these trivial differences. For the people we are trying to convince will rejoin that they notice nothing of the sort. Such people are morally offended by the suggestion that they should treat someone who is not kin as if he were a brother, or a nigger as if he were white, or a queer as if he were normal, or an infidel as if she were a believer. (Rorty 1998, 178; emphasis in original)

(p.154) A rather different critique comes from revisionist historians, who often ridicule appeals to moral progress as Pollyannaish and Whiggish. For instance, Foucault (1977) maintains that the decline of harsh corporal punishment had nothing to do with increasing humaneness but only with new and more effective means of state-sponsored oppression (see also Ignatieff 1978). This is not the place to engage the details of such debates. But there is an important sense in which the complaint of Whiggishness is not really fair to the moral progress view. For the moral progress claim appeals to a broad set of changes in norms, over hundreds of years and in many different arenas. It is this broad trend that needs to be explained. It would indeed be striking if the entire truth were to be told by a series of individual revisionist stories. Then the trend would be chalked up to historical accident after all, and that, as noted above, seems highly unlikely. No, if we are to successfully challenge the moral progress account, we need an alternative explanation for the broad trend. Merely finding fault with the moral progress proposal will not suffice. One really needs to develop an alternative. Now, finally, I will turn to that task.

5. Affective Resonance and Harm Norms

In the previous chapter, I argued that a central cluster of our etiquette norms seems to be preserved partly because the norms are connected to core disgust. This argument was intended to help confirm the Affective Resonance account, according to which norms gain greater cultural fitness when they prohibit actions that are likely to elicit negative affect. It will come as no surprise, then, that I will maintain that some of our moral norms, the harm norms, gained an edge in cultural fitness by prohibiting actions that are likely to elicit negative affect.

As we saw in chapter 2, witnessing or learning of suffering in others often excites considerable affective response in humans. This emotional responsiveness to others' suffering emerges early in human development. Indeed, emotional responses to suffering in others seem to be present in infancy, such responses are almost certainly cross-culturally universal, and they exhibit quick onset. Suffering in conspecifics seems to provoke negative affect even in some nonhuman animals (see chapter 2, section 11). Evidently, we come pre-tuned to be upset by the distress signals of others. There are, as discussed in chapter 2, importantly different kinds of affective response to suffering in others. Reactive distress can be triggered by low-level cues of harm (e.g., pained facial expression, audible crying), whereas concern is triggered by (inter alia) knowledge that the target is in pain. Some forms of reactive distress are found in infancy and (p.155) even concern seems to be present well before the second birthday. By eighteen months or so, children seem to be emotionally sensitive not just to distress cues, but to the knowledge that someone else is in pain.

Although none of the forms of reactive distress or concern appears on standard lists of basic emotions (e.g., Ekman 1992), some of these responses do plausibly have the features that matter for building an epidemiological account—they are universal and have characteristic sets of eliciting conditions. Indeed, some of the eliciting conditions for reactive distress, for example, crying, seem to be hardwired. In addition, like basic emotions, the response to suffering in others seems to be at least largely insensitive to background knowledge. Knowing that inoculations are for the best does not eliminate the discomfort one feels on witnessing a child get inoculated.

In the previous chapter, I argued that norms prohibiting actions that are likely to elicit negative affect, “affect-backed norms,” will have an advantage in cultural evolution. In keeping with this, I suggest that our emotional sensitivity to suffering in others helped to secure for harm norms the central role they occupy in our moral outlook. Suffering in others leads to serious negative affect, so harm norms would prohibit actions that are likely to elicit negative affect. Thus, if affect-backed norms are more culturally fit, the norms against harming others should have increased cultural fitness over norms that are not backed by affective response. That is, harm norms, like norms against disgusting behavior, enjoy Affective Resonance, which enhances their cultural fitness.

The thrust of the preceding is simply that harm norms will have an edge in cultural fitness. Now we need to see whether this can provide insight into the genealogy of morals. The cultural advantage enjoyed by affect-backed norms provides at least part of an explanation for the ubiquity of harm norms. In chapter 6, I reviewed several stories about how harm norms originated. I maintained that we lack the evidence to determine which of these stories is right, and, moreover, that it is possible that no single origin story is right. The work in social history makes vivid the possibility that norms prohibiting harms actually had multiple different origins. For in the case of the norms prohibiting animal cruelty, it seems that these norms did have multiple different origins (see section 2). Nonetheless, whatever story or stories one prefers about how the harm norms were generated, the fact that we are emotionally sensitive to others' suffering helps to explain why the harm norms ended up being so successful. For as harm norms entered the culture, their emotional resonance would have contributed to their cultural cachet.

It is worth noting that the above explanation of the ubiquity of harm norms is fully consistent with rich diversity in harm norms. For the claim (p.156) is simply that harm norms will have enhanced cultural fitness. This allows for considerable normative diversity, because it concedes that cultural processes play a vital role in the development of norms. Because cultural processes implicate a complex and variegated set of forces, it is hardly surprising, on this view, that there is so much diversity in the norms found in different cultures. Indeed, the Affective Resonance approach is even consistent with the radical claims that the Dobu and the Ik lack harm norms altogether. For the account only claims that normative prohibitions that resonate with our emotions will be more likely to survive. It does not entail that such norms will be present in every culture.

The appeal to Affective Resonance also provides an explanation for much of the evolution of harm norms. Two central characteristics of the evolution of harm norms might be teased apart. First, as we've seen, harm norms seem to become more inclusive, that is, cultures seem to develop a more inclusive view of the set of individuals whose suffering matters. Second, harm norms come to apply to a wider range of harms among those who are already part of the moral community—that is, there is less tolerance of pain and suffering of others (e.g., Macklin 1999, 251; Railton 1986). Both of these patterns can be explained by the Affective Resonance of harm norms. Because we respond affectively to a wide range of distress cues and even to the knowledge that someone is in pain, the Affective Resonance account obviously explains why new norms prohibiting previously tolerated harms would have a fitness advantage. In addition, because low-level cues of distress are affectively powerful, we know that the underpinnings of the affective response to suffering in others are promiscuous. We apparently come pretuned to be emotionally upset by distress cues that are exhibited by all humans, regardless of ethnicity or gender. So harm norms that cover more of humanity will gain an advantage in cultural evolution.

Indeed, our responses to suffering in others are more promiscuous still. Although there is not much experimental evidence available, it is plausible that we respond to the sufferings of some animals with reactive distress or concern. Some of the cues of suffering in animals are similar to the cues of suffering in humans (e.g., bleeding, convulsing, shrieking), and knowledge that an individual is in pain can provoke concern. As a result, humans are likely predisposed to respond affectively to much animal suffering. There is actually a bit of evidence, from an unlikely source, on (adult) human responses to animal distress. In Milgram's famous obedience studies (Milgram 1963), subjects were told to deliver increasingly severe shocks to a “learner,” culminating in a switch labeled “450-volt, XXX.” Notoriously, in several versions of the experiment, subjects tended to give shocks all the way to the end of the scale (see Milgram 1974). In Milgram's study, the “learner” was actually a confederate, and (p.157) there was no genuine victim. Critics complained that the absence of a real victim might have contributed to subjects' behavior in the study. In light of this criticism, Sheridan and King (1972), in an experiment which would probably be disallowed by most university ethics boards, replaced the confederate with a real victim—a “cute, fluffy puppy” (1972, 165).7 Paralleling the Milgram study, the subjects were told to shock the puppy when the puppy failed on a discrimination task. Sheridan and King found that subjects behaved much as they did in several of Milgram's experiments—the majority of subjects gave shocks all the way to the end of the scale. Now, what is of greater interest for us are the incidental responses the subjects exhibited. First, it is important to note that in the original Milgram experiments, although subjects would go to the end of the shock scale, they were not happy about it. Subjects who went to the end of the scale were typically emotional wrecks. Something similar was true in the puppy study. Sheridan and King report that the subjects “typically gave many indications of distress while giving shocks to the puppy. These included such things as gesturally coaxing the puppy to escape the shock, pacing from foot to foot, puffing, and even weeping” (1972, 166). This provides some reason to think that we are predisposed to feel serious negative affect in reaction to the distress of nonhuman animals.

Consequently, so long as one remains within the confines of species that are likely to inspire reactive distress or concern (this will probably exclude lots of insects), harm norms that are more inclusive will have a survival advantage over other norms. Thus, the Affective Resonance account seems well suited to explain the evolution of norms against cruelty to animals. These norms plausibly resonated with a preexisting tendency to respond emotionally to the suffering of animals.

The promiscuousness of reactive distress and concern also explains why norms against corporal punishment would have a fitness advantage. Witnessing or hearing about harsh corporal punishments is likely to trigger reactive distress or concern. Indeed, because our immediate responses to another's suffering are largely insulated from background knowledge (see chapter 2), even if one is convinced of a convict's guilt, witnessing severe punishments inflicted on that individual is likely to elicit reactive distress or concern. As a result, norms prohibiting those kinds of punishments will gain some fitness advantage.

The preceding remarks indicate that some of the core facts about the genealogy of harm norms fit with the Affective Resonance account. However, as with the case of etiquette norms, it is not really enough just (p.158) to point out that the pattern of evolution fits with the hypothesis. Perhaps moral norms tend to be preserved in all cases. So we need to see whether the moral norms that fall into desuetude tend not to be connected to strong core emotions. Our evidentiary situation here is not nearly so good as in the case of etiquette. Nonetheless, as with etiquette norms, it is plausible that there are lots of “moral” norms that have largely lost their cultural grip.8 For instance, a few hundred years ago, pride, greed, lust, envy, gluttony, anger, and sloth were all regarded as seriously immoral—the Seven Deadly Sins. Many of these dispositions or tendencies are now regarded largely as peccadilloes and, in some cases (e.g., pride, envy, anger), as barely counternormative. This is supported by a glance at the moral equivalent of etiquette manuals. The tradition of moral manuals was not nearly so rich as that of etiquette manuals. Nor has it received the kind of insightful attention that Elias brought to the etiquette manuals. Nonetheless, we can get some idea of prevailing seventeenth- and eighteenth-century moral norms by looking to these manuals. One manual emerges as particularly widely known and influential: Allestree's On the Whole Duty of Man (1684). As a youth, Hume apparently paid close attention to the precepts set out in the manual, though he would later eschew this catalogue of virtues (see MacIntyre 1998, 171; 1984, 231). We find in the Whole Duty admonitions against “Being injurious to our neighbor,” “Murder, open or secret,” “Maiming or hurting the body of our neighbor” (166). These kinds of normative prohibitions are obviously retained in contemporary morality. But the Whole Duty also warns against immoral behaviors like “Greedily seeking the praise of other men,” “Uncontentedness in our estates,” “Eating too much,” “Making pleasure, not health, the end of eating,” “Being too curious or costly in meats,” “Immoderate sleeping” (165–66). Some of these admonitions now strike Western Europeans as at best prudential advice; other items, for example the puritanical restrictions on cuisine, seem positively quaint. These norms that have fallen away, of course, also have the feature that they prohibit actions that are unlikely to elicit reactive distress or concern (or any other core emotion). Hence, as the Affective Resonance account would suggest, the norms that are connected to reactive distress and concern seem to survive well, whereas many norms that are not so connected have disappeared.

It bears emphasizing that the Affective Resonance explanation is not (p.159) that individuals typically recognize that certain norms fit well with their emotions and accordingly decide to adopt those norms. The idea that people deliberately try to achieve some kind of equilibrium by bending their norms to fit their affect seems rather implausible. Rather, the Affective Resonance proposal approaches the phenomena from the broad vantage of cultural evolution. The central idea is simply that on balance, affect-backed rules will be more attractive and this advantage will accumulate down the ages.
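
The dynamics gestured at here can be illustrated with a minimal toy simulation of biased cultural transmission. This sketch is not a model from the book; the function simulate, the 5 percent adoption bias, the population size, and the generation count are all illustrative assumptions. Two norms compete, one affect-backed and one affect-neutral; each generation, learners copy a randomly chosen cultural model, with a slight extra chance of ending up with the affect-backed norm.

import random

def simulate(pop_size=1000, generations=30, bias=0.05, start_freq=0.5, seed=1):
    # Toy model of biased cultural transmission (illustrative parameters only).
    # Each individual holds one of two competing norms:
    #   'A' -- an affect-backed norm (slightly more likely to be taken up)
    #   'B' -- an affect-neutral rival.
    # Each generation, every learner copies a randomly chosen cultural model,
    # with a small extra probability (bias) of ending up with 'A' --
    # a crude stand-in for affective resonance.
    rng = random.Random(seed)
    population = ['A' if rng.random() < start_freq else 'B' for _ in range(pop_size)]
    history = [population.count('A') / pop_size]
    for _ in range(generations):
        new_population = []
        for _ in range(pop_size):
            adopted = rng.choice(population)            # unbiased copying ...
            if adopted == 'B' and rng.random() < bias:  # ... nudged toward the affect-backed norm
                adopted = 'A'
            new_population.append(adopted)
        population = new_population
        history.append(population.count('A') / pop_size)
    return history

if __name__ == "__main__":
    # A 5 percent per-generation edge, starting from parity, over 30 generations
    # (roughly 750 years at 25 years per generation).
    trajectory = simulate()
    for gen in range(0, len(trajectory), 5):
        print(f"generation {gen:2d}: frequency of affect-backed norm = {trajectory[gen]:.2f}")

The only point of the sketch is the qualitative shape of the trajectory: no individual needs to choose a norm for its emotional fit in order for a small, recurring adoption bias to carry that norm to prominence over historical time.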

Before closing this section, it is worth recalling the significant forces at work to help secure affect-backed norms in the culture. We can look back to the psychological research on normative judgment to see some of the ways in which affect-backed norms are more psychologically salient. First, affect-backed normative rules will be more memorable. More significantly, in the domains of both morals and manners, violations of norms that prohibit intrinsically upsetting actions are treated as especially impermissible, more seriously wrong, and less contingent on authority than violations of affect-neutral norms. Indeed, people who lack the relevant emotion tend to regard these violations as less serious and less authority independent. In the etiquette case, subjects with low disgust response were more likely to regard disgust-backed violations as less serious and less authority independent (Nichols 2002b). In the moral case, subjects with low responsiveness to suffering are also more likely to regard harmful violations as less authority independent (Blair 1997). As a result, the affective response seems to play a major role in determining the strength of one's normative commitments. In addition, affect-backed norms are treated as having justifications that go beyond the conventional: subjects are less likely to appeal to societal norms in explaining why the prohibited actions are wrong. These psychological facts make for a compelling case that affect-backed norms will enjoy a considerable advantage in the struggle for cultural success. People care more deeply about affect-backed norms, and that will make them more outspoken and impassioned advocates for those norms. Perhaps more important, it will be easier to convince someone that an action is wrong if that action is easily regarded as affectively offensive. Against this background, it would be surprising if affect-backed norms failed to show cultural resilience.

6. Comparison with Alternatives

In the previous section, I set out the explanation of moral evolution provided by the Affective Resonance hypothesis. This explanation provides (p.160) an alternative to the moral realist proposal. Now it is time to consider in a bit more detail whether Affective Resonance provides a better explanation than moral realism. After all, the form of argument under investigation is inference to the best explanation.

First, however, a word is in order about a more general worry: why did the more inclusive and antiviolent norms emerge so recently in Western history? This is an interesting question, and again the work in social history is revealing. In the case of cruelty to animals, Thomas maintains that one crucial enabling condition was that animals were no longer threatening (Thomas 1983, 273–74). Humans had effectively eliminated the animals that competed for resources or posed a more violent threat to human well-being. By the sixteenth century, for instance, wolves had been eradicated from England. This, according to Thomas, made it possible for attitudes about animals to shift towards the more inclusive norms we know today. In the case of corporal punishment, the situation is rather complicated, but some maintain that the new forms of punishment, and in particular, the penitentiary, provided a viable alternative to corporal punishment (e.g., Ignatieff 1978; see also Emsley 1987). So it was only because an alternative emerged that people were able to rid themselves of corporal punishment. The details here are rich and fascinating, but for present purposes, the important point is just that it is likely that there were important historical factors that enabled the moral evolution we've charted. Both the Affective Resonance account and the realist account can comfortably acknowledge the importance of an enabling background for the evolution of norms.

6.1. Moral Realism Version 1

The simple appeal to moral facts is an interesting explanation of moral evolution, especially if the only alternative is the historical accident explanation. If one has to choose between moral realism and historical accident, then realism certainly seems like a better explanation of the patterns. One can simply say, as moral realists do, that the best explanation for the evolution is that people are getting a clearer picture of the moral facts. Of course, there is an important sense in which this is still a thin explanation. For without an independently established story about moral facts, the simple realist story does not generate specific predictions about the direction of moral evolution. What one needs to know antecedently is, what are the moral facts? The trouble is, of course, that there is no generally accepted story about moral facts. That's part of the reason, after all, that realists appeal to moral progress—to shore up their claim that there are moral facts. As a result, the simple realist story provides only (p.161) the barest sketch of an explanation—if there were moral facts, then that would explain why there are robust trends. By contrast, the Affective Resonance story above offers a much thicker explanation of the pattern. It makes a broad range of predictions about what sorts of norms will likely succeed—norms that prohibit actions that are likely to elicit negative affect are more likely to succeed. This kind of explanatory thickness carries a danger, for it is much easier to disconfirm a theory that makes more specific predictions. If the specific predictions fail, then the Affective Resonance account is in trouble. However, the predictions of the Affective Resonance account apparently do not fail. The historical trend seems rather to confirm the predictions made by the Affective Resonance account, and this suggests that the account has a major explanatory advantage over the simple realist explanation. It is a commonplace in philosophy of science that, all else being equal, a theory with greater predictive success and explanatory depth is to be preferred. On these grounds, the Affective Resonance account is to be preferred to the realist account. For the realist account only explains why we find moral evolution, whereas the Affective Resonance account explains both why we find moral evolution and why it takes the course it does.

None of the foregoing, of course, shows that morality is not objective or that the simple realist theory is wrong. Rather, the point is just that the Affective Resonance account provides a better explanation for moral evolution without invoking a role for moral facts. The simple appeal to moral facts does not seem to be the best explanation for the kind of moral evolution that has been our focus.9 To make a case against moral realism would obviously require further argument. In the final chapter, I will exploit features of the Affective Resonance account to sketch a Humean argument against one kind of moral realism. First, however, we need to turn to a rather different approach to explaining moral evolution through moral realism.

6.2. Moral Realism Version 2

In the philosophical literature, one prominent realist explanation of moral evolution manifestly does not suffer from a lack of predictive specificity. (p.162) That explanation comes from Peter Railton. Railton argues that the characteristic evolution of norms is explained by “social rationality,” and he makes the explicit prediction that norms will evolve in ways that minimize the risk of social unrest. This proposal has a number of virtues, and what I hope to show in the following is that, although it may tell part of the story, it also fails to tell an important part of the story about the evolution of harm norms.

Railton writes, “Moral norms reflect a certain kind of rationality, rationality not from the point of view of any particular individual, but from what might be called a social point of view” (1986, 190). In particular, the norms of a culture move in the direction that will alleviate risks of social unrest: “A social arrangement . . . that departs from social rationality by significantly discounting the interests of a particular group [will] have a potential for dissatisfaction and unrest” (191). Railton elaborates on this notion of the potential for unrest: “The potential for unrest that exists when the interests of a group are discounted is potential for pressure from that group—and its allies—to accord fuller recognition to their interests in social decision-making and in the socially instilled norms that govern individual decision making. It therefore is pressure to push the resolution of conflicts further in the direction required by social rationality, because it is pressure to give fuller weight to the interests of more of those affected” (193). As Railton notes, this generates the following, rough, prediction: “one could expect an uneven secular trend toward the inclusion of the interests of (or interests represented by) social groups that are capable of some degree of mobilization” (194–95). Railton goes on to note that this prediction is borne out by “patterns in the evolution of moral norms” (197). For instance, this account predicts that moral norms will increase in generality (197), which is, of course, one of the central patterns that we hope to explain.

Railton's proposal is intriguing. First, it does not suffer from the lack of predictive specificity that afflicts other realist accounts. And there can be no doubt that social unrest has played a vital role in shaping the norms that we have. This is probably true for both moral and nonmoral norms. The proposal also helps to reinforce the point that, just as there are plausibly multiple origins for moral norms, there are plausibly multiple factors influencing the evolution of moral norms. However, I think that there are strict limitations on the extent to which the social unrest model can explain the phenomena at hand.

Before setting out its limitations, I want to address Railton's formulation of the prediction. Railton's prediction is that there will be a trend toward “the inclusion of the interests of (or interests represented by) social groups that are capable of some degree of mobilization.” The parenthetical comment here dilutes the hypothesis to such an extent that it no longer makes any distinctive predictions. For basically any interests can be represented by social groups capable of mobilization. The substantive prediction emerges when we focus on the interests of the social group itself. And that, indeed, is what Railton appeals to when he adduces support for his proposal.

Although the social unrest theory surely explains some moral evolution, it is limited in several ways. First, I want to advert to a point raised by Railton himself. He notes that in work on the social history of unrest, “a common theme . . . is that much social unrest is re-vindicative rather than revolutionary, since the discontent of long-suffering groups often is galvanized into action only when customary entitlements are threatened or denied. The overt ideologies of such groups thus frequently are particularistic and conservative, even as their unrest contributes to the emergence of new social forms that concede greater weight to previously discounted interests” (193n33). If this is right, then it leaves an important gap in the social unrest story. If the risk of social unrest could be alleviated merely by restoring recently lost entitlements, why does the unrest end up having such pervasive effects on our norms? If the long-suffering groups would be satisfied by mere restoration, why do the social reforms typically go beyond this? The social unrest story does not immediately provide an answer to this question.

A related limitation of Railton's account is that some moral evolution leads to norms that apply to groups that pose no serious threat from social unrest. As Railton notes, the moral evolution seems to result in extending the harm norms to include the entire species (197). But in most cases, we are at no serious risk of unrest from, say, the !Kung or the Yanomamö. So why do we extend our norms to include so many individuals who pose no threat to our society? The social unrest story lapses here. By contrast, the Affective Resonance account provides a plausible explanation. Our promiscuous responsiveness to cues of harm in others (and even thoughts about harm in others) explains why harm norms would become more inclusive than is required by social rationality.

In addition, as we saw above, the norms prohibiting harm have expanded to include norms prohibiting cruelty to animals. These norms have, like norms prohibiting harming outgroupers, grown in prominence and acceptance over the last several hundred years. The social unrest model clearly provides no explanation of this phenomenon, since nonhuman animals are no longer in any position to threaten us. Indeed, as noted above, Thomas maintains that it is partly because animals were in no position to threaten us that the norms prohibiting cruelty took hold. Of course, while the social unrest proposal stumbles on norms prohibiting cruelty to animals, the Affective Resonance account provides a natural explanation. Suffering in animals excites negative affect in humans, and as a result, norms prohibiting actions that cause suffering in animals will have an edge in cultural fitness.

Neither is the other historical pattern that we considered in detail, the decline of corporal punishment, easily explained by the social unrest account. For in many cases, the individuals who were subjected to the punishment were presumably in no position to foment unrest. For instance, the gradual elimination of corporal punishment from executions is not easily explained by the condemned party's likelihood of inciting social unrest. By contrast again, the Affective Resonance account does explain this trend. Corporal punishment will prompt serious negative affect, and norms that prohibit such punishment will be more likely to survive than other sorts of norms. Thus, there is a range of cases central to the pattern of the evolution of harm norms that cannot be easily accommodated by Railton's model, but that are explained by the Affective Resonance account.

Finally, consider again the striking fact, discussed in the previous chapter, that the norms we happen to have are often tightly coordinated with the emotions we happen to have. The Affective Resonance account provides a broad-based explanation for this tight fit between emotions and norms. The reason so many of our norms fit so well with our emotions is, in part, that our emotions play a crucial historical role in securing the norms. The social unrest story, of course, provides no explanation for the evident connection between emotions and norms. As such, it is at best a quite incomplete story about the evolution of norms.

7. Conclusion

In this chapter, I have tried to extend the Affective Resonance account of the cultural evolution of norms to the particular case of harm norms. There are two striking facts about harm norms that need to be explained by a genealogical account: harm norms are culturally ubiquitous, and they seem to exhibit a characteristic pattern of evolution. The ubiquity and evolution of harm norms are often explained by appeal to moral facts—through reason we come to recognize the fact that it is wrong to harm others. I have argued that the Affective Resonance account provides a better explanation of the phenomena than the appeal to moral facts. For Affective Resonance provides a richer explanation of the distinctive character of the evolution of harm norms.

The realist approach to moral evolution, it should be noted, is typically supposed to provide both an explanation and a justification for why we have the norms we do. We have the norms we do because reason leads us to the right norms. This both explains why we have the norms we do and implies that we are justified in embracing them. By contrast, the Affective Resonance account is intended only as an explanation for why we have the harm norms we do. The account harbors no pretensions about providing a justification for our embracing those norms. The Affective Resonance approach is essentially descriptive and historical, rather than prescriptive and justificatory. Indeed, in the next chapter, I will argue that the Affective Resonance account might play a part in an argument that moral judgments lack the kind of justification sought by some moral realists.

Notes:

(1.) In the aftermath of recent terrorist attacks, there has actually been renewed support for torture in the United States (e.g., Dershowitz 2002). Of course, this does not undermine the central claim of interest. We are considering broad historical trends, and those trends will obviously be affected by a number of factors, including religion and perceived threats to the well-being of one's community.

(2.) One would like to have cross-cultural evidence of the trends in moral evolution. As far as I can tell, the moral evolution enthusiasts mentioned above (Singer, Railton, Brink, Nagel, and Smith) do not have any evidence to adduce, and neither do I. It is thus a substantive assumption for all of us that, ceteris paribus, other cultures will show similar historical trends in moral evolution.

To be sure, the kind of cross-cultural evidence that would be most useful is not easy to obtain. Ideally, we would like to have evidence on moral evolution in other cultures in the absence of Western influences, or at least in the absence of Western Enlightenment influences. For we want to see whether there is something like convergent moral evolution. For this, we need to look to non-Western civilizations that have an historical record dating back well before Western Enlightenment influences on the culture. China, in fact, provides a promising case. For China had a thriving civilization for many centuries before the Western Enlightenment. In China, blood sports never achieved the prominence that they did in Europe. Neither did the Chinese ever have a tradition of chattel slavery. Hence we cannot look for changes along these dimensions. The Chinese do, of course, have a history of penal practices (as does every civilization), and the record here parallels the kind of evolution we find in the Western penal system. Mutilation, including branding on the forehead, cutting off the feet and cutting off the nose, was a common form of punishment before the Han dynasty (206 BCE–220 CE). But beginning with the Han, mutilation punishments gradually fell into disuse, and they were not reinstated (McKnight 1992, 331). Of course, that is a rather breezy gesture at a vast topic. To explore systematically the cross-cultural similarities and differences in moral evolution would be extremely difficult and extremely valuable.

(3.) As noted in chapter 6, evolutionary ethics provides a different approach to explaining why we have harm norms. In the critical literature that has arisen around evolutionary ethics, most of the attention has focused on whether evolutionary ethics can tell us what norms we ought to embrace. There has been considerably less critical attention to the issue of interest here—why we have the norms we do. However, even this part of the evolutionary ethics view has not attracted many followers, and there is at least one prominent opponent. In a widely cited paper, Francisco Ayala explicitly rejects the idea that the specific moral norms we have are evolutionary adaptations. In setting out his objections, he appeals to both the diversity of moral norms and the evolution of moral norms: “[M]oral norms differ from one culture to another and even ‘evolve’ from one time to another. Many people see nowadays that the Biblical injunction: ‘Be fruitful and multiply’ has been replaced by a moral imperative to limit the number of one's children. No genetic change in human population accounts for this inversion of moral value” (Ayala 1987, 250). Ayala thus maintains, sensibly, that it is implausible to explain the diversity and evolution of moral norms in terms of genetic changes. By itself, this does not show that no specific moral norms are adaptations. An advocate of evolutionary ethics might respond that he only seeks to maintain that ubiquitous moral norms are adaptations. However, one obvious way to fill out Ayala's argument is to note that we need some explanation of the diversity and evolution of moral norms. That explanation will, according to Ayala, appeal to cultural forces (1987, 1995). And we might expect that the cultural explanation of diversity and change will also extend to explain such ubiquity as there is. Hence, at this point, there is no reason to think that the best explanation of the ubiquity of harm norms is that the ubiquitous norms are adaptations.

(4.) Peter Railton (1986) also draws on moral evolution to defend a kind of moral realism. However, Railton's approach is somewhat different from the approaches offered by Brink and Sturgeon, so I put Railton's account to the side for present purposes. I will address his account in section 6.

(5.) This description of moral realism is broad enough to include very different forms of moral realism. For instance, a moral realist might maintain that the moral facts are grounded in human capacities and so the facts are species specific. A moral realist might alternatively make the stronger claim that moral facts, like “it is wrong to torture puppies,” are entirely independent of human capacities.

(6.) Slote maintains that his argument supports the view that we have the ability “to make rational, reasonable, objective aesthetic value judgment” (Slote 1971, 821). Elsewhere, Slote expresses cautious attraction to moral progress as well (Slote 1982).

(7.) Thanks to John Doris for drawing my attention to this study.

(8.) There is a delicate issue, of course, about when a norm counts as moral. I have assiduously avoided trying to define the moral domain, and I do not intend to change course now. In the present case, we want to look at norms that the culture treated as moral, at least in some salient respects, but that are not connected to core emotions. Perhaps some would maintain that such norms only count as moral in an inverted-commas sense; but we still might consider the extent to which such ‘moral’ norms survive.

(9.) As Ron Mallon has pointed out to me, moral realists might adopt the following compatibilist line:

Affective Resonance is a mechanism by which we progress to the moral facts.

For present purposes, I want to remain neutral on this compatibilist position. I only want to maintain that, at least prima facie, the compatibilist's appeal to moral facts is an ancillary hypothesis that does not obviously contribute to the explanation of moral evolution. The Affective Resonance account itself makes no appeal to moral facts. If Affective Resonance does explain the moral evolution charted above, then, unless there is an independent argument for moral facts, the explanation of moral evolution need not invoke such facts. Of course, if there is an independently persuasive argument for moral facts, then the kind of compatibilism Mallon suggests might be welcome.