Primate Neuroethology

Michael Platt and Asif Ghazanfar

Print publication date: 2010

Print ISBN-13: 9780195326598

Published to Oxford Scholarship Online: February 2010

DOI: 10.1093/acprof:oso/9780195326598.001.0001

The Foundations of Transdisciplinary Behavioral Science

Chapter:
Chapter 9 The Foundations of Transdisciplinary Behavioral Science
Source:
Primate Neuroethology
Author(s):

Herbert Gintis

Publisher:
Oxford University Press
DOI: 10.1093/acprof:oso/9780195326598.003.0009

Abstract and Keywords

This chapter reviews contemporary models of human behavior in various fields, including economics, biology, anthropology, sociology, and neuroscience. It shows that although the core theoretical constructs of the various behavioral disciplines currently include mutually contradictory principles, progress over the past couple of decades has generated the instruments necessary to resolve the interdisciplinary contradictions.

Keywords: human behavior, behavioral science, interdisciplinary studies

Introduction

The behavioral sciences include economics, biology, anthropology, sociology, psychology, and political science, as well as their subdisciplines, including neuroscience, archaeology and paleontology, and, to a lesser extent, such related disciplines as history, legal studies, and philosophy. These disciplines have many distinct concerns, but each includes a model of individual human behavior. These models are not only different, which is to be expected given their distinct explanatory goals, but also incompatible. This situation is well known, yet it seems to cause behavioral scientists little discomfort, for there has been virtually no effort to repair it.

The behavioral sciences all include models of individual human behavior. These models should therefore be compatible; indeed, there should be a common underlying model, enriched in different ways to meet the particular needs of each discipline. This goal cannot at present be easily attained, since the various behavioral disciplines currently have incompatible models. Yet recent theoretical and empirical developments have created the conditions for rendering coherent the areas of overlap of the various behavioral disciplines, as outlined in this chapter. The analytical tools deployed in this task incorporate core principles from several behavioral disciplines.

Evolutionary Perspective

Evolutionary biology underlies all behavioral disciplines because Homo sapiens is an evolved species whose characteristics are the product of its particular evolutionary history. For humans, evolutionary dynamics are captured by gene-culture coevolution. The centrality of culture and complex social organization to the evolutionary success of Homo sapiens implies that individual fitness in humans will depend on the structure of cultural life. Since culture is obviously influenced by human genetic propensities, it follows that human cognitive, affective, and moral capacities are the product of a unique dynamic of gene-culture interaction. This coevolutionary process has endowed us with preferences that go beyond the self-regarding concerns emphasized in traditional economic and biological theory and embrace such non–self-regarding values as a taste for cooperation, fairness, and retribution; the capacity to empathize; and the ability to value such constitutive behaviors as honesty, hard work, toleration of diversity, and loyalty to one’s reference group.

Evolutionary Game Theory

The analysis of living systems includes one concept that does not occur in the nonliving world, and is not analytically represented in the natural sciences. This is the notion of a strategic interaction, in which the behavior of agents is derived by assuming that each is choosing a fitness-relevant response to the actions of other agents. The study of systems in which agents choose fitness-relevant responses and in which such responses evolve dynamically is called evolutionary game theory. Game theory provides a transdisciplinary conceptual basis for analyzing choice in the presence of strategic interaction. However, the classical game theoretic assumption that agents are self-regarding must be abandoned except in specific situations (e.g., anonymous market interactions), and many characteristics that classical game theorists have considered deductions from the principles of rational behavior, including the use of backward induction, are in fact not implied by rationality. Evolutionary game theory, whose equilibrium concept is that of a stable stationary point of a dynamic system, must thus replace classical game theory, which erroneously favors subgame perfection and sequentiality as equilibrium concepts.

The Beliefs, Preferences, and Constraints (BPC) Model

General evolutionary principles suggest that individual decision making can be modeled as optimizing a preference function subject to informational and material constraints. Natural selection leads the content of preferences to reflect biological fitness. The principle of expected utility extends this optimization to stochastic outcomes. The resulting model is called the rational actor model in economics, but I will generally refer to this as the beliefs, preferences, and constraints (BPC) model to avoid the often misleading connotations attached to the term “rational.”

Society as Complex Adaptive System

The behavioral sciences advance not only by developing analytical and quantitative models but also by accumulating historical, descriptive, and ethnographic evidence that pays close attention to the detailed complexities of life in the sweeping array of wondrous forms that nature reveals to us. This situation is in sharp contrast with the natural sciences, which have found little use for narrative alongside analytical modeling. For many researchers working on sociological, anthropological, ecological, and even biological topics, by contrast, historical contingency is a primary focus of analysis and causal explanation.

The reason for this contrast between the natural and the behavioral sciences is that living systems are generally complex, dynamic adaptive systems with emergent properties that cannot be fully captured in analytical models that attend only to the local interactions of the system. The hypothetico-deductive methods of game theory, the BPC model, and even gene-culture coevolutionary theory must therefore be complemented by the work of behavioral scientists who adhere to more empiricist and particularist traditions. For instance, cognitive anthropology interfaces with gene-culture coevolution and the BPC model by enhancing their capacity to model culture at a level of sophistication that fills in the black box of the physical instantiation of culture in coevolutionary theory.

A complex system consists of a large population of similar entities (in our case, human individuals) who interact through regularized channels (e.g., networks, markets, social institutions) with significant stochastic elements, without a system of centralized organization and control (i.e., if there is a state, it controls only a fraction of all social interactions, and is itself a complex system). A complex system is adaptive if it evolves through some evolutionary (genetic, cultural, agent-based silicon, or other) process of hereditary reproduction, mutation, and selection (Holland, 1975). To characterize a system as complex adaptive does not explain its operation or solve any problems. However, it suggests that certain modeling tools that have little use in noncomplex systems are likely to be effective here. In particular, the traditional mathematical methods of physics and chemistry must be supplemented by other modeling tools, such as agent-based simulation and network theory, as sketched below.
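
To make this concrete, here is a minimal agent-based sketch in Python of the kind of system just described: many similar agents, stochastic pairwise interaction, no central controller, and evolution by imitation and mutation. The payoff numbers, matching rule, and update rule are illustrative assumptions, not a model drawn from this chapter.

```python
import random

# Minimal agent-based sketch: a population of similar agents, stochastic
# pairwise interaction, no central controller, and evolution by imitation
# (selection) plus mutation. All numbers here are illustrative assumptions.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def run(n_agents=100, rounds=200, mutation_rate=0.01, seed=1):
    rng = random.Random(seed)
    strategies = [rng.choice("CD") for _ in range(n_agents)]
    for _ in range(rounds):
        scores = [0.0] * n_agents
        order = list(range(n_agents))
        rng.shuffle(order)                      # random pairwise matching
        for i, j in zip(order[::2], order[1::2]):
            scores[i] += PAYOFF[(strategies[i], strategies[j])]
            scores[j] += PAYOFF[(strategies[j], strategies[i])]
        new = strategies[:]
        for i in range(n_agents):               # imitate a better-scoring agent
            j = rng.randrange(n_agents)
            if scores[j] > scores[i]:
                new[i] = strategies[j]
            if rng.random() < mutation_rate:    # rare random strategy change
                new[i] = rng.choice("CD")
        strategies = new
    return strategies.count("C") / n_agents

print("final share of cooperators:", run())
```

The population-level outcome (here, the share of cooperators) is an emergent property of local interactions; nothing in any single agent’s rule fixes it in advance, which is the point of the complex-adaptive-systems framing.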

Such novel research tools are needed because a complex adaptive system generally has emergent properties that cannot be analytically derived from its component parts. The stunning success of modern physics and chemistry lies in their ability to avoid or strictly limit emergence. Indeed, the experimental method in natural science is to create highly simplified laboratory conditions, under which modeling becomes analytically tractable. Physics is no more effective than economics or biology in analyzing complex real-world phenomena in situ. The various branches of engineering (electrical, chemical, mechanical) are effective because they recreate in everyday life artificially controlled, noncomplex, nonadaptive environments in which the discoveries of physics and chemistry can be directly applied. This option is generally not open to most behavioral scientists, who rarely have the opportunity of “engineering” social institutions and cultures.

In addition to these conceptual tools, the behavioral sciences of course share common access to the natural sciences, statistical and mathematical techniques, computer modeling, and a common scientific method.

Evolutionary Perspective

A replicator is a physical system capable of drawing energy and chemical building blocks from its environment to make copies of itself. Chemical crystals, such as salt, have this property, but biological replicators have the additional ability to assume myriad physical forms based on the highly variable sequencing of their chemical building blocks (Schrödinger called life an “aperiodic crystal” in 1943, before the structure of DNA was discovered). Biology studies the dynamics of such complex replicators using the evolutionary concepts of replication, variation, mutation, and selection (Lewontin, 1974).

Biology plays a role in the behavioral sciences much like that of physics in the natural sciences. Just as physics studies the elementary processes that underlie all natural systems, biology studies the general characteristics of survivors of the process of natural selection. In particular, genetic replicators, the epigenetic environments to which they give rise, and the effect of these environments on gene frequencies account for the characteristics of species, including the development of individual traits and the nature of intraspecific interaction. This does not mean, of course, that behavioral science in any sense reduces to biological laws. Just as one cannot deduce the character of natural systems (e.g., the principles of inorganic and organic chemistry, the structure and history of the universe, robotics, plate tectonics) from the basic laws of physics, similarly, one cannot deduce the structure and dynamics of complex life forms from basic biological principles. But, just as physical principles inform model creation in the natural sciences, so must biological principles inform all the behavioral sciences.

The Foundations of the BPC Model

For every constellation of sensory inputs, each decision taken by an organism generates a probability distribution over fitness outcomes, the expected value of which is the fitness associated with that decision. Since fitness is a scalar variable, for each constellation of sensory inputs, each possible action the organism might take has a specific fitness value, and organisms whose decision mechanisms are optimized for this environment will choose the available action that maximizes this value. It follows that, given the state of its sensory inputs, if an organism with an optimized brain chooses action A over action B when both are available, and chooses action B over action C when both are available, then it will also choose action A over action C when both are available. This is called choice consistency.
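
As an illustration of why a scalar fitness measure forces consistency, the following sketch (with hypothetical fitness values) shows that choosing by maximizing a single number automatically yields transitive choices:

```python
# Choice consistency from a scalar fitness measure: if each available action
# has a single fitness value and the organism picks the maximum, choices are
# automatically transitive. The fitness numbers are hypothetical.
fitness = {"A": 3.0, "B": 2.0, "C": 1.0}

def choose(*available):
    """Pick the available action with the highest fitness value."""
    return max(available, key=fitness.get)

assert choose("A", "B") == "A"
assert choose("B", "C") == "B"
assert choose("A", "C") == "A"   # A over C follows from the scalar ordering
```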

The so-called rational actor model was developed in the twentieth century by John von Neumann, Leonard Savage, and many others. The model appears prima facie to apply only when actors possess extremely strong information-processing capacities. However, the model in fact depends only on choice consistency and the assumption that agents can trade off among outcomes in the sense that for any finite set of outcomes A1, …, An, if A1 is the least preferred and An the most preferred outcome, then for any Ai with 1 ≤ i ≤ n there is a probability pi with 0 ≤ pi ≤ 1 such that the agent is indifferent between Ai and a lottery that pays A1 with probability pi and pays An with probability 1 − pi (Kreps, 1990). Clearly, these assumptions are often extremely plausible. When applicable, the rational actor model’s choice consistency assumption strongly enhances explanatory power, even in areas that have traditionally abjured the model (Coleman, 1990; Hechter & Kanazawa, 1997; Kollock, 1997).
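
A short sketch of this trade-off assumption, under expected utility and hypothetical utility values: the indifference probability pi solves u(Ai) = pi·u(A1) + (1 − pi)·u(An).

```python
# The trade-off assumption: for outcome A_i there is a probability p_i making
# the agent indifferent between A_i and a lottery paying the worst outcome A_1
# with probability p_i and the best outcome A_n with probability 1 - p_i.
# Under expected utility, u(A_i) = p_i*u(A_1) + (1 - p_i)*u(A_n), hence:
def indifference_prob(u_i, u_worst, u_best):
    return (u_best - u_i) / (u_best - u_worst)

u = [0.0, 2.0, 10.0]              # hypothetical utilities for A_1 < A_2 < A_3
p = indifference_prob(u[1], u[0], u[2])
print(p)                          # 0.8: A_2 ~ (80% chance of A_1, 20% of A_3)

# Sanity check: the lottery's expected utility equals u(A_2).
assert abs(p * u[0] + (1 - p) * u[2] - u[1]) < 1e-12
```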

In short, when preferences are consistent, they can be represented by a numerical function, often called a utility function, which the individual maximizes subject to his or her beliefs (including Bayesian probabilities) and constraints. Four caveats are in order. First, this analysis does not suggest that people consciously maximize something called “utility,” or anything else. Second, the model does not assume that individual choices, even if they are self-referring (e.g., personal consumption), are always welfare enhancing. Third, preferences must be stable across time to be theoretically useful, but preferences are ineluctably a function of such parameters as hunger, fear, and recent social experience, while beliefs can change dramatically in response to immediate sensory experience. Finally, the BPC model does not presume that beliefs are correct or that they are updated correctly in the face of new evidence, although Bayesian assumptions concerning updating can be made part of consistency in elegant and compelling ways (Jaynes, 2003).

The rational actor model is the cornerstone of contemporary economic theory, and in the past few decades has become the cornerstone of the biological modeling of animal behavior (Alcock, 1993; Real, 1991; Real & Caraco, 1986). Economic and biological theory thus have a natural affinity: The choice consistency on which the rational actor model of economic theory depends is rendered plausible by biological evolutionary theory, and the optimization techniques pioneered by economic theorists are routinely applied and extended by biologists in modeling the behavior of a vast array of organisms.

In addition to the explanatory success of theories based on the rational actor model, supporting evidence from contemporary neuroscience suggests that expected utility maximization is not simply an “as if” story. In fact, the brain’s neural circuitry actually makes choices by internally representing the payoffs of various alternatives as neural firing rates and choosing a maximal such rate (Glimcher, 2003; Glimcher et al., 2005). Neuroscientists increasingly find that an aggregate decision-making process in the brain synthesizes all available information into a single, unitary value (Glimcher, 2003; Parker & Newsome, 1998; Schall & Thompson, 1999). Indeed, when animals are tested in a repeated trial setting with variable reward, dopamine neurons appear to encode the difference between the reward that an animal expected to receive and the reward that an animal actually received on a particular trial (Schultz et al., 1997; Sutton & Barto, 2000), an evaluation mechanism that enhances the environmental sensitivity of the animal’s decision-making system. This error-prediction mechanism has the drawback of only seeking local optima (Sugrue et al., 2005). Montague and Berns (2002) address this problem, showing that the orbitofrontal cortex and striatum contain a mechanism for more global predictions that include risk assessment and discounting of future rewards. Their data suggest a decision-making model that is analogous to the famous Black-Scholes options pricing equation (Black & Scholes, 1973).
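
The reward-prediction-error signal described above can be sketched in a few lines. This is a schematic of the general idea, not the cited models; the learning rate and the reward stream are assumptions for illustration.

```python
# Schematic reward-prediction-error update: the value estimate moves by a
# fraction of the gap between received and expected reward.
def update_value(value, reward, learning_rate=0.1):
    prediction_error = reward - value       # the dopamine-like error signal
    return value + learning_rate * prediction_error

value = 0.0
for reward in [1.0] * 50:                   # repeated trials, constant reward
    value = update_value(value, reward)
print(round(value, 3))                      # ~0.995: error shrinks toward zero
```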

The BPC model is the most powerful analytical tool of the behavioral sciences. For most of its existence this model has been justified in terms of “revealed preferences” rather than by the identification of neural processes that generate constrained optimal outcomes. The neuroscience evidence, for the first time, suggests a firmer foundation for the rational actor model.

Gene-Culture Coevolution

The genome encodes information that is used to construct a new organism, to instruct the new organism how to transform sensory inputs into decision outputs (i.e., to endow the new organism with a specific preference structure), and to transmit this coded information virtually intact to the new organism. Since learning about one’s environment is costly and error prone, efficient information transmission will ensure that the genome encodes all aspects of the organism’s environment that are constant, or that change only very slowly through time and space. By contrast, environmental conditions that vary across generations and/or in the course of the organism’s life history can be dealt with by providing the organism with the capacity to learn, and hence phenotypically adapt to specific environmental conditions.

There is an intermediate case that is not efficiently handled by either genetic encoding or learning. When environmental conditions are positively but imperfectly correlated across generations, each generation acquires valuable information through learning that it cannot transmit genetically to the succeeding generation, because such information is not encoded in the germ line. In the context of such environments, there is a fitness benefit to the transmission of epigenetic information concerning the current state of the environment. Such epigenetic information is quite common (Jablonka & Lamb, 1995) but achieves its highest and most flexible form in cultural transmission in humans and, to a considerably lesser extent, in other primates (Bonner, 1984; Richerson & Boyd, 1998). Cultural transmission takes vertical (parents to children), horizontal (peer to peer), and oblique (elder to younger) forms, as in Cavalli-Sforza and Feldman (1981); prestige-based forms (higher status influencing lower status), as in Henrich and Gil-White (2001); popularity-related forms, as in Newman and colleagues (2006); and even random population-dynamic transmission, as in Shennan (1997) and Skibo and Bentley (2003).

The parallel between cultural and biological evolution goes back to Huxley (1955), Popper (1979), and James (1880). The idea of treating culture as a form of epigenetic transmission was pioneered by Richard Dawkins, who coined the term “meme” in The Selfish Gene (Dawkins, 1976) to represent an integral unit of information that could be transmitted phenotypically. There quickly followed several major contributions to a biological approach to culture, all based on the notion that culture, like genes, could evolve through replication (intergenerational transmission), mutation, and selection (Boyd & Richerson, 1985; Cavalli-Sforza & Feldman, 1982; Lumsden & Wilson, 1981).

Cultural elements reproduce themselves from brain to brain and across time, mutate, and are subject to selection according to their effects on the fitness of their carriers (Boyd & Richerson, 1985; Cavalli-Sforza & Feldman, 1982; Parsons, 1964). Moreover, there are strong interactions between genetic and epigenetic elements in human evolution, ranging from basic physiology (e.g., the transformation of the organs of speech with the evolution of language) to sophisticated social emotions, including empathy, shame, guilt, and revenge seeking (Zajonc, 1980, 1984).

Because of their common informational and evolutionary character, there are strong parallels between genetic and cultural modeling (Mesoudi et al., 2006). Like genes, culture is transmitted from parents to offspring; and just as culture is also transmitted horizontally to unrelated individuals, so in microbes and many plant species genes are regularly transferred across lineage boundaries (Abbott et al., 2003; Jablonka & Lamb, 1995; Rivera & Lake, 2004). Moreover, anthropologists reconstruct the history of social groups by analyzing homologous and analogous cultural traits, much as biologists reconstruct the evolution of species by the analysis of shared characters and homologous DNA (Mace & Pagel, 1994). Indeed, the same computer programs developed by biological systematists are used by cultural anthropologists (Holden, 2002; Holden & Mace, 2003). In addition, archaeologists who study cultural evolution have a modus operandi similar to that of paleobiologists who study genetic evolution (Mesoudi et al., 2006): Both attempt to reconstruct lineages of artifacts and their carriers. Like paleobiology, archaeology assumes that when analogy can be ruled out, similarity implies causal connection by inheritance (O’Brien & Lyman, 2000). Like biogeography’s study of the spatial distribution of organisms (Brown & Lomolino, 1998), behavioral ecology studies the interaction of ecological, historical, and geographical factors that determine the distribution of cultural forms across space and time (Smith & Winterhalder, 1992).

Perhaps the most common critique of the analogy between genetic and cultural evolution is that the gene is a well-defined, discrete, independently reproducing and mutating entity, whereas the boundaries of the unit of culture are ill-defined and overlapping. In fact, however, this view of the gene is simply outdated. Overlapping, nested, and movable genes discovered in the past 35 years have some of the fluidity of cultural units, whereas quite often the boundaries of a cultural unit (a belief, icon, word, technique, stylistic convention) are quite delimited and specific. Similarly, alternative splicing, nuclear and messenger RNA editing, cellular protein modification, and genomic imprinting, which are quite common, undermine the standard view of the insular gene producing a single protein and support the notion of genes having variable boundaries and having strongly context-dependent effects.

Dawkins added a second fundamental mechanism of epigenetic information transmission in The Extended Phenotype (Dawkins, 1982), noting that organisms can directly transmit environmental artifacts to the next generation, in the form of such constructs as beaver dams, bee hives, and even social structures (e.g., mating and hunting practices). The phenomenon of a species creating an important aspect of its environment and stably transmitting this environment across generations, known as niche construction, is a widespread form of epigenetic transmission (Odling-Smee et al., 2003). Moreover, niche construction gives rise to what might be called a gene-environment coevolutionary process, since a genetically induced environmental regularity becomes the basis for genetic selection, and genetic mutations that give rise to mutant niches will survive if they are fitness enhancing for their constructors. The analysis of the reciprocal action of genes and culture is known as gene-culture coevolution (Bowles & Gintis, 2005; Durham, 1991; Feldman & Zhivotovsky, 1992; Lumsden & Wilson, 1981).

Neuroscientific studies clearly exhibit the genetic basis for moral behavior. Brain regions involved in moral judgments and behavior include the prefrontal cortex, the orbitofrontal cortex, and the superior temporal sulcus (Moll et al., 2005). These brain structures are virtually unique to or most highly developed in humans and are doubtless evolutionary adaptations (Schulkin, 2000). The evolution of the human prefrontal cortex is closely tied to the emergence of human morality (Allman et al., 2002). Patients with focal damage to one or more of these areas exhibit a variety of antisocial behaviors, including the absence of embarrassment, pride, and regret (Beer et al., 2003; Camille, 2004), and sociopathic behavior (Miller et al., 1997). There is likely a genetic predisposition underlying sociopathy: sociopaths comprise 3% to 4% of the male population, but they account for between 33% and 80% of the population of chronic criminal offenders in the United States (Mednick et al., 1977).

It is clear from this body of empirical information that culture is directly encoded into the human brain, which of course is the central claim of gene-culture coevolutionary theory.

Game Theory: The Universal Lexicon of Life

In the BPC model, choices give rise to probability distributions over outcomes, the expected values of which are the payoffs to the choice from which they arose. Game theory extends this analysis to cases where there are multiple decision makers. In the language of game theory, players (or agents) are endowed with a set of available strategies and have certain information concerning the rules of the game, the nature of the other players and their available strategies, and the structure of payoffs. Finally, for each combination of strategy choices by the players, the game specifies a distribution of individual payoffs to the players. Game theory attempts to predict the behavior of the players by assuming that each maximizes its preference function subject to its information, beliefs, and constraints (Kreps, 1990).
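
The elements just listed (strategy sets, payoffs over strategy profiles, and maximization given the other players) can be written down directly. The sketch below encodes an illustrative 2×2 Prisoner’s Dilemma, an assumption chosen for concreteness, and searches for profiles of mutual best responses (pure-strategy Nash equilibria):

```python
from itertools import product

# Strategy sets, payoffs per strategy profile, and a search for profiles of
# mutual best responses (pure Nash equilibria). The 2x2 Prisoner's Dilemma
# payoffs are an illustrative assumption.
STRATEGIES = ["C", "D"]
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}  # (row payoff, col payoff)

def is_nash(row, col):
    row_best = all(PAYOFFS[(row, col)][0] >= PAYOFFS[(r, col)][0]
                   for r in STRATEGIES)
    col_best = all(PAYOFFS[(row, col)][1] >= PAYOFFS[(row, c)][1]
                   for c in STRATEGIES)
    return row_best and col_best

print([p for p in product(STRATEGIES, repeat=2) if is_nash(*p)])  # [('D', 'D')]
```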

Game theory is a logical extension of evolutionary theory. To see this, suppose there is only one replicator, deriving its nutrients and energy from nonliving sources (the sun, the earth’s core, amino acids produced by electrical discharge, and the like). The replicator population will then grow at a geometric rate, until it presses upon its environmental inputs. At that point, mutants that exploit the environment more efficiently will out-compete their less efficient conspecifics, and with input scarcity, mutants will emerge that “steal” from conspecifics who have amassed valuable resources. With the rapid growth of such predators, mutant prey will devise means of avoiding predation, and predators will counter with their own novel predatory capacities. In this manner, strategic interaction is born of elemental evolutionary forces. It is only a conceptually short step from this point to cooperation and competition among cells in a multicellular body, among conspecifics who cooperate in social production, between males and females in a sexual species, between parents and offspring, and among groups competing for territorial control.

Historically, game theory did not emerge from biological considerations, but rather from the strategic concerns of combatants in World War II (Poundstone, 1992; von Neumann & Morgenstern, 1944). This led to the widespread caricature of game theory as applicable only to static confrontations of rational, self-regarding agents possessed of formidable reasoning and information-processing capacity. Developments within game theory in recent years, however, render this caricature inaccurate.

First, game theory has become the basic framework for modeling animal behavior (Alcock, 1993; Krebs & Davies, 1997; Maynard Smith, 1982), and thus has shed its static and hyperrationalistic character, in the form of evolutionary game theory (Gintis, 2000). Evolutionary and behavioral game theory do not require the formidable information-processing capacities of classical game theory, so disciplines that recognize that cognition is scarce and costly can make use of game-theoretic models (Gintis, 2000; Gigerenzer & Selten, 2001; Young, 1998). Thus, agents may consider only a restricted subset of strategies (Simon, 1972; Winter, 1971), and they may use rule-of-thumb heuristics rather than maximization techniques (Gigerenzer & Selten, 2001). Game theory is thus a generalized schema that permits the precise framing of meaningful empirical assertions but imposes no particular structure on the predicted behavior.

Second, evolutionary game theory has become key to understanding the most fundamental principles of evolutionary biology. Throughout much of the twentieth century, classical population biology did not employ a game-theoretic framework (Haldane, 1932; Fisher, 1930; Wright, 1931). However, Moran (1964) showed that Fisher’s Fundamental Theorem, which states that as long as there is positive genetic variance in a population, fitness increases over time, is false when more than one genetic locus is involved. Eshel and Feldman (1984) identified the problem with the population genetic model in its abstraction from mutation. But how do we attach a fitness value to a mutant? Eshel and Feldman (1984) suggested that payoffs be modeled game theoretically on the phenotypic level and a mutant gene be associated with a strategy in the resulting game. With this assumption, they showed that under some restrictive conditions, Fisher’s Fundamental Theorem could be restored. Their results were generalized by Liberman (1988), Hammerstein and Selten (1994), Hammerstein (1996), Eshel and colleagues (1998), and others.

Third, the most natural setting for biological and social dynamics is game theoretic. Replicators (genetic and/or cultural) endow copies of themselves with a repertoire of strategic responses to environmental conditions, including information concerning the conditions under which each response is to be deployed given the character and density of competing replicators. Genetic replicators have been well understood since the rediscovery of Mendel’s laws in the early twentieth century. Cultural transmission also apparently occurs at the neuronal level in the brain, in part through the action of mirror neurons (Meltzoff & Decety, 2003; Rizzolatti et al., 2002; Williams et al., 2001). Mutations include replacement of strategies by modified strategies, and the “survival of the fittest” dynamic (formally called a replicator dynamic) ensures that replicators with more successful strategies replace those with less successful ones (Taylor & Jonker, 1978), as sketched below.
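
A minimal sketch of the replicator dynamic just mentioned: a strategy’s population share grows in proportion to how far its payoff exceeds the population-average payoff. The Hawk-Dove payoff matrix used here (value 4, cost 6) is an illustrative assumption, not an example from the chapter.

```python
# Replicator dynamic (Taylor & Jonker, 1978): each strategy's share grows in
# proportion to the gap between its payoff and the population average.
def replicator_step(shares, payoffs, dt=0.01):
    n = len(shares)
    fit = [sum(payoffs[i][j] * shares[j] for j in range(n)) for i in range(n)]
    avg = sum(s * f for s, f in zip(shares, fit))
    return [s + dt * s * (f - avg) for s, f in zip(shares, fit)]

HAWK_DOVE = [[-1, 4],    # Hawk vs Hawk: (4 - 6)/2; Hawk vs Dove: 4
             [0, 2]]     # Dove vs Hawk: 0;          Dove vs Dove: 4/2

shares = [0.1, 0.9]      # initial shares of Hawk, Dove
for _ in range(5000):
    shares = replicator_step(shares, HAWK_DOVE)
print([round(s, 3) for s in shares])   # ~[0.667, 0.333], the stable mixed point
```

The stable stationary point of this dynamic (two-thirds Hawks) is exactly the kind of equilibrium concept the text contrasts with classical subgame perfection.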

Fourth, behavioral game theorists now widely recognize that in many social interactions, agents are not self-regarding, but rather often care about the payoffs to and intentions of other players, and will sacrifice to uphold personal standards of honesty and decency (Fehr & Gächter, 2002; Gintis et al., 2005; Gneezy, 2005; Wood, 2003). Moreover, human actors care about power, self-esteem, and behaving morally (Bowles & Gintis, 2005; Gintis, 2003; Wood, 2003). Because the rational actor model treats action as instrumental toward achieving rewards, it is often inferred that action itself cannot have reward value. This is an unwarranted inference. For instance, the rational actor model can be used to explain collective action (Olson, 1965), since agents may place positive value on the process of acquisition (for instance, “fighting for one’s rights”), and can value punishing those who refuse to join in the collective action (Moore, 1978; Wood, 2003). Indeed, contemporary experimental work indicates that one can apply standard choice theory, including the derivation of demand curves, plotting concave indifference curves, and finding price elasticities, for such preferences as charitable giving and punitive retribution (Andreoni & Miller, 2002).

As a result of its maturation over the past quarter century, game theory is well positioned to serve as a bridge across the behavioral sciences, providing both a lexicon for communicating across fields with divergent and incompatible conceptual systems and a theoretical tool for formulating a model of human choice that can serve all the behavioral disciplines.

Experimental Game Theory and Non–Self-regarding Preferences

Contemporary biological theory maintains that cooperation can be sustained by inclusive fitness, that is, cooperation among close genealogical kin (Hamilton, 1963), or by individual self-interest in the form of reciprocal altruism (Trivers, 1971). Reciprocal altruism occurs when an agent helps another agent, at a fitness cost to itself, with the expectation that the beneficiary will return the favor in a future period. The explanatory power of inclusive fitness theory and reciprocal altruism convinced a generation of biologists that what appears to be altruism—personal sacrifice on behalf of others—is really just long-run genetic self-interest. Combined with a vigorous critique of group selection (Dawkins, 1976; Maynard Smith, 1976; Williams, 1966), a generation of biologists became convinced that true altruism—one organism sacrificing fitness on behalf of the fitness of an unrelated other—was virtually unknown, even in the case of Homo sapiens.

That human nature is essentially selfish was touted as a central implication of rigorous biological modeling. In The Selfish Gene, for instance, Richard Dawkins (1976, p. 1) asserts, “We are survival machines—robot vehicles blindly programmed to preserve the selfish molecules known as genes…. Let us try to teach generosity and altruism, because we are born selfish.” Similarly, in The Biology of Moral Systems, R. D. Alexander (1987, p. 3) asserts, “Ethics, morality, human conduct, and the human psyche are to be understood only if societies are seen as collections of individuals seeking their own self-interest.” More poetically, Michael Ghiselin (1974) writes, “No hint of genuine charity ameliorates our vision of society, once sentimentalism has been laid aside. What passes for cooperation turns out to be a mixture of opportunism and exploitation…. Scratch an altruist, and watch a hypocrite bleed.”

In economics, the notion that enlightened self-interest allows agents to cooperate in large groups goes back to Bernard Mandeville’s “private vices, public virtues” (Mandeville, 1705) and Adam Smith’s “invisible hand” (Smith, 1759). Full analytical development of this idea awaited the twentieth-century development of general equilibrium theory (Arrow & Debreu, 1954; Arrow & Hahn, 1971) and the theory of repeated games (Axelrod & Hamilton, 1981; Fudenberg & Maskin, 1986). So powerful in economic theory is the notion that cooperation among self-regarding agents is possible that it is hard to find even a single critic of it in the literature, even among writers who are otherwise quite harsh in their evaluation of neoclassical economics.

By contrast, sociological, anthropological, and social psychological theory generally holds that human cooperation is predicated upon affiliative behaviors among group members, each of whom is prepared to sacrifice a modicum of personal well-being to advance the collective goals of the group. The vicious attack on “sociobiology” (Segerstrale, 2001) and the widespread rejection of the bare-bones Homo economicus in the “soft” social sciences (DiMaggio, 1994; Etzioni, 1985; Hirsch et al., 1990) are due in part to this clash of basic explanatory principles.

Behavioral game theory assumes the BPC model of choice and subjects individuals to strategic settings, such that their behavior reveals their underlying preferences. This controlled setting allows us to adjudicate between these contrasting models. One behavioral regularity that has been found thereby is strong reciprocity, which is a predisposition to cooperate with others and to punish those who violate the norms of cooperation, at personal cost, even when it is implausible to expect that these costs will be repaid. Strong reciprocity is other-regarding, as a strong reciprocator’s behavior reflects a preference to cooperate with other cooperators and to punish noncooperators, even when these actions are personally costly.

The result of the laboratory and field research on strong reciprocity is that humans indeed often behave in ways that have traditionally been affirmed in sociological theory and denied in biology and economics (Andreoni, 1995; Fehr & Gächter, 2000, 2002; Fehr et al., 1997, 1998; Gächter & Fehr, 1999; Henrich et al., 2005; Ostrom et al., 1992). Moreover, it is probable that this other-regarding behavior is a prerequisite for cooperation in large groups of nonkin, since the theoretical models of cooperation in large groups of self-regarding nonkin in biology and economics do not apply to some important and frequently observed forms of human cooperation (Boyd & Richerson, 1992; Gintis, 2005).

Character Virtues in the Laboratory

Another form of prosocial behavior conflicting with the maximization of personal material gain is that of maintaining such character virtues as honesty and promise keeping, even when there is no chance of being penalized for unvirtuous behavior. Our first example of non–self-regarding behavior will be of this form.

Gneezy (2005) studied 450 undergraduate participants paired off to play several games of the following form. The two players never see each other (anonymity) and interact exactly once (one shot). Player 1, whom we will call the Advisor, is shown the contents of two envelopes, labeled A and B. Each envelope has two compartments, the first containing money to be given to the Advisor, the second money to be given to player 2. We will call player 2 the Chooser, because this player gets to choose which of the two envelopes will be distributed to the two players. The catch, however, is that the Chooser is not permitted to see the contents of the envelopes. Rather, the Advisor, who has seen the contents, is required to advise the Chooser which envelope to pick.

The games all begin with the experimenter showing both players the two envelopes, and asserting that one of the envelopes is better for the Advisor and the other is better for the Chooser. The Advisor is then permitted to inspect the contents of the two envelopes, and say to the Chooser either “A will earn you more money than B,” or “B will earn you more money than A.” The Chooser then picks either A or B, and the game is over.

Suppose both players are self-regarding, each caring only about how much money he earns from the transaction. Suppose also that both players believe their partner is self-regarding. The Chooser will then reason that the Advisor will say whatever induces him to choose the envelope that pays the Chooser the lesser amount of money (and the Advisor the greater). Therefore, nothing the Advisor says should be believed, and the Chooser should just make a random pick between the two envelopes. The Advisor can anticipate the Chooser’s reasoning, and will pick randomly which envelope to advise the Chooser to choose. Economists call the Advisor’s message “cheap talk,” because it costs nothing to give, but is worth nothing to either party.

By contrast, suppose the Chooser believes that the Advisor places a positive value on transmitting honest messages, and so will be predisposed to follow whatever advice he is given, and suppose the Advisor does value honesty, and believes that the Chooser believes that he values honesty, and hence will follow the Advisor’s suggestion. Then, the Advisor will weigh the financial gain from lying against the cost of lying, and unless the gain is sufficiently large, he will tell the truth, the Chooser will believe him, and the Chooser will get his preferred payoff.

Gneezy (2005) implemented this experiment as a series of three games with the aforementioned structure (his detailed protocols were slightly different). The first game, which we will write A = (6,5), B = (5,6), pays the Advisor 6 and the Chooser 5 if the Chooser picks A, and the reverse if the Chooser picks B. The second game, A = (6,5), B = (5,15), pays the Advisor 6 and the Chooser 5 if the Chooser picks A, but pays the Advisor 5 and the Chooser 15 if the Chooser picks B. The third game, A = (15,5), B = (5,15), pays the Advisor 15 and the Chooser 5 if the Chooser picks A, but pays the Advisor 5 and the Chooser 15 if the Chooser picks B.
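
The lying incentives in the three games can be tabulated directly from these payoff pairs, written (Advisor, Chooser); the code below is just an arithmetic restatement of the game descriptions above:

```python
# The three games above, with payoffs written (Advisor, Chooser). A believed
# lie steers the Chooser to the Advisor-favorable envelope, so the Advisor's
# gain and the Chooser's cost follow directly from the payoff pairs.
GAMES = {1: {"A": (6, 5), "B": (5, 6)},
         2: {"A": (6, 5), "B": (5, 15)},
         3: {"A": (15, 5), "B": (5, 15)}}

for g, opts in GAMES.items():
    truth = max(opts, key=lambda o: opts[o][1])   # Chooser-preferred envelope
    lie = min(opts, key=lambda o: opts[o][1])
    advisor_gain = opts[lie][0] - opts[truth][0]
    chooser_cost = opts[truth][1] - opts[lie][1]
    print(f"game {g}: Advisor gains {advisor_gain}, Chooser loses {chooser_cost}")
# game 1: Advisor gains 1, Chooser loses 1
# game 2: Advisor gains 1, Chooser loses 10
# game 3: Advisor gains 10, Chooser loses 10
```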

Before having the subjects play any of the games, Gneezy attempted to determine whether Advisors believed that their advice would be followed, because, if they did not believe this, it would be a mistake to attribute advice favorable to Choosers to the Advisors’ honesty. Gneezy elicited truthful reports of beliefs from Advisors by promising to pay an additional sum of money at the end of the session to each Advisor who correctly predicted whether his advice would be followed. He found that 82% of Advisors expected their advice to be followed. In fact, the Advisors were remarkably accurate, since the actual figure was 78%.

The most honesty was elicited in game 2, where A = (6,5) and B = (5,15), so lying was very costly to the Chooser and the gain to lying for the Advisor was small. In this game, a full 83% of Advisors were honest. In game 1, where A = (6,5) and B = (5,6), so the cost of lying to the Chooser was small and equal to the gain to the Advisor, 64% of the Advisors were honest. In other words, subjects were loath to lie, but considerably more so when lying was costly to their partner. In game 3, where A = (15,5) and B = (5,15), so the gain from lying was large for the Advisor and equal to the loss to the Chooser, only 48% of the Advisors were honest. This shows that many subjects are willing to sacrifice material gain to avoid lying in a one-shot, anonymous interaction, their willingness to lie increasing with an increased cost of truth telling to themselves, and decreasing with an increase in their partner’s cost of being deceived.

Similar results were found by Boles and colleagues (2000) and Charness and Dufwenberg (2004). Gunnthorsdottir and colleagues (2002) and Burks and colleagues (2003) have shown that a social-psychological measure of “Machiavellianism” predicts which subjects are likely to be trustworthy and trusting, although their results are not completely compatible.

The Public Goods Game

The public goods game has been analyzed in a series of papers by the social psychologist Toshio Yamagishi (1986, 1988a,b), by the political scientist Elinor Ostrom and her coworkers (Ostrom et al., 1992), and by the economist Ernst Fehr and his coworkers (Fehr & Gächter, 2000, 2002; Gächter & Fehr, 1999). These researchers uniformly found that groups exhibit a much higher rate of cooperation than can be expected assuming the standard economic model of the self-regarding actor, and this is especially the case when subjects are given the option of incurring a cost to themselves in order to punish free riders.

A typical public goods game consists of a number of rounds, say 10. The subjects are told the total number of rounds, as well as all other aspects of the game. The subjects are paid their winnings in real money at the end of the session. In each round, each subject is grouped with several other subjects—say three others—under conditions of strict anonymity. Each subject is then given a certain number of “points,” say 20, redeemable at the end of the experimental session for real money. Each subject then places some fraction of his or her points in a “common account” and the remainder in the subject’s “private account.” The experimenter then tells the subjects how many points were contributed to the common account, and adds to the private account of each subject some fraction, say 40%, of the total amount in the common account. So if a subject contributes his or her whole 20 points to the common account, each of the four group members will receive eight points at the end of the round. In effect, by putting the whole endowment into the common account, a player loses 12 points, while the other three group members gain a total of 24 (= 8 × 3) points. The players keep whatever is in their private account at the end of the round.
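
The arithmetic of a round can be stated compactly. The sketch below implements exactly the parameters just described (four players, a 20-point endowment, and a 40% return on the common account to every group member):

```python
# Round payoffs for the public goods game just described.
ENDOWMENT, RETURN_RATE, GROUP_SIZE = 20, 0.4, 4

def round_payoffs(contributions):
    assert len(contributions) == GROUP_SIZE
    common = sum(contributions)
    return [ENDOWMENT - c + RETURN_RATE * common for c in contributions]

print(round_payoffs([20, 0, 0, 0]))   # [8.0, 28.0, 28.0, 28.0]
# The full contributor ends 12 points below the 20 he could have kept, while
# the other three gain 8 points each (24 in total): hence the incentive to
# free ride for a self-regarding player.
```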

A self-regarding player will contribute nothing to the common account. However, only a fraction of subjects in fact conform to the self-interest model. Subjects begin by contributing on average about half of their endowment to the public account. The level of contributions decays over the course of the 10 rounds, until in the final rounds most players are behaving in a self-regarding manner (Dawes & Thaler, 1988; Ledyard, 1995). In a meta-study of 12 public goods experiments, Fehr and Schmidt (1999) found that in the early rounds, average and median contribution levels ranged from 40% to 60% of the endowment, but in the final period 73% of all individuals (N = 1,042) contributed nothing, and many of the remaining players contributed close to zero. These results are not compatible with the selfish actor model, which predicts zero contribution on all rounds, though they might be predicted by a reciprocal altruism model, since the chance to reciprocate declines as the end of the experiment approaches. However, this is not in fact the explanation of moderate but deteriorating levels of cooperation in the public goods game.

The explanation of the decay of cooperation offered by subjects when debriefed after the experiment is that cooperative subjects became angry at others who contributed less than themselves, and retaliated against free-riding low contributors in the only way available to them—by lowering their own contributions (Andreoni, 1995). This view is confirmed by the fact that when subjects play the repeated public goods game sequentially several times, each time they begin by cooperating at a high level, and their cooperation declines as the end of the game approaches.

Experimental evidence supports this interpretation. When subjects are allowed to punish noncontributors, they do so at a cost to themselves (Orbell, Dawes, & Van de Kragt, 1986; Sato, 1987; Yamagishi, 1988a,b, 1992). For instance, in the Ostrom et al. (1992) study subjects interacted for 25 periods in a public goods game, and by paying a “fee,” subjects could impose costs on other subjects by “fining” them. Since fining costs the individual who uses it, but the benefits of increased compliance accrue to the group as a whole, the only Nash equilibrium in this game that does not depend on incredible threats is for no player to pay the fee, so no player is ever punished for defecting, and all players defect by contributing nothing to the common pool. However, the authors found a significant level of punishing behavior.

These studies allowed individuals to engage in strategic behavior, since costly punishment of defectors could increase cooperation in future periods, yielding a positive net return for the punisher. Fehr and Gächter (2000) set up an experimental situation in which the possibility of strategic punishment was removed. They used 6- and 10-round public goods games with group sizes of four, and with costly punishment allowed at the end of each round, employing three different methods of assigning members to groups. There were sufficient subjects to run between 10 and 18 groups simultaneously. Under the Partner treatment, the four subjects remained in the same group for all 10 periods. Under the Stranger treatment, the subjects were randomly reassigned after each round. Finally, under the Perfect Stranger treatment, the subjects were randomly reassigned and assured that they would never meet the same subject more than once. Subjects earned an average of about $35 for an experimental session.

Fehr and Gächter (2000) performed their experiment for 10 rounds with punishment and 10 rounds without. Their results are illustrated in Figure 9.1. We see that when costly punishment is permitted, cooperation does not deteriorate, and in the Partner game, despite strict anonymity, cooperation increases almost to full cooperation, even on the final round. When punishment is not permitted, however, the same subjects experience the deterioration of cooperation found in previous public goods games. The contrast in cooperation rates between the Partner and the two Stranger treatments is worth noting, because the strength of punishment is roughly the same across all treatments. This suggests that the punishment threat is more credible in the Partner treatment, because punished subjects know that those who punished them in previous rounds remain in their group. This result follows from the fact that a majority of subjects showed themselves to be strong reciprocators, both contributing a large amount and enthusiastically punishing noncontributors. The prosocial impact of strong reciprocity on cooperation is thus more strongly manifested the more coherent and permanent the group in question.

Figure 9.1 Average contributions over time in the Partner, Stranger, and Perfect Stranger treatments when the punishment condition is played first. Adapted from Fehr, E., & Gächter, S. (2000). Cooperation and punishment. American Economic Review, 90, 980–994. Used with permission.

Conclusion

I have shown that the core theoretical constructs of the various behavioral disciplines currently include mutually contradictory principles, but that progress over the past couple of decades has generated the instruments necessary to resolve the interdisciplinary contradictions. I have outlined several of the key ideas needed to specify a unified analytical framework for the behavioral sciences.

References

Abbott, R. J., James, J. K., Milne, R. I., & Gillies, A. C. M. (2003). Plant introductions, hybridization and gene flow. Philosophical Transactions of the Royal Society of London B, 358, 1123–1132.

Alcock, J. (1993). Animal behavior: An evolutionary approach. Sunderland, MA: Sinauer.

Alexander, R. D. (1987). The biology of moral systems. New York: Aldine.

Allman, J., Hakeem, A., & Watson, K. (2002). Two phylogenetic specializations in the human brain. Neuroscientist, 8, 335–346.

Andreoni, J. (1995). Cooperation in public goods experiments: Kindness or confusion. American Economic Review, 85, 891–904.

Andreoni, J., & Miller, J. H. (2002). Giving according to garp: An experimental test of the consistency of preferences for altruism. Econometrica, 70, 737–753.

Arrow, K. J., & Debreu, G. (1954). Existence of an equilibrium for a competitive economy. Econometrica, 22, 265–290.

Arrow, K. J., & Hahn, F. (1971). General competitive analysis. San Francisco: Holden-Day.

Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211, 1390–1396.

Beer, J. S., Heerey, E. A., Keltner, D., Skabini, D., & Knight, R. T. (2003). The regulatory function of self-conscious emotion: Insights from patients with orbitofrontal damage. Journal of Personality and Social Psychology, 65, 594–604.

Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81, 637–654.

Boles, T. L., Croson, R. T. A., & Murnighan, J. K. (2000). Deception and retribution in repeated ultimatum bargaining. Organizational Behavior and Human Decision Processes, 83, 235–259.

Bonner, J. T. (1984). The evolution of culture in animals. Princeton, NJ: Princeton University Press.

Bowles, S., & Gintis, H. (2002). Homo reciprocans. Nature, 415, 125–128.

Bowles, S., & Gintis, H. (2005). Prosocial emotions. In: L. E. Blume & S. N. Durlauf (Eds.), The economy as an evolving complex system III. Santa Fe, NM: Santa Fe Institute.

Boyd, R., & Richerson, P. J. (1985). Culture and the evolutionary process. Chicago: University of Chicago Press.

Boyd, R., & Richerson, P. J. (1992). Punishment allows the evolution of cooperation (or anything else) in sizeable groups. Ethology and Sociobiology, 113, 171–195.

Brown, J. H., & Lomolino, M. V. (1998). Biogeography. Sunderland, MA: Sinauer.

Burks, S. V., Carpenter, J. P., & Verhoogen, E. (2003). Playing both roles in the trust game. Journal of Economic Behavior and Organization, 51, 195–216.

Camille, N. (2004). The involvement of the orbitofrontal cortex in the experience of regret. Science, 304, 1167–1170.

Cavalli-Sforza, L. L., & Feldman, M. W. (1981). Cultural transmission and evolution. Princeton, NJ: Princeton University Press.

Cavalli-Sforza, L. L., & Feldman, M. W. (1982). Theory and observation in cultural transmission. Science, 218, 19–27.

Coleman, J. S. (1990). Foundations of social theory. Cambridge, MA: Belknap.

Darwin, C. (1872). The origin of species by means of natural selection (6th ed.). London: John Murray.

Dawes, R. M., & Thaler, R. (1988). Cooperation. Journal of Economic Perspectives, 2, 187–197.

Dawkins, R. (1976). The selfish gene. Oxford: Oxford University Press.

Dawkins, R. (1982). The extended phenotype: The gene as the unit of selection. Oxford: Freeman.

DiMaggio, P. (1994). Culture and economy. In: N. Smelser & R. Swedberg (Eds.), The handbook of economic sociology (pp. 27–57). Princeton, NJ: Princeton University Press.

Durham, W. H. (1991). Coevolution: Genes, culture, and human diversity. Stanford, CA: Stanford University Press.

Eshel, I., & Feldman, M. W. (1984). Initial increase of new mutants and some continuity properties of ESS in two locus systems. American Naturalist, 124, 631–640.

Eshel, I., Feldman, M. W., & Bergman, A. (1998). Long-term evolution, short-term evolution, and population genetic theory. Journal of Theoretical Biology, 191, 391–396.

Etzioni, A. (1985). Opening the preferences: A socio-economic research agenda. Journal of Behavioral Economics, 14, 183–205.

Fehr, E., & Gächter, S. (2000). Cooperation and punishment. American Economic Review, 90, 980–994.

Fehr, E., & Gächter, S. (2002). Altruistic punishment in humans. Nature, 415, 137–140.

Fehr, E., Gächter, S., & Kirchsteiger, G. (1997). Reciprocity as a contract enforcement device: Experimental evidence. Econometrica, 65, 833–860.

Fehr, E., Kirchsteiger, G., & Riedl, A. (1998). Gift exchange and reciprocity in competitive experimental markets. European Economic Review, 42, 1–34.

Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. Quarterly Journal of Economics, 114, 817–868.

Feldman, M. W., & Zhivotovsky, L. A. (1992). Gene-culture coevolution: Toward a general theory of vertical transmission. Proceedings of the National Academy of Sciences, 89, 11935–11938.

Fisher, R. A. (1930). The genetical theory of natural selection. Oxford: Clarendon Press.

Fudenberg, D., & Maskin, E. (1986). The folk theorem in repeated games with discounting or with incomplete information. Econometrica, 54, 533–554.

Gächter, S., & Fehr, E. (1999). Collective action as a social exchange. Journal of Economic Behavior and Organization, 39, 341–369.

Ghiselin, M. T. (1974). The economy of nature and the evolution of sex. Berkeley, CA: University of California Press.

Gigerenzer, G., & Selten, R. (2001). Bounded rationality. Cambridge, MA: MIT Press.

Gintis, H. (2000). Game theory evolving. Princeton, NJ: Princeton University Press.

Gintis, H. (2003). Solving the puzzle of human prosociality. Rationality and Society, 15, 155–187.

Gintis, H. (2005). Behavioral game theory and contemporary economic theory. Analyze Kritik, 27, 48–72.

Gintis, H. (2007). A framework for the unification of the behavioral sciences. Behavioral and Brain Sciences, 30, 1–61.

Gintis, H., Bowles, S., Boyd, R., & Fehr, E. (2005). Moral sentiments and material interests: On the foundations of cooperation in economic life. Cambridge, MA: MIT Press.

Glimcher, P. W. (2003). Decisions, uncertainty, and the brain: The science of neuroeconomics. Cambridge, MA: MIT Press.

Glimcher, P. W., Dorris, M. C., & Bayer, H. M. (2005). Physiological utility theory and the neuroeconomics of choice. New York: Center for Neural Science, New York University.

Gneezy, U. (2005). Deception: The role of consequences. American Economic Review, 95, 384–394.

Grafen, A. (1999). Formal Darwinism, the individual-as-maximizing-agent analogy, and bet-hedging. Proceedings of the Royal Society B, 266, 799–803.

Grafen, A. (2000). Developments of Price’s equation and natural selection under uncertainty. Proceedings of the Royal Society B, 267, 1223–1227.

Grafen, A. (2002). A first formal link between the Price equation and an optimization program. Journal of Theoretical Biology, 217, 75–91.

Gunnthorsdottir, A., McCabe, K., & Smith, V. (2002). Using the Machiavellianism instrument to predict trustworthiness in a bargaining game. Journal of Economic Psychology, 23, 49–66.

Haldane, J. B. S. (1932). The causes of evolution. London: Longmans, Green & Co.

Hamilton, W. D. (1963). The evolution of altruistic behavior. American Naturalist, 96, 354–356.

Hammerstein, P. (1996). Darwinian adaptation, population genetics and the streetcar theory of evolution. Journal of Mathematical Biology, 34, 511–532.

Hammerstein, P., & Selten, R. (1994). Game theory and evolutionary biology. In: R. J. Aumann & S. Hart (Eds.), Handbook of game theory with economic applications (pp. 929–993). Amsterdam: Elsevier.

Hechter, M., & Kanazawa, S. (1997). Sociological rational choice. Annual Review of Sociology, 23, 199–214.

Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., & Gintis, H. (2005). ‘Economic man’ in cross-cultural perspective: Behavioral experiments in 15 small-scale societies. Behavioral and Brain Sciences, 28, 795–815.

Henrich, J., & Gil-White, F. (2001). The evolution of prestige: Freely conferred status as a mechanism for enhancing the benefits of cultural transmission. Evolution and Human Behavior, 22, 165–196.

Hirsch, P., Michaels, S., & Friedman, R. (1990). Clean models vs. dirty hands: Why economics is different from sociology. In: S. Zukin & P. DiMaggio (Eds.), Structures of capital: The social organization of the economy (pp. 39–56). New York: Cambridge University Press.

Holden, C. J. (2002). Bantu language trees reflect the spread of farming across sub-Saharan Africa: A maximum-parsimony analysis. Proceedings of the Royal Society of London Series B, 269, 793–799.

Holden, C. J., & Mace, R. (2003). Spread of cattle led to the loss of matrilineal descent in Africa: A coevolutionary analysis. Proceedings of the Royal Society of London Series B, 270, 2425–2433.

Holland, J. H. (1975). Adaptation in natural and artificial systems. Ann Arbor, MI: University of Michigan Press.

Huxley, J. S. (1955). Evolution, cultural and biological. Yearbook of Anthropology, 2–25.

Jablonka, E., & Lamb, M. J. (1995). Epigenetic inheritance and evolution: The Lamarckian case. Oxford: Oxford University Press.

James, W. (1880). Great men, great thoughts, and the environment. Atlantic Monthly, 46, 441–459.

Jaynes, E. T. (2003). Probability theory: The logic of science. Cambridge: Cambridge University Press.

Kollock, P. (1997). Transforming social dilemmas: Group identity and cooperation. In: P. Danielson (Ed.), Modeling rational and moral agents. Oxford: Oxford University Press.

Krebs, J. R., & Davies, N. B. (1997). Behavioral ecology: An evolutionary approach (4th ed.). Oxford: Blackwell Science.

Kreps, D. M. (1990). A course in microeconomic theory. Princeton, NJ: Princeton University Press.

Ledyard, J. O. (1995). Public goods: A survey of experimental research. In: J. H. Kagel & A. E. Roth (Eds.), The handbook of experimental economics (pp. 111–194). Princeton, NJ: Princeton University Press.

Lewontin, R. C. (1974). The genetic basis of evolutionary change. New York: Columbia University Press.

Liberman, U. (1988). External stability and ESS criteria for initial increase of a new mutant allele. Journal of Mathematical Biology, 26, 477–485.

Lumsden, C. J., & Wilson, E. O. (1981). Genes, mind, and culture: The coevolutionary process. Cambridge, MA: Harvard University Press.

Mace, R., & Pagel, M. (1994). The comparative method in anthropology. Current Anthropology, 35, 549–564.

Mandeville, B. (1705). The fable of the bees: Private vices, publick benefits. Oxford: Clarendon.

Maynard Smith, J. (1976). Group selection. Quarterly Review of Biology, 51, 277–283.

Maynard Smith, J. (1982). Evolution and the theory of games. Cambridge: Cambridge University Press.

Mednick, S. A., Kirkegaard-Sorenson, L., Hutchings, B., Knop, J., Rosenberg, R., & Schulsinger, F. (1977). An example of bio-social interaction research: The interplay of socio-environmental and individual factors in the etiology of criminal behavior. In: S. A. Mednick & K. O. Christiansen (Eds.), Biosocial bases of criminal behavior (pp. 9–24). New York: Gardner Press.

Meltzoff, A. N., & Decety, J. (2003). What imitation tells us about social cognition: A rapprochement between developmental psychology and cognitive neuroscience. Philosophical Transactions of the Royal Society of London B, 358, 491–500.

Mesoudi, A., Whiten, A., & Laland, K. N. (2006). Towards a unified science of cultural evolution. Behavioral and Brain Sciences, 29, 329–383.

Miller, B. L., Darby, A., Benson, D. F., Cummings, J. L., & Miller, M. H. (1997). Aggressive, socially disruptive and antisocial behaviour associated with fronto-temporal dementia. British Journal of Psychiatry, 170, 150–154.

Moll, J., Zahn, R., de Oliveira-Souza, R., Krueger, F., & Grafman, J. (2005). The neural basis of human moral cognition. Nature Reviews Neuroscience, 6, 799–809.

Montague, P. R., & Berns, G. S. (2002). Neural economics and the biological substrates of valuation. Neuron, 36, 265–284.

Moore, B., Jr. (1978). Injustice: The social bases of obedience and revolt. White Plains, NY: M.E. Sharpe.

Moran, P. A. P. (1964). On the nonexistence of adaptive topographies. Annals of Human Genetics, 27, 338–343.

Newman, M., Barabási, A.-L., & Watts, D. J. (2006). The structure and dynamics of networks. Princeton, NJ: Princeton University Press.

O’Brien, M. J., & Lyman, R. L. (2000). Applying evolutionary archaeology. New York: Kluwer Academic.

Odling-Smee, F. J., Laland, K. N., & Feldman, M. W. (2003). Niche construction: The neglected process in evolution. Princeton, NJ: Princeton University Press.

Olson, M. (1965). The logic of collective action: Public goods and the theory of groups. Cambridge, MA: Harvard University Press.

Orbell, J. M., Dawes, R. M., & Van de Kragt, J. C. (1986). Organizing groups for collective action. American Political Science Review, 80, 1171–1185.

Ostrom, E., Walker, J., & Gardner, R. (1992). Covenants with and without a sword: Self-governance is possible. American Political Science Review, 86, 404–417.

Parker, A. J., & Newsome, W. T. (1998). Sense and the single neuron: Probing the physiology of perception. Annual Review of Neuroscience, 21, 227–277.

Parsons, T. (1964). Evolutionary universals in society. American Sociological Review, 29, 339–357.

Popper, K. (1979). Objective knowledge: An evolutionary approach. Oxford: Clarendon Press.

Poundstone, W. (1992). Prisoner’s dilemma. New York: Doubleday.

Real, L. A. (1991). Animal choice behavior and the evolution of cognitive architecture. Science, 253, 980–986.

Real, L., & Caraco, T. (1986). Risk and foraging in stochastic environments. Annual Review of Ecology and Systematics, 17, 371–390.

Richerson, P. J., & Boyd, R. (1998). The evolution of ultrasociality. In: I. Eibl-Eibesfeldt & F. Salter (Eds.), Indoctrinability, ideology, and warfare (pp. 71–96). New York: Berghahn Books.

Rivera, M. C., & Lake, J. A. (2004). The ring of life provides evidence for a genome fusion origin of eukaryotes. Nature, 431, 152–155.

Rizzolatti, G., Fadiga, L., Fogassi, L., & Gallese, V. (2002). From mirror neurons to imitation: Facts and speculations. In: A. N. Meltzoff & W. Prinz (Eds.), The imitative mind: Development, evolution and brain bases (pp. 247–266). Cambridge: Cambridge University Press.

Sato, K. (1987). Distribution and the cost of maintaining common property resources. Journal of Experimental Social Psychology, 23, 19–31.

Schall, J. D., & Thompson, K. G. (1999). Neural selection and control of visually guided eye movements. Annual Review of Neuroscience, 22, 241–259.

Schrödinger, E. (1944). What is life? The physical aspect of the living cell. Cambridge: Cambridge University Press.

Schulkin, J. (2000). Roots of social sensitivity and neural function. Cambridge, MA: MIT Press.

Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275, 1593–1599.

Segerstråle, U. (2001). Defenders of the truth: The sociobiology debate. Oxford: Oxford University Press.

Shennan, S. (1997). Quantifying archaeology. Edinburgh: Edinburgh University Press.

Simon, H. (1972). Theories of bounded rationality. In: C. B. McGuire & R. Radner (Eds.), Decision and organization (pp. 161–176). New York: American Elsevier.

Skibo, J. M., & Bentley, R. A. (2003). Complex systems and archaeology. Salt Lake City: University of Utah Press.

Smith, A. (1759). The theory of moral sentiments. New York: Prometheus.

Smith, E. A., & Winterhalder, B. (1992). Evolutionary ecology and human behavior. New York: Aldine de Gruyter.

Sugrue, L. P., Corrado, G. S., & Newsome, W. T. (2005). Choosing the greater of two goods: Neural currencies for valuation and decision making. Nature Reviews Neuroscience, 6, 363–375.

Sutton, R., & Barto, A. G. (2000). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.

Taylor, P., & Jonker, L. (1978). Evolutionarily stable strategies and game dynamics. Mathematical Biosciences, 40, 145–156.

Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35–57.

Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton, NJ: Princeton University Press.

Williams, G. C. (1966). Adaptation and natural selection: A critique of some current evolutionary thought. Princeton, NJ: Princeton University Press.

Williams, J. H. G., Whiten, A., Suddendorf, T., & Perrett, D. I. (2001). Imitation, mirror neurons and autism. Neuroscience and Biobehavioral Reviews, 25, 287–295.

Winter, S. G. (1971). Satisficing, selection and the innovating remnant. Quarterly Journal of Economics, 85, 237–261.

Wood, E. J. (2003). Insurgent collective action and civil war in El Salvador. Cambridge: Cambridge University Press.

Wright, S. (1931). Evolution in Mendelian populations. Genetics, 16, 97–159.

Yamagishi, T. (1986). The provision of a sanctioning system as a public good. Journal of Personality and Social Psychology, 51, 110–116.

Yamagishi, T. (1988a). The provision of a sanctioning system in the United States and Japan. Social Psychology Quarterly, 51, 265–271.

Yamagishi, T. (1988b). Seriousness of social dilemmas and the provision of a sanctioning system. Social Psychology Quarterly, 51, 32–42.

Yamagishi, T. (1992). Group size and the provision of a sanctioning system in a social dilemma. In: W. Liebrand, D. M. Messick, & H. Wilke (Eds.), Social dilemmas: Theoretical issues and research findings (pp. 267–287). Oxford: Pergamon Press.

Young, H. P. (1998). Individual strategy and social structure: An evolutionary theory of institutions. Princeton, NJ: Princeton University Press.

Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35, 151–175.

Zajonc, R. B. (1984). On the primacy of affect. American Psychologist, 39, 117–123.

Notes

(2). I use the term “self-regarding” rather than “self-interested” (and similarly “non–self-regarding” or “other-regarding” rather than “non–self-interested” or “unselfish”); the latter terms refer to situations in which an agent values the payoffs to other agents.

(3). This argument was presented verbally by Darwin (1872) and is implicit in the standard notion of “survival of the fittest,” but formal proof is recent (Grafen, 1999, 2000, 2002). The case with frequency-dependent (nonadditive genetic) fitness has yet to be formally demonstrated, but the informal arguments in this case are no less strong.
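As a point of reference for note (3), the standard covariance form of the Price equation, which the Grafen papers cited above take as their starting point (this is background for the reader, not a reconstruction of Grafen's proof), can be stated as follows. Write $z_i$ for the trait value and $w_i$ for the fitness of individual $i$, with population means $\bar{z}$ and $\bar{w}$. One generation of selection then satisfies

\[ \bar{w}\,\Delta\bar{z} = \mathrm{Cov}(w_i, z_i) + \mathrm{E}\!\left(w_i\,\Delta z_i\right). \]

When the transmission term $\mathrm{E}(w_i\,\Delta z_i)$ vanishes, the change in the mean trait is proportional to the covariance between trait and fitness; it is this form that Grafen (2002) links to an individual-as-maximizing-agent optimization program.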

(4). For a more extensive analysis of the parallels between cultural and genetic evolution, see Mesoudi et al. (2006). I have borrowed heavily from this paper in this section.

(5). For additional experimental results and analysis, see Bowles and Gintis (2002) and Fehr and Gächter (2002).