Supernatural Agents: Why We Believe in Souls, Gods, and Buddhas

Ilkka Pyysiäinen

Print publication date: 2009

Print ISBN-13: 9780195380026

Published to Oxford Scholarship Online: May 2009

DOI: 10.1093/acprof:oso/9780195380026.001.0001



Appendix

Cognitive Processes and Explaining Religion

Source: Supernatural Agents
Publisher: Oxford University Press

A.1 Dual-process Theories and Modularity

A.1.1 Intuitive and Explicit Processes

The cognitive science of religion very much rests on the rather vague distinction between “intuitions” and “reflective beliefs” (Barrett 1999, 2004a). While it is true that people seem to use two different types of reasoning, a spontaneous and intuitive one, and a reflective and systematic one (Ch. 1.1.), it is not altogether clear how this distinction actually should be drawn (cf. Pyysiäinen 2003b,c, 2004c; Tremlin 2005, 2006, 172–82). Simply put, intuitive cognitive processes seem to be fast and easy, whereas reflective processes take time and effort (see Anderson and Lebiere 2003; Sperber 1996, 89–92, 1997; Stanovich and West 2000). It is, for example, easy to recognize faces, whereas higher mathematics gives most people a hard time. Computers work in the opposite way: face recognition or reading handwriting is relatively difficult to program, but even a simple pocket calculator outperforms humans in computing tasks.

Since around 1990, the so-called dual-process theories explaining this dichotomy have probably been the most popular large-scale theories in research on social cognition (Deutsch and Strack 2006). Scholars have differentiated between two types of neural pathways, cognitive mechanisms, and mental contents; examples follow.

Important differences between these dichotomies notwithstanding, this list as a whole describes distinctions that characterize the two systems of reasoning I have labeled the A- and B-systems (Pyysiäinen 2004c) and Stanovich and West (2000) have labeled systems 1 and 2. The distinction between these two systems seems to run somehow parallel to the distinction of “cognitivism/functionalism” versus “connectionism” in cognitive science. According to the cognitivist/functionalist view, sensory experience is somehow “transduced” into amodal symbols that are stored in the brain and can be retrieved and manipulated at will (see Barsalou 1999; Barsalou et al. 2005). In this view, behavior regulation takes place through various kinds of algorithms that resemble computer programs. Electrochemical impulses travel along neuronal pathways much the way electric current travels on silicon chips. Different tasks require different kinds of computer programs: we use Word for writing text, Excel for computing, and Photoshop for creating images, for example. The human mind is a similar set of different programs (“modules”) for such tasks as face recognition, agent detection, cheater detection, and so forth. These programs unfold in development and are “triggered” by the proper input, thus being by and large independent of developmental processes (see Pinker 1998, 27–31; cf. Lickliter and Honeycutt 2003b, 822–24).

An alternative to cognitivism is the kind of connectionism that emphasizes the fact that some cognitive tasks may not involve rule-based processing at all; they are instead handled by constraint-based, parallel distributed processing that works to optimize, not maximize, performance (see Elman et al. 1998; Prince and Smolensky 2004; Tesar and Smolensky 2000). Connectionism emphasizes that the brain guides behavior without performing abstract computations on fixed mental representations within a language-like structure. Whereas in Fodorian functionalism almost all concepts are regarded as innate (see Laurence and Margolis 2002), connectionism does not require the “nothing more from something less” principle. The so-called neural nets show cumulative learning to be possible with only a minimal initial bias (Churchland 1989, 1995; Elman et al. 1998; Goldblum 2001).1
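The claim that neural nets can learn cumulatively from only a minimal initial bias can be illustrated with a toy example. The following Python sketch is purely illustrative (none of it comes from the sources cited above, and all names and parameters are invented for the example): it trains a single-layer perceptron on the logical AND task, starting from near-zero weights.

```python
import random

def train_perceptron(data, epochs=50, lr=0.1, seed=0):
    """Train a single-layer perceptron from a minimal initial bias:
    weights start near zero and are adjusted by error-driven updates."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.01, 0.01) for _ in range(len(data[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical AND: a regularity the net learns rather than has built in.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_data)
print([predict(w, b, x) for x, _ in and_data])
```

Error-driven updates alone suffice to carry the net from its almost unbiased starting point to correct performance on the task; nothing resembling an innate “AND concept” is built in.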

Whereas older dual-process theories merely assumed two different routes to judgments, a new family of dual-system models invokes two mental systems, as in Deutsch and Strack’s (2006) reflective-impulsive model (RIM), for example. In this model, social cognition and behavior are a function of a reflective system (RS) and an impulsive system (IS), each operating according to different representations and computations. Although the two systems serve different functions and have different conditions for optimal functioning, they operate interactively. My A-system is responsible for fast, associative, and emotionally colored thinking with purely practical goals, receiving information from innate, biologically instantiated dispositions and from the environment through analogic encoding. It is a subsymbolic pattern-recognition system that seems to rely on connectionist, parallel distributed processing.2 It operates reflexively, not reflectively, drawing inferences and making predictions on the basis of temporal relations and similarity. It employs knowledge derived from personal experience, concrete and generic concepts, images, stereotypes, feature sets, and associative relations, relying on similarity-based generalization and automatic processing. It serves such cognitive functions as intuition, fantasy, creativity, imagination, visual recognition, and associative memory (see esp. Pyysiäinen 2004a; Sloman 1996).

The A-system also is largely responsible for what is known as “common sense,” or “everyday thinking.” In contrast to scientific thinking, everyday thought proceeds from individuals’ immediate experience; it aims at short-term, practical efficacy, not at creating general theories; it seeks evidence and not counterevidence; it makes use of individual cases as evidence and personalizes values and ideals; it makes use of abductive inference; and its argumentation often takes a narrative form (Denes-Raj and Epstein 1994; Epstein 1990; Epstein and Pacini 1991; Epstein et al. 1992).

The B-system is a rule-based system capable of encoding any information with a well-specified formal structure and works by computing digital information syntactically. It thus needs to rely on external memory stores and cultural communication. Inferences are carried out in a “language of thought” that has a combinatorial syntax and semantics. The rule-based system thus looks for logical, hierarchical, and causal-mechanical structure in its environment, operating on symbol manipulation. It derives knowledge from language, culture, and formal systems, employing concrete, generic, and abstract concepts. It serves such cognitive functions as deliberation, explanation, formal analysis, verification, ascription of purpose, and strategic memory (see especially Pyysiäinen 2004c; Sloman 1996).

Such complex mental processes as those the social sciences study can hardly be either purely automatic or purely controlled; they instead are combinations of features of each (Bargh 1994; Deutsch and Strack 2006, 170). Yet the distinction between two different systems of reasoning has been successfully applied in the study of judgment and choice in the context of economic behavior, for example (Kahneman 2002, 2003a,b). Economic theory used to be dominated by the conception of humans as selfish and rational maximizers of utility (see Simon 1955, 1959, 1978). In the 1950s, Herbert Simon presented the competing idea of “bounded rationality,” and in the 1980s new evidence for it began to accumulate. Güth et al. (1982) introduced the “Ultimatum” game and showed experimentally that people will choose to punish others for an unfair offer rather than accept a small amount of money. The psychologists Amos Tversky and Daniel Kahneman provided further experimental evidence that persons frame problems in ways that rational choice models cannot account for. Kahneman then developed the idea of bounded rationality further (in a number of publications that earned him the Nobel Prize in 2002). Rationality has bounds because tacit and often emotional intuitions intrude on people’s calculating inferences (Kahneman 2002). These bounds also exist in religious reasoning.
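The ultimatum game itself is simple enough to state as a toy payoff function. The Python sketch below is an illustrative simplification, not part of Güth et al.’s experimental design; the threshold parameter is invented for the example. It contrasts a responder who accepts any positive offer with one who rejects unfair offers at a cost to herself.

```python
def ultimatum(total, offer, min_acceptable):
    """One round of the ultimatum game: the proposer keeps total - offer
    and offers `offer`; the responder rejects anything below her
    fairness threshold, in which case both players get nothing."""
    if offer >= min_acceptable:
        return total - offer, offer   # offer accepted
    return 0, 0                       # offer rejected: costly punishment

# A purely "rational" responder accepts any positive offer...
print(ultimatum(10, 1, min_acceptable=1))
# ...but experimental subjects typically reject very low offers,
# forgoing money in order to punish unfairness.
print(ultimatum(10, 1, min_acceptable=3))
```

A utility maximizer should accept any positive offer, since something is better than nothing; the experimentally observed rejections are precisely what the standard rational choice model cannot explain.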

Those who see the mind as “massively modular” (see below) tend to think that certain tasks are intuitive and easy because they relate to distinct domains about which all humans have tacit knowledge. Leda Cosmides and John Tooby (2002, 146), for example, refer to such ability to routinely solve predefined problems as “dedicated intelligence.” It is distinguished from “improvisational intelligence,” which helps solve novel problems for which one does not have tacit intuitions (cf. Kanazawa 2004; see Neisser et al. 1996; Miller and Penke 2007). The modularity thesis is not without its problems. As a colleague once ironically put it, some seem to think that we know for sure that the mind is modular; what we do not know is what modularity actually means (see Carruthers 2004, 2006; cf. Woodward and Cowie 2004).

A.1.2 Modularity of Mind

As early members of the genus Homo lived in exceptionally stable conditions during the Pleistocene epoch (about 1.6 million to ten thousand years ago), the adaptive problems they had to face remained the same generation after generation. This may have made possible the evolution of domain-specific cognitive adaptations such as social intelligence, cheater detection, and mating preferences. People did not have to think in the modern sense of the word; instead, they could rely on evolved responses that were automatic and modular (Kanazawa 2004; see Johnson 1987). Later, in the face of rapid ecological changes, cognitive plasticity came to be favored, and the so-called general intelligence evolved, either as a modular adaptation (Kanazawa 2004) or as an emergent by-product of modular abilities (Cosmides and Tooby 2002). This is a very important distinction; flexibility in the face of environmental variation seems to be a specifically human (but poorly understood) trait (Alexander 1979, 94–98; Kitcher 1990, 282–88). The crucial issue then becomes in what ways intuitions and general intelligence interact.

There are basically two types of intuitions: those acquired through routinization and those that are “innate” (see Bjorklund 2003, 837; Pyysiäinen 2004c). Innate intuitions supposedly are processed by innate cognitive “modules” (in a sense a new version of Durkheim’s “categories”). Routinization, for its part, may be due to explicit or implicit learning, or a mixture of the two. Because persons are not, for example, born with a chess module up and running, becoming a grand master in chess must be due to explicit learning. Modularity, then, is the outcome of a learning process, not its prerequisite (see Buller 2005a, 155; Hirschfeld and Gelman 1994, 5–20; Karmiloff-Smith 1992, 165–73; Simpson et al. 2005). Implicit learning refers to acquisition of knowledge that by and large takes place independently of conscious attempts to learn and without much explicit knowledge about what has been acquired (Reber et al. 1999). Learning thus can lead to domain-specific skills that are independent of general intelligence or other specialized skills.

Evolutionary psychologists argue that most intuitions are evolved adaptations and as such modular,3 while other scholars either deny this or even reject all evolutionary considerations in the case of the human mind.4 While the modularity of perceptual and affective processes is generally accepted (Fodor 1983), the modularity of cognitive processes is under debate (see Carruthers 2006, 3; Gerrans 2002). The debate is difficult partly because of the vagueness of the concept of a module. There are at least three basic alternative ways of defining an evolved cognitive module (Gerrans 2002, 307; cf. Craver 2007, 217–21):

  1. The human brain consists of distinct neural mechanisms dedicated to solving well-specified problems such as cheater detection, face recognition, and so forth.

  2. Cognitive specializations are algorithms individuated by their computational (but not physical) architecture.

  3. Humans have domain-specific bodies of innate knowledge.

The first alternative is problematic because evolutionary psychology has been isolated from neuroscience and is based almost exclusively on behavioral measures, ignoring neurophysiology and genetics (see Lickliter and Honeycutt 2003a,b; Panksepp and Panksepp 2000; Woodward and Cowie 2004; cf. Sperber 2005); only recently has the importance of neurophysiology been recognized. In Sperber’s words: “the true modularist is interested in ‘boxes’ that correspond to neurologically distinct devices” (2005, 57). A module thus should have a distinct history in the ontogeny of the brain. Biological adaptations are always produced by an environmental selection pressure acting on genetic variation. Thus, for a cognitive function to be an adaptation, it should have a genetic basis. Yet genes can only have an effect on a function by affecting the physical structure of the organism performing the function. Therefore, cognitive modules could only be shown to be adaptations by showing how certain brain structures have been selected for in evolution (Buller 2005a, 85, 200). As Panksepp and Panksepp put it, “without a strong linkage to neuroscientific research, evolutionary psychology has no credible way of determining whether its hypotheses reflect biological realities or only heuristics that permit provocative statistical predictions” (2000, 109; cf. Atran 2005).

The second alternative—cognition is based on modular algorithms—best captures the central tenet of evolutionary psychology. Peter Carruthers (2006, xii), for example, defines a module as a functionally distinct processing system of mind, whose operations are at least partly independent of those of others, and the existence and properties of which are partly dissociable from the others. This is a controversial idea (see Edelman 1992; Carruthers 2006, 3). The third alternative leaves room for the idea of domain-general mechanisms processing information that is stored in domain-specific databases (Buller 2005a, 127–200; Gerrans 2002, 307 n. 2; Samuels 2000).

We may thus distinguish between representational and computational modules, in that representational modules are mechanisms for storing symbols, while computational modules are mechanisms for processing symbols in a modular fashion. Representational modules are domain-specific bodies of data, while computational modules are domain-specific mechanisms. These two may also interact (Samuels 2000; Simpson et al. 2005, 12–15). Representational modularity means that perception and storage of information take place in a modular fashion. In computational modularity, the computations on symbols in the brain are also based on modular algorithms that are evolved adaptations.5 Just as humans do not have any general-purpose biological organs but instead have separate organs such as the liver, the heart, and so on, all serving specific functions, the mind is divided into functionally different units (see Sperber 2005, 54–57). As Tooby and Cosmides put it, a “psychological architecture that consisted of nothing but equipotential, general-purpose, content-independent or content-free mechanisms could not successfully perform the tasks the human mind is known to perform” (1995, 34).

But training and experience can extend the domain of a module far beyond its original limits, as Boyer and Barrett (2005) observe. Sperber thus suggests that we differentiate between the proper and actual domains of a system. By the proper domain is meant the things the system evolved to represent; the actual domain consists of things that are represented by the system at any given time after the system has evolved. These two do not always completely overlap. It is possible to extend the actual domain by training and experience and thus to create artificial “superstimuli” that have superb power to trigger a given module (Sperber 1994; Sperber and Hirschfeld 2004). A picture of a car with a human-like, smiling face, for example, readily triggers the “ToM module,” in addition to mechanical reasoning (see Guthrie 1993). Sperber and Hirschfeld (2004, 45) predict that ideas that at once trigger two separate intuitive systems may be especially salient.

Another example is the ability to read. This is a very demanding cognitive task, for many reasons. Healthy humans learn with considerable ease to recognize, within a fraction of a second, a pattern of light on the retina as a word, irrespective of the position, size, case, or font in which the word is printed (Dehaene et al. 2005). It even “deosn’t mttaer in waht oredr the ltteers in a wrod are; the olny iprmoetnt tihng is taht frist and lsat ltteer is at the rghit pclae.” Stanislas Dehaene and colleagues have identified the left occipito-temporal sulcus as the brain area that systematically identifies visual letter strings. This “visual word form system” plays an important role in informing other temporal, parietal, and frontal areas of the identity of the letter string, thus making semantic access and phonological retrieval possible (Dehaene 2005; Dehaene et al. 2005). Although this neural system does not seem to be strictly modular, it nevertheless exemplifies the way an existing neural system is adopted for a new use. The invention of symbol systems such as the alphabet and Arabic numerals relies on an extended range of cortical plasticity unique to humans; small changes to functionally specified brain areas in other primates suffice to adapt these regions to a new cultural domain (“recycling” of preexisting brain circuitry; Dehaene 2005).6
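The jumbled-letters effect quoted above is easy to reproduce. The following Python sketch is an illustration only (the function name and the sample sentence are invented for the example): it shuffles the interior letters of each word while leaving the first and last letters in place.

```python
import random

def scramble_interior(text, seed=0):
    """Shuffle the interior letters of each word while keeping the
    first and last letters in place, as in the jumbled-text example."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if len(word) > 3:
            interior = list(word[1:-1])
            rng.shuffle(interior)
            word = word[0] + "".join(interior) + word[-1]
        out.append(word)
    return " ".join(out)

print(scramble_interior("reading scrambled words is surprisingly easy"))
```

Words of three letters or fewer are left untouched, since they have no interior letters to shuffle; readers typically still recognize the longer words with little effort.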

There is, indeed, much plasticity in the human brain. Not all neural connections of the cortex are genetically determined. Brain mechanisms are not built in accordance with rules but result from a “proliferate-and-prune” process of overproduction and subsequent death of unnecessary neurons (apoptosis) (Buller 2005a, 197–98;7 see Changeux 1985; Edelman 1992; Panksepp 1998, 61–79). The brain builds itself in ontogenetic interaction with the environment (see Lickliter and Honeycutt 2003a,b). Only the general structure is genetically controlled, while many areas of the cortex have much plasticity in the ways they help their owner adapt to the environment (Buller 2005a, 132; Deacon 1998, 193–224; Kujala, Alho, and Näätänen 2000; Panksepp and Panksepp 2000).

The human brain starts to develop about twenty-five days after conception, growing at the rate of 250,000 cells per minute. This continues until birth, with cell production taking place differently in the evolutionarily older parts of the brain (the midbrain and the limbic system) than in the more recent, cortical part. In the older parts, newly formed cells are simply added by pushing existing cells outward; in the cortex, cells “migrate” to their final destination through a long and winding path. Some time after birth, the brain’s development takes a different form: the neurons and synapses that are not needed start to die. An adult brain thus is sculpted by gradually subtracting extra neurons and neural connections. The kinds of environmental stimuli the developing child is exposed to play a crucial role in shaping the neural circuitry of the cortex. Patterns of innervation from sensory receptors are projected on more and more remote brain structures, so that even circuits that have almost no direct connection with sensory receptors are nevertheless shaped by sensory perception (Buller 2005a, 131–35; Changeux 1985; Edelman 1992).

As there is no genetically encoded developmental program responsible for the functional roles of different cortical circuitries, the idea of hundreds of genetically encoded cognitive modules becomes suspect. It might instead be the cortical plasticity that is an adaptation. There is continual reorganization in the brain, and most cognitive processes are not isolated from each other. Neural circuits may be domain-sensitive, but they are not encapsulated. This concerns the neocortex in particular; the older parts of the brain seem to work differently and to contain more rigidly dedicated systems (Buller 2005a, 135–43).

The newer theories of modularity actually reject Fodor’s (1983) old criteria for modularity, emphasizing that there may be different degrees of encapsulation and modules may even share parts with each other.8 A modular system is turned on by a specific kind of information yet may also need to consult other systems. Face recognition, for example, may require consulting naïve physics but is not turned on without its proper input. Modules are isolable function-specific processing systems, all or most of which are domain-specific. Most (but not all) of them also are domain-specific in their input conditions. Although some modules are genetically channeled, many are constructed through learning (Barrett and Kurzban 2006; Carruthers 2006). Carruthers (2006, 150–210) argues that it is unlikely that the primate mind would have been so changed in evolution that its modular architecture would have been swept away. Nor is it plausible that the human mind is only the end-product of an increase in processing power. Instead, new modules have been added, especially modules for ToM, for language, and for dealing with social norms.

To the extent that the evolutionarily older parts of the human brain process spontaneous and unreflective behavioral responses and emotions (the so-called affect programs), the question becomes how the cortical processes interact with these more basic processes. Higher brain areas surely are not immune to subcortical influences. The lower areas, however, produce affect responses blindly: the fear system, for example, is triggered at the sight not only of a snake but also of all coiled shapes, such as hoses, ropes, and so forth. It is only cortical processes that can mediate such reflective thoughts as “Silly me, it was only a rope!” In emotional turmoil, bottom-up processes take over, overpowering top-down processes (Buller 2005a, 143; LeDoux 1998; Panksepp 1998, 300–306). It has therefore been argued that evolutionary considerations of the brain and cognitive functions should always begin with the subcortical structures, where homologies among mammals and specifically human modules are found. The cognitive modules that supposedly emerged in the Pleistocene may actually reflect our multimodal capacity to conceptualize world events and to relate them to primitive affects related to fitness concerns (Panksepp and Panksepp 2000, 110–11).

As the debate on modularity is related to the one on adaptationism, it may be illuminating to consult Peter Godfrey-Smith’s (1999, 186) division of adaptationism into three types (see Shanahan 2004, 152–69). Empirical adaptationism claims that natural selection is a ubiquitous force and helps explain almost all important questions of biological variation; explanatory adaptationism claims that apparent design of organisms tells about their adaptedness to the environment and that explaining this is the core intellectual mission of evolutionary biology; methodological adaptationism says that the best way for scientists to approach biological systems is to look for features of adaptation and design.

I thus suggest that we distinguish between empirical, explanatory, and methodological modularity, in that empirical modularity refers to the massive modularity of mind as a fact, explanatory modularity claims that the best way to explain human cognition is to explore its modular structure, and methodological modularity suggests that we study cognition as if it were modular (although it may be found not to be). I think that such methodological modularity has been successful in the study of religion because it has guided scholars to look for the individual cognitive processes that make religion possible, instead of treating religion as a single entity-like whole (e.g., Boyer 2001; see Day 2005). Whether cognitive processes really are modular in some delicate and uneasily explicated manner is of little consequence for the study of religion.

What is more important is the way affects interact with higher cognition. It is not enough merely to study the functional roles of religious concepts in thinking and behavior (see Lewis 1972) because these roles seem to be partly determined by emotions that yield themselves to neurophysiological explanation (see Panksepp 1998, 2007). I will return to this from a methodological point of view in section A.2.2. Here I want to emphasize that religious belief and behavior may in some sense resemble such things as fear of flying, for example. Although religious belief is defended by rational arguments, it may actually be based on the fact that persons just feel that way; rational justification comes afterward (Anselm’s fides quaerens intellectum). Similarly, merely knowing that flying in an airplane is safer than walking on Madison Avenue does not remove the fear of flying, because the fear is not based on a lack of statistical information. It is deeply rooted in affective brain processes beyond conscious control. It just feels that way. The same seems to hold for religious belief.

Thus, the information-processing approach to cognition needs to be supplemented by what Panksepp (2007) calls “energetic dynamics.” He points out, for example, that the human social brain arises from a set of innate emotional tendencies shared by all mammals. Emotions such as fear, rage, seeking, and lust are even premammalian. Prosocial feelings, for example, arise from dynamics of genetically provided instinctual systems, and social experience is mediated by opioids in the brain. Play releases opioids into the brain, leading to increased play and satisfaction (“social comfort”). In autistic brains, there is an overabundance of opioids and thus no need for social interaction. In various kinds of addictions it seems to be the other way around: drugs are used to compensate for the lack of natural opioids.

It is thus possible that affects or emotions exercise an influence on higher cognition through the link of (modular) intuitions. Edmund Rolls (2007), for example, argues that emotions, by and large produced by rewards and punishers, are adaptations enabling genes to specify goals for action. This leaves room for much cognitive and behavioral flexibility. Genes code for emotions, and emotions guide us toward rewards, while higher cognition guides the choices between various alternative ways of achieving the rewards. To the extent that religion is intimately linked with social cognition, it seems to be driven, at least partly, by the affects related to social experience. There is, for example, experimental evidence that persons who have been induced to feel disconnected from other humans report stronger belief in supernatural agents (Epley et al. 2008). Studying this further would necessitate various kinds of experimental designs; it has not been possible for me to explore this idea here on the basis of historical materials. In the next section, I move on to provide some theoretical foundation for an explanatory approach to religion.

A.2 Explaining Religion

A.2.1 Studying Religion from Outside

Anthropologists distinguish between emic and etic concepts and respective anthropological approaches. In its most simplified form, this dichotomy boils down to the distinction between the concepts of the people studied and scientific or scholarly concepts (Harris 1968, 571, 575, 1990, 50; see Headland 1990, 15). However, Kenneth Pike (1971), who coined the concepts of emic and etic, wanted to develop a formal model for analyzing units of behavior in different cultural contexts. Pike’s book was published in the same year as Malinowski’s diaries, which, as Clifford Geertz put it (1993, 55–56), revealed that anthropologists had no special capacity to think and feel like true “natives” and that the ideal of an empathetic understanding of other cultures had to be reconsidered. Pike’s emic-etic distinction came in handy in that situation. Geertz (57–58), for example, pointed out that focusing only on the emic aspect leads to speculations about an illusory identification with the people studied, while a one-sided emphasis on the etic aspect leads to abstract jargon with no clear connection to the materials studied.

Pike originally coined the emic-etic distinction as a behavioral analogue to the distinction between phonemics and phonetics (Pike 1971, 323–43, 1990b, 65; see Keating 1999; Nespor 1999). A phoneme is the smallest linguistic unit that can be used to express a difference in meaning; phonology studies how different phonemes are used to differentiate between different word meanings. Phonetics, for its part, studies all the different speech sounds in various languages, irrespective of meaning. Thus, phonetics is the necessary basis for phonology: the way speakers use different phonemes to express meanings in individual languages can only be studied by using phonetics to identify the different sounds. Pike thought that human behavior consisted of “behavioremes” and “utteremes” as building blocks analogous to phonemes. Therefore, human behavior, just as language, could be studied either from an emic or an etic perspective. The scholar identifies the smallest meaningful units in any cultural context by starting from an etic understanding of the structure of human behavior. There is no direct access to the emic aspect, without the mediation of an etic perspective, because all the emic units we use are our scholarly reconstructions (Pike 1971, 1990a, 30–31). There is thus no absolute dichotomy between the scholars’ and the “natives’” perspectives; instead, the latter is reconstructed on the basis of the former.

In the present study, “soul,” “God,” and “buddha” are emic concepts reconstructed from the etic perspective of comparative religion. “God,” however, is problematic because it can be used both as an emic concept, denoting the god of Christianity, and as an etic concept, in the sense of a general label for a type of being (“gods”; see Pyysiäinen and Ketola 1999). The concept of “supernatural agent” and the cognitive mechanisms of agent representation are understood from an etic perspective, of course. I do not intend to view beliefs about supernatural agents simply from the point of view of the believers themselves. My approach is comparative and explanatory, not interpretive and local. In the background of this choice figures the more general issue of the proper way to understand and explain human behavior.

Some philosophers argue that in the human sciences, it is not legitimate to use concepts that are unfamiliar to the people studied. It is not legitimate to say that X did Y because of Z, if X does not know or understand Z. Peter Winch (1958, 95–108, 127, 1970), for example, argues that as social life supposedly consists entirely of concepts and ideas, a social scientist can only study social life at the explicit conceptual level. If social life is not studied from within, trying to grasp the way people understand their existence, it is reduced to the purely physical aspects of behavior (but cf. Bhaskar 1998, 134–35; Martin 2000; Pyysiäinen 2004d, 1–27). This, in turn, is unacceptable for an antireductionist who does not want to reduce social life to something that it allegedly is not.

In religious studies, reductionism is a notorious scarecrow; its basic function seems to have been to keep unwanted approaches away from the field, rather than to help analyze ways of explaining religion (see Idinopulos and Yonan 1994). Antireductionists claim that reductionist approaches lead to a misrepresentation of religion as a phenomenon, while reductionists often argue that antireductionists are only trying to defend religion, not study it scientifically (see Wiebe 1999, 2005; cf. Gothóni 2005). As I have argued elsewhere, much confusion has been caused by not keeping separate the two different questions: (1) reduction in the sense of explaining intentional religious behavior from a nonreligious, third-person perspective (naturalist reductionism), and (2) reduction in the sense of intra- and interlevel theory reduction (theory reductionism) (Pyysiäinen 2004d, 67–80, 2004e; see Craver 2007, 3 n. 2).

The logic that underlies naturalist antireductionism can be criticized by elaborating further the counterargument, the basics of which I have presented elsewhere (Pyysiäinen 2004d, 71–74). Antireductionists typically claim that insofar as scientific explanations do not embody what is subjectively and religiously important in religion, they are somehow incomplete as scientific explanations. Because scientific descriptions and explanations are made from a third-person perspective, they do not grasp the richness of subjective experience. And because they thus leave something out of the picture, they are incomplete. The inescapable incompleteness of scientific explanations of religion, then, means that religion always escapes scientific analysis.

This line of reasoning can be illustrated with an analogous example from philosophy of mind. Frank Jackson presents the following “knowledge argument” against physicalism (Jackson 1982, 1986). He invites us to imagine Mary, a brilliant neuroscientist who has lived all her life in a black-and-white room, watching a black-and-white television, and reading books without color illustrations. By these means, she has acquired all possible physical information about vision and color that there is. The moment she is released from the room, she will not only see things she has never seen before but also learn something new about the world and visual experience that she did not know before (p.199) (if we ignore the unfortunate fact that her neuronal pathways would by that time be atrophied for good). Jackson thus concludes that physical information about subjective experience is incomplete and physicalism false. The point is that Mary will gain new factual knowledge about seeing colors in general, not just that she acquires a new kind of subjective experience (Jackson 1982, 132; 1986). However, although subjective experience thus seems to bring along new knowledge, the argument fails to show why this knowledge should be something extraphysical.9 It might just as well be only a matter of new neural activations.

Antireductionist arguments in the study of religion are implicitly based on the same kind of reasoning. We could imagine a completely nonreligious scholar, Muriel, who has never had any religious feelings, although she knows everything there is to be known intellectually about religion. If God then kindled a religious feeling in her heart, she would for the first time have a subjective experience of what it is like to be a believer. It thus seems safe to conclude that Muriel would learn something new about religion and that her previous third-person knowledge would have been incomplete. Three problems remain, however.

First, although the new subjective experience might bring something new to Muriel’s previous knowledge about religion, a sense of understanding is not necessarily the same as actual understanding, and personal experience is not an argument. A new experience may help create new hypotheses, but there is no necessary link between an experience and understanding; the sense of understanding is only a fallible indicator of actual understanding. Moreover, knowing an increased number of facts about a phenomenon does not necessarily lead to a better understanding (Ylikoski 2008; see Keil 2003).

Second, “religion” and “religious experience” are wide and nebulous categories (see Proudfoot 1985; Taves 2008). The analogy to color vision may well be quite misleading. It is very difficult to say what one actually lacks in lacking religious experience and feeling. Opinions differ widely among philosophers of religion as well as among religious “believers.” Would it really be easier to understand Theravada Buddhism if one were a Calvinist? Does being a shaman help one understand Judaism? I doubt it. Or if the claim is that some kind of general religiosity (whatever it might be) helps one understand whatever religion, then we are no longer talking about seeing things simply from the point of view of the “believer.”

Third, it is not clear whether the antireductionist claim is that (1) the scholar as a person should be religious, or that (2) (also) the explanations the scholar provides should be religious. If the claim concerns the person of the scholar, we have two options. Either the person’s religiosity has nothing to do with her or his science, or it somehow extends its influence on the explanations she or he puts forward. In the first case, religiosity is obviously inconsequential from the scholarly point of view. The second option brings us to claim (2): explanations of religion are incomplete as long as they do not somehow embody religiosity. This, however, means that the explanation should actually replace the explanandum. This leads to a difficult situation.

Claiming that explanations of religious behavior should also serve as a substitute or surrogate for religious practice implies that the scholar should provide religiously relevant scientific answers to religious questions, which may turn out to be a difficult combination. Although it is possible both to have a religious motivation for one’s scholarly work and to gain religious rewards from it, it is difficult to say how this should actually (p.200) change the scientific practice and in what sense such religious science would be superior as science. Not only is there a legitimate place for a nonreligious study of religious behavior, but mixing the two also introduces serious new problems.

However, antireductionists sometimes refer to Brentano’s idea of intentionality and Husserl’s phenomenology, the basic claim being that the study of religion explains or interprets religious intentions in themselves, “bracketing” all their empirical constraints (Brentano 1924; Husserl 1950a,b; see Dreyfus 1982a,b; Føllesdal 1982; Kusch 1997). Brentano (1924, 1:124–28) argues that all psychological phenomena (and they alone) are characterized by the fact that they refer to a content: the objects toward which mental acts are directed somehow reside in the mind itself. These phenomena are grasped by understanding, in contrast to explanation (Hintikka 1975, 192; see chapter 1, section 1.3.2).

Husserl (1950a) takes the task of phenomenology to be the study of what “appears” in subjective consciousness, without any attempt to ground immediately given phenomena on anything that would explain them. The “intentional objects” of consciousness should be studied irrespective of whether or not anything in external reality corresponds to them, and irrespective of the processes that gave rise to them (perception, hallucination, etc.).10 The object of analysis is pure consciousness, apart from the factual world; in the “eidetic” reduction, facts are reduced to their “essences” (see Kusch 1997, 241–43).

Thus, trying to found epistemology on psychology or on any natural science is “nonsense” to Husserl; the natural sciences only lead us to senseless skepticism by making the laws of logic relative (Husserl 1950a, 20–21, 35–36). This is because Husserl thinks that logic, which cannot be derived empirically from the human constitution, can never be considered from outside, since all inquiry already presupposes logic (Haaparanta 1995, 157–59; cf. Kusch 1989, 2–3; van Heijenoort 1967).11 The problem is the same as with Durkheim’s categories: are they simply given or do they arise from experience? Yet Husserl considers language (unlike logic) to be merely a calculus that can be looked at from outside, changed, and manipulated, whereas for Heidegger and Gadamer, language is the “universal medium” whose semantic relations are inaccessible to us (Kusch 1989, 1997; see Hintikka and Hintikka 1986).

Thus, phenomenologists of religion can argue that the relevance of religious data must be judged not in relation to the “believer’s” point of view or to a scientific theory, but in relation to a religious intention’s “inner structure” (Utriainen 2005; Kamppinen 2001). Religious intentions are objects of study in their own right, not merely insofar as they tell something about something else (brain, society, culture, etc.); the scholar describes the religious intention as it appears to him or her in the variable data, without postulating any foundational essence to religion (Utriainen 2005). Scholar of religion Terhi Utriainen thinks this leaves us with two alternatives: a never-ending reworking of our understanding of religion by providing ever new phenomenological descriptions of different religious situations, or a search for a final truth by “implanting religion (or at least the precondition of religion) to the natural constitution of the human animal” (47).

It is misleading, however, to oppose reworkable phenomenological descriptions to the final truth of a naturalistic approach. There is no final truth in science (e.g. Niiniluoto 1984, 1987, 2002). Explaining religious thought and practice in the light of the biological and psychological capacities of the human species that make religion possible (p.201) in no way means a search for general laws or immutable essences (see Bhaskar 1998; Halonen and Hintikka 1999; Hintikka 2001). The idea of an essence is contrary to a basic principle of evolutionary theory: emphasis on populational in contrast to typological thinking. Thus, the concept of religion (in the singular) may also be altered and redefined in the course of exploration, even though religion is studied within an empirical frame of reference (see Pyysiäinen 2001b, 1–5; Day 2005). And yet religion can be conceptualized from a coherent perspective, as is not the case for the kind of phenomenology that conflates data and theory.12 (How can one claim that only pure religion should be studied if religion is whatever one decides to call religion?)

A.2.2 Explanatory Pluralism and the Study of Religion

Reductionism can also refer to what philosophers of science call theory reduction (see Bechtel 2008, 130–35), which is based on the view that reality presents itself to us in hierarchically ordered layers or levels, starting from micro- and macrophysics and proceeding through chemistry, biology, and psychology to culture (Pyysiäinen 2004d, 24).13 It is usually thought that all macrolevel causal powers are somehow inherited from lower levels in some noncausal manner. The levels are differentiated on the basis of their relative complexity. The higher levels are more complex, in that they involve more complex causal mechanisms and we thus have to include more variables in our explanations. The altitude of a level of analysis is directly proportional to the complexity of the systems it deals with. The altitude of the level is also inversely proportional to the size of the domain of events in question: psychology, for example, is a higher level than biology and deals with only some of the phenomena within the realm of biology. In addition, the age of the relevant phenomena decreases from the bottom up: the lower the level, the longer its phenomena have been around in the world (McCauley 1986, 1996, 2007; Ylikoski 2001, 78–82).

Recently, Carl Craver has presented a detailed taxonomy of levels (2007, 171). First, the levels of science include the products of scientific inquiry, such as analyses, explanatory models, and theories. Second, the levels of units consist of research groups, perspectives, programs, and so forth. The levels of nature are defined by the criteria by virtue of which two items can be said to be at differing levels; causality, size, and composition have been suggested as such criteria. Thus, it has been argued that things at lower levels are smaller than things at higher levels, or that things at higher levels are more complex. Some say that things at differing levels are causally related. Craver’s levels of mechanisms are levels of composition, but the composition relation is ultimately not spatial or material; instead, the units are behaving mechanisms at higher levels and their components at lower levels. Craver argues that there is no uniquely correct answer to the placement question for levels of mechanisms (171–72, 188–89).

Craver’s idea is that neuroscientific explanations always cover multiple levels because the components of the explanandum form a hierarchy of levels. He uses spatial memory in mice and rats as an example (Craver speaks of both, without a distinction). At the highest level, we have the phenomenon of spatial memory; then comes the level of spatial map formation in the hippocampus and the temporal and frontal cortices; third is the cellular-electrophysiological level; fourth, we have the molecular mechanisms that make the chemical and electrical activities of nerve cells possible (p.202) (Craver 2007, 163–70). Thus, levels of mechanisms are not levels of objects (such as cell, organism, society) and do not correspond to any fixed ontological division. It also does not make sense to say that things at different levels causally interact with one another. To the extent that the levels are levels of mechanisms, “interlevel causation” would mean an interaction between the behavior of a mechanism as a whole and the parts of the mechanism (but there is no difficulty in things of one size scale interacting with things of another size scale) (190, 195).

It thus is not meaningful to ask if “culture” or “society” are independent levels (see Bechtel 2008, 129–57). The question instead is whether we can describe mechanisms above the level of such things as spatial memory. According to Craver (2007, 1–9), a mechanism is “a set of entities and activities organized such that they exhibit the phenomenon to be explained.” Entities are the components of mechanisms and have properties that allow them to engage in a variety of activities that are the causal components of mechanisms (see Bechtel 2008, 14). The entities and activities of a mechanism are organized spatially, temporally, causally, or hierarchically.

A mechanistic explanation thus is not only an argument or an invariance. Explanations consist of objective features of the world and of public representations such as texts describing these features (Craver 2007, 26–28, 100–01). Craver (28–49) thus rejects Hempel’s covering law model, Paul Churchland’s prototype activation model, and Kitcher’s unification model, adopting Woodward’s manipulationist account of causality. In this view, X is causally relevant with regard to Y if an “ideal intervention I on X with respect to Y is a change in the value of X that changes Y, if at all, only via the change in X” (95–96). The explanatory generalizations describing causal relevance relations are stable or invariant but not necessarily or even usually universal. This means that the relation between cause and effect holds under a range of conditions (99). Thus, culture is causally relevant with regard to religion if an ideal intervention I on culture with respect to religion changes culture so that a change in religion follows.

Consider now Alan Garfinkel’s imaginary example of explaining increase and decrease in populations of rabbits and foxes (Garfinkel 1981, 49–74). Let the frequency with which foxes encounter rabbits (and eat them) be the main influence on the levels of populations. Both the fox level and the rabbit level have an effect on the frequency of encounters. When there are many foxes around, this places great pressure on rabbits. Imagine now that we want to explain the death of a rabbit. We might say:

The cause of the death of the rabbit was that the fox population was high.

As this is an explanation of a microstate by a macrostate, it should—by a theory-reductionist’s lights—be reducible to a microexplanation:

Rabbit r was eaten because he passed through the capture space of fox f.

But here the explanandum is no longer “the death of the rabbit” but something like “the death of rabbit r at the hands of fox f, at place p, at time t (and so on and so forth).” But this seems to include unnecessary details, considering that the original question was “why the rabbit was eaten {rather than not eaten}.” A rabbit, for example, might want to know why rabbits get eaten, not why they get eaten by specific foxes. A microexplanation, however, says “The rabbit was eaten {by fox f at time t … [rather than by (p.203) some other fox …]}.” But it does not follow that “If rabbit r had not been at place p, at time t (and so on and so forth), he would not have been eaten” (Garfinkel 1981, 163). The problem with microexplanations thus is that they bring a certain instability to the conditions under which they work.

It can be said that microexplanations answer how-questions by providing a mechanism for the operation of macroexplanations answering why-questions. As a particular mechanism is not necessary for the effect, it is not a good explanation. As Garfinkel puts it, it is “too true to be good.” That one thing is materially identical with another does not mean that the relevant explanation can be reduced (1981, 58–59). Therefore, upper-level explanations are often needed. Neither eliminative reductionism nor its antithesis can provide a satisfactory model for explaining religion, however.

Robert McCauley’s (1986, 1996, 2007; McCauley and Bechtel 2001) explanatory pluralism and Craver’s multilevel model offer an alternative to all kinds of “fundamentalist” views, according to which good explanations can be formulated only at the most fundamental level (Craver 2007, 9–16; see Pyysiäinen in press). A multilevel view of explanation should be combined with a contrastive counterfactual theory of explanation, because explanations are always answers to contrastive questions (Craver 2007, 202–11; Garfinkel 1981; Ylikoski 2001). This is because only some of the available information is relevant for any given explanation. Adding details to the explanation of why the rabbit got eaten, for example, does not make the explanation better or more accurate; it instead changes the explanandum. In order to specify the explanandum properly, we need, first, to contrast the question asked with something else: why did f occur instead of c? Second, it is necessary to specify a mechanism that ensures that f occurs instead of c. Consider the case of the bank robber Willie Sutton, for instance. When the prison chaplain asked him “Why do you rob banks?” he replied: “That’s where the money is!” thus making the erroneous contrast “Why do you rob banks {instead of, say, newsstands}?” The correct answer thus depends on how we conceive the question; a contrast is an economical way of fixing the causal field as it delivers us from the burden of listing all the components of the field (Garfinkel 1981, 21–22; Ylikoski 2001, 22, 27, 36, 2006).

Thus, religion can be explained in terms of cognitive mechanisms when we bear in mind that these mechanisms span different levels from molecular systems to culture, in the sense of shared knowledge. Take as an example the relationship between the serotonin system in the brain and spiritual experience. Borg et al. (2003) argue that serotonin 5-HT1A receptor density correlates with the tendency to spiritual experience. It should now be possible to describe a causal mechanism whose levels consist of the molecular level, the appropriate cellular-electrophysiological level, the cognitive aspect of the experience, and the phenomenon of spiritual experience. Moreover, as social neuroscience is also interested in the neural representation of social phenomena, it should also be possible to add a fifth level, the societal role of spiritual experience (see e.g. Bruce 2002). Spiritual experience may have causal relevance with regard to society, but society is also causally relevant with regard to spiritual experience.

Whereas a scientific fundamentalist claims that good explanations of religion are possible only at some chosen level (be it culture or cognition), multilevel mechanistic explanation is based on the realization that good explanations of religion engage the levels of neurobiology, cognitive processes, and sociocultural systems. Religion is a hybrid formation and can be approached at any of these levels. Clearly, advance at any (p.204) level can in principle also benefit scholarship at other levels by, for instance, helping formulate new research questions and testable hypotheses. There is no religion without neural activity, but neuroscientists often do not have expertise in the study of religion. Religion cannot be identified at the neural level alone, but scholars of religion do not have expertise in studying the neural processes involved. Therefore, interdisciplinary cooperation is needed, and religion should be explored from different angles, and theories developed at different ontological levels (Boyer 2005b). Individual scholars may work on specific levels; the general understanding of religion may develop simultaneously at different levels.

By contrasting explanations with counterfactual scenarios, it is possible to specify the explanandum properly. In studying the serotonin system, a proper contrastive research question could be, for example, “Does individual variation in serotonin 5-HT1A receptor density correlate with self-transcendence {rather than with some other personality traits}?” In Borg et al. (2003), serotonin binding potential correlated inversely with scores for self-transcendence (but no correlations were found for any of the other six “temperament” and “character inventory” dimensions). One possible interpretation of this finding is that subjects with low 5-HT1A receptor density have sparse serotonergic innervation and thereby a weaker filtering function, allowing for increased perception and decreased inhibition. An improper research question is exemplified by such (imaginary) contrasts as “Does serotonin 5-HT1A receptor density {rather than social factors} serve as the basis for spiritual experience?” This question is based on an implicit fundamentalist assumption that there is only one proper level at which good explanations work.

Insofar as “religion” is not a unitary entity with clear boundaries, it is necessary to specify mechanisms of religious behavior at differing levels. This also helps connect the study of religion with other sciences across the board—though doing so is not always easy. The human sciences have a long history in research that is centered around the scholar as an individual; this type of research is by and large characterized by Boyer’s (2003c) “relevant connections” model:

  1. There is no agreed corpus of knowledge.

  2. There are no manuals, or agreed techniques or methods.

  3. The history of the field is crucial, in the sense of a continual reframing of past theories.

  4. Books are more important than articles.

  5. There is no specific developmental curve of the scholar: one may be able to produce interesting connections from the beginning.

  6. There is no agreement on who is a competent performer, although there are coalitional cliques and bitter feuds.

In contrast to this, studying religion at different levels of mechanisms often requires cooperation among several disciplines and scholars (see Boyer 2005b). The difficulties of doing so should challenge us to rethink the traditional division of labor between various disciplines that is supposed to reflect some kind of natural ontology. As reality does not consist of natural compartments, we should try our best to overcome the obstacles created by artificial disciplinary boundaries.

(p.205) A.3 Religion as By-product and Sexual Selection

The question remains why religious rituals often take the elaborate forms they do. One way of answering this question is the modes-of-religiosity theory, according to which there are specialized causal mechanisms that determine the development of religious traditions either toward an “imagistic” or a “doctrinal” form (Whitehouse 1995, 2000, 2002, 2004). Whitehouse calls these forms “attractor positions,” but it is not entirely clear what this is supposed to mean; as real-life religious traditions always seem to combine imagistic and doctrinal variable contents, attraction must be something very abstract (see Ketola 2007, 102–3; Whitehouse and Laidlaw 2004; Whitehouse and Martin 2004; Whitehouse and McCauley 2005).

The attraction seems to take place on something like the level of an abstract “competence,” as distinguished from “performance” (to use Chomsky’s terminology as a loose analogy; Pyysiäinen 2006c). However, the idea of modal causality has been criticized for vagueness (Wiebe 2004) and a more reductive strategy recommended (Boyer 2005a). In the standard model of the cognitive science of religion, causal mechanisms are sought beneath the surface level of what is referred to as “religion” (Boyer 2001; Sperber 2006; Kirkpatrick 2006, 2008). Religion is made possible by cognitive mechanisms that are adaptations for solving problems related to survival and reproduction. Another and much less studied option is that what biologists call sexual selection may also have played a role in the evolution of the capacities and behaviors of which religion is a by-product (Dennett 2006, 87–89; Pyysiäinen 2008).

I have elsewhere argued that sexual selection may have begun to operate on the individual tendency to ritualized behavior (Pyysiäinen 2008; see Dennett 2006, 87–89). I mean by ritualized behavior individual behavior that is repetitive, rigid, and noninstrumental (Boyer and Liénard 2006; Liénard and Boyer 2006). Ritual behavior, in contrast, refers to all actions that take place in the context of a collective ceremony. Boyer and Liénard (2006) argue that ritualized behavior results from a specific hazard precaution system (or several such systems) geared to the detection of and reaction to inferred threats to fitness. Such threats create a specific adaptive problem because (1) they are quite diverse; (2) there is no straightforward feedback demonstrating that a threat has been removed, it being in the nature of such threats that they are not directly observable; (3) appropriate measures cannot be mapped one-to-one onto physically different classes of threats, since each type of threat may require very different precautions, depending on the situation. Ritualized action is then typified by stereotypy, rigidity, repetitiveness, and partition of behavior into subactions that do not seem to have any immediate instrumental goals. Such ritualization of action is found not only in cultural ceremonies but in children’s rituals and obsessive compulsive disorder as well (see also Liénard and Boyer 2006).

The activation of a hazard precaution system leads to an arousal and a feeling that something must be done. Certain actions seem intuitively to be called for, although one does not have any explanation for why this is so. In the aroused state, one’s attention is focused on low-level properties of action, which is thus “parsed” into smaller units than normal. Such upper-level categories as “walking” are replaced by such lower-level categories as “walking-in-this-or-that-specific-manner.” Connected to this is a “just right” syndrome: everything must be done very carefully, yet one can never be sure that a goal (p.206) has been reached. As the relationship of the low-level actions with the more general goal of the ritual is close to a mystery, repetition of action follows. The types of actions concerned relate to a few salient themes, such as pollution and purification, danger and protection, as well as intrusion of others and the construction of an ordered environment (Boyer and Liénard 2006; Liénard and Boyer 2006). These are also the themes underlying rituals as collective ceremonies: religious and magical rituals relate to purification (e.g. baptism, libations), protection (so-called crisis rites like rainmaking), and the creation of social order (rites of passage, such as initiations). Inferred threats also tend to trigger the HADD and HUI/ToM, because postulating an agent is in most cases necessary for effective precautions (see Taves 2008, 216). Thus, representations of supernatural agents also get activated, and people try to please and propitiate them in attempts to do something about the inferred threat.

According to Geoffrey Miller (2001a, 345), ritualization results from such sexual selection of signals and displays in which movements and structures are modified to excite optimally the perceptual systems of receivers. Such ritualization may be accompanied by hazard precaution triggered by an inferred threat to one’s fitness: not finding a mating partner means not leaving any offspring. As Westermarck (1891, 185) points out, in tribal societies, only men run the risk of being obliged to lead a single life (see Trivers 2002, 66). Courtship thus may be ritualized because of the inferred threat and then get more elaborate because of sexual selection.

In sexual selection, as found among sexually reproducing organisms, the relative parental investment of the sexes in the young is the key variable: when one sex invests more time and resources in the care of the offspring, members of the other sex will compete among themselves to mate with the members of the former (Trivers 2002, 102). In such a situation, some individual phenotypic trait may become a sign either of the good overall condition of its bearer (good genes) or of the fact that the individual is likely to be capable of and willing to invest resources in the care of the offspring. A peacock’s tail, for example, takes much energy to grow, is difficult to carry, and is an obvious handicap when trying to avoid predators. For the peahen, it thus is a reliable sign of the peacock being in good overall condition, because it can afford the tail. In some other species of birds, females prefer males who rule over a favorable territory (Dunbar 1988, 288–91; Howard 1948). Biologists now recognize such female mate sampling and choice as a fact in many species (Byers and Waits 2006; Fisher et al. 2006; Trivers 2002, 56–110; cf. Rhodes and Simmons 2007).

When, at some point in evolution, a given peahen had a liking for a long tail in the male, her offspring inherited both the liking (peahens) and the long tail (peacocks). The average length of the tail gradually grows in each generation, because peacocks with the longest tails have more offspring than their competitors. In each generation, it thus takes a little bit more to be above the average. Not all peacocks can grow a long tail, however, because a long tail presupposes a good overall condition, which, in turn, is determined by many different genes at different loci (see Kokko et al. 2003, 2006; Kotiaho et al. 2001; Miller 2001a,b; Tomkins et al. 2004). However, females also seem to prefer novel or rare phenotypes; when such choosiness is common, rare genotypes in one generation tend to become common in offspring generations (Kokko et al. 2007).

Psychologist Geoffrey Miller provides theoretical arguments to the effect that human cultural phenomena such as music, morality, and religion also might be due to (p.207) sexual selection (Miller 2001a,b, 2007). Haselton and Miller (2006), for example, show that in short-term mate choice, women near peak fertility (midcycle) prefer inherited creativity over wealth due to good luck. As creativity is a sign of good genes, women prefer creative males as short-term mating partners.

Miller (2001a, 349–50) points out that early human hunting strategies relied on the hunters’ long-range running, high aerobic capacity, and sweating ability. The best hunters were in good overall condition, with good motor control under conditions of high aerobic effort over long periods of time. From the gene’s-eye perspective, females should have preferred them as mating partners. To the extent that women were unable to accompany men on the hunt to see who was the fittest, they would have needed some other means of rating men. Observing ritual dancing may have been one such means. As most tribal and folk dancing includes repeated high stepping, stamping, and jumping, using the largest and most energy-consuming muscles, dancing may well have evolved as a display of the fitness needed in hunting. Ritual dancing is universal among humans and has its origins close to the time of the emergence of Homo sapiens in Africa, before the spread of the human species to other continents (Kaeppler 1978; Kurath 1960; Bachner-Melman et al. 2005).

The good overall condition that skillful and prolonged dancing requires is measured by the so-called fluctuating asymmetry (FA) of the body (minor deviations from perfect bilateral symmetry of the body; see Trivers 2002, 309–27).14 The validity of FA, which has been used as a measure of developmental stability and thus of good genes, is diminished by the fact that the effect tends to be smaller than measurement error (Rhodes and Simmons 2007). However, the study by Brown et al. (2005, not included in the meta-analysis by Rhodes and Simmons [2007]) presents evidence of a strong positive association between bodily symmetry and dancing ability, especially in men. Women also rate dances by symmetrical men relatively more positively than men do. It thus might be argued that ritual behaviors serve as sexually selected courtship displays (Brown et al. 2005). Rhodes and Simmons (2007, 360) observe that although there is little reason to believe that a preference for symmetric bodies is an adaptation for obtaining indirect genetic benefits from mate choice, bodily symmetry may yet convey information about possible direct benefits in the form of provided resources.

A study by Bachner-Melman et al. (2005) suggests that two polymorphic genes contribute to differences in aptitude, propensity, and need for creative dancing: the arginine vasopressin receptor 1a (AVPR1a) gene and the SLC6A4 gene, which encodes the protein that transports serotonin. The AVPR1a gene makes a profound contribution to affiliative and social behavior, including courtship, in vertebrates in general; thus, the association between AVPR1a and dancing in humans may reflect the importance of social relations and communication in dance.

The role of the SLC6A4 gene with regard to dancing may relate to sensitivity to altered states of consciousness. The protein that this gene encodes terminates the action of serotonin and recycles it; it is also a target of such psychomotor stimulants as amphetamine, for example. Serotonin receptor density, for its part, has been shown to correlate inversely with apprehension of phenomena that cannot be explained by objective demonstration (“spirituality”) (Borg et al. 2003). The amphetamine derivative 3,4-methylenedioxymethamphetamine (known by its recreational users as “ecstasy”) causes a marked increase in serotonin and dopamine, an effect that is enhanced by such things as loud (p.208) music and hot, overcrowded conditions (Parrott 2004). Such conditions are typical of tribal rituals as well (see, e.g., Whitehouse 1995). Ritual dancing thus is also a favorable context for the so-called religious experiences of being in contact with supernatural agents (see Pyysiäinen 2001b, 77–139).

Ritual dancing is just one example of the kind of skills underlying religion that possibly have been favored not merely by natural selection but also by sexual selection. This example also shows how difficult it is to try to explain the evolution of religion; social institutions can only evolve if certain materially realized skills and capacities first evolve, which leads directly to the standard-model idea of religion as a by-product.

However, we may also ask why many religious traditions favor institutionalized celibacy if religion evolved driven by sexual selection. There are several ways of trying to answer this question (see Deady et al. 2006; Qirko 2002). One option is that celibacy is an alternative strategy in conditions where it is not possible for everyone to find a partner and reproduce (see Deady et al. 2006, 394–96 for examples). Another line of reasoning is based on Trivers’s theory of parental investment as “any investment by the parent in an individual offspring that increases the offspring’s chance of surviving (and hence reproductive success) at the cost of the parent’s ability to invest in other offspring” (2002, 67). The chances of reproductive success are best when a parent invests all resources in one of the sons (often the oldest), because males have a higher reproductive capacity and because dividing scarce resources among several offspring may mean that none gets enough to survive (Boone 1986; Hartung 1976).

This has led to various kinds of cultural practices meant to keep some of the youngest offspring as “helpers at the nest” who do not reproduce. There is, for example, anecdotal evidence from German folk traditions of people having “systematically kept [children] dumb and crippled” to ensure that they would stay at home (Voland 2007, 424–27). Deady et al. (2006) provide evidence from nineteenth-century Ireland, where one of the sons in each generation would inherit land that enabled him to marry; for the others, emigration or staying as a helper at home was often the only option (4.2 million people emigrated from Ireland between 1852 and 1921).

At the same time, there was a sharp increase in both female and male celibacy within Irish Catholicism. Deady et al. (2006) cite Larkin’s study showing that vocations increased from five thousand priests, monks, and nuns for a Catholic population of 5 million in 1850 to fourteen thousand clergy for a population of 3.3 million Catholics by 1900. Deady et al. identified forty-six priests born in County Limerick between 1867 and 1911 and showed that a significantly larger proportion of priests originated from landholding households than the county average, and that the mean number of children in priests’ households of origin was significantly greater than the national average number of children per household. They conclude that the wealthier landowners were able to pursue the alternative strategy of sending a son into the priesthood, thus enhancing their family’s social status at the reproductive expense of a son who might have had difficulty finding a marriage partner. From the gene’s-eye perspective, celibacy was also an adaptive strategy for the son (or his genes) because he could increase investment in close kin, such as nieces and nephews, which provides inclusive fitness benefits for the celibate individual via improved reproductive conditions for kin (Deady et al. 2006).
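As a quick arithmetic check, using only the figures quoted above, these numbers imply a more than fourfold rise in clergy per capita over the half-century:

```python
# Clergy per 1,000 Catholics in Ireland, computed from the figures
# quoted in the text (Larkin's study, via Deady et al. 2006).
clergy_1850, catholics_1850 = 5_000, 5_000_000
clergy_1900, catholics_1900 = 14_000, 3_300_000

per_1000_1850 = 1000 * clergy_1850 / catholics_1850  # 1.0 per 1,000
per_1000_1900 = 1000 * clergy_1900 / catholics_1900  # about 4.2 per 1,000

print(f"1850: {per_1000_1850:.2f} clergy per 1,000 Catholics")
print(f"1900: {per_1000_1900:.2f} clergy per 1,000 Catholics")
print(f"relative increase: {per_1000_1900 / per_1000_1850:.1f}x")
```

The rise thus reflects both growing vocations and a shrinking (largely emigrating) Catholic population.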

(p.209) There must, of course, also be some proximate, psychological explanations for why individuals are willing and able to act against their biological nature. Among the factors contributing to individuals’ learning to identify with religious or other orders rather than their kin are close association among group members that resembles family ties; false phenotypic matches such as uniforms, emblems, hairstyles, speech patterns, and so forth; metaphoric kin terminology; recruitment of members at puberty; and separation from biological kin (Qirko 2002, 2004a; see also MacIntyre 2004).15 Not only is it possible to use these methods to recruit individuals to celibate orders; such methods may also partly underlie the recruitment of modern suicide terrorists (Qirko 2004b; cf. Atran 2004; see Bushman et al. 2007).

Sexual selection thus may well have played a role in the evolution of the capacities of which “religion” is a by-product. Several observations may speak for sexual selection of “religious” beliefs and practices: “religion” becomes important at puberty (rites of initiation); it is male-dominated (women mostly are “audience”); it is costly in terms of time and resources (the handicap principle); it is species-specific, like all sexually selected traits; there are heritable differences in “religious” attitudes and behavior (see Koenig and Bouchard 2006); and “religion” has a multimodular basis in the mind. Although this is so far only a speculative hypothesis, it is testable in principle. It is possible to explore whether men display their “religiosity” as part of courtship, whether they also try to downplay the “religiosity” of their sexual competitors, whether females really prefer “religious” males, and whether females at peak fertility prefer “religious” indications of good genes over wealth (Miller 2007). (p.210)

Notes:

(1) . Among the other alternatives are dynamical systems theory (DST), which emphasizes feedback control systems, and the embedded/embodied (E/E) view of cognition, or embodied dynamicism (Thompson 2007). Different views, however, take different kinds of things to be paradigmatic of human cognition: whereas cognitivists take reasoning, playing chess, and processing language as paradigm cases, the E/E and (p.226) DST camps regard sensorimotor tasks as central (Grush 2002, 282–84). Keeping in mind the dual nature of cognition, there seems to be room for both views. For example, Barsalou’s (1999) idea of cognition as simulating perceptual processes offline, as it were, is useful in many contexts, although it might be argued that the symbols that are grounded in perceptual experience need not “spend eternity underground” (Gabora 1999). In that case, a computational model seems to be necessary for explaining how the symbols work (Dennett and Viger 1999).

(2) . The symbol processor of the B-system may have evolved from the connectionist A-system and be situated on top of it (Smolensky 1988); the two systems may work in unison (Sun 2002; Sun et al. 2005); or they may be instances of a higher-order system (see Holyoak and Spellman 1993; Kokinov 1997).

(3) . E.g., Carruthers 2004; Cosmides and Tooby 1994, 2002; Pinker 1998, 2002, 2005a,b; Samuels 2000; Sperber 1994; Tooby and Cosmides 1995; Tooby et al. 2005. Geary (2005; Geary and Huffman 2002) has developed a mediating position he calls “soft modularity.”

(5) . The evolutionary psychologists’ ideas of humans’ cognitive modules having emerged in the Pleistocene and of present-day hunter-gatherers as essentially similar to their remote ancestors is strikingly similar to Tylor’s idea of survivals. The Tylorian idea of the evolution of religious beliefs is here replaced by attempts to trace the evolution of cognitive mechanisms that make religious belief possible (see Boyer 2001, 32–33).

(6) . Sperber here prefers the interpretation of many innate modules being learning modules that generate further modules (Sperber 2005, 59).

(7) . Buller, however, seems to draw rather extreme conclusions from the experimental work that has revealed the plasticity of the cortex; plasticity clearly has its limits (Sereno 2005; Smirnakis et al. 2005).

(8) . Carruthers (2006, 7) defines encapsulation to mean that a processing system’s internal operations cannot draw on any information held outside that system (in addition to its input). Domain specificity means that a system only receives inputs of a particular kind.

(9) . Churchland (1995, 200–02) makes this point but also unduly conflates Jackson’s argument with Thomas Nagel’s (1974) similar argument on “what is it like to be a bat.”

(10) . This, of course, is why this program has had a continuing appeal to some scholars of religion, even though no successful applications exist (Gilhus 1994; Jensen 2003; cf. Ryba 1991).

(11) . Jean-Paul Sartre (1978) criticized Husserl for making the apparently independent intentional objects actually dependent on a “transcendental ego” (see Husserl 1950b, 137–38). Sartre (1978, 1943) argued that the ego was in the world, that is, in the cognized objects, and that consciousness, in not having any intrinsic existence, thus was “unhappy consciousness.”

(12) . Some have also seen obvious overlap in the projects of phenomenology and cognitive science (see Gallagher 1997; Varela 1996); Dreyfus (1982b, 3–11) argues that in its first stage, Husserl’s theory of intentionality corresponded “exactly” to Fodor’s representational ToM, while in its second stage it also shows resemblances to Fodor’s idea of computation on mental representations. Cognitive science focuses on the structures and processes that make up a mind, by and large irrespective of verbal language, much as Husserl focuses on the “noematic” components of perception (McIntyre and Smith 1982). However, this view has also been contested on the basis that Husserl does not have the kind of representationalist theory of mind Dreyfus attributes to him and that Husserl’s project also includes the notion of a precognitive and non-object-directed “operative intentionality” (Thompson 2007).

(13) . In Nagel’s classical model of theory reduction, reduction means deriving a reduced theory’s laws from the laws of a reducing theory with the help of some bridge principles. The idea is not one of elimination of an upper-level theory but of its deduction from a lower-level theory of which it is a special case. Reduction is a kind of explanation of another explanation or theory. However, as the reducing theory also corrects the reduced theory, the laws of the latter cannot deductively follow from those of the former. Thus, it has been argued that, for example, neuroscientific theories instead eliminate and displace folk psychology (Churchland 1998; Churchland 1989). This is a case of historical theory succession rather than of reduction (Craver 2007, 3 n. 2; McCauley 2007).

(14) . Directional asymmetry arises when one side of a trait is systematically larger than the other side; antisymmetry occurs when an organism develops asymmetrically, the side with larger traits being determined randomly by environmental factors; FA is caused by developmental instability, small random perturbations in cell division arising from the stochasticity of development (Rhodes and Simmons 2007).

(15) . Qirko (2004a, 699) regards “kin-cue manipulation” as “complementary” to explanations based on inclusive fitness theory. However, the two clearly operate on different levels: the ultimate and the proximate.