Beware the Passionate Robot - Oxford Scholarship
Who Needs Emotions? The Brain Meets the Robot

Jean-Marc Fellous and Michael A. Arbib

Print publication date: 2005

Print ISBN-13: 9780195166194

Published to Oxford Scholarship Online: March 2012

DOI: 10.1093/acprof:oso/9780195166194.001.0001

PRINTED FROM OXFORD SCHOLARSHIP ONLINE (www.oxfordscholarship.com). © Copyright Oxford University Press, 2015. All Rights Reserved.

Beware the Passionate Robot

Chapter:
12 Beware the Passionate Robot
Source:
Who Needs Emotions?
Author(s):

MICHAEL A. ARBIB

Publisher:
Oxford University Press
DOI: 10.1093/acprof:oso/9780195166194.003.0012

Abstract and Keywords

This chapter discusses the observation that human emotions sometimes have unfortunate effects, which raises the concern that robot emotions might not always be optimal. It analyzes brain mechanisms for vision and language to ground an evolutionary account relating motivational systems to emotions and the cortical systems which elaborate them. It also attempts to determine how to address the issue of characterizing emotions in such a way that a robot can be considered to have emotions, even though they are not empathically linked to human emotions.

Keywords:   emotions, robots, brain mechanisms, vision, language, motivational systems, cortical systems

The warning, “Beware the Passionate Robot,” comes from the observation that human emotions sometimes have unfortunate effects, raising the concern that robot emotions might not always be optimal. However, the bulk of the chapter is concerned with biology: analyzing brain mechanisms for vision and language to ground an evolutionary account relating motivational systems to emotions and the cortical systems which elaborate them. Finally, I address the issue of whether and how to characterize emotions in such a way that one might say that a robot has emotions even if they are not empathically linked to human emotions.

A Cautionary Tale

On Tuesday, I had an appointment with N at 3 P.M., but when I phoned his secretary at 2:45 to check the place of the meeting, I learned that she had forgotten to confirm the meeting with N. I was not particularly upset; we rescheduled the meeting for 4 P.M. the next day, and I proceeded to make contented use of the unexpected free time to catch up on my correspondence.

On Wednesday, I decided in midafternoon to put together a chart to discuss with N at our meeting; but the printer gave me some problems, and it was already after 4 when I left my office for the meeting, feeling somewhat flustered but glad that I had a useful handout and pleased that I was only a few minutes late. I was looking forward to the meeting and experienced no regret or frustration that it had been postponed the day before. However, when I arrived at about 4:06, N's secretary wasn't there and neither was N. Another secretary explained that N's secretary had left a message for me earlier in the day to move the meeting forward to 2:30. (I had not received the message because my secretary had taken the day off to tend to her ill mother. When I heard the news that morning, I felt slightly frustrated that some correspondence would be delayed but equally concerned for the mother's health and had no question about the appropriateness of my secretary's action. The matter, having been accepted, had no further effect upon my mood, at least until I learned of the loss of the message. At midday, I had been transiently and mildly upset by the cancellation of a luncheon appointment.) N was not available. Would I care to make an appointment? With a curt “No,” I turned on my heels, and stormed out of the office and back to my own. As I walked, I simultaneously felt fury at the double cancellation and shame at my own rude behavior, as well as realizing that another appointment had to be made. I felt tight and constricted. After a minute or two in my office, unable to concentrate and with my thoughts dominated by this stew of emotions, I decided to return to N's office. Once there, I apologized to the secretary for my discourtesy, explained the annoyance of the double cancellation, set a new appointment, made a feeble joke in the form of a threat about the dire consequences of another cancellation, and then returned to my office. The physiological effects I had felt just a few minutes earlier quickly dissipated. I felt pleased that I had “done the right thing.”

In this particular instance, no real harm was done. In other cases, my shows of temper have triggered unfortunate effects in others slower to anger and slower to forgive, which have had long-lasting consequences. I say this not to confess my faults or engender curiosity about my autobiography but simply to make the point, almost entirely neglected elsewhere in this book, that emotions can have negative consequences (cf. Chapter 10, “Emotional disturbances,” of Hebb, 1949).1 Thus my warning, “Beware the Passionate Robot.” If we create a computer tutor, it may be useful to provide it with a human-like voice and perhaps even a face that can provide emotional inflections to its delivery of information, thus adding a human-like emollient that may aid the human student's learning. However, if the computer were to become so exasperated with a poor student that it could lose its temper, screaming out the student's shortcomings in emotional tones laced with invective and carrying out the electronic equivalent of tearing up the student's papers in a rage, then where would the benefit lie? One might argue that even though such outbursts are harmful to many children, they may be the only way to “get through” to others; but if this is so, and the production of emotional behavior is carefully computed within an optimal tutoring strategy, it may be debated whether the computer tutor really has emotions or is simply “simulating the appearance of emotional behavior”—a key distinction for the discussion of robot emotions. We will return to these questions later (see Emotion without Biology, below).

To complement the above account of my own tangled emotions on one occasion, I turn to a fictional account of the mental life of a chimpanzee under stress, an excerpt from a lecture by the fictional Australian writer Elizabeth Costello as imagined by J. M. Coetzee (2003):

In 1912 the Prussian Academy of Sciences established on the island of Tenerife a station devoted to experimentation into the mental capacities of apes, particularly chimpanzees.… In 1917 Köhler published a monograph entitled The Mentality of Apes describing his experiments. Sultan, the best of his pupils … is alone in his pen. He is hungry: the food that used to arrive regularly has unaccountably ceased coming. The man who used to feed him and has now stopped feeding him stretches a wire over the pen three metres above ground level, and hangs a bunch of bananas from it. Into the pen he drags three wooden crates.… One thinks: Why is he starving me? One thinks: What have I done? Why has he stopped liking me? One thinks: Why does he not want these crates any more? But none of these is the right thought.… The right thought to think is: How does one use the crates to reach the bananas? Sultan drags the crates under the bananas, piles them one on top of the other, climbs the tower he has built, and pulls down the bananas. He thinks: Now will he stop punishing me? … At every turn Sultan is driven to think the less interesting thought. From the purity of speculation (Why do men behave like this?) he is relentlessly propelled towards lower, practical, instrumental reason (How does one use this to get that?) and thus towards acceptance of himself as primarily an organism with an appetite that needs to be satisfied. (pp. 71–73)

This may or may not be a realistic account of what Sultan was thinking (see de Waal, 2001, for the views of a primatologist who supports such “anthropomorphism”), but my point here is to stress a “two-way reductionism” (Arbib, 1985; Arbib & Hesse, 1986) which understands the need to establish a dialog between the formal concepts of scientific reductionism and the richness of personal experience that drives our interest in cognition and emotion. How can we integrate the imagination of the novelist with the rigor of the neurobiologist?

In the opening chapter of this book, our fictitious “Russell” argued for the utility of definitions in the analysis of emotion, only to be rebuffed by “Edison” with his emphasis on inventions. Being somewhat Russellian, let me provide here some definitions based on those in the Oxford English Dictionary (OED). While some biologists stress that everyday language often lacks the rigor needed to advance scientific research, I believe that—in the spirit of Elizabeth Costello—we have much to gain by challenging our scientific concepts in cognitive science and artificial intelligence by confronting them with the human experience that normal usage enshrines. As those familiar with the OED know, each word comes with its etymology and with a range of definitions and relevant historical quotations. What follows is an edited and no doubt biased sampling that may be useful for systematizing what has been learned in this volume.2

Emotion: 4b. Psychology. A mental “feeling” or “affection” (e.g. of pleasure or pain, desire or aversion, surprise, hope or fear, etc.), as distinguished from cognitive or volitional states of consciousness.

Motivation: b. orig. Psychol. The (conscious or unconscious) stimulus for action towards a desired goal, esp. as resulting from psychological or social factors; the factors giving purpose or direction to human or animal behaviour.

Affect: I. Mental. 1.a. The way in which one is affected or disposed; mental state, mood, feeling, desire, intention. esp. b. Inward disposition, feeling, as contrasted with external manifestation or action; intent, intention, earnest, reality. c. Feeling, desire, or appetite, as opposed to reason; passion, lust, evil-desire.

On the basis of these definitions, I see a spectrum from motivation and affect, which dispose one to act in a certain way, to emotion, which is linked to conscious feelings of “pleasure or pain, desire or aversion, surprise, hope or fear, etc.” Thus, where Fellous and LeDoux (Chapter 4), for example, are comfortable speaking of “emotional behavior” that may be unaccompanied by “emotional feelings,” I usually regard this as “motivated behavior” and reserve the term emotion for cases in which “feelings” are involved. However, I think most authors in this volume would agree that emotional feelings cannot so easily be “distinguished from cognitive or volitional states of consciousness,” as the above definition assumes. Alas, the above is the beginning of clarity, not its achievement. One can have emotion without feeling the emotion—as in “I didn't know I was angry until I over-reacted like that”—and one can certainly have feelings—as in “I feel that the color does not suit you”—that do not seem emotional. My suggestion, however, is that the potential for feeling is essential to the concept of emotion, as I will try to make more clear when I set the stage for a new look at biological evolution below. This is clearly a work in progress.

The Narrative Chronologically Rearranged and Annotated

In this section, I rearrange the narrative of the previous section, annotating it to make clear the hugely cognitive content of most of the emotional states reported, thus throwing down the gauntlet to any theory that jumps too quickly from a basic system for “motivation” to the human brain's capacity to integrate rich cognitions with subtle emotions. How do we get from a basic system of neuromodulators (Kelley, Chapter 3), reward and punishment (Rolls, Chapter 5), or behavioral fear (Fellous & LeDoux, Chapter 4) to these nuanced emotions that humans experience? This question is addressed below, under An Evolutionary Approach to Heated Appraisals. The following section will present evolutionary stories (or essays in comparative cognitive neuroscience) for the diversity of vision and for the expansion of communication to include language in the perspective offered later, under From Drives to Feelings. These insights will then serve to anchor a fresh look at the issue of robot emotions under Emotion without Biology.

Following each excerpt from the narrative, I offer a sequence of mental states and events, without teasing out the overlapping of various segments. The notation x will denote the experience of the emotional state x, by which I mean a reportable experience in which emotional feelings play an important role. The challenge (to be only partially met below) is to understand the iceberg of which such experiences are but the tip. Each sequence is followed by a few comments relevant to any later attempt to explain the underlying neural dynamics. These comments constitute an a posteriori reconstruction of what went on in my head at the time: there is no claim that the suggested interpretations are complete, and similar emotional behaviors and experiences might well have quite different causes in different circumstances.

  1. I phoned N's secretary and learned that she had forgotten to confirm the meeting with N. I was not particularly upset; we rescheduled the meeting for 4 P.M. the next day, and I proceeded to make contented use of the unexpected free time to catch up on my correspondence.

    Hope for success of the meeting → Expected meeting not confirmed → mild annoyance → meeting rescheduled; good use of free time → annoyance dissipated; contentment (1)

    Presumably, the fact that the annoyance is mild rests on cognitive computations showing that the meeting is not urgent; this mild reaction was further defused when it was found that the meeting could be rescheduled prior to any pressing deadline.

  2. On Wednesday, my secretary took the day off because her mother was ill. When I heard the news that morning, I felt slightly frustrated that some correspondence would be delayed, but equally concerned for the mother's health and had no question about the appropriateness of my secretary's action. The matter, having been accepted, had—as far as the next few hours were concerned—no further effect upon my mood.

    Absence of secretary → realization of delayed work → mild annoyance → mother's ill health; absence is appropriate → concern for mother; annoyance dissipated (2)

    This diagrams the transition in emotions as serial, when it was probably a parallel process in which annoyance and concern were simultaneously activated. What seems to unite the sequences in (1) and (2) is that the blocking of the performance of a plan yields annoyance. What determines the intensity of affect is a point to which I return in (5). What is important here is that the emotional state can continue, coloring and modifying a variety of cognitive states until it is in some way resolved. The resolution in (1) can be put down to the formulation of an alternative plan (a new appointment); the resolution in (2) is more complex, accepting that circumstances require a delay. The experience of concern is separate but helps to mitigate the annoyance by offering an acceptable reason for this particular change of plan. Moreover, I assumed my secretary would care for her mother, so this concern dissipated in turn.3

  3. At midday, I was transiently and mildly upset by the cancellation of a luncheon appointment.

    Luncheon cancelled → mild disappointment → other activity → disappointment dissipated (3)

    Why disappointment and not annoyance? The former shades toward resignation; the latter shades toward anger and the possibility of impulsive action.

    Why did this new cancellation not have the aggravative effect of the second disappointment with N, to be recounted in (5)? The notion is that an emotional state may be terminated when a plan is completed which addresses the source of that state; or it may simply dissipate when alternative plans are made. However, if some major goal has been rendered unattainable by some event, the negative emotion associated with the event may not be dissipated by embarking on other plans since they do not approach the goal. Another issue is that our memory of an episode may be more or less charged with the emotional state which was part of the event. On one occasion, one might recall the death of a loved one with real grief; on another occasion, that death is recalled without any trace of sadness. Thus, our emotional state on any one occasion can be modulated by the emotional states evoked by the more or less conscious recall of associated episodes. (I do not claim to have any theory of how memory and emotion interact or of the variability of these interactions; but see, e.g., Christianson, 1992.)

  4. I decided in midafternoon to put together a chart to discuss with N at our meeting, but the printer gave me some problems, and it was already after 4 when I left my office for the meeting, feeling somewhat flustered, but glad that I had a useful handout and pleased that I was only a few minutes late. I was looking forward to the meeting, and experienced no regret or frustration that it had been postponed the day before.

    Putting together a chart → using printer and having problems → running late for meeting → feeling flustered → but only mildly late and with a good handout → glad; pleasant anticipation (4)

    Presumably, the extra time to prepare the chart provides another reason to dispel residual annoyance (if any) from Tuesday's cancellation; but this opportunity was not realized in time to affect my mental state on Tuesday. Consequences of a situation may not be realized until long afterward. Note that the lack of regret or frustration is not part of my mental state at this point. Rather, it is part of the narrative composed later and was designed to highlight what happened next.

  5. However, when I arrived at about 4:06, N's secretary wasn't there and neither was N. Another secretary explained that N's secretary had left a message for me earlier in the day to move the meeting forward to 2:30. (I had not received the message because of my secretary's absence that day.) N was not available. Would I care to make an appointment? With a curt “No,” I turned on my heels, and stormed out of the office and back to my own.

    Absence of N and N's secretary → mild disappointment → news that message had been left that meeting had been cancelled → fury → curt response to offer to set new appointment; abrupt return to my office (5)

    It is perhaps worth considering to what extent the “over-the-top” level of annoyance here labeled “fury” was targeted rather than diffuse. It was not targeted at my secretary—the recollection of her absence served to explain why I had not received the message (it had been left on the voicemail, which she would normally relay to me), not to blame her for being away. It was the cancellation of the meeting, not the loss of the message, that annoyed me; and this fury was directed at N and his secretary. However, in their absence, the “bearer of bad tidings” received the immediate brunt of my anger. I do not think anyone unconnected with this news would have received an overt action beyond seeing the facial expression of this strong negative emotion.

    Note the immense difference from (1). This dramatic overreaction is not a response to the cancellation alone but seems to be a case of “state dependence” (Blaney, 1986). In this case, the cumulative effect of earlier negative emotional states was “to blame” (recall the earlier comment that the present emotional state can be modulated by the emotional states evoked by the recall of associated episodes).

  6. As I walked, I simultaneously felt fury at the double cancellation and shame at my own rude behavior, as well as realizing that another appointment had to be made. I felt tight and constricted. After a minute or two in my office, unable to concentrate and with my thoughts dominated by this stew of emotions, I decided to return to N's office.

    Fury → realization that behavior was inappropriate → fury mixed with shame → recognition that an apology is due to the secretary and that another appointment must be made (6)

    The emotional state of fury provides a strong drive for a set of violent behaviors. Here, we see the internal battle between the acting out of this aggression and the social imperatives of “correct” behavior. This provides another example of the competition and cooperation that is so distinctively the computing style of the brain (Arbib, 1989). Note the role here of social norms in judging the behavior to be inappropriate with the concomitant emotion of shame, providing the motivation to take a course of action (apology) that will make amends for the inappropriate behavior; note that the situation was simplified because this course of action was consistent with the (nonemotional?) recognition that another appointment had to be made. Indeed, many of our action plans are social and require interaction with others to enlist aid, invite sympathy, bargain, intimidate, threaten, provide rewards, etc.

The analysis can continue like this until the end of the narrative. This section has provided a (very restricted) data set on the interaction between perception, emotion, and action, one that brings out both the cognitive processing and the “heat” and state dependence involved in emotions. Below, I will attempt to make sense of this data set within an evolutionary framework (see An Evolutionary Approach to Heated Appraisals). To set the stage for this, in the following section we present a general evolutionary framework, then an analysis of vision and language within that framework, and finally a look at the motivational systems which ground the emotions.
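As a purely illustrative aside, the “state dependence” observed in (5) can be caricatured computationally: model residual negative affect as a slowly accumulating quantity that amplifies the reaction to each new setback unless an alternative plan resolves it. The function name, event encoding, and all numeric values below are my own inventions for exposition, not a claim about the underlying neurophysiology:

```python
def run_day(events, decay=0.5):
    """Each event is (setback_intensity, resolved).

    A resolved setback (an alternative plan was found) lets residual
    affect decay; an unresolved one lingers and amplifies the reaction
    to later setbacks -- a toy version of state dependence.
    """
    mood = 0.0        # accumulated negative affect
    reactions = []
    for intensity, resolved in events:
        reaction = intensity * (1.0 + mood)   # amplified by residual mood
        reactions.append(reaction)
        if resolved:
            mood *= decay                      # a new plan defuses the affect
        else:
            mood += intensity                  # unresolved setback lingers

    return reactions

# Tuesday's cancellation (resolved by rescheduling), the secretary's
# absence, the cancelled lunch, then the second cancellation with N:
reactions = run_day([(0.3, True), (0.2, False), (0.2, False), (0.3, False)])
```

In this toy run, the final setback has the same intrinsic intensity as the first (0.3), yet provokes a markedly stronger reaction, echoing the “over-the-top” fury of (5).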

Hughlings Jackson: An Evolutionary Framework

I now offer a general framework for the study of the evolution of brain mechanisms which will inform the following two sections. Hughlings Jackson was a 19th-century British neurologist who viewed the brain in terms of levels of increasing evolutionary complexity (Jackson, 1878–79). Influenced by the then still novel Darwinian concepts of evolution, he argued that damage to a “higher” level of the brain disinhibited “older” brain regions from controls evolved later, to reveal evolutionarily more primitive behaviors. My arguments in this chapter will be structured by my attempt (Arbib, 1989) to extract computational lessons from Jackson's views on the evolution of a system that exhibits hierarchical levels.

  • The process starts with one or more basic systems to extract useful information from a particular type of sensory input.

  • These basic systems make data available which can provide the substrate for the evolution of higher-level systems to extract new properties of the sensory input.

  • The higher-level systems then enrich the information environment of the basic systems by return pathways.

  • The basic systems can then be adjusted to exploit the new sources of information.

Thus, evolution yields not only new brain regions connected to the old but also reciprocal connections which modify those older regions. The resulting system is not unidirectional in its operation, with lower levels simply providing input to higher levels. Rather, there is a dynamic equilibrium of multiple subsystems of the evolved system that continually adjust to significant changes in each other and (more or less directly) in the world.
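The four-step scheme above can be sketched as a pair of coupled processing layers: a basic layer extracts crude information, a higher layer derives a new property from the basic layer's output, and a return pathway lets the basic layer exploit that new information. The following Python sketch is my own toy illustration of this Jacksonian loop; the class names and update rules are invented, not taken from any published model:

```python
class BasicSystem:
    """Evolutionarily older layer: extracts a crude feature from input."""
    def __init__(self):
        self.modulation = 0.0   # top-down signal from the higher level

    def process(self, stimulus):
        # Crude feature: mean stimulus intensity, biased by feedback.
        return sum(stimulus) / len(stimulus) + self.modulation


class HigherSystem:
    """Newer layer: builds on the basic output and returns feedback."""
    def process(self, basic_output, stimulus):
        # A "new property" unavailable to the basic system alone:
        # contrast between peak and (modulated) mean intensity.
        contrast = max(stimulus) - basic_output
        feedback = 0.1 * contrast   # the return pathway
        return contrast, feedback


basic, higher = BasicSystem(), HigherSystem()
stimulus = [0.2, 0.9, 0.4]
for _ in range(3):   # dynamic equilibrium: layers adjust to each other
    b = basic.process(stimulus)
    contrast, feedback = higher.process(b, stimulus)
    basic.modulation = feedback   # older layer now exploits new information
```

The point of the loop is the last line: the “older” layer is not left untouched but is itself adjusted by the reciprocal connection, so processing settles toward an equilibrium rather than flowing one way.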

The following section will offer a “Jacksonian” analysis of the evolution of brain mechanisms for vision and—via mechanisms for the visual control and recognition of hand movements—language, rooted in a brief comparison of frogs, rats, monkeys, and humans.

The usual caveats: (a) Frog → Rat → Monkey → Human is not an evolutionary sequence; rather, the first three species are used as reference points for stages in human evolution within the mammalian lineage. (b) There is no claim that human vision is inherently better than that of other species. The question is, rather, how it has adapted to our ecological niche. Human vision (to say nothing of human bodies) is ill-suited to, for instance, making a living in a pond by catching flies. We will turn to (a suitable abstraction of) the notion of “ecological niche” when we return to our discussion of the senses in which robots may have emotions.

The relevance of vision and language to our account of the evolution of emotion and its underlying brain mechanisms (see below, From Drives to Feelings) is as follows:

  1. We apply the term vision for the processing of retinal signals in all these creatures. However, vision in the frog is midbrain-dominated and specially adapted to a limited repertoire suitable for species survival, whereas mammals augment these midbrain mechanisms with a rich set of cortical mechanisms that make possible a visual repertoire which becomes increasingly open-ended as we pass from rat to monkey to human. In other words, we see an evolutionary change in vision which is qualitative in nature. Yet, the ancestral mechanisms remain an integral part of the human visual system.

  2. All these creatures have communication in the sense of vocal or other motor signals that can coordinate behavior between conspecifics. Yet, none of these communication systems forms a language in the human sense of an open-ended system for expressing novel as well as familiar meanings. The closest we can come, perhaps, is the “language” of bees, but this is limited to messages whose novelty lies in the variation of three parameters which express the quality, heading, and distance of a food source. We again see in human evolution a qualitative change but this time with a special terminology to express it—from communication as a general phenomenon among animals to language as a specific form of communication among humans.

    Note that it is not my point to say that English has made the right absolute choices. I am simply observing that we do use the term vision for the processing of focused patterns of variations in light intensity whether in fly, octopus, bird, or human. In each creature, vision is served by a network of interacting mechanisms, but this network varies from species to species in ways related to the animal's ecological niche. What humans lack in visual acuity they may make up for in visual processes for face recognition and manual control. By contrast, although some people will use the term language loosely for any form of communication, most people will understand the sense of the claim that “Humans are the only creatures that have language;” yet, none would accept the claim that “Only humans have vision” unless vision were used in the metaphorical sense of “the ability to envision explicitly alternative futures and plan accordingly.”

  3. Again, all these creatures are endowed with motivational systems—hunger, thirst, fear, sex, etc.—and we may trace the linkage of these systems with developing cortical mechanisms as we trace our quasi-evolutionary sequence. However, are we to follow the analogy with vision and refer to the processes involved as “emotion” throughout, or should we instead follow the example of communication and suggest that emotion provides in some sense a special form of motivational system? Here, I do not think there is the same consensus as there is for vision or language. I choose the latter path, suggesting the following:

    Motivation ≈ Communication

    Emotion ≈ Language

    There is this difference: whereas language seems to be restricted to humans alone, emotion seems to be less clear-cut. Following Darwin, we see expressions of emotion in dogs and monkeys, even if most people would not credit them with the capability for the emotional nuances of a Jane Austen heroine.

  4. Much of the biological discussion of emotion has turned on the distinction between “emotional behavior” and “emotional feelings.” “Emotional expression” adds another dimension, where mammalian (especially primate) facial expressions are understood to signal the emotional state of the animal but distinguished from the emotional behavior itself (e.g., a fearful expression is different from such fear behavior as fleeing or freezing). Emotional feelings are tied up with notions of consciousness, but it is well known that one may be conscious of the possible emotional overtones of a situation yet not feel emotionally involved oneself (and brain damage may leave a person incapable of emotional feelings; cf. the Chapter 3 section “A Modern Phineas Gage” in Damasio, 1994).

Below, we will discuss the notion of emotion as suitable for characterizing aspects of the behavior and inner workings of robots that share with humans neither an evolutionary history as flesh-and-blood organisms nor the facial or vocal expressions which can ground empathy. In particular, we will return to the question of ecological niches for robots and the issue of to what extent emotions may contribute to, or detract from, the success of a “species” of robots in filling their ecological niche.

Elsewhere (e.g., Arbib, 1989), I have developed a theory of schemas as functional (as distinct from structural) units in a hierarchical analysis of the brain. Extant schemas may be combined to form new schemas as coordinated control programs linking simpler (perceptual and motor) schemas to more abstract schemas which underlie thought and language more generally. The behavioral phenotype of an organism need not be linked to a localized structure of the brain but may involve subtle patterns of cooperative computation between brain regions which form a schema. Selection may thus act as much on schemas as it does on localized neural structures. Developing this view, Arbib and Liaw (1995) argued that evolution yields not only new schemas connected to the old but also reciprocal connections which modify those older schemas, linking the above Jacksonian analysis to the language of schema theory.
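To make the idea of schemas competing and cooperating slightly more concrete, here is a deliberately simplistic sketch, my own illustration rather than code from Arbib's schema theory: each schema instance carries an activation level, cooperatively coupled schemas reinforce one another, a normative bias inhibits the aggressive schema, and the most active schema captures behavior. The schema names, couplings, and numbers are invented, loosely echoing episode (6) of the narrative:

```python
# Activation levels of competing schema instances (invented values).
schemas = {"lash_out": 0.7, "apologize": 0.6, "make_appointment": 0.4}

# Cooperation: mutually consistent schemas boost each other's activation.
supports = [("apologize", "make_appointment", 0.3)]

# Competition via normative inhibition: social norms suppress aggression.
norms = {"lash_out": -0.5}

for a, b, w in supports:
    schemas[a] += w * schemas[b]
    schemas[b] += w * schemas[a]
for name, bias in norms.items():
    schemas[name] += bias

# The most active schema captures overt behavior.
winner = max(schemas, key=schemas.get)
```

In this toy run the socially sanctioned, cooperatively coupled schemas win out over the initially strongest aggressive one, a crude analog of the internal battle described in (6).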

Evolution of the Brain Mechanisms Supporting Vision and Language

Over the years, I have attempted to create a comparative computational neuroethology (i.e., a comparative computational analysis of neural mechanisms underlying animal behavior) in which the brains of humans and other creatures come to be better understood by seeing homologous mechanisms as computational variants which may be related to the different evolutionary history or ecological niche of the creatures that contain them. Arbib (2003) stresses the notion of “conceptual neural evolution” as a way of understanding complex neural mechanisms through incremental modeling. Although somewhat ad hoc, this process of adding features to a model “to see (p.345) what happens” is constrained by biological data linking behavior to anatomy and neurophysiology, though without a necessary analysis of the underlying genes. The aim is to discover relations between modules (neural circuits at some grain of resolution) that implement basic schemas (functions, as distinct from structures) in simpler species with those that underlie more elaborate schemas in other species. Clearly, the evolutionary path described in this way is not necessarily substantiated as the actual path of evolution by natural selection that shaped the brains of the species we study today but has two benefits: (1) making very complex systems more comprehensible and (2) developing hypotheses on biological evolution for genetic analysis. In 2003 I offered a conceptual evolutionary perspective on brain models for frog, rat, monkey, and human. For rat, I showed how a frog-like taxonaffordance model (Guazzelli, Corbacho, Bota, & Arbib, 1998) provides a basis for the spatial navigation mechanisms that involve the hippocampus and other brain regions. (As in Chapters by Rolls and Kelley, taxis[plural taxes] are simple movements in response to a set of key stimuli. 
Affordances (Gibson, 1966) are parameters for motor interactions signaled by sensory cues without the necessary intervention of “high-level processes” of object recognition.) For monkey, I recalled two models of neural mechanisms for visuomotor coordination. The first, for saccades, showed how interactions between the parietal and frontal cortex augment the superior colliculus, seen as the homolog of the frog tectum (Dominey & Arbib, 1992). The second, for grasping, continued the theme of parietofrontal interactions, linking parietal affordances to motor schemas in the premotor cortex (Fagg & Arbib, 1998). This further emphasized the mirror system for grasping, in which neurons are active both when the monkey executes a specific grasp and when it observes a similar grasp executed by others. The model of human brain mechanisms is based on the mirror-system hypothesis of the evolution of the language-ready brain, which sees the human Broca's area as an evolved extension of the mirror system for grasping. In the next section, I will offer a related account of the evolution of vision and then note how dexterity involves the emergence of new types of visual system, carrying forward the mirror-system hypothesis of the evolution of the language-ready brain. The section ends with a brief presentation of a theory of how human consciousness may have evolved to have greater linkages to language than animal awareness more generally. However, these sections say nothing about motivation, let alone emotion. Thus, my challenge in the section From Drives to Feelings is to use these insights to both apply and critique the evolutionary frameworks offered in Chapters 3–5 by Kelley, Rolls, and Fellous & LeDoux and thus to try to gain fresh insight into the relations between emotion and motivation and between feelings and behavior.
The mirror-system hypothesis, with its emphasis on communication, provides one example of how we may link this brain-in-the-individual (p.346) approach to the social interactions stressed by Adolphs (Chapter 2). Indeed, Jeannerod (Chapter 6) explores the possible role of mirror systems in empathy and our ability to understand the emotions of others. However, I must confess here that the current chapter will place most emphasis on the brain-in-the-individual approach and will conclude by giving a theory of robot emotions grounded in the analysis of a robot going about its tasks in some ecological niche, rather than emphasizing social interactions.

Vision Evolving

The year 1959 saw the publication of two great papers on the neurophysiology of vertebrate vision: the study by Lettvin, Maturana, McCulloch, and Pitts (1959) of feature detectors in the frog's retina and that by Hubel and Wiesel (1959) of receptive fields of neurons in the cat primary visual cortex. We will analyze the first work in relation to later studies of frog behavior (postponing a brief look at the role of motivation); we will then look at the more generic coding in the cat visual system and ponder its implications.

Action-Oriented Feature Detectors in Frog Retina

Lettvin, Maturana, McCulloch, and Pitts (1959) studied “what the frog's eye tells the frog's brain” and reported that frog ganglion cells (the output cells of the retina) come in four varieties, each providing a retinotopic map of a different feature to the tectum, the key visual region of the midbrain (the homolog, or “evolutionary cousin,” of what in mammals is often referred to as the “superior colliculus”):

  1. The boundary detectors

  2. The movement-gated, dark convex boundary detectors

  3. The moving or changing contrast detectors

  4. The dimming detectors

Indeed, axons of the cells of each group end in a separate layer of the tectum but are in registration: points in different layers which are stacked atop each other in the tectum correspond to the same small region of the retina. All this shows that the function of the frog retina is not to transmit information about the point-to-point pattern distribution of light upon it but rather to analyze this image at every point in terms of boundaries, moving curvatures, changing contrasts, and local dimming. Lettvin's group argues that the convexity detectors (operation 2 above) serve as “bug perceivers,” while operation 4 could be thought of as providing “predator detectors.” (p.347) However, this is only the first approximation in unraveling the circuits which enable the frog to tell predator from prey. Where Lettvin's group emphasized retinal fly and enemy detectors, later work emphasized tectal integration (Grüsser-Cornehls & Grüsser, 1976) and interactive processes involving the optic tectum and the thalamic pretectal region (Ewert, 1987). Cobas and Arbib (1992) defined the perceptual and motor schemas involved in prey catching and predator avoidance in frog and toad, charting how differential activity in the tectum and pretectum could play upon midbrain mechanisms to activate the appropriate motor schemas:

Prey capture: orient toward prey, advance, snap, consume

Predator avoidance: orient away from predator, advance

Note that the former includes “special-purpose” motor pattern generators, those for snapping and ingestion, while the latter uses only “general-purpose” motor pattern generators for turning and locomotion.
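The competition between these two schema assemblages can be caricatured in a few lines of Python. This is a deliberately minimal sketch, not the Cobas-Arbib model itself; the winner-take-all rule and the activity values are illustrative assumptions.

```python
# The two motor-schema sequences charted by Cobas and Arbib (1992).
PREY_CAPTURE = ["orient toward prey", "advance", "snap", "consume"]
PREDATOR_AVOIDANCE = ["orient away from predator", "advance"]

def select_motor_schemas(tectal_prey_activity, pretectal_threat_activity):
    """Toy winner-take-all: differential tectal vs. pretectal activity
    selects which sequence of motor schemas is activated."""
    if pretectal_threat_activity > tectal_prey_activity:
        return PREDATOR_AVOIDANCE
    return PREY_CAPTURE

print(select_motor_schemas(0.8, 0.3))  # small, prey-like stimulus dominates
print(select_motor_schemas(0.2, 0.9))  # looming stimulus triggers avoidance
```

Note how the special-purpose generators (snap, consume) appear only in the prey-capture sequence, while both sequences share the general-purpose generators for turning and locomotion.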

Generic Feature Detectors in Cat Primary Visual Cortex

In 1959, Hubel and Wiesel published “Receptive fields of single neurones in the cat's striate cortex.” A whole string of further papers (such as Hubel & Wiesel, 1962, 1965, 1968; Wiesel & Hubel, 1963; Hubel, Wiesel, & LeVay, 1977) extended the story from cat to monkey, placed the neurophysiology in an anatomical and developmental framework, and introduced the crucial notions of orientation and ocular dominance columns in visual cortex—a cumulative achievement honored with a Nobel Prize in 1981. Where Kuffler (1953) had characterized retinal ganglion cells in cat as on-center off-surround and off-center on-surround, Hubel and Wiesel showed that cells in the primary visual cortex of cat (and monkey) could be classified as “simple” cortical cells, responsive to edges at a specific orientation in a specific place, and “complex” cells, which respond to edges of a given orientation in varying locations. Paralleling the work of Mountcastle and Powell (1959) on somatosensory cortex, Hubel and Wiesel found that the basic unit of mammalian visual cortex is the hypercolumn, roughly 1 mm² of cortical surface by 2 mm deep. Each such hypercolumn contains columns responsive to specific orientations. The columns form an overarching retinotopic map, with fine-grained details such as orientation available as a “local tag” at each point of the map. Overlaid on this is the pattern of ocular dominance “columns” (really more like zebra stripes when viewed across the cortical surface), alternate bands each dominated by input from a single eye.

How are we to reconcile the “ecologically significant” features extracted by the frog retina with the far more generic features seen in cats and primates (p.348) at the much higher level of visual cortex? Different animals live in different environments, have different behaviors, and have different capabilities for motor behavior. As a result, the information that they need about their world varies greatly. On this basis, we may hope to better understand the problem of vision if we can come to see which aspects of visual system design converge and which differences are correlated with the differing behavioral needs of different species. The frog will snap at, or orient toward, an object moving in prey-like fashion and will avoid a large moving object. It responds to localized features of the environment—information from a large region of its visual field only affects its action when determining a barrier it must avoid when seeking prey or escaping an enemy, and this is mediated elsewhere in the brain. Thus, preprocessing at the ganglion cell level in the frog is already action-oriented. In the cat (and monkeys and humans), processing in the primary visual cortex is “action-neutral,” providing efficient encoding of natural stimuli and serving as a precursor to processes as diverse as face recognition and manual dexterity. Specializations appropriate to certain crucial tasks do occur but only further along the visual pathway.
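The contrast between position-specific simple cells and position-tolerant complex cells can be illustrated with a toy edge detector. The code below is a caricature under invented assumptions (a tiny binary image, vertical edges only); real simple and complex cells are far richer.

```python
def simple_cell(image, col):
    """Toy 'simple' cell: responds to a vertical luminance edge at one
    specific column (a specific orientation at a specific place)."""
    return sum(abs(row[col + 1] - row[col]) for row in image)

def complex_cell(image):
    """Toy 'complex' cell: pools simple cells across positions, so it
    responds to a vertical edge wherever it falls."""
    ncols = len(image[0])
    return max(simple_cell(image, c) for c in range(ncols - 1))

# A 4x4 image with a vertical dark/light edge between columns 1 and 2.
img = [[0, 0, 1, 1]] * 4

print(simple_cell(img, 1))  # strong response: edge at its preferred position
print(simple_cell(img, 2))  # no response: edge lies elsewhere
print(complex_cell(img))    # responds regardless of where the edge falls
```

The sketch also makes the text's point about action-neutrality: nothing in these responses commits the animal to snapping or fleeing; they are generic encodings for later, task-specific processing.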

The Where, What, and How of Vision

Until the late 1960s, the study of the visual system of mammals emphasized the contributions of the visual cortex, with little attention paid to midbrain mechanisms. An important move toward a more subtle understanding came with the symposium contributed to by Ingle, Schneider, Trevarthen, and Held (1967), who suggested that we should think of vision not in terms of a single pathway running through the lateral geniculate nucleus to the visual cortex (the geniculostriate pathway) but rather in terms of the interaction of two pathways: the geniculostriate system for identifying and a midbrain system, the superior colliculus or tectum, for locating (see Schneider, 1969, for relevant data on the hamster). It thus became fashionable to talk about the “two visual systems” in mammals, one for what and one for where.

However, analysis of the frog (e.g., Arbib, 1987, for a review) showed that there could be more than two visual systems even subcortically, with different parts of the brain serving different visual mechanisms. For example, prey catching by the frog seems to rely on the tectum for processing of visual cues. The pretectum seems necessary for the tectum to play its role in the avoidance of visual threat, as well as in mediating the recognition of barriers. The role of the tectum in directing whole-body movements in the frog is analogous to the role of the superior colliculus in directing eye movements in the cat and monkey. When humans without primary visual cortex are asked “Am I moving my left or right hand?” they say “I can't see” but, (p.349) asked to make a guess, will point in the direction of the moving hand. They can catch a ball even though they believe they cannot see it. This phenomenon is referred to as blindsight (Weiskrantz, Warrington, Sanders, & Marshall, 1974; see Stoerig, 2001, for a review and Humphrey, 1970, for a study linking frog and monkey). The midbrain visual system is thus quite powerful but not connected to consciousness. Indeed, when a normal person catches a ball, he or she is usually aware of seeing the ball and of reaching out to catch it but certainly not of the processes which translate retinal stimulation into muscle contraction, so most neural net activity is clearly unconscious. The lesson is that even schemas that we think of as normally under conscious control can in fact proceed without our being conscious of their activity.

Recent research has extended the what and where dichotomy to a variety of cortical systems. Studies of the visual system of monkeys led Ungerleider and Mishkin (1982) to distinguish inferotemporal mechanisms for object recognition (what) from parietal mechanisms for localizing objects (where). Goodale, Milner, Jakobson, and Carey (1991) studied a human patient (D. F.) who had developed profound visual form agnosia following a bilateral lesion of the occipitotemporal cortex. The pathways from the occipital lobe toward the parietal lobe appeared to be intact. When the patient was asked to indicate the width of any one of a set of blocks either verbally or by means of her index finger and thumb, her finger separation bore no relationship to the dimensions of the object and showed considerable trial-to-trial variability. Yet, when she was asked simply to reach out and pick up the block, the peak aperture between her index finger and thumb (prior to contact with the object) changed systematically with the width of the object, as in normal controls. A similar dissociation was seen in her responses to the orientation of stimuli. In other words, D. F. could preshape her hand accurately, even though she appeared to have no conscious appreciation (either verbal or by pantomime) of the visual parameters that guided the preshape. With Goodale and Milner (1992), then, we may rename the where pathway as the how pathway, stressing that it extracts a variety of affordances relevant to action (recall that affordances are parameters for motor interactions extracted from sensory cues), not just object location.

The Many Systems of Vision

This brief tour of the neural mechanisms of vertebrate vision, and a great body of related modeling and empirical data, supports the enunciation of a general property of vertebrate neural control: a multiplicity of different representations must be linked into an integrated whole. However, this may be (p.350) mediated by distributed processes of competition and cooperation. There need be no one place in the brain where an integrated representation of space plays the sole executive role in linking perception of the current environment to action.

Dean, Redgrave, and Westby (1989; see also Dean & Redgrave, 1989) used a study of the rat informed by findings from the study of the frog to provide an important bridge between frog and monkey. Where most research on the superior colliculus of cat and monkey focuses on its role in saccadic eye movements—an approach behavior for the eyes—Dean et al. looked at the rat's own movements and found two response systems in the superior colliculus which were comparable with the approach and avoidance systems studied in the frog and toad. We thus see the transition from having the superior colliculus itself commit the animal to a course of action (frog and rat) to having it more often (but not always) relinquish that role and instead direct attention to information for use by cortical mechanisms in committing the organism to action (e.g., cat, monkey, and human). We now turn to one system for committing the organism to action, that for grasping, and then present an evolutionary hypothesis which links cerebral mechanisms for grasping to those that support language.

The Mirror System and the Evolution of Language

Having looked at vision from a very general perspective, I now focus on two very specific visual systems that are especially well developed in primates: the system that recognizes visual affordances for grasping and the system that recognizes grasping actions made by others. I shall then argue that these systems provide the key to a system that seems specifically human: the brain mechanisms that support language.

Brain Mechanisms for Grasping

In macaque monkeys, parietal area AIP (the anterior region of the intraparietal sulcus; Taira et al., 1990) and ventral premotor area F5 (Rizzolatti et al., 1988) anchor the cortical circuit which transforms visual information on intrinsic properties of an object into hand movements for grasping it. The AIP processes visual information on objects to extract affordances (grasp parameters) relevant to the control of hand movements and is reciprocally connected with the so-called canonical neurons of F5. Discharge in most grasp-related F5 neurons correlates with an action rather than with the (p.351) individual movements that form it so that one may relate F5 neurons to various motor schemas corresponding to the action associated with their discharge.

The FARS model (named for Fagg, Arbib, Rizzolatti & Sakata; Fagg & Arbib, 1998) provides a computational account centered on the pathway:

AIP (object affordances) → F5canonical (abstract motor schemas) → F1 (motor cortex instructions to lower motor areas and motor neurons)

Figure 12.1 gives a view of “FARS Modificato,” the FARS model updated on the basis of suggestions by Rizzolatti and Luppino (2003), based on the neuroanatomical data reviewed by Rizzolatti and Luppino (2001), so that information on object semantics and the goals of the individual influences AIP rather than F5 neurons, as was the case in Fagg and Arbib (1998). The dorsal stream via the AIP does not know what the object is; it can only see the object as a set of possible affordances (it lies on the how pathway). The ventral stream (from primary visual cortex to inferotemporal cortex), by contrast, is able to recognize what the object is. This information is passed


Figure 12.1. A reconceptualization of the FARS model (Fagg & Arbib, 1998), in which the primary influence of the prefrontal cortex (PFC) on the selection of affordances is on the parietal cortex (AIP, anterior intraparietal sulcus) rather than the premotor cortex (hand area F5). This diagram includes neither the circuitry encoding a sequence, possibly the part of the supplementary motor area called the pre-SMA (Rizzolatti, Luppino, & Matelli, 1998), nor the administration of the sequence (inhibiting extraneous actions, while priming imminent actions) by the basal ganglia.

(p.352) to the prefrontal cortex, which can then, on the basis of the current goals of the organism and the recognition of the nature of the object, bias the AIP to choose the affordance appropriate to the task at hand. Figure 12.1 gives only a partial view of the FARS model, which also provides mechanisms for sequencing actions. It segregates the F5 circuitry, which encodes unit actions, from the circuitry encoding a sequence, possibly the part of the supplementary motor area called “pre-SMA” (Rizzolatti, Luppino, & Matelli, 1998). The administration of the sequence (inhibiting extraneous actions, while priming imminent actions) is then carried out by the basal ganglia (Bischoff-Grethe, Crowley, & Arbib, 2003).
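The division of labor just described can be caricatured in a few lines of code. The sketch below is an illustrative assumption, not the FARS implementation: visually extracted affordances carry salience values, and a task-dependent prefrontal bias multiplies them before a winner-take-all choice in AIP.

```python
def select_affordance(affordances, pfc_bias):
    """Toy AIP: choose among visually extracted grasp affordances, with
    each affordance's salience scaled by a task-dependent PFC bias
    (absent entries default to a neutral bias of 1.0)."""
    return max(affordances,
               key=lambda a: affordances[a] * pfc_bias.get(a, 1.0))

# Invented example: a mug affords both a precision grip (on the handle)
# and a power grasp (around the body); the power grasp is visually salient.
affordances = {"precision grip": 0.6, "power grasp": 0.7}

# A "drink" goal biases toward the handle; with no goal bias, vision wins.
print(select_affordance(affordances, {"precision grip": 1.5}))  # precision grip
print(select_affordance(affordances, {}))                       # power grasp
```

The dorsal stream alone would always pick the most salient affordance; only the ventral-prefrontal route lets the organism's goals override it.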

Bringing in the Mirror System

Further study revealed a class of F5 neurons that discharged not only when the monkey grasped or manipulated objects but also when the monkey observed the experimenter make a gesture similar to the one that, when actively performed by the monkey, involved activity of the neuron (Rizzolatti, Fadiga, Gallese, & Fogassi, 1995). Neurons with this property are called “mirror neurons.” The majority of mirror neurons are selective for one type of action, and for almost all mirror neurons there is a link between the effective observed movement and the effective executed movement.

Two positron emission tomography (PET) experiments (Rizzolatti et al., 1996; Grafton, Arbib, Fadiga, & Rizzolatti, 1996) were then designed to seek mirror systems for grasping in humans. Grasp observation significantly activated the superior temporal sulcus (STS), the inferior parietal lobule, and the inferior frontal gyrus (area 45). All activations were in the left hemisphere. The last area is of especial interest—areas 44 and 45 in the left hemisphere of the human brain constitute Broca's area, a major component of the language mechanisms. Indeed, F5 is generally considered to be the homolog of Broca's area.

And on to Language

The finding that human Broca's area contains a mirror system for grasping led us (Arbib & Rizzolatti, 1997; Rizzolatti and Arbib, 1998) to explore the hypothesis that the mirror system provided the basis for the evolution of human language via seven stages:

  1. Grasping.

  2. A mirror system for grasping.

  3. (p.353) A “simple” imitation system: we hypothesize that brain mechanisms supporting a simple imitation system—imitation of novel object-directed actions through repeated exposure—for grasping developed in the 15 million-year evolution from the common ancestor of monkeys and apes to the common ancestor of apes and humans.

  4. A “complex” imitation system: we hypothesize that brain mechanisms supporting a complex imitation system—acquiring (longer) novel sequences of more abstract actions in a single trial—developed in the 5 million-year evolution from the common ancestor of apes and humans along the hominid line that led, in particular, to Homo sapiens.

  5. Protosign, a manual-based communication system, resulting from the freeing of action from praxis to be used in pantomime and then in manual communication more generally.

  6. Protospeech, a vocal-based communication system exploiting the brain mechanisms that evolved to support protosign.

  7. Language. Arbib (2002) argues that stages 6 and 7 are separate, characterizing protospeech as being the open-ended production and perception of sequences of vocal gestures, without implying that these sequences have the syntax and semantics adequate to constitute a language. But the stages may be interleaved.

Nonhuman primates have a call system and orofacial gestures expressive of a limited range of emotional and related social indicators. However, we do not regard primate calls as the direct precursor of speech. Combinatorial properties for the openness of communication are virtually absent in basic primate calls, even though individual calls may be graded. Moreover, the neural substrate for primate calls is in a region of the cingulate cortex distinct from F5. The mirror-system hypothesis offers detailed reasons why Broca's area—as the homologue of F5—rather than the area already involved in vocalization, provided the evolutionary substrate for language.

Consciousness, Briefly

We have now established that vision is no single faculty but embraces a wide variety of capabilities, some mediated by subcortical systems, others involving cooperation between these and other, more highly evolved systems in the cerebral cortex. The evolution of manual dexterity went hand in hand [!] with the evolution of a dorsal cortical pathway dedicated to extracting the visual affordances appropriate to that dexterity and a ventral cortical (p.354) pathway of much more general capability, able to recognize objects and relationships in a fashion that enables the prefrontal cortex to support the planning of action, thus determining which affordances to exploit in the current situation. The mirror-system hypothesis suggests how the recognition of manual actions might have paved the way for cumulative evolutionary changes in body and brain to yield early humans with the capability for complex imitation, protosign, and protospeech.

I would argue that we are conscious in a fully human sense only because we have language—i.e., that as awareness piggybacks on all manner of neural functions, so too must it piggyback on language, thus reaching a subtlety and complexity that would otherwise be impossible. However, I strongly deny that consciousness is merely a function of language. For example, one can be aware of the shape and shading and coloration of a face in great subtlety and be totally unable to put one's vivid, conscious perception of that face into words. Moreover, I view consciousness as a system function that involves networks including, but not necessarily limited to, the cerebral cortex and that as the cerebral cortex evolves, so too does consciousness.

Arbib and Hesse (1986; Arbib, 1985) suggest that the key transition from the limited set of vocalizations used in communication by, say, vervet monkeys to the richness of human language came with a migration in time from an execution/observation matching system, enabling an individual to recognize the action (as distinct from the mere movement) that another individual is making, to the individual becoming able to pantomime “this is the action I am about to take” (see Arbib, 2001, for an exposition of the Arbib-Hesse theory within the mirror-system framework.) Arbib and Hesse emphasize the changes within the individual brain made possible by the availability of a “précis”—a gesturable representation—of intended future movements (as distinct from current movements). They use the term communication plexus for the circuits involved in generating this representation. The Jacksonian element of their analysis is that the evolution of the communication plexus provides an environment for the further evolution of older systems. They suggest that once the brain has such a communication plexus, a new process of evolution begins whereby the précis comes to serve not only as a basis for communication between the members of a group but also as a resource for planning and coordination within the brain itself. This communication plexus thus evolves a crucial role in schema coordination. The thesis is that it is the activity of this coevolved process that constitutes consciousness. As such, it will progress in richness along with the increased richness of communication that culminates as language in the human line. Since lower-level schema activity can often proceed successfully without this highest-level coordination, consciousness may sometimes be active, if active at all, as a monitor (p.355) rather than as a director of action. In other cases, the précis of schema activity plays the crucial role in determining the future course of schema activity and, thus, of action.

From Drives to Feelings

Our conceptual evolutionary analysis has allowed us to tease apart a variety of visual mechanisms and relate them to a range of behaviors, from the feeding and fleeing of the frog to the visual control of hand movements in monkeys and humans. We briefly examined accounts of the evolution of language and a particularly human type of consciousness. We saw that this type of consciousness builds upon a more general form of consciousness—awareness of both internal and external states—that we did not explain but for which we made a crucial observation: activities in regions of the cerebral cortex can differ in their access to awareness. However, although we looked at visual processing involved in what some might label two “emotional behaviors” in the frog—feeding and fleeing—we did not explicitly discuss either motivation or emotion, beyond suggesting that nonhumans may be aware of subtle social cues or the difference between feeling maternal and feeling enraged and noting that nonhuman primates have a call system and orofacial gestures expressive of a limited range of emotional and related social indicators. The time has come to put these insights to work. As stated above, some would use the term emotion to cover the whole range of motivated behavior, whereas others (myself included) stress the emergent subtlety of emotions. The following section will briefly review an account of motivated behavior in toads and rats, then use this basis together with the insights from the Jacksonian analysis above to offer an integrated perspective on the evolutionary insights provided by Kelley, Rolls, and Fellous & LeDoux in Chapters 3–5.

Basic Models of Motivation

Karl Pribram (1960) has quipped that the limbic system is responsible for the “four Fs”: Feeding, Fighting, Fleeing, and Reproduction. It is interesting that three of the four have a strong social component. In any case, the notion to be developed in this section is that the animal comes with a set of basic drives—for hunger, thirst, sex, self-preservation, etc.—and that these provide the basic motor, or motivation, for behavior. This will then ground our subsequent discussion of motivation.

(p.356) The Motivated Toad

Earlier, we examined the basic, overlapping circuitry for fleeing and feeding in the toad but did not show how this circuitry could be modulated by motivational systems. Toads can be conditioned to treat objects that normally elicit escape behavior as prey. Toads which are allowed to eat mealworms out of an experimenter's hand can be conditioned to respond to the moving hand alone (Brzoska & Schneider, 1978). Heatwole and Heatwole (1968) showed that the upper size threshold for acceptable prey increases with long-term food deprivation while the lower size threshold remains constant. In spring, prey-catching behavior decreases or fails to occur. Indeed, prey recognition is “exchanged” for female recognition, introducing mating behavior. Moving females release orienting, approaching, and clasping behaviors in the male (Heusser, 1960; Kondrashev, 1976). In many species, males will attempt to clasp practically any moving object including other males during mating season (Wells, 1977).

Betts (1989) carried forward the modeling of tectal-pretectal interactions (reviewed in Arbib, 1987) to include the effects of motivation. The basic idea is that perceptual schemas can be modulated by broadcast signals for drive levels such as those for feeding or mating, thus shifting the balance of behavior. For example, standard models of prey catching address the finding that ablation of the pretectum results in disinhibition of prey catching, with animals snapping at objects much larger than normal prey (Ewert, 1984); modulating the level of pretectal inhibition can thus shift the balance between feeding and fleeing. Betts (1989) further suggested parallels between the effects of pretectal ablation and the conditioning results and changes that occur during the mating season. T5 neurons in the tectum have a variety of responses, including those which we classify as prey recognition. Betts modeled T5 neuron function by distinguishing the T5 base of T5 cells within the tectum from the pretectal inhibition which modulates it. A T5 neuron then has the potential to be, for example, either a prey or mate feature detector depending on this inhibition. Betts suggests that the subclasses of T5 neurons with different detector properties should be regarded as more or less stable states of a modulated system capable of adaptability and changeability.
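Betts's point can be made concrete with a toy version of the modulation. All parameters below are invented for illustration (the actual model is a neural network, not a formula): the T5 response is tectal excitation minus a pretectal inhibition that targets large, non-prey-like stimuli and is itself weakened by a seasonal mating drive.

```python
def t5_response(stimulus_size, mating_drive, prey_size_limit=2.0):
    """Toy T5 cell: larger stimuli excite more, but pretectal inhibition
    vetoes stimuli beyond prey-like dimensions unless the mating drive
    (0.0 to 1.0) reduces that inhibition. All numbers are illustrative."""
    excitation = stimulus_size
    inhibition = 3.0 * max(0.0, stimulus_size - prey_size_limit) * (1.0 - mating_drive)
    return max(0.0, excitation - inhibition)

print(t5_response(1.0, 0.0) > 0)  # small prey-like stimulus: response
print(t5_response(4.0, 0.0) > 0)  # large stimulus, no mating drive: vetoed
print(t5_response(4.0, 0.9) > 0)  # mating season: same stimulus now effective
```

The same cell thus behaves as a prey detector or a mate detector depending only on the modulatory state, which is exactly the sense in which the detector properties are "more or less stable states of a modulated system."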

In summary, the frog has basic systems for feeding, fleeing, and mating; but we can distinguish circuitry that provides basic “subroutines” from circuitry that uses motivational factors to bias their deployment. This separates the motivation from the behavior. From my point of view, fear is not a behavior, such as freezing, but rather a process that biases the system to be more likely to emit such a behavior. Freezing is one of the many possible behaviors that express fear. Which one is chosen depends on learning, species, and all kinds of other bias. Part of the task of a model of emotion and (p.357) fear is to offer an account of those biases which may be linked to drive states. To link this to our own experience, we may eat because we are hungry or simply because food is placed in front of us. Hunger is what puts us “on alert” to get food, not the behavior of eating the food, though food itself has an incentive value which can increase the likelihood of eating. It is not the reaction to a stimulus but the bias on the way in which we will react to stimuli, as well as the bias on the stimuli we will seek out, paying attention to cues on how to locate food that might otherwise have been ignored.

Before going further, recall our observation that prey capture in the frog includes special-purpose motor pattern generators, those for snapping and ingestion, while predator avoidance uses only general-purpose motor pattern generators for turning and locomotion. Our bodies have complex systems for chewing, digestion, and excretion specialized for feeding, whereas grasping and manipulation are par excellence general-purpose, playing vital roles in feeding, fighting, and expressions of tenderness, to name just a few. We must thus note an increasing dissociation between motivation and the choice of a specific motor system. This is quite orthogonal to my view of emotion as an evolutionary emergence but serves simply to stress that there is no easy correlation between a motivation system and the class of effectors used for the associated motivated behaviors. In any case, a crucial aspect of primate evolution that may be as intimately linked to distinguishing motivation from emotion is the ability to plan behaviors on the basis of future possibilities rather than only in terms of present contingencies. The frog lives in the present, with very little predictive ability and, therefore, only a short-term action-perception cycle. The long-term (from knowing when to refuel to the day-night cycle to the mating season) is handled for the most part by bodily systems and specialized neural systems closely coupled to them. As we compare frog to rat to cat to monkey, the ability to link current decisions to past experiences and future possibilities becomes more explicit and more diverse as the role of the cortex expands.

The Driven Rat

Arbib and Lieblich (1977; see also Lieblich & Arbib, 1982) posited a set {d1, d2, … dk} of discrete drives which control the animal's behavior. Typical drives include appetitive drives, like thirst, hunger, and sex, and aversive drives, like fear. At time t, each drive d has a value d(t), 0 ≤ d(t) ≤ dmax. They say a drive increases if it changes toward dmax and is reduced if it changes toward 0. Their approach seems consistent with the scheme describing the temporal organization of motivated behavior elaborated by Swanson and Mogenson (1981) and recently reviewed by Watts (2003) but with an (p.358) explicitly formal representation of the underlying dynamics to explain how motivation affects the way in which an animal will move around its environment. For Watts, interactions of sensory information, arousal state, and interoceptive information determine the value of various drives; the integration of competing drives, presumably via complex sets of projections from the hypothalamus, then determines which series of actions will generate the most appropriate procurement (or appetitive) phase, where the goal object which will reduce the drive intensity is actively sought. The motor events expressed during the procurement phase involve foraging behavior, are individualized for the particular situation, and can be quite complex. When the goal object has been located, the subsequent consummatory phase involves more stereotypic rhythmic movements—licking, chewing, copulating, etc.—that allow the animal to interact directly with the goal object. Watts also notes that interactions between different drive networks, particularly in the hypothalamus, are of paramount importance. For example, the effects of starvation are not limited to increasing the drive to eat but also include reduced reproductive capacity. Similarly, dehydration leads to severe anorexia as well as increased drive to drink (Watts, 2001).
This cross-behavioral coordination is part of the mechanism that selects the drive with the highest priority and most likely involves hormonal modulation acting together with the divergent neuroanatomical outputs from individual drive networks. As Watts notes, the notion of drive, and the idea that particular behaviors are selected to reduce the level of specific drive states, have been very influential in neuroscience but remain somewhat controversial.
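The Arbib-Lieblich drive formalism lends itself to a compact computational sketch. The following Python fragment is purely illustrative (the class, threshold, and coupling constants are my own hypothetical choices, not the published model): drive values are clipped to [0, dmax], consummatory behavior reduces a drive toward 0, and a Watts-style cross-drive interaction (severe dehydration suppressing hunger) biases which drive wins the competition.

```python
# Hypothetical sketch of discrete drives d(t) with 0 <= d(t) <= DMAX,
# in the spirit of Arbib & Lieblich (1977); constants are illustrative.

DMAX = 1.0

class DriveSystem:
    def __init__(self, drives):
        # drives: dict mapping drive name -> initial value, clipped to [0, DMAX]
        self.d = {name: min(max(v, 0.0), DMAX) for name, v in drives.items()}

    def increase(self, name, amount):
        """A drive 'increases' as it moves toward DMAX."""
        self.d[name] = min(self.d[name] + amount, DMAX)

    def reduce(self, name, amount):
        """A drive is 'reduced' toward 0, as in the consummatory phase."""
        self.d[name] = max(self.d[name] - amount, 0.0)

    def effective_drives(self):
        # Cross-drive coupling (hypothetical numbers): severe thirst
        # suppresses hunger, modeling dehydration-induced anorexia.
        eff = dict(self.d)
        if eff.get("thirst", 0.0) > 0.8:
            eff["hunger"] = eff.get("hunger", 0.0) * 0.2
        return eff

    def select(self):
        """Choose the highest-priority drive to launch the procurement phase."""
        eff = self.effective_drives()
        return max(eff, key=eff.get)
```

For example, an animal with hunger 0.7 and thirst 0.9 would pursue water first, since the dehydration coupling suppresses the competing hunger drive; once thirst is reduced, hunger regains priority.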

Arbib and Lieblich (1977) represented the animal's knowledge of its world in a structure they called the “world graph” (WG), a set of nodes connected by a set of edges, where the nodes represent places or situations recognized by the animal, and the links represent ways of moving from one situation to another. A crucial notion is that a place encountered in different circumstances may be represented by multiple nodes but that these nodes may be merged when the similarity between these circumstances is recognized. They model the process whereby the animal decides where to move next, on the basis of its current drive state (hunger, thirst, fear, etc.) and how the WG may itself be updated in the process. The model includes the effects of incentive (e.g., sight or smell of food) as well as drives (e.g., hunger) to show how a route, possibly of many steps, that leads to the desired goal may be chosen and how short cuts may be found. Perhaps the most important feature of their model is their description of what drive-related information is appended to the nodes of the WG and how the WG changes over time. They postulate that each node x of WG(t) is labeled with the vector [R(d1, x, t) … R(dk, x, t)] of the animal's current expectations at time t about the drive-related properties of the place or situation P(x) represented (p.359) by x. The changes in the WG are of two kinds: changes in the R-values labeling the nodes (this finds echoes in the theory of reinforcement learning: Sutton & Barto, 1998) and actual structural changes in the graph.
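The WG machinery can likewise be sketched in a few lines of illustrative Python. Everything here (the names, the learning rate, the breadth-first planner) is a hypothetical simplification, not the published implementation: nodes carry expectation vectors R(d, x, t), edges represent ways of moving between situations, route planning seeks the node with the highest expectation for the active drive, and nudging the R-values toward experienced outcomes echoes reinforcement learning.

```python
# Hypothetical world-graph sketch in the spirit of Arbib & Lieblich (1977):
# nodes labeled with drive expectations R(d, x), edges as moves between
# situations, and multi-step route planning driven by the active drive.

from collections import deque

class WorldGraph:
    def __init__(self):
        self.R = {}       # node -> {drive: expected drive reduction}
        self.edges = {}   # node -> list of neighboring nodes

    def add_node(self, node):
        self.R[node] = {}
        self.edges.setdefault(node, [])

    def add_edge(self, a, b):
        # a link represents one way of moving from situation a to situation b
        self.edges[a].append(b)

    def learn(self, node, drive, outcome, rate=0.5):
        """Move R(drive, node) toward the experienced outcome
        (an update in the spirit of reinforcement learning)."""
        old = self.R[node].get(drive, 0.0)
        self.R[node][drive] = old + rate * (outcome - old)

    def plan(self, start, drive):
        """Breadth-first search for a shortest route to the node with the
        highest expectation R(drive, x) for the currently active drive."""
        goal = max(self.R, key=lambda x: self.R[x].get(drive, 0.0))
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None
```

A thirsty animal at the nest would thus plan the multi-step route to whichever node its experience has labeled with the highest expectation of reducing thirst, even if the water itself is not currently perceptible.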

More recently, we have integrated the WG model with a model of how a rat can still exhibit spatially guided behavior when its hippocampus is lesioned (Guazzelli, Corbacho, Bota, & Arbib, 1998; Guazzelli, Bota, & Arbib, 2001). Figure 12.2 can be seen as the interaction of the following subsystems:

  1. The path Sensory Inputs → Parietal Affordances → Premotor Action Selection → Motor Outputs is modulated by drive state (hypothalamus, nucleus accumbens). These interactions are an example of the taxon affordance model (TAM) (Guazzelli, Corbacho, Bota, & Arbib, 1998).

  2. Motor outputs affect goal objects in ways that have consequences (gaining food, getting injured, etc.) for the organism. These can affect the internal state of brain and body, updating the drive state. This affects the modulation of (1) in two ways: (p.360) directly, and by providing positive or negative reinforcement for learning processes that will affect perceptual, motor, and other schemas. Figure 12.1 shows the latter in terms of the action of dopamine neurons providing the reinforcement for an actor-critic architecture for reinforcement learning in the basal ganglia (Fellous & Suri, 2003; Prescott, Gurney, & Redgrave, 2003).

    Figure 12.2. The taxon affordance model-world graph (TAM-WG) has as its basis a system, TAM, for exploiting affordances. The path Sensory Inputs → Parietal Affordances → Premotor Action Selection → Motor Outputs is modulated by drive state. This is elaborated by the WG model, which can use a cognitive map, mediated by interactions between hippocampus and prefrontal cortex, to plan paths to targets which are not currently perceptible.

  3. The hippocampus provides a representation of context, most notably by activation of place cells encoding the animal's place in space. Dynamic remapping is the process whereby this representation may be updated on the basis of an efferent copy of the animal's actions even when sensory data on the new context or location may be missing.

  4. The hippocampal representation is now seen in greater generality as encoding any situation that is linked to a node in the animal's WG. Thus, the hippocampus is seen as providing the “you are here” function; it must be integrated with the WG to provide a full cognitive map linking current position with current goals to determine a path through the world which will achieve one or more of them. Thus, premotor action selection becomes embedded within the prefrontal planning behavior associated with the WG, planning which depends crucially on representations of goal states and internal states (inputs not shown) as well as a combination of the current situation and the available affordances (Guazzelli, Bota, & Arbib, 2001).

An Evolutionary Approach to Heated Appraisals

In item (3) of the analysis of my emotional narrative (see above, “The Narrative Chronologically Rearranged and Annotated”), I stated the following:

an emotional state may be “terminated” when a plan is completed which addresses the source of that state; or it may simply dissipate when alternative plans are made. However, if some major goal has been rendered unattainable by some event, the negative emotion associated with the event may not be dissipated by embarking on other plans, since they do not approach the goal.

This analysis, and others elsewhere in the above section, seems at first to be very much in the spirit of appraisal theories of emotion (e.g., Ortony, Clore, & Collins, 1988). However, while Ortony, Clore, and Collins admit that visceral sensations and facial expressions set emotions apart from other psychological states, they exclude these from their study (Arbib, 1992). They (p.361) insist that the origins of emotional states are based on the cognitive construal of events, with cognition being presupposed by the physiological, behavioral, and expressive aspects of emotion. However, is the addition of a cognitive evaluation, unlinked to a physiological structure, enough to convert information processing into emotion? While I find great value in the attempt of Ortony, Clore, and Collins to see how different cognitive structures may relate to different emotional states, it gives no sense of the “heat” of emotion that I tried to convey in my short narrative. I argue here that the “heat” is added to the appraisal because the cerebral cortex is linked to basic motivational systems, while human emotions are so much more varied and subtle than mere hunger or thirst because basic motivational systems are integrated with cortical systems that can provide varied appraisals.

My approach will be to integrate insights from chapters by Kelley, Rolls, and Fellous & LeDoux with the evolutionary perspective provided above while confronting the personal data set of my emotional narrative.

Behavioral Control Columns

To start, we must note that a full analysis of motivated behavior must include not only the somatomotor behavior (e.g., feeding and fleeing; other forms relevant to the study of motivation-related behavior include orofacial responses and defensive and mating activities) but also autonomic output (e.g., heart rate and blood pressure) and visceroendocrine output (cortisol, adrenaline, release of sex hormones). In general, behavior will combine effects of all three kinds. Kelley (Chapter 3) places special emphasis on Swanson's (2000) notion of the behavioral control column (Fig. 12.3). This is a column of nuclei arrayed along the brain stem. Swanson proposes that very specific and highly interconnected sets of nuclei in the hypothalamus are devoted to the elaboration and control of specific behaviors necessary for survival: spontaneous locomotion, exploration, ingestive, defensive, and reproductive behaviors. Animals with chronic transections above the hypothalamus can more or less eat, drink, reproduce, and show defensive behaviors, whereas if the brain is transected below the hypothalamus, the animal displays only fragments of these behaviors, enabled by motor pattern generators in the brain stem. As Kelley notes, many instances of motivated behavior—eating, drinking, grooming, attacking, sleeping, maternal behavior, hoarding, copulating—have been evoked by direct electrical or chemical stimulation of the hypothalamus.

The behavioral control column contains a rostral and a more caudal segment. The former contains nuclei involved in ingestive and social (reproductive and defensive) behaviors such as sexually dimorphic behaviors, defensive (p.362)


Figure 12.3. Major features of cerebral hemisphere regulation of motivated behavior, according to Swanson, as seen on a flatmap of the rat central nervous system. (Left) The neuroendocrine motor zone, shown in black, and three subgroups of hypothalamic nuclei: the periventricular region (PR) most centrally, the medial nuclei (MN), and the lateral zone (LZ). The PR contains a visceromotor pattern generator network, and the medial nuclei (MN) form the rostral end of the behavior control column. In addition to this longitudinal division, the hypothalamus can be divided into four transverse regions based on the characteristic medial nucleus residing within it: preoptic (pro), supraoptic or anterior (suo), tuberal (tub), and mammillary (mam). (Center) An overview of the behavior control column. Almost all nuclei in this column generate a dual, typically branched projection, descending to the motor system and ascending to thalamocortical loops: AHN, anterior hypothalamic nucleus; MAM, mammillary body; MPN, medial preoptic nucleus (lateral part in particular); PMdv, premammillary nuclei, dorsal and ventral; PVHd, descending division of paraventricular hypothalamic nucleus; SC, superior colliculus, deeper layers; SNr, reticular substantia nigra; TH, dorsal thalamus; TU, tuberal nucleus; VMH, ventromedial hypothalamic nucleus; VTA, ventral tegmental area. (Right) Triple cascading projection from the cerebral hemispheres to the brain-stem motor system. This minimal or prototypical circuit element consists of a glutamatergic (GLU) projection from layer 5 pyramidal neurons of the isocortex (or equivalent pyramidal neurons in allocortex), with a glutamatergic collateral to the striatum. This dual projection appears to be excitatory (e, +). The striatum then generates a γ-aminobutyric acid-ergic (GABAergic) projection to the motor system with a GABAergic collateral to the pallidum. This dual striatal projection appears to be inhibitory (i, –). Finally, the pallidum generates a GABAergic projection to the brain-stem motor system, with a GABAergic collateral to the dorsal thalamus. This dual pallidal projection can be viewed as disinhibitory (d, –) because it is inhibited by the striatal input. (Adapted from Swanson, 2000, Figs. 8, 10, 14, respectively.)

(p.363) responses, or controls for food and water intake. The more caudal segment of the column is involved in general foraging/exploratory behaviors.

Kelley notes that the lateral hypothalamus is not specifically included in Swanson's behavioral control column scheme but probably plays a critical role in arousal, control of behavioral state, and reward-seeking behavior. It includes what Olds (1977) referred to as the “pleasure center” because rats will press a lever thousands of times per hour to deliver electrical stimulation to this region.

We may distinguish drive signals—energy deficits, osmotic imbalance, visceral cues (including pain, temperature, and heart rate), metabolic and humoral information, etc.—from external cues about objects and other animals in the world. Following Risold, Thompson, & Swanson (1997), Kelley reviews the many paths whereby both kinds of information reach the hypothalamus, with much specificity as to which kinds of information affect which nuclei.

Amygdala, Orbitofrontal Cortex, and their Friends

Kelley notes that the amygdala, through its role in reward valuation and learning, particularly in its lateral and basolateral aspects (which are intimately connected with the frontotemporal association cortex), can influence and perhaps bias lateral hypothalamic output, citing literature on ingestive behavior which complements the emphasis of Fellous and LeDoux (Chapter 4) on the role of certain nuclei of the amygdala in fear behavior.

Figure 12.4 summarizes the evolutionary perspective of Fellous and LeDoux on fearful behavior. The role of the hippocampus in conditioning to contextual cues can be usefully compared to its “you are here” function in the Figure 12.2 model of motivated spatial behavior. The crucial element from an evolutionary point of view is the set of reciprocal interactions between the amygdala and cerebral cortex: the amygdala can influence cortical areas by way of feedback either from proprioceptive or visceral signals or hormones, via projections to various arousal networks (these are discussed extensively in Kelley's Chapter 3), and through interaction with the medial prefrontal cortex. This area has widespread influences on cognition and behavior and sends connections to several amygdala regions, allowing cognitive functions organized in prefrontal regions to regulate the amygdala and its fear reactions.

This, for fear behavior, provides but one example of the major principle for organization of the behavioral control columns—namely, that they project massively back to the cerebral cortex/voluntary control system directly or indirectly via the dorsal thalamus (Risold, Thompson, & Swanson, 1997; (p.364)


Figure 12.4. (a) Differential paths through the amygdala for fear conditioning to an auditory stimulus and to contextual cues. (b) Interaction of the amygdala with cortical areas allows cognitive functions organized in prefrontal regions to regulate the amygdala and its fear reactions. LA, lateral nucleus of amygdala; CE, central nucleus of amygdala; B/AB, basal/accessory basal nuclei of amygdala. (Adapted from Fellous & LeDoux, Chapter 4, Fig. 4.2).

Swanson, 2000). Kelley stresses that this feed-forward hypothalamic projection to the cerebral hemispheres provides the anatomical substrate for the

intimate access of associative and cognitive cortical areas to basic motivational networks [which] enables the generation of emotions, or the manifestation of “motivational potential.” Thus, in the primate brain, this substantial reciprocal interaction between … behavioral control columns and … cortex subserving higher order processes (p.365) such as language and cognition has enabled a two-way street for emotion.

Rolls (Chapter 5) emphasizes diverse roles of the amygdala in the monkey. Monkey amygdala receives information about primary reinforcers (e.g., taste and touch) and about visual and auditory stimuli from higher cortical areas (e.g., inferior temporal cortex) that can be associated by learning with primary reinforcers (Fig. 12.5). Monkeys will work in order to obtain electrical stimulation of the amygdala; single neurons in the amygdala are activated by brain-stimulation reward of a number of different sites, and some amygdala neurons respond mainly to rewarding stimuli and others to punishing stimuli. There are neurons in the amygdala (e.g., in the basal accessory nucleus) which respond primarily to faces; they may be related to inferring the emotional content of facial expressions. Moreover, the human amygdala can be activated in neuroimaging studies by observing facial expressions, and lesions of the human amygdala may cause difficulty in the identification of some such expressions (see Rolls, 2000).

Figure 12.5 also suggests the crucial role of the orbitofrontal cortex in linking the frontal cortex to the emotional system. It receives inputs from the inferior temporal visual cortex and superior temporal auditory cortex;


Figure 12.5. Some of the pathways involved in emotion shown on a lateral view of the brain of the macaque monkey, emphasizing connections from the primary taste and olfactory cortices and from the inferior temporal cortex to the orbitofrontal cortex and amygdala. The secondary taste cortex and the secondary olfactory cortex are within the orbitofrontal cortex. Connections from the somatosensory cortex reach the orbitofrontal cortex directly and via the insular cortex, as well as the amygdala via the insular cortex. TG, architectonic area in the temporal pole; V4, visual area 4. (Adapted from Rolls, Chapter 5, Fig. 5.4.)

(p.366) from the primary taste cortex and the primary olfactory (pyriform) cortex; from the amygdala; and from midbrain dopamine neurons. As Rolls documents, damage to the caudal orbitofrontal cortex produces emotional changes, which include the tendency to respond when responses are inappropriate (i.e., the tendency of monkeys not to withhold responses to nonrewarded stimuli). Rolls sees orbitofrontal neurons as part of a mechanism which evaluates whether a reward is expected and generates a mismatch (evident as a firing of the nonreward neurons) if the reward is not obtained when it is expected.

As Fellous and LeDoux note, decision-making ability in emotional situations is also impaired in humans with damage to the medial prefrontal cortex, and abnormalities in the prefrontal cortex may predispose people to develop fear and anxiety disorders. They suggest that the medial prefrontal cortex allows cognitive information processing in the prefrontal cortex to regulate emotional processing by the amygdala, while emotional processing by the amygdala may influence the decision-making and other cognitive functions of the prefrontal cortex. They then suggest that the prefrontal-amygdala interactions may be involved in the conscious feelings of fear. However, this neat division between the cognitive cortex and emotional amygdala strikes me as too glib—both because not all parts of the cortex give rise to conscious feelings and because human emotions seem to be inextricably bound up with “cortical subtleties.”

Neuromodulation

We have now seen something of the crucial roles of the hypothalamus, amygdala, and orbitofrontal cortex in the motivational system. In a similar vein, Fellous (1999) reviewed the involvement of these three areas in emotion and argued that the neural basis for emotion involves both computations in these structures and their neuromodulation. It is thus a useful feature of the present volume that Kelley's analysis of motivation and emotion emphasizes three widely distributed chemical signaling systems and their related functions across different phyla. Following are some key points from her richly detailed survey.

We first discuss dopamine, reward, and plasticity. In mammals, dopamine is proposed to play a major role in motor activation, appetitive motivation, reward processing, and cellular plasticity and may well play a major role in emotion. In the mammalian brain, dopamine is contained in specific pathways, which have their origins in the substantia nigra pars compacta and the ventral tegmental area of the midbrain and ascend to innervate widespread striatal, limbic, and cortical regions, including the striatum, prefrontal cortex, amygdala, and other forebrain regions. Studies in the (p.367) awake, behaving monkey show dopamine neurons which fire to predicted rewards and track expected and unexpected environmental events, thereby encoding prediction errors (Schultz, 2000; Fellous & Suri, 2003). Moreover, prefrontal networks are equipped with the ability to hold neural representations in memory and to use them to guide adaptive behavior; dopamine receptors are essential for this ability. Thus, dopamine plays essential roles all the way from basic motivational systems to the working memory systems seen to be essential to the linkage of emotion and consciousness (see below for a critique).
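The prediction-error coding reported by Schultz is commonly formalized as the temporal-difference (TD) error used by the critic in an actor-critic architecture. The sketch below is a textbook simplification (my own, not the chapter's model): the error δ = r + γV(s′) − V(s) is positive for an unexpected reward, near zero for a fully predicted reward, and negative when a predicted reward is omitted, mirroring dopamine bursts, baseline firing, and dips.

```python
# Illustrative TD-error sketch (a standard formalization, not the
# chapter's own model): delta = r + gamma * V(s_next) - V(s_current).

def td_error(reward, v_next, v_current, gamma=0.9):
    """Prediction error: positive = better than expected (dopamine burst),
    negative = worse than expected (dopamine dip)."""
    return reward + gamma * v_next - v_current

def update_value(v, delta, rate=0.1):
    """The critic nudges its value estimate in the direction of the error."""
    return v + rate * delta

# Unexpected reward in an unvalued terminal state: large positive error.
burst = td_error(reward=1.0, v_next=0.0, v_current=0.0)   # 1.0
# Fully predicted reward: error cancels, no burst.
silent = td_error(reward=1.0, v_next=0.0, v_current=1.0)  # 0.0
# Predicted reward omitted: negative error, a dopamine dip.
dip = td_error(reward=0.0, v_next=0.0, v_current=1.0)     # -1.0
```

Repeated application of `update_value` with these errors is what transfers value from the reward itself back to the cues that predict it, which is how such models account for dopamine firing migrating from reward delivery to the predictive stimulus.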

Next, consider serotonin, aggression, and depression. Kelley (Chapter 3) shows that serotonin has been widely implicated in many behavioral functions, including behavioral state regulation and arousal, motor pattern generation, sleep, learning and plasticity, food intake, mood, and social behavior. The cell bodies of serotonergic systems are found in midbrain and pontine regions in the mammalian brain and have extensive descending and ascending projections. Serotonin plays a critical role in the modulation of aggression and agonistic social interactions in many animals—in crustaceans, serotonin plays a specific role in social status and aggression; in primates, with the system's expansive development and innervation of the cerebral cortex, serotonin has come to play a much broader role in cognitive and emotional regulation, particularly control of negative mood or affect.

Finally, we look at opioid peptides and their role in pain and pleasure. Kelley shows that opioids, which include the endorphins, enkephalins, and dynorphins, are found particularly within regions involved in emotional regulation, responses to pain and stress, endocrine regulation, and food intake. Increased opioid function is clearly associated with positive affective states such as relief of pain and feelings of euphoria, well-being, and relaxation. Activation of opioid receptors promotes maternal behavior in mothers and attachment behavior and social play in juveniles. Separation distress, exhibited by archetypal behaviors and calls in most mammals and birds, is reduced by opiate agonists and increased by opiate antagonists in many species (Panksepp, 1998). Opiates can also effect the reduction or elimination of the physical sensation induced by a painful stimulus as well as the negative emotional state it induces.

What is striking here is the way in which these three great neuromodulatory systems seem to be distinct from each other in their overall functionalities, while exhibiting immense diversity of behavioral consequences within each family. The different effects depend on both molecular details (the receptors which determine how a cell will respond to the presence of the neuromodulator) and global arrangements (the circuitry within the modulated brain region and the connections of that region within the brain). Kelley notes that much of the investigation of central opioids has been fueled (p.368) by an interest in understanding the nature of addiction—perhaps another aspect of the “beware the passionate robot” theme. As she documents in her section Addictive Drugs and Artificial Stimulation of Emotions, these systems may have great adaptive value in certain contexts yet may be maladaptive in others. This raises the intriguing question of whether the effects of neuromodulation could be more adaptive for the animal if the rather large-scale “broadcast” of a few neuromodulators were replaced by a targeted and more information-rich distribution of a far more diverse set of neuromodulators. This makes little sense in terms of the conservatism of biological evolution but may have implications both for the design of drugs which modify neuromodulators to target only cells with specific molecular markers and in future research on robot emotions which seeks to determine useful computational and technological analogs for neuromodulation.

Emotion and Consciousness with a Nod to Empathy

With this, let us turn to notions of the linkage between emotion and consciousness. Fellous and LeDoux (Chapter 4) endorse theories of consciousness built around the concept of working memory. They say

the feeling of being afraid would be a state of consciousness in which working memory integrates the following disparate kinds of information: (1) an immediately present stimulus (say, a snake on the path in front of you); (2) long-term memories about that stimulus (facts you know about snakes and experiences you have had with them); and (3) emotional arousal by the amygdala.

However, we saw that activity in the parietal cortex may have no access to consciousness (patient D. F.), even though (Fig. 12.1) it is coupled to prefrontal working memory. Thus, working memory is not the key to consciousness; but if we agree to simply accept that some cortical circuits support conscious states while others do not, then we can still agree with Fellous and LeDoux as to the importance for emotional feelings of connections from the amygdala to the medial (anterior cingulate) and ventral (orbital) prefrontal cortex. As they (and Rolls) note, humans with orbitofrontal cortex damage ignore social and emotional cues and make poor decisions, and some may even exhibit sociopathic behavior. They stress that, in addition to being connected with the amygdala, the anterior cingulate and orbital areas are intimately connected with one another as well as with the lateral prefrontal cortex, and each of the prefrontal areas receives information from sensory processing regions and from areas involved in various aspects of implicit and explicit memory processing.

(p.369) Where Fellous and LeDoux emphasize working memory in their cortical model and include the orbital cortex as part of this cortical refinement, Rolls (1999) includes the orbitofrontal cortex in both routes of his two-route model. Before we look at this model, though, one caveat about Rolls' general view that emotions are states elicited by rewards and punishers. Rolls states that his approach helps with understanding the functions of emotion, classifying different emotions, and understanding what information processing systems in the brain are involved in emotion and how they are involved. Indeed it does but, where Rolls emphasizes the polarity between reward and punishment, I would rather ground a theory of emotions in the basic drives of Arbib and Lieblich (1977) as seen in the basic hypothalamic and midbrain nuclei of Swanson's (2000) behavioral control column and the basic neuromodulatory systems of Fellous (1999) and Kelley (Chapter 3). Where Rolls argues that brains are designed around reward-and-punishment evaluation systems because this is the way that genes can build a complex system that will produce appropriate but flexible behavior to increase their fitness, I would stress (with Swanson) the diversity of specific motor and perceptual systems that the genes provide, while agreeing that various learning systems, based on various patterns of error feedback as well as positive and negative reinforcement, can provide the organism with adaptability in building upon this basic repertoire that would otherwise be unattainable. (Consider Figure 12.2 to see how much machinery evolution has crafted beyond basic drives and incentives, let alone simple reward and punishment.)

Rolls argues that there are two types of route to action performed in relation to reward or punishment in humans. The first route (see the middle row of Fig. 5.2) includes the amygdala and, particularly well-developed in primates, the orbitofrontal cortex. These systems control behavior in relation to previous associations of stimuli with reinforcement. He notes various properties of this system, such as hysteresis, which prevents an animal that is equally hungry and thirsty from continually switching back and forth between eating and drinking. In other words, despite the emphasis that Rolls lays on reward and punishment, the analysis is in many ways linked to the differential effects of different drive systems. The second route (see the top row of Fig. 5.2) involves a computation with many “if … then” statements, to implement a plan to obtain a reward. Rolls argues that syntax is required here because the many symbols that are part of the plan must be correctly linked, as in: “if A does this, then B is likely to do this, and this will cause C to do this.” I think Rolls may be mistaken to the extent that he conflates syntax in simple planning with the explicit symbolic expression of syntax involved in language. Nonetheless (as in the Arbib-Hesse theory), I do agree that the full range of emotion in humans involves the interaction of the language system with a range of other systems. Rolls holds that the second route is related to consciousness, which (p.370) he sees as the state that arises by virtue of having the ability to think about one's own thoughts and analyze long, multistep syntactic plans. He aligns himself implicitly with Fellous and LeDoux when he says that another building block for such planning operations may be the type of short-term memory (i.e., working memory) provided by the prefrontal cortex. 
The type of working memory system implemented in the dorsolateral and inferior convexity of the prefrontal cortex of nonhuman primates and humans (Goldman-Rakic, 1996) could provide mechanisms essential to forming a multiple-step plan. However, as I have commented earlier, the prefrontal cortex involves a variety of working memories, some of which have no direct relation either to consciousness or to emotion.

The model of thought that emerges here sees each mental state as combining emotional and cognitive components. While in some cases one component or the other may be almost negligible, it seems more appropriate, on this account, to see the emotional states as being continually present and varying, rather than as intermittent. The model also sees physiological states and emotion as inextricably intertwined—a cognitive state may induce an emotional reaction, but a prior emotional state may yield a subconscious physiological residue that influences the ensuing unfolding of cognitive and emotional states.

Adolphs (Chapter 2) stresses the important role of social interaction in the forming of emotions. Clearly, human emotions are greatly shaped by our reactions to the behavior of other people. Returning once more to the OED,

Empathy: The power of projecting one's personality into (and so fully comprehending) the object of contemplation. (This term was apparently introduced to English in 1909 by E. B. Titchener, Lect. Exper. Psychol.: Thought-Processes: “Not only do I see gravity and modesty and pride … but I feel or act them in the mind's muscles. This is, I suppose, a simple case of empathy, if we may coin that term as a rendering of Einfühlung.”)

We find a definition which carries within itself the simulation theory discussed by Jeannerod in Chapter 6, but with “the mind's muscles” transformed into the mirror system, which is a network of neurons active both when the “brain owner” acts in a certain way and when he or she observes another acting in a similar fashion. Earlier, we outlined a number of intermediate stages in the evolution of mechanisms that support language. I suggest that, similarly, a number of stages would have to intervene in the evolution of the brain mechanisms that support emotion and empathy. However, this topic and the related issue of the extent to which there has been a synergy between the evolution of language and the evolution of empathy are beyond the scope of the present chapter.

(p.371) Emotion without Biology

Norbert Wiener, whose book Cybernetics: Or Control and Communication in the Animal and the Machine introduced the term cybernetics in the sense that is at the root of its modern usage, also wrote The Human Use of Human Beings (Wiener 1950, 1961). I remember that a professor friend of my parents, John Blatt, was shocked by the latter title; in his moral view, it was improper for one human being to “use” another. I suspect that Wiener would have agreed, while noting that modern society does indeed see humans using others in many disturbing ways and, thus, much needs to be done to improve the morality of human interactions. By contrast, robots and other machines are programmed for specific uses. We must thus distinguish two senses of autonomy relevant to our discussion but which often infect each other.

  • When we talk of an “autonomous human,” the sense of autonomy is that of a person becoming a member of a society and, while working within certain constraints of that society and respecting many or all of its moral conventions, finding his or her own path in which work, play, personal relations, family, and so on can be chosen and balanced in a way that grows out of the subject's experience rather than being imposed by others.

  • When we talk of an “autonomous machine,” the sense is of a machine that has considerable control over its sensory inputs and the ability to choose actions based on an adaptive set of criteria rather than too rigidly predesigned a program.5

On this account, a human slave is autonomous in the machine sense but not in the human sense. Some researchers on autonomous machines seem to speak as if such machines should be autonomous in the human sense. However, when we use computers and robots, it is with pragmatic human-set goals rather than “finding one's own path in life” that we are truly concerned. In computer science, much effort has been expended on showing that programs meet their specifications without harmful side effects. Surely, with robots, too, our concerns will be the same. What makes this more challenging is that, in the future, the controllers for robots will reflect many generations of machine learning and of tweaking by genetic algorithms and be far removed from clean symbolic specifications. Yet, with all that, we as humans will demand warranties that the robots we buy will perform as stated by the supplier. It might be objected that “as adaptive robots learn new behaviors and new contexts for them, it will be impossible for a supplier to issue such a guarantee.” However, this will not be acceptable in the marketplace. If, for example, one were to purchase a car that had adaptive circuitry (p.372) to detect possible collisions and to find optimal routes to a chosen destination, one should demand that the car does not take a perverse delight (assuming it has emotions!) in avoiding collisions by stopping so suddenly as to risk injury to its human occupants or in switching the destination from that chosen by the driver to one the car prefers. If we purchase a computer tutor, then the ability to provide the appearance of emotions useful to the student may well be part of the specifications, as will the ability to avoid (with the caveats mentioned at the beginning of this chapter) the temper tantrums to which the more excitable human teacher may occasionally succumb. 
In summary, machine learning must meet constraints of designed use, while at the same time exploring novel solutions to the problem. This does not rule out the possibility of unintended consequences, but when a machine “goes wrong,” there should be maintenance routines to fix it that would be very different from either the medical treatment or penal servitude applied to humans.

Once again, let us look at the OED for definitions and recall the warning “beware the passionate robot.” We can then consider some key questions in our attempt to understand the nature of robot emotions, if indeed they do or will exist.

Action: I. Generally. 1. The process or condition of acting or doing (in the widest sense), the exertion of energy or influence; working, agency, operation. a. Of persons. (Distinguished from passion, from thought or contemplation, from speaking or writing.)

Active: gen. Characterized by action. Hence A. adj. 1. a. Opposed to contemplative or speculative: Given to outward action rather than inward contemplation or speculation. 2. Opposed to passive: Originating or communicating action, exerting action upon others; acting of its own accord, spontaneous.

Passion: III. 6. a. Any kind of feeling by which the mind is powerfully affected or moved; a vehement, commanding, or overpowering emotion; … as ambition, avarice, desire, hope, fear, love, hatred, joy, grief, anger, revenge. 7. a. spec. An outburst of anger or bad temper. 8. a. Amorous feeling; strong sexual affection; love.

Passive: A. adj. 2. a. Suffering action from without; that is the object, as distinguished from the subject, of action; acted upon, affected, or swayed by external force; produced or brought about by external agency.

I included these definitions because I find it instructive to consider that in everyday parlance we lose active control of our selves when we are in the (p.373) grip of passion, that is, of strong emotion. In other words, while our emotions and our reason may usually be brought into accord, there are other times when “ambition, avarice, desire, hope, fear, love, hatred, joy, grief, anger, [or] revenge” may consume us, effectively banishing all alternatives from our thoughts. In noting this, I frame the question “If emotions conveyed an advantage in biological evolution, why can they be so harmful as well?” We have already noted that Kelley (Chapter 3) examined the role of opioids in addiction and discussed how these may have great adaptive value in certain contexts yet may be maladaptive in others.

Such issues raise the prior question “Did emotions convey a selective advantage?” and the subsequent questions “Are emotions a side effect of a certain kind of cognitive complexity?” (which might imply that robots of a certain subtlety will automatically have emotion as a side effect) and “Were emotions the result of separate evolutionary changes, and if so, do their advantages outweigh their disadvantages in a way that might make it appropriate to incorporate them in robots (whether through explicit design or selective pressure)?”

At the beginning of this chapter, we considered a scenario for a computer designed to effectively teach some body of material to a human student and saw that we might include “providing what a human will recognize as a helpful emotional tone” to the list of criteria for successful program design. However, there is no evolutionary sequence here as charted by the neurobiologists—none of the serotonin or dopamine of Kelley, none of the punishment and reward of Rolls, none of the “fear circuits” of Fellous & LeDoux. This is not to deny that there can be an interesting study of “computer evolution” from the switches of the ENIAC, to the punched cards of the PDP-11, to the keyboard, to the use of the mouse, and, perhaps, to the computer that perceives and expresses emotions. My point here is simply that the computer's evolution to emotion will not have the biological grounding of human emotion. The computer may use a model of the student's emotions yet may not itself be subject to, for example, reward or punishment. Intriguingly, this is simulation with a vengeance—yet not simulation in the mirror sense employed by Jeannerod in Chapter 6—the simulation is purely of “the other,” not a reflection of the other back onto the self. In the same way, one may have a model of a car to drive it without having an internal combustion engine or wheels. Then we must ask if this is an argument against the simulation theory of human emotion. This also points to a multilevel view. At one level, the computer “just follows the program” and humans “just follow the neural dynamics.” It is only a multilevel view that lets us single out certain variables as drives. What does that imply for robot emotions?

Suppose, then, that we have a robot that simulates the appearance of emotional behavior but has none of the “heated feeling” that governed so (p.374) much of my behavior in the opening anecdote. If neither biology nor feeling remains, can we say that such a robot has emotions? In a related vein, reinforcement learning (Sutton & Barto, 1998) has established itself as being of great value in the study of machine learning and artificial neural networks. However, when are we justified in seeing positive reinforcement as operating at the psychological/emotional level rather than as simply a mathematical term in a synaptic adjustment rule?
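The question can be made concrete with a small sketch: in reinforcement learning, “reward” enters the computation only as a number in an update rule. The following temporal-difference step is in the spirit of Sutton and Barto (1998), but the task, step size, and variable names are my own illustrative assumptions; nothing psychological appears anywhere in the arithmetic.

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One temporal-difference step: 'reward' is just a term in the rule."""
    delta = reward + gamma * next_value - value   # reward-prediction error
    return value + alpha * delta

# Repeatedly deliver reward 1.0 just before a terminal state (next_value = 0):
# the value estimate is driven toward 1.0, with no 'feeling' anywhere.
v = 0.0
for _ in range(100):
    v = td_update(v, reward=1.0, next_value=0.0)
print(round(v, 3))
```

Whether such a `reward` variable deserves the psychological reading is exactly the question posed above; the update rule itself is silent on it.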

Appraisal theory (as in Ortony, Clore, & Collins, 1988) develops a catalog of human emotions and seeks to provide a computational account of the appraisals which lead to invocation of one emotion over another. I suggest that robot emotions may partake of some aspects of the appraisal approach to emotions but without the “heat” provided by their biological underpinnings in humans. If it is part of the job of a robot to simulate the appearance of human emotional behavior to more effectively serve human needs, then it may (as suggested earlier) incorporate a model of human emotions within its circuitry, and this may well be appraisal-based. It might then be a matter of terminology as to whether or not one would wish to speak of such a robot having emotions. However, I think a truly fruitful theory of robot emotions must address the fact that many robots will not have a human-computer interface in which the expression of human-like emotional gestures plays a role. Can one, then, ascribe emotions to a robot (or for that matter an animal or collective of animals) for which empathy is impossible?

Perhaps a more abstract view of emotion is required if we are to speak of robot emotions.

To this end, I must first deliver on my promise to provide an abstraction of the notion of ecological niche suitable for robots. In the case of animals, we usually refer to the part of the world where the animal is to make a living. However, locale is not enough. Foodstuffs that are indigestible to one species may be the staff of life to another, the size of the creature can determine where it can find a suitable resting place, and different creatures in a given environment may have different predators. A new species in an environment may create new ecological niches there for others. In biology, the four Fs (feeding, fighting, fleeing, and reproduction) are paramount, and it is success in these that defines the animal's relation to its environment. However, none of this applies to most robots. One can certainly imagine scenarios in which the “struggle for fuel” plays a dominant role in a robot economy, but robot design will normally be based on the availability of a reliable supply of electricity. Although the study of self-reproducing machines is well established (von Neumann, 1966; Arbib, 1966), the reproduction of robots will normally be left to factories rather than added to the robot's own workload. Thus, the ecological niche of a robot will not be defined in terms of general life functions as much as in a set of tasks that it is (p.375) designed to perform, though certainly environmental considerations will have their effect. The design of a Mars rover and a computer tutor must take into account the difference between an environment of temperature extremes and dust storms and an air-conditioned classroom. 
Nonetheless, my point holds that whereas an animal's set of tasks can be related more or (as in the case of humans) somewhat less directly to the four Fs, in the case of robots, there will be an immense variety of task sets, and these will constrain the sensors, controllers, and effectors in a way which must be referred to the task sets without any foundation in the biological imperatives that have shaped the evolution of motivational and emotional systems for biological creatures.

Another classic brain model is relevant here. Kilmer, McCulloch, and Blum (1969) implemented McCulloch's idea of extending observations on the involvement of the reticular formation in switching the animal from sleep to waking (Magoun, 1963) to the hypothesis that the reticular formation was responsible for switching the overall mode of feeding or fleeing or whatever, and then the rest of the brain, when set into this mode, could do the more detailed computations. The data of Scheibel and Scheibel (1958) on the dendritic trees of neurons of the reticular formation suggested the idea of modeling the reticular formation as a stack of modules, each with a slightly different selection of input but trying to decide to which mode to commit the organism. They would communicate back and forth, competing and cooperating until finally they reached a consensus on the basis of their diverse input; that consensus would switch the mode of the organism. In this framework, Kilmer, McCulloch, and Blum (1969) simulated a model, called S-RETIC, of a modular system designed to compute modes in this cooperative manner. Computer simulation showed that S-RETIC would converge for every input in fewer than 25 cycles and that, once it had converged, it would stay converged for the given input. When the inputs strongly indicate one mode, the response is fast; but when the indication is weak, initial conditions and circuit characteristics may strongly bias the final decision.
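The cooperative mode-switching just described can be caricatured in a few lines. The following toy sketch is an assumption-laden simplification of my own, not the published S-RETIC model: each module sees a different slice of the evidence, repeatedly blends its mode preferences with the group average, and the system halts once every module's top-ranked mode agrees.

```python
N_MODES = 4

def consensus(evidence, mix=0.5, max_cycles=50):
    """evidence[i][m]: module i's initial support for mode m."""
    prefs = [row[:] for row in evidence]
    for cycle in range(1, max_cycles + 1):
        # Each module blends its own preferences with the group average.
        mean = [sum(p[m] for p in prefs) / len(prefs) for m in range(N_MODES)]
        prefs = [[(1 - mix) * p[m] + mix * mean[m] for m in range(N_MODES)]
                 for p in prefs]
        votes = {max(range(N_MODES), key=p.__getitem__) for p in prefs}
        if len(votes) == 1:            # every module now backs the same mode
            return votes.pop(), cycle
    return None, max_cycles

# Four modules; the second initially prefers mode 1, the rest mode 2.
evidence = [[0.2, 0.1, 0.9, 0.3],
            [0.1, 0.8, 0.6, 0.2],
            [0.3, 0.2, 0.7, 0.1],
            [0.2, 0.3, 0.8, 0.4]]
mode, cycles = consensus(evidence)
print(mode, cycles)
```

When the evidence clearly favors one mode, consensus is immediate (here the dissenter is pulled to mode 2 after a single cycle); with nearly balanced evidence the outcome depends on initial conditions, echoing the behavior reported for S-RETIC.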

Within any mode of behavior many different acts are possible: if the cat should flee, will it take the mouse or leave it, climb a tree or skirt it, jump a creek or swim it? The notion is that a hierarchical structure that computes modes and then acts within them might in some sense be “better” (irrespective of the particular structural basis ascribed to these functions) than one that tries to determine successive acts directly.

For robot emotions, then, the issue is to what extent emotions may contribute to or detract from the success of a “species” of robots in filling their ecological niche. I thus suggest that an effort to describe robot emotions requires us to analyze the tasks performed by the robot and the strategies available to perform them.

(p.376) Consider a robot that has a core set of basic functions, each with appropriate perceptual schemas, F1, F2, …, Fn (generalizing the four Fs!), each of which has access to more or less separate motor controllers, M1, M2, …, Mn, though these may share some motor subschemas, as in the use of orienting and locomotion for prey capture and predator avoidance in the frog. Each Fj evaluates the current state to come up with an urgency level for activating its motor schema Mj, as well as determining appropriate motor parameters (it is not enough just to snap, but the frog must snap at the fly). Under basic operating conditions, a winner-take-all or similar process can adjudicate between these processes (does the frog snap at the fly or escape the predator?). We might want to say, then, that a motivational system is a state-evaluation process that can adjust the relative weighting of the different functions, raising the urgency level for one system while lowering it for others, so that a stimulus that might have activated Mk in one context will now instead activate Mi.
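A minimal sketch of this architecture might look as follows. The frog-flavored functions, the weighting scheme, and all the numbers are illustrative assumptions of mine, not a model from the chapter: each function maps the sensed state to an urgency for its motor schema, a motivational weighting rescales those urgencies, and a winner-take-all picks the schema to run.

```python
def feed_urgency(state):    # F_feed: a nearby fly raises the urge to snap
    return state.get("fly_proximity", 0.0)

def flee_urgency(state):    # F_flee: a looming predator raises the urge to escape
    return state.get("predator_proximity", 0.0)

FUNCTIONS = {"snap": feed_urgency, "escape": flee_urgency}

def select_action(state, motivation):
    """Winner-take-all over motivationally weighted urgencies."""
    weighted = {name: motivation.get(name, 1.0) * f(state)
                for name, f in FUNCTIONS.items()}
    return max(weighted, key=weighted.get)

state = {"fly_proximity": 0.6, "predator_proximity": 0.5}
# Neutral motivation: the closer fly wins.
print(select_action(state, motivation={"snap": 1.0, "escape": 1.0}))  # snap
# A fearful motivational state re-weights: the same stimulus now triggers escape.
print(select_action(state, motivation={"snap": 1.0, "escape": 2.0}))  # escape
```

The point of the sketch is the last two lines: the stimulus is unchanged, and only the motivational weighting decides which motor schema captures the winner-take-all.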

What if, as is likely, the set of tasks is not a small set of survival tasks but indeed a very large set? It may help to recall (Watts, 2003) that the procurement phase in animal behavior is individualized for the particular situation and can be quite complex, whereas the subsequent consummatory phase involves more stereotypic movements. What I take from this is not the idea of organizing a variety of behaviors with respect to the consummatory phase they serve but, rather, the idea that action selection may well involve grouping a large set of actions into a small number of groups, each containing many actions or tasks. In other words, we consider the modes of the S-RETIC model as abstract groups of tasks rather than as related to biological drives like the four Fs. I consider the case where there are m strategies which can be grouped into n groups (with n much less than m) such that it is in general more efficient, when faced with a problem, to first select an appropriate group of strategies and then to select a strategy from within that group. The catch, of course, is in the caveat “in general.” There may be cases in which rapid commitment to one group of strategies may preclude finding the most appropriate strategy—possibly at times with disastrous consequences. Effective robot design would thus have to balance this fast commitment process against a more subtle evaluative process that can check the suitability of a chosen strategy before committing to it completely. We might then liken motivation to biases which favor one strategy group over another and emotion to the way in which these biases interact with more subtle computations. 
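The efficiency claim, and its catch, can be sketched as follows. The group names, scoring functions, and cost accounting are illustrative assumptions of mine: two-stage selection scores the n groups cheaply, commits to the best group, and then scores only the strategies inside it, rather than scoring all m strategies.

```python
def hierarchical_select(groups, group_score, strategy_score, problem):
    """groups: {group_name: [strategies]}. Commit fast to a group, then pick."""
    best_group = max(groups, key=lambda g: group_score(g, problem))
    chosen = max(groups[best_group], key=lambda s: strategy_score(s, problem))
    # Cost: n group evaluations plus the chosen group's strategies -- not all m.
    evaluations = len(groups) + len(groups[best_group])
    return chosen, evaluations

groups = {"pursue": ["chase", "ambush", "intercept"],
          "avoid": ["freeze", "hide", "flee"]}

# Crude scores. The "catch" in the text: if the group heuristic misjudges the
# situation, the best strategy in the other group is never even considered.
problem = {"threat": 0.8, "opportunity": 0.3}
group_score = lambda g, p: p["opportunity"] if g == "pursue" else p["threat"]
strategy_score = lambda s, p: {"freeze": 0.2, "hide": 0.5, "flee": 0.9,
                               "chase": 0.4, "ambush": 0.6, "intercept": 0.3}[s]

choice, cost = hierarchical_select(groups, group_score, strategy_score, problem)
print(choice, cost)   # 2 group + 3 strategy evaluations, not 6
```

In this reading, motivation corresponds to biases built into `group_score`, and the “passionate robot” is one whose fast group commitment is never rechecked by a slower evaluation of the strategies it has excluded.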
On this abstract viewpoint, the “passionate robot” is not one which loses its temper in the human-like fashion of the computer tutor imagined earlier but rather one in which biases favoring rapid commitment to one strategy group overwhelm more cautious analysis of the suitability of strategies selected from that group for the task at hand.

(p.377) As far as the chapters in Part III, Robots, are concerned, Arkin's “Moving Up the Food Chain: Motivation and Emotion in Behavior-based Robots” comes closest to providing insight from animal brains and behavior, whereas Chapter 7 (Ortony et al.) and Chapter 8 (Sloman et al.) provide multilevel views of artificial intelligence that offer fertile ground for a comparative analysis with the approach to conceptual evolution offered in the present chapter. Nonetheless, my approach does not link directly to the crucial role of social interactions in emotion stressed by Adolphs (Chapter 2), though Jeannerod (Chapter 6) does explore the possible role of mirror systems in our ability to understand the emotions of others, and we saw the mirror system of primates bridging from an individual skill (dexterity) to a social skill (language).

It is thus up to future research to integrate the preliminary theory of robot emotions given here, grounded as it is in the analysis of a robot going about its tasks in some ecological niche, with an approach emphasizing social interactions. To set directions for such integration in the future, I close by noting the relevance of two chapters in this volume. Breazeal and Brooks (Chapter 10) survey progress in making robots more effective in social interactions with humans, arguing that emotion-inspired mechanisms can improve the way autonomous robots operate in a human environment and can improve the ability of these robots to effectively achieve their own goals. More generally, Nair, Tambe, and Marsella (Chapter 11) use a survey of the state of the art in multiagent teamwork (an agent may be considered a generalization of robots to include “softbots” as well as embodied robots) and in computational models of emotions to consider the role of emotions not only in agent-human teams but also in pure agent teams. The stage seems well set for dramatic progress in integrating brain and society in theories of emotion in animals and humans and for linking solo tasks, robot-human interaction, and teamwork in the further exploration of the topic “Who needs emotion?” Not only will the study of brains continue to inform our analysis of robots, but the precision required to make explicit the computational strategies of robots will enrich the vocabulary for the study of motivation and emotion in humans and other animals.

Notes

References


Arbib, M. A. (1966). Simple self-reproducing universal automata. Information and Control, 9, 177–189.

Arbib, M. A. (1985). In search of the person: Philosophical explorations in cognitive science. Amherst: University of Massachusetts Press.

Arbib, M. A. (1987). Levels of modeling of visually guided behavior (with peer commentary and author's response). Behavioral and Brain Sciences, 10, 407–465.

Arbib, M. A. (1989). The metaphorical brain 2: Neural networks and beyond. New York: Wiley-Interscience.

Arbib, M. A. (1992). Book review: Andrew Ortony, Gerald L. Clore, & Allan Collins, The cognitive structure of emotions. Artificial Intelligence, 54, 229–240.

Arbib, M. A. (2001). Coevolution of human consciousness and language. In P. C. Marijuan (Ed.), Cajal and consciousness: Scientific approaches to consciousness on the centennial of Ramón y Cajal's textura (pp. 195–220). New York: New York Academy of Sciences.

Arbib, M. A. (2002). The mirror system, imitation, and the evolution of language. In C. Nehaniv & K. Dautenhahn (Eds.), Imitation in animals and artifacts (pp. 229–280). Cambridge, MA: MIT Press.

Arbib, M. A. (2003). Rana computatrix to human language: Towards a computational neuroethology of language evolution. Philosophical Transactions of the Royal Society of London. Series A, 361, 1–35.

Arbib, M. A., & Hesse, M. B. (1986). The construction of reality. Cambridge: Cambridge University Press.

Arbib, M. A., & Lieblich, I. (1977). Motivational learning of spatial behavior. In J. Metzler (Ed.), Systems Neuroscience (pp. 221–239). New York: Academic Press.

(p.379) Arbib, M. A., & Liaw, J.-S. (1995). Sensorimotor transformations in the worlds of frogs and robots. Artificial Intelligence, 72, 53–79.

Arbib, M. A., & Rizzolatti, G., (1997). Neural expectations: A possible evolutionary path from manual skills to language. Communication and Cognition, 29, 393–424.

Betts, B. (1989). The T5 base modulator hypothesis: A dynamic model of T5 neuron function in toads. In J. P. Ewert & M. A. Arbib (Eds.) Visuomotor coordination: Amphibians, comparisons, models and robots (pp. 269–307). New York: Plenum.

Bischoff-Grethe, A., Crowley, M. G., & Arbib, M. A. (2003). Movement inhibition and next sensory state predictions in the basal ganglia. In A. M. Graybiel, M. R. DeLong, & S. T. Kitai (Eds.), The basal ganglia (Vol. VI, pp. 267–277). New York: Kluwer Plenum.

Blaney, P. H. (1986). Affect and memory: A review. Psychological Bulletin, 99, 229–246.

Brzoska, J., & Schneider, H. (1978). Modification of prey-catching behavior by learning in the common toad (Bufo b. bufo L., Anura, Amphibia): Changes in response to visual objects and effects of auditory stimuli. Behavioural Processes, 3, 125–136.

Christianson, S.-Å. (Ed.) (1992). The handbook of emotion and memory: Research and theory. Hillsdale, NJ: Erlbaum.

Cobas, A., & Arbib, M. (1992). Prey-catching and predator-avoidance in frog and toad: Defining the schemas. Journal of Theoretical Biology, 157, 271–304.

Coetzee, J. M. (2003). The Lives of Animals: ONE: The Philosophers and the Animals, being Lesson 3 of the novel Elizabeth Costello. New York: Viking.

Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Grosset/Putnam.

Dean, P., & Redgrave, P. (1989). Approach and avoidance system in the rat. In M. A. Arbib & J.-P. Ewert (Eds.), Visual structures and integrated functions. Research Notes in Neural Computing. New York: Springer-Verlag.

Dean, P., Redgrave, P., & Westby, G. W. M. (1989). Event or emergency? Two response systems in the mammalian superior colliculus. Trends in Neuroscience, 12, 138–147.

de Waal, F. (2001). The ape and the sushi master: Cultural reflections by a primatologist. New York: Basic Books.

Dominey, P. F., & Arbib, M. A. (1992). A cortico-subcortical model for generation of spatially accurate sequential saccades. Cerebral Cortex, 2, 153–175.

Ewert, J.-P. (1984). Tectal mechanisms that underlie prey-catching and avoidance behaviors in toads. In H. Vanegas (Ed.), Comparative neurology of the optic tectum (pp. 247–416). New York: Plenum.

Ewert, J.-P. (1987). Neuroethology of releasing mechanisms: Prey-catching in toads. Behavioral and Brain Sciences, 10, 337–405.

Fagg, A. H., & Arbib, M. A. (1998). Modeling parietal-premotor interactions in primate control of grasping. Neural Networks, 11, 1277–1303.

Fellous, J. M. (1999). Neuromodulatory basis of emotion. Neuroscientist, 5, 283–294.

(p.380) Fellous, J.-M., & Suri, R. (2003). Dopamine, roles of. In M. A. Arbib, (Ed.), The handbook of brain theory and neural networks (2nd ed., pp. 361–365). Cambridge, MA: Bradford MIT Press.

Gibson, J. J. (1966). The senses considered as perceptual systems. London: Allen & Unwin.

Goldman-Rakic, P. S. (1996). The prefrontal landscape: Implications of functional architecture for understanding human mentation and the central executive. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 351, 1445–1453.

Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neuroscience, 15, 20–25.

Goodale, M. A., Milner, A. D., Jakobson, L. S., & Carey, D. P. (1991). A neurological dissociation between perceiving objects and grasping them. Nature, 349, 154–156.

Grafton, S. T., Arbib, M. A., Fadiga, L., & Rizzolatti, G. (1996). Localization of grasp representations in humans by PET: 2. Observation compared with imagination. Experimental Brain Research, 112, 103–111.

Grüsser-Cornehls, U., & Grüsser, O.-J. (1976). Neurophysiology of the anuran visual system. In R. Llinás & W. Precht (Eds.), Frog neurobiology (pp. 298–385). Berlin: Springer-Verlag.

Guazzelli, A., Corbacho, F. J., Bota, M., & Arbib, M. A. (1998). Affordances, motivation, and the world graph theory. Adaptive Behavior, 6, 435–471.

Guazzelli, A., Bota, M., & Arbib, M. A. (2001). Competitive Hebbian learning and the hippocampal place cell system: Modeling the interaction of visual and path integration cues. Hippocampus, 11, 216–239.

Heatwole, H., & Heatwole, A. (1968). Motivational aspects of feeding behavior in toads. Copeia, 4, 692–698.

Hebb, D. O. (1949). The organization of behavior. New York: Wiley.

Heusser, H. (1960). Instinkterscheinungen an Kröten unter besonderer Berücksichtigung des Fortpflanzungsinstinktes der Erdkröte (Bufo bufo L.). Zeitschrift für Tierpsychologie, 17, 67–81.

Hubel, D. H., & Wiesel, T. N. (1959). Receptive fields of single neurones in the cat's striate cortex. Journal of Physiology, 148, 574–591.

Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular and functional architecture in the cat's visual cortex. Journal of Physiology, 160, 106–154.

Hubel, D. H., & Wiesel, T. N. (1965). Receptive fields and functional architecture in two non-striate visual areas (18 and 19) of the cat. Journal of Neurophysiology, 28, 229–289.

Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology, 195, 215–243.

Hubel, D. H., Wiesel, T. N., & LeVay, S. (1977). Plasticity of ocular dominance columns in monkey striate cortex. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 278, 377–409.

Humphrey, N. K. (1970). What the frog's eye tells the monkey's brain. Brain, Behavior and Evolution, 3, 324–337.

(p.381) Ingle, D., Schneider, G. E., Trevarthen, C. B., & Held, R. (1967). Locating and identifying: Two modes of visual processing (a symposium). Psychologische Forschung, 31, 1–4.

Jackson, J. H. (1878–79). On affections of speech from disease of the brain. Brain, 1, 304–330; 2, 203–222, 323–356.

Kilmer, W. L., McCulloch, W. S., & Blum, J. (1969). A model of the vertebrate central command system. International Journal of Man-Machine Studies, 1, 279–309.

Köhler, W. (1927). The mentality of apes (E. Winter, Trans.). London: Routledge & Kegan Paul.

Kondrashev, S. L. (1976). Influence of the visual stimulus size on the breeding behavior of anuran males. Akad Nayk Zool Zh, 55, 1576–1579.

Kuffler, S. W. (1953). Discharge patterns and functional organization of mammalian retina. Journal of Neurophysiology, 16, 37–68.

Lettvin, J. Y., Maturana, H., McCulloch, W. S., & Pitts, W. H. (1959). What the frog's eye tells the frog brain. Proceedings of the IRE, 47, 1940–1951.

Lieblich, I., & Arbib, M. A. (1982). Multiple representations of space underlying behavior. Behavioral and Brain Sciences, 5, 627–659.

Magoun, H. W. (1963). The waking brain (2nd ed.). Springfield, IL.: Thomas.

Mountcastle, V. B., & Powell, T. P. S. (1959). Neural mechanisms subserving cutaneous sensibility, with special reference to the role of afferent inhibition in sensory perception and discrimination. Bulletin of Johns Hopkins Hospital, 105, 201–232.

Olds, J. (1977). Drives and Reinforcements: Behavioral studies of hypothalamic functions. New York: Raven.

Ortony, A., Clore, G. L., & Collins, A. (1988). The cognitive structure of emotions. New York: Cambridge University Press.

Panksepp, J. (1998). Affective neuroscience. New York: Oxford University Press.

Prescott, T. J., Gurney, K., & Redgrave, P. (2003). Basal ganglia. In M. A. Arbib (Ed.), The handbook of brain theory and neural networks (2nd ed., pp. 147–151). Cambridge, MA: Bradford MIT Press.

Pribram, K. H. (1960). A review of theory in physiological psychology. Annual Review of Psychology, 11, 1–40.

Risold, P. Y., Thompson, R. H., & Swanson, L. W. (1997). The structural organization of connections between hypothalamus and cerebral cortex. Brain Research. Brain Research Reviews, 24, 197–254.

Rizzolatti, G., & Arbib, M. A. (1998). Language within our grasp. Trends in Neurosciences, 21, 188–194.

Rizzolatti, G., Camarda, R., Fogassi, L., Gentilucci, M., Luppino, G., & Matelli, M. (1988). Functional organization of inferior area 6 in the macaque monkey II. Area F5 and the control of distal movements. Experimental Brain Research, 71, 491–507.

Rizzolatti, G., Fadiga L., Gallese, V., & Fogassi, L. (1995). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3, 131–141.

Rizzolatti, G., Fadiga, L., Matelli, M., Bettinardi, V., Perani, D., & Fazio, F. (1996). (p.382) Localization of grasp representations in humans by positron emission tomography: 1. Observation versus execution. Experimental Brain Research, 111, 246–252.

Rizzolatti, G., & Luppino, G. (2001). The cortical motor system. Neuron, 31, 889–901.

Rizzolatti, G., & Luppino, G. (2003). Grasping movements: Visuomotor transformations. In M. A. Arbib (Ed.), The handbook of brain theory and neural networks (2nd ed., pp. 501–504). Cambridge, MA: MIT Press.

Rizzolatti, G., Luppino, G., & Matelli, M. (1998). The organization of the cortical motor system: New concepts. Electroencephalography and Clinical Neurophysiology, 106, 283–296.

Rolls, E. T. (1999). The brain and emotion. Oxford: Oxford University Press.

Rolls, E. T. (2000). Neurophysiology and functions of the primate amygdala, and the neural basis of emotion. In J. P. Aggleton (Ed.), The amygdala: A functional analysis (2nd ed., pp. 447–478). Oxford: Oxford University Press.

Scheibel, M. E., & Scheibel, A. B. (1958). Structural substrates for integrative patterns in the brain stem reticular core. In H. H. Jasper et al. (Eds.), Reticular formation of the brain (pp. 31–68). Boston: Little, Brown.

Schneider, G. E. (1969). Two visual systems. Science, 163, 895–902.

Schultz, W. (2000). Multiple reward signals in the brain. Nature Reviews Neuroscience, 1, 199–207.

Stoerig, P. (2001). The neuroanatomy of phenomenal vision: A psychological perspective. Annals of the New York Academy of Sciences, 929, 176–194.

Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.

Swanson, L. W. (2000). Cerebral hemisphere regulation of motivated behavior. Brain Research, 886, 113–164.

Swanson, L. W., & Mogenson, G. J. (1981). Neural mechanisms for the functional coupling of autonomic, endocrine and somatomotor responses in adaptive behavior. Brain Research, 228, 1–34.

Taira, M., Mine, S., Georgopoulos, A. P., Murata, A., & Sakata, H. (1990). Parietal cortex neurons of the monkey related to the visual guidance of hand movement. Experimental Brain Research, 83, 29–36.

Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of visual behavior. Cambridge, MA: MIT Press.

von Neumann, J. (1966). Theory of self-reproducing automata (A. W. Burks, Ed.). Urbana: University of Illinois Press.

Watts, A. G. (2001). Neuropeptides and the integration of motor responses to dehydration. Annual Review of Neuroscience, 24, 357–384.

Watts, A. G. (2003). Motivation, neural substrates. In M. A. Arbib (Ed.), The handbook of brain theory and neural networks (2nd ed., pp. 680–683). Cambridge, MA: MIT Press.

Weiskrantz, L., Warrington, E. K., Sanders, M. D., & Marshall, J. (1974). Visual capacity in the hemianopic field following a restricted occipital ablation. Brain, 97, 709–728.

Wells, K. D. (1977). The social behavior of anuran amphibians. Animal Behaviour, 25, 666–693.

Wiener, N. (1950). The human use of human beings. Boston: Houghton Mifflin.

Wiener, N. (1961). Cybernetics: Or control and communication in the animal and the machine (2nd ed.). Cambridge, MA: MIT Press.

Wiesel, T. N., & Hubel, D. H. (1963). Effects of visual deprivation on morphology and physiology of cells in the cat's lateral geniculate body. Journal of Neurophysiology, 26, 978–993.

Notes:

(1.) A note of relevance to the evolutionary perspective of the present chapter: in his Chapter 7, Hebb (1949) states that humans are the most emotional of all animals because the degree of emotionality seems to be correlated with the phylogenetic development of sophisticated nervous systems.

(2.) The definition numbers are those given in the online version of the OED (http://dictionary.oed.com/ © Oxford University Press, 2003; accessed December 2003).

(3.) As already noted, such interpretations are tentative, designed to stimulate a dialogue between the discussion of realistically complex emotional behavior and the neurobiological analysis of well-constrained aspects of motivation and emotion. Thus, in contrast to the above analysis, one might claim that there are no separate emotions, that all emotions are linked somehow, and that the experience of one emotion depends on the experience of another. I invite the reader to conduct a personal accounting of such alternatives.

(4.) Arbib and Lieblich (1977) used positive values for appetitive drives and negative values for aversive drives, but I will use positive values here even for negative drives.

(5.) Of course, at one level of analysis, one could construe, for example an autonomous system governed by an adaptive neural network as following a “program” expressed at the level of neural and synaptic dynamics. Conversely, a present-day personal computer will check for e-mail even as its user is actively engaged in some other activity, such as word processing. Thus, the notion of autonomy here is one of degree.