Decision Making, Affect, and Learning: Attention and Performance XXIII

Mauricio R. Delgado, Elizabeth A. Phelps, and Trevor W. Robbins

Print publication date: 2011

Print ISBN-13: 9780199600434

Published to Oxford Scholarship Online: May 2011

DOI: 10.1093/acprof:oso/9780199600434.001.0001



Neuroeconomics of risky decisions: from variables to strategies

Chapter:
(p.153) Chapter 7 Neuroeconomics of risky decisions: from variables to strategies
Source:
Decision Making, Affect, and Learning
Author(s):

Vinod Venkatraman

John W. Payne

Scott A. Huettel

Publisher:
Oxford University Press
DOI:10.1093/acprof:oso/9780199600434.003.0007

Abstract and Keywords

We make a variety of decisions throughout our lives. Some decisions involve outcomes whose values can be readily compared, especially when those outcomes are simple, immediate, and familiar. Other decisions involve imperfect knowledge about their potential consequences. Understanding the choice process when consequences are uncertain — often called the study of decision making under risk — remains a key goal of behavioural economics, cognitive psychology, and now neuroscience. An ongoing challenge, however, lies in the substantial individual differences in how people approach risky decisions. Using a novel choice paradigm, this chapter demonstrates that people vary in whether they adopt compensatory rules (i.e., tradeoffs between decision variables) or non-compensatory rules (i.e., a simplification of the choice problem) in economic decision making. The chapter shows that distinct neural mechanisms support variability in choices and variability in strategic preferences. Specifically, compensatory choices are associated with activation in the anterior insula and the ventromedial prefrontal cortex, while non-compensatory choices are associated with increased activation in the dorsolateral prefrontal cortex and the posterior parietal cortex. The dorsomedial prefrontal cortex shaped decision making at a strategic level through its functional connectivity with these regions. Individual-difference analyses are a key direction through which neuroscience can influence models of choice behaviour.

Keywords:   economic decision making, choice, risky decisions, compensatory rules, non-compensatory rules, individual difference

(p.154)

The science of decision making has a remarkably rich history that reaches from the early insights of Pascal and Bernoulli (Bernoulli, 1738), through axiomatic formalizations of rational choice under risk by von Neumann and Morgenstern (von Neumann and Morgenstern, 1944) and Savage (Savage, 1954), to recent explorations of seemingly “irrational” biases in choice by Kahneman, Tversky, and many others (Kahneman and Tversky, 1979; Loewenstein, Weber, Hsee, and Welch, 2001; Slovic and Lichtenstein, 1968; Tversky and Kahneman, 1974, 1981). Throughout most of this history, there has been a focus on abstracting the risky decision problem into small sets of decision variables that combine into simple compensatory functions. The decision variables of most historical interest have included aspects of option value (e.g., magnitude, valence), the probability of option delivery (e.g., risk, ambiguity), and the delay until outcomes are resolved (e.g., immediate vs. distal rewards). In the late-twentieth century, contextual factors, such as the regret one experiences when another option would have had a better outcome than the one chosen (Zeelenberg, 1999; Zeelenberg and Pieters, 2004) or whether the outcomes of a gamble are framed as gains or losses (Tversky and Kahneman, 1981), rose to the fore in response to robust violations of existing models of choice. Traditionally, decision scientists posit that the observed choice reflects tradeoffs between two or more of these variables (Slovic and Lichtenstein, 1968; Tversky and Fox, 1995; Tversky and Kahneman, 1992). Further, it has often been assumed that every individual can be represented by the same compensatory model, with individual differences in choice reflecting differences in model parameters (Birnbaum, 2008). 
Yet, there exists broad evidence that people adaptively draw upon multiple models for choice, some compensatory and some non-compensatory, depending on context, individual preferences and task structure (Gigerenzer and Goldstein, 1996; Payne et al., 1988, 1992).

7.1 Neural correlates of decision variables

Studies using the techniques of neuroscience to understand decision making—often described as the emerging discipline of neuroeconomics (Glimcher and Rustichini, 2004)—have heretofore adopted a similar focus on identifying decision variables within a general compensatory model framework. In particular, the vast majority of neuroeconomic studies target a particular decision variable (e.g., temporal delay), incorporate that variable into a model function (e.g., hyperbolic discounting), manipulate the level of that variable across a range of stimuli (e.g., monetary gambles), and then identify aspects of brain function that track changes in that variable (Platt and Huettel, 2008). Using this approach, researchers have identified potential neural underpinnings of nearly all of the core variables present in standard descriptive economic models, including value of monetary rewards (Knutson et al., 2003; Yacubian et al., 2007) and other rewards (Aharon et al., 2001; Berns et al., 2001), risk (Huettel, 2006; Preuschoff et al., 2006), ambiguity (Hsu et al., 2005; Huettel et al., 2006), probability weighting (Berns et al., 2008; Hsu et al., 2009), and temporal delay (Kable and Glimcher, 2007; McClure et al., 2004). Moreover, recent studies have identified effects of complex variables implied by particular frameworks for decision making, such as framing strength (De Martino et al., 2006), regret (p.155) (Camille et al., 2004; Coricelli et al., 2005) and other fictive signals (Chiu et al., 2008; Hayden et al., 2009; Lohrenz et al., 2007), and even unexpected changes in risk over time (Preuschoff et al., 2008).

The focus of neuroeconomics on decision variables plays into the strengths of neuroscience methods. Functional magnetic resonance imaging (fMRI), in particular, provides robust information about metabolic changes that pervade a particular brain region for a period of several seconds (Logothetis, 2008; Logothetis et al., 2001). To the degree that a decision variable influences computations within a region, and thus alters local metabolism, fMRI can be very useful for mapping its neural correlates. Conversely, by assuming that choice reflects the interactions of particular variables within a well-defined function, neuroscience researchers can ignore other complexities of the decision environment, from variability in how a given individual approaches choices over time, to potential differences among individuals in their choice behavior. Excluding these latter factors simplifies analyses, ameliorating the weakness of neuroscience methods in dealing with complex sets of predictors that could combine in an unexpected manner.

Yet, despite many real advances, an emphasis on compensatory interactions between variables has some clear limitations. For example, it leads to the intuitive, but often misleading, interpretation that brain systems interact competitively to generate choices. Most common is the canonical rational-affective distinction, also referred to as “Hot vs. Cold” or “System 1 vs. System 2” (Bernheim and Rangel, 2004; Greene et al., 2001; Kahneman, 2003; McClure et al., 2004; Sanfey et al., 2003). And, even when other models have been proposed (Kable and Glimcher, 2007), they still assume that individual differences reflect the relative strength of a parameter within a decision function. To the extent that individuals’ choices reflect fundamentally different valuation or comparison functions—not merely differences in parameter values—the underlying mechanisms would be invisible to standard neuroeconomic analyses. In short, a focus on tradeoffs between decision variables risks missing the very ways people represent and process decision problems, or their decision strategies.1

In the following sections, we describe strategies used by many individuals to solve complex risky decision problems, show how those strategies relate to the predictions of canonical decision models (i.e., those common in neuroeconomic research),2 and outline an approach to elucidating individual differences in use of those strategies using neuroscience. We are hopeful that understanding variability in strategic preferences will (p.156) facilitate construction of models that are both parsimonious and biologically plausible (Clithero et al., 2008; Glimcher and Rustichini, 2004), the cardinal goals of neuroeconomics.

7.2 Strategic influences on choice: the probability-maximizing heuristic

Most studies of risky choice behavior have used very simple gambles for reasons of experimental control and simplicity. Typical decision scenarios juxtapose two gambles of the form ($x, p; 0, 1–p) where one receives $x with probability p or $0 with probability 1–p. The utility of such a simple gamble is modeled, for obvious reasons, using a function that combines the probability and utility of each outcome and sums the weighted outcome values over all the possible outcomes. Such models suffice even when using mixed gambles, typically one non-zero gain and one non-zero loss outcome ($x > 0, p; $y < 0, 1–p). Yet, as argued by Lopes among others, more complex gambles with multiple gain and loss outcomes are needed to simulate real-world choice scenarios (Lopes, 1995; Lopes and Oden, 1999). Most notably, complex mixed gambles with more than two outcomes allow researchers to identify simplifying strategies (heuristics) that may guide more complex risky choice behavior.
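The weighted-sum valuation just described can be sketched in a few lines of Python. The power-utility form with loss aversion and the particular parameter values are illustrative assumptions for this sketch, not quantities fixed by the text.

```python
# Sketch: expected-utility valuation of a simple gamble ($x, p; $0, 1-p).
# The power-utility exponent (beta) and loss-aversion coefficient (lam)
# are illustrative defaults, not parameters specified by the chapter.

def utility(x, beta=0.88, lam=2.25):
    """Power utility: u(x) = x^beta for gains, -lam * |x|^beta for losses."""
    if x >= 0:
        return x ** beta
    return -lam * (abs(x) ** beta)

def expected_utility(outcomes, probs, beta=0.88, lam=2.25):
    """Probability-weighted sum of outcome utilities."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * utility(x, beta, lam) for x, p in zip(outcomes, probs))

# A simple gamble: win $100 with probability 0.5, else $0.
eu_simple = expected_utility([100, 0], [0.5, 0.5])
```

Because the utility function is concave over gains, this valuation is lower than the gamble's expected value of $50, capturing risk aversion.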

One potentially valuable, and computationally tractable, strategy for simplifying a complex gamble is to make decisions based on the overall probability of winning. This strategy was explored by Payne, who created a novel incentive-compatible decision-making task in which subjects added money to (i.e., improved) specific options within complex gambles (Payne, 2005). In this task, subjects viewed a five-outcome gamble G = (x1, p1; x2, p2; x3, p3; x4, p4; x5, p5), where pi indicates the probability of monetary outcome xi. The outcomes are rank-ordered x1 > x2 > x3 > x4 > x5, where at least two outcomes are strict gains (x1 > x2 > $0) and two are strict losses (x5 < x4 < $0). The value of the middle, referent outcome (x3) varies across trials, but is typically $0 or slightly negative (Fig. 7.1A).

Subjects then chose between different ways of improving this gamble (Fig. 7.1B). Adding money to the extreme positive outcome, x1, would be a gain-maximizing (Gmax) choice, whereas adding money to the extreme negative outcome, x5, would be a loss-minimizing (Lmin) choice. As discussed more extensively below, the gambles were constructed so that these two sorts of choices have the greatest effect on the overall subjective value of the gamble, as calculated using well-known descriptive models of choice like cumulative prospect theory along with standard parameter values (Tversky and Kahneman, 1992). Thus, we hereafter describe them collectively as value-maximizing choices. Conversely, adding money to the middle outcome, x3, increases the overall probability of winning or decreases the overall probability of losing. We refer to such choices as probability-maximizing (Pmax). Note that in the terminology of Payne (2005), this bias toward options that maximize the overall probability of winning (relative to losing) was called the “Probability-of-Winning Heuristic.”
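The structure of this improvement task can be sketched as follows. The gamble values and the $20 improvement amount below are illustrative, not the published stimuli; the key property is that only the Pmax choice changes the overall probability of winning.

```python
# Sketch of the Payne (2005) improvement task: a five-outcome mixed gamble,
# rank-ordered x1 > x2 > x3 > x4 > x5, where adding a fixed amount to
# different outcomes defines Gmax, Pmax, and Lmin choices.

def p_win(gamble):
    """Overall probability of a strictly positive outcome."""
    return sum(p for x, p in gamble if x > 0)

def improve(gamble, index, amount):
    """Return a copy of the gamble with `amount` added to outcome `index`."""
    g = list(gamble)
    x, p = g[index]
    g[index] = (x + amount, p)
    return g

# (outcome, probability): two gains, a $0 referent, two losses.
gamble = [(80, 0.2), (40, 0.2), (0, 0.2), (-25, 0.2), (-70, 0.2)]

gmax = improve(gamble, 0, 20)   # add to the extreme gain
pmax = improve(gamble, 2, 20)   # add to the referent: raises P(win)
lmin = improve(gamble, 4, 20)   # add to the extreme loss
```

Here `p_win` rises from 0.4 to 0.6 for the Pmax choice, while the Gmax and Lmin improvements leave it unchanged even though they alter the gamble's value.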

Responses from more than 500 subjects demonstrated that choices in this task systematically violated the predictions of models like expected utility theory (EU) and (p.157)


Fig. 7.1 Experimental stimuli and paradigm. (A) Subjects initially viewed a multi-attribute mixed gamble consisting of five different outcomes, each associated with a probability of occurrence. The outcomes were rank-ordered and typically consisted of two gains, two losses and a central reference outcome. (B) Subjects had to choose between two options for improving the gamble, highlighted in red. Here, the addition of $20 to the central, reference outcome would maximize the overall probability of winning (Pmax choice), whereas the addition of $20 to the extreme loss would reflect a loss-minimizing (Lmin) choice. In other trials, subjects could have a chance to add money to the extreme gain outcome ($85), reflecting a gain-maximizing (Gmax) choice. For the fMRI experiment, subjects had 6 s to make their choice after which two arrows identified the buttons corresponding to the two options. Subjects indicated their choice by pressing the corresponding button as soon as possible.

cumulative prospect theory (CPT), such that subjects sacrificed considerable expected value in favor of maximizing the overall probability of winning compared to losing (Payne, 2005). Most subjects (over two-thirds) preferred to add money to the central reference outcome (x3) that improved the overall probability of winning (or not losing) for the gamble. These results provided strong evidence that many, but not all, individuals incorporate information about the overall probabilities of positive and negative outcomes into their decision making, consistent with both older (Lopes and Oden, 1999) and recent frameworks that include aspiration levels in utility calculations (Diecidue and van de Ven, 2008). Coding an outcome or attribute as good or bad relative to an aspiration level has often been seen as one important form of cognitive simplification. Simon uses such an aspiration level concept in the “satisficing” strategy for decision making (Simon, 1957). That is, a satisficing person is hypothesized to select the first option that meets minimum aspiration levels on all the relevant attributes. Alba and Marmorstein suggest that people may choose alternatives based simply upon the counts of the good or bad features that the alternatives possess (Alba and Marmorstein, 1987). Similarly, the probability-maximizing choices in our task improve the overall probability of winning relative to a neutral aspiration level.

7.3 Individual differences in the probability-maximizing strategy

In follow-up experiments, hereafter drawn from Venkatraman and colleagues (Venkatraman et al., 2009), we modified the above decision-making paradigm to explore, in greater detail, individual differences in the bias toward maximizing the probability of winning compared to losing (i.e., a probability-maximizing strategy). To match the procedure of Payne (2005), subjects (N = 71) chose between two of the above ways to improve five-outcome mixed gambles, but those gambles were not resolved and subject (p.158) compensation was unrelated to their decisions. Across a set of conditions, we modified the payoff structure of the gambles in ways theorized to bias subjects toward or against use of a probability-maximizing strategy (Fig. 7.2A).

A first test was designed to replicate the basic phenomenon from Payne (2005), while also demonstrating that subjects’ choices are still sensitive to expected value. When subjects chose between gain-maximizing and probability-maximizing choices, each associated with similar improvements in the expected value of the gamble, they selected the probability-maximizing option approximately two-thirds of the time (Fig. 7.2B). (Similar effects were observed when subjects chose between loss-minimizing and probability-maximizing options.) But, when the probability-maximizing choice was associated with reduced expected value (i.e., if only $10 could be added to the middle option, compared to $15 to the extreme), the bias toward probability maximizing was reduced but still present (58% of all choices). This bias towards probability-maximizing choices is


Fig. 7.2 Subjects prefer choices that increase the overall probability of winning. (A) In a behavioral experiment (N = 71), we manipulated the gamble attributes across multiple conditions. In the Pmax-unavailable trials, none of the options would result in a change in the overall probability of winning. In the Pmax-exaggerated trial, the gambles were modulated such that the Pmax choice now reflects moving from an uncertain gain to a certain gain gamble. (B) Subjects show a significant bias towards Pmax choices across three independent experiments. (C) More importantly, the preference for the Pmax choices can be reversed or accentuated by experimental manipulations. When none of the options changed the overall probability, subjects now preferred value-maximizing choices. Similarly, when provided with an option to eliminate the possibility of losing, Pmax choices (indicated with arrows) increased dramatically. Note that the Pmax-unavailable condition did not have any choice that changed the overall probability, and the value in the plot represents the proportion of choices of the central outcome in these gambles.

(p.159) consistent with a recent dual-criterion model for risky choice (Diecidue and van de Ven, 2008), in which an overall probability criterion can be traded off against other decision variables. Moreover, subjects were significantly faster for probability-maximizing choices compared to either sort of value-maximizing choice, consistent with a less effortful strategy (Shah and Oppenheimer, 2008). Finally, and most critically, we found that there was tremendous variability in subjects’ relative preference for different strategies, such that some subjects nearly exclusively adopted a probability-maximizing strategy, while others nearly exclusively adopted a value-maximizing strategy.

A second condition included decision problems identical in format to those from Payne (2005), save for one small change: the middle option (x3) was subtly modified such that the subject’s decision could not change its valence (e.g., on some trials, it started at $5, so adding money might increase it to $20). Note that this change is minuscule compared to the gamble’s overall range, which was typically greater than $150. Under these task conditions, subjects now only chose the middle option on 39% of trials (Fig. 7.2C), a highly significant decrease from the first condition. These results confirm that many subjects who preferentially adopt a probability-maximizing strategy when such choices are available in the decision problem will readily switch to a value-maximizing strategy otherwise.

Our third condition included problems that would exaggerate the probability-maximizing strategy. Based on earlier behavioral studies (Payne et al., 1980, 1981), we hypothesized that people will be particularly attracted to changes in overall probabilities that involve moving from an uncertain gain to a certain gain or from a certain loss to an uncertain loss. We selected gambles used in the first condition above, and then translated all values by adding the magnitude of the largest loss (i.e., the worst outcome became $0) or subtracting the magnitude of the largest gain (i.e., the best outcome became $0). When faced with such gambles, subjects indeed showed a significantly increased tendency to choose the option that altered the overall probability of winning (e.g., 82% for gambles that improved the overall probability of winning from 0.8 to 1; Fig. 7.2C). Thus, the overall-probability-of-winning strategy (heuristic) is used by many (but not all) subjects, and its use is sensitive to specific task factors.
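The translation used in this "Pmax-exaggerated" condition is simple arithmetic and can be sketched directly; the gamble values below are illustrative, not the published stimuli.

```python
# Sketch of the Pmax-exaggerated manipulation: translate all outcomes by the
# magnitude of the largest loss, so the worst outcome becomes $0 and the Pmax
# choice moves the gamble from an uncertain gain to a certain gain.

def shift_to_zero_floor(gamble):
    """Add |worst outcome| to every outcome; probabilities are unchanged."""
    worst = min(x for x, _ in gamble)
    return [(x - worst, p) for x, p in gamble]

gamble = [(80, 0.2), (40, 0.2), (0, 0.2), (-25, 0.2), (-70, 0.2)]
shifted = shift_to_zero_floor(gamble)
# shifted outcomes: 150, 110, 70, 45, 0 -- all non-negative
```

The mirror-image manipulation (subtracting the largest gain so the best outcome becomes $0) follows the same pattern with `max` in place of `min`.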

To identify potential individual trait correlates of this strategic variability, we collected psychometric data that included tendency toward satisficing (Schwartz et al., 2002) and emotional sadness (Fordyce, 1988). Across our subject sample, increased maximizing trait responses predicted a decreased bias toward probability-maximizing choices. In other words, satisficers preferred the probability-maximizing choices more than maximizers, consistent with these choices representing a simplifying strategy. Similarly, an increase in the sadness trait measure also predicted a decreased bias towards probability maximizing. Sadness has also been typically associated with reduced certainty, increased elaboration, and reduced heuristic processing (Bodenhausen et al., 1994; Schwarz et al., 1991).

In summary, we show a consistent strong bias towards probability-maximizing choices. Within subjects, we demonstrate that this bias is indeed related to the overall probability of winning and that it can be attenuated or exaggerated by subtle manipulations in (p.160) decision context. Across subjects, we show that differences in decision strategies can be explained by individual variability in trait measures like satisficing. Taken together, these strategy-trait relationships suggest that the robust individual differences in these risky choice paradigms (and, presumably, other settings) may reflect measurable cognitive or affective differences across individuals. We return to this idea in our discussion of the fMRI experiment below.

7.4 Evaluations of consistency with economic models

Given the overall bias towards the probability-maximizing strategy in the previous experiments, we next sought to explicitly evaluate whether choices associated with this response strategy were consistent with traditional economic models such as expected utility (EU) maximization and cumulative prospect theory (CPT). For this test, we used behavior from 72 trials of the task described above, collected during an fMRI study with 23 subjects (neural data will be discussed in a subsequent section). The fMRI experiment used an incentive-compatible payment method, such that subjects were provided an initial unknown but fixed endowment and were later paid for a subset of their improved gambles, selected randomly. (See Venkatraman et al., 2009, for complete experimental details.)

Our model comparisons used both standard and free parameters for the EU and CPT models. Note that we did not include original prospect theory (OPT), since it was meant for simple two-outcome gambles. All gambles used in this study were of the form G = {x1,p1; x2,p2; x3,p3; x4,p4; x5,p5}. The expected utility (EU) of each gamble is given by:

$$\mathrm{EU} = \sum_{i=1}^{5} u(x_i)\,p_i, \quad \text{where } u(x_i) = \begin{cases} x_i^{\beta}, & x_i \ge 0 \\ -\lambda\,|x_i|^{\beta}, & x_i < 0 \end{cases}$$

The cumulative prospect theory (CPT) predictions were obtained using:

$$\mathrm{CPT} = \sum_{i=1}^{5} v(x_i)\,c(i), \quad \text{where } v(x_i) = \begin{cases} x_i^{\beta}, & x_i \ge 0 \\ -\lambda\,|x_i|^{\beta}, & x_i < 0 \end{cases}$$

$$c(i) = \begin{cases} w^{+}(p_1), & i = 1 \\ w^{+}\!\left(\sum_{j=1}^{i} p_j\right) - w^{+}\!\left(\sum_{j=1}^{i-1} p_j\right), & i = 2, \dots, k \ (\text{gains}) \\ w^{-}\!\left(\sum_{j=i}^{5} p_j\right) - w^{-}\!\left(\sum_{j=i+1}^{5} p_j\right), & i = k+1, \dots, 4 \ (\text{losses}) \\ w^{-}(p_5), & i = 5 \end{cases}$$

$$w^{+}(p) = \frac{p^{\gamma^{+}}}{\left[p^{\gamma^{+}} + (1-p)^{\gamma^{+}}\right]^{1/\gamma^{+}}} \quad \text{and} \quad w^{-}(p) = \frac{p^{\gamma^{-}}}{\left[p^{\gamma^{-}} + (1-p)^{\gamma^{-}}\right]^{1/\gamma^{-}}}.$$
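A direct implementation of these two valuations might look like the following sketch. Parameter defaults follow the Tversky and Kahneman (1992) values cited in the text; treating outcomes of exactly $0 as "gains" for the cumulative weighting is an implementation assumption.

```python
# Sketch: EU and CPT valuations for a rank-ordered gamble [(x1,p1),...,(x5,p5)]
# sorted best to worst. Gains cumulate decision weights from the best outcome
# down; losses cumulate from the worst outcome up.

def value(x, beta=0.88, lam=2.25):
    """Power value function with loss aversion."""
    return x ** beta if x >= 0 else -lam * abs(x) ** beta

def w(p, gamma):
    """TK92 probability weighting: p^g / (p^g + (1-p)^g)^(1/g)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def eu(gamble, beta=0.88, lam=2.25):
    """Expected utility: probability-weighted sum of outcome values."""
    return sum(p * value(x, beta, lam) for x, p in gamble)

def cpt(gamble, beta=0.88, lam=2.25, gp=0.61, gm=0.69):
    """Cumulative prospect theory valuation with rank-dependent weights."""
    xs = [x for x, _ in gamble]
    ps = [p for _, p in gamble]
    total = 0.0
    for i in range(len(gamble)):
        if xs[i] >= 0:  # gains: cumulate from the best outcome down
            weight = w(sum(ps[: i + 1]), gp) - w(sum(ps[:i]), gp)
        else:           # losses: cumulate from the worst outcome up
            weight = w(sum(ps[i:]), gm) - w(sum(ps[i + 1:]), gm)
        total += weight * value(xs[i], beta, lam)
    return total
```

For a degenerate gamble with a single certain outcome, both valuations reduce to the value function itself, which is a useful sanity check on the cumulative weights.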

(p.161) For the first level of model comparisons, we determined the model predictions for each of 72 trials in which expected value was approximately matched between possible choices. We first tested models using parameters drawn from the prior literature. Our EU model used a concave utility function with β = 0.88. For the CPT model, we used parameter values of γ+ = 0.61, γ– = 0.69 and λ = 2.25 (Tversky and Kahneman, 1992; Tversky and Wakker, 1995). Despite these standard parameters, neither model was a good predictor of subjects’ aggregate choices. As one example, consider a trial on which a subject chooses whether to add a fixed amount of money either to a large negative outcome (e.g., –$75) or to a middle outcome of $0, each of which is equally likely to occur. Both EU and CPT models predict that subjects should always add the money to the large negative outcome (i.e., minimizing the worst loss). However, subjects showed an opposite effect, adding money to the middle outcome 68% of the time (i.e., typically making a probability-maximizing choice). This and other observations indicated that subjects’ behavior in these experiments was inconsistent with standard model predictions.

To account for potential individual differences in model parameter values, we performed a robustness check using a split-sample analysis. We used one half of the choices of each subject to estimate model parameters: for EU, β; and for CPT, β and γ, keeping λ fixed at 2.25. We also simplified the equation for CPT in our estimation by assuming γ+ = γ–. We then assessed the performance of the EU and CPT models by estimating the reliability of the fitted model in predicting the other half of that subject’s data. We found that parameters estimated from one half of the sample showed poor reliability in predicting choices in the complementary sample (EU: Cronbach α = 0.37; CPT: α = 0.39). However, the proportion of probability-maximizing choices was much more reliable across the two samples (α = 0.78), indicating that subjects remained highly consistent in their strategy preference across the experiment.
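The split-sample procedure can be sketched schematically: fit a parameter on one half of a subject's choices by grid search, then score its predictions on the held-out half. Everything below (the deterministic EU chooser, λ fixed at 2.25, the synthetic trials) is illustrative of the procedure only, not the published analysis.

```python
# Sketch of a split-sample robustness check for a one-parameter choice model.

def utility(x, beta, lam=2.25):
    """Power utility; lambda held fixed, as in the estimation described."""
    return x ** beta if x >= 0 else -lam * abs(x) ** beta

def eu(gamble, beta):
    return sum(p * utility(x, beta) for x, p in gamble)

def predicted_choice(option_a, option_b, beta):
    """Deterministic EU chooser: pick the option with higher EU."""
    return "A" if eu(option_a, beta) >= eu(option_b, beta) else "B"

def fit_beta(trials, grid):
    """Grid-search the beta that best reproduces the observed choices."""
    def hits(beta):
        return sum(predicted_choice(a, b, beta) == c for a, b, c in trials)
    return max(grid, key=hits)

def holdout_accuracy(trials, beta):
    """Fraction of held-out choices the fitted model predicts correctly."""
    n_hit = sum(predicted_choice(a, b, beta) == c for a, b, c in trials)
    return n_hit / len(trials)

# Synthetic subject who always takes a sure $40 over a 50/50 shot at $100,
# i.e., behaves consistently with a strongly concave (small-beta) utility.
trials = [([(100, 0.5), (0, 0.5)], [(40, 1.0)], "B")] * 6
fit_half, test_half = trials[:3], trials[3:]
beta_hat = fit_beta(fit_half, grid=[0.3, 0.5, 0.88, 1.0])
acc = holdout_accuracy(test_half, beta_hat)
```

A consistent subject yields high hold-out accuracy; the poor reliabilities reported above correspond to the opposite outcome, where parameters fitted on one half generalize badly to the other.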

We stress that these results should not be interpreted to imply that the probability-maximizing strategy provides a new, better, and general model of risky choice behavior. On the contrary, this particular simplifying strategy only applies to a subset of decision problems—those that involve comparisons between similar gambles that differ in their overall probability of winning—and cannot be used for choice problems that involve only gains or only losses, or that involve constant probability of winning. In such cases, people may use other heuristic strategies such as the priority heuristic advocated by Gigerenzer and colleagues (Brandstätter et al., 2006) that focuses more on the outcomes of the gambles. We do suggest that, under certain contexts, most individuals adopt heuristic decision strategies not encapsulated in standard models (but see Birnbaum, 2008). Clarifying the neural mechanisms that support such strategies will be the focus of the remainder of this chapter.

7.5 Neural substrates for the strategic control of behavior

Prior research in cognitive neuroscience provides candidate brain regions that may contribute to the sorts of computations necessary for strategic control. As a broad beginning, nearly all models of neural control posit an important role for regions within the (p.162) prefrontal cortex (Miller and Cohen, 2001), which in turn is argued to shape processing in other cortical and subcortical brain regions. Substantial evidence in support of this perspective comes from studies of patients with lesions to prefrontal cortex, who exhibit impairments in changing behavior based on current task context (Bechara et al., 2000). Importantly, this inflexibility is not always economically irrational or even maladaptive. For example, Shiv and colleagues demonstrated that patients with PFC damage did better than individuals with intact prefrontal cortex on a gambling game; examination of the specific pattern of choices revealed that the patients were less likely to shift away from high expected-value but risky options following negative feedback (Shiv et al., 2005). That is, the patients exhibited less of a risk-aversion bias than neurologically normal individuals. Note that patients with similar damage would show considerably worse performance than control individuals in decision scenarios where increased risk is associated with reduced expected value.

Moreover, new conceptions of prefrontal function have argued that specific subregions of PFC may be critical for contextual control. A commonly held framework, one advanced in different guises by different theorists, suggests that lateral prefrontal cortex contains a topographic organization along its posterior to anterior axis (Koechlin et al., 2000; Koechlin et al., 2003). More posterior regions, those immediately adjacent to premotor cortex, are associated with setting up general rules for behavior. Conversely, more anterior regions support the instantiation of rules for behavior based on the current context. Findings from functional neuroimaging studies argue for further divisions within anterior prefrontal cortex, such that regions around the frontal pole support relational integration, or the combination of disparate information into a single judgment (Christoff et al., 2001). An open, but critical, question is how the computational capacities of these distinct regions differentially contribute to decisions under different contexts.

In recent years, there has been substantial interest in the dorsomedial prefrontal cortex (dmPFC)—also called the anterior cingulate cortex (ACC)—as playing a key role in assessing and/or shaping behavior based on context (Hadland et al., 2003; Kennerley et al., 2006; Rushworth et al., 2004). Several independently arising lines of evidence have contributed to the interest in dmPFC. First, electrophysiological studies, in both human and non-human primates, have identified neuronal signatures of a very rapid response to events that signal a change in the current task demands (Nieuwenhuis et al., 2004). These have included, but are not limited to, external feedback about errors (Holroyd and Coles, 2002; Miltner et al., 1997; Ruchsow et al., 2002), self-recognition that an action was likely to be an error (Gehring and Fencsik, 2001), and even the quality of monetary outcomes (Gehring and Willoughby, 2002). For example, a recent study from our group found that the scalp-recorded ERP signal over fronto-central cortex to a monetary outcome, as obtained in a probabilistic guessing task, depended on the valence and magnitude of that outcome, as well as the history of outcomes over preceding trials (Goyer et al., 2008).

Second, a large corpus of functional neuroimaging research has shown that dmPFC activation is evoked by task contexts that involve conflict, particularly between competing response tendencies. Many such studies have used variants of the Stroop paradigm (p.163) (MacLeod, 1992), which requires individuals to inhibit a fast, prepotent response (e.g., color word reading) and instead engage in a slower, less common process (e.g., naming an ink color). Under such task conditions, a very large number of studies have reported activation in dmPFC (Bush et al., 1998; Derrfuss et al., 2005), as reviewed by Bush and colleagues (Bush et al., 2000). (Note, however, that a simple contrast of response conflict versus non-conflict conditions also evokes activation in many other regions, reflecting that other processes are brought online, as well.) Subsequent work has greatly refined the description of this dmPFC activation, such that current theories of dmPFC function emphasize its role in coordinating activation in other regions (Beckmann et al., 2009; Cohen et al., 2005; Meriau et al., 2006). For example, a seminal study by Kerns and colleagues demonstrated that the magnitude of activation change in dmPFC on one trial predicted the subsequent change in lateral PFC activation on the next trial, suggesting that dmPFC may engage executive control regions based on task demands (Kerns et al., 2004).

Third, recent work has implicated dmPFC in the detection of environmental volatility, or the degree to which the current task context is static or variable over time. In an elegant set of studies, Rushworth and colleagues have shown that volatility in the mappings of responses to outcomes is associated with increased dmPFC activity, even when controlling for variability in responses, outcomes, learning rates, and other factors (Behrens et al., 2007). Moreover, these authors have suggested the possibility of a spatial topography within dmPFC such that distinct subregions support volatility associated with social and non-social contexts, and that those subregions have distinct functional connectivity to regions in ventral PFC (Rudebeck et al., 2008; Rushworth et al., 2007). Collectively, these studies point to dmPFC as a likely candidate for implementing the strategic control of behavior.

7.6 Functional neuroimaging of strategic control

Our fMRI experiment sought to dissociate the neural mechanisms that underlie choices from those that shape strategic preferences in risky decision making (Venkatraman et al., 2009). Subjects completed a series of choices using the basic paradigm described in the previous experiments. In each trial, subjects chose between a probability-maximizing simplifying option and a value-maximizing compensatory option (which could be gain-maximizing in some trials and loss-minimizing in others). Subjects made these choices without feedback, so that their decisions would not be shaped by outcome learning. We characterized our subjects' strategic preferences according to their relative proportion of simplifying (Pmax) versus compensatory (Gmax or Lmin) choices. Such a definition creates a continuum with a high value indicating an individual who prefers a simplifying strategy and a low value indicating an individual who prefers a compensatory strategy. As in the previous experiments, there was substantial variability across individuals in their strategic preferences.
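The strategic-preference index just described can be sketched as the proportion of simplifying choices; the function name and choice labels here are our assumptions for illustration.

```python
def strategy_preference(choices):
    """choices: trial-by-trial labels, 'Pmax' (simplifying) versus
    'Gmax'/'Lmin' (compensatory).
    Returns the fraction of Pmax choices: values near 1.0 indicate a
    simplifying preference, values near 0.0 a compensatory preference."""
    pmax = sum(1 for c in choices if c == "Pmax")
    return pmax / len(choices)

print(strategy_preference(["Pmax", "Gmax", "Pmax", "Lmin", "Pmax"]))  # 0.6
```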

Our first analyses identified regions whose activation during the decision phase of the task, but before responses were indicated, predicted the type of choice made on a given trial (Fig. 7.3). Somewhat counterintuitively, increased activation in nominally emotional


Fig. 7.3 Distinct sets of brain regions predict choices. (A) Increased activation in the right anterior insula (peak MNI space coordinates: x = 38, y = 28, z = 0) and in the ventromedial prefrontal cortex (x = 16, y = 21, z = –23) predicted Lmin and Gmax choices, respectively, while increased activation in the lateral prefrontal cortex (x = 44, y = 44, z = 27) and posterior parietal cortex (x = 20, y = –76, z = 57) predicted Pmax choices. Activation maps show active clusters that surpassed a threshold of z > 2.3 with cluster-based Gaussian random field correction. (B–D) Percent signal change in these three regions to each type of choice. On this and subsequent figures, error bars represent ±1 standard error of the mean for each column.

regions predicted that a subject would make choices consistent with economic models: activation in anterior insula (aINS) predicted loss-minimizing choices, whereas activation in vmPFC predicted gain-maximizing choices. Conversely, activation in the nominally cognitive regions of dorsolateral prefrontal cortex (dlPFC) and posterior parietal cortex (PPC) predicted probability-maximizing choices (i.e., simplifying). These results are inconsistent with the canonical neural dual-systems model for decision making, namely that economic rationality results from the activation of cognitive brain systems and that the opposing activation of emotional brain systems drives people toward simplifying and heuristic choices. Instead, these data support an interpretation in terms of the specific consequences of choices in this task: anterior insula activation reflects aversion to potential negative consequences (Kuhnen and Knutson, 2005), whereas vmPFC activation reflects the magnitude of the greatest gain (Bechara et al., 2000). We emphasize, based on these results and those highlighted in the previous section, that economic

Fig. 7.4 Dorsomedial prefrontal cortex predicts strategy use during decision making. (A) Activation in dorsomedial prefrontal cortex (dmPFC, x = 10, y = 22, z = 45; indicated with arrow) tracked strategic preferences, such that the difference in activation between probability-maximizing (i.e., simplifying) and value-maximizing (i.e., compensatory) choices was significantly correlated with variability in strategic preference across individuals. (B) This region also exhibited differential functional connectivity with choice-related regions: there was increased connectivity with dlPFC (and PPC) for probability-maximizing choices and increased connectivity with aINS (and amygdala) for value-maximizing choices.

rationality reflects an idealized model of behavior, not the output of a specific brain region. Regions within prefrontal cortex, for example, may shape behavior in a manner consistent with—or contrary to—a particular economic model, depending on task and context.

Furthermore, activation in a third region, dorsomedial PFC (dmPFC), predicted strategic preferences across subjects (Fig. 7.4A). Specifically, activation in this region increased when subjects made choices that were inconsistent with their preferred strategy (i.e., greater activation when people with a preference for probability-maximizing choices made value-maximizing choices, and vice versa). (Note that activation was minimal, regardless of choice, in those individuals who found the two strategies similarly preferable, consistent with a strategy-conflict explanation but inconsistent with a response-conflict explanation.) We also found differential functional connectivity of this region to dlPFC and anterior insula (Fig. 7.4B), two regions that predicted different types of choices. These findings support the interpretation that control signals from dmPFC modulate the activation of choice-related brain regions, with the strength and directionality of this influence dependent on an individual’s preferred strategy.
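The strategy-conflict account can be expressed as a toy model, in which the signal scales with an individual's preference strength only when the current choice opposes that preference, and stays near zero for indifferent individuals regardless of choice. The function and its parameterization are hypothetical illustrations of ours, not a fitted model from the study.

```python
def dmpfc_conflict(pref_pmax, choice_is_pmax):
    """Toy strategy-conflict signal (hypothetical model, for illustration).

    pref_pmax: an individual's proportion of probability-maximizing choices
               (0.5 = indifferent between the two strategies).
    choice_is_pmax: True if the current choice is probability-maximizing.
    """
    strength = abs(pref_pmax - 0.5) * 2              # 0 if indifferent, 1 if extreme
    opposes = (pref_pmax > 0.5) != choice_is_pmax    # choice contradicts preference?
    return strength if opposes else 0.0

print(dmpfc_conflict(0.9, False))  # strong simplifier making a compensatory choice -> 0.8
print(dmpfc_conflict(0.5, False))  # indifferent individual -> 0.0, whatever the choice
```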

Finally, we found that individual differences in strategic preferences were also correlated with neural sensitivity to reward outcomes. At the end of the experiment, we resolved a subset of the gambles for real rewards, during which we measured the neural response to anticipation of reward, to gain outcomes, and to loss outcomes. Across subjects, a neurometric measure of reward sensitivity, namely the difference between responses to


Fig. 7.5 Ventral striatal sensitivity to rewards predicts strategic bias. At the end of the experiment, some gambles were resolved to monetary gains or losses. (A) Activation in the ventral striatum (x = 14, y = 16, z = –10) increased to realized gains but decreased to realized losses. (B) Notably, the difference in activation to gains and losses in this region correlated with variability in strategic preferences across subjects, with subjects who were most likely to prefer the probability-maximizing strategy exhibiting the greatest neural sensitivity to rewards.

gain outcomes and loss outcomes in the ventral striatum, was positively correlated with bias toward a probability-maximizing strategy (Fig. 7.5). We also found that this neurometric measure of differential reward sensitivity to gain versus loss outcomes in the ventral striatum was negatively correlated with the trait measure of maximizing (r = –0.69), indicating that satisficers were more sensitive than maximizers to the reward consequences of their decisions. While these results by themselves provide no information about the direction of causation, we can speculate that increased neural sensitivity to reward outcomes may underlie preferences for decision strategies that seek to maximize the probability of winning over losing.
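The across-subjects analysis amounts to correlating a per-subject difference score (striatal response to gains minus losses) with each subject's strategic-preference index. The sketch below uses hypothetical data and names to illustrate that computation; it is not the study's data.

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

gain_minus_loss = [0.1, 0.4, 0.6, 0.9]   # hypothetical striatal difference scores
pmax_preference = [0.2, 0.5, 0.6, 0.95]  # hypothetical strategy-preference indices
print(round(pearson_r(gain_minus_loss, pmax_preference), 2))  # ~0.99
```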

7.7 Implications for future research

Neuroeconomic research will, for the foreseeable future, continue to be focused on identifying neural mechanisms that underlie decision variables and the operators that process those variables. We do not disagree with this focus—simply put, many factors that contribute to even simple decisions deserve elucidation. However, we predict that identifying the factors that shape how people differentially represent decision problems, both across contexts and across individuals, will become an increasingly critical topic.

An increased focus on strategic variability would have several salutary consequences. First, it would provide an avenue by which neuroscience data could be extended to modeling in economics (and other social sciences). A common criticism of neuroeconomics, at least among economic theorists, is that neuroscience data is simply irrelevant for core models in economics. Where such theories cannot be derived from first principles, it is argued, they can be identified based on expressed behavior without recourse to internal neural mechanisms. This criticism, which we have labeled the “behavioral sufficiency” argument, can be countered on several grounds (Clithero et al., 2008). Most relevant for the current topic, economic models of behavior are disconnected from the substantial psychological and neuroscientific literature on individual differences. Because of this disconnect, a model may well describe behavior of young adults making decisions in a relaxed setting, but nevertheless have little predictive validity when applied to older adults making decisions under time pressure. To the extent that neuroscience can illuminate the mechanisms underlying individual choice biases and strategic preferences, it may become critical for creating robust and flexible models of real-world decision behavior.

Second, there will be substantial value in moving descriptions of decision mechanisms—within neuroeconomics, cognitive psychology, behavioral economics, and even the lay public—away from the oft-claimed interaction between competing decision systems. Clearly, based on the many studies cited at the outset of this chapter, there has been substantial progress in mapping specific decision variables to specific brain regions. Yet, considering decisions to reflect the simple interactions between sets of these regions would be, much like hydraulic theories of personality, an unnecessary and misleading oversimplification. No one region, nor any set of regions, can be unambiguously claimed to implement a rational decision-making process. Instead, specific brain regions contribute to particular computations, which may or may not be consistent with models for rational behavior. Some such regions may even exert context-dependent influences, like that shown for dmPFC in the final experiment, making it impossible to categorize them within a two-systems framework (Frank et al., 2009).

Third, an important direction for neuroeconomics will be to strengthen its connections to the broader cognitive neuroscience literature. For example, our results are consistent with the growing consensus that dmPFC reflects a mechanism for identifying and responding adaptively to changes in the context for behavior. The postulated computations for dmPFC, which involve broad cognitive control functions like alerting (Gehring and Willoughby, 2002) or monitoring (Carter et al., 1998) changes in the external (or internal) milieu, may reflect specific physiological constraints in the implementation of adaptive behavior. Notably, complex aspects of behavior, like full consideration of decision problems, require a wide range of executive control processes. By adopting simplifying rules for behavior, and only changing those rules when environmental conditions change, the brain can operate with a much-reduced metabolic demand. We contend that dmPFC plays such a role in complex decision making: it signals changes in how a decision problem is represented, and thus shapes computational processing elsewhere in the brain based on the current decision strategy. We note that such strategic considerations are unlikely to be limited to abstract economic decisions; they are also likely to be critical for interpersonal interactions in social contexts (Camerer, 2003b). Initial converging evidence from primate lesion studies and functional neuroimaging data indicates a dissociation between the contributions of the anterior cingulate sulcus and the anterior cingulate gyrus to non-social and social behavior, respectively (Behrens et al., 2008, 2009). Future studies need to extend these findings to the role of trust and reputation in influencing decision strategies (King-Casas et al., 2005).

Fourth, and finally, given a potentially large repertoire of strategies, how could individuals determine which one to employ in a particular situation? Cognitive models of strategy selection often argue that strategy selection occurs through a simple cost/benefit analysis (Payne et al., 1988). In other words, an adaptive decision maker often evaluates costs (effort and computational resources) and benefits (outcomes, potential regret, ease of justification) before selecting and applying a strategy. Like other aspects of decision making, strategic preferences can vary both as a function of the decision context (e.g., the complexity of the problem and time constraints) and of the decision maker. However, it is unclear how this trade-off works at a computational level (Busemeyer and Townsend, 1993). Alternatively, strategy selection may represent a learned response that is based on past experiences with different strategies (Rieskamp and Hoffrage, 2008; Rieskamp and Otto, 2006), such that the decision maker merely chooses the most efficient strategy for each situation based on prior experience (e.g., reward history, consistent with Fig. 7.5). Neuroscience may help answer the metacognitive question of how people decide how to decide.
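A cost/benefit account of strategy selection in the spirit of Payne et al. (1988) can be sketched as choosing the strategy that maximizes expected benefit minus weighted effort; the function, the effort weight, and the example values are our illustrative assumptions.

```python
def select_strategy(strategies, effort_weight=1.0):
    """strategies: dict mapping strategy name -> (expected_benefit, effort_cost).
    Returns the name of the strategy maximizing benefit minus weighted effort.
    A higher effort_weight (e.g., under time pressure) favors cheap heuristics."""
    return max(strategies,
               key=lambda s: strategies[s][0] - effort_weight * strategies[s][1])

options = {"compensatory": (10.0, 6.0), "simplifying": (7.0, 2.0)}
print(select_strategy(options))                     # simplifying  (10-6=4 < 7-2=5)
print(select_strategy(options, effort_weight=0.2))  # compensatory (8.8 > 6.6)
```

Note how the same decision maker switches strategies as effort becomes more or less costly, mirroring the context dependence of strategic preferences described above.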

References

Bibliography references:

Aharon, I., Etcoff, N., Ariely, D., Chabris, C. F., O’Connor, E., and Breiter, H. C. (2001). Beautiful faces have variable reward value: fMRI and behavioral evidence. Neuron, 32(3), 537–51.

Alba, J. W. and Marmorstein, H. (1987). The effects of frequency knowledge on consumer decision-making. The Journal of Consumer Research, 14(1), 14–25.

Bechara, A., Tranel, D., and Damasio, H. (2000). Characterization of the decision-making deficit of patients with ventromedial prefrontal cortex lesions. Brain, 123(Pt 11), 2189–2202.

Beckmann, M., Johansen-Berg, H., and Rushworth, M. F. S. (2009). Connectivity-based parcellation of human cingulate cortex and its relation to functional specialization. Journal of Neuroscience, 29(4), 1175–90.

Behrens, T. E., Woolrich, M. W., Walton, M. E., and Rushworth, M. F. (2007). Learning the value of information in an uncertain world. Nature Neuroscience, 10(9), 1214–21.

Behrens, T. E., Hunt, L. T., Woolrich, M. W., and Rushworth, M. F. (2008). Associative learning of social value. Nature, 456(7219), 245–49.

Behrens, T. E., Hunt, L. T., and Rushworth, M. F. (2009). The computation of social behavior. Science, 324(5931), 1160–64.

Bernheim, B. D. and Rangel, A. (2004). Addiction and cue-triggered decision processes. American Economic Review, 94(5), 1558–90.

Bernoulli, D. (1738). Specimen theoriae novae de mensura sortis. Commentarii Academiae Scientarum Imperialis Petropolitanae, 5, 175–92.

Berns, G. S., McClure, S. M., Pagnoni, G., and Montague, P. R. (2001). Predictability modulates human brain response to reward. Journal of Neuroscience, 21(8), 2793–98.

Berns, G. S., Capra, C. M., Chappelow, J., Moore, S., and Noussair, C. (2008). Nonlinear neurobiological probability weighting functions for aversive outcomes. Neuroimage, 39(4), 2047–57.

Birnbaum, M. H. (2008). New paradoxes of risky decision-making. Psychological Review, 115(2), 463–501.

Bodenhausen, G. V., Sheppard, L. A., and Kramer, G. P. (1994). Negative affect and social judgment—the differential impact of anger and sadness. European Journal of Social Psychology, 24(1), 45–62.

Brandstätter, E., Gigerenzer, G., and Hertwig, R. (2006). The priority heuristic: making choices without trade-offs. Psychological Review, 113(2), 409–32.

Busemeyer, J. R. and Townsend, J. T. (1993). Decision field theory: a dynamic cognitive approach to decision making in an uncertain environment. Psychological Review, 100(3), 432–59.

Bush, G., Luu, P., and Posner, M. I. (2000). Cognitive and emotional influences in anterior cingulate cortex. Trends in Cognitive Science, 4(6), 215–22.

Bush, G., Whalen, P. J., Rosen, B. R., Jenike, M. A., McInerney, S. C., and Rauch, S. L. (1998). The counting Stroop: an interference task specialized for functional neuroimaging—validation study with functional MRI. Human Brain Mapping, 6(4), 270–82.

Camerer, C. F. (2003a). Behavioral studies of strategic thinking in games. Trends in Cognitive Science, 7(5), 225–31.

Camerer, C. (2003b). Behavioral game theory: experiments in strategic interaction. Princeton, NJ.: Princeton University Press.

Camille, N., Coricelli, G., Sallet, J., Pradat-Diehl, P., Duhamel, J.R., and Sirigu, A. (2004). The involvement of the orbitofrontal cortex in the experience of regret. Science, 304(5674), 167–70.

Carter, C. S., Braver, T. S., Barch, D. M., Botvinick, M. M., Noll, D., and Cohen, J. D. (1998). Anterior cingulate cortex, error detection, and the online monitoring of performance. Science, 280(5364), 747–49.

Chiu, P. H., Lohrenz T. M., and Montague P. R. (2008). Smokers’ brains compute, but ignore, a fictive error signal in a sequential investment task. Nature Neuroscience, 11(4), 514–20.

Christoff, K., Prabhakaran, V., Dorfman, J., Zhao, Z., Kroger, J. K., Holyoak, K. J. et al. (2001). Rostrolateral prefrontal cortex involvement in relational integration during reasoning. Neuroimage, 14(5), 1136–49.

Clithero, J. A., Tankersley, D., and Huettel, S. A. (2008). Foundations of neuroeconomics: from philosophy to practice. PLoS Biology, 6(11), 2348–53.

Cohen, M. X., Heller, A. S., and Ranganath, C. (2005). Functional connectivity with anterior cingulate and orbitofrontal cortices during decision-making. Cognitive Brain Research, 23(1), 61–70.

Cope, D. E. and Murphy, A. J. (1981). The value of strategies in problem-solving. Journal of Psychology, 107(1), 11–16.

Coricelli, G., Critchley H. D., Joffily M., O’Doherty J. P., Sirigu A., and Dolan R. J. (2005). Regret and its avoidance: a neuroimaging study of choice behavior. Nature Neuroscience, 8(9), 1255–62.

De Martino, B., Kumaran, D., Seymour, B., and Dolan, R. J. (2006). Frames, biases, and rational decision-making in the human brain. Science, 313(5787), 684–87.

Derrfuss, J., Brass, M., Neumann, J., and von Cramon, D. Y. (2005). Involvement of the inferior frontal junction in cognitive control: meta-analyses of switching and Stroop studies. Human Brain Mapping, 25(1), 22–34.

Diecidue, E. and van de Ven, J. (2008). Aspiration level, probability of success and failure, and expected utility. International Economic Review, 49(2), 683–700.

Fordyce, M. W. (1988). A review of research on the happiness measures: a 60 second index of happiness and mental health. Social Indicators Research, 20(4), 355–81.

Frank, M. J., Cohen, M. X., and Sanfey, A. G. (2009). Multiple systems in decision-making: a neurocomputational perspective. Current Directions in Psychological Sciences, 18(2), 73–77.

Gehring, W. J. and Fencsik, D. E. (2001). Functions of the medial frontal cortex in the processing of conflict and errors. Journal of Neuroscience, 21(23), 9430–37.

Gehring, W. J. and Willoughby, A. R. (2002). The medial frontal cortex and the rapid processing of monetary gains and losses. Science, 295(5563), 2279–82.

Gigerenzer, G. and Goldstein, D.G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103, 650–69.

Glimcher, P. W. and Rustichini, A. (2004). Neuroeconomics: the consilience of brain and decision. Science, 306(5695), 447–52.

Goyer, J. P., Woldorff, M. G., and Huettel, S. A. (2008). Rapid electrophysiological brain responses are influenced by both valence and magnitude of monetary rewards. Journal of Cognitive Neuroscience, 20(11), 2058–69.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., and Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105–08.

Hadland, K. A., Rushworth, M. F. S., Gaffan, D., and Passingham, R. E. (2003). The effect of cingulate lesions on social behavior and emotion. Neuropsychologia, 41(8), 919–31.

Hayden, B.Y., Pearson J.M., Platt M.L. (2009). Fictive reward signals in the anterior cingulate cortex. Science, 324(5929), 948–50.

Holroyd, C. B., and Coles, M. G. (2002). The neural basis of human error processing: reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109(4), 679–709.

Hsu, M., Bhatt, M., Adolphs, R., Tranel, D., and Camerer, C. F. (2005). Neural systems responding to degrees of uncertainty in human decision-making. Science, 310(5754), 1680–3.

Hsu, M., Krajbich, I., Zhao, C., and Camerer, C. F. (2009). Neural response to reward anticipation under risk is nonlinear in probabilities. Journal of Neuroscience, 29(7), 2231–7.

Huettel, S. A. (2006). Behavioral, but not reward, risk modulates activation of prefrontal, parietal, and insular cortices. Cognitive, Affective, and Behavioral Neuroscience, 6(2), 141–51.

Huettel, S. A., Stowe, C. J., Gordon, E. M., Warner, B. T., and Platt, M. L. (2006). Neural signatures of economic preferences for risk and ambiguity. Neuron, 49(5), 765–75.

Kable, J. W., and Glimcher, P. W. (2007). The neural correlates of subjective value during intertemporal choice. Nature Neuroscience, 10(12), 1625–33.

Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. American Economic Review, 93(5), 1449–75.

Kahneman, D. and Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–91.

Kennerley, S. W., Walton, M. E., Behrens, T. E., Buckley, M. J., and Rushworth, M. F. (2006). Optimal decision-making and the anterior cingulate cortex. Nature Neuroscience, 9(7), 940–47.

Kerns, J. G., Cohen, J. D., MacDonald, A. W., 3rd, Cho, R. Y., Stenger, V. A., and Carter, C. S. (2004). Anterior cingulate conflict monitoring and adjustments in control. Science, 303(5660), 1023–26.

King-Casas, B., Tomlin, D., Anen, C., Camerer, C. F., Quartz, S. R., and Montague, P. R. (2005). Getting to know you: reputation and trust in a two-person economic exchange. Science, 308(5718), 78–83.

Knutson, B., Fong, G. W., Bennett, S. M., Adams, C. M., and Hommer, D. (2003). A region of mesial prefrontal cortex tracks monetarily rewarding outcomes: characterization with rapid event-related fMRI. NeuroImage, 18(2), 263–72.

Koechlin, E., Corrado, G., Pietrini, P., and Grafman, J. (2000). Dissociating the role of the medial and lateral anterior prefrontal cortex in human planning. Proceedings of the National Academy of Sciences USA, 97(13), 7651–56.

Koechlin, E., Ody, C., and Kouneiher, F. (2003). The architecture of cognitive control in the human prefrontal cortex. Science, 302(5648), 1181–5.

Kuhnen, C. M. and Knutson, B. (2005). The neural basis of financial risk taking. Neuron, 47(5), 763–70.

Loewenstein, G. F., Weber, E. U., Hsee, C. K., and Welch, N. (2001). Risk as feelings. Psychological Bulletin, 127(2), 267–86.

Logothetis, N.K. (2008). What we can do and what we cannot do with fMRI. Nature, 453(7197), 869–878.

Logothetis, N.K., Pauls, J., Augath, M., Trinath, T., and Oeltermann, A. (2001). Neurophysiological investigation of the basis of the fMRI signal. Nature, 412(6843), 150–7.

Lohrenz, T., McCabe, K., Camerer, C. F., and Montague, P. R. (2007). Neural signature of fictive learning signals in a sequential investment task. Proceedings of the National Academy of Sciences USA, 104(22), 9493–98.

Lopes, L. L. (1995). Algebra and process in the modeling of risky choice. In J. R. Busemeyer, R. Hastie, and D. L. Medin (Eds.), Decision-making from a cognitive perspective. San Diego: Academic Press.

Lopes, L. L. and Oden, G. C. (1999). The role of aspiration level in risky choice: a comparison of cumulative prospect theory and SP/A theory. Journal of Mathematical Psychology, 43(2), 286–313.

MacLeod, C. M. (1992). The Stroop task: The “gold standard” of attentional measures. Journal of Experimental Psychology: General, 121(1), 12–14.

McClure, S. M., Laibson, D. I., Loewenstein, G., and Cohen, J. D. (2004). Separate neural systems value immediate and delayed monetary rewards. Science, 306(5695), 503–507.

Meriau, K., Wartenburger, I., Kazzer, P., Prehn, K., Lammers, C. H., van der Meer, E. et al. (2006). A neural network reflecting individual differences in cognitive processing of emotions during perceptual decision-making. Neuroimage, 33(3), 1016–27.

Miller, E. K. and Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24, 167–202.

Miltner, W. H. R., Braun, C. H., and Coles, M. G. H. (1997). Event-related brain potentials following incorrect feedback in a time-estimation task: evidence for a “generic” neural system for error-detection. Journal of Cognitive Neuroscience, 9, 788–98.

Nieuwenhuis, S., Holroyd, C. B., Mol, N., and Coles, M. G. (2004). Reinforcement-related brain potentials from medial frontal cortex: origins and functional significance. Neuroscience and Biobehavioral Reviews, 28(4), 441–48.

Payne, J. W. (2005). It is whether you win or lose: The importance of the overall probabilities of winning or losing in risky choice. Journal of Risk and Uncertainty, 30(1), 5–19.

Payne, J. W., Laughhunn, D. J., and Crum, R. (1980). Translation of gambles and aspiration level effects in risky choice behavior. Management Science, 26(10), 1039–60.

Payne, J. W., Laughhunn, D. J., and Crum, R. (1981). Further tests of aspiration level effects in risky choice behavior. Management Science, 27(8), 953–58.

Payne, J. W., Bettman, J. R., and Johnson, E. J. (1988). Adaptive strategy selection in decision-making. Journal of Experimental Psychology-Learning Memory and Cognition, 14(3), 534–52.

Platt, M. L. and Huettel, S. A. (2008). Risky business: the neuroeconomics of decision-making under uncertainty. Nature Neuroscience, 11(4), 398–403.

Preuschoff, K., Bossaerts, P., and Quartz, S. R. (2006). Neural differentiation of expected reward and risk in human subcortical structures. Neuron, 51(3), 381–90.

Preuschoff, K., Quartz, S. R., and Bossaerts, P. (2008). Human insula activation reflects risk prediction errors as well as risk. Journal of Neuroscience, 28(11), 2745–52.

Rieskamp, J. and Hoffrage, U. (2008). Inferences under time pressure: how opportunity costs affect strategy selection. Acta Psychologica, 127(2), 258–76.

Rieskamp, J. and Otto, P. E. (2006). SSL: a theory of how people learn to select strategies. Journal of Experimental Psychology-General, 135(2), 207–36.

Ruchsow, M., Grothe, J., Spitzer, M., and Kiefer, M. (2002). Human anterior cingulate cortex is activated by negative feedback: evidence from event-related potentials in a guessing task. Neuroscience Letters, 325(3), 203–206.

Rudebeck, P. H., Bannerman, D. M., and Rushworth, M. F. S. (2008). The contribution of distinct subregions of the ventromedial frontal cortex to emotion, social behavior, and decision-making. Cognitive Affective and Behavioral Neuroscience, 8(4), 485–97.

Rushworth, M. F. S., Behrens, T. E. J., Rudebeck, P. H., and Walton, M. E. (2007). Contrasting roles for cingulate and orbitofrontal cortex in decisions and social behavior. Trends in Cognitive Sciences, 11(4), 168–76.

Rushworth, M. F. S., Walton, M. E., Kennerley, S. W., and Bannerman, D. M. (2004). Action sets and decisions in the medial frontal cortex. Trends in Cognitive Sciences, 8(9), 410–417.

Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., and Cohen, J. D. (2003). The neural basis of economic decision-making in the Ultimatum Game. Science, 300(5626), 1755–58.

Savage, L. J. (1954). Foundations of statistics. New York: Wiley.

Schwarz, N., Bless, H., and Bohner, G. (1991). Mood and persuasion—affective states influence the processing of persuasive communications. Advances in Experimental Social Psychology, 24, 161–99.

Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., and Lehman, D. R. (2002). Maximizing versus satisficing: happiness is a matter of choice. Journal of Personality and Social Psychology, 83(5), 1178–97.

Shah, A. K. and Oppenheimer, D. M. (2008). Heuristics made easy: an effort-reduction framework. Psychological Bulletin, 134(2), 207–22.

Shiv, B., Loewenstein, G., and Bechara, A. (2005). The dark side of emotion in decision-making: when individuals with decreased emotional reactions make more advantageous decisions. Cognitive Brain Research, 23(1), 85–92.

Simon, H. A. (1957). Models of man: social and rational. New York: Wiley.

Slovic, P. and Lichtenstein, S. (1968). The relative importance of probabilities and payoffs in risk-taking. Journal of Experimental Psychology Monograph Supplement, 72, 1–18.

Tversky, A. and Fox, C. R. (1995). Weighing risk and uncertainty. Psychological Review, 102(2), 269–83.

Tversky, A. and Kahneman, D. (1974). Judgment under uncertainty: heuristics and biases. Science, 185, 1124–31.

Tversky, A. and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–58.

Tversky, A. and Kahneman, D. (1992). Advances in prospect theory: cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5, 297–323.

Tversky, A. and Wakker, P. (1995). Risk attitudes and decision weights. Econometrica, 63(6), 1255–80.

Venkatraman, V., Payne, J. W., Bettman, J. R., Luce, M. F., and Huettel, S. A. (2009). Separate neural mechanisms underlie choice and strategic preference in risky decision-making. Neuron, 62(4), 593–602.

von Neumann, J. and Morgenstern, O. (1944). Theory of games and economic behavior. Princeton, NJ: Princeton University Press.

Yacubian, J., Sommer, T., Schroeder, K., Glascher, J., Braus, D. F., and Buchel, C. (2007). Subregions of the ventral striatum show preferential coding of reward magnitude and probability. Neuroimage, 38(3), 557–63.

Zeelenberg, M. (1999). Anticipated regret, expected feedback and behavioral decision making. Journal of Behavioral Decision Making, 12, 93–106.

Zeelenberg, M. and Pieters, R. (2004). Consequences of regret aversion in real life: The case of the Dutch postcode lottery. Organizational Behavior and Human Decision Processes, 93(2), 155–68.

Notes:

(1) We note that the term “strategy” has different meanings in different contexts. Within game theory it refers to a particular choice option available to one player (Camerer, 2003a), whereas within cognitive psychology it refers to the manner in which people seek out and use information in the pursuit of some goal (Cope and Murphy, 1981). We adopt this term based on the latter connotation, but we note that similar concepts are implied by terms like “decision modes” or “heuristics.”

(2) Cumulative prospect theory is the most frequently cited model for decision making under risk. However, it is important to note that even in the original description of that model (Tversky and Kahneman, 1992), its authors clearly acknowledge that multiple decision strategies contribute to risky choices—“when faced with a complex problem, people employ a variety of heuristic procedures in order to simplify the representation and the evaluation of prospects” (p. 317).