Realistic Decision Theory: Rules for Nonideal Agents in Nonideal Circumstances

Paul Weirich

Print publication date: 2004

Print ISBN-13: 9780195171259

Published to Oxford Scholarship Online: November 2004

DOI: 10.1093/019517125X.001.0001


(p.193) Appendix A: Optimization and Utility Maximization



Chapter 2 presents the principles of optimization and utility maximization. Its objective is to lay the foundation for a study of their idealizations and for a method of relaxing those idealizations. It postpones a full articulation of the principles because the techniques for generalizing them are versatile and do not rely on details. This appendix elaborates chapter 2's presentation of the principles. It addresses questions about an option's nature, the comparison set for an option's evaluation, and calculations of an option's utility. My glosses on the decision principles extend to their generalizations in chapters 4–9. They yield a more refined final product.

The appendix assumes chapter 2's idealizations for the principles. Agents are ideal and rational. Their situations are perfect for complying with the principles. The principle of optimization assumes full information, whereas the principle of utility maximization allows for uncertainty about options' outcomes.

A.1. Sets of Options

Chapter 2 applies optimization to momentary acts and utility maximization to a special type of momentary act: decisions. This section supports these versions of the principles. It explains the advantages of taking options as momentary acts and, given uncertainty, as decisions.

A.1.1. Full Information

To clarify the principle of optimization, one must say which acts it evaluates and which (possible) acts form the comparison set for an act evaluated. Assuming that the act evaluated is in the comparison set for it, the chief requirement is to specify for an agent the acts over which she should optimize. Some candidate sets are: all acts, all momentary acts, all extended acts, all intentional acts, and all acts not (p.194) performed by performing other acts. The appropriate set depends on at least three factors. First, it depends on the principle's force: Does the principle express only a necessary condition of rationality, or also a sufficient condition of rationality? Second, it depends on the principle's idealizations: Does the principle, for instance, assume that an agent has unlimited cognitive power and direct control of all her acts? Third, it depends on the criteria of individuation and identity for acts: For example, are acts concrete, abstract, or hybrid events?

The features of an act that matter to standards of rationality, such as optimization, are related to responsibility. In particular, an agent's control over the act and awareness of her control over it matter. So chapter 2 applies optimization to the set of acts in an agent's direct control, basic acts, and adopts idealizations that ensure the agent's knowledge of that set. It adjusts the principle's force according to its characterization of acts.

To round out chapter 2's characterization of basic acts, consider some types of act. Acts may be simple or composite. A driver may change lanes and accelerate at the same time. Her act at the time is composite. Composite acts may be in an agent's direct control. So I do not require that basic acts be simple.

Acts may be specific or nonspecific. I may raise an arm. At the same time, I may raise my right arm high. The second act is more specific than the first. Both specific and nonspecific acts may be in an agent's direct control. So I do not require that basic acts be specific. Lewis (1981: 7) takes options to be maximally specific acts. But optimization advanced as a necessary condition of rationality can ignore an act's specificity without contravening Lewis's version of the principle. An option optimal among basic specific acts is optimal among basic acts.

Under my assumptions, agents directly control at a moment only acts at that moment. So I take basic acts as momentary acts and restrict the principle of optimization to them. Optimization is then relative to rival momentary acts.1 Other comparison sets may also work, especially given adjustments in the principle's force, idealizations, and characterization of acts. This section examines and rejects just one alternative.

Should optimization also consider extended acts of some sort? Extended acts include following rules, policies, strategies, plans, and decision procedures. They include composite acts of self‐direction, such as forming an intention to perform an act and then carrying out the intention, in particular, forming a resolution and then acting on it and also adopting a constraint on action and then adhering to it. Extended acts also include composite acts of self‐influence, such as forming a disposition to act in a certain way and then acting in that way. Optimizing among extended acts thus covers proposals that section 1.1 mentions concerning plans, resolutions, and dispositions. This section briefly assesses and dismisses optimization among extended acts. For more discussion, see Gauthier (1986: chap. 6; 1998), Bratman (1987: chap. 2; 1998; 1999, pt. 1), McClennen (1990: chap. 1; 1998), Weirich (2001a: 84–86), and related essays in Mele and Rawling (2004).

Some features of human decision making recommend optimizing among extended acts. People commonly decide to perform extended acts. One decides to take a walk, for instance, and afterward does not think about each step. When grading a logic class's homework, one decides to deduct so many points for (p.195) a misapplication of modus ponens and afterward follows the policy without reviewing the reasons for it every time one spots the mistake. Extended acts have many benefits. Adopting a plan and following it is the best way to perform a complex act. Without a plan, it is unlikely that one can cook dinner, build a canoe, or become a doctor. Coordination with others also depends on one's sticking to a plan adopted in concert with them. Taking a broad perspective on action makes good sense. Leaving action to spur‐of‐the‐moment decisions runs the risk of missing opportunities because one overlooks the best options or misevaluates them. Following a plan also has lower cognitive costs than constantly surveying and assessing one's options. Because extended acts have a prominent, beneficial role in deliberations, it may seem sensible to optimize among them.

Despite the importance of extended acts, optimization works best applied only to momentary acts. This restriction does not entirely ignore extended acts, however. Momentary acts include decisions to perform extended acts. Execution of a strategy, for example, takes place over an extended period of time and is not in an agent's direct control. But a decision to adopt the strategy is momentary and a matter of direct control. The restriction excludes the strategy's execution but not its adoption.

Although agents justifiably deliberate about extended acts, they realize those acts by performing momentary acts. Momentary acts are the basic units of control and so should be the basic units of evaluation. The rationality of momentary acts explains the rationality of extended acts. Two points support this position. First, optimization among momentary acts is feasible for the agents I treat. Optimization, as I take it, is a standard of evaluation, not a decision procedure; momentary acts need not be the focus of deliberation to be optimization's target. One may optimize among momentary acts without deliberating about them. Also, even if cognitive limits make optimization among momentary acts too high a standard for humans, it is rational for ideal agents in ideal circumstances to optimize among those acts. Ideal agents can deliberate about momentary acts. They have the cognitive power to examine and evaluate every option. In ideal circumstances, they can identify optimal momentary acts. Chapter 2 treats ideal agents in ideal circumstances and for them proposes optimization among momentary acts. Its idealizations support optimization's concentration on momentary acts.

Second, the benefits of extended acts redound upon the momentary acts that constitute them. Consider, for example, the extended act of giving a toy to each child in a class. Because stopping short disappoints some children, the act's rationality seems to require optimization among extended acts occupying the same period. However, each step toward completion of the extended act is optimal given that the agent will complete the extended act, as one assumes when evaluating it. Similarly, if adopting and following a plan for cooking dinner has benefits, then so does each step of the plan. Each step, by contributing to the realization of the extended act, shares in producing the extended act's consequences. The benefits that justify a plan's execution typically justify the momentary acts that constitute the plan's execution. So optimizing among momentary acts generally agrees with optimizing among extended acts. In particular, optimization among momentary acts is not myopic. It does not counsel short‐term optimization at the expense of long‐term optimization. If passing up a benefit now brings a greater benefit in the (p.196) future, then passing up the benefit now optimizes among momentary acts. The virtue of prudence accrues to the momentary acts that yield prudent extended acts.

One consequence of section 2.1.2's evaluating acts according to their outcomes is that any extended act performed has the same utility as each of its parts. It and each of its parts have the same possible world as outcome and so the same desirability. This does not hold for a counterfactual extended act, however. It may have a part that is performed. Perhaps the extended act was started but abandoned. The outcome of the part performed is the actual world, whereas the outcome of the unperformed extended act is some other possible world. The desirabilities of the two worlds may differ. So the desirabilities of the extended act and its part may differ.2

Clearly, taking optimization among acts to go by comparison of acts' worlds erases the distinction between short‐ and long‐term optimization. Take an alleged case where a multistage act optimal among contemporaneous acts has a first stage not optimal among contemporaneous acts, a case where, for instance, by forgoing optimization among momentary acts now, one optimizes one's future. Such cases cannot arise given appraisal according to acts' worlds. If the possible world resulting from an allegedly nonoptimal momentary act is an optimal future, then the momentary act is really optimal, because its world contains its temporal sequel. Its generating an optimal future makes it an optimal momentary act.

Sometimes, comparison of decision theory with other theories suggests optimization among extended acts. Take ethical theory. Some moralists argue for rule‐consequentialism over act‐consequentialism. They argue for following rules that optimize among possible rules. This position may be appealing because people follow moral rules, such as rules prohibiting violence, and because the acts the rules mandate seem to have noninstrumental moral value. But similar reasons do not ground the rationality of optimization among rules. In typical cases, the standard of rationality for acts is instrumental. An act is rational if it promotes rational values. An act does not have noninstrumental rational value that supports following a rule that requires the act. Compare sparing another person pain with preventing pain to oneself. Morality may require sparing another person pain even if causing him pain spares many other people pain. On the other hand, rationality does not require preventing pain to oneself even if causing it prevents pain to oneself many other times. Acts of a certain type may have noninstrumental moral value that rules to perform that type of act inherit. But typically, acts do not have noninstrumental rational value that rules inherit.

Next, consider physics. Imagine one is studying a system of molecules that form a gas. Studying the system only at a moment is short‐sighted. Understanding the system requires studying its dynamics, too. Similarly, it may seem that optimization among momentary acts overlooks a momentary act's connections with the past and the future. The analogy is imperfect, however. Optimization among momentary acts takes account of temporal context. If a connection with the past or future is valuable, then a present act derives value from establishing that connection. A momentary act's utility depends on the act's outcome including connections with the past and future. It takes a broad view stretching across time, not just a snapshot at a moment.

(p.197) The reasons for optimizing among extended acts are either eliminated by idealizations or else are already covered by perspicacious optimization among momentary acts. Moreover, applying optimization to extended acts runs into serious difficulties. One proposal recommends a maximally extended act optimal among all acts occupying the same time interval. This recommendation yields the advice to lead the best life possible. Another proposal recommends optimizing among acts that start now, regardless of their duration. Both proposals face a powerful objection. Some extended acts optimal among acts occupying the same interval, or starting at the same time, include momentary acts not optimal among acts occupying the same moment. To execute one of these extended acts, at some time the agent must pick a momentary act worse than a rival momentary act. This is irrational for ideal agents ideally situated. An extended act can be executed only by executing its stages. If a stage is irrational, then so is the whole act, even if it is optimal in some comparison classes. The attractions of the whole do not make up for the flaws of the parts.

For example, suppose that an agent will choose between $1.00 and $2.00. If he will choose the lesser amount, a predictor of that choice gives him $5.00 before his choice. The agent's optimal extended act involves later choosing $1.00 rather than $2.00. But if, when the time comes, the predictor has already given him $5.00, he lacks a reason to take $1.00 rather than $2.00. Taking the lesser amount is irrational even if it is part of an optimal extended act. Given my idealizations, an extended act is rational if and only if each of its momentary stages is optimal with respect to acts that can be performed at its time.3
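
To make the comparison concrete, here is a minimal sketch in Python that enumerates the payoffs, assuming a perfectly accurate predictor; the function name and the tabulation are mine, added only for illustration.

# Predictor example: the agent later chooses $1.00 or $2.00; if the predictor
# foresees a choice of $1.00, she gives the agent $5.00 beforehand.
def total_payoff(choice, predicted_choice):
    bonus = 5.00 if predicted_choice == 1.00 else 0.00
    return bonus + choice

# Optimization among extended acts: the prediction matches the choice.
extended = {c: total_payoff(c, predicted_choice=c) for c in (1.00, 2.00)}
# extended == {1.0: 6.0, 2.0: 2.0}; the optimal extended act takes $1.00.

# Optimization among momentary acts: at choice time the bonus is already fixed,
# so whichever amount the predictor gave, taking $2.00 gains $1.00 more.
for bonus_given in (5.00, 0.00):
    momentary = {c: bonus_given + c for c in (1.00, 2.00)}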

Optimization among momentary acts differs from optimization among extended acts in other cases in which an agent is rewarded because another predicts that he will perform an act nonoptimal among acts at the same moment. Suppose that a bully leaves alone a child he predicts will futilely combat an attack. Then the child gains from a disposition to deviate from optimization moment by moment. Even in such cases, the rational course is to optimize moment by moment because only the disposition to deviate brings benefits, not the deviation itself. Sections 7.3 and 7.4 argue this point further, evaluating separately dispositions and the acts they yield and also forming intentions and carrying them out. They treat cases in which a momentary act is irrational although it is part of an optimizing extended act.

In general, if optimizing among extended acts differs from optimizing among momentary acts, then at some moment it asks an agent to perform an act not best at the moment because it is part of a best extended act. It demands forgoing the best momentary act for some gain from the extended act. But evaluation of the best momentary act is comprehensive. That momentary act is best taking account of the gain from the extended act. So there is no reason on balance to forgo the best momentary act.

The strongest argument I know for forgoing optimization among momentary acts draws an analogy. Suppose that buying a red hat is optimal. One may nonetheless buy a hat; although that act is not optimal, it is permissible because the optimal act entails it. Similarly, the argument claims, one may perform a momentary act that is not optimal because it is entailed by an optimal extended act.

(p.198) A disanalogy defeats the argument, however. Performing a nonoptimal momentary act rules out the optimal momentary act, although buying a hat does not rule out the optimal act of buying a red hat. The argument does not provide a reason to tolerate nonoptimal momentary acts.

Inconsistency results from advancing optimization for both momentary and extended acts. As the examples show, in some cases an optimal extended act contains a nonoptimal momentary act. The agent cannot realize a rival optimal momentary act and also the optimal extended act. Rationality cannot require both forms of optimization, and the balance of considerations supports applying optimization to momentary acts only.

A.1.2. Uncertainty

The principle of utility maximization is more general than the principle of optimization. It is designed for agents who may lack full information. Without full information an agent may not know which acts are in her direct control. To ensure that an agent is responsible for maximizing, the principle should be restricted to options she knows are in her direct control. I assume that an agent knows that her decisions, at least, are in her direct control. So I present the principle of utility maximization for decisions, a particular type of momentary act.4

Is my version of the principle too restrictive? Maximization among all acts an agent is certain she can perform, decisions and nondecisions alike, also keeps uncertainty in its place. Why apply maximization to decisions alone? Shouldn't the principle require an agent at a moment to maximize utility among all acts at the moment she is certain she can perform?

Simplification is one reason to apply utility maximization to decisions alone. The restriction does not sacrifice substance. It does not actually eliminate any act an ideal agent is certain she can perform. Such acts correspond to decisions. A nondecision an agent is certain she can perform may be represented by a decision to perform it. The decision may substitute for the nondecision because they are equally within the agent's control and have the same utility. Deciding is not obstructionist for ideal agents. Any act an agent can perform, except a spontaneous act, may be the product of a decision. If an agent is certain she can perform the act, she is certain she can decide to perform it and then carry out her decision. Also, decision is costless under my idealizations. A decision to perform an act has no relevant consequences besides the act and its consequences. Without decision costs, an act whose execution is certain has the same utility as a decision to perform that act. So the decision to perform the act may replace the act in maximization problems. The special case of a spontaneous act may be taken as the product of the null decision, a decision without content. A rational ideal agent in ideal circumstances assigns it the utility of maximizing among spontaneous acts at the moment. Consequently, considering all decisions amounts to considering each act whose execution is certain, and maximizing utility among decisions maximizes utility among those acts. Action problems reduce to decision problems.5

My idealizations justify focusing on decisions, but that focus has an independent warrant, too. Utility maximization is a necessary condition of rationality (p.199) for ideal agents. Therefore, its restriction to decisions does not conflict with its extension to all momentary acts an agent is certain she can perform. No conflict arises if to be rational a decision, besides maximizing utility among rival possible decisions, must also maximize utility in a more inclusive set of momentary acts. My restricted principle is not threatened by the possibility that utility maximization governs momentary acts besides decisions.

Making decisions the comparison set for the principle of utility maximization conforms with tradition. The principle of utility maximization belongs to decision theory. Textbook examples explicitly treat acts but implicitly treat decisions to perform acts. The acts specified give the contents of the decisions considered. The examples assume that an act's consequences do not differ relevantly from the consequences of a decision to perform the act. Restricting utility maximization to decisions is convenient and sets the stage for dealing with decision costs.

A.2. Utility of Worlds

Broome (1991) presents three dimensions along which an option's utility may be divided into components. They are the dimensions of possible outcomes, time, and people. He considers the possibility of dividing an option's utility into components that concern sorts of good such as the realization of an agent's goals, say, fame and comfort, but does not think that the analysis can be achieved (25–26). Binmore (1998: 362–63) distinguishes direct utility that attaches to final ends from indirect utility that attaches to means. Direct utility focuses on outcomes, is insensitive to changes in information, and assesses realizations of goals. Indirect utility focuses on acts, is sensitive to changes in information, and assesses prospects for realizations of goals. In a decision problem, Binmore does not analyze indirect utility in terms of direct utility, nor does he analyze an outcome's direct utility in terms of the agent's final ends. He defines indirect utility, or utility tout court, in terms of choices so that an agent's deciding in a way that maximizes utility just amounts to deciding in a way that is consistent with the agent's other decisions (1994: 50–51). He does not think that utility can be interpreted so that an agent's utility assignment explains a decision's rationality (180–81).

Broome and Binmore describe a traditional method of analyzing a world's utility in terms of basic goals or final ends but do not flesh out that method. This section formulates the method precisely to explain a world's utility and, through it, an option's utility. A precursor is Keeney and Raiffa's (1976) method of assessing an option's outcome in terms of multiple objectives.

Given full information and other idealizations, section 2.1.2 claims that an act's utility equals the utility of the act's outcome, the trimmed world that would be realized if the act were realized. The utility of the act's world depends on an agent's basic intrinsic attitudes. To introduce these attitudes, I first consider intrinsic attitudes.

The intrinsic attitudes of interest are intrinsic desire, aversion, and indifference. These differ from their extrinsic counterparts in evaluating a proposition with respect to its logical consequences and not also other features of its realization, such as its causal consequences. Thus, a person's intrinsic desire to be wise attends (p.200) only to the logical consequences of being wise (such as being prudent) and not also to wisdom's monetary rewards.6

An intrinsic desire is not a desire that is intrinsic, but rather a desire whose grounds are intrinsic features of its object's realization. A person's intrinsic attitude toward a proposition depends on not only the intrinsic features of the proposition, the proposition's logical consequences, but also the person's evaluation of those features. She evaluates the proposition's realization, the realization of its logical consequences. One person may intrinsically desire that the Cubs win the World Series while another person lacks that desire. Both nonetheless evaluate the same proposition, focusing on only its logical consequences. Moreover, a person may have an intrinsic and an extrinsic desire concerning the same proposition. She may intrinsically desire to be healthy because of health's logical consequences (such as life) and extrinsically desire to be healthy because of health's causal consequences (such as productivity). The type of evaluation affects the character of the attitude to the proposition, although the proposition itself is constant.

An agent's basic intrinsic desires, aversions, and attitudes of indifference are the attitudes that cause the agent's other intrinsic desires, aversions, and attitudes of indifference. For example, an agent's basic intrinsic desires to be healthy and to be wise may cause her intrinsic desire to be healthy and wise. An agent's basic intrinsic attitudes (BITs) are the foundation of her intrinsic attitudes and her utility assignment.7

In a rational ideal agent, intrinsic attitudes obey rules of coherence, and so intrinsic attitudes have a certain structure. In other agents, they are less regimented. Because causation may operate in various ways, anomalies may arise: (1) a person's extrinsic desire for money may prompt thoughts about having the wisdom to spend well; these thoughts about wisdom may then produce an intrinsic desire for wisdom; (2) an agent's intrinsic desire for health may lead to thoughts about having the wisdom to live well, which then produce an intrinsic desire for wisdom; (3) a person may think about health and then wisdom and consequently intrinsically desire wisdom more strongly; as a result, he may think about wisdom and then health and consequently intrinsically desire health more strongly. The symbiotic relation between the intrinsic desires may ratchet up their intensities in a mutually supportive way. Are these three anomalies trouble for the view that BITs are a causal foundation for other intrinsic attitudes (ITs)?

The foundational thesis is this: BITs are sufficient sustaining causes of other ITs, and no ITs are sufficient sustaining causes of any BIT. A sufficient cause is a complete cause, not just a contributing cause. A sustaining cause's effect ends as soon as the cause ceases. The cause is contemporaneous with its effect, not just prior to its effect. Just as a table holds up a vase at the very time it keeps the vase from falling, BITs are contemporaneous with the intrinsic attitudes they cause.

None of the three anomalies refutes the view that BITs are a causal foundation for intrinsic attitudes. They are not cases in which a BIT has as a sufficient sustaining cause some other intrinsic attitudes. They present causes that are not intrinsic attitudes or are not sufficient sustaining causes. Even the case of ratcheting up strengths of intrinsic desire is not a counterexample because a sufficient sustaining cause of a BIT must fully account for the BIT's intensity. No pair of (p.201) basic intrinsic attitudes is such that each attitude is a sufficient sustaining cause of the other.8

A rational ideal agent's BITs are typically independent of information because they assess logical consequences, not empirical consequences. However, suppose that such an agent is ignorant about some a posteriori matters. In some cases, new information, without providing grounds for a change in BITs, may trigger a revision of BITs. BITs are revisable without reason even if the revision process must be judicious. For a rational, fully informed ideal agent, supposition of an act's performance may involve a similar ungrounded change in BITs. It is possible, therefore, that if an act were performed, the agent's utility assignment to worlds would change. For example, joining a group may lead to adopting its values. How should one interpret the utility of an act? Should it assess the act according to current BITs, or according to the BITs the act generates? These generated BITs are hypothetical if the act is not performed and future if the act is performed.

Of course, a rational agent has an intrinsic desire for the realization of her other intrinsic desires and an intrinsic aversion to the realization of her other intrinsic aversions. These attitudes promote conformity between an assessment of an act's world with respect to current BITs and an assessment of that world with respect to BITs the act generates. However, conflicting intrinsic desires may arise. Suppose an agent is intrinsically averse to having a certain intrinsic desire, and consider an unperformed act in whose world the agent has that intrinsic desire and it is realized. In an assessment of the act's world, the intrinsic aversion conflicts with the general intrinsic desire for the realization of intrinsic desires. Because a current intrinsic aversion disapproves of the agent's intrinsic desire in the act's world, the world's utility is lower with respect to current BITs than with respect to BITs in the act's world. It therefore makes a difference whether one evaluates the act's world using current BITs or that world's BITs.

I use current BITs to assess all acts. To yield rational action, the principle of utility maximization needs an assessment of acts using current BITs. Rationality requires that current acts serve current BITs. An act's utility is therefore the current utility of its outcome, not the utility that the act's outcome would have if the act were performed. It is the current utility assignment to the act's world, not a hypothetical or future utility assignment in the act's world.9

What counts as a current BIT is ambiguous, however. Are the current BITs those that control an act's selection and obtain just prior to its performance, or those that obtain at the moment the act is performed? The difference matters because the act's performance may immediately create new BITs. Perhaps someone craving a cigarette is repulsed the moment he takes a puff.10 I take the goal of rationality to be an act supported by contemporaneous BITs. The reasons for an act involve those BITs. Prior higher‐order BITs controlling an act's performance should aim for an act supported by BITs when the act is performed. This account of rationality makes BITs a potentially unstable basis of acts' comparison. As one supposes various acts, BITs accompanying the act supposed may vary. Thus, the utility assignment evaluating acts may change from act to act. Chapter 8 addresses this complication. Earlier chapters put it aside by adopting the idealization of a stable basis of comparison for acts.11

(p.202) Ordinary utility evaluates the total outcome of its object's realization. I sometimes call it comprehensive utility. An analog that evaluates just the logical consequences of its object's realization I call intrinsic utility. The intrinsic utility of a BIT's realization is the degree of intrinsic desire for its realization. This is positive in the case of an intrinsic desire, negative in the case of an intrinsic aversion, and zero in the case of an attitude of intrinsic indifference. I assume an agent with a finite number of BITs, all comparable, forming a causal foundation for intrinsic attitudes as specified. The intrinsic utility of an act's outcome, a possible world, is then the sum of the intrinsic utilities of the objects of BITs realized in that world. The next section supports this tradition‐inspired summation principle.

A world's utility is its intrinsic utility because all aspects of its outcome are logical consequences. Because a world's intrinsic utility is the sum of the intrinsic utilities of the objects of BITs realized there, the world's utility is the same sum. Furthermore, an act's utility is its outcome's utility, the utility of the act's world. Transitivity of identity thus yields the following principle of utility:

An act's utility is the sum of the intrinsic utilities of the objects of the BITs that would be realized if the act were performed.

This principle presumes that agents are ideal and ideally situated, in particular, fully informed. A fully informed agent knows the world that would be realized if the act were performed. This knowledge is presumed by the principle's assumption that an act's utility equals its outcome's utility, its world's utility. If the agent were not fully informed, she might not know the act's outcome and the act's utility might not equal its outcome's utility. The principle's application assumes the existence of quantitative intrinsic utilities, but its accuracy does not. In their absence, the principle does not apply and so is not violated.
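
As a minimal illustration of the principle, the following Python sketch computes an act's utility by summing intrinsic utilities over the BITs realized in the act's world; the particular BITs and numbers are hypothetical, chosen only for the example.

# An act's utility is the sum of the intrinsic utilities of the objects of the
# BITs that would be realized if the act were performed (full information assumed).
# The BITs and their intrinsic utilities below are illustrative assumptions.
intrinsic_utility = {"health": 1.0, "wisdom": 1.0, "pain": -2.0}

def act_utility(bits_realized_in_acts_world):
    return sum(intrinsic_utility[bit] for bit in bits_realized_in_acts_world)

# Example: in the act's world the agent is healthy and wise but suffers pain.
print(act_utility({"health", "wisdom", "pain"}))  # 1.0 + 1.0 - 2.0 = 0.0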

The rest of this section responds to some worries about analyzing a world's utility according to the BITs it realizes. I suspect there are two sources of reluctance to embrace this method of analysis, which I call intrinsic utility analysis. First, additivity may seem to fail because of complementarity between realizations of BITs. Second, the analysis may seem nonoperational because intrinsic attitudes seem nonoperational. I address these concerns in order.

Cases of alleged counterexamples to additivity either misidentify BITs or else ignore BITs. Suppose that someone likes sardines and likes chocolate but does not like eating both together. Assuming that the only relevant BITs are gustatory, is this a counterexample to the summation principle? To answer, the BITs must be identified. In a normal case, the objects of BITs are the taste of sardines alone and the taste of chocolate alone. These BITs are not realized when both foods are eaten together.

Another case supposes that an agent has an intrinsic desire for leisure and an intrinsic aversion to having this intrinsic desire. Imagine a world in which both the desire and the aversion are realized, and imagine that the world's utility is not a sum of the intrinsic utilities of realizing them. This case is not troubling for the summation principle unless the two intrinsic attitudes are basic and the only basic intrinsic attitudes the world realizes. However, in a rational ideal agent, the sort to which the principle applies, the two attitudes are not basic. They are not causally (p.203) independent. The intrinsic aversion to desiring leisure influences the intrinsic attitude to leisure.12

Next, take the objection that intrinsic attitudes are not operational. Operationist theories of meaning have been refuted.13 The only plausible operationist standard is inferential. It insists on testability for theoretical entities. Intrinsic utilities meet the standard. In a rational ideal agent, one may identify an intrinsic attitude as one independent of information and a BIT as an intrinsic attitude not caused (in the way explained) by other intrinsic attitudes. Also, one may infer the intrinsic utilities of objects of BITs from the utilities of worlds, which are operationalizable in terms of preferences among gambles, as in Savage (1972: chap. 3). To illustrate the method of inference, grant that health and wisdom are objects of basic intrinsic desires and hence BITs. Suppose, moreover, that no other BITs exist. Then trimmed worlds treat only health and wisdom. Imagine that U(H & W) = 2, U(H & ∼W) = U(∼H & W) = 1, and U(∼H & ∼W) = 0. Using IU to stand for intrinsic utility, it follows that IU(H) = IU(W) = 1.
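
The inference can be written out as a small calculation. The sketch below merely re-expresses the text's numbers under the summation principle, assuming H and W are the only BITs.

# Recover IU(H) and IU(W) from the utilities of the four trimmed worlds.
world_utility = {
    ("H", "W"): 2.0,   # U(H & W)
    ("H",): 1.0,       # U(H & ~W)
    ("W",): 1.0,       # U(~H & W)
    (): 0.0,           # U(~H & ~W)
}

# By the summation principle, a world's utility is the sum of the intrinsic
# utilities of the objects of the BITs realized there, so:
iu_H = world_utility[("H",)] - world_utility[()]   # 1.0
iu_W = world_utility[("W",)] - world_utility[()]   # 1.0
assert iu_H + iu_W == world_utility[("H", "W")]    # consistency check: 1 + 1 = 2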

One may construct a representation theorem generalizing this pattern of inference. It states sufficient conditions for inferring intrinsic utilities of objects of BITs from utilities of worlds. The theorem assumes that intrinsic utilities of BITs yield intrinsic utilities of worlds in accordance with the summation principle. It also assumes that BITs have a rich, fine‐grained structure. Start with basic intrinsic desires (BIDs). Suppose that for any n, there are n BIDs of equal intensity realized with no other BITs in a world of unit intrinsic utility. The intrinsic utility of realizing one of the n BIDs is therefore 1/n. Suppose also that for any such BID, there are an indefinite number of other compatible BIDs of equal intensity and that every combination of m of them is realized with no other BITs in some world whose intrinsic utility is therefore m/n. As a result, for every positive rational number m/n, a world exists whose intrinsic utility equals that number. Then consider an arbitrary BID. By comparison of a world in which it is realized by itself with other worlds in which combinations of BIDs are realized by themselves, one can measure the intrinsic utility of a BID's realization as finely as desired. The same can be done for basic intrinsic aversions. The theorem thus concludes that if BITs have the structure described, then an IU function over worlds and realizations of BITs exists such that the intrinsic utility of a world is the sum of the intrinsic utilities of objects of BITs it realizes, and, taking intrinsic indifference as a zero point, the function is unique up to multiplication by a positive constant.14

A.3. The Principle of Pros and Cons

A general principle of utility analysis, the principle of pros and cons, supports the previous section's analysis of a world's utility. To obtain a proposition's utility, it says to list the pros and cons of the proposition's realization. Then, to indicate the importance of those considerations, attach utilities to them, positive or negative according as they are pros or cons. Finally, add the pros and cons using their utilities to obtain the proposition's utility. This principle of pros and cons is familiar, going back at least to Benjamin Franklin (1945: 280–81). The procedure it (p.204) sketches needs elaboration in applications. Precision requires directions for listing pros and cons and attaching weights to them.

When applying the principle of pros and cons to obtain a proposition's utility, the first step is to separate considerations bearing on an evaluation of the proposition's realization. The separation of considerations must satisfy two conditions: (1) no consideration may be counted twice, and (2) no relevant consideration may be omitted. If these conditions are not satisfied, then adding the considerations' utilities may not yield the proposition's utility. For double‐counting inappropriately boosts the utilities' sum and omission inappropriately lowers it.

It is difficult to divide relevant considerations so that none is omitted or double‐counted. The more considerations entertained, the less likely is omission but the more likely is double‐counting. Everyday deliberations typically fail to separate relevant considerations adequately. Someone buying a new car, for instance, may rate models according to styling, fuel economy, and other considerations. But if he likes aerodynamic sleekness, a factor affecting his rating for fuel economy also influences his rating for styling. Then adding his ratings implicitly double‐counts that factor.

The second step in applying the principle of pros and cons is to obtain the utilities of considerations. A difficulty in this second step is making assessments of utility quantitative. In some cases, one consideration clearly has more utility than another, but not clearly a certain number of times more utility. Because of this difficulty, the principle of pros and cons seems impractical. Like most quantitative methods of treating the mental, it appears unrealistic. Quantitative methods may work for corporations single‐mindedly seeking profit, but they seem ungrounded in other contexts.

This worry is not an objection to the principle of pros and cons, but a reservation about its range of application. Where the principle applies, the input for it is available; considerations have quantitative weights. I table the worry by idealizing when applying the principle. I assume circumstances that warrant the applications' quantitative aspects. Methods of utility analysis initially advanced under idealizations may be adjusted later, when the idealizations are removed.

To justify a form of utility analysis using the principle of pros and cons, I investigate the analysis's method of separating considerations and assigning utilities to them. First, I verify that it separates considerations in a way that neither omits nor double‐counts any relevant consideration. Then I verify that it assigns considerations suitable utilities. Section A.2's analysis of a world's utility passes the test. Basic intrinsic attitudes (BITs) separate considerations without omission or double‐counting. Also, the intrinsic utility of realizing a BIT is an appropriate weight for its realization. Adding the intrinsic utilities of objects of BITs realized in a world thus yields the world's utility. The analysis of a world's utility is a paradigmatic application of the principle of pros and cons.

The principle of pros and cons also supports expected utility analysis (section 2.4). The analysis separates considerations for and against an option according to subjective chances for the option's possible outcomes, as given by a partition of states.15 Chances for good outcomes are pros; chances for bad outcomes are cons. Given an option's realization, the agent has a chance for each of its possible (p.205) outcomes. The agent is certain to have the chances for the possible outcomes if he performs the option. Each chance obtains with certainty even if each possible outcome has only a limited probability of obtaining. The division of chances according to a partition of states guarantees that no chance is omitted or double‐counted. Even if the relevant possible outcomes are the same for two states, the corresponding chances are different because the probabilities of the possible outcomes come from different states.

The utility of a chance for a possible outcome is the utility of the outcome multiplied by the outcome's probability according to the chance. Adding the utilities of the chances is a way of adding pros and cons. An option's expected utility, the sum of the products of the probabilities and utilities of its possible outcomes, therefore yields its utility.
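
As a simple numerical illustration of adding the utilities of chances, the states, probabilities, and utilities in the Python sketch below are invented for the example; only the form of the calculation comes from the text.

# Expected utility analysis: an option's utility is the sum, over a partition of
# states, of P(state given option) * U(option given state).
chances = [
    # (probability of the state given the option, utility of the option given the state)
    (0.7, 10.0),   # a chance for a good outcome: a pro
    (0.3, -4.0),   # a chance for a bad outcome: a con
]
expected_utility = sum(p * u for p, u in chances)
print(expected_utility)  # 0.7*10 + 0.3*(-4) = 5.8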

A.4. Outcomes

Given full information, an act's utility assesses the act's outcome. But given uncertainty, an act's outcome may be unknown. More precisely, an agent may not know the world that would be realized if the act were realized, that is, the act's world. An implicit part of section 2.3's accommodation of uncertainty is a suitable interpretation of an act's outcome, the target of an act's utility assignment.

An act's world is a maximal consistent proposition. Because it may be unknown, its utility may be unknown as well. For an act's appraisal, one needs a utility that is known. Given uncertainty, one cannot take the act's utility as the utility of the act's world because the act's world may be any in the range of worlds where the act is realized. Instead, I take the act's utility to be the utility of the nonmaximal proposition that the act's world obtains. This proposition is true just in case the act's world is realized. Although the proposition's full expression, “The act's world obtains,” contains a name of the act's world, namely, the expression “the act's world,” that name does not specify the act's world. In contrast, the full expression of the proposition that is the act's world specifies all the details of the act's world. Given certainty, the utility of the act's world equals the utility of the proposition that the act's world obtains. But given uncertainty, they may differ. The utility of the proposition that the act's world obtains is the utility of a lottery over the worlds where the act is realized, not the utility of one of those worlds. Its utility is an estimate of the utility of the act's world. Although the act's world and its utility may be unknown, the proposition that the act's world obtains, and its utility, are known. I therefore take the proposition that the act's world obtains to express the act's outcome. This interpretation of an act's outcome makes its utility and the act's utility accessible despite uncertainty.16

My specification of an act's outcome makes it possible for agents to comply knowingly with utility rules despite uncertainty. In an application of a rule, an agent needs to know the propositions involved and their utilities. To achieve this end, I formulate each rule so that propositional variables are substitutional rather than directly referential. For example, the rule that an act's utility is the utility of its outcome, or U(a) = U(O[a]), is taken as a schema where a is a place‐holder for (p.206) a name of a proposition expressing an act rather than as a generalization where a has propositions as values. O[a] is to be replaced by a name for the act's outcome, a name the routine O forms from a name for the act. As the previous paragraph explains, the routine yields a name that fully specifies a nonmaximal proposition, the proposition that the act's world obtains. An ideal agent knows the proposition under that standard name despite ignorance of the act's world. The proposition that the act's world obtains is itself fully specified even though it is about a maximal consistent proposition that is not fully specified. Because the act's outcome is fully specified, the agent knows the utility he assigns to it. He is in a position to make sure that its utility equals the act's utility.17

Accommodating uncertainty also brings attention to the objects of utility assignments, which, according to section 2.1.2, are propositions. I subscribe to the Russellian view of propositions, expounded, for example, by D. Kaplan (1989). It takes propositions as structured entities (and so not sets of possible worlds) and allows them to contain individuals (and so not just concepts of individuals). Sentences expressing propositions may contain directly referential terms that refer to individuals without the mediation of a Fregean sense. An example of a directly referential term is a variable under an assignment of a value to it. Granting that proper names are directly referential, “Cicero is Tully” and “Cicero is Cicero” express the same proposition because “Cicero” and “Tully” refer to the same individual.

An agent evaluates a proposition according to a way of grasping it, a way indicated in a context by a name of the proposition.18 Strictly speaking, utility attaches to a proposition taken a certain way. Ways of grasping a proposition add grain to the objects of utility. My utility rules make the additional grain irrelevant by specifying a standard way of grasping a proposition. They fix the way a proposition is grasped to make the way of grasping it an otiose parameter. My rules assume that a propositional variable is replaced by a standard sentential name of a proposition, under which ideal agents grasp the proposition. A propositional variable is not replaced, for instance, by the name “Quine's favorite proposition.” This descriptive name may denote the proposition that Cicero is Tully, but that proposition may not be grasped under the descriptive name, and so under that name an agent may not know its utility. An agent's responsibilities under standards of rationality concern propositions as the agent grasps them.

Also, in utility laws a propositional variable must be replaced uniformly by the same sentential name for a proposition. Otherwise, a proposition with two sentential names may receive two utilities, one for each of two ways the proposition is grasped. For instance, U(Cicero is Tully) may differ from U(Cicero is Cicero), even though “Cicero is Tully” and “Cicero is Cicero” express the same proposition, if the agent does not know that Tully is Cicero. Multiple occurrences of a single propositional variable in a utility law should not occasion questions about identity of propositions.

In contrast, occurrences of multiple variables in a utility law may generate such questions. Consider the two‐variable utility law that U(p) = U(q) if p and q are a priori equivalent. Under my interpretation, it says that the identity holds if it is an a priori matter that the names replacing p and q name equivalent propositions. (p.207) The law permits U(Cicero is Tully) to differ from U(Cicero is Cicero) because it is not an a priori matter that the proposition that Cicero is Tully is the same as, and thus equivalent to, the proposition that Cicero is Cicero.

In short, utility laws often function as if utility attached to sentences rather than propositions. I take utility to attach to propositions, nonetheless, because some sentences are ambiguous and indexical, so not sufficiently fine‐grained. The same sentence may express different propositions in different contexts.19 In addition, in some cases an agent knows that two sentential names for a proposition express the same proposition. In those cases, the agent's utility assignment is insensitive to the name used, and so sentences are too fine‐grained.

Given my idealizations, rational ideal agents knowingly comply with utility rules in instances that uniformly replace propositional variables with standard sentential names of propositions and follow prescribed routines for constructing more complex propositional names from those sentential names. In particular, they comply with applications of the expected utility principle, U(o) = Σi P(si given o)U(o given si), given transparent designations of options and states.

A.5. Intrinsic Utility Analysis

Expected utility analysis provides a reliable method of calculating an option's utility given uncertainty about the option's outcome. However, when agents are nonideal and in nonideal circumstances, it is often advantageous to have multiple ways of generating an option's utility. Conditions may block or impede application of one form of analysis but leave open another form of analysis. This section presents a new form of utility analysis. It draws on section A.2's method of analyzing a world's utility according to basic intrinsic attitudes (BITs) realized.

Given uncertainty, an option's utility is an estimate of, and not necessarily the same as, the utility of the option's world. Its value may depend on the utilities of many possible worlds. Nonetheless, intrinsic utilities are useful for analyzing an option's utility. The intrinsic utilities of BITs' objects and their probabilities of realization given an option yield an option's utility. The utility analysis treats realizations of an agent's BITs as possible results of an option o's realization. It assumes that there is an intrinsic utility of realizing each BIT and a probability of realizing each BIT if o were realized. To obtain U(o) it then takes the intrinsic utility of each BIT's object, weights it by its probability given o, and adds the products as follows:

Intrinsic utility analysis. U(o) = Σj P(BITj given o)IU(BITj), where BITj ranges over the objects of all BITs.

In the analysis, unlike the probabilities of BITs' objects, the intrinsic utilities of BITs' objects need not be conditional on the option. The option's supposition does not influence their intrinsic utilities, assuming the stability of BITs, because intrinsic utilities depend only on logical consequences and o's realization does not change the logical consequences of a BIT's realization.
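
A minimal sketch of the formula follows; the BITs, probabilities, and intrinsic utilities are hypothetical values chosen only to show the arithmetic.

# Intrinsic utility analysis: U(o) = sum over BITs of P(BITj given o) * IU(BITj).
bits = {
    # object of BIT: (probability of realization if o were realized, intrinsic utility)
    "health": (0.8, 1.0),
    "wisdom": (0.5, 1.0),
    "pain":   (0.2, -2.0),
}

def option_utility(bits):
    return sum(p * iu for p, iu in bits.values())

print(option_utility(bits))  # 0.8*1 + 0.5*1 + 0.2*(-2) = 0.9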

The assumptions for the principle of intrinsic utility analysis are similar to those for section A.2's analysis of a world's utility. The principle assumes the rationality of (p.208) input probabilities and utilities and the absence of cognitive limits. Conformity with the principle is a requirement of rationality for agents meeting my idealizations.

The principle of pros and cons (section A.3) supports intrinsic utility analysis. An option's pros and cons are the chances it generates for realizations of BITs. Chances for realizations of basic intrinsic desires are pros; chances for realizations of basic intrinsic aversions are cons. The weights of these considerations are their utilities. The utility of a chance for the realization of a BIT is the probability of its realization times the intrinsic utility of its realization. The chances for BITs' realizations cover all relevant considerations without double‐counting. Addition of the utilities of their chances of realization therefore yields the option's utility. Weirich (2001a: sec. A.3) shows that intrinsic utility analysis is consistent with expected utility analysis.

A.6. Conditional Probability and Utility

The principles of optimization, utility maximization, and expected utility entertain options and states not realized. How are those unrealized possibilities imagined? What features of the actual world do their suppositions preserve?

The principle of optimization (section 2.1) evaluates an act by evaluating the act's world, the world that would be realized if the act were realized. It matters that an act's world is entertained by supposing that the act were realized rather than that the act is realized. The subjunctive form of supposition is sensitive to the act's causal influence, whereas the indicative form of supposition is sensitive to the act's epistemic influence. The principle of optimization, being a principle of rational action, should be sensitive to an act's causal influence rather than its epistemic influence. In favorable circumstances, a rational act causes a good outcome rather than creates evidence of a good outcome.20

According to expected utility analysis (section 2.4),

U(o) = Σi P(si given o)U(o given si).

This principle also prompts questions about the appropriate way of supposing options and states. How should one interpret U(o given si), the utility of an option given a state? The role of the state is to provide information about the option's outcome. Thus, the conditional utility supposes the state indicatively to bring to bear the information it carries. On the other hand, the conditional utility evaluates the option to direct the agent's decision. It considers the option's causal consequences. It therefore supposes the option subjunctively. It seeks the outcome that would obtain if the option were realized. Because the option and state are supposed differently, an option's utility given a state should not be taken as the utility of the option and state conjoined. Conjoining the option and state to form a single object of utility makes it impossible to do justice to each. Moreover, the expected utility principle imposes no restriction on states. Hence, it has to handle cases in which an option and its negation serve as states. This means evaluating U(o given ∼o). (p.209) For generality, this utility should have a value, and intuitively it does, but intuitively U(o & ∼o) has no value, as Armendt (1988) observes. For these reasons, the utility of an option given a state is best taken as a primitive concept introduced by its role in utility theory.

How should one interpret P(si given o), the probability of a state given an option? In cases in which options may influence states, the probability of a state given an option should register the option's influence on the state rather than the evidence the option provides for the state. It should assess the state under a subjunctive supposition of the option. It should consider possibilities if the option were realized. I therefore take a state's probability given an option as a primitive quantity rather than, as in probability theory, as a ratio of nonconditional probabilities. According to the standard definition, a state's probability given an option registers the evidence the option provides for the state rather than its causal influence on the state. In contrast, I interpret P(si given o) so that it equals P(si) unless o does not merely provide evidence for s but causally influences s.21

Section 2.4's formulation of expected utility analysis needs further refinement to fully register the attractive or unattractive effects of an option's influence on states. Multiplying U(o given si) by P(si given o) does not suffice. Although the probability takes account of the influence of o on si, the utility must be modified to take account of that influence. For example, suppose that the agent desires that o cause si. This desire does not boost U(o given si), even if the agent believes that o causes si. The indicative supposition that si makes the belief idle. The supposition makes si part of the background for entertaining o and so precludes o's causing si. Although U(o given si) entertains worlds, not just causal consequences, the supposition of si carries implications about causal relations and so directs evaluation to a set of worlds where o does not cause si. The conditional utilities used in expected utility analysis must have suppositions that direct evaluation to the right set of worlds.

To obtain a utility that registers the influence of o on sᵢ, I conditionalize on the conditional that sᵢ if o. I replace U(o given sᵢ) with U(o given (sᵢ if o)). The latter quantity is the utility of the outcome that would obtain if the option were realized, given that it is the case that the state would obtain if the option were realized. Even though the conditional is supposed indicatively, the conditional is itself subjunctive, and in it sᵢ is supposed subjunctively.22 The change in type of supposition for sᵢ makes the revised conditional utility sensitive to o's causal influence on sᵢ. The complex condition is sensitive to o's causal influence on sᵢ, unlike the supposition that sᵢ obtains, with its implication in the context of U(o given sᵢ) that sᵢ obtains independently of the option realized. Because the supposition that sᵢ if o leaves it open that o causes sᵢ, the revised conditional utility increases if the agent believes and desires that o causes sᵢ. Using the subjunctive conditional as the condition for an option's utility accommodates cases in which the option has a desirable or undesirable influence on the state.23

Adopting these revisions yields an accurate, general form of expected utility analysis. According to it,

U(o) = Σᵢ P(sᵢ given o) U(o given (sᵢ if o)).

(p.210) With respect to a partition of states {sᵢ}, the summation ranges over only sᵢ such that it is possible that (sᵢ if o). This restriction is necessary because utility is not defined with respect to an impossible condition. Given my interpretation of the suppositions involved, the restriction ignores only sᵢ for which P(sᵢ given o) equals zero. So ignoring those states would not affect an option's expected utility even if utilities conditional on them were defined.24
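A short sketch, again with hypothetical numbers, shows how the revised analysis handles this restriction: conditional utilities U(o given (sᵢ if o)) are supplied only for states for which the condition is possible, and states with zero probability given the option are skipped without affecting the sum.

# Hypothetical inputs. U_rev[s] stands for U(o given (s if o)), taken as primitive;
# no value is supplied for s3 because (s3 if o) is assumed impossible.
P_given_o = {"s1": 0.6, "s2": 0.4, "s3": 0.0}
U_rev     = {"s1": 12, "s2": 3}

def revised_expected_utility(P_given_o, U_rev):
    # U(o) = sum over i of P(s_i given o) * U(o given (s_i if o)),
    # summing only over states for which the condition is possible.
    return sum(p * U_rev[s] for s, p in P_given_o.items() if p > 0)

print(revised_expected_utility(P_given_o, U_rev))   # 0.6*12 + 0.4*3 = 8.4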

A.7. Informed Utility's Primacy

Section 2.3 uses the goal of maximizing informed utility to explain the goal of maximizing utility. Maximizing informed utility is the primary goal, it claims. Beebee and Papineau (1997: 238–43) argue for the reverse. They claim that maximizing utility is the primary goal of rational decision and that maximizing informed utility is a subordinate goal. They state their claim in terms of expected utility rather than utility, but one may identify the two. Also, rather than contrast informed and current utility, they contrast utility resting on single‐case probabilities and utility resting on relative probabilities, novel probabilities standing between subjective and objective probabilities. In the cases I consider, however, utility resting on single‐case probabilities is informed utility. Moreover, utility resting on relative probabilities, for my purposes, is not importantly different from current utility, which rests on subjective probabilities.

Beebee and Papineau support their claim about the primacy of the goal of maximizing current utility by using that goal to derive the goal of maximizing informed utility. For the derivation, they use Ramsey's (1990) and Good's (1967) theorem about expectations of expectations to derive the intermediate goal of gathering relevant information. They move from current utility maximization, to gathering information, to informed utility maximization. The crucial theorem shows that relevant information increases the expected utility of the option of maximum expected utility, that is, the expected expected utility of the option adopted. Beebee and Papineau conclude that, given the availability of additional relevant information, gathering the information has higher expected utility than deciding without it.

An example displays the reasoning behind their argument. “Imagine you are presented with three sealed boxes. You know that a ball has been placed in one, and that the other two are empty, and that some chance mechanism has given each box the same 1/3 single‐case probability of getting the ball. Felicity offers you her £3 to your £2, with you winning the total stake if the ball is in box 1. You can either accept or decline the bet now (option G, for ‘go’), or you can look in box 3 and then accept or decline the same bet (option W, for ‘wait‐and‐see’)” (1997: 239). Notice that the chance mechanism has finished operating, so that the single‐case probability that the ball is in a given box is an informed probability with the value 1 or 0 according as the ball is or is not in that box.

Take utilities to equal expected gains in pounds. The expected gain from G is £0 because you will decline the bet, its being disadvantageous given current information. The expected gain from W is £0.33 because if you see a ball in box 3 (p.211) you will not bet, betting's then having an expected loss of £2, and if you do not see a ball in box 3 you will bet, betting's then having an expected gain of £0.50. Hence, W is preferable to G. In general it is better, if possible, to wait for additional relevant information before acting.
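The figures can be checked with a short calculation. The sketch below is mine, not Beebee and Papineau's; it uses their stakes and probabilities and recovers an expected gain of £0 for G and £1/3, about £0.33, for W.

from fractions import Fraction

your_stake, her_stake = 2, 3                 # you risk 2 pounds to win her 3

def gain_from_betting(p_ball_in_box1):
    # Expected gain of accepting the bet when the ball is in box 1 with the given probability.
    return p_ball_in_box1 * her_stake - (1 - p_ball_in_box1) * your_stake

# Option G: accept or decline now. Betting has expected gain -1/3, so you decline.
expected_G = max(gain_from_betting(Fraction(1, 3)), 0)

# Option W: look in box 3 first, then accept or decline the same bet.
p_ball_in_box3 = Fraction(1, 3)
gain_if_ball_seen = max(gain_from_betting(Fraction(0)), 0)        # decline; betting would lose 2
gain_if_no_ball   = max(gain_from_betting(Fraction(1, 2)), 0)     # bet; expected gain 1/2
expected_W = p_ball_in_box3 * gain_if_ball_seen + (1 - p_ball_in_box3) * gain_if_no_ball

print(expected_G, expected_W)                # 0 and 1/3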

This derivation is inadequate for showing that the goal of maximizing utility is primary, however. It depends on the assumption that acquiring information relevant to meeting goals is possible and has negligible cost. Not all cases meet that assumption. For instance, acquiring information may overload the mind or trigger destructive emotions. When the assumption is not met, the goal of maximizing utility does not yield the goal of maximizing informed utility. Thus, the derivation incompletely grounds the latter goal.

Beebee and Papineau's derivation of the goal of maximizing informed utility also has another flaw. The step from current utility maximization to gathering information uses anticipated future utilities to calculate current utilities. To illustrate, return to the example. Because one knows that on looking in box 3 and seeing a ball the utility of betting on box 1 will be −2, one uses that value as the current utility of that outcome of W. This is sensible because the future utility is known to be informed, and maximizing informed utility is a goal. But the goal of maximizing informed utility may not be used to derive the goal of gathering information, if the latter is to be used to derive the goal of maximizing informed utility. It is circular to use the goal of gathering information, with its implicit appeal to the goal of maximizing informed utility, to carry out that derivation.

In the argument for the value of gathering information, it is crucial that future utilities are known to be more informed than current utilities. In the example, the mere fact that the future utility of betting on box 1 will have a certain value given some information that W's realization might yield is not a good reason to use that value as the current utility of that outcome of W. To illustrate, suppose that you look in box 3 and then decide whether to go, G, or wait, W. Imagine that you see a ball in box 3. Also suppose that you know that after waiting you will forget you inspected box 3 and think you saw a ball in box 1. After waiting, you will bet, the expected gain from betting's being £3 then. It would be a mistake to take that future expected gain as the current utility of that result of W. The future expected gain is less well informed than the current expected gain. Future utilities are good reasons for current utilities only if informed.25
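In terms of the example's numbers, and as a sketch of my own rather than the text's, the contrast is between the informed current estimate, made while you still remember seeing the ball in box 3, and the misinformed future estimate made after the memory has been corrupted.

def gain_from_betting(p_ball_in_box1, your_stake=2, her_stake=3):
    return p_ball_in_box1 * her_stake - (1 - p_ball_in_box1) * your_stake

informed_now     = gain_from_betting(0.0)   # you saw the ball in box 3, so it is not in box 1: -2
after_forgetting = gain_from_betting(1.0)   # false memory places the ball in box 1 with certainty: +3
print(informed_now, after_forgetting)
# The future figure of +3 is less well informed than the current figure of -2,
# so it is not a good reason for the current utility of that result of W.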

To dramatize the inconclusiveness of Beebee and Papineau's derivation of the goal of maximizing informed utility, consider again its step from maximizing expected utility to maximizing expected expected utility. This step uses the principle that the goal of maximizing x justifies the goal of maximizing the expectation of x. This general principle instantiated to informed utility asserts that the goal of maximizing informed utility justifies the goal of maximizing expected informed utility, that is, maximizing utility. Hence, the reasoning their derivation assumes, generalized, grounds the reverse derivation.

Beebee and Papineau have not derived the goal of maximizing informed utility from the goal of maximizing current utility. Their derivation uses the goal they attempt to derive. Moreover, the goal of maximizing utility may be obtained from the goal of maximizing informed utility, contrary to their claims (1997: 242). (p.212) As section 2.3 explains, given standard idealizations, maximizing utility is the rational way to pursue the goal of maximizing informed utility. Maximizing informed utility is the primary goal of rational decisions.

Section 2.3's method of obtaining the goal of maximizing utility from the goal of maximizing informed utility takes probability as the guide of life. Given uncertainty, maximizing expected informed utility is sensible. It amounts to maximizing utility. In each particular case, the rational way of pursuing informed utility is to maximize utility. This derivation of the goal of maximizing utility is immediate, but the principle of expectation is basic and not a likely candidate for a deep‐going derivation from a more fundamental principle.

Showing that one goal is derivable from another is an inconclusive demonstration of primacy because mutual derivation is possible. The usual way of stating the goal of maximizing utility comprehends the special case of full information so that the goal of maximizing informed utility follows from it directly. Given section 2.3's reverse derivation, mutual derivability follows. Derivability does not indicate which goal is primary.

To verify maximizing informed utility's primacy over maximizing utility, consider which goal explains the other. Consider how changes in utility assignments affect other utility assignments. Imagine that an agent wants a prize and wants a lottery ticket for it, too. Suppose that he were to cease desiring the prize. Would he continue to want the lottery ticket? No, it would lose its appeal. On the other hand, suppose that he were to cease desiring the lottery ticket. Would he continue to want the prize? Yes, in the most plausible scenario, he desires the prize but not the ticket because he thinks the ticket offers a negligible chance of winning the prize. The utility of the prize is primary and the utility of the ticket subordinate. In general, informed utility of possible worlds generates utility in accordance with expected utility analysis, and therefore informed utility is primary and ordinary utility subordinate. Maximizing the primary form of utility is a rational agent's basic goal.
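A small sketch with hypothetical numbers makes the asymmetry concrete: on expected utility analysis, the ticket's utility derives from the informed utility of the prize together with the chance of winning, so extinguishing the desire for the prize extinguishes the ticket's appeal, whereas extinguishing the desire for the ticket leaves the prize's utility untouched.

def ticket_utility(prize_utility, p_win):
    # The ticket's utility is generated from the prize's utility by expected utility analysis.
    return p_win * prize_utility

prize_utility = 100          # hypothetical informed utility of the prize
p_win = 0.001                # negligible chance that the ticket wins

print(ticket_utility(prize_utility, p_win))   # 0.1: the ticket has little appeal
print(ticket_utility(0, p_win))               # 0.0: without desire for the prize, none at all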

Notes:

(1.)  Optimization among momentary acts supports the familiar principle of dynamic consistency (which in turn supports the standard of subgame‐perfect equilibrium for extensive‐form games). This principle declares a sequence of acts rational only if each act in the sequence maximizes utility at the time it is performed. For a discussion of dynamic consistency, see Strotz (1956), Hammond (1988), McClennen (1990: sec. 7.4, chap. 9), Machina (1991), Rabinowicz (1995), and Gauthier (1997). The interpretation of dynamic consistency varies slightly among these authors, and not all adopt my interpretation exactly. Some reject the principle of dynamic consistency. For a criticism of the principle, see Velleman (1997: 45–50). See Skyrms (1996: 38–42) for a defense of dynamic consistency, or, using his term, modular rationality.

(2.)  If it seems impossible for an actual and a merely possible act to share a stage, given that acts are concrete, let them have stages with the same propositional representation. Having the same propositional representation is all that matters in the end because desirabilities attach to propositions.

(3.)   Sobel (1997) investigates pairwise preferences. A pairwise preference between two options is a preference between them given that choice is restricted to them. Sobel argues that cyclical pairwise preferences may be rational for ideal agents in ideal circumstances because the changing conditions of the pairwise preferences in the cycle change the grounds for preferences. Sobel (2001: sec. 5) argues that a perfectly reasonable and ideally knowledgeable agent with cyclical pairwise preferences may be financially ruined by a series of cyclical trades known as a money pump. Sobel's examples are cases where stepwise optimization among momentary acts yields an extended act nonoptimal among contemporaneous acts. Like my example, his examples argue that rationality counsels a series of acts, each of which optimizes among acts at the same time, rather than an extended act that optimizes among acts filling the same period. However, his examples rest on controversial assumptions. In particular, they rest on the assumption that rationality does not require changing the cyclical pairwise preferences that generate the money pump.

(4.)  My application of the principle of utility maximization to decisions has motives similar to Jeffrey's (1983: 83) application of the principle to attempts or tryings and Joyce's (1999: 57) application of the principle to exercises of will.

(5.)  The reduction is also suitable as an independently adopted idealization for the principle of utility maximization. However, if it were independently adopted, then whenever one removes the idealization that decisions are costless, one must also remove the idealization that the reduction holds. Decision costs may create a disparity between the utilities of top decisions and top acts of certain execution.

In my ideal cases, an agent knows that she will execute any decision she adopts. She also knows that she is fully rational except possibly in the current decision problem. These (p.243) idealizations rule out situations in which utility maximization evaluates differently a decision and its execution. In nonideal cases such as Kavka's (1983) toxin puzzle, however, a maximizing decision may settle on a nonmaximizing act. A maximizing agent then adopts but does not execute the decision.

(6.)  The term “intrinsic desire” is Brandt's (1979: 111). He uses it for a desire assessing intrinsic qualities in part. I use it for a desire assessing intrinsic qualities exclusively. My account of intrinsic desires follows Weirich (2001a: sec. 2.1).

(7.)  Basic intrinsic desires are analogous in many ways to basic intrinsic values as described by Harman (1967), Feldman (2000), and Zimmerman (2001: chap. 5).

(8.)  I thank Wlodek Rabinowicz and James Joyce for good questions about the causes of BITs.

(9.)  A consequence of this view is that an agent may rationally perform an act she knows she will regret. See Weirich (1981) and, for an opposing viewpoint, Fuchs (1985).

(10.)  In Racine's (1986: 16–17) Phaedra, Act I, scene 3, Oenone attributes similar fickleness to Phaedra:

  • Her wishes war against each other still.
  • 'Twas you who, full of self‐reproach, just now
  • Insisted that our hands adorn your brow;
  • You who called back your strength so that you might
  • Come forth and once more see the light.
  • Yet seeing it, you all but turn and flee,
  • Hating the light which you came forth to see.

(11.)  Normally, I assume that BITs do not change as an act is performed. But my idealization allows BITs to change as long as they change the same way given any act. Hence, the idealization does not rule out moments when BITs change. It does not require that BITs be constant throughout an agent's life.

(12.)  I thank Troy Nunley for illuminating discussion of cases involving higher‐order intrinsic attitudes.

(13.)   Weirich (2001a: sec. 1.4) reviews objections to operationism.

(14.)  What types of input may a representation theorem use? This is an open question. Quantitative relations are too powerful given the objective of reducing the quantitative to the comparative. Preferences are too theoretical given strict operationist objectives. For a modest, standard inferential objective, however, one may use the input of the representation theorem sketched. Its input and assumptions are comparable to those of other, standard representation theorems.

(15.)  To realize an x percent subjective chance of an outcome is to perform an act that yields that outcome with a subjective probability of x percent. The subjective chance is a result of the act, whereas the subjective probability is a degree of belief concerning the act's outcome. Having terms for both the chance and the probability aids exposition.

(16.)   Section 2.1 takes an act's utility as the utility of the act's world merely for simplicity. To make a seamless transition from maximization given full information to maximization given uncertainty, one may recast section 2.1.3's principle of maximization so that it takes an act's utility as the utility of the proposition that the act's world obtains. Given full information, this is the same as the utility of the act's world. Then allowing for uncertainty does not change the formulation of the principle of maximization or its interpretation of an act's utility.

The problem of characterizing outcomes given uncertainty is related to the problem of small worlds introduced by Savage (1972: sec. 5.5) and recently discussed by Joyce (1999: sec. 3.3).

(17.)  The rule identifying an act's utility with its outcome's utility, when applied to decisions, also identifies a decision's utility with its outcome's utility. The latter is the utility of the proposition that the decision's world obtains. This proposition expresses the decision's outcome. If a decision is carried out, its world includes the act selected and that act's temporal sequel.

(18.)  See Crimmins (1992) for an analysis of belief according to which a person believes a proposition in a way, a way given by a context including words used to state the proposition believed.

(19.)  To illustrate the problem, consider a case in which the utility of the proposition that George Bush speaks in Washington differs from the utility of the proposition that George Clooney speaks in Hollywood. Suppose that the sentence “George speaks here” expresses the first proposition on one occasion and the second proposition on another occasion. Attaching a utility to the sentence then conflates a crucial difference.

(20.)  The indicative supposition of an act not performed may change the epistemic basis of the act's evaluation even given full information. But the effects of an act's indicative supposition are especially prominent given uncertainty, the topic of sections 2.3 and 2.4.

(21.)  Consider the subjunctive conditional that if o were realized, then si would obtain. The conditional is true if both option and state are actual, but in counterfactual cases, its truth generally requires a causal connection between option and state. Suppose that one takes P(si given o) as the probability of the subjunctive conditional. This definition attends to causal influence but limits the existence of conditional probabilities unnecessarily, as Weirich (2001a: sec. 4.2.1) points out.

(22.)   Lewis (1981: 11) introduces dependency hypotheses that he uses to compute an option's expected utility. The condition (si if o) resembles a dependency hypothesis.

(23.)  For more support, see Weirich (1980; 2001a: sec. 4.2.2) and, for an opposing viewpoint, see Davis (1982).

(24.)  Expected utility analysis may be extended to cases not meeting the assumption that the number of relevant worlds is finite. Then it may use infinite partitions and generate infinite utilities. The extension calls for calculus and nonstandard analysis in the style of Robinson (1966), mathematics beyond my project's scope. See Skyrms (1995: sec. 3), Sobel (1996), Vallentyne and Kagan (1997: 7–9), and Vallentyne (2000) for applications of nonstandard analysis to probability and utility theory.

(25.)  Forgetting causes trouble for familiar diachronic principles of probability such as Conditionalization and Reflection. See, for example, Williamson (2000: 219) and Monton (2002). Williamson (sec. 10.2) argues that in general, an agent may violate the Principle of Conditionalization, which governs updating probability assignments as new evidence arrives, because she may not know all her evidence. An agent may not know what she knows.