Objections and Replies
2. Circularity worries
3. Is Ameliorative Psychology really normative?
4. The grounds of normativity, or Plato's Problem
5. The relative paucity of SPRs
6. Counterexamples, counterexamples
7. Reliability scores
8. Explanatory promises
9. Abuse worries
10. The generality problem
11. Strategic Reliabilism and the canon
We address objections we expect to be leveled at our view. We identify the objections of our imagined critic with italicized type. We won't pretend to have addressed all the serious objections to our view; and we won't pretend to have given conclusive answers to all the objections we do consider. Our goal is to give the reader some sense of the resources available to Strategic Reliabilism for dealing with some important issues, many of which articulate longstanding epistemological concerns.
Any epistemological theory worthy of the name must address the skeptical challenge. The skeptic begins with a fund of presumptively justified beliefs and proceeds to argue that one can't legitimately make inferences that go beyond that evidence. For example, a skeptic about the material world argues that on the basis of our sensory beliefs, we can equally well support the brain‐in‐the‐vat hypothesis, the ideational world hypothesis, the evil demon hypothesis, the material world hypothesis, etc. All of these hypotheses are underdetermined by the evidence. The skeptical challenge is that since the evidence does not support any one of these hypotheses over any other, we cannot justifiably believe any of them.
A central problem with naturalistic approaches to epistemology, including the one defended in this book, is that they fail to address the skeptical challenge. Naturalists begin their epistemological investigations by making substantive assumptions that skeptics are unwilling to grant: that there is a material world, that there are other minds, etc. You face a dilemma. Either you ignore the skeptical challenge, in which case your theory does not deserve to be called an epistemological theory, or you beg the question against the skeptic.
Let's begin with a “live and let live” response to the skeptical problem. Our approach to epistemology does not provide a solution to skepticism. But what do we really want from an epistemological theory? It would certainly be nice to have a theory that solved the problem of skepticism. But it would also be nice to have a theory that provided useful guidance to reasoners. An epistemological theory that provided a framework for how to reason in an excellent manner could have many practical benefits. It could provide a framework for thinking about diagnosis that led to better medical outcomes, a framework for thinking about parole board decisions that led to a less violent society, a framework for thinking about public policy that helped the electorate support policies that better serve its values, and so on. Now, one might legitimately wonder whether a useful reason‐guiding theory is possible; but one might equally well wonder whether a theory that solves the problem of skepticism is possible. Our point is that if philosophers insist that a theory of reasoning excellence that has this ameliorative aim is not epistemology, well then, so much the worse for epistemology.
The “live and let live” response notes that many successful theories don't do everything we might like them to do. Newton's theory of motion was highly successful even though it does not explain all physical phenomena (e.g., electromagnetic phenomena). So a theory of reasoning excellence might be highly successful at providing useful guidance to reasoners even though it does not solve some other epistemological problems. In particular, it might well not solve the skeptical problem. So we admit that we don't have a solution to the skeptical challenge, but we're proposing a theory that aims to meet a different goal. Unless one rejects this as a legitimate goal of epistemology, the skeptic's criticism fails to uncover a problem with any theory, naturalistic or not, that has this goal. And of course that includes Strategic Reliabilism.
The “live and let live” response is problematic for any naturalistic theory, like yours, that takes reasoning excellence to be partly determined by the reliability of reasoning strategies. You argue that according to Strategic Reliabilism, one ought to use Goldberg's Rule in making tentative diagnoses of psychiatric patients on the basis of the MMPI. That's because Goldberg's Rule has low costs and is reliable on problems that are significant. But the skeptic can reformulate her challenge so that it is about reasoning excellence. If a skeptical hypothesis is true, if, for example, there are no other minds or other people, then Goldberg's Rule would not be reliable after all. Given this possibility, how could we ever know how to reason in an excellent fashion? The problem with the “live and let live” response is that it fails to recognize that the skeptical problem is so pervasive that it cannot be sidestepped or avoided.
In the face of this challenge, our inclination is to restrict our theory to normal worlds—that is, non‐skeptical worlds that are presumed to be like our own (Goldman 1986, 107–9). A reasoning strategy is reliable when it has a high truth ratio on the assumption that the world is as we presume it to be, i.e., nonskeptical.
But the move to “normal worlds” is cheating. You escape the skeptical challenge simply by ruling by fiat that the skeptical hypotheses are false. Is there some principled reason that warrants the move to “normal worlds”? Or is this move simply motivated by the understandable but unprincipled desire to avoid a difficult problem?
Technically, we are not ruling that the skeptical hypotheses are false. Our point is that judgments of reasoning excellence are insensitive to whether or not a skeptical hypothesis is true. There are two principled reasons for this move to normal, non‐skeptical worlds. First, the goal of Strategic Reliabilism is not to solve the problem of skepticism. It aims to be a useful reason‐guiding theory. This is a legitimate goal of epistemology. Strategic Reliabilism should be assessed in terms of whether it meets this goal. If it does, then the fact that it does not meet a different goal (solving the skeptical problem) does not by itself give us a reason to doubt it. Rather, it suggests that the epistemological theory that guides reasoning is not the theory that will solve the problem of skepticism. Second, recall that Strategic Reliabilism is supposed to articulate the normative principles that guide the prescriptions of Ameliorative Psychology. A cursory examination of Ameliorative Psychology makes evident that it ignores the skeptical challenge; for example, it employs the processes and categories of contemporary psychology. So given that our aim is to articulate the normative presuppositions of Ameliorative Psychology, it is perfectly reasonable for our theory to ignore the possibility of skepticism if the science does. And the science does.
There are good, principled reasons for restricting Strategic Reliabilism to normal worlds. Still, some might be disappointed. After all, Strategic Reliabilism does not even hold out the hope of solving the problem of skepticism. Is this a reason to have doubts about our theory? Perhaps. Who wouldn't prefer a lovely theory that both guided reason and solved the problem of skepticism? For those who might be disappointed, however, it's important to recognize two points. First, there might be no unified epistemological theory that meets all our goals. We suggest that Strategic Reliabilism reflects the fact that you can't always get what you want, but if you try sometimes, you just might find you get what you need. Second, even if the failure of Strategic Reliabilism to address the skeptical challenge is a mark against it, that is not by itself a mark in favor of any other approach. In particular, it is not a reason to believe that the standard analytic approach to epistemology can yield a satisfying solution to the skeptical problem.
Strategic Reliabilism recognizes that for most people dealing with everyday issues, skepticism is not a significant problem. This is a point contextualists make in defense of their account of justification (e.g., DeRose 1995). Our point is about what problems the excellent reasoner will tackle and what problems she will ignore. At the risk of undermining our own “live and let live” response to the problem of skepticism, we should note that as children of the 60s, we are nothing if not reflexive: our theory of reasoning excellence applies to us as epistemologists as well. The philosopher who takes skepticism seriously has made judgments, perhaps implicit, about what problems are important in epistemology. If naturalists have not been sufficiently sensitive to the problems posed by the skeptical challenge and other concerns of SAE, perhaps it is because we recognize that there is a need for a genuinely prescriptive epistemological theory—one that provides a framework for improving the reasoning of individuals and institutions about significant issues. It's not that skepticism and other concerns of SAE are insignificant. But Standard Analytic Epistemology so often ignores so much of the world that we do not believe that the values implicit in its practice accurately reflect the values of its proponents.
If improving the world is so important to you, why don't you give up epistemology and devote your lives to charity?
Our account of significance does not depend on a kind of maximizing consequentialism. Significance ultimately is based on the conditions that promote human flourishing, and given our physical and psychological makeup, we take those conditions to be variable but also constrained. So we would deny that people's reasoning or action must always aim to maximize some notion of the good.
On what grounds, then, can you criticize proponents of SAE for focusing attention on skepticism? Let's grant for the sake of argument that skepticism is not the most significant problem facing epistemology. You admit that people can be excellent reasoners even if they do not always address the most significant problems. So your criticism of proponents of SAE depends on holding them to standards you admit are unnecessarily high.
This objection fails to understand the nature of our criticism. We are critical of epistemology as a field of study in the English‐speaking world. We are critical of the way resources (everything from human talent to institutional support) are distributed in epistemology. We are happy to grant that a healthy intellectual discipline can and should afford room for people to pursue highly theoretical issues that don't have any obvious practical implications. So we do not object to any particular epistemologist tackling the skeptical challenge. We object to the fact that proponents of SAE insist (rightly) that epistemology has a prescriptive reason‐guiding function, while precious few resources are devoted to developing an epistemological theory with useful prescriptive, reason‐guiding advice.
2. Circularity worries
You begin your epistemological investigations with empirical findings, i.e., some findings of Ameliorative Psychology. Any epistemological project that begins with empirical findings raises a circularity objection. We can put it in the form of a dilemma. Why did you begin your epistemological investigations with these particular empirical findings? In particular, do you have good reasons for believing them? If so, you are presupposing epistemological principles before you begin your epistemological investigations. And this is viciously circular. If not, if you don't have good reasons for believing the empirical findings on which your epistemological theory is based, then how can you defend this book with a straight face?
Your theory, Strategic Reliabilism, raises a particularly dramatic form of this circularity objection. Chapter 1 says that a good epistemological theory doesn't just mimic the findings of Ameliorative Psychology, and chapter 8 employs your theory to resolve disputes in Ameliorative Psychology. But when you constructed your epistemological theory in the first part of the book, it could not have been “informed” by the instances of Ameliorative Psychology you argue are mistaken in chapter 8. So you must have been making decisions about which instances of Ameliorative Psychology are good and which are not‐so‐good in the construction of your theory. If so, you must have been presupposing epistemological principles in deciding which empirical findings to accept, and these empirical findings informed your normative theory, which in turn justified those very empirical findings. Again, isn't this viciously circular?
In doing any sort of science, including physics, biology or Ameliorative Psychology, scientists bring substantive normative assumptions to bear in deciding what theories are good or true or worthy of pursuit. But this point is not restricted to scientists. Anyone who provides reasons of any kind in support of any kind of doctrine is up to their ears in substantive epistemological assumptions. And that includes epistemologists. We challenge the proponent of the circularity objection to show us the epistemological theory that begins without relying on any judgment that is informed by some kind of substantive epistemological assumption. Such an epistemology would not begin by assuming, for example, that we have beliefs (for that assumes that we have good reason to reject eliminativism, the view that propositional attitudes don't exist). It would not begin by assuming that certain ways of reasoning about normative, epistemic matters are superior to others (for that would require epistemological assumptions about how we ought to reason about epistemology). The circularity objection seems to require that we begin construction of an epistemological theory without making any normative, epistemic assumptions whatsoever. And that's a fool's errand.
I'm certainly not insisting that epistemology proceed without any normative assumptions whatsoever. Rather, our epistemological investigations should be based on some privileged class of normative, epistemic assumptions. These are the epistemological assumptions of a priori epistemology.
The circularity objection seems to leave us with a choice. But it is not a choice between beginning our epistemological theorizing with substantive epistemological assumptions or without substantive epistemological assumptions. It is a choice between beginning our epistemological theorizing with the epistemological assumptions of a priori epistemology (whatever they may be) or the epistemological assumptions of science (whatever they may be). On what grounds do we make this choice? It is certainly not based on the relative success of a priori epistemology (or a priori philosophy in general) over science in coming up with theories that are fruitful and can lay some claim to being true. In fact, if we were to use any reasonable version of the major philosophical theories of justification (reliabilism, coherentism or foundationalism) to assess itself and our best scientific theories, each would surely return the verdict that our best scientific theories are far more justified than the epistemological theory. If this is right, why not embrace the normative presuppositions of the theories that all parties to this debate agree are superior?
But the epistemological assumptions of a priori epistemology are superior to those of naturalistic epistemology. The reason is that the former are a subset of the latter. Naturalists give themselves permission to reason about a priori matters and a posteriori matters when doing epistemology; a priori epistemologists permit only the former. Therefore, the epistemological assumptions of a priori epistemology are safer and more likely to be true.
Even if we grant this point, why is safer better? Epistemologists have a choice about what sorts of epistemic assumptions to adopt when doing epistemology. We suspect that many epistemologists haven't explicitly made a choice about this. They have simply absorbed a tradition still haunted by Descartes and the neurotic abhorrence of error. But error isn't the only enemy—or even the greatest enemy—in life, or in philosophy. Our approach does risk error by taking Ameliorative Psychology seriously. But what is the risk of constructing an epistemological theory in happy ignorance of such findings—findings that have a half‐century's worth of empirical support? Two possible risks stand out. First, if our a priori theories contradict such findings, we risk error. Second, if our a priori theories imply nothing very specific about such findings, we risk irrelevance. And if the proponent of a priori epistemology insists that his approach does not carry these risks, we wonder: How on earth could he possibly be so sure? Any choice we make about where to begin our epistemological investigations carries risk of some kind. From our perspective, there are moral, political and pragmatic grounds for doing what we can to make sure that our epistemological theory is informed by our best scientific findings about how we can reason better about significant matters (for an interesting discussion of failures to meet this standard in moral reasoning, see Sunstein 2003). After all, when people fail to heed the advice offered by Ameliorative Psychology about how best to reason about diagnosing disease or predicting violence, people die. Why build an epistemological theory that risks endorsing or not condemning such epistemic practices?
Let's end our thoughts about the circularity objection by considering why the objection is supposed to be damning. The problem, presumably, is that the epistemological assumptions the naturalist begins with will ultimately be vindicated by the naturalist's epistemological theory. In this way, the naturalist's epistemology is self‐justifying and so viciously circular. There are three points to make about the viciousness contention. First, it can be made equally well against any epistemological method or theory, no matter how pristinely a priori. After all, the a priori epistemologist must begin her investigations with epistemological assumptions of some sort. Presumably, these assumptions will be vindicated by her epistemological theory. So a priori epistemologies are just as viciously circular as naturalistic epistemologies. Second, it is hard to see how the viciousness claim can be reasonably made with any confidence (including the viciousness claim we just made against a priori epistemology). After all, no one has a clear and compelling account of what epistemological assumptions are being presupposed by epistemologists, naturalists or otherwise. Without knowing this, how can anyone be sure that the prescriptions coming out of such theories will be the same as those that went in? And how can anyone be sure that the prescriptions coming out of such theories will vindicate those that went in? Third, suppose that Strategic Reliabilism really does end up vindicating the epistemological assumptions of science. Would that mean that the naturalistic method was vicious? Not unless there was something necessary or inevitable about this outcome. But let's stop to consider what it would be for Strategic Reliabilism to vindicate every epistemological assumption of all of our best scientific theories.
This would mean that the methods and substance of every scientific theory and discipline presuppose epistemological principles that yield prescriptive judgments that are identical to those of Strategic Reliabilism. As we've already admitted, we have no idea whether this sort of vindication is in the offing (although we have serious reservations). But we are most eager to see this case made by the proponent of the circularity objection. We are confident that after articulating the epistemological assumptions of (say) nuclear physics, cognitive psychology and evolutionary biology, and then determining if these assumptions are vindicated by Strategic Reliabilism, our overwhelmed philosopher will grant that there is nothing inevitable about the outcome. And let's suppose that after decades of work, the proponent of the circularity objection finds—to everyone's surprise—that Strategic Reliabilism does vindicate all the epistemological assumptions of our best science. Given that this result was not inevitable, we would have no need to take this as an objection. We could simply conclude that science makes even more terrific epistemological presuppositions than we thought.
3. Is Ameliorative Psychology really normative?
Ameliorative Psychology is no more normative than any other science. Like Ameliorative Psychology, physics, chemistry and biology give us new reasoning strategies that are better than old ones all the time. We ought to adopt these reasoning strategies for solving certain problems, and people often do. So the mere fact that Ameliorative Psychology is in the business of giving us new and better ways to reason doesn't make it any more normative than physics, chemistry, biology, etc. This calls into question your philosophy of science approach to epistemology. There is no reason for us to begin our epistemological speculations with Ameliorative Psychology rather than with any other successful branch of empirical science.
When there is a theoretical improvement in (say) chemistry, it improves our thinking only by improving our knowledge of the world—our knowledge of the subject matter of chemistry. Theoretical advancements in chemistry do not improve our knowledge of ourselves as human cognizers. They get us closer to the truth about the chemical world. Ameliorative Psychology is like chemistry in that it improves our thinking about certain aspects of the world. For example, Goldberg's Rule improves our thinking about diagnosing psychiatric patients, credit scoring models improve our reasoning about credit risks, etc. So, like any science, Ameliorative Psychology helps us get closer to the truth about the world. But Ameliorative Psychology also improves our knowledge of ourselves as reasoners. At its best, Ameliorative Psychology identifies how people reason about a problem and offers ways to better reason about the problem. And from these findings, we can pretty immediately draw generalizations about how we ought to reason. From our perspective, what makes Ameliorative Psychology special from a normative perspective—what differentiates it from other sciences—is that the generalizations drawn about how we ought to reason can (in principle at least) put pressure on our deepest epistemological judgments about how we ought to reason.
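To make the contrast concrete, here is a minimal sketch of Goldberg's Rule as it is usually reported in the SPR literature: a simple linear combination of five standard MMPI scale scores compared against a fixed cutoff. The cutoff of 45 is the commonly cited value, and the example profile below is invented, so treat the details as illustrative rather than as the clinical instrument itself.

```python
def goldberg_rule(L, Pa, Sc, Hy, Pt):
    """Sketch of Goldberg's linear rule for MMPI profiles.

    Classifies a profile as 'neurotic' or 'psychotic' from five MMPI
    scale scores (Lie, Paranoia, Schizophrenia, Hysteria, Psychasthenia).
    The cutoff of 45 is the commonly reported value; illustrative only.
    """
    score = (L + Pa + Sc) - (Hy + Pt)
    return "psychotic" if score >= 45 else "neurotic"

# A hypothetical patient profile (scores invented for illustration):
# (50 + 65 + 70) - (60 + 55) = 70, which is >= 45.
print(goldberg_rule(L=50, Pa=65, Sc=70, Hy=60, Pt=55))  # -> psychotic
```

The point the objection trades on is visible in the code: nothing here consults the world beyond five numbers, yet rules of this mechanical sort reliably outperform expert clinical judgment on the problems in their range.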
You claim that Ameliorative Psychology yields generalizations about how we ought to reason while other sciences do not. But this is not obvious. It is clearly possible that we might be able to draw generalizations about how we ought to reason from attending to the character of theoretical advances in the natural sciences. Further, given that the natural sciences offer us the most powerful ways of reasoning about the world that we have, it seems, in fact, plausible to suppose that we might be able to extract lessons about how we ought to reason. For example, suppose one believed that unification is an important virtue in successful scientific theories (Friedman 1974, Kitcher 1981). One might reasonably draw a generalization about how we ought to reason—we ought to seek unification in our belief systems. If this is right, then there really is no distinction in the ‘normative’ status of Ameliorative Psychology and other sciences.
This is a tricky objection. We expect to be criticized for our extreme naturalism. But this objection suggests our approach is not extreme enough. It says that it's not just that we can extract epistemological lessons from Ameliorative Psychology, we can extract epistemological lessons from all the sciences (or at least all the successful sciences). So epistemology isn't just the philosophy of psychology (or the philosophy of Ameliorative Psychology), it's the philosophy of all the (successful) sciences! We have no principled objections to this attempt to push us toward a more radical naturalism. Perhaps we can extract epistemological lessons from (say) physics that can put pressure on our deepest epistemological judgments about how individuals ought to reason. Whatever else might be said about this project, it is certainly going to be difficult. It is going to be hard to extract surprising lessons from physics about how people ought to reason in their day‐to‐day lives. As we argue in chapters 2 and 9, the lessons of Ameliorative Psychology for how people ought to reason are fairly clear. So this objection does nothing to undermine our approach. There are fairly clear—and quite surprising—epistemological lessons to extract from Ameliorative Psychology. That's what we have tried to do. If it should turn out that there are surprising lessons to extract from other areas of science, that's great! We await those results.
4. The grounds of normativity, or Plato's Problem
In the Euthyphro, Plato famously asks whether something is pious because it is loved by the gods or if it is loved by the gods because it is pious. Your approach to epistemology raises an analogous issue. You often appeal to Ameliorative Psychology in the assessment of epistemological excellence. So: Is a reasoning strategy excellent because Ameliorative Psychology says it's excellent, or does Ameliorative Psychology say it's excellent because it really is excellent?
We have argued that on occasion, proponents of Ameliorative Psychology are mistaken about epistemic excellence. So even though we think that attending to the results of Ameliorative Psychology is a reliable way to discover excellent reasoning strategies, it is not perfectly reliable. So on our view, epistemic excellence is a feature of the world discovered by Ameliorative Psychology. Our access to it is akin to our access to any theoretical posit of natural science.
Our empirical investigation into epistemic excellence begins with the Aristotelian Principle, which says that in the long run, poor reasoning tends to lead to worse outcomes than good reasoning. This principle allows us to take empirical results and infer with confidence that one way of reasoning is better than another. For example, when it comes to medical diagnosis, using frequency formats brings substantially better outcomes than using probability formats (see chapter 9, section 1). The Aristotelian Principle licenses the inference that frequency formats are epistemically superior to probability formats. The construction of an empirical theory of epistemic excellence can begin with many such examples. But a catalog of such examples will not be enough. A theory of epistemic excellence will also lean on what is known about the causal dependence between reasoning and well‐being. There is a substantial body of evidence concerning the conditions of human well‐being and the conditions for the exercise of human capabilities. For example, people are notoriously unreliable at forecasting their affective reactions to events in their lives (Wilson and Gilbert 2003). A piece of friendly advice: Don't underestimate the impact of a long commute to work on your psychological well‐being when, for example, buying a house (Stutzer and Frey 2003). One would expect a theory of epistemic excellence to evolve with discoveries about human well‐being, just as the theory of natural selection evolved with the discovery of the gene.
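The frequency-format point can be illustrated with a standard diagnostic problem (the base rate, sensitivity, and false-positive rate below are invented for illustration). Both functions compute the probability that a patient who tests positive actually has the disease; the second re-expresses the problem as counts of patients out of 1,000, the format that reasoners handle far more reliably.

```python
def ppv_probability_format(base_rate, sensitivity, false_pos_rate):
    """Positive predictive value via Bayes' theorem on single-event
    probabilities (the format that trips up many diagnosticians)."""
    p_positive = (base_rate * sensitivity
                  + (1 - base_rate) * false_pos_rate)
    return (base_rate * sensitivity) / p_positive

def ppv_frequency_format(n=1000, base_rate=0.01, sensitivity=0.8,
                         false_pos_rate=0.096):
    """The same computation re-expressed as natural frequencies."""
    sick = round(n * base_rate)                      # 10 of 1,000 are sick
    true_pos = round(sick * sensitivity)             # 8 of the sick test positive
    false_pos = round((n - sick) * false_pos_rate)   # 95 healthy also test positive
    return true_pos / (true_pos + false_pos)         # 8 of 103 positives

print(round(ppv_probability_format(0.01, 0.8, 0.096), 2))  # -> 0.08
print(round(ppv_frequency_format(), 2))                    # -> 0.08
```

The answer is the same either way, but the frequency version makes it easy to read off: of 103 positive tests, only 8 are true positives, so a positive result still leaves the disease quite unlikely.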
Our access to epistemic excellence derives from what we can infer about the regularities in the world that are responsible for the success of certain reasoning strategies. Like any domain of empirical inquiry, the access is sometimes indirect. In science, measurement often documents a subtle causal chain, not open to casual inspection. But measurement strategies constitute a powerful class of methods in contemporary science. Ameliorative Psychology has made use of these strategies in generating a substantial body of evidence. We expect that the very scientific methods that vindicate Ameliorative Psychology will confirm the posits of a normative theory of epistemic excellence.
5. The relative paucity of SPRs
Let's grant that Ameliorative Psychology offers some wonderful SPRs. But there just aren't that many, compared to the number of significant reasoning problems we face every day. If John had at his disposal all successful, tractable SPRs, they would not help him deal with the overwhelming majority of the significant reasoning problems in his life. Throughout this book, you attack SAE for offering theories that do not provide useful guidance to reasoners. But your theory fares just about as badly on this score. A handful of successful SPRs for making judgments about a hodgepodge of issues hardly counts as useful reasoning advice.
There are three points to make in response to this objection. First, Ameliorative Psychology provides considerably more guidance than is here suggested. There is more to Ameliorative Psychology than SPRs. For example, the consider‐the‐opposite strategy and the various strategies for thinking about causation (chapter 9) are potentially applicable to a very wide range of reasoning problems. Second, this objection seems to assume that the epistemological theory we defend, Strategic Reliabilism, is exhausted by the practical advice offered by Ameliorative Psychology. This is a misunderstanding. Strategic Reliabilism offers a general framework that accounts for the epistemic quality of particular reasoning strategies. While Strategic Reliabilism grounds the prescriptions of Ameliorative Psychology, it is not exhausted by those prescriptions. And third, while Ameliorative Psychology might not provide as much reason‐guidance as we might hope, it does provide more than the theories of Standard Analytic Epistemology. The theories of SAE are almost entirely indifferent to issues of significance and to issues of the costs and benefits of reasoning. Such theories can perhaps advise that we should only adopt justified beliefs, and they can explain in exquisite detail what they mean by ‘justified’. But this hardly counts as useful advice for three reasons. (a) We doubt that SAE embodies a reasonable method of identifying the proper goal of reasoning (see chapter 7). (b) For most of us at most times, there are infinitely many justified beliefs we could adopt. Without an account of significance or an account of the costs and benefits of reasoning, the theories of SAE have no way to advise someone to adopt one justified belief rather than any other (see chapters 5 and 6). And (c) at best, the theories of SAE define a goal of reasoning; they don't provide any useful guidance about how to achieve that goal (see chapter 9).
This is reminiscent of the advice offered by one of our Little League baseball coaches who told his players, “When I tip my cap, that means you should hit a home run.” Unlike proponents of SAE, the coach was joking.
6. Counterexamples, counterexamples
A number of counterexamples against reliabilist theories of justification depend on a disconnect between the reliability of a particular belief‐forming mechanism and the subject's evidence for trusting that mechanism. To take a classic case, a reasoner might have a perfectly reliable clairvoyant belief‐forming mechanism but no evidence for trusting it—in fact she might have positive reasons for not trusting it (BonJour 1980, Putnam 1983). The reliable clairvoyant case raises hard problems for Strategic Reliabilism (as do other examples of this sort). According to Strategic Reliabilism, what would it be for the reliable clairvoyant to reason in an excellent fashion when she has reasons not to trust her clairvoyant powers? And more generally, how does Strategic Reliabilism handle cases in which a reasoning strategy is reliable (or unreliable) and the subject has strong reason to believe the opposite?
There are many examples that are going to be hard cases for Strategic Reliabilism, and this includes cases in which there is a disconnect between the reliability of a reasoning strategy and the subject's evidence for trusting it. The strength of Strategic Reliabilism does not reside in the ease with which it can be applied to cases in order to make straightforward, univocal epistemic judgments. The strength of Strategic Reliabilism is its reason‐guiding capacity. Strategic Reliabilism provides a framework for identifying and developing excellent reasoning strategies—robustly reliable reasoning strategies for tackling significant problems. This is reversed for theories of SAE. A theory of SAE is supposed to be able to be applied to cases in order to determine whether particular beliefs are justified or not. But theories of SAE don't provide much in the way of useful reason‐guiding resources (a point we have endlessly harped on in this book). And so we are content to admit that there will be plenty of hard cases in which a reasoner uses a number of different reasoning strategies and Strategic Reliabilism takes some of them to be excellent and others to be less so. The fact that Strategic Reliabilism does not always yield a simple, univocal normative judgment is a problem only if epistemic judgments of reasoning excellence must always be simple and univocal. But people reason in wonderfully complex and varied ways. Why should we expect our assessments of every instance of human reasoning to be simple?
Although we have admitted that the strength of Strategic Reliabilism is not its ability to be applied to particular cases, we should not overstate this point. There is no principled reason why we can't apply Strategic Reliabilism to very complicated cases. There are, however, two thoroughly practical reasons why the application of Strategic Reliabilism can be difficult. First, in order to apply Strategic Reliabilism to (say) the clairvoyant case, we need to know a lot about what reasoning strategies the clairvoyant is using. The SAE literature tends to ignore this, except to say that by hypothesis the subject's clairvoyance is reliable. But we are not told much about how the clairvoyance works or about the nature of the clairvoyant's second‐order reasoning strategies about whether to trust her clairvoyant powers. The SAE literature does not give details about such reasoning strategies because the theories of SAE, including process reliabilism, are theories of justification; and justification is a property of belief tokens. Details about the workings of the clairvoyant's reasoning strategies are irrelevant to theories of SAE. But even if we are given lots of details about (p.173) how the clairvoyant is reasoning, there is a second reason Strategic Reliabilism can be practically difficult to apply. The assessment of a particular reasoning strategy employed by the clairvoyant depends on many factors we might not know. For example, we would need to know the reliability scores of the clairvoyant's reasoning strategy; and if we wanted to make relative judgments, we'd need to know the reliability scores of its competitor strategies. (We would need to know more about these strategies as well—their robustness, their costs, and the significance of the problems in their ranges.) There is no principled reason we couldn't find out about these matters. But in the absence of detailed information about them, it will be very difficult to apply Strategic Reliabilism to particular cases.
Strategic Reliabilism is hard to apply, but not because Strategic Reliabilism is so abstract it cannot be applied to real cases. The reason Strategic Reliabilism is hard to apply is that we need to know a lot in order to apply it.
7. Reliability scores
You define the reliability score of a reasoning strategy as the ratio of true to total judgments in the strategy's expected range. But what about cases (like the frequency formats) in which the strategy makes probabilistic inferences? If a reasoning strategy says that the probability of E is 1/3 (where E is a single event), and E happens (or doesn't happen), we can't say on that basis that it was a true judgment. So reliability scores seem undefined for these sorts of reasoning strategies. And that's a serious lacuna in your theory.
This worry is analogous to the hoary problem of single‐event probabilities facing the frequentist account of probability. Because the frequency interpretation defines “probability” in terms of observed frequency, no probability of coming up heads (or tails) can be assigned to an unflipped coin. And, notoriously, the future posture of unflipped coins has no observed value. Our problem is similar in that we define a reasoning strategy's reliability score in terms of the relative frequency of true judgments in its expected range. If a reasoning strategy leads one to predict that there is a 1/3 chance of single event E, how do we determine what the probability of E really is? If we can't assign a probability to E, then we have no way of determining how reliable the probabilistic reasoning strategy is.
Our solution to the problem is analogous to how a frequentist might handle the problem of single event probabilities. A frequentist will not explain the probability of a single event in terms of an unobserved, (p.174) independently specifiable disposition or propensity. Instead, a frequentist might say that the probability of a single event is an idealization concerning the observed values yielded under an indefinite (or infinite) number of samplings or potentially infinite sequence of trials. Turning to the problem of assigning reliability scores to probabilistic reasoning strategies, we should note that we define reliability scores in terms of a reasoning strategy's expected range for a subject in an environment. The expected range is an idealization based on the nature of the environment in which a subject finds herself. The reliability score of a reasoning strategy applied to a single case (whether that strategy yields probability judgments or not) is, similarly, based on an idealization: It is the ratio of true to total judgments in the strategy's expected range, where this range is defined by an indefinite (or infinite) number of samplings or potentially infinite sequence of trials.
The introduction of an idealized expected range provides a way (or more likely, a number of ways) to assess the accuracy of a probabilistic reasoning strategy. Take a probabilistic reasoning strategy, R. Next take all the propositions R judges to have (say) probability 1/3. In R's expected range, we should expect 1/3 of those propositions to be true. So if we have a perfectly accurate probabilistic reasoning strategy, R, then for all propositions that R takes to have probability n/m, the frequency of those propositions that are true in R's expected range will be n/m. We can measure R's accuracy in terms of a correlation coefficient that represents how closely R's probability judgments reflect the actual frequencies of truths in R's expected range. (Notice, this is just how overconfidence in subjects was assessed. When we examine those cases in which subjects assign very high probabilities to events, those events turn out to be true at much lower frequencies. See chapter 2, section 3.4.)
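The calibration assessment just described can be sketched in a few lines of code. Everything below—the function name, the toy record of judgments—is our illustrative invention rather than anything from the psychological literature; the only idea taken from the text is that a probabilistic strategy is accurate to the extent that propositions assigned probability n/m come out true with frequency n/m.

```python
from collections import defaultdict

def calibration_report(judgments):
    """Group a strategy's probability judgments by the probability
    assigned, then compare each assigned probability with the observed
    frequency of truth among those propositions."""
    buckets = defaultdict(list)
    for stated_p, came_true in judgments:
        buckets[stated_p].append(came_true)
    # A well-calibrated strategy has observed frequency close to each key.
    return {p: sum(outcomes) / len(outcomes)
            for p, outcomes in sorted(buckets.items())}

# Hypothetical record of a strategy R's judgments: (assigned probability, outcome).
judgments = [
    (1/3, True), (1/3, False), (1/3, False),  # 1/3 came true: well calibrated here
    (0.9, True), (0.9, False), (0.9, False),  # only 1/3 came true: overconfident
]
print(calibration_report(judgments))
```

The correlation coefficient mentioned above would then be computed between the assigned probabilities (the dictionary's keys) and the observed frequencies (its values) across many such buckets.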
8. Explanatory promises
In chapter 1 and elsewhere, you claim that a successful epistemological theory will help explain the Aristotelian Principle and the success of Ameliorative Psychology. It's not at all clear that you have kept these explanatory promises.
Let's begin with the Aristotelian Principle, which says that in the long run, good reasoning tends to lead to good outcomes. According to Strategic Reliabilism, good reasoning involves the efficient allocation of robustly reliable reasoning strategies to problems of significance. So the excellent reasoner will tend to have true beliefs about significant matters. We take it to be (p.175) a true empirical hypothesis that true beliefs about significant matters tend to be instrumentally valuable in achieving good outcomes. People and institutions can more easily achieve their goals insofar as they have a true picture of relevant parts of the world. The explanation for the instrumental value of significant truth is likely to be complex (Kornblith 2002). But as long as significant truth is instrumentally valuable, the account of good reasoning provided by Strategic Reliabilism helps us to understand (i.e., plays a role in the explanation of) the Aristotelian Principle.
Strategic Reliabilism also helps us to understand the success of Ameliorative Psychology in at least three ways. First, Strategic Reliabilism is a general account of reasoning excellence, and so it applies to science. The fact that science displays excellent reasoning—that it involves robustly reliable reasoning strategies for solving significant problems—is part of the explanation for the characteristic pragmatic and epistemic success of science. In this way, Strategic Reliabilism helps us to understand the epistemic and pragmatic success of Ameliorative Psychology. Second, Strategic Reliabilism can be used to explain the success of the recommendations of Ameliorative Psychology. For example, the recommendation that Goldberg's Rule be used to make tentative diagnoses of psychiatric patients on the basis of an MMPI profile is successful because it is cheap, its reliability is unsurpassed, and it tackles a problem that is significant for certain people. (On the other hand, it is not particularly robust, since its conditions of application are fairly restricted. But highly reliable reasoning strategies whose ranges are restricted to mostly very significant problems can nonetheless be excellent.) There is a third way in which Strategic Reliabilism can explain the success of Ameliorative Psychology: it can do so by helping it to be more successful. Ameliorative Psychology is not a monolith. There are occasionally disagreements about how to evaluate certain reasoning strategies. As we showed in chapter 8, Strategic Reliabilism provides a framework for understanding reasoning excellence, and so it can be used to assess the prescriptive recommendations made by Ameliorative Psychologists. So Strategic Reliabilism can be used to improve Ameliorative Psychology by identifying some of its less successful recommendations.
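Part of what makes Goldberg's Rule cheap is how little computation it requires. A sketch, assuming the commonly reported form of the rule (Goldberg 1965)—sum the L, Pa, and Sc scale scores of an MMPI profile, subtract Hy and Pt, and classify the profile as psychotic when the result reaches a fixed cutoff of 45—the exact cutoff and the scale values in the example are not taken from this chapter:

```python
def goldberg_rule(L, Pa, Sc, Hy, Pt):
    """Tentative MMPI diagnosis via Goldberg's linear rule:
    (L + Pa + Sc) - (Hy + Pt), compared against a cutoff of 45.
    The form and cutoff are as commonly reported in the clinical
    judgment literature, not quoted from this chapter."""
    score = (L + Pa + Sc) - (Hy + Pt)
    return "psychotic" if score >= 45 else "neurotic"

# Hypothetical raw scale scores.
print(goldberg_rule(L=5, Pa=12, Sc=40, Hy=6, Pt=4))   # high Sc pushes past the cutoff
print(goldberg_rule(L=4, Pa=8, Sc=20, Hy=12, Pt=15))
```

That a five-term linear rule with a cutoff outperforms trained clinicians on this task is precisely the kind of result Ameliorative Psychology trades in.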
9. Abuse worries
You advocate the increased use of SPRs. But some SPRs depend for their success on not being widely known. For example, the details of the credit (p.176) scoring models used by financial institutions are kept secret so that people cannot “play” them by engaging in activities solely for the purpose of improving their scores. Expanding the use of SPRs, particularly covert SPRs, leaves open the possibility of significant abuse. It is not hard to envision scenarios in which governments use SPRs to identify and persecute people whose political or religious views are out‐of‐favor, or in which (say) insurance companies use SPRs to identify people with health risks in order to restrict their access to life or health insurance.
Before we get too worked up about the potential abuses of SPRs, we must remember that honest policy assessment is comparative. We must compare the threat of the increased use of SPRs to the threat posed by expert judgment. Perhaps those suspicious of SPRs suppose that, while expert judgment is inferior in accuracy, it is also less prone to abuse. But this is by no means obvious. As Robyn Dawes has pointed out many times, expert judgment is more mysterious, more covert, and less available to public inspection than SPRs (e.g., Dawes, 1994). SPRs are in principle publicly available and they come with reliability scores—they do not suffer from overconfidence. When a bank loan officer or a parole board member makes a decision, third parties typically do not know what evidence they took to be most important or how they weighed it. Indeed, most of us are considerably worse at identifying the main factors involved in our reasoning than we believe (Nisbett and Wilson, 1977). The loan officer who makes relatively more and better loans to white males than to minorities or women in the same financial situation might insist that he doesn't take race or gender into account. And unless we had pretty good evidence, provided, for instance, by an explicit model, who could doubt him? Dawes gives a terrific example of the sorts of abuses that can be avoided with more objective SPRs.
A colleague of mine in medical decision making tells of an investigation he was asked to make by the dean of a large and prestigious medical school to try to determine why it was unsuccessful in recruiting female students. My colleague studied the problem statistically “from the outside” and identified a major source of the problem. One of the older professors had cut back on his practice to devote time to interviewing applicants to the school. He assessed such characteristics as “emotional maturity,” “seriousness of interest in medicine,” and “neuroticism.” Whenever he interviewed an unmarried female applicant, he concluded (p.177) she was “immature.” When he interviewed a married one, he concluded she was “not sufficiently interested in medicine,” and when he interviewed a divorced one, he concluded she was “neurotic.” Not many women were positively evaluated on these dimensions. … (Dawes 1988, 219).
This example makes clear that “expert” judgment is no defense against bias and discrimination.
We are badly in need of some cost‐benefit judgment here. We know that well‐designed SPRs are more accurate than expert judgment. (For a treatment explicitly sensitive to the threat of SPR abuse, see Monahan, [submitted].) Using SPRs will lead to fewer errors in parole decisions, clinical psychiatric diagnosis, medical diagnosis, college admission, personnel selection, and many more domains of life. While SPRs can be abused, expert judgment may leave even greater potential for abuse. In the absence of some reasonable evidence for thinking that SPRs bring more serious costs than expert judgment, the case for SPRs is straightforward. For those who insist on holding out, it might be useful to imagine the situation reversed. Suppose we had found that experts are typically more reliable than the best SPRs. Would it be reasonable to insist on using SPRs because of an ill‐defined concern about the potential abuse of expert judgment?
Strategic Reliabilism does not recommend SPRs because they are secret (when they are secret). It recommends SPRs because they are the tools most likely to (say) discriminate a person who will default on a loan from one who won't. Any procedure for making high stakes decisions comes with the potential of harmful errors. In the case of SPRs, we can reasonably expect certain kinds of errors. An undertrained or overworked credit‐scoring employee might make a keystroke error, or a troubled employee might willfully enter incorrect information. A sensitive application of our view to a social institution would recognize the potential for such errors and would recommend the implementation of corrective procedures. Nothing in Strategic Reliabilism supports using SPRs irresponsibly—just the opposite. Still, what about the possibility of abuse that comes with SPRs being used for dastardly ends? Here we come to the limits of what epistemology can do. A monster like Hitler might employ SPRs to reason in an excellent manner. And that possibility is of course frightening. But it is no objection to our epistemological theory that it doesn't have the resources to condemn the wicked. Physics and chemistry don't either. And neither do the traditional theories of SAE. That is a job for moral and political theory.
There is another issue that may be an appropriate concern. If an SPR appeals to factors an individual cannot control, there is potential for serious abuse. For example, we can imagine an SPR that uses variables that appeal to race in making (say) credit decisions. Now, as a matter of fact, it (p.178) turns out that the best models we have appeal to past behavior: “In a majority of situations, an individual's past behavior is the best predictor of future behavior. That doesn't mean that people are incapable of changing. Certainly many of us do, often profoundly. What it does mean is that no one has yet devised a method for determining who will change, or how or when … But if we are responsible for anything, it is our own behavior. Thus, the statistical approach often weights most that for which we have the greatest responsibility” (Dawes 1994, 105). But if someday a successful SPR does discriminate along questionable dimensions, it is always an open moral question whether we should use it.
10. The generality problem
Your view, Strategic Reliabilism, seems to fall victim to the generality problem. The generality problem arises because there is more than one way to characterize the belief‐forming mechanism that produces a particular belief. Some of these characterizations will denote a reliable process, whereas other characterizations will not. Without some way of deciding which of these processes to count as the one that produced the belief, the reliabilist runs the risk of having to say that such a belief is both justified (because it was produced by a reliable mechanism) and unjustified (because it was produced by an unreliable mechanism). And that's absurd (Goldman 1979, Feldman 1985). Here is Richard Feldman's characterization of the problem:
The fact that every belief results from a process token that is an instance of many types, some reliable and some not, may partly account for the initial attraction of the reliability theory. In thinking about particular beliefs one can first decide intuitively whether the belief is justified and then go on to describe the process responsible for the belief in a way that appears to make the theory have the right result. Similarly, of course, critics of the theory can describe processes in ways that seem to make the theory have false consequences. For example, Laurence BonJour has proposed as counter‐examples to the reliability theory cases in which a person believes things as a result of clairvoyance. In his examples, clairvoyance is a reliable process but the person has no reason to think that it is reliable. BonJour claims that the reliability theory has the incorrect consequence that the person's beliefs are justified. He assumes, however, that the relevant process type is clairvoyance. If one instead assumes that the relevant type is “believing something as a result of a process one has no reason to trust” the reliability theory seems to have different implications for these cases (1985, 160).
(p.179) So how can Strategic Reliabilism overcome the generality problem?
In thinking about how Strategic Reliabilism handles the generality problem, it will be useful to consider a particular example. Suppose that whenever S is faced with the task of making predictions about human performance, she always uses what we might call the human performance predictor (HPP): She considers only the two lines of evidence she believes are most predictive, weighs them equally, and predicts that higher scores will be more highly correlated with better performance. In some sense, this is a meta‐strategy, since it is a strategy for formulating strategies for making predictions about human performance. Now S is faced with some admissions problems, so she uses HPP: She considers only the two lines of evidence she deems most predictive (say, high school rank and test score rank), weighs them equally, and predicts that the best students will be those with the highest scores. We have already seen this reasoning strategy—it is ASPR (chapter 4, section 1). HPP and ASPR are nested reasoning strategies: ASPR's range (i.e., admissions problems) is a proper subset of HPP's range.
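Concretely, ASPR is just a unit‐weight linear model over two cues. The sketch below is our own illustration—the applicant data and field names are invented—but it shows how little machinery the strategy requires:

```python
def aspr_rank(applicants):
    """ASPR as a unit-weight model: take the two cues judged most
    predictive (here, percentile ranks for high school record and
    test score), weight them equally, and predict that applicants
    with the highest sums will be the most successful students."""
    return sorted(applicants,
                  key=lambda a: a["hs_rank"] + a["test_rank"],
                  reverse=True)

# Hypothetical applicants, cues expressed as percentiles (0-1).
applicants = [
    {"name": "Smith", "hs_rank": 0.80, "test_rank": 0.85},
    {"name": "Jones", "hs_rank": 0.95, "test_rank": 0.90},
]
for a in aspr_rank(applicants):
    print(a["name"])
```

HPP, the meta‐strategy, is what generated this function: it told S to pick two cues and unit‐weight them. The same recipe applied outside admissions problems would yield different (and, by hypothesis, less reliable) instances.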
Now suppose that after having used these nested strategies to make a prediction about an admissions problem, S comes to believe that Jones will be a more successful student than Smith. Suppose further that ASPR is very reliable (i.e., it makes a high percentage of true predictions on admissions problems), but the more general HPP is not (i.e., while it leads to reliable predictions on admissions problems, it leads to very unreliable predictions on other sorts of human prediction problems). The classical reliabilist about justification is faced with a problem. S's belief was the product of a reliable belief‐forming process (ASPR), and so on reliabilist grounds is justified. But S's belief was also the product of an unreliable belief‐forming process (HPP), and so on reliabilist grounds is unjustified. The reliabilist seems committed to claiming that S's belief that Jones will be a more successful student than Smith is both justified and unjustified. Contradiction.
Goldman (1986) tries to solve the generality problem by arguing that the correct way to characterize the mechanism that produces a belief token is in terms of the narrowest (p.180) causally operative process involved in its production. Thus, Goldman would argue that S's belief is justified, since the narrowest causally operative process involved in its production (i.e., ASPR) is reliable. On the other hand, if ASPR had been unreliable and the more general HPP had been reliable, Goldman would deem the belief unjustified. For our purposes, what's right about Goldman's suggestion is that any form of reliabilism need only countenance psychologically real, causally operative processes. But if we take reliabilism to be a theory about epistemic excellence rather than a theory about epistemic justification (i.e., if we accept Strategic Reliabilism instead of classical reliabilism), we can simply avoid the generality problem altogether.
How is that?
Strategic Reliabilism aims to assess reasoning processes rather than belief tokens. Suppose it is possible for a belief token to be produced by a reliable process (on one characterization) and by an unreliable process (on a different characterization). We can pass a positive judgment on the first process and a negative judgment on the second. There is no need for the reliabilist about excellence to demand a unique characterization of the process that produces a belief token. To take the example spelled out above, the strategic reliabilist might judge S's use of ASPR to have been epistemically excellent, though this will depend on the reliability and ease of use of competitor strategies. On the other hand, the strategic reliabilist might judge S's use of the HPP to have been not epistemically excellent (though this again will depend on the quality of the competition). It is trivial that different reasoning strategies can have different, incompatible epistemic properties. So there is no need for the Strategic Reliabilist to demand a unique characterization of the process that produces a belief token. And so there is no generality problem.
We should note that Earl Conee and Richard Feldman take the generality problem to be devastating to classical process reliabilism.
In the absence of a brand new idea about relevant types, the problem looks insoluble. Consequently, process reliability theories of justification and knowledge look hopeless. (1998, p.24)
So if our view is able to overcome the generality problem, apparently this is news.
But it still seems that the generality problem raises a worry about Strategic Reliabilism. After all, a theory of epistemic excellence should tell us whether S's reasoning to the belief that Jones will be a more successful student than Smith was excellent or was not excellent. To do that, the theory needs to decide whether S's reasoning was excellent because the belief was the result of a reliable process (ASPR) or not excellent because the belief was the result of an unreliable process (HPP). So it would appear that the generality problem arises in a slightly new guise for Strategic Reliabilism.
(p.181) This is not right. We take epistemic excellence to be a property of a temporal process that's dedicated to the achievement of certain specific goals. If we want to know whether a state (i.e., a belief) was the result of an epistemically excellent reasoning process, then it's important to specify what reasoning process we mean to assess. If we specify the reasoning narrowly, so that the belief is the result of ASPR, then the reasoning is excellent. If we specify the reasoning broadly, so that the belief is the result of HPP, then the reasoning is not excellent. If we want to know whether the entire voluntary reasoning process, involving both predictors, was excellent, then there is no single, univocal, uncomplicated assessment. In some ways it was excellent, and in some ways it was not. We can describe in quite a bit of detail the precise ways in which the reasoning was excellent and the precise ways in which it was not. But our theory yields no single, univocal, uncomplicated assessment of this episode of reasoning. And surely, that is a virtue of our theory.
But isn't it odd for you to simply say that there are episodes of reasoning that are in some ways excellent, and in other ways not? You don't seem inclined to say much about the epistemic quality of the reasoning in general. Resting content with this conclusion might reasonably strike one as stubbornly unambitious and perversely indolent.
There are two points to make against this worry. First, accurate theories about complicated subjects will sometimes yield complicated judgments. While the desire for simplicity is understandable, the advice often attributed to Einstein seems apt: theories should be as simple as possible, but no simpler. Second, from our perspective, epistemology is a forward‐looking enterprise. So while epistemology inevitably involves passing judgments about the epistemic quality of people's reasoning and beliefs, evaluating the past is not the main point of epistemology. The main point of epistemology is to offer clear, usable criteria for epistemic excellence that will yield judgments about the relative quality of competing reasoning strategies. So going back to the example, the fundamental issue for us is not whether there is some way to characterize S's reasoning so that we may pass simple epistemic judgments. The real issue for epistemology to address is: What are the epistemically better ways S might reason about significant issues (and, of course, what makes those reasoning strategies better)?
But this still seems problematic. Besides insisting that an account of a process be “psychologically real,” you do not favor any particular way of individuating (p.182) belief‐forming mechanisms when it comes to passing judgments of epistemic excellence. But a reasoning episode might involve dozens, or even hundreds, of such processes. Do you really want to say that for some reasoning episodes, every psychologically real belief‐forming mechanism has its own epistemic worth?
Well, yes. There is no theoretical problem with this result. Some might worry that this result will make epistemology impossibly complex. It's true that it might take a superhuman effort to actually try to evaluate all the processes that went into the production of a single belief. But it's also true that as a practical matter, there is seldom a need to evaluate all the processes that went into producing a belief. Our efforts have typically been directed at voluntary reasoning strategies—strategies reasoners can choose to use or not to use. That's not to say that involuntary reasoning processes should be completely ignored. In fact, in our view, epistemology must pay closer attention to such processes. For example, a practical epistemology will offer voluntary reasoning strategies that correct involuntary reasoning processes (e.g., don't trust your visual color experiences in artificial light).
11. Strategic Reliabilism and the canon
I understand that you haven't tried to set your view in the context of (what you have been calling) Standard Analytic Epistemology. But isn't your theory, Strategic Reliabilism, really just a trivial variant of standard reliabilism (e.g., Armstrong 1973, Dretske 1981, Goldman 1986)?
Actually, our theory is unlike any traditional theory of justification defended by proponents of SAE. But we do gladly admit that there are many theories and views in contemporary epistemology that we believe point in the right direction. We will begin by briefly pointing out the ways in which our theory differs from the standard theories of SAE (see chapter 1 for a fuller discussion). We will then turn to some of the views that we think point in the right direction.
There are four ways in which Strategic Reliabilism differs from the standard theories of justification found in the SAE literature.
1. It is not a theory of justification.
2. It does not take as a major starting point philosophers' considered judgments about the epistemic status of beliefs, theories, or reasoning strategies.
3. Strategic Reliabilism is an explicitly cost‐benefit approach to epistemology.
4. Strategic Reliabilism takes significance to be an ineliminable feature of epistemic evaluation.
As far as we know, no contemporary theory of justification has features 1–3. And only contextualism embraces something like 4 (DeRose 1995). Still, some of these ideas can be found in contemporary epistemology.
11.1 Not justification
At least two well‐known philosophers have called for epistemological theories that do not focus on justification. In 1979, Alvin Goldman argued for an approach to epistemology he called epistemics, which would focus on assessing and guiding our mental processes. While we clearly do not share Goldman's appreciation for our “epistemic folkways,” his call for a “scientific epistemology” has not received the response it deserves (1992). In 1990, Stephen Stich defended a pragmatic account of “cognitive evaluation”; it was clearly not a theory for the assessment of belief tokens, but something very much like what we offer here: a theory for the assessment of a person's reasoning strategies. We could cite other philosophers whose work does not focus primarily on justification (e.g., Harman 1986), but the theory we have presented in this book is very much in the spirit of the proposals of Goldman and Stich.
What about virtue epistemology? These theories tend to focus on providing an account of epistemic virtue rather than epistemic justification (although many virtue theorists offer an account of epistemic justification in terms of epistemic virtue). There are, of course, quite different theories of virtue epistemology (e.g., Sosa 1991, Zagzebski 1996). We admire much of Sosa's epistemology. For example, we agree that “it is philosopher's arrogance to suppose mere reflection the source of all intellectual virtue” (Sosa 1991a, 266). Still, we do not take virtue theories of epistemology, as they currently stand, to be fellow travelers. Our primary worry is that current virtue theories are not sufficiently informed by empirical psychology. If we take an epistemic virtue to be (roughly) a habit of mind that tends to lead to truths, it is a thoroughly empirical question which habits of mind will do this. While virtue theorists would agree (e.g., Sosa 1991b), we suspect that they have underestimated how counterintuitive the “virtues” are likely to be. One worry is that insofar as virtues are dispositions that are reasonably stable across contexts, there is (p.184) some reason to wonder whether people exhibit virtues of this sort (see Doris 2002 for a discussion of the moral virtues along these lines). Another problem is that the psychological evidence is likely to show that we just aren't as wise about epistemic matters as we think we are. Given the evidence presented in this book, it must be the case that we have a lot of mistaken beliefs about what habits of mind are virtuous. In a nutshell, the framework of virtue epistemology—roughly, that we should seek to instill in ourselves habits of mind that tend to be reliable—is fine as far as it goes. But to think we have a good intuitive sense of what those habits of mind might be strikes us as optimistic.
11.2. No theory of “our” considered epistemic judgments
We do not begin our epistemological investigations by focusing on our deeply considered epistemic intuitions about knowledge or justification. In contemporary epistemology, this view was championed by Stich in The Fragmentation of Reason (1990) and has found its most forceful defense in the recent empirical work of Weinberg, Nichols and Stich (2001). Judged by the number of our fellow travelers, this is perhaps the most radical aspect of our approach. The diversity findings of Weinberg, Nichols and Stich suggest that the attempt to provide a traditional account of knowledge is just anthropology. Once one grants the essentially anthropological nature of the standard project, one is forced to rethink whether it can lead to a genuine reason‐guiding epistemology. And yet even Goldman, who for a quarter century has called for a “scientific epistemology” that does not focus on justification, insists on the traditional project (1992, 2001). It is time for naturalistically inclined philosophers to reject the traditional project—epistemology as armchair anthropology—as anathema not only to science but also to the essentially normative character of epistemology.
11.3. Costs and benefits
Many naturalistically inclined philosophers have argued against epistemological theories that require that people have brains “the size of a blimp” (in Stich's memorable phrase [1990, 27]). But as far as we know, no philosopher has explicitly proposed a cost‐benefit approach to epistemology. So where does the idea come from? The idea is deeply embedded in psychology. Indeed, this book project received a withering review from a psychologist who was incensed that we would bother wasting ink on the utterly trivial proposition that good reasoning involves the efficient allocation of limited cognitive resources. Whether or not it is trivial, it is certainly not even an implicit tenet of the philosophical discipline charged with the normative evaluation of cognition. It's not that most analytic epistemologists would deny the proposition; it's just that they appear to have no use for it in their theorizing. This is one more example—as if one more were needed—of the yawning chasm that separates the discipline that studies reasoning from the discipline that seeks to evaluate it.
Finally, what about significance? The idea that good reasoning is reasoning about significant matters is, of course, a central idea of the pragmatic tradition in epistemology. And plenty of non‐pragmatists have pointed out that not all truths are created equal. But in recent years, this point has been made best by a philosopher of science, Philip Kitcher (1993, 2001). Not only has Kitcher written forcefully about significance, but the final chapter of The Advancement of Science (1993; see also his 1990) is a fascinating attempt to view social epistemology from a cost‐benefit perspective. There are three features of this emerging trend that give reason for optimism. First, it honors what psychologists have already shown: Good reasoning is an intricate achievement of busy brains in complex environments. Second, treating cost‐benefit measures as an essential component in epistemology allows economics and psychology—the current and future tools of public policy—to recruit and assimilate the normative, theory‐building efforts of properly trained epistemologists. The third reason for optimism is more self‐serving: This approach places epistemology not just where it belongs, but where this book began—in the philosophy of science, and in so doing, in science itself.