Mind, Brain, and Free Will

Richard Swinburne

Print publication date: 2012

Print ISBN-13: 9780199662562

Published to Oxford Scholarship Online: January 2013

DOI: 10.1093/acprof:oso/9780199662562.001.0001



Free Will

Chapter: 7 Free Will
Source: Mind, Brain, and Free Will
Author(s): Richard Swinburne
Publisher: Oxford University Press
DOI: 10.1093/acprof:oso/9780199662562.003.0008

Abstract and Keywords

Contrary to language-of-thought theory and in agreement with connectionism, there could not be causal laws relating types of particular conscious events to types of particular brain events, but only ones relating total conscious states to total brain states. Conscious events include events of innumerable different kinds (all totally different in nature from brain events) which cannot be measured on common scales; and no human at a given time has the same brain state as any human at any other time, or the same conscious state when considering difficult moral decisions. So no total deterministic theory of which brain events cause and are caused by which conscious events could have enough evidence in its favour to be well justified. Hence we should believe that things are as they seem—that when we make difficult moral decisions we have free will. Neuroscience can show the influences on us, but cannot predict individual decisions.

Keywords: connectionism, free will, neuroscience, moral decisions, language-of-thought theory

1. Moral beliefs and the scope for decision

I argued in Chapter 4 that brain events often cause mental events including conscious events, and that conscious events often cause brain events and also other conscious events. Among the conscious events which cause brain events are intentions, and the brain events which they cause in turn cause public behaviour. I argued in Chapter 5 that intentions are simply the intentional exercises of causal influence, normally via a brain event in order to produce some bodily movement and thereby affect the world in a certain way. I argued in Chapter 6 that humans are pure mental substances. So humans are pure mental substances who intentionally cause their bodies to move in certain ways. I turn in this chapter to examine the extent to which humans are caused by other events to form their intentions, that is, to exercise causal influence. In this section I consider the influence of mental events of other kinds on the formation of our intentions.

Humans are in part rational beings, and—in the respect that we form the intentions which we do because we have reasons for forming those intentions—fully rational. Most of our intentions are intentions about how to fulfil some other intention, and so in the end about how to fulfil what I called our ‘ultimate’ intentions. These intentions about how to fulfil ultimate intentions I will call ‘executive intentions’, our reason for having which is to fulfil an ultimate intention. Our ultimate intentions determine our executive intentions. If an agent has only one ultimate intention, and a strong belief about what is the quickest way to fulfil that intention, they inevitably form an intention to take that way. If—when staying at a hotel—I form an ultimate intention to go to bed, and believe strongly that my hotel bedroom is number 324 on the third floor, I will form an executive intention to go to the third floor and look for number 324. If I believe strongly that the lift is situated at the end of the corridor, and that the lift will provide the quickest way to get to the third floor, I will form the intention to go to the end of the corridor. And so on. The dynamics of the interaction of beliefs and ultimate intentions in forming executive intentions are more complicated when an agent’s ultimate intention is more complicated, for example, if the agent has an ultimate intention to achieve two separate goals, the intention to achieve one being stronger than the intention to achieve the other. I may, for example, intend to go to bed, and also intend to stop at a hotel shop on the way so long as the extra time involved is not longer than five minutes. My belief about what is the quickest way of fulfilling the combined intention will be different from my belief about what is the quickest way of fulfilling the single intention. The dynamics become more complicated still if I have competing beliefs of different strengths, for example, if my belief that the lift is at the end of the corridor is only somewhat stronger than my belief that it is in the other direction, where there are also stairs by which I can reach the shop, albeit less quickly than by the lift. We do not of course normally go through an explicit reasoning procedure in working out how to execute our intentions (and we could not normally ascribe numerical values to the relative strengths of the ultimate intentions and the probabilities of different ways of fulfilling them). I am claiming only that we respond in the way that we believe is probably the quickest way to execute our intentions, given their relative strengths. When we believe that there are two or more different equally quick ways to execute some ultimate intention (and no quicker way), we need to choose between these ways by an arbitrary decision, and any such decision will constitute a rational way of executing our ultimate intention in the light of our beliefs about how we can do so.1
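
The decision procedure just sketched can be made concrete with a toy calculation. The following is my own illustrative model, not Swinburne's formalism: it treats forming an executive intention as picking the route with the least expected time, weighting each route by the strength (probability) of the relevant beliefs. All names and figures are invented.

```python
# Toy model of executive-intention formation (illustrative only).
def expected_time(route, beliefs):
    """Expected time of a route: sum over contingencies of
    (believed probability of the contingency) x (time under it)."""
    return sum(beliefs[c] * t for c, t in route.items())

# Beliefs of different strengths: the lift is probably at the end of the corridor.
beliefs = {"lift_at_end": 0.6, "lift_other_way": 0.4}

# Candidate executive intentions, with the time (in minutes) each route
# would take under each contingency.
routes = {
    "walk to the end of the corridor": {"lift_at_end": 2.0, "lift_other_way": 6.0},
    "walk the other way (stairs there)": {"lift_at_end": 5.0, "lift_other_way": 3.0},
}

# Form the intention believed to be probably the quickest way.
best = min(routes, key=lambda r: expected_time(routes[r], beliefs))
print(best)  # "walk to the end of the corridor": 3.6 min expected vs 4.2 min

# If two routes tied on expected time, nothing here would select between
# them; as the text says, an arbitrary decision would then be needed.
```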

Our reasons for forming some particular ultimate intention will be that we desire to do so and/or that we believe that it is in some way a good thing to do so for a reason other than that we desire to do so. The reader will recall from Chapter 3 that I understand by an agent having a desire to do some action, the agent having an inclination to do that action not solely because of a belief that doing so would be a step towards achieving some other goal. A desire which leads to an ultimate intention may be a very short-term one; someone may swear simply because they desire to swear. But of course most of us have desires to achieve longer-term goals. Normally we regard fulfilling any desire as such to be a good thing. But we also often believe that an action is intrinsically good to do for a reason independent of whether we desire to do it; and we sometimes believe that the intrinsic goodness of doing an action makes it on balance good to do even if we have a strong desire not to do it.

Value beliefs (in the sense in which I shall understand this notion) are beliefs about the objective intrinsic goodness or badness of doing actions of different kinds, and about their overall goodness or badness (that is, whether they are good or bad on balance when all their different properties are weighed together). I mean by the beliefs concerning the ‘objective’ goodness or badness of actions that the beliefs are beliefs that their goodness or badness is a fact about certain actions which does not depend on the believer believing it to be such or desiring to do it. Value beliefs as such, unlike other beliefs, motivate us. All other beliefs need to be combined with some desire in order to incline us to act—a belief that there is food in the cupboard will only lead to any action when combined with some desire, for example, to eat some food. A belief that an action is good in this sense, however, by itself gives the believer a reason for doing it and thereby at least a minimum inclination to do it, and a belief that it is bad gives that person a reason for not doing it and thereby at least a minimum inclination not to do it. And the stronger the value belief, the stronger the inclination to conform to it (although this inclination may still be weak). An agent has most reason to do the action which is—the agent believes—best overall among incompatible alternative actions. But the agent may also have a stronger desire to do some different alternative action; and they will then have to choose whether to do what they most desire or what they believe would be best to do. (The reader will recall from Chapter 3 that I define the strongest desire as the one on which the agent would act automatically and naturally if they had no value belief that it would not be best to do so; the second strongest desire is the one on which the agent would act if they had no value belief that it would not be best to do so and did not have the strongest desire. And so on.)

A person’s values may be of a very peculiar kind. A person may believe, for example, that the only actions worth doing by them or anyone else are actions of walking on alternate paving stones. But such beliefs will still be value beliefs if the agent sees them as providing a reason for doing the action, which may conflict with the agent’s desires. The agent who believes that some action would be overall best because it would be an action of walking on alternate paving stones may still feel tired and fed up with doing such actions, and have a strongest desire not to bother doing them in future. In that case the agent will have to choose between forcing him or herself once again to do the overall best action (as they believe it to be), and yielding to a desire not to do it. A desire to do an action other than the best is naturally called a temptation.

It would of course be very peculiar for someone to have as their only value belief that the only actions worth doing by them or anyone else are actions of walking on alternate paving stones. When our value beliefs overlap substantially with the beliefs of most other humans about the overall goodness or badness of actions of different kinds, we recognize that our value beliefs and theirs are beliefs about a special kind of overriding goodness, and that the value beliefs of other such humans which we do not share, as well as the ones which we do, are beliefs of this kind. I shall call all such beliefs ‘moral beliefs’. Although much of what I shall have to say about conflicts between desires and moral beliefs applies, I believe, to all conflicts between desires and value beliefs of any kind, for reasons of space I will concern myself only with conflicts of the former kind, because that is the form which the desire/value belief conflict takes in almost all of us. Because our moral beliefs are of crucial importance in the formation of our intentions, and also because—I shall be arguing in Chapter 8—we need them in order to have a certain kind of free will, I need to say a lot more about what I understand by a ‘moral belief’, the relevance of some of which will not be apparent until Chapter 8.

Although there is in the world a wide diversity of beliefs about which actions are good or bad, overall good or overall bad, almost all of us have some beliefs in common with most of those who disagree with us about some of the properties which make actions overall good or bad—different beliefs in common with different groups; and many of us share many such beliefs with many others. The community of all humans, I suggest, is a community with overlapping beliefs of this kind. Among such beliefs about which actions are good are beliefs about which actions are obligatory to do (of overriding importance to do); and among such beliefs about which actions are bad are beliefs about what is obligatory not to do (of overriding importance not to do), that is, wrong to do. (I shall understand by a ‘right’ action one which is not wrong.) Obligations are debts to others. Hence, we suppose, it is normally more important to fulfil one’s obligations than to do good actions which are not obligatory; and it is more important not to do what is wrong than not to do bad actions which are not wrong. Almost all people agree that—except perhaps under certain circumstances—causing pleasure and saving life are good actions; keeping promises, feeding and educating their children, caring for their aged parents, are obligatory actions; telling lies, killing or wounding others who have done no harm are wrong actions. But then different groups put different qualifications on these very general claims—killing is good if it is a punishment for serious wrongdoing, pleasure is good only if it is pleasure at what causes others no pain; and so on. Hence the considerable extent of disagreement about what is overall good or bad or of overriding importance to do or not to do. But the extent of this disagreement must not obscure the fact that the disagreement takes place within a network of considerations about which there is very considerable agreement. We thus derive the concept of a ‘moral’ belief from similar examples of beliefs about which actions are ‘overall’ good or bad, or of ‘overriding’ importance to do or not to do, where the reasons for doing these actions have some connection with the beliefs of others of us about these matters.

I stress the wide sense in which I am using the expression ‘moral belief’. I am not using it in any of the many different narrower ways in which it is sometimes used—for example, merely to denote a belief about obligations, or a belief about obligations to other humans, or a belief about obligations in respect of personal relations. A belief that it is better for me to give a certain amount of money to feed the hungry than to use that money for a foreign holiday which I need in order to refresh myself is in my sense a moral belief. But so too, on this definition, would be a belief that it would be better for me to use the money for a foreign holiday than to give it to feed the hungry—so long as my belief fits in with beliefs about the overall goodness or badness of actions which many other people have. It may fit in with such beliefs, for example, if I believe it because I believe that I am weary and that it is good that everyone (not only I) should refresh themselves when weary. (In such cases people sometimes say ‘I owe it to myself’ to do the action.)

My definition of ‘morally good’ actions includes actions of benefiting others in ways in which we have no obligation to benefit them; these are ‘supererogatory’ good actions. No one is obliged to sacrifice their own life to save the life of some stranger, but it is a supererogatory good action if they do (so long as they do not thereby fail to fulfil an obligation to some third person), an action even better than merely fulfilling an obligation. My definition of ‘morally good’ actions also includes actions which are worthwhile even if they benefit no one except oneself—for example, keeping physically fit, or learning a foreign language, or sculpting a beautiful ice-statue which will be seen by no one else and which will melt the next day. Such actions are clearly not obligatory. Nevertheless, most of us are inclined to think, it is good if someone does not waste their ‘talents’, but develops and applies them creatively. Plausibly also there are bad actions which are not wrong—for example, slouching in front of the TV all day watching pornographic films, even if this wrongs no one. I call such actions ‘infravetatory’ actions.

Moral beliefs as such, I suggest, like all value beliefs and unlike other beliefs, motivate us. I could not believe that some action was really morally good to do (as opposed to being what other people call ‘morally good’) and yet not see myself as having a reason for doing it. And I could not see myself as having a reason for doing it unless I had some inclination to do it. And the better I believe some good action to be, the greater as such is my inclination to do it. But such a moral inclination may be weak, and agents may show ‘weakness of will’ in yielding to some incompatible inclination (including merely the inclination not to do the relevant action) instead.

To call some mental event a moral ‘belief’ implies that it is a belief in a proposition about how things are; one which in the believer’s view corresponds to how things are. I claim that almost all of us have moral beliefs in this sense. Some philosophers have seemed to deny this, claiming that ‘moral beliefs’ are really not beliefs at all, but merely attitudes towards, in my terminology desires about, the propositions said to be ‘believed’. On such a ‘non-cognitivist’ view, to ‘believe’, for example, that it is always wrong for a state to allow capital punishment (i.e. allow the imposition of the death penalty for a crime) is merely to have a desire that capital punishment not be practised, or that the agent and others campaign against it.2 Now certainly some people may not have moral beliefs (in my sense) but merely desires about the occurrence of actions about which the rest of us have moral beliefs, but I suggest that almost all of us have moral beliefs. We believe that the propositions we are said to ‘believe’ would be true independently of whether we believed them. It may however be the case, as other philosophers have maintained, that we are under an ‘error’ or ‘illusion’ in believing that propositions about the objective goodness or badness of actions are or even could (logically) be true; and that therefore we ought to regard our attitudes to those propositions as mere desires. I will call those who hold that some moral beliefs are true moral objectivists, in contrast to moral subjectivists who deny that any moral beliefs are true.3 My concern in this chapter being with the effects of the beliefs which most of us have, not with whether those beliefs are or could be true, I need to take no view at this stage about whether moral objectivism is correct. I assume merely that almost all of us are in fact objectivists about morality,4 and I shall now spell out what moral objectivism involves. We believe that certain particular actions are good and other particular actions are bad, and we believe that if anyone disagrees with us they are mistaken. For example, most people believe that Hitler did a morally wrong action in commanding the extermination of the Jews. We do, however, of course differ from each other considerably in our beliefs about which actions are morally good, or bad; and some people believe that most actions are morally indifferent (neither good nor bad).

Like our non-moral beliefs, our moral beliefs are held partly because of other beliefs which we hold and change as those other beliefs change. Moral beliefs have logical connections of two kinds with other beliefs, which influence how they change as the other beliefs change. The first connection arises from the logically necessary truth that moral properties (being good or bad, obligatory or wrong) supervene on non-moral properties, often called ‘natural properties’.5 In Chapter 1 I illustrated the concept of supervenience by the example of the moral theory of utilitarianism, according to which moral properties supervene on hedonic properties; different moral theories have different views from utilitarianism about which are the non-moral properties on which moral properties supervene. What this supervenience amounts to is that particular actions are morally good or bad, right or wrong, because of some non-moral properties which they have. Thus, plausibly what Hitler did on such and such occasions in 1942 and 1943 was morally wrong because it was an act of genocide; what Mother Teresa did in Calcutta was good because it was an act of feeding the starving; and so on. No action can be just morally good or bad; it is good or bad because it has certain other non-moral properties—those of the kinds which I have illustrated. And any other action which had just those non-moral properties would have the same moral properties. The conjunction of non-moral properties which gives rise to the moral property may be a long one or a short one. It may be that all acts of telling lies are bad, or it may be that all acts of telling lies in such and such circumstances (the description of which is a long one) are bad. But it must be that if there is a (logically and so metaphysically possible) world W in which a certain action a having various non-moral properties (e.g. being an act of killing someone to whom the killer had a certain kind of relation) was bad, there could not be another world W* which was exactly the same as W in all non-moral respects, but in which a was not bad. A difference in moral properties has to arise from a difference in non-moral properties. If a certain sort of killing is not bad in one world, but bad in another world, there must be some (logically contingent non-moral) difference between the two worlds (e.g. in social organization or the prevalence of crime) which makes for the moral difference.

The supervenience of moral properties on non-moral properties must be supervenience of the kind analysed in Chapter 1. Our concept of the moral is such that it makes no sense to suppose both that there is a world W in which a is wrong and a world W* exactly the same as W except that in W* a is (overall) good. It follows that there are metaphysically necessary truths of the form ‘If an action has non-moral properties A, B, and C, it is morally good’, ‘If an action has non-moral properties C and D, it is morally wrong’, and so on. If there are moral truths, there are necessary fundamental moral truths—ones which hold in all worlds. I re-emphasize that, for all I have said so far, these may often be very complicated principles—for example, ‘All actions of promise breaking in circumstances C, D, E, F, and G are wrong’, rather than just ‘All actions of promise breaking are wrong’. All moral truths are either necessary (of the above kind) or contingent. Contingent (particular) moral truths (e.g. that what you did yesterday was good) derive their truth from some contingent non-moral truth (e.g. that what you did yesterday was to feed the starving) and some necessary moral truth (e.g. that all acts of feeding the starving are good). The fundamental moral truths are necessary truths. The only way to deny this latter claim is to deny that there are true moral propositions.

Given this logical supervenience of the moral on the non-moral, it follows that our particular moral beliefs are causally sustained by a conjunction of particular non-moral beliefs, and fundamental moral beliefs, that is, beliefs in some necessary moral principles. Someone’s belief that executing some particular person Jones, found guilty of murder, is right might, for example, be causally sustained by some non-moral belief that the prospect of capital punishment for murder deters would-be murderers, and the fundamental moral belief that it is good to deter would-be murderers. In practice most people will have several relevant fundamental moral beliefs which need to be weighed against each other in order to determine whether a particular action about which they have several relevant non-moral beliefs is overall good or bad, right or wrong. Thus someone may have, as well as the previous beliefs, the fundamental moral belief that it is bad to execute someone unless a jury has found them guilty by a unanimous vote, and the non-moral belief that Jones was not found guilty by a unanimous vote. They may hold a further explicit moral principle about how to weigh fundamental principles against each other, or merely believe that, with respect to any action which has the same non-moral properties as some particular action, the balance of principles favours one moral belief over another, for example the belief that executing the condemned man is wrong over the rival belief. But the general point remains that particular moral beliefs are sustained in part by non-moral beliefs, and so will change as the non-moral beliefs change. Or rather this will happen unless the change of non-moral belief simultaneously causes a change of fundamental moral belief. A change of the latter kind would be an irrational process, for while non-moral propositions together with fundamental moral propositions make particular moral propositions probable, non-moral propositions do not by themselves make fundamental moral propositions probable.

I analysed in Chapter 2 some of the criteria for when some (non-moral) propositions make other (non-moral) propositions probable, and these are the criteria determining when a rational believer will change their non-moral beliefs in the light of other beliefs. I derived these criteria by considering what most of us would say after reflection are the actual criteria which we would be right to use in forming probable beliefs on the basis of other beliefs, and so—to the extent to which my account of this is correct—humans must often form their particular moral beliefs in the light of new non-moral beliefs using these criteria. Humans are not always rational in their processes of belief formation, but they quite often are.

Particular moral beliefs are also causally sustained in part by fundamental moral beliefs, and change (or rationally should change) as they change. It is, I suggest, a highly plausible contingent truth that people do often change their beliefs about fundamental moral principles by using the method of reflective equilibrium (as described in Chapter 2). Thus we might be told by our parents and teachers that it is morally obligatory to feed one’s family or close neighbours if they are starving, yet that it is not merely not obligatory but wrong to feed foreigners if they are starving. But we may come to doubt the latter claim by reflecting that the obvious simple principle which makes the former obligations obligatory is that it is good to feed any starving human. Human need is the same in the cases of all who are starving, and what is good for our family and neighbours must be good for foreigners also; and so even if, given limited resources, we have greater obligations to those close to us, it cannot be bad, let alone wrong, to feed foreigners. Or we may be told that it is morally right (i.e. not wrong) for the state to execute those found guilty of murder and for anyone to kill in order to save their own or others’ lives, and also that killing in a duel to defend one’s honour is morally obligatory. But we may then come to derive through reflection on the former situations and other possible situations where we are told that it is not permissible to kill, a general principle that someone’s life is a very valuable thing, so valuable that it should only be taken from them to save a life or in reparation for a life which they have taken away; that is, that no one should ever try to kill anyone except to prevent them killing someone or as a punishment for killing someone. So we conclude that although it is not wrong to kill in a war to save the lives of fellow soldiers or to execute a convicted murderer, it is wrong to kill in a duel to defend one’s honour. This kind of reflection can lead each of us and (over the centuries) the whole human race to improve our grasp of what are—on the objectivist view—the necessary truths of morality. This process is often facilitated by personal experience of some events of the kinds at issue: those who think that torture is sometimes not wrong might well change their mind when they actually see someone being tortured and so understand more fully what torture involves.

We have seen that on a moral objectivist view particular moral propositions rely for their justification in part on fundamental necessary moral propositions. What kind of necessity do these latter propositions have? My own belief is that the fundamental moral principles are logically necessary truths; the sentences which express them are true in virtue of the senses of such phrases as ‘overall good’ and ‘overriding importance’. I analysed in Chapter 2 the methods by which we can resolve disagreements about whether some sentence (and so the proposition which it expresses) is a logically necessary truth. While the paradigm way to show some sentence to be logically necessary is to deduce a contradiction from its negation, any attempted deduction of a contradiction from the negation of a purported necessary moral principle is likely to be controversial. Yet, as we have just seen, the less direct method of reflective equilibrium does offer hope for progress towards agreement. And given my claim that almost all humans have quite a lot of paradigm beliefs about the ‘overall goodness’ or ‘overriding importance’ of various kinds of cases of actions in common, and given that humans have similar cognitive mechanisms for extrapolating from particular examples to the implicit general principles, it follows that reflection on those paradigm examples is bound to yield to some extent a common view of the principles involved in them. So the view that the true fundamental moral propositions are logically necessary explains the utility of the method of reflective equilibrium in beginning to secure agreement about what they are.

Other philosophers have held that the fundamental moral propositions are metaphysically but not logically necessary, that is, are a posteriori necessary.6 On that view ‘overall goodness’ (or whatever) picks out properties by (in my terminology) uninformative designators, but what makes some action ‘overall good’ (or whatever) are properties which underlie these. If the fundamental moral principles are in this way metaphysically but not logically necessary (and so necessary a posteriori), it needs to be explained how humans can acquire many justified beliefs about morality and why the method of reflective equilibrium would enable them to acquire many more. It seems to me, however, that a moral subjectivist could explain the utility of reflective equilibrium. For while a moral subjectivist cannot regard it as a method for discovering some necessary moral truth, they can regard the similarity between (for example) one’s family who are starving and foreigners who are starving as giving someone an inclination to take the same attitude towards both groups, and so perhaps no longer to regard feeding starving foreigners with disapproval.

Those who begin to align their moral judgements about the overall worth or overriding nature of different kinds of actions by the method of reflective equilibrium must already have to some extent a shared understanding of what it is to believe that some action is ‘overall good’ or ‘of overriding importance’. So although different people derive their concept of the morally good from different kinds of examples, there must be enough overlap between the examples to give rise to a sufficiently similar concept of moral goodness for them to begin to sort out which actions are morally good, that is, on which non-moral properties moral goodness supervenes. By contrast, there seem to me no grounds for saying that someone who does not share with many of us quite a few beliefs about which actions are overall good and bad, of overriding importance to do or to avoid doing, has ‘moral beliefs’ in anything like my sense. If someone believes that no action is ‘overall good’ except walking on alternate paving stones, or of ‘overriding importance’ except killing anyone who lives close to them, I suggest that they do not have any belief at all about what is morally good or obligatory. This is because that person does not understand ‘overall’ goodness and ‘overriding’ importance—the terms by which I am elucidating our concepts of ‘moral’ goodness and obligation—in nearly the same kind of way as the rest of us. Even if that person is angry with him or herself if they do not act on their weird beliefs, that person is not angry with him or herself for the same kind of reason as other humans are when they tell a lie while believing that it was ‘wrong’ to do so. This person’s ‘conscience’ would not have the same flavour as ours. Such a person is a psychopath beyond moral assessment.

So much for the way in which particular moral beliefs are causally sustained by non-moral beliefs and fundamental moral beliefs, and the way in which fundamental moral beliefs in their turn are sustained by particular moral beliefs, which in turn sustain new particular moral beliefs.

Beliefs, I claimed in Chapter 3, are, at a given time, involuntary states; we cannot change them at will; and normally too, I claimed, the desires of humans are also involuntary. It follows that if I have equally strong desires (felt inclinations) to do any of two or more available actions (e.g. to give money to this charity or to that charity) and no stronger rival desire, but believe that one of these actions is the overall best action to do (i.e. the one which I now understand as the ‘morally best’ action), I will inevitably form the intention to do the latter action. For my reason provides the extra inclination, which I have no desire to oppose, beyond the ‘felt’ inclination of my desires, which leads to action. If I believe that it would be equally good to do any of two or more incompatible actions (e.g. to lunch at this restaurant rather than that one), and that there is no better rival action, but I desire to do one of these actions more than the others, I will inevitably do that one, for I have no reason not to do so. In either of these circumstances the formation of my intention does not require a decision between alternatives. I form the intention I form inevitably.

But when I have equally strong desires to do any of two incompatible actions (and no stronger desire to do a different incompatible action) and believe each of the actions to be equal best actions, neither reason nor desire can determine what I will do. I will have to make an arbitrary decision. In these circumstances the formation of whatever intention I form will be fully rational, for whatever I do, I have a reason to do it, and no better reason for not doing it. The same applies when I have to form an executive intention about which of two equally quick ways to take in order to fulfil an ultimate intention.

Finally there is the situation where the action which I most desire to do (or each of the actions which I most desire equally to do) is incompatible with what I believe to be the morally best action to do. Here too neither reason nor desire can determine which intention I will form; it cannot be determined solely by the strengths of my desires relative to each other or by my belief about which action would be best to do. For as I am understanding these terms, beliefs and desires are measured on incommensurable scales; a belief is strong insofar as the agent believes it to be very probably true, whereas a desire to do an action is strong insofar as the agent is spontaneously and naturally inclined to do it. A moral belief may be strong, while the inclination to act on it may be weak.

Here I have to decide whether to yield to desire and do the less good action, or to force myself—contrary to my strongest desire—to do the best action. This is a familiar situation vividly described by Plato in Phaedrus,7 and by St Paul in his letter to the Romans,8 when yielding to desire manifests ‘weakness of will’. This situation I will call the situation of difficult moral decision. Both in the situation of having equally strong desires and moral beliefs, and in the situation of conflict between strongest desire and moral belief, if the agent’s intention is fully caused, the route of causation must involve brain events—since the (accessible) mental events cannot determine which action the agent will do. I should add that since moral beliefs as such motivate—that is, incline us to act—some desire may be the strongest partly (or for some occasional saintly agents, wholly) because the agent desires to do the best action. In that case the agent will inevitably do the best action. We may suppose that when Luther took the path which led to the Reformation with the words ‘I can do no other’, that was his situation. But, too often for many of us, our strongest desires conflict with our moral beliefs and so we have to decide whether to yield to felt inclination and follow our strongest desire, or to resist it and do (what seems to us to be) the best action.

When the issue is important, we often deliberate about what to do before reaching a decision—as a result of a prior decision to deliberate, itself motivated by a desire to reach, or a belief that it would be morally good to reach, a well justified belief about which action would be the best to do or would best satisfy our considered desires. Such deliberation consists of intentionally bringing about thoughts relevant to whether or not to do so-and-so, and drawing out their consequences—which is itself a mental intentional action. The process is completed when we form an intention, that is, decide. Sometimes a decision for immediate execution is simply the conscious first stage of having the intention which immediately influences our movements. At other times, and especially when it is a decision to do some important action which would take some time to execute, there will be a recognizable short gap between the decision and the first stage of its execution. But if the decision is one for execution in the more distant future, then, if we do not forget it, it remains ready to guide our movements at the relevant time, though as a continuing mental state, not as an event of which we are all the time conscious.

As well as being in part creatures of reason, we humans are also largely creatures of habit. Even most of our ultimate intentions are habitual ones. We go to lunch at the same restaurants, watch the same television programmes, and attend the same football matches—out of habit. It is the same desire or the same belief about what would be best to do which inclines us to form the same intentions when similar circumstances recur. Most of our executive intentions are also habitual—we have particular routes along which we walk to the restaurant, certain routines we follow in shaving or dressing. This is because we have over periods of time the same beliefs about the quickest ways to fulfil our ultimate intentions. And when the action is a fairly short-term one and easy to do, and especially when it results from a desire rather than a moral belief, the causal role of the resulting intention is ‘permissive’ rather than ‘active’. I let my body carry on the way it is tending to do.

I have argued that beliefs and desires are caused, and I shall assume (since nothing in my argument turns on this) that all other mental events with the possible exception of intentions are also caused. I shall assume that all such causation (with the possible exception of causation of intentions) is deterministic and of a law-like kind—that is, for any such brain or mental event which causes another one, any event of the same type as the cause is a sufficient causal condition of an event of the same type as the effect. Clearly some desires and sensations are caused directly by brain events without any other mental events having much influence on the causal process; desires to drink or sleep, and sensations of pain or noise are normally9 surely in this category. But almost all our propositional events—most of our desires and, I suggest, all of our beliefs and occurrent thoughts—couldn’t be had without belonging to mutually sustaining packages of other beliefs and desires; or be conscious without being sustained by other conscious beliefs and desires. I could not have a desire to be Prime Minister without it being sustained by many beliefs about what prime ministers do, as well no doubt as by some brain events causing me to desire to be famous or powerful. And I couldn’t even come consciously to believe (through perceiving it) that there is a lectern in front of me without having many other (at least to some degree) conscious beliefs, such as a belief that lecterns are used for giving lectures and so a belief about what lectures are.

I shall understand by a total conscious state at a time all a person’s conscious events happening at that time, and by a total brain state at a time all that person’s brain events happening at that time. Many total conscious states are large ones, containing many beliefs and sensations (consider merely the sensory content of our visual field and the beliefs which we acquire about the objects we see, as we enter a room), and often also occurrent thoughts and intentions (we often have some ultimate intention and some executive intention we are trying to fulfil). The part of the brain state which sustains even the conscious part of any mental state will also be a large state; recent neuroscience suggests that it consists in a ‘temporal synchrony between the firing of neurons located even in widely separated regions of the brain’, between which there are ‘reciprocal long-distance connections’, a synchrony which attains a ‘sufficient degree and duration of self-sustained activity’.10 Different conscious events are sustained by different variants of this pattern of activity. So if we are to make predictions of future conscious events and brain events, we would need a theory of which aspects of a total brain state (which types of brain events) cause or are caused by which aspects of a total mental (including conscious) state (which types of mental events). Then we could predict that any new total brain state which contained a certain type of brain event would cause a certain type of conscious event, including perhaps a certain type of intention.

2. Obstacles to assembling data for a mind–brain theory

To have evidence in favour of such a theory we would need to acquire a lot of data in the form of a very long list of particular (‘token’) total conscious states occurring approximately simultaneously with token total brain states. To get information about which conscious events are occurring, we must depend ultimately—for reasons given in Chapters 3 and 4—on the reports of subjects about their own conscious events. There are, however, two major obstacles which make it difficult or impossible to get full information from subjects.

The first obstacle concerns the ‘propositional’ mental events, thoughts, desires, beliefs, and intentions. The problem is that while the content of most of these events can be described in a public language, as I commented in Chapter 1, its words are often understood in slightly different senses by different speakers. One person’s thought which they describe as the occurrent thought that scientists are ‘narrow-minded’, or the belief which that person describes as the belief that there is a ‘table’ in the next room, or the desire which they describe as the desire for a ‘jolly’ holiday in Greece has a slightly different content from another person’s thought, belief, or desire described in the same way. What one person thinks of as ‘narrow-minded’ another person does not, some of us count any surfaces with legs as ‘tables’ whereas others discriminate between desks, sideboards, and real tables, and different people have different views about what would constitute a holiday being ‘jolly’.11 This obstacle can be overcome by questioning subjects about exactly what they mean by certain words. But it has the consequence that, since beliefs and so on are the beliefs they are in virtue of the way their owners think of them, relatively few people have exactly the same types of beliefs, desires, etc. as anyone else—which makes the kind of experimental repetition which scientists require to establish their theories very difficult to obtain.

There is, however, a much larger obstacle to understanding what people tell us about their sensations, which I discussed in Chapter 3. This is that we can understand what they say only on the assumption that the sensations of anyone else are the same as we would ourselves have in the same circumstances—and that is often a highly dubious assumption. This obstacle applies to all experiences of colour, sound, taste, and smell (the ‘secondary qualities’). We can recognize when someone makes the same discriminations as we do in respect of the public properties of colour and so on, but we cannot check whether they make the discriminations on the basis of the same sensations as we do. And, I argued in Chapter 3, there are good reasons to suppose that different people do not always make the same discriminations on the basis of the same sensations. Maybe green things look a little redder to some people than to others, or coloured things look fainter to some people than to others, or curry tastes different to different people, when none of these differences affect their abilities to make the same discriminations. The ways things look and feel, however, inevitably affect the way people react to them. Our inability to discover fully how things look and feel to others has, as I pointed out in Chapter 3, the consequence that we cannot fully understand what they mean by some sentence which uses words whose meaning derives from the way things look or feel. For example, our understanding of colour terms derives from the way objects of a certain group look; and if we don’t know exactly how green objects look to someone else, we don’t fully understand what they mean when they describe a house as ‘green’. I did, however, make a qualification to all this in Chapter 3, that while we may be unable to understand the natures of the individual sensations of others, their sensations may exhibit patterns which are the same as some publicly exemplifiable patterns; and so we can know what someone means when they describe a mental image as ‘square’.

3. The high improbability that human behaviour will be predictable

So, bearing in mind these limits to the kinds of data about the conscious events of different subjects we can have, what are the prospects for forming a theory supported by evidence which will not merely explain and so predict how brain events ultimately cause conscious (and other mental) events of other kinds but how these (together with brain events) cause our subsequent intentions? On the account of the criteria for the probable truth of a scientific explanatory theory given in Chapter 2, to be fairly probable a scientific theory must be fairly simple—that is, postulate mathematically simple relations between only fairly few properties of entities of similar kinds—and make many correct predictions therefrom. Mathematical relations can hold only between properties which have degrees, greater or less, which can be measured on some scale.

What makes a scientific theory such as a theory of mechanics able to explain a diverse set of mechanical phenomena is that the laws of mechanics all deal with the same sort of thing—physical objects—and concern only a few of their properties—their mass, shape, size, and position, which differ from each other in measurable ways. (One may have twice as much mass as another, or be three times as long as another.) Because the values of these measurable properties are affected only by the values of a few other such properties, we can have a few general laws which relate two or more such measured properties in all physical objects by a mathematical formula. We do not merely have to say that, for example, when an inelastic object of 100 g mass and 10 m/sec velocity collides with an inelastic object of 200 g mass and 5 m/sec velocity, such and such results; and have a quite unrelated formula for what happens when an inelastic object of 50 g mass and 20 m/sec velocity collides with an inelastic object of 150 g mass and 5 m/sec velocity, and other unrelated formulae for each different mass and velocity of colliding inelastic objects. We can have a general formula, a law saying that for every pair of inelastic physical objects in collision the sum of the mass of the first multiplied by its velocity and the mass of the second multiplied by its velocity is always conserved. As I illustrated in Chapter 2, what made Newton’s theory very probably true is that it contained only four simple laws relating the masses and velocities of all bodies, and made successful predictions about innumerable bodies, small and large. But there can only be such laws because mass and velocity can be measured on scales—for example, of grams and metres per second. And we can extend mechanics to a general physics including a few more measurable quantities (charge, spin, colour charge, etc.) which interact with mechanical quantities, to construct a theory with laws making testable predictions.
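
For concreteness, here is the conservation law in symbols, with a worked instance using the first pair of figures above. The final velocity is my own arithmetic, on the added assumptions that the two bodies move in the same direction and coalesce on impact:

```latex
% Conservation of linear momentum for two inelastic bodies which coalesce:
\[
  m_1 v_1 + m_2 v_2 = (m_1 + m_2)\, v'
\]
% Worked instance with the figures in the text:
\[
  (0.1\,\mathrm{kg})(10\,\mathrm{m/s}) + (0.2\,\mathrm{kg})(5\,\mathrm{m/s})
  = 2\,\mathrm{kg\,m/s},
  \qquad
  v' = \frac{2\,\mathrm{kg\,m/s}}{0.3\,\mathrm{kg}} \approx 6.7\,\mathrm{m/s}.
\]
```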

A mind–brain theory, however, would need to deal with things of very different kinds. Brain events differ from each other in the chemical elements involved in them (which in turn differ from each other in measurable ways) and in the velocity and direction of the transmission of measurable electric charge. But mental events do not have any of these properties. The propositional events (beliefs, desires, etc.) are what they are, and have the influence they do, in virtue of their propositional content (and strength—to which I’ll come shortly), often expressible in language but a language which—I noted earlier—has a content and rules differing slightly for each person. (And note that while the meaning of a public sentence is a matter of how the words of the language are used, the (narrow) content of a propositional event such as a thought is intrinsic to it; it has the content it does, independently of how the subject or others use words on other occasions.) Propositional events have relations of deductive logic to each other; and some of those deductive relations (the relations of mini-entailment) determine the identity of the propositional event. My belief that all men are mortal wouldn’t be that belief if I also believed that Socrates was an immortal man; and my thought that 2 = 1 + 1, and 3 = 2 + 1, and 4 = 3 + 1 wouldn’t be the thought normally expressed by those equations if I denied that it followed from them that 2 + 2 = 4. And so generally. Much of the content of the mental life cannot be described except in terms of the content of propositional events; and that cannot be done except by some language (slightly different for each person) with semantic and syntactic features somewhat analogous to those of a public language. The rules of a language which relate the concepts of that language to each other cannot be captured by a few ‘laws of language’, because the deductive relations between sentences, and so the propositions which they express, are so complicated that it needs all the rules contained in a dictionary and grammar of the language to express them. These rules are independent rules and do not follow from a few more general rules. Consider how few of the words which occur in a dictionary can be defined adequately by other words in the dictionary, and so the same must hold for the concepts which they express; and consider in how many different ways described by the grammar of the language words can be put together so as to form sentences with different kinds of meaning, and so the same must hold for the propositions which they express.

So any mind–brain theory which sought to explain how prior brain events cause the beliefs, desires, etc. which they do would consist of laws relating brain events, with numerically measurable values of transmission of electric charge in various circuits, to conscious (and non-conscious) beliefs, desires, intentions, etc. formulated in propositional terms, and also sensations (of different strengths). The contents of the mental events do not differ from each other in any measurable way, nor do they have any intrinsic order (such that one could be thought of as greater than another). Those concepts which are not designated by words fully defined by other words designating other concepts—and that is most of the concepts we have—are not functions of each other. And they can be combined in innumerable different ways which are not functions of each other, to form the propositions which are the contents of thoughts, intentions, and so on. So it looks as if the best we could hope for is an enormously long list of separate laws relating brain events (of certain strengths) and mental events (of certain strengths), without these laws being derivable from a few more general laws.12

But could we not have at least an ‘atomic’ theory which would state the causal relations of particular types of brain events involving only a few neurons to particular aspects of a total conscious state—particular types of beliefs, occurrent thoughts, etc., the content of which was describable by a single sentence (of a given subject’s language), in such a way that we could at least predict that a belief with exactly the same content would be formed when the same few neurons fired again in the same sequence at the same rate (if ever that happened)?

The ‘language of thought’ hypothesis13 (LOT) is a particular version of such an atomic theory. It claims that there are rules relating brain events and beliefs of these kinds, albeit a very large and complicated set of them. It holds that different concepts and different logical relations which they can have to each other are correlated with different features in the brain. For example, it holds that there are features of the brain which are correlated with the concepts of ‘all’, ‘man’, ‘mortal’, and ‘Socrates’, and that there is a relation R which these features can sometimes have to each other. When someone believes that Socrates is mortal, this relation R holds in their brain between the ‘Socrates’ feature, and the ‘mortal’ feature; when someone believes that Socrates is a man, R holds between the ‘Socrates’ feature, and the ‘man’ feature; and when someone believes that all men are mortal, R holds between the ‘man’ feature and the ‘mortal’ feature. (The holding of this relation might perhaps consist in the features being connected by some regular pattern of signals between them.) The main argument given for LOT is that unless our brain worked like this, the operation of the brain couldn’t explain how we reason from ‘all men are mortal’ and ‘Socrates is a man’ to ‘Socrates is mortal’, since our reasoning depends on our ability to recognize the relevant concepts as separate concepts connected in a certain particular way. Beliefs, thoughts, etc. then, the theory claims, correspond to ‘sentences in the head’.
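
The structure LOT posits can be pictured with a toy model. The sketch below is my own illustration of the picture just described, not a statement of the hypothesis itself: concepts are discrete brain ‘features’, a belief is the relation R holding between two features, and the syllogism goes through by chaining R. (As in the text's example, the one relation R does duty for both predication and quantification.)

```python
# Toy LOT model (illustrative only): a belief is R(feature_1, feature_2).
# R("Socrates", "man") ~ "Socrates is a man";
# R("man", "mortal")  ~ "all men are mortal".
beliefs = {("Socrates", "man"), ("man", "mortal")}

def infer(beliefs):
    """Close the belief set under chaining: from R(a, b) and R(b, c),
    derive R(a, c)."""
    derived = set(beliefs)
    changed = True
    while changed:
        new = {(a, d) for (a, b) in derived for (c, d) in derived if b == c}
        changed = not new <= derived
        derived |= new
    return derived

print(("Socrates", "mortal") in infer(beliefs))  # True: "Socrates is mortal"
```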

I argued earlier, however, that no belief can be held without being sustained by certain other beliefs—for logical reasons; which other beliefs a given belief is believed to mini-entail determines in part which belief the former belief is. Now consider two beliefs, the belief that a particular object is square and the belief that that object has four sides; someone couldn’t hold the first belief without holding the second. So these two beliefs cannot always be correlated with different brain events, since in that case a neuroscientist could eliminate the brain event corresponding to the latter belief without eliminating the brain event corresponding to the former belief. On the other hand these two beliefs cannot always be correlated with the same brain event, since someone can have the belief that the particular object has four sides without having the belief that that object is square. We can generalize this result as follows. In every believer a belief q ‘this object has four sides’ must be sustained by every brain event x which (if it occurred) would sustain any belief r which is such that the believer believes that r mini-entails q—for example, the belief r ‘this object is a rectangle’, the belief ‘this object is a rhombus’, and so on. And every belief which the believer believes to be mini-entailed by q but not to mini-entail q must be sustained also by some brain event other than x. And what goes for the beliefs just discussed clearly applies generally. All the distinct beliefs which any believer has must be sustainable in them by very many different brain events which sustain other beliefs also. That leads naturally to the view that it is the type of the total mental state to which propositional events belong which is correlated with, and so causally related to, the type of a total brain state, without there being correlations between small parts of the mental and brain states. This view is that of connectionism,14 the rival theory to LOT, which holds that mind–brain relations are holistic. Only if connectionists hold, as they often do, that mental events are identical with (or supervene on) brain events, is it an objection to connectionism that, according to it, brain events do not have to each other the kind of relations between sentences and so between propositions characteristic of rational thought. But given my arguments in Chapter 3 to the effect that mental events are events distinct from brain events (and do not supervene on them), mental events can have a sentential structure without the brain events which sustain them having such a structure. So, given connectionism, a mind–brain theory could at best predict the occurrence of some mental event only in the context of a large mental state (a large part of an overall mental state, consisting of many beliefs, desires, etc.) and of a large brain state (events involving large numbers of neurons).
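
The constraint being argued for can be put as a simple check: any brain event which sustains the ‘square’ belief must also sustain the ‘four sides’ belief, but not conversely, so no one-belief-per-brain-event correlation can hold. A minimal sketch (the events e1 to e3 are hypothetical labels of mine):

```python
# Minimal sketch of the mini-entailment constraint (illustrative only).
sustains = {
    "e1": {"square", "four sides"},    # sustaining "square" drags in "four sides"
    "e2": {"rectangle", "four sides"},
    "e3": {"four sides"},              # "four sides" alone is possible
}

# Believed mini-entailments: p mini-entails q.
mini_entails = {("square", "four sides"), ("rectangle", "four sides")}

def respects_entailment(sustains, mini_entails):
    """Whenever an event sustains belief p and p mini-entails q,
    the same event must sustain q."""
    return all(q in bs
               for bs in sustains.values()
               for (p, q) in mini_entails if p in bs)

print(respects_entailment(sustains, mini_entails))  # True for this assignment

# A scheme pairing each belief with its own single brain event would fail
# the check: the event for "square" would not sustain "four sides".
```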

We must suppose that mental events often cause other mental events in a rational way. To deny this would involve holding that the only justified beliefs are those caused by experience, memory, and testimony. We saw in Chapter 4 that no argument could show that beliefs never cause other beliefs—since one is only justified in believing the conclusion of an argument on the basis of its premises if one believes that believing the premises causes one to believe the conclusion. It would be equally self-defeating to believe the conclusion of an argument purporting to show that no-one believes the (p.192) conclusion of an argument because the premises make it rational to believe the conclusion. So (by the principle of credulity) we should believe that things are as they seem to be, that many of us come to believe that some historical, scientific, or philosophical belief is true because we have reached it by a process of rational thought. And the criteria for premises supporting conclusions used by scientists and other investigators are merely sharpened up versions of those used in less sophisticated activities, such as working out the way to go home or the cost of a holiday. As I have already illustrated, the laws of rational thought include the criteria of valid deductive inference, and these can be codified only by lists as long as those of the dictionary and grammar of a human language. They also include the criteria of cogent inductive inference—that is, the criteria of epistemic probability, of which propositions make which other propositions probable, some of which were analysed in Chapter 2. They also include the criteria for forming moral beliefs, analysed earlier in this chapter. But at a given time each person has slightly different criteria from other persons, in part because of the slightly different concepts with different deductive relations, with which their inductive and moral criteria are concerned. And of course humans are not always rational even by their own criteria of rationality, and so we would need laws stating when and how brain events disturb rational processes; these laws would vary with the overall mental and brain states of the subject, and the mental states which disturb rationality would often need to be described in terms of the concepts with which that subject operates (e.g. some particular fixation preventing someone reasoning rationally about a particular subject matter).

As I noted earlier, moral beliefs and desires vary in strength. And so do all other mental events, apart from occurrent thoughts. One person’s sensation of the taste of curry may be stronger than another person’s. One person’s belief that humans are causing global warming and that it is good to prevent this may be stronger than another person’s (i.e. the first person believes this proposition to be epistemically more probable on their evidence than does the second person on their evidence). And one person may have a stronger intention (i.e. may try harder) than another person to bring about some effect.

These differences of strength affect the influence of mental events on each other in a rational way. Someone who dislikes the taste of curry (i.e. desires not to taste that taste) will be more likely to stop eating curry, the stronger is that taste of curry. Someone who has a moral belief of a certain strength that it is good to prevent global warming will be more likely to choose to travel by bus rather than by car, the stronger is their belief that petrol-driven cars cause global warming. The stronger is someone’s intention (i.e. the harder they try) to lift a weight, the more likely it is that the brain event which will cause that person’s arm to raise the weight will occur. So although there cannot be a mathematical law relating changes in types of brain event to changes in types of mental event except in the context of a whole brain state and a whole mental state, perhaps in the context of such a state there could be a law determining how some change of brain event could increase or decrease the strength of a particular mental event. Then maybe we could calculate from the strength of the belief or (p.193) sensation the strength of the intention which they could cause. But in order to determine that influence on intentions in a new situation where there are many conflicting influences, we need a measure of the absolute strength of sensations and so on (not merely of their strength relative to that of a similar event in a different past situation) which can play its role in an equation connecting these; and subjects cannot provide that from introspection. While subjects can sometimes put sensations in order of strength in virtue of their subjective experience, what they cannot do is to ascribe to them numerical degrees of strength in any objective way. People do not have any criteria which would enable them to answer the doctor’s question ‘Is this pain more than twice as severe as that pain?’ There is no clear meaning in someone saying that one pain is or is not twice as severe as another one.15
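The gap between ordinal comparison and numerical measurement can be shown in a minimal sketch (the pains and all numbers below are invented purely for illustration):

```python
# Two numerical assignments agreeing with one ordinal ranking of pains but
# disagreeing about every ratio. All names and numbers are invented.

pains = ["mild headache", "toothache", "migraine"]

scale_a = {"mild headache": 1.0, "toothache": 2.0, "migraine": 3.0}
scale_b = {"mild headache": 1.0, "toothache": 10.0, "migraine": 100.0}

# Both scales preserve the subject's ordering of the pains...
assert sorted(pains, key=scale_a.get) == sorted(pains, key=scale_b.get)

# ...but they disagree about whether the migraine is 'twice as severe'
# as the toothache; the ordering alone fixes no ratios.
print(scale_a["migraine"] / scale_a["toothache"])  # 1.5
print(scale_b["migraine"] / scale_b["toothache"])  # 10.0
```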

The same applies to beliefs and other propositional events. There is a long philosophical tradition of trying to measure the strength of a subject’s belief in some proposition by which actions that person is prepared to do (in effect, which intentions they would form) in different circumstances.16 But even if we take the subject’s word for how they would act in some different circumstances, we cannot use this information to measure the strength of the subject’s beliefs unless we could measure the strength of their desires (and moral beliefs) in those different circumstances on some scale. For, as we have seen, what someone does depends not only on what that person believes will result from their actions, but also on how much they desire (or believe it good to achieve) that result. We can measure the strength of a subject’s desires relative to other desires by what a subject tells us; and if we make the implausible supposition that the subject’s brain state and the rest of their mental state remain exactly the same, we may be able to make some predictions about which intention the subject will form without knowing the absolute value of their desire. It would, for example, follow that if the subject got food from a cupboard (p.194) yesterday and today had a desire to eat stronger than yesterday’s desire, and a belief that there was food in the cupboard no weaker than yesterday’s belief (and everything else in the subject’s mind and brain were the same), they would form the intention to get the food. But in order to make a prediction about what the subject will do in any new circumstances (where their other desires and brain states are slightly different) we need to know the absolute value of the strength of the subject’s beliefs and desires in that situation on an objective scale—just as we need to know the exact masses and velocities of two billiard balls in order to predict accurately what their velocities will be after a certain collision. And all of this is complicated by the fact that people do not always act on their moral beliefs in a way which reflects the ‘strength’ (in the sense in which beliefs are strong) of those beliefs. All of this suggests that we could not derive from data about what a subject believed that they would do under different circumstances any absolute situation-independent numerical values of the strengths of their beliefs or desires.
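That underdetermination can be made vivid with a toy expected-desirability model (the model and every number in it are assumptions introduced for illustration, not the author's apparatus): observed choices constrain only a product of belief strength and desire strength, never either factor separately.

```python
# A toy expected-desirability model of the cupboard case. All figures are
# illustrative assumptions; the 'desirability' units are arbitrary.

def chooses_cupboard(p_food: float, desire_eat: float, desire_rest: float) -> bool:
    # The agent acts on whichever option has the greater product of
    # belief strength (epistemic probability) and desire strength.
    return p_food * desire_eat > desire_rest

# Two very different attributions of belief and desire strengths...
print(chooses_cupboard(p_food=0.9, desire_eat=2.0, desire_rest=1.0))  # True
print(chooses_cupboard(p_food=0.3, desire_eat=6.1, desire_rest=1.0))  # True

# ...fit the same observed action equally well, so the action alone yields
# no absolute, situation-independent value for either strength.
```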

So could neuroscience provide, on a common scale which reflects the influence of the different mental events, those exact values which subjects cannot provide from introspection? Neuroscience might discover that greater activity of certain kinds of brain event causes (for example) the beliefs caused by those brain events to be stronger. But for prediction of their effects we would need to know how much stronger were the resultant beliefs. So we would need a theory by means of which to calculate this, which gave results compatible with subjects’ subjective reports about the relative strengths of their beliefs. But although almost all adults have brains containing the same interconnected parts—thalamus, hippocampus, amygdala, and so on—these parts vary in size and have different connections with each other in different people; and the brain circuits, rates of firing, etc., which sustain beliefs in different people are so different from each other that it is difficult to see how there could be a general formula connecting some feature of brain events with the absolute strength of the mental events which they sustain.17 Again, we could only have a long list of the kinds of brain activity which increase or decrease the strength of which kinds of mental events.

So the part of a mind–brain theory which predicts human intentions and so human actions would consist of an enormous number of particular laws, relating brain events to subsequent mental events (some of them conscious), and these (together with other brain events) to subsequent intentions, having the following shape: Brain events (B1, B2 … Bj) + sensations (M1 … Mj) + Beliefs (including moral beliefs) (Mj … Mk) + Desires (Mk … Ml) → Intention (Mn) + Beliefs (about how to execute the intention) (Mp … Mq) + Brain events → bodily movements. The B’s designate events in individual neurons, and each law would involve large numbers of these; the M’s designate mental events with a content (p.195) describable by a short sentence and with a certain strength, and again each law would involve large numbers of these. The strength of an intention measures how hard the agent will try to do the intended action. The arrow designates ‘causes’.
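The list-like, non-compositional character of such a theory can be suggested in a minimal sketch (every state and law below is an invented placeholder): each law is keyed by a total state, so no entry can be computed from its neighbours.

```python
# A hedged illustration, not the author's formalism: a theory of this shape
# is a lookup table keyed by TOTAL states. Changing one belief gives a new
# key with its own, independent entry.

laws = {
    # (total brain state, total mental state) -> intention
    (("B1", "B2"), ("desires food", "believes food in cupboard")):
        "intends to open cupboard",
    (("B1", "B2"), ("desires food", "believes cupboard empty")):
        "intends to go shopping",
}

def predict(brain_state, mental_state):
    # No compositional rule lets us derive an unseen total state's outcome
    # from nearby ones; we can only look the exact total state up.
    return laws.get((brain_state, mental_state),
                    "no law: this total state never yet observed")

print(predict(("B1", "B2"), ("desires food", "believes food in cupboard")))
print(predict(("B1", "B3"), ("desires food", "believes food in cupboard")))
```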

There would be an enormous number of different laws for each person relating total brain states to total mental states, including total conscious states, and relating these and subsequent brain states to subsequent intentions. So we could not work out what a person will do on one occasion when they had one set of brain events, beliefs, and desires, on the basis of what that person (or someone else) did on a previous occasion when they had a different set differing only in respect of one belief. For there could be no general rule about the effect of just that one change of belief on different belief and desire sets; the effect of the change would be different according to what was the earlier set, and what were the brain events correlated with it. But no human being ever has the same overall brain state and mental state at any two times, or as any other human does at any time; and—I suggest—no human being considering a difficult moral decision ever has the same conscious state, let alone the same brain state in the respects which give rise to consciousness and determine its transitions, as at another time or as any other human ever. For making a difficult moral decision involves taking into account many different conflicting beliefs and desires. The believed circumstances of each such decision will be different, and (consciously or unconsciously) an agent will be much influenced by her previous moral reflections and decisions.

Consider someone deciding how to vote at a national election. That person will have beliefs about the moral worth of the different policies of each party, and the probability of each party executing its policies; they will desire to vote for this candidate and against that candidate (liking or disliking them) for various different reasons; they will desire to vote in the same way as (or in a different way from) their parents, and so on. This person’s beliefs and desires of these kinds will differ slightly from those of almost any other voter. Further, that part of the voter’s total brain state which determines the strength of their different mental events, and how rationally they will react to them, will almost certainly be different from that of any other voter. So because exactly the same overall conscious state would never have occurred previously together with its brain correlates, there could not be any evidence supporting a component law of the mind–brain theory to predict what would happen this time. It might just happen that a very similar conscious state had occurred on one previous occasion (in the same or a different voter) correlated with a not too dissimilar brain state, which would support a detailed law about the effects of that similar conscious state. But that suggested law would (because of the slight difference in the conscious events and brain events) only make it quite probable what would happen this time, and the law itself would only be very weakly supported by one piece of evidence about what happened on the one previous occasion.

What applies to the example of the voter applies even more evidently to the difficult moral decisions over which people agonize from time to time: whether to begin a sexual relationship, get married, have children, leave one’s husband or wife, accept a (p.196) new job, move house, give up a job to look after aged parents, join a religious community, etc. In each case so many different mental events will be involved. The conscious state of someone agonizing over what to do will differ from the conscious state of someone else agonizing over a decision of the same kind. Our images of and beliefs about the others involved will be different, and the moral beliefs and desires and their strengths which we bring to the decision process will be different; and so too will be the brain events which sustain them.

Add to all this the points made in section 2 about the difficulty involved in getting some of the evidence required to support any mind–brain theory, and I conclude that a prediction about which difficult moral decision someone would make, and so which resulting action they would do, could never be supported by enough evidence to make it probably true. Human brains and human mental life are just too complex for humans to understand completely. That conclusion is of course compatible with human behaviour being predetermined (but its laws too complex to be inferred from a finite collection of data about prior brain and mental events), or not being fully predetermined, by prior brain events. But it does have a crucial consequence that those brain events which most immediately cause the movements which constitute human actions of deepest moral significance will never be totally predictable.

4. Brain indeterminism and physics

I pointed out in Chapter 4 that the normal indeterministic interpretation of quantum theory has the consequence that no physical system is totally deterministic, and that there can be systems in which small-scale non-determined events cause large-scale effects. It is possible that the brain is just such a system, and, we saw in Chapter 4, there are theories developed from quantum theory which purport to explain how intentions can affect brain events.

Nevertheless, if sequences of brain events must conform to the laws of quantum theory, it will surely be fairly rare for a very small change in the brain (of a value within the limits of unpredictability stated in the Heisenberg indeterminacy principle) to make a difference to which bodily movements an agent makes. It will still normally be the case that which brain events cause which bodily movements will be unaffected by variations within those limits. This is because only some variations within those limits will make any difference to whether potential is transmitted to an adjoining neuron; the potential transmitted at one synapse will relatively seldom (in view of all the other changes of potential arising from that neuron’s other synapses) make any crucial difference to whether or not the neuron fires; and the firing of one neuron will relatively seldom (in view of the behaviour of other neurons) make any crucial difference to whether a bodily movement occurs. As I wrote in Chapter 4, we simply do not know just how many is ‘some’ and how seldom is ‘relatively seldom’.
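The shape of this point can be put numerically in a toy calculation; every figure below is an invented placeholder (as just said, the real values are unknown), and the three stages are treated as independent purely for illustration:

```python
# Each stage must go the improbable way for a Heisenberg-scale variation to
# show up in behaviour. All probabilities are made-up illustrative values.

p_alters_transmission = 0.05  # variation changes transmission at a synapse
p_flips_firing        = 0.02  # changed transmission flips the neuron's firing
p_changes_movement    = 0.01  # one flipped firing alters a bodily movement

p_behavioural_difference = (p_alters_transmission
                            * p_flips_firing
                            * p_changes_movement)
print(p_behavioural_difference)  # approximately 1e-05 on these figures
```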

I argued earlier, however, that most of our executive intentions follow inevitably from our ultimate intentions and our beliefs. I argued also that in the case of ultimate (p.197) intentions, if we have no conflicting moral belief, we inevitably form the intention to do what we most desire; and, if we have no conflicting desire, we inevitably form an intention to do what we believe best. Given, as I assumed earlier, that all our beliefs and desires at a given time are caused ultimately by prior brain events, any intention we form will in these cases also be so formed. Hence in these cases any sequence of brain events which leads to an intentional action will be caused by an intention which is caused by the brain events which cause our beliefs and desires; and so the role of the intention will be merely ‘permissive’. We allow our brain events to cause those movements which they are already ‘on track’ to cause us to perform, and thereby we constitute those movements as intentional actions; it would be contrary to our intention to interfere in this process. Almost all our intentional actions are like this.

If, however, in the absence of any relevant moral belief we have two (or more) conflicting desires of equal strength to do alternative actions, or we believe that it would be equally morally good to do any of two (or more) alternative actions and our desires do not favour one over others, we have to make a decision; but we believe that it doesn’t matter how we choose. Either way our conduct is rational; and so we have no reason to interfere with whatever would otherwise be determined by brain or mental events.

It is only when our strongest moral beliefs conflict with our strongest desires that we have to make a decision that matters about which action to do. In this case there are clearly brain processes inclining us to do two different incompatible actions, in the form of brain events drawing our attention to our moral beliefs and other brain events causing contrary desires. Our strongest desires are our strongest inclinations, and will inevitably determine what we will do unless we force ourselves to do the best action. So the brain processes causing the agent’s strongest desires will inevitably determine what the agent will do, unless they interfere in the process. And alone in this situation of having to make a difficult moral decision, unlike in the other situations, the agent has good reason to interfere in the brain processes. My guess is that on average humans are faced with such choices maybe once a day; but clearly there are conscientious people who are faced with such choices much more often than that, and other people less sensitive to moral dilemmas who face difficult moral decisions only perhaps once a month.18 But for everyone surely the occasions on which they face a difficult moral decision are very rare in comparison with all the other occasions on which they form intentions. Hence the proportion of occasions on which variations in the brain within the Heisenberg limits (p.198) would make any difference to our movements may well coincide with the proportion of occasions on which we are faced with having to make difficult moral decisions. On this model, the greater the natural probability that some sequence of brain events will eventually lead to the occurrence (or non-occurrence) of certain bodily movements, the stronger the desire which its earlier stages will cause in the agent to allow it to continue. But in this situation of a conflict with their moral beliefs the agent has a reason to interfere; yet the greater that natural probability (relative to the natural probability of the sequence of brain events causing the action believed best), the harder it will be for the agent to do so, and so the less probable it will be that they will do so. Nevertheless on this model the agent can force themself to do a less desired action which they believe would be the best one to do (despite having only a weaker inclination to do it), and it would be compatible with quantum theory that the agent should do the less desired action by their intention to do so causing the sequence of brain events to bring about a naturally less probable outcome. (This model yields a way of measuring the strength of a desire, relative to that of a rival desire including one to do an action believed best, by the degree of this natural probability, the possibility of which I doubted earlier.) Hence it may well be that humans can make difficult moral decisions which they are not fully caused to make, in a way compatible with the full conformity of the operation of the brain to the laws of quantum theory.
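Put tentatively in formal terms (the notation, and the functions f and g, are mine, not the author's):

```latex
% A tentative formalization of the model just described. Let $p$ be the
% natural probability that the brain sequence favoured by the agent's
% strongest desire runs on to completion.
\[
  \mathrm{strength}(\text{desire, relative to moral belief}) = f(p),
  \qquad f \ \text{strictly increasing};
\]
\[
  \Pr(\text{agent forces the believed-best action}) = g(p),
  \qquad g \ \text{strictly decreasing}.
\]
```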

Yet even if it should turn out that quantum theory has the consequence that it is immensely improbable that a change in the brain within the Heisenberg limits could affect the pattern of our bodily movements as frequently as the occasions on which humans make difficult moral decisions, or even if it should turn out that quantum theory is replaced by a (very probably true) deterministic theory which makes the behaviour of physical systems other than the brain totally predictable, it follows inevitably from the argument of Chapter 4 that it will not be possible fully to predict brain behaviour by means of a theory of physics alone. For if it were the case that every brain event was caused by another brain event in a totally deterministic way in accord with physical laws, then our intentions would not cause brain events—and I argued in Chapter 4 that no evidence could make that conclusion probable. Either quantum theory or some rival theory must find a place for intentions to influence the brain, if that theory is to provide a fully adequate account of the brain. As I have already argued in this book, it should not be too surprising that the brains of humans (and perhaps of higher animals also) are different from other physical systems, since the brain is unlike any other physical system in that—quite apart from whether intentions cause brain events—brain events cause innumerable conscious events. And if intentions cause brain events in the light of beliefs and desires, then, I have already argued, how these will interact to yield an intention in the case of difficult moral decisions is too complicated to be predicted by any mind–brain theory which could be well justified by evidence.19

(p.199) 5. What neuroscience can discover

The limits to the ability of neuroscience to predict mental events arise, I have claimed, from the enormously large number of detailed laws which would have to govern any interaction of many different kinds of mental (including conscious) events and brain events. But neuroscience may be able to discover, and has begun to discover, mind–brain laws which do not involve such complicated interactions. Thus it has begun to discover which particular brain events are necessary and sufficient for the occurrence of those non-propositional events which do not involve the inaccessible aspects of sensations, but only the patterns of sensations. A mental image has the same sort of properties of shape and size as the properties of public objects such as brain events. So neuroscience is on the way to discovering a law-like formula by which it can predict from a subject’s brain events both the images caused by the public objects at which they are looking and the images which the subject is intentionally causing.20 But that formula will not tell us what the subject regards their image as an image of—for example, as an image of a television set or of a shiny box. Which beliefs subjects acquire about what they are seeing is clearly going to vary with their prior beliefs about the way objects of different kinds look, for example, that something of such-and-such a shape is a television set. But if the neuroscientist discovers these prior beliefs in some other way than from observation of brain events (e.g. from what subjects tell them, or by analogy with the neuroscientist’s own beliefs), then they should be able to predict from a subject’s brain events not merely the shape of the image which the subject is seeing, but also the subject’s belief about what they are seeing.

Similar considerations apply to the other senses. Which words a subject hears depends on the pattern of sensed sounds rather than their intrinsic qualities; and patterns of sensed sounds have the same describable shape as patterns of public noises, that is, air vibrations. So it should be possible to construct a formula describing how the brain events caused by certain patterns of public noises cause patterns of sensed sounds. Given people’s linguistic beliefs (their beliefs about what words mean) discoverable in some other way, it should then be possible to predict from their brain events what they understand to be the content of what is being said to them. So scientists should be able to arrange for sentences to be ‘heard’ by the deaf whose auditory nerves no longer function, by means of electrodes in their brain causing the appropriate brain events.

Desires to do basic actions can occur in the absence of a large set of beliefs. Hence neuroscience could discover the brain events which are the immediate causes of desires to form intentions to do instrumentally basic actions, these being intentions which can be had independently of any beliefs, such as the intention to drink or to scratch. It could also discover the brain events which are the immediate effects of intentions to perform basic actions (p.200) of a kind which are normally done in order thereby to do a less basic action, such as to move a hand, or utter a certain sound. That will enable it to detect what ‘locked in’ people are trying to do, and so to set up some apparatus which will enable them to succeed.21 But in order to predict which non-basic action a subject has the intention of performing, a neuroscientist would need to know the subject’s beliefs about which basic actions would bring about the performance of the non-basic action. Hence we need to know subjects’ linguistic beliefs in order to know which proposition, as opposed to which sounds, subjects are trying to utter.

Neuroscience may be able to make various kinds of statistical predictions, to the effect that a change in the pattern of certain kinds of brain events will probably lead to an increase or decrease in the strength of certain kinds of desire or belief and so make more probable the formation of certain intentions. Thus it may be able to discover how certain brain events affect the relative strengths of very general kinds of desire (e.g. for fame or power). Desires influence but, when the subject also has competing desires and moral beliefs, do not determine a subject’s intentions and so behaviour. And which intention a general desire will tend to cause will depend on the subject’s beliefs (e.g. about how fame can be obtained). So again, in the absence of a formula for calculating beliefs of any complexity from brain events, and in the absence of a formula for calculating intentions from competing beliefs and desires and brain events, all we can hope for is statistical predictions to the effect that the more (or less) of some physical quantity brain events have, the greater (or less) the desire to do so-and-so, and so—probably—the greater the proportion of subjects who will do so-and-so. Hence drugs or mirror neurons may indeed promote or diminish altruistic desires,22 or strengthen or weaken a desire to commit suicide. But such increases or decreases of desires yield only probable statistics; they don’t tell you who will do what, since we all have different rival desires of different strengths and different value beliefs of different strengths.

However, it follows, finally, that neuroscience should be able to predict what individual humans will do in order to execute certain general instructions which have as a consequence that their behaviour will depend on only one simple desire of a kind caused directly by a brain event. For example, in the Libet experiments discussed in Chapter 4 subjects were told to move their hand at any time within a short period when they decided to do so; and since they would not have had any moral beliefs about when to do so, they must have decided to do so when they ‘felt like’ it, that is, desired to do it. Such a desire is like an itch and so presumably has a direct cause in a brain event. If subjects disobeyed the instructions, and didn’t move their hand within the period—either because they didn’t feel the requisite desire or because they had rival (p.201) desires (e.g. to be a nuisance) or rival moral beliefs (e.g. that it was immoral to take part in the experiment)—their actions would not count in assessing the experiment. So under these experimental conditions neuroscience may be able to correlate prior brain events with the movements which they cause, via the desire which causes the agent to form the intention to cause them. Hence in this case 100 per cent success in predicting hand movements is by no means impossible. But once again that tells us nothing about how people will behave in situations of conflicting desires and moral beliefs.

But despite the possibility (and in some cases the actuality) of all these advances in neuroscience, the main conclusion of this chapter remains that for the prediction of individual behaviour in circumstances where there are many different variables, both brain events and mental events of different and competing kinds and strengths affecting the outcome, neuroscience would need a general formula well supported by evidence to enable it to relate the strengths of these kinds of events to each other; and that very probably cannot be had.

6. The probability that human behaviour is not fully determined

The argument so far leads to the conclusion that (at least sometimes) humans (as pure mental substances) cause brain events which cause bodily movements which they intend to cause, and that when they make difficult moral decisions we will never have enough evidence to predict in advance what they will decide. Yet, even if it is unpredictable which intention we will form and how strong it will prove, what reason do we have for supposing that that intention (with its particular strength) is not caused (in a way too complicated to predict) by brain events? After all, I have acknowledged, our intentions often are caused—when they are caused by a strongest desire and we have no contrary moral belief, when they are caused by a strongest moral belief and we have no contrary desire, and when they simply execute (in the way which we believe to be the quickest way) some ultimate intention.

My answer to the question posed is that it is in those circumstances where desires and moral beliefs are in opposition to each other, or where we have equally strong competing desires and moral beliefs, and only in those circumstances, that we are conscious of deciding between competing alternatives. We then believe that it is up to us what to do, and we make a decision. Otherwise we allow ourselves to do as our desires and moral beliefs dictate—which is so often just to conform to habit. The principle of credulity (Chapter 2, section 2) says that things are probably the way they seem to be in the absence of counter-evidence. So, in these circumstances where we believe that we are making a choice without being caused to choose as we do, we should believe that we are indeed doing just this. In other cases it does not seem to us that we are choosing without being caused to choose as we do, and so we should not believe that we are then making an uncaused choice. The phenomenology of deciding between (p.202) rival possible actions, ones which are not determined by our mental states (our existing desires and beliefs with their relative strengths), is so different from the phenomenology of doing the everyday things we do intentionally, that we should expect the underlying brain processes to be similarly different. And the apparent indeterminism of the physical world suggested by quantum theory (see Chapter 4, especially note 21) gives us a further reason for expecting that the mental world will not be fully deterministic.

There may be people who do not have much opportunity to exercise any significant choices for a long time; slaves may not have much opportunity even to choose between equally strong desires (for example, to spend a rest day doing either this or that), and may have no moral belief that it would be good to rebel against their servitude. Some mentally retarded persons may simply both desire to do, and have moral beliefs that they should do, what they are told to do. A few others perhaps simply have no moral beliefs; their choices are confined to arbitrary choices between equally desired alternatives. And then there are others who do have a choice (as well as between equally desired alternatives, or alternatives believed equally morally good) between their strongest desires and their moral beliefs, but their strongest desires are so strong and their moral beliefs so weak that it feels to them too difficult to try to rebel against their desires, and indeed what they will do is almost inevitable. Drug addicts or children under the powerful influence of domineering adults are often in this situation. But when slaves suddenly become aware of simple alternatives (for example, they are allowed a rest day on which they can do either this or that) or are stirred to an awareness that it would be good to rebel against their servitude, they become conscious of having a choice. And when the drug addiction is weakened by some medicine or therapy, or the children leave home and meet others with different life styles and moral beliefs, then they become conscious of a significant freedom: ‘it’s really up to me what I am to do’. And most of us are very conscious of it being up to us whether we yield to temptation or fulfil what we believe to be our obligation—sometimes giving in to temptation knowingly (‘Why should I always be moral?’, ‘It won’t matter if I am not moral this time’), or trying to persuade ourselves that we don’t really have an obligation in this case. So, in the absence of counter-evidence (in the form of a deterministic theory of our behaviour in such circumstances, rendered probably true by much evidence), in those circumstances we probably are choosing without our choice being caused. Our situation is like that of being pulled by ropes tied to our body exerting different degrees of force upon it; it is easier to yield to the strongest force and move in its direction, but we have the power to resist it.

Having avoided using this expression up to now, I shall in future write of an agent having ‘free will’ insofar as the agent acts intentionally without their intentions being fully determined by prior causes. ‘Free will’ is a term which can be used in different senses, and needs to be defined before anyone argues about whether humans have free will. I think that the definition which I have just given captures the normal understanding of the claims made by those who are not professional philosophers that humans do or do not have ‘free will’. Many philosophers, however, understand ‘free will’ in such a sense that someone has free will to the extent to which that person is morally (p.203) responsible for their actions. I shall investigate the issue of when people are ‘morally responsible’ for their actions in Chapter 8, but meanwhile I suggest that it is desirable to be able to discuss whether humans have free will without investigating at the same time whether they are ‘morally responsible’ for their actions.

It is natural to suppose that there follows from someone having free will in my sense a principle called ‘the principle of alternative possibilities’ (PAP) that:

A does x freely only if he could have not done x (i.e. could have refrained from doing x).

By ‘A could have refrained’ is meant ‘it is naturally possible’, not merely ‘metaphysically’ possible, that A refrains. Harry Frankfurt produced a thought experiment designed to show PAP to be false, which has provoked a large philosophical literature. Frankfurt supposes that a scientist, Black, has acquired the power to intervene in Jones’s brain processes so that Jones acts in one way rather than another.23 It is evident to Black, Frankfurt supposes, by some instrument which gives information about Jones’s brain state, when Jones is about to do a certain action. So if Jones was about to choose to do some action which Black did not wish him to do, Black had the power to intervene and make Jones choose not to do the action. Now suppose that Black wants Jones to do A, and would have intervened if Jones had been about not to do A. But in fact, Frankfurt supposes, it turned out that there was no need for an intervention—Black saw that Jones was about to choose to do A anyway. So Black did not intervene, and Jones in fact did A. In this situation, argues Frankfurt, one way or the other Jones could not have not done A. But that fact by itself cannot have made his actual action unfree—since the actual action could have been done as a result of forming an intention which he was not caused to form by any prior cause. Although Jones could not have not done A, his actual action could still have been free. Hence, argued Frankfurt, PAP is false.

In response to Frankfurt, David Widerker24 argued that even if Frankfurt’s argument works for what Widerker calls ‘complex actions’, ones involving an intention and something further (e.g. its effect), in my terminology actions which are not causally basic, it would not work for intentions themselves. An interferer can prevent the occurrence of a complex action by preventing the second part (the effect of the intention), when he observes the occurrence of the intention. If Black observes that Jones has freely formed an intention to do A, he can prevent the intention having any effect. But the same argument will not work for intentions themselves. For in the postulated actual situation either Jones is predetermined by prior causes to form the intention to do x, or he is not. In the former case Jones does not form the intention freely. In the latter case, before having formed any intention, Jones could still refrain from forming (p.204) the intention to do x. Only if some person or mechanism, such as Black, could know in advance whether Jones is going to form the intention to do x if they do not intervene can they ensure that Jones does not form the intention. But Black (or any other person or mechanism) can only know that for certain if there is an infallible sign indicating which intention Jones is going to form; and there can only be such an infallible sign if that sign is a sufficient cause of the intention, or has a sufficient cause of the intention as a necessary causal condition. There cannot be an infallible sign if the intention does not have a sufficient cause. A Frankfurt scenario gives no reason to suppose that PAP does not apply to intentions.

Given that, it then follows that Frankfurt’s argument does not show the falsity of ‘A does x freely at time t only if A could have refrained from doing x at t’. For A could only have done x at t if they had the intention to do x at t; and Frankfurt’s argument does not show that some agent could have infallibly ensured that they would form that intention unless (by the agent’s act or some other means) A was caused (fully, that is, by a sufficient cause) to form the intention to do x at t. But of course Black has the power to make Jones do A at an immediately subsequent time, if Jones fails to do it at t. So Frankfurt’s argument does show that it is false that ‘A does x freely at t only if (if A had not done x at t) he could also have not done x at an immediately following time (t + 1)’. But this is not the significant result which Frankfurt’s argument seemed to reach. In a more precise form (PAP*), ‘A does x freely at t only if he could have done not-x at t instead’ is surely true.

My claim that human intentions and so actions are not fully determined by prior events, depending on my claim that they will never be fully predictable, is one for which I have produced what I regard as strong probabilistic arguments, the kind of arguments from evidence which can be produced for the existence of electrons or the theory of evolution by natural selection. But of course scientific theories which are probable on the evidence available at a time can be overthrown by new evidence. In the case of theories which really are very probable, this is most unlikely to happen. I do not regard what I have written in this chapter as immune from any possible future scientific discoveries, whereas I do regard what I have written in all the previous chapters as immune from any possible future scientific discoveries. No science could possibly show that mental events (in my sense) are identical to physical events, or that no intentions cause brain events or that humans are not pure mental substances. But my argument in this chapter depends on the claim that no scientist will ever be able to find enough subjects having at a given time sufficiently similar brain events over a large brain area and a large enough number of sufficiently similar mental events, so as to be able to check whether those events are always followed by the same intentions (of the same strength). If this could be done, my claim that they will not always be followed by the same intentions could itself be checked, and—it is logically possible—be confirmed or refuted.

(p.205) 7. The arguments of Pereboom and van Inwagen

Derk Pereboom and Peter van Inwagen have both given arguments claiming that it is inevitable that, if our intentional actions are not fully determined, the overall pattern of the resulting bodily movements will be such as it would be if those movements were determined by ‘chance’; and that that is good reason to suppose that they are determined by chance. Hence, they claim, there is no good reason to suppose that they are determined by the free will of an agent.

Pereboom’s argument is directed against theories of the kind I described above, which hold that while quantum theory governs what happens in the brain, and so there is sometimes a certain natural probability significantly greater than zero and less than one of some bodily movement occurring, the agent can then intervene so as to bring it about that the movement will occur or that it will not occur, without violating physical laws. He objects that if what happens in the brain is subject to quantum theory (or the statistical laws of any similar indeterministic theory), and an event of a certain kind has a natural probability of occurring of p (e.g. 1/10), then in the long run ‘it is overwhelmingly likely’25 that events of that kind will occur approximately a proportion p (e.g. 1/10) of the time. But, he argues, ‘the agent-causal libertarian’s proposal that the frequencies of agent-caused free choices dovetail with determinate physical probabilities involves coincidences so wild as to make it incredible’.

I argued in previous sections that the agent can only make a difference to which bodily movements he or she makes in two kinds of circumstance—where there are equal strongest desires and moral beliefs, or where there is a conflict between desire and moral belief. In the former case the outcome will indeed be determined by ‘chance’, that is, by the natural probability of different sequences of brain events—the greater the natural probability of a sequence, the more such sequences will occur in the long run. But this will happen because the agent will rationally allow it to happen—since the agent has no inclination or reason to do one intentional action rather than a different one. In the latter case, I have just argued, the more probable it is that a sequence of brain events will lead to certain bodily movements, the greater the agent’s inclination (in the form of the strength of the agent’s strongest desire relative to the strength of the inclination produced by their moral belief) to allow that sequence to occur; and so it is more probable that the agent will allow it. So it is true that if the whole brain were in exactly the same state in relevant respects at various times, then it is ‘overwhelmingly likely’ that there would be no difference in the long run between the actual frequencies of choices and the frequencies that would occur if the choices of agents were not free at all. But the view that the agent sometimes intervenes in the brain can explain why that happens. It is because the stronger is an inclination to do some action, the more effort it requires to resist it; and the agent only has a reason to resist it where that agent has a contrary moral belief. Although, as such, this does not entail that the strength of an inclination can be given a (p.206) precise numerical value, it is to be expected that ‘in the long run’ agents will do what requires less effort. No ‘wild’ coincidences are involved. But, I argued earlier, human brains are very seldom, if ever during a human life, in exactly the same state in relevant respects; and, as Keynes famously remarked, ‘in the long run we are all dead’. And even if we lived for far longer than we do today, what is ‘overwhelmingly likely’ may still not happen. There is no reason to suppose that in the course of a short life a human’s brain will always behave in the most probable way; and, if my conclusions so far are correct, that human can ensure that it doesn’t. I conclude that the fact that, given quantum theory, ‘it is overwhelmingly likely’ that there would be no difference in the long run between the actual frequencies of choices whether or not agents have free will, is not a good objection to the claim that they have free will.
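The ‘overwhelmingly likely’ convergence to which Pereboom appeals is just the long-run behaviour of repeated trials with a fixed natural probability, and the role the reply gives to the short run can be seen in a minimal simulation (the probability is Pereboom’s example figure; the seed and trial counts are arbitrary illustrative choices):

```python
# Bernoulli trials with natural probability p = 1/10 (Pereboom's example
# figure). Frequencies converge on p only as the number of trials grows.

import random

random.seed(0)
p = 0.1

for n in (100, 10_000, 1_000_000):
    hits = sum(random.random() < p for _ in range(n))
    print(n, hits / n)
# The frequency settles towards 0.1 only for very large n; in a short run
# (the analogue of a single human life) deviation is unremarkable.
```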

Peter van Inwagen’s argument is more general, and doesn’t presuppose that what can happen in the brain is limited by quantum theory or any other physical theory. It is designed to show that, whatever agents do, if it is not determined how they will act, we can only conclude that agents’ choices are ‘a matter of chance’.26 In effect—though this is not how van Inwagen would describe his argument—it is an argument developed to show that indeterminism forces us to suppose that some law-like principle having the same indeterministic character as quantum theory is operative, and from that he reaches a similar conclusion to that of Pereboom. Here is van Inwagen’s argument. Suppose that at time t1 Alice has a choice of lying or telling the truth; and tells the truth, and that her choice was not predetermined. Then,

suppose that God a thousand times caused the universe to revert to exactly the state it was at t1…what would have happened?…We observers shall—almost certainly—observe the ratio of the outcome ‘truth’ to the outcome ‘lie’ settling down to, converging on, some value…Let us imagine the simplest case: we observe that Alice tells the truth in about half the replays and lies in about half the replays. If, after one hundred replays, Alice has told the truth fifty-three times and has lied forty-eight times…is it not true that we shall become convinced that what will happen in the next replay is a matter of chance?

For the reasons given above, in cases where an agent has a strongest desire and no contrary moral beliefs, or incompatible desires of equal strength and a strongest moral belief, the issue will be determined. But, when an agent has equal strongest desires and moral beliefs of equal strength, van Inwagen is right to claim that it is immensely probable that the result of many repetitions will be the same whether the agent caused them intentionally or they were caused by ‘chance’. I have acknowledged that free will does not operate in the first two situations, and that chance operates in the third situation—but only because any agent rationally allows it to do so. But in the situation where the agent has a conflict between what he desires and a belief about what is best to do, why (p.207) should we suppose that the outcome of the series of choices will converge on some value, unless some physical law such as a law of quantum theory determines this?

To claim that a series of heads or tails resulting from a coin toss converges on a value p of the proportion of heads, for example, 1/3, is to claim that, if you record the proportion of heads to total throws as you toss the coin more and more times, there is for any small value δ a finite number of tosses after which the proportion of heads will always lie between p − δ and p + δ. If the series yields heads and tails as follows, HTTHHHHTHTH, the proportions of heads after each throw will be 1/1, 1/2, 1/3, 2/4, 3/5, 4/6, 5/7, 5/8, 6/9, 6/10, 7/11, and so on. Now if we do not know what is the mechanism producing the series, we can only come to a justified belief about whether the series will converge on the evidence of what happens in a finite part of the series. We would need to examine whether the proportion of successes is very similar in each of several disjoint segments of the series of equal length—for example, whether the proportion of heads in the second, third, and fourth hundred throws is similar to what it is in the first hundred. That would increase the (epistemic) probability that this will hold for (almost all) indefinitely many future segments; and if so, it is evidence that the series is convergent. But if the proportions in different segments get on average larger or smaller as the series gets longer, that would increase the (epistemic) probability that the series is divergent. And if the proportions vary considerably, not in accord with any simple formula, that leaves it as probable as it was initially that the series is either convergent or divergent. But in ignorance of the mechanism producing the series we would need to examine a large number of segments before we could reach a conclusion which was more probable than not. Evidence that the series was convergent would in its turn make it (epistemically) probable that the mechanism producing the series has an inbuilt bias of a measurable kind; that on each occasion there is a certain natural probability of a certain outcome. But if the evidence did not point to the series being convergent, we would have no reason to believe that the mechanism had any inbuilt bias of a measurable kind.
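The definition of convergence just given can be stated exactly (in standard notation, not the text's):

```latex
% With $h_n$ the number of heads in the first $n$ tosses, the series
% converges on $p$ just in case
\[
  \forall \delta > 0 \;\, \exists N \;\, \forall n \ge N :
  \quad p - \delta \;<\; \frac{h_n}{n} \;<\; p + \delta .
\]
% For the text's series HTTHHHHTHTH the running values $h_n/n$ are
% $\tfrac{1}{1}, \tfrac{1}{2}, \tfrac{1}{3}, \tfrac{2}{4}, \tfrac{3}{5},
%  \tfrac{4}{6}, \tfrac{5}{7}, \tfrac{5}{8}, \tfrac{6}{9}, \tfrac{6}{10},
%  \tfrac{7}{11}$.
```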

Now van Inwagen has given us no reason to suppose that if Alice had the same choice in exactly the same circumstances 100 further times, as well as the 100 times in which she told the truth 53 times, the proportion of times she told the truth would be roughly the same; let alone that this would be the case also for many more segments of 100 opportunities to tell the truth. If we don’t have evidence one way or the other on this matter, we may still know enough about the mechanism to know what will probably happen. For some mechanisms we know that, although they operate in a virtually deterministic way, the outcome depends on very slight variations of initial conditions, which typically vary in such a way as to produce a roughly similar pattern of outcomes when repeated frequently. Coin-tossing is such a mechanism. The pattern is determined by a certain measurable bias in the weighting of the coin. And we also know what to expect for any process where physical theory indicates that there is a fixed probability of outcome from exactly the same initial conditions. The spread of points on a screen marking the arrival of a photon sent towards it through a small slit is (p.208) due, we justifiably suppose, to an inbuilt bias (a natural probability) in the photon-plus-slit apparatus. We know this from the principles of quantum theory without doing the measurements each time. But in the case of a person making choices all we know is that the person is responding to the influence of reason against stronger desires, and (unless we suppose that some theory such as quantum theory governs the brain) we don’t know that that process is characterized by a measurable bias; and so, in the absence of evidence of convergence of the kind described above, we have no reason to suppose that on each occasion of choice Alice has any fixed bias in favour of one outcome. But if there is no fixed bias, no natural probability on any particular occasion that Alice will tell the truth, then the claim that there is a certain ‘chance’ with a precise value that she will tell the truth has no meaning. I have given a sense to one desire being stronger than another in terms of which action the agent would do if they had no contrary moral beliefs. But I have given no sense to this strength having any precise numerical value, except in the case where a physical theory such as quantum theory provides a way of measuring this, in the way described above.

Suppose, however, that the result of repeating the experiment many times was that Alice told the truth approximately the same number of times in each of many segments of 100 opportunities: what should we conclude? If Alice were simply a machine which did not make conscious decisions, we should indeed conclude that there was a fixed bias towards truth-telling on each occasion, making the result a matter of measurable ‘chance’. But as the outcome results from Alice’s conscious choice, we should surely regard a ‘fixed bias’ of p (e.g. 1/3) as measuring how easy it is for her to resist the temptation to lie. In that case she would be in the same situation as an agent whose brain is governed by quantum theory and who can intervene in it only to the extent allowed by that theory. The stronger the temptation is, the harder it will be to resist, and so the less likely she is to succeed.

I have urged that the natural probability of a sequence of brain events leading to some movement may correspond to the relative strength of a desire to bring about that sequence. It is hardly news that it is harder for humans to do some free acts than to do other free acts; but that doesn’t mean that we don’t have free will. It means only that our free will is a limited one. An agent can still do what he or she is on balance inclined not to do. So I argue against van Inwagen, as against Pereboom, that in any finite human life it may often be that the most probable outcome does not occur, because the agent may do what they are on balance inclined not to do. And it is what the agent does, not what they are inclined to do, which matters.

I have been arguing that we have not the slightest reason to suppose that if van Inwagen’s experiment was done, the outcome would have the kind of result van Inwagen expects. However, van Inwagen’s experiment is most unlikely to be done—it is most improbable that God will cause the universe to revert to an earlier state a thousand times. And the actual choices of normal human beings are crucially unlike the outcome of a normal coin toss with a fixed physical bias in a further respect beyond the one discussed here. In normal coin-tossing the outcome of a second toss is unaffected (p.209) by the outcome of the first toss. If the coin lands heads first time, it is no more likely than it would be otherwise to land heads or to land tails next time. Humans, however, are so made that if we make a certain kind of choice once, then that makes us more inclined to make a choice of that kind next time. As Aristotle wrote, ‘we become just by doing just acts, prudent by doing prudent acts, brave by doing brave acts.’27 Every time we overcome a bad inclination, it is easier to resist it the next time we are subject to it. In this way over time humans can change the strengths of their desires. As we come to recognize an action of some kind as good, and force ourselves to do it when we have a stronger contrary desire, we can in the course of time make ourselves such that it becomes natural, that is, it becomes our strongest desire, to do an action of that kind when the occasion arises. Conversely we can lose our moral beliefs through neglect. If we always yield to the desire to do the (believed) bad action, the time may come when it never even crosses our mind to do the good action—we don’t even have a moral belief that it would be bad to do the bad action.28 And one of the opportunities which we have at times is to choose to reflect on our moral beliefs and—through experience of the world and talking to others—improve them; or we may neglect to do this. In these ways over time we can either make ourselves naturally virtuous beings or allow ourselves to become amoral beings.

Notes:

(1) For a fuller account of how complicated beliefs and intentions interact see my Epistemic Justification, ch. 2, especially pp. 40–6. (What I call ‘intentions’ here, I called ‘purposes’ there.)

(2) The best known forms of moral non-cognitivism are the ‘emotivism’ of C.L. Stevenson, which holds that moral utterances are expressions of emotion which the utterer desires others to share; and the ‘prescriptivism’ of R.M. Hare, which holds that moral utterances are expressions of commitment which the utterer desires others to follow.

(3) The best known defender of ‘error theory’ is J.L. Mackie. See his Ethics: Inventing Right and Wrong, Penguin, 1977, ch. 1.

(4) In thus tying believing an action to be morally good to having some inclination to do it, while maintaining that the content of a moral ‘belief’ (the proposition believed) is in the believer’s view true, I take an internalist realist view of the content of moral beliefs. One alternative view to this is moral externalism, the view that believing an action to be morally good is just like believing an action to have any other property such as causing pain or giving pleasure, and has to be combined with some inclination to do morally good actions before it leads to an inclination to do the action. The other alternative view is moral non-cognitivism, which can be described more precisely as ‘moral internalist anti-realism’: the view described above that ‘believing’ an action to be good is simply being inclined to approve it, or act or react in some other way with respect to it, without the content of the belief being true or false. Internalist realism about morality seeks to combine the positive insights of the two alternative approaches to moral philosophy just described. It accepts the positive insight of non-cognitivist theories that having a moral belief gives the believer some inclination to act on it when it is relevant to the believer’s decisions; but it also accepts the positive insight of externalist theories that ‘beliefs’ about what is morally good or bad really are beliefs which are in the believer’s view true. For three recent discussions of views about the nature of moral belief, all largely favouring both internalist realism and moral objectivism, see Michael Smith, The Moral Problem (Blackwell, 1994), Russ Shafer-Landau, Moral Realism (Oxford University Press, 2003), and Derek Parfit, On What Matters (Oxford University Press, 2011), Parts I and VI.

(5) ‘Everyone agrees that it is an a priori truth that the moral supervenes on the natural’ (Smith, op. cit., p. 22).

(6) This is the view expounded, for example, by Robert Adams in his Finite and Infinite Goods, Oxford University Press, 1999, ch. 1. He holds that the underlying property of actions picked out as ‘good’ by the superficial properties of (for example) manifesting ‘kindness’ and ‘creativity’ is ‘resembling God’. So, on his view, it is necessary a posteriori that goodness consists in resembling God. This does have the consequence that many actions which seem to us obvious paradigm cases of good actions might turn out to be bad when we learn what God is like, unless we make it a matter of definition that God has a certain character such as being kind and creative. But the latter move would make the necessity of ‘it is good to be kind’ a priori. Adams seeks to deal with such objections at the end of his chapter.

(7) ‘In each of us there are two ruling and leading principles, which we follow wherever they lead; one is the innate desire for pleasures, the other is an acquired belief which strives for the best. Sometimes these two within us agree, and sometimes they are at war with each other; and then sometimes the one and sometimes the other prevails.’ (Phaedrus, 238e.)

(8) ‘So I find it to be a law that when I want to do what is good, evil lies close at hand. For I delight in the law of God in my inmost self. But I see in my members another law at war with the law of my mind, making me captive to the law of sin that dwells in my members.’ (Letter to the Romans 7: 21–3.)

(9) The strength of pain felt (and even sometimes whether it is felt at all) is, however, affected by mood and by whether one is engaged in an attention-absorbing activity. For a general survey of the current research on pain, its causes, and cures, see a popular article ‘Pain be gone’ by Claire Wilson, New Scientist, 22 January 2011.

(10) See Jeffrey Gray, Consciousness, Oxford University Press, 2004, pp. 173 and 175. The ‘global workspace’ model has been confirmed by recent work of Raphael Gaillard and others; see R. Robinson (2009), ‘Exploring the “Global Workspace” of Consciousness’, PLoS Biology 7(3), doi:10.1371/journal.pbio.1000066.

(11) Though it hardly needs such support, this point is borne out by recent experiments showing that presenting some image to a group of subjects produced in all subjects similar patterns of activity in different regions, but slightly different patterns for each subject. See S.V. Shinkareva and others (2008), ‘Using fMRI brain activation to identify cognitive states associated with perception of tools and dwellings’, PLoS ONE 3(1): e1394, doi:10.1371/journal.pone.0001394.

(12) Donald Davidson is well known for arguing that ‘there are no strict psychophysical laws’ (see p. 222 of his ‘Mental Events’, republished in his Essays on Actions and Events, Oxford University Press, 1980). This thesis (if we understand ‘strict’ as ‘general’) is the same as mine, and his reasons for it are similar to mine. However, he uses this thesis in defence of his theory of ‘anomalous monism’, that all events are physical while some of them are also ‘mental’, and so physical–mental causal interaction is law-like causal interaction of two physical events. But, contrary to Davidson, I am assuming (for all the reasons given in Chapter 3) that there are events of two distinct types, physical and ‘mental’ (in my sense); and so I reject Davidson’s resulting theory.

(13) This theory was originally put forward by J.A. Fodor in his The Language of Thought, Harvard University Press, 1975.

(14) For a selection of papers on both sides of the language-of-thought/connectionism debate see Parts II and III of (ed.) W.G. Lycan and J.J. Prinz, Mind and Cognition: An Anthology, 3rd edition, Blackwell Publishing, 2009.

(15) This, despite the fact that ‘psychophysics’ has been trying to measure the strength of sensations for the past 150 years. See the article by D.R.J. Laming, ‘Psychophysics’, in (ed.) Richard L. Gregory, The Oxford Companion to the Mind, second edition, Oxford University Press, 2004. He writes: ‘Most people have no idea what “half as loud” means…In conclusion, there is no way to measure sensation that is distinct from measurement of the physical stimulus’.

(16) This tradition originates from the work of F.P. Ramsey (‘Truth and Probability’ in his The Foundations of Mathematics and other Logical Essays, Routledge and Kegan Paul, 1931). It typically measures someone’s degree of belief in a proposition (the ‘subjective probability’ which they ascribe to it) by the lowest odds at which they believe that they would be prepared to bet that it was true. If someone is, they believe, prepared to bet £N that q is true at odds of 3 to 1 (so that they would win £3N if q turned out true, but lose their £N if q turned out false) but not at any lower odds (e.g. 2–1), that—it was claimed—showed that they ascribe to q a probability of ¼ (because then in their view what they would win multiplied by the probability of their winning would equal what they would lose multiplied by the probability of their losing). But that method of assessing subjective probability will give different answers varying with the amount to be bet—someone might be willing to bet £10 at 3–1 but £100 only at odds of 4–1, which shows that people’s desire for a sum of money does not increase in proportion to the sum. And surely how it increases with the sum varies from person to person; and people have desires and moral beliefs, having nothing to do with the sum of money which they might win, which affect whether or not they bet. Again, none of this information obtained from subjects’ beliefs about how they would act in different situations will yield numerical values of beliefs, desires, etc. precise enough to enable us to calculate how they would act in a yet different new situation.
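The arithmetic behind the probability-of-¼ claim can be spelled out in a line (a worked example using only the figures already in this note). At odds of k to 1 the bettor stakes £N to win £kN, and the lowest acceptable odds are those at which, by the bettor’s own lights, expected gain just equals expected loss:

\[
P(q)\cdot kN \;=\; \bigl(1 - P(q)\bigr)\cdot N
\quad\Longrightarrow\quad
P(q) \;=\; \frac{1}{k+1},
\]

so that at odds of 3 to 1 (k = 3) we get P(q) = ¼, as stated. The objections in this note are then objections to reading a stable P(q) off betting behaviour: if the acceptable k varies with the stake £N, no single value of P(q) is determined.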

(17) For a summary of some of these differences between people see Michael Gazzaniga, Who’s in Charge?, HarperCollins, 2011, pp. 195–8. For one example see the paper by Michael B. Miller and others, ‘Extensive individual differences in brain activations associated with episodic retrieval over time’, Journal of Cognitive Neuroscience, 14:8 (2002), 1200–14. These authors showed that when subjects were asked to recall words which they had previously been shown, although in most subjects this process involved activations in the right anterior prefrontal cortex, it also involved activations in different parts of that cortex and in other regions which differed from subject to subject. When subjects were retested several months later, each subject had activations in similar brain areas on each occasion. These variations between individuals were connected with differences of memory ability and character.

(18) There has been a recent study of the frequency of ‘experienced desire’ and of attempts to resist it by ‘willpower’ among Germans living near Würzburg. The study concluded that ‘the average adult spends approximately eight hours per day feeling desires, three hours resisting them, and half an hour yielding to previously resisted desires’! W. Hofmann and others, ‘What people desire, feel conflicted about, and try to resist in everyday life’, Psychological Science, 23 (2012), 582–88. However, the detailed tables of which desires were resisted for what reasons suggest that what the authors call ‘moral integrity’ was a very infrequent reason for resisting a desire. The normal case of resisting a desire would therefore seem to be merely the case where one has a stronger desire. In such a case, as I argue in the text, the relative strengths of the desires inevitably determine the outcome. Moral conflicts arise only when someone believes that it would be overall good to resist a desire for a reason other than that they desire to do something incompatible with fulfilling it.

(19) I give a brief assessment of J.R. Lucas’s argument from Gödel’s theorem for the indeterminism of human conscious life in Additional Note H.

(20) K.N. Kay and others (2008) devised a decoding method which made it possible to identify, ‘from a large set of completely novel natural images, which specific image was seen by an observer’. See their ‘Identifying natural images from human brain activity’, Nature 452 (20 March 2008), 352–5.

(21) See the work described in S. Kellis and others, ‘Decoding Spoken Words using local field potentials recorded from the cortical surface’, Journal of Neural Engineering, 7 (2010), 1–10.

(22) Paul J. Zak and others found that increasing testosterone in men makes them less generous in the game situations created by psychologists. See their 2009 paper ‘Testosterone Administration Decreases Generosity in the Ultimatum Game’, PLoS ONE 4(12): e8330, doi:10.1371/journal.pone.0008330.

(23) Harry G. Frankfurt, ‘Alternate Possibilities and Moral Responsibility’, Journal of Philosophy 66 (1969), 829–39; reprinted in (ed.) G. Watson, Free Will, second edition, Oxford University Press, 2003 (see p. 173).

(24) My argument in the next two paragraphs is in essence that of David Widerker, see his ‘Libertarianism and Frankfurt’s attack on the Principle of Alternative Possibilities’, Philosophical Review, 104 (1995), 247–61, republished in (ed.) Watson.

(25) Derk Pereboom, Living without Free Will, Cambridge University Press, 2001, pp. 84–5.

(26) Peter van Inwagen, ‘Free Will Remains a Mystery’, Philosophical Perspectives 14: Action and Freedom, 2000, pp. 1–19.

(27) Aristotle, Nicomachean Ethics, 1103b.

(28) Recent psychological studies have shown that not merely does forcing yourself to do some difficult action make it easier to do an action of the same kind next time, but forcing yourself to do difficult actions of one kind makes it easier to do difficult actions of another kind. See R.F. Baumeister and J. Tierney, Willpower, Penguin Press, 2011, ch. 6. These studies were not, however, concerned only or mainly with actions done because the agent believed them to be morally good, and so with their effects on moral character. Willpower can be exercised to weaken the influence of one purely selfish desire (e.g. to stay in bed) and so make it easier to fulfil another selfish desire (e.g. to get rich).