We concluded the previous chapter by asking the question: how does the unconscious brain create and inspect the display medium of conscious perception? If the argument so far is correct, the Hard Problem of consciousness can be reduced to just this question. In this chapter we start on the exploration of some possible ways to answer it.
In the search for useful answers, getting the question right is half the battle. The way I have formulated the Hard Problem has already foreclosed some options. Some of these foreclosures were made explicit earlier, but all of them should be borne in mind.
First, like any formulation of the Hard Problem, this one presupposes that there is such a thing as conscious experience, and that it can be picked out (in some way or another) as being different from either behaviour or brain activity. I have not spent much time justifying this assumption, since the contrary view—that there is really no such thing as consciousness—is unlikely to find favour among most readers. Are we not familiar enough with our own conscious experiences?
But it is worth putting on record that some Radical Behaviourists (an endangered but living species) do not accept the reality of conscious experience. They treat subjective accounts of experience as fundamentally misguided, and replace them with statements about behaviour. Once, in a heated argument with one such, Howard Rachlin, I came up triumphantly with an example which, surely, would convince him that some conscious experiences just don’t translate into behavioural terms. Suppose, I asked, you come into a room and see two individuals both sitting motionless in arm-chairs; a gramophone is playing a Mozart string quartet; one of the two individuals is listening to the music, but the other is deaf: how can you describe the difference between what is going on in these two individuals without reference to the subjective experience of the one listening to Mozart? Quick as a flash, Howard answered as follows: the one who isn’t deaf has a whole lot of behavioural patterns which will include, when he is later asked, making verbal statements about Mozart and string quartets. That is how ‘listening to music’ is translated into behavioural terms. Since then I have given up all attempt to convince Radical Behaviourists that conscious experience is independently real.
(p.124) Second, again like any formulation of the Hard Problem, this one supposes that such a problem exists. As we saw in Chapter 1, not everyone agrees with this. A majority of working scientists, in particular, take the view that the problem of consciousness will be solved in the same general way as the problem of life was solved. That is to say, there will be solutions to each of the separate aspects that today appear to make up the Hard Problem, and these solutions will require no more than the detail of experimental discovery plus standard biological explanation. If this point of view is correct, then it will be sufficient to discover more and more about the detailed brain mechanisms that underlie conscious perception, and all will then fall into place.
Let me be clear that, if this should prove to be the case, I shall applaud. I have no desire—as do those whom Dan Dennett aptly christened the New Mysterians—for consciousness to remain mysterious; and, if there is a solution to be found within normal science, so much the better. But there are strong reasons to doubt that this will be so.
Chapter 3 described the contract that biology made with physics and chemistry: biological explanation will respect their laws, provided these allow selection by consequences (in natural selection and in the feedback mechanisms designed by natural selection). That contract works well enough over the rest of biology, but seems to break down for the special medium of conscious perception. It is not selection by consequences where the difficulty principally lies. If one could sort the physics out, there doesn’t seem to be an insuperable problem in finding causal effects for conscious perception to produce. Though these are much more restricted than appears to introspection, they still provide enough purchase for natural selection to work. As to servomechanisms, we identified in Section 8.4 a range of ways in which conscious perception is likely to increase their scope and efficiency. To be sure, acceptance of these causal effects for consciousness per se means that we are also foreclosing on epiphenomenalism; but we found good reason to do this in Chapter 9.
The problem, rather, lies in the physics and chemistry. It isn’t that conscious experience doesn’t obey their laws, it is rather that they don’t seem to apply to it at all. None of the usual measurements make any sense. You cannot speak of the position, mass, momentum, acceleration, energy, etc., of qualia, let alone measure them. And the fact that you can apply all these concepts to the brain and the brain’s components doesn’t help, once you decide (as we did when we killed off epiphenomenalism) that conscious experience is capable of having causal effects over and above those of its underlying neural activities.
This, by the way, is not a problem just for psychology and neuroscience. Physics aims to give a complete and completely unified account of the entire universe. Thus it cannot rest easy with a set of natural phenomena, such as those of conscious experience, which resist physical measurement and explanation. So either consciousness must be made to fit contemporary physics or physics itself must (p.125) change to accommodate consciousness. Some physicists, like Roger Penrose (whose theory we consider in Chapter 16), advocate just that.
The scientific stakes, therefore, are high—so high, in fact, that I shall not yet foreclose on the possibility that there will after all be a ‘normal science’ account of consciousness. The most promising contemporary attempt to construct such an account flies under the banner of ‘functionalism’—a doctrine that this chapter therefore submits to close scrutiny.
Now, the single fact about consciousness of which we can be most certain is that it is in some way connected to the activity of the human brain. So the brain is a good place to start in seeking possible solutions to the Hard Problem. The trouble is that there are many different ways in which to think about the brain; and, depending on the way you choose, you arrive at quite different types of solution.
For the brain is:
(1) a system which
(2) interacts with an environment
(3) and is made up of physicochemical components;
(4) these components are biological cells,
(5) more specifically, neural cells.
Depending upon which of (1)–(5) you think comprise the critical conditions for consciousness, you can end up with wildly divergent hypotheses. Functionalism takes (1) and/or (2) as its starting point.
10.2 Conscious computers?
Suppose the only thing that matters for the making of conscious experience is the nature of the system, irrespective of the components of which it is made. Then, if you make a system that has functions identical to those of the human brain but make it out of different components (silicon chips, for example), the system will have conscious experience. In its extreme form, this line of thought leads to the supposition that computers which merely simulate functions identical to those of the human brain would experience consciousness.
The famous Turing test encapsulates this supposition. You face two closed doors through which you feed a series of test questions. Printed answers to your questions are slipped back under the doors. The answers coming from the door to your left are as convincing as those from the door to your right. No matter how complicated you make the questions, how demanding of what you take to be intelligence or emotion or social skills or aesthetic appreciation or any other human attribute, you cannot distinguish between the quality of the answers from left and right. You then open the doors: on the left, you see a human being typing the answers into a computer console and, on the right, just a computer producing the answers itself. (p.126) The computer has just passed the Turing test. If you regard this test as valid, then you must concede that the computer has all the functions of the human brain—including that of being conscious. (Strictly speaking, Turing introduced his test as one of intelligent behaviour; but the general form of the argument can be, and frequently is, applied to consciousness.)
No computer has yet passed the Turing test. But this has not prevented the notion that a sufficiently sophisticated computer would develop consciousness from gaining wide currency, not only in science fiction, but also among philosophers and scientists, especially those working on ‘artificial intelligence’. This field is devoted to making computers as clever as they can get. There are already well-known demonstrations that they can be very clever indeed—clever enough to beat Grand Masters at chess, for example. However, there are convincing theoretical arguments that make it unlikely that, no matter how clever they become, computers will ever develop consciousness.
These arguments turn on the distinction, most familiar in the context of ordinary language, between syntax and semantics: that is, between the rules that govern the ways in which strings of symbols can be put together (syntax) and the meanings to be attached to the strings (semantics). If you have ever learned Latin, you know how to take the stem of a verb and conjugate it (am-o, am-as, am-at and so on). You can do this without any idea of what the verb means, or even if the stem doesn’t exist at all. That’s syntax without semantics. And that, critically, is what computers do: they conjugate strings of symbols without any knowledge of the meanings of the strings.
In most computers, the strings take the form of a series of interconnecting switches, each of which at any one time can be either open or closed. The two possible positions of the switches can be regarded as 0s (closed) and 1s (open), and so the positions of a series of switches can be used to represent numbers in the binary arithmetic that most children nowadays learn at school. This is the way the computer’s ‘machine code’ works. Higher-order computer languages merely provide ways of manipulating the machine code in a manner less laborious than that required to specify each and every change in switch position. So all that computers do is to transform one set of switch positions into another. The sets of positions are enormously complex and the switching takes place at a very great speed. Nonetheless, that’s all there is. The interpretation of the series of switch positions—even at their most basic level as 0s and 1s—is carried out, not by the computer, but by the human beings who build, program and use it.
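The point that computation is pure syntax can be caricatured in a few lines of code (my own illustrative sketch, not anything drawn from the text): a rule maps one string of 0s and 1s to another, and nothing in the rule refers to what the strings stand for.

```python
# A purely formal rule over symbols: flip every switch position.
# Whether the strings encode numbers, letters, or nothing at all is
# decided by the human interpreter, not by the rule itself.

def transform(bits):
    """Map one string of 0s and 1s to another, with no semantics."""
    return [1 - b for b in bits]

state = [0, 1, 1, 0]       # switch positions (closed = 0, open = 1)
state = transform(state)   # the 'computation': [1, 0, 0, 1]
```

The function names and the particular rule are hypothetical; any formal transformation would make the same point.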
This line of argument may appear to contradict a common way of describing computers, namely, that they are systems for the processing of ‘information’. But this is something of a weasel word. In its everyday sense, ‘information’ is information about something, it conveys meaning. But I have just asserted that computers cannot on their own compute meaning. The information they process is interpreted by human beings; for the computer itself, it is uninterpreted information.
(p.127) The reason that, nonetheless, it is common to apply the language of information processing to computers is that the word ‘information’ has a second, technical, sense within the mathematical theory of ‘information’ or ‘communication’. Consider again a series of computer switches in open or closed positions, or their equivalent as a string of 0s and 1s. Let’s say the string is just four units long. At each position in the string, there are just two possibilities: 0 or 1. Across all four positions, there are therefore 2⁴ = 16 possibilities. If you have no further knowledge of the switch settings, your total ‘uncertainty’ spans these 16 possibilities; expressed in ‘bits’, that is, as powers of the base 2, the uncertainty is 4 bits (16 = 2⁴). You are given ‘information’ in the sense of mathematical communication theory to the extent that you are able to reduce this uncertainty. To know all the actual switch settings is to reduce the uncertainty completely, so you would gain 4 bits of information. It is in this sense, and this sense only, that computers, properly speaking, transmit information: as switch positions are set, the uncertainty as to what possible strings the settings might form is reduced. This, by the way, is exactly the same sense in which a chain of nucleotides constituting a stretch of DNA can be said to transmit information. And, just as DNA is ignorant of the proteins for which it is ‘coding’, so a computer is ignorant of what its switch settings stand for.
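The arithmetic of the example can be made explicit in a short sketch (names are my own): uncertainty in bits is the base-2 logarithm of the number of possible switch settings.

```python
import math

# For n binary switches there are 2**n possible settings; the
# uncertainty, in the technical sense of communication theory, is
# log2 of that number of possibilities, i.e. n bits.

def uncertainty_bits(n_switches):
    possibilities = 2 ** n_switches
    return math.log2(possibilities)

print(uncertainty_bits(4))  # 4.0 bits, since 2**4 = 16 possibilities
```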
To dramatise the distinction between syntax and semantics, John Searle (in his 1980 paper published in Behavioral and Brain Sciences) used an analogy, which has since become famous as the ‘Chinese Room’. Again imagine the two doors set up for the Turing test. You feed into each door a series of Chinese words written in Chinese pictograms. Out from behind the doors come their English equivalents, written in normal Latin script. The outputs from the two doors are equally accurate. You then open up the doors. Behind one is a bilingual speaker of Chinese and English who knows what the words in both languages mean; he is performing a normal task of translation. Behind the other is, this time, not a computer but another human being. This person speaks neither Chinese nor English. But he has a look-up table: two long columns with Chinese pictograms on one side and equivalent English words on the other. So he just looks up the pictogram he receives and sends back its English equivalent—without the least understanding of what the words mean. Computers are like the second person.
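The second person in the Chinese Room amounts to nothing more than a table look-up, which a few lines of code make vivid (the table entries here are invented for illustration; Searle's original uses whole rule-books, not single words):

```python
# The look-up-table person: pictogram in, English word out, with no
# understanding of either language anywhere in the process.

LOOKUP = {
    "猫": "cat",   # hypothetical entries; any pairing would serve
    "犬": "dog",
}

def room_translate(pictogram):
    # Pure table look-up, exactly like the second person in the room.
    return LOOKUP[pictogram]

print(room_translate("猫"))  # "cat"
```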
Searle’s analogy has led to intense, sometimes ferocious, debate. For my part, I find the argument totally convincing. Conscious experiences are nearly always ‘intentional’ (Chapter 4). What we perceive is perceived, immediately and automatically, as this or that meaningful entity. In Fig. 10.1, for example, you see either a vase or two profiles facing one another. You cannot see both of these percepts at once; and it is extremely difficult to see the figure as a series of lines that form nothing particular, neither vase nor profiles. Given these assumptions, then, the argument is simple. Conscious experiences are imbued with meaning; computers cannot (without human interpretation) compute meaning; therefore, computers cannot be conscious.
But beware. Whatever the force and clarity of this argument, this has not prevented many eminent thinkers from continuing to believe the opposite: that, with increasing complexity of the processing they perform, the day will come when computers develop consciousness. Our further discussion here, however, will take it as established that this can never happen.
10.3 Conscious robots?
Computers are systems for the processing of information, in the technical sense of this word. So, for that matter, are brains. Rather than multiple series of switch positions, the brain employs multiple series of ‘spikes’ (passages of electrochemical currents along and between neurons) to determine which, out of a very large number of possibilities, will be the actual total state of neuronal events at any one time. In the previous section we have seen that a system of this kind is incapable on its own of generating meaning, and therefore incapable of having conscious experience. We can accept this conclusion without contradiction for computers, but clearly not for brains, since we know that brains do have conscious experiences. So what do we need to add to an information-processing system for it to cross the barrier between syntax and semantics?
(p.129) Computers don’t interact directly with their environment (except in the trivial sense of the human environment that programs them and interprets their output). But robots do. Might such interactions be sufficient to endow a robot, made out of silicon chips, tin cans or anything else, with the capacity to interpret in a meaningful way its own informational states?
Let us return to the Chinese room, but this time apply its lessons to a robot rather than a computer. We can readily see the difference between the modes of operation of the two people in the Chinese room, the one doing normal translation, the other using a look-up table. But is this a difference that matters? Maybe, behaviourists might argue, ‘meaning’ is as fictional a concept as is (for them) consciousness itself.
Suppose all there is to understanding the meaning of a ‘stimulus’ (whether it be a word, an object, a face or anything else) is simply to have a repertoire of behavioural responses appropriate to all the different circumstances in which you might encounter it. Then the difference between the two people in the Chinese room is that the one who understands Chinese and English has a very large and varied behavioural repertoire (depending on environmental circumstances, the sentences in which the words are embedded, and so on) for responding to Chinese words, whereas the other one has a very limited repertoire (that of picking out correspondences between pictograms and words in the same row of the look-up table). On this analysis, they can both give a ‘meaning’ to a Chinese pictogram, but in the one case it is very broad and in the other, very narrow. Or, to put the same point in the behaviourist manner, neither can truly be said to give a meaning to a Chinese pictogram, because ‘meaning’ is in both cases a misleading abstraction from the real facts of behavioural dispositions.
Robots differ from computers in that they are endowed with just such behavioural dispositions. The dispositions may be built in at the time the robot is constructed. More interestingly, they can also be acquired by learning, which is the way that human beings acquire most of their behavioural dispositions. So, if this behaviourist analysis of meaning is correct, the language of meaning may perhaps be applied to robotic behaviour as appropriately as to the human kind. This, indeed, is a step we have already taken, when we endorsed Harnad’s treatment of the formation of categorical representations (Section 4.5). And, as pointed out there, there is good evidence that artificial neural networks are able to learn and use such representations, if they are provided with a series of possible category exemplars together with feedback as to which are ‘correct’ or ‘incorrect’. There would seem to be no difference in principle between this way of using feedback to train neural networks and the way in which human beings learn to categorise elements in the world with which they interact.
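The kind of feedback training just described can be sketched minimally (this is my own toy perceptron, not Harnad's model or any network from the literature): the network is shown labelled exemplars, told whether its response is ‘correct’ or ‘incorrect’, and adjusts its weights accordingly.

```python
# A minimal perceptron trained by feedback on category exemplars.
# Two hypothetical categories, separable along the first feature.

def train(examples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(examples, labels):
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out  # the feedback: correct (0) or incorrect
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

xs = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1), (0.8, 0.3)]
ys = [0, 0, 1, 1]            # category labels supplied as feedback
w, b = train(xs, ys)         # after training, all exemplars are classified correctly
```

Nothing here turns on the details: the point is only that category boundaries can be acquired from exemplars plus corrective feedback, with no meanings supplied in advance.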
The argument from meaning, used above to defeat the possibility of a ‘conscious computer’, does not therefore apply to the possibility of a conscious robot. But this represents only a limited step forward. As we have seen, there is good evidence (p.130) that the computation of meaning is conducted by the unconscious brain and that consciousness is not required for that computation to influence behaviour (consider, for example, Groeger’s experiment, described in Section 4.4). Thus our discussion so far rules out the syntax-only computer, but not the behaving robot, as a model for the unconscious brain; but it has nothing to say about whether a behaving robot would be conscious or not.
One line of argument which suggests that a behaving robot would not be conscious is the following.
The appropriate set of behavioural responses, to which behaviourists appeal as the true ‘meaning of meaning’, itself depends upon the way in which the ‘stimulus’ is perceived. How I respond to the drawing in Fig. 10.1 will differ dramatically depending on whether I see it as a vase or as two facing profiles. This difference does not depend upon changes in the ‘stimulus’ on the retina, since perceptual shifts of this kind occur even if the pattern of retinal stimulation is kept constant. Nor is this just a classroom trick performed by clever psychologists. Few of us have not at some time or other taken fright at the sight of something that is in fact totally innocuous: a menacing figure lurking behind a tree that suddenly transforms itself into a bush. Interpretation in this way of the ‘stimulus’ as this or that is an integral part of virtually all perception. Yet, the allocation of at least some behavioural responses becomes possible only after perceptual interpretation is complete (and it changes when interpretation changes). I cannot, for example, talk about the picture in Fig. 10.1 as a vase until I have seen it as one. It is these perceptual qualities that lie at the heart of the Hard Problem. If, on at least some occasions, behavioural dispositions have to wait upon the formation of a percept, then the intentional qualities of the percept cannot be explained in terms of these dispositions.
So, while we may grant robots the power to form meaningful categorical representations at a level reached by the unconscious brain and by the behaviour controlled by the unconscious brain, we should remain doubtful whether they are likely to experience conscious percepts. This conclusion should not, however, be over-interpreted. It does not necessarily imply that human beings will never be able to build artefacts with conscious experiences. That will depend on how the trick of consciousness is done. If and when we know the trick, it may be possible to duplicate it. But the mere provision of behavioural dispositions is unlikely to be up to the mark.
10.4 Functionalism
The most common contemporary approach to the problem of consciousness is generally known as ‘functionalism’. Essentially, this is what I have described in the previous section as the ‘conscious robot’ position: the hypothesis that, if one duplicates in a robot all those functions which, in a human being, are associated with (p.131) conscious experience, then the robot would also have conscious experiences—no matter what the robot is made of. So dominant is this position in cognitive science, artificial intelligence and philosophy that it is hard to make a contrary voice heard at all. But I have come to the view that functionalism is false. (Notice, by the way, that this is not a comfortable view for me to come to. For my own hypothesis concerning the survival functions of conscious experience, set out in Chapters 7 and 8, is functionalist. I leave till later the search for a way out of the dilemma thus posed.)
My reasons for the conclusion that functionalism is false turn on the results of a particular set of experiments on the phenomenon of ‘synaesthesia’. This is a condition in which stimuli presented in one sensory modality give rise to sensations in another. Although synaesthesia may involve many different combinations of senses, one of the most common is ‘word–colour synaesthesia’ or ‘coloured hearing’. In this condition, when the synaesthete hears or sees a word, she sees in addition, in her mind’s eye, a colour or multicoloured pattern. I say ‘she’, by the way, not for reasons of political correctness, but because the great majority of synaesthetes are women. Synaesthetes are in other respects normal. The details of their synaesthetic experience are varied and idiosyncratic. The condition tends strongly to run in families. But, even within a family of, say, coloured hearing synaesthetes, different family members have different specific experiences. So one may respond to the word ‘train’ with a bluish-green experience and another to the same word with an orange experience, and so on. Synaesthetes almost universally report that they have had their synaesthesia for as long as they can remember. Once they discover that few other people have this kind of experience, they tend not to talk about it to anyone. They fear, with justice, that they will be regarded as queer or crazy. And, indeed, the scientific community has only recently begun to take seriously the reports they give of their experiences.
Before I describe the results of our experiments, let me state the doctrine of functionalism in a form in which it is particularly imperilled by them. As we know, the crux of the ‘Hard Problem’ of consciousness lies in the phenomena of perception—qualia. Consider, as a specific version of the Hard Problem, this question: how should one explain the difference between two subjective experiences of colour, say of red and green? Functionalism approaches a question of this kind in the following way.
It starts by eliminating from the question the qualia—of red and green—as such. For these, it substitutes as the explicandum the repertoire of responses by which the experiencing individual demonstrates, behaviourally, the capacity to discriminate between red and green. This repertoire would include, e.g. pointing to a red (green) colour when requested to do so, using the word ‘red’ (‘green’) appropriately in relation to the colours red and green, stopping (going) at red (green) traffic lights, stating that a lime is green and a tomato, red, and so on. Next, functionalism seeks an understanding of the mechanisms by which these behavioural (p.132) ‘functions’ are discharged. This understanding may be sought at a ‘black-box’ level, as in the box-and-arrow diagrams familiar in cognitive psychology, neural networks, computer simulations and so on; or it may be sought in the circuitry of the actual brain systems which connect the inputs to the outputs of each of the discriminating behavioural functions. A full ‘function’ for a given difference between qualia then consists in a detailed account of the corresponding differences in inputs, in outputs, and in the mechanisms that mediate between input and output. As a shorthand, I shall describe such a full function as taking the form ‘input-mechanism-output’. (The argument is essentially unchanged if one interprets full functions in a more sophisticated manner, as including, for example, feedback from output to input or other additional cybernetic machinery.) If a full functional account is given, then, according to functionalism, there is no further answer that can be given to the original question: what is the difference between the subjective experiences (the qualia) of red and green? To continue asking this question in the face of a complete functionalist account would, so the doctrine holds, be a meaningless activity. 
For, according to functionalism, qualia just are the functions (input-mechanism-output) by which they are supported.
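A full function in the ‘input-mechanism-output’ form can be caricatured in code (every detail here is invented for illustration; real colour discrimination involves nothing so crude as a single wavelength threshold):

```python
# Toy 'full function' for red/green discrimination:
# input (a wavelength) -> mechanism (a stand-in for brain circuitry)
# -> output (a behavioural response). On the functionalist view, once
# this whole chain is specified, nothing remains to be said about the
# qualia of red and green.

def mechanism(wavelength_nm):
    # Crude hypothetical stand-in for the mediating circuitry.
    return "red" if wavelength_nm > 600 else "green"

def full_function(wavelength_nm):
    label = mechanism(wavelength_nm)
    return f"say '{label}'"   # one item from the behavioural repertoire

print(full_function(650))  # "say 'red'"
print(full_function(520))  # "say 'green'"
```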
Note that, even though functionalism is willing (at least in some of its forms) to take into account the detailed circuitry of the brain that mediates between input and output as part of the full description of a function, it does so only as circuitry. The tissue out of which brain circuits are made (neurons, membranes, synapses and so on) and the means by which the circuits operate (passage of impulses along axons, release of neurotransmitter into the synapse, etc.) are regarded within functionalism as irrelevant. In principle, the functionalist holds, one could mimic the circuitry with any materials to hand, and the result, in terms of either conscious or unconscious processing, would be the same. Same functions, same processes: if the relevant brain process attains consciousness, so would the same function no matter what material was used to carry it out.
From this formulation of functionalism one can draw the following, ‘primary’, inference: (1) For any discriminable difference between qualia, there must be an equivalent discriminable difference in function. There is also a ‘complementary’ inference: (2) For any discriminable functional difference, there must be a discriminable difference between qualia. Clearly, there are ways in which this second, complementary inference may be false. There are many forms of behaviour which are not accompanied by qualia at all. So, for example, the pupils of one’s eyes constrict if illumination increases and dilate if it decreases; but one is not normally aware of either of these changes in pupil size. However, in the case of a behavioural domain which is normally accompanied by qualia, whenever functionalism draws the primary inference, it should also draw the complementary one.
Let us apply these inferences again to the example of red and green. The primary inference is that (within the domain of colour vision), if someone claims to have different red and green experiences, then there must be different functions (p.133) (input–mechanism–output) to support this claim. The complementary inference would be that (within the domain of colour vision), if someone manifests different functions, then there must be different qualia accompanying them. The two inferences together constitute a claim for identity between qualia and functions within the domain of colour vision. Functionalism at its strongest generalises this identity claim across all qualia within each domain and all domains of conscious experience.
There is a related but nonetheless separate strand of functionalist thought. This treats the functions that give rise to qualia as providing benefit to the behaving organism. This strand is particularly evident in discussions of the evolution of qualia. The claim here is that evolution works by selection of behavioural functions that contribute (in the usual way) to Darwinian survival, and thus by selection of the neural mechanisms which mediate those functions. The evolution of qualia themselves, on this view, occurs only parasitically by linkage to such functions. From this view one can draw a further inference. (3) One would not expect to find qualia which adversely compete with the functions to which they are linked.
A final word about functionalism is this: functionalism is proposed in two different flavours (I say ‘flavours’ rather than ‘forms’, since the nuances are often quite subtle). In one flavour, qualia are reduced to so little beyond the functions with which they are linked as to be virtually eliminated. This is more or less Dan Dennett’s position in his book Consciousness Explained. In the other, the separate existence of qualia is explicitly acknowledged, but all empirical data are treated as requiring explanation in terms only of the functions with which they are linked. So, as we have seen, Stevan Harnad argues that qualia are epiphenomena: they are caused by functions and their underlying mechanisms, but have no causal effects of their own. In either flavour, qualia are left with no substantive properties of their own.
10.5 Experiments on synaesthesia
At the Institute of Psychiatry in London, we have been using neuroimaging techniques to study word–colour synaesthetes. Our data appear to contradict what I called above ‘the complementary inference’, and also to demonstrate qualia with behavioural effects that are adverse to the functions with which they are linked (contrary to inferences 2 and 3, above). This evidence constitutes a serious challenge to functionalism in both its flavours. I shall therefore describe the experiments here in some detail.
The starting point for any study of synaesthesia lies in the synaesthete’s own report of her experience. However, if the arguments advanced here are to hold, it must be the case that the report is veridical, in two senses. First, the report must be more than mere confabulation—there must be something that is separate from the report and is reliably reported. Second, the reported experience must be perceptual. Otherwise, we could not base arguments about qualia upon it.
First, Simon Baron-Cohen, Laura Goldstein and their colleagues in London demonstrated the reliability of reports of word–colour synaesthesia. Their subjects gave essentially identical reports of their colour experiences in response to a list of words at a year’s interval, with no prior warning that they would be retested at that time. In a group of non-synaesthetes retested over a period of just a month, the similarity of reported word–colour associations was strikingly inferior.
Second, the perceptual nature of the synaesthetic experience has been well-documented in experiments by ‘Rama’ Ramachandran and Ed Hubbard in San Diego. I give here just one example of their findings. This depends on the phenomenon known as ‘visual pop-out’. It is characteristic of visual perception that items in a display with a feature that differs from other ‘background’ items ‘pop out’ from the background—that is to say, they are seen automatically and involuntarily as being different, and they are grouped together as being separate, from the background. Exploiting this feature of perception, Ramachandran and Hubbard presented subjects with a black-against-white display of 2s and 5s, computer-generated so that the latter were mirror images of the former. The 2s were disposed among the background 5s so as to form a triangle (Plate 10.1). Non-synaesthetes found it hard to detect the triangle. In contrast, number-colour synaesthetes, for whom 2s and 5s gave rise to different colour sensations (e.g. red and green), at once saw the triangle, which stood out in one colour against a background of a different colour. It is virtually impossible to account for this and similar phenomena except by giving credit to the synaesthetes’ own reports: that they have a perceptual experience of colours when they see black and white number displays just as they do when they inspect coloured surfaces.
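The logic of the pop-out stimulus is easy to convey with a small sketch. The grid below is purely illustrative (Ramachandran and Hubbard's actual displays used mirror-image glyphs at randomised positions, not a neat lattice); it simply embeds target '2's along the edges of a triangle within a background of '5's:

```python
def popout_display(rows=10, cols=10, triangle=((2, 4), (6, 2), (6, 6))):
    """Build a grid of background '5's with '2's placed along the edges
    of a triangle, in the spirit of Ramachandran and Hubbard's stimulus.
    (Illustrative sketch only; the dimensions and vertex positions are
    arbitrary choices, not those of the published experiment.)"""
    grid = [['5'] * cols for _ in range(rows)]
    (r1, c1), (r2, c2), (r3, c3) = triangle
    # Draw each of the three edges by linear interpolation between vertices.
    for (ra, ca), (rb, cb) in [((r1, c1), (r2, c2)),
                               ((r1, c1), (r3, c3)),
                               ((r2, c2), (r3, c3))]:
        steps = max(abs(rb - ra), abs(cb - ca), 1)
        for t in range(steps + 1):
            r = round(ra + (rb - ra) * t / steps)
            c = round(ca + (cb - ca) * t / steps)
            grid[r][c] = '2'
    return [''.join(row) for row in grid]

for row in popout_display():
    print(row)
```

To a non-synaesthete scanning such a display, the '2's must be found glyph by glyph; for a number-colour synaesthete, the two digit classes carry different colour experiences, so the triangle segregates at once.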
This evidence from Ramachandran’s perception experiments is supported by our neuroimaging data. For this work we (Julia Nunn and other colleagues) used the technique of functional magnetic resonance imaging (fMRI). When a particular brain region is active, it uses up oxygen. This is then replenished by an increase in the supply of oxygen delivered to the region via the blood supply. Using fMRI it is possible to distinguish between signals from oxygenated and deoxygenated haemoglobin (the carrier of oxygen in the blood), and so to detect brain regions that are particularly active at a particular time. Functional MRI is usually done by subtraction. That is, you measure activation in your experimental condition of interest, e.g. listening to words, and in a control condition, and you subtract the latter from the former. The results of this subtraction give you the pattern of activation that is specific to the experimental condition of interest.
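The subtraction logic just described can be sketched in a few lines. The voxel values and threshold below are hypothetical, chosen only to illustrate how activation specific to the experimental condition is isolated by subtracting the control condition:

```python
def subtraction_map(experimental, control, threshold=1.0):
    """Voxel-wise subtraction: activation specific to the experimental
    condition is whatever survives subtraction of the control map.
    Maps are flat lists of per-voxel signal values (hypothetical units).
    Returns the indices of voxels whose difference exceeds threshold."""
    assert len(experimental) == len(control)
    diffs = [e - c for e, c in zip(experimental, control)]
    return [i for i, d in enumerate(diffs) if d > threshold]

# Toy example: only voxel 2 responds to words over and above tones.
words = [1.0, 1.1, 3.5, 0.9]   # signal while hearing words
tones = [1.0, 1.0, 1.2, 1.0]   # signal while hearing tones (control)
print(subtraction_map(words, tones))  # only voxel index 2 survives
```

Real analyses work on full three-dimensional images with statistical thresholding rather than a fixed cut-off, but the principle of the comparison is the same.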
We used this method to detect brain regions activated by simply hearing spoken words. In our study, the control condition consisted of a series of tones. So, the subjects listened to alternating blocks of words and tones, each block lasting 30 seconds. In non-synaesthete controls, as you would expect, the activation caused by spoken words occurred in the auditory cortex and language areas, as it did also in a group of coloured-hearing synaesthetes. However, the synaesthetes (but not the non-synaesthetes) showed an additional area of activation in the visual system. The location of this activation coincided perfectly with the area that is selectively activated by colour (Plate 2.1). This area—known as V4 or sometimes V8—is determined in fMRI by subtraction of the activation patterns produced by monochrome patterns from those produced by coloured versions of the same patterns. The patterns most often used are called ‘Mondrians’, since they resemble the abstract paintings of the artist, Piet Mondrian (Plate 7.2). Coloured, but not black-and-white, Mondrians activate V4. So, just as one would expect from the synaesthetes’ own reports, the same region is activated in their brain by heard words, as is activated by seen colours (Plate 10.2). These findings, like those in Ramachandran and Hubbard’s experiments, support the hypothesis that synaesthetic colour experiences are truly perceptual in nature. The synaesthetic experience, then, at least in the cases of word- or number-colour synaesthesia, is reliable, veridically reported and perceptual.
The colour experiences of synaesthetes, in addition, provide a particularly uncluttered example of the general truth (Chapter 2) that perceptual experiences are constructed by the brain, and are only (at best) indirectly related to the states of affairs in the ‘real world’ that cause them to be constructed. The perceptual experiences of colour, in word–colour synaesthetes, are reliably, automatically and involuntarily triggered by spoken words. Thus they bear no relationship to the wavelength properties of light reflected from surfaces which normally provide the external basis for experienced colour. This leaves no room for doubt that synaesthetic colours are constructs of the brain, nor any room for interpretation within a ‘naïve’ or ‘direct’ perceptual realist framework. For such an interpretation even to get off the ground, there has to be at least a resemblance between the state of affairs in the external world that gives rise to the percept and the percept itself (though this begs the question of what could possibly be meant by ‘resemblance’ in this context). Clearly, no such resemblance exists when a synaesthete reacts, say, to the heard word ‘train’ with a greenish-blue experience. When qualia of this kind are experienced, therefore, they cannot be construed as direct perception of any state of affairs in the real world.
These experiments demonstrate yet again, by the way, that the ‘privacy’ of conscious experience offers no barrier to good science. Synaesthetes claim a form of experience that is, from the point of view of most people, idiosyncratic in the extreme. Yet it can be successfully brought into the laboratory.
10.6 Function vs tissue
Section 10.4 presented a formulation of functionalism without contrasting it to any alternative approach to qualia. In the context of our experiments on synaesthesia, the most relevant contrast is with what I shall call, for want of a better word, the ‘tissue’ approach. (The term ‘physicalism’ is also used in this sense.) The ‘want of a better word’ reflects the fact that this alternative to functionalism has been articulated far less clearly than functionalism itself. Indeed, it is not clear that it has ever been fully articulated at all.
As we have seen, functionalism more or less inevitably leads to the conclusion that, if a system displays behaviour of a kind that, in us, is associated with conscious experience, then the components out of which the system is made are irrelevant (as one among many examples, see Igor Aleksander’s book, How to Build a Mind). The contrary, tissue, view, however, holds that there is something special about the physical components from which brains are made that provides a necessary condition for consciousness to arise. This view (which we consider in its own right later) may stress the physics of these components, as in Hameroff and Penrose’s quantum gravitational theory of consciousness (see Chapter 16), or their biology, as in Koch and Crick’s search for genes underlying the evolution of the neural correlates of consciousness. Views of this kind are sometimes explicitly proposed as superior to functionalism, as by Hameroff and Penrose, but more often it is left unclear whether or not they are compatible with functionalism. Similarly, on the functionalist side, some thinkers (e.g. Harnad) concede the possibility that, for a complete account of consciousness, the actual mechanisms that the brain utilises may be a crucial addition to its functions, whereas others scorn the whole idea as relying upon ‘wonder tissue’, in Dennett’s caustic phrase.
Despite its relative lack of conceptual articulation, I shall here use the tissue approach as the contrast to functionalism. This choice is dictated by a useful parallel that the contrast offers to the two most plausible accounts of synaesthesia. These hold that synaesthesia is based upon either (1) early and strong associative learning, or (2) an unusual form of ‘hard wiring’ in the synaesthete brain. The parallel recognises equivalences between, on the one hand, associative learning and functionalism and, on the other, hard wiring and the tissue approach.
Recall that synaesthetes generally report that they have had their synaesthesia for as far back as they can remember. They do not normally report any specific learning experience that might have led to their associating a particular word with a particular colour. However, such learning may have taken place at a sufficiently early age to fall into the period of infantile amnesia. Thus, one possible explanation for synaesthesia is that the individuals concerned formed exceptionally strong and enduring associations between words and colours at an early age. This is the associative learning account of synaesthesia. Since the general process of associative learning offers no problems for functionalism, neither does a specific associative learning account of synaesthesia.
The alternative, hard-wiring, account is that the synaesthete brain has abnormal projections that link one part of the brain (the sensory system in which the inducing stimulus is processed) to another (the sensory system in which the synaesthetic percept is experienced). So, in the instance of word–colour synaesthesia, there would be a projection, not existing in the non-synaesthete brain (nor even in the brains of individuals with other types of synaesthesia), from the parts of the brain which process heard and/or seen words to the colour-selective regions of the visual system. This abnormal projection might arise because the synaesthete has a genetic mutation which promotes its growth. Alternatively, a genetic mutation might prevent the extra projection from being ‘pruned’ during early development, a time at which the brain normally shows an abundance of connections that are no longer present in the adult brain. The likelihood of a genetic basis for synaesthesia is strengthened by the fact that there is a strong tendency for the condition to run in families, and especially in the female line.
There is at present no way directly to test the hard-wiring hypothesis (though recent developments in MRI are bringing this prospect closer), since this would require anatomical investigation of the brain. What we have tried to do, therefore, is to test the associative learning hypothesis. To do this, we performed two experiments.
In the first experiment, we trained non-synaesthetes on a series of word–colour associations and then tested them with fMRI to see whether their pattern of activity in response to spoken words had come to resemble the pattern spontaneously displayed by synaesthetes (as shown in Plate 10.2). We made strenuous efforts to ensure that the subjects had formed strong associations between the words and the colours. First, we gave them extensive over-training outside the MRI scanner. We were concerned that, nonetheless, the contextual shift from the training environment to the scanner would weaken these associations. We therefore retrained the subjects once they were in the scanner. Finally, since the synaesthete experience is perceptual, we asked our subjects to ‘imagine’ the colour associated with each word, and also included as a comparison a condition in which they were asked only to ‘predict’ the colour. We anticipated that, if the associative learning hypothesis of synaesthesia is correct, then these non-synaesthete subjects should show, particularly in the ‘imagine’ condition, at least some activation in the V4 region where the synaesthetes showed activation in response to spoken words.
In all, four sets of activation patterns to words were gathered from these non-synaesthete subjects. Two were gathered prior to retraining: ‘pre-predict’ (with instructions to predict the associated colours) and ‘pre-imagine’ (with instructions to imagine them). Two further sets were gathered after retraining in the scanner: ‘post-predict’ and ‘post-imagine’. No activation was seen in any of these conditions in the V4 region activated by words in the coloured-hearing synaesthetes. These negative results did not represent any general failure of activation, as might happen for example if the subjects simply did not attend to the stimuli, since there was clear activation in the auditory cortex and regions of the brain concerned with language, such as Broca’s area, presumably reflecting the active processing of heard words.
This experiment on word–colour associations in non-synaesthetes weakens the possibility that synaesthetic colour experiences result from normal associative learning. If that were so, the non-synaesthetes given over-training on word–colour associations and listening to the words in the scanner should have shown at least some activation in the V4 region.
Conceivably, however, synaesthetes differ from non-synaesthetes in the nature of their associative learning process. Perhaps, in them, this is unusually strong. If so, one might more easily train synaesthetes than non-synaesthetes on an association which is not spontaneously present in the synaesthetes.
To test this possibility, in our second experiment we used training methods similar to those used in the first, but for word–colour associations we substituted melody–colour associations. Neither our synaesthete nor our control subjects had any pre-existing colour associations to the melodies. These were chosen from classical works, by Chopin or Mozart for example. We trained both word–colour synaesthetes and non-synaesthetes before testing them as before in the MRI scanner; and we again retrained them in the scanner. If synaesthetes have generally strong associative learning processes, then they (but not the controls) would be expected to show, after training, responses to melodies in the same V4 region activated by words in these subjects. However, the results showed no significant differences in activation patterns between the synaesthetes and controls; and in neither case was there significant activation in the V4 region. There was again clear activation in the auditory system, so the lack of activation in the visual system could not be attributed to a failure to attend to the stimuli. Thus these results lend no support to the hypothesis that synaesthetes might show particularly effective associative learning. In addition, they clearly distinguish between the brain activation patterns elicited by the kind of sensory association that the synaesthetes spontaneously report (word–colour) and the kind they deny (music–colour).
10.7 Implications of synaesthesia for functionalism
It is always difficult to reject a hypothesis on the basis of negative findings alone. Clearly, we cannot rule out the possibility that, despite the considerable effort we put into over-training non-synaesthetes on word–colour associations in the first experiment, or both synaesthetes and non-synaesthetes on melody–colour associations in the second, we were unable to achieve the strength of the early learning which hypothetically underlies word–colour associations in synaesthesia. Perhaps there is something special about the period of early learning which cannot be duplicated in adult subjects. Nonetheless, the complete absence in these experiments of any activation in the colour-selective regions of the visual system, except in the case of spontaneous synaesthete word–colour associations, casts considerable doubt on the hypothesis that the latter are the fruit of normal associative learning.
Given this (albeit weak) conclusion, we are left by default with the hard-wiring hypothesis. This supposes that synaesthetic perceptual experience arises because of ‘sparking over’ of neural excitation from one pathway (the inducing pathway) to another (the induced pathway) to which the inducing pathway is abnormally connected. In word–colour synaesthesia colours are usually triggered by both auditory and visual presentation of words. The most important feature of the trigger usually lies in the first syllable of the word, whether spoken (when it is called a ‘phoneme’) or seen on the page (a ‘grapheme’). This evidence suggests that the inducing pathway most likely consists in regions in which the auditory and visual representations of phonemes and graphemes are jointly represented. Our fMRI data do not directly throw further light upon the inducing pathway. Nor would they be expected to do so. Recall that the fMRI method depends on the possibility of comparison or subtraction between experimental and control conditions or groups. But the inducing pathway is presumably activated to a similar degree whether words are presented to synaesthetes or non-synaesthetes. So comparison between the activation patterns observed in these two groups of subjects is uninformative.
Our fMRI results do, however, sharpen up hypotheses concerning the likely route from the inducing to the induced pathway. The word–colour synaesthetes in our experiments responded to spoken words by activating the colour-selective region of the visual system without activation at any earlier point in the visual pathways, such as V1 or V2 (see Plate 4.1), although these regions are activated when subjects are presented with coloured visual stimuli. This pattern of results—similar activation in more central parts of the visual pathway, but V1/V2 more clearly activated by the more ‘normal’ route of stimulation—has been reported also in studies of colour after-images, motion after-effects and illusory motion (see Plate 13.1 for an example of such an illusion). In contrast, imagining colours is insufficient to activate either of these regions, V1/V2 or V4. These contrasting patterns of activation are consistent with the common introspection that after-images and after-effects are true visual percepts, whereas merely imagined visual features are not. (You should try this contrast out for yourself. As with many other assertions in this book, careful observation is all you need to do your own experimental detective work.)
Overall, then, these results suggest that activation of modules in the visual system specialised for the analysis of particular visual features, such as colour or motion, is both necessary and sufficient (not requiring supplementation by activity in regions earlier in the visual pathway) for the conscious experience of that visual feature. Data tending to the same conclusion have been reported by Dominic ffytche and his colleagues for hallucinatory experiences of colour in certain patients with eye disease, in whom V4 activation again accompanied the illusory experience. From this point of view, then, word–colour synaesthesia can be regarded as an example of illusory experience in which the triggering stimulus (words) occurs with very high frequency, as compared to triggers for other illusions, e.g. colour after-images or motion after-effects, which occur with much lower frequency. In all these cases, however, once the relevant visual module (V4 for colour, V5 for motion) is activated, the illusory experience occurs automatically. These results, then, have strong implications for determining (in Francis Crick’s phrase) the ‘neural correlate of consciousness’. We deal with this issue in Chapter 13.
A further important aspect of our findings is that we saw activation in the word–colour synaesthetes presented with spoken words in V4 only in the left hemisphere. Given the lateralisation of cortical language systems also to the left hemisphere, this left-lateralised activation in synaesthesia may relate to the fact that it is speech sounds rather than sounds in general which elicit the synaesthete’s colour experiences. Thus the abnormal projection which hypothetically underlies word–colour synaesthesia appears to travel from left-lateralised cortical language systems directly (without involvement of regions lower in the visual system; see above) to left V4.
A final result from our fMRI study of word–colour synaesthetes deserves mention. The data from the Mondrian experiment showed good agreement in the area activated by colour as between synaesthetes and non-synaesthete controls—but only in the right hemisphere. In this hemisphere, both groups showed activation of V4. However, left V4 was activated by coloured Mondrians only in non-synaesthetes. Thus, in the synaesthetes, left (but not right) V4 was activated by spoken words and right (but not left) V4 was activated by coloured Mondrians. These data raise the interesting possibility that, in word–colour synaesthesia, the putative abnormal projection from left cortical language systems to left V4 prevents the normal dedication of the latter region (alongside its right-sided homologue) to colour vision.
Taken together, these results and our inferences from them paint the following picture. Word–colour synaesthetes are endowed with an abnormal extra projection from left-lateralised cortical language systems to the colour-selective region (V4) of the visual system, also on the left. Whenever the synaesthete hears or sees a word, this extra projection leads automatically to activation of the colour-selective region. Activation of this region is sufficient to cause a conscious colour experience. The exact nature of that experience presumably depends upon the particular set of V4 neurons activated. Importantly, there is no evidence that the experienced colour plays any functional role in the synaesthete’s auditory or visual processing of words. (In the next section, indeed, we consider evidence that the experienced colour may actively interfere with such processing.) Thus, the synaesthete’s colour experiences stand in no functional relationship to the linguistic processing that triggers them. This conclusion is incompatible with the functionalist analysis of conscious experience.
10.8 The alien colour effect
The data reviewed in the previous section tend strongly to the conclusion that word–colour synaesthesia is based upon an abnormal, probably genetically determined, projection hard-wired into the brain. Conversely, these data lend no support to the hypothesis that this condition results from any special form of associative learning. This section describes additional experimental data which further weaken the associative learning hypothesis. These data come from a study of a sub-group of word–colour synaesthetes who experience what I have termed the ‘alien colour effect’, or ACE for short. In this phenomenon, the names of colours induce a colour experience that is different from the colour named. So, for example, the word ‘red’ might give rise to the experience of green, ‘blue’ to the experience of pink, and so on. For a given synaesthete, the ACE may affect all, some or just a few colour names.
As is the case for synaesthesia in general, the ACE appears to have been present for as long as the synaesthete can remember, that is, back to early childhood. Now, consider the opportunities for associative learning that this situation provides. A young child with the ACE would frequently encounter circumstances in which someone makes a statement of the kind: ‘see the red bus coming round the corner’. From statements such as these, the child has normal opportunities to learn the visual colour to which the word ‘red’ applies. Synaesthetes do indeed learn colour names normally: as adults they show normal colour perception and normal colour naming. Yet, in the example given above, as well as seeing a red bus come round the corner just after being told about the bus, the child with ACE would also experience a different colour, e.g. green, upon hearing the word ‘red’. Thus she must frequently encounter opportunities for associative learning provided by chains of events such as: the word ‘red’ followed by an experience of green and then the sight of a red bus. If the first part of this chain, the word ‘red’ followed by a green experience, were due to associative learning in the first place, one would expect it to be unlearnt by these further associative learning opportunities. This, certainly, is what happens in countless experiments on so-called reversal learning with both animal and human subjects. Thus, the existence of the ACE is incompatible with the associative learning account of word–colour synaesthesia.
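The unlearning argument can be illustrated with a standard model of associative learning. The sketch below uses a minimal Rescorla–Wagner update rule (my choice of model for illustration, not one the experiments themselves tested) to show that an established association, given repeated pairings in which the predicted outcome fails to occur, is driven back towards zero:

```python
def rescorla_wagner(pairings, alpha=0.3, strengths=None):
    """Minimal Rescorla-Wagner update: each (cue, outcome_present)
    pairing moves the cue's associative strength a fraction alpha of
    the way toward 1.0 (outcome present) or 0.0 (outcome absent).
    The learning rate alpha is an arbitrary illustrative value."""
    strengths = dict(strengths or {})
    for cue, outcome_present in pairings:
        target = 1.0 if outcome_present else 0.0
        v = strengths.get(cue, 0.0)
        strengths[cue] = v + alpha * (target - v)
    return strengths

# Suppose an early-formed "red" -> green association at full strength.
# Repeated later experiences of "red" without green (e.g. the red bus)
# should extinguish it under any normal associative-learning account.
v = {'red->green': 1.0}
v = rescorla_wagner([('red->green', False)] * 20, strengths=v)
print(round(v['red->green'], 3))  # near zero: the association is unlearnt
```

The persistence of the ACE from childhood into adulthood, despite countless such disconfirming pairings, is exactly what this family of models predicts should not happen.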
Given the scope of the conclusions we seek to draw from these phenomena for functionalism, it is important to validate the ACE experimentally. To do so, we modelled our approach on the ‘Stroop interference effect’, encountered in the previous chapter (Section 9.3). This is demonstrated most easily in experiments in which the subject merely has to name, as quickly as possible, the colour of the ‘ink’ in which a series of letters is displayed. (It used to be real ink, but the experiment is now normally done using a computer; nonetheless, everyone continues to call it the ink colour.) In a control condition, the letters are a simple row of Xs. The critical experimental condition is one in which the names of colours (e.g. the word ‘red’) are displayed in ink of an incongruent colour (e.g. green). The subject has to disregard the colour name ‘red’ and answer ‘green’, since this is the colour of the ink. The speed of naming the ink colour in which incongruent colour names are written is reliably slower than the speed of naming the ink colour for a row of Xs. This is the ‘Stroop effect’. It is thought to arise because of the difficulty the subject has in ignoring the colour name when attempting to retrieve the name of the ink colour.
We reasoned that something similar should happen in subjects with the ACE even when they are asked to name just the row of Xs. Suppose this row is printed in red ink. The subject retrieves the name ‘red’ preparatory to uttering it. But, in a subject with the ACE, the name ‘red’ gives rise to an experience of green (or pink, or stripy orange and blue—it doesn’t matter). The green experience gives rise to a tendency to utter, instead of ‘red’, the word ‘green’. This should interfere with the utterance ‘red’ and so slow down colour naming. We anticipated, however, that this might be a small effect, so we also tested our subjects in a full Stroop paradigm.
We first assessed a group of word–colour synaesthetes for the degree to which they displayed the ACE (as the percentage of colour names which caused ‘alien’ colour experiences). On the basis of these scores subjects were assigned to one of three groups: with 0–35%, 35–70% and 70–100% ACE. We also tested a group of non-synaesthete controls. We measured speed of colour naming in a conventional Stroop test, using Xs as the control condition and incongruent colour words (ink colour different from colour name) as the Stroop condition. The results (Fig. 10.2) were very clear. As the degree to which the ACE occurs increased, so colour naming was slowed.
These results confirm the reality of the self-reported ACE, showing once again that such reports are a reliable source of information in synaesthesia. The greater the percentage ACE reported, the slower was colour naming. This effect, furthermore, was observed as clearly in the control condition, in which the subject had only to name the ink colour in which four Xs were presented, as in the Stroop condition. The additional conflict between ink colour and colour name inherent in the latter condition was not required to bring out the effect of the ACE upon the speed of colour naming. Indeed, the Stroop effect, as such, was unaffected by the ACE. This pattern of results presumably indicates that, in subjects with the ACE, the basic process of retrieving the name of the ink colour is sufficient (because the name leads automatically to the experience of a different colour) to slow down colour naming. Quantitatively, the degree of this ACE-induced slowing (if one compares full ACE colour naming speed to that of the non-synaesthetes; Fig. 10.2) was about the same as the size of the Stroop effect itself. Note that the interference caused in colour naming by the ACE must precede the subject’s overt utterance of the colour name. We may therefore infer from this pattern of results that the degree of interference in colour naming caused by the percept of an incongruent colour is as great when this is induced by subvocal retrieval of the colour name as when it is perceived by the normal visual route.
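The quantitative comparison drawn here amounts to two simple differences of mean naming latencies. The sketch below uses invented reaction times (the real values are those plotted in Fig. 10.2) to show how the classic Stroop effect and the ACE-induced slowing would each be computed:

```python
def mean(xs):
    """Arithmetic mean of a list of latencies."""
    return sum(xs) / len(xs)

# Hypothetical mean colour-naming latencies in milliseconds, invented
# purely for illustration; they are not the experiment's data.
rts = {
    ('control',  'xs'):     [650, 660, 655],  # non-synaesthetes, rows of Xs
    ('control',  'stroop'): [760, 770, 765],  # non-synaesthetes, incongruent words
    ('full_ace', 'xs'):     [770, 760, 780],  # 70-100% ACE synaesthetes, rows of Xs
    ('full_ace', 'stroop'): [880, 870, 890],
}

# Classic Stroop effect: incongruent colour words vs Xs, within a group.
stroop_effect = mean(rts[('control', 'stroop')]) - mean(rts[('control', 'xs')])

# ACE-induced slowing: full-ACE synaesthetes vs controls on Xs alone,
# where no printed colour name is present to cause ordinary interference.
ace_slowing = mean(rts[('full_ace', 'xs')]) - mean(rts[('control', 'xs')])

print(stroop_effect, ace_slowing)  # comparable magnitudes, as in the text
```

The point of the second difference is that it arises in the control condition itself: merely retrieving the ink-colour name is enough to trigger the alien colour and slow the response.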
The reality of the ACE, demonstrated in this experiment, casts further doubt on the possibility that word–colour synaesthesia could be the result of any associative learning process. Every time a colour name occurs in association with the perception of the colour named, and also in conjunction with the alien colour experience triggered by the name, as presumably occurred in the experiment reported here, there is an opportunity for normal associative learning processes to reverse the aberrant association that putatively underlies the ACE. Yet the ACE persists unchanged from childhood to adulthood. It is extremely unlikely therefore that the ACE is established as the result of an initial stage of normal associative learning. By extension, it is also unlikely that word–colour synaesthesia in general rests upon such an associative basis.
Overall, the results of these experiments, together with their various strands of supporting data and argument, give rise to the following conclusions.
1. Word–colour synaesthesia does not result from aberrant associative learning.
2. Word–colour synaesthesia is most likely due to an extra, abnormal, left-lateralised projection from cortical language systems to the colour-selective region (V4) of the visual system.
3. On this analysis, excitation in synaesthetes by heard or seen words of cortical language systems ‘sparks over’ to activation of the colour-selective region of the visual system.
4. Activation of the colour-selective region of the visual system is sufficient to lead, automatically and involuntarily, to the conscious experience of colour, with the specifics of the colour experience depending upon the particular pattern of neuronal firing caused in V4 by the sparking over.
5. The occurrence of the synaesthetic colour experience in word–colour synaesthesia plays no functional role in relation either to speech or language perception or to colour vision. An intriguing gloss on this conclusion is provided by Ramachandran and Hubbard’s description of a grapheme-colour synaesthete with a form of colour blindness, who “claimed to see numbers in colours that he could never see in the real world (‘Martian colours’)”. Such ‘Martian’ colours imply that, if a pattern of V4 neuronal firing induced in synaesthesia differs from any caused via the normal visual pathway, it can nonetheless give rise to a colour experience specific to the pattern per se and not to any visually linked functional relationships.
6. The occurrence of the synaesthetic colour experience in the alien colour effect has behaviourally dysfunctional effects (as shown by the slowed naming of colours).
These conclusions are incompatible with a functionalist account of word–colour synaesthesia. This condition provides a clear counter-example to what I called above the ‘complementary inference’ from functionalism: namely, that, for any discriminable functional difference, there must be a corresponding discriminable difference between qualia. Within the behaviour of any given word–colour synaesthete there is a clear functional separation between the seeing of a colour presented via the normal visual channel, on the one hand, and the perception of that same colour triggered by a word, on the other. Yet, apparently, neither the qualia nor their neural bases (as tested in our fMRI experiments) produced by these two functional routes differ. It is, of course, difficult to affirm a lack of difference in qualia with any certainty. However, to examine this issue, we have worked with a small number of word–colour synaesthetes with sufficient artistic talent to depict their colour experiences in response to specific words (see cover illustration). We are currently applying fMRI to these subjects to determine just how closely the activation patterns elicited in V4 by a given word and its corresponding picture resemble one another. This is a difficult experiment that may lie beyond the technical limitations of current neuroimaging techniques. But we hope that it will provide a route by which to test objectively this key assumption in the argument: that, in word–colour synaesthesia, similarity or even perhaps identity of qualia can occur despite disparate functional routes underlying them.
There is an apparent escape hatch for functionalism in our finding that, in coloured hearing synaesthetes, left V4 is devoted to synaesthetic colours and right V4 to visually detected colours. The sensitivity of fMRI does not allow us to assert that this observation represents complete lateralised separation between the two functions. But, given that the different lateralisations were observed in the same subjects within a single scanning session, they cannot be dismissed as artefact. Thus one might try to salvage the functionalist account of coloured hearing synaesthesia by asserting that the two functions (elicited by spoken words or seen colours) do not in fact share qualia, since one is associated with qualia generated in left V4 and the other with qualia generated in right V4. However, this line of defence takes as axiomatic what ought to be an empirical hypothesis: namely, that different neural processing produces different qualia. Yet subjectively, to the synaesthete, both are experienced as colour. Indeed, one may also interpret the different lateralisation of colour produced visually and synaesthetically as providing an even stronger counter-example to functionalism. For there is considerable evidence that activity in V4 in either hemisphere is sufficient for the experience of colour. Thus an opponent of functionalism might argue that, in coloured hearing synaesthetes, colour experiences are produced by two routes which differ in all critical respects: input, output and the site (left or right hemisphere) of the strongest neural correlate of the consciousness of colour.
It may appear that, in adopting this joint line of attack upon functionalism, I am trying to have my cake and eat it too. The argument needs, therefore, to be spelt out carefully. There are three terms that have to be put together in any understanding of the relations between qualia (Q), functions (F) and brain processes (B). The complementary inference drawn from functionalism states that, if F1 differs from F2, then (provided that F1 and F2 belong to a domain of processing associated with qualia) F1 must be associated with Q1 and F2 with Q2, such that Q1 differs from Q2. As noted earlier, in most versions of functionalism functions are specified in terms of abstract processes alone (the box-and-arrow diagrams of cognitive psychology being a familiar example); however, in others they are specified in terms of actual neural processes in the brain. In the latter case, F1 is mediated by B1 and F2 by B2. Assuming a case (like that of word–colour synaesthesia) in which Q1 and Q2 are the same even though F1 and F2 differ, we can therefore envisage two possibilities: (1) that B1 and B2 do not differ, or (2) that they do. Both of these patterns of results run counter to the complementary inference and therefore to functionalism. However, they differ in that the first alternative places the fault line in functionalism between functions, on the one hand, and qualia-plus-brain processes on the other; whereas the second places the fault line between functions-plus-brain processes, on the one hand, and qualia on the other. We anticipated the former outcome to our experiments. The latter, which is the result observed, is equally inimical to functionalism but perhaps inimical to physicalism (the tissue approach) too, in that qualia appear to be stripped by it of any necessary connection to either specific functions or specific brain processes.
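The logical skeleton of this argument may be set out schematically (the notation is introduced here purely as a compressed restatement of the paragraph above, not as a formal proof):

```latex
% The complementary inference from functionalism: within a
% qualia-bearing domain, distinct functions entail distinct qualia.
\[
  F_1 \neq F_2 \;\Longrightarrow\; Q_1 \neq Q_2
\]
% Word--colour synaesthesia appears to supply a counter-example:
% the functions differ, yet the qualia do not.
\[
  F_1 \neq F_2 \quad\text{and}\quad Q_1 = Q_2
\]
% With functions mediated by brain processes (F_1 by B_1, F_2 by B_2),
% two possibilities remain, both contrary to the inference:
\[
  \text{(1)}\ B_1 = B_2
    \quad\text{(fault line between $F$ and $Q$-plus-$B$)};
  \qquad
  \text{(2)}\ B_1 \neq B_2
    \quad\text{(fault line between $F$-plus-$B$ and $Q$)}.
\]
```

On this rendering, possibility (2), the outcome our experiments in fact yielded, is the one that detaches qualia from both functions and specific brain processes.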
Our findings also run counter to functionalist expectation in a second respect. Harnad (in his paper on ‘Turing indistinguishability’) has argued that qualia can be selected in biological evolution only in virtue of the fact that they are epiphenomenally linked to functions that have survival value. It is difficult to see, on this basis, how the alien colour effect (ACE) could ever arise. The understanding of language, in audition and vision, clearly has survival value, as does colour vision. One can also see that a neural linkage between language systems and colour vision could provide survival value, for example, by facilitating the naming of colours. But no natural account emerges along these lines of why this neural linkage should give rise to the perception of colours triggered by words in word–colour synaesthesia. Such an arrangement is at best functionally neutral. Still less does this functionalist account offer any explanation of how the alien colour effect, which is (as we have seen) actively dysfunctional, could have arisen during evolution. Functionalism supposes that qualia are fully dependent upon the functions with which they are associated. If that is so, it should be impossible for qualia to compete negatively with those very same functions. Yet, in the case of the alien colour effect, that is just what they appear to do.
There will perhaps be a temptation to dismiss these findings on the basis that they depend upon ‘illusory’ perception. I have myself, above, drawn a parallel between word–colour synaesthesia and other illusory experiences of colour and motion. In particular, these all appear to rest upon the same neural foundation (discussed more fully in Chapter 13), namely, activation of that part of the visual system which is responsible for the analysis of the specific visual feature concerned (colour, motion), without activation in earlier parts of the visual pathways. However, to dismiss our findings on this basis would be to misunderstand how normal vision works. In a very real sense, this too is illusory. Thus, for example, in the particular case of concern to us here, that of colour vision, it is universally agreed that colours, as such, are not properties of the objects that we perceive as being coloured (see Chapter 7). The basis that such objects provide for the brain’s construction of colours lies in the light reflectances of their surfaces as a function of the wavelengths of light that fall upon them. There is no known relationship (other than correlational) between these reflectances, whether measured on the surfaces themselves or as computed by the brain, and the qualia by which they emerge into conscious perception. The phenomenon of word–colour synaesthesia therefore provides an empirical basis upon which to ask an ancient philosophical question: why should not colour qualia have been used normally, as they are used unusually by word–colour synaesthetes, to model in consciousness auditory inputs (words) rather than visual inputs (reflectances)?
This question, of course, takes us to the heart of the Hard Problem of conscious experience. Until we can go beyond correlation to mechanism in understanding how qualia come to be allocated to function, that problem will remain. The considerations advanced in this chapter render it less likely that the allocation of qualia in word–colour synaesthesia is determined solely or even at all by function as such. And, given that functionalism purports to provide a completely general account of how conscious experiences relate to brain activity, even one such counter-instance, if it can be firmly established, should be sufficient to overthrow it. The consequences that would flow from such an overthrow might be dramatic. Pretty well every contemporary account of consciousness is functionalist in both concept and detail. That tally includes my own theory, as sketched out in Chapters 7 and 8, so creating a conceptual tension that will hover around the rest of the book (for a resolution, see Section 20.3).
It is, of course, too soon to come to such a dramatic conclusion. The database is extraordinarily slim. Indeed, to the best of my knowledge, ours are the first studies that have explicitly sought to put functionalism to the test of experiment. Nonetheless, it is not too soon to ask this question: if functionalism were to be overthrown, what might take its place? The answer is far from clear. We next need to look at some possibilities.