The World in the Head

Robert Cummins

Print publication date: 2010

Print ISBN-13: 9780199548033

Published to Oxford Scholarship Online: May 2015

DOI: 10.1093/acprof:osobl/9780199548033.001.0001


Preface

This is primarily a collection of papers on mental representation. There are four exceptions. ‘What is it Like to Be a Computer?’ is a paper about consciousness, written in the mid-1970s for a convocation at my alma mater, Carleton College, but never published. Besides the fact that I kind of like it, it is included to explain why, as a philosopher of mind interested in mental representation, I have never written about consciousness. The answer, in a brief phrase, is that I think that consciousness, or at least what David Chalmers has called the ‘hard problem,’ isn’t ready for prime time (Chalmers 1995). The other three—‘“How does it Work?” vs. “What are the Laws?”: Two Conceptions of Psychological Explanation,’ ‘Cognitive Evolutionary Psychology without Representational Nativism,’ and ‘Biological Preparedness and Evolutionary Explanation’—are obviously about psychological explanation. They are there because a guiding principle of my work on representation has been that a philosophical account of representation should, first and last, account for its explanatory role in the sciences of the mind. As I argue in ‘Methodological Reflections on Belief,’ it should not be about the semantics of propositional attitude sentences or its implications. ‘Connectionism and the Rationale Constraint on Psychological Explanation’ is apparently about psychological explanation, but it is motivated by the idea that connectionist representation does not sort well with propositional attitude psychology.

Philosophers of mind come in many flavors, but a useful broad sort when thinking about the philosophy of mental representation is the distinction between philosophers of science, whose science was one of the mind sciences, and philosophers of language, whose interest in the verbs of propositional attitude led to an interest in the contents of beliefs. This latter route to the ‘theory of content’ can seem natural enough. We have beliefs, and their semantic contents are propositions, and so one naturally comes to wonder how a belief, considered as a psychological state rooted in the brain, could come to have one proposition rather than another as its content, and to wonder what it is for a belief to have a content at all. This can even look like philosophy of science if you think that scientific psychology does or should be propositional attitude psychology. This whole issue is discussed in ‘Meaning and Content in Cognitive Science.’

Most of the papers in this collection, however, do not assume that the propositional attitudes will have a central role in a mature science of the mind. They focus, therefore, on non-propositional representations and their role in our understanding of cognitive success and failure. The result is an attempt to replace the true–false dichotomy appropriate to propositional representations with a graded notion of accuracy along a number of simultaneous, and perhaps competing, dimensions, and to explore the virtues of a distinction between accuracy and effectiveness. The thought is that accuracy is expensive and accurate representation is often intractable, while less accurate representation is often cheap and easy to manage, hence more effective.

Underlying this approach is a healthy respect for the fact that our representational resources and their deployments are the result of a complex interaction between evolution, development, and learning. Learning, I am beginning to suspect, is not fundamentally distinct from development, even when development is conceived strictly as physiological growth. To learn anything, synapses have to be altered. To alter a synapse, something has to grow. Proteins have to be synthesized. Protein synthesis is orchestrated by the genes, and especially by their expression. Because gene expression is always responsive to the local environment, so is growth. The DNA in your stomach is the same as the DNA in your mouth, yet you do not grow teeth in your stomach, unless something goes seriously wrong. Because the brain has, via the senses, a broadband connection with the whole body and beyond, growth in the brain can be, and generally is, responsive to distal events and conditions. When distal influences loom large, and especially when their effects on changes in the brain can be modeled as inference, we have a paradigm case of learning. But the underlying mechanisms for replacing kidney cells are basically the same as those for growing binocular columns or a capacity to speak a language or to do algebra.

If anything like this is even close to right, the role of the propositional attitudes and their semantically tracked interactions is going to be limited at best, and often deeply misleading. I am not an eliminativist (see Ch. 11, ‘Meaning and Content in Cognitive Science’), but it is becoming increasingly clear that Paul Churchland was right to warn us against propositional attitudes (Churchland 1981). As I argue in Chapter 5 (‘Methodological Reflections on Belief’), we should beware of reading the structure of cognition off the structure of a propositionally based epistemology, and the structure of mental representation off the logical form of belief sentences.

If you like the propositional attitudes, you will like the Language of Thought as a theory of mental representation. The propositional attitudes have propositions as contents, and sentences are just the thing for representing propositions. If, like me, you are suspicious of the propositional attitudes and the language of thought, then you are likely to be attracted to the idea that mental representations are more like maps than like sentences, and are built up from indicators—detector signals—in a way quite distinct from semantic composition. This is the theme of Chapter 7 (‘Representation and Indication’).

The papers in this volume are not ordered by date of publication, but by topic and, where possible, by an attempt to put papers that presuppose x after papers that explain x.

References


Chalmers, David, ‘Facing up to the Problem of Consciousness’, Journal of Consciousness Studies, 2/3 (1995), 200–19.

Churchland, Paul, ‘Eliminative Materialism and the Propositional Attitudes’, Journal of Philosophy, 78 (1981), 67–90.