
Jerry A. Fodor

Print publication date: 1998

Print ISBN-13: 9780198236368

Published to Oxford Scholarship Online: November 2003

DOI: 10.1093/0198236360.001.0001

The Demise of Definitions, Part I: The Linguist's Tale


Jerry A. Fodor

Oxford University Press

Abstract and Keywords

A consideration of ‘linguistic’ evidence for and against the view that typical concepts of ‘middle‐sized’ objects (DOG, CHAIR, UNCLE, DOORKNOB) are complex definitional constructions out of more primitive concepts. Critical analysis of arguments by Ray Jackendoff, Steven Pinker, and others.

Keywords: concepts, definitions, Jackendoff, Pinker

Certain matters would appear to get carried certain distances whether one wishes them to or not, unfortunately.

—David Markson, Wittgenstein's Mistress


I want to consider the question whether concepts are definitions. And let's, just for the novelty, start with some propositions that are clearly true:

  1. You can't utter the expression ‘brown cow’ without uttering the word ‘brown’.

  2. You can utter the word ‘bachelor’ without uttering the word ‘unmarried’.

The asymmetry between 1 and 2 will be granted even by those who believe that the “semantic representation” of ‘bachelor’ (its representation, as linguists say, “at the semantic level”) is a complex object which contains, inter alia, the semantic representation of ‘unmarried’.

Now for something that's a little less obvious:

  3. You can't entertain the M(ental) R(epresentation) BROWN COW without entertaining the MR BROWN.

  4. You can't entertain the M(ental) R(epresentation) BACHELOR without entertaining the MR UNMARRIED.

I'm going to take it for granted that 3 is true. I have my reasons; they'll emerge in Chapter 5. Suffice it, for now, that anybody who thinks that 3 and the like are false will certainly think that 4 and the like are false; and that 4 and the like are indeed false is the main conclusion this chapter aims at. I pause, however, to remark that 3 is meant to be tendentious. It claims not just what everyone admits, viz. that anything that satisfies BROWN COW inter alia satisfies BROWN, viz. that brown cows are ipso facto brown. Proposition 3 says, moreover, that to think the content brown cow is, inter alia, to think the concept BROWN, and that would be false if the mental representation that expresses brown cow is atomic; like, for example, BROWNCOW.

What about 4? Here again there is a way of reading what's being claimed that makes it merely truistic: viz. by not distinguishing concept identity from content identity. It's not, I suppose, unreasonable (for the present illustrative purposes, I don't care whether it's true) to claim that the content bachelor and the content unmarried man are one and the same. For example, if concepts express properties, then it's not unreasonable to suppose that BACHELOR and UNMARRIED MAN express the same property. If so, and if one doesn't distinguish between content identity and concept identity, then of course it follows that you can't think BACHELOR without thinking UNMARRIED (unless you can think UNMARRIED MAN without thinking UNMARRIED. Which let's just concede that you can't).1

However, since we are distinguishing content identity from concept identity, we're not going to read 4 that way. Remember that RTM is in force, and RTM says that to each tokening of a mental state with the content so‐and‐so there corresponds a tokening of a mental representation with the content so‐and‐so. In saying this, RTM explicitly means to leave open the possibility that different (that is, type distinct) mental representations might correspond to the same content; hence the analogy between mental representations and modes of presentation that I stressed in Chapter 2. In the present case, the concession that being a bachelor and being an unmarried man are the same thing is meant to leave open the question whether BACHELOR and UNMARRIED MAN are the same concept.

RTM also says that (infinitely many, but not all) mental representations have constituent structure; in particular that there are both complex mental representations and primitive mental representations, and that the former have the latter as proper parts. We are now in a position to make expository hay out of this assumption; we can rephrase the claim that is currently before the house as:

  5. The M(ental) R(epresentation) UNMARRIED, which is a constituent of the MR UNMARRIED MAN, is likewise a constituent of the MR BACHELOR.

Here's a standard view: the concept BACHELOR is expressed by the word “bachelor”, and the word “bachelor” is definable; it means the same as the phrase “unmarried man”. In the usual case, the mental representation that corresponds to a concept that corresponds to a definable word is complex: in particular, the mental representation that corresponds to a definable word usually has the same constituent structure as the mental representation that corresponds to its definition. So, according to the present proposal, the constituent structure of the mental representation BACHELOR is something like ‘UNMARRIED MAN’.

The thesis that definition plays an important role in the theory of mental representation will be the main concern in this chapter and the next. According to that view, many mental representations work the way we've just supposed that BACHELOR does. That is, they correspond to concepts that are expressed by definable words, and they are themselves structurally complex. This thesis is, to put it mildly, very tendentious. In order for it to be true, it must turn out that there are many definable words; and it must turn out, in many cases, that the MRs that correspond to these definable words are structurally complex. I'm going to argue that it doesn't, in fact, turn out in either of those ways.2

One last preliminary, and then we'll be ready to go. If there are no definable words, then, of course, there are no complex mental representations that correspond to them. But it doesn't follow that if there are many complex mental representations, then lots of words are definable. In fact, I take it that the view now favoured in both philosophy and cognitive science is that most words aren't definable but do correspond to complex MRs (to something like prototypes or exemplars). Since the case against definitions isn't ipso facto a case against complex mental representations, I propose the following expository strategy. In this chapter and the next, I argue that concepts aren't definitions even if lots of mental representations are complex. Chapter 5 will argue that there are (practically) no complex mental representations at all, definitional or otherwise.3 At that point, atomism will be the option of last resort.

If we thus set aside, for the moment, all considerations that don't distinguish the claim that mental representations are typically definitional from the weaker claim that mental representations are typically complex, what arguments have we left to attend to? There are two kinds: the more or less empirical ones and the more or less philosophical ones. The empirical ones turn on data that are supposed to show that the mental representations that correspond to definable words are, very often and simply as a matter of fact, identical to the mental representations that correspond to phrases that define the words. The philosophical ones are supposed to show that we need mental representations to be definitions because nothing else will account for our intuitions of conceptual connectedness, analyticity, a prioricity, and the like. My plan is to devote the rest of this chapter to the empirical arguments and all of Chapter 4 to the philosophical arguments. You will be unsurprised to hear what my unbiased and judicious conclusion is going to be. My unbiased and judicious conclusion is going to be that neither the philosophical nor the empirical arguments for definitions are any damned good.

So, then, to business.

Almost everybody used to think that concepts are definitions; hence that having a concept is being prepared to draw (or otherwise acknowledge) the inferences that define it. Prima facie, there's much to be said for this view. In particular, definitions seem to have a decent chance of satisfying all five of the ‘non‐negotiable’ conditions which Chapter 2 said that concepts have to meet. If the meaning‐constitutive inferences are the defining ones, then it appears that:

—Definitions can be mental particulars if any concepts can. Whatever the definition of ‘bachelor’ is, it has the same ontological status as the mental representation that you entertain when you think unmarried man. That there is such a mental representation is a claim to which RTM is, of course, independently committed.

—Semantic evaluability is assured; since all inferences are semantically evaluable (for soundness, validity, reliability, etc.), defining inferences are semantically evaluable inter alia.

—Publicity is satisfied since there's no obvious reason why lots of people might not assign the same defining inferences to a given word or concept. They might do so, indeed, even if there are lots of differences in what they know/believe about the things the concept applies to (lots of differences in the ‘collateral information’ they have about such things).

—Compositionality is satisfied. This will bear emphasis later. I'm going to argue that, of the various ‘inferential role’ theories of concepts, only the one that says that concepts are definitions meets the compositionality condition. Suffice it for now that words/concepts do contribute their definitions to the sentences/thoughts that contain them; it's part and parcel of ‘bachelor’ meaning unmarried man that the sentence ‘John is a bachelor’ means John is an unmarried man and does so because it has ‘bachelor’ among its constituents. To that extent, at least, definitions are in the running to be both word meanings and conceptual contents.

—Learnability is satisfied. If the concept DOG is a definition, then learning the definition should be all that's required to learn the concept. A fortiori, concepts that are definitions don't have to be innate.

To be sure, learning definitions couldn't be the whole story about acquiring concepts. Not all concepts could be definitions, since some have to be the primitives that the others are defined in terms of; about the acquisition of the primitive concepts, some quite different story will have to be told. What determines which concepts are primitive was one of the questions that definition theories never really resolved. Empiricists in philosophy wanted the primitive concepts to be picked out by some epistemological criterion; but they had no luck in finding one. (For discussion of these and related matters, see Fodor 1981a, 1981b.) But, however exactly this goes, the effect of supposing that there are definitions is to reduce the problems about concepts at large to the corresponding problems about primitive concepts. So, if some (complex) concept C is defined by primitive concepts c1, c2, . . . , then explaining how we acquire C reduces to explaining how we acquire c1, c2, . . . And the problem of how we apply C to things that fall under it reduces to the problem of how we apply c1, c2, . . . to the things that fall under them. And explaining how we reason with C reduces to explaining how we reason with c1, c2, . . . And so forth. So there is good work for definitions to do if there turn out to be any.

All the same, these days almost nobody thinks that concepts are definitions. There is now something like a consensus in cognitive science that the notion of a definition has no very significant role to play in theories of meaning. It is, to be sure, a weakish argument against definitions that most cognitive scientists don't believe in them. Still, I do want to remind you how general, and how interdisciplinary, the collapse of the definitional theory of content has been. So, here are some reasons why definitions aren't currently in favour as candidates for concepts (/word meanings):

—There are practically no defensible examples of definitions; for all the examples we've got, practically all words (/concepts) are undefinable. And, of course, if a word (/concept) doesn't have a definition, then its definition can't be its meaning. (Oh well, maybe there's one definition. Maybe BACHELOR has the content unmarried man. Maybe there are even six or seven definitions; why should I quibble? If there are six or seven definitions, or sixty or seventy, that still leaves a lot of words/concepts undefined, hence a lot of words/concepts of which the definitional theory of meaning is false. The OED lists half a million words, plus or minus a few.)

Ray Jackendoff has suggested that the reason natural language contains so few phrases that are definitionally equivalent to words is that there are “nondiscrete elements of concepts . . . [which] play a role only in lexical semantics and never appear as a result of phrasal combination” (1992: 48). (I guess that “nondiscrete” means something like analogue or iconic.) But this begs the question that it's meant to answer, since it simply assumes that there are contents that only nondiscrete symbols can express. Notice that you don't need nondiscrete symbols to express nondiscrete properties. ‘Red’ does quite a good job of expressing red. So suppose there is something essentially nondiscrete about the concepts that express lexical meanings. Still, it wouldn't follow that the same meanings can't be expressed by phrases. So, even if nondiscrete elements of concepts never appear as a result of phrasal combination, that still wouldn't explain why most words can't be defined.

—It's a general problem for theories that seek to construe content in terms of inferential role, that there seems to be no way to distinguish the inferences that constitute concepts from other kinds of inferences that concepts enter into. The present form of this general worry is that there seems to be no way to distinguish the inferences that define concepts from the ones that don't. This is, of course, old news to philosophers. Quine shook their faith that ‘defining inference’ is well defined, and hence their faith in such related notions as analyticity, propositions true in virtue of meaning alone, and so forth. Notice, in particular, that there are grounds for scepticism about defining inferences even if you suppose (as, of course, Quine does not) that the notion of necessary inference is secure. What's at issue here is squaring the theory of concept individuation with the theory of concept possession. If having a concept requires accepting the inferences that define it, then not all necessities can be definitional. It is, for example, necessary that 2 is a prime number; but surely you can have the concept 2 and not have the concept of a prime; presumably there were millennia when people did. (Similarly, mutatis mutandis, for the concept WATER if it's necessary that water is H2O. I'll come back to this sort of point in Chapter 4.)

It is often, and rightly, said that Quine didn't prove that you can't make sense of analyticity, definition, and the like. But so what? Cognitive science doesn't do proofs; it does empirical, non‐demonstrative inferences. We have, as things now stand, no account of what makes an inference a defining one, and no idea how such an account might be devised. That's a serious reason to suppose that the theory of content should dispense with definitions if it can.

—Although in principle definitions allow us to reduce all sorts of problems about concepts at large to corresponding problems about concepts in the primitive basis (see above), this strategy quite generally fails in practice. Even if there are definitions, they seem to play no very robust role in explaining what happens when people learn concepts, or when they reason with concepts, or when they apply them. Truth to tell, definitions seem to play no role at all.

For example, suppose that understanding a sentence involves recovering and displaying the definitions of the words that the sentence contains. Then you would expect, all else equal, that sentences that contain words with relatively complex definitions should be harder to understand than sentences that contain words with relatively simple definitions. Various psychologists have tried to get this effect experimentally; to my knowledge, nobody has ever succeeded. It's an iron law of cognitive science that, in experimental environments, definitions always behave exactly as though they weren't there.

In fact, this is obvious to intuition. Does anybody present really think that thinking BACHELOR is harder than thinking UNMARRIED? Or that thinking FATHER is harder than thinking PARENT? Whenever definition is by genus and species, definitional theories perforce predict that concepts for the former ought to be easier to think than concepts for the latter. Intuition suggests otherwise (and so, by the way, do the experimental data; see e.g. Paivio 1971).

Hold‐outs for definitions often emphasize that the experimental failures don't prove that there aren't any definitions. Maybe there's a sort of novice/expert shift in concept acquisition: (defining) concepts like UNMARRIED MAN get ‘compiled’ into (defined) concepts like BACHELOR soon after they are mastered. If experiments don't detect UNMARRIED MAN in ‘performance’ tasks, maybe that's because BACHELOR serves as its abbreviation.4 Maybe. But I remind you, once again, that this is supposed to be science, not philosophy; the issue isn't whether there might be definitions, but whether, on the evidence, there actually are some. Nobody has proved that there aren't any little green men on Mars; but almost everybody is convinced by repeated failures to find them.

Much the same point holds for the evidence about concept learning. The (putative) ontogenetic process of compiling primitive concepts into defined ones surely can't be instantaneous; yet developmental cognitive psychologists find no evidence of a stage when primitive concepts exist uncompiled. I appeal to expert testimony; here's Susan Carey concluding a review of the literature on the role of definitions (‘conceptual decompositions’, as one says) in cognitive development: “At present, there simply is no good evidence that a word's meaning is composed, component by component, in the course of its acquisition. The evidence for component‐by‐component acquisition is flawed even when attention is restricted to those semantic domains which have yielded convincing componential analyses” (1982: 369). (I reserve the right to doubt that there are any such domains; see below.)

So it goes. Many psychologists, like many philosophers, are now very sceptical about definitions. This seems to be a real case of independent lines of enquiry arriving at the same conclusions for different but compatible reasons. The cognitive science community, by and large, has found this convergence pretty persuasive, and I think it's right to do so. Maybe some version of inferential role semantics will work and will sustain the thesis that most everyday concepts are complex; but, on the evidence, the definitional version doesn't.

I'd gladly leave it here if I could, but it turns out there are exceptions to the emerging consensus that I've been reporting. Some linguists, working in the tradition called ‘lexical semantics’, claim that there is persuasive distributional (/intuitional) evidence for a level of linguistic analysis at which many words are represented by their definitions. It may be, so the argument goes, that these linguistic data don't fit very well with the results in philosophy and psychology; if so, then that's a problem that cognitive scientists should be worrying about. But, assuming that you're prepared to take distributional/intuitional data seriously at all (as, no doubt, you should be), then the evidence that there are definitions is of much the same kind as the evidence that there are nouns.

Just how radical is this disagreement between the linguist's claim that definition is a central notion in lexical semantics and the otherwise widely prevalent view that there are, in fact, hardly any definitions at all? That's actually less clear than one might at first suppose. It is entirely characteristic of lexical semanticists to hold that “although it is an empirical issue [linguistic evidence] supports the claim that the number of primitives is small, significantly smaller than the number of lexical items whose lexical meanings may be encoded using the primitives” (Kornfilt and Correa 1993). Now, one would have thought that if there are significantly fewer semantic primitives than there are lexical items, then there must be quite a lot of definable words (in, say, English). That would surprise philosophers, whose experience has been that there are practically none. However, having made this strong claim with one hand, lexical semanticists often hedge it with the other. For, unlike bona fide (viz. eliminative) definitions, the lexical semanticist's verb “decompositions . . . intend to capture the core aspects of the verb meanings, without implying that all aspects of the meanings are represented” (ibid.: 83).

Whether the definition story about words and concepts is interesting or surprising in this attenuated form depends, of course, on what one takes the “core aspects” of meaning to be. It is, after all, not in dispute that some aspects of lexical meanings can be represented in quite an exiguous vocabulary; some aspects of anything can be represented in quite an exiguous vocabulary. ‘Core meaning’ and the like are not, however, notions for which much precise explication gets provided in the lexical semantics literature. The upshot, often enough, is that the definitions that are put on offer are isolated, simply by stipulation, from prima facie counter‐examples.5

This strikes me as a mug's game, and not one that I'm tempted to play. I take the proper ground rule to be that one expression defines another only if the two expressions are synonymous; and I take it to be a necessary condition for their synonymy that whatever the one expression applies to, the other does too. To insist on taking it this way isn't, I think, merely persnickety on my part. Unless definitions express semantic equivalences, they can't do the jobs that they are supposed to do in, for example, theories of lexical meaning and theories of concept acquisition. The idea is that its definition is what you acquire when you acquire a concept, and that its definition is what the word corresponding to the concept expresses. But how could “bachelor” and “unmarried male” express the same concept—viz. UNMARRIED MALE—if it's not even true that “bachelor” and “unmarried male” apply to the same things? And how could acquiring the concept BACHELOR be the same process as acquiring the concept UNMARRIED MALE if there are semantic properties that the two concepts don't share? It's supposed to be the main virtue of definitions that, in all sorts of cases, they reduce problems about the defined concept to corresponding problems about its primitive parts. But that won't happen unless each definition has the very same content as the concept that it defines.

I propose now to consider some of the linguistic arguments that are supposed to show that many English words have definitions, where, however, “definitions” means definitions. I think that, when so constrained, none of these arguments is any good at all. The lexical semantics literature is, however, enormous and I can't prove this by enumeration. What I'll do instead is to have a close look at some typical (and influential) examples. (For discussions of some other kinds of ‘linguistic’ arguments for definitions, see Fodor 1970; Fodor and Lepore, forthcoming a; Fodor and Lepore, forthcoming b.)


Here's a passage from Jackendoff 1992. (For simplification, I have omitted from the quotation what Jackendoff takes to be some parallel examples; and I've correspondingly renumbered the cited formulas.)

The basic insight . . . is that the formalism for encoding concepts of spatial location and motion, suitably abstracted, can be generalized to many verbs and prepositions in two or more semantic fields, forming intuitively related paradigms. [J1–J4] illustrates [a] basic case.

  • [J1 Semantic field:] Spatial location and motion: ‘Harry kept the bird in the cage.’

  • [J2 Semantic field:] Possession: ‘Susan kept the money.’

  • [J3 Semantic field:] Ascription of properties [sic]:6 ‘Sam kept the crowd happy.’

  • [J4 Semantic field:] Scheduling of activities: ‘Let's keep the trip on Saturday.’ . . .

The claim is that the different concepts expressed by ‘keep’. . . are not unrelated: they share the same functional structure and differ only in the semantic field feature. (1992: 37–9).

I think the argument Jackendoff has in mind must be something like this: ‘Keep’ is “polysemous”. On the one hand, there's the intuition that the very same word occurs in J1–J4; ‘keep’ isn't ambiguous like ‘bank’. On the other hand, there's the intuition that the sense of ‘keep’ does somehow differ in the four cases. The relation between Susan and the money in J2 doesn't seem to be quite the same as the relation between Sam and the crowd in J3. How to reconcile these intuitions?

Well, suppose that ‘keep’ sentences “all denote the causation of a state that endures over a period of time” (37).7 That would account for our feeling that ‘keep’ is univocal. The intuition that there's something different, all the same, between keeping the money and keeping the crowd happy can now also be accommodated by reference to the differences among the semantic fields, each of which “has its own particular inferential patterns” (39). So Jackendoff “accounts for [the univocality of ‘keep’ in J1–J4] by claiming that they are each realizations of the basic conceptual functions” (specified by the putative definition) (37). What accounts for the differences among them is “a semantic field feature that designates the field in which the Event [to which the analysis of ‘keep’ refers] . . . is defined” (38). So if we assume that ‘keep’ has a definition, and that its definition is displayed at some level of linguistic/cognitive representation, then we can see how it can be true both that ‘keep’ means what it does and that what it means depends on the semantic field in which it is applied.8
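To fix ideas, here is a minimal sketch of the shape of the proposal as just expounded. The rendering is mine, not Jackendoff's formalism: the class, the field names, and the string standing in for the functional structure are all invented for exposition.

```python
from dataclasses import dataclass

# Toy rendering of the proposal under discussion: every sense of 'keep'
# shares one functional structure and differs only in a semantic-field tag.
# (Illustrative only; Jackendoff's actual notation is much richer.)

@dataclass(frozen=True)
class KeepSense:
    semantic_field: str  # the only thing that varies across J1-J4
    # the invariant putative definition: cause a state that endures over time
    functional_structure: str = "CAUSE(x, STATE-THAT-ENDURES(y))"

J1 = KeepSense("spatial location and motion")  # 'Harry kept the bird in the cage.'
J2 = KeepSense("possession")                   # 'Susan kept the money.'
J3 = KeepSense("ascription of properties")     # 'Sam kept the crowd happy.'
J4 = KeepSense("scheduling of activities")     # 'Let's keep the trip on Saturday.'

# The univocality claim: the functional structure is constant . . .
assert len({s.functional_structure for s in (J1, J2, J3, J4)}) == 1
# . . . and the polysemy claim: only the field feature differs.
assert len({s.semantic_field for s in (J1, J2, J3, J4)}) == 4
```

Notice that the sketch delivers univocality only if the string ‘CAUSE(x, STATE-THAT-ENDURES(y))’ is itself understood univocally across the four fields; that is exactly the point on which the argument below turns.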

So much for exposition. I claim that Jackendoff's account of polysemy offers no good reason to think that there are definitions. As often happens in lexical semantics, the problem that postulating definitions is supposed to solve is really only begged; it's, as it were, kicked upstairs into the metalanguage. The proposed account of polysemy works only because it takes for granted a theoretical vocabulary whose own semantics is, in the crucial respects, unspecified.9 Since arguments from data about polysemy to the existence of definitions have been widely influential in linguistics, and since the methodological issues are themselves of some significance, I'm going to spend some time on this. Readers who are prepared to take it on faith that such arguments don't work are advised to skip.

The proposal is that whatever semantic field it occurs in, ‘keep’ always means (expresses the concept) CAUSE A STATE THAT ENDURES OVER TIME. Notice, however, that this assumption would explain the intuitive univocality of ‘keep’ only if it's also assumed that ‘CAUSE’, ‘STATE’, ‘TIME’, ‘ENDURE’, and the rest are themselves univocal across semantic fields. A's always entailing B doesn't argue for A's being univocal if B means sometimes one thing and sometimes another when A entails it. So, then, let's consider the question whether, for example, ‘CAUSE’ is univocal in, say, ‘CAUSE THE MONEY TO BE IN SUSAN'S POCKET’ and ‘CAUSE THE CROWD TO BE HAPPY’. My point will be that Jackendoff is in trouble whichever answer he gives.

On the one hand, as we've just seen, if ‘CAUSE’ is polysemic, then BLAH, BLAH, CAUSE, BLAH, BLAH is itself polysemic, so the assumption that ‘keep’ always means BLAH, BLAH, CAUSE, BLAH, BLAH doesn't explain why ‘keep’ is intuitively univocal, and Jackendoff loses his argument for definitions. So, suppose he opts for the other horn. The question now arises: what explains the univocality of ‘CAUSE’ across semantic fields? There are, again, two possibilities. Jackendoff can say that what makes ‘CAUSE’ univocal is that it has the definition BLAH, BLAH, X, BLAH, BLAH where ‘X’ is univocal across fields. Or he can give up and say that what makes ‘CAUSE’ univocal across fields isn't that it has a univocal definition but just that it always means cause.

Clearly, the first route leads to regress and is therefore not viable: if the univocality of ‘CAUSE’ across fields is required in order to explain the univocality of ‘keep’ across fields, and the univocality of ‘X’ across fields is required in order to explain the univocality of ‘CAUSE’ across fields, then presumably there's got to be a ‘Y’ whose univocality explains the univocality of ‘X’ across fields. From there it's turtles all the way up.

But the second route is equally embarrassing since it tacitly admits that you don't, after all, need to assume that a word (/concept) has a definition in order to explain its being univocal across semantic fields; ‘CAUSE’ would be a case to the contrary. But if that is admitted, then how does the fact that ‘keep’ is univocal across semantic fields argue that ‘keep’ has a definition? Why not just say that ‘keep’ is univocal because it always means keep; just as, in order to avoid the regress, Jackendoff is required to say that ‘CAUSE’ is univocal because it always means cause. Or, quite generally, why not just say that all words are univocal across semantic fields because semantic fields don't affect meaning. This ‘explanation’ is, of course, utterly empty; for all words to be univocal across semantic fields just is for semantic fields not to affect meaning. But Jackendoff's ‘explanation’ is empty too, and for the same reason. As between “‘keep’ is univocal because it is field invariant” and “‘keep’ is univocal because its definition is field invariant”, there is, quite simply, nothing to choose.

In short: Suppose ‘CAUSE’ is ambiguous from field to field; then the fact that ‘keep’ always entails ‘CAUSE’ is not sufficient to make ‘keep’ univocal from field to field. Well then, suppose ‘CAUSE’ is univocal from field to field; then the fact that ‘keep’ (like ‘CAUSE’) occurs in many different fields doesn't explain its intuitive polysemy. Either way, Jackendoff loses.

A recent letter from Jackendoff suggests, however, that he has yet a third alternative in mind: “I'm not claiming”, he writes, “that keep is univocal, nor that cause is. Rather, the semantic field feature varies among fields, the rest remaining constant. AND THE REST IS ABSTRACT AND CANNOT BE EXPRESSED LINGUISTICALLY, BECAUSE YOU HAVE TO CHOOSE A FIELD FEATURE TO SAY ANYTHING” (sic; Jackendoff's caps. Personal communication, 1996). This suggestion strikes me as doubly ill‐advised. In the first place, there is no obvious reason why its being “abstract”, ineffable, and so on, should make a concept univocal (/field invariant); why shouldn't abstract, ineffable concepts be polysemic, just like concrete concepts that can be effed? Unless Jackendoff has an answer to this, he's back in the old bind: ‘CAUSE’ is field invariant only by stipulation. Secondly, this move leaves Jackendoff open to a charge of seriously false advertising. For it now turns out that ‘cause a state that endures over time’ doesn't really express the definition of ‘keep’ after all: ‘Keep’ means something that can't be said. A less misleading definition than the one Jackendoff offers might thus be “‘keep’ means @#&$(*]”, which has the virtue of not even appearing to say anything. The same, mutatis mutandis, for the rest of English, of course, so lexical semantics, as Jackendoff understands it, ends in silence. The methodological moral is, surely, Frank Ramsey's: ‘What can't be said can't be said, and it can't be whistled either.’

I should add that Jackendoff sometimes writes as though all accounts that agree that keeping is a kind of causing are ipso facto “notational variants” of the definition theory. (I suppose this means that they are also ipso facto notational variants of the non‐definitional theory, since the relation notational variant of is presumably symmetrical.) But I would have thought that the present disagreement is not primarily about whether keeping is a kind of causing; it's about whether, if it is, it follows that sentences with ‘keep’ in their surface structures have ‘CAUSE’ in their semantic representations. This inference is, to put it mildly, not trivial since the conclusion entails that the meaning of ‘keep’ is structurally complex, while the premise is compatible with ‘keep’ being an atom. (By the way, what exactly is a notational variant?)

The moral of this long polemic is, I'm afraid, actually not very interesting. Jackendoff's argument that there are definitions is circular, and circular arguments are disreputable. To the best of my knowledge, all extant arguments that there are definitions are disreputable.

Auntie: Anyone can criticize. Nice people try to be constructive. We've heard a very great deal from you of ‘I don't like this’ and ‘I think that won't work’. Why don't you tell us your theory about why ‘keep’ is intuitively polysemic?

—: Because you won't like it. Because you'll say it's silly and frivolous and shallow.

Auntie: I think you don't have a theory about why ‘keep’ is intuitively polysemic.

—: Yes I do, yes I do, yes I do! Sort of.

My theory is that there is no such thing as polysemy. The appearance that there is a problem is generated by the assumption that there are definitions; if you take the assumption away, the problem disappears. As they might have said in the '60s: definitions don't solve the problem of polysemy; definitions are the problem of polysemy.

Auntie: I don't understand a word of that. And I didn't like the '60s.

—: Well, here's a way to put it. Jackendoff's treatment of the difference between, say, ‘NP kept the money’ and ‘NP kept the crowd happy’ holds that, in some sense or other, ‘keep’ means different things in the two sentences. There is, surely, another alternative; viz. to say that ‘keep’ means the same thing in both—it expresses the same relation—but that, in one case, the relation it expresses holds between NP and the crowd's being happy, and in the other case it holds between NP and the money. Since, on anybody's story, the money and the crowd's being happy are quite different sorts of things, why do we also need a difference between the meanings of ‘keep’ to explain what's going on in the examples?

People sometimes used to say that ‘exist’ must be ambiguous because look at the difference between ‘chairs exist’ and ‘numbers exist’. A familiar reply goes: the difference between the existence of chairs and the existence of numbers seems, on reflection, strikingly like the difference between numbers and chairs. Since you have the latter to explain the former, you don't also need ‘exist’ to be polysemic.

This reply strikes me as convincing, but the fallacy that it exposes dies awfully hard. For example, Steven Pinker (personal communication, 1996) has argued that ‘keep’ can't be univocal because it implies possession in sentences like J2 but not in sentences like J3. I think Pinker is right that ‘Susan kept the money’ entails that something was possessed and that ‘Sam kept the crowd happy’ doesn't. But (here we go again) it just begs the question to assume that this difference arises from a polysemy in ‘keep’.

For example: maybe ‘keep’ has an underlying complement in sentences like J2 and J3; so that, roughly, ‘Susan kept the money’ is a variant of Susan kept having the money and ‘Sam kept the crowd happy’ is a variant of Sam kept the crowd being happy. Then the implication of possession in the former doesn't derive from ‘keep’ after all; rather, it's contributed by material in the underlying complement clause. On reflection, the difference between keeping the money and keeping the crowd happy does seem strikingly like the difference between having the money and the crowd being happy, a fact that the semantics of J2 and J3 might reasonably be expected to capture. This modest analysis posits no structure inside lexical items, and it stays pretty close to surface form. I wouldn't want to claim that it's apodictic, but it does avoid the proliferation of lexical polysemes and/or semantic fields and it's quite compatible with the claim that ‘keep’ means neither more nor less than keep in all of the examples under consideration.10

Auntie: Fiddlesticks. Consider the case where language A has a single unambiguous word, of which the translation in language B is either of two words, depending on context. Everybody who knows anything knows that happens all the time. Whenever it does, the language‐A word is ipso facto polysemous. If you weren't so embarrassingly monolingual, you'd have noticed this for yourself. (As it is, I'm indebted to Luca Bonatti for raising the point.)

—: No. Suppose English has two words, ‘spoiled’ and ‘addled,’ both of which mean spoiled, but one of which is used only of eggs. Suppose also that there is some other language which has a word ‘spoilissimoed’ which means spoiled and is used both of spoiled eggs and of other spoiled things. The right way to describe this situation is surely not that ‘spoiled’ is ipso facto polysemous. Rather the thing to say is: ‘spoiled’ and ‘addled’ are synonyms and are (thus) both correctly translated ‘spoilissimoed’. The difference between the languages is that one, but not the other, has a word that means spoiled and is context restricted to eggs; hence one language, but not the other, has a word for being spoiled whose possession condition includes having the concept EGG. This is another reason for distinguishing questions about meaning from questions about possession conditions (in case another reason is required. Remember WATER and H2O).

Auntie (who has been catching a brief nap during the preceding expository passage) wakes with a start: Now I've got you. You say ‘keep’ is univocal. Well, then, what is the relation that it univocally expresses? What is the relation such that, according to you, Susan bears it to the money in J2 and Sam bears it to the crowd's being happy in J3?

—: I'm afraid you aren't going to like this.

Auntie: Try me.

—: It's (sigh!) keeping. (Cf. “What is it that ‘exist’ expresses in both ‘numbers exist’ and ‘chairs exist’?” Reply: “It's (sigh!) existing.”)

In effect, what I'm selling is a disquotational lexicon. Not, however, because I think semantic facts are, somehow, merely pleonastic; but rather because I take semantic facts with full ontological seriousness, and I can't think of a better way to say what ‘keep’ means than to say that it means keep. If, as I suppose, the concept KEEP is an atom, it's hardly surprising that there's no better way to say what ‘keep’ means than to say that it means keep.

I know of no reason, empirical or a priori, to suppose that the expressive power of English can be captured in a language whose stock of morphologically primitive expressions is interestingly smaller than the lexicon of English. To be sure, if you are committed to ‘keep’ being definable, and to its having the same definition in each semantic field, then you will have to face the task of saying, in words other than ‘keep’, what relation it is that keeping the money and keeping the crowd happy both instance. But, I would have thought, saying what relation they both instance is precisely what the word ‘keep’ is for; why on earth do you suppose that you can say it ‘in other words’? I repeat: assuming that ‘keep’ has a definition is what makes the problem about polysemy; take away that assumption and ‘what do keeping the money and keeping the crowd happy share?’ is easy. They're both keeping.

Auntie: I think that's silly, frivolous, and shallow! There is no such thing as keeping; there isn't anything that keeping the money and keeping the crowd happy share. It's all just made up.11

—: Strictly speaking, that view isn't available to Aunties who wish also to claim that ‘keep’ has a definition that is satisfied in all of its semantic fields; by definition, such a definition would express something that keeping money and keeping crowds happy have in common. Still, I do sort of agree that ontology is at the bottom of the pile. I reserve comment till the last two chapters.


There is, as I remarked at the outset, a very substantial linguistic literature on lexical semantics; far more than I have the space or inclination to review. But something needs to be said, before we call it quits, about a sustained attempt that Steven Pinker has been making (Pinker 1984; 1989) to co‐opt the apparatus of lexical semantics for employment in a theory of how children learn aspects of syntax. If this project can be carried through, it might produce the kind of reasonably unequivocal support for definitional analysis that I claim that the considerations about polysemy fail to provide.

Pinker offers, in fact, two kinds of ontogenetic arguments for definitions; the one in Pinker 1984 depends on a “semantic bootstrapping” theory of syntax acquisition; the one in Pinker 1989 turns on an analysis of a problem in learnability theory known as “Baker's Paradox”. Both arguments exploit rather deep assumptions about the architecture of theories of language development, and both have been influential; sufficiently so to justify taking a detailed look at them. Most of the rest of this chapter will be devoted to doing that.

The Bootstrapping Argument

A basic idea of Pinker's is that some of the child's knowledge of syntactic structure is “bootstrapped” from knowledge about the semantic properties of lexical items; in particular, from knowledge about the semantic structure of verbs. The details are complicated but the outline is clear enough. In the simplest sorts of sentences (like ‘John runs’, for example), if you can figure out what syntactic classes the words belong to (that ‘John’ is a noun and ‘runs’ is an intransitive verb) you get the rest of the syntax of the sentence more or less for free: intransitive verbs have to have NPs as subjects, and ‘John’ is the only candidate around.

This sort of consideration suggests that a significant part of the child's problem of breaking into sentential syntax is identifying the syntax of lexical items. So far so good. Except that it's not obvious how properties like being a noun or being an intransitive verb might signal their presence in the learner's input since they aren't, in general, marked by features of the data that the child can unquestion‐beggingly be supposed to pick up. There aren't, for example, any acoustic or phonetic properties that are characteristic of nouns as such or of verbs as such.

The problem with almost every nonsemantic property that I have heard proposed as inductive bases [sic] is that the property is itself defined over configurations . . . that are not part of the child's input, that themselves have to be learned . . . [By contrast] how the child comes to know such things, which are not marked explicitly in the input stream, is precisely what the semantic bootstrapping hypothesis is designed to explain. (Pinker 1984: 51)

Here's how the explanation goes. Though (by assumption) the child can't detect being a noun, being a verb, being an adjective, etc. in the “input stream”, he can (still by assumption) detect such putative reliable semantic correlates of these syntactic properties as being a person or thing, being an action or change of state, and being an attribute. (For more of Pinker's suggested pairings of syntactic properties with their semantic correlates, see 1984: 41, table 2.1.) Thus, “when the child hears ‘snails eat leaves,’ he or she uses the actionhood of ‘eat’ to infer that it is a verb, the agenthood of ‘snails’ to infer that it plays the role of subject, and so on” (ibid.: 53). In effect, the semantic analysis of the input sentence is supposed somehow to be perceptually given; and the correspondence between such semantic features as expressing a property and such syntactic features as being an adjective is assumed to be universal. Using the two together provides the child with his entering wedge.

Now, prima facie at very least, this seems to be a compact example of two bad habits that lexical semanticists are prone to: kicking the problem upstairs (‘How does the child detect whatever property it is that ‘attribute’ denotes?’ replaces ‘How does the child detect whatever property it is that ‘adjective’ denotes?’); and a partiality for analyses that need more analysis than their analysands. One sort of knows what an adjective is, I guess. But God only knows what's an attribute, so God only knows what it is for a term to express one.

The point isn't that ‘attribute’ isn't well defined; I suppose theoretical terms typically aren't. Rather, the worry is that Pinker has maybe got the cart before the horse; perhaps the intuition that ‘red’ and ‘12’ both express “attributes” (the first of, as it might be, hens (cf. ‘red hens’), and the second of, as it might be, sets (cf. ‘twelve hens’)) isn't really semantical at all; perhaps it's just a hypostatic misconstrual of the syntactic fact that both words occur as modifiers of nouns.12 It's undeniable that ‘red’ and ‘twelve’ are more alike than, as it might be, ‘red’ and ‘of’. But it's a fair question whether their similarity is semantic or whether it consists just in the similarity of their syntactic distributions. Answering these questions in the way that Pinker wants us to (viz. ‘Yes’ to the first, ‘No’ to the second) depends on actually cashing notions like object, attribute, agent, and the rest; on saying what exactly it is that the semantics of two words have in common in so far as both words ‘denote attributes’. So far, however, there is nothing on offer. Rather, at this point in the discussion, Pinker issues a kind of disclaimer that one finds very often in the lexical semantics literature: “I beg the thorny question as to the proper definition of the various semantic terms I appeal to such as ‘agent,’ ‘physical object’, and the like” (ibid.: 371 n. 12). Note the tactical similarity to Jackendoff, who, as we've seen, says that ‘keep’ means CAUSE A STATE TO ENDURE, but is unprepared to say much about what ‘CAUSE A STATE TO ENDURE’ means (except that it's ineffable).

Digression on method. You might suppose that in “begging the thorny question”, Pinker is merely exercising a theorist's indisputable right not to provide a formal account of the semantics of the (meta)language in which he does his theorizing. But that would misconstrue the logic of intentional explanations. When Pinker says that the child represents the snail as an agent, ‘agent’ isn't just a term of art that's being used to express a concept of the theorist's; it's also, simultaneously, being used to express a concept that the theorist is attributing to the child. It serves as part of a de dicto characterization of the intentional content of the child's state of mind, and the burden of the theory is that it's the child's being in a state of mind with that content that explains the behavioural data. In this context, to refuse to say what state of mind it is that's being attributed to the child simply vitiates the explanation. Lacking some serious account of what ‘agent’ means, Pinker's story and the following are closely analogous:

—Why did Martha pour water over George?

—Because she thinks that George is flurg.

—What do you mean, George is flurg?

—I beg that thorny question.

If a physicist explains some phenomenon by saying ‘blah, blah, blah, because it was a proton . . . ’, being a word that means proton is not a property his explanation appeals to (though, of course, being a proton is). That, basically, is why it is not part of the physicist's responsibility to provide a linguistic theory (e.g. a semantics) for ‘proton’. But the intentional sciences are different. When a psychologist says ‘blah, blah, blah, because the child represents the snail as an agent . . . ’, the property of being an agent‐representation (viz. being a symbol that means agent) is appealed to in the explanation, and the psychologist owes an account of what property that is. The physicist is responsible for being a proton but not for being a proton‐concept; the psychologist is responsible for being an agent‐concept but not for being an agent‐concept‐ascription. Both the physicist and the psychologist are required to theorize about the properties they ascribe, and neither is required to theorize about the properties of the language he uses to ascribe them. The difference is that the psychologist is working one level up. I think confusion on this point is simply rampant in linguistic semantics. It explains why the practice of ‘kicking semantic problems upstairs’ is so characteristic of the genre.

We've encountered this methodological issue before, and will encounter it again. I do hate to go on about it, but dodging the questions about the individuation of semantic features (in particular, about what semantic features denote) lets lexical semanticists play with a stacked deck. If the examples work, they count them for their theory. If they don't work, they count them as metaphorical extensions. I propose that we spend a couple of pages seeing how an analysis of this sort plays out.

Consider the following, chosen practically at random. It's a sketch of Pinker's account of how the fact that a verb has the syntactic property of being ‘dativizable’ (of figuring in alternations like ‘give Mary a book’/‘give a book to Mary’) can be inferred from the child's data about the semantics of the verb.

Dativizable verbs have a semantic property in common: they must be capable of denoting prospective possession of the referent of the second object by the referent of the first object . . . [But] possession need not be literal . . . [V]erbs of communication are treated as denoting the transfer of messages or stimuli, which the recipient metaphorically possesses. This can be seen in sentences such as ‘He told her the story,’ ‘He asked her a question,’ and ‘She showed him the answer’ [all of which have moved datives]. (Pinker 1989: 48)

What exactly Pinker is claiming here depends quite a lot on what relation “prospective possession” is, and on what is allowed as a metaphor for that relation; and, of course, we aren't told either. If John sang Mary a song, does Mary metaphorically prospectively possess the song that John sang to her? If so, does she also metaphorically prospectively possess a goodnight in “John wished Mary a goodnight”? Or consider:
  1. Zen told his story to the judge/Zen told the judge his story.

  2. Zen repeated his story to the judge/*Zen repeated the judge his story.
I think this is a counter‐example to Pinker's theory about datives. Could the difference really be that the judge was a prospective possessor of the story when Zen told it the first time, but not when he repeated it? On the other hand, since who knows what prospective possession is, or what might express it metaphorically, who knows whether such cases refute the analysis?

Or consider:

  3. John showed his etchings to Mary/John showed Mary his etchings.

  4. John exhibited his etchings to Mary/*John exhibited Mary his etchings.

Is it that Mary is in metaphorical possession of etchings that are shown to her but not of etchings that are exhibited to her? How is one to tell? More to the point, how is the child to tell? Remember that, according to Pinker's story, the child figures out that ‘exhibit’ doesn't dative‐move when he decides that it doesn't—even metaphorically—express prospective possession. But how on earth does he decide that?13

I should emphasize that Pinker is explicitly aware that there are egregious exceptions to his semantic characterization of the constraints on dative movement, nor does he suppose that appeals to “metaphorical possession” and the like can always be relied on to get him off the hook. At least one of the things that he thinks is going on with the double‐object construction is a morphological constraint on dative movement: polysyllabic verbs tend to resist it (notice show/*exhibit; tell/*repeat in the examples above). But though Pinker remarks upon the existence of such non‐semantic constraints, he appears not to see how much trouble they make for his view.

Remember the architecture of Pinker's argument. What's on offer is an inference from ontogenetic considerations to the conclusion that there are definitions. What shows that there are definitions is that there is a semantic level of linguistic representation at which verbs are lexically decomposed. What shows that there are semantic‐level representations is that you need semantic vocabulary to formulate the hypotheses that the child projects in the course of learning the lexicon; and that's because, according to Pinker, these hypotheses express correlations between certain semantic properties of lexical items, on the one hand, and the grammatical structures that the items occur in, on the other. Double‐object constructions, as we've seen, are supposed to be paradigms.

But it now appears that the vocabulary required to specify the conditions on such constructions isn't purely semantic after all; not even according to Pinker. To predict whether a verb permits dative movement, you need to know not only whether it expresses (literally or metaphorically) ‘prospective possession’, but also the pertinent facts about its morphology. What account of the representation of lexical structure does this observation imply? The point to notice is that there isn't, on anybody's story, any one level of representation that specifies both the semantic and the morphological features of a lexical item. In particular, it's a defining property of the (putative) semantic level that it abstracts from the sorts of (morphological, phonological, syntactic, etc.) properties that distinguish between synonyms. For example, the semantic level is supposed not to distinguish the representation of (e.g.) “bachelor” from the representation of “unmarried man”, the representation of “kill” from the representation of “cause to die”, and so forth.

Well, if that's what the semantic level is, and if the facts about morphological constraints on double‐object structures are as we (and Pinker) are supposing them to be, then the moral is that there is no level of linguistic representation at which the constraints on dative movement can be expressed: not the morphological level because (assuming that Pinker's story about “prospective possession” is true) morphological representation abstracts from the semantic properties on which dative movement is contingent. And, precisely analogously, not the semantic level because semantic level representation abstracts from the morphological properties of lexical items on which dative movement is also contingent.

Time to pull this all together and see where the argument has gotten. Since heaven only knows what “prospective possession” is, there's no seriously evaluating the claim that dative movement turns on whether a verb expresses it. What does seem clear, however, is that even if there are semantic constraints on the syntactic behaviour of double‐object verbs, there are also morphological constraints on their syntactic behaviour. So to state such generalizations at a single linguistic level, you would need to postulate not semantic representations but morphosemantic representations. It is, however, common ground that there is no level of representation in whose vocabulary morphological and semantic constraints can be simultaneously imposed.

This isn't a paradox; it is perfectly possible to formulate conditions that depend, simultaneously, on semantic and morphological properties of lexical items without assuming that there is a semantic level (and, for that matter, without assuming that there is a morphological level either). The way to do so is to suppose that lexical entries specify semantic features of lexical items.
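Here is a minimal sketch of that supposition, in invented notation (the entries and the dativizability condition below are caricatures for illustration; they are nobody's actual lexicon or rule):

```python
from dataclasses import dataclass

# Illustrative only: a lexical entry that carries semantic features
# alongside morphological ones, while the item itself remains atomic;
# there is no level at which the entry is decomposed into a definition.

@dataclass(frozen=True)
class LexicalEntry:
    phonology: str
    syllables: int                 # morphological information
    semantic_features: frozenset   # semantic information

def dativizable(v: LexicalEntry) -> bool:
    # A morphosemantic condition: it consults both kinds of feature at
    # once, which no single 'semantic level' or 'morphological level'
    # representation could state.
    return "PROSPECTIVE-POSSESSION" in v.semantic_features and v.syllables == 1

show = LexicalEntry("show", 1, frozenset({"PROSPECTIVE-POSSESSION"}))
exhibit = LexicalEntry("exhibit", 3, frozenset({"PROSPECTIVE-POSSESSION"}))

assert dativizable(show)         # 'John showed Mary his etchings.'
assert not dativizable(exhibit)  # '*John exhibited Mary his etchings.'
```

The design point is just the one in the text: the constraint reads semantic features off the entry inter alia; nothing requires a representation at which ‘show’ is replaced by a definition.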

Linguistic discussions of lexical semantics just about invariably confuse two questions we are now in a position to distinguish: Are there semantic features? and Is there a semantic level? It is, however, important to keep (p.63) these questions distinct if you care about the structure of concepts. It's especially important if what you care about is whether “kill”, “eat”, and the like have definitions; i.e. whether KILL, EAT, and the like are complex concepts or conceptual primitives. To say, in the present context, that there are semantic features is just to say that semantic facts can have syntactic reflexes: what an expression means (partially) determines the contexts in which it is syntactically well‐formed. To say that there is a semantic level is to make a very much stronger claim: viz. that there is a level of representation at which only the semantic properties of expressions are specified, hence at which synonymous expressions get the same representations, hence at which the surface integrity of lexical items is not preserved. I am, as no doubt the reader will have gathered, much inclined to deny both these claims; but never mind that for now. My present concern is just to emphasize the importance of the difference between them.

For many of the familiar tenets of lexical semantics flow from the stronger claim but not from the weaker one. For example, since everybody thinks that the concepts expressed by phrases are typically complex, and since, by definition, representations at the semantic level abstract from the lexical and syntactic properties that distinguish phrases from their lexical synonyms, it follows that if there is a semantic level, then the concepts expressed by single words are often complex too. However, this conclusion does not follow from the weaker assumption: viz. that lexical entries contain semantic features. Linguistic features can perfectly well attach to a lexical item that is none the less primitive at every level of linguistic description.14 And it's only the weaker assumption that the facts about dative movement and the like support, since the most these data show is that the syntactic behaviour of lexical items is determined by their semantics inter alia; e.g. by their semantic features together with their morphology. So Pinker's argument for definitions doesn't work even on the assumption that ‘denotes a prospective possession’ and the like are bona fide semantic representations.

THE MORAL: AN ARGUMENT FOR LEXICAL SEMANTIC FEATURES IS NOT IPSO FACTO AN ARGUMENT THAT THERE IS LEXICAL SEMANTIC DECOMPOSITION!!! Pardon me if I seem to shout; but people do keep getting this wrong, and it does make a litter of the landscape.

(p.64) Well, but has Pinker made good even the weaker claim? Suppose we believe the semantic bootstrapping story about language learning; and suppose we pretend to understand notions like prospective possession, attribute, and the like; and suppose we assume that these are, as it were, really semantic properties and not mere shadows of distributional facts about the words that express them; and suppose we take for granted the child's capacity for finding such semantic properties in his input; and suppose that the question we care about is not whether there's a semantic level, but just whether the mental lexicon (ever) represents semantic features of lexical items. Supposing all of this, is there at least a bootstrapping argument that, for example, part of the lexical entry for ‘eat’ includes the semantic feature ACTION?

Well, no. Semantic bootstrapping, even if it really is semantic, doesn't require that lexical entries ever specify semantic properties. For even if the child uses the knowledge that ‘eat’ denotes an action to bootstrap the syntax of ‘snails eat leaves’, it doesn't follow that “denoting an action” is a property that “eat” has in virtue of what it means. All that follows—hence all the child needs to know in order to bootstrap—is that ‘eat’ denotes eating and that eating is a kind of acting. (I'm indebted to Eric Margolis for this point.) Indeed, mere reliability of the connection between eating and acting would do perfectly well for the child's purposes; “semantic bootstrapping” does not require the child to take the connection to be semantic or even necessary. The three‐year‐old who thinks (perhaps out of Quinean scruples) that ‘eating is acting’ is true but contingent will do just fine, so long as he's prepared to allow that contingent truths can have syntactic reflexes.
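The point can be made concrete with a minimal sketch (the sketch is mine, not Pinker's; the words, the kind‐taxonomy, and the category mapping are all invented stand‐ins): everything the learner needs can live in world knowledge rather than in the lexicon.

```python
# A toy illustration (not Pinker's model; all names here are hypothetical
# stand-ins) of the point in the text: the learner needs only the
# reliable -- and perhaps merely contingent -- fact that eating is a kind
# of acting. No lexical entry has to carry a semantic feature like ACTION.

# What each word denotes; note that no semantic features are stored here.
DENOTATION = {'snails': 'snails', 'eat': 'eating', 'leaves': 'leaves'}

# Reliable (possibly contingent) world knowledge about the denotations.
KIND = {'eating': 'action', 'snails': 'object', 'leaves': 'object'}

# The bootstrapping correlation: kinds of denotata -> syntactic categories.
CATEGORY = {'action': 'V', 'object': 'N'}

def bootstrap(word):
    """Infer a word's syntactic category from what it denotes."""
    return CATEGORY[KIND[DENOTATION[word]]]

# 'snails eat leaves' comes out N V N without 'eat' being marked ACTION.
print([bootstrap(w) for w in ('snails', 'eat', 'leaves')])
```

The design point of the sketch is that the eating–acting taxonomy lives in world knowledge (the KIND table), not in any lexical entry; and that, on the present argument, is all the child's bootstrapping requires.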

So much for the bootstrapping argument. I really must stop this grumbling about lexical semantics. And I will, except for a brief, concluding discussion of Pinker's handling of (what he calls) ‘Baker's Paradox’ (after Baker 1979). This too amounts to a claim that ontogenetic theory needs lexical semantic representations; but it makes quite a different sort of case from the one we've just been looking at.

The ‘Baker's Paradox’ Argument

Pinker thinks that, unless children are assumed to represent ‘eat’ as an action verb (mutatis mutandis, ‘give’ as a verb of prospective possession, etc.), Baker's Paradox will arise and make the acquisition of lexical syntax unintelligible. I'll tell you what Baker's Paradox is in a moment, but I want to tell you what I think the bottom line is first. I think that Baker's Paradox is a red herring in the present context. In fact, I think that it's two red herrings: on Pinker's own empirical assumptions, there probably isn't a (p.65) Baker's Paradox about learning the lexicon; and, anyhow, assuming that there is one provides no argument that lexical items have semantic structure. Both of these points are about to emerge.

Baker's Paradox, as Pinker understands it, is a knot of problems that turn on the (apparent) fact that children (do or can) learn the lexical syntax of their language without much in the way of overt parental correction. Pinker discerns “three aspects of the problem [that] give it its sense of paradox”, these being the child's lack of negative evidence, the productivity of the structures the child learns (“if children simply stuck with the argument structures that were exemplified in parental speech . . . they would never make errors . . . and hence would have no need to figure out how to avoid or expunge them”), and the “arbitrariness” of the linguistic phenomena that the child is faced with (specifically “near synonyms [may] have different argument structures” (1989: 8–9)). If, for example, the rule of dative movement is productive, and if it is merely arbitrary that you can say ‘John gave the library the book’ but not *‘John donated the library the book’, how, except by being corrected, could the child learn that the one is OK and the other is not?

That's a good question, to be sure; but it bears full stress that the three components do not, as stated and by themselves, make Baker's Paradox paradoxical. The problem is an unclarity in Pinker's claim that the rules the child is acquiring are ‘productive’. If this means (as it usually does in linguistics) just that the rules are general (they aren't mere lists; they go ‘beyond the child's data’) then we get no paradox but just a standard sort of induction problem: the child learns more than the input shows him, and something has to fill the gap. To get a paradox, you have to throw in the assumption that, by and large, children don't overgeneralize; i.e. that, by and large, they don't apply the productive rules they're learning to license usages that count as mistaken by adult standards. For suppose that assumption is untrue and the child does overgeneralize. Then, on anybody's account, there would have to be some form of correction mechanism in play, endogenous or otherwise, that serves to expunge the child's errors. Determining what mechanism(s) it is that serve(s) this function would, of course, be of considerable interest; especially on the assumption that it isn't parental correction. But so long as the child does something that shows the world that he's got the wrong rule, there is nothing paradoxical in the fact that information the world provides ensures that he eventually converges on the right one.

To repeat, Baker's Paradox is a paradox only if you add ‘no overgeneralizations’ to Pinker's list. The debilitated form of Baker's Paradox that you get without this further premiss fails to do what Pinker very much wants Baker's Paradox to do; viz. “[take] the burden of explaining learning (p.66) out of the environmental input and [put] it back into the child” (1989: 14–15). Only if the child does not overgeneralize lexical categories is there evidence for his “differentiating [them] a priori” (ibid.: 44, my emphasis); viz. prior to environmentally provided information.

Pinker's argument is therefore straightforwardly missing a premiss. The logical slip seems egregious, but Pinker really does make it, as far as I can tell. Consider:

[Since there is empirical evidence against the child's having negative information, and there is empirical evidence for the child's rules being productive,] the only way out of Baker's Paradox that's left is . . . rejecting arbitrariness. Perhaps the verbs that do or don't participate in these alternations do not belong to arbitrary lists after all . . . [Perhaps, in particular, these classes are specifiable by reference to semantic criteria.] . . . If learners could acquire and enforce criteria delineating the[se] . . . classes of verbs, they could productively generalize an alternation to verbs that meet the criteria without overgeneralizing it to those that do not. (ibid.: 30)

Precisely so. If, as Pinker's theory claims, the lexical facts are non‐arbitrary and children are sensitive to their non‐arbitrariness, then the right prediction is that children don't overgeneralize the lexical rules.

Which, however, by practically everybody's testimony, including Pinker's, children reliably do. On Pinker's own account, children aren't “conservative” in respect of the lexicon (see 1989: 19–26 for lots and lots of cases).15 This being so, there's got to be something wrong with the theory that the child's hypotheses “differentiate” lexical classes a priori. A priori constraints would mean that false hypotheses don't even get tried. Overgeneralization, by contrast, means that false hypotheses do get tried but are somehow expunged (presumably by some sort of information that the environment supplies).

At one point, Pinker almost ’fesses up to this. The heart of his strategy for lexical learning is that “if the verbs that occur in both forms have some [e.g. semantic] property . . . that is missing in the verbs that occur [in the input data] in only one form, bifurcate the verbs . . . so as to expunge nonwitnessed verb forms generated by the earlier unconstrained version of the rule if they violate the newly learned constraint” (1989: 52). Pinker admits that this may “appear to be using a kind of indirect negative evidence: it is sensitive to the nonoccurrence of certain kinds of verbs”. To be sure; it sounds an awful lot like saying that there is no Baker's Paradox for the learning of verb structure, hence no argument for a priori semantic (p.67) constraints on the child's hypotheses about lexical syntax. What happens, on this view, is that the child overgeneralizes, just as you would expect, but the overgeneralizations are inhibited by lack of positive supporting evidence from the linguistic environment and, for this reason, they eventually fade away. This would seem to be a perfectly straightforward case of environmentally determined learning, albeit one that emphasizes (as one might have said in the old days) ‘lack of reward’ rather than ‘punishment’ as the signal that the environment uses to transmit negative data to the learner. I'm not, of course, suggesting that this sort of story is right. (Indeed Pinker provides a good discussion of why it probably isn't.) My point is that Pinker's own account seems to be no more than a case of it. What is crucial to Pinker's solution of Baker's Paradox isn't that he abandons arbitrariness; it's that he abandons ‘no negative data’.
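The shape of the procedure just quoted can be put in a minimal sketch (again mine, not Pinker's; the criterion and the verb list are invented stand‐ins):

```python
# A toy rendering (not Pinker's model; the criterion and the verbs are
# hypothetical stand-ins) of the quoted procedure: generalize the
# double-object frame freely, then expunge it for any verb that neither
# occurs in that frame in the input nor satisfies the learned criterion.

def expunge_by_criterion(witnessed, criterion):
    """witnessed maps each verb to the set of frames heard in the input:
    'PP' (gave the book to the library) and/or 'DO' (gave the library
    the book). Returns the frames the learner ends up licensing."""
    grammar = {}
    for verb, frames in witnessed.items():
        if 'DO' in frames or criterion(verb):
            grammar[verb] = {'PP', 'DO'}  # keep the productive generalization
        else:
            grammar[verb] = {'PP'}        # expunge the unwitnessed form
    return grammar

# A stand-in criterion, e.g. monosyllabic verbs of (prospective) possession.
criterion = lambda verb: verb in {'give', 'send', 'throw'}

heard = {'give': {'PP', 'DO'}, 'send': {'PP'}, 'donate': {'PP'}}
print(expunge_by_criterion(heard, criterion))
```

On this rendering, ‘donate’ loses the double‐object frame precisely because that frame never occurs in the input; which is to say that the procedure trades on non‐occurrence, i.e. on negative data.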

Understandably, Pinker resists this diagnosis. The passage cited above continues as follows:

This procedure might appear to be using a kind of indirect negative evidence; it is sensitive to the nonoccurrence of certain kinds of forms. It does so, though, only in the uninteresting sense of acting differently depending on whether it hears X or doesn't hear X, which is true of virtually any learning algorithm . . . It is not sensitive to the nonoccurrence of particular sentences or even verb‐argument structure combinations in parental speech; rather it is several layers removed from the input, looking at broad statistical patterns across the lexicon. (1989: 52)

I don't, however, think this comes to anything much. In the first place, it's not true (in any unquestion‐begging sense) that “virtually any learning algorithm [acts] differently depending on whether it hears X or doesn't hear X”. To the contrary, it's a way of putting the productivity problem to say that the learning algorithm must somehow converge on treating infinitely many unheard types in the same way that it treats finitely many of the heard types (viz. as grammatical) and finitely many heard types in the same way that it treats a different infinity of the unheard ones (viz. as ungrammatical). To that extent, the algorithm must not assume that either being heard or not being heard is a projectible property of the types.

On the other hand, every treatment of learning that depends on the feedback of evidence at all (whether it supposes the evidence to be direct or indirect, negative or positive, or all four) must “be several layers removed from the input, looking at broad statistical patterns across the lexicon”; otherwise the presumed feedback won't generalize. It follows that, on anybody's account, the negative information that the environment provides can't be “the nonoccurrence of particular sentences” (my emphasis); it's got to be the non‐occurrence of certain kinds of sentences. (p.68) This much is common ground to any learning theory that accounts for the productivity of what is learned.

Where we've gotten to now: probably there isn't a Baker's Paradox about lexical syntax; you'd need ‘no overgeneralization’ to get one, and ‘no overgeneralization’ is apparently false of the lexicon. Even if, however, there were a Baker's Paradox about the lexicon, that would show that the hypotheses that the child considers when he makes his lexical inductions must be tightly endogenously constrained. But it wouldn't show, or even suggest, that they are hypotheses about semantic properties of lexical items. No more than the existence of a bona fide Baker's Paradox for sentential syntax—which it does seem that children hardly ever overgeneralize—shows, or even suggests, that it's in terms of the semantic properties of sentences that the child's hypotheses about their syntax are defined.

So much for Pinker's two attempts at ontogenetic vindications of lexical semantics. Though neither seems to work at all, I should emphasize a difference between them: whereas the ‘Baker's Paradox’ argument dissolves upon examination, there's nothing wrong with the form of the bootstrapping argument. For all that I've said, it could still be true that lexical syntax is bootstrapped from lexical semantics. Making a convincing case that it is would require, at a minimum, identifying the straps that the child tugs and showing that they are bona fide semantic; specifically, it would require showing that the lexical properties over which the child generalizes are typically among the ones that semantic‐level lexical representations specify. In principle, we could get a respectable argument of that form tomorrow; it's just that, so far, there aren't any. So too, in my view, with the other ‘empirical’ or ‘linguistic’ arguments for lexical decomposition; all that's wrong with them is that they aren't sound.

Oh, well, so be it. Let's go see what the philosophers have.


(1) It will help the reader to keep the uses distinct from the mentions if he bears in mind that the expressions appearing in caps. (e.g. ‘BACHELOR’) are names, rather than structural descriptions, of mental representations. I thus mean to leave it open that the MR that ‘BACHELOR’ names might be structurally complex; for example, it might have as constituents the MRs that ‘UNMARRIED’ and ‘MAN’ name. By contrast, it's stipulative that no formula is a structural description of a mental representation unless it contains names of the MR's constituents. The issues we'll be concerned with can often be phrased either by asking about the structure of mental representations or about the structural descriptions of mental representations. In practice, I'll go back and forth between the two.

The claim that concepts are definitions can be sharpened in light of these remarks. Strictly speaking, it's that the definiens is the structural description of the definiendum; for example, ‘UNMARRIED MAN’ is the structural description of the concept BACHELOR.

(2) It's common ground that—idioms excepted—MRs that correspond to phrases (for example, the one that corresponds to “brown cow”) are typically structurally complex, so I've framed the definition theory as a thesis about the MRs of concepts that are expressed by lexical items. But, of course, this way of putting it relativizes the issue to the choice of a reference language. Couldn't it be that the very same concept that is expressed by a single word in English gets expressed by a phrase in Bantu, or vice versa? Notice, however, that this could happen only if the English word in question is definable; viz. definable in Bantu. Since it's going to be part of my story that most words are undefinable—not just undefinable in the language that contains them, but undefinable tout court—I'm committed to claiming that this sort of case can't arise (very often). The issue is, of course, empirical. So be it.

(3) i.e. there are no complex mental representations other than those that correspond to concepts that are expressed by phrases; see the preceding footnote. From now on, I'll take this caveat for granted.

(4) I am playing very fast and loose with the distinction between concepts and their structural descriptions (see n. 1 above). Strictu dictu, it can't both be that the concept BACHELOR abbreviates the concept UNMARRIED MAN and that the concept BACHELOR is the concept UNMARRIED MAN. But not speaking strictly makes the exposition easier, and the present considerations don't depend on the conflation.

(5) It's important to distinguish the idea that definitions typically capture only the core meaning of a univocal expression from the idea that definitions typically capture only one sense of an ambiguous expression. The latter is unobjectionable because it is responsive to pretheoretic intuitions that are often pretty emphatic: surely ‘bank’ has more than one meaning. But who knows how many “aspects” the meaning of an unambiguous word has? A fortiori, who knows when a theory succeeds in capturing some but not all of them?

(6) Wherein does this semantic field differ from any other? If I say that Harry kept the bird in the cage, don't I thereby ascribe a property—viz. the property of keeping the bird in the cage—to Harry? Jackendoff has a lot of trouble deciding what to call his semantic fields. This might well be because they're gerrymandered.

(7) This analysis couldn't be exhaustive; cf. ‘keep an appointment/promise’ and the like. But perhaps ‘keep’ is ambiguous as well as polysemous. There's certainly something zeugmatic about ‘He kept his promises and his snowshoes in the cellar’.

(8) On the West Coast of the United States, much the same sort of thesis is often held in the form that lexical analysis captures the regularities in a word's behaviour by exhibiting a core meaning together with a system of ‘metaphorical’ extensions. See, for example, the putative explanation of polysemy in Lakoff (1988) and in many other treatises on “cognitive semantics”. As far as I can tell, the arguments against Jackendoff that I'm about to offer apply without alteration to Lakoff as well.

(9) Examples of this tactic are legion in the literature. Consider the following, from Higginbotham 1994. “[T]he meanings of lexical items systematically infect grammar. For example . . . it is a condition of object‐preposing in derived nominal constructions in English that the object be in some sense ‘affected’ in the events over which the nominal ranges: that is why one has (1) but not (2)” (renumbered):

(1) algebra's discovery (by the Arabs)

(2) *algebra's knowledge (by the Arabs).

Note that ‘in some sense’ is doing all the work. It is what distinguishes the striking claim that preposing is sensitive to the meanings of verbs from the rather less dramatic thought that you can prepose with some verbs (including ‘discover’) and not with others (including ‘know’). You may suppose you have some intuitive grasp of what ‘affecting’ amounts to here, but I think it's an illusion. Ask yourself how much algebra was affected by its discovery. More or less, would you say, than the light bulb was affected by Edison's inventing it?

(10) Fodor and Lepore (forthcoming a) provides some independent evidence for the analysis proposed here. Suppose, however, that this horse won't run, and the asymmetry Pinker points to really does show that ‘keep’ is polysemous. That would be no comfort to Jackendoff, since Jackendoff's account of the polysemy doesn't predict the asymmetry of entailments either: that J2 but not J3 belongs to the semantic field “possession” in Jackendoff's analysis is pure stipulation.

But I won't stress this. Auntie says I should swear off ad hominems.

(11) Auntie's not the only one with this grumble; Hilary Putnam has recently voiced a generalized version of the same complaint. “[O]n Fodor's theory . . . the meaning of . . . words is not determined, even in part, by the conceptual relations among the various notions I have mastered—e.g., between ‘minute’ and my other time concepts—but depends only on ‘nomic relations’ between the words (e.g. minute) and the corresponding universals (e.g. minutehood). These ‘universals’ are just word‐shaped objects which Fodor's metaphysics projects out into the world for the words to latch on to via mysterious ‘nomic relations’; the whole story is nothing but a ‘naturalistic’ version of the Museum Myth of Meaning” (1995: 79; italics and scare‐quotes are Putnam's). This does seem to me to be a little underspecified. Since Putnam provides no further exposition (and, endearingly, no arguments at all), I'm not sure whether I'm supposed to worry that there aren't any universals, or only that there aren't the universals that my semantics requires. But if Putnam thinks saying “ ‘takes a minute’ expresses the property of taking a minute” all by itself puts me in debt for a general refutation of nominalism, then he needs to have his methodology examined.

Still, it's right that informational semantics needs an ontology, and that the one it opts for had better not beg the questions that a semantic theory is supposed to answer. I'll have a lot to say about all that in Chapters 6 and 7.

(12) For an account of language acquisition in which the horse and cart are assigned the opposite configuration—syntax bootstraps semantics—see Gleitman 1990. To the extent that we have some grasp on what concepts terms like ‘S’, ‘NP’, ‘ADJ’ express, the theory that children learn by syntactic bootstrapping is at least better defined than Pinker's. (And to the extent that we don't, it's not.)

(13) When Pinker's analyses are clear enough to evaluate, they are often just wrong. For example, he notes in his discussion of causatives that the analysis of the transitive verb ‘paint’ as cover with paint is embarrassed by such observations as this: although when Michelangelo dipped his paintbrush in his paint pot he thereby covered the paintbrush with paint, nevertheless he did not, thereby, paint the paintbrush. (The example is, in fact, borrowed from Fodor 1970.) Pinker explains that “stereotypy or conventionality of manner constrains the causative . . . This might be called the ‘stereotypy effect’ ” (1984: 324). So it might, for all the good it does. It is possible, faut de mieux, to paint the wall with one's handkerchief; with one's bare hands; by covering oneself with paint and rolling up the wall (in which last case, by the way, though covering the wall with the paint counts as painting the wall, covering oneself with the paint does not count as painting oneself even if one does it with a paintbrush; only as getting oneself covered with paint).

Whether you paint the wall when you cover it with paint depends not on how you do it but on what you have in mind when you do it: you have to have in mind not merely to cover the wall with paint but to paint the wall. That is, “paint” (the transitive verb) apparently can't be defined even in terms of such closely related expressions as “paint” (the noun). Or, if it can, none of the decompositional analyses suggested so far, Pinker's included, comes even close to showing how.

(14) Compare: no doubt, the lexical entry for ‘boy’ includes the syntactic feature +Noun. This is entirely compatible with ‘boy’ being a lexical primitive at every level of linguistic description.

Saying that lexical items have features is one thing; saying that lexical items are feature bundles is quite another. Do not conflate these claims.

(15) Though the facts are a little labile, to be sure. For some recent data, see Marcus et al. 1992.