The Oxford Handbook of Philosophy of Mathematics and Logic

Edited by Stewart Shapiro

Print publication date: 2005

Print ISBN-13: 9780195148770

Published to Oxford Scholarship Online: July 2005

DOI: 10.1093/0195148770.001.0001


Logical Consequence: A Constructivist View

Author: Dag Prawitz
Source: The Oxford Handbook of Philosophy of Mathematics and Logic
Publisher: Oxford University Press
DOI: 10.1093/0195148770.003.0022

Abstract and Keywords

The main question addressed in this chapter is how to analyze the modal ingredient in the concept of logical consequence or logical validity of an inference, here expressed by saying that the truth of the conclusion of a logically valid inference should follow by necessity of thought from the truth of the premisses. It is claimed that this modal ingredient is not taken care of by Tarski’s requirement, later developed in model theory, that for all interpretations of the non-logical terms of the sentences, the conclusion is true if the premisses are. Instead we must bring in reasoning or the notion of proof in the analysis, and it is shown how one may develop a notion of proof or validity of an argument based on the constructive meaning of the logical constants. It is proposed that from a constructive point of view, a sentence is a consequence of a set of premisses if and only if there is a proof in the sense explained of the sentence from the set of premisses. Knowledge of such a proof compels us to hold the conclusion true given that we hold the premisses true.

Keywords:   modality, logical consequence, validity, Tarski, model theory, proof, constructive meaning

In spite of the great advancement of logic in our time and the technical sophistication of disciplines such as model theory and proof theory, the concept of logical consequence—the most basic notion of logic—is still poorly understood. Basic intuitions are often in conflict with each other, and rather few attempts have been made to sort this out in a systematic fashion. Here I shall critically review some of the attempts that have been made to articulate intuitions about logical consequence,1 but the review makes no claim to be comprehensive or to settle the issues. Most attention will be given to how the notion of logical consequence may be developed from a constructivist point of view.

The idea of logical consequence has been with us since the time of Plato and Aristotle, and since then has been central for our understanding of philosophy and science. Presumably it arose at least partly in reflections upon Greek mathematics. It must have been a challenge to Greek intellectuals to explain the noteworthy form that mathematics took in their time. How could it be that the Greeks could not only calculate such things as the length of one side of a right‐angled triangle given the lengths of the other two—something that the Babylonians already knew perfectly well how to do—but also could prove general laws such as Pythagoras' theorem? Somehow the idea occurred that the new practice was to be seen as involving the drawing of logically valid inferences. The idea of such inferences is present in Plato's writing and is clearly formulated by Aristotle.

From this beginning there have been at least three basic intuitions about what it is for an inference to be logically valid—or, as we also say, for its conclusion to follow logically from its premisses, or to be a logical consequence of its premisses.2 The most basic one, which goes almost without saying, is that a valid inference is truth‐preserving: if the premisses are true, so is the conclusion. If the conditional here is understood as material implication, this means only that it is not the case that the premisses are true and the conclusion is false. Of course, the satisfaction of this condition is not enough to make the inference valid. Two further conditions, which occur more or less explicitly in Aristotle's writings, must also be satisfied:

(1) It is because of the logical form of the sentences involved, and not because of their specific content, that the inference is truth‐preserving.

(2) It is impossible that the premisses are all true but the conclusion is false—or, in positive terms, it is necessary that if the premisses are true, then so is the conclusion.

It remains to be discussed to what extent these two conditions are independent of each other, but there has been a general consensus that both conditions are necessary. While the modal notion that occurs in (2) is notoriously difficult to explicate, the main idea of (1) is comparatively easy to develop into a more precise form. Let us therefore first turn to that.

Variations of Specific Contents

Condition (1) comes into play in a well‐known manner, already employed by Aristotle, when giving counterexamples to the logical validity of an inference. When a conclusion B is inferred from premisses A₁, A₂, …, Aₙ, one may object to the logical validity of the inference by presenting true sentences A₁*, A₂*, …, Aₙ* and a false sentence B*, of the same logical form as A₁, A₂, …, Aₙ and B, respectively. To make this more precise, one must specify the logical forms and the different categories of nonlogical terms. This is indeed something that has been done in modern logic for various languages, the first‐order languages being one kind of example.

Given a language where these things have been specified, we may quantify over all variations of the content of nonlogical terms. Let A be the conclusion and Γ the set of premisses of an inference considered in this language, and let a substitution S be a pairing of the nonlogical terms in these sentences with other nonlogical terms, each pair consisting of terms of the same category. Let A^S be the result obtained from A by replacing the nonlogical terms in A with the ones they are paired to by S, and let Γ^S be the result of carrying out such a substitution in each sentence of Γ. A counterexample to the inference then consists of a set Γ^S of true sentences and a false sentence A^S. Since S replaces only nonlogical terms with other nonlogical terms, the counterexample has the same logical form as the original inference.
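To illustrate with a worked example (the sentences are mine, not the text's), consider an inference of the form

$$\frac{\forall x(Fx \to Gx) \qquad \forall x(Hx \to Gx)}{\forall x(Fx \to Hx)}$$

A substitution S pairing the predicate F with “cat,” G with “animal,” and H with “dog” yields true premisses (“all cats are animals,” “all dogs are animals”) and a false conclusion (“all cats are dogs”), and so constitutes a counterexample in the sense just defined.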

The idea of (1) may be expressed by saying that there is no counterexample to a logically valid inference or, in positive terms, an inference is logically valid only if all inferences of the same logical form as the given one are truth‐preserving. Letting S vary over all substitutions of the kind described above, condition (1), or at least part of its content, may now be expressed by saying simply:

(1′) For all substitutions S, if all the sentences of Γ^S are true, then so is A^S.

This way of expressing condition (1) occurs already in medieval times and is taken up again by Bolzano, except that he considered propositions instead of sentences and concepts instead of terms. Disregarding some minor further conditions, which we need not consider here, Bolzano took (1′) also as a sufficient condition for A being a logical consequence of Γ. This last idea we shall discuss in the next section.

An obvious objection to (1′) (when one comes to think about it) is that (1′) may be satisfied simply because of a poverty of the language that limits the number of substitutions S. One should therefore consider extensions of the given language by introducing names of all existing things in the categories involved, and demand that (1′) hold for all such extensions of the language. Or, alternatively (and better, since one then need not consider languages with uncountably many terms), one speaks of interpretations instead of substitutions. An interpretation of a given language is an assignment I to the nonlogical terms of the language which assigns denotations appropriate for the various categories of nonlogical terms. Speaking of truth under such assignments, as this is nowadays defined in elementary textbooks of logic, we can replace (1′) with

(1″) For all interpretations I, if all sentences of Γ are true under I, then A is true under I.

This is how Tarski [1936] developed condition (1). If we take quantifiers to be always relativized by explicit specification of the domains, the interpretations also take care of the variations of these domains. Sentences such as “everyone is mortal” and “some numbers are prime” may then be rendered “(∀x ∈ humans)(x is mortal)” and “(∃x ∈ numbers)(x is prime)”; and an interpretation of these sentences then also assigns some extensions to the domain names “humans” and “numbers.”

Like Bolzano, Tarski takes his variant of (1) not only as a necessary but also as a sufficient condition for logical consequence. This is what is done also in model theory, where (1″) is slightly modified by making the domains explicit. Restricting ourselves to first‐order languages and letting all the quantifiers range over the same individual domain D, we say that a structure or a possible model for a first‐order language is a pair M = <D, I> where D is a set and I is an interpretation in D of the nonlogical terms of the language. This means that I assigns set‐theoretical objects that can be obtained from D (i.e., individual constants are assigned elements of D, first‐order functional symbols are assigned functions from D to D, one‐place first‐order predicates are assigned subsets of D, and so on). Saying that a possible model <D, I> is a model for a sentence A when A is true in D under I (i.e., A is true when the domain of the quantifiers is taken to be D and the values of the nonlogical terms are taken to be the ones assigned by I), and that it is a model for the set Γ of sentences when it is a model for all the sentences of Γ, we may replace (1″) by the simple and well‐known condition

(1‴) All models of Γ are models of A.
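As a concrete gloss (a minimal sketch in Haskell, with all names hypothetical rather than taken from the text), truth in a single possible model <D, I> can be computed for a toy first‐order language when D is finite. Condition (1‴) then quantifies this check over all possible models, which is of course no longer an effective procedure; only the single‐model check below is.

```haskell
import qualified Data.Map as M

data Term = Var String | Const String

data Fml
  = Pred String [Term]
  | Not Fml | And Fml Fml | Or Fml Fml | Imp Fml Fml
  | ForAll String Fml | Exists String Fml

-- A possible model <D, I>: a domain plus an interpretation of the
-- nonlogical terms (constants and predicate symbols).
data Model d = Model
  { domain :: [d]
  , constI :: M.Map String d
  , predI  :: M.Map String ([d] -> Bool)
  }

evalT :: Model d -> M.Map String d -> Term -> d
evalT _ env (Var x)   = env M.! x
evalT m _   (Const c) = constI m M.! c

-- Truth under an assignment of values to the variables; the quantifier
-- clauses let the bound variable range over the listed domain, mirroring
-- the relativized quantifiers discussed above.
holds :: Model d -> M.Map String d -> Fml -> Bool
holds m env f = case f of
  Pred p ts  -> (predI m M.! p) (map (evalT m env) ts)
  Not a      -> not (holds m env a)
  And a b    -> holds m env a && holds m env b
  Or a b     -> holds m env a || holds m env b
  Imp a b    -> not (holds m env a) || holds m env b
  ForAll x a -> all (\d -> holds m (M.insert x d env) a) (domain m)
  Exists x a -> any (\d -> holds m (M.insert x d env) a) (domain m)

-- A sentence is true in <D, I> iff it holds under the empty assignment.
isModelOf :: Model d -> Fml -> Bool
isModelOf m = holds m M.empty
```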

I consider condition (1″) to be a reasonable explication of the idea expressed in (1), and condition (1‴) to be a way of expressing a slight variation of (1″) in set‐theoretical terms. However, the idea of making a condition of this kind a defining one for the concept of logical consequence must be discussed.

In discussing this question it is sometimes convenient to consider the closely related notion of logical truth instead of logical consequence. Logical truth may be considered the special case of logical consequence arising when the set Γ is empty. All the variants of (1) thus give a necessary condition for a sentence to be a logical truth, and this condition is turned into a definition of logical truth by Tarski and the model theory inspired by him. Hence, according to these analyses, a sentence A is logically true if and only if A is true under all interpretations of A, or in all possible models of A, respectively.

Criticism of the Tarskian Analysis of Logical Consequence

Tarski agrees that there is a modal ingredient in our intuitive idea of logical consequence and that it can be formulated in the way of (2). He must thus claim that condition (1″), which he equates with A being a logical consequence of Γ, contains a modal element that takes care of the intuition expressed in (2). Notwithstanding that our modal intuitions may be obscure, there is reason to be suspicious of that claim.

The validity of inferences is not a small theme of philosophy. It is said that with the help of valid inferences, we justify our beliefs and acquire knowledge. The modal character of a valid inference is essential here, and is commonly articulated by saying that a valid inference guarantees the truth of the conclusion, given the truth of the premisses. It is because of this guarantee that a belief in the truth of the conclusion becomes justified when it has been inferred by the use of a valid inference from premisses known to be true. But if the validity of an inference is equated with (1) (or its variants), then in order to know that the inference is valid, we must already know, it seems, that the conclusion is true in case the premisses are true. After all, according to this analysis, the validity of the inference just means that the conclusion is true in case the premisses are, and that the same relation holds for all inferences of the same logical form as the given one. Hence, on this view, we cannot really say that we infer the truth of the conclusion by the use of a valid inference. It is, rather, the other way around: we can conclude that the inference is valid after having established for all inferences of the same form that the conclusion is true in all cases where the premisses are.3

To say that a valid inference guarantees the truth of the conclusion, given the truth of the premisses, is to give the modal character of an inference an epistemic ring, and it seems obvious that condition (1″) has no connection with such a modality. What, then, about a modality of a more ontic kind, one purged of epistemic matters? Does the claim that (1″) contains a modal ingredient (and in that way explicates condition (2)) fare better for a modality of that kind?

A verdict on that question may most easily be arrived at by considering the Tarskian definition of logical truth. One may then note that the definition equates the logical truth of a sentence A with the actual truth of a closely related sentence, namely, the universal closure of the open sentence A* obtained from A by replacing all its nonlogical terms with variables of the same category. This observation reinforces the suspicion that there is no modal ingredient at all in (1″). Clearly, the actual truth of the universal closure of A* in no way implies the necessary truth of its instance A.4 It is therefore difficult to see how the Tarskian analysis of logical consequence, which proceeds entirely in terms of actual truth (although under different interpretations of the descriptive terms), touches any modality.

But perhaps there is another way to look at the Tarskian analysis, or at least at the closely related model theory which grew out of Tarski's semantics. The basic intuitions behind our conditions (1) and (2) may be summarized by saying that two kinds of invariance are required of a logically valid inference: it must be truth‐preserving regardless of the meaning of the descriptive terms of the sentences involved, and also regardless of the facts of the world. Etchemendy [1990] coined the term interpretational semantics for a theory that describes how the truth‐value of a sentence varies with what the nonlogical terms of the sentence mean, and the term representational semantics for a theory that describes how it varies with how the world is. If it now has to be admitted that the Tarskian semantics occurring in (1″) is formulated as an interpretational semantics, it may be thought that at least the model theory occurring in (1‴) can be viewed both as an interpretational and as a representational semantics. If so, there would be a modal ingredient in condition (1‴) after all.

This stand seems to be the one taken by Shapiro (in his contribution concerning logical consequence in this volume). He argues that in a possible model <D, I>, the domain D gives the size of a possible world, and by assigning values (extensions) to the descriptive terms, I gives the facts of a possible world. The possible model may therefore be seen as representing a way the world might be. The idea is thus that a possible model can be viewed in two ways, either as depicting the meaning of descriptive terms or as representing a possible world. Viewed in the first way, (1‴) says that the inference is truth‐preserving regardless of meaning, and viewed in the second way, it says that it is truth‐preserving regardless of the world.

This standpoint is exactly one of the main targets of Etchemendy's criticism. Etchemendy argues that the standpoint is reasonable only if it can be made plausible that the semantic theory simultaneously represents both all possible meanings of the descriptive terms and all the ways the world might be. There are simple instances where this is the case, but for several reasons ordinary model theory is not such a case according to Etchemendy. One objection is that some models cannot coherently be looked upon as representing a possible world: for example, ones in which analytically true sentences such as “2 + 2 = 4” come out false. A second, equally serious objection is that there are possible worlds that are not represented by any model.

The question of whether the model‐theoretical notion of consequence contains a modal ingredient is thus controversial. The view that it does is, in my opinion, not very well developed, and has to meet a number of challenges raised by Etchemendy. In any case, there is certainly general agreement that if the Tarskian or model‐theoretic notion of logical consequence contains a trace of modality, then it is not of an epistemic kind. Let us therefore now turn to condition (2) stated in the introduction and ask how it can be explicated.

Necessity of Thought

Two ways to articulate condition (2) have already figured in the discussion of the previous section, one ontic and one more epistemic. The ontic alternative is to speak, with Leibniz, of possible worlds and to say simply:

(2a) A is true in all possible worlds in which all the sentences of Γ are true.

In a more epistemic formulation I spoke of the truth of the premisses guaranteeing the truth of the conclusion. Another way of bringing out an epistemic force of necessity more clearly is to say

(2b) The truth of A follows by necessity of thought from the truth of all the sentences of Γ.

In the same direction there are formulations such as: one is committed to holding A true, having accepted the truth of the sentences of Γ; one is compelled to hold A true, given that one holds all the sentences of Γ true; on pain of irrationality, one must accept the truth of A, having accepted the truth of the sentences of Γ.

To develop the idea of a necessity of thought more clearly we must bring in reasoning or proofs in some way. It must be because of an awareness of a piece of reasoning or a proof that one is compelled to hold A true given that one holds all the sentences of Γ true.

Let us call a verbalized piece of reasoning an argument, and let us speak of an argument for A from Γ, if Γ is the set of hypotheses on which the reasoning depends and A is the conclusion of the reasoning. By a proof of A from Γ we may understand either a valid argument for A from Γ or, more abstractly, what such an argument represents; I shall here reserve it for the latter use.

A direction in which we may try to explicate (2b) may then be put in one of the following two ways:

(2b′) There is a valid argument for A from Γ; there is a proof of A from Γ.

It may be said that condition (2b′) does not in itself involve any idea of necessity of thought, since the mere existence of a valid argument or of a proof does not compel us to anything. Condition (2b′) may nevertheless be said to bring in the idea of a necessity of thought by requiring the existence of something such that when we know of it, we are compelled to hold A true, given that we hold the sentences of Γ true.

On the face of it, (2a) and (2b) (or (2b′)) may seem very different. Both formulations have of course to be developed further in order to be something more than mere phrases or metaphors, and before this is done, one cannot say how the two ideas are related. If the notion of a possible world is thought of as something determined by a set of sentences Γ that are consistent in the sense of there being no valid argument or proof from Γ ending in a contradiction, then the two kinds of modalities may go together in the end. Here I shall leave the question of how (2a) should be further analyzed and shall concentrate on (2b′).

We have thus to account for what it is that makes something a proof or, alternatively, to account for the validity of arguments. It should be noted at this point that not infrequently the validity of an argument is defined in terms of logical consequence. The idea here is to reverse the order of explanation in an attempt to catch a modal element in logical consequence. In other words, the validity of an argument for A from Γ is to be taken as a more basic notion, and is to be analyzed so that it constitutes evidence for A when given Γ, i.e., something that by necessity of thought makes us conclude A, given Γ.

What is it that makes an argument valid and thus compels us, by necessity of thought, to hold the conclusion true, given the truth of the premisses? It is difficult to think of any answer that does not bring in the meaning of the sentences in question. In the end it must be because of the meaning of the expressions involved that we get committed to holding one sentence true, given the truth of some other sentences.

To get further, we should thus turn to the notion of meaning. Since we are here interested in logical consequence, we shall focus on how one is to understand that an argument is valid in virtue of the meaning of the logical constants occurring in the sentences of the argument.

The Meaning of the Logical Constants

One idea about meaning in general going back to Frege is that the meaning of a sentence is given by its truth condition. Its rationale is the notion that the question of whether a sentence is true or not depends both on the meaning of the sentence and on how the world is, the meaning of the sentence being exactly that feature of the sentence which determines how the world has to be in order for the sentence to be true. Although far from being unanimously accepted, the idea that the meaning of a sentence is the same as its truth condition has been influential both in philosophy of language and in logic. Wittgenstein in the Tractatus took it up after Frege, and it is a cornerstone in, for instance, Davidson's philosophy of meaning.

As regards logic, the idea is expounded in Church's [1956] classical textbook and is nowadays often presented in the teaching of elementary logic, where sentential operators are said to get their meaning by their truth tables. More generally, the meaning of a logical constant c is said to be determined by the uniform truth condition of the sentences with c as the main sign, given in the form of an equivalence such as:

¬A is true if and only if A is not true.

A ∧ B is true if and only if both A and B are true.

A ∨ B is true if and only if at least one of A and B is true.

and so on.

For a quantified sentence (∀x ∈ D)A(x) the truth condition may be stated:

(∀x ∈ D)A(x) is true if and only if A(t) is true for all terms t that denote elements in D.

(This formulation requires that names for all elements of D are available in the language. Alternatively, if we do not want to rely on such an assumption, we speak of assignments of values to the variables and define the notion of truth (or satisfaction) under such assignments in a similar way.)

Questions about the meaning of the logical constants thus seem to have a straightforward answer in terms of such truth conditions. However, the substance of this answer depends on what we take truth to be. One must here beware of a possible confusion, which Dummett (see, e.g., Dummett [1978], pp. xx–xxi), especially, has drawn attention to. The truth conditions for compound sentences of different logical form coincide formally with recursive clauses that occur in the definition of truth as given by Tarski [1935/1936] or in the definition of truth relative to a possible model (as is standard in model theory), and one may therefore be led to think that truth is what is defined in this way. But obviously the truth conditions cannot simultaneously do service both in a definition of truth and in an explanation of the meaning of the sentences in question. In other words, the equivalences of the kind exemplified above cannot be taken as clauses in a recursive definition of truth and at the same time as explanations of the meaning of the logical constants exhibited—this would be like solving for two unknowns, given only one equation. If we have defined a set S of sentences by saying that it is the least set of sentences containing certain atomic formulas and satisfying certain equivalences, such as

A ∧ B belongs to S if and only if both A and B belong to S,

then obviously we get no information about the meaning of the logical constants by being told again that these equivalences hold. Similarly, a person who does not know what truth is, but is informed that it is a notion satisfying certain equivalences of the kind given above, does not get to know the meaning of the logical constants by then being told again that these equivalences hold.

We must conclude that truth conditions can serve as meaning explanations only if we already have a grasp of truth. Accordingly, we find that Frege and those who follow him in thinking that meaning is given by truth conditions are careful to note that truth is taken as an already known notion. In contrast, Tarski, who wants to define truth but is anxious that his definition be adequate, is careful to note that he is taking the meaning of the sentences for which truth is defined as already known. Tarski takes instances of the equivalences given above, so‐called T‐sentences, which of course follow from his definition of truth, as showing the material adequacy of his definition of truth. In order that they will serve in this way, showing us the correctness of the truth definition, we must of course know that the T‐sentences themselves are correct, and this we can know only if we already know the meaning of the sentences that are mentioned in the T‐sentences.5

An informative answer to the question of what the logical constants mean thus requires that we further ask what truth is.

Truth

The notion of truth comes up not only when we ask about the meaning of the logical constants and want to use the idea that it is given by truth conditions, but occurs prominently also at the outset of an analysis of logical consequence. It can therefore not be passed over in an analysis of logical consequence. The question of what a true sentence is true in virtue of, is particularly significant. There are at least two different ways of thinking about this that are relevant here because of how they connect with the notion of evidence.

What may be called a realist conception takes sentences to be true in virtue of facts of the world given independently of what it is for us to know them. According to an epistemic conception, truth is instead determined in terms of what it is for us to acquire knowledge, and sentences are true in virtue of the potential existence of evidence for them. If the facts of the world are what are described by true sentences, then on this conception the world is seen as constructed in terms of potential evidence, and in the end, in terms of what it is for us to acquire evidence. One may therefore also call it a constructive conception of truth.

To analyze the modal ingredient of logical consequence in terms of evidence seems a hopeful project only if truth is understood in this constructive way. From this point of view, evidence or what it is to acquire knowledge must be taken as a more fundamental concept than truth—truth may then be defined as the potential existence of evidence. This is the path that I shall follow in the sequel.

With this understanding of truth we may go back to the idea that meaning is determined by truth conditions. A more profound way of accounting for the meaning of a sentence is now opened up, namely, in terms of what counts as evidence for the sentence. Knowing what counts as evidence for the sentence, one also knows the truth condition of the sentence. Hence, there is no conflict between the present suggestion to account for meaning in terms of evidence and the idea that meaning is determined by truth conditions, provided that truth is understood constructively.

A vital theory of meaning should be able to connect linguistic meaning with how language is used and functions. The presence of evidence is the ground on which we assert sentences. Thus, the use of sentences in assertions and the meaning of them now become clearly linked to each other.6 It remains, however, to discuss the notion of evidence.

For observation sentences, evidence typically takes the form of a relevant observation. In general, we may speak of conclusive evidence for the truth of a sentence as verification. A verification may take the form of a combination of observations and arguments. In mathematics the former fall away, and evidence occurs in the form of valid arguments or proofs. We shall here restrict ourselves to mathematics, and therefore to the notions of valid argument and proof.

Proofs

We turned to the meanings of the logical constants, thinking that these meanings are what may make an argument valid or make something a proof. But now the question at issue has been turned around, and it is suggested that meaning be accounted for in terms of proofs. There seem to be two clashing intuitions at work here, which also occur in more general discussions concerning the relation between the meaning and the use of a term; as was noted above, what counts as a proof of a sentence is one feature of the use of the sentence.

Sometimes it seems reasonable to equate meaning and use in line with the slogan “meaning is use,” inspired by the later Wittgenstein. If someone asks why 3 + 1 = 4, a natural answer is that this is what “4” means, or that this is how “4” is defined and used. Similarly, what can we answer someone who questions the drawing of the conclusion A → B, given a proof of B from A, except that this is how A → B is used, that it is a part of what A → B means? But a similar answer to the question why 2 + 2 = 4 or why we infer A → B from ¬B → ¬A seems inadequate. That 2 + 2 = 4 or that we infer A → B from ¬B → ¬A is not reasonably looked upon as a usage that can be equated with the meaning of the expressions involved, but rather is something that is to be justified in terms of what the expressions mean. To answer all doubts about a certain usage of language by saying that this is how the terms are used, or that this is a part of their meaning, would be a ludicrously conservative way of meeting demands for justification. But for some such doubts the reference to common usage is very reasonable and may be the only thing to resort to.

Leaving the question of how use and meaning are related to each other aside for a while, one may ask what kind of proofs one has in mind when it is suggested that it is in terms of them that the notion of logical consequence and the meanings of sentences are to be accounted for. Are we to think of proofs as formal proofs given by formal deductive systems?

A formal system such as the one for Peano Arithmetic is of course to be seen as an attempt to codify valid reasoning within a part of intuitive mathematics. But the notion of logical consequence cannot be analyzed in terms of formal systems. An arithmetical sentence is not a logical consequence of other arithmetic sentences because of the existence of a proof in Peano Arithmetic. Nor can the meaning of an arithmetic sentence A be accounted for in terms of what counts as a proof of A within Peano Arithmetic. In the case of arithmetic we know by Gödel's Incompleteness Theorem that such an analysis would be outright wrong. We even know that no formal system can be used to exhaustively describe the intuitive meaning of the arithmetic sentences or the relation of logical consequence for arithmetic sentences: indeed, because of the meaning of these sentences, for every consistent system there will be true sentences that cannot be proved in the system.

However, we really do not need to refer to Gödel's Incompleteness Theorem to see that ordinary proof theory has nothing to offer an analysis of logical consequence. A deductive system is, as already said, an attempt to codify proofs within a given language, but when setting up such a system, one does not ordinarily try to analyze what makes something a proof. Nor does proof theory ordinarily try to justify a deductive system except for trying to prove its consistency.

An ironic illustration of the need for justification, and of the fact that not every use of a language affords the involved sentences meaning, is given by Prior's example of a sentential connective “tonk” (also referred to by Shapiro in his Chap. 21, this volume). “Tonk” is governed by two inference rules, one allowing the inference of “A tonk B” from a premiss consisting of either A or B, and one allowing the inference from “A tonk B” either of A or of B. We cannot find a meaning for “tonk” that accords with such inference rules; as is easily seen, they allow the deduction of any sentence we like. One cannot, therefore, say that sentences mean whatever the rules that govern them make them mean—the rules that govern them may very well reduce the sentences to sheer nonsense.
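The point can be made vivid with a speculative Curry–Howard rendering (mine, not the text's or Prior's): encode the “tonk” rules as a Haskell type, and the introduction rules are definable while the elimination rules admit no total definition.

```haskell
-- Hypothetical encoding of "tonk": the two introduction rules are harmless.
data Tonk a b = TonkL a | TonkR b   -- "A tonk B" from A, or from B

-- But the elimination rule "from A tonk B infer A" has no total definition:
-- in the TonkR case nothing of type a is available, so any "definition"
-- must cheat. This mirrors the fact that no meaning accords with the rules.
tonkElimL :: Tonk a b -> a
tonkElimL (TonkL x) = x
tonkElimL (TonkR _) = undefined     -- the unfillable gap
```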

The notion of proof in a formal deductive system is thus not relevant for our purpose of analyzing logical consequence. Instead, we need a notion of proof such that a proof of a sentence can reasonably be said to constitute evidence in view of the meaning of the sentence. Within intuitionism one has often tried to explain the meaning of the logical constants by resorting to a notion of proof. This may proceed by recursive clauses, not in principle unlike the ones occurring in Tarski's definition of truth, as follows:

A proof of a conjunction A ∧ B is a pair consisting of one proof of A and one of B.

A proof of A ∨ B is a proof of either A or B together with an indication of which of the two it is a proof of.

A proof of A → B consists of a function which is recognized to yield a proof of B when applied to a proof of A.

A proof of ∀x A(x) consists of a function which is recognized to yield a proof of A(t) when applied to a term t denoting an element in the domain of the quantifier.

A proof of ∃xA(x) consists of a pair whose first member is a term t denoting an element in the domain of the quantifier and whose second member is a proof of A(t).
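Read through the Curry–Howard correspondence, these clauses have a familiar computational shape. The following Haskell sketch is my gloss, not part of the text, and the quantifier clauses are only approximations, since Haskell lacks the dependent types needed to make the proof depend on the witness.

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- BHK-style clauses as types (hypothetical gloss):
data And a b    = Conj a b        -- a proof of A ∧ B is a pair of proofs
data Or a b     = InL a | InR b   -- a proof of A ∨ B is a tagged proof of a disjunct
type Imp a b    = a -> b          -- a proof of A → B maps proofs of A to proofs of B
data Falsum                       -- ⊥ has no proofs (an empty type)
type Neg a      = a -> Falsum     -- ¬A defined as A → ⊥
data Exists t p = ExI t p         -- a witness t paired with a proof of A(t)
type Forall t p = t -> p          -- a function from elements to proofs of A(t)
```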

A problem with this explanation is that the notion of proof used here cannot stand for whatever establishes the truth of sentences in a normal intuitive sense. For instance, a proof of a disjunction A(t) ∨ B(t) may very well proceed even intuitionistically by first proving ∀x(A(x) ∨ B(x)) and then applying universal instantiation to infer A(t) ∨ B(t). Given such a proof, we do not know which of the two disjuncts holds. Hence, it is not correct to say that a proof of a disjunction needs to consist of a proof of one of the disjuncts together with an indication of which disjunct is proved. In intuitionistic meaning explanations of the kind exemplified above, proof must thus be meant in a quite restrictive way.

It was clear at the beginning of this section, when discussing the relation between meaning and use, that the project of accounting for the meaning of a sentence in terms of what counts as a proof of it needs a more restrictive notion of proof than the usual one. Some particular use of a linguistic expression may very well be seen as constitutive of its meaning, and other uses must then be accounted for or justified in terms of this meaning. More specifically, it may be constitutive of the meaning of sentence A that certain ways of arguing for its truth are recognized as a proof of A. But it would be counterintuitive to identify the meaning of a sentence with all the ways in which it can be proved. This is so for the simple reason that we may very well understand a sentence without having any clear idea about all the ways in which it may be proved. Furthermore, the finding of new ways to prove a sentence does not automatically amount to a change of its meaning.

In a satisfactory approach to meaning via proofs, therefore, we cannot use proofs in general to account for meaning, but must instead single out something we may call direct or canonical proofs that are constitutive of meaning.7 This possibility has now to be explored.

Canonical Proofs

The first to suggest that certain ways of proving a sentence could be seen as determining its meaning seems to have been Gentzen [1934]. After having presented his system of natural deduction, in which logical inferences are broken down into two kinds of basic steps, called introductions and eliminations, one of each kind for every logical constant, Gentzen remarks:

The introductions constitute, as it were, the “definitions” of the symbols concerned, and the eliminations are, in the final analysis, only consequences of this.

Since it is clear that the introductions are not literal definitions, a better way to express the idea is to say that the rules for introduction inferences determine the meanings of the logical constant concerned, while the rules for elimination inferences are justified by these meanings.

An introduction for a logical constant is a form of inference whose conclusion has the constant as its main sign. For each logical constant of the language of first‐order predicate logic, Gentzen gives a rule for how to form its introductions, which may be represented by the following figures:

$$\wedge I)\ \frac{A \qquad B}{A \wedge B} \qquad\quad \vee I)\ \frac{A}{A \vee B}\quad \frac{B}{A \vee B} \qquad\quad \exists I)\ \frac{A(t)}{\exists x\,A(x)}$$

$$\to I)\ \frac{\begin{matrix}[A]\\ \vdots\\ B\end{matrix}}{A \to B} \qquad\quad \forall I)\ \frac{A(y)}{\forall x\,A(x)}$$

The sentence A shown within the square brackets in the introduction rule for implication (→I) indicates that in such an inference, it is allowed to discharge occurrences of A standing as hypotheses.

Negation, which does not occur in the list of introductions, is here supposed to be defined with the help of → and the constant ⊥ for absurdity (i.e., ¬A is short for A → ⊥), which makes negation introduction (better known as the constructive form of reductio ad absurdum) a special case of implication introduction. The introduction rule for ⊥ is empty (i.e., it is the rule that says there is no introduction whose conclusion is ⊥).

The question is now in what way these introductions determine the meanings of the logical constants. A way to answer this question is in terms of the notion of a direct or a canonical proof, mentioned at the end of the preceding section. The general idea is that the meaning of a sentence A is given by what counts as direct evidence for it. Using the term “canonical proof” for what in mathematics constitutes direct evidence, we have to state for various sentences what forms canonical proofs of them are supposed to have. This will now be indicated for first‐order sentences understood constructively. In the next section, I shall carry out the same thing on a more concrete linguistic level by specifying forms of verbal arguments that are to count as canonical.8

The general form of a canonical proof of a compound sentence C with the logical constant c as main sign can be written Oc(Π), where Oc is an operation that stands for the recognition that we have obtained direct evidence for C because of Π (e.g., in the first case below, O∧ is the recognition of the fact that because of having evidence both for A and for B, one is in possession of evidence for A ∧ B). We have to specify what Π is in the different cases.

If C is a conjunction A ∧ B, then Π is to consist of canonical proofs of A and of B conjoined.

If C is a disjunction A ∨ B, then Π is to consist of either a canonical proof of A and an indication of the fact that it is the first disjunct that one has obtained evidence for, or a canonical proof of B and an indication of the fact that it is the second disjunct that one has obtained evidence for.

If C is an existentially generalized sentence ∃x A(x), then Π is to consist of an individual term t (in the domain of quantification) together with a canonical proof of A(t).

When C is an implication or a universally generalized sentence, we need to rely on the notions of hypothetical proofs and general proofs in order to specify the forms of canonical proofs. To do this, we need first of all to say what a proof is—we may say categorical proof to distinguish it from other kinds of proof.

The explication of these notions was of course a main goal for the whole present approach to the modal ingredient of logical consequence. That they are needed already in the specification of the canonical proofs may seem circular, but means only that we must proceed by recursion. The specification of the canonical proofs of implications and universally generalized sentences requires the notions of categorical, hypothetical, and general proof of sentences of less complexity, which in turn require that we know what the canonical proofs of these less complex sentences are.

To elucidate the notion of categorical proof, we may note that one may rightly assert a sentence A without having a canonical proof of A. But the idea that one can single out some proofs as canonical involves the idea that a proof can in principle be given in canonical form. The assertion of a sentence is thus constructively to be understood as claiming the existence of a canonical proof of it. This existence is not to be understood in the sense that a canonical proof has already been constructed and is at hand, but only that there is such a proof and that it therefore could in principle be constructed. Consequently, we may rightly assert a sentence A as long as we know that there is a categorical proof of A, which, understood constructively, means that we have an effective method for finding one. Such a method is thus to count as a proof. By a categorical proof of A, we shall just mean such an effective method for finding a canonical proof of A.

The notion of a hypothetical proof, that is, a proof of a sentence A from sentences A₁, A₂, …, Aₙ, is in the same vein proposed to be analyzed as an n‐ary effective function which, applied to categorical proofs of A₁, A₂, …, Aₙ, yields a categorical proof of A. Finally, we propose that a general proof of an open sentence A(x₁, x₂, …, xₙ), relative to a set of individual terms T, is an effective function which, applied to any elements t₁, t₂, …, tₙ of T, yields a proof of A(t₁, t₂, …, tₙ).
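In the same speculative Curry–Howard style as before (names mine, not the text's), these three notions can be glossed as types:

```haskell
-- A categorical proof of A: an effective method for finding a canonical
-- proof of A, modelled here as a suspended computation yielding one.
newtype Categorical a = Method (() -> a)

-- A hypothetical proof of A from premisses A1 and A2 (shown for n = 2; the
-- n-ary case is analogous): an effective function on categorical proofs.
type Hypothetical a1 a2 a = Categorical a1 -> Categorical a2 -> Categorical a

-- A general proof of A(x) relative to a set T of individual terms: a function
-- yielding a proof of A(t) for each element t of T.
type General t a = t -> Categorical a
```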

We can now specify that a canonical proof of a sentence A → B is to be of the form O→(Π) where Π is a hypothetical proof of B from A, while a canonical proof of a sentence ∀xA(x) is to be of the form O∀(Π) where Π is a general proof of A(x) (relative to the domain T of quantification).9 To this has to be added that nothing is a canonical proof of ⊥.
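Gathering the clauses of this section into one table (a summary of the text's specification; the superscript on O∨ marking the disjunct is my notation), the canonical proofs take the forms:

$$\begin{array}{ll}
A \wedge B: & O_\wedge(\Pi_1, \Pi_2), \text{ with } \Pi_1, \Pi_2 \text{ canonical proofs of } A \text{ and } B\\
A \vee B: & O_\vee^{i}(\Pi), \text{ with } \Pi \text{ a canonical proof of the } i\text{th disjunct } (i = 1, 2)\\
\exists x\,A(x): & O_\exists(t, \Pi), \text{ with } \Pi \text{ a canonical proof of } A(t)\\
A \to B: & O_\to(\Pi), \text{ with } \Pi \text{ a hypothetical proof of } B \text{ from } A\\
\forall x\,A(x): & O_\forall(\Pi), \text{ with } \Pi \text{ a general proof of } A(x) \text{ relative to } T\\
\bot: & \text{no canonical proof}
\end{array}$$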

Canonical Arguments

It may be instructive to see how a corresponding specification proceeds if we stay on the more concrete level of verbal arguments; Gentzen was obviously thinking of deductions composed of sentences or formulas. One then needs to lay down some conventions for how the sentences of the arguments are to be arranged. It is convenient to assume that they are arranged in tree form. A sentence in a tree that stands immediately below some other sentences is claimed to follow from these sentences, and such a part of the tree thus represents an inference. The occurrences of sentences at the top of the tree are either axioms, assumptions of the argument, or hypotheses that are made temporarily (for the sake of the argument, as one says). In the latter case the hypothesis is discharged or, as I shall say, bound by some inference in the course of the argument. Above we have seen one kind of inference, →I, that may bind assumptions. An occurrence of a sentence is said to depend on the assumptions and hypotheses that stand above it in the tree and are not bound by some inference occurring above it.

An inference may also bind a variable in the argument, in which case it must occur free neither in any assumption or hypothesis that a premiss of the inference depends on, nor in the conclusion of the inference. Above we have seen one inference, namely, ∀I, that binds the variable indicated in the premiss.

An argument whose last sentence (the sentence at the bottom of the tree) is A and whose assumptions (sentences at the top of the tree that are neither axioms nor hypotheses bound in the course of the argument) are A₁, A₂, …, Aₙ is said to be an argument for A depending on A₁, A₂, …, Aₙ. Something is an argument for A from Γ if it is an argument for A depending on no other sentences than those belonging to Γ. An argument is said to be closed provided it has no assumptions and all the variables that occur free in some open sentence of the tree are bound in the course of the argument. An argument that is not closed is said to be open.

An open argument is to be understood as a schema that becomes an argument when we make appropriate substitutions for the variables and assumptions that are not bound. This is to say that an open argument

Δ(x₁, x₂, …, xₘ; A₁, A₂, …, Aₙ)

containing exactly the variables x₁, x₂, …, xₘ and assumptions A₁, A₂, …, Aₙ that are not bound in Δ is valid if and only if certain results of carrying out substitutions in Δ are valid, namely, all results

Δ(t₁, t₂, …, tₘ; Δ₁, Δ₂, …, Δₙ)

obtained from Δ by first substituting any terms t₁, t₂, …, tₘ denoting individuals in the domain of quantification for x₁, x₂, …, xₘ, and then substituting any valid closed arguments Δ₁, Δ₂, …, Δₙ for the assumptions A₁*, A₂*, …, Aₙ*, where Aᵢ* is the result of carrying out the first substitution of terms for variables in Aᵢ.

The canonical arguments for the various first‐order sentences may now be specified by simply saying that something is a canonical argument for A if and only if it is an argument for A whose last inference is an introduction and whose immediate part(s) is (are) valid argument(s) for the premiss(es) of that introduction.

It is to be noted that in case the last inference is an →I or an ∀I, the immediate part of the argument is an open argument for which we have already defined what it is to be valid. This definition depends, however, on the notion of valid argument for sentences of lower complexity than A, actually subsentences of A. There is thus a recursion involved in these definitions of the same kind as the one noted above in connection with the notion of canonical proof.

It remains to say when a closed argument in general, one that is not necessarily canonical, is valid. In the same way that we have to require of a categorical proof that it is either canonical or is a method for finding a canonical proof, a closed argument should be defined as valid if it either is canonical or provides a method for finding a canonical proof. However, it is not immediately obvious how an argument may provide such a method. At this point we should consider some examples.

An open argument of the form

$$\frac{A_1 \wedge A_2}{A_i}$$

which coincides with what Gentzen calls an ∧-elimination, is valid according to the definition given above, if the result of replacing the assumption A₁ ∧ A₂ (which for simplicity we assume to be a closed sentence) with a valid closed argument Δ for A₁ ∧ A₂ is valid. The question is now what it is that makes the result

$$\frac{\begin{matrix}\Delta\\ A_1 \wedge A_2\end{matrix}}{A_i}$$

valid. According to what was just said above, the validity of the closed argument Δ should provide a method for finding a canonical argument Δ* for A₁ ∧ A₂. Recalling how the notion of canonical argument was defined, we see that by replacing Δ with this canonical argument Δ*, we get

$$\frac{\dfrac{\begin{matrix}\Delta_1\\ A_1\end{matrix}\qquad \begin{matrix}\Delta_2\\ A_2\end{matrix}}{A_1 \wedge A_2}}{A_i} \qquad (1)$$

where the two immediate sub‐arguments of Δ*

$$\begin{matrix}\Delta_i\\ A_i\end{matrix} \qquad (2)$$

are valid. Extracting the left immediate subargument of Δ* if i is 1 and the right immediate subargument of Δ* if i is 2, we thus get a valid argument for Aᵢ. The extraction of (2) from (1) is known as ∧-reduction and is an operation used in the normalization of deductions.10 It is in view of this operation that we see that an argument consisting of an ∧-elimination is valid, and this is also how Gentzen meant that ∧-elimination is justified (or is a “consequence” of the ∧-introduction, as he expressed it in the quote given at the beginning of the previous section).

In an argument each inference that is not an introduction should be supplied with such a justifying operation in view of which it can be seen that a valid argument for the conclusion can be obtained, given valid arguments for the premisses. We say that an argument Δ reduces to the argument Δ* relative to such operations if Δ* is obtained from Δ by replacing subarguments with what is obtained by applying the operations to them in the way that has just been exemplified. It is in this way, when the arguments have been supplied with justifying operations, that they may provide methods for finding canonical arguments, as must be required for validity in accordance with what was said above. A closed argument supplied with justifying operations is thus to be defined as valid if it reduces, relative to suitable operations, to a canonical argument. I shall not here give the formal details of how this is to be defined, but shall show how it works by considering two more examples.11

We illustrate the idea by two slightly more complicated reductions. The validity of implication elimination, that is, an argument of the form

$$\frac{A \qquad A \to B}{B}$$

is first of all due to the fact that given a closed valid argument Δ₁ for A and a closed valid argument Δ₂ for A → B, we get a closed argument for B

$$\frac{\begin{matrix}\Delta_1\\ A\end{matrix}\qquad \begin{matrix}\Delta_2\\ A \to B\end{matrix}}{B}$$

which because of the validity of Δ₂ reduces to

$$\frac{\begin{matrix}\Delta_1\\ A\end{matrix}\qquad \dfrac{\begin{matrix}[A]\\ \Delta\\ B\end{matrix}}{A \to B}}{B}$$

where Δ is a valid open argument for B from A. We now see that there is an operation, the so‐called →-reduction, which, applied to this argument, yields

$$\begin{matrix}\Delta_1\\ [A]\\ \Delta\\ B\end{matrix}$$

(i.e., the result of substituting the valid closed argument Δ₁ for the open assumption A in Δ). According to the definition of validity for the open argument Δ, this result is valid, and we thus get a closed valid argument for B, as is required of a justifying operation for →-elimination.
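In proof-term notation (a hypothetical gloss: under Curry–Howard, →I is lambda abstraction, →E is application, and the reduction just shown is β-reduction), the operation can be coded directly:

```haskell
-- A tiny proof-term language; variable capture is ignored for brevity.
data Pf = Hyp String          -- an open assumption
        | Lam String Pf       -- →I: discharge a hypothesis
        | App Pf Pf           -- →E: apply a proof of A → B to a proof of A

-- One step of →-reduction: when the major premiss of an application is
-- canonical (ends with →I, i.e. is a Lam), substitute the argument for the
-- discharged hypothesis, exactly as in the figure above.
reduce :: Pf -> Maybe Pf
reduce (App (Lam x body) arg) = Just (subst x arg body)
reduce _                      = Nothing

subst :: String -> Pf -> Pf -> Pf
subst x arg (Hyp y)
  | x == y    = arg
  | otherwise = Hyp y
subst x arg (Lam y body)
  | x == y    = Lam y body                -- x is shadowed; stop here
  | otherwise = Lam y (subst x arg body)
subst x arg (App f a) = App (subst x arg f) (subst x arg a)
```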

To see the validity of ∃‐elimination

$$\frac{\exists x\,A(x) \qquad \begin{matrix}[A(y)]\\ \vdots\\ B\end{matrix}}{B}$$

where the variable y is bound by the inference (which means that it must occur neither in B nor in any hypothesis that the conclusion depends on), we have to show that it preserves validity, which means that we must show that a closed argument Δ of the form

$$\frac{\begin{matrix}\Delta_1\\ \exists x\,A(x)\end{matrix} \qquad \begin{matrix}[A(y)]\\ \Delta_2(y)\\ B\end{matrix}}{B}$$

is valid, assuming that its immediate subarguments Δ₁ and Δ₂(y) are valid. By the validity of the closed argument Δ₁ we know that it reduces (relative to the operations assigned to the inferences of Δ₁) to canonical form, and hence that Δ reduces to the form

$$\frac{\dfrac{\begin{matrix}\Delta_1^*\\ A(t)\end{matrix}}{\exists x\,A(x)} \qquad \begin{matrix}[A(y)]\\ \Delta_2(y)\\ B\end{matrix}}{B}$$

To see that this argument is valid, we make use of the operation ∃-reduction, which transforms the argument into

$$\begin{matrix}\Delta_1^*\\ [A(t)]\\ \Delta_2(t)\\ B\end{matrix}$$

We want to see that when ∃-reduction is applied to an argument whose immediate subarguments are valid, it yields a valid argument. This now follows from the validity of the open argument Δ₂(y), which just means that the result of first substituting t for y in Δ₂(y), and then substituting the valid argument Δ₁* for the assumption A(t), is valid.

Summing Up

To sum up our analysis of the modal ingredient in logical consequence, we recall that a main goal was to bring out the sense in which it can be said that a sentence follows by necessity of thought from other sentences. It was suggested that to this end we need to bring in something about which it can be said that it commits us or compels us to hold one sentence true, given that we hold some other sentences true. It was proposed that proofs or arguments may be given this role. The idea is thus first that the necessity of logical consequence is expressed by the condition that for A to be a logical consequence of Γ, there must exist a proof or valid argument for A from Γ, and second, that the awareness of such a proof or valid argument compels or commits us to hold A true, given that we hold the sentences of Γ true.

Another main starting point was the idea that the meaning of a sentence is given by what counts as direct evidence for it. In mathematics, evidence takes the form of proof or valid argument, and the meaning of a mathematical sentence, according to this idea, is thus to be given by what counts as a direct or, as we have called it, canonical proof or argument.

In the same vein, the assertion of a sentence is understood constructively as the claim that there is direct evidence for it, and is to be taken as true if such evidence exists. In mathematics the truth of a sentence thus becomes equated with the existence of a canonical proof or argument for it. (In general, the existence of evidence for a sentence also depends, of course, on empirical matters.)

Presupposing an understanding of the general notion of evidence, which is taken as a more basic notion than truth, the forms of the canonical proofs or arguments for sentences within first‐order languages understood constructively have been specified. This has been done by recursive clauses making use of Gentzen's rules for introductions. (They have to be supplemented with specifications of the forms of canonical proofs and arguments for atomic sentences.) At the same time (actually by a simultaneous recursion) the notions of categorical, hypothetical, and general proof (or, alternatively, of closed and open valid argument) have been defined.

As a result of how all these concepts have been developed, we can say of certain things that they constitute proofs of a sentence or valid arguments for a sentence in virtue of the meaning of the sentence. A certain Π is a canonical proof of a sentence A or a certain Δ is a canonical argument for A just because of the way in which the meaning of A is given. Furthermore, some inferences—namely, the inferences by introduction—become valid by the very meaning of the conclusion of the inference. Because of this meaning, we are compelled to hold the conclusion true when holding the premisses true. For instance, an inference by ∃-introduction of ∃x A(x) from A(t) is compelling, because by applying O∃ to a categorical proof of A(t) or applying ∃-introduction to a closed argument for A(t), we get by the very meaning of ∃x A(x) a canonical proof of ∃x A(x). Therefore, if we hold that A(t) is true, and consequently that there is a canonical proof or argument for A(t), we are also committed to holding ∃x A(x) true, since from the canonical proof or argument for A(t) we can form a canonical one for ∃x A(x), which guarantees the truth of ∃x A(x).

An inference in general is compelling when we know a hypothetical proof of its conclusion A from its set of premisses Γ or, alternatively, an open valid argument for A from Γ. This is because knowing such a proof is to be in possession of an effective method which, applied to categorical proofs of the sentences of Γ, yields a categorical proof of A. Similarly, knowing a valid open argument Δ for A from Γ, we get a valid argument for A by replacing the open assumptions in Δ with closed valid arguments for them. For the same general reasons as before, we are in both cases committed to holding the conclusion true if we hold the premisses true.

We have thus shown how knowledge of a (hypothetical) proof of A from Γ or, alternatively, of a valid argument for A from Γ—as we have developed these notions—commits us to holding A true if we hold the sentences of Γ true. It seems reasonable, therefore, to say that the notions of proof and valid argument, as they have been developed here, make condition (2b′) account for the sense of necessity of thought involved in logical consequence.

One might say that any satisfactory analysis of the notions of proof and arguments should have the result that by knowing a proof or valid argument, one gets committed in the way discussed above. The question is how to achieve this result.

The notions of proof and valid argument are often defined as composed of valid inferences. This puts the burden on the notion of valid inference. How is it to be analyzed? If we do not simply say that an inference is valid when its conclusion is a logical consequence of the premisses, which would bring us back to the beginning of this investigation, we have to try to develop some concept of gapfree inference. The “gapfree” inferences must then be shown to have a compelling force. It is far from clear how this could be done.

To show an inference to be valid by breaking it down into simpler inferences that are known to be valid is certainly a good strategy in itself. Although a valid argument has not been defined here as a composition of valid inferences, it is still true that an argument built up of valid inferences is valid. But to define validity in that way would of course have made the whole analysis circular, as already noted when condition (2′) was first introduced. Instead, our basic notion is that of (p.694) canonical proof or argument (linked to the meaning of the sentences). As shown above, this amounts to making inferences by introduction valid—valid by definition, so to speak. It should be noted, however, that not every valid inference can be broken down into a sequence of introductions. In other words, there are inferences other than introductions that are gapfree in the sense that they cannot reasonably be broken down into simpler inferences. An essential ingredient in the analysis proposed here is a way of demonstrating the validity of such inferences. We have illustrated how this can be done by examples in which ∧‐, →‐, and ∃‐eliminations were shown to be valid by applying reductions that transform the given arguments into arguments already known to be valid.
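
To take the case of →-elimination: the relevant reduction (the reductions are those of Prawitz [1965]; see note 10) removes the detour of an →-introduction immediately followed by an →-elimination, by substituting the argument Σ for the minor premiss A at the places where A occurs as an open assumption in Π:

\[
\dfrac{\dfrac{\begin{array}{c}[A] \\ \Pi \\ B\end{array}}{A \to B} \qquad \begin{array}{c}\Sigma \\ A\end{array}}{B}
\quad\leadsto\quad
\begin{array}{c}\Sigma \\ A \\ \Pi \\ B\end{array}
\]

An argument ending in such a detour is thereby transformed into one whose validity is already settled, which is what the demonstrations of validity for the eliminations turn on.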

Finally, it should be noted that we have been talking only about the meaning of the logical constants, and about how proofs and arguments can have a compelling force in virtue of this meaning. But the same compelling force can arise in virtue of the meaning of nonlogical terms. Condition (2b) or (2b′) should therefore not be taken as a defining condition of logical consequence—it explicates a broader notion of consequence. To narrow it down to logical consequence, we have to combine it with a way of formulating condition (1) so that it fits the framework in which (2b′) has been explicated. This can easily be done in the manner of condition (1′) by requiring that for all substitutions S of the kind explained in that context, there is a proof of A^S from Γ^S. When this requirement is understood constructively, it amounts to demanding the existence of a uniform procedure that for each S takes proofs of the sentences of Γ^S into a proof of A^S. When speaking about arguments, this uniformity comes out directly by requiring that there be an argument Δ such that for each substitution S, Δ^S is an argument for A^S from Γ^S. To meet the objection that the force of this condition depends on the richness of the language, we have either to extend the language or to introduce a way of speaking of something's being an argument under an assignment of values to the nonlogical terms, and then to vary over all such assignments, as done in condition (2″).
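
As a simple illustration of this uniformity (my example, not one given in the chapter): the one-step argument Δ from A ∧ B to A by ∧-elimination meets the requirement, since under every substitution S of the nonlogical terms the substituted figure Δ^S is again an ∧-elimination, and hence again a valid argument for A^S from (A ∧ B)^S:

\[
\Delta:\;\; \frac{A \land B}{A}
\qquad\qquad
\Delta^{S}:\;\; \frac{A^{S} \land B^{S}}{A^{S}}
\]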

References


Church, Alonzo, [1956], Introduction to Mathematical Logic, Princeton, NJ: Princeton University Press.

Dummett, Michael, [1975], “The Philosophical Basis of Intuitionistic Logic,” in Logic Colloquium '73, ed. H. E. Rose et al., Amsterdam: North-Holland. Reprinted in Dummett [1978].

(p.695) ———, [1976], “What Is a Theory of Meaning? (II),” in Truth and Meaning, ed. G. Evans and J. McDowell, Oxford: Clarendon Press.

———, [1978], Truth and Other Enigmas, London: Duckworth.

———, [1991], The Logical Basis of Metaphysics, London: Duckworth.

Etchemendy, John, [1990], The Concept of Logical Consequence, Cambridge, MA: Harvard University Press.

Gentzen, Gerhard, [1934], “Untersuchungen über das logische Schliessen,” Mathematische Zeitschrift 39, 176–210, 405–431. Translated in The Collected Papers of Gerhard Gentzen, Amsterdam: North-Holland, 1969.

Martin-Löf, Per, [1985], “On the Meanings of the Logical Constants and the Justifications of the Logical Laws,” Atti degli Incontri di Logica Matematica, vol. 2, pp. 203–281. Siena: Scuola di Specializzazione in Logica Matematica, Dipartimento di Matematica, Università di Siena. Reprinted in Nordic Journal of Philosophical Logic 1, 11–60.

Prawitz, Dag, [1965], Natural Deduction: A Proof-Theoretical Study. Stockholm: Almqvist & Wiksell.

———, [1971], “Ideas and Results in Proof Theory,” in Proceedings of the Second Scandinavian Logic Symposium, ed. J. E. Fenstad, pp. 235–307. Amsterdam: North-Holland.

———, [1973], “Towards a Foundation of a General Proof Theory,” in Logic, Methodology and Philosophy of Science, ed. P. Suppes et al., pp. 225–250, Amsterdam: North-Holland.

———, [1974], “On the Idea of a General Proof Theory,” Synthese 27, 63–67.

———, [1985], “Remarks on Some Approaches to the Concept of Logical Consequence,” Synthese 62, 153–171.

———, [2005], “Meaning Approached via Proofs,” in Proof-Theoretic Semantics, ed. R. Kahle and P. Schroeder-Heister, special issue of Synthese.

Sundholm, Göran, [1998], “Inferences Versus Consequence,” in The LOGICA Yearbook. Prague: Czech Academy of Sciences.

Tarski, Alfred, [1935/1936], “Der Wahrheitsbegriff in den formalisierten Sprachen,” Studia Philosophica 1, 261–405. Translated into English as “The Concept of Truth in Formalized Languages,” in Tarski [1956].

———, [1936], “Über den Begriff der logischen Folgerung,” in Actes du Congrès International de Philosophie Scientifique 7, 9–11. Translated into English as “On the Concept of Logical Consequence,” in Tarski [1956].

———, [1956], Logic, Semantics, Metamathematics. Oxford: Clarendon Press.

Notes:

(1)  This article was written after I had seen a draft of Stewart Shapiro's contribution to this volume, “Logical Consequence, Proof Theory, and Model Theory” (chapter 21), the aim of which is similar to mine. As the reader can see, our viewpoints have some common elements but our conclusions are very different.

(2)  Here I take the field of the consequence relation to consist of sentences, and also take the premisses and conclusions of inferences to be sentences. Consequences and inferences are thus treated as being on a par in this respect. However, it may be said that an inference act is a transition from judgments to judgments, while consequence is a relation between propositions. This distinction is made by Martin‐Löf [1985] and Sundholm [1998], who discuss the validity of inferences and consequence along partly the same lines as I do, but it plays no role in my discussion here.

(3)  This is a point made essentially by Etchemendy [1990] (e.g., on p. 93). As for the question of how we establish the validity of an inference (in the sense of (1″)), it is to be noted that this has to be done on the metalevel by the use of some inference. As pointed out in Prawitz [1985], this is typically done by simply using the very same type of inference on the metalevel, provided the inference cannot naturally be broken down into simpler inferences. When it can be so broken down, we may of course resort to these simpler inferences on the metalevel. But then the given inference could equally well have been supported on the object level by replacing it with a series of such simpler inferences; see Dummett [1991] (pp. 200–204) on this point.

(4)  This criticism of the Tarskian definition was leveled in Prawitz [1985] and is a main topic of Etchemendy [1990]. That (1″) deals with actual relations between the truth‐values of the involved sentences, rather than with any necessary links between them, is quite obvious but seems nevertheless often to be overlooked. Tarski [1936] comes close to making this point, however, when he observes that a sentence that contains no descriptive constants is logically true just in case it is true.

(5)  Elementary expositions of logic are often less clear on this point, and give, rather, the impression that the truth conditions both are clauses in the definition of truth (in a model) and serve to give the meaning of the logical constants.

(6)  Such a link between use and meaning may not be possible to establish when meaning is accounted for in terms of realistically understood truth conditions. Doubts in this direction concerning a theory of meaning based on a realistic conception of truth have been forcefully raised by Dummett in several writings (see, e.g., Dummett [1976]).

(7)  The need to distinguish between proofs in general and direct or canonical proofs has been argued for by Dummett and Prawitz in several papers (see, e.g., Dummett [1975] and Prawitz [1974]).

(8)  When speaking about canonical proofs and related notions on an abstract level, I shall be following Martin‐Löf [1985] in the main. (For the source of some differences, see note 2. It is also to be noted that in later publications Martin‐Löf's approach is different in several respects, in particular when he distinguishes between proof objects and demonstrations. Only the latter are supposed to have epistemic significance.) When speaking about canonical arguments and related notions, I shall essentially be following Prawitz [1973]. (See also Prawitz [2005].)

(9)  Since in the case of implications and universal sentences we cannot require that the subproofs of a canonical proof also be canonical, we may as well leave out this requirement in the case of conjunction, disjunction, and existence sentences, and only require that the subproofs be categorical.

(10)  The reductions (for the different logical constants) were introduced in Prawitz [1965] (see also Prawitz [1971]) in his proof of the normalization theorem for first‐order predicate logic and correspond to Gentzen's cut‐eliminations in the proof of his Hauptsatz.

(11)  For a more detailed exposition see Prawitz [1973] or Dummett [1991]. A more recent exposition is Prawitz [2005].