Metamind

Keith Lehrer

Print publication date: 1990

Print ISBN-13: 9780198248507

Published to Oxford Scholarship Online: October 2011

DOI: 10.1093/acprof:oso/9780198248507.001.0001


Induction, Evidence, and Conceptual Change

Chapter:
(p.127) 5 Induction, Evidence, and Conceptual Change
Source:
Metamind
Author(s):

Keith Lehrer

Publisher:
Oxford University Press
DOI:10.1093/acprof:oso/9780198248507.003.0006

Abstract and Keywords

Induction proceeds from the certain to the uncertain, or so it is commonplace to say. On the contrary, induction is inference from uncertain evidence to uncertain conclusions. This conception of induction is articulated in this chapter. Philosophers have argued that the acceptance of statements in science and other epistemically virtuous enterprises should not be explicated as inductive inference from evidence to hypothesis. It would be misleading, they hold, to construe the acceptance of theories as based on inductive inference from observational evidence to theoretical conclusions. These and other contentions suggest the most fundamental reason philosophers have for rejecting the model of scientific acceptance based on inductive inference: that rejection and acceptance are influenced by conceptual change, by radical shifts in the way people conceive of the world, rather than being based simply upon inference from evidence to hypothesis.

Keywords:   induction, certainty, inductive inference, evidence, acceptance, conceptual change, rejection

INDUCTION proceeds from the certain to the uncertain, or so it is commonplace to say. On the contrary, induction is inference from uncertain evidence to uncertain conclusions. This conception of induction will be articulated below. Before turning to this matter, let me explain why I bother.

Philosophers have argued that the acceptance of statements in science and other epistemically virtuous enterprises should not be explicated as inductive inference from evidence to hypothesis. The reasons are multiple. Some philosophers maintain that the distinction between observation-terms and theoretical terms is untenable because observation is laden with theory and presupposes it.1 Hence, they conclude, it would be misleading to construe the acceptance of theories as based on inductive inference from observational evidence to theoretical conclusions. Others deny that any evidence-statements are beyond rejection. They aver that any statement in science may be cast aside to obtain greater explanatory simplicity or coherence.2 Still others contend that scientific acceptance depends upon social factors within science, upon who wins the social revolutions of science.3 All of these contentions suggest the most fundamental reason philosophers have for rejecting the model of scientific acceptance based on inductive inference, to wit, that rejection (p.128) and acceptance are influenced by conceptual change, by radical shifts in the way we conceive of the world, rather than being based simply upon inference from evidence to hypothesis.

The theory of inductive inference without certainty, which I shall champion, can accommodate all of the preceding considerations within a theory of inductive inference. Indeed, such considerations motivate my programme. The fact of conceptual change leads to the conclusion that there are no certain and irrefutable evidence-statements constituting the foundations of inductive inference. If a statement is certain, then there is no chance that it is wrong. But there is some chance that any contingent statement is wrong, as may be illustrated by reflecting on the conceptual revolutions of the past. There is some chance that we shall arrive at the conclusion that any concept lacks a denotation. The concepts of demons and entelechies are now on the junk heap of discarded concepts, and perhaps tomorrow the concepts of mind and existence may join them. There is a chance, however slender you might think it is. Consequently, if we restrict the base of evidence to what is certain, we shall be so restricted that there will be nothing to infer. Instead, we shall abandon certainty to obtain richer epistemic fruits from the tree of inductive inference.

I

Let us now turn to the positive task of constructing a theory of inductive inference without certainty. We must first solve the problem of choosing evidence-statements when nothing is certain. Some might appeal to observation to fill the emptied coffers of evidence, but we have noted that to do so may land us in doubtful dealings. Moreover, since observation-statements are not certain, we are unjustified in restricting evidence to what is observed.

Instead of appealing to material conditions of adequacy, we shall begin with some quite abstract conditions to be satisfied by a selection rule for evidence-statements. They are as follows.

E1. If e is an evidence-statement and d is an evidence-statement, then so is the conjunction of e and d.

(p.129) E2. There is some evidence-statement T which entails every evidence-statement. T is a statement of total evidence.

E3. A statement T of total evidence is logically consistent.

The first condition tells us that the conjunction of evidence is evidence. The second says that some statement entails every evidence-statement; a conjunction of all evidence-statements would be such a statement, though it would be infinite in length. The third tells us that our evidence-statements must not contradict each other. All of these conditions are to be understood as relative to a person and a time. It is the evidence of a person at a time that must satisfy these conditions. Within a theory of uncertain evidence, what is evidence at one time might not be evidence at another, and, moreover, what is evidence at one time might be inconsistent with what is evidence at another.

A rule for the selection of uncertain evidence-statements satisfying these conditions can be constructed on the basis of the subjective probabilities of statements. We have said that there is some chance that any contingent statement is false. What the chance of truth is may be expressed as a probability. The probability represents a degree of belief a person has in the truth of the statement, and it may thus be considered a subjective probability in contrast to objective probabilities such as frequencies or propensities. The subjective probabilities are quantitative degrees of belief conforming to the calculus of probability. The development of a subjective theory of probability has been in progress for some time, and, in my opinion, the prognosis for success is good in spite of unsolved problems.4 Without attempting to defend such a theory, I shall assume the viability of subjective probability. The theory of subjective probability construed as degrees of belief assumes nothing about what factors influence such probabilities. Social, perceptual, explanatory, and conceptual factors may all influence the degree to which we believe a statement. Of course, such probabilities are relative to a person and a time. The subjective probabilities of one person may differ from those of another, and the subjective (p.130) probabilities of the same person may change over time. Moreover, we need not assume that any contingent statement has a probability of 0 or 1.⁵ In line with the conviction that any contingent statement has some chance of being wrong, we shall assume that no contingent statement has a probability of either 0 or 1. The probability of a contingent statement falls between these extremes.

Suppose that for some language L we can assign subjective probabilities to the statements of the language for a person at a given time. We shall suppress future references to the person, for the sake of economy, but references to times will be made explicit. Thus, for any statement h of L, we assign a probability, pj(h), where the variable j ranges over times, t0, t1, and so forth. Thus, p3(h) is the probability assigned to h at t3. A time is a temporal interval of unspecified duration. Assuming that such a probability assignment is made, how should we select evidence-statements on the basis of probabilities? The statements selected as evidence should be ones that compete favourably for that status and ones it would be most reasonable to expect to be true on the basis of the probabilities. Moreover, if a statement selected as evidence turns out to be true, this should be at least partially explained by the probabilities conceived of as subjective estimates of frequencies.

These informal conditions can be met by a fairly simple rule of evidence. The simplest procedure would be to specify some high probability, though one less than 1, and to lay it down that any statement having that probability or a greater one may be selected as evidence. But this procedure would violate the consistency requirement (E3), as is shown by the lottery paradox.6 Thus, we must complicate matters somewhat by requiring that the statements selected as evidence are more probable than any of the statements with which they might compete. With what other statements might a given statement compete? It is natural to assume that a statement would (p.131) compete with those statements with which it is inconsistent. I shall assume that a statement competes with every other member of any inconsistent set to which it belongs provided that every member of the set is required to deduce the inconsistency.
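To see why the simple high-probability procedure fails, here is a sketch of the lottery paradox in Python. The representation and the numbers (a 1,000-ticket lottery and a threshold of 0.99) are illustrative assumptions of mine, not part of the text:

```python
# Toy lottery: exactly one of n tickets wins.  For each ticket i, the statement
# "ticket i loses" has probability (n - 1) / n, which exceeds any threshold short
# of 1 once n is large enough, so a threshold rule admits every such statement.
n = 1000
threshold = 0.99

p_loses = {i: (n - 1) / n for i in range(n)}                 # p("ticket i loses")
selected = [i for i in range(n) if p_loses[i] >= threshold]

print(len(selected))   # 1000: every "ticket i loses" is admitted as evidence
# But the conjunction of all the selected statements says that no ticket wins,
# which contradicts the background statement that some ticket wins; the selected
# evidence set is therefore inconsistent, violating (E3).
```

Under the competition requirement, by contrast, each 'ticket i loses' competes with every 'ticket j loses', and since they are equally probable none of them is selected.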

More formally, we shall first define the notion of a minimally inconsistent set, and in terms of that we shall define the concept of competition.

DMIS. A set S is a minimally inconsistent set if and only if S ├┌p&∼p┐ and no proper subset C of S is such that C ├┌p&∼p┐.

DComp. A statement h competes with k if and only if h ≠ k and there is a minimally inconsistent set S of which h and k are both members.
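These definitions can be tested mechanically in the simple partition languages discussed below. In the sketch that follows, which is my own rendering rather than anything in the text, a contingent statement is represented by the set of partition members on which it is true, entailment is set inclusion, a set of statements is inconsistent when its members share no partition member, and DComp is checked by brute force over minimally inconsistent sets:

```python
from itertools import combinations

# A statement is modelled as the frozenset of partition members it allows (its
# disjunction); logically equivalent statements are identified.  A set of
# statements is inconsistent iff the intersection of its members is empty.
PARTITION = ["P1", "P2", "P3"]

def contingent_statements():
    for r in range(1, len(PARTITION)):
        for combo in combinations(PARTITION, r):
            yield frozenset(combo)

def inconsistent(stmts):
    return frozenset.intersection(*stmts) == frozenset()

def minimally_inconsistent(stmts):
    """DMIS: inconsistent, and no proper subset is inconsistent."""
    return inconsistent(stmts) and all(not inconsistent(stmts - {s}) for s in stmts)

def competes(h, k):
    """DComp: h competes with k iff h differs from k and some minimally
    inconsistent set contains both."""
    if h == k:
        return False
    universe = list(contingent_statements())
    candidates = (set(c) for r in range(2, len(universe) + 1)
                  for c in combinations(universe, r))
    return any(h in s and k in s and minimally_inconsistent(s) for s in candidates)

print(competes(frozenset({"P1"}), frozenset({"P2", "P3"})))   # True: jointly inconsistent
print(competes(frozenset({"P1"}), frozenset({"P1", "P2"})))   # False: the first entails the second
```

In this representation two contingent statements turn out to compete exactly when neither entails the other, and that is the form of the test used in the later sketches.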

Having defined competition in this way, our rule of evidence may now be formulated. Letting ‘Ej(e)’ mean ‘e is evidence at time j’, the rule is as follows:

RE. Ej(e) if and only if e is logically consistent and, for every statement s such that e competes with s, pj(e) exceeds pj(s).

This rule tells us that a statement is evidence if and only if it is consistent and is more probable than any statement with which it competes. In finite languages the rule (RE) is equivalent to the following rule:

Ej(e) if and only if e is logically consistent and such that, for any other statement s, either e ├ s or pj(e) exceeds pj(s).

It can be demonstrated that (RE) satisfies conditions (E1), (E2), and (E3) above.7

We cannot demonstrate formally that (RE) is a principle of reasonable expectation and explanation, but we can offer some intuitive arguments by considering the results of applying (RE). First let us suppose that the language from which the statements referred to in the rule are drawn is specified in terms of some partition of statements. A partition is a set of statements that are logically incompatible in pairs and logically exhaustive as a set. (p.132) A most simple language would be one consisting of a finite partition and truth-functional combinations thereof. The statements of such a language which are neither contradictory nor tautological are all logically equivalent to a member of the partition or to some disjunction of such members. Thus, we obtain an assignment of subjective probabilities to the statements of such a language by assigning probabilities to the members, P1, P2, and so forth to Pn, of the partition and to disjunctions of such members, while assigning a probability of 0 to all and only contradictions, and a probability of 1 to all and only tautologies.

Now let us consider the results of applying (RE) in such a language, leaving consideration of richer languages for subsequent discussion. A member of the partition, Pk, is such that Ej(Pk) if and only if pj(Pk) exceeds 0.5. A disjunction, Dn, of n different members of the partition is such that Ej(Dn) if and only if pj(Dn) exceeds pj(Dm), where Dm is any disjunction of members of the partition such that it is not the case that Dn ├ Dm. These results guarantee that the rule has the following feature: if a statement is true, and that statement is included in the evidence, then the statement is more probable than any false statement of the language.8 This result is essential in order to ensure that evidence-statements, if true, are such that their truth is explained by the probabilities. If an evidence-statement could be true when some equally probable statement is false, then the probabilities would not explain the truth of the evidence-statement in question.

The preceding abstract characterization of the results obtained from (RE) can be made more concrete by thinking of the members P1, P2, and so forth of the partition as statements describing the outcome of a lottery of nature, where P1 affirms that the first ‘ticket’ that might be chosen by nature is chosen, P2 affirms that the second ‘ticket’ is chosen, and so forth. Here the choice of a ticket corresponds to one out of the n possible states of the world describable in the language. What (RE) tells us is that if the probabilities of the members of the partition are all equal, then we cannot select anything beyond tautologies as evidence, because any other choice would be arbitrary. Any (p.133) disjunction of different members will be no more probable than the other disjunctions of the same number of different members, and the conjunction of such disjunctions would be contradictory. If one member of the partition has a probability greater than its denial, that is, greater than 0.5, then it may be selected as the evident description of nature. If one member of the partition has a lower probability than any other member, then the denial of that member, or, what is the same thing, the disjunction of all the other members, may be selected as evidence. Any member, or disjunction of members, competes for selection with all other statements except contradictions, tautologies, and its own logical consequences: it must be more probable than the statements with which it competes in order to be selected.9 Consequently, a selected member or disjunction of such members is, if true, more probable than any statement except its logical consequences; and since its logical consequences must be true if it is, it follows that the statement selected as evidence is, if true, more probable than any false statement. For this reason, the truth of the statement is explained by the probabilities.
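The lottery-of-nature cases just described can be reproduced with a few lines of code. The sketch below is my own toy model: statements are again sets of partition members, competition is taken in the neither-entails-the-other form noted earlier, and (RE) selects the contingent statements that are more probable than all of their competitors. The probability assignments are illustrative only:

```python
from itertools import combinations

def contingent_statements(members):
    return [frozenset(c) for r in range(1, len(members))
            for c in combinations(members, r)]

def prob(h, p):                       # probability of a disjunction of members
    return sum(p[m] for m in h)

def competes(h, k):                   # neither statement entails the other
    return h != k and not h <= k and not k <= h

def evidence(p):
    """(RE): the contingent statements more probable than every competitor."""
    stmts = contingent_statements(list(p))
    return [h for h in stmts
            if all(prob(h, p) > prob(k, p) for k in stmts if competes(h, k))]

# Equal probabilities: nothing contingent can be selected without arbitrariness.
print(evidence({"P1": 1/3, "P2": 1/3, "P3": 1/3}))       # []

# One member above 0.5: it is selected, together with the disjunction P1-or-P2,
# which also beats all of its competitors.
print(evidence({"P1": 0.6, "P2": 0.3, "P3": 0.1}))
# [frozenset({'P1'}), frozenset({'P1', 'P2'})]
```

Notice that the selected statements are pairwise logically related (here P1 entails P1-or-P2), which anticipates the consequence discussed in the next paragraph.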

One further consequence of the rule may be grounds for protest. It is a consequence of (RE) that any two statements which are selected as evidence must be logically related in that one logically implies the other. The total evidence T will logically imply statements which are logically independent of each other, but not all these statements will be selected by (RE) as evidence. The rule guarantees that the conjunction of all evidence-statements is an evidence-statement, but it does not allow that all logical consequences of evidence-statements are evidence-statements. This consequence is unnatural, because it is natural to suppose that the logical consequences of evidence-statements are also evidence-statements, and, therefore, that some logically independent statements are evidence-statements.

There are two replies to this objection. The first is that we cannot have evidence-statements with a probability of less than 1 which are logically independent and also such that a conjunction of evidence-statements is no less probable than at least one of the statements conjoined. For, the (p.134) general multiplication axiom tells us that p(h&k) = p(h) × p(k,h), and that p(k,h) equals 1 in the sort of language we are considering only if h logically implies k. Hence, if h does not logically imply k, then the probability of the conjunction will be less than the probability of h. By an exactly similar argument, if k does not logically imply h, then the probability of the conjunction will be less than the probability of k. If neither logical implication holds, then the conjunction will be less probable than either conjunct, and, if it is less probable, then the fact that the conjuncts are evidence-statements because of their probability should not guarantee that the conjunction is an evidence-statement because of its probability. In short, the idea that logically independent statements must be allowed as evidence-statements may be repudiated as an intuitive hangover from the common-sense conception of evidence-statements as certain. The intuition turns out to be unacceptable when evidence-statements are uncertain and have some genuine probability of being erroneous.
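A quick numerical check of the multiplication-axiom point, with illustrative figures of my own:

```python
# If h does not entail k, then p(k, h) < 1, so by the multiplication axiom the
# conjunction is strictly less probable than h; symmetrically for k.
p_h = 0.8
p_k_given_h = 0.9                 # below 1 because h does not entail k (illustrative)
p_h_and_k = p_h * p_k_given_h     # p(h & k) = p(h) * p(k, h)
print(p_h_and_k < p_h)            # True: 0.72 < 0.8
```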

The second reply involves appeal to the consideration that the set of evidence-statements selected by (RE) can be extended by reapplication. One method for such reapplication is a procedure of obtaining new probabilities from old on the basis of evidence, and it is known as simple conditionalization.10 Suppose T is the total evidence such that E0(T) by (RE) from the probability function p0. We can then obtain a new probability function p1 as follows:

p1(h) = p0(h, T).

Having obtained the new probability function p1 by conditionalization, we may then consider what the total evidence T1 is such that E1(T1) by (RE), move on to p2 by conditionalization, and so forth. The set SE containing all those statements selected by (RE) as evidence relative to the sequence of probability functions, that is, the set SE such that s is a member of SE if and only if E0(s) or E1(s) or E2(s) and so forth, is logically consistent and deductively closed, that is, all deductive consequences in the language of members of SE are also members of the set. If we consider SE an evidential extension generated from p0, then we (p.135) can say that there are logically independent statements belonging to the evidential extension. Thus, if we extend the set of evidence-statements in this manner, we may select logically independent evidence-statements.

The method of evidential extension has, however, an unwanted consequence. It is an assumption of our investigation that no contingent statement has a probability of 0 or 1, and the procedure of simple conditionalization violates this assumption. This is obvious when one reflects that p0(T,T) = 1, and hence, p1(T) = 1 by simple conditionalization. So this procedure for shifting probabilities is prohibited to us. Moreover, it is clear that the procedure is not acceptable because there is no reason to suppose that a person will shift her degree of belief in a statement to 1 simply as a consequence of selecting the statement as evidence. Some shift in the probability of a statement may result, but it is more plausible to suppose that the subjective probabilities will remain unchanged. Therefore we cannot rely on the procedure of simple conditionalization to extend the scope of evidence.
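A sketch of the difficulty, continuing the partition representation and toy numbers used above (both my own): conditionalizing on the total evidence T sends p1(T) to 1 and sends the members of the partition excluded by T to 0, contrary to the standing assumption that no contingent statement receives either extreme value.

```python
def conditionalize(p0, T):
    """Simple conditionalization on a disjunction T of partition members:
    p1(h) = p0(h, T) = p0(h & T) / p0(T)."""
    p_T = sum(p0[m] for m in T)
    return {m: (p0[m] / p_T if m in T else 0.0) for m in p0}

p0 = {"P1": 0.6, "P2": 0.3, "P3": 0.1}
T = frozenset({"P1"})                         # total evidence selected by (RE) on p0
p1 = conditionalize(p0, T)

print(sum(p1[m] for m in T))                  # 1.0: the contingent statement T is now certain
print(p1["P2"], p1["P3"])                     # 0.0 0.0: excluded contingent members drop to 0
```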

Fortunately, exactly the same extension of evidence can be achieved even though the probability function remains unchanged by the selection of evidence-statements. To accomplish this end, it is necessary to introduce a rule for selecting statements as indirect evidence. Letting ‘IEj(e,s)’ mean ‘e is indirect evidence on the basis of s at time j’, we formulate a rule of indirect evidence as follows:

RIE. IEj(e,s) if and only if e is logically consistent with s and, for any other statement k, either ┌e&s┐ ├ k or pj(e,s) exceeds pj(k,s).

It is a consequence of (RIE) that there is a statement Ts, which is the total indirect evidence on the basis of s. Now consider the set of all those statements selected as evidence by (RE), all those statements selected as indirect evidence by (RIE) on the basis of a statement of total evidence T, all those statements selected as indirect evidence by (RIE) on the basis of T1 which is the total indirect evidence on the basis of T, and so forth. We shall find that this set has the same members as the evidential extension generated from p0 by simple conditionalization. However, we have obtained this extension of evidence without going beyond (p.136) the probability function p0! The same probability function may be employed in the application of (RE) and (RIE) to generate the set SE.11 Hence the extended set of evidence-statements which is logically consistent, deductively closed, and contains logically independent members is obtained without assigning a probability of 0 or 1 to any contingent statement.
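The following sketch, again in the toy partition representation with my own numbers, applies (RIE) with the total evidence T as basis and shows that the extension picks up logically independent statements, all without altering p0:

```python
from itertools import combinations

def contingent_statements(members):
    return [frozenset(c) for r in range(1, len(members))
            for c in combinations(members, r)]

def cond_prob(e, s, p):               # p(e, s): probability of e given s
    return sum(p[m] for m in (e & s)) / sum(p[m] for m in s)

def indirect_evidence(s, p, language):
    """(RIE): e is indirect evidence on the basis of s iff e is consistent with s
    and, for any other statement k, either e & s entails k or p(e, s) > p(k, s)."""
    return [e for e in language if (e & s) and
            all((e & s) <= k or cond_prob(e, s, p) > cond_prob(k, s, p)
                for k in language if k != e)]

p0 = {"P1": 0.6, "P2": 0.3, "P3": 0.1}
language = contingent_statements(list(p0))
T = frozenset({"P1"})                 # total evidence selected by (RE) on p0

extension = set(indirect_evidence(T, p0, language))
print(sorted(extension, key=len))
# P1, P1-or-P2, and P1-or-P3: the last two are logically independent of each
# other, yet both enter the extension while p0 is left untouched.
```

Iterating, with the total indirect evidence obtained at each stage serving as the next basis, yields the full evidential extension described in the text.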

We have rejected the procedure of simple conditionalization and noted that evidence may be extended without assuming any shift in probabilities. However, it is also important to note that even if the selecting of statements as evidence should produce a change in probabilities, there is no need to assume that such a shift in subjective probabilities conforms to simple conditionalization. As Richard Jeffrey has shown, the shift from p0 to p1 resulting from a change in the probability of evidence-statements conforms to the pattern

p1(h) = p0(h, T) × p1(T) + p0(h, ∼T) × p1(∼T)

provided T is a statement of total evidence selected by (RE) on p0 such that

p1(h, T) = p0(h, T)

and

p1(h, ∼T) = p0(h, ∼T).12

There is no guarantee that the probabilities will shift in this way, if they shift at all, but this formula gives us a procedure for calculating new probabilities from earlier ones as a result of selecting statements as evidence without assigning a new probability of 1 to the evidence-statements.
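Here is a small sketch of Jeffrey's pattern on the partition representation used above (the numbers, as before, are mine): the evidence-statement's probability is raised, every other probability is reweighted accordingly, and nothing is sent to 1.

```python
def jeffrey_shift(p0, T, new_p_T):
    """p1(h) = p0(h, T) * p1(T) + p0(h, ~T) * p1(~T), computed member by member,
    with the conditional probabilities held fixed across the shift."""
    p_T = sum(p0[m] for m in T)
    p1 = {}
    for m in p0:
        p_m_given_T = p0[m] / p_T if m in T else 0.0
        p_m_given_not_T = p0[m] / (1 - p_T) if m not in T else 0.0
        p1[m] = p_m_given_T * new_p_T + p_m_given_not_T * (1 - new_p_T)
    return p1

p0 = {"P1": 0.6, "P2": 0.3, "P3": 0.1}
T = frozenset({"P1"})                               # total evidence selected by (RE) on p0
p1 = jeffrey_shift(p0, T, new_p_T=0.9)              # T becomes more probable, not certain

print({m: round(v, 3) for m, v in p1.items()})      # {'P1': 0.9, 'P2': 0.075, 'P3': 0.025}
print(round(sum(p1.values()), 3))                   # 1.0: still a probability assignment, no extremes
```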

The most important feature of (RE) is the capacity of the rule to accommodate those considerations that lead us to abandon the requirement that evidence-statements be certain. Other rules for the selection of evidence yield the consequence that the probability of evidence-statements goes to 1 by simple conditionalization or else they yield the same effects through other (p.137) methods. All such rules, whether pragmatic or empiricistic, have the fatal defect of failing to explain how any statement accepted as evidence at one time can be rejected as evidence at a subsequent time. For, if a statement has a probability of 1, then it will retain that probability as long as shifts in probability conform to simple conditionalization. Moreover, even those pragmatists like Isaac Levi and Abner Shimony who would concede that we may reasonably change what constitutes evidence fail to provide a rule to explicate how such a change takes place.13 By contrast, (RE) gives us a very simple and direct answer to the question of how statements accepted as evidence may subsequently be rejected. Such rejection will occur precisely when the subjective probability of an evidence-statement decreases so that it is less probable than other statements with which it competes. Given a probability function p0, we might accept a statement as evidence at time t0 because it is more probable than the statements with which it competes, but at a subsequent time t1 we might reject the statement as evidence because it is less probable, given probability function p1, than some statements with which it competes.

Defects of other theories of evidence and induction are avoided by a theory of induction based on (RE). First, there is no need to restrict evidence-statements to observation-statements, or even to assume that any precise distinction between observation-terms and theoretical terms can be drawn. What statements are chosen as evidence depends not on whether they are observation-statements or theoretical statements, but on the degree to which they are believed; on their subjective probability. Our account is consistent with the assignment of very high probabilities to highly theoretical statements, general statements or whatever. Such statements might have a higher subjective probability than reports of observation or other very concrete statements. Rule (RE) is completely neutral with respect to the question of the form and content of evidence-statements. Second, factors of explanatory simplicity and coherence may influence the degree to which statements are believed, (p.138) and consequently, even determine which statements are accepted as evidence. The same is true of other factors. Moreover, statements selected as evidence by (RE) can fulfil the same functions in scientific and practical enquiry as evidence-statements assigned a probability of 1 by other theories. Evidence-statements selected by (RE) confirm or test hypotheses and theories as well as providing the data to be explained by them. However, on the theory of evidence we advocate, such appeals and provisions are subject to revision as a result of subsequent investigation.

II

Having formulated rules for the selection of evidence on the basis of subjective probabilities, we can now explicate the manner in which conceptual change may alter what counts as evidence. There are two basically different kinds of conceptual change to consider. The first kind of conceptual change involves a radical shift in the probability of some hypothesis which has a probability of neither 1 nor 0 either prior to or subsequent to the shift. This kind of change results when one conceives of a problem in an altogether new way; for example, when one comes to conceive of constant motion as natural without any first motion. If the new way of looking at things is focused on a single member of the partition, Pj, then we may be able to calculate the new probabilities from the old by substituting Pj for T in Jeffrey's formula given above. If more than one member of the partition is the focus, say, Pj, Pk,…, then we employ the following expanded formula:

p1(h) = p0(h, Pj) × p1(Pj) + p0(h, Pk) × p1(Pk) + …

where, for any of the conditional probabilities in the formula

p1(h, Pi) = p0(h, Pi).

The second kind of conceptual change is more profound because it involves a change in the semantic status of some statement. This would be a change from being contingent to noncontingent, or vice versa, or even from being logically true to logically false or vice versa. An example of a statement (p.139) shifting from being logically true to being contingent is the statement that atoms are indivisible. Originally ‘atoms’ were, by definition, indivisible and the statement was logically true, but now the meaning has altered and we agree that it is a contingent truth that the atom is divisible. A shift in the other direction is embodied in our conception of water. At one time, when the chemical constituents of water were being discovered, it surely was a contingent truth that water is a combination of hydrogen and oxygen, but now the meaning of the term ‘water’ is such that it is true by definition, and hence logically true, that water is a combination of hydrogen and oxygen. The former kind of shift is more surprising and more important than the latter. It is with amazement that one reads that time moves backward or that two particles are in the same place at the same time, but such pronouncements reflect the alteration in the semantic status of statements characteristic of conceptual change.

The impact of the second variety of conceptual change on the selection of evidence by (RE) is somewhat more difficult to explicate than the more mundane variety. Again consider our language and the partitions thereof. Suppose a statement shifts from being contingent to being logically true. Let us suppose that the statement is sT. Here it may be acceptable to employ simple conditionalization to obtain the new probability function by the formula

p1(h) = p0(h, sT)

provided that only the probability of sT is directly affected by conceptual change and all other shifts in probability are propagated from that one.

Similarly, if the change is from the contingency of a statement to logical falsity, then, if the statement is sF, we may, on the same assumption, employ simple conditionalization to obtain the new probability function according to the formula

p1(h) = p0(h, ∼sF).

Of course, the shift in probabilities resulting from conceptual change need not conform to simple conditionalization. The altered semantic status of one statement may affect the degrees of belief we have in other statements in a variety of ways. Nevertheless, the shifts that would result from simple conditionalization illustrate how a statement might be accepted into (p.140) evidence as a result of conceptual change. For, as a result of the kind of conceptual change in question, some statement may no longer compete with another statement, because the statement has become logically true or logically false. As a result, a statement may become more probable than any with which it competes, and hence be selected as evidence by (RE) on the new probability function when it had not been so selected prior to conceptual change. On the other hand, a statement which was more probable than any with which it competed prior to the conceptual change may be less probable on sT or ∼sF than some other statement with which it competes, and consequently, the statement will no longer be selected as evidence by (RE) subsequent to conceptual change.
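A small illustration of the first effect, using the partition model with probability numbers of my own choosing and with sT identified, purely for illustration, with the disjunction of P1 and P2: before the change no single member of the partition is selected by (RE), but once sT becomes logically true, P3 drops out of competition and P1 is selected.

```python
from itertools import combinations

def evidence(p):
    """(RE) over a finite partition, as in the earlier sketches; members with
    probability 0 are treated as logically false and dropped from the language."""
    members = [m for m in p if p[m] > 0]
    stmts = [frozenset(c) for r in range(1, len(members))
             for c in combinations(members, r)]
    prob = lambda h: sum(p[m] for m in h)
    competes = lambda h, k: h != k and not h <= k and not k <= h
    return [h for h in stmts
            if all(prob(h) > prob(k) for k in stmts if competes(h, k))]

p0 = {"P1": 0.40, "P2": 0.35, "P3": 0.25}
print(evidence(p0))       # only the disjunction P1-or-P2 is selected; P1 alone is not

# Conceptual change: sT = "P1 or P2" becomes logically true, so P3 becomes logically
# false.  Propagating the change by simple conditionalization on sT:
p_sT = p0["P1"] + p0["P2"]
p1 = {"P1": p0["P1"] / p_sT, "P2": p0["P2"] / p_sT, "P3": 0.0}
print(evidence(p1))       # now P1 itself is selected: it exceeds its sole competitor P2
```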

Conceptual change which alters the semantic status of a statement from logical truth or logical falsity to contingency will also affect what is selected as evidence by (RE), but a different procedure is required to explicate the consequences of such change. First, we must consider the manner in which members of the partition are constructed. Assume that we have a language with atomic sentences A1, A2, and so forth to Am. We may then construct a partition by forming a set of conjunctions where every conjunction contains each atomic sentence or its denial (not both) as conjuncts in numerical order. The conjunctions, of which there will be 2^m, will be what Carnap called state descriptions if they are logically consistent.14

However, as Kemeny showed, if we suppose that there are logical relations between atomic sentences, for example, if ∼(A1&A2) is a logical truth, then some of the maximal conjunctions will be logically inconsistent.15 Ordinarily, one thinks of a partition as being minimal in the sense that no member of the partition is logically inconsistent; but a partition may contain a contradictory member. Any contradictory statement will be incompatible with any other statement, and any logically exhaustive set of statements to which it is added will remain logically exhaustive. We shall call partitions ‘inflated’ partitions when they contain contradictory members. If we now consider a language defined by an inflated partition, we can (p.141) explicate what will result from a conceptual shift of the kind under consideration. Our probability function p0 will assign a probability of 0 to all members of the partition that inflate it because they are contradictory. Now imagine that as a result of conceptual change some statement changes its conceptual status from logical truth or logical falsity to logical contingency. This will mean that at least one member of the partition which was inflationary will no longer be so. For example, suppose that the statement ∼(A1&A2), which was a logical truth, specified to be such by the meaning-postulates of the language, has altered its semantic status and is now contingent. In that case, all those members of the partition which inflated it because they contained both A1 and A2 may no longer be contradictory and, consequently, they will no longer inflate the partition. Thus, the effect of changing a statement within a language from being logically true or logically false to being contingent is simply that of changing some members of the inflated partition from being contradictory inflationary members of the partition to being contingent noninflationary members.

What shifts such conceptual changes might bring about in the probability function cannot be restricted to any one procedure. The changes may be expected to alter our degrees of belief in statements in a variety of ways. However, there is one very simple kind of shift that is worth consideration. Suppose that Ic is the inflationary member, or, if there is more than one, the disjunction of such members that have become contingent noninflationary members of the partition as a result of conceptual change. Assuming that the effects of this change are to be felt equally by other members of the partition, the new probability, p1, of any originally noninflationary member, Pj, of the partition can be calculated by the formula

p1(Pj) = p0(Pj) − p1(Ic)/n

where n equals the number of noninflationary members of the partition at time 0. Probability assignments for other statements can be obtained from the probability of members of the partition. Such shifts may result in some statement accepted by (RE) as evidence on p0 being rejected as evidence by (RE) on p1 because the statement originally accepted as evidence must now compete with some statement which has become contingent due to the conceptual shift. Similarly, some statements may be (p.142) accepted as evidence which were formerly rejected, for example, a statement that was formerly logically false and is now a highly probable contingent statement. Indeed, conceptual shifts of this kind should be expected to produce the most radical changes in evidence. Conceptual shifts involving a change in a statement from being logically true to being logically false or vice versa may be explicated in a similar manner. They too can produce changes in what we accept as evidence or reject from it by applying (RE).
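The construction can be made concrete with a short sketch. Everything in it is my own illustration: the atomic sentences, the meaning-postulate ∼(A1&A2), the probability numbers, and the choice of 0.1 as the new degree of belief in the formerly inflationary member; the redistribution step follows the 'felt equally' reading of the formula reconstructed above.

```python
from itertools import product

ATOMS = ["A1", "A2"]

def maximal_conjunctions(atoms):
    """The 2**m conjunctions containing each atomic sentence or its denial."""
    return [tuple(zip(atoms, values)) for values in product([True, False], repeat=len(atoms))]

def inflationary(cell):
    truth = dict(cell)
    return truth["A1"] and truth["A2"]     # ruled out while ~(A1 & A2) is a logical truth

cells = maximal_conjunctions(ATOMS)
inflated = [c for c in cells if inflationary(c)]
deflated = [c for c in cells if not inflationary(c)]

# p0 gives the inflationary member probability 0 and positive values to the rest.
p0 = {c: 0.0 for c in inflated}
p0.update(dict(zip(deflated, [0.5, 0.3, 0.2])))

# Conceptual change: ~(A1 & A2) becomes contingent, so the inflationary member Ic is
# now an ordinary contingent cell.  Give it a new degree of belief (0.1, illustrative)
# and subtract an equal share from each of the n originally noninflationary members:
# p1(Pj) = p0(Pj) - p1(Ic)/n.
p1_Ic = 0.1
n = len(deflated)
p1 = {c: p0[c] - p1_Ic / n for c in deflated}
p1.update({c: p1_Ic for c in inflated})

print(round(sum(p1.values()), 10))   # 1.0: still a probability assignment over the partition
```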

The preceding results all concern the application of rules of evidence to languages defined in terms of finite partitions. I have argued elsewhere that this restriction is not as confining as it might appear, and subsequent developments in this chapter will be confined to such languages.16 However, a few words concerning the application of the rules to infinite languages are in order. Consider a language with a finite number of predicates and a denumerably infinite number of individual constants. Let us suppose that the language contains expressions of infinite length, among these being maximal conjunctions of atomic sentences formed as they are in finite languages. We also suppose that probabilities are assigned to statements of the language, even ones of infinite length. With these suppositions, it follows that a maximal conjunction is selected as evidence if and only if it is more probable than its denial, and a disjunction of maximal conjunctions is selected as evidence if and only if it is more probable than any other disjunction of maximal conjunctions which is not a logical consequence of it. Statements of finite length may be selected as evidence when they are logically equivalent to disjunctions of maximal conjunctions. This program for the application of our rules of evidence involves both the application of logic and the assignment of probabilities to expressions of infinite length. It is admitted that there are problems in such a project. However, in my opinion, current research can provide the solution to these problems.17

Before concluding our discussion of evidence, some limitations (p.143) of the analysis are worth noting. First, it is not claimed that the rule of evidence explains why or how conceptual change takes place. However, conceptual change may be accommodated within a theory of inductive inference in which evidence is selected by (RE) and offers no reason for abandoning the inductive model. It is conceded that a theory of inductive inference must be supplemented by a theory of conceptual change to provide a complete account of certain shifts in probabilities resulting in the selection and rejection of evidence-statements. Second, many philosophers consider the concept of total evidence to be unrealistic.18 However, the rule (RE) only implies that there is some statement of total evidence; it does not assume that anyone can formulate such a statement. One need not do so to apply (RE). Moreover, when a statement of total evidence was appealed to for the purpose of characterizing the shift to new probabilities from earlier ones, there was no assumption that anyone could formulate such a statement. Nevertheless the rule does imply that the statement of total evidence is a statement of evidence for the person in question, and that may be unrealistic if no one would be capable of formulating such a statement. To avoid this, we may either think of the rule as applying to an idealized subject, or we may relativize the application of the rule to a specific cognitive problem.19

III

The foregoing completes our account of the selection of evidence-statements and the impact of conceptual change on such acceptance. We shall now consider briefly the question of how such evidence may be employed in inductive inference. Inductive inference has many ends. First, once one has selected (p.144) statements as evidence, there remains the question of which of the many hypotheses consistent with the evidence is confirmed by the evidence. Second, there is the question of which of the hypotheses is to be the focus of further enquiry, of testing and experimentation. Third, there is the problem of finding some hypothesis that enables one to explain the evidence.

In other work, I have shown that these objectives of enquiry can be met by a single rule of inductive inference.20 There are a number of ways of formulating the rule. One way is in terms of the notions of minimal inconsistency and competition which were introduced earlier. Letting ‘Ij(h,e)’ mean ‘h may be inductively inferred from e at time j’, the rule of induction is as follows:

RI. Ij(h,e) if and only if for any set of statements S, if h is a member of S and S is minimally inconsistent, then there is some member k of S with which h competes such that pj(h,e) exceeds pj(k,e).21

This rule of induction is not deductively closed, that is, the set of statements that may be inductively inferred by (RI) does not contain all of its deductive consequences. This is not implausible, as Harman and Kyburg have insisted, because induction is one thing and deduction another.22 Nevertheless, we do seem to be rationally committed to the deductive consequences of any set of hypotheses we accept on the basis of inductive inference. The conclusions we accept are not immune from criticism directed at the deductive consequences of the set of hypotheses inductively inferred. Hence there is a sense in which it is reasonable to accept the logical consequences of what one accepts as evidence and as a result of inductive inference. It is reasonable to accept them in the sense that one who knew what those consequences were would be committed to accepting them.

We may obtain a rule of rational acceptance from (RI). Letting ‘Aj(h)’ mean ‘it is reasonable to accept h at time j’, the rule is as follows:

(p.145) RA. Aj(h) if and only if either (i) Ej(h) or (ii) Ij(h,T), where T is a statement of total evidence at time j, or (iii) there is some set S such that S ├ h and such that k is a member of S if and only if either Ej(k) or Ij(k,T).

We shall now consider the implications of (RA).

A number of philosophers, Carl Hempel, Jaakko Hintikka, Risto Hilpinen, and Isaac Levi most notably, have defended the idea that inductive inference should be conceived of as a cognitive decision based on certain epistemic values or utilities.23 A rule of acceptance is then formulated which tells us it is reasonable to accept that hypothesis which has the highest expected utility, as well as the deductive consequences of that hypothesis in conjunction with the evidence. I have shown elsewhere that if one greatly values accepting hypotheses of high explanatory content and defines a utility function accordingly, one will adopt a rule equivalent to (RA) telling us to maximize the following function:

p(h, T)/p(h).24

This function is a measure of the confirmatory relevance of the total evidence to hypotheses. The higher the value of the function the higher the positive relevance of the evidence to the hypothesis. Hence, by accepting a hypothesis h for which the function is maximal, we accept a statement toward which the evidence has the greatest positive confirmatory relevance.

Moreover, the function is equal to the following:

p(T, h)/p(T).

This is a measure of the positive relevance of the hypothesis to the evidence. This function is also a measure of explanatory relevance, for a hypothesis which has the highest positive relevance to the evidence also explains the evidence. Hence the (p.146) same function is a measure of the confirmatory relevance of evidence and the explanatory relevance of a hypothesis. Moreover, (RA) is equivalent to the prescription to accept the statement h having a maximal value for this function together with the deductive consequences of that statement and the evidence. It is possible that more than one statement may have a maximal value, in which case we adopt a rule for ties, proposed by Levi, telling us to accept a disjunction of all those members of the partition that are maximal.25 Whether or not we need to employ the rule for ties, the results coincide with those obtained from (RA). Of course, a statement having a maximum of explanatory relevance to the evidence and deriving a maximum of confirmatory relevance from the evidence is a reasonable hypothesis to subject to test and experimental enquiry.
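If the maximized function is read as the ratio measure (which is my reading of the reconstructed formulas above), the claimed equality is just a rearrangement of the definition of conditional probability, as a quick numerical check shows:

```python
# p(h & T) = p(h, T) * p(T) = p(T, h) * p(h), so the two ratios coincide:
# p(h, T) / p(h) = p(h & T) / (p(h) * p(T)) = p(T, h) / p(T).
p_h, p_T, p_h_and_T = 0.3, 0.4, 0.2          # illustrative values only

confirmatory = (p_h_and_T / p_T) / p_h       # p(h, T) / p(h)
explanatory = (p_h_and_T / p_h) / p_T        # p(T, h) / p(T)
print(round(confirmatory, 6), round(explanatory, 6))   # 1.666667 1.666667
```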

The results that we obtain by employing (RA) are very easy to characterize. If there is some member of the partition that is more probable on the evidence than any other member of the partition, then the set of hypotheses it is reasonable to accept consists of the logical consequences of that member and the evidence. This follows from the fact that we may inductively infer that each of the other members is not true by (RI), and the conjunction of those logically implies that the remaining member of the partition is true. If there is some set of members of the partition that are equally probable on the evidence and more probable than any member not belonging to the set, then the set of statements we may reasonably accept consists of the logical consequences of the disjunction of those equally probable members and the evidence. For, by (RI) we may inductively infer the falsity of each of the members of the partition not belonging to the set of members having the highest probability on the evidence.26
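These results can be checked directly by implementing (RI) over the toy partition model used in the earlier sketches (the probabilities and the total evidence below are my own choices): every contingent consequence of the most probable member on the evidence is inferable, and nothing else is.

```python
from itertools import combinations

def contingent_statements(members):
    return [frozenset(c) for r in range(1, len(members))
            for c in combinations(members, r)]

def inconsistent(stmts):
    return frozenset.intersection(*stmts) == frozenset()

def minimally_inconsistent(stmts):
    return inconsistent(stmts) and all(not inconsistent(stmts - {s}) for s in stmts)

def competes(h, k):
    return h != k and not h <= k and not k <= h

def cond_prob(h, e, p):
    return sum(p[m] for m in (h & e)) / sum(p[m] for m in e)

def inferable(h, e, p, language):
    """(RI): h may be inferred from e iff every minimally inconsistent set containing
    h has some competitor k of h with p(h, e) > p(k, e)."""
    sets_with_h = (set(c) for r in range(2, len(language) + 1)
                   for c in combinations(language, r) if h in c)
    return all(any(competes(h, k) and cond_prob(h, e, p) > cond_prob(k, e, p) for k in S)
               for S in sets_with_h if minimally_inconsistent(S))

p = {"P1": 0.45, "P2": 0.35, "P3": 0.20}
language = contingent_statements(list(p))
T = frozenset({"P1", "P2"})          # total evidence: P1 is the most probable member on it

inferred = sorted((h for h in language if inferable(h, T, p, language)), key=len)
print(inferred)
# P1, P1-or-P2, and P1-or-P3: the denials of P2 and of P3 are inferred, and together
# with the evidence these are exactly the consequences of the most probable member,
# as the rule of acceptance (RA) prescribes.
```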

Those who have accomplished the most in terms of the development of the concept of subjective probability, Jeffrey for example, have spurned, as spurious, rules for the selection of evidence and inductive rules based on evidence.27 But such theories fail to offer an account of why subjective probabilities (p.147) shift in the manner they do. Jeffrey has suggested that such shifts may result from observation, but he leaves us totally uninformed as to why we should make the observations we do and hence shift probabilities on the basis of such observations.28 Of course, sometimes more or less random observations may elicit changes in degrees of belief, but controlled observation, characteristic of scientific investigation, is a consequence of attempts to test statements accepted as evidence and inferred from that evidence. Such testing promotes the scientific objectives of confirmation, explanation, and verification. The pursuit of these objectives justifies the rules of evidence, induction, and acceptance advocated in this chapter. These methods of enquiry assume that nothing is certain, and that both evidence and hypothesis are subject to re-examination and rejection. We are thus left without any permanent foundation for inference and acceptance. We are forever free to rebuild our foundation of evidence and to construct a new edifice by inductive inference and rational acceptance.

Notes:

(1) N. R. Hanson, Patterns of Discovery (Cambridge, 1958); and M. Scriven, ‘Explanations, Predictions and Laws’, in H. Feigl and G. Maxwell (eds.), Minnesota Studies in the Philosophy of Science, 3 (Minneapolis, 1962).

(2) W. V. O. Quine, From a Logical Point of View (Cambridge, Mass., 1953), esp. his chapter on ‘Two Dogmas of Empiricism’; W. F. Sellars, Science, Perception, and Reality (London and New York, 1963), esp. his chapter on ‘Some Reflections on Language Games’ (also my review of Sellars in Journal of Philosophy, 63 (1966), 266–77); and G. H. Harman, ‘Induction’, in M. Swain (ed.), Induction, Acceptance and Rational Belief (Dordrecht, 1970).

(3) Cf. T. S. Kuhn, The Structure of Scientific Revolutions (Chicago, 1962).

(4) For a collection of articles on subjective probability, see H. E. Kyburg and H. E. Smokler (eds.), Studies in Subjective Probability (New York, 1964). For a unified approach, see R. C. Jeffrey, The Logic of Decision (New York, 1965).

(5) Jeffrey, The Logic of Decision, 153–70.

(6) The lottery paradox was first formulated, to my knowledge, by H. E. Kyburg, Jr., in Probability and the Logic of Rational Belief (Middletown, Conn., 1961), 197. I propose a solution in ‘Induction, Reason and Consistency’, in British Journal for the Philosophy of Science, 21 (1970), 103–14, and in ‘Justification, Explanation, and Induction’ in Swain (ed.), Induction, Acceptance and Rational Belief.

(7) Proofs of these claims are to be found in my ‘Justification, Explanation, and Induction’, 127–31, and in H. E. Kyburg, Jr., ‘Conjunctivitis’, 73–6, in Swain (ed.), Induction, Acceptance and Rational Belief.

(8) Proofs are in my ‘Justification, Explanation, and Induction’, 127–31.

(9) Ibid.

(10) Cf. Kyburg and Smokler, Studies in Subjective Probability, and Jeffrey, The Logic of Decision, 153–4.

(11) For proofs concerning this rule see my ‘Induction, Reason, and Consistency’, 110–14.

(12) Jeffrey, The Logic of Decision, 153–70, esp. 158. For discussion of Jeffrey's procedure see I. Levi, ‘Probability Kinematics’, British Journal for the Philosophy of Science, 18 (1967–8), 197–209; W. Harper and H. E. Kyburg, Jr., ‘Discussion: The Jones Case’, and Jeffrey's reply ‘Acceptance vs. Partial Belief’, in Swain (ed.), Induction, Acceptance and Rational Belief. It is clear from this article and others by Jeffrey that he rejects the notion of acceptance employed in this chapter.

(13) I. Levi, Gambling with Truth (New York, 1967); and A. Shimony, ‘Scientific Inference’, in R. G. Colodny (ed.), The Nature and Function of Scientific Theories: Essays in Contemporary Science and Philosophy, vol. 4 in the University of Pittsburgh Series in the Philosophy of Science (Pittsburgh, 1970).

(14) R. Carnap, The Logical Foundations of Probability (Chicago, 1962).

(15) J. G. Kemeny, ‘Extensions of the Methods of Inductive Logic’, Philosophical Studies, 3 (1952), 38–42.

(16) I argue that finite languages are adequate to deal with problems concerning length and other concepts defined in infinite languages in ch. 4 of this volume.

(17) Cf. C. R. Karp, Languages with Expressions of Infinite Length (Amsterdam, 1964); and P. Krauss and D. Scott, ‘Assigning Probabilities to Logical Formulas’, in J. Hintikka and P. Suppes (eds.), Aspects of Inductive Logic (Amsterdam, 1966), 219–64.

(18) Cf. I. Levi, ‘Probability and Evidence’ in Swain (ed.), Induction, Acceptance and Rational Belief.

(19) I explore the idealized approach in ‘Induction, Reason, and Consistency’, as do Swain in ‘The Consistency of Rational Belief’, in Swain (ed.), Induction, Acceptance and Rational Belief; and J. Hintikka and R. Hilpinen in ‘Knowledge, Acceptance, and Inductive Logic’, in Aspects of Inductive Logic, and in other works. The more pragmatic approach is adopted by Levi in Gambling with Truth and by Shimony in ‘Scientific Inference’.

(20) These matters are elaborated further in my ‘Belief and Error’, in M. S. Gram and E. D. Klemke (eds.), The Ontological Turn (Iowa City, 1974).

(21) Proofs concerning this rule are given in ch. 4.

(22) This matter is discussed by G. Harman in ‘Induction’ and by Kyburg in ‘Conjunctivitis’.

(23) C. G. Hempel, ‘Deductive-Nomological vs. Statistical Explanation’, in H. Feigl and G. Maxwell (eds.), Minnesota Studies in the Philosophy of Science, 3 (Minneapolis, 1962), 98–169; J. Hintikka and J. Pietarinen, ‘Semantic Information and Inductive Logic’, in Aspects of Inductive Logic, 96–112; R. Hilpinen, Rules of Acceptance and Inductive Logic (Acta Philosophica Fennica, 22; Amsterdam, 1968); and Levi, Gambling with Truth.

(24) H. Finch argued for the importance of this function in ‘Confirming Power of Observations Metricized for Decisions among Hypotheses’, Philosophy of Science, 27 (1960), 293–307 and 391–404. See also my ‘Belief and Error’, in which I examine the implications of this function.

(25) Levi, Gambling with Truth, 83–90.

(26) These results are proved in ch. 4.

(27) R. C. Jeffrey, ‘Valuation and Acceptance of Scientific Hypotheses’, Philosophy of Science, 23 (1956), 237–46.

(28) Jeffrey, The Logic of Decision, 153–70.