Against Coherence: Truth, Probability, and Justification

Erik J. Olsson

Print publication date: 2005

Print ISBN-13: 9780199279999

Published to Oxford Scholarship Online: July 2005

DOI: 10.1093/0199279993.001.0001


Appendix C Proofs of Observations

Observation 2.1: $P(H/E_1,\ldots,E_w) = \dfrac{1}{1 + (n-1)\left(\frac{1-i}{i(n-1)}\right)^w}$ in the generalized Huemer model.

Proof:

$\begin{aligned} P(H/E_1,\ldots,E_w) &= \frac{P(E_1,\ldots,E_w/H)\,P(H)}{P(E_1,\ldots,E_w)} \quad \text{(by Bayes's theorem)}\\ &= \frac{P(E_1/H)\cdots P(E_w/H)\,P(H)}{P(E_1/H)\cdots P(E_w/H)\,P(H) + P(E_1/\neg H)\cdots P(E_w/\neg H)\,P(\neg H)} \quad \text{(by generalized conditional independence)}\\ &= \frac{i^w \cdot \frac{1}{n}}{i^w \cdot \frac{1}{n} + \left(\frac{1-i}{n-1}\right)^w \cdot \frac{n-1}{n}} = \frac{1}{1 + (n-1)\left(\frac{1-i}{i(n-1)}\right)^w} \end{aligned}$
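
Numerically, the closed form agrees with a direct application of Bayes's theorem. The following Python sketch is illustrative only and assumes the generalized Huemer model just used: n equiprobable alternatives and w independent witnesses, each reporting the true alternative with probability i and any given false one with probability (1−i)/(n−1); the function names and parameter values are ours.

def posterior_direct(n, i, w):
    # P(H) = 1/n; likelihoods as in the proof above
    prior = 1.0 / n
    like_h = i ** w
    like_not_h = ((1 - i) / (n - 1)) ** w
    return like_h * prior / (like_h * prior + like_not_h * (1 - prior))

def posterior_closed(n, i, w):
    # the closed form of Observation 2.1
    return 1.0 / (1 + (n - 1) * ((1 - i) / (i * (n - 1))) ** w)

for n, i, w in [(2, 0.6, 1), (5, 0.3, 3), (10, 0.5, 4)]:
    assert abs(posterior_direct(n, i, w) - posterior_closed(n, i, w)) < 1e-12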

Observation 3.1: Given (i)–(viii) in section 3.2.3, $P(H/E_1,E_2) = \dfrac{P(R) + P(H)^2P(U)}{P(R) + P(H)P(U)}$.

Proof: By Bayes's theorem:

(1) $P(H/E_1,E_2) = \dfrac{P(E_1,E_2/H)\,P(H)}{P(E_1,E_2)}$

We will now calculate the right-hand side of (1), noting that

(2) $P(E_1,E_2/H) = P(E_1,E_2/H,R)\,P(R/H) + P(E_1,E_2/H,U)\,P(U/H)$

From (2) and our background assumptions we deduce

(3) $P(E_1,E_2/H) = P(R) + P(H)^2P(U)$

Turning to the denominator of (1),

(4) $P(E_1,E_2) = P(E_1,E_2/R)\,P(R) + P(E_1,E_2/U)\,P(U)$

By (4) and our assumptions,

(5) $P(E_1,E_2) = P(R)P(H) + P(H)^2P(U)$

Finally, by combining (3) and (5) we get (after some simplification)

$P(H/E_1,E_2) = \dfrac{P(R) + P(H)^2P(U)}{P(R) + P(H)P(U)}.$
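
The final simplification can be confirmed symbolically. A minimal sketch using Python's sympy (the symbols r, h, u, standing for P(R), P(H), P(U), are ours):

import sympy as sp

r, h, u = sp.symbols('r h u', positive=True)
# numerator and denominator of (1), filled in with (3) and (5)
posterior = (r + h**2 * u) * h / (r * h + h**2 * u)
target = (r + h**2 * u) / (r + h * u)  # the expression in Observation 3.1
assert sp.simplify(posterior - target) == 0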

Observation 3.2: $P(H/E) = P(R) + P(H)P(U)$

Proof: We first note

(1) $P(E/H) = P(E/H,R)\,P(R/H) + P(E/H,U)\,P(U/H) = P(R) + P(H)P(U)$ (by (i), (iii), (vii), and (viii))

(2) $\begin{aligned} P(E) &= P(E/R)\,P(R) + P(E/U)\,P(U)\\ &= [P(E/R,H)\,P(H/R) + P(E/R,\neg H)\,P(\neg H/R)]\,P(R) + [P(E/U,H)\,P(H/U) + P(E/U,\neg H)\,P(\neg H/U)]\,P(U)\\ &= [P(H)]\,P(R) + [P(H)P(H) + P(H)P(\neg H)]\,P(U) \quad \text{(by (i), (iii), (vii), and (viii))}\\ &= P(H)P(R) + P(H)P(U) \end{aligned}$

By Bayes's theorem,

$\begin{aligned} P(H/E) &= \frac{P(E/H)\,P(H)}{P(E)} = \frac{[P(R) + P(H)P(U)]\,P(H)}{P(H)P(R) + P(H)P(U)} \quad \text{(from (1) and (2))}\\ &= \frac{P(R) + P(H)P(U)}{P(R) + P(U)} = P(R) + P(H)P(U) \quad \text{(since } P(R) + P(U) = 1\text{)} \end{aligned}$
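
A numerical spot check of the whole derivation, assuming P(R) + P(U) = 1 as in the model (the parameter values are illustrative):

for p_r in (0.2, 0.5, 0.9):
    for p_h in (0.1, 0.4, 0.7):
        p_u = 1 - p_r
        p_e_given_h = p_r + p_h * p_u        # step (1)
        p_e = p_h * p_r + p_h * p_u          # step (2)
        posterior = p_e_given_h * p_h / p_e  # Bayes's theorem
        assert abs(posterior - (p_r + p_h * p_u)) < 1e-12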

Observation 3.3: If P(U) is non-extreme, then P(H/E)>P(H).

Proof: By Observation 3.2, $P(H/E) = P(R) + P(H)P(U)$. Since $P(R) = 1 - P(U)$, we have $P(R) + P(H)P(U) - P(H) = (1 - P(U))(1 - P(H)) > 0$, given that P(U) and P(H) are non-extreme. Hence $P(H/E) > P(H)$.

Observation 3.4: $P(H/E_1,E_2) > P(H/E_1)$.

Proof: Let a = P(R), b = P(H), and c = P(U). We have assumed, as part of the model, that a + c = 1 and that b and c are non-extreme. The statement to be proved follows from these assumptions given Observations 3.1 and 3.2. What we need to prove is

$\dfrac{a + b^2c}{a + bc} > a + bc.$

This is established as follows:

$\begin{aligned} \frac{a + b^2c}{a + bc} > a + bc &\Leftrightarrow a + b^2c > (a + bc)^2\\ &\Leftrightarrow 1 - c + b^2c > (1-c)^2 + b^2c^2 + 2(1-c)bc \quad \text{(by assumption: } a + c = 1\text{)}\\ &\Leftrightarrow 1 - c + b^2c > 1 + c^2 - 2c + b^2c^2 + 2bc - 2bc^2\\ &\Leftrightarrow 1 + b^2 - 2b > c + b^2c - 2bc \quad \text{(dividing through by } c > 0\text{)}\\ &\Leftrightarrow 1 + b^2 - 2b > c(1 + b^2 - 2b)\\ &\Leftrightarrow 1 > c \quad \text{(since } 1 + b^2 - 2b = (1-b)^2 > 0\text{)} \end{aligned}$

which holds by assumption.
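
The inequality can likewise be spot-checked numerically; the sketch below assumes, as above, a + c = 1 with b and c non-extreme (the values are illustrative):

for c in (0.1, 0.5, 0.9):      # c = P(U)
    for b in (0.1, 0.5, 0.9):  # b = P(H)
        a = 1 - c              # a = P(R), by a + c = 1
        assert (a + b**2 * c) / (a + b * c) > a + b * c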

Observation 3.5: If $P(H/E_1) = P(H/E_2) = P(H)$, then $P(H/E_1,E_2) = P(H/E_1)$.

Proof: We first show that $P(H/E_1) = P(H/E_2) = P(H)$ only if P(U) = 1. By Observation 3.2, $P(H/E_1) = P(H)$ only if $P(R) + P(H)P(U) = P(H)$, which, by algebra, entails P(U) = 1. Reasoning as in the proof of Observation 3.3, we can now show that P(U) = 1 entails

$\dfrac{P(R) + P(H)^2P(U)}{P(R) + P(H)P(U)} = P(R) + P(H)P(U)$

By Observation 3.1, the left-hand side of that equality equals $P(H/E_1,E_2)$, and by Observation 3.2 the right-hand side equals $P(H/E_1)$.

Observation 4.1: Suppose (1) $P(E_i/H) = P(E_i)$, (2) $P(E_1,E_2/H) = P(E_1/H)\,P(E_2/H)$, and (3) $P(E_1,E_2/\neg H) = P(E_1/\neg H)\,P(E_2/\neg H)$. Then $P(H/E_1,E_2) = P(H)$.

Proof: Bayes's theorem yields:

$\begin{aligned} P(H/E_1,E_2) &= \frac{P(E_1,E_2/H)\,P(H)}{P(E_1,E_2/H)\,P(H) + P(E_1,E_2/\neg H)\,P(\neg H)}\\ &= \frac{P(E_1/H)\,P(E_2/H)\,P(H)}{P(E_1/H)\,P(E_2/H)\,P(H) + P(E_1/\neg H)\,P(E_2/\neg H)\,P(\neg H)} \quad \text{(by (2) and (3))}\\ &= \frac{P(E_1)\,P(E_2)\,P(H)}{P(E_1)\,P(E_2)\,P(H) + P(E_1)\,P(E_2)\,P(\neg H)} = P(H) \quad \text{(by (1), which also entails } P(E_i/\neg H) = P(E_i)\text{)} \end{aligned}$
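
The collapse of the posterior onto the prior can be confirmed symbolically; a minimal sympy sketch (the symbol names are ours):

import sympy as sp

e1, e2, h = sp.symbols('e1 e2 h', positive=True)  # e_i = P(E_i), h = P(H)
# the last line of the proof: both likelihoods have factored into P(E1)P(E2)
posterior = e1 * e2 * h / (e1 * e2 * h + e1 * e2 * (1 - h))
assert sp.simplify(posterior - h) == 0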

Observation 4.2: (Tomoji Shogenji 2002) Suppose that report E lacks individual credibility, so that P(H/E) = P(H). Then P(L) = (n−1)P(R), and hence P(L) > P(R) when n > 2, and P(L) = P(R) when n = 2. Moreover, if P(L) = P(R) and n > 2, then P(H/E) > P(H).

Proof: If a witness is a truth-teller, her report will be E if and only if H is actually true. If she is a randomizer, she will report E one time out of n, no matter what is actually the case. If she is a liar, she will report E only if H is actually false; and if H is false, she reports E one time out of n−1. Hence,

$\begin{aligned} P(E) &= P(E/R)\,P(R) + P(E/U)\,P(U) + P(E/L)\,P(L)\\ &= P(H)\,P(R) + \tfrac{1}{n}P(U) + [P(E/H,L)\,P(H/L) + P(E/\neg H,L)\,P(\neg H/L)]\,P(L)\\ &= \tfrac{1}{n}P(R) + \tfrac{1}{n}P(U) + \left[0 + \tfrac{1}{n-1}P(\neg H)\right]P(L)\\ &= \tfrac{1}{n}P(R) + \tfrac{1}{n}P(U) + \tfrac{1}{n}P(L) \quad \text{(since } P(\neg H) = \tfrac{n-1}{n}\text{)}\\ &= \tfrac{1}{n}[P(R) + P(U) + P(L)] = \tfrac{1}{n} \end{aligned}$
The probability that a given witness reports E given that H is true is
$P(E/H) = P(E/R,H)\,P(R/H) + P(E/U,H)\,P(U/H) + P(E/L,H)\,P(L/H) = P(R) + \tfrac{1}{n}P(U)$

Let us now assume that P(H/E) = P(H) or, equivalently, P(E/H) = P(E). It follows that

(1) $P(R) + \tfrac{1}{n}P(U) = \tfrac{1}{n},$

whence

(2) $P(U) = 1 - nP(R)$

But P(L) = 1 − P(R) − P(U), and so

(3) $P(L) = 1 - P(R) - (1 - nP(R)) = (n-1)P(R)$
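
Steps (1)–(3) admit a simple numerical check (the chosen P(R) and the values of n are illustrative):

def p_l_without_credibility(n, p_r):
    p_u = 1 - n * p_r      # step (2)
    return 1 - p_r - p_u   # whence step (3): P(L) = (n-1)P(R)

for n in (2, 3, 10):
    p_r = 0.05
    assert abs(p_l_without_credibility(n, p_r) - (n - 1) * p_r) < 1e-12
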
It follows from (3) that P(L) = P(R) when n = 2, and P(L) > P(R) when n > 2. By analogous reasoning, one can show that if P(L) = P(R) and n > 2, then P(H/E) > P(H).

Observation 4.3: Suppose truth-telling (R), randomization (U), and lying (L) are mutually exclusive and exhaustive hypotheses about the witness's reliability. Then $P(E_1/H,E_2) > P(E_1/H)$.

Informal argument: We have

$P(E_1/H,E_2) = P(E_1/R,H,E_2)\,P(R/H,E_2) + P(E_1/U,H,E_2)\,P(U/H,E_2) + P(E_1/L,H,E_2)\,P(L/H,E_2)$

and

$P(E_1/H) = P(E_1/R,H)\,P(R/H) + P(E_1/U,H)\,P(U/H) + P(E_1/L,H)\,P(L/H)$

The 'liar terms' in these equations will equal 0, since $P(E_1/L,H,E_2) = 0$ and $P(E_1/L,H) = 0$. Hence,

$P(E_1/H,E_2) = P(R/H,E_2) + \tfrac{1}{n}P(U/H,E_2),$

and

$P(E_1/H) = P(R) + \tfrac{1}{n}P(U)$
Clearly, if the reporter has delivered a true report, that fact should raise the probability of her being reliable and diminish the probability of her being a mere randomizer: $P(R/H,E_2) > P(R)$ and $P(U/H,E_2) < P(U)$. Since the reliability term carries the greater weight (1 as against 1/n), we should expect $P(E_1/H,E_2) > P(E_1/H)$. We note that it does not matter whether the lying is coordinated or uncoordinated.
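
The informal argument can be made concrete. The Python sketch below derives $P(R/H,E_2)$ and $P(U/H,E_2)$ by Bayes's theorem from $P(E_2/R,H) = 1$, $P(E_2/U,H) = 1/n$, and $P(E_2/L,H) = 0$, and confirms the strict inequality; the parameter values are illustrative.

for n in (2, 5, 20):
    for p_r, p_u in [(0.3, 0.3), (0.1, 0.8), (0.5, 0.2)]:  # P(L) = 1 - p_r - p_u
        p_e2_given_h = p_r + p_u / n             # P(E2/H); liars contribute 0 given H
        p_r_post = p_r / p_e2_given_h            # P(R/H,E2) by Bayes's theorem
        p_u_post = (p_u / n) / p_e2_given_h      # P(U/H,E2) by Bayes's theorem
        p_e1_given_h_e2 = p_r_post + p_u_post / n
        assert p_e1_given_h_e2 > p_e2_given_h    # P(E1/H,E2) > P(E1/H), by symmetry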

Observation 4.4: Suppose truth-telling (R), randomization (U), and lying (L) are mutually exclusive and exhaustive hypotheses about the witness's reliability. Then $P(E_1/\neg H,E_2) \approx P(E_1/\neg H)$, if the liars are uncoordinated. Moreover, $P(E_1/\neg H,E_2) > P(E_1/\neg H)$, if the liars are coordinated and n is large.

Proof: In general,

$P(E_1/\neg H,E_2) = P(E_1/R,\neg H,E_2)\,P(R/\neg H,E_2) + P(E_1/U,\neg H,E_2)\,P(U/\neg H,E_2) + P(E_1/L,\neg H,E_2)\,P(L/\neg H,E_2)$

In the case of coordinated lying, the probability of one lying witness's testifying to the same effect as another lying witness is 1, that is to say, $P(E_1/L,\neg H,E_2) = 1$. Since, moreover, the truth-teller term vanishes ($P(E_1/R,\neg H,E_2) = 0$),

(1) $P(E_1/\neg H,E_2) = \tfrac{1}{n}P(U/\neg H,E_2) + P(L/\neg H,E_2)$

For uncoordinated lying, on the other hand, $P(E_1/L,\neg H,E_2) = \tfrac{1}{n-1}$, and so

(2) $P(E_1/\neg H,E_2) = \tfrac{1}{n}P(U/\neg H,E_2) + \tfrac{1}{n-1}P(L/\neg H,E_2)$

Now compare each of these two equations with

(3) $P(E_1/\neg H) = P(E_1/R,\neg H)\,P(R/\neg H) + P(E_1/U,\neg H)\,P(U/\neg H) + P(E_1/L,\neg H)\,P(L/\neg H) = \tfrac{1}{n}P(U) + \tfrac{1}{n-1}P(L)$

By (2) and (3), $P(E_1/\neg H,E_2) \approx P(E_1/\neg H)$ if the liars are uncoordinated, since although $P(U) > P(U/\neg H,E_2)$, this will be counteracted by the fact that $P(L/\neg H,E_2) > P(L)$.

It remains to be shown that $P(E_1/\neg H,E_2) > P(E_1/\neg H)$ if the liars are coordinated and n is large. Clearly, (3) goes to 0 as n goes to ∞. Let us see what happens to (1) as n goes to ∞. The left-hand term in (1) obviously goes to 0. But what happens to the right-hand term? An application of Bayes's theorem gives

$P(L/\neg H,E_2) = \dfrac{P(\neg H,E_2/L)\,P(L)}{P(\neg H,E_2/L)\,P(L) + P(\neg H,E_2/U)\,P(U) + P(\neg H,E_2/R)\,P(R)}$

Since $P(\neg H,E_2/R)\,P(R) = 0$ and $P(\neg H,E_2/U) < P(\neg H,E_2/L)$,

$P(L/\neg H,E_2) > \dfrac{P(\neg H,E_2/L)\,P(L)}{P(\neg H,E_2/L)\,P(L) + P(\neg H,E_2/L)\,P(U)} = \dfrac{P(L)}{P(L) + P(U)} > 0$

Hence, while (3), $P(E_1/\neg H)$, goes to 0 as n approaches ∞, (1), $P(E_1/\neg H,E_2)$, approaches a constant greater than 0. We may conclude that (1) is greater than (3) if n is large.
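
The limiting behaviour can be illustrated numerically: as n grows, (3) vanishes while (1) stays above P(L)/(P(L) + P(U)). A sketch with illustrative values:

p_r, p_u, p_l = 0.3, 0.4, 0.3
for n in (3, 10, 100, 10000):
    p_e2 = p_u / n + p_l / (n - 1)         # P(E2/¬H); truth-tellers contribute 0
    p_u_post = (p_u / n) / p_e2            # P(U/¬H,E2)
    p_l_post = (p_l / (n - 1)) / p_e2      # P(L/¬H,E2)
    coordinated = p_u_post / n + p_l_post  # (1): a matching coordinated lie is certain
    single = p_u / n + p_l / (n - 1)       # (3)
    assert coordinated > single
# coordinated tends towards p_l / (p_l + p_u) = 0.428..., while single tends to 0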

Observation 7.1: Suppose that the following hold:

(i) $E_1$ and $E_2$ are independent reports on $A_1$ and $A_2$.

(ii) $P(A_1/E_1) = P(A_1)$ and $P(A_2/E_2) = P(A_2)$.

(iii) $A_1 \wedge A_2$, $A_1 \wedge \neg A_2$, $\neg A_1 \wedge A_2$, and $\neg A_1 \wedge \neg A_2$ all have non-zero probability.

Then $P(A_1,A_2/E_1,E_2) = P(A_1,A_2)$.

Proof: By Bayes's theorem,

(1) $P(A_1,A_2/E_1,E_2) = \dfrac{P(E_1,E_2/A_1,A_2)\,P(A_1,A_2)}{P(E_1,E_2)}$

By conditional independence, $P(E_1,E_2/A_1,A_2) = P(E_1/A_1)\,P(E_2/A_2)$. It follows from (ii) and familiar probabilistic facts that $P(E_i/A_i) = P(E_i)$, $i = 1, 2$. Hence,

(2) $P(E_1,E_2/A_1,A_2) = P(E_1)\,P(E_2)$

By (iii) and the theorem of total probability, $P(E_1,E_2) = P(E_1,E_2/A_1,A_2)\,P(A_1,A_2) + P(E_1,E_2/A_1,\neg A_2)\,P(A_1,\neg A_2) + P(E_1,E_2/\neg A_1,A_2)\,P(\neg A_1,A_2) + P(E_1,E_2/\neg A_1,\neg A_2)\,P(\neg A_1,\neg A_2)$. By conditional independence, the right-hand side of that equation equals $P(E_1/A_1)\,P(E_2/A_2)\,P(A_1,A_2) + P(E_1/A_1)\,P(E_2/\neg A_2)\,P(A_1,\neg A_2) + P(E_1/\neg A_1)\,P(E_2/A_2)\,P(\neg A_1,A_2) + P(E_1/\neg A_1)\,P(E_2/\neg A_2)\,P(\neg A_1,\neg A_2)$. As already noticed, it follows from (ii) that $P(E_1/A_1) = P(E_1)$ and $P(E_2/A_2) = P(E_2)$. It also follows from (ii) that $P(E_2/\neg A_2) = P(E_2)$ and $P(E_1/\neg A_1) = P(E_1)$. Combining all this yields

(3) $P(E_1,E_2) = P(E_1)P(E_2)\,P(A_1,A_2) + P(E_1)P(E_2)\,P(A_1,\neg A_2) + P(E_1)P(E_2)\,P(\neg A_1,A_2) + P(E_1)P(E_2)\,P(\neg A_1,\neg A_2) = P(E_1)P(E_2).$

It follows from (1), (2), and (3) that $P(A_1,A_2/E_1,E_2) = P(A_1,A_2)$, which ends the proof.
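
Step (3) can be confirmed symbolically: once each conditional likelihood reduces to $P(E_1)P(E_2)$, the four-way expansion collapses. A minimal sympy sketch (the symbol names are ours; p11, p10, p01, p00 stand for the probabilities of the four cells in (iii)):

import sympy as sp

e1, e2, p11, p10, p01 = sp.symbols('e1 e2 p11 p10 p01', positive=True)
p00 = 1 - p11 - p10 - p01  # the four cells are mutually exclusive and exhaustive
total = e1*e2*p11 + e1*e2*p10 + e1*e2*p01 + e1*e2*p00  # step (3), unsimplified
assert sp.simplify(total - e1 * e2) == 0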

Observation 8.1: $P_1(H/E_1) = \displaystyle\int_{r_1=0}^{1} \frac{h}{h + \bar{h}\,\frac{h\bar{r}_1}{h + \bar{h}r_1}}\,dr_1 = \frac{1+h}{2}$

Proof: By arithmetic simplification,

(1) $\displaystyle\int_0^1 \frac{h}{h + \bar{h}\,\frac{h\bar{r}_1}{h + \bar{h}r_1}}\,dr_1 = \int_0^1 (\bar{h}r_1 + h)\,dr_1$

In general,

(2) $\displaystyle\int (ax+b)^n\,dx = \frac{(ax+b)^{n+1}}{(n+1)a}$

From (1) and (2),

(3) $\displaystyle\int_0^1 (\bar{h}r_1 + h)\,dr_1 = \left[\frac{(\bar{h}r_1 + h)^2}{2\bar{h}}\right]_{r_1=1} - \left[\frac{(\bar{h}r_1 + h)^2}{2\bar{h}}\right]_{r_1=0} = \frac{1 - h^2}{2\bar{h}} = \frac{1+h}{2}$
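
Both the simplification in (1) and the evaluation in (3) can be verified with sympy (the symbol names are ours):

import sympy as sp

h, r1 = sp.symbols('h r1', positive=True)
hbar, r1bar = 1 - h, 1 - r1
integrand = h / (h + hbar * (h * r1bar) / (h + hbar * r1))
assert sp.simplify(integrand - (hbar * r1 + h)) == 0  # step (1)
assert sp.simplify(sp.integrate(hbar * r1 + h, (r1, 0, 1)) - (1 + h) / 2) == 0  # step (3)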

Observation 8.2: The function

$P_2(H/E_1,E_2) = f(h) = \dfrac{\left(h + \tfrac{1}{2}\bar{h}\right)^2}{h + \tfrac{1}{4}\bar{h}}$

takes on its minimum at h = 1/3 in the interval h ∈ (0, 1).

Proof: To find the minimum of this function we calculate its derivative with respect to h, set this derivative equal to 0, and solve for h ∈ (0, 1). By arithmetic simplification and differentiation,

$\dfrac{d}{dh}\,\dfrac{\left(h + \tfrac{1}{2}\bar{h}\right)^2}{h + \tfrac{1}{4}\bar{h}} = \dfrac{d}{dh}\,\dfrac{(h+1)^2}{3h+1} = \dfrac{3h^2 + 2h - 1}{(3h+1)^2}$

We set the derivative equal to 0 and solve for h ∈ (0, 1):

$\dfrac{3h^2 + 2h - 1}{(3h+1)^2} = 0 \;\Leftrightarrow\; 3h^2 + 2h - 1 = 0 \;\Leftrightarrow\; h = \dfrac{-2 \pm 4}{6}$

The only extremum in h ∈ (0, 1) is h = 1/3, which can be verified to be a minimum.
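
The whole calculation can be verified with sympy (the symbol names are ours):

import sympy as sp

h = sp.symbols('h', positive=True)
f = (h + (1 - h) / 2) ** 2 / (h + (1 - h) / 4)
assert sp.simplify(f - (h + 1) ** 2 / (3 * h + 1)) == 0  # arithmetic simplification
crit = sp.solve(sp.diff(f, h), h)                        # critical points (h > 0)
assert sp.Rational(1, 3) in crit
assert sp.diff(f, h, 2).subs(h, sp.Rational(1, 3)) > 0   # second-order condition: a minimum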