Putting Logic in Its Place: Formal Constraints on Rational Belief

Introduction. At least since Aristotle, deductive logic has had a special place in Western philosophy.
If, however, beliefs are seen as graded, or as coming in degrees, ideal rationality imposes a probabilistic constraint based on standard logic. This constraint, probabilistic coherence, explains both the appeal of the standard deductive constraints and the power of deductive arguments. Moreover, it can be defended without taking degrees of belief, as many decision-theoretic philosophers have, to be somehow defined or constituted by preferences.

Although probabilistic coherence is humanly unattainable, this does not undermine its normative status as a constraint in a suitably idealized understanding of epistemic rationality.

2005.10.04

Given X's opinions about Y's and Z's superior fact-checking, and his confidence that even their books contain errors, Christensen thinks it is intuitively quite absurd to suppose that it could be rational for X to believe that his new book is error-free, even if X currently believes every statement in it. But, of course, that belief is required by deductive cogency. The most entertaining part of the example comes later, when Christensen considers how serious the problem is for the advocates of deductive cogency. Christensen imagines that there is a Society for Historical Exactitude, which has offered a medal and a monetary prize for any historical book advancing substantial theses that is not shown to contain an error within one year of publication.

Deductive cogency would require X to believe that his book will win the prize. Because the amount of prize money is large enough to make a big difference in X's life, X would have to believe that there would be big changes in his life soon. (For example, it would not make sense for him to accept a good deal on a used car now, when he is so close to being able to afford the car of his dreams.) Christensen concludes that the irrationality required by deductive cogency would send "ripples of intuitive irrationality" throughout his belief system. Then Christensen generalizes the example from a problem for book authors to a problem for all human believers.

All of us find ourselves in fallibility paradoxes, when, for example, we believe that at least one of our memory beliefs is false, or when we simply believe that at least one of our beliefs is false.

Think of how insufferable a person would be if, when there was a conflict of memories, she always insisted that other people's memories were mistaken, never her own. Christensen provides many more examples than I can review here. In each case, the example is one in which deductive cogency would require a human being to have beliefs that strike most people as intuitively irrational. Christensen considers and argues effectively against many attempts to insulate the deductive cogency requirement from the effects of lottery and fallibility paradoxes. However, he fails to see that the fallibility paradoxes are a sword that cuts both ways.

To see that the fallibility paradoxes generate problems for Christensen's probabilist view, note first that the computational difficulties of maintaining a consistent set of beliefs, and the lack of a decision procedure for the deductive consequence relation, translate directly into practical difficulties for satisfying Christensen's probabilist constraints. In Christensen's probabilism, the analog of the consistency constraint is the requirement that one's degrees of belief be consistent with the probability laws.

This would be a significant computational problem even for a being with only finitely many degrees of confidence. But Christensen's probabilism requires an infinite number of degrees of confidence, including assigning a degree of confidence of one to all instances of classical logical truths (of which there are infinitely many) and a degree of confidence of zero to all logical falsehoods (of which there are also infinitely many).
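Stated formally, the coherence requirement amounts to the following standard conditions on a credence function (the notation "cr" is mine, not the book's):

$$
\begin{aligned}
&0 \le \mathrm{cr}(A) \le 1 &&\text{for every proposition } A,\\
&\mathrm{cr}(A) = 1 &&\text{whenever } A \text{ is a logical truth},\\
&\mathrm{cr}(A) = 0 &&\text{whenever } A \text{ is a logical falsehood},\\
&\mathrm{cr}(A \lor B) = \mathrm{cr}(A) + \mathrm{cr}(B) &&\text{whenever } A \text{ and } B \text{ are logically incompatible.}
\end{aligned}
$$

Satisfying the second and third conditions already presupposes the ability to recognize every logical truth and falsehood, which is the source of the logical omniscience worry pressed below.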

Like the deductive cogency requirement, Christensen's probabilistic coherence requirement can only be satisfied by a subject who is logically omniscient. Christensen is aware that his account involves extreme idealization. The problem is that his idealization generates the same kinds of problems that he used to cast doubt on the deductive cogency requirement.

Consider, for example, the evidence from Tversky and Kahneman and others showing that human beings, even those trained in statistics, tend to make assignments of degrees of confidence that are inconsistent with the probability laws, even in quite simple cases. And consider the computational difficulty of checking for consistency even in simple cases.
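One simple law that subjects violate in these studies is the conjunction rule (the well-known "Linda" conjunction-fallacy experiments are a case in point): for any coherent credence function,

$$
\mathrm{cr}(A \wedge B) \;\le\; \mathrm{cr}(A),
$$

so judging a conjunction more probable than one of its own conjuncts is already a departure from the probability laws.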

This evidence provides the basis for a new kind of fallibility paradox, leading to the conclusion that it is extremely unlikely that any human being's degrees of confidence are completely coherent. It would be easy to construct a dialog paralleling Christensen's example of the historian X, in which a psychologist, call him Amos, accepted all the evidence of human statistical errors and then insisted that he himself had completely coherent degrees of confidence.

But, of course, Christensen's ideally rational agent would have completely coherent degrees of confidence, so Christensen's ideal could not be used to explain the irrationality of Amos's overconfidence. But the problems with Christensen's idealization are even more serious than this example suggests. Christensen's ideally rational agents are logically omniscient.

On Christensen's view, rationality requires anyone who knows the definition of pi to be certain, or almost certain, of the answer to the following question, without engaging in any empirical inquiry: What is the trillionth digit in the decimal expansion of pi? Christensen acknowledges that no human being could answer this question on the basis of a priori reasoning alone, but he does not pursue the implications of this fact for his view. To find out the trillionth digit of pi, human beings would have to rely on empirical evidence.

They would have to run software that they had good reason to trust on computer hardware that they had good reason to trust, or they would have to obtain the information from a source that they had good reason to trust, for example by googling it, as I did. Even if it were humanly possible to do the calculation by hand, the computer calculation would justify much greater confidence than the results of a manual calculation carried out over years, because we have ample evidence of the fallibility of human calculations.

So Christensen's model of ideal rationality is completely useless if we want to know what degree of confidence it is rational for a given human being to assign to the ten alternatives for the trillionth digit of pi. A good case can be made that, in the absence of empirical evidence, a rational human agent would assign an equal degree of confidence to each of the ten alternatives. Moreover, there is now substantial empirical evidence that each of the ten digits occurs with approximately equal frequency in the decimal expansion of pi.

Thus, for many human beings, that empirical evidence makes it rational to assign equal degree of confidence to each of the ten alternatives.
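That empirical claim is easy to spot-check. Here is a minimal sketch in Python, assuming the mpmath library and an arbitrary sample of the first 10,000 decimal digits (neither the library nor the sample size is mentioned in the review):

```python
# Sketch: relative frequency of each digit 0-9 in an initial segment of pi's
# decimal expansion. Assumes mpmath; the 10,000-digit sample size is arbitrary.
from collections import Counter
from mpmath import mp

mp.dps = 10_010                          # working precision, with a little slack
digits = mp.nstr(mp.pi, 10_001)          # "3." followed by 10,000 decimal digits
fractional = digits.split(".")[1]        # keep only the digits after the decimal point

counts = Counter(fractional)
for d in sorted(counts):
    print(d, round(counts[d] / len(fractional), 4))   # each value comes out near 0.1
```

Of course, this is precisely the kind of empirical shortcut at issue: whatever confidence the output supports rests on trusting the software and hardware that produced it.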

And no human being could rationally assign a high degree of confidence to the correct answer (2) without substantial reliance on empirical evidence. Here is one final example of a fallibility paradox that presents problems for Christensen's account: when presented with a complex derivation of a surprising new theorem, any rational mathematician or logician will have less confidence in the conclusion than in the conjunction of the premises, in order to represent the significant possibility that there is an as-yet-undetected error of reasoning.

For example, shortly after Andrew Wiles first presented his "proof" of Fermat's Last Theorem, a colleague discovered an error in it. A year later, Wiles produced a new proof. It would have been irrational for Wiles to be certain that the new proof contained no errors when he first presented it, and thus it would have been irrational for Wiles to be as confident in the conclusion as he was in the conjunction of the premises.
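The tension with probabilistic coherence can be made explicit. It is a standard consequence of the probability axioms (again using "cr" for the agent's credence function) that coherent credences respect entailment:

$$
P_1, \ldots, P_n \models C \;\;\Longrightarrow\;\; \mathrm{cr}(C) \,\ge\, \mathrm{cr}(P_1 \wedge \cdots \wedge P_n),
$$

so a mathematician who is less confident in a theorem than in the conjunction of the premises of a valid derivation of it thereby violates coherence.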


Over time, as other mathematicians reviewed the proposed proof and did not uncover any problems, it became rational for all mathematicians, including Wiles, to increase their confidence in the validity of the proof. So, again, Christensen's account makes it impossible to explain important examples of human rationality based on our recognition of human fallibility, including our failures of logical omniscience.

One of the most influential arguments for the deductive cogency requirement is the Argument Argument. The challenge is to make sense of our practices of reasoning without supposing that deductive consistency and deductive closure are requirements of rationality.

As I have already mentioned, in chapter 4 Christensen responds to this challenge by arguing that a probabilist account contains analogs to deductive cogency that provide an alternative explanation of our practices of reasoning. However, it seems to me that the best way of understanding Christensen's position is not as a rejection of the deductive cogency requirement, but as a change in the level at which it applies. Consider, for example, a simple deductive argument with two premises, HM and SH, and a conclusion, SM. To the advocates of deductive cogency, this argument can be useful in explaining why it would be rational for someone who believes the premises HM and SH to believe the conclusion SM.