
Interview with Timothy Williamson

2021

Timothy Williamson interviewed by Hasen Khudairi, 18 May 2021

HK: The continuum hypothesis (CH) claims that any infinite set of real numbers is in 1-1 correspondence with either the natural numbers or the real numbers. From the work of Kurt Gödel and Paul Cohen, we know that CH can't be proved or refuted by current methods of mathematical proof. Thus, given that CH is either true or false, either CH or its negation is a mathematical truth which can't be proved by current methods. However, in your paper "Absolute Provability and Safe Knowledge of Axioms", you argue that no mathematical truth is absolutely unprovable, by the following line of thought. Suppose that A is a true interpreted mathematical formula which eludes present human techniques of provability. First, you argue that since A is a mathematical truth, it is metaphysically necessary. You then enjoin one to consider the following scenario: a metaphysical possibility in which there is a species which can and does prove A. You write: "In current epistemological terms, their knowledge of A meets the condition of safety: they could not easily have been wrong in a relevantly similar case. Here the relevantly similar cases include cases in which the creatures are presented with sentences that are similar to, but still discriminably different from, A, and express different and false propositions; by hypothesis, the creatures refuse to accept such other sentences, although they may also refuse to accept their negations."

TW: The key part of my argument is showing how a possible species does have a safe way of coming to believe A, because their brains are suitably structured. Thus they can know A, and (I argue) are entitled to treat A as an axiom. A strikes them as obvious, even though it doesn't strike us that way. In their mathematics, A is an axiom, and so is trivially provable.

HK: Therefore, A is absolutely provable. That is, A "can in principle be known by a normal mathematical process" such as derivation in an axiomatizable formal system with quantification and identity. You write: "The claim is not just that A would be absolutely provable if there were such creatures. The point is the stronger one that A is absolutely provable because there could in principle be such creatures."

TW: That's right.

HK: In Modal Logic as Metaphysics, you discuss a principle of mathematical induction and note that instances of it presuppose, for their derivation, the validity of instances of a higher-order modal comprehension scheme. You also emphasize, however, that mathematical languages are extensional.

TW: Let's sort out what is going on here. The core language of mathematics contains no modal operators like 'possibly' or 'necessarily'. For example, the standard principle of mathematical induction just says that for any property P, if 0 has P and, whenever a natural number n has P, n + 1 has P too, then every natural number has P. It doesn't use expressions like 'could' or 'couldn't', just 'is' or 'isn't', 'has' or 'hasn't'. Although mathematical truths are necessary—they couldn't have been otherwise—that's not what the mathematics itself talks about. However, when we come to apply mathematics, we don't just want to apply it to our actual situation, how things actually are; we also want to apply it to counterfactual situations, how things could have been otherwise.
For example, in an investigation of an accident in a nuclear power station, you might want to work out what would have happened if an emergency procedure had been initiated one minute earlier or later than it actually was. When you make calculations about those counterfactual situations, you assume that the mathematics you use holds not just of the actual world but of other possible worlds. That's why we need mathematics to be necessarily true, not just actually true. We also need it to apply across different possible situations, for example when we compare what happened to what could have happened. In Modal Logic as Metaphysics, one issue is what sort of background logic we need in order to apply mathematics when we are explicitly considering different possibilities. I argued that principles like mathematical induction won't do what we need them to do unless the background logic provides an adequate supply of properties ('P' above) which we can use in comparing different possibilities. The modal comprehension scheme you mention says in effect that there are the properties we need for such purposes.
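
[In symbols, the second-order induction principle TW describes, and, schematically, a modal comprehension instance of the kind at issue, can be written as follows; the notation is an illustrative gloss rather than a quotation from Modal Logic as Metaphysics:

∀P [(P(0) ∧ ∀n (P(n) → P(n + 1))) → ∀n P(n)]

∃X □∀x (Xx ↔ φ(x)), where φ(x) may itself contain modal vocabulary and X does not occur free in φ.

The second schema says, in effect, that there is a property corresponding to any such condition, which is what supplies instances of 'P' in the induction principle when different possibilities are being compared.]
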
HK: Aside from both the role of the higher-order modal comprehension scheme in mathematical induction, and the metaphysical possibility of a species which could decide CH being a guide to the absolute provability of CH, could you say more about what you believe the role of modality in mathematics might be, if any? If you do countenance a role for modality in mathematics, what interpretation do you take it to have? Metaphysical, practical, epistemic, logical, a proprietary mathematical type, or perhaps combinations of the foregoing?

TW: It's what philosophers call 'metaphysical modality'. It covers however things could have been otherwise, in a broadly objective sense. It doesn't cover everything we can pretend to be the case, because we can pretend that mathematics doesn't hold. At this point, a warning is needed against a common confusion. People sometimes think that 2 + 2 could have been 5 because we could have used '5' to mean 4. That's like saying that I could have jumped to the Moon because I could have used 'Moon' to mean sofa. The interesting issue is not whether '2 + 2 = 4' could have meant something different and false but whether that which it actually means could have been false. In the relevant sense, it couldn't.

HK: You and Ruth Byrne both emphasize the role of conditionals in the imagination. In "Possibilities and the parallel meanings of factual and counterfactual conditionals", Espino, Byrne, and Johnson-Laird suggest that there are epistemic possibilities, and in your work on epistemic logic frames are pairs of a set of epistemically possible worlds and an accessibility relation. Chalmers defines the epistemic possibility of p as p not being ruled out a priori. Do you have an alternative interpretation of what epistemic possibilities are and of what role they should play in the imagination?

TW: Epistemic possibility in the standard sense is relative to a stock of knowledge: an epistemic possibility is one compatible with the given stock, in the sense that everything in the stock is true in that possibility. Epistemic possibility in David Chalmers' sense would be the special case where the stock of knowledge is all a priori knowledge. In general, what we can imagine is not constrained by epistemic possibility. I know that Donald Trump was US President in 2020, but I can imagine Hillary Clinton being President then instead. What we can imagine is not even constrained by epistemic possibility relative to a priori knowledge. I can imagine that there are true contradictions in black holes, even though logic excludes that scenario a priori. Of course I can't imagine it in perfect detail, but our imaginings are never perfectly detailed. I'm assuming there that classical logic is a priori, but one can imagine violations of other logics too.

HK: In "How Deep is the Distinction between A Priori and A Posteriori Knowledge?", you raise problems for standard ways of distinguishing between a priori and a posteriori knowledge or justification, either top-down by the distinction between independence of experience and dependence on experience or bottom-up by examples of each type. You argue that they do not yield a distinction of much epistemological interest because they neglect the cognitive role of the imagination in gaining knowledge. A different definition of apriority takes it to be epistemic necessity, and this might be a way to defend the theoretical significance of apriority against your examples of cases where experience plays an intermediate role. Do you have any objections to the definition of apriority as epistemic necessity?

TW: Yes, I have. It's circular. Epistemic necessity is just the dual of epistemic possibility: for something to be epistemically necessary is for it to be not epistemically possible that it is not the case. Thus epistemic necessity is just as relative to a stock of knowledge as epistemic possibility. When you specify which stock of knowledge you are relativizing epistemic necessity to in using it to define a priori knowledge, your answer will in effect be the same as David Chalmers': the relevant stock is a priori knowledge, which is just what we were trying to define! Any other choice for the relevant stock of knowledge would give incorrect results.
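
[The relativity TW describes can be made explicit as follows; the notation is an illustrative gloss, not drawn from the works under discussion. For a stock of knowledge K and a candidate possibility w:

w is epistemically possible relative to K iff every proposition in K is true in w;
p is epistemically necessary relative to K iff p is true in every w that is epistemically possible relative to K.

In operator form the two notions are duals, □_K p ↔ ¬◇_K ¬p, which is the duality invoked in the answer above; defining apriority as epistemic necessity then requires specifying K, and the only choice that gives the intended extension is the stock of a priori knowledge itself.]
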
HK: Do you think that Gödel's incompleteness theorems -- by demonstrating that if a system including primitive recursive arithmetic is consistent, then there are true sentences that are not provable in the system and that such systems cannot prove their own consistency -- might demonstrate that epistemic theorizing exceeds the computational capacity of a Turing machine?

TW: No, I don't. John Lucas and Roger Penrose have tried to make arguments like that, in which people much more expert than them in mathematical logic have identified fallacies. Their arguments are vague at crucial points. Some of the details get technical. A short version is this: Lucas and Penrose argue that they are better thinkers than any Turing machine (computer), but everything they say could have been said by a Turing machine about itself. Gödel himself was much more cautious than Lucas and Penrose in drawing philosophical conclusions from his mathematical results, but in the article you mentioned earlier ("Absolute Provability and Safe Knowledge of Axioms") I argue that even he went too far, because he overlooked a crucial ambiguity in the way he used the phrase 'the human mind' (as a competitor with Turing machines).

HK: A Fregean account of propositions identifies propositions with a composition of senses; a Russellian approach identifies propositions with complexes of objects and properties; a possible-worlds approach identifies propositions with sets of possible worlds; and a hyperintensional topic-sensitive approach identifies propositions with a fusion of topics or subject-matters, i.e. states or situations which are parts of worlds and which entrain the distinction between necessarily equivalent contents. Are you more sympathetic to Fregean, Russellian, possible-worlds, or hyperintensional topic-sensitive accounts of propositions?

TW: In my recent book Suppose and Tell: The Semantics and Heuristics of Conditionals (Oxford University Press, 2020), I come to the conclusion that the best view is the very coarse-grained one that propositions are simply sets of metaphysically possible worlds (that version will do for present purposes, though ideally one would use higher-order logic instead). All the other views introduce massive complications for very meagre rewards. The extra distinctions they make respect some cognitive distinctions between necessary equivalents, but no account of propositions will respect all such cognitive distinctions, since there are cognitive distinctions between synonymous words. For example, 'furze' and 'gorse' are synonyms, simply two words for the same natural kind, but a speaker can come to understand them in the usual way, by examples, and still not realize that they refer to the same thing. Moving to the idiolect of a single speaker doesn't solve the problem, since it is an illusion that the contents of one's own mind are transparent to one. Theories of fine-grained propositions end up being half-hearted projections of some but not all features of language onto the world. Fregean theories of propositions as thoughts (the senses of declarative sentences) try to capture all relevant cognitive distinctions but fail. The other fine-grained approaches try to avoid merely cognitive distinctions (as opposed to ones out there in the world), but it is not plausible that they succeed. 'Topics' and 'subject-matters' are primarily features of discourse. Russellian theories project the syntactic structure of sentences onto the language-independent entities they are supposed to express. As for a full-bloodedly fine-grained approach to individuating propositions, it turns out to be inconsistent, by what is known as the Russell-Myhill paradox. The best strategy is to work with simple, coarse-grained contents but, when merely cognitive differences matter, to deal with them openly, by explicitly referring to the vehicles of content, such as sentences, or sentences in contexts.

HK: Homotopy Type Theory is a recently developed homotopical interpretation of constructive intensional type theory and a new foundation of mathematics, where a homotopy is a continuous mapping or path between a pair of functions, the types can be interpreted as shapes, and shapes are composed of points and the paths between them. In "Alternative Logics and Applied Mathematics", you write that the Homotopy Type Theory book by the Univalent Foundations Program "presents homotopy type theory in a way which fails to enable applications outside itself. In effect, it presents homotopy type theory as failing to meet the condition of adequacy on foundations of mathematics". In "The Hole Argument in Homotopy Type Theory", Ladyman and Presnell examine an application of Homotopy Type Theory to physics. Despite this application, do you think that there are other arguments against the adequacy of Homotopy Type Theory as a foundation of mathematics?

TW: There is plenty of good mathematics in homotopy type theory, which can of course be applied to natural science in the usual way.
But when the theory is dressed up as a new foundation for mathematics, it is combined with philosophical claims about the mathematics on which those applications don't really make sense. For example, mathematics is claimed to be interpreted in terms of proofs, but mathematical equations about the physical world can't be proved in the relevant sense. Isomorphic systems are claimed to be identical, but physical systems can have the same structure without being identical. Part of the problem is that the foundational claims haven't been made logically rigorous. I suspect that we are dealing with a good but limited product surrounded by lots of hype and false advertising.

HK: Do you have a preferred approach to the philosophy of mathematics, beyond abductive methodology, i.e. non-deductive ampliative inference to the best explanation? E.g., set theory; category theory; abstractionism (according to which abstraction principles figure in the foundations of mathematical theories); full-blooded platonism (according to which all the mathematical objects that logically possibly could exist, do exist); ante rem structuralism (according to which mathematical objects are positions in platonic structures); modal structuralism (according to which mathematical objects are eschewed for possible structures); or nominalism more generally (according to which there are no abstract objects)?

TW: Nominalism strikes me as a crude prejudice. Abstractionism isn't powerful enough to do the required work. Some form of higher-order logic is needed to get the full intended effect of principles like mathematical induction. Structuralism is appropriate for most of mathematics. However, we shouldn't identify mathematical objects with positions in structures, for the natural principle of individuation for the latter is that the position of an object o1 in a structure S1 is the same as the position of an object o2 in a structure S2 if and only if φ(o1) = o2 for some isomorphism (structure-preserving mapping) φ from S1 to S2; but that principle implies that the complex numbers i and –i occupy the same position in (an instance of) the structure of the complex numbers (because complex conjugation is an automorphism of that structure, that is, an isomorphism between it and itself), so if we identify the complex numbers with the positions they occupy we conclude that i = –i, which is absurd. Anyway, structuralism can't be the whole story about the metaphysics of mathematics, since it doesn't work without a theory of structures, which cannot itself be treated in a structuralist way on pain of an infinite regress. How far we need to postulate distinctively mathematical abstract objects in addition to the apparatus of higher-order logic I'm not sure; but I am sure that we shouldn't approach the question burdened by nominalist prejudice.
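
[The individuation principle and the objection can be set out compactly; this is an illustrative paraphrase of the argument just given, not a quotation. Writing pos(o, S) for the position of object o in structure S:

pos(o1, S1) = pos(o2, S2) iff φ(o1) = o2 for some isomorphism φ from S1 to S2.

Complex conjugation, c(a + bi) = a – bi, is an automorphism of the complex field with c(i) = –i, so the principle yields pos(i, ℂ) = pos(–i, ℂ); identifying each complex number with the position it occupies would then give i = –i.]
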
HK: In Probabilistic Knowledge, Sarah Moss argues that some probabilistic beliefs or credences, i.e. subjective probabilities, can be knowledge, where the sources of such knowledge include testimony, perception, inference, and memory. Would you agree that credences can sometimes constitute knowledge?

TW: I have great respect for Sarah as a philosopher. I taught her when she was more or less starting the subject, doing the BPhil at Oxford after she had majored in mathematics at Harvard, and had the pleasure of watching her think of various ideas for herself that had made great reputations for philosophers a few generations before her. Later, when I visited the University of Michigan, we had long discussions of draft chapters of Probabilistic Knowledge. Her discussion of credences as knowledge is based on her highly original quasi-expressivist account of meaning in probabilistic terms, which she develops in that book. I have to say that I completely disagree with her about the relation between meaning and probability. When probability operators are not merely parenthetical evidential glosses, they are simply graded modal operators. There is nothing expressivist about the past-tense probability operator 'it was probable that …'—it just enables us to describe past probabilities—and 'it is probable that …' is simply its present-tense analogue.

HK: Intensionality permits the substitution salva veritate (keeping truth-value constant) of sentences that are necessarily equivalent. Hyperintensionality draws distinctions between necessarily equivalent sentences and eschews such substitution. In Modal Logic as Metaphysics, you write "Hyperintensionality arises at the level of thought and linguistic meaning, and should be explained at that level, not at the level of anything like a general theory of properties and relations. For present purposes, a coarser-grained intensional standard of individuation is more plausible, and certainly much simpler". Could you perhaps say more about the reasons for which you prefer intensionality over hyperintensionality in your choice of logic?

TW: My reasons are the same as those mentioned earlier for preferring an intensional account of propositions. Indeed, individuating properties and relations hyperintensionally is even more obviously a projection of linguistic or cognitive structure onto the world than in the case of propositions.

HK: Your ideas that philosophical methodology proceeds via abduction and model-building first show up explicitly in your book, Modal Logic as Metaphysics, and in the papers "Abductive Philosophy", "Model-building in Philosophy", and "Semantic Paradoxes and Abductive Methodology". Did anything in your research - such as the defense of Necessitism, the thesis that necessarily everything is necessarily something - lead you to be explicit about the significance of these methods?

TW: Even when I was a student at Oxford in the 1970s, my philosophical methodology was already abductivist. Michael Dummett, who was my supervisor for my final year of doctoral research (1979-80), identified that as the main methodological difference between us. When I was writing my dissertation, my closest philosopher friend was Peter Lipton, who was writing a dissertation that turned into his classic monograph Inference to the Best Explanation, and we often discussed the topic. In my book Vagueness (1994), I use an implicitly abductive methodology to defend classical logic for vague languages. I didn't say much about abduction in The Philosophy of Philosophy (2007), because I was preoccupied with other issues, but then often found myself adverting to abduction when discussing the book. The issue of theory choice came up quite saliently in Modal Logic as Metaphysics because many philosophers of logic have a perversely anti-abductive methodology for logic on which the weaker the logic, the better—that's connected with the illusion that logic should be a neutral referee in arguments between substantive theories. Taking that idea all the way results in a referee too weak to blow his whistle.

HK: A metaphysically fundamental fact can be defined as one that is an ungrounded feature of reality, where ground is an operator on facts expressing that a fact is explained by, or holds in virtue of, a distinct fact. In "Modal Science" and your "Reply to Sider", you examine what you call physical necessitism, an example of which is the role of modality in characterizing phase spaces. Sider objects that metaphysical modality is not metaphysically fundamental. Do you have an argument that objective modalities such as metaphysical modality are metaphysically fundamental, or do you think that such an argument can be avoided?

TW: I'm pretty sceptical about most of the grounding literature, and the hyperintensional ideology which tends to go with it, partly for reasons I've already given. It's largely motivated by examples, which often depend on unstated and unexamined crudely reductionist assumptions. The so-called 'hyperintensional revolution' has achieved nothing to compare with the explanatory successes of the intensional revolution in modal logic of the 1960s and 70s, with its deep technical results and its widespread applications outside philosophy (in computer science, theoretical economics, and semantics). I have seen no convincing argument that metaphysical modality isn't fundamental, and attempts to reduce it to something else tend to reduce it to things less tractable than metaphysical modality itself. Sider argues that abductive methods are applicable only close to the metaphysically fundamental level, but that strikes me as obviously wrong. Archaeology and police detection couldn't manage without abduction.

HK: In "Williamsonian Scepticism about the A Priori", Melis and Wright write with regard to your argument that the a priori - a posteriori distinction does not cut at the epistemic joints because of the centrality of cases involving the imagination where experience is more than enabling and less than evidential as defined in Question 3 above: "But we could propose in response that Williamson's observation calls for complication, not rejection; that we need a tripartite division. There is the traditional a priori: knowledge acquired by means in which experience plays only an enabling role. There is the traditional a posteriori, in which experience plays both an enabling and an evidential role. And there is a third category in which experience plays both an enabling and the third, intermediate role described by Williamson, but plays no evidential role ... A more satisfactory taxonomy, it may be suggested, might accordingly involve three types of propositions: (i) Propositions that can be known only through processes in which experience plays an evidential role – ordinary empirical propositions; (ii) Propositions that can be known through methods in which experience is involved merely in an enabling, concept-supplying role – the traditional analytic a priori; and (iii) Propositions that cannot be known merely on the basis of grasp of the concepts involved but can be known without reliance on experiential evidence, by routines which involve essential play with thought-experimentation or imagination and which rest on experience only in the intermediate role that Williamson gestures at." How would you respond to the suggestion that there ought to be a tripartite division between the a priori, the a posteriori, and the imagination, rather than the a priori - a posteriori distinction not being theoretically robust?

TW: On my view, their category (ii) would be empty.
In The Philosophy of Philosophy I argue at length against the analytic a priori (epistemological analyticity). Furthermore, the distinction between categories (i) and (iii) would be shallow, since they use much of the same experientially calibrated cognitive apparatus, online for (i) and offline for (iii).

HK: Do you think that infinity is complete or potential?

TW: Cantor was right: all potential infinity depends on actual infinity. The necessitism of Modal Logic as Metaphysics provides some support for that view.

HK: Are you sympathetic to the cumulative hierarchy view of sets, according to which there is only one universe of sets, or the multiverse view, according to which there are multiple universes no one of which is canonical?

TW: The multiverse view faces a version of the infinite regress challenge to structuralism-all-the-way-down, mentioned earlier. If there are multiple universes, we need an explicit theory of the plurality of universes. In first-order form, it too will have multiple models, which we may call 'meta-universes'. Then someone will say that there are multiple meta-universes, which will take us to meta-meta-universes, and so on.

HK: In "Voluntary Imagination: A Fine-grained Analysis", Canavotto, Berto, and Giordani suggest that the imagination should be modeled using intensions, i.e. functions from worlds to extensions, and a mereology of hyperintensional topics or subject-matters as defined in Question 5 above. Would you be sympathetic to availing of topics in order to countenance hyperintensionality, or perhaps a hyperintensional truthmaker semantics for conditionals according to which propositions are made true, i.e. verified, by states which are wholly relevant to them, by contrast to availing of possible worlds and conditionals in accounting for exercises of the imagination?

TW: No, I'm not at all sympathetic to such hyperintensional approaches, for reasons already mentioned. Of course, to understand the imagination in detail we must track fine-grained cognitive distinctions, but distorting the theory of content for that purpose is not a progressive move. As I've already explained, it will never exhaust all relevant cognitive differences. Rather, we must be willing to theorize explicitly about the vehicles of content (sentences, sentences in contexts, …) as well as the contents themselves. After all, the form of speech most closely associated with the imagination is poetry. Whatever filter we use to extract content from discourse, much of the poetry will be left behind (rhyme and metre are obvious examples), so the content will never exhaust what matters to the imagination. In the end, we can't do without the distinction between what is imagined and how it is imagined.