
“This is the perfect book for coverage of classic debates in mainstream philosophy

of logic. It’s also the perfect source for exceptionally clear reviews of standard
logical machinery (e.g., standard modal machinery, quantifier machinery, higher-
order machinery, etc.). Very user-friendly, clear, and accurate on all of the topics
that it covers, this is my new required text for classic debates in the philosophy of
logic.”
Jc Beall, University of Notre Dame

“John MacFarlane displays his usual lively and engaging writing style, and is neutral
on controversial issues, giving the arguments employed by both sides. It is an
excellent overview of some key topics in the field.”
Stewart Shapiro, Ohio State University
Philosophical Logic

Introductory logic is generally taught as a straightforward technical discipline.


In this book, John MacFarlane helps the reader think about the limitations of,
presuppositions of, and alternatives to classical first-order predicate logic, making
this an ideal introduction to philosophical logic for any student who already has
completed an introductory logic course.

The book explores the following questions. Are there quantificational idioms that
cannot be expressed with the familiar universal and existential quantifiers? How
can logic be extended to capture modal notions like necessity and obligation?
Does the material conditional adequately capture the meaning of ‘if’—and if not,
what are the alternatives? Should logical consequence be understood in terms of
models or in terms of proofs? Can one intelligibly question the validity of basic
logical principles like Modus Ponens or Double Negation Elimination? Is the fact
that classical logic validates the inference from a contradiction to anything a flaw,
and if so, how can logic be modified to repair it? How, exactly, is logic related to
reasoning? Must classical logic be revised in order to be applied to vague language,
and if so how? Each chapter is organized around suggested readings and includes
exercises designed to deepen the reader’s understanding.

Key Features:
• An integrated treatment of the technical and philosophical issues comprising
philosophical logic
• Designed to serve students taking only one course in logic beyond the introduc-
tory level
• Provides tools and concepts necessary to understand work in many areas of
analytic philosophy
• Includes exercises, suggested readings, and suggestions for further exploration
in each chapter

John MacFarlane is Professor of Philosophy and a member of the Group in Logic


and the Methodology of Science at the University of California, Berkeley. He is
the author of Assessment Sensitivity: Relative Truth and Its Applications (2014).
ROUTLEDGE CONTEMPORARY INTRODUCTIONS TO PHILOSOPHY

Series editor:
Paul K. Moser
Loyola University of Chicago

This innovative, well-structured series is for students who have already done an
introductory course in philosophy. Each book introduces a core general subject
in contemporary philosophy and offers students an accessible but substantial
transition from introductory to higher-level college work in that subject. The
series is accessible to non-specialists and each book clearly motivates and expounds
the problems and positions introduced. An orientating chapter briefly introduces
its topic and reminds readers of any crucial material they need to have retained
from a typical introductory course. Considerable attention is given to explaining
the central philosophical problems of a subject and the main competing solutions
and arguments for those solutions. The primary aim is to educate students in
the main problems, positions and arguments of contemporary philosophy rather
than to convince students of a single position.

Recently Published Volumes:

Philosophy of Language
3rd Edition
William G. Lycan

Philosophy of Mind
4th Edition
John Heil

Philosophy of Science
4th Edition
Alex Rosenberg and Lee McIntyre

Philosophy of Western Music


Andrew Kania

Phenomenology
Walter Hopp

Philosophical Logic
John MacFarlane

For a full list of published Routledge Contemporary Introductions to Phi-


losophy, please visit https://www.routledge.com/Routledge-Contemporary-
Introductions-to-Philosophy/book-series/SE0111
Philosophical Logic
A Contemporary Introduction

John MacFarlane
First published 2021
by Routledge
52 Vanderbilt Avenue, New York, NY 10017

and by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2021 John MacFarlane

The right of John MacFarlane to be identified as author of this work has been asserted
by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents
Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilized
in any form or by any electronic, mechanical, or other means, now known or hereafter
invented, including photocopying and recording, or in any information storage or
retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered


trademarks, and are used only for identification and explanation without intent to
infringe.

Library of Congress Cataloging-in-Publication Data


A catalog record for this title has been requested

ISBN: 978-1-138-73764-8 (hbk)


ISBN: 978-1-138-73765-5 (pbk)
ISBN: 978-1-315-18524-8 (ebk)

Typeset in EB Garamond, Garamond-Math, and Gill Sans by the author.

Publisher’s Note
This book has been prepared from camera-ready copy provided by the author.
Contents

List of Exercises xii


Preface xv
Acknowledgements xix

1 Fundamentals 1
1.1 Propositional logic 1
1.1.1 Grammar 1
1.1.2 Semantics 2
1.1.3 Proofs 6
1.1.4 Proof strategy 13
1.1.5 The relation of semantics and proofs 14
1.2 Predicate logic 15
1.2.1 Grammar 16
1.2.2 Scope 17
1.2.3 Semantics 17
1.2.4 Proofs 21
1.3 Identity 26
1.3.1 Grammar 28
1.3.2 Semantics 28
1.3.3 Proofs 28
1.4 Use and mention 29

2 Quantifiers 35
2.1 Beyond ∀ and ∃ 35
2.1.1 What is a quantifier? 35
2.1.2 Semantics of binary quantifiers 37
2.1.3 Most: an essentially binary quantifier 37
2.1.4 Unary quantifiers beyond ∀ and ∃ 38
2.1.5 Generalized quantifiers 39
2.2 Definite descriptions 39

2.2.1 Terms or quantifiers? 39


2.2.2 Definite descriptions and scope 41
2.2.3 Russell’s theory of descriptions 41
2.2.4 Proofs 43
2.3 Second-order quantifiers 44
2.3.1 Standard semantics for monadic second-order logic 46
2.3.2 Expressive limitations of first-order logic 47
2.3.3 Set theory in sheep’s clothing? 50
2.3.4 Boolos’s plural interpretation 52
2.3.5 Beyond monadic second-order logic 54
2.4 Substitutional quantifiers 57
2.4.1 Objectual and substitutional quantification 57
2.4.2 Nonexistent objects 58
2.4.3 Quantifying into attitude reports 59
2.4.4 Sentence quantifiers 60
2.4.5 Quantifying into quotes 61
2.4.6 Defining truth 61
2.4.7 Quantifying into quotes and paradox 62
2.4.8 The circularity worry 64

3 Modal Logic 67
3.1 Modal propositional logic 67
3.1.1 Grammar 67
3.1.2 Semantics 68
3.1.3 Modal logics from K to S5 70
3.1.4 Proofs 74
3.2 Modal predicate logic 80
3.2.1 Opaque contexts 80
3.2.2 Opaque contexts and quantification 81
3.2.3 The number of planets argument 82
3.2.4 Smullyan’s reply 83
3.3 The slingshot argument 85
3.3.1 Applications of slingshot arguments 87
3.3.2 The Gödel slingshot 87
3.3.3 Critique of the slingshot 88
3.4 Kripke’s defense of de re modality 90
3.4.1 Kripke’s strategy 90
3.4.2 The contingent a priori 91
3.4.3 The necessary a posteriori 93
3.4.4 Epistemic and alethic modals 94

4 Conditionals 97
4.1 The material conditional 97
4.1.1 Indicative vs. counterfactual 97
4.1.2 Entailments between indicatives and material conditionals 99
4.1.3 Thomson against the “received opinion” 100
4.2 No truth conditions? 101
4.2.1 Arguments for the material conditional analysis 102
4.2.2 Arguments against the material conditional analysis 102
4.2.3 Rejecting Or-to-if 104
4.2.4 Edgington’s positive view 105
4.2.5 Against truth conditions 107
4.3 Stalnaker’s semantics and pragmatics 109
4.3.1 Propositions, assertion, and the common ground 109
4.3.2 Semantics 110
4.3.3 Reasonable but invalid inferences 111
4.3.4 Contraposition and Hypothetical Syllogism 113
4.3.5 The argument for fatalism 114
4.4 Is Modus Ponens valid? 115
4.4.1 The intuitive counterexamples 116
4.4.2 McGee’s counterexamples as seen by Edgington 117
4.4.3 McGee’s counterexamples as seen by Stalnaker 119
4.4.4 Modus Ponens vs. Exportation 120

5 Logical Consequence via Models 123


5.1 Informal characterizations of consequence 123
5.1.1 In terms of necessity 123
5.1.2 In terms of proof 126
5.1.3 In terms of counterexamples 128
5.2 Tarski’s account of logical consequence 132
5.2.1 Tarski’s aim 132
5.2.2 Why proof-based approaches won’t work 132
5.2.3 Criteria of adequacy 135
5.2.4 The insufficiency of (F) 136
5.2.5 The semantic definition 137
5.2.6 Satisfying the criteria of adequacy 138
5.2.7 Logical constants 139
5.3 Interpretational and representational semantics 140

6 Logical Consequence via Proofs 145


6.1 Introduction rules as self-justifying 145

6.1.1 Carnap’s Copernican turn 146


6.1.2 Prior’s article 146
6.1.3 Stevenson’s response 147
6.1.4 Belnap’s Response 148
6.1.5 Prawitz’s Response 150
6.2 Prawitz’s proof-theoretic account of consequence 151
6.2.1 Arguments 152
6.2.2 Validity 152
6.2.3 ∧ Intro and Elim 153
6.2.4 ∨ Intro and Elim 154
6.2.5 Philosophical reflections 155
6.3 Intuitionistic logic 156
6.4 Kripke semantics for intuitionistic logic 159
6.5 Fundamental logical disagreement 162
6.5.1 Changing the subject? 163
6.5.2 Interpreting classical logic in intuitionistic logic 164
6.5.3 Interpreting intuitionistic logic in classical logic 166
6.5.4 Logical pluralism 167

7 Relevance, Logic, and Reasoning 169


7.1 Motivations for relevance logic 170
7.2 The Lewis Argument 171
7.2.1 Rejecting Disjunctive Weakening 172
7.2.2 Rejecting transitivity 173
7.2.3 Rejecting Disjunctive Syllogism 175
7.3 First-degree entailment 176
7.3.1 A syntactic procedure 176
7.3.2 The four-valued truth tables 180
7.4 Logic and reasoning 181
7.5 Uses for relevance logic 185
7.5.1 Dialetheism 186
7.5.2 The moderate approach 187
7.5.3 Truth in a corpus 188

8 Vagueness and the Sorites Paradox 191


8.1 What is vagueness? 191
8.2 Three-valued logics 194
8.2.1 Semantics for connectives 194
8.2.2 Defining validity in multivalued logics 196
8.2.3 Application to the sorites 196

8.3 Fuzzy logics 198


8.3.1 Semantics 199
8.3.2 Application to the sorites 199
8.3.3 Can we make sense of degrees of truth? 200
8.3.4 Troubles with degree-functionality 202
8.4 Supervaluations 203
8.4.1 Application to sorites 206
8.4.2 Higher-order vagueness 207
8.4.3 The logic of definiteness 208
8.5 Vagueness in the world? 209
8.5.1 Evans on vague identity 210
8.5.2 Evans and Quine 212

Appendix A Greek Letters 215


Appendix B Set-Theoretic Notation 217
Appendix C Proving Unrepresentability 219
References 223
Index 231
List of Exercises

1.1 Basic concepts 5


1.2 Deductions and invalidity 15
1.3 Translations 18
1.4 Semantics for predicate logic 22
1.5 Deductions for predicate logic 27
1.6 Identity 30
1.7 Quotation and quasiquotation 33

2.1 Infinite domains 38


2.2 Definite descriptions 45
2.3 Second-order quantifiers 51
2.4 Boolos’s translation scheme 55
2.5 Defining generalized quantifiers in second-order logic 57
2.6 Substitutional quantifiers 65

3.1 Semantics for modal logics 75


3.2 Modal natural deductions 79
3.3 Opaque contexts 82
3.4 Quine and Smullyan 84
3.5 The slingshot argument 89

4.1 Material conditionals 108


4.2 Stalnaker on conditionals 114

5.1 Logical consequence 143

6.1 Uniqueness of a connective 149


6.2 Prawitz’s definition of consequence 156
6.3 Intuitionistic logic 161
6.4 Intuitionistic and classical logic 167

7.1 Disjunctive Syllogism 176


7.2 Tautological entailments 179
7.3 Truth tables for first-degree entailment 181

8.1 Three-valued logics 197


8.2 Supervaluationism 206
8.3 The logic of 𝐷 209
8.4 The logic of ∆ and ∇ 211
Preface

If you tried to figure out what philosophical logic was by looking in the literature,
you might easily become confused. John Burgess characterizes philosophical logic
as a branch of formal logic: “Philosophical logic as understood here is the part of
logic dealing with what classical logic leaves out, or allegedly gets wrong” (Burgess
2009, p. 1). Sybil Wolfram, by contrast, sets philosophical logic apart from formal
logic: “Rather than setting out to codify valid arguments and to supply axioms and
notations allowing the assessment of increasingly complex arguments, it examines
the bricks and mortar from which such systems are built”—for example, meaning,
truth, and proposition (Wolfram 1989, pp. 2–3). Their textbooks, both entitled
Philosophical Logic, cover entirely different subject matters.
The root of this confusion is that the term ‘philosophical logic’ is ambiguous.
Just as ‘mathematical logic’ means both (a) the mathematical investigation of basic
notions of logic and (b) the deployment of logic to help with mathematical prob-
lems, so ‘philosophical logic’ means both (a) the philosophical investigation of the
basic notions of logic and (b) the deployment of logic to help with philosophical
problems. In the first sense, philosophical logic is the philosophy of logic: the
investigation of the fundamental concepts of logic. In the second sense, it consists
largely in the formal investigation of alternatives and extensions to classical logic.
It is common to avoid the ambiguity, as Wolfram and Burgess both do, by
using ‘philosophical logic’ in one sense or the other. But in this text we embrace
the ambiguity, introducing students to philosophical logic in both its senses. On
the one hand, students will consider philosophical questions about truth values,
logical consequence, de re modality, fundamental logical disagreement, and the
relation of logic to reasoning. And on the other hand, they will learn about modal
logic, intuitionistic logic, relevance logic, plural and substitutional quantifiers,
conditionals, and vagueness.
Why approach things this way, rather than focusing on one side of the ambigu-
ity? Because doing each well requires doing the other. For example, relevance logic
and intuitionistic logic are best motivated by reflection on the notion of logical
consequence and the way it is explicated in classical logic. The assessment of these
logics, too, depends on philosophical issues about logical consequence. So it is

quite artificial to separate the study of nonclassical logics from the philosophical
study of the basic notions of logic.
On the other hand, many of the philosophical questions about the basic build-
ing blocks of logic can only be properly discussed once we have some nonclassical
logics clearly in view. For example, thinking clearly about whether relevance
should be required for logical consequence requires understanding the tradeoffs
one would need to make in actually developing a relevance logic. Any discussion
of the meaning of truth values requires us to see the role truth values might play
in a multi-valued logic. And discussions of modality and propositions can be
illuminated by a close examination of the slingshot argument, which requires a
bit of instruction in modal logic and quantification.
This book is meant for advanced undergraduates or graduate students who
have taken a first course in symbolic logic. It aims to impart a sense of the limits
of first-order logic, a familiarity with and facility with logical systems, and an
understanding of some of the important philosophical issues that can be raised
about logic. A side benefit will be increased comprehension of work in other areas
of analytic philosophy, in which a certain amount of “logical culture” is often
taken for granted.
The chapters of this book have been arranged in what seems to me the most
sensible order, but it should be possible to plot various courses through the ma-
terial, as the chapters are only lightly coupled. Each chapter is built around some
readings, which should be read in conjunction with the chapter. (At the end of
the chapter there are some suggestions for further reading, for students who want
to go deeper.) Each chapter also contains exercises, which are designed to help
students think more deeply about the material. (Exercises marked with a ⋆ are
harder and more open-ended; in a course they might be made optional.)

Chapter 1: Fundamentals. We presuppose that readers will have had an intro-


ductory course in symbolic logic. But since introductory courses vary greatly
both in what they cover and in how they cover it, we begin with a brief review of
propositional logic and first-order predicate logic with identity, covering syntax,
semantics, natural deduction proofs, and definitions of basic concepts.

Chapter 2: Quantifiers. Students often have the impression that the existential and
universal quantifiers they learned in introductory logic suffice for the formalization
of all quantificational idioms. We will see that this isn’t the case and explore some
ways the machinery of quantification might be extended or reinterpreted. Topics
include definite descriptions, generalized quantifiers, second-order quantifiers,
and substitutional quantifiers.

Chapter 3: Modal Logic. In addition to talking about what is the case, we talk
about what might have been the case and what could not have been otherwise.
Modal logic gives us tools to analyze reasoning involving these notions. We will
acquire a basic grasp of the fundamentals of propositional modal logic (syntax,
semantics, and proofs), and look at some different ways the modalities might be
interpreted. We will then delve into some hairy conceptual problems surrounding
quantified modal logic, explored by W. V. O. Quine, Saul Kripke, and others. We
will also look at the famous slingshot argument, which was used by Quine and
Donald Davidson to reject modal logic and correspondence theories of truth.
(Assessing this argument will require bringing together our work on modal logic
with our work on quantifiers.)

Chapter 4: Conditionals. In introductory logic classes one is taught to translate


English conditionals using the material conditional, a truth-functional connective.
This leads to some odd results: for example, it implies that ‘If the coin landed
heads, Sam won ten dollars’ is true if the coin landed tails—even if Sam only
bet one dollar on heads. We will consider some attempts to defend the material-
conditional analysis of indicative conditionals in English. Then we will consider
two alternatives: Dorothy Edgington’s view that indicative conditionals have no
truth-conditions and Robert Stalnaker’s influential modal account. Finally, we
will look at Vann McGee’s “counterexample to Modus Ponens,” and consider
whether this sacrosanct inference rule is actually invalid.

Chapter 5: Logical Consequence via Models. Logic is sometimes described as


the study of what follows from what—that is, of logical consequence. But how
should we think of this relation? Different ways of explicating consequence seem
to have different implications for what follows from what, and hence different
implications for formal logic. In this chapter we will look at Alfred Tarski’s account
of logical consequence, which has become the orthodox account. On this account,
logical consequence is a matter of truth preservation: 𝑃 follows from 𝑄 if there is
no model in which 𝑃 is true and 𝑄 false. We will discuss how this account relates
to the older idea that 𝑃 follows from 𝑄 if it is impossible for 𝑃 to be true and 𝑄
false, and how it makes consequence relative to a choice of logical constants. We
will also consider some criticisms of this account.

Chapter 6: Logical Consequence via Proofs. Instead of thinking of the meaning of


logical constants semantically—for example, as truth functions—we might try to
understand them in terms of the stipulated rules governing their use. The logical
consequences of a set of premises can then be defined as the sentences that can
be proven from them using only the rules that define the logical constants. We

consider some classic objections to this strategy, and look at how Dag Prawitz
overcomes them in his proof-theoretic account of logical consequence. Prawitz’s
account yields a nonclassical logic, intuitionistic logic. The dispute between classi-
cal and intuitionistic logicians about basic inference forms like Double Negation
Elimination is a paradigm example of fundamental logical disagreement. We will
consider to what extent this disagreement can be thought of as a verbal one, about
the meanings of the logical connectives.

Chapter 7: Relevance, Logic, and Reasoning. Students often find it counterintuitive


that, in classical logic, anything follows from a contradiction. One source of
resistance is the idea that the premises of a valid argument must be relevant to the
conclusion. We will look at several ways to develop nonclassical logics that respect
this idea, with the aim of getting a sense of the costs of imposing a requirement of
relevance. Then we will look more carefully at the motivation for relevance logic,
with attention to how logic relates to reasoning, and to how a logic that allows
statements to be both true and false might be interpreted.

Chapter 8: Vagueness and the Sorites Paradox. The ancient sorites paradox, or
paradox of the heap, concludes that one grain of sand makes a heap, since 5000
grains of sand make a heap, and taking a single grain of sand from a heap cannot
make it a non-heap. Some philosophical logicians have suggested that it is a
mistake to use classical logic and semantics in analyzing this argument, and they
have proposed a number of alternatives. We will consider three of them: (a) a three-
valued logic, (b) a continuum-valued (or fuzzy) logic, and (c) a supervaluational
approach that preserves classical logic but not classical semantics. We will also look
at a short argument by Gareth Evans that purports to show that vagueness must
be a semantic phenomenon: that is, that there is no vagueness “in the world.”
Acknowledgements

This book had its genesis in a Philosophical Logic course I have been teaching at
Berkeley, on and off, for more than a decade. I am grateful to all of the students
who have taken this course for helping me see what works and what doesn’t. I am
also grateful to Kenny Easwaran, Fabrizio Cariani, Justin Bledin, Justin Vlasits,
and James Walsh, who all served as teaching assistants for the course and gave me
much helpful feedback.
For invaluable comments on the entire manuscript I am grateful to James
Walsh and an anonymous reviewer for Routledge. Wesley Holliday gave me useful
feedback on the proof system in Chapter 1. Andy Beck, my editor at Routledge,
deserves credit for proposing the project in the first place and helping it to comple-
tion. To typeset the book I relied on LaTeX, and in particular the excellent memoir
package.
I began the book while on sabbatical in Paris in the 2016/17 academic year. I
am very grateful for a fellowship from the Paris Institute for Advanced Studies,
with the financial support of the French State managed by the Agence Nationale
de la Recherche, programme “Investissements d’avenir,” (ANR-11-LABX-0027-
01 Labex RFIEA+), and the Fondation Maison des Sciences de l’Homme. I am
also grateful to UC Berkeley for a Humanities Research Fellowship.
My most basic debt is to Nuel Belnap. Without his brilliant logic pedagogy I
would never have gotten interested in philosophical logic.
1 Fundamentals

In general, the task of describing a logical system comes in three parts:


Grammar Describing what counts as a formula.

Semantics Defining truth in a model (and, derivatively, logical consequence and related notions).

Proofs Describing what counts as a proof.
In this chapter, we will go through these three parts for propositional logic, predi-
cate logic, and the logic of identity. We will also review the distinction between
use and mention and introduce Quine’s device of “quasiquotation,” which we
will need later to keep from getting confused.
We assume you have taken a first course in symbolic logic, covering proposi-
tional and predicate logic (but not metalogical results like soundness and complete-
ness). But, because such courses differ considerably in the symbols, terminology,
and proof system they use, a brief review of these fundamentals will help ensure
that everyone is on the same page with the basics.
Some or all of this may be old hat. Other things may be unfamiliar. Many logic
textbooks do not give a rigorous account of the semantics of first-order logic, and
many do not teach the Fitch-style natural deduction proofs used here. Sometimes
courses in predicate logic do not cover identity at all. Before going further in this
book, you should be comfortable doing exercises of the sort given in this chapter.

1.1 Propositional logic

1.1.1 Grammar

• A propositional constant (a capital letter, possibly with a numerical subscript)


is a formula. There are infinitely many propositional constants: 𝐴, 𝐵15 ,
𝑍731 , etc.
• ⊥ is a formula.

• If 𝑝 and 𝑞 are formulas, then (𝑝 ∨ 𝑞), (𝑝 ∧ 𝑞), (𝑝 ⊃ 𝑞), (𝑝 ≡ 𝑞), and ¬𝑝


are formulas.
• Nothing else is a formula.
You might have used different symbols in your logic class: & or • for conjunc-
tion, ∼ or – for negation, → for the conditional, ↔ for the biconditional. You
might not have seen ⊥ (called bottom, falsum, or das Absurde): we will explain its
meaning shortly.
The lowercase letters 𝑝 and 𝑞 are not propositional constants. They are used to
mark places where an arbitrary formula may be inserted into a schema: a pattern
that many different formulas can “fit” or “instantiate.” Here are some instances
of the schema (𝑝 ∧ (𝑝 ∨ 𝑞)):
(1) ((𝐴 ∧ 𝐵) ∧ ((𝐴 ∧ 𝐵) ∨ ¬𝐴))

(2) ((𝐴 ∨ ¬𝐵) ∧ ((𝐴 ∨ ¬𝐵) ∨ (𝐴 ∨ ¬𝐵)))
In (1), we substituted the formula (𝐴 ∧ 𝐵) for the letter 𝑝 in the schema, and we
substituted the formula ¬𝐴 for 𝑞. In (2), we substituted (𝐴 ∨ ¬𝐵) for both 𝑝 and
𝑞. Can you see why the following formulas are not instances of (𝑝 ∧ (𝑝 ∨ 𝑞))?
(3) (𝐴 ∧ (𝐵 ∨ 𝐶))
(4) (¬𝐴 ∨ (¬𝐴 ∧ 𝐵))
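
For readers who like to experiment, the schema-instance relation can also be made computational. The following Python sketch is purely illustrative (the nested-tuple representation of formulas and the function name are conventions adopted only for this aside, not part of the logical notation): it tests whether a formula instantiates a schema, requiring that the same formula be substituted for every occurrence of a given schematic letter.

# Formulas as nested tuples: propositional constants are strings like 'A' or
# 'B15'; compound formulas look like ('not', p), ('and', p, q), ('or', p, q),
# ('imp', p, q), ('iff', p, q).  Schematic letters are lowercase strings.

def is_instance(formula, schema, bindings=None):
    """Check whether `formula` instantiates `schema`, where each schematic
    letter must be replaced by the same formula at every occurrence."""
    if bindings is None:
        bindings = {}
    if isinstance(schema, str) and schema.isalpha() and schema.islower():
        # a schematic letter: bind it, or check consistency with its binding
        if schema in bindings:
            return bindings[schema] == formula
        bindings[schema] = formula
        return True
    if isinstance(schema, str):
        # a propositional constant in the schema must match exactly
        return formula == schema
    # a compound schema: same connective, and the parts must match in order
    return (isinstance(formula, tuple) and len(formula) == len(schema)
            and formula[0] == schema[0]
            and all(is_instance(f, s, bindings)
                    for f, s in zip(formula[1:], schema[1:])))

schema = ('and', 'p', ('or', 'p', 'q'))          # (p ∧ (p ∨ q))

ex1 = ('and', ('and', 'A', 'B'),
       ('or', ('and', 'A', 'B'), ('not', 'A')))  # formula (1) above
ex3 = ('and', 'A', ('or', 'B', 'C'))             # formula (3) above

print(is_instance(ex1, schema))   # True: (A ∧ B) plays p and ¬A plays q
print(is_instance(ex3, schema))   # False: p would have to be A in both places,
                                  # but the disjunction begins with B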

Convention for parentheses. The parentheses can get a bit bothersome, so we


will adopt the following conventions:
• Outer parentheses may be dropped: so, for example, 𝐴∨𝐵 is an abbreviation
for (𝐴 ∨ 𝐵).
• We will consider (𝑝 ∨ 𝑞 ∨ 𝑟) as an abbreviation for ((𝑝 ∨ 𝑞) ∨ 𝑟), and
(𝑝 ∧ 𝑞 ∧ 𝑟) as an abbreviation for ((𝑝 ∧ 𝑞) ∧ 𝑟).

1.1.2 Semantics

Logicians don’t normally concern themselves much with truth simpliciter. Instead,
they use a relativized notion of truth: truth in a model. You may not be familiar
with this terminology, but you should be acquainted with the idea of truth in a
row of a truth table, and in (classical) propositional logic, that is basically what
truth in a model amounts to.
A model is something that provides enough information to determine truth
values for all of the formulas in a language. How much information is required

depends on the language. In the simple propositional language we’re consider-


ing, we have a very limited vocabulary—propositional constants and a few truth-
functional connectives—and that allows us to use very simple models. When we
add quantifiers, and, later, modal operators, we will need more complex models.
What does it mean to say that a connective is truth-functional? It means that
the only information we need to determine the truth value of a compound formula
formed with one of these connectives is the truth values of the formulas it connects.
Thus, for example, all we need to know to determine the truth value of ¬𝐵 ∧ 𝐶
are the truth values of ¬𝐵 and 𝐶. And all we need to know to determine the truth
value of ¬𝐵 is the truth value of 𝐵. No further information about the meaning of
𝐵 is needed.
Because all of our connectives are truth functional, once the truth values of
the propositional constants are fixed, the truth values of all the formulas in the
language are fixed as a result. Because of this, a model for classical propositional
logic can be just an assignment of truth values (true or false) to each propositional
constant.
Although there are infinitely many propositional constants, usually we only
need to concern ourselves with a few of them—those that occur in the arguments
we’re analyzing. Suppose the formulas we’re looking at contain the constants 𝐴,
𝐵, and 𝐶. Then we can describe two different models (𝑣1 and 𝑣2 ) by describing
the truth values they give to these constants:

𝑣1 (𝐴) = True, 𝑣1 (𝐵) = False, 𝑣1 (𝐶) = False


𝑣2 (𝐴) = False, 𝑣2 (𝐵) = False, 𝑣2 (𝐶) = True

This notation is a bit tedious, though. We can present the same information in
tabular form:
       𝐴   𝐵   𝐶
𝑣1     𝑇   𝐹   𝐹
𝑣2     𝐹   𝐹   𝑇
You can see that a model is basically a row of a truth table.1
Why are logicians interested in truth in a model? Because all of the fundamental
semantic logical relations are defined in terms of it:
1
“Basically,” because in fact a row of a truth table represents infinitely many models that agree
on their assignments to the propositional constants represented in the table, but disagree on their
assignments to propositional constants not listed. We can safely ignore this subtlety for most purposes,
because assignments to propositional constants not contained in a formula are irrelevant to its truth
in a model.

An argument2 is valid iff there is no model in which all of its premises are
true and its conclusion false. In this case the conclusion is said to be a logical
consequence of the premises.
A formula 𝑝 implies another formula 𝑞 iff there is no model in which 𝑝 is
true and 𝑞 is false.
Two formulas are equivalent iff they have the same truth value in every
model.
A set of formulas is satisfiable iff there is a model in which all are true.
A formula 𝑝 is a logical truth if it is true in every model, a logical contradiction
or logical falsehood if it is false in every model, and logically contingent if it is
neither a logical truth nor a contradiction.
Sometimes the terms defined above are qualified to indicate the kind of models
we are considering. For example, when we are considering only models of classical
propositional logic, where all the connectives are truth-functional, we can talk of
“truth-functional validity,” “truth-functional equivalence,” and so on, to make
that clear. The term tautology is sometimes used for truth-functional logical truth.
As we’ve seen, in classical propositional logic, a model is just a row of a truth table.
So, in classical propositional logic, a tautology is a formula that is true in all rows
of a truth table; two formulas are equivalent iff they have the same truth values in
each row of a truth table, and so on.
To give the semantics of our language, we need to define truth in a model 𝑣 for
arbitrary formulas:
• When 𝑝 is a propositional constant, 𝑝 is true in 𝑣 iff 𝑣(𝑝) = True.
• ⊥ is not true in any model 𝑣.
• ¬𝑝 is true in 𝑣 iff 𝑝 is not true in 𝑣.
• 𝑝 ∧ 𝑞 is true in 𝑣 iff 𝑝 is true in 𝑣 and 𝑞 is true in 𝑣.
• 𝑝 ∨ 𝑞 is true in 𝑣 iff 𝑝 is true in 𝑣 or 𝑞 is true in 𝑣 (or both).
• 𝑝 ⊃ 𝑞 is true in 𝑣 iff 𝑝 is not true in 𝑣 or 𝑞 is true in 𝑣.
• 𝑝 ≡ 𝑞 is true in 𝑣 iff either both 𝑝 and 𝑞 are true in 𝑣 or neither 𝑝 nor 𝑞 is
true in 𝑣.
2
An argument, in the logician’s sense, is just a pair consisting of a set of premises and a conclusion.
This is a departure from the ordinary sense of ‘argument’, which is usually used either for a dispute or
for the reasoning that connects the premises with the conclusion. The logician’s notion of proof is
related to this latter sense.

Exercise 1.1: Basic concepts

1. Give an example of a truth-functional connective other than the usual


ones (conjunction, disjunction, negation, material conditional and bi-
conditional). Explain what makes it truth-functional. Give an exam-
ple of a non-truth-functional connective, and show that it is not truth-
functional.
2. Write out truth tables for the following formulas:
a) 𝑃 ∨ ¬(𝑅 ≡ 𝑆)
b) 𝑄 ∨ ¬(¬𝑄 ∧ ¬𝑅)
3. What does it mean to say that a formula of propositional logic is a tautol-
ogy? A contradiction? Contingent? In which categories do the following
formulas fall?
a) 𝑃 ⊃ (⊥ ⊃ ¬𝑃)
b) 𝑃 ∨ (𝑄 ∧ (¬𝑃 ∨ ¬𝑄))
Note: ⊥ is a special propositional constant (or, if you like, a 0-place
connective—a connective that takes 0 formulas and yields a new formula).
It is False in every model, so when you do your truth tables, you can just
write F in every row under ⊥.
4. Is the following set of formulas satisfiable?
𝑃 ⊃ 𝑄, 𝑄 ⊃ 𝑆, ¬𝑆 ⊃ ¬𝑃

5. What does it mean to say that two formulas are logically equivalent?
Give an (interesting) example of two logically equivalent formulas of
propositional logic.
6. Does 𝑃 ⊃ (𝑄 ∧ ¬𝑄) truth-functionally imply ¬𝑃 ∨ 𝑅? Does 𝑃 ⊃ (𝑄 ⊃
𝑅) truth-functionally imply 𝑅 ⊃ (¬𝑄 ⊃ ¬𝑃)?

These clauses, which express the information encoded in the classical truth tables,
determine a truth value for any formula built up from propositional constants
and ⊥ using ∧, ∨, and ¬. For example:
((𝐴 ∧ 𝐵) ∨ ¬𝐴) is true in 𝑣
iff (𝐴 ∧ 𝐵) is true in 𝑣 or ¬𝐴 is true in 𝑣 (by the clause for ∨)
iff (𝐴 is true in 𝑣 and 𝐵 is true in 𝑣) or ¬𝐴 is true in 𝑣 (by the clause for ∧)
iff (𝐴 is true in 𝑣 and 𝐵 is true in 𝑣) or 𝐴 is not true in 𝑣 (by the clause for ¬)
iff (𝑣(𝐴) = True and 𝑣(𝐵) = True) or it is not the case that 𝑣(𝐴) = True (by
the clause for propositional constants).
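
The recursive character of these clauses can be seen by implementing them directly. The following Python sketch is only an informal aid (the nested-tuple representation and the function name are not part of the official notation): a model is a dictionary assigning True or False to propositional constants, and the evaluator mirrors the clauses one by one, reproducing the calculation just given for ((𝐴 ∧ 𝐵) ∨ ¬𝐴) in the models 𝑣1 and 𝑣2 from the table above.

def true_in(formula, v):
    """Truth in a model v (a dict from propositional constants to booleans),
    following the semantic clauses one by one."""
    if formula == '⊥':                       # ⊥ is true in no model
        return False
    if isinstance(formula, str):             # a propositional constant
        return v[formula]
    op = formula[0]
    if op == 'not':
        return not true_in(formula[1], v)
    if op == 'and':
        return true_in(formula[1], v) and true_in(formula[2], v)
    if op == 'or':
        return true_in(formula[1], v) or true_in(formula[2], v)
    if op == 'imp':
        return (not true_in(formula[1], v)) or true_in(formula[2], v)
    if op == 'iff':
        return true_in(formula[1], v) == true_in(formula[2], v)
    raise ValueError(f'not a formula: {formula!r}')

# ((A ∧ B) ∨ ¬A) in the models v1 and v2 described above
formula = ('or', ('and', 'A', 'B'), ('not', 'A'))
v1 = {'A': True, 'B': False, 'C': False}
v2 = {'A': False, 'B': False, 'C': True}
print(true_in(formula, v1))   # False: B is false, so A ∧ B is false, and ¬A is false
print(true_in(formula, v2))   # True: A is not true, so ¬A is true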

1.1.3 Proofs

There are many different proof systems for propositional logic. Natural deduction
systems try to formalize patterns of ordinary logical reasoning. In your introduc-
tory logic course, you might have learned a “Lemmon-style system,” in which
numbers are used to keep track of the hypotheses on which a given line depends. I
favor “Fitch-style systems,” which keep track of undischarged assumptions using
vertical lines, rather than numbers.3 This geometrical presentation makes the
hypothesis structure of a proof more perspicuous.

Structural Rules
Fitch-style systems have two kinds of rules. First, there are structural rules, which
concern structural aspects of proofs and do not involve any specific connectives
or quantifiers.

Hyp
Any formula may be written down at any time, above a horizontal line. The justi-
fication may be written “Hyp” (for “hypothesis”), or the justification may simply
be omitted, since it is clear from the horizontal line itself. Alternatively, several
formulas may be simultaneously hypothesized, one per line, with a horizontal line
below them (see Example 1.1).

3
These are named after F. B. Fitch, who invented them (Fitch 1952). I learned Fitch-style deduc-
tions from Fitch’s student Nuel Belnap, and my presentation here draws on his unpublished textbook
(Belnap 2009) and on Barwise and Etchemendy 1999.

1 𝑆∧𝑇 Hyp
2 𝑄 Hyp
3 ⋮
4 ⋮
5 𝑅 Hyp
(1.1)
6 ⋮
7 𝑃 Hyp
8 ⋮
9 ⋮
10 ⋮

Each hypothesis begins a subproof , which we signify by a vertical line to the left
of the formulas. Subsequent steps in the subproof are considered to be proved
“under” the hypothesis (or hypotheses), not proved outright; that is, they are as-
serted as true under the supposition that the hypothesis is true, not as categorically
true. In Example 1.1, a formula at line 3, 4, or 10 is being asserted as true on
the assumptions 𝑆 ∧ 𝑇 and 𝑄; a formula at line 6 is being asserted as true on the
assumptions 𝑆 ∧ 𝑇, 𝑄, and 𝑅; and a formula at line 8 is being asserted as true on
the assumptions 𝑆 ∧ 𝑇, 𝑄, 𝑅, and 𝑃.
Subproofs may be nested. A subproof occurring inside another subproof is
said to be subordinate to it (and the containing subproof is superordinate to
the one contained). In Example 1.1, the subproof that extends from lines 5–9 is
subordinate to the subproof that extends from lines 1–10. And the subproof that
extends from lines 7–8 is subordinate to both the subproof that extends from
lines 5–9 and to the subproof that extends from lines 1–10.

Reit
The “Reit” rule allows you to reiterate any formula into any subordinate subproof.
Here is an example:

1 𝑆∧𝑇 Hyp
2 𝑅 Hyp
3 𝑆∧𝑇 Reit 1 (1.2)
4 𝑄 Hyp
5 𝑆∧𝑇 Reit 1

The Reit rule is needed because our rules for the connectives (to be given below)
require the premises to be in the same subproof. The natural deduction system you
learned may not have had a Reit rule: some systems allow rules to use premises in
superordinate subproofs without bringing them together into the same subproof
through explicit reiteration. In §3.1.4, when we study natural deduction systems
for modal logic, we will see the point of keeping track of reiteration explicitly.
To avoid tedium, if a formula can be derived using another rule, together
with one or more obvious applications of Reit, we will allow these rules to be
“collapsed,” with the justification mentioning both the other rule and “+ Reit.”
Thus, instead of

1 𝑃 Hyp
2 𝑅 Hyp
3 𝑄 Hyp (1.3)
4 𝑃 Reit 1
5 𝑃∧𝑄 ∧ Intro 3, 4

one can write:

1 𝑃 Hyp
2 𝑅 Hyp
(1.4)
3 𝑄 Hyp
4 𝑃∧𝑄 ∧ Intro + Reit 1, 3

Rules for propositional connectives


The remaining rules concern specific connectives. In general, we will have two
rules for each connective: one to introduce the connective, and one to eliminate
it. (These paired rules are sometimes called intelim rules.) The exceptions to
this generalization are ⊥, which has no introduction rule, and ¬, which has two
distinct elimination rules.

∧ Intro
If a subproof contains a formula 𝑝 and a formula 𝑞, you may write down 𝑝 ∧ 𝑞 in
the same subproof with the justification “∧ Intro.” Example:

1 𝑃 Hyp
2 𝑅∨𝑆 Hyp (1.5)
3 𝑃 ∧ (𝑅 ∨ 𝑆) ∧ Intro 1, 2

∧ Elim
If a formula 𝑝 ∧ 𝑞 occurs in a subproof, you may write down either 𝑝 or 𝑞 in the
same subproof with the justification “∧ Elim.” Example:

1 𝑅∧𝑆 Hyp
(1.6)
2 𝑆 ∧ Elim 1

⊃ Intro (Conditional Proof)


If a proof contains a subproof with a single hypothesis 𝑝 and last line 𝑞, you may
close the subproof and write, on the very next line, the conditional 𝑝 ⊃ 𝑞, with
the justification “⊃ Intro” (citing the lines of the subproof). Example:

1 ¬𝑃 ∧ (𝑃 ∧ 𝑅) Hyp
2 𝑃∧𝑅 ∧ Elim 1
(1.7)
3 𝑃 ∧ Elim 2
4 (¬𝑃 ∧ (𝑃 ∧ 𝑅)) ⊃ 𝑃 ⊃ Intro 1–3

Note that the vertical line indicating the subproof ends just before the line
containing the conditional conclusion (4). The hypothesis has been “discharged,”

and the conditional is no longer being asserted merely “under the hypothesis”
stated in line 1.
Be careful to add parentheses around the antecedent and consequent of the
conditional when needed to avoid ambiguity.

⊃ Elim (Modus Ponens)


If the formulas 𝑝 and 𝑝 ⊃ 𝑞 both occur in a subproof, you may write down 𝑞 in
the same subproof with the justification “⊃ Elim.” Example:

1 𝑃 Hyp
2 𝑃 ⊃ (𝑆 ∧ ¬𝑄) Hyp (1.8)
3 𝑆 ∧ ¬𝑄 ⊃ Elim 1, 2

≡ Intro
You prove a biconditional by combining two Conditional Proof subproofs, one
in each direction. Example:

1 𝑃∧𝑃 Hyp
2 𝑃 ∧ Elim 1
3 𝑃 Hyp (1.9)
4 𝑃∧𝑃 ∧ Intro 3, 3
5 (𝑃 ∧ 𝑃) ≡ 𝑃 ≡ Intro 1–4

≡ Elim
If the formulas 𝑝 and either 𝑝 ≡ 𝑞 or 𝑞 ≡ 𝑝 both occur in a subproof, you may
write down 𝑞 in the same subproof with justification “≡ Elim.” Example:

1 𝑃 Hyp
2 𝑃 ≡ (𝑆 ∧ ¬𝑄) Hyp (1.10)
3 𝑆 ∧ ¬𝑄 ≡ Elim 1, 2

⊥ Elim
If ⊥ occurs in a subproof, you may write down any formula in the same subproof,
with justification “⊥ Elim.” Example:

1 ⊥ Hyp
(1.11)
2 ¬(𝑃 ∨ 𝑄) ⊃ 𝑅 ⊥ Elim 1

The basic idea: “the absurd” proves anything. (Why, you ask? That is a question
we’ll return to later.)

¬ Intro
If a proof contains a subproof with hypothesis 𝑝 and last line ⊥, you may close off
the subproof and write, as the very next line, ¬𝑝, with the justification “¬ Intro”
(citing the lines of the subproof). Example:

1 ¬𝑃 ⊃ ⊥ Hyp
2 ¬𝑃 Hyp
(1.12)
3 ⊥ ⊃ Elim + Reit, 1, 2
4 ¬¬𝑃 ¬ Intro 2–3

Note: We can’t get 𝑃 directly by ¬ Intro, because it is not the negation of the
hypothesis (though it is equivalent to the negation of the hypothesis). To get 𝑃
we would need to use the ¬¬ Elim rule (described below).

¬ Elim
If a formula 𝑝 and its negation ¬𝑝 both occur in a subproof, you may write down
⊥ in the same subproof with justification “¬ Elim.” Example:

1 ¬(𝑃 ∧ 𝑅) Hyp
2 𝑃∧𝑅 Hyp (1.13)
3 ⊥ ¬ Elim 1–2

The basic idea is that “the absurd” can be derived directly from any pair of explicitly
contradictory formulas.

¬¬ Elim (Double Negation Elimination, DNE)


If a proof contains a ¬¬𝑝, you may write down 𝑝, with the justification “¬¬ Elim”
(citing the line of the doubly negated formula). Example:

1 ¬¬𝑃 Hyp
(1.14)
2 𝑃 ¬¬ Elim 1
This rule (also called Double Negation Elimination or DNE) is an anomaly in
that, unlike the other elimination rules, it removes two connectives. We will discuss
it further in §6.3.

∨ Intro
If a formula 𝑝 occurs in a subproof, then you may write down either 𝑝 ∨ 𝑞 or
𝑞 ∨ 𝑝 in the same subproof, with the justification “∨ Intro.” Example:

1 ¬𝑄 Hyp
(1.15)
2 𝑃 ∨ ¬𝑄 ∨ Intro 1

∨ Elim (Dilemma)
If a subproof contains a disjunction 𝑝∨𝑞 and immediately contains two subproofs,
the first hypothesizing 𝑝 and ending with 𝑟, the second hypothesizing 𝑞 and ending
with 𝑟, then you may write down 𝑟 in the same subproof, with justification “∨
Elim.”
Here is the pattern and a concrete example:

The pattern:

𝑝 ∨ 𝑞
   𝑝
   𝑟
   𝑞
   𝑟
𝑟

The concrete example:

1  𝐴 ∨ (¬𝐴 ∧ 𝐶)   Hyp
2    𝐴            Hyp
3    𝐴 ∨ 𝐶        ∨ Intro 2
4    ¬𝐴 ∧ 𝐶       Hyp
5    𝐶            ∧ Elim 4
6    𝐴 ∨ 𝐶        ∨ Intro 5
7  𝐴 ∨ 𝐶          ∨ Elim 1–6        (1.16)

1.1.4 Proof strategy

When trying to construct a proof, it is often helpful to start by looking at the main
connective of the conclusion, and asking what it would take to obtain it using
the introduction rule for that connective. By repeating this process one can often
derive the whole proof structure.
For example, suppose we are asked to prove 𝑅 ⊃ ((𝑃 ∧ 𝑅) ∨ (𝑄 ∧ 𝑅)) from
𝑃 ∨ 𝑄. The main connective of the conclusion is ⊃, so we can start by sketching
out an application of ⊃ Intro, leaving lots of space to fill it in:

𝑃∨𝑄 Hyp
𝑅 Hyp

??? (1.17)

(𝑃 ∧ 𝑅) ∨ (𝑄 ∧ 𝑅)
𝑅 ⊃ ((𝑃 ∧ 𝑅) ∨ (𝑄 ∧ 𝑅)) ⊃ Intro

Our problem is now reduced to that of deriving (𝑃 ∧ 𝑅) ∨ (𝑄 ∧ 𝑅) from 𝑃 ∨ 𝑄


and 𝑅. Can we use ∨ Intro? For that we’d need to have one of the disjuncts, either
𝑃 ∧ 𝑅 or 𝑄 ∧ 𝑅. There doesn’t seem to be any way to get either of these, so we
switch gears and try to work forwards, from the premises. We have a disjunction,
𝑃 ∨ 𝑄, so the natural thing to try is ∨ Elim: if we can derive (𝑃 ∧ 𝑅) ∨ (𝑄 ∧ 𝑅)
from 𝑃, and then from 𝑄, we can derive it from 𝑃 ∨ 𝑄.

We can sketch in what this would look like mechanically, leaving space for the
“guts” of the proof:

𝑃∨𝑄 Hyp
𝑅 Hyp
𝑃∨𝑄 Reit 1
𝑃 Hyp
???
(𝑃 ∧ 𝑅) ∨ (𝑄 ∧ 𝑅) (1.18)
𝑄 Hyp
???
(𝑃 ∧ 𝑅) ∨ (𝑄 ∧ 𝑅)
(𝑃 ∧ 𝑅) ∨ (𝑄 ∧ 𝑅) ∨ Elim
𝑅 ⊃ ((𝑃 ∧ 𝑅) ∨ (𝑄 ∧ 𝑅)) ⊃ Intro
Now it just remains to fill in the gaps marked ???. (𝑃∧𝑅)∨(𝑄∧𝑅) is a disjunction,
so the first thing to try is ∨ Intro. Can we get either of its disjuncts from 𝑃? Yes:

𝑃 Hyp
𝑅 Reit 2
(1.19)
𝑃∧𝑅 ∧ Intro
(𝑃 ∧ 𝑅) ∨ (𝑄 ∧ 𝑅) ∨ Intro
We leave it to the reader to complete the other gap in proof (1.18) and fill in the
line numbers.

1.1.5 The relation of semantics and proofs

Once we have a semantics and a proof system for our logic, we can ask questions
about how they are related. Ideally, we’d like to have the following two properties:
Our system is sound if, whenever 𝑞 can be proved from hypotheses 𝑝1 , … , 𝑝𝑛
in our proof system, 𝑞 is a logical consequence of 𝑝1 , … , 𝑝𝑛 .
Our system is complete if, whenever 𝑞 is a logical consequence of 𝑝1 , … , 𝑝𝑛 ,
𝑞 can be proved from hypotheses 𝑝1 , … , 𝑝𝑛 in our system.

Exercise 1.2: Deductions and invalidity

1. For each of the following arguments, either show that it is valid by giving
a proof in our system, or show that it is invalid by describing a model on
which the premises are true and the conclusion false:

𝐴 𝐴 ≡ (𝐵 ∨ 𝐶) 𝐴 ∨ (𝐵 ⊃ 𝐶)
a) 𝐵 ⊃ (𝐴 ⊃ 𝐵) b) 𝐴 ∨ 𝐵 c) 𝐵
𝐵 𝐴 𝐴∨𝐶

2. I’ve just completed a correct deduction of ¬𝑝 from 𝑞 in a sound deduc-


tion system. Can I conclude that 𝑞 does not truth-functionally imply 𝑝?
Why or why not?

In fact, our proof system does have both these properties relative to our seman-
tics. But this is not just obvious. It is something that has to be proved. (If you
take a course in metalogic, you can find out how this is done.)
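
Although soundness and completeness themselves must be established by metalogical argument, the semantic half of the comparison can be checked by brute force in propositional logic: to decide whether a conclusion is a logical consequence of some premises, search every model built from the relevant propositional constants for one that makes the premises true and the conclusion false. The following Python sketch is illustrative only (the function names are ours, and formulas are nested tuples as in the earlier sketches):

from itertools import product

def true_in(formula, v):
    """A compact truth-in-a-model evaluator (same representation as before)."""
    if formula == '⊥':
        return False
    if isinstance(formula, str):
        return v[formula]
    op, *parts = formula
    vals = [true_in(p, v) for p in parts]
    return {'not': lambda a: not a,
            'and': lambda a, b: a and b,
            'or':  lambda a, b: a or b,
            'imp': lambda a, b: (not a) or b,
            'iff': lambda a, b: a == b}[op](*vals)

def constants(formula):
    """The propositional constants occurring in a formula."""
    if formula == '⊥':
        return set()
    if isinstance(formula, str):
        return {formula}
    return set().union(*(constants(part) for part in formula[1:]))

def counterexample(premises, conclusion):
    """Return a model making every premise true and the conclusion false,
    or None if there is no such model (i.e. the argument is valid)."""
    letters = sorted(constants(conclusion).union(*(constants(p) for p in premises)))
    for values in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, values))
        if all(true_in(p, v) for p in premises) and not true_in(conclusion, v):
            return v
    return None

# ⊃ Elim is truth-preserving: P ⊃ Q, P ⊨ Q
print(counterexample([('imp', 'P', 'Q'), 'P'], 'Q'))   # None
# Affirming the consequent is not: P ⊃ Q, Q ⊭ P
print(counterexample([('imp', 'P', 'Q'), 'Q'], 'P'))   # {'P': False, 'Q': True}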

1.2 Predicate logic

The inference

(5)  Felix is a cat
     Something is a cat

is intuitively valid. But from the point of view of propositional logic, we can only
represent it as

(6)  𝐹
     𝑆

which is invalid. To capture the validity of (5), we need to be able to represent the
way in which sentences are composed out of names, predicates, and quantifiers,
as well as the sentential connectives. That calls for some new syntax.

1.2.1 Grammar

• A variable is a lowercase 𝑤, 𝑥, 𝑦, or 𝑧, possibly with a numerical subscript


(𝑥1 , 𝑦14 , etc.).
• An individual constant is a lowercase letter other than 𝑤, 𝑥, 𝑦, or 𝑧, possibly
with a numerical subscript.
• A term is an individual constant or a variable.
• A predicate is a capital letter other than 𝑊, 𝑋, 𝑌, or 𝑍, possibly with a
numerical subscript. Predicates can be classified as one-place, two-place,
and in general 𝑛-place, depending on how many argument places they have.
• A formula is any of the following:
– ⊥
– An atomic formula—an 𝑛-place predicate followed by 𝑛 terms. (For
example: 𝐹𝑥𝑦, 𝐺1 𝑎15 .)
– ∀𝛼𝜙 or ∃𝛼𝜙, where 𝛼 is a variable and 𝜙 is a formula.
– ¬𝜙, where 𝜙 is a formula.
– (𝜙 ∨ 𝜓), (𝜙 ∧ 𝜓), (𝜙 ⊃ 𝜓), or (𝜙 ≡ 𝜓), where 𝜙 and 𝜓 are formulas.

Nothing else is a formula.


Here we use Greek letters 𝜙, 𝜓, 𝜒, and 𝜉 as metavariables ranging over formulas.
Similarly, 𝛼 and 𝛽 range over terms, and Φ and Ψ range over predicates.4 They
are called metavariables because, like ‘𝑝’ and ‘𝑞’ in §1.1, they are not part of the
logical language we are describing (the object language), but the language we use
to talk about it (the use language or metalanguage).
The operators ∀𝛼 and ∃𝛼 are called quantifiers.5 ∀𝛼 is the universal quantifier
and may be read “everything is such that ….” ∃𝛼 is the existential quantifier and
may be read “at least one thing is such that ….” In translating from predicate logic
to English, we may start with these renderings and then find more colloquial ones.
For example:
4
See Appendix A for a guide to pronouncing the Greek letters used here.
5
You may have seen different notation: sometimes (𝑥) is used instead of ∀𝑥, and sometimes the
quantifier is surrounded by parentheses, as in (∀𝑥).

(7) ∀𝑥(𝐹𝑥 ⊃ 𝐺𝑥)


Everything is such that if it is 𝐹, then it is 𝐺.
All 𝐹s are 𝐺s.6
(8) ¬∃𝑥(∀𝑦𝑅𝑥𝑦)
It is not the case that at least one thing (𝑥) is such that everything (𝑦) is
such that it (𝑥) 𝑅s it (𝑦).
It is not the case that at least one thing is such that it 𝑅s everything.
There isn’t anything that 𝑅s everything.
Nothing 𝑅s everything.
Note that ∃𝑥∃𝑥𝐹𝑥 is a formula in our system, as is ∃𝑥𝐹𝑎. Such formulas may have
been disallowed in the logical system you learned, but our semantics for quantifiers
(§1.2.3) gives them a clear interpretation.

1.2.2 Scope

The scope of a quantifier is the formula directly following the quantifier:


• In ∀𝑥(𝐹𝑥 ⊃ 𝐺𝑥), the scope of the quantifier is the formula (𝐹𝑥 ⊃ 𝐺𝑥).
• In ∀𝑥𝐹𝑥 ⊃ 𝐺𝑥, the scope of the quantifier is the formula 𝐹𝑥. (Remember,
𝐹𝑥 ⊃ 𝐺𝑥 is not a formula, because it lacks the required parentheses. We
may omit these only at the outer level, as a convenience.)
• In ∀𝑥¬𝐹𝑥 ∨ 𝐺𝑎, the scope of the quantifier is the formula ¬𝐹𝑥.
A quantifier ∃𝛼 or ∀𝛼 will bind all occurrences of 𝛼 within its scope, except
those that are already bound by other quantifiers. A variable that is not bound
by a quantifier is called free. A formula containing free variables is called an open
formula. A formula without free variables is called a closed formula or sentence.
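
The notions of binding and freedom can be captured by a short recursive definition on the syntax. Here is an illustrative Python sketch (the tuple representation of formulas and the function names are conventions for this aside only) that computes the set of free variables of a formula; a formula is closed just in case this set is empty.

def is_variable(term):
    """Variables are w, x, y, z, possibly with a numerical subscript; other
    lowercase letters are individual constants."""
    return term[0] in 'wxyz'

def free_variables(formula):
    """The set of variables with at least one free occurrence in the formula.
    Atomic formulas are ('pred', 'F', term, ...); quantified formulas are
    ('forall', 'x', body) or ('exists', 'x', body)."""
    if formula == '⊥':
        return set()
    op = formula[0]
    if op == 'pred':
        return {t for t in formula[2:] if is_variable(t)}
    if op in ('forall', 'exists'):
        return free_variables(formula[2]) - {formula[1]}   # the quantifier binds its variable
    if op == 'not':
        return free_variables(formula[1])
    return free_variables(formula[1]) | free_variables(formula[2])   # binary connectives

# ∀xFx ⊃ Gx: the x in Gx lies outside the quantifier's scope, so it is free
phi = ('imp', ('forall', 'x', ('pred', 'F', 'x')), ('pred', 'G', 'x'))
print(free_variables(phi))   # {'x'}

# ¬∃x∀yRxy: a closed formula (sentence)
psi = ('not', ('exists', 'x', ('forall', 'y', ('pred', 'R', 'x', 'y'))))
print(free_variables(psi))   # set()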

1.2.3 Semantics

In §1.1, we described a model as something that provides enough information


to determine truth values for all of the formulas in a language. (In propositional
logic, this is just an assignment of truth values to the propositional constants.)
Now we must qualify that slightly: a model must determine truth values for all of
the closed formulas (sentences) in a language. Open formulas do not have truth
values.
A model for our language of predicate logic consists in
6
Although this is a standard rendering, you might find yourself wondering whether these last two
lines are really equivalent. We will revisit this issue in Chapters 2 and 4.

Exercise 1.3: Translations

1. Translate the following into logical notation. Provide a “dictionary” that


associates individual constants and predicate letters with English names
and predicates, and be sure to specify a domain.
a) There’s a woman who adopts every cat she meets.
b) Not all cats and dogs are furry.
c) Every dog despises at least one cat that has scratched one of its (the
dog’s) friends.
2. Translate the following into English (provide a dictionary—you may
make it up):
a) ¬∃𝑥(𝐿𝑥 ∧ ∀𝑦(𝑃𝑦 ⊃ 𝑆𝑥𝑦))
b) ∀𝑥((𝐹𝑥 ∧ ∀𝑦(𝐺𝑦 ⊃ 𝐻𝑥𝑦)) ⊃ ∃𝑧(𝐶𝑧 ∧ 𝐿𝑥𝑧))
3. In each of the following sentences, circle the free variables and draw
arrows from each of the bound variables to the quantifier that binds it.
a) ∀𝑥(𝐹𝑦 ⊃ 𝐺𝑥𝑦) ⊃ 𝐺𝑦𝑥
b) ∀𝑥∃𝑦(𝐺𝑥𝑦 ⊃ ∃𝑥𝐺𝑦𝑥)
c) ∀𝑥(𝐹𝑦 ∧ ∃𝑥𝐹𝑥)

• a nonempty set of objects—the domain, and


• an interpretation function, which assigns an interpretation to each individual
constant and predicate letter. More specifically, it maps
– each individual constant to an object in the domain
– each one-place predicate letter to a set of objects in the domain
– each two-place predicate letter to a set of ordered pairs of objects in
the domain
– each 𝑛-place predicate letter (𝑛 > 2) to a set of ordered 𝑛-tuples of
objects in the domain.

        ℳ1                   ℳ2                      ℳ3
𝐷       {1, 2, 3, Paris}     the set of integers     {𝑥 ∶ 𝑥 has played basketball}
𝐼(𝐹)    {1, 3}               {𝑥 ∶ 𝑥 > 0}             {𝑥 ∶ 𝑥 is Chinese}
𝐼(𝐺)    {⟨1, 2⟩, ⟨3, 3⟩}     {⟨𝑥, 𝑦⟩ ∶ 𝑥 > 𝑦}        {⟨𝑥, 𝑦⟩ ∶ 𝑥 is taller than 𝑦}
𝐼(𝑎)    Paris                1                       Michael Jordan
Table 1.1: Three models specified using set-theoretic notation. Here 𝐹 is a one-place
predicate, 𝐺 is a two-place predicate, and 𝑎 is an individual constant. The formula
𝐹𝑎 is true in ℳ2 (since 1 > 0) but not in ℳ1 (Paris is not a member of the set {1, 3})
or ℳ3 (Michael Jordan is not Chinese). The formula ∀𝑥(𝐹𝑥 ⊃ ∃𝑦𝐺𝑥𝑦) is true in
ℳ2 , since every integer greater than 0 is greater than some integer. Is it true in ℳ1 ?
What would you need to know in order to know if it is true in ℳ3 ?

In specifying a model, we’ll generally only write down the interpretations of


individual constants and predicate letters that are relevant for our purposes, in the
same way that we omitted irrelevant propositional constants when giving models
for propositional logic.
Table 1.1 gives some examples of models, using set-theoretic notation. (If you
are not familiar with this notation, see Appendix B for a quick primer.) We can
also specify models informally using pictures, as illustrated in Fig. 1.1.
We say that a sentence (closed formula) is true in a model just in case it is true
when the quantifiers are interpreted as ranging over objects in the domain (and
no others) and the individual constants and predicate letters are interpreted as
having just the extensions assigned to them by the interpretation function. (The
extension of an individual constant is the object it refers to; the extension of a
predicate is the set of objects it is true of, or for a relation, the set of tuples of
objects that satisfy it.)
To state this condition precisely, we need to define truth in a model for each
type of formula, as we did for propositional logic in §1.1.2. But here we hit a snag.
Truth in a model is defined only for closed formulas: an open formula such as
𝐹𝑥𝑎 cannot be said to be true or false in a model, because a model only interprets
predicates and individual constants, not variables. But quantified formulas like
∀𝑥𝐹𝑥𝑎 have open formulas as their parts. So we cannot do what we did before,
defining truth in ℳ for each formula in terms of truth in ℳ for its constituents.
The solution to this problem (due to Tarski 1935) is to start by defining truth
in a model on an assignment of values to the variables, and then define truth in a
model in terms of this. An assignment is a function that maps each variable to an
object in the domain. To avoid verbiage, we will adopt the following abbreviations:

[Two diagrams: ℳ4, whose objects are Fido, Rover, Fluffy, and Tiger, and ℳ5, whose objects are Fido, Rover, Fifi, Fluffy, and Tiger; the rectangles and arrows of the original figure are not reproduced here.]

Figure 1.1: Two models given pictorially. Here the rectangles are the 𝐷s and the
arrows represent the 𝐶 relation. The formula ∀𝑥(𝐷𝑥 ⊃ ∃𝑦𝐶𝑥𝑦) is true in ℳ4 but
not in ℳ5. To see this, it may help to interpret 𝐷𝑥 as ‘𝑥 is a dog’, and 𝐶𝑥𝑦 as ‘𝑥
chases 𝑦’. Try evaluating the formulas ∃𝑥∃𝑦(𝐷𝑥 ∧ ¬𝐷𝑦 ∧ 𝐶𝑥𝑦) and ∃𝑥𝐶𝑥𝑥 in
both models. Unless noted otherwise, it is assumed in such diagrams that the domain
comprises just the objects pictured.

⊨𝑣ℳ 𝜙 𝜙 is true in the model ℳ on the assignment 𝑣


⊭𝑣ℳ 𝜙 𝜙 is not true in the model ℳ on the assignment 𝑣
⟦𝛼⟧𝑣ℳ = 𝑣(𝛼) if 𝛼 is a variable
= 𝐼(𝛼) if 𝛼 is an individual constant.
We can now specify what it is for an arbitrary formula 𝜙 (open or closed) to be
true in a model ℳ = ⟨𝐷, 𝐼⟩ on an assignment 𝑣.
• If 𝜙 is an atomic formula Φ𝛼1 … 𝛼𝑛 , where Φ is an 𝑛-place predicate and
𝛼1 … 𝛼𝑛 are terms, ⊨𝑣ℳ 𝜙 iff ⟨⟦𝛼1 ⟧𝑣ℳ , … , ⟦𝛼𝑛 ⟧𝑣ℳ ⟩ ∈ 𝐼(Φ).
• If 𝜙 is ⊥, then ⊭𝑣ℳ 𝜙.
• If 𝜙 is ¬𝜓, then ⊨𝑣ℳ 𝜙 iff ⊭𝑣ℳ 𝜓.
• If 𝜙 is (𝜓 ∧ 𝜒), then ⊨𝑣ℳ 𝜙 iff ⊨𝑣ℳ 𝜓 and ⊨𝑣ℳ 𝜒.
• If 𝜙 is (𝜓 ∨ 𝜒), then ⊨𝑣ℳ 𝜙 iff ⊨𝑣ℳ 𝜓 or ⊨𝑣ℳ 𝜒.
• If 𝜙 is (𝜓 ⊃ 𝜒), then ⊨𝑣ℳ 𝜙 iff ⊭𝑣ℳ 𝜓 or ⊨𝑣ℳ 𝜒.
• If 𝜙 is (𝜓 ≡ 𝜒), then ⊨𝑣ℳ 𝜙 iff either ⊨𝑣ℳ 𝜓 and ⊨𝑣ℳ 𝜒 or ⊭𝑣ℳ 𝜓 and ⊭𝑣ℳ 𝜒.
• If 𝜙 is ∀𝛼𝜓, where 𝛼 is a variable, then ⊨𝑣ℳ 𝜙 iff for every assignment 𝑣′ that
agrees with 𝑣 on the values of every variable except possibly 𝛼, ⊨𝑣′ℳ 𝜓.

(The idea here is that we want to consider every way of assigning a value to
𝛼. This means looking at multiple assignments. But we only want to shift
the value of 𝛼, not other variables, so we only look at assignments that agree
with 𝑣 on all variables other than 𝛼.)
• If 𝜙 is ∃𝛼𝜓, where 𝛼 is a variable, then ⊨𝑣ℳ 𝜙 iff for some assignment 𝑣′ that
agrees with 𝑣 on the values of every variable except possibly 𝛼, ⊨𝑣′ℳ 𝜓.
Having defined the condition for any open or closed formula 𝜙 to be true in a
model ℳ on an assignment 𝑣, we can define truth in a model (not relativized to
an assignment) for closed formulas as follows:
A closed formula 𝜙 is true in a model ℳ iff for every assignment 𝑣, ⊨𝑣ℳ 𝜙.7
Once we have defined truth in a model in this way, we can define logical con-
sequence, logical truth, logical equivalence, logical independence, and so on by
quantifying over models, just as we did for propositional logic (see §1.1.2). The
only difference is that the models we quantify over are now more complicated.
Thus, for example, to show that an argument is invalid, we now need to find a
domain 𝐷 and interpretation 𝐼 such that the premises are true in 𝐷, 𝐼 but the
conclusion is false in 𝐷, 𝐼.
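
These clauses, too, can be implemented directly, which can help in checking examples like those in Table 1.1. The following Python sketch is merely illustrative (the representation and names are ours): a model is a pair of a domain and an interpretation dictionary, an assignment is a dictionary from variables to objects, and the quantifier clauses work by shifting the assignment on the quantified variable, exactly as in the definition above.

def is_variable(term):
    """Variables are w, x, y, z (possibly subscripted); other lowercase
    letters are individual constants."""
    return term[0] in 'wxyz'

def denotation(term, I, v):
    """⟦α⟧: the assignment interprets variables, the model interprets constants."""
    return v[term] if is_variable(term) else I[term]

def satisfies(M, formula, v):
    """True iff the formula is true in the model M = (domain, I) on assignment v."""
    domain, I = M
    if formula == '⊥':
        return False
    op = formula[0]
    if op == 'pred':                               # ('pred', 'F', term, ...)
        args = tuple(denotation(t, I, v) for t in formula[2:])
        return (args if len(args) > 1 else args[0]) in I[formula[1]]
    if op == 'not':
        return not satisfies(M, formula[1], v)
    if op == 'and':
        return satisfies(M, formula[1], v) and satisfies(M, formula[2], v)
    if op == 'or':
        return satisfies(M, formula[1], v) or satisfies(M, formula[2], v)
    if op == 'imp':
        return (not satisfies(M, formula[1], v)) or satisfies(M, formula[2], v)
    if op == 'iff':
        return satisfies(M, formula[1], v) == satisfies(M, formula[2], v)
    if op in ('forall', 'exists'):                 # ('forall', 'x', body)
        var, body = formula[1], formula[2]
        shifted = (satisfies(M, body, {**v, var: d}) for d in domain)
        return all(shifted) if op == 'forall' else any(shifted)
    raise ValueError(f'not a formula: {formula!r}')

def true_in_model(M, sentence):
    """Truth in a model for a closed formula."""
    return satisfies(M, sentence, {})

# A model in the spirit of ℳ1 from Table 1.1 (finite, so the relations are listed):
M1 = ({1, 2, 3, 'Paris'},
      {'F': {1, 3},
       'G': {(1, 2), (3, 3)},
       'a': 'Paris'})

# ∃x(Gxx ∧ Fx)
phi = ('exists', 'x', ('and', ('pred', 'G', 'x', 'x'), ('pred', 'F', 'x')))
print(true_in_model(M1, phi))   # True: 3 bears G to itself and is in the extension of F

For a closed formula the empty assignment suffices here, since every variable is interpreted by the quantifier that binds it before it is looked up (compare footnote 7 above).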

1.2.4 Proofs

All of the rules for propositional logic can be used in predicate logic as well, but
we need a few new rules to deal with quantifiers.

Substitution instances
A substitution instance of a quantified formula is the result of deleting the quanti-
fier and its associated variable, then replacing every variable bound by the quanti-
fier with the same individual constant. Thus, for example, 𝐹𝑎𝑎𝑏 is a substitution
instance of ∃𝑥𝐹𝑥𝑥𝑏 (replace every 𝑥 with 𝑎) and also of ∀𝑦𝐹𝑦𝑎𝑏 (replace every 𝑦
with 𝑎), but not of ∀𝑥𝐹𝑥𝑎𝑥.
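
Substitution instances can also be computed mechanically: delete the outer quantifier and replace just those occurrences of its variable that it binds (that is, the occurrences free in its scope) with the chosen constant. Here is a short, illustrative Python sketch (same tuple representation as before; the function names are ours):

def substitute(formula, var, const):
    """Replace the free occurrences of `var` in `formula` with `const`;
    occurrences inside a quantifier that rebinds `var` are left alone."""
    if formula == '⊥':
        return formula
    op = formula[0]
    if op == 'pred':                                    # ('pred', 'F', term, ...)
        return (op, formula[1]) + tuple(const if t == var else t
                                        for t in formula[2:])
    if op in ('forall', 'exists'):
        if formula[1] == var:                           # var is rebound here
            return formula
        return (op, formula[1], substitute(formula[2], var, const))
    # ¬ and the binary connectives: substitute in each part
    return (op,) + tuple(substitute(part, var, const) for part in formula[1:])

def substitution_instance(quantified, const):
    """Delete the outer quantifier and replace the variable it bound."""
    op, var, body = quantified
    assert op in ('forall', 'exists')
    return substitute(body, var, const)

# ∃xFxxb with a:  Faab
print(substitution_instance(('exists', 'x', ('pred', 'F', 'x', 'x', 'b')), 'a'))
# ∀xFxax with a:  Faaa, which is why Faab is not an instance of ∀xFxax
print(substitution_instance(('forall', 'x', ('pred', 'F', 'x', 'a', 'x')), 'a'))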

∀ Elim (Universal Instantiation)


You may write down any substitution instance of any universally quantified for-
mula that occurs in the same subproof, with the justification “∀ Elim” (citing the
line containing the quantified formula). Example:
7
We could have said “some assignment” instead of “every assignment”; it doesn’t matter, because
if 𝜙 is a closed sentence, its truth won’t vary from one assignment to the next.

Exercise 1.4: Semantics for predicate logic

1. For each of the three sample models in Table 1.1, above, say which of the
following sentences are true in that model:
a) ∃𝑥(𝐹𝑥 ∧ 𝐺𝑥𝑎)
b) ∃𝑥∃𝑦(𝐺𝑥𝑦 ∧ 𝐺𝑦𝑥)
c) ∃𝑥∀𝑦¬𝐺𝑦𝑥
2. Complete the definitions, using the first line as a paradigm:
a) A sentence is logically true iff it is true in all models.
b) A sentence is logically false iff …
c) Two sentences are logically equivalent iff …
d) One sentence logically implies another iff …
e) A sentence (𝑆) is a logical consequence of a set of sentences (Γ) iff …
f) An argument is logically valid iff …
3. Use models to show the following:
a) ∃𝑥∀𝑦𝐹𝑥𝑦 and ∀𝑦∃𝑥𝐹𝑥𝑦 are not logically equivalent.
b) (𝐹𝑎 ⊃ ∀𝑥𝐹𝑥) ⊃ 𝐹𝑏 is not a logical truth.
c) 𝐹𝑎 ∧ 𝐺𝑏 does not logically imply ∃𝑥(𝐹𝑥 ∧ 𝐺𝑥).

1 ∀𝑥∃𝑦𝐹𝑥𝑎𝑦 Hyp
2 ∃𝑦𝐹𝑎𝑎𝑦 ∀ Elim 1 𝑎/𝑥 (1.20)
3 ∃𝑦𝐹𝑏𝑎𝑦 ∀ Elim 1 𝑏/𝑥
Notes:
1. It is a very good habit to indicate which constant is replacing which variable,
as in the example.

2. There are no restrictions on which individual constant you use. Just be
sure you replace every occurrence of the bound variable with the same
constant. You can’t use ∀ Elim to go from ∀𝑥𝐹𝑥𝑎𝑥 to 𝐹𝑏𝑎𝑥, because not
every occurrence of the bound variable 𝑥 was replaced by 𝑏.
3. A universally quantified formula is a formula whose main connective is ∀.
You can’t use ∀ Elim to go from ∀𝑥𝐹𝑥 ∨ ∀𝑥𝐺𝑥 to 𝐹𝑎 ∨ 𝐺𝑎, because the
former is not a universally quantified formula (the main connective is ∨).

∃ Intro (Existential Generalization)


If a substitution instance of an existentially quantified formula occurs in a sub-
proof, you may write down the existentially quantified formula in the same sub-
proof, with the justification “∃ Intro” (citing the line containing the instance).
Example:
1 ∀𝑦𝐹𝑎𝑦𝑎 Hyp
2 ∃𝑥∀𝑦𝐹𝑥𝑦𝑎 ∃ Intro 1 𝑎/𝑥 (1.21)
3 ∃𝑥∀𝑦𝐹𝑥𝑦𝑥 ∃ Intro 1 𝑎/𝑥
Notes:
1. An existentially quantified formula is a formula whose main connective is
∃. ∃𝑥𝐹𝑥 ∨ 𝐺𝑎 is not an existentially quantified formula, and it can’t be
obtained by ∃ Intro from 𝐹𝑏 ∨ 𝐺𝑎.
2. Whereas with ∀ Elim you move from a quantified formula to an instance,
with ∃ Intro you move from an instance to a quantified formula.
3. Line 1 above is an instance of both 2 and 3.

∀ Intro (Universal Generalization)


You can derive a universally quantified formula ∀𝛼𝜙 from a subproof whose last
step is a substitution instance, with an individual constant in place of 𝛼, and
whose first step is a flagging step containing that individual constant in a box. The
justification is “∀ Intro” (citing the lines of the subproof).
A flagging step is like a hypothesis, but instead of a formula, it consists of an
individual constant in a box:

1 𝑎

There is one important restriction:



Flagging restriction The flagged constant may not occur outside of the sub-
proof where it is introduced.
So pick a constant that does not occur in the premises or conclusion or in any
previous flagging step.
The flagging step is a formal representation of “Take an arbitrary individual—
call it Joe.” We then argue that Joe has such and such a property, and since Joe was
arbitrary, the same could be shown about any object. The flagging restriction is
there to make sure the individual is really arbitrary, not one that you have already
said something about elsewhere in the proof.8
Example:

1 ∀𝑥(𝐺𝑥 ⊃ 𝐻𝑥) Hyp


2 ∀𝑥(𝐻𝑥 ⊃ 𝐹𝑥) Hyp
3 𝑎

4 𝐺𝑎 ⊃ 𝐻𝑎 ∀ Elim + Reit 1 𝑎/𝑥


5 𝐻𝑎 ⊃ 𝐹𝑎 ∀ Elim + Reit 2 𝑎/𝑥
(1.22)
6 𝐺𝑎 Hyp
7 𝐻𝑎 ⊃ Elim 4 + Reit 6
8 𝐹𝑎 ⊃ Elim 5 + Reit 7
9 𝐺𝑎 ⊃ 𝐹𝑎 ⊃ Intro 6–8
10 ∀𝑥(𝐺𝑥 ⊃ 𝐹𝑥) ∀ Intro 3–9 𝑎/𝑥

∃ Elim (Existential Instantiation)


If an existentially quantified formula occurs in a subproof, you may start a new
subproof with an instance as a hypothesis and the instantial constant “flagged”
in a box. You can close the new subproof at any point where you have a formula
not containing the flagged constant. This final formula may then be written
outside the subproof, with justification “∃ Elim”, citing the existentially quantified
formula and the subproof. As before, the flagged constant may not occur outside
of the subproof where it is introduced.

8 You may have learned a deduction system that does not require a subproof for ∀ Intro, instead
allowing ∀𝑥𝐹(𝑥) to be inferred directly from any instance 𝐹(𝑐), provided the constant 𝑐 is not used in
any undischarged assumptions. But requiring a subproof with a flagged constant makes the constant’s
role as denoting an arbitrary individual more explicit.

Example:

1 ∃𝑥(𝐺𝑥 ∧ 𝐻𝑎)
2 𝐺𝑏 ∧ 𝐻𝑎 𝑏 𝑏/𝑥
3 𝐺𝑏 ∧ Elim 2 (1.23)
4 ∃𝑥𝐺𝑥 ∃ Intro 3
5 ∃𝑥𝐺𝑥 ∃ Elim 1, 2–4
Notes:
1. We could not have closed off the subproof after line 3, since line 3 contains
the flagged constant 𝑏, which cannot occur outside the subproof.
2. We could not have used 𝑎 as our flagged term in line 2, since it occurs in
line 1.

Substitution rules
The introduction and elimination rules for the quantifiers and propositional
connectives, together with the structural rules, give us all we need for a complete
proof system. But to make quantificational proofs less tedious, we will also allow
the use of two more rules. Unlike the rules we have seen so far, these are substitution
rules, which allow one formula to be substituted for another, even if it is just part of
a larger formula.

QNE (Quantifier-Negation Equivalences)


You may use the following substitution rules at any point in a proof, citing “QNE”
and the line number as justification. They are all reversible. (See the examples to
follow.)
¬∀𝑥𝜙 ⟺ ∃𝑥¬𝜙
¬∃𝑥𝜙 ⟺ ∀𝑥¬𝜙
Examples:
1 ¬∃𝑥(𝐺𝑥 ∧ 𝐻𝑎)
(1.24)
2 ∀𝑥¬(𝐺𝑥 ∧ 𝐻𝑎) QNE 1

1 𝐻𝑎 ⊃ ∀𝑥¬𝐺𝑥
(1.25)
2 𝐻𝑎 ⊃ ¬∃𝑥𝐺𝑥 QNE 1

Note that QNE is applied to a subformula in example (1.25). The main connective
of line 1 there is ‘⊃’, not a quantifier. That’s okay, because the QNE rules are substitution
rules, not rules of inference.
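If you typed in the true_in sketch from the semantics section, you can also convince yourself of the first QNE equivalence by brute force over a small domain. This checks only finitely many interpretations, of course, so it illustrates the equivalence rather than proving it.

```python
# Brute-force sanity check of ¬∀xGx and ∃x¬Gx over a two-element domain,
# assuming the true_in function from the earlier sketch is in scope.
from itertools import chain, combinations

D = {1, 2}
singletons = [(d,) for d in D]
extensions = chain.from_iterable(
    combinations(singletons, k) for k in range(len(singletons) + 1))

lhs = ('not', ('forall', 'x', ('atom', 'G', ('x',))))
rhs = ('exists', 'x', ('not', ('atom', 'G', ('x',))))

for ext in extensions:
    I = {'G': set(ext)}
    assert true_in(lhs, D, I, {}) == true_in(rhs, D, I, {})
print('¬∀xGx and ∃x¬Gx agree on every interpretation over this domain')
```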

Taut Equiv (Tautological Equivalence)


What if you wanted to derive ∀𝑥(𝐺𝑥 ⊃ ¬𝐻𝑥) from ¬∃𝑥(𝐺𝑥 ∧ 𝐻𝑥)? Given the
rules we have so far, you’d have to take a circuitous path:

1 ¬∃𝑥(𝐺𝑥 ∧ 𝐻𝑥)
2 ∀𝑥¬(𝐺𝑥 ∧ 𝐻𝑥) QNE 1
3 𝑏

4 ¬(𝐺𝑏 ∧ 𝐻𝑏) ∀ Elim + Reit 2, 𝑏/𝑥


5 𝐺𝑏 Hyp
6 𝐻𝑏 Hyp (1.26)
7 𝐺𝑏 ∧ 𝐻𝑏 ∧ Intro + Reit 5, 6
8 ⊥ ¬ Elim + Reit 4, 7
9 ¬𝐻𝑏 ¬ Intro 6–8
10 𝐺𝑏 ⊃ ¬𝐻𝑏 ⊃ Intro 5–9
11 ∀𝑥(𝐺𝑥 ⊃ ¬𝐻𝑥) ∀ Intro 3–10, 𝑏/𝑥
To simplify this kind of proof, we introduce a new substitution rule, Taut
Equiv, that allows you to substitute truth-functionally equivalent formulas for
each other, even when they occur embedded inside quantifiers or other operators.
Then we can do:

1 ¬∃𝑥(𝐺𝑥 ∧ 𝐻𝑥)
2 ∀𝑥¬(𝐺𝑥 ∧ 𝐻𝑥) QNE 1 (1.27)
3 ∀𝑥(𝐺𝑥 ⊃ ¬𝐻𝑥) Taut Equiv 2
We’ll allow Taut Equiv only in proofs involving quantifiers.
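The step from line 2 to line 3 in (1.27) is licensed because ¬(𝐺𝑥 ∧ 𝐻𝑥) and 𝐺𝑥 ⊃ ¬𝐻𝑥 are truth-functionally equivalent when 𝐺𝑥 and 𝐻𝑥 are treated as unanalyzed sentence letters. A small truth-table check in Python (again, just an informal aid, not part of the proof system) confirms this.

```python
# Treating G and H as sentence letters, verify that ¬(G ∧ H) and G ⊃ ¬H
# (i.e. ¬G ∨ ¬H) get the same value on every row of the truth table.
from itertools import product

for g, h in product([True, False], repeat=2):
    assert (not (g and h)) == ((not g) or (not h))
print('¬(G ∧ H) and G ⊃ ¬H agree on all four rows')
```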

1.3 Identity

The following inference seems valid:



Exercise 1.5: Deductions for predicate logic

1. Use Fitch-style natural deductions to prove the following theorems:


a) ∀𝑥((𝐹𝑥 ∧ 𝐺𝑥) ⊃ 𝐹𝑥)
b) ¬∃𝑥(𝐹𝑥 ∧ 𝐺𝑥) ⊃ (∀𝑥𝐹𝑥 ⊃ ¬∃𝑥𝐺𝑥)
c) ∃𝑥∀𝑦∀𝑧𝐹𝑥𝑦𝑧 ⊃ ∀𝑦∀𝑧∃𝑥𝐹𝑥𝑦𝑧
2. Use Fitch-style natural deductions to prove the conclusion of each of the
following arguments from its premises:

a) Premises: ∃𝑥(𝑃𝑥 ∧ 𝑆𝑥) and ∀𝑥(𝑆𝑥 ⊃ 𝑅𝑥𝑏). Conclusion: ∃𝑥𝑅𝑥𝑏.
b) Premises: ∀𝑥(𝑃𝑥 ⊃ ∃𝑦𝐹𝑦𝑥) and ∀𝑥∀𝑦(𝐹𝑦𝑥 ⊃ 𝐿𝑦𝑥). Conclusion: ∀𝑥(𝑃𝑥 ⊃ ∃𝑦𝐿𝑦𝑥).

3. Use Fitch-style natural deductions to prove ∃𝑥¬𝑃𝑥 from ¬∀𝑥𝑃𝑥 without
using the QNE rules.

(9) There is at most one cat that is black.
    There are at least two cats.
    Therefore, there is at least one cat that is not black.

However, we cannot capture its validity using just the resources of basic predicate
logic. To represent its premises and conclusion, we will need to introduce a sign
for identity.
In ordinary language, when we say that two shirts are identical, we mean that
they are the same color, style, fit, and so on. In logic, the term ‘identity’ is used for
numerical identity: to say that 𝐴 is identical to 𝐵 is to say that they are the same
object. In the logical sense, Clark Kent and Superman are identical, but Clark
Kent is not identical with his twin who looks just like him.
As we will see, adding a sign for identity to predicate logic increases its expressive
power, allowing us to say things we couldn’t have said without it. Without an
identity sign, for example, we can’t say that there are at least two things that are 𝐹.
∃𝑥∃𝑦(𝐹𝑥 ∧ 𝐹𝑦) can be true even if there’s just one object in the domain that is 𝐹.
To say that there are at least two, we need to be able to say that 𝑥 and 𝑦 are not the
same: ∃𝑥∃𝑦(𝐹𝑥 ∧ 𝐹𝑦 ∧ ¬𝑥=𝑦).

1.3.1 Grammar

The identity sign (=) is a two-place predicate. By convention, we write one argu-
ment on the left and one on the right (as in 𝑎=𝑏). We should not let this convention
obscure the fact that grammatically = is just a two-place predicate, like the 𝐺 in
𝐺𝑎𝑏. We could just as well have written = 𝑎𝑏 or 𝐼𝑎𝑏.
Sometimes the nonidentity sign (≠) is also used. We can introduce it as a
defined term:
Nonidentity 𝛼 ≠ 𝛽 ≡ ¬(𝛼=𝛽)

1.3.2 Semantics

The extension of = in a model is the relation each thing bears to itself and to no
other thing (the identity relation). For example, if the domain is {1, 2, 3}, then
the extension of = is {⟨1, 1⟩, ⟨2, 2⟩, ⟨3, 3⟩}. That is not to say that all true identity
statements are the tautologous kind (𝑎=𝑎). 𝑎=𝑏 can be true in a model, provided
that 𝑎 and 𝑏 get assigned the same interpretation (the same object) in that model.
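In terms of the earlier executable sketch (with all its simplifying assumptions), the extension of ‘=’ is just the diagonal of the domain, and the point about expressive power made above can be checked directly: in a one-element model where the single object is 𝐹, the identity-free formula comes out true but the formula with ¬𝑥=𝑦 does not.

```python
# Continuing the earlier sketch (true_in assumed to be in scope): the extension
# of '=' in a model is the set of pairs ⟨d, d⟩ for d in the domain.
D = {1}
I = {'F': {(1,)}, '=': {(d, d) for d in D}}

both_F = ('exists', 'x', ('exists', 'y',
          ('and', ('atom', 'F', ('x',)), ('atom', 'F', ('y',)))))
two_F = ('exists', 'x', ('exists', 'y',
         ('and', ('and', ('atom', 'F', ('x',)), ('atom', 'F', ('y',))),
                 ('not', ('atom', '=', ('x', 'y'))))))

print(true_in(both_F, D, I, {}))   # True: x and y may both be the one object
print(true_in(two_F, D, I, {}))    # False: there is no second object
```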

1.3.3 Proofs

To do proofs with identity, we’ll need two new rules.


= Intro (Reflexivity) Where 𝛼 is an individual constant, you may write 𝛼=𝛼 on
any line of a proof, with justification “= Intro.”
= Elim (Substitution of Identicals) From premises 𝛼=𝛽 (or 𝛽=𝛼) and 𝜙, where 𝛼
and 𝛽 are individual constants and 𝜙 a sentence containing 𝛼, you may conclude
any formula 𝜓 that results from 𝜙 by replacing one or more occurrences of 𝛼
with 𝛽.
Note that both = Elim steps in the following proof are valid:

1 ∃𝑥𝑅𝑎𝑥𝑎 Hyp
2 𝑎=𝑏 Hyp
(1.28)
3 ∃𝑥𝑅𝑏𝑥𝑏 = Elim 1, 2
4 ∃𝑥𝑅𝑏𝑥𝑎 = Elim 1, 2

(3) is a valid step because it is the result of substituting 𝑏 for both occurrences of 𝑎
in line (1). (4) is a valid step because it is the result of substituting 𝑏 for the first
occurrence of 𝑎 in (1).
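Since = Elim lets you replace any one or more of the occurrences, several distinct conclusions are available from the same pair of premises. The throwaway function below (which operates on the bare string ‘∃xRaxa’ and so presupposes one-character constants; it is purely illustrative) enumerates them.

```python
# Everything = Elim licenses from ∃xRaxa and a=b: replace any nonempty
# subset of the occurrences of 'a' with 'b'.
from itertools import combinations

def elim_results(formula, old, new):
    positions = [i for i, ch in enumerate(formula) if ch == old]
    results = set()
    for k in range(1, len(positions) + 1):
        for chosen in combinations(positions, k):
            chars = list(formula)
            for i in chosen:
                chars[i] = new
            results.add(''.join(chars))
    return results

print(elim_results('∃xRaxa', 'a', 'b'))
# {'∃xRbxa', '∃xRaxb', '∃xRbxb'}, in some order
```

So besides lines (3) and (4), ∃𝑥𝑅𝑎𝑥𝑏 (replacing only the second occurrence of 𝑎) would also have been a legitimate step.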

This is all you need for proofs with identity. Here’s an example.

1 ∃𝑥(𝐹𝑥 ∧ 𝐺𝑥𝑏) Hyp


2 𝑎=𝑏 Hyp
3 ∃𝑥(𝐹𝑥 ∧ 𝐺𝑥𝑎) = Elim 1, 2
4 𝐹𝑐 ∧ 𝐺𝑐𝑎 𝑐 3 𝑐/𝑥
5 𝐺𝑐𝑎 ∧ Elim 4 (1.29)
6 𝑐=𝑐 = Intro
7 𝐺𝑐𝑎 ∧ 𝑐=𝑐 ∧ Intro 5,6
8 ∃𝑥(𝐺𝑥𝑎 ∧ 𝑥=𝑥) ∃ Intro 7 𝑐/𝑥
9 ∃𝑥(𝐺𝑥𝑎 ∧ 𝑥=𝑥) ∃ Elim 3, 4–8

1.4 Use and mention

The word ‘sentence’ is used to say things about sentences. But when I say
(10) The word ‘sentence’ has eight letters.
I am not saying anything about sentences; I’m talking instead about the word
‘sentence’, which I am not using but mentioning.
Here I have used the convention (introduced by Frege 1893) of putting a
phrase in single quotation marks to indicate that one is mentioning it. This is
just one of several conventions one might adopt. Some authors use italics to
indicate mention. And others just use the words autonymously—that is, as names
of themselves (Church 1956, §08; Carnap 2002, §42)—and leave it to the reader
to figure out from context when they are functioning as names of themselves and
when they are being used in their normal way. That is what we have done so far in
this chapter, and it works well when the language being used is different from the
language being discussed.
However, we will soon be discussing some issues where use/mention ambigu-
ities can lead to fallacious reasoning. So we will start being more explicit, using
single quotation marks to indicate mention. Thus, instead of writing, confusingly,
(11) a. Boston is a city. Boston is the name of a city.
b. An hour is longer than a minute, but minute is longer than hour.
c. An expressively complete logic can contain either and and not or or
and not.

Exercise 1.6: Identity

1. How would you express the following in predicate logic with identity?
a) Every logician loves someone other than herself.
b) The only one who respects Richard is Sue.
c) There are at least two rich dogs.
d) There are at most two smart dogs.
e) Liz is the tallest spy.
f) Liz is the tallest rider who roped at least two calves.
2. (a) Give a formula, using quantifiers and identity, that is true in every
model with a domain of one object and false in some model with a
domain of two objects. (b) Give a formula, not using quantifiers or
identity, that has this property.
3. Without an identity sign you can’t produce a sentence that says that
there are at least two 𝐹s. However, you can produce a sentence without
identity that is only true in models whose domains contain at least two
things that fall into the extension of 𝐹. Can you find one?
4. Prove that the following rules are valid. (Give a deduction.) Once you
have done this, you may use these derived rules to simplify proofs with
identity.
Symmetry: from 𝑎=𝑏, infer 𝑏=𝑎.
Transitivity: from 𝑎=𝑏 and 𝑏=𝑐, infer 𝑎=𝑐.
5. Prove 𝐹𝑎 ≡ ∃𝑥(𝐹𝑥 ∧ 𝑥=𝑎).
6. Suppose that you have a quantifier ∃𝑛 𝑥, meaning “there are at least 𝑛 𝑥…”
How could you define ∃𝑛+1 𝑥 in terms of ∃𝑛 𝑥?
7. Translate argument (9) from the beginning of this section, and give a
deduction to show that it is valid.
8. ⋆The identity sign is treated differently from other predicates in first-
order logic. Can you think of any reasons for this?

we will write
(12) a. Boston is a city. ‘Boston’ is the name of a city.
b. An hour is longer than a minute, but ‘minute’ is longer than ‘hour’.
c. An expressively complete logic can contain either ‘and’ and ‘not’ or
‘or’ and ‘not’.
In giving semantic clauses for logical connectives and operators, we have used
phrasing like this:
(13) Where 𝜙 and 𝜓 are formulas, 𝜙 ∧ 𝜓 is true in a model ℳ iff 𝜙 is true in ℳ
and 𝜓 is true in ℳ.
How can we rephrase this in a way that is more careful about use and mention?
Well, we might try:
(14) Where 𝜙 and 𝜓 are formulas, ‘𝜙 ∧ 𝜓’ is true in a model ℳ iff ‘𝜙’ is true in
ℳ and ‘𝜓’ is true in ℳ.
But this won’t work! Remember, 𝜙 and 𝜓 are variables whose values are formulas.
They are not formulas themselves. The expression
(15) ‘𝜙 ∧ 𝜓’
denotes the sequence of symbols:
(16) 𝜙 ∧ 𝜓
which is not, itself, a formula of the language we are describing.
How can we say what we want to say, then? Well, we could say this:
(17) Where 𝜙 and 𝜓 are formulas, the formula consisting of 𝜙 concatenated
with ‘∧’ concatenated with 𝜓 is true in a model ℳ iff 𝜙 is true in ℳ and
𝜓 is true in ℳ.
We could make this simpler by introducing a notation for concatenation (‘⌢’).
But the result is still pretty ugly:
(18) Where 𝜙 and 𝜓 are formulas, 𝜙 ⌢ ‘ ∧ ’ ⌢ 𝜓 is true in a model ℳ iff 𝜙 is
true in ℳ and 𝜓 is true in ℳ.
For this reason, W. V. O. Quine (1940) invented a device of quasiquotation or
corner quotes.9 Using corner quotes, we can write our semantic clause like this:
9 Quine’s device is used not just in philosophical logic, but in programming languages: Lisp, for
example, contains dedicated syntax for quasiquotation.

(19) Where 𝜙 and 𝜓 are formulas, ⌜𝜙 ∧ 𝜓⌝ is true in a model ℳ iff 𝜙 is true in
ℳ and 𝜓 is true in ℳ.
You can think of corner quotes as a notational shortcut:
(20) ⌜𝜙 ∧ 𝜓⌝
means just the same as
(21) 𝜙 ⌢ ‘ ∧ ’ ⌢ 𝜓
More intuitively, you can understand the corner quote notation as follows.
A corner-quote expression always denotes another expression, relative to an as-
signment of values to its variables. To find out what expression it denotes on
a given assignment of values to these variables, first replace the variables inside
corner quotes with their values (which should be expressions), then convert the
corner quotes to regular quotes. So, for example, when 𝜙 = ‘Cats are furry’ and 𝜓
= ‘Snow is black’, ⌜(𝜙 ∧ 𝜓)⌝ = ‘(Cats are furry ∧ Snow is black)’.
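Readers who know a little programming can mimic this two-step procedure very roughly in Python (which has no corner quotes; naive string replacement is safe here only because the metavariables are distinctive Greek letters, so this is an analogy rather than an implementation of Quine’s device).

```python
# A rough analogue of evaluating a corner-quoted expression: substitute the
# expressions assigned to the metavariables, then treat the result as a
# quoted expression (here, just a string).

def corner(template, assignment):
    """Value of a corner-quoted template relative to an assignment of
    expressions (strings) to metavariables."""
    result = template
    for var, expr in assignment.items():
        result = result.replace(var, expr)
    return result

assignment = {'φ': 'Cats are furry', 'ψ': 'Snow is black'}
print(corner('(φ ∧ ψ)', assignment))
# (Cats are furry ∧ Snow is black)
```

A genuine quasiquotation facility, like the Lisp backquote mentioned in the footnote, works on syntax trees rather than raw strings, so there is no danger of accidental matches.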
Suppose the baby is learning to talk, and says, ‘I like blue,’ ‘I like red,’ ‘I like
white,’ ‘I like green,’ and so on for all the color words she knows. To report what
happened in a compact way, you might say something like:
(22) For every color word 𝐶, she said, ‘I like 𝐶.’
Strictly speaking, though, this reports her as having said, ‘I like 𝐶’, not ‘I like blue’,
etc. To get the desired reading, you can use corner quotes:
(23) For every color word 𝐶, she said, ⌜I like 𝐶⌝.
which means just the same as:
(24) For every color word 𝐶, she said, ‘I like’ ⌢ 𝐶.

Further readings

For an excellent course in the fundamentals of first-order logic, with exercises, I
recommend Nuel Belnap’s unpublished Notes on the Art of Logic (Belnap 2009).

Exercise 1.7: Quotation and quasiquotation

1. Add quotation marks where they are needed in the following sentences
to mark mention:
a) Word is a four-letter word.
b) Boston denotes the name Boston, which denotes the city Boston.
c) We substitute 𝑎 + 3 for 𝑥, if 𝑎 + 3 is a prime number. (Carnap 2002,
§42)
2. Rewrite the following using corner-quote notation:
a) 𝜙 ⌢ ‘ + ’ ⌢ 𝜓 ⌢ ‘=’ ⌢ 𝜓
b) ‘∀’ ⌢ 𝛼 ⌢ 𝜙
3. Rewrite the following using regular quotes and the concatenation sign:
a) ⌜∃𝛼(𝜙 ∧ 𝜓)⌝
b) ⌜𝜙 ⊃ 𝜙⌝
c) ⌜𝜙⌝
4. Write the expression denoted by the following terms, under an assign-
ment of ‘𝐹𝑥’ to 𝜙, ‘(𝐹𝑥 ⊃ 𝐺𝑥)’ to 𝜓, and ‘𝑥’ to 𝛼:
a) ⌜𝜙 ∧ 𝜓⌝
b) ⌜∀𝛼(𝜓 ⊃ 𝜙)⌝
References

Ackermann, Wilhelm (1956). “Begründung einer strengen Implikation.” Journal of Symbolic
Logic 21, pp. 113–128.
Anderson, Alan Ross and Nuel D. Belnap Jr. (1975). Entailment. Vol. 1. Princeton: Prince-
ton University Press.
Anderson, Alan Ross, Nuel D. Belnap Jr., and J. Michael Dunn (1992). Entailment. Vol. 2.
Princeton: Princeton University Press.
Aristotle (1984). The Complete Works of Aristotle. The Revised Oxford Translation. Ed. by
Jonathan Barnes. Vol. 1. Princeton: Princeton University Press.
Barwise, Jon and Robin Cooper (1981). “Generalized Quantifiers and Natural Language.”
Linguistics and Philosophy 4, pp. 159–219.
Barwise, Jon and John Etchemendy (1999). Language, Proof and Logic. Palo Alto, CA:
CSLI.
Barwise, Jon and Solomon Feferman (1985). Model-Theoretic Logics. Berlin: Springer.
Barwise, Jon and John Perry (1981). “Semantic Innocence and Uncompromising Situa-
tions.” Midwest Studies in Philosophy 6, pp. 387–404.
Beall, Jc and Greg Restall (2006). Logical Pluralism. Oxford: Oxford University Press.
Belnap Jr., Nuel D. (1961). “Tonk, Plonk and Plink.” Analysis 22, pp. 130–134.
Belnap Jr., Nuel D. (2009). “Notes on the Art of Logic.” url: http://www.pitt.edu/
~belnap/nal.pdf.
Bennett, Jonathan (2003). A Philosophical Guide to Conditionals. Oxford: Oxford Univer-
sity Press.
Bolzano, Bernard (1929/1931). Wissenschaftslehre. Versuch einer ausführlichen und grössten-
theils neuen Darstellung der Logik mit steter Rücksicht auf deren bisherige Bearbeiter.
2nd ed. Leipzig: Felix Meiner.
Boolos, George (1975). “On Second-Order Logic.” Journal of Philosophy 72, pp. 509–527.
Boolos, George (1984). “To Be is to Be a Value of a Variable (or to Be Some Values of Some
Variables).” Journal of Philosophy 81, pp. 430–449.
Boolos, George (1985). “Nominalist Platonism.” Philosophical Review 94, pp. 327–344.
Burgess, John (2005). “No Requirement of Relevance.” In: The Oxford Handbook of Phi-
losophy of Mathematics and Logic. Ed. by Stewart Shapiro. Oxford: Oxford University
Press, pp. 727–750.
Burgess, John (2009). Philosophical Logic. Princeton: Princeton University Press.
Carnap, Rudolf (1956). Meaning and Necessity. 2nd ed. Chicago: University of Chicago
Press.

Carnap, Rudolf (2002). The Logical Syntax of Language. Trans. by A. Smeaton. Open
Court Classics. Peru, IL: Open Court.
Christensen, David (2004). Putting Logic in its Place: Formal Constraints on Rational
Belief. Oxford: Oxford University Press.
Church, Alonzo (1943). “Review of R. Carnap, Introduction to Semantics.” Philosophical
Review 52, pp. 298–304.
Church, Alonzo (1956). Introduction to Mathematical Logic. 2nd ed. Princeton: Princeton
University Press.
Coffa, J. Alberto (1975). “Machian Logic.” Communication and Cognition 8, pp. 103–129.
Coffa, J. Alberto (1991). The Semantic Tradition from Kant to Carnap. Cambridge: Cam-
bridge University Press.
Cook, Roy T. (2010). “Let a Thousand Flowers Bloom: A Tour of Logical Pluralism.”
Philosophy Compass 5, pp. 492–504.
Correia, Fabrice (2003). “Review of Stephen Neale, Facing Facts.” Dialectica 57, pp. 439–
444.
Davidson, Donald (1984). “True to the Facts.” In: Inquiries into Truth and Interpretation.
Oxford: Oxford University Press.
Dummett, Michael (1964). “Bringing About the Past.” Philosophical Review 73, pp. 338–
359.
Dummett, Michael (1978). “The Philosophical Basis of Intuitionistic Logic.” In: Truth
and Other Enigmas. Cambridge, MA: Harvard University Press, pp. 215–247.
Dummett, Michael (1979). Elements of Intuitionism. Oxford: Oxford University Press.
Dummett, Michael (1991). The Logical Basis of Metaphysics. Cambridge, MA: Harvard
University Press.
Dunn, J. Michael and Nuel D. Belnap Jr. (1968). “The Substitution Interpretation of the
Quantifiers.” Nous 2, pp. 177–185.
Dunn, J. Michael and Greg Restall (2002). “Relevance Logic.” In: Handbook of Philosophi-
cal Logic. Ed. by D. Gabbay and F. Guenthner. 2nd ed. Vol. 6. Dordrecht: Springer,
pp. 1–128.
Edgington, Dorothy (1993). “Do Conditionals Have Truth-Conditions?” In: A Philosoph-
ical Companion to First-Order Logic. Ed. by R. I. G. Hughes. Indianapolis: Hackett,
pp. 28–49.
Edgington, Dorothy (1997). “Vagueness by Degrees.” In: Vagueness: A Reader. Ed. by
Rosanna Keefe and Peter Smith. Cambridge, MA: MIT, pp. 294–316.
Edgington, Dorothy (2014). “Indicative Conditionals.” In: The Stanford Encyclopedia of
Philosophy. Ed. by Edward N. Zalta. Winter 2014. Metaphysics Research Lab, Stanford
University. url: https://plato.stanford.edu/archives/win2014/entries/
conditionals/.
Etchemendy, John (1990). The Concept of Logical Consequence. Cambridge, MA: Harvard
University Press.
Etchemendy, John (2008). “Reflections on Consequence.” In: New Essays on Tarski and
Philosophy. Ed. by Douglas Patterson. Oxford: Oxford University Press.
Evans, Gareth (1978). “Can There Be Vague Objects?” Analysis 38, p. 208.

Fara, Delia Graff (2000). “Shifting Sands: An Interest-Relative Theory of Vagueness.”
Philosophical Topics 28, pp. 45–81.
Field, Hartry (2000). “Indeterminacy, Degree of Belief, and Excluded Middle.” Nous 34,
pp. 1–30.
Field, Hartry (2009a). “Pluralism in Logic.” Review of Symbolic Logic 2, pp. 342–359.
Field, Hartry (2009b). “The Normative Role of Logic.” Proceedings of the Aristotelian
Society s.v. 83, pp. 251–268.
Fine, Kit (1997). “Vagueness, Truth and Logic.” In: Vagueness: A Reader. Ed. by Rosanna
Keefe and Peter Smith. Cambridge, MA: MIT, pp. 119–150.
Fitch, Frederic Brenton (1952). Symbolic Logic: An Introduction. New York: Ronald Press
Co.
Foley, Richard (1992). “The Epistemology of Belief and the Epistemology of Degrees of
Belief.” American Philosophical Quarterly 29, pp. 111–124.
Forbes, Graham (1985). The Metaphysics of Modality. Oxford: Oxford University Press.
Frege, Gottlob (1892). “Über Sinn und Bedeutung.” Zeitschrift für Philosophie und
philosophische Kritik 100, pp. 25–50.
Frege, Gottlob (1893). Grundgesetze der Arithmetik. Vol. 1. Jena: H. Pohle.
Frege, Gottlob (1980). “On Sense and Meaning.” In: Translations from the Philosophical
Writings of Gottlob Frege. Ed. and trans. by Peter Geach and Max Black. 3rd ed. Oxford:
Blackwell, pp. 56–78.
Geach, P. T. (1958). “Entailment.” Proceedings of the Aristotelian Society s.v. 32, pp. 157–
172.
Geach, P. T. (1970). “Entailment.” Philosophical Review 79, pp. 237–239.
Gibbard, Allan (1981). “Two Recent Theories of Conditionals.” In: IFS: Conditionals,
Belief, Decision, Chance, and Time. Ed. by William L. Harper, Robert Stalnaker, and
Glenn Pearce. Dordrecht: Reidel, pp. 211–247.
Gibbard, Allan (2003). Thinking How to Live. Cambridge, MA: Harvard University Press.
Girle, Rod (2000). Modal Logics and Philosophy. Montreal: McGill-Queen’s.
Gödel, Kurt (1944). “Russell’s Mathematical Logic.” In: The Philosophy of Bertrand Russell.
Ed. by P. A. Schilpp. Evanston: Northwestern University Press, pp. 125–153.
Goguen, J. A. (1969). “The Logic of Inexact Concepts.” Synthese 19, pp. 325–73.
Gómez-Torrente, Mario (1996). “Tarski on Logical Consequence.” Notre Dame Journal of
Formal Logic 37, pp. 125–151.
Gómez-Torrente, Mario (2002). “The Problem of Logical Constants.” Bulletin of Symbolic
Logic 8, pp. 1–37.
Goodman, Nelson (1955). Fact, Fiction, and Forecast. Cambridge, MA: Harvard University
Press.
Grice, H. P. (1989). “Logic and Conversation.” In: Studies in the Way of Words. Cambridge,
MA: Harvard University Press.
Harman, Gilbert (1984). “Logic and Reasoning.” Synthese 60, pp. 107–127.
Heyting, Arend (1956). Intuitionism: An Introduction. Amsterdam: North Holland Pub-
lishing.

Hughes, G. E. and M. J. Cresswell (1996). A New Introduction to Modal Logic. London:
Routledge.
Kant, Immanuel (1965). Critique of Pure Reason. Trans. by Norman Kemp Smith. New
York: St. Martin’s Press.
Katz, Bernard D. (1999). “On a Supposed Counterexample to Modus Ponens.” Journal of
Philosophy 96, pp. 404–415.
Keefe, Rosanna (2000). Theories of Vagueness. Cambridge: Cambridge University Press.
Keefe, Rosanna and Peter Smith, eds. (1997). Vagueness: A Reader. Cambridge, MA: MIT.
Kennedy, Christopher (2007). “Vagueness and Grammar: The Semantics of Relative and
Absolute Gradable Adjectives.” Linguistics and Philosophy 30, pp. 1–45.
Kleene, Stephen Cole (1952). Introduction to Metamathematics. New York: Van Nostrand.
Kolodny, Niko and John MacFarlane (2010). “Ifs and Oughts.” Journal of Philosophy 107,
pp. 115–143.
Kripke, Saul (1965). “Semantical Analysis of Intuitionistic Logic.” In: Formal Systems and
Recursive Functions. Ed. by J. Crossley and M. A. E. Dummett. Amsterdam: North
Holland Publishing, pp. 92–130.
Kripke, Saul (1976). “Is There a Problem about Substitutional Quantification?” In: Truth
and Meaning. Ed. by Gareth Evans and John McDowell. Oxford: Oxford University
Press.
Kripke, Saul (1980). Naming and Necessity. Cambridge, MA: Harvard University Press.
Lewis, C. I. and C. H. Langford (1959). Symbolic Logic. 2nd ed. New York: Dover.
Lewis, David (1968). “Counterpart Theory and Quantified Modal Logic.” Journal of
Philosophy 65, pp. 113–126.
Lewis, David (1973). Counterfactuals. Oxford: Basil Blackwell.
Lewis, David (1976). “Probabilities of Conditionals and Conditional Probabilities.” Philo-
sophical Review 85, pp. 297–315.
Lewis, David (1986). On the Plurality of Worlds. Oxford: Basil Blackwell.
Lewis, David (1988). “Vague Identity: Evans Misunderstood.” Analysis 48, pp. 128–130.
Lewis, David (1998). “Logic for Equivocators.” In: Papers in Philosophical Logic. Cam-
bridge: Cambridge University Press, pp. 97–110.
Lindström, Per (1966). “First Order Predicate Logic with Generalized Quantifiers.” Theoria
32, pp. 186–195.
Linsky, Bernard (2016). “The Notation in Principia Mathematica.” In: The Stanford Ency-
clopedia of Philosophy. Ed. by Edward N. Zalta. Fall 2016. Metaphysics Research Lab,
Stanford University. url: https://plato.stanford.edu/archives/fall2016/
entries/pm-notation/.
Linsky, Leonard (1972). “Two Concepts of Quantification.” Nous 6, pp. 224–239.
Loux, Michael J., ed. (1979). The Possible and the Actual. Ithaca, NY: Cornell University
Press.
MacFarlane, John (2006). “The Things We (Sorta Kinda) Believe.” Philosophy and Phe-
nomenological Research 73, pp. 218–224.

MacFarlane, John (2008). “Truth in the Garden of Forking Paths.” In: Relative Truth.
Ed. by Max Kölbel and Manuel Garcia-Carpintero. Oxford: Oxford University Press,
pp. 81–102.
MacFarlane, John (2010). “Fuzzy Epistemicism.” In: Cuts and Clouds. Ed. by Richard
Dietz and Sebastiano Moruzzi. Oxford: Oxford University Press, pp. 438–463.
MacFarlane, John (2014). Assessment Sensitivity: Relative Truth and Its Applications. Ox-
ford: Oxford University Press.
MacFarlane, John (2016). “Vagueness as Indecision.” Proceedings of the Aristotelian Society
s.v. 90, pp. 255–283.
MacFarlane, John (2017). “Logical Constants.” In: The Stanford Encyclopedia of Philosophy.
Ed. by Edward N. Zalta. Winter 2017. Metaphysics Research Lab, Stanford University.
url: https://plato.stanford.edu/archives/win2017/entries/logical-
constants/.
Marcus, Ruth Barcan (1962). “Interpreting Quantification.” Inquiry 5, pp. 252–259.
Marcus, Ruth Barcan (1972). “Quantification and Ontology.” Nous 6, pp. 240–250.
McGee, Vann (1985). “A Counterexample to Modus Ponens.” Journal of Philosophy 82,
pp. 462–471.
McGee, Vann and Brian McLaughlin (1998). “Review of Timothy Williamson, Vagueness.”
Linguistics and Philosophy 21, pp. 221–235.
McGee, Vann and Brian McLaughlin (2004). “Logical Commitment and Semantic Inde-
terminacy: A Reply to Williamson.” Linguistics and Philosophy 27, pp. 221–235.
Meyer, Robert K. (1971). “Entailment.” Journal of Philosophy 68, pp. 808–818.
Mostowski, Andrzej (1957). “On a Generalization of Quantifiers.” Fundamenta Mathe-
maticae 44, pp. 12–36.
Neale, Stephen (1990). Descriptions. Cambridge, MA: MIT Press.
Neale, Stephen (1995). “The Philosophical Significance of Gödel’s Slingshot.” Mind 104,
pp. 761–825.
Neale, Stephen (2001). Facing Facts. Oxford: Oxford University Press.
Normore, Calvin (1993). “The Necessity in Deduction: Cartesian Inference and Its Me-
dieval Background.” Synthese 96, pp. 437–454.
Ostertag, Gary, ed. (1998). Definite Descriptions: A Reader. Cambridge, MA: MIT Press.
Parry, W. T. (1932). “Implication.” PhD thesis. Harvard University.
Parry, W. T. (1933). “Ein Axiomensystem für eine neue Art von Implikation (analytische
Implikation).” Ergebnisse eines mathematischen Kolloquiums 4, pp. 4–6.
Platts, Mark (1979). Ways of Meaning. London: Routledge and Kegan Paul.
Prawitz, Dag (1985). “Remarks on Some Approaches to the Concept of Logical Conse-
quence.” Synthese 62, pp. 153–171.
Prawitz, Dag (2005). “Logical Consequence From a Constructivist Point of View.” In:
The Oxford Handbook of Philosophy of Mathematics and Logic. Ed. by Stewart Shapiro.
Oxford: Oxford University Press, pp. 671–695.
Prawitz, Dag (2006). “Meaning Approached via Proofs.” Synthese 148, pp. 507–524.
Price, Huw (1983). “Does ‘Probably’ Modify Sense?” Australasian Journal of Philosophy
61, pp. 396–408.

Priest, Graham (1979). “Two Dogmas of Quineanism.” Philosophical Quarterly 29,
pp. 289–301.
Priest, Graham (1998). “What Is So Bad About Contradictions?” Journal of Philosophy 95,
pp. 410–426.
Prior, A. N. (1960). “The Runabout Inference-Ticket.” Analysis 21, pp. 38–39.
Puryear, Stephen (2013). “Frege on Vagueness and Ordinary Language.” Philosophical
Quarterly 250, pp. 120–140.
Putnam, Hilary (1968). “Is Logic Empirical?” Boston Studies in the Philosophy of Science 5,
pp. 216–241.
Quine, W. V. O. (1940). Mathematical Logic. Cambridge, MA: Harvard University Press.
Quine, W. V. O. (1948). “On What There Is.” Review of Metaphysics 2, pp. 21–38.
Quine, W. V. O. (1951). “Two Dogmas of Empiricism.” Philosophical Review 60, pp. 20–
43.
Quine, W. V. O. (1960). Word and Object. Cambridge, MA: MIT Press.
Quine, W. V. O. (1961). “Reference and Modality.” In: From a Logical Point of View.
2nd ed. Cambridge, MA: Harvard University Press, pp. 139–159.
Quine, W. V. O. (1970). Philosophy of Logic. Cambridge, MA: Harvard University Press.
Quine, W. V. O. (1976). “Three Grades of Modal Involvement.” In: The Ways of Paradox
and Other Essays. Revised and enlarged. Cambridge, MA: Harvard University Press,
pp. 158–176.
Ray, Greg (1996). “Logical Consequence: A Defense of Tarski.” Journal of Philosophical
Logic 25, pp. 617–677.
Rayo, Agustín and Stephen Yablo (2001). “Nominalism through De-Nominalization.”
Nous 35, pp. 74–92.
Read, Stephen (1988). Relevant Logic: A Philosophical Examination of Inference. Oxford:
Basil Blackwell.
Read, Stephen (1994). “Formal and Material Consequence.” Journal of Philosophical Logic
23, pp. 247–265.
Rieger, Adam (2013). “Conditionals are Material: The Positive Arguments.” Synthese 190,
pp. 3161–3174.
Russell, Bertrand (1905). “On Denoting.” Mind 14, pp. 479–493.
Russell, Bertrand (1920). Introduction to Mathematical Philosophy. 2nd ed. London:
George Allen and Unwin.
Russell, Bertrand and Alfred North Whitehead (1910). Principia Mathematica. Vol. 1.
Cambridge: Cambridge University Press.
Sagüillo, José M. (1997). “Logical Consequence Revisited.” Bulletin of Symbolic Logic 3,
pp. 216–241.
Sainsbury, R. M. (1995). Paradoxes. 2nd ed. Cambridge: Cambridge University Press.
Schiffer, Stephen (2003). The Things We Mean. Oxford: Oxford University Press.
Shapiro, Stewart (1991). Foundations without Foundationalism: A Case for Second-order
Logic. Oxford: Clarendon Press.
Shapiro, Stewart (2006). Vagueness in Context. Oxford: Oxford University Press.
Shapiro, Stewart (2014). Varieties of Logic. Oxford: Oxford University Press.

Sher, Gila (1991). The Bounds of Logic: A Generalized Viewpoint. Cambridge, MA: MIT
Press.
Sher, Gila (1996). “Did Tarski Commit ‘Tarski’s Fallacy’?” Journal of Symbolic Logic 61,
pp. 653–686.
Smiley, Timothy (1959). “Entailment and Deducibility.” Proceedings of the Aristotelian
Society n.s. 59, pp. 233–254.
Smullyan, Arthur (1948). “Modality and Description.” Journal of Symbolic Logic 13,
pp. 31–37.
Soames, Scott (1999). Understanding Truth. Oxford: Oxford University Press.
Stalnaker, Robert (1975). “Indicative Conditionals.” Philosophia 5, pp. 269–286.
Stalnaker, Robert (1976). “Possible Worlds.” Nous 10, pp. 65–75.
Steinberger, Florian (2017). “The Normative Status of Logic.” In: The Stanford Encyclope-
dia of Philosophy. Ed. by Edward N. Zalta. Spring 2017. Metaphysics Research Lab,
Stanford University. url: https://plato.stanford.edu/archives/spr2017/
entries/logic-normative/.
Steinberger, Florian (2019). “Logical Pluralism and Logical Normativity.” Philosopher’s
Imprint 19, pp. 1–19.
Stevenson, J. T. (1961). “Roundabout the Runabout Inference-ticket.” Analysis 21,
pp. 124–128.
Strawson, P. F. (1958). “Review of G. H. von Wright, Logical Studies.” Philosophical
Quarterly 8, pp. 372–376.
Tarski, Alfred (1935). “Der Wahrheitsbegriff in den formalisierten Sprachen.” Studia
Philosophica 1, pp. 261–405.
Tarski, Alfred (1983a). “On the Concept of Logical Consequence.” In: Logic, Semantics,
Metamathematics. Ed. by John Corcoran. Indianapolis: Hackett, pp. 409–420.
Tarski, Alfred (1983b). “The Concept of Truth in Formalized Languages.” In: Logic, Se-
mantics, Metamathematics. Ed. by John Corcoran. Indianapolis: Hackett, pp. 152–
278.
Tarski, Alfred (1986). “What are Logical Notions?” History and Philosophy of Logic 7.
Ed. by John Corcoran, pp. 143–154.
Tennant, Neil (1994). “The Transmission of Truth and the Transitivity of Deduction.”
In: What Is a Logical System? Ed. by D. M. Gabbay. Oxford: Oxford University Press,
pp. 161–177.
Thomason, Richmond H. (1982). “Identity and Vagueness.” Philosophical Studies 42,
pp. 329–332.
Thomson, James F. (1990). “In Defense of ‘⊃’.” Journal of Philosophy 87, pp. 57–70.
Van Inwagen, Peter (1981). “Why I Don’t Understand Substitutional Quantification.”
Philosophical Studies 39, pp. 281–285.
Weatherson, Brian (2005). “True, Truer, Truest.” Philosophical Studies 123, pp. 47–70.
White, Morton (1987). “A Philosophical Letter of Alfred Tarski.” Journal of Philosophy 84,
pp. 28–32.
Williamson, Timothy (1987). “Equivocation and Existence.” Proceedings of the Aristotelian
Society n.s. 88, pp. 109–127.

Williamson, Timothy (1994). Vagueness. London: Routledge.


Wittgenstein, Ludwig (1958). Philosophical Investigations. 2nd ed. Oxford: Blackwell.
Wolfram, Sybil (1989). Philosophical Logic. London: Routledge.
Wright, Crispin (2001). “On Being in a Quandary: Relativism, Vagueness, Logical Revi-
sionism.” Mind 110, pp. 45–98.
Wright, G. H. von (1957). Logical Studies. London: Routledge.
