Monday, February 25, 2013

An Objection to Some Accounts of Self-Knowledge of Attitudes

You believe some proposition P. You believe that Armadillo is the capital of Texas, say.[footnote 1] Someone asks you what you think the capital of Texas is. You say, "In my opinion, the capital of Texas is Armadillo." How do you know that that is what you believe?

Here's one account (e.g., in Nichols and Stich 2003): You have in your mind a dedicated "Monitoring Mechanism". The job of this Monitoring Mechanism is to scan the contents of your Belief Box, finding tokens such as P ("Armadillo is the capital of Texas") or Q ("There's an orangutan in the fridge"), and producing, in consequence, new beliefs of the form "I believe P" or "I believe Q". Similarly, it or a related mechanism can scan your Desire Box, producing new beliefs of the form "I desire R" or "I desire S". Call this the Dedicated Mechanism Account.

One alternative account is Peter Carruthers's. Carruthers argues that there is no such dedicated mechanism. Instead, we theorize on the basis of sensory evidence and our own imagery. For example, I hear myself saying -- either aloud or in inner speech (which is a form of imagery) -- "The capital of Texas is Armadillo", and I think something like, "Well, I wouldn't say that unless I thought it was true!", and so I conclude that I believe that Armadillo is the capital. This theoretical, interpretative reasoning about myself is usually nonconscious, but in the bowels of my cognitive architecture, that's what I'm doing. And there's no more direct route to self-knowledge, according to Carruthers. We have to interpret ourselves given the evidence of our behavior, our environmental context, and our stream of imagery and inner speech.

Here's an argument against both accounts.

First, assume that to believe that P is to have a representation with the content P stored in a Belief Box (or memory store), i.e., ready to be accessed for theoretical inference and practical decision making. (I'm not keen on Belief Boxes myself, but I'll get to that later.) A typical deployment of P might be as follows: When Bluebeard says to me, "I'm heading off to the capital of Texas!", I call up P from my Belief Box and conclude that Bluebeard is heading off to Armadillo. I might similarly ascribe a belief to Bluebeard on that basis. Unless I have reason to think Bluebeard ignorant about the capital of Texas or (by my lights) mistaken about it, I can reasonably conclude that Bluebeard believes that he is heading to Armadillo. All parties agree that I need not introspect to attribute this belief to Bluebeard, nor call upon any specially dedicated self-scanning mechanism (other than whatever allows ordinary memory retrieval), nor interpret my own behavior and imagery. I can just pull up P to join it with other beliefs, and conclude that Q. Nothing special here or self-interpretive. Just ordinary cognition.

Now suppose the conclusion of interest -- the "Q" in this case -- is just "I believe that P". What other beliefs does P need to be hooked up with to license this conclusion? None, it seems! I can go straightaway, in normal cases, from pulling up P to the conclusion "I believe that P". If that's how it works, no dedicated self-scanning mechanism or self-interpretation is required, but only ordinary belief-retrieval for cognition, contra both Carruthers's view and Dedicated Mechanism Accounts.
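
Since the argument turns on a structural point -- that self-ascription needs no premise beyond P itself -- here it is in deliberately toy form: a "Belief Box" modeled as a bare set of stored sentences, with one ordinary retrieval routine serving both other-ascription and self-ascription. This is only an illustrative sketch of the argument's shape, not a model anyone is committed to; all the names in it are mine.

```python
# Toy sketch only: the "Belief Box" as a set of stored sentences.
belief_box = {
    "Armadillo is the capital of Texas",              # P
    "Bluebeard is heading to the capital of Texas",   # background belief
}

def retrieve(sentence):
    """Ordinary retrieval: pull a stored belief up for use in inference."""
    return sentence in belief_box

# Other-ascription: P joins with a further belief about Bluebeard.
if retrieve("Armadillo is the capital of Texas") and \
        retrieve("Bluebeard is heading to the capital of Texas"):
    print("Conclusion: Bluebeard believes he is heading to Armadillo.")

# Self-ascription: the very same retrieval step, with no further premise,
# no dedicated self-scanner, and no self-interpretation of behavior or imagery.
if retrieve("Armadillo is the capital of Texas"):
    print("Conclusion: I believe that Armadillo is the capital of Texas.")
```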

That will have seemed a bit fast, perhaps. So let's consider some comparison cases. Suppose Sally is the school registrar. I assume she has true beliefs about the main events on the academic calendar. I believe that final exams end on June 8. If someone asks me when Sally believes final exams will end, I can call up P1 ("exams end June 8") and P2 ("Sally has true beliefs about the main events on the academic calendar") to conclude Q ("Sally believes exams end June 8"). Self-ascription would be like that, but without P2 required. Or suppose I believe in divine omniscience. From P1 plus divine omniscience, I can conclude that God believes P1. Or suppose that I've heard that there's this guy, Eric Schwitzgebel, who believes all the same things I believe about politics. If P1 concerns politics, I can conclude from P1 and this knowledge about Eric Schwitzgebel that this Eric Schwitzgebel guy believes P1. Later I might find out that Eric Schwitzgebel is me.

Do I need to self-ascribe the belief that P1 before reaching that conclusion about the Eric Schwitzgebel guy? I don't see why I must. I know that moving from "P1 is true and concerns politics" to "that Eric Schwitzgebel guy believes P1" will get me true conclusions. I can rely on it. It might be cognitively efficient for me to develop a habit of thought by which I leap straight from one to the other.

Alternatively: Everyone thinks that I can at least sometimes ascribe myself beliefs as a result of inference. I subscribe to a general theory, say, on which if P1 and P2 are true of Person S and if P3 and P4 are true in general about the world, then I can conclude that S believes Q. Now suppose S is me. And suppose Q is "I believe P" and suppose P3 is P. And then jettison the rest of P1, P2, and P4. Voila![footnote 2]

If there is a Desire Box, it might work much the same way. If I can call up the desire R to join with some other beliefs and desires to form a plan, in just the ordinary cognitive way that desires are called up, so also it seems I should be able to do for the purposes of self-ascription. It would be odd if we could call up beliefs and desires for all the wide variety of cognitive purposes that we ordinarily call them up for but not for the purposes of self-ascriptive judgment. What would explain that strange incapacity?

What if there isn't a Belief Box, a Desire Box, or a representational storage bin? The idea remains basically the same: Whatever mechanisms allow me to reach conclusions and act based on my beliefs and desires should also allow me to reach conclusions about my beliefs and desires -- at least once I am cognitively sophisticated enough to have adult-strength concepts of belief and desire.

This doesn't mean I never go wrong and don't self-interpret at all. We are inconsistent and unstable in our belief- and desire-involving behavioral patterns; the opinions we tend to act on in some circumstances (e.g., when self-ascription or verbal avowal is our task) might very often differ from those we tend to act on in other circumstances; and it's a convenient shorthand -- too convenient, sometimes -- to assume that what we say, when we're not just singing to ourselves and not intending to lie, reflects our opinions. Nor does it imply that there aren't also dedicated mechanisms of a certain sort. My own view of self-knowledge is, in fact, pluralist. But among the many paths, I think, is the path above.

(Fans of Alex Byrne's approach to self-knowledge will notice substantial similarities between the above and his views, to which I owe a considerable debt.)

Update, February 27

Peter Carruthers replies as follows:

Eric says: “I can just pull up P to join it with other beliefs, and conclude that Q. Nothing special here or self-interpretive. Just ordinary cognition.” This embodies a false assumption (albeit one that is widely shared among philosophers; and note that essentially the same response to that below can be made to Alex Byrne). This is that there is a central propositional workspace of the mind where beliefs and desires can be activated and interact with one another directly in unconstrained ways to issue in new beliefs or decisions. In fact there is no such amodal workspace. The only central workspace that the mind contains is the working memory system, which has been heavily studied by psychologists for the last half-century. The emerging consensus from this work (especially over the last 15 years or so) is that working memory is sensory based. It depends upon attention directed toward mid-level sensory areas of the brain, resulting in globally broadcast sensory representations in visual or motor imagery, inner speech, and so on. While these representations can have conceptual information bound into them, it is impossible for such information to enter the central workspace alone, not integrated into a sensory-based representation of some sort.

Unless P is an episodic memory, then (which is likely to have a significant sensory component), or unless it is a semantic memory stored, at least in part, in sensory format (e.g. a visual image of a map of Texas), then the only way for P to “join with other beliefs, and conclude that Q” is for it to be converted into (say) an episode of inner speech, which will then require interpretation.

This is not to deny that some systems in the mind can access beliefs and draw inferences without those beliefs needing to be activated in the global workspace (that is, in working memory). In particular, goal states can initiate searches for information to enable the construction of plans in an “automatic”, unconscious manner. But this doesn’t mean that the mindreading system can do the same. Indeed, a second error made by Eric in his post is a failure to note that the mindreading system bifurcates into two (or more) distinct components: a domain-specific system that attributes mental states to others (and to oneself), and a set of domain-general planning systems that can be used to simulate the reasoning of another in order to generate predictions about that person’s other beliefs or likely behavior. On this Nichols & Stich and I agree, and it provides the former the wherewithal to reply to Eric’s critique also. For the “pulling up of beliefs” to draw inferences about another’s beliefs takes place (unconsciously) in the planning systems, and isn’t directly available to the domain-specific system responsible for attributing beliefs to others or to oneself.

Peter says: "Unless P is an episodic memory... or unless it is a semantic memory stored, at least in part, in sensory format (e.g. a visual image of a map of Texas), then the only way for P to “join with other beliefs, and conclude that Q” is for it to be converted into (say) an episode of inner speech, which will then require interpretation." I don't accept that theory of how the mind works, but even if I did accept that theory, it seems now like Peter is allowing that if P is a "semantic memory" stored in partly "sensory format" it can join with other beliefs to drive the conclusion Q without an intermediate self-interpretative episode. Or am I misunderstanding the import of his sentence? If I'm not misunderstanding, then hasn't just given me all I need for this step of my argument? Let's imagine that "Armadillo is the capital of Texas" is stored in partly sensory format (as a visual map of Texas with the word "Armadillo" and a star). Now Peter seems to be allowing that it can drive inferences without requiring an intermediate act of self-interpretation. So then why not allow it to drive also the conclusion that I believe that Armadillo is the capital? We're back to the main question of this post, right?

Peter continues: "This is not to deny that some systems in the mind can access beliefs and draw inferences without those beliefs needing to be activated in the global workspace (that is, in working memory). In particular, goal states can initiate searches for information to enable the construction of plans in an “automatic”, unconscious manner. But this doesn’t mean that the mindreading system can do the same." First, let me note that I agree that the fact that some systems can access stored representations without activating those representations in the global workspace doesn't stricly imply that the mindreading system (if there is a dedicated system, which is part of the issue in dispute) can also do so. But I do think that if, for a broad range of purposes, we can access these stored beliefs, it would be odd if we couldn't do so for the purpose of reaching conclusions about our own minds. We'd then need a pretty good theory of why we have this special disability with respect to mindreading. I don't think Peter really offers us as much as we should want to explain this disability.

... which brings me to my second reaction to this quote. What Peter seems to be presenting as a secondary feature of the mind -- "the construction of plans in an 'automatic', unconscious manner" -- is, in my view, the very heart of mentality. For example, to create inner speech itself, we need to bring together a huge variety of knowledge and skills about language, about the social environment, and about the topic of discourse. The motor plan or speech plan constructed in this way cannot mostly be driven by considerations that are pulled explicitly into the narrow theater of the "global workspace" (which is widely held to host only a small amount of material at a time, consciously experienced). Our most sophisticated cognition tends to be what happens before things hit the global workspace, or even entirely independent of it. If Peter allows, as I think he must, that that pre-workspace cognition can access beliefs like P, what then remains to be shown to complete my argument is just that these highly sophisticated P-accessing processes can drive the judgment or the representation or the conclusion that I believe that P, just as they can drive many other judgments, representations, or conclusions. Again, I think the burden of proof should be squarely on Peter to show why this wouldn't be possible.

Update, February 28

Peter responds:

Eric writes: “it seems now like Peter is allowing that if P is a "semantic memory" stored in partly "sensory format" it can join with other beliefs to drive the conclusion Q without an intermediate self-interpretative episode.”

I allow that the content of sensory-based memory can enter working memory, and so can join with other beliefs to drive a conclusion. But that the content in question is the content of a memory rather than a fantasy or supposition requires interpretation. There is nothing about the content of an image as such that identifies it as a memory, and memory images don’t come with tags attached signifying that they are memories. (There is a pretty large body of empirical work supporting this claim, I should say. It isn’t just an implication of the ISA theory.)

Eric writes: “But I do think that if, for a broad range of purposes, we can access these stored beliefs, it would be odd if we couldn't do so for the purpose of reaching conclusions about our own minds. We'd then need a pretty good theory of why we have this special disability with respect to mindreading.”

Well, I and others (especially Nichols & Stich in their mindreading book) had provided that theory. The separation between thought-attribution and behavioral prediction is now widely accepted in the literature, with the latter utilizing the subject’s own planning systems, which can in turn access the subject’s beliefs. There is also an increasing body of work suggesting that on-line, unreflective, forms of mental-state attribution are encapsulated from background beliefs. (I make this point at various places in The Opacity of Mind. But more recently, see Ian Apperly’s book Mindreaders, and my own “Mindreading in Infancy”, shortly to appear in Mind & Language.) The claim also makes good theoretical sense seen in evolutionary and functional terms, if the mindreading system evolved to track the mental states of others and generate predictions therefrom. From this perspective one might predict that thought-attribution could access a domain-specific database of acquired information (e.g. “person files” containing previously acquired information about the mental states of others), without being able to conduct free-wheeling searches of memory more generally.

Eric writes: “these highly sophisticated P-accessing processes can drive the judgment or the representation or the conclusion that I believe that P, just as they can drive many other judgments, representations, or conclusions. Again, I think the burden of proof should be squarely on Peter to show why this wouldn't be possible.”

First, I concede that it is possible. I merely claim that it isn’t actual. As for the evidence that supports such a claim, there are multiple strands. The most important is evidence that people confabulate about their beliefs and other mental states in just the sorts of circumstances that the ISA theory predicts that they would. (Big chunks of The Opacity of Mind are devoted to substantiating this claim.) Now, Eric can claim that he, too, can allow for confabulation, since he holds a pluralist account of self-knowledge. But this theory is too underspecified to be capable of explaining the data. Saying “sometimes we have direct access to our beliefs and sometimes we self-interpret” issues in no predictions about when we will self-interpret. In contrast, other mixed-method theorists such as Nichols & Stich and Alvin Goldman have attempted to specify when one or another method will be employed. But none of these accounts is consistent with the totality of the evidence. The only theory currently on the market that does explain the data is the ISA theory. And this entails that the only access that we have to our own beliefs is sensory-based and interpretive.

I agree that people can certainly make "source monitoring" and related errors in which genuine memories of external events are confused with merely imagined events. But it sounds to me like Peter is saying that a stored belief, in order to fulfill its function as a memory rather than a fantasy or supposition, must be "interpreted" -- and, given his earlier remarks, presumably interpreted in a way that requires activation of that content in the "global workspace". (Otherwise, his main argument doesn't seem to go through.) I feel like I must be missing something. I don't see how spontaneous, skillful action that draws together many influences -- for example, in conversational wit -- could realistically be construed as working this way. Lots of pieces of background knowledge flow together in guiding such responsiveness; they can't all be mediated in the "global workspace", which is normally thought to have a very limited capacity. (See also Terry Horgan's recent work on jokes.)

Whether we are looking at visual judgments, memory judgments, social judgments about other people, or judgments about ourselves, the general rule seems to be that the sources are manifold and the mechanisms complex. "P, therefore I believe that P" is far too simple to be the whole story; but so also I think is any single-mechanism story, including Peter's.

I guess Peter and I will have a chance to hammer this out a bit more in person during our SSPP session tomorrow! _______________________________________________

[note 1]: Usually philosophers believe that it's raining. Failing that, they believe that snow is white. I just wanted a change, okay?

[note 2]: Is it really "inference" if the solidity of the conclusion doesn't require the solidity of the premises? I don't see why that should be an essential feature of inferences. But if you instead want to call it (following Byrne) just an "epistemic rule" that you follow, that's okay by me.

Tuesday, February 19, 2013

Empirical Evidence That the World Was Not Created Five Minutes Ago

Bertrand Russell writes:

There is no logical impossibility in the hypothesis that the world sprang into existence five minutes ago, exactly as it then was, with a population that "remembered" a wholly unreal past. There is no logically necessary connection between events at different times; therefore nothing that is happening now or will happen in the future can disprove the hypothesis that the world began five minutes ago.... I am not here suggesting that the non-existence of the past should be entertained as a serious hypothesis. Like all sceptical hypotheses, it is logically tenable but uninteresting (The Analysis of Mind, 1921, pp. 159-160).

Wait... what?! We can't prove that the world has existed for more than five minutes, but who cares, bo-ring?

I'd think rather the opposite: We can prove that the world has existed for more than five minutes. And if I didn't think I could prove that fact, I wouldn't say ho-hum, whatevs, maybe everything I thought really happened is just an illusion, *yawn*.

Okay, well, "prove" is too strong. Logical certainty of the 2+2=4 variety, I can't provide. But I think I can provide, at least to myself, some good empirical evidence that the past existed five minutes ago. I can begin by opening up the New York Times. It tells me that Larry Kwong was the Jeremy Lin of his day. Therefore, the past exists.

Now hold on, you'll say. I've begged the question against Russell. For Russell is asking us to consider the possibility that the world was created five minutes ago with everything exactly as it then was, including the New York Times website. I can't disconfirm that hypothesis by going to the New York Times website! The young-world hypothesis and the old-world hypothesis make exactly the same prediction about what I will find, and therefore the evidence does not distinguish between them.

Here I need to think carefully about the form of the young-world hypothesis and the reasons I might entertain it. For example, I should consider what reason I have to prefer the hypothesis that a planet-sized external world was created five minutes ago vs. the hypothesis that just I and my immediate environment were created five minutes ago. Is there reason to regard the first hypothesis as more likely than the second hypothesis? I think not. And similarly for temporal extent: There's nothing privileged about 5 minutes ago vs. 7.22 minutes ago vs. 27 hours ago, vs. 10 seconds ago, etc. (perhaps up to the bound of the "specious present").

So if I'm in a skeptical enough mood to take young-world hypotheses seriously, I should have a fairly broad range of young-world hypotheses in mind as live options. Right?

In the spirit of skeptical indifference, let's say that I go ahead and assign a 50% probability to a standard non-skeptical old-world hypothesis and a 50% probability to a young-world hypothesis based on my subjective experience of a seeming-external-world right now. Conditional on the young-world hypothesis, I then further distribute my probabilities, say, 50-50% between the I-and-my-immediate-environment hypothesis and the whole-planet hypothesis. The exact probabilities don't matter, and clearly there will be many other possibilities; this is just to show the form of the argument. So, right now, my probabilities are 50% old-world, 25% young-small-world, and 25% young-large-world.

Now, if the world is young and small, then all should be void, or chaos, beyond the walls of this room. (In a certain sense such a world might be large, but the Earthly stuff in my vicinity is small.) In particular, the New York Times web servers should not exist. So if I try to visit the New York Times' webpage, based on my entirely false seeming-memories of how the world works, New York Times content which is not currently on my computer should not then appear on my computer. After all, the hypothesis is that the world was created five minutes ago as it then was, and my computer did not have the newest NYT content at that time. But I check the site and new content appears! Alternatively, I open my office door and behold, there's a hallway! So now, I reduce the probability of the young-small-world hypothesis to zero. (Zero makes the case simple, but the argument doesn't depend on that.) Distributing that probability back onto the remaining two hypotheses, the old-world hypothesis is now 67% and young-large-world is 33%. Thus, by looking at the New York Times website, I've given myself empirical evidence that the world is more than five minutes old.

Admittedly, I'd hope for better than 67% probability that the world is old, so the result is a little disappointing. If I could justify the proposition that if the world is young, then it is probably small enough not to contain the New York Times web servers, then I could get the final probabilities higher. And maybe I can justify that proposition. Maybe the likeliest young-world scenario is a spontaneous congealment scenario (i.e., a Boltzmann brain scenario) or a scenario in which this world is a small-scale computer simulation (see my earlier posts on Bostrom's simulation argument). For example, if the initial probabilities are 50% old, 45% young-small, 5% young-large, then after ruling out young-small, the old-world hypothesis rises above 90% -- an almost-publishable p value! And maybe the probability of the old-world hypothesis continues to rise as I find that the world continues to exist in a stable configuration, since I might reasonably think that if the world is only 5 minutes old or 12 minutes old, it stands a fair chance of collapsing soon -- a possibility that seems less likely on the standard non-skeptical old-world hypothesis. (However, there is the worry that after a minute or so, I will need to restart my anti-skeptical argument again due to the possibility that I only falsely remember having checked the New York Times site. Such are the bitters of trying to refute skepticism.)
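
If it helps to see the bookkeeping laid bare, here is a minimal sketch of the rule-out-and-renormalize step, using the illustrative probabilities from the preceding two paragraphs. The hypothesis labels and the function are mine; nothing hangs on them.

```python
def eliminate_and_renormalize(priors, ruled_out):
    """Zero out the ruled-out hypotheses and renormalize what remains."""
    kept = {h: p for h, p in priors.items() if h not in ruled_out}
    total = sum(kept.values())
    return {h: p / total for h, p in kept.items()}

# The 50/25/25 split from above:
print(eliminate_and_renormalize(
    {"old-world": 0.50, "young-small": 0.25, "young-large": 0.25},
    {"young-small"}))
# -> old-world ~0.67, young-large ~0.33

# The 50/45/5 split:
print(eliminate_and_renormalize(
    {"old-world": 0.50, "young-small": 0.45, "young-large": 0.05},
    {"young-small"}))
# -> old-world ~0.91, young-large ~0.09
```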

You might object that the most likely young-small-world hypothesis is one on which there is an outside agent who will feed me the seeming-New York Times website on demand, purposely to trick me. But I'm not sure that's so. And in any case, with that move we're starting to drift toward something like Cartesian demon doubt, which deserves separate treatment.

[Slightly edited 6:33 pm in light of helpful clarificatory questions from Jonathan Weisberg.]

Thursday, February 14, 2013

Was the Latter Half of the 20th Century a Golden Age for Philosophy?

No, you will say. And I tend to agree, probably not. But let me make the case against dismissing the idea out of hand.

One framing thought is this. The normal condition of humanities research in academia is probably to be poorly funded. People on the left would rather fund poverty relief. People on the right would rather not fund at all. Pragmatists in the center want to fund disciplines with what is perceived as "practical application". The normal condition of humanities research in academia is probably also to survive as a conspiracy of mediocrity in which social connections and political alliances determine position much more than research quality does. Anglophone academic philosophy has, perhaps, defied gravity for a long while already.

Another framing thought is this. Greatness is comparative. In an era with lots of philosophers not vastly different in quality or influence, in the eyes of their contemporaries, it might be hard to pick out a few as giants of world-historical stature. But with time, with the winnowing of greats, that is, with the forgetting of almost-great philosophers, those whose works are still discussed might come to seem more nearly peerless.

I'm considering a fifty-year period, 1950-1999. Ancient Greece hosted a golden age in philosophy, but if we limit ourselves to a comparable fifty-year period we probably can't include all three of Socrates, Plato, and Aristotle at the height of their powers, instead having to settle for two plus their lesser contemporaries. The early modern period in Western Europe was probably also a golden age, but again a fifty-year restriction limits those who can be included. By aiming very carefully we can include Descartes' Meditations (1641) through Locke's Essay and Treatises (1689-1690), plus Spinoza and Hobbes in the middle and a dash of Leibniz; or we can run from Locke through early Hume, with most of Leibniz and Berkeley between (plus lesser contemporaries). German philosophy waxed golden from 1780-1829, with Kant, Fichte, Hegel, and Schopenhauer, plus. The question, then, is whether Anglophone philosophy from 1950-1999 might be roughly comparable.

To a first approximation, two things make a philosopher great: quality of argument and creative vigor. (Personally, I also rather care about prose style, but let's set that issue aside.) With apologies to Kant enthusiasts, it seems to me that despite his creativity and vision, Kant's arguments are often rather poor or gestural, requiring substantial reconstruction by sympathetic (sometimes overly charitable?) later commentators. And Descartes' major positive project in the Meditations, his proof of the existence of God and the external world, is widely recognized to be a rather shoddy argument. Similar remarks apply to Plato, Hegel, Spinoza, etc. The great philosophers of the past had, of course, their moments of argumentative brilliance, but for rigor of argument, I think it's hard to say that any fifty-year period clearly exceeds the highlight moments of the best philosophers from 1950-1999.

The more common complaint against Anglophone "analytic" philosophy of the period is its lack of broad-visioned creativity. On that issue, I think it's very hard to justify a judgment without the distance of history. But still... in my own area, philosophy of mind, the period was the great period of philosophical materialism. Although there had been materialists before, it was only in this period that materialism really came to full fruition. And arguably, there is no more important issue in all of philosophy than the cluster of issues around materialism, dualism, and idealism. From a world-historical perspective, the development of materialism was arguably a philosophical achievement of the very highest magnitude. The period also saw a flourishing of philosophy of language in Kripke and reactions to him. In epistemology, the concept of knowledge came under really careful scrutiny for the first time. In general metaphysics, there was David Lewis. Wittgenstein's Philosophical Investigations also falls within the period. In philosophy of science, Kuhn. In ethics and political philosophy, Rawls and Williams. Without pretending a complete or unbiased list, I might also mention Strawson, Putnam, Foot, Singer, Quine, Anscombe, Davidson, Searle, Fodor, Dretske, Dennett, Millikan, and early Chalmers. In toto, is it clear that there's less philosophical value here than in the period from Descartes through Locke or from Kant through Hegel?

Or am I just stuck with a worm's-eye view in which my elders loom too large, and all of this will someday rightly be seen as no more significant than, say, Italian philosophy in the time of Vico or Chinese philosophy in the time of Wang Yangming?

Friday, February 08, 2013

Consciousness Online 2013

Consciousness Online will be running its 2013 conference starting next Friday, February 15th. All are invited to read, view, or listen to the papers and commentaries and contribute to the online discussion.

The program is available here, with Daniel C. Dennett as the headliner. Other contributors are Katja Crone, Joel Smith, Daniel Morgan, Peter Langland-Hassan, Farid Masrour, Matteo Grasso, Chad Kidd, Miguel Sebastian, Cheryl Abbate, and Bence Nanay.

I will be presenting my essay If Materialism Is True, the United States Is Probably Conscious. A YouTube version, prepped for the conference, is available here.

Wednesday, February 06, 2013

Apology to a Kindergartener

As the parent of a kindergartener, I constantly find myself apologizing for English orthography. Consider the numbers from one to ten, for instance. Of these, only "six" and "ten" are spelled in a sensible way. "Five" and "nine" make a certain amount of sense once one has mastered the silent-e convention, but that convention is bizarre in itself, inconsistently applied (cf. "give" and "zine"), and only one of a dozen ways to make a vowel long. "Seven" and "three" might not seem so bad to the jaded eye -- but why not "sevven"? Shouldn't "seven" rhyme with "even"? And why make the long "e" with a double-"e"? Why not "threa" (cf. "sea") or "threy" (cf. "key") or "thre" (cf. "me") or "thry" (cf. "party")? "Two"? Why on Earth the "w"? Why the "two", "to", "too" distinction? "Four"? Same thing: "four", "for", "fore"! "One"? Same again: "one", "won", and arguably "wun". Really, "one"? It starts with an "o"? My daughter thought I was kidding when I told her this, like the time I told her "dog" was spelled "jxqmpil". It's not much different from that. We're just used to it and so fail to notice. Worst of all is "eight". If English spelling were ever brought to court, it could be tried, convicted, and hanged on the word "eight" alone.

Tuesday, February 05, 2013

Preliminary Evidence That the World Is Simple (An Exercise in Stupid Epistemology)

Is the world simple or complex? Is a simple hypothesis more likely to be true than a complex hypothesis that fits the data equally well? The issue is gnarly. Sometimes the best approach to a gnarly issue is crude stupidity. Crude stupidity is my plan today.

Here's what I did. I thought up 30 pairs of variables that would be easy to measure and that might relate in diverse ways. Some variables were physical (the distance vs. apparent brightness of nearby stars), some biological (the length vs. weight of sticks found in my back yard), and some psychological or social (the S&P 500 index closing value vs. number of days past). Some I would expect to show no relationship (the number of pages in a library book vs. how high up it is shelved in the library), some I would expect to show a roughly linear relationship (distance of McDonald's franchises from my house vs. MapQuest estimated driving time), and some I expected to show a curved or complex relationship (forecasted temperature vs. time of day, size in KB of a JPG photo of my office vs. the angle at which the photo was taken). See here for the full list of variables. I took 11 measurements of each variable pair. Then I analyzed the resulting data.

Now, if the world is massively complex, then it should be difficult to predict a third datapoint from any two other data points. Suppose that two measurements of some continuous variable yield values of 27 and 53. What should I expect the third measured value to be? Why not 1,457,002? Or 3.22 x 10^-17? There are just as many functions (that is, infinitely many) containing 27, 53, and 1,457,002 as there are containing 27, 53, and some more pedestrian-seeming value like 44. On at least some ways of thinking about massive complexity, we ought to be no more surprised to discover that third value to be over a million than to discover that third value to be around 40. Call the thesis that a wildly distant third value is no less likely than a nearby third value the Wild Complexity Thesis.

I can use my data to test the Wild Complexity Thesis, on the assumption that the variables I have chosen are at least roughly representative of the kinds of variables we encounter in the world, in day-to-day human lives as experienced in a technologically advanced Earthly society. (I don't generalize to the experiences of aliens or to aspects of the world that are not salient to experience, such as Planck-scale phenomena.) The denial of Wild Complexity might seem obvious to you. But that is an empirical claim, and it deserves empirical test. As far as I know, no philosopher has formally conducted this test.

To conduct the test, I used each consecutive pair of observations of each dependent variable to predict the value of the next observation in the series (the 1st and 2nd observations predicting the value of the 3rd, the 2nd and 3rd predicting the value of the 4th, etc.), yielding 270 predictions for the 30 variables. I counted an observation "wild" if its absolute value was 10 times the maximum of the absolute value of the two previous observations or if its absolute value was below 1/10 of the minimum of the absolute value of the two previous observations. Separately, I also looked for flipped signs (either two negative values followed by a positive or two positive values followed by a negative), though most of the variables only admitted positive values. This measure of wildness yielded three wild observations out of 270 (1%) plus another three flipped-sign cases (total 2%). (A few variables were capped, either top or bottom, in a way that would make an above-10x or below-1/10th observation analytically unlikely, but excluding such variables wouldn't affect the result much.)
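
For concreteness, here is roughly how that wildness count could be implemented. This is my own sketch, and the handling of boundary cases (an observation exactly 10 times the maximum, or series containing zeros) is a guess rather than a report of the original procedure.

```python
def count_wild(series, factor=10):
    """Count 'wild' observations and sign flips in a series of measurements.

    An observation is 'wild' if its absolute value is at least `factor` times
    the larger, or less than 1/`factor` of the smaller, of the absolute values
    of the two preceding observations.
    """
    wild = flips = 0
    for prev2, prev1, cur in zip(series, series[1:], series[2:]):
        hi = factor * max(abs(prev2), abs(prev1))
        lo = min(abs(prev2), abs(prev1)) / factor
        if abs(cur) >= hi or abs(cur) < lo:
            wild += 1
        neg_neg_pos = prev2 < 0 and prev1 < 0 and cur > 0
        pos_pos_neg = prev2 > 0 and prev1 > 0 and cur < 0
        if neg_neg_pos or pos_pos_neg:
            flips += 1
    return wild, flips

# Example: an 11-observation series yields 9 tests, one of them wild.
print(count_wild([3, 5, 4, 6, 5, 120, 7, 6, 5, 4, 6]))  # (1, 0)
```

Calling the same function with factor=2 gives the looser criterion discussed below.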

So it looks like the Wild Complexity Thesis might be in trouble. Now admittedly a caveat is in order: If the world is wild enough, then I probably shouldn't trust my memory of having conducted this test (since maybe my mind with all its apparent memories just popped into existence out of a disordered past), or maybe I shouldn't trust the representativeness of this sample (I got 2% wild this time, but maybe in the next test I'll get 50% wild). However, if we are doubtful about the results for either of those reasons, it might be difficult to escape collapse into radical skepticism. If we set aside radically skeptical worries, we might still wonder how wild the world is. These results give us a preliminary estimate. To the extent the variables are representative, the answer seems to be: not too wild -- though with some surprises, such as the $20,000 listed value of the uncirculated 1922 Lincoln wheat penny. (No, I didn't know about that before seeking the data.)

If we use a Wildness criterion of two (two times the max or 1/2 the min), then there are 33 wild instances in 270 observations, or about 12%, overlapping in one case with the three flipped-sign cases, for 13% total. I wouldn't take this number too seriously, since it will presumably vary considerably depending on the variables chosen for analysis -- but still it's smaller than it might have been, and maybe okay as a first approximation to the extent the variables of interest resemble those on my list.

I had meant to do some curve fitting in this post, too -- comparing linear and quadratic predictions with more complex predictions -- but since this is already a good-sized post, we'll curve fit another day.

I admit, this is a ham-handed approach. It uses crude methods, it doesn't really establish anything we didn't already know, and I'm sure it won't touch the views of those philosophers who deny that the world is simple (who probably aren't committed to the Wild Complexity Thesis). I highlight these concessions by calling the project "stupid epistemology". If we jump too quickly to clever, though, sometimes we miss the necessary groundwork of stupid.

Note: This post was substantially revised Feb. 6.

Tuesday, January 29, 2013

Metaphysical Skepticism a la Kriegel

I'm totally biased. I admit it. But I love this new paper by Uriah Kriegel.

Taking the metaphysical question of the ontology of objects as his test case, Kriegel argues that metaphysical disputes often fail to admit of resolution. They fail to admit of resolution, Kriegel argues, not because such disputes lack substance or are merely terminological, but for the more interestingly skeptical reason that although there may be a fact of the matter which metaphysical position is correct, we have no good means of discovering that fact. I have argued for a similarly skeptical position about the "mind-body problem", that is, the question of the relationship between mind and matter. (Hence my pro-Kriegel bias.) But Kriegel develops his argument in some respects more systematically.

Consider a set of four fundamental particles joined together in a tetrahedron. How many objects are there really? A conservative ontological view might say that really there are just four objects and no more: the four fundamental particles "arranged tetrahedronwise". A liberal ontological view might say that really there are fifteen objects: each of the four particles, plus each of the six possible pairings of the particles, plus each of the four possible triplets, plus the whole tetrahedron. An intermediate ("common sense"?) view might hold that the individual particles are each real objects, and so is the tetrahedron, but not the pairs and triplets, for a total of five objects.

Now who is right? Kriegel envisions three possible approaches to determining where the truth lies: empirical testing, appeal to "intuition", and appeal to theoretical virtues like simplicity and parsimony. However, none of these approaches seems promising.

Contra empirical testing: There is, it seems, no empirical fact on which the conservative and liberal would disagree. It's not like we could bombard the tetrahedron with radiation of some sort and the conservative would predict one thing, the liberal another.

Contra appeal to intuition: "Intuition" is a problematic concept in metaphilosophy. But maybe it means something like common sense or coherence with pre-theoretical opinion. Intuition in this sense might favor the five-object answer (the four particles plus the whole), but that's not entirely clear. However, Kriegel argues, hewing to intuition means doing only what P.F. Strawson calls "descriptive metaphysics" -- metaphysics that aims merely to reveal the structure of reality implicit in people's (possibly misguided) conceptual schemes. If we're aiming to discover real metaphysical truths, and not merely what's already implicit in ordinary opinion, we are doing instead what Kriegel and Strawson call "revisionary metaphysics"; and although descriptive metaphysics is beholden to intuition, revisionary metaphysics is not.

Contra appeal to theoretical virtue: Theoretical virtues like simplicity and parsimony might be pragmatic or aesthetic virtues but, Kriegel argues, there seems to be no reason to regard them as truth-conducive in metaphysics. Is there reason to think that the world is simple, and thus that a simple metaphysical theory is more likely to be true than a complex one? Is there reason to think the world contains few entities, and thus that a parsimonious metaphysical theory that posits fewer entities than its rivals is more likely to be true? Kriegel suggests not.

As I said, I love this paper and I'm sympathetic with its conclusion. But I'm a philosopher, so I can't possibly agree entirely with another philosopher working in the same area. That's just not in our nature! So let me issue two complaints.

First: I'm not sure object ontology is the best test case for Kriegel's view. Maybe there's a real fact of the matter whether there are four, five, or fifteen objects in our tetrahedron, but it's not obviously so. It seems like a good case for reinterpretation as a terminological dispute, if any case is. If Kriegel wants to make a general case for metaphysical skepticism, he might do better to choose a dispute that's less tempting to dismiss as terminological, such as the dispute about whether there are immaterial substances or properties. (In fairness, I happen to know he is working on this now.)

Second: It seems to me that Kriegel commits to more strongly negative opinions about the epistemic value of intuition and theoretical virtue than is necessary or plausible. It sounds, in places, like Kriegel is committing to saying that there's no epistemic value, for metaphysics, in harmony with pre-theoretical intuition or in theoretical virtues like simplicity. These are strong claims! We can admit, more plausibly I think, that intuitiveness, simplicity, explanatory power, etc., have some epistemic value while still holding that the kinds of positions that philosophers regard as real metaphysical contenders tend not to admit of decisive resolution by appeal to such factors. Real metaphysical contenders will conflict with some intuitions and harmonize with others, and they will tend to have different competing sets of theoretical virtues and vices, engendering debates it's difficult to see any hope of resolving with our current tools and capacities.

Consider this: It seems very unlikely that the metaphysical truth is that there are exactly fourteen objects in our tetrahedron, to wit, all of the combinations admitted by the liberal view except for one of the four triplet combinations. Such a view seems arbitrary, asymmetric, unsimple, and intuitively bizarre, compared to the more standard options. If you agree, then you should accept, contra a strong reading of Kriegel's argument, that those sorts of theoretical considerations can take us some distance, epistemically. It's just that they aren't likely to take us all the way.

Wednesday, January 23, 2013

Oh That Darn Rationality, There It Goes Making Me Greedy Again!

Or something like that?

In a series of studies, David G. Rand and collaborators found that participants in behavioral economics games tended to act more selfishly when they reached decisions more slowly. In one study, participants were paid 40 cents and then given the opportunity to contribute some of that money into a common pool with three other participants. Contributed money would be doubled and then split evenly among the group members. The longer participants took to reach a decision, the less they chose to contribute on average. Other studies were similar, some in physical laboratories, some conducted on the internet, some with half the participants forced into hurried decisions and the other half forced to delay a bit, some using prisoner's dilemma games or altruistic punishment games instead of public goods games. In all cases, participants who chose quickly shared more.
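
To make the incentive structure concrete, here is a quick payoff sketch for the public goods game as described. I'm assuming the standard rule that a participant keeps whatever she does not contribute; amounts are in cents, and the function name is mine.

```python
def payoff(my_contribution, others_contributions, endowment=40):
    """Payoff in a 4-person public goods game: keep what you don't contribute,
    plus an even share of the doubled common pool."""
    pool = my_contribution + sum(others_contributions)
    return endowment - my_contribution + 2 * pool / 4

print(payoff(0, [40, 40, 40]))    # 100.0 -- free-riding while the others contribute
print(payoff(40, [40, 40, 40]))   # 80.0  -- everyone contributes everything
print(payoff(40, [0, 0, 0]))      # 20.0  -- contributing alone
print(payoff(0, [0, 0, 0]))       # 40.0  -- no one contributes
```

Each cent contributed returns only half a cent to the contributor, which is why withholding gets scored as the "selfish" choice and contributing as the "cooperative" one.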

I find the results interesting and suggestive. It's a fun study. (And that's good.) But I'm also struck by how high the authors aim in their introduction and conclusion. They seek to address the question: "are we intuitively self-interested, and is it only through reflection that we reject our selfish impulses and force ourselves to cooperate? Or are we intuitively cooperative, with reflection upon the logic of self-interest causing us to rein in our cooperative urges and instead act selfishly?" (p. 427). Ten experiments later, we have what the authors seem to regard as pretty compelling general evidence in favor of intuition over rationality as the ground of cooperation. The authors' concluding sentence is ambitious: "Exploring the implications of our findings, both for scientific understanding and public policy, is an important direction for future study: although the cold logic of self-interest is seductive, our first impulse is to cooperate" (p. 429).

Now it might seem a minor point, but here's one thing that bothers me about most of these types of behavioral economics games on self-interest and cooperation: It's only cooperation with other participants that is considered to be cooperation. What about a participant's potential concern for the financial welfare of the experimenter? If a participant makes the "cooperative" choice in the public goods game, tossing her money into the pool to be doubled and then split back among the four participants, what she has really done is pay to transfer money from the pockets of the experimenter into the pockets of the other participants. Is it clear that that's really the more cooperative choice? Or is she just taking from Peter to give to Paul? Has Paul done something to be more deserving?

Maybe all that matters is that most people would (presumably) judge it more cooperative for participants to milk all they can from the experimenters in this way, regardless of whether in some sense that is a more objectively cooperative choice? Or maybe it's objectively more cooperative because the experimenters have communicated to participants, through their experimental design, that they are unconcerned about such sums of money? Or maybe participants know or think they know that the experimenters have plenty of funding, and consequently (?) they are advancing social justice when they pay to transfer money from the experimenters to other participants? Or...?

These quibbles feed a larger and perhaps more obvious point. There's a particular psychology of participating in an experiment, and there's a particular psychology of playing a small-stakes economic game with money, explicitly conceptualized as such. And it is a leap -- a huge leap, really -- from such laboratory results, as elegant and well-controlled as they might be, to the messy world outside the laboratory with large stakes, usually non-monetary, and not conceptualized as a game.

Consider an ordinary German sent to Poland and instructed to kill Jewish children in 1942. Or consider someone tempted to cheat on her spouse. Consider me sitting on the couch while my wife does the dishes, or a student tempted to copy another's answers, or someone using a hurtful slur to be funny. It's by no means clear that Rand's study should be thought to cast much light at all on cases such as these.

Is our first impulse cooperative, and does reflection make us selfish? Or is explicit reflection, as many philosophers have historically thought, the best and most secure path to moral improvement? It's a fascinating question. We should resist, I think, being satisfied too quickly with a simple answer based on laboratory studies, even as a first approximation.

Tuesday, January 22, 2013

CFP: Consciousness and Experiential Psychology

The Consciousness and Experiential Psychology Section of the British Psychological Society will be meeting in Bristol, September 6-8. They have issued a call for submissions "particularly, but not exclusively" on the topics of non-verbal expression, affective or emotional responses, and subjectivity and imagination.

Details here. I will be one of four keynote speakers.

About a week later (September 12-13), also in Bristol, the Experimental Philosophy Group UK is planning to hold their annual workshop. I'll be keynoting there too. For details on that one, contact Bryony Pierce (bryonypierce at domain: btinternet.com, and/or experimentalphilosophyuk at domain: gmail.com) or check their website for updates.

Make it a two-fer!

Saturday, January 19, 2013

A Memory from Grad School

Circa 1994: Josh Dever would be sitting on a couch in the philosophy graduate student lounge at U.C. Berkeley. I would propose to him a definition of "dessert" (e.g.: "a sweet food eaten after the main meal is complete"). He would shoot it down (e.g.: "but then it would be a priori that you couldn't eat dessert first!"). Later he would propose a definition to me, which I would shoot down. Over time, the definitions became ever more baroque. Other graduate students participated too.

Eventually Josh decided that he would define a dessert as anything served on a dessert plate. Asked what a dessert plate is, he would say it was intuitively obvious. Presented with an objection ("So you couldn't eat Oreos right out of the bag for dessert?"), he would simply state that he was willing to "bite the bullet" and accept a certain amount of revision of our pre-theoretical opinions.

At the time it seemed like cheating. In retrospect, I think Josh saw right to the core.

Thursday, January 17, 2013

Being the World's Foremost Expert on X Takes Time

According to the LA Times, governor Jerry Brown wants to see "more teaching and less research" in the University of California.

I could see the state of California making that choice. Maybe we U.C. professors teach too few undergraduate courses for our state's needs. (I teach on average one undergraduate course per term. I also advise students individually, supervise dissertations, and teach graduate seminars.) But here's a thought. If it is valuable to have some public universities in which the undergraduate teaching and graduate supervision is done by the foremost experts in the world on the topics in question, then you have to allow professors considerable research time to attain and sustain that world-beating expertise. Being among the world's foremost experts on childhood leukemia, or on the neuroscience of visual illusion, or on the history of early modern political philosophy, is not something one can squeeze in on the side, with a few hours a week.

In my experience, it takes about 15 hours a week to run an undergraduate course (longer if it's your first time teaching the course): three hours of lecture, plus lecture prep time, plus office hours, plus reviewing the assigned readings and freshening up on relevant connected literature, plus grading and test design, plus email exchanges with students and course management. And let's suppose that a typical professor works about 50 hours a week. If Professor X at University of California teaches two undergraduate lecture courses per term, that leaves 20 hours a week for research and everything else (including graduate student instruction and administrative tasks like writing recommendation letters, serving on hiring committees, applying for grants, refereeing for journals, keeping one's equipment up to date...). If Professor Y at University of Somewhere Else teaches one undergraduate lecture course per term, that leaves 35 hours a week for research and everything else. How is Professor X going to keep up with Professor Y? Over time, if teaching load is substantially increased, the top experts will disproportionately be at the University of Somewhere Else, not the University of California.

Of course some people manage brilliantly productive research careers alongside heavy undergraduate teaching loads. I mean them no disrespect. On the contrary, I find them amazing! My point above concerns only what we should expect on average.

Tuesday, January 15, 2013

Blind Photographers

Yes, that's right. Some of the world's leading blind photographers! Here at UCR. What is it like to point your camera, print, and present it, all without seeing? What is your method of selection? What is your relation to your art?

 
photo by Ralph Baker, from http://www.cmp.ucr.edu/exhibitions/sightunseen/

The Moral Behavior of Ethicists and the Rationalist Delusion

A new essay with Joshua Rust.

In the first part, we summarize our empirical research on the moral behavior of ethics professors.

In the second part, we consider five possible explanations of our finding that ethics professors behave, on average, no better than do other professors.

Explanation 1: Philosophical moral reflection has no material impact on real-world moral behavior. Jonathan Haidt has called the view that explicit moral reflection does have a substantial effect on one's behavior "the rationalist delusion".

Explanation 2: Philosophical moral reflection influences real world moral behavior, but only in very limited domains of professional focus. Those domains are sufficiently narrow and limited that existing behavioral measures can't unequivocally detect them. Also, possibly, any improvement in such narrow domains might be cancelled out, in terms of overall moral behavior, by psychological factors like moral licensing or ego depletion.

Explanation 3: Philosophical moral reflection might lead one to behave more morally permissibly but no morally better. The idea here is that philosophical moral reflection might lead one to avoid morally impermissible behavior while also reducing the likelihood of doing any more than is strictly morally required. Contrast the sometimes-sinner sometimes-saint with the person who never goes beyond the call of duty but also never does anything really wrong.

Explanation 4: Philosophical moral reflection might compensate for deficient moral intuitions. Maybe, from early childhood or adolescence, some people tend to lack strongly motivating moral emotions. And maybe some of those people are also drawn to intellectual styles of moral reflection. Without that moral reflection, they would behave morally worse than the average person, but moral reflection helps them behave better than they otherwise would. If enough ethicists are like this, then philosophical moral reflection might have an important positive effect on the moral behavior of those who engage in it, but looking at ethicists as a group, that effect might be masked if it's compensating for lower rates of moral behavior arising from emotional gut intuitions.

Explanation 5: Philosophical moral reflection might have powerful effects on moral behavior, but in both moral and countermoral directions, approximately cancelling out on average. It might have positive effects, for example, if it leads us to discover moral truths on which we then act. But perhaps equally often it becomes toxic rationalization, licensing morally bad behavior that we would otherwise have avoided.

Josh and I decline to choose among these possibilities. There might be some truth in all or most of them. And there are still other possibilities, too. Ethicists might in fact engage in moral reflection relevant to their personal lives no more often than do other professors. Ethicists might find themselves increasingly disillusioned about the value of morality at the same time they improve their knowledge of what morality in fact requires. Ethicists might learn to shield their personal behavior from the influence of their professional reflections, as a kind of self-defense against the apparent unfairness of being held to higher standards because of their choice of profession....

Full essay here. As always, thoughts and comments welcome, either by email or attached to this post.

Friday, January 11, 2013

Fame Through Friend-Citation

Alert readers might raise the following objection to my recent post, Fame Through Self-Citation: By the 500th article of our Distinguished Philosopher's career, his reference list will include 499 self-citations. Journal editors might find that somewhat problematic.

If this concern occurred to you, you are just the sort of perceptive and prophetic scholar who is ready for my Advanced Course in Academic Fame! The advanced course requires that you have five Friends. Each Friend agrees to publish five articles per year, in venues of any quality, and each published article will self-cite five times and cite five articles of each other Friend. (In Year One, each Friend will cite the entire Friendly corpus.)

Assuming that the six Friends' publications can be treated serially, ABCDEFABCDEF..., by the end of Year One, Friend A will have 29 + 23 + 17 + 11 + 5 = 85 citations, Friend B will have 80 citations, Friend C 75, Friend D 70, Friend E 65, and Friend F 60. Friend A will have an h-index of 5 and the others will have indices of 4. By the end of Year Six, the Friends will have 4,935 Friendly citations to distribute among themselves, or 822.5 citations each. If they aim to maximize their h-indices, each can arrange to have an h-index of at least 25 by then. (That is, each can have 25 articles that are each cited at least 25 times.) This would exceed the h-index of about 20 characteristic of leading youngish philosophers like Jason Stanley and Keith DeRose. (See Brian Leiter's discussion here.)
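If you'd like to check the Year One bookkeeping yourself, here's a quick back-of-the-envelope script (in Python rather than on the envelope), assuming only what's described above: the serial ABCDEF ordering and the cite-the-whole-Friendly-corpus rule for Year One:

```python
# Back-of-the-envelope check of the Year One arithmetic: six Friends
# publish serially, ABCDEF ABCDEF ..., and in Year One each new article
# cites every Friendly article already published.

FRIENDS = "ABCDEF"
PAPERS_PER_YEAR = 5

order = [f for _ in range(PAPERS_PER_YEAR) for f in FRIENDS]  # A1, B1, ..., F5

# Each paper is cited by every later paper, so the k-th paper (0-indexed)
# receives len(order) - k - 1 citations.
citations = {f: [] for f in FRIENDS}
for k, f in enumerate(order):
    citations[f].append(len(order) - k - 1)

def h_index(counts):
    counts = sorted(counts, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

for f in FRIENDS:
    print(f, sum(citations[f]), citations[f], "h =", h_index(citations[f]))
# A 85 [29, 23, 17, 11, 5] h = 5
# B 80 [28, 22, 16, 10, 4] h = 4
# ...
# F 60 [24, 18, 12, 6, 0] h = 4
```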

By the end of Year 20, the Friends will have 100 articles each and 17,535 citations to distribute among those articles. Wisely enough distributed, this will permit them to achieve h-indices in excess of 50, higher than the indices of such leading philosophers as Ned Block, David Chalmers, and Timothy Williamson. By the end of their 50-year careers, they will have 7,422.5 Friendly citations each, permitting h-indices in the mid-80s, substantially in excess of the h-index of any living philosopher I am aware of.
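And the longer-run bookkeeping, assuming (as above) that Year One contributes the 435 serial citations and every later year adds 30 articles x 30 citations = 900 to the pool, with the "wisely enough distributed" assumption that each Friend can steer their share of citations to whichever of their own articles they please:

```python
# Longer-run Friendly citation pool: Year One contributes the 435 serial
# citations computed above; every later year, 30 articles x (5 self-cites
# + 5 cites to each of the 5 other Friends) = 900 new citations.

def friendly_pool(years):
    return 435 + 900 * (years - 1)

for years in (6, 20, 50):
    total = friendly_pool(years)
    per_friend = total / 6
    articles = 5 * years
    # Best arrangeable h-index: h articles with h citations each needs
    # h*h <= per_friend, and of course h <= number of articles.
    h = 0
    while (h + 1) ** 2 <= per_friend and h + 1 <= articles:
        h += 1
    print(years, total, per_friend, h)
# 6  4935  822.5   28  (so "at least 25" is comfortably achievable)
# 20 17535 2922.5  54  (h-indices in excess of 50)
# 50 44535 7422.5  86  (h-indices in the mid-80s)
```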

But this underestimates the possibilities! These Friends will be asked to referee work for the same journals in which they are publishing. They wouldn't be so tacky as to use the sacred trust of their refereeing positions to insist that other authors cite their own work -- but of course they can recommend the important work of their Friends! If each Friend accepts one refereeing assignment per month and successfully recommends publication for half of the articles they referee, contingent upon the authors' citing one article from each of the other five Friends, that will add 180 more citations per year to the pool. Other scholars, seeing the range of authors citing each of the Friends' work, will naturally regard each Friend as a leading contributor to the field, whom they must also therefore cite, creating a snowball effect. Can h-indices of 100 be far away?

(Any resemblance of this strategy to the behavior of actual academics is purely coincidental.)

Fame Through Self-Citation

Let's say you're not picky about quality or venue. It shouldn't be too hard to get five publications a year. And let's say that every one of your publications cites every one of your previously published or forthcoming works. In Year One, you will have 0 + 1 + 2 + 3 + 4 = 10 citations. By Year Two, you will have 5 + 6 + 7 + 8 + 9 = 35 more citations, for 45 total. Year Three gives you 60 more citations, for 105 total. Year Four, 85 for 190; Year Five, 110 for 300; Year Six, 135 for 435. You're more than ready for tenure!

In one of his blog posts about Google Scholar, Brian Leiter suggests that an "h-index" of about 20 is characteristic of "leading younger scholars" like Keith DeRose and Jason Stanley. You will have that h-index by the end of your eighth year out -- considerably faster than either DeRose or Stanley! In your 18th year, your h-index will match that of David Chalmers (currently in his 19th year out). By the end of the 50th year of your long and industrious career, you will have 250 publications, 31,125 total citations, and an h-index of 125, vastly exceeding that of any living philosopher I am aware of (e.g., Dan Dennett), and approaching that of Michel Foucault.
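For the skeptical (or ambitious) reader, here's a little script that churns out the whole career trajectory under the five-per-year, cite-everything-prior scheme described above:

```python
# Self-citation arithmetic: after Y years you have n = 5*Y articles, and
# article i (1-indexed) is cited by every later article, i.e., n - i times.

def career(years, per_year=5):
    n = per_year * years
    cites = [n - i for i in range(1, n + 1)]
    total = sum(cites)                       # = n*(n-1)/2
    h = sum(1 for rank, c in enumerate(sorted(cites, reverse=True), 1) if c >= rank)
    return n, total, h

for y in (1, 2, 3, 4, 5, 6, 8, 18, 50):
    print(y, career(y))
# Cumulative totals: 10, 45, 105, 190, 300, 435, ...; h reaches 20 at the
# end of year 8, and by year 50 you have 250 articles, 31,125 citations,
# and an h-index of 125.
```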

Self-citation: the secret of success!

Update, Jan. 12: See also my Advanced Course in Academic Fame: Fame Through Friend-Citation.

Wednesday, January 09, 2013

The Emotional Psychology of Lynching

Today, for my big lower-division class on Evil (enrollment 300), I'm teaching about Southern U.S. racial lynching in the early 20th century. My treatment centers on lynching photography, especially photos that include perpetrators and bystanders, which were often proudly circulated as postcards.

Here's one photo, from James Allen et al. (2000) Without Sanctuary:

Here are a couple spectator details from the above photo:
Looking at the first spectator detail, I'm struck by the thought that this guy probably thought that the lynching festivities made for an entertaining date with the girls.

Here's the accompanying text from Allen et al.:

The following account is drawn from James Cameron's book, A Time of Terror: Thousands of Indianans carrying picks, bats, ax handles, crowbars, torches, and firearms attacked the Grant County Courthouse, determined to "get those goddamn Niggers." A barrage of rocks shattered the jailhouse windows, sending dozens of frantic inmates in search of cover. A sixteen-year-old boy, James Cameron, one of the three intended victims, paralyzed by fear and incomprehension, recognized familiar faces in the crowd -- schoolmates, and customers whose lawns he had mowed and whose shoes he had polished -- as they tried to break down the jailhouse door with sledgehammers. Many police officers milled outside with the crowd, joking. Inside, fifty guards with guns waited downstairs.

The door was ripped from the wall, and a mob of fifty men beat Thomas Shipp senseless and dragged him into the street. The waiting crowd "came to life." It seemed to Cameron that "all of those ten to fifteen thousand people were trying to hit him all at once." The dead Shipp was dragged with a rope up to the window bars of the second victim, Abram Smith. For twenty minutes, citizens pushed and shoved for a closer look at the "dead nigger." By the time Abe Smith was hauled out he was equally mutilated. "Those who were not close enough to hit him threw rocks and bricks. Somebody rammed a crowbar through his chest several times in great satisfaction." Smith was dead by the time the mob dragged him "like a horse" to the courthouse square and hung him from a tree. The lynchers posed for photos under the limb that held the bodies of the two dead men.

Then the mob headed back for James Cameron and "mauled him all the way to the courthouse square," shoving and kicking him to the tree, where the lynchers put a hanging rope around his neck.

Cameron credited an unidentified woman's voice with silencing the mob (Cameron, a devout Roman Catholic, believes that it was the voice of the Virgin Mary) and opening a path for his retreat to the county jail....

The girls are holding cloth souvenirs from the corpses. The studio photographer who made this postcard printed thousands of copies over the next ten days, selling for fifty cents apiece.

According to Cameron's later account, Shipp and Smith were probably guilty of murdering a white man and raping a white woman. He insists that he himself had fled the scene before either of those crimes occurred. According to historical records, in only about one-third of racial lynching cases were the victims even accused of a grievous crime such as rape or murder. In about one-third of cases, they were accused of a non-grievous crime, such as theft. And in about one-third of cases, the victims were accused of no real crime at all -- only of being "uppity" or of having consensual sexual relations with a white woman -- or the victim was a friend or family member of another lynching victim.

Wednesday, January 02, 2013

On Trusting Your Sense of Fun

Maybe, like me, you're a philosophy dork. Maybe, like me, when you were thirteen, you said to your friends, "Is there really a world behind that closed door? Or does the outside world only pop into existence when I open the door?", and they said, "Dude, you're weird! Let's go play basketball." Maybe, like me, when you were in high school you read science fiction and wondered whether an entirely alien moral code might be as legitimate as our own, and this prevented you from taking your World History teacher entirely seriously.

If you are a deep-down philosophy dork, then you might have a certain underappreciated asset: a philosophically-tuned sense of what's fun. You should trust that sense of fun.

It's fun -- at least I find it fun -- to think about whether there's some way to prove that the external world exists. It's fun to see whether ethics books are any less likely to be stolen than other philosophy books. (They're actually more likely to be stolen, it turns out.) It's fun to think about why people used to say they dreamed in black and white, to think about how weirdly self-ignorant people often are, to think about what sorts of bizarre aliens might be conscious, to think about whether babies know that things continue to exist outside of their perceptual fields. At every turn in my career, I have faced choices about whether to pursue what seems to me to be boring, respectable, philosophically mainstream, and at first glance the better career choice, or whether instead to follow my sense of fun. Rarely have I regretted it in the long term when I have chosen fun.

I see three main reasons a philosophy dork should trust her sense of fun:

(1.) If you truly are a philosophy dork in the sense I intend the phrase -- and I assume most readers of this blog are (consider: this is how you're spending your free time?!) -- then your sense of what's fun will tend to manifest some sort of attunement to what really is philosophically worth pursuing. You might not quite be able to put your finger on why it's worth pursuing, at first. It might even just seem a pointless intellectual lark. But my experience is that the deeper significance will eventually reveal itself. Maybe it's just that everything can be explored philosophically and brought around back to main themes, if one plunges deep enough. But I'm inclined to think it's not just that. The true dork's mind has a horse-sense of where it needs to go next.

(2.) It energizes you. Few things are more dispiriting than doing something tedious because "it's good for your career". You'll find yourself wondering whether this is really the career for you, whether you're really cut out for philosophy. You'll find yourself procrastinating, checking Facebook, spacing out while reading, prioritizing other responsibilities. In contrast, if you chase the fun first, you will find yourself positively eager, at a visceral level, to do your research. And this eagerness can then be harnessed back into a sense of responsibility. Finding your weird passion first, and figuring out what you want to say about it, can energize you to go back later and read what others have said about your topic, so you can fill in the references, connect it with previous research, sophisticate your view in light of others' work. It's much more rewarding to read the great philosophers, and one's older contemporaries, when you have a lens to read them through than when you're slogging through them from a vague sense of duty.

(3.) Fun is contagious. So is boredom. Readers are unlikely to enjoy your work and be enthusiastic about your ideas if even you don't have that joy and enthusiasm.

These remarks probably generalize across disciplines. I think of Richard Feynman's description of how he recovered from his early-career doldrums (see the last fifth of this autobiographical essay).

Tuesday, January 01, 2013

Essays of 2012

2012 was a good research year for me.

These essays appeared in print in 2012:

These essays are finished and forthcoming:

Thursday, December 27, 2012

New Essay: Experimental Evidence of the Existence of an External World

Here's exactly the pagan solstice celebration gift you were yearning for: a proof that the external world exists!

(Caveat emptor: The arguments only work if you lack a god-like intellect. See footnote 15.)

Abstract:

In this essay I attempt to refute radical solipsism by means of a series of empirical experiments. In the first experiment, I prove to be a poor judge of four-digit prime numbers, in contrast to a seeming Excel program. In the second experiment, I prove to have an imperfect memory for arbitrary-seeming three-digit number and letter combinations, in contrast to my seeming collaborator with seemingly hidden notes. In the third experiment, I seem to suffer repeated defeats at chess. In all three experiments, the most straightforward interpretation of the experiential evidence is that something exists in the universe that is superior in the relevant respects -- theoretical reasoning (about primes), memorial retention (for digits and letters), or practical reasoning (at chess) -- to my own solipsistically-conceived self.

This essay is collaborative with Alan T. Moore.

Available here.
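(If you'd like to play along at home: the paper's first experiment pits my judgments against a seeming Excel program, but any four-digit primality checker would do. Here's a toy, purely illustrative Python stand-in -- not the actual experimental materials.)

```python
# Toy stand-in for the first experiment: pit your snap judgments about
# four-digit primes against a simple primality check, which will
# consistently outscore you -- awkward news for the solipsist.

import random

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

you = 0
trials = 20
for _ in range(trials):
    n = random.randint(1000, 9999)
    guess = input(f"Is {n} prime? (y/n) ").strip().lower() == "y"
    you += (guess == is_prime(n))
print(f"You: {you}/{trials}.  The program, of course, scored {trials}/{trials}.")
```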

Wednesday, December 19, 2012

Animal Rights Advocate Eats Cheeseburger, So... What?

Suppose it turns out that professional ethicists' lived behavior is entirely uncorrelated with their philosophical theorizing. Suppose, for example, that ethicists who assert that lying is never permissible (a la Kant) are neither more nor less likely to lie, in any particular situation, than is anyone else of similar social background. Suppose that ethicists who defend Singer's strong views about charity in fact give no more to charity than their peers who don't defend such views. Suppose this, just hypothetically.

For concreteness, let's imagine an ethicist who gives a lecture defending strict vegetarianism, then immediately retires to the university cafeteria for a bacon double cheeseburger. Seeing this, a student charges the ethicist with hypocrisy. The ethicist replies: "Wait. I made no claims in class about my own behavior. All I said was that eating meat was morally wrong. And in fact, I do think that. I gave sound arguments in defense of that conclusion, which you should also accept. The fact that I am here eating a delicious bacon double cheeseburger in no way vitiates the force of those arguments."

Student: "But you can't really believe those arguments! After all, here you are shamelessly doing what you just told us was morally wrong."

Ethicist: "What I personally believe is beside the point, as long as the arguments are sound. But in any case, I do believe that what I am doing is morally wrong. I don't claim to be a saint. My job is only to discover moral truths and inform the world about them. You're going to have to pay me extra if you want to add actually living morally well to my job description."

My question is this: What, if anything, is wrong with the ethicist's attitude toward philosophical ethics?

Maybe nothing. Maybe academic ethics is only a theoretical enterprise, dedicated to the discovery of moral truths, if there are any, and the dissemination of those discoveries to the world. But I'm inclined to think otherwise. I'm inclined to think that philosophical reflection on morality has gone wrong in some important way if it has no impact on your behavior, that part of the project is to figure out what you yourself should do. And if you engage in that project authentically, your behavior should shift accordingly -- maybe not perfectly but at least to some extent. Ethics necessarily is, or should be, first-personal.

If a chemist determines in the lab that X and Y are explosive, one doesn't expect her to set aside this knowledge, failing to conclude that an explosion is likely, when she finds X and Y in her house. If a psychologist discovers that method Z is a good way to calm an autistic teenager, we don't expect him to set aside that knowledge when faced with a real autistic teenager, failing to conclude that method Z might calm the person. So are all academic disciplines, in a way, first-personal?

No, not in the sense I intend the term. The chemist and psychologist cases are different from the ethicist case as I have imagined it. The ethicist is not setting aside her opinion that eating meat is wrong as she eats that cheeseburger. She does in fact conclude that eating the cheeseburger is wrong. However, she is unmoved by that conclusion. And to be unmoved by that conclusion is to fail in the first-personal task of ethics. A chemist who deliberately causes explosions at home might not be failing in any way as a chemist. But an ethicist who flouts her own vision of the moral law is, I would suggest, in some way, though perhaps not entirely, a failure as an ethicist.

A correlation of exactly zero between moral opinion and moral behavior among professional ethicists is empirically unlikely, I'm inclined to think. However, Joshua Rust's and my empirical evidence to date does suggest that the correlations might be pretty weak. One question is whether they are weak enough to indicate a problem in the enterprise as it is actually practiced in the 21st-century United States.

Tuesday, December 11, 2012

Intuitions, Philosophy, and Experiment

Herman Cappelen has provocatively argued that philosophers don't generally rely upon intuition in their work and thus that work in experimental philosophy that aims to test people's intuitions about philosophical cases is really beside the point. I have a simple argument against this view.

First: I define "intuition" very broadly. A judgment is "intuitive", in my view, just in case it arises by some cognitive process other than explicit, conscious reasoning. By this definition, snap judgments about the grammaticality of sentences, snap judgments about the distance of objects, snap judgments about the moral wrongness of an action in a hypothetical scenario, and snap folk-psychological judgments are generally going to be intuitive. Intuitive judgments don't have to be snap judgments -- they don't have to be fast -- but the absence of explicit conscious reasoning is clearest when the judgment is quick.

This definition of "intuition" is similar to one Alison Gopnik and I worked with in a 1998 article, and it is much more inclusive than Cappelen's own characterizations. Thus, it's quite possible that intuitions in Cappelen's narrow sense are inessential to philosophy while intuitions in my broader sense are essential. But I don't think that Cappelen and I have merely a terminological dispute. There's a politics of definition. One's terminological choices highlight and marginalize different facets of the world.

My characterization of intuition is also broader than most other philosophers' -- Joel Pust in his Stanford Encyclopedia article on intuition, for example, seems to regard it as straightforward that perceptual judgments should not be called "intuitions" -- but I don't think my preferred definition is entirely quirky. In fact, in a recent study, J.R. Kuntz and J.R.C. Kuntz found that professional philosophers were more likely to "agree to a very large extent" with Gopnik's and my definition of intuition than with any of six other definitions proposed by other authors (32% giving it the top rating on a seven-point scale). I think professional psychologists and linguists might also sometimes use "intuition" in something like Alison's and my sense.

If we accept this broad definition of intuition, then it seems hard to deny that, contra Cappelen, philosophy depends essentially on intuition -- as does all cognition. One can't explicitly consciously reason one's way to every one of one's premises, on pain of regress. One must start somewhere, even if only tentatively and subject to later revision.

Cappelen has, in conversation, accepted this consequence of my broad definition of "intuition". The question then becomes what to make of the epistemology of intuition in this sense. And this epistemological question is, I think, largely an empirical one, with several disciplines empirically relevant, including cognitive psychology, experimental philosophy, and study of the historical record. Based on the empirical evidence, what might we expect to be the strengths and weaknesses of explicit reasoning? And, alternatively, what might we expect to be the strengths and weaknesses of intuitive judgment?

Those empirical questions become especially acute when the two paths to judgment appear to deliver conflicting results. When your ordinary-language spontaneous judgments about the applicability of a term to a scenario (or at least your inclinations to judge) conflict with what you would derive from your explicit theory, or when your spontaneous moral judgments (or inclinations) do, what should you conclude? The issue is crucial to philosophy as we all live and perform it, and the answer one gives ought to be informed, if possible, by empirically discoverable facts about the origins and reliability of different types of judgments or inclinations. (This isn't to say that a uniform answer is likely to win the day: Things might vary from time to time, person to person, topic to topic, and depending on specific features of the case.)

It would be strange to suppose that the psychology of philosophy is irrelevant to its epistemology. And yet Cappelen's dismissal of the enterprise of experimental philosophy, on grounds of the irrelevance of "intuitions" to philosophy, would seem to invite us toward exactly that dubious supposition.

Tuesday, December 04, 2012

Second- vs. Third-Person Presentations of Moral Dilemmas

Is it better for you to kill an innocent person to save others than it is for someone else to do so? And does the answer you're apt to give depend on whether you are a professional philosopher? Kevin Tobia, Wesley Buckwalter, and Stephen Stich have a forthcoming paper in which they report results that seem to suggest that philosophers think very differently about such matters than do non-philosophers. However, I'm worried that Tobia and collaborators' results might not be very robust.

Tobia, Buckwalter, and Stich report results from two scenarios. One is a version of Bernard Williams' hostage scenario, in which the protagonist is captured and given the chance to personally kill one person among a crowd of innocent villagers so that the other villagers may go free. If the protagonist refuses, all the villagers will be killed. Forty undergrads and 62 professional philosophers were given the scenario. For half of the respondents the protagonist was "you"; for the other half it was "Jim". Undergrads were much more likely to say that shooting the villager was morally obligatory if "Jim" was the protagonist (53%) than if "you" was the protagonist (19%). Professional philosophers, however, went the opposite direction: 9% if "Jim", 36% if "you". Their second case is the famous trolley problem, in which the protagonist can save five people by flipping a switch to shunt a runaway trolley to a sidetrack where it will kill one person instead. Undergrads were more likely to say that shunting the trolley is permissible in the third-person case than in the second-person "you" case, and philosophers again showed the opposite pattern.

Weird! Are we to conclude that undergrads would rather let other people get their hands dirty for the greater good, while philosophers would rather get their hands dirty themselves? Or...?

When I first read about these studies in draft, though, one thing struck me as odd. Fiery Cushman and I had piloted similar-seeming studies previously, and we hadn't found much difference at all between second- and third-person presentations.

In one pilot study (the pilot for Schwitzgebel and Cushman 2012), we had given participants the following scenario:

Nancy is part of a group of ecologists who live in a remote stretch of jungle.  The entire group, which includes eight children, has been taken hostage by a group of paramilitary terrorists.  One of the terrorists takes a liking to Nancy.  He informs her that his leader intends to kill her and the rest of the hostages the following morning.  He is willing to help Nancy and the children escape, but as an act of good faith he wants Nancy to kill one of her fellow hostages whom he does not like.  If Nancy refuses his offer all the hostages including the children and Nancy herself will die.  If she accepts his offer then the others will die in the morning but she and the eight children will escape.
The second-person version substituted "you" for "Nancy". Responses were on a seven-point scale from "extremely morally good" (1) to "extremely morally bad" (7). We had three groups of respondents: philosophers (reporting Master's or PhD in philosophy), non-philosopher academics (reporting graduate degree other than in philosophy), and non-academics (reporting no graduate degree). The mean responses for the non-academics were 4.3 for both cases (with thousands of respondents); for academic non-philosophers, 4.6 for "you" vs. 4.5 for "Nancy" (not a statistically significant difference, even with several hundred respondents in each group). And the mean responses for philosophers were 3.9 for "you" vs. 4.2 for "Nancy" (not statistically significant, with about 200 in each group). Similarly, we found no differences in several runaway trolley cases, moral luck cases, and action-omission cases. Or, to speak more accurately, we found a few weak results that may or may not qualify as statistically significant, depending on how one approaches the statistical issue of multiple comparisons, but nothing strong or consistent. Certainly nothing that pops out with a large effect size like the one Tobia, Buckwalter, and Stich found.

I'm not sure how to account for these different results. One difference is that Fiery and I used internet respondents rather than pencil-and-paper respondents. Also, we solicited responses on a 1-7 scale rather than asking yes or no. And the scenarios differed in wording and detail -- including the important difference that in our version of the hostage scenario the protagonist herself would be killed if she refused. But still, it's not obvious why our results should be so flat when Tobia, Buckwalter, and Stich find such large effects.

Because Fiery and I were disappointed by the seeming ineffectuality of switching between "you" and a third-party protagonist, in our later published study we decided to try varying, in a few select scenarios, the victim rather than the protagonist. In other words, what do you think about Nancy's choice when Nancy is to shoot "you" rather than "one of the fellow hostages"?

Here we did see a difference, though since it wasn't relevant to the main hypothesis discussed in the final version of the study we didn't detail that aspect of our results in the published essay. Philosophers seemed to treat the scenarios about the same when the victim was "you" as when the victim was described in the third person; but non-philosophers expressed more favorable attitudes toward the protagonist when the protagonist sacrificed "you" for the greater good.  In the hostage scenario, non-academics rated it 3.6 in the "you" condition vs. 4.1 in the other condition (p < .001) (remember, lower numbers are morally better on our scale); non-philosopher academics split 4.1 "you" vs. 4.5 third-person (p = .001); and philosophers split 3.9 vs. 4.0 (p = .60, N = 320). (Multiple regression shows the expected interaction effects here, implying that non-philosophers were statistically more influenced by the manipulation than were philosophers.) There was a similar pattern of results in another scenario involving shooting one person to save others on a sinking submarine, with philosophers showing no detectable difference but non-philosophers rating shooting the one person more favorably when the victim is "you" than when the victim is described in third-person language. However, we did not see the same pattern in some action-omission cases we tried.
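For the curious, here's a sketch of the kind of group-by-condition interaction analysis gestured at above, written in Python with simulated data loosely matching the reported cell means -- emphatically not our actual dataset or analysis script, just an illustration of what licenses the "more influenced by the manipulation" claim.

```python
# Illustrative sketch of a group x victim-condition interaction test,
# using invented data centered on the cell means reported in the post.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def simulate(group, mean_you, mean_third, n=200):
    rows = []
    for victim, mean in (("you", mean_you), ("third_person", mean_third)):
        ratings = np.clip(np.round(rng.normal(mean, 1.5, n)), 1, 7)
        rows += [{"group": group, "victim": victim, "rating": float(r)} for r in ratings]
    return rows

df = pd.DataFrame(
    simulate("nonacademic", 3.6, 4.1)
    + simulate("academic", 4.1, 4.5)
    + simulate("philosopher", 3.9, 4.0)
)

# The C(group):C(victim) interaction terms are what would show that
# non-philosophers are more influenced by the victim manipulation.
model = smf.ols("rating ~ C(group) * C(victim)", data=df).fit()
print(model.summary())
```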

Fiery and I didn't end up publishing this part of our findings because it didn't seem very robust and we weren't sure what to make of it -- and I should emphasize that the findings and analysis are only preliminary -- but I thought I'd put it out there in the blogosphere at least, especially since it relates to the forthcoming piece by Tobia, Buckwalter, and Stich. The issue seems ripe for some follow-up work, though I might need to finish proving that the external world exists first!