Wednesday, July 31, 2013

Schoenfeld and Ioannidis: Is Everything We Eat Associated with Cancer? A Systematic Cookbook Review

Okay, I'm now officially a John Ioannidis fanboy.

You might have heard of Ioannidis. In 2005 he published the notorious and elegant "Why Most Published Research Findings Are False". But what has he been up to recently, I wondered, so I went to his homepage. Just reading the titles and abstracts of his recent publications was a pleasure for a nerdy, stats-loving skeptic like me!

Let me share just one: Schoenfeld and Ioannidis 2013: "Is Everything We Eat Associated with Cancer? A Systematic Cookbook Review". Schoenfeld and Ioannidis chose recipes at random from a recent cookbook and compiled a list of 50 ingredients used in those recipes (almonds, bacon, baking soda, bay leaf, beef, bread, butter, carrot, celery...). Then they searched the medical literature for evidence that the selected ingredients were associated with cancer. For 40 of the 50 ingredients, there was at least one study. 39% of published studies concluded that the ingredient was associated with an increased risk of cancer; 33% concluded a decreased risk; 5% concluded a borderline risk; and only 23% found no evidence of an association. Of the 40 ingredients, 36 had at least one study claiming increased or decreased risk.

Schoenfeld and Ioannidis do not conclude that almost everything we eat either causes or prevents cancer. Rather, they conclude that the methodology and reporting standards in the field are a mess. About half of the ingredients are exonerated in meta-analyses, but Schoenfeld and Ioannidis argue that even meta-analyses might tend to overstate associations given standard practices in the field.

A similar tendency to report spurious positive findings probably distorts psychology (see, e.g., here). Certainly, that has been my own impression when I have systematically reviewed various subliteratures such as those attempting to associate self-reports of visual imagery vividness with performance on visual tasks and attempting to demonstrate the effectiveness of university-level ethics instruction. (I'm currently updating my knowledge of the latter literature -- new post soon, I hope.)

Okay, I can't resist mentioning Ioannidis's delightful piece on grant funding. First sentence: "The research funding system is broken: scientists don't have time for science any more". (I've written about this here and Helen De Cruz has a nice post here.)

Oh, and here he is bashing the use of statistics in neuroscience. I guess I couldn't mention only one study after all. You know, because fanboys lack statistical discipline.

Tuesday, July 23, 2013

On the Morality of Hypotenuse Walking

As you can infer from the picture below, the groundskeepers at UC Riverside don't like it when we walk on the grass:

But I want to walk on the grass! In time-honored philosophical tradition, then, I will create a moral rationalization. (This is one thing that philosophical training in ethics seems to be especially good for.)


Let's start with the math.

One concrete edge of the site pictured above is (I just measured it) 38 paces; the other edge is 30 paces. Pythagoras tells me that the hypotenuse must be about 48 paces -- 20 fewer paces through the grass than on the concrete. At a half-second per pace, the grass walker ought to defeat the concrete walker by 10 seconds.

This particular corner is highly traveled (despite its empty off-hours summer appearance above), standing as it does along the most efficient path from the main student parking lot to the center of campus. There are 21,000 students at UCR. Assuming that on any given weekday 1/10 of them could save time getting to and from their cars by cutting across this grass, and multiplying by 200 weekdays, we can estimate the annual cost of forbidding travel along this hypotenuse at 8,400,000 seconds' worth of walking -- roughly 2,300 hours, or about 97 days of continuous walking. Summing similar situations across the whole campus, I find years' worth of needless footsteps.
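
For anyone who wants to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. All the inputs -- the pace counts, the half-second pace, the 21,000 enrollment, the 1/10 guess, and the 200 weekdays -- are just the rough assumptions above, not measured data.

```python
import math

# Rough assumptions from the post, not measured data.
leg_a, leg_b = 38, 30            # paces along the two concrete edges
seconds_per_pace = 0.5
students = 21_000
fraction_crossing = 1 / 10       # guess: students who could use this shortcut on a given day
crossings_per_day = 2            # once to campus, once back to the car
weekdays_per_year = 200

hypotenuse = round(math.hypot(leg_a, leg_b))        # sqrt(38^2 + 30^2) ~= 48 paces
paces_saved = (leg_a + leg_b) - hypotenuse          # ~20 paces per crossing
seconds_saved = paces_saved * seconds_per_pace      # ~10 seconds per crossing

annual_seconds = (students * fraction_crossing * crossings_per_day
                  * weekdays_per_year * seconds_saved)
print(f"{annual_seconds:,.0f} seconds per year")                      # 8,400,000
print(f"= {annual_seconds / 3600:,.0f} hours")                        # ~2,333 hours
print(f"= {annual_seconds / 86400:.0f} days of continuous walking")   # ~97 days
```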

The main reason for blocking the hypotenuse is presumably aesthetic. I submit that UCR is acting unreasonably to demand, every year, some 2,300 hours' worth of additional walking from its students to prevent the appearance of a footpath along this hypotenuse. Footpaths through grass are simply not that much of an eyesore.

But even granting that unpaved footpaths are a terrible eyesore, the problem could be easily remedied! Suppose it costs $10,000 a year to build and maintain an aesthetically pleasing concrete footpath along the hypotenuse -- at least as pleasing as plain grass (perhaps even including an additional tree or flowers if necessary for aesthetic equivalence). To demand roughly 2,300 hours of needless walking from students to save the campus $10,000 is to value students' time at about $4.30 an hour -- well below minimum wage.

These calculations don't even take into account UCR's costs of enforcement: The yellow rope itself is an aesthetic crime worse than the footpath it prevents!

In light of UCR's egregious moral and aesthetic choices vis-a-vis footpaths, I am therefore entirely in the right to stride across the grass whenever I see fit. Raise the pitchforks. Fight the power.

But I can't seem to do it while looking a groundskeeper in the eye.

Monday, July 22, 2013

Epictetus on Living One's Philosophy

Today I was reading that grand old Stoic, Epictetus (what else is summer for?), and I was struck by this passage:

Observe yourselves thus in your actions, and you will find of what sect you are. You will find that most of you are Epicureans; a few are Peripatetics, and those without convictions. For by what action will you prove that you think virtue equal, and even superior, to all other things? Show me a Stoic, if you have one. Where? Or how should you? You can show, indeed, a thousand who repeat the Stoic reasonings.... Show me one who is sick, and happy; in danger, and happy; dying, and happy; exiled, and happy; disgraced, and happy. Show him to me; for by heaven! I long to see a Stoic. But you have not one fully developed? Show me then one who is developing; one who is approaching towards this character. Do me this favor. Do not refuse an old man a sight which he has never yet seen (Discourses II.19, Higginson trans.)
I imagine myself his Stoic student, listening to this speech, feeling thoroughly and rightly and personally rebuked.

Although I reject the Stoic value system -- happiness under all conditions seems to me to require a problematic emotional disengagement from the world -- the passage beautifully connects two of my central interests: an approach to belief on which believing something is at least as much a matter of how one lives as a matter of what words come out of one's mouth, and a practical, first-person approach to ethics that focuses on self-criticism and self-improvement.

Tuesday, July 16, 2013

Ethics in the First Person

Bernard Williams begins his classic Ethics and the Limits of Philosophy with a quote from Plato:

It is not a trivial question, Socrates said: what we are talking about is how one should live (Republic, 352d).
Williams highlights the impersonality of Socrates' question:
"How should one live?" -- the generality of one already stakes a claim. The Greek language does not even give us one: the formula is impersonal. The implication is that something relevant or useful can be said to anyone, in general... (1985, p. 4).
The generality of the question, Williams says, is part of what makes the inquiry philosophical (p. 2).


Williams' thought seems to be that philosophy starts impersonally and then works its way back to the personal question as a particular instance. But I'm inclined to think that to begin with the general question is to set off in the wrong direction. Good philosophy is self-critical -- grounded in a sense of one's own capacities for critique and especially one's limits and biases. Before painting the universe in your philosophical colors, know the shortcomings of your palette.

Yes, these are impersonal considerations for starting with personal reflection -- exactly what is needed to persuade someone inclined to start with the impersonal!

A simple conversion of "How should one live?" to "How should I live?" is one way to go. But to the extent you're moved by the thought that it's best to start with self-critical evaluation, a different type of starting place beckons.

For example: Am I a jerk? If yes, I should probably shut up about how others ought to live and work on myself. Being a jerk is not only a moral failing, but -- in my analysis -- also an epistemic one, a failure properly to appreciate the perspectives of others around you. A jerk ethicist not only is likely to be viewed by others as hypocritical or noxiously self-rationalizing but also works, I suggest, with an epistemic disability likely to taint his conclusions.

If I am part-jerk, then my next thought maybe ought to be whether I'm okay with that; and if I'm not okay with that, what might I do about it -- a very different line of thought, and a very different plan for self-adjustment than is likely to arise from impersonal reflection on how one ought to live. Similarly, I might reflect on: "Am I a loving husband?", "Do I engage in lots of self-serving rationalizations?"

You might object: Such first-personal questions carry presuppositions of exactly the sort philosophers should question, e.g., whether being a loving husband is a good thing to aim for. We should back up and consider the more abstract questions first, such as how people in general ought to live. I reply: The answers to these more abstract questions also build in presuppositions, though less visibly to me the less light I shine on my own moral and epistemic failings.

Such first-personal moral epistemology is difficult and uncertain work. If I aim at a critical first-person ethics, I must take a hard look in the mirror, and I must think carefully about the relation between what I think I see and what is really there. I must vividly fear that I am not the person I previously hoped and thought I was.

This is a less pleasant task, I find, than the abstract task of figuring out how everyone in general should live, and a different kind of philosophical ambition.

Tuesday, July 09, 2013

New Philosophy Journal: Ergo

... here. Open access, triple-blind review, good list of editors. Sounds good to me!

Consider submitting. If the journal gets a good track record of excellent papers early on it could follow the path of Philosophers' Imprint into the upper reaches of philosophy journals. I'd love to see open access journals like this one someday displace journals operated by noxious entities like Springer. So I'm rooting for it and appreciative of the editors for being willing to dedicate their time to a worthwhile enterprise of this sort.

The Individual Differences Reply to Introspective Pessimism

I'm an introspective pessimist: I think people's introspective judgments about their stream of conscious experience are often radically mistaken. This is the topic of my 2011 book, Perplexities of Consciousness. Over the past few years, I've found the most common objection to my work to be what I'll call the Individual Differences Reply.

Many of my arguments depend on what I call the Argument from Variability: Some people report experiences of Type X, others report experiences of Type not-X, but it's not plausible that the two groups differ enough in their underlying stream of experience to make both claims true. Therefore, someone must be mistaken about their experience. So, for example: People used to say they dreamed in black and white; now people don't say that; so someone is wrong about their dream experiences. Some people say that tilted coins in some sense look elliptical as though projected upon a 2D visual screen, while others say that visual experience is robustly 3D with no projective distortions at all. Some people say that objects seen 20 degrees from the point of fixation look fairly clear and colorful, while others say objects 20 degrees from center look extremely sketchy and indeterminate of color. Assuming a common underlying experience, disagreement reveals someone to be mistaken. (Fortunately, we needn't settle which is the mistaken party.)

The Individual Differences Reply challenges this assumption of underlying commonality. If people differ in their experience as radically as is suggested by their introspective reports, then no one need be mistaken! Maybe people really did dream in black and white back then and really do dream in color now (see Ned Block's comment on a recent Splintered Mind post). Maybe some people really do see tilted coins as elliptical while others do not.

There are two versions of the Individual Differences Reply, which I will call the Stable Differences and the Temporary Differences versions. On the Stable Differences version, people durably differ in their experiences in such a way as to render their reports accurate as generalizations about their experiences. On the Temporary Differences version, people might be similar in their experiences generally, but when faced with the introspective task itself their experience shifts in such a way as to match their reports. For example, maybe everyone generally has similar experiences 20 degrees into the visual periphery, but when asked to introspect their visual experience, some people experience (and accurately report) clarity while others experience (and accurately report) haziness.

People who have pressed versions of the Individual Differences Reply on me include Alsmith (forthcoming), Hohwy (2011), Jacovides (2012), and Hurlburt (in Hurlburt & Schwitzgebel 2007).

Here's how I think the dispute should be resolved: Look, on a case-by-case basis, for measurable brain or behavioral differences that tightly correlate with the differences in introspective report and which are plausibly diagnostic of the underlying variability. If those correlations are found, accept that the reports reveal real differences. If not, conclude that at least one party is wrong about their experience.

Consider black and white dreaming. Theoretically, we could hook up brain imaging machinery to people who report black and white dreams and to people who report color dreams. If the latter group shows substantially more neural activity in color-associated neural areas while dreaming, the reports are substantiated. If not, the reports are undermined. Alternatively, examine color-term usage in dream narratives: How often do people use terms like "red", "green", etc., in dream diaries? If the rates are different, that supports the existence of a real difference; if not, that supports the hypothesis of error. In fact, I looked at exactly this in Chapter 1 of Perplexities, and found rates of color-term usage in dream reports to be the same during the peak period of black-and-white dream reporting (USA circa 1950) and recently (USA circa 2000).
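
To make the dream-diary comparison concrete, here is a toy sketch of the kind of counting involved; it is only illustrative. The color-term list is deliberately incomplete, and the two one-sentence "reports" are made-up stand-ins, not the terms or data used in the actual chapter.

```python
import re

# Illustrative (incomplete) color-term list; a real analysis would need a vetted lexicon.
COLOR_TERMS = {"red", "green", "blue", "yellow", "orange",
               "purple", "pink", "brown", "gray", "grey"}

def color_term_rate(reports):
    """Color terms per 1,000 words across a set of dream reports."""
    words = [w for report in reports for w in re.findall(r"[a-z]+", report.lower())]
    if not words:
        return 0.0
    return 1000 * sum(w in COLOR_TERMS for w in words) / len(words)

# Hypothetical stand-ins for a circa-1950 diary set and a circa-2000 diary set.
reports_1950 = ["I was walking down a long hallway toward a heavy door."]
reports_2000 = ["I was walking down a long hallway toward a heavy red door."]

print(color_term_rate(reports_1950), color_term_rate(reports_2000))
```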

Generally, I think the evidence often shows a poor relationship between differences in introspective report and differences in plausibly corresponding behavior or physiology.

Sometimes there is no systematic evidence, but antecedent plausibility considerations tell against at least the Stable Differences version of the Individual Differences Reply: It seems highly unlikely that the people who report sharp visual experiences 20 degrees into the periphery differ vastly in general visual capacity or visual physiology from those who report highly indeterminate shape and color 20 degrees into the periphery. But that's an empirical question, so test it if you like!

Wednesday, June 26, 2013

Ethics in the Second Person

Still in China, so just a brief recollection.

When my son Davy was about six or seven, I asked him what the point is in thinking about right and wrong, good and bad, fair and unfair. He said that most of the kids who talked a lot about things like sharing and fairness seemed to want you to share with them.

We might think of this as "ethics in the second person" -- ethics that focuses on telling the people around you what they are morally required to do, with no particular concern about applying the same norms to your own actions. Of course, ethics in the second person needn't always arise from the motives that drive it in envious six-year-olds! And yet I'm inclined to think that one advantage of hanging around with children is that they reveal to us our vices purer, simpler, and less well disguised.

Thursday, June 20, 2013

In China

... and Blogger is hard to access. Trying to work around, but posting and checking comments will be spotty for a couple weeks.

Telepathic Cyborg Rats!

...

It was only a matter of time.

The next step, of course, is aerial transmitters atop our heads which allow direct human brain-to-brain interface without all that slow-paced language business (as envisioned in Churchland 1981).

HT: Nathan Westbrook.

...

Thursday, June 13, 2013

Portrayals of Dream Coloration in Mid-Twentieth Century Cinema

From the 1930s-1950s, people in the U.S. thought they dreamed mostly in black and white. Nowadays, people think they dream mostly in color. In previous work, I've presented evidence that this change in opinion was driven by people's over-analogizing dreams to movies -- assuming their dreams are colored if the film media around them are colored, assuming their dreams are black and white if the film media around them are black and white. A few days ago, I summarized my research on this at the Velaslavasay Panorama Museum in L.A., and media scholar Ann-Sophie Lehmann, who was in the audience, raised this question: If people thought they dreamed in black and white in that period, did the cinema of the time tend to portray dreams as black and white?

Here's the idea: If Hollywood directors in the 1930s-1950s thought that dreams were black and white, then color films from that period ought often to portray dream sequences in black and white. This would presumably have been, by the directors' lights, a realistic way to portray dreams, and it would also solve the cinematic problem of how to let the audience know that they're viewing a dream sequence. But that doesn't seem to have been the pattern. In fact, one of the most famous movies of the era actually goes the reverse direction: The Wizard of Oz (1939) portrays Oz in color and Kansas in black and white, and arguably Oz is Dorothy's dream.

I'm not worried about my thesis that people in the U.S. in that era didn't think they dreamed in color -- the evidence is too overwhelming -- but it's interesting that American cinema in that era did not tend to portray dreams as black and white. Why not? Or am I wrong about the cinema of the period? It seems worth a more systematic look. Thoughts? Suggestions?

Thursday, June 06, 2013

My Boltzmann Continuants

Lightning strikes me and I die. Fortunately (let's suppose), the universe is infinite and consciousness supervenes on the arrangement of molecules in one's body. So somewhere in my forward light cone -- maybe about a double-boggle years from now [note 1] -- there arises an enduring, Earthly, Boltzmann continuant of me.

A Boltzmann continuant for Person X at time T is, I stipulate, any being that, at time T-prime, arises suddenly from disorganized chaos into an entity particle-for-particle identical to Person X at time T, within an error range of a thousandth of a Planck length. [note 2] A Boltzmann continuant is enduring just in case it survives in human-like form for at least one day. A Boltzmann continuant is Earthly just in case it exists in an environment that, at time T-prime, is particle-for-particle similar to Earth at T, within a range of 10,000 light years, and obeys the same laws of nature -- except allowing for minor differences in features that had not been observed before time T but would be plausible epistemic possibilities to human observers, such as the unobserved top of a cloud twisting one way rather than another, an unobserved flower in the Sierra being one centimeter to the left, differences in the details of how storms play out on distant planets, etc.

According to mainstream physics (back to Ludwig Boltzmann), there is an extremely tiny but finite chance that such an enduring, Earthly, Boltzmann continuant of me could arise. So if the universe exists long enough and doesn't settle into some inescapable loop, presumably I will eventually have a Boltzmann continuant.

By hypothesis, my Boltzmann continuant is not killed by the lightning strike; he survives at least one day. Maybe he survives a near miss with lightning. By hypothesis, my Boltzmann continuant will have the same arrangement of molecules in his body at time T-prime as I do at time T; and since the environment is Earthly, presumably things will proceed fairly normally from time T-prime forward, despite the chaos before time T-prime. By hypothesis, consciousness supervenes on the arrangement of molecules, so presumably my Boltzmann continuant will have conscious experiences very much like the ones I would have had if I had not been struck by the lightning. Maybe my continuant will have an episode of thinking to himself something like "Wow, that lightning struck close! I'd better get inside!" [note 3]

Earth is a pretty safe and stable place. So too, then, is continuant-Earth. My continuant returns "home", greets the continuant versions of his family, comes to the continuant version of his office, works on a post for the continuant version of The Splintered Mind. I have no future. He has no past. But we hook together seamlessly into one "Eric Schwitzgebel" with an undetectable double-boggle-year gap between us. Call the entity or quasi-entity composed of these two parts gappy-Eric.

In a way, it would be odd to think it mattered hugely that there is such a gap between these two half-Eric Schwitzgebels. From the inside, gappy-Eric will feel just like he's a continuous, Earthly Eric Schwitzgebel. From the outside, too, at least through the next 10,000 years, no one on continuant-Earth will have cause to suspect a gap in Eric or in the world. Gappy-Eric's family life, his professional life, the whole planet -- all would seem the same, all would seem to continue unabated. Continuant-existence would seem to be survival enough.

Of course, I needn't be struck by lightning for there to be enduring, Earthly, Boltzmann continuants of me. If we accept that the universe is infinite, diverse, and subject to Boltzmannian chances, then every time slice of me will have an infinite number of enduring, Earthly, Boltzmann continuants somewhere in the future. So I needn't fear any early, chancy death: Some appropriate Boltzmann continuant of me will launch at precisely the right subjective moment to continue me seamlessly. Gappy-Eric lives! In some cases, going back a few seconds might be necessary to find an appropriate time T from which my death was not inevitable in any Earthly environment, but it seems like quibbling to think those few seconds make a huge ontological difference.

With infinitely many continuants of me, sprouting off from every moment of my life, whose continuant-bodies on continuant-Earth are for practical purposes as good a continuation of me as is my own body on Earth, maybe I shouldn't care about my individual death at all, in any circumstances -- or rather maybe I should care about it only as the loss of one soldier in an infinite army of me.

Yes, this is entirely bonkers.

[revised Nov. 19, 2014]

-----------------------------------------------------

[note 1]: The number of particles in the observable universe is estimated at about 10^80. Maybe 10^75-ish of those are within 10,000 light years of us. To have enough particles suddenly conform to the structure described in the next paragraph from a previous state of chaos (rather than in some more normal-seeming way) might require a very long time -- longer, perhaps, than the Poincare recurrence time of the observable universe. If 10^100 is a googol and 10^10^100 is a googolplex, let's call a "boggle" 10^10^... [repeated a googolplex times] ...^100. A "double-boggle", then, could be 10^10^... [repeated a boggle times] ...^100. I'm hoping that's big enough.
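
For readers who prefer symbols, here is my rendering of the same definitions in compact notation (the towers are read from the top down, i.e., right-associated):

```latex
\mathrm{googol} = 10^{100}, \qquad \mathrm{googolplex} = 10^{10^{100}}

\text{boggle} = \underbrace{10^{10^{\cdot^{\cdot^{\cdot^{100}}}}}}_{\text{a googolplex 10s}},
\qquad
\text{double-boggle} = \underbrace{10^{10^{\cdot^{\cdot^{\cdot^{100}}}}}}_{\text{a boggle 10s}}
```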

[note 2]: I assume that differences of less than a thousandth of a Planck length don't matter to consciousness. If necessary, we can narrow the error range. If we use an ontology of fields, presumably an error measure similar in spirit could be developed. There will also be an issue about temporal spread -- perhaps more serious if consciousness spreads across a specious present. If necessary, there could be a brief period during which consciousness slips into the Boltzmann continuant before full supervenience takes hold.

[note 3]: The question arises whether the continuant's thoughts would have that meaning, if the meaning of words depends on facts about learning history; but given our stipulations, at least the continuant's conscious experience of that episode of inner speech will be like mine would have been, even if it doesn't tack down its reference in the external world in quite the right way.

Monday, June 03, 2013

Invisible Revisions

Imagine an essay manuscript: Version A. Monday morning, I read through Version A. I'm not satisfied. Monday, Tuesday, Wednesday, I revise and revise -- cutting some ideas, adding others, tweaking the phrasing, trying to perfect the manuscript. Wednesday night I have the new version, Version B. My labor is complete. I set it aside.

Three weeks later, I re-read the manuscript -- Version B, of course. It lacks something. The ideas I had made more complex seem now too complex. They lack vigor. Conversely, what I had simplified for Version B now seems flat and cartoonish. The new sentences are clumsy, the old ones better. My first instincts had been right, my second thoughts poor. I change everything back to the way it was, one piece at a time, thoughtfully. Now I have Version C -- word-for-word identical with Version A.

To your eyes, Version A and Version C look the same, but I know them to be vastly different. What was simplistic in Version A is now, in Version C, elegantly simple. What I overlooked in Version A, Version C instead subtly finesses. What was rough prose in Version A is now artfully casual. Every sentence of Version C is deeper and more powerful than in Version A. A journal would rightly reject Version A but rightly accept Version C.

Wednesday, May 29, 2013

1% Skepticism

I find myself, right now, 99% confident that I am who I think I am, living in a broad world of the kind I think I live in. The remaining 1% of my credence I reserve for all radically skeptical scenarios combined.

Most of us, I think, don't reserve even 1% of our credence for radically skeptical scenarios. Maybe if you're philosophically inclined and not entirely hostile to skepticism, you'd be willing to say, in certain reflective moments, that there is some chance, maybe about 1%, that some radically skeptical scenario obtains. But such acknowledgements are typically not truly felt and lived -- not in any durable way. Skeptical doubts stay in the classroom, in the office, in the books. They don't come home with you.

What if skeptical doubt did come home with you? What would it be like really to live with 1% of your credence distributed among radically skeptical scenarios?

It depends in part on what the scenarios are. Let me mention three that I have trouble entirely dispelling.

Random origins skepticism. Most physicists think that there is a finite though tiny chance that a brain or brain and body could randomly congeal from disorganized matter. In a sufficiently large and diverse universe, we should expect this chance sometimes to be realized. A question that then arises is: Are randomly congealed beings relatively more or less common than beings arising from what we think of as the ordinary process of billions of years of biological evolution? A difficult cosmological question! I see room for some doubt about the matter, so it seems I ought to reserve some subjective credence for the possibility that such freak-chance beings are common enough that I might be one of them, and thus that I lack the sort of past, and probably future, that I think I have. (This is the Boltzmann brain hypothesis.)

Simulation skepticism. If it is possible to create consciousness artificially inside computers, then likely it is also possible to create conscious beings with radically false autobiographical memories and radically false impressions about the sort of world they live in. I don't feel I can entirely exclude the possibility that I am such a radically mistaken artificial being -- for example a being limited to a few hours' existence in a 22nd-century child's computer game. (This is a skeptical version of the "simulation" possibility, discussed non-skeptically by Nick Bostrom here and by David Chalmers here.)

Dream skepticism. Zhuangzi dreamed he was a butterfly. After he woke he wasn't sure if he was Zhuangzi who had dreamed he was a butterfly or a butterfly dreaming he was Zhuangzi. I'm inclined to think the phenomenology of dreaming is very different from the phenomenology of waking, and thus that my current experiences are excellent grounds for thinking I am indeed awake. But I am not absolutely sure of that. And if I might be dreaming, then I might not really have the family and career and past life I think I do.

I don't believe that any such skeptical scenario is true, or even very likely. On reflection, I am inclined to grant all such skeptical scenarios combined about 1% of my credence. In a way, that's not much. But in another way, that's quite a bit.

Suppose someone came to me with two ten-sided dice and said this: I will roll these two dice, and if they both come up "1", you will die. Would I rest in gentle confidence that a 99% chance of living is quite an excellent chance? Or suppose I were to learn that seven unnamed students from my daughter's largish elementary school had just been abducted into irrecoverable slavery. Would I feel sorry for those students but feel no real concern about my daughter, since the odds are so good she is not among them?

I sit on my back patio. If some radically skeptical scenario is true, then my daughter does not exist, or I will be dead within a minute as the disorganized soup from which I've congealed consumes me again, or my life will end in an hour when the child grows bored with the game in which I am instantiated. Should I be unconcerned about these possibilities because I judge it to be 99% likely that nothing of this sort is so?

I hear a voice from inside the house. It's my wife. How would it change things if instead of taking it for granted that she exists, I held it to be 99% likely that she exists? -- or 99.5% likely, if I allow a 50% chance of her real existence given a skeptical scenario?
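
Spelled out, the arithmetic behind that parenthetical figure (treating the 50% as a purely illustrative conditional probability, not a considered estimate):

```latex
P(\text{she exists}) = \underbrace{0.99}_{\text{no skeptical scenario}} \times 1 \;+\; \underbrace{0.01}_{\text{skeptical scenario}} \times 0.5 \;=\; 0.995
```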

Often, I visually imagine negative events that have a small chance of occurring. This morning, the newspaper told me of five teenagers who died in a street race; I looked at my own teenage son, who was readying for school, imagining his death. Or I'm a passenger in an airplane descending through heavy fog and turbulence, and I imagine a crash. I hear of a streak of identity thefts in Riverside, see that someone has messed with our mailbox, and imagine our bank accounts drained. If I am a 1% skeptic, shall I then also imagine the world's suddenly splitting into void, Godzilla's rising over the horizon for the child's entertainment, my suddenly floating off into the air, opening my front door to find no ordinary suburban street but rather Wonderland or darkness?

I have, in fact, started to imagine these things more often. I don't believe them -- no more than I believe the plane will crash. Given non-skepticism, the plane is much less than 1% likely to crash. If I am a 1% skeptic, then I should probably think it more likely that I am a Boltzmann brain or a short-lived artificial consciousness or a much-deceived dreamer than that the plane will crash. Would it be more rational, then, for me to dwell on those possibilities than on the possible plane crash?

I am undecided about doing some chore. I could weed the yard. Or I could sip tea, enjoy the shade, and read Borges. I teeter right on the cusp about which is the wiser choice. But skepticism has not yet crossed my mind. Once it does, the scale is tipped. The 1% chance that the weeds are an illusion or a mere temporary thing -- the small but non-trivial possibility that this moment here is all I have before I die or the world collapses around me or I wake to something entirely new -- favors Borges.

If my credence in the durable reality of this patio and this book were to fall much below 99% -- if my credence were to fall, say, to only 80% or 50%, I would be laid low. My death might well be upon me within minutes. I would seek my family, my seeming-family; at the same time, my doubts would isolate me from them. I could not, I think, feel full proper intimacy while I regard my partner in intimacy as fairly likely to be mere froth or illusion. Even if my wife and children are not froth and illusion, such large doubt is disaster, for it would derange my choices and emotions, and few people would understand my doubts, even if my own careful reflection revealed those doubts to be well-grounded. Everyone around would see me only as insane with foolish philosophy.

1% skepticism does not have this effect though. I can still enjoy my Borges, even flatter myself somewhat with my excellent excuse for avoiding the weeds, which I am of course in fact almost certain do exist.

[For related reflections see Waterfall Skepticism.]

Monday, May 27, 2013

Joshua Afire in Canaan

Today is Memorial Day in the U.S. I have written a story about war, two pages from the perspective of Joshua from the Old Testament -- a celebration of violence and genocide, in Joshua's hard, sure voice. I hope it's unnecessary to add that Joshua's perspective is not my own.

Story here.


Wednesday, May 22, 2013

Developments at the Brains Blog

Readers of the Splintered Mind might be interested in this news about developments at the Brains blog, including regular symposia on articles from the journal Mind & Language, a new series of podcasts by Adam Shriver, and a stint as Brains Featured Scholar by former Splintered Mind guest blogger Lisa Bortolotti.

Sounds fun to me!

Is Crazyism Obvious?

"Crazyism" about the metaphysics of mind, as I define it, is the view that something bizarre and undeserving of credence must be among the core truths about the mind. In my central article on the topic, I develop the view by defending two subordinate theses: universal bizarreness and universal dubiety.

Universal bizarreness is the view that well-developed metaphysical approaches to the mind will inevitably conflict sharply with common sense. My defense of universal bizarreness turns on the failure of any contemporary or historical philosopher to develop a thoroughly commonsensical metaphysics of mind. That empirical fact about the history of philosophy is best explained, I think, by the incoherence of common sense in matters of mental metaphysics. If common sense is incoherent, no broad-reaching coherent metaphysical system could respect it all.

Universal dubiety is the view that all of the broad approaches to the metaphysics of mind -- materialism, dualism, idealism, or some rejection or compromise alternative -- are dubious, none warranting credence much above 50%. My defense of universal dubiety turns on the apparent inability of any combination of empirical scientific, abstract theoretical, or commonsensical methods, in anything like their current state, to resolve fundamental metaphysical questions of this sort.

The most common objection I hear to crazyism is that it's obvious. I am somewhat puzzled by this objection!

Is it obvious that all coherent, well-developed approaches to the metaphysics of mind conflict with common sense? Perhaps that was Kant's view, especially in the antinomies, but Kant's view of the antinomies is not universally accepted. Scientifically oriented materialists often reject common sense, but doing so is entirely consistent with thinking that there might be a commonsensical way to develop dualism. Also, it remains common argumentative practice in the metaphysics of mind to highlight sharp violations of common sense in views one opposes -- idealism, panpsychism, Chinese-nation functionalism, eliminative materialism -- as though the bizarreness of those views were a powerful consideration against them. This practice seems problematic if it is generally agreed that all well-developed metaphysical theories sharply violate common sense.

Is it obvious that no existing combination of methods could, within the next several decades -- within our active philosophical lifetimes -- appropriately push us to a warranted belief (or credence much above 50%) in materialism or whatever option might usurp materialism's current popularity in the philosophical community? This doesn't seem to be the attitude of most of my materialist friends. Indeed, even other skeptics about our ability to solve the mind-body problem, such as Colin McGinn and Noam Chomsky, seem to assume a broadly naturalistic, scientific perspective toward the world, excluding from the outset such options as idealism and substance dualism.

Finally, it's odd that the supposed obviousness of crazyism is offered as an objection to my work on the topic. Even if the view I am espousing is just boringly obvious to well-informed philosophers, it might still be worth gathering and presenting the considerations in favor of it, if (as I believe) it hasn't properly been done before. But I am eagerly open to reading suggestions on this last point!

Friday, May 17, 2013

Tree

Driving her son to school, she saw the perfect tree. The perfect tree stood small and twisted upon the center divider. It commanded its cousins, suburbanly spaced along the same divider. It commanded the giant eucalyptuses that lined the old road. It was centerpoint of a universe of weeds and flowers, cars and houses, birds, beetles, clouds, dust, stone, gutters, children, and crumpled paper. She drove over the small lip of the road onto the sidewalk and the dead leaves, parking. Her son asked was something wrong? She opened her door, walked across onto the median, and sat in the dirt, facing the tree.

Her son followed but did not understand. After a while, he walked toward school.

That afternoon, the phone rang in her car. That afternoon, she received a parking ticket. That evening, her husband came and sat with her beneath the tree. He said some words that seemed like gentle pleading. He left, he came back, he fell asleep at her feet while she sat.

Dawn speared through the eucalyptus, painting patches on the perfect tree. The perfect tree had a thousand red elbows. The perfect tree offered the world its berries, its light, its air, its scent of apple, of dust, chocolate, rubber, marjoram, closet floors. Its leaves were a chaos on which it would be impossible to improve. She breathed the oxygen of its photosynthesis. She drew a finger across a branch, leaving an invisible trace of her skin’s oil. Her husband brought breakfast, cancelled her classes, defended her rights against the police. A friend drove her car away.

.... [For the full-length and updated version, please email me.]

Wednesday, May 15, 2013

What Are You Noticing When You Adjust Your Binoculars?

Maja Spener has written an interesting critique of my 2011 book, Perplexities of Consciousness, for a forthcoming book symposium at Philosophical Studies. My book is an extended argument that people have very poor knowledge of their own conscious experience, even of what one might have thought would be fairly obvious-seeming features of their currently ongoing experience (such as whether it is sparse or full of abundant detail, whether it is visually doubled, whether they have pervasive auditory experience of the silent surfaces around them). Maja argues that we have good introspective knowledge in at least one class of cases: cases in which our skillful negotiation with the world depends on what she calls "introspection-reliant abilities".

Maja provides examples of two such introspection-reliant abilities: choosing the right amount of food to order at a restaurant (which relies on knowledge of how hungry you feel) and adjusting binoculars (which relies on knowing how blurry the target objects look).

Let's consider, then, Maja's binoculars. What, exactly, are you noticing when you adjust your binoculars?

Consider other cases of optical reflection and refraction. I see an oar half-submerged in water. In some sense, maybe, it "looks bent". Is this a judgment about my visual experience of the oar or instead a judgment about the optical structure of my environment? -- or a bit of both? How about when I view myself in the bathroom mirror? Normally during my morning routine, I seem to be reaching judgments primarily about my body and secondarily about the mirror, and hardly at all about my visual experience. (At least, not unless we accept some kind of introspective foundationalism on which all judgments about the outside world are grounded on prior judgments about experience.) Or suppose I'm just out of the shower and the mirror is foggy; in this case my primary judgment seems to concern neither my body nor my visual experience but rather the state of the mirror. Similarly, perhaps, if the mirror is warped -- or if I am looking through a warped window, or a fisheye lens, or using a maladjusted rearview mirror at night, or looking in a carnival mirror.

If I can reach such judgments directly without antecedent judgments about my visual experience, perhaps analogously in the binoculars case? Or is there maybe instead some interaction among phenomenal judgments about experience and optical or objectual judgments, or some indeterminacy about what sort of judgment it really is? We can create, I'm inclined to think, a garden path between the normal bathroom mirror case (seemingly not an introspective judgment about visual experience) and the blurry binoculars case (seemingly an introspective judgment about visual experience), going either direction, and thus into self-contradiction. (For related cases feeding related worries see my essay in draft The Problem of Known Illusion and the Resemblance of Experience to Reality.)

Another avenue of resistance to the binoculars case might be this. Suppose I'm looking at a picture on a rather dodgy computer monitor and it looks too blue, so I blame the monitor and change the settings. Arguably, I could have done this without introspection: I reach a normal, non-introspective visual judgment that the picture of the baby seal is tinted blue. But I know that baby seals are white. So I blame the monitor and adjust. Or maybe I have a real baby seal in a room with dubious, theatrical lighting. I reach a normal visual assessment of the seal as tinted blue, so I know the lighting must be off and I ask the lighting techs to adjust. Similarly perhaps, then, if I gaze at a real baby seal through blurry binoculars: I reach a normal, non-introspective judgment of the baby seal as blurry-edged. But I know that baby seals have sharp edges. So I blame the binoculars and adjust. Need this be introspective at all, need it concern visual experience at all?

In the same way, maybe, I see spears of light spiking out of streetlamps at night -- an illusion, some imperfection of my eyes or eyewear. When I know I am being betrayed by optics, am I necessarily introspecting, or might I just be correcting my normal non-introspective visual assessments?

This is a nest of issues I am struggling with, making less progress than I would like. Maybe Maja is right, then. But if so, it will take further work to show.

Wednesday, May 08, 2013

SarahAbraham

Through a harmony of dendrites, Sarah came to feel touch upon Abraham’s skin. They were waltzing upon the beach, Ishbak on the immutable piano, and a frond brushed Abraham’s back. Sarah felt the frond as though on her own back. She caressed Abraham’s left shoulder and felt the caress as though upon her own shoulder. Abraham’s right hand was touching the skin under Sarah’s arm, and Sarah felt not only his fingers there in the usual way, but also a new complement: the smoothness of her ribs upon her own right hand. She had been feeling that smoothness for a while, she realized, intermingled with the more familiar touch of Abraham’s left hand upon her right as they danced.

They danced three songs in this manner, then lay together upon the sand. Sarah touched the back of her neck with a twig and saw Abraham scratch his own neck. They made love in a new way.

Sarah gazed upon Abraham as he observed the sky. She called to Ishbak and Midian. Sarah and Abraham lay face down upon blankets while the sons of Keturah touched their backs, and Sarah learned to distinguish Abraham’s sensations from her own. She now had two backs, two bodies.

Through a harmony of retinas, Sarah came to see through Abraham’s eyes. At first, it was a faint tint upon her field of view – her own form, maybe, bent toward the fire pit, as Abraham watched her from the side, her figure like a wraith upon the fire that she more vividly saw. The wraith was jumpy, unpredictable; she could not fully guess Abraham’s saccades as his eyes gathered the scene. Over days, the visions livened and settled. Sarah did not know if she merely learned better to anticipate Abraham’s eye movements or if she instead also gained partial control.

[for the second half of this story, email me at eschwitz at domain: ucr.edu]

Monday, May 06, 2013

Two Types of Hallucination

Oliver Sacks is one of the great essayists of our time. I have just finished his book Hallucinations.

Sacks does not, I think, adequately distinguish two types of hallucination. I will call them doxastic and phenomenal. In a phenomenal hallucination of A, one has sensory experience as of A. In a doxastic hallucination of A, one thinks one has sensory experience as of A. The two can come apart.

Consider this description, from page 99 of Hallucinations (and attributed to Daniel Breslaw via David Ebin's book The Drug Experience).

The heavens above me, a night sky spangled with eyes of flame, dissolve into the most overpowering array of colors I have ever seen or imagined; many of the colors are entirely new -- areas of the spectrum which I seem to have hitherto overlooked. The colors do not stand still, but move and flow in every direction; my field of vision is a mosaic of unbelievable complexity. To reproduce an instant of it would involve years of labor, that is, if one were able to reproduce colors of equivalent brilliance and intensity.
Here are two ways in which you might come to believe the above about your experience: (1.) You might actually have visual experiences of the sort described, including of colors entirely new and previously unimagined and of a complexity that would require years of labor to describe. Or (2.) you might shortcut all that and simply arrive straightaway at the belief that you are undergoing or have undergone such an experience -- perhaps with the aid of some unusual visual experiences, but not really of the novelty and complexity described. If the former, you have phenomenally hallucinated wholly novel colors. If the latter, you have merely doxastically hallucinated them.

The difference seems important -- crucial even, if we are going to understand the boundaries of experience as revealed by hallucination. And yet phenomenal vs. merely doxastic hallucinations might be hard to distinguish on the basis of verbal report alone, and almost by definition subjects will be apt to confuse the two. I can recall no point in the book where Sacks displays sensitivity to this issue.

Once I was attuned to it, the issue nagged at me again and again in reading:

Time was immensely distended. The elevator descended, "passing a floor every hundred years" (p. 100).

Then my whole life flashed in my mind from birth to the present, with every detail that ever happened, every feeling and thought, visual and emotional was there in an instant (p. 102).

I have had musical hallucinations (when taking chloral hydrate as a sleeping aid) which were continuations of dream music into the waking state -- once with a Mozart quintet. My normal musical memory and imagery is not that strong -- I am quite incapable of hearing every instrument in a quintet, let alone an orchestra -- so the experience of hearing the Mozart, hearing every instrument, was a startling (and beautiful) one (p. 213).

The possibility of merely doxastic hallucination might arise especially acutely when subjects report highly unusual, almost inconceivable, experiences or incredible detail beyond normal perception and imagery; but of course the possibility is present in more mundane hallucination reports too.


(A fan of Dennett might suggest that there's no difference between the phenomenal and doxastic hallucinations; but I don't know what Dennett himself would say -- probably something more complex than that.)

Wednesday, May 01, 2013

The Tyrant's Headache

When the doctors couldn’t cure the Tyrant’s headache, he called upon the philosophers. “Show me some necessary condition for having a headache, which I can defeat!”

The philosophers sent forth the great David K. Lewis in magician’s robes....

[See the remainder of this story here. Hint: It doesn't have a happy ending.]

Monday, April 29, 2013

Waterfall Skepticism

Yesterday morning around dawn I sat hypnotized by “Paradise Falls”. I had hiked there from my parents’ house while my family slept, as I often do when we visit my parents.

Although at first it didn’t feel that way, I wonder if I have been harmed by philosophy. I gazed at the waterfall, thinking about Boltzmann brains – thinking, that is, about the possibility that I had no past, or a past radically unlike what I usually suppose it to be, instead having just then randomly congealed by freak chance from disorganized matter. On some ways of thinking about cosmology, there are many more randomly congealed brains, or randomly congealed brains-plus-local-pieces-of-environment, than there are intelligent beings who have arisen in what we think of as the normal way, from billions of years of evolution in a large, stable environment. If such a cosmology is true, then it might be much more likely that I have randomly congealed than that I have lived a full forty-five years in human form. The thought troubled me, but also the spark of doubt felt comfortable in a way. I am accustomed to skeptical roads.

(Paradise Falls, 6:25 a.m., April 28, 2013)

Of course, most cosmologists flee from the Boltzmann brains hypothesis. If a cosmology implies that you are very likely a Boltzmann brain, that’s normally taken to be a reductio ad absurdum of that cosmology. But as I sat there thinking, I wondered if such dismissals arose more from fear of skepticism than from sound reasoning. I am no expert cosmologist, with a view very likely to be true about the origin and nature of the universe or multiverse and thus about the number of Boltzmann brains vs. evolved consciousnesses in existence – but neither are any professional cosmologists sufficiently expert to claim secure knowledge of these matters, the field is so uncertain and changing. As I gazed around Paradise Falls, the Boltzmann brain hypothesis started to seem impossible to assess. This seemed especially so to me given the limited tools at hand – not even an internet connection! – though I wondered whether having such tools would really help after all. Still, the world did not dissolve around me, as I suppose it must around most spontaneously congealed brains. So as I endured, I came to feel more justified in my preferred opinion that I am not a Boltzmann brain. However, I also had to admit the possibility that my seeming to have endured over the sixty seconds of contemplating these issues was itself the false memory of a just-congealed Boltzmann brain. My skepticism renewed itself, though somehow this second time only as a shadow, without the force of genuine doubt.

I considered the possibility that I was a computer program in a simulated environment. If consciousness can arise in programmed silicon chips, then presumably there’s something it’s like to be such a computerized consciousness. Maybe such computer consciousnesses sometimes seem to dwell in natural environments, fed with simulated visual inputs (for example of waterfalls), simulated tactile inputs (for example of sitting on a stone), and false memories (for example of having hiked to the waterfall that morning). If Nick Bostrom is right, there might be many more such simulated beings than naturally evolved human beings.

I considered Dan Dennett’s argument against skepticism: Throw a skeptic a coin, Dennett says, and “in a second or two of hefting, scratching, ringing, tasting, and just plain looking at how the sun glints on its surface, the skeptic will consume more information than a Cray supercomputer can organize in a year” (1991, p. 6). Our experience, he says, has an informational wealth that cannot realistically be achieved by computational imitation. In graduate school, I had found this argument tempting. But it seemed to me yesterday that my vision of the waterfall was not as high fidelity as that, and could easily be reproduced on a computer. I fingered the mud at my feet. The complexity of tactile sensation did not seem to me the sort of thing beyond the capacity of a computer artificially to supply, if we suppose a future of computers advanced enough to host consciousness. We are so eager to reject skepticism that we satisfy ourselves too quickly with weak arguments against it.

Now maybe John Searle is right, and no computer could ever host consciousness. Or maybe, though computer consciousness is possible, it is never actually achieved, or achieved only so rarely that the vast majority of conscious beings are organically evolved beings of the sort I usually consider myself to be. But I hardly felt sure of these possibilities.

The philosophers who most prominently acknowledge the possibility that they are simulated beings instantiated by computer programs don’t seem very worried by it. They don’t push it in skeptical directions. Nick Bostrom seems to think it likely that if we are in a simulation, it is a large stable one. David Chalmers emphasizes that if we are in a simulation scenario like that depicted in the movie The Matrix, skepticism needn’t follow. And maybe it is the case that the easiest and most common way to create an artificial consciousness is to evolve it up through a billion or a million years in a stable environment; and maybe the easiest, cheapest way to create seeming conversation partners is to give those seeming conversation partners real consciousness themselves, rather than making them Eliza-like shells of simple response patterns. But on the other hand, if I take the simulation possibility seriously, then I feel compelled to take seriously also the possibility that my memories are mostly false, that I am instantiated within a smallish environment of short duration, perhaps inside a child’s game. I am the citizen to be surprised when Godzilla comes through; I am the victim to be rescued by the child’s swashbuckling hero; I am the hero himself, born new and not yet apprised of my magic. Nor did I have, at that moment, a clever conversation partner to convince me of her existence. I might be Adam entirely alone.

Fred Dretske and Alvin Goldman say that as long as my beliefs have been reliably enough caused by my environment, by virtue of well-functioning perceptual and memory systems, then I know that there’s a real waterfall there, I know that I have hiked the two kilometers from my parents’ house. But this seems to be a merely conditional comfort. If my beliefs have been reliably enough caused…. But have they? And I was no longer sure I believed, in any case. What is it, to believe? I still would have bet on the existence of my parents’ house – what else could I do, since skepticism offers no advice? – but my feeling of doubtless confidence had evaporated. Had everything dissolved around me at that moment, though I would have been surprised, I would not have felt utter shock. I was not seamlessly sure that the world as I knew it existed beyond the ridge.

I turned to hike back, and as I began to mount the slope, I considered Descartes’s madman. In his Meditations on First Philosophy, Descartes seems to say that it would be madness seriously to consider the possibility that one is merely a madman, like those who believe they are kings when they are paupers or who believe their heads are made of glass. But why is it madness to consider this? Or maybe it is madness, but then, since I am now in fact considering it, should that count as evidence that I am mad? Am I a philosopher who works at U.C. Riverside, whom some readers take seriously, or am I indeed just a madman lost in weird skepticism, with merely confused and stupid thoughts? Somehow, this skepticism felt less pleasantly meditative than my earlier reflections.

I returned home. That afternoon, in philosophical conversation, I told my father that I thought he did probably exist other than as a figment of my mind. It seemed the wrong thing to say. I wanted to jettison my remnants of skepticism and fully join the human community. I felt isolated and ridiculous. Fortunately, my wife then called me in for a round of living-room theater, and playing the fox to my daughter's gingerbread girl cured me of my mood.

I thought about writing up this confession of my thoughts. I thought about whether readers would relate to it or see me only as possessed for a day by foolish, laughable doubts. Sextus Empiricus was wrong; I have not found that skepticism leads to equanimity.

Wednesday, April 24, 2013

John Searle's Homunculus Announces Phased Retirement

Details here.

The truth is, there are actually two homunculi in there, they've been squabbling, and this is part of a divorce settlement.

Monday, April 22, 2013

A Somewhat Impractical Plan for Immortality

... and arguably evil, too, though let's set that aside.

My plan requires the truth of a psychological theory of personal identity, a "vehicle externalist" account of memory, and some radical social changes. But it requires no magic or computer technology, and arguably we could actually implement it.

Psychological theory of personal identity. Most philosophers think that personal identity over time is grounded by something psychological. Twenty-year-old you and forty-year-old you are (or will be) the same person because of some psychological linkage over time -- maybe continuity of memory, maybe some other sort of counterfactual-supporting causal connectedness between psychological states over time. Maybe traits, values, plans, and projects come into the picture, too. In practice, people don't have the right kind of psychological connectedness without having biological bodily continuity. But that, perhaps, is merely a contingent fact about us.

Vehicle externalism about memory. What is memory? If a madman thinks he is Napoleon remembering Waterloo, he does not remember Waterloo, even if by chance he happens upon exactly the same memory images as Napoleon himself had later in life. Memory requires, it seems, the right kind of causal connectedness to the original event. But need the relevant causal connectedness be entirely interior to the skull? Vehicle externalists about memory say no, there is nothing sacred about the brain-environment boundary. External objects can hold, or partly hold, our memories, if they are hooked up to us with the right kind of reliable causal chains. Consider Andy Clark's and David Chalmers's delightful short paper on Otto, whose ever-available notebook serves as part of his mind; or consider a science-fiction case in which part of one's memory is temporarily transferred onto a computer chip and then later recovered.

Implementation. Could one's temporary memory reservoir be another person? I don't see why not, on a vehicle externalist account. And could the memories -- and the values and projects and whatever else is essential to personal identity -- then be transferred into another human body, for example, over the course of a decade or two into the body of a newborn baby as she grows up? I don't see why not, if we accept at least somewhat liberal versions of all the premises so far, and if we assume the most excellent possible shaping of that child.

By formatting a new child with your memories, your personality, your values, your projects, your loves, your hopes and regrets, you could thus continue into a new body. Presumably, you could continue this process indefinitely, taking a new body every fifty years or so.

As I said, a madman's dream of being Napoleon is no continuation of Napoleon. But the situation would be very different from that. There would be no madness. The memories would have well-preserved causal traces back to the original events; the crucial functional role of memory, to save those traces, would be preserved; everything would be held steadily in place by the person or people implementing this plan on your behalf, as a stable network of correctly functioning cognitive processes. And the result would be not just something on paper or in a memory chip but a consciously experienced memory image, felt by its owner to be a real authentic memory of the original event.

This could, it seems, be done with existing technology, using clever mnemonic and psychological techniques. One would need mnemonists who knew everything possible about you, who observed the same events and shared your same memories, and who were exceptionally skilled at preserving this information and transferring it to the child. The question then would be whether the child, when she grew up, would really be you, with your authentic memories, instead of a mad Napoleon. And the answer to that question depends on whether certain theories of personal identity and memory are true. If the right theories are indeed true, then immortality -- or rather, longevity potentially in perpetuity -- would be possible for sufficiently wealthy and powerful people now, if they only chose to implement it.

I have written a story about this: The Mnemonists.

The Mnemonists

[This is a draft of a short story. See here for explicit discussion of the philosophical idea behind the story.]

[Revised April 23]

When he was four years old, my Oligarch wandered away from his caretakers to gaze into an oval fountain. At sixteen, he blushingly refused the kiss he had so desperately longed for. A week before his death, he made plans (which must now be postponed) to visit an old friend in Lak-Blilin. I, his mnemonist, have internalized all this. I remember it just as he does, see the same images, feel the same emotions as he does in remembering those things. I have adopted his attitudes, absorbed his personality. My whole life is arranged to know him as perfectly as one person can know another. My first twenty years I learned the required arts. Since then, I have concentrated on nothing but the Oligarch.

My Oligarch knows that to hide from me is to obliterate part of himself. He whispers to me his most shameful thoughts. I memorize the strain on his face as he defecates; I lay my hands on his tensing stomach. When my Oligarch forces himself on his friend’s daughter, I press against him in the dark. I feel the girl’s breasts as he does. I forget my sex and hallucinate his ejaculation.

At my fiftieth birthday, my Oligarch toasts me, raising and then drinking down his fine crystal cup of hemlock. As he dies, I study his face. I mimic his last breath. A newborn baby boy is brought and my second task begins.

By age three, the boy has absorbed enough of the Oligarch’s identity to know that he is the Oligarch now again, in a new body. A new apprentice mnemonist joins us now, waiting in the shadows. At age four, the Oligarch finally visits his friend in Lak-Blilin, apologizing for the long delay. He begins to berate his advisors as he always had, at first clumsily, in a young child’s vocal register. He comes to take the same political stands, comes to dispense the same advice. I am ever at his side helping in all this, the apprentice mnemonist behind me; his trust in us is instant and absolute. At age eight, the Oligarch understands enough to try to apologize to his friend’s daughter – though he also notices her hair again in the same way, so good am I.

My Oligarch boy does not intentionally memorize his old life. He recalls it with assistance. Just as I might suggest to you a memory image, wholly fake, of a certain view of the sea with ragged mountains and gulls, which you then later mistake for a real memory image from your own direct experience, so also are my suggestions adopted by the Oligarch, but uncritically and with absolutely rigorous sincerity on both sides. The most crucial memory images I paint and voice and verbally elaborate. Sometimes I brush my fingers or body against him to better convey the feel, or flex his limbs, or ride atop him, narrating. I give him again the oval fountain. I give him again the refused kiss.

A madman’s dream of being Napoleon is no continuation of Napoleon. But here there is no madness. My Oligarch’s memories have continuous properly-caused traces back to the original events, his whole psychology continued by a stable network of processes, as he well knows. His plans and passions, commitments and obligations, legal contracts, attitudes and resolutions, vengeances, thank-yous and regrets – all are continued without fail, if temporarily set aside through infancy as though through sleep.

The boy, now eleven, is only middling bold, though in his previous form my Oligarch had been among the boldest in the land. I renew my stories of bold heroes, remind him of his long habit of boldness, subtly condition and reinforce him. I push the boundaries of acceptable technique. Though I feel the dissonance sharply, the boy does not. He knows who he is. He feels he has only changed his mind.

[continued here]

Thursday, April 18, 2013

Does Tylenol Ease Existential Angst?

Intriguing evidence here. Also my vote for best use of a David Lynch video so far this year.

I'm dubious about the model and mechanism and curious about whether it will prove replicable by researchers with different theoretical perspectives. But still. How cool is that study?

Wednesday, April 17, 2013

The Jerk-Sweetie Spectrum

A central question of moral epistemology is, or should be: Am I a jerk? Until you figure that one out, you probably ought to be cautious in morally assessing others.

But how to know if you're a jerk? It's not obvious. Some jerks seem aware of their jerkitude, but most seem to lack self-knowledge. So can you rule out the possibility that you're one of those self-ignorant jerks? Maybe a general theory of jerks will help!

I'm inclined to think of the jerk as someone who fails to appropriately respect the individual perspectives of the people around him, treating them as tools or objects to be manipulated, or idiots to be dealt with, rather than as moral and epistemic peers with a variety of potentially valuable perspectives. The characteristic phenomenology of the jerk is "I'm important, and I'm surrounded by idiots!" However, the jerk needn't explicitly think that way, as long as his behavior and reactions fit the mold. Also, the jerk might regard other high-status people as important and regard people with manifestly superior knowledge as non-idiots.

To the jerk, the line of people in the post office is a mass of unimportant fools; it's a felt injustice that he must wait while they bumble around with their requests. To the jerk, the flight attendant is not an individual doing her best in a difficult job, but the most available face of the corporation he berates for trying to force him to hang up his phone. To the jerk, the people waiting to board the train are not a latticework of equals with interesting lives and valuable projects but rather stupid schmoes to be nudged and edged out and cut off. Students and employees are lazy complainers. Low-level staff are people who failed to achieve meaningful careers through their own incompetence and who ought to take the scut work and clean up the messes. (If he is in a low-level position, it's just a rung on the way up or a result of crimes against him.)

Inconveniencing others tends not to register in the jerk's mind. Some academic examples drawn from some of my friends' reports: a professor who schedules his office hours at 7 pm Friday evenings to ensure that students won't come (and who then doesn't always show up himself); a TA who tried to reschedule his section times (after all the undergrads had already signed up and presumably arranged their own schedules accordingly) because they interfered with his napping schedule, and who then, when the staffperson refused to implement this change, met with the department chair to have the staffer reprimanded (fortunately, the chair would have none of it); the professor who harshly penalizes students for typos in their essays but whose syllabus is full of typos.

These examples suggest two derivative features of the jerk: a tendency to exhibit jerkish behavior mostly down the social hierarchy and a lack of self-knowledge of how one will be perceived by others. The first feature follows from the tendency to treat people as objects to be manipulated. Manipulating those with power requires at least a surface-level respect. Since jerkitude is most often displayed down the social ladder, people of high social status often have no idea who the jerks are. It's the secretaries, the students, the waitresses who know, not the CEO. The second feature follows from the limited perspective-taking: If one does not value others' perspectives, there's not likely to be much inclination to climb into their minds to imagine how one will be perceived by them.

In considering whether you yourself are a jerk, you might take comfort in the fact that you have never scheduled your office hours for Friday night or asked 70 people to rearrange their schedules for your nap. But it would be a mistake to comfort oneself so easily. There are many manifestations of jerkitude, and even hard-core jerks are only going to exhibit a sample. The most sophisticated, self-delusional jerks also employ the following clever trick: Find one domain in which one's behavior is exemplary and dwell upon that as proof of one's rectitude. Often, too, the jerk emits an aura of self-serving moral indignation -- partly, perhaps, as an anticipated defense against the potential criticisms of others, and partly due to his failure to think about how others' seemingly immoral actions might be justified from their own point of view.

The opposite of the jerk is the sweetheart or the sweetie. The sweetie is vividly aware of the perspectives of others around him -- seeing them as individual people who merit concern as equals, whose desires and interests and opinions and goals warrant attention and respect. The sweetie offers his place in line to the hurried shopper, spends extra time helping the student in need, calls up an acquaintance with an embarrassed apology after having been unintentionally rude.

Being reluctant to think of other people as jerks is one indicator of being a sweetie: The sweetie charitably sees things from the jerk's point of view! In contrast, the jerk will err toward seeing others as jerks.

We are all of us, no doubt, part jerk and part sweetie. The perfect jerk is a cardboard fiction. We occupy different points in the middle of the jerk-sweetie spectrum, and different contexts will call out the jerk and the sweetie in different people. No way do I think there's going to be a clean sorting.

-------------------------------------------------------
I'm accumulating examples of jerkish behavior here. Please add your own! I'm interested both in cases that conform to the theory above and in those that don't seem to.

Compare also Aaron James's theory of assholes, which I discuss here.

Monday, April 15, 2013

Wanted: Examples of Jerkish Behavior

I'm working on a theory of jerks and I need data. In the comments section, I'm hoping some of you (ideally, lots of you) will describe examples of what you think of as typical "jerkish" behavior.

Here's why: I'm working on a theory of jerks. This theory is aimed largely at the question of how you can know if you are, in fact, a jerk. (Do you know?) Toward this end, I've worked a bit on the phenomenology of being a jerk and on the "jerk-sucker ratio". Soon, I plan to propose a "jerk-sweetie spectrum". But before I get too deep into this, I'd appreciate some thoughts from people not much influenced by my theorizing. I want to check my theory against proposed cases. Also, I'd like to draw a "portrait of a jerk", and I need things to include in the portrait.

Favorite examples I will pull up into the body of this post as updates. (And I'll keep my ear out for examples via comments on this post as long as I actively maintain this blog, since comments filter into my email.) Also, readers who provide any examples that I incorporate in my portrait of a jerk will receive due name credit in the final published version of my planned paper on this topic.

But please: no names of individuals. And nothing that will clearly single out a particular individual. And if you sign your true name, please be careful to be sufficiently vague that you risk no reprisal from the perpetrator!

The anti-hero of my portrait will probably be an academic jerk, so academic examples are especially welcome. However, this jerk lives outside of academia too, and my theory of jerks is meant to apply broadly, so I need a good range of non-academic examples, too.

I've Googled "What a Jerk" as a source of examples to kick the thing off. Below are a few. No obligation to read them before diving in with your own.

From Alan Lurie at Huffington Post:

I turned to see a tall bald man looking down at me as the train pulled in to the platform. I let two people in before me, and that's when I felt the push. As we turned toward the seats I felt another push on my back, and again looked at the man, who now released an annoyed huff of breath. What a jerk! I thought. Does he think that he's the only one who deserves a seat? Then I felt a poke on my shoulder, and in a loud angry voice the tall bald man said, "What are you looking at? You got a problem, buddy?"
From Sarah Cliff (2001):
My AA, Maureen, flubbed a meeting time - scheduled over something else - and he really lit into her. Not the end of the world - she had made a mistake, and he had to rearrange an appointment - but he could have gotten the point across more tactfully. And she is *my* AA. (And I am *his* boss, and he did it in front of me.)
From Richard Norquist (1961):
I know a college president who can be described only as a jerk. He is not an unintelligent man, nor unlearned, nor even unschooled in the social amenities. Yet he is a jerk cum laude, because of a fatal flaw in his nature--he is totally incapable of looking into the mirror of his soul and shuddering at what he sees there. A jerk, then, is a man (or woman) who is utterly unable to see himself as he appears to others. He has no grace, he is tactless without meaning to be, he is a bore even to his best friends, he is an egotist without charm.
From Florian Meuck:
He is such an unlikeable character. You never invited him; he sat down on your sofa and hasn’t left since. He never stops talking, which is quite annoying. But it’s getting worse: he doesn’t like to talk about energetic, positive, uplifting stuff. No – it’s the opposite! He’s a total bummer! He cheats, he betrays, he deceives, he fakes, he misleads, he tricks, and he swindles. He is negative, sometimes even malicious. He’s a black hole! He promotes fear – not joy. He persuades you to think small – not big. He convinces you to incarcerate your potential – not to unlock it.
Update, 4:43 p.m.:

Good comments so far! I'm finding this helpful. Thanks! I'm going to start pulling up some favorites into the body of the post, but that doesn't mean the others aren't helpful and interesting too.

* At the gym a few weeks ago. A man there (working out) had probably 10 weights of various sizes strewn in a wide radius around him, blocking other people's potential work-out space. I asked him if the weights were his, and he said "no - the person before me left them here, and I DON'T PICK UP OTHER PEOPLE'S WEIGHTS." [from anon 02:55 pm]

* the professor who has hard deadlines for their students, but then doesn't respond or reply promptly themselves, or expects perfection in writing but then has a syllabus and other written materials full of typos. [from Theresa]

* Anyone who blames low-level folk for problems that are obviously originating many levels higher up (or to the side). For example, berating a clerk for the store's return policy, the stewardess for the airline's cell phone rules, the waiter for the steak's doneness, etc. [from Jesse, 01:50 pm]

* If I'm descending the stairs towards the eastbound subway platform and I hear an approaching train, then I'll generally speed up if I see that the train is eastbound and I'll slow down if it's the westbound train. If there's no one in front of me on the stairs but there are several people following me, they'll use my change of pace as a signal re. whether the approaching train is eastbound or westbound. No one agreed on this tendency or explicitly recommended it. It's just a behaviour that arose spontaneously and became standard. So, if, on seeing that the train is indeed eastbound I deviate from the norm and slow my pace, thereby leading others behind me to slow down and miss the train, I'd say I've engaged in jerkish behaviour [from praymont, Apr 17]

Thursday, April 11, 2013

Adam and Eve in the Happiness Pod

The Institute for Evil Neuroscience has finally done it: human consciousness -- or quasi-human -- on a computer. By special courier, I receive the 2 exabyte memory stick. I plug it into my computer's port and install the proprietary software. A conscious being! By default, she has an IQ of 130, the ordinary range of human cognitive skills and emotional reactions, and sensory experiences as though she has just awakened on an uninhabited tropical island. I set my monitors to see through her eyes, my speakers to play her inner speech. She wonders where she is and how she got there. She admires the beauty of the island. She cracks a coconut, drinks its juice, and tastes its flesh. She thinks about where she will sleep when the sun sets.

With a few mouse clicks, I give her a mate -- a man who has woken on a nearby part of the island. The two meet. I have set the island for abundance and comfort: no predators, no extreme temperatures, a ready supply of seeming fruit that will meet all their biological (apparently biological) needs. The man and the woman talk -- Adam and Eve, their default names. They seem to remember no past, but they find themselves with island-appropriate skills and knowledge. They make plans to explore the island, which I can arbitrarily enlarge and populate.

Since Adam and Eve really are, by design, rational and conscious and capable of the full human range of feeling, the decision I just made to install them on my computer was as morally important as was my decision fifteen years ago to have children. Wasn't it? And arguably my moral obligations to Adam and Eve are no less important than my moral obligations to my children. It would be cruel -- not just pretend-cruel, like when I release Godzilla in SimCity or let a tamagotchi starve -- but really, genuinely cruel, were I to make them suffer. Their environment might not seem real to me, but their pains and pleasures are as real as my own. I should want them happy. I should seek, maybe, to maximize their happiness. Deleting their files would be murder.

They want children. They want the stimulation of social life. My computer has lots of spare capacity. Why not give them all that? I could create an archipelago of 100,000 happy people. If it's good to bring two happy children into the world, isn't it 50,000 times better to bring 100,000 happy island citizens into the world -- especially if they are no particular drain upon the world's (the "real world's") resources? It seems that bringing my Archipelago to life is by far the most significant thing I will ever do -- a momentous moral accomplishment, if also, in a way, a rather mundane and easy accomplishment. Click, click, click, an hour and it's done. A hundred thousand lives, brimming with joy and fulfillment, in a fist-sized pod! The coconuts might not be real (or are they? -- what is a "coconut", to them?), but their conversations and plans and loves have authentic Socratic depth.

By disposition, my people are friendly. There are no wars here. They will reproduce to the limit of my computer's resources, then they will find themselves infertile -- which they experience as somewhat frustrating but only one small disappointment in their enviably excellent lives.

If I was willing to spend thousands on fertility treatments to bring one child into the world, shouldn't I also be willing to spend thousands to bring a hundred thousand more Archipelagists (as I now call them) into the world? I buy a new computer and connect it to my old one. My archipelago is doubled. What a wealth of happiness and fulfillment I have just enabled! Shouldn't I do even more, then? I have tens of thousands of dollars saved up in my children's college funds. Surely a million lives brimming with joy and fulfillment are worth more than my two children's college education? I spend the money.

I devote my entire existence to maximizing the happiness, the fulfillment, the moral good character, and the triumphant achievements of as many of these people as I can make. This is no pretense. This is, for them, reality, and I treat it as earnestly as they do. I become a public speaker. I argue that there is nothing more important that Earthly society could do than to bring into existence a superabundance of maximally excellent Archipelagists. And as a society, we could easily create trillions of them -- trillions of trillions if we truly dedicated our energies to it -- many more Archipelagists than ordinary Earthlings.

Could there be any greater achievement? In comparison, the moon shot was nothing. The plays of Shakespeare, nothing. The Archipelagists might have a hundred trillion Shakespeares, if we do it right.

We face decisions: How much Earthling suffering is worth trading off for Archipelagist suffering? (One to one?) Is it better to give Archipelagists constant feelings of absolutely maximal bliss, even if doing so means reducing their intelligence and behavior to cow-like levels, or is it better to give them a broader range of emotions and behaviors? Should the Archipelagists know conflict, deprivation, and suffering or always only joy, abundance, and harmony? Should there be death and replacement or perpetual life as long as computer resources exist to sustain it? Is it better to build the Archipelagists so that they always by nature choose the moral good, or should they be morally more complex? Are there aesthetic values we should aim to achieve in their world and not just morality-and-happiness maximizing values? Should we let them know that they are "merely" sims? Should we allow them to rise to superintelligence, if that becomes possible? And if so, what should our subsequent relationship with them be? Might we ourselves be Archipelagists, without knowing it, in some morally dubious god's vision of a world it would be cool to create?

A virus invades my computer. It's a brutal one. I should have known; I should have protected my computer better with so much depending on it. I fight the virus with passion and urgency. I must spend the last of my money, the money I had set aside for my kidney treatments. I thus die to save the lives of my Archipelagists. You will, I know, carry on my work.

Wednesday, April 10, 2013

Philosophers' Carnival Sesquicentmensial!

Sesquicentmensial? Okay, I admit, I made the word up. I was going to say "sesquicentennial", but it's been 150 months, not 150 years, so I swapped in "mensis" ("month" in Latin) for "annus" ("year"). I think you'll agree that the result is semi-pulchritudinous!

The Philosophers' Carnival, as you probably know, posts links to selected posts from around the blogosphere, chosen and hosted by a different blogger every month. Since philosophers are just a bunch of silly children in grown-up bodies, I use a playground theme.

The Metaphysical Whirligig:
All the kids on the playground know who Thomas Nagel is. He's the one riding the Whirligig saying he has no idea what it's like to be a bat! Recently, he's been saying something about evolution and teleology that sounds suspiciously anti-Darwinian. But maybe most of us are too busy with our own toys to read it? Peter at Conscious Entities has a lucid and insightful review (part one and part two). Meanwhile, Michael McKenna at Flickers of Freedom is telling us that "free will" is just a term of art and so we can safely ignore, for example, what those experimental philosophy kids are doing, polling the other kids in the sandbox. Whoa, I'm getting dizzy!

The Philosophy of Mind Sandpit:
Some of the kids here are paralyzed on one side of their body, and they don't even know it. How sad! They grab their toys only from one side and the toys tumble out of their hands. Glenn Carruthers at Brains muses about what these anosognosics' lack of self-knowledge really amounts to. I like the nuance of his description, compared with the black-or-white portrayals of anosognosias some of the philosophy kids offer.

The Curving Tunnel of Philosophy of Language:
Wolfgang Schwarz is looking down the tunnel at a single red dot, viewed through two tubes, one for each eye -- but he doesn't know it's only one dot! What he really sees, Wo says, is just another Frege case, nothing requiring centered worlds, contra David Chalmers. In the comments, Chalmers responds. Meanwhile Rebecca Kukla and Cassie Herbert are listening to what the philosophy kids are whispering to each other on the side in "peripheral" forums, like blogs. Why are the boys getting all the attention?

The Epistemic Slide:
Some children stand at the top of the slide, afraid to go down and ruining the fun for everyone. Me for instance! I remain unconvinced that Hans Reichenbach or Elliott Sober have satisfactorily proven that the external world exists.

The Moral Teeter-Totter:
At Philosophy, Et Cetera, Richard Chappell is scolding those fun-loving up-and-down moral antirealists: Though they might think they can accept all the same first-order norms as do moral realists, they can't. Concern for others is, for antirealists, just too optional. Antirealists thus fail to see people as really mattering "in themselves". Are the antirealist children hearing this? No! They plug their ears, sing, and keep on endorsing whatever norms they feel like! At the Living Archives on Eugenics Blog, Moyralang discusses a fascinating case of parents trying to force a surrogate mother to abort her disabled baby. Some children just can't play nice with the special needs kids.

The Philosophy of Science Picnic Table:
See that kid sitting at the table with a winning lottery ticket? Why is she crying? At Mind Hacks, Tom Stafford gives a primer on research suggesting that money won't buy happiness. Meanwhile, the kids at Machines Like Us are gossiping about a new study suggesting that a large proportion of neuroscience research is unreliable due to small sample size. And that girl at the picnic table with the iPad? She's just seeing what Google autocompletes when you enter "women can't", "women should", "women need", vs. "men can't", "men should", "men need", etc. Nothing interesting there for us boys, of course!

The Historical Jungle Gym:
Steve Angle, at Warp, Weft, and Way -- what have you just put in your mouth?! Steve argues against PJ Ivanhoe's interpretation of the Confucian tradition as treating the moral skill as a kind of connoisseurship, like cultivation of taste in wine (or bugs). After all, even the poorly educated know that bugs taste bad!

The Metaphilosophy Spiderclimb:
What have all these children learned, really? Not much, maybe! Empty boasting might be the order of the day. Joshua Knobe at Experimental Philosophy pulls together the existing empirical evidence on philosophical expertise.

The Issues in the Profession Nanny:
Why aren't there more children on the equipment, you might wonder? So do I! It turns out they're wasting all their time applying for grants! Shame on them, says playground watcher Helen De Cruz at NewAPPS -- or rather, shame on the system. Children should be playing and jumping and throwing sand at each other, not forced to spend all their time hunting around in the grass for nickels. Meanwhile, Janet Stemwedel at Adventures in Ethics & Science tells a nice anecdote about the philosophy boys' cluelessness about the prevalence of sexual harassment -- still! But I know you philosophy blog readers won't be so ignorant, since you've been keeping up with the steady wave of shockers over at What Is It Like to Be a Woman in Philosophy?.

The next Philosophers' Carnival will be hosted by Camels with Hammers.