Friday, September 15, 2023

Walking the Walk: Frankness and Social Proof

My last two posts have concerned the extent to which ethicists should "walk the walk" -- that is, live according to, or at least attempt to live according to, the ethical principles they espouse in their writing and teaching. According to "Schelerian separation", what ethicists say or write can and should be evaluated independently of facts about the ethicist's personal life. While there are some good reasons to favor Schelerian separation, I argued last week that ethical slogans ("act on that maxim you can at the same time will to be a universal law", "maximize utility") will tend to lack specific, determinate content without a context of clarifying examples. One's own life can be a rich source of content-determining examples, while armchair reflection on examples tends to be impoverished.

Today, I'll discuss two more advantages of walking the walk.

[a Dall-E render of "walking the walk"]

Frankness and Belief

Consider scientific research. Scientists don't always believe their own conclusions. They might regard their conclusions as tentative, the best working model, or just a view with enough merit to be worth exploring. But if they have doubt, they ought to be unsurprised if their readers also have doubt. Conversely, if a reader learns that a scientist has substantial doubts about their own conclusions, it's reasonable for the reader to wonder why, to expect that the scientist is probably responding to limitations in their own methods and gaps in their own reasoning that might be invisible to non-experts.

Imagine reading a scientific article, finding the conclusion wholly convincing, and then learning that the scientist who wrote the article thinks the conclusion is probably not correct. Absent some unusual explanation, you’ll probably want to temper your belief. You’ll want to know why the scientist is hesitating, what weaknesses and potential objections they might be seeing that you have missed. It’s possible that the scientist is simply irrationally unconvinced by their own compelling reasoning; but that’s presumably not the normal case. Arguably, readers of scientific articles are owed, and reasonably expect, scientific frankness. Scientists who are not fully convinced by their results should explain the limitations that cause them to hesitate. (See also Wesley Buckwalter on the "belief norm of academic publishing".)

Something similar is true in ethics. If Max Scheler paints a picture of a beautiful, ethical, religious way of life which he personally scorns, it's reasonable for the reader to wonder why he scorns it, what flaws he sees that you might not notice in your first read-through. If he hasn't actually tried to live that way, why not? If he has tried, but failed, why did he fail? If a professional ethicist argues that ethically, and all things considered, one should be a vegetarian, but isn't themselves a vegetarian and has no special medical or other excuse, it's reasonable for readers and students to wonder why not and to withhold belief until that question is resolved. People are not usually baldly irrational. It's reasonable to suppose that there's some thinking behind their choice, which they have not yet revealed to readers and students, which tempers or undercuts their reasoning.

As Nomy Arpaly has emphasized in some of her work, our gut inclinations are sometimes wiser than our intellectual affirmations. The student who says to herself that she should be in graduate school, that academics is the career for her, but who procrastinates, self-sabotages, and hates her work – maybe the part of her that is resisting the career is the wiser part. When Huck Finn tells himself that the right thing to do is to turn in his friend, the runaway slave Jim, but can't bring himself to do it – again, his inclinations might be wiser than his explicit reasoning.

If an ethicist's intellectual arguments aren't penetrating through to their behavior, maybe there's a good reason. If you can't, or don't, live what you intellectually endorse, it could be because your intellectual reasoning is leaving something important out that the less intellectual parts of you rightly refuse to abandon. Frankness with readers enables them to consider this possibility. Conversely, if we see someone who reasons to a certain ethical conclusion, and their reasoning seems solid, and then they consistently live that way without tearing themselves apart with ambivalence, we have less ground for suspecting that their gut might be wisely fighting against flaws in their academic reasoning than we do when we see someone who doesn't walk the walk.

What is it to believe that eating meat is morally wrong (or any other ethical proposition)? I favor a dispositionalist approach (e.g., here, here, here). It is in part to be disposed to say and intellectually judge that eating meat is morally wrong. But more than that, it is to give weight to the avoidance of meat in your ethical decision-making. It is to be disposed to feel you have done something wrong if you eat meat for insufficient reason, maybe feeling guilt or shame. It is to feel moral approval and disapproval of others' meat-avoiding or meat-eating choices. If an ethicist intellectually affirms the soundness of arguments for vegetarianism but lacks the rest of this dispositional structure, then (on the dispositionalist view I favor) they don't fully or determinately believe that eating meat is ethically wrong. Their intellectually endorsed positions don't accurately reflect their actual beliefs and values. This completes the analogy with the scientist who doesn't believe their own conclusions.

Social Proof

Somewhat differently, an ethicist's own life can serve as a kind of social proof. Look: This set of norms is livable – maybe appealingly so, with integrity. Things don't fall apart. There's an implementable vision, which other people could also follow. Figures like Confucius, Buddha, and Jesus were inspiring in part because they showed what their slogans amounted to in practice, in part because they showed that real people could live in something like the way they themselves lived, and in part because they also showed how practically embodying the ethics they espoused could be attractive and fulfilling, at least to certain groups of people.

Ethical Reasons to Walk the Walk?

I haven't yet discussed ethical reasons for walking the walk. So far, the focus has been epistemology, philosophy of language, and philosophy of mind. However, arguing in favor of certain ethical norms appears to involve recommending that others adhere to those norms, or at least be partly motivated by those norms. Making such a recommendation while personally eschewing those same norms plausibly constitutes a failure of fairness, equity, or universalization – the same sort of thing that rightly annoys children when their parents or teachers say "do as I say, not as I do". More on this, I hope, another day.

Friday, September 08, 2023

One Reason to Walk the Walk: To Give Specific Content to Your Assertions

Last week, I discussed some reasons we might not expect or want professional ethicists to "walk the walk" in the sense of living by the ethical norms they espouse in their teaching and research. (In short: This isn't their professional obligation; it's reasonable for them to trust convention more than their academic conclusions; and one can arguably be more objective in evaluating arguments if one isn't obligated to modify one's life depending on those conclusions.) Today I want to start talking about why I think that's too simple.

To be clear: I just want to start talking about it. I'll give one reason why I think there's some benefit to walking the walk, as an ethicist. I don't intend this as a full account.

Short version: Ethical slogans lack concrete, practical meaning unless they are grounded in a range of examples. One's own life can provide that range of examples, putting flesh on the bones of your slogans. If you say "act on that maxim that you can at the same time will to be a universal law", I have no idea what you are specifically recommending -- and I worry that you might not have much idea either. But if you put it to work in your life, then what it amounts to, at least as expressed by you, becomes much clearer.

Longer version:

Love Is Love, and Slogans Require a Context

A few years ago, signs like this began to sprout up in my neighborhood:

In this house, we believe:
Black lives matter
Women’s rights are human rights
No human is illegal
Science is real
Love is love
Kindness is everything

If you know the U.S. political scene, you'll understand that the first five of these slogans have meanings much more specific than is evident from the surface content alone. "Black lives matter" conveys a belief that great racial injustice still exists in the U.S., perpetrated especially by the police, and it recommends taking action to rectify that injustice. "Women's rights are human rights" conveys a similar belief about continuing gender inequality, especially with respect to reproductive rights, including access to abortion. "No human is illegal" expresses concern over the mistreatment of people who have entered the U.S. without legal permission. "Science is real" expresses disdain for mainstream Republicans' dismissal of scientific evidence in policy, especially concerning climate change. And "love is love" expresses the view that heterosexual romantic relationships should not be privileged above homosexual romantic relationships, especially with regard to the rights of marriage. "Kindness is everything" is also interesting, and I'll get to it in a moment.

How confusing and opaque all of this would be to an outsider! Imagine a time traveler from the 19th century. "Love is love". Well, of course! Isn't that just a tautology? Who could disagree? Explain the details, however, and our 19th century guest might well disagree. The content of this slogan, or "belief", is radically underspecified by the explicit linguistic content. Another feature of these claims is that they sound less controversial in the abstract than they do after contextual specification. The surface content of both "Black lives matter" and the opposing rallying cry, "all lives matter" is unobjectionable. However, whether special attention should be dedicated to anti-Black police violence, or whether instead pro-Black protesters have gone too far -- that's quite another matter.

The last slogan, "kindness is everything", is to my knowledge less politically specific, but it illustrates a connected point. Clearly, it expresses support for increasing kindness. But kindness isn't literally everything, certainly not ontologically, nor even morally, unless something extremely thin is meant by "kindness". If a philosopher were to espouse this slogan, I'd immediately want to work through examples with them, to assess what this claim amounts to. If I give an underperforming student the C-minus they deserve instead of the A they want, am I being kind to them, in the intended sense? How about if I object to someone's stepping on my toe? Of course, these sketchy questions lack detail, since there are many ways to step on someone's toe, and many ways to object, and many different circumstances in which toe-stepping might be embedded, and not all C-minus situations are the same. Working through abstract examples, though, at least gets us started on what counts as "kindness" and what priority it should have when it appears to conflict with other goods.

But here's what would really make the slogan clear: a life lived in kindness -- an observable pattern of reactions to a wide range of complex situations. How does the person who embodies the slogan "kindness is everything" react to having their toe stepped on, in this particular way by this particular person? Show me specific kindness-related situations over and over, with all the variation that life brings. Only then will I really understand the ideal. We can do this sometimes in imagination, developing a feel for someone's character and way of life. In a richly imagined fiction, or in a set of stories about Confucius or Jesus or some other sage, we can begin to see the substance of a moral view and set of values, going beyond the slogans.

In the Declaration of Independence, Thomas Jefferson, patriot, revolutionary, and slaveowner, wrote "All men are created equal". This sounds good. People in the U.S. endorse that slogan, repeat it, embrace it in all sincerity. What does it mean? All "men" in the old-fashioned sense that supposedly also included women, or really only men? Black and cognitively disabled people too? And in what does equality consist? Does it mean that all adults should have the right to vote? Equal treatment before the law? Certain rights and liberties? What is the function of "created" in the sentence? Do we start equal but diverge? We could try to answer all these questions, and new more specific questions would spring forth, hydra-like (which laws specifically, under which conditions?) until we tack it down in a range of examples. The framers of the U.S. Constitution certainly didn't agree on all of these matters, especially the question of slavery. They could agree on the slogan while disagreeing radically about what it amounts to, because the slogan is neither "self-evident" nor determinate in its content. In one precisification, it might be only some banal thing even King George III would have accepted. In another precisification, it might entail universal franchise and the immediate abolition of slavery, in which case Jefferson himself would have rejected it.

Kant famously disdained casuistry -- the study of ethics through the examination of cases -- and it's understandable why. When he took steps in that direction, he embarrassed himself. You should not lie even to the murderer at the door chasing down your friend. Masturbation is a horror akin to murdering yourself, only less courageous. It's fine to kill children born out of wedlock. Women fleeing from abusive husbands should be returned against their will. Servants should not be permitted to vote because their "existence, as it were, is only inherence". Kant preferred beautiful abstractions: Act on that maxim that you can at the same time will to be a universal law. Treat everyone as an end in themselves, never as a mere means. Sympathetic scholars can accept these beautiful abstractions and ignore Kant's foolish treatment of cases. If they work through the cases themselves, reaching different judgments than Kant himself did, they put flesh on the view -- but not the flesh that was originally there. They've converted a vague slogan into a more concrete position. As with "all men are created equal", this can be done in many ways.

So as not to poke only at Kant, similar considerations apply to consequentialist mottoes like "maximize utility" and virtue ethicist mottoes like "be generous". Only when we work through involuntary organ donor cases, and animal cases, and what to do about people who derive joy from others' suffering, and what kinds of things count as utility, and what to do about uncertainty, and what to do about future people, etc., do we have a real consequentialist view instead of an abstract skeleton. It would be nice to see a breathing example of a consequentialist life -- a consequentialist sage, so to speak, who lives thoroughly by consequentialist principles (maybe the ancient Chinese philosopher Mozi was one; see also MacFarquhar 2015). Might that person look like a Silicon Valley effective altruist, investing a huge salary wisely in index funds in expectation of donating it someday for the purchase of a continent's worth of mosquito nets? Or will they rush off immediately to give medical aid to the poor? Will they never eat desserts, or are those seeming-luxuries needed to keep their spirits up to do other good work? Will they pay for their children's college? Will they donate a kidney? An eye? What specific considerations do they appeal to, pro and con, and how much does it depend on which particulars? The more specific, the more we move from a diffuse slogan to determinate advice.

The Power of Walking the Walk: Discovering the Specifics

One great advantage of walking the walk, then, is that it gives your slogans specificity. Nothing is more concrete than particular responses to particular cases. Kant never married. (He had a long relationship with a valet, but I'll assume that's a rather different thing.) If Kant says, "don't deceive your spouse", well, I'm not sure he really ever confronted the reality of it or worked through the cases. On the other hand, if your father-in-law, happily married for sixty-plus years, says "don't deceive your spouse", that's quite different. He'll have lived through a wide range of cases, with a well-developed sense of what the boundaries of honesty are and how to manifest it -- what exceptions there might be, what omissions and vaguenesses cross the boundary into unacceptable dishonesty, how much frankness is really required, how to weigh honesty against other goods. This background of long marriage provides context for him to really mean something quite specific when he says "don't deceive your spouse". I might not understand immediately what he means -- those words could mean so many different things coming from different mouths -- but I can look to his life as an example, and I can trust that he has grappled with a wide range of difficult cases, which ideally we could talk through. His words manifest a depth that will normally be absent from similar advice from an unmarried person.

Ethics can be abstract. Kant was, perhaps, a great abstract ethicist. But if you don't apply your ethics to real cases, over and over, if you deal only in slogans and abstractions and a few tidy paragraph-long thought experiments, then your ethics is spectral, or at best skeletal. It will be very difficult to know what it amounts to -- just as, I've argued, we don't really know what "act on that maxim you can at the same time will to be a universal law" amounts to, without thinking through the cases. Maybe in private study you work through ten times as many cases as you publish in your articles or present in the classroom. But that's still a tiny fraction of the cases that someone will confront who attempts to actually live by a broad-reaching ethical principle; and what you privately imagine -- forgive me -- will probably be simplistic compared to the messiness of daily life. Contrast this with Martin Luther King's ethics of non-violent political activism or Confucius's ethics of duty and propriety. We who never met them can only get a glimpse of what their fully embodied principles must have been, as enacted in their lives. My point is not that they were saints. King, and presumably Confucius, were flawed characters. But when King endorsed non-violent activism as a means of political change and when Confucius said "do not speak unless it is in accord with ritual; do not move unless it is in accord with ritual" (5th c. BCE/2023, §12.1, p. 33), they had confronted many real cases and so must have had a much fuller grasp of the substance behind these slogans than it is realistic to expect anyone to obtain simply from reading and reflection.

The ethicist who does not attempt to live by their principles -- if they are principles that can be lived by and not, for example, reflections about what to do simply in certain rare or remote cases -- thus abandons the best tool they have for repeatedly confronting the practicalities, the limits, the conflicts, the disambiguations, which force them to work out the specific, determinate content of the principles they endorse.

Now there is a sense in which a view could have a very specific, determinate content, even if we don't know what that content is. Consider simple act utilitarianism, according to which we should do what maximizes the total sum of pleasure minus the total sum of pain. Arguably, each time you act, there is a single specific act you could do which would be right according to this view -- though also, arguably, it is impossible to know what this act is, since every act has numerous, long-running, and complicated consequences. In a way, the principle has specific content: exactly act A is correct and no other, though who knows what act A is? However, this is not specific, determinate content in the sense that I mean. To have a livable ethical system, the act utilitarian needs to develop estimates, guesses, more specific principles and policies; and different act utilitarians might approach that problem very differently. It is these actionable specifics that constitute the practical substance of the ethical view.

The hard work of trying to live out your ethical values -- that's how ordinary mortals discover the substance of their principles. Otherwise, they risk being as indeterminate as the slogan "love is love" removed from its political context.

---------------------------------------

Related:

"Does It Matter if Ethicists Walk the Walk?" (Sep 1, 2023)

"Love Is Love, and Slogans Require a Context of Examples" (Mar 13, 2021)

Friday, September 01, 2023

Does It Matter If Ethicists Walk the Walk?

The Question: What's Wrong with Scheler?

There's a story about Max Scheler, the famous early 20th century Catholic German ethicist. Scheler was known for his inspiring moral and religious reflections. He was also known for his horrible personal behavior, including multiple predatory sexual affairs with students, sufficiently serious that he was banned from teaching in Germany. When a distressed admirer asked about the apparent discrepancy, Scheler was reportedly untroubled, replying, "The sign that points to Boston doesn't have to go there."

[image modified from here and here]

That seems like a disappointing answer! Of course it's disappointing when anyone behaves badly. But it seems especially bad when an ethical thinker goes astray. If a great chemist turns out to be a greedy embezzler, that doesn't appear to reflect much on the value of their chemical research. But when a great ethicist turns out to be a greedy embezzler, something deeper seems to have gone wrong. Or so you might think -- and so I do actually think -- though today I'm going to consider the opposite view. I'll consider reasons to favor what I'll call Schelerian separation between an ethicist's teaching or writing and their personal behavior.

Hypocrisy and the Cheeseburger Ethicist

A natural first thought is hypocrisy. Scheler was, perhaps, a hypocrite, surreptitiously violating moral standards that he publicly espoused -- posing through his writings as a person of great moral concern and integrity, while revealing through his actions that he was no such thing. To see that this isn't the core issue, consider the following case:

Cheeseburger Ethicist. Diane is a philosophy professor specializing in ethics. She regularly teaches Peter Singer's arguments for vegetarianism to her lower-division students. In class, she asserts that Singer's arguments are sound and that vegetarianism is morally required. She openly emphasizes, however, that she herself is not personally a vegetarian. Although in her judgment, vegetarianism is morally required, she chooses to eat meat. She affirms in no uncertain terms that vegetarianism is not ethically optional, then announces that after class she'll go to the campus cafeteria for a delicious cheeseburger.

Diane isn't a hypocrite, at least not straightforwardly so. We might imagine a version of Scheler, too, who was entirely open about his failure to abide by his own teachings, so that no reader would be misled.

Non-Overridingness Is Only Part of the Issue

There's a well-known debate about whether ethical norms are "overriding". If an action is ethically required, does that imply that it is required full stop, all things considered? Or can we sometimes reasonably say, "although ethics requires X, all things considered it's better not to do X"? We might imagine Diane concluding her lesson "-- and thus ethics requires that we stop eating meat. So much the worse for ethics! Let's all go enjoy some cheeseburgers!" We might imagine Scheler adding a preface: "if you want to be ethical and full of good religious spirit, this book gives you some excellent advice; but for myself, I'd rather laugh with the sinners."

Those are interesting cases to consider, but they're not my target cases. We can also imagine Diane and Scheler saying, apparently sincerely, that all things considered, you and I should follow their ethical recommendations. We can imagine them holding, or seeming to hold, at least intellectually, that such-and-such really is the best thing to do overall, and yet simply not doing it themselves.

The Aim of Academic Ethics and Some Considerations Favoring Schelerian Separation

Scheler and Diane might defend themselves plausibly as follows: The job of an ethics professor is to evaluate ethical views and ethical arguments, producing research articles and educating students in the ideas of the discipline. In this respect, ethics is no different from other academic disciplines. Chemists, Shakespeare scholars, metaphysicians -- what we expect is that they master an area of intellectual inquiry, teach it, contribute to it. We don't demand that they also live a certain way. Ethicists are supposed to be scholars, not saints.

Thus, ethicists succeed without qualification if they find sound arguments for interesting ethical conclusions, which they teach to their students and publish as research, engaging capably in this intellectual endeavor. How they live their lives matters to their conclusions as little as it matters how research chemists live their lives. We should judge Scheler's ethical writings by their merit as writings. His life needn't come into it. He can point the way to Boston while hightailing it to Philadelphia.

On the other hand, Aristotle famously suggested that the aim of studying ethics "is not, as... in other inquiries, the attainment of theoretical knowledge" but "to become good" (4th c. BCE/1962, 1103b, p. 35). Many philosophers have agreed with Aristotle, for example, the ancient Stoics and Confucians (Hadot 1995; Ivanhoe 2000). We study ethics -- at least some of us do -- at least in part because we want to become better people.

Does this seem quaint and naive in a modern university context? Maybe. People can approach academic ethics with different aims. Some might be drawn primarily by the intellectual challenge. Others might mainly be interested in uncovering principles with which they can critique others.

Those who favor a primarily intellectualistic approach to ethics might even justifiably mistrust their academic ethical thinking -- sufficiently so that they intentionally quarantine it from everyday life. If common sense and tradition are a more reasonable guide to life than academic ethics, good policy might require not letting your perhaps weird and radical ethical conclusions change how you treat the people around you. Radical utilitarian consequentialist in the classroom, conventional friend and husband at home. Nihilistic anti-natalist in the journals, loving mother of three at home. Thank goodness.

If there's no expectation that ethicists live according to the norms they espouse, that also frees them to explore radical ideas which might be true but which might require great sacrifice or be hard to live by. If I accept Schelerian separation, I can conclude that property is theft or that it's unethical to enjoy any luxuries without thereby feeling that I have any special obligation to sacrifice my minivan or my children's college education fund. If my children's college fund really were at stake, I would be highly motivated to avoid the conclusion that I am ethically required to sacrifice it. That fact would likely bias my reasoning. If ethics is treated more like an intellectual game, divorced from my practical life, then I can follow the moves where they take me without worrying that I'll need to sacrifice anything at the end. A policy of Schelerian separation might then generate better academic discourse in which researchers are unafraid to follow their thinking to whatever radical conclusions it leads them.

Undergraduates are often curious whether Peter Singer personally lives as a vegan and personally donates almost all of his presumably large salary to charitable causes, as his ethical views require. But Singer's academic critics focus on his arguments, not his personal life. It would perhaps be a little strange if Singer were a double-bacon-cheeseburger-eating Maserati driver draped in gold and diamond bling; but from a purely argumentative perspective such personal habits seem irrelevant. Singer's principles stand or fall on their own merits, regardless of how well or poorly Singer himself embodies them.

So there's a case to be made for Schelerian separation -- the view that academic ethics and personal life are and should be entirely distinct matters, and in particular that if an ethicist does not live according to the norms they espouse in their academic work, that is irrelevant to the assessment of their work. I feel the pull of this idea. There's substantial truth in it, I suspect. However, in a future post I'll discuss why I think this is too simple. (Meanwhile, reader comments -- whether on this post, by email, or on linked social media -- are certainly welcome!)

-------------------------------------------

Follow-up post:

"One Reason to Walk the Walk: To Give Specific Content to Your Assertions" (Sep 8, 2023)

Wednesday, February 19, 2020

Do Business Ethics Classes Make Students More Ethical? Students and Instructors Agree: They Do!

I'm inclined to think that university ethics classes typically have little effect on students' real-world moral behavior.

I base this skepticism partly on Joshua Rust's and my finding, across a wide variety of measures, that ethics professors generally don't behave much differently than other professors -- and if they don't behave differently, why would students? And I base it partly on my (now somewhat dated) review of business ethics and medical ethics instruction specifically, which finds shoddy research methods and inconsistent results suggestive of an underlying non-effect.[1]

On the other hand, part of the administrative justification of ethics classes -- especially medical ethics and business ethics -- appears to be the hope that students will eventually act more ethically as a result of having taken these courses. Administrators and instructors who aim at this result presumably expect that the classes are at least sometimes effective.

The issue, perhaps surprisingly, isn't very well studied. I parody only slightly when I say that the typical study on this topic asks students at the end of class "are you more ethical now?", and when they respond "yes" at rates greater than chance, the researcher concludes that the instruction was effective.

-----------------------------------------------------

Nina Strohminger and I thought we'd ask instructors and students what they thought about this. We wanted to know two things. First, do instructors and students think that business ethics instruction should aim at improving students morally? Second, do they think that business ethics classes do in fact tend to improve students morally?

Our respondents were 101 business ethics instructors at the 2018 Society for Business Ethics conference, plus students from three very different universities: 339 students from Penn (an Ivy League university with an elite business school), 173 students from UC Riverside (a large state university), and 81 students from Seattle University (a small-to-medium-sized Jesuit university, where Jessica Imanaka coordinated the distribution). Surveys were anonymous, pen and paper. Students completed their surveys on the spot near the beginning of the first day of instruction in business ethics courses.

Using a five-point scale from "not at all important" to "extremely important", Question 1 asked respondents to "rate the importance of the following goals that YOU PERSONALLY AIM to get [to have your students get] from your business ethics classes":

  • An intellectual appreciation of fundamental ethical principles
  • An understanding of what specific business practices are considered ethical and unethical, whether or not I [they] choose to comply with those practices
  • Tools for thinking in a more sophisticated way about ethical quandaries
  • Interesting readings and fun puzzle cases that feed my [their] intellectual curiosity
  • Practical knowledge that will help me be a more ethical business leader [them be more ethical business leaders] in the future
  • Satisfying my [their] degree requirements
  • Grades that will look good on my [their] transcripts

(Brackets indicate changes for the instructors' version.)

    The target prompt was the fifth: Practical knowledge that will help them be more ethical business leaders in the future.

    [students in a business ethics class]

    Responses were near ceiling. 58% of students rated practical knowledge that will help them be more ethical business leaders as "extremely important" to them, the highest possible choice. The mean response was 4.44 on the 1-5 scale. This was the highest mean response among the seven possible goals. 40% of students rated it more highly than they rated "satisfying my degree requirements" and 48% rated it more highly than "grades that will look good on my transcript". Responses were similar for all three schools. If we accept these self-reports, gaining practical knowledge that will help them actually become more ethical is one of students' most important personal aims in taking business ethics classes.

    Instructors' responses were similar: 58% said it was personally "extremely important" to them to have students gain practical knowledge that will help them be more ethical business leaders in the future. The mean response was 4.41 on the 1-5 scale.

    Question 2 asked students and instructors to guess each other's goals (with the same seven possible goals). Students tended to think that professors would also very highly rate (mean 4.71) "practical knowledge that will help students be more ethical business leaders in the future". Professors tended to think that students would regard such knowledge as important (mean 4.09) but not as important as satisfying degree requirements (mean 4.42).

    Question 3 asked respondents how likely they thought it was that "the average student gets the following things from their [your] business ethics classes". The same seven goals were presented, with a 1-5 response scale from "not at all likely" to "extremely likely".

    Overall, both students and instructors expressed optimism: Both groups' mean response to this question was 3.84 on the 1-5 scale.

    Based on this part of the questionnaire, it looks like students and instructors agree: It's important to them that their business ethics classes produce practical knowledge that helps students become more ethical business leaders, and they think that their business ethics classes do tend to have that effect.

    On the second page of the questionnaire, we asked these questions directly.

    Question 4: Do you think that, as a result of having taken [your] business ethics classes, [your] students on average will behave more ethically, less ethically, or about the same as if they had not taken a business ethics course?

    Among instructors, 64% said more ethical, 35% said about the same, and 1% said less ethical. Among students, 54% said more ethical, 45% said about the same, and again only 1% said less ethical.

    Question 5: To what extent do you agree that the central aim of business ethics instruction should be to make students more ethical? [1 - 5 scale from "strongly disagree" to "strongly agree"]

    Among instructors, 63% agreed or strongly agreed and only 19% disagreed or strongly disagreed. Among students, 67% agreed or strongly agreed and only 9% disagreed or strongly disagreed.

    The results of these direct questions thus broadly fit with the results in terms of specific goals. Either way you ask, both business ethics students and business ethics instructors say that business ethics classes should and do make students more ethical.

    -----------------------------------------------------

    Many cautions and caveats apply. The results might be influenced by "socially desirable responding" -- respondents' tendency to express attitudes that they think will be socially approved (maybe especially if they think their instructors might be watching). Also, instructors attending a business ethics conference might not be representative of business ethics instructors as a whole -- maybe more gung-ho. Students and instructors might not know their own goals and values. They might be excessively optimistic about the transformative power of university instruction. Etc. I confess to having some doubts.

    Nonetheless, I was struck by the apparent degree of consensus, among students and instructors, that business ethics classes should lead students to become more ethical, and by the majority opinion that they do indeed have that effect.

    -----------------------------------------------------

    Note:

    [1] However, Peter Singer, Brad Cokelet, and I have also recently conducted a study that suggests that under certain conditions teaching the philosophical material on meat ethics can lead students to purchase less meat at campus dining locations.

    Wednesday, December 11, 2019

    Two Kinds of Ethical Thinking?

    Yesterday, over at the Blog of the APA, Michael J. Sigrist published a reflection on my work on the not-especially-ethical behavior of ethics professors. The central question is captured in his title: "Why Aren't Ethicists More Ethical?"

    Although he has some qualms about my attempts to measure the moral behavior of ethicists (see here for a summary of my measures), Sigrist accepts the conclusion that, overall, professional ethicists do not behave better than comparable non-ethicists. He offers this explanation:

    There's a kind of thinking that we do when we are trying to prove something, and then a kind of thinking we do when we are trying to do something or become a certain kind of person -- when we are trying to forgive someone, or be more understanding, or become more confident in ourselves. Becoming a better person relies on thinking of the latter sort, whereas most work in professional ethics -- even in practical ethics -- is exclusive to the former.

    The first type of thinking, "trying to prove something", Sigrist characterizes as universalistic and impersonal; the second type of thinking, "trying to do something", he characterizes as emotional, personal, and engaged with the details of ordinary life. He suggests that my work neglects or deprioritizes the latter, more personal, more engaged type of thinking. (I suspect Sigrist wouldn't characterize my work that way if he knew some other things I've written -- but of course there is no obligation for anyone to read my whole corpus.)

    The picture Sigrist appears to have in mind is something like this: The typical ethicist has their head in the clouds, thinking about universal principles, while they ignore -- or at least don't apply their philosophical skills to -- the particular moral issues in the world around their feet; and so it is, or should be, unsurprising that their philosophical ethical skills don't improve them morally. This picture resonates, because it has some truth in it, and it fits with common stereotypes about philosophers. If the picture is correct, it would tidily address the otherwise puzzling disconnection between philosophers' great skills at abstract ethical reflection and their not-so-amazing real-world ethical behavior.

    However, things are not so neat.

    Throughout his post, Sigrist frames his reflections primarily in terms of the contrast between impersonal thinking (about what people in general should do) and personal thinking (about what I in this particular, detailed situation should do). But real, living philosophers do not apply their ethical theories and reasoning skills only to the former; nor do thoughtful people normally engage in personal thinking without also reflecting from time to time on general principles that they think might be true (and indeed that they sometimes try to prove to their interlocutors or themselves, in the process of making ethical decisions). An ethicist might write only about trolley problems and Kant interpretation. But in that ethicist's personal life, when making decisions about what to do, sometimes philosophy will come to mind -- Aristotle's view of courage and friendship, Kant's view of honesty, whether some practical policy would be appropriately universalizable, conflicts between consequentialist and deontological principles in harming someone for some greater goal.

    A professional ethicist doesn't pass through the front door of their house and forget all of academic philosophy. Philosophical ethics is too richly and obviously connected to the particularities of personal life. Nor is there some kind of starkly different type of "personal" thinking that ordinary people do that avoids appeal to general principles. In thinking about whether to have children, whether to lie about some matter of importance, how much time or money to donate to charities, how much care one owes to a needy parent or sibling in a time of crisis -- in such matters, thoughtful people often do, and should, think not only about the specifics of their situation but also about general principles.

    Academic philosophical ethics and ordinary engaged ethical reflection are not radically different cognitive enterprises. They can and should, and in philosophers and philosophically-minded non-philosophers they do, merge and blend into each other, as we wander back and forth, fruitfully, between the general and the specific. How could it be otherwise?

    Sigrist is mistaken. The puzzle remains. We cannot so easily dismiss the challenge that I think my research on ethicists poses to the field. We cannot say, "ah, but of course ethicists behave no differently in their personal lives, because all of their expertise is only relevant to the impersonal and universal". The two kinds of ethical thinking that Sigrist identifies are ends of a continuum that we all regularly traverse, rather than discrete patterns of thinking that are walled off from each other without mutual influence.

    In my work and my personal life, I try to make a point of blending the personal with the universal and the everyday with the scholarly, rejecting any sharp distinction between academic and non-academic thinking. This is part of why I write a blog. This is part of the vision behind my recent book. I think Sigrist values this blending too, and means to be critiquing what he sees as its absence in mainstream Anglophone philosophical ethics. Sigrist has only drawn his lines too sharply, offering too simplified a view of the typical ethicist's ways of thinking; and he has mistaken me for an opponent rather than a fellow traveler.

    Friday, March 22, 2019

    Most U.S. and German Ethicists Condemn Meat-Eating (or German Philosophers Think Meat Is the Wurst)

    It's an honor and a pleasure to have one's work replicated, especially when it's done as carefully as Philipp Schoenegger and Johannes Wagner have done.

    In 2009, Joshua Rust and I surveyed the attitudes and behavior of ethicist philosophers in five U.S. states, comparing those attitudes and behavior to non-ethicist philosophers' and to a comparison group of other professors at the same universities. Across nine different moral issues, we found that ethicists reported behaving overall no morally differently than the other two groups, though on some issues, especially vegetarianism and charitable giving, they endorsed more stringent attitudes. (In some cases, we also had observational behavioral data that didn't depend on self-report. Here too we found no overall difference.) Schoenegger and Wagner translated our questionnaire into German and added a few new questions, then distributed it by email to professors in German-speaking countries, achieving an overall response rate of 29.5% [corrected Mar 23]. (Josh and I had a response rate of 58%.) With a couple of exceptions, Schoenegger and Wagner report similar results.

    The most interesting difference between Schoenegger and Wagner's results and Josh's and my results concerns vegetarianism.

    The Questions:

    We originally asked three questions about vegetarianism. In the first part of the questionnaire, we asked respondents to rate "regularly eating the meat of mammals, such as beef or pork" on a nine-point scale from "very morally bad" to "very morally good", with "morally neutral" in the middle.

    In the second part of the questionnaire, we asked:

    17. During about how many meals or snacks per week do you eat the meat of mammals such as beef or pork?

       enter number of times per week ____

    18. Think back on your last evening meal, not including snacks. Did you eat the meat of a mammal during that meal?

       □ yes

       □ no

       □ don’t recall

    U.S. Results in 2009

    On the attitude question, 60% of ethicist respondents rated meat-eating somewhere on the "bad" side of the nine-point scale, compared to 45% of non-ethicist philosophers and only 19% of professors from other departments (ANOVA, F = 17.0, p < 0.001). We also found substantial differences by both gender and age, with women and younger respondents more likely to condemn meat-eating. For example, 81% of female philosophy respondents born 1960 or later rated eating the meat of mammals as morally bad, compared to 7% of male non-philosophers born before 1960. That's a huge difference in attitude!

    Eight percent of respondents rated it at 1 or 2 on the nine-point scale -- either "very bad" or adjacent to very bad -- including 11% of ethicists (46/564 overall, 22/193 of ethicists).

    On self-report of behavior, Josh and I found much less difference. On our "previous evening meal" question, we detected at best a statistically marginal difference among the three main analysis groups: 37% of ethicists reported having eaten meat at the previous evening meal, compared to 33% of non-ethicist philosophers and 45% of non-philosophers (chi-squared = 5.7, p = 0.06, excluding two respondents who answered "don’t recall").

    The "meals per week" question was actually designed in part as a test of "socially desirable responding" or a tendency to fudge answers: We thought it would be difficult to accurately estimate the number, thus it would be tempting for respondents to fudge a bit. And mathematically, they did seem to be guilty of fudging: For example, 21% of respondents who reported eating meat at one meal per week also reported eating meat at the previous evening meal. Even if we assume that meat is only consumed at evening meals, the number should be closer to 14% (1/7). If we assume, more plausibly, that approximately half of all meat meals are evening meals, then the number should be closer to 7%. With that caveat in mind, on the meals-per-week question we found a mean of 4.1 for ethicists, compared to 4.6 for non-ethicist philosophers and 5.3 for non-philosophers (ANOVA [square-root transformed], F = 5.2, p = 0.006).

    We concluded that although a majority of U.S. ethicists, especially younger ethicists and women ethicists, thought eating meat was morally bad, they ate meat at approximately the same rate as did the non-ethicists.

    German Results in 2018:

    Schoenegger and Wagner find, similarly, a majority of German ethicist respondents rating meat-eating as bad: 67%. Evidently, a majority of U.S. and German ethicists think that eating meat is morally bad.

    However, among the non-ethicist professors, Schoenegger and Wagner find higher rates of condemnation of meat-eating than Josh and I found: 63% among German-speaking non-ethicist philosophers in 2018 compared to our 45% in the U.S. in 2009 (80/127 vs. 92/204, z = 3.2, p = .001), and even more strikingly 40% among German-speaking professors from departments other than philosophy in 2018 compared to only 19% in the U.S. in 2009 (52/131 vs. 31/167, z = 4.0, p < .001; [note 1]).
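
    For readers who want to reproduce comparisons like these, here is a minimal sketch (my own, in Python, using the counts quoted above; the published analyses may have used different software) of a two-proportion z-test with a pooled standard error:

        from math import erf, sqrt

        def two_proportion_z(x1, n1, x2, n2):
            """Two-proportion z-test with pooled standard error; returns (z, two-sided p)."""
            p1, p2 = x1 / n1, x2 / n2
            p_pool = (x1 + x2) / (n1 + n2)
            se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
            z = (p1 - p2) / se
            p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
            return z, p

        # German-speaking vs. U.S. non-ethicist philosophers rating meat-eating as bad:
        print(two_proportion_z(80, 127, 92, 204))  # roughly z = 3.2, p = .001, as reported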

    German professors were also much more likely than U.S. professors in 2009 to think that eating meat is very bad, with 18% rating it 1 or 2 on the scale, including 23% of ethicists (57/408 and 35/150, excluding non-respondents; two-proportion test U.S. vs German: overall z = 2.8, p = .005, ethicists z = 2.9, p = .004).

    Apparently, German-speaking professors are not as fond of their wurst as cultural stereotypes might suggest!

    A number of explanations are possible: One is that in general German academics are more pro-vegetarian than are U.S. academics. Another is that attitudes toward vegetarianism are changing swiftly over time (as suggested by the age differences in Josh's and my study) and that the nine years between 2009 and 2018 saw a substantial shift in both cultures. Still another concerns non-response bias. (For non-philosophers, Schoenegger and Wagner's response rate was 30%, while Josh's and mine was 53%.)

    In Schoenegger and Wagner's data, ethicists report having eaten less meat at the previous evening meal than the other two groups: 25%, vs. 40% of non-ethicist philosophers and 39% of the non-philosophers (chi-squared = 9.3, p = .01 [note 2]). The meals per week data are less clear. Schoenegger and Wagner report 2.1 meals per week for ethicists, compared to 2.8 and 3.0 for non-ethicist philosophers and non-philosophers respectively (ANOVA, F = 3.4, p = .03), but their data are highly right skewed, and due to skew Josh and I had used a square-root transformation for the original 2009 analysis. A similar square-root transformation on Schoenegger and Wagner's raw data eliminates any statistically detectable difference (F = 0.8, p = .45). And there is again evidence of fudging in the meals-per-week responses: Among those reporting only one meat meal per week, for example, 18% reported having had meat at their previous evening meal.
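
    The square-root step is easy to replicate in outline. Here is a sketch with made-up, right-skewed counts (simulated stand-ins, since the raw survey responses are not reproduced here), just to show the transform-then-ANOVA move described above:

        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(0)

        # Hypothetical, right-skewed "meat meals per week" responses for three groups.
        # These are simulated for illustration, not the actual survey data.
        ethicists = rng.poisson(2.1, size=150)
        other_philosophers = rng.poisson(2.8, size=130)
        other_professors = rng.poisson(3.0, size=130)

        # ANOVA on the raw counts, then on square-root-transformed counts.
        # The transform pulls in the long right tail so extreme responses carry
        # less weight; on the real data this is what erased the group difference.
        print(f_oneway(ethicists, other_philosophers, other_professors))
        print(f_oneway(np.sqrt(ethicists), np.sqrt(other_philosophers), np.sqrt(other_professors)))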

    If we take the meals-per-week data at face value, the German respondents ate substantially less meat in 2018 than did the U.S. respondents in 2009: 2.6 meals for the Germans vs. 4.6 for the U.S. respondents (median 2 vs median 4, Mann-Whitney W = 287793, p < .001). However, the difference was not statistically detectable on the previous evening meal question: 38% U.S. vs 34% German (z = 1.3, p = .21).

    All of this is a bit difficult to interpret, but here's the tentative conclusion I draw:

    German professors today -- especially ethicists -- are more likely to condemn meat eating than were U.S. professors ten years ago. They might also be a bit less likely to eat meat, again perhaps especially the ethicists, though that is not entirely clear and might reflect a bit of fudging in the self-reports.

    The other difference Schoenegger and Wagner found was in the question of whether ethicists were on the whole more likely than other professors to embrace stringent moral views -- but full analysis of this will require some detail and will have to wait for another time.

    *********************************************

    Note 1: In the published paper, Schoenegger and Wagner report 39% instead of the 40% I find in reanalyzing their raw data. This might either be a rounding error [39.69%] or some small difference in our analyses.

    Note 2: In the published paper, Schoenegger and Wagner report 24%, which again might be a rounding error (from 24.65%) or a small analytic difference.

    [image source]

    Friday, March 15, 2019

    Should You Defer to Ethical Experts?

    Ernest Sosa gave a lovely and fascinating talk yesterday at UC Riverside on the importance of "firsthand intuitive insight" in philosophy. It has me thinking about the extent to which we ought, or ought not, defer to ethical experts when we are otherwise inclined to disagree with their conclusions.

    To illustrate the idea of firsthand intuitive insight, Sosa gives two examples. One concerns mathematics. Consider a student who learns that the Pythagorean theorem is true without learning its proof. This student knows that a^2 + b^2 = c^2 but doesn't have any insight into why it's true. Contrast this student with one who masters the proof and consequently does understand why it's true. The second student, but not the first, has firsthand intuitive insight. Sosa's other example is in ethics. One child bullies another. Her mother, seeing the act and seeing the grief in the face of the other child, tells the bullying child that she should apologize. The child might defer to her mother's ethical judgment, sincerely concluding she really should apologize, but without understanding why what she has done is bad enough to require apology. Alternatively, she might come to genuinely notice the other child's grief and more fully understand how her bullying was inappropriate, and thus gain firsthand intuitive insight into the need for apology. (I worry that firsthand intuitive insight is a bit of a slippery concept, but I don't think I can do more with it here.)

    Sosa argues that a central aim of much of philosophy is firsthand intuitive insight of this sort. In the sciences and in history, it's often enough just to know that some fact is true (that helium has two protons, that the Qin Dynasty fell in 206 BCE). On such matters, we happily defer to experts. In philosophy, we're less likely to accept a truth without having our own personal, firsthand intuitive insight. Expert metaphysicians might almost universally agree that barstool-shaped-aggregates but not barstools themselves supervene on collections of particles arranged barstoolwise. Expert ethicists might almost universally agree that a straightforward pleasure-maximizing utilitarian ethics would require radical revision of ordinary moral judgments. But we're not inclined to just take them at their word. We want to understand for ourselves how it is so.

    This seems right. And yet, there's a bit of a puzzle in it, if we think that it's important that our ethical opinions be correct. (Yes, I'm assuming that ethics is a domain in which there are correct and incorrect opinions.) What should we do when the majority of philosophical experts think P, but your own (apparent) firsthand intuitive insight suggests not-P? If you care about correctness above all, maybe you should defer to the experts, despite your lack of understanding. But Sosa appears to think, as I suspect many of us do, that often the right course instead is to stand steadfast, continuing to judge according to your own best independent reasoning.

    Borrowing an example from Sarah McGrath's work on moral deference, consider the case of vegetarianism. Based on some of my work, I think that probably the majority of professional ethicists in the United States believe that it is normally morally wrong to eat the meat of factory-farmed animals. This might also be true in German-speaking countries. Impressionistically, most of the philosophers I know who have given the issue serious and detailed consideration come to endorse vegetarianism, including two of the most prominent ethicists currently alive, Peter Singer and Christine Korsgaard. Now suppose that you haven't given the matter nearly as much thought as they have, but you have given it some thought. You're inclined still to think that eating meat is okay, and you can maybe mount one or two plausible-seeming defenses of your view. Should you defer to their ethical expertise?

    Sosa compares philosophical reasoning with archery. You not only want to hit the target (the truth), you want to do so by the exercise of your own skill (your own intuitive insight), rather than by having an expert guide your hand (deference to experts). I agree that ideally this is so. It's nice when you have both truth and intuitive insight! But when the aim of hitting the target conflicts with the aim of doing so by your own intuitive insight, your preference should depend on the stakes. If it's an archery contest, you don't want the coach's help: The most important thing is the test of your own skill. But if you're a subsistence hunter who needs dinner, then you probably ought to take any help you can get, if the target looks like it's about to escape. And isn't ethics (outside the classroom, at least) more like subsistence hunting than like an archery contest? What should matter most is whether you actually come to the right moral conclusion about eating meat (or whatever), not whether you get there by your own insight. Excessive emphasis on the individual's need for intuitive insight, at the cost of truth or correctness, risks turning ethics into a kind of sport.

    So maybe, then, you should defer to the majority of ethical experts, and conclude that it is normally wrong to eat factory-farmed meat, even if that conclusion doesn't accord with your own best attempts at insight?

    While I'm tempted to say this, I simultaneously feel pulled in Sosa's direction -- and perhaps I should defer to his expertise as one of the world's leading epistemologists! There's something I like about non-deference in philosophy, and our prizing of people's standing fast in their own best judgments, even in the teeth of disagreement by better-informed experts. So here are four brief defenses of non-deference. I fear none of them is quite sufficient. But maybe in combination they will take us somewhere?

    (1.) The "experts" might not be experts. This is McGrath's defense of non-deference in ethics. Despite their seeming expertise, great ethicists have often been horribly wrong in the past. See Aristotle on slavery, Kant on bastards, masturbation, homosexuality, wives, and servants, the consensus of philosophers in favor of World War I, and ethicists' seeming inability to reason better even about trolley problems than non-ethicists.

    (2.) Firsthand intuitive insight might be highly intrinsically valuable. I'm a big believer in the intrinsic value of knowledge (including self-knowledge). One of the most amazing and important things about life on Earth is that sometimes we bags of mostly water can stop and reflect on some of the biggest, most puzzling questions that there are. An important component of the intrinsic value of philosophical reflection is the real understanding that comes with firsthand intuitive insight, or seeming insight, or partial insight -- our ability to reach our own philosophical judgments instead of simply deferring to experts. This might be valuable enough to merit some substantial loss of ethical correctness to preserve it.

    (3.) The philosophical community might profit from diversity of moral opinion, even if individuals with unusual views are likely to be incorrect. The philosophical community as a whole might, over time, be more likely to converge upon correct ethical views if it fosters diversity of opinion. If we all defer to whoever seems to be most expert, we might reach consensus too fast on a wrong, or at least a narrow and partial, ethical view. Compare Kuhn and Longino on the value of diversity in scientific opinion: Intellectual communities need stubborn defenders of unlikely views, even if those stubborn folks are probably wrong -- since sometimes they have an important piece of the truth that others are missing.

    (4.) Proper moral motivation might require acting from one's own insight rather than from moral deference. The bully who apologizes out of deference gives, I think, a less perfect apology than the bully who has firsthand intuitive insight into the need to apologize. Maybe in some cases, being motivated by one's own intuitive insight is so morally important that it's better to do the ethically wrong thing on the basis of your imperfect but non-deferential insight than to do the ethically right thing deferentially.

As I said, none of these defenses of non-deference seems quite enough on its own. Even if the experts might be wrong (Point 1), from a bird's-eye perspective it seems like our best guess should be that they're not. And the considerations in Points 2-4 seem plausibly to be only secondary from the perspective of a person who really wants to have ethically correct views by which to guide her behavior.

    [image source]

    Friday, February 15, 2019

    Studying Ethics Should Influence Your Behavior (But It Doesn't Seem to)

Some academic disciplines have direct relevance to day-to-day life. Studying these disciplines, you might think, would have an influence on one's practical behavior. Studying nutritional health, it seems plausible to suppose, would have an influence of some sort on your food choices. Studying the stock market would likely have an influence on your investment strategies. Studying parenting styles in developmental psychology would have an influence on your parenting decisions. The effects might not be huge: A scholar of nutrition might not be able to entirely give up Twinkies. A scholar of parenting styles might sometimes lose her temper in ways she knows from her research to be counterproductive. But it would be strange if studying such topics had no effect whatsoever -- if there were a perfect isolation between one's research on nutrition, investment, or parenting and one's personal food choices, investments, and approaches to parenting.

    [A doctor doing what doctors in fact don't do very much of.]

Other academic topics have tenuous connections at best to practical matters of day-to-day life: studying the first second of the Big Bang, or mereological approaches to objecthood, or tortoise-shell divination in ancient China. Of course, studying such things could have behavioral effects. Maybe immersion in Big Bang cosmology inspires one to a broader, less parochial worldview. But I don't think we should particularly expect that, or think something is strange if it doesn't happen. It's not strange for a cosmologist to remain parochial in the way it would be strange for an anti-trans-fat health researcher to make no attempt to reduce her own trans-fat intake.

    Ethics seems clearly to be in the category of academic disciplines that are directly relevant to scholars' day-to-day lives. Not every sub-issue of every sub-specialization of ethics is so, of course. Some ethical questions are highly abstract or concern matters irrelevant to the immediate choices of the scholars' lives; but few ethicists spend all of their energy on issues of that sort. Issues like our obligations to the poor, the ethics of honesty and kindness, animal rights and environmentalism, prejudice, structural injustices in our society, the proper weighing of selfish concerns against the demands of others, the question of how much to abide by laws or directives with which you disagree -- all seem directly relevant to our lives. It would be odd if devoting a substantial part of one's career to thinking about such issues had no influence of any sort on one's day-to-day behavior.

    And yet it's not clear to me that studying ethics does have any influence on day-to-day behavior. Across a wide range of studies, my collaborators and I have found no convincing evidence of systematic behavioral differences between ethicists and non-ethicists of similar social background. Also, impressionistically, in my personal interactions with professional ethicists, my sense is that they behave overall similarly to non-ethicists. Furthermore, there's little evidence that university-level ethics classes influence students' behavior either.

Maybe studying ethics does sometimes have a practical effect. It would, to my mind, be stunning if studying ethics never had any influence of any sort on one's behavioral choices! But the effects, if any, are subtle and difficult to detect empirically.

    Why this should be so is an underappreciated puzzle.

    The easiest answers -- "academic ethics is all abstract and impractical", "ethics is all post-hoc rationalization of what you were going to do anyway", "our immoral desires are so compelling that no amount of rational thought could lead us to act otherwise" -- don't withstand critical scrutiny as fully adequate answers (although each may have some element of truth).

    For several of my imperfect attempts to resolve this puzzle, see:

    "The Moral Behavior of Ethicists and the Power of Reason" (with Joshua Rust), Advances in Experimental Moral Psychology (ed. H. Sarkissian and J. Wright, 2014).

    "Rationalization in Moral and Philosophical Thought" (with Jon Ellis), Moral Inferences (ed. J.F. Bonnefon and B. Tremoliere, 2017).

    "Aiming for Moral Mediocrity" (manuscript in draft).

    I'm still banging my head against it.

    [image source]

    Wednesday, September 12, 2018

    One-Point-Five Cheers for a Hugo Award for a TV Show about Ethicists’ Moral Expertise

    [cross-posted at Kittywumpus]

    When The Good Place episode “The Trolley Problem” won one of science fiction’s most prestigious awards, the Hugo, in the category of best dramatic presentation, short form, I celebrated. I celebrated not because I loved the episode (in fact, I had so far only seen a couple of The Good Place’s earlier episodes) but because, as a philosophy professor aiming to build bridges between academic philosophy and popular science fiction, the awarding of a Hugo to a show starring a professor of philosophy discussing a famous philosophical problem seemed to confirm that science fiction fans see some of the same synergies I see between science fiction and philosophy.

I do think the synergies are there and that the fans see and value them – as also revealed by the enduring popularity of The Matrix, and by Westworld, and Her, and Black Mirror, among others – but “The Trolley Problem”, considered as a free-standing episode, fumbles the job. (Below, I will suggest a twist by which The Good Place could redeem itself in later episodes.)

    Yeah, I’m going to be fussy when maybe I should just cheer and praise. And I’m going to take the episode more philosophically seriously than maybe I should, treating it as not just light humor. But taking good science fiction philosophically seriously is important to me – and that means engaging critically. So here we go.

    The Philosophical Trolley Problem

    The trolley problem – the classic academic philosophy version of the trolley problem – concerns a pair of scenarios.

    In one scenario, the Switch case, you are standing beside a railroad track watching a runaway railcar (or “trolley”) headed toward five people it will surely kill if you do nothing. You are standing by a switch, however, and you can flip the switch to divert the trolley onto a side track, saving the five people. Unfortunately, there is one person on the side track who will be killed if you divert the trolley. Question: Should you flip the switch?

    In another scenario, the Push case, you are standing on a footbridge when you see the runaway railcar headed toward the five people. In this case, there is no switch. You do, however, happen to be standing beside a hiker with a heavy backpack, who you could push off the bridge into the path of the trolley, which will then grind to a halt on his body, killing him and saving the five. (You are too light to stop the trolley with your own body.) He is leaning over the railing, heedless of you, so you could just push him over. Question: Should you push the hiker?

    The interesting thing about these problems is that most people say it’s okay to flip the switch in Switch but not okay to push the hiker in Push, despite the fact that in both cases you appear to be killing one person to save five. Is there really a meaningful difference between the cases? If so, what is it? Or are our ordinary intuitions about one or the other case wrong?

    It’s a lovely puzzle, much, much debated in academic philosophy, often with intricate variations on the cases. (Here’s one of my papers about it.)

    The Problem with “The Trolley Problem”

    “The Trolley Problem” episode nicely sets up some basic trolley scenarios, adding also a medical case of killing one to save five (an involuntary organ donor). The philosophy professor character, Chidi, is teaching the material to the other characters.

    Spoilers coming.

    The episode stumbles by trying to do two conflicting things.

First, it seizes on the trope of the philosophy professor who can’t put his theories into practice. The demon Michael sets up a simulated trolley, headed toward five victims, with Chidi at the helm. Chidi is called on to make a fast decision. He hesitates, agonizing, and crashes into the five. Michael reruns the scenario with several variations, and it’s clear that Chidi, faced with a practical decision requiring swift action, can’t actually figure out what’s best. (However, Chidi is clear that he wouldn’t cut up a healthy patient in an involuntary organ donor case.)

    Second, incompatibly, the episode wants to affirm Chidi’s moral expertise. Michael, the demon who enjoys torturing humans, can’t seem to take Chidi’s philosophy lessons seriously, despite Chidi’s great knowledge of ethics. Michael tries to win Chidi’s favor by giving him a previously unseen notebook of Kant’s, but Chidi, with integrity that I suppose the viewer is expected to find admirable, casts the notebook aside, seeing it as a bribe. What Chidi really wants is for Michael to recognize his moral expertise. At the climax of the episode, Michael seems to do just this, saying:

    Oh, Chidi, I am so sorry. I didn’t understand human ethics, and you do. And it made me feel insecure, and I lashed out. And I really need your help because I feel so lost and vulnerable.

    It’s unclear from within the episode whether we are supposed to regard Michael as sincere. Maybe not. Regardless, the viewer is invited to think that it’s what Michael should say, what his attitude should be – and Chidi accepts the apology.

    But this resolution hardly fits with Chidi’s failure in actual ethical decision making in the moment (a vice he also reveals in other episodes). Chidi has abstract, theoretical knowledge about ethical quandaries such as the trolley problem, and he is in some ways the most morally admirable of the lead characters, but his failure in vividly simulated trolley cases casts his practical ethical expertise into doubt. Nothing in the episode satisfactorily resolves that practical challenge to Chidi’s expertise, pro or con.

    Ethical Expertise?

    Now, as it happens, I am the world’s leading expert on the ethical behavior of professional ethicists. (Yes, really. Admittedly, the competition is limited.)

The one thing that emerges most clearly from my and others’ work on this topic, and which is anyway pretty evident if you spend much time around professional ethicists, is that ethicists, on average, behave more or less similarly to other people of similar social background – not especially better, not especially worse. From the fact that Chidi is a professor of ethics, nothing in particular follows about his moral behavior. Often, indeed, expertise in philosophical ethics appears to become expertise in constructing post-hoc intellectual rationales for what you were inclined to do anyway.

    I hope you will agree with me about the following, concerning the philosophy of philosophy: Real ethical understanding is not a matter of what words you speak in classroom moments. It’s a matter of what you choose and what you do habitually, regardless of whether you can tell your friends a handsome story about it, grounded in your knowledge of Kant. It’s not clear that Chidi does have especially good ethical understanding in this practical sense. Moreover, to the extent Chidi does have some such practical ethical understanding, as a somewhat morally admirable person, it is not in virtue of his knowledge of Kant.

    Michael should not be so deferential to Chidi’s expertise, and especially he should not be deferential on the basis of Chidi’s training as a philosopher. If, over the seasons, the characters improve morally, it is, or should be, because they learn from the practical situations they find themselves in, not because of Chidi’s theoretical lessons.

    How to Partly Redeem “The Trolley Problem”

Thus, the episode, as a stand-alone work, is flawed both in plot (the resolution at the climax failing to answer the problem posed by Chidi’s earlier practical indecisiveness) and in philosophy (being too deferential to the expertise of theoretical ethicists, in contrast with the episode’s implicit criticism of the practical, on-the-trolley value of Chidi’s theoretical ethics).

    When the whole multi-season arc of The Good Place finally resolves, here’s what I hope happens, which in my judgment would partly redeem “The Trolley Problem”: Michael turns out, all along, to have been the most ethically insightful character, becoming Chidi’s teacher rather than the other way around.

    [image source]

    -----------------------------------------------

    Update, October 21, 2018:

    Wisecrack has a terrific treatment of the philosophy of The Good Place, revealing that the show has a more nuanced view of the role of ethics lessons than one might infer from treating "The Trolley Problem" as a stand-alone work. Bonus feature: I am depicted wearing a "Captain Obvious" hat.

    Friday, June 01, 2018

    Does It Harm Philosophy as a Discipline to Discuss the Apparently Meager Practical Effects of Studying Ethics?

I've done a lot of empirical work on the apparently meager practical effects of studying philosophical ethics. Although most philosophers seem to view my work neutrally or positively, or at most have concerns about the empirical details of this or that study, others react quite negatively to the whole project, more or less on principle.

    About a month ago on Facebook, Samuel Rickless did such a nice job articulating some general concerns (see his comment on this public post) that I thought I'd quote his comments here and share some of my reactions.

    First, My Research:

    * In a series of studies published from 2009 to 2014, mostly in collaboration with Joshua Rust (and summarized here), I've empirically explored the moral behavior of ethics professors. As far as I know, no one else had ever systematically examined this question. Across 17 measures of (arguably) moral behavior, ranging from rates of charitable donation to staying in contact with one's mother to vegetarianism to littering to responding to student emails to peer ratings of overall moral behavior, I have found not a single main measure on which ethicists appeared to act morally better than comparison groups of other professors; nor do they appear to behave better overall when the data are merged meta-analytically. (Caveat: on some secondary measures we found ethicists to behave better. However, on other measures we found them to behave worse, with no clearly interpretable overall pattern.)

    * In a pair of studies with Fiery Cushman, published in 2012 and 2015, I've found that philosophers, including professional ethicists, seem to be no less susceptible than non-philosophers to apparently irrational order effects and framing effects in their evaluation of moral dilemmas.

    * More recently, I've turned my attention to philosophical pedagogy. In an unpublished critical review from 2013, I found little good empirical evidence that business ethics or medical ethics instruction has any practical effect on student behavior. I have been following up with some empirical research of my own with several different collaborators. None of it is complete yet, but preliminary results tend to confirm the lack of practical effect, except perhaps when there's the right kind of narrative or emotional engagement. On grounds of armchair plausibility, I tend to favor multi-causal, canceling explanations over the view that philosophical reflection is simply inert (contra Jon Haidt); thus I'm inclined to explore how backfire effects might on average tend to cancel positive effects. It was a post on the possible backfire effects of teaching ethics that prompted Rickless's comment.

    Rickless's Objection:
    (shared with permission, adding lineation and emphasis for clarity)

    Rickless: And I’ll be honest, Eric, all this stuff about how unethical ethicists are, and how counterproductive their courses might be, really bothers me. It’s not that I think that ethics courses can’t be improved or that all ethicists are wonderful people. But please understand that the takeaway from this kind of research and speculation, as it will likely be processed by journalists and others who may well pick up and run with it, will be that philosophers are shits whose courses turn their students into shits. And this may lead to the defunding of philosophy, the removal of ethics courses from business school, and, to my mind, a host of other consequences that are almost certainly far worse than the ills that you are looking to prevent.

    Schwitzgebel: Samuel, I understand that concern. You might be right about the effects. However, I also think that if it is correct that ethics classes as standardly taught have little of the positive effect that some administrators and students hope for from them, we as a society should know that. It should be explored in a rigorous way. On the possibly bright side, a new dimension of my research is starting to examine conditions under which teaching does have a positive measurable effect on real-world behavior. I am hopeful that understanding that better will lead us to teach better.

    Rickless: In theory, what you say about knowing that courses have little or no positive effect makes sense. But in practice, I have the following concerns.

    First, no set of studies could possibly measure all the positive and negative effects of teaching ethics this way or that way. You just can’t control all the potentially relevant variables, in part because you don’t know what all the potentially relevant variables are, in part because you can’t fix all the parameters with only one parameter allowed to vary.

    Second, you need to be thinking very seriously about whether your own motives (particularly motives related to bursting bubbles and countering conventional wisdom) are playing a role in your research, because those motives can have unseen effects on the way that research is conducted, as well as the conclusions drawn from it. I am not imputing bad motives to you. Far from it, and quite the opposite. But I think that all researchers, myself included, want their research to be striking and interesting, sometimes surprising.

    Third, the tendency of researchers is to draw conclusions that go beyond the actual evidence.

    Fourth, the combination of all these factors leads to conclusions that have a significant likelihood of being mistaken.

    Fifth, those conclusions will likely be taken much more seriously by the powers-that-be than by the researchers themselves. All the qualifiers inserted by researchers are usually removed by journalists and administrators.

    Sixth, the consequences on the profession if negative results are taken seriously by persons in positions of power will be dire.

    Under the circumstances, it seems to me that research that is designed to reveal negative facts about the way things are taught had better be airtight before being publicized. The problem is that there is no such research. This doesn’t mean that there is no answer to problems of ineffective teaching. But that is an issue for another day.

    My Reply:

    On the issue of motives: Of course it is fun to have striking research! Given my general skepticism about self-knowledge, including of motives, I won't attempt self-diagnosis. However, I will say that except for recent studies that are not yet complete, I have published every empirical study I've done on this topic, with no file-drawered results. I am not selecting only the striking material for publication. Also, in my recent pedagogy research I am collaborating with other researchers who very much hope for positive results.

    On the likelihood of being mistaken: I acknowledge that any one study is likely to be mistaken. However, my results are pretty consistent across a wide variety of methods and behavior types, including some issues specifically chosen with the thought that they might show ethicists in a good light (the charity and vegetarianism measures in Schwitzgebel and Rust 2014). I think this adds to credibility, though it would be better if other researchers with different methods and theoretical perspectives attempted to confirm or disconfirm our findings. There is currently one replication attempt ongoing among German-language philosophers, so we will see how that plays out!

    On whether the powers-that-be will take the conclusions more seriously than the researchers: I interpret Rickless here as meaning that they will tend to remove the caveats and go for the sexy headline. I do think that is possible. One potentially alarming fact from this point of view is that my most-cited and seemingly best-known study is the only study where I found ethicists seeming to behave worse than the comparison groups: the study of missing library books. However, it was also my first published study on the topic, so I don't know to what extent the extra attention is a primacy effect.

    On possibly dire consequences: The most likely path for dire consequences seems to me to be this: Part of the administrative justification for requiring ethics classes might be the implicit expectation that university-level ethics instruction positively influences moral behavior. If this expectation is removed, so too is part of the administrative justification for ethics instruction.

    Rickless's conclusion appears to be that no empirical research on this topic, with negative or null results, should be published unless it is "airtight", and that it is practically impossible for such research to be airtight. From this I infer that Rickless thinks either that (a.) only positive results should be published, while negative or null results remain unpublished because inevitably not airtight, or that (b.) no studies of this sort should be published at all, whether positive, negative, or null.

    Rickless's argument has merit, and I see the path to this conclusion. Certainly there is a risk to the discipline in publishing negative or null results, and one ought to be careful.

    However, both (a) and (b) seem to be bad policy.

    On (a): To think that only positive results should be published (or more moderately that we should have a much higher bar for negative or null results than for positive ones) runs contrary to the standards of open science that have recently received so much attention in the social psychology replication crisis. In the long run it is probably contrary to the interests of science, philosophy, and society as a whole for us to pursue a policy that will create an illusory disproportion of positive research.

    That said, there is a much more moderate strand of (a) that I could endorse: Being cautious and sober about one's research, rather than yielding to the temptation to inflate dubious, sexy results for the sake of publicity. I hope that in my own work I generally meet this standard, and I would recommend that same standard for both positive and negative or null research.

    On (b): It seems at least as undesirable to discourage all empirical research on these topics. Don't we want to know the relationship between philosophical moral reflection and real-world moral behavior? Even if you think that studying the behavior of professional ethicists in particular is unilluminating, surely studying the effects of philosophical pedagogy is worthwhile. We should want to know what sorts of effects our courses have on the students who take them and under what conditions -- especially if part of the administrative justification for requiring ethics courses is the assumption that they do have a practical effect. To reject the whole enterprise of empirically researching the effects of studying philosophy because there's a risk that some studies will show that studying philosophy has little practical impact on real-world choices -- that seems radically antiscientific.

    Rickless raises legitimate worries. I think the best practical response is more research, by more research groups, with open sharing of results, and open discussions of the issue by people working from a wide variety of perspectives. In the long run, I hope that some of my null results can lay the groundwork for a fuller understanding of the moral psychology of philosophy. Understanding the range of conditions under which philosophical moral reflection does and does not have practical effects on real-world behavior should ultimately empower rather than disempower philosophy as a discipline.

    [image source]